Help with web crawling

by eversuhoshin (Sexton)
on Dec 09, 2012 at 07:06 UTC
eversuhoshin has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I need help with web crawling: I need to obtain the HTML source of the web page itself. I have tried WWW::Mechanize, and URI to convert a relative path into an absolute URL, but I have failed so far.

Can someone please help me crawl or download the HTML source of this page:

www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm

Here is the code I have so far, which tries to crawl the EDGAR website:

use strict;
use WWW::Mechanize;
use LWP::Simple;
use URI;

my $url = 'edgar/data/1750/0001104659-06-059326-index.html';
my $web = 'www.sec.gov/Archives/' . $url;
my @temp = split( /\//, $url );
chomp($web);

my $rel_url  = '/' . $temp[2] . '/' . $temp[3];
my $base_url = 'www.sec.gov/Archives/edgar/data';
my $abs_url  = URI->new_abs( $rel_url, $base_url );
my $text     = get($abs_url) or die $!;

This is the SEC EDGAR database, and once I figure out how to crawl it I can do the parsing myself. I just need the information between the <div class="infoHead">Items</div> tags. Thank you so much!
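
For what it is worth, a likely reason the get() call fails here (an assumption, since no error message is quoted above) is that neither the base nor the final URL carries an http:// scheme, so URI->new_abs cannot build a true absolute URL and LWP::Simple::get has nothing fetchable. A minimal sketch of the same approach with a schemed base might look like this:

use strict;
use warnings;
use URI;
use LWP::Simple;

# Same relative path as above; it is already relative to
# http://www.sec.gov/Archives/
my $url = 'edgar/data/1750/0001104659-06-059326-index.html';

# The base must be absolute (scheme + host) for new_abs() to do its job.
my $abs_url = URI->new_abs( $url, 'http://www.sec.gov/Archives/' );
# $abs_url is now
# http://www.sec.gov/Archives/edgar/data/1750/0001104659-06-059326-index.html

my $text = get($abs_url) or die "Could not fetch $abs_url";
print length($text), " bytes fetched\n";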

Re: Help with web crawling
by space_monk (Chaplain) on Dec 09, 2012 at 07:55 UTC
    Can someone reap either this one or Reaped: Help with web crawling?
    A Monk aims to give answers to those who have none, and to learn from those who know more.
Re: Help with web crawling
by CountZero (Bishop) on Dec 09, 2012 at 09:27 UTC
    Downloading the HTML code of a web page is easy. LWP::Simple is the traditional Perl way of doing it.

    use Modern::Perl;
    use LWP::Simple;

    my $content = get('http://www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm');
    say $content;
    Of course that is only the simple part of your task. Extracting what you need is the difficult part.

    HTML::TreeBuilder is one of the HTML parsing modules that can be helpful here.
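
    With HTML::TreeBuilder, a minimal sketch (untested; the class names are simply those quoted in the question) could look like this:

    use strict;
    use warnings;
    use LWP::Simple;
    use HTML::TreeBuilder;

    my $url  = 'http://www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm';
    my $html = get($url) or die "Could not fetch $url";

    my $tree = HTML::TreeBuilder->new_from_content($html);

    # Find each <div class="infoHead"> whose text mentions "Items" and print
    # the text of the first following sibling <div class="info">.
    for my $head ( $tree->look_down( _tag => 'div', class => 'infoHead' ) ) {
        next unless $head->as_text =~ /Items/;
        my ($info) = grep { ref $_ and ( $_->attr('class') || '' ) eq 'info' }
                     $head->right;
        print $info->as_text, "\n" if $info;
    }

    $tree->delete;    # free the parse tree

    look_down() walks the parse tree for matching elements, and right() returns the siblings that follow the matched div, which is where the value appears to sit on that index page.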

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

    My blog: Imperial Deltronics

      Thank you so much!! I can extract the parts I need now!

Re: Help with web crawling
by tobyink (Abbot) on Dec 09, 2012 at 11:01 UTC
    use HTML::HTML5::Parser;

    my $uri   = 'http://www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm';
    my $xpath = '//*[@class="formGrouping" and ./*[@class="infoHead" and contains(./text(), "Items")]]/*[@class="info"]';

    my $item = HTML::HTML5::Parser
        ->load_html(location => $uri)
        ->findvalue($xpath);

    print $item, "\n";
    perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
