http://www.perlmonks.org?node_id=1007958

eversuhoshin has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I need help with web crawling. I need to obtain the HTML code of the web page itself. I have tried WWW::Mechanize, and URI to convert a relative link to an absolute URL, but I have failed so far.

Can someone please help me crawl through, or download the HTML code of, the web page at

www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm

Here is the code I am using to try to crawl the EDGAR website:

use strict;
use WWW::Mechanize;
use LWP::Simple;
use URI;

my $url  = 'edgar/data/1750/0001104659-06-059326-index.html';
my $web  = 'www.sec.gov/Archives/' . $url;
my @temp = split( /\//, $url );
chomp($web);
my $rel_url  = '/' . $temp[2] . '/' . $temp[3];
my $base_url = 'www.sec.gov/Archives/edgar/data';
my $abs_url  = URI->new_abs( $rel_url, $base_url );
my $text     = get($abs_url) or die $!;

This is the SEC EDGAR database, and once I figure out how to crawl through it I can do the parsing. I just need the information in the <div class="info"> element that follows the <div class="infoHead">Items</div> heading. Thank you so much!
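(For reference, the step the question describes — building an absolute URL with URI — only produces something LWP::Simple::get() can fetch once the base URL carries a scheme. A minimal sketch, assuming the CPAN URI module; the base and relative paths below are taken from the question:)

```perl
use strict;
use warnings;
use URI;

# The base must include the scheme ("http://"); without it, new_abs
# cannot build an absolute URL that LWP::Simple::get() can fetch.
my $base = 'http://www.sec.gov/Archives/edgar/data/';
my $rel  = '935226/000114420411058092/0001144204-11-058092-index.htm';
my $abs  = URI->new_abs( $rel, $base );
print $abs, "\n";
```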

Replies are listed 'Best First'.
Re: Help with web crawling
by CountZero (Bishop) on Dec 09, 2012 at 09:27 UTC
    Downloading the HTML code of a web page is easy. LWP::Simple is the traditional Perl way of doing it.

    use Modern::Perl;
    use LWP::Simple;

    my $content = get('http://www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm');
    say $content;
    Of course that is only the simple part of your task. Extracting what you need is the difficult part.

    HTML::TreeBuilder is one of the HTML parsing modules that can be helpful here.
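    For instance, the infoHead/info pairs the question mentions could be pulled out along these lines — a minimal, self-contained sketch, assuming HTML::TreeBuilder from CPAN; the HTML fragment and its "1.01, 9.01" value are made up to imitate the structure described, and the real EDGAR markup may differ:

```perl
use strict;
use warnings;
use HTML::TreeBuilder;

# Hypothetical fragment imitating the EDGAR index page structure.
my $html = <<'END';
<div class="formGrouping">
  <div class="infoHead">Items</div>
  <div class="info">1.01, 9.01</div>
</div>
END

my $tree = HTML::TreeBuilder->new_from_content($html);

# Find the "Items" heading, then the matching info div in the same group.
my ($head) = grep { $_->as_text eq 'Items' }
             $tree->look_down( class => 'infoHead' );
my $items  = $head->parent->look_down( class => 'info' )->as_text;
print $items, "\n";

$tree->delete;    # free the parse tree
```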

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

    My blog: Imperial Deltronics

      Thank you so much!! I can extract the parts I need now!

Re: Help with web crawling
by tobyink (Canon) on Dec 09, 2012 at 11:01 UTC
    use HTML::HTML5::Parser;

    my $uri   = 'http://www.sec.gov/Archives/edgar/data/935226/000114420411058092/0001144204-11-058092-index.htm';
    my $xpath = '//*[@class="formGrouping" and ./*[@class="infoHead" and contains(./text(), "Items")]]/*[@class="info"]';
    my $item  = HTML::HTML5::Parser
        -> load_html(location => $uri)
        -> findvalue($xpath);
    print $item, "\n";
    perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Re: Help with web crawling
by space_monk (Chaplain) on Dec 09, 2012 at 07:55 UTC
    Can someone reap either this one or Reaped: Help with web crawling?
    A Monk aims to give answers to those who have none, and to learn from those who know more.