
Help with web crawling

by eversuhoshin (Sexton)
on Dec 09, 2012 at 07:06 UTC ( #1007958=perlquestion )
eversuhoshin has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I need help with web crawling. I need to obtain the HTML code of the web page itself. I have tried WWW::Mechanize, and URI to convert a relative link into an absolute URL, but I have failed so far.

Can someone please help me crawl through the site, or download the HTML code of the page?

Here is the code trying to crawl the EDGAR website (the full URLs did not survive posting, hence the empty strings):

use strict;
use WWW::Mechanize;
use LWP::Simple;
use URI;

my $url = 'edgar/data/1750/0001104659-06-059326-index.html';
my $web = '' . $url;
my @temp = split(/\//, $url);
chomp($web);
my $rel_url  = '/' . $temp[2] . '/' . $temp[3];
my $base_url = '';
my $abs_url  = URI->new_abs($rel_url, $base_url);
my $text = get($abs_url) or die $!;

This is the SEC EDGAR database, and once I figure out how to crawl it I can do the parsing myself. I just need the information that follows the <div class="infoHead">Items</div> element. Thank you so much!
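As an aside on the URI part of the attempt: URI->new_abs does pure string-level resolution against a base URL, so it needs a non-empty base to produce anything useful. A minimal sketch, using a hypothetical example.com base since the real base URL is missing from the post:

```perl
use strict;
use warnings;
use URI;

# Hypothetical base URL for illustration only; the real EDGAR base
# was lost from the post above.
my $base_url = 'http://example.com/Archives/';
my $rel_url  = 'edgar/data/1750/0001104659-06-059326-index.html';

# new_abs resolves the relative path against the base as a string
# operation -- no network access happens here.
my $abs_url = URI->new_abs($rel_url, $base_url);
print $abs_url, "\n";
# http://example.com/Archives/edgar/data/1750/0001104659-06-059326-index.html
```

With an empty base (as in the code above), new_abs has nothing to resolve against, which is one reason the get() call fails.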

Replies are listed 'Best First'.
Re: Help with web crawling
by CountZero (Bishop) on Dec 09, 2012 at 09:27 UTC
    Downloading the HTML code of a web page is easy. LWP::Simple is the traditional Perl way of doing it.

    use Modern::Perl;
    use LWP::Simple;

    # NOTE: the start of this URL was lost in the original post.
    my $content = get('...4420411058092/0001144204-11-058092-index.htm');
    say $content;
    Of course that is only the simple part of your task. Extracting what you need is the difficult part.

    HTML::TreeBuilder is one of the HTML parsing modules that can be helpful here.
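    A sketch of that approach, run against an inline snippet shaped like the markup described in the question (the exact EDGAR page structure is an assumption here, modelled on the class names mentioned above):

```perl
use strict;
use warnings;
use HTML::TreeBuilder;

# Inline sample standing in for the EDGAR index page; the real
# markup is an assumption based on the class names in the question.
my $html = <<'HTML';
<div class="formGrouping">
  <div class="infoHead">Items</div>
  <div class="info">2.02, 9.01</div>
</div>
HTML

my $tree = HTML::TreeBuilder->new_from_content($html);

# Find the <div class="infoHead">Items</div> label, then read the
# matching <div class="info"> inside the same grouping.
my $head = $tree->look_down(
    _tag  => 'div',
    class => 'infoHead',
    sub { $_[0]->as_text eq 'Items' },
);
my ($info) = $head->parent->look_down(class => 'info');
print $info->as_text, "\n";   # 2.02, 9.01
```

    For the real page you would feed new_from_content the string returned by get().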


    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

    My blog: Imperial Deltronics

      Thank you so much!! I can extract the parts I need now!

Re: Help with web crawling
by tobyink (Abbot) on Dec 09, 2012 at 11:01 UTC
    use HTML::HTML5::Parser;

    # NOTE: the start of this URL was lost in the original post.
    my $uri   = '...1058092/0001144204-11-058092-index.htm';
    my $xpath = '//*[@class="formGrouping" and ./*[@class="infoHead" and contains(./text(), "Items")]]/*[@class="info"]';

    my $item = HTML::HTML5::Parser
        -> load_html(location => $uri)
        -> findvalue($xpath);

    print $item, "\n";
    perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
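    The XPath above can be tried offline against an inline snippet (the sample markup below is an assumption modelled on the class names in the question, not the real EDGAR page):

```perl
use strict;
use warnings;
use HTML::HTML5::Parser;

# Inline sample standing in for the EDGAR index page; the real
# markup is an assumption based on the class names in the question.
my $html = <<'HTML';
<div class="formGrouping">
  <div class="infoHead">Items</div>
  <div class="info">2.02, 9.01</div>
</div>
HTML

# Match the grouping whose infoHead label contains "Items", then
# take the string value of its sibling info cell.
my $xpath = '//*[@class="formGrouping" and ./*[@class="infoHead" and contains(./text(), "Items")]]/*[@class="info"]';

my $item = HTML::HTML5::Parser
    -> load_html(string => $html)
    -> findvalue($xpath);
print $item, "\n";   # 2.02, 9.01
```

    Swapping string => $html for location => $uri, as in the reply above, runs the same query against the live page.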
Re: Help with web crawling
by space_monk (Chaplain) on Dec 09, 2012 at 07:55 UTC
    Can someone reap either this one or Reaped: Help with web crawling?
    A Monk aims to give answers to those who have none, and to learn from those who know more.

Node Type: perlquestion [id://1007958]
Approved by Tommy
Front-paged by Tommy