
WWW::Mechanize problem

by Anonymous Monk
on Oct 19, 2005 at 22:56 UTC ( [id://501468] )

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I have installed WWW::Mechanize using ppm. I'm running Windows XP and ActiveState Perl 5.8. I need to use WWW::Mechanize to visit one of my sites and follow all of its links. The site in question has three kinds of links, two of which are JavaScript functions that build a URL; I already have code to identify and deal with the JavaScript links. I want to visit each page of my site, read each link's URL and text, output them to a file, and then follow all of the links I have found. I have managed to do this for the first page of the site using the $mech->links method, but I need to follow each of the links in order. Can anyone suggest how to do this?
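
For reference, a minimal sketch of the single-page step described above; the start URL and output file name are placeholders, and the JavaScript-link handling is omitted:

use strict;
use warnings;
use WWW::Mechanize;

# Placeholder start URL and output file name.
my $mech = WWW::Mechanize->new( autocheck => 1 );
$mech->get( 'http://www.example.com/' );

open my $out, '>', 'links.txt' or die "Cannot open links.txt: $!";
for my $link ( $mech->links() ) {
    # Each WWW::Mechanize::Link object knows its URL and its link text.
    my $text = defined $link->text() ? $link->text() : '';
    print {$out} $link->url_abs(), "\t", $text, "\n";
}
close $out;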

Replies are listed 'Best First'.
Re: WWW::Mechanize problem
by davido (Cardinal) on Oct 20, 2005 at 00:45 UTC

    WWW::Mechanize's links() method returns an array of WWW::Mechanize::Link objects. You can use those objects to follow the URLs like this:

    my( @links ) = $mech->links();

    foreach my $link ( @links ) {
        my $temp_mech = WWW::Mechanize->new();
        $temp_mech->get( $link );
        # Do whatever you want now...
    }

    Dave

      Cool, thanks!
      My problem is that I can't figure out how to recurse this so that I visit every link on the site. Can you offer any pointers there?
        Anonymous Monk,
        First of all, you probably want to verify that your code doesn't conflict with any of the site's policies. Even if you are in the clear, you likely want to sleep between page fetches like a good net citizen. Ok, now on to your question of recursion.
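
        For the site-policy part, one possible sketch using WWW::RobotRules to honor a site's robots.txt before fetching; the user-agent name and URLs here are placeholders:

        use WWW::RobotRules;
        use LWP::Simple qw(get);

        # Placeholder user-agent name and site.
        my $rules      = WWW::RobotRules->new( 'MyLinkWalker/0.1' );
        my $robots_url = 'http://www.example.com/robots.txt';

        my $robots_txt = get( $robots_url );
        $rules->parse( $robots_url, $robots_txt ) if defined $robots_txt;

        if ( $rules->allowed( 'http://www.example.com/some/page.html' ) ) {
            # Safe to fetch; still sleep between requests.
            sleep 1;
        }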

        This can easily turn into an infinite loop, so it is important to keep track of which URLs you have already visited. I would suggest a stack/queue approach along with a %seen cache. The following illustrates what I mean:

        # mechanize fetching of first page
        my %seen;
        my @links = $mech->links();

        while ( @links && @links < 1_000 ) {
            my $link = shift @links;
            my $url  = $link->url();
            next if $seen{$url}++;

            # mechanize fetch of $url
            push @links, $mech->links;
            sleep 1;
        }
        This prevents you from fetching the same URL twice, and it stops either when there are no more links to visit or when the site turns out to have far more links than you intended to follow. The 1_000 is an arbitrary limit and need not be there at all. You can switch between depth-first and breadth-first by adjusting push/unshift and shift/pop.
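
        Putting the pieces together, a minimal self-contained sketch along these lines that also writes each link's URL and text to a file, as the original question asked; the start URL, output file name, and 1_000 limit are assumptions:

        use strict;
        use warnings;
        use WWW::Mechanize;

        # Placeholder start URL and output file; adjust for the real site.
        my $start    = 'http://www.example.com/';
        my $out_file = 'sitelinks.txt';

        my $mech = WWW::Mechanize->new( autocheck => 0 );
        open my $out, '>', $out_file or die "Cannot open $out_file: $!";

        my %seen;
        my @queue = ( $start );

        while ( @queue && keys %seen < 1_000 ) {     # arbitrary safety limit
            my $url = shift @queue;                  # shift => breadth first
            next if $seen{$url}++;

            $mech->get( $url );
            next unless $mech->success() && $mech->is_html();

            for my $link ( $mech->links() ) {
                my $abs  = $link->url_abs()->as_string();
                my $text = defined $link->text() ? $link->text() : '';
                print {$out} "$abs\t$text\n";
                # In practice you would also skip off-site and javascript: links here.
                push @queue, $abs;
            }
            sleep 1;                                 # be polite between fetches
        }
        close $out;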

        Cheers - L~R
