PerlMonks
Re^5: How to extract links from a webpage and store them in a mysql database

by chargrill (Parson)
on Dec 21, 2006 at 13:26 UTC ( #591090=note )


in reply to Re^4: How to extract links from a webpage and store them in a mysql database
in thread How to extract links from a webpage and store them in a mysql database

And now a second bit of help, possibly a much bigger bit than the previous one.

I'm not familiar with HTML::LinkExtor, and I really don't use LWP::UserAgent these days either, so I wrote something taking advantage of my personal favorite for anything webpage related, WWW::Mechanize.

I also never quite understood your original algorithm. If it were me (and in this case it is), I'd keep track of URLs (weeding out duplicates) for a given link depth in my own data structure, rather than inserting them into a database and fetching them back out to re-crawl them.
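The hash-based bookkeeping that replaces the database round-trip can be sketched on its own like this (the URLs are made up for illustration):

    use strict;
    use warnings;

    # A URL survives the grep only the first time its counter goes
    # from 0 to 1; every later sighting is a duplicate and is dropped.
    my %visited;
    my @queue = (
        'http://www.example.com/',
        'http://www.example.com/about',
        'http://www.example.com/',      # duplicate
    );
    my @unique = grep { ++$visited{$_} == 1 } @queue;
    print "$_\n" for @unique;           # each URL printed once
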

I'm also not clear from your spec whether or not you want URLs that are off-site. The logic this program uses to handle that is documented in the code, so if it isn't to your spec, adjust it.
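The "highest level in the hierarchy" rule boils down to a prefix match against everything up to the last slash of the starting URL. A standalone sketch, with made-up URLs:

    use strict;
    use warnings;

    my $url = 'http://www.example.com/dir/index.html';

    # Everything up to (and including) the last slash is the crawl root.
    my( $base_uri ) = $url =~ m|^(.*/)|;    # http://www.example.com/dir/

    my @candidates = (
        'http://www.example.com/dir/page2.html',   # on-parent: followed
        'http://www.example.com/other/',           # off-parent: recorded only
        'http://elsewhere.example.org/',           # off-site: recorded only
    );

    # \Q...\E quotes any regex metacharacters that may appear in the URL.
    my @follow = grep { /^\Q$base_uri\E/ } @candidates;
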

Having said all that, here is a recursive link crawler. (Though now that I type out "recursive link crawler", I can't imagine this hasn't been done before, and I'm certain a search would turn one up fairly quickly. Oh well.)

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $url = shift || die "Please pass in base url as argument to $0\n";
my %visited;
my @links;
my $max_depth = 3;
my $depth     = 0;

my $mech = WWW::Mechanize->new();

# This helps prevent following off-site links.
# Note: assumes that URLs passed in represent the highest level in a
# website hierarchy that will be visited, i.e. http://www.example.com/dir/
# will record a link to http://www.example.com/, but will not follow it
# and report subsequent links.
my( $base_uri ) = $url =~ m|^(.*/)|;

get_links( $url );

sub get_links {
    my @urls = @_;
    my @found_links;
    for( @urls ){
        # This prevents following off-site or off-parent links.
        # \Q...\E quotes regex metacharacters in the base URI.
        next unless m/^\Q$base_uri\E/;
        $mech->get( $_ );
        # Filter out links we've already visited, plus mailto: and
        # javascript: hrefs. Adjust to suit. Accumulate with push so
        # links found under every URL at this depth get recursed on,
        # not just those under the last one.
        push @found_links,
            grep { ++$visited{$_} == 1 && ! /^(mailto|javascript)/i }
            map  { $_->url_abs() }
            $mech->links();
    }
    push @links, @found_links;

    # Keep going, as long as we should.
    get_links( @found_links ) if $depth++ < $max_depth;
}

# Instead of printing them, you could insert them into the database.
print $_ . "\n" for @links;

Inserting the links into a database is left as an exercise for the reader.
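For completeness, a minimal sketch of that exercise using DBI. The table name `links`, its unique `url` column, and the connection details are all hypothetical placeholders; adjust them to your actual MySQL schema and credentials:

    use strict;
    use warnings;
    use DBI;

    # Assumes: CREATE TABLE links (url VARCHAR(255) UNIQUE);
    # INSERT IGNORE silently skips rows that violate the unique key.
    my $sql = 'INSERT IGNORE INTO links (url) VALUES (?)';

    sub store_links {
        my ( $dsn, $user, $pass, @links ) = @_;
        my $dbh = DBI->connect( $dsn, $user, $pass, { RaiseError => 1 } );
        my $sth = $dbh->prepare( $sql );
        $sth->execute( $_ ) for @links;
        $dbh->disconnect;
    }

    # store_links( 'dbi:mysql:database=crawler', 'user', 'password', @links );
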



--chargrill
s**lil*; $*=join'',sort split q**; s;.*;grr; &&s+(.(.)).+$2$1+; $; = qq-$_-;s,.*,ahc,;$,.=chop for split q,,,reverse;print for($,,$;,$*,$/)

Re^6: How to extract links from a webpage and store them in a mysql database
by syedahmed.uos (Novice) on Jan 04, 2007 at 18:09 UTC
    Hello, wish you a happy new year, and thanks for the help! Just wanted to ask: when I set the $max_depth variable to 3 or 2, it gives me the same output.

      Do you have links 3 deep?



      --chargrill
        I can't really tell whether I have links three levels deep; all I know is that when I set $max_depth to 3 or 2, I get the same output.
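One way to check is to record the depth at which each URL is first seen; if nothing is tagged with depth 3, identical output for $max_depth 2 and 3 is expected. A standalone sketch of that bookkeeping (the URLs are made up):

    use strict;
    use warnings;

    my ( %visited, %depth_of );

    # Remember each URL once, tagged with the depth it was first found at.
    sub remember {
        my ( $depth, @urls ) = @_;
        for my $url ( @urls ) {
            next if $visited{$url}++;
            $depth_of{$url} = $depth;
        }
    }

    remember( 1, qw( /a /b ) );
    remember( 2, qw( /b /c ) );    # /b already seen at depth 1
    remember( 3, qw( /c ) );       # nothing new at depth 3

    printf "%s first seen at depth %d\n", $_, $depth_of{$_}
        for sort keys %depth_of;
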
