
WWW::Mechanize follow_link not working

by sbasbasba (Initiate)
on Oct 12, 2013 at 18:03 UTC ( #1058010=perlquestion )
sbasbasba has asked for the wisdom of the Perl Monks concerning the following question:

Hi everybody,

I am trying to scrape some Google Scholar results. I have a problem going from the first page of results to the second, and so on. In particular, I have tried to tell follow_link to click on the link with text 'Next', but it does not seem to recognize it. I have tried text_regex as well, with no success. I believe I am not spotting the right "text", but I really need to match on that to find the link, because the URL is very complicated. Any clue? Many thanks!


Here is the code:

    my $title = Raumchemie der festen Stoffe
    $mech->get( "" . $title );
    $mech->follow_link( url_regex => qr/cites/i, n => 1 );
    my $result = $mech->content;
    my $indi = $mech->uri();
    my $rest = $out->scrape( $result, $indi );
    #~ dd( $result, $rest );
    dd( $rest );
    print F3 $rest;
    for my $i (2..200) {
        my $ii = $i . "0";
        print "page : ".$i."\n";
        $mech->follow_link( text_regex => qr/Next$/ ) or die("finished on page : ".$i."\n");
        my $result = $mech->content;
        my $indi = $mech->uri();
        my $rest = $out->scrape( $result, $indi );
        #~ dd( $result, $rest );
        dd( $rest );
        print F3 $rest;
        sleep(5);
    }
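
When follow_link fails to match, a quick way to find out what link text Mechanize actually sees is to dump every link on the page before matching on it. A minimal sketch (the results-page URL is assumed to be passed on the command line, since the URL in the post above was elided):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $url  = shift @ARGV or die "usage: $0 <results-page-url>\n";
    my $mech = WWW::Mechanize->new();
    $mech->get($url);

    # Print the text and URL of every link Mechanize found, so you can
    # see exactly what string a text or text_regex match must hit.
    for my $link ($mech->find_all_links()) {
        printf "%s => %s\n", ($link->text // ''), $link->url;
    }

Once you see the exact text (it may be localized, an image's alt text, or empty), you can write a text_regex that actually matches it.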

Replies are listed 'Best First'.
Re: WWW::Mechanize follow_link not working
by ig (Vicar) on Oct 13, 2013 at 04:39 UTC

    The code you posted is not complete and has syntactic errors, making it difficult to be sure what your problem might have been.

    In the following code, I have fixed a few obvious errors to make it compile and run. Beyond that, I replaced 'Next' with 'Avanti' because, on the pages I got back (Google may return different content to you), the button at the bottom of the page that proceeds to the next page of results is labeled Avanti. Perhaps this working example will help you get your code doing what you want.

    use strict;
    use warnings;
    use Data::Dumper::Concise;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();

    my $title = "Raumchemie der festen Stoffe";
    $mech->get( "" . $title );
    unless ($mech->success()) {
        die $mech->status();
    }

    my $response = $mech->response();
    my $content  = $response->decoded_content();
    print Dumper($content);

    my $link_result = $mech->follow_link( url_regex => qr/cites/i, n => 1 );
    unless ($link_result) {
        die "link not found";
    }

    my $result = $mech->content;
    my $indi   = $mech->uri();
    #my $rest = $out->scrape( $result, $indi );

    for my $i (2..5) {
        print "page : ".$i."\n";
        $mech->follow_link( text_regex => qr/Avanti$/ )
            or die("finished on page : ".$i."\n");
        my $result = $mech->content;
        my $indi   = $mech->uri();
        print $indi->as_string() . "\n";
        sleep(5);
    }
Re: WWW::Mechanize follow_link not working
by Old_Gray_Bear (Bishop) on Oct 13, 2013 at 21:52 UTC
    Hum --

    I found this:

    I love Google Scholar as my go-to place to search for papers. Some features like "forward-citations" and their nice-ish autogenerated bibtex are life savers.

    However, sometimes I (and others) wish we could write scripts to help us: Google Scholar with Matlab; Automatically building a database of forward and backward citations

    However, Google Scholar does not provide an API, their robots.txt disallows scrapers on most pages of interest (for instance, the cited-by results are not supposed to be accessed by bots), and if you make many requests (as a bot would) you will get a CAPTCHA.
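
    If you do automate requests anywhere, the polite approach is to honor robots.txt in the first place. LWP ships with LWP::RobotUA, which fetches a site's robots.txt and refuses disallowed requests. A minimal sketch (the agent name, contact address, and URL are illustrative, not real):

        use strict;
        use warnings;
        use LWP::RobotUA;

        # A robots.txt-aware user agent; 'from' should be a real contact address.
        my $ua = LWP::RobotUA->new(
            agent => 'MyScholarBot/0.1',
            from  => 'me@example.com',
        );
        $ua->delay(1);    # wait at least 1 minute between requests to a host

        my $res = $ua->get('https://scholar.google.com/scholar?cites=123');
        print $res->status_line, "\n";    # disallowed URLs come back as an error

    With Scholar's robots.txt disallowing the cited-by pages, a client like this will simply refuse to fetch them, which is rather the point.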

    Last year they used to have a EULA that said:

    You shall not, and shall not allow any third party to: ... (i) directly or indirectly generate queries, or impressions of or clicks on Results, through any automated, deceptive, fraudulent or other invalid means (including, but not limited to, click spam, robots, macro programs, and Internet agents); ... (l) "crawl", "spider", index or in any non-transitory manner store or cache information obtained from the Service (including, but not limited to, Results, or any part, copy or derivative thereof);

    Some Google services like custom search (for which I could find a EULA) still state this in section 1.4, but the link in the SO answer is now dead and I have not been able to find a new EULA for Scholar. From anecdotal evidence, I know that you can get in a decent amount of trouble if you try to circumvent Google's efforts to prevent scraping of Scholar.

    I believe the proper response here is: "Don't do this. Call Google and ask them (politely) to either give you written permission or point you to the approved API." I suspect this is not what you wanted to hear, but....

    I Go Back to Sleep, Now.


Node Type: perlquestion [id://1058010]
Approved by rminner