The code you posted is incomplete and contains syntax errors, so it is difficult to be sure what your problem might have been.
In the following code I have fixed a few obvious errors so that it compiles and runs. Beyond that, I replaced 'Next' with 'Avanti' because, on the pages I got back (Google may return different content to you), the button at the bottom of the page that leads to the next page of results is labeled Avanti. Perhaps this working example will help you get your code working as you wish.
use strict;
use warnings;
use Data::Dumper::Concise;
use URI::Escape qw(uri_escape);
use WWW::Mechanize;

my $mech  = WWW::Mechanize->new();
my $title = "Raumchemie der festen Stoffe";

# URL-encode the title so the spaces do not break the query string
$mech->get( "http://scholar.google.it/scholar?q=" . uri_escape($title) );
unless ( $mech->success() ) {
    die $mech->status();
}

my $response = $mech->response();
my $content  = $response->decoded_content();
print Dumper($content);

# Follow the first link whose URL contains "cites" (the cited-by page)
my $link_result = $mech->follow_link( url_regex => qr/cites/i, n => 1 );
unless ($link_result) {
    die "link not found";
}

my $result = $mech->content;
my $indi   = $mech->uri();
#my $rest = $out->scrape( $result, $indi );

for my $i ( 2 .. 5 ) {
    print "page : " . $i . "\n";
    # "Avanti" is the Italian label on the next-page button
    $mech->follow_link( text_regex => qr/Avanti$/ )
        or die( "finished on page : " . $i . "\n" );
    my $result = $mech->content;
    my $indi   = $mech->uri();
    print $indi->as_string() . "\n";
    sleep(5);    # pause between requests
}
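If you want to collect all of the cited-by links on a results page rather than just following the first one, you can parse the HTML you already fetched with HTML::LinkExtor (part of the HTML-Parser distribution, which WWW::Mechanize already depends on). The snippet below is a self-contained sketch run against a tiny stand-in page; Google's real markup will differ, and the `cites` pattern mirrors the `url_regex` used above.

```perl
use strict;
use warnings;
use HTML::LinkExtor;

# A tiny stand-in for a Scholar results page; real markup differs.
my $html = <<'HTML';
<html><body>
<a href="/scholar?cites=123456789">Cited by 42</a>
<a href="/scholar?q=related:abc">Related articles</a>
</body></html>
HTML

my @cites;
my $parser = HTML::LinkExtor->new(sub {
    my ( $tag, %attr ) = @_;
    # Keep only anchor tags whose href looks like a cited-by link
    return unless $tag eq 'a' && defined $attr{href};
    push @cites, $attr{href} if $attr{href} =~ /cites/;
});
$parser->parse($html);
$parser->eof;

print "$_\n" for @cites;
```

With Mechanize itself you could get the same list from a live page via $mech->find_all_links( url_regex => qr/cites/i ); the parser version above just has the advantage of working on HTML you have already saved.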
Hum --
I found this:
I love Google Scholar as my go-to place to search for papers. Some features like "forward-citations" and their nice-ish autogenerated bibtex are life savers.
However, sometimes I (and others) wish we could write scripts to help us:
Google Scholar with Matlab;
Automatically building a database of forward and backward citations
However, Google Scholar does not provide an API, their robots.txt disallows scrapers on most pages of interest (for instance, the cited-by results are not supposed to be accessed by bots), and if you make many requests (as a bot would) you will get a CAPTCHA.
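If you do write a crawler for any site, libwww-perl ships WWW::RobotRules for checking robots.txt before fetching. The sketch below uses made-up rules for illustration only (it is not Google's actual robots.txt); fetch the real file from the host you are visiting before relying on the answers.

```perl
use strict;
use warnings;
use WWW::RobotRules;

# Illustrative rules only -- NOT Google's real robots.txt.
my $robots_txt = <<'ROBOTS';
User-agent: *
Disallow: /scholar
ROBOTS

my $rules = WWW::RobotRules->new('MyExampleBot/1.0');
$rules->parse( 'http://scholar.google.it/robots.txt', $robots_txt );

# Disallow rules are prefix matches on the path, so any /scholar
# URL is off limits while other paths remain allowed.
for my $url (
    'http://scholar.google.it/scholar?q=test',
    'http://scholar.google.it/citations?user=xyz',
) {
    printf "%s => %s\n", $url,
        $rules->allowed($url) ? 'allowed' : 'disallowed';
}
```

Of course, honoring robots.txt only keeps a crawler polite; it does not make scraping permitted where the terms of service forbid it, which is the point of the rest of this thread.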
Last year they used to have a EULA that said:
You shall not, and shall not allow any third party to: ...
(i) directly or indirectly generate queries, or impressions of or clicks on Results, through any automated, deceptive, fraudulent or other invalid means (including, but not limited to, click spam, robots, macro programs, and Internet agents);
...
(l) "crawl", "spider", index or in any non-transitory manner store or cache information obtained from the Service (including, but not limited to, Results, or any part, copy or derivative thereof);
Some Google services, like Custom Search (for which I could find a EULA), still state this in section 1.4, but the link in the SO answer is now dead and I have not been able to find a current EULA for Scholar. From anecdotal evidence, I know that you can get into a decent amount of trouble if you try to circumvent Google's efforts to prevent scraping of Scholar.
I believe the proper response here is: "Don't do this. Call Google and ask them (politely) to either give you written permission or point you to the approved API." I suspect this is not what you wanted to hear, but....
----
I Go Back to Sleep, Now.
OGB