PerlMonks  

Re: conditional testing for error 500 webpages before following them?

by Perlbotics (Archbishop)
on Oct 15, 2011 at 14:19 UTC ( [id://931660] )


in reply to conditional testing for error 500 webpages before following them?

From WWW::Mechanize:

autocheck => [0|1]

Checks each request made to see if it was successful. This saves you the trouble of manually checking yourself. Any errors found are errors, not warnings.

The default value is ON, unless it's being subclassed, in which case it is OFF. This means that standalone WWW::Mechanize instances have autocheck turned on, which is protective for the vast majority of Mech users who don't bother checking the return value of get() and post() and can't figure why their code fails. However, if WWW::Mechanize is subclassed, such as for Test::WWW::Mechanize or Test::WWW::Mechanize::Catalyst, this may not be an appropriate default, so it's off.
Here, errors means die(). Since your program uses autocheck => 1 by default, it dies as soon as a problem occurs inside $mech_cgi->follow_link(...). Execution never reaches your own call to die - which is not what you want - as already observed by roboticus. (Updated: paragraph)

Now you have at least two options:

  • Wrap the calls that can fail in an eval block and check $@ for exceptions, or
  • create the WWW::Mechanize object with ...new( autocheck => 0 ) and check the results (see HTTP::Response) of the $mech_cgi calls for problems.

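For the first alternative, here is a minimal sketch: autocheck stays on (the default), and eval traps the die() so the loop can continue past a failing link. The URL is taken from the example below; adapt the loop to your own list of links.

```perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech_cgi = WWW::Mechanize->new();    # autocheck => 1 by default

for my $url ( 'http://www.molmovdb.org/cgi-bin/motion.cgi?ID=ntrc' ) {
    print "following link: $url\n";
    my $ok = eval {
        $mech_cgi->get( $url );          # dies on HTTP errors (e.g. 500)
        1;                               # reached only if get() succeeded
    };
    if ( $ok ) {
        print "OK : Processing result ...\n";
    }
    else {
        print "ERR: Failed to retrieve page: $@";  # $@ holds the die() message
    }
    sleep 5;                             # be polite to the server
}
```

Note that eval also traps errors you might want to see (e.g. DNS failures), so inspect $@ rather than silently ignoring it.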
Example (2nd alternative):

use strict;
use WWW::Mechanize;
use Storable;

my $mech_cgi = WWW::Mechanize->new( autocheck => 0 );
$mech_cgi->get( 'http://www.molmovdb.org/cgi-bin/browse.cgi' );

my @cgi_links = $mech_cgi->find_all_links( url_regex => qr/motion.cgi/ );

for my $link ( @cgi_links ) {    # no C-style loop...
    print "following link: ", $link->url, "\n";
    my $res = $mech_cgi->follow_link( url => $link->url );
    # $res is a HTTP::Response object
    if ( $res->is_success ) {
        print "OK : Processing result ...\n";
    }
    else {
        print "ERR: Failed to retrieve page: ", $res->status_line, "\n";
    }
    $mech_cgi->back;
    sleep 5;    # anti-aggressive scraping
}

Result:

...
following link: http://www.molmovdb.org/cgi-bin/motion.cgi?ID=ntrc
OK : Processing result ...
following link: http://www.molmovdb.org/cgi-bin/motion.cgi?ID=ppar
ERR: Failed to retrieve page: 500 Internal Server Error
following link: http://www.molmovdb.org/cgi-bin/motion.cgi?ID=rhorbp
OK : Processing result ...
...

Please also check whether you have permission to scrape this site.
