
Re: WWW::Mechanize::Firefox - callbacks?

by dmz (Novice)
on Aug 30, 2010 at 17:13 UTC


in reply to WWW::Mechanize::Firefox - callbacks?

This works for vanilla WWW::Mechanize; IIRC, WWW::Mechanize::Firefox and WWW::Mechanize::InternetExplorer both mirror the methods in WWW::Mechanize. When you initialize the scraper, put a timeout => 15 in there, e.g.

my $mech = WWW::Mechanize->new( timeout => 15 );
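A fuller sketch of that setup (the URL is just a placeholder; autocheck => 0 is my assumption here, so Mech doesn't die on a failed request and you can test $mech->success yourself):

use strict;
use warnings;
use WWW::Mechanize;

# timeout is passed through to the underlying LWP::UserAgent;
# autocheck => 0 lets us check success/status ourselves instead of
# having Mech die on a failed request.
my $mech = WWW::Mechanize->new(
    timeout   => 15,
    autocheck => 0,
);

$mech->get('http://example.com/login');    # placeholder URL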
To check for login, or any other Mech action, you can wait until the action is complete. This works well for sites that don't time out or go dead under too many requests.
sleep 1 until $mech->success;    # poll until the request has succeeded
Or, a more robust version can wait for success and also handle the case where you get a non-success status:
sleep 1 until $mech->success or $mech->status;
if ( $mech->status != 200 ) {    # 200 is the HTTP success code
    # handle errors: sleep if the page timed out, recurse to try again, etc.
}
else {
    # success: do whatever comes next
}
I wrapped the latter in a sub, passing in the action; on failure it recurses into the same sub until it gets success.
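Roughly what I mean, as a sketch (the sub name, the retry delay, and the coderef interface are illustrative, not my exact code):

use strict;
use warnings;
use WWW::Mechanize;

# Run a Mech action and, on a non-200 status, back off and recurse until it works.
# $action is a coderef that performs the request, e.g. a ->get or ->submit_form.
sub try_until_success {
    my ( $mech, $action ) = @_;

    $action->($mech);
    sleep 1 until $mech->success or $mech->status;

    if ( $mech->status != 200 ) {
        sleep 5;                                  # back off before retrying
        return try_until_success( $mech, $action );
    }
    return $mech;                                 # success
}

# Example use: keep retrying a page fetch (placeholder URL).
my $mech = WWW::Mechanize->new( timeout => 15, autocheck => 0 );
try_until_success( $mech, sub { $_[0]->get('http://example.com/') } );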

