
Re: WWW::Mechanize::Firefox - callbacks?

by dmz (Novice)
on Aug 30, 2010 at 17:13 UTC ( #858051=note )

in reply to WWW::Mechanize::Firefox - callbacks?

This works for vanilla WWW::Mechanize; IIRC, WWW::Mechanize::Firefox and WWW::Mechanize::InternetExplorer both mirror the methods in WWW::Mechanize. When you initialize the scraper, pass timeout => 15 to the constructor.
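A minimal constructor sketch of what I mean; timeout is passed through to the underlying LWP::UserAgent (which WWW::Mechanize subclasses), and the autocheck => 0 line is my own addition so that mech doesn't die on HTTP errors before you get a chance to inspect the status yourself:

```perl
use strict;
use warnings;
use WWW::Mechanize;

# timeout => 15 is handed to the underlying LWP::UserAgent, so a
# request gives up after 15 seconds instead of hanging forever.
my $mech = WWW::Mechanize->new(
    timeout   => 15,
    autocheck => 0,    # don't die on HTTP errors; we check status ourselves
);
```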
To check for a login (or any other mech action), the preferred method is to wait until the action is complete. This works well for sites that don't time out or go dead under too many requests.
sleep 1 until $mech->success;
or, a more robust version waits for either success or a status code, and handles the non-success case:

sleep 1 until $mech->success or $mech->status;
if ( $mech->status != 200 ) {    # 200 is the HTTP success code
    # handle errors: sleep on page timeouts, recurse to try again, etc.
}
else {
    # success: do something with the page
}
I wrapped the latter in a sub that takes the action as an argument; on failure it recurses into the same sub until I get success.
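A sketch of such a retry wrapper (the sub name and structure are my own, not from the original code); it takes the mech object and a code ref for the action, and recurses on anything but HTTP 200:

```perl
# Hypothetical retry wrapper.  $action is a code ref performing one
# mech call, e.g. sub { $mech->get($url) }.
sub try_until_success {
    my ( $mech, $action ) = @_;
    $action->();
    # Wait until the request has either succeeded or reported a status.
    sleep 1 until $mech->success or $mech->status;
    if ( $mech->status != 200 ) {    # anything but HTTP OK
        sleep 5;                     # back off before hitting the site again
        return try_until_success( $mech, $action );    # recurse and retry
    }
    return $mech->content;           # success: hand the page back
}
```

In practice you would probably also cap the recursion depth, so a permanently dead page can't loop forever.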
