This works for vanilla WWW::Mechanize; IIRC, the Firefox and InternetExplorer variants both mirror the methods in WWW::Mechanize.
When you initialize the scraper, pass timeout => 15 in the constructor.
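A minimal sketch of that constructor call. WWW::Mechanize is a subclass of LWP::UserAgent, so timeout is passed straight through; autocheck => 0 here is my own addition so the script can inspect the status itself instead of dying on an HTTP error:

```perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new(
    timeout   => 15,   # seconds before LWP gives up on a request
    autocheck => 0,    # don't die on HTTP errors; we check status ourselves
);
```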
in reply to WWW::Mechanize::Firefox - callbacks?
To check for a login, or any other mech action, you can use the preferred method of waiting until the action is complete. This works well for sites that don't time out or go dead under too many requests.
The simplest form just spins until the fetch succeeds:
sleep 1 until $mech->success;
A more robust version waits for success but also handles non-success.
I socked the latter into a sub, passing in the action; on failure it recurses into the same sub until I get success.
sleep 1 until $mech->success or $mech->status;
if ( $mech->status != 200 ) {   # 200 is the HTTP success code
    # do something: handle errors, sleep if the page timed out,
    # recurse to try again, etc.
}
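A sketch of that recursive retry sub, assuming the action is passed in as a code ref. The sub name, retry cap, and back-off interval are my own illustration, not from the post; success() and status() are the standard WWW::Mechanize methods:

```perl
use strict;
use warnings;

# Hypothetical helper: run a mech action, wait for a response, and
# recurse until we get an HTTP 200, with a retry cap so a dead site
# can't make us loop forever.
sub do_action {
    my ( $mech, $action, $tries ) = @_;
    $tries //= 5;
    die "gave up after too many retries\n" if $tries <= 0;

    $action->($mech);    # e.g. sub { $_[0]->get('http://example.com') }
    sleep 1 until $mech->success or $mech->status;

    if ( $mech->status != 200 ) {    # 200 is the HTTP success code
        sleep 5;                     # back off before trying again
        return do_action( $mech, $action, $tries - 1 );
    }
    return $mech;
}
```

Passing the action as a code ref keeps the sub generic, so the same retry logic wraps a login form submit just as well as a plain get().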