This works for vanilla WWW::Mechanize; IIRC, the Firefox and InternetExplorer variants both mirror WWW::Mechanize's methods.
When you initialize the scraper, pass timeout => 15 to the constructor.
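A minimal sketch of that constructor call (the option name is WWW::Mechanize's documented one, passed through to the underlying LWP::UserAgent; the 15-second value is just the figure suggested above):

    use strict;
    use warnings;
    use WWW::Mechanize;

    # timeout is in seconds and is handled by LWP::UserAgent,
    # which WWW::Mechanize subclasses
    my $mech = WWW::Mechanize->new( timeout => 15 );
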
To check a login or any other mech action, you can use the preferred method of waiting until the action completes. This works well for sites that don't time out or go dead under too many requests:
sleep 1 until $mech->success;
Or, a more robust version waits for either success or a status code and handles the non-success case:
sleep 1 until $mech->success or $mech->status;
if ( $mech->status != 200 ) {    # 200 is HTTP OK
    # do something:
    # handle errors, sleep if the page timed out, recurse to try again, etc.
}
I wrapped the latter in a sub, passing in the action; on failure it recurses into the same sub until it succeeds.
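A sketch of what that wrapper might look like. The sub name, the coderef argument, and the retry cap are all my own additions (the original just recurses until success; the cap keeps a permanently dead page from recursing forever):

    use strict;
    use warnings;

    # $action is a coderef performing one mech step, e.g. a get() or form submit
    sub try_action {
        my ( $mech, $action, $tries ) = @_;
        $tries //= 0;

        # my addition: give up eventually instead of recursing forever
        die "giving up after $tries attempts\n" if $tries >= 5;

        $action->($mech);
        sleep 1 until $mech->success or $mech->status;
        return 1 if $mech->status == 200;    # 200 is HTTP OK

        sleep 5;                             # back off before retrying
        return try_action( $mech, $action, $tries + 1 );
    }

    # usage (URL is a placeholder):
    # try_action( $mech, sub { $_[0]->get('http://example.com/login') } );
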