AssFace has asked for the wisdom of the Perl Monks concerning the following question:
I have had an account at pair.com for a few years. Up until very recently, it was a shared account. As a result, any script that ran for more than 30 seconds was killed.
Now I have my own pair dedicated server. One of the great benefits of that is that my scripts can run as long as they want... One of the downsides is that the scripts can run as long as they want.
I have a script that connects to eBay and grabs some feedback from random users, then connects to Yahoo and grabs the headlines. This script works perfectly most of the time.
But sometimes this script will run pretty much forever and eat all of the CPU. I can log in every now and then, check uptime to see how things are going, and kill the script if need be... but aren't all us Perl users lazy by default? Therefore, I'd rather automate this script's death.
The script has loops, so dumping out checkpoints to see where it is snagging up is a real hassle, especially since it has never hung on me while testing; it only does it "in the wild" sometimes. The loops generate a lot of output, too. Sure, I could set it up to print only every N iterations, but then I'd miss whatever happens between those N times, so I'd rather avoid that.
As far as I can tell, short of rewriting the script or spending a lot of time debugging the odd cases where it goes astray, I have two easy options. I could write my own watchdog script and run it via cron to check for this thing hogging the CPU and kill it. Or, and this is what I would rather do, the script itself could keep track of how long it has been alive, and if that exceeds some limit (30 seconds or so), it should kill itself.
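Here is a sketch of the self-kill idea using Perl's built-in alarm() and a SIGALRM handler; the 30-second limit, the "timeout" message, and the do_the_real_work() sub are all placeholders for illustration. (One caveat I'm aware of: alarm() may not interrupt every blocking call on every platform, so a network read could conceivably still wedge.)

```perl
use strict;
use warnings;

my $TIMEOUT = 30;    # seconds the script is allowed to live

eval {
    local $SIG{ALRM} = sub { die "timeout\n" };    # "\n" keeps die from appending "at ... line ..."
    alarm($TIMEOUT);

    do_the_real_work();    # the eBay/Yahoo fetching loops would go here

    alarm(0);              # cancel the pending alarm; we finished in time
    print "finished in time\n";
};
if ($@) {
    die $@ unless $@ eq "timeout\n";    # rethrow anything unexpected
    warn "Script exceeded ${TIMEOUT}s; exiting.\n";
    exit 1;
}

sub do_the_real_work {
    sleep 1;    # placeholder for the actual scraping work
}
```

The eval block catches the die thrown by the handler, so the script can log a note before exiting instead of just vanishing.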
I post here to ask whether either of those options is feasible, or if there is an even better way that I'm just not seeing at this point.
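For the cron-watchdog option, something like the following is what I have in mind; it parses the elapsed-time column of ps and TERMs any match older than the limit. The script name is hypothetical, and the exact ps flags (-axo pid,etime,command) are an assumption about the BSD-style ps on a Pair box, so they would need checking.

```perl
use strict;
use warnings;

my $target   = 'ebay_yahoo.pl';    # hypothetical name of the runaway script
my $max_secs = 30;

# Convert a ps etime field ([[dd-]hh:]mm:ss) to seconds.
sub etime_to_secs {
    my ($etime) = @_;
    my ( $d, $h, $m, $s ) = ( 0, 0, 0, 0 );
    if    ( $etime =~ /^(\d+)-(\d+):(\d+):(\d+)$/ ) { ( $d, $h, $m, $s ) = ( $1, $2, $3, $4 ) }
    elsif ( $etime =~ /^(\d+):(\d+):(\d+)$/ )       { ( $h, $m, $s ) = ( $1, $2, $3 ) }
    elsif ( $etime =~ /^(\d+):(\d+)$/ )             { ( $m, $s ) = ( $1, $2 ) }
    return ( ( $d * 24 + $h ) * 60 + $m ) * 60 + $s;
}

for my $line (`ps -axo pid,etime,command`) {
    next unless $line =~ /\Q$target\E/;
    my ( $pid, $etime ) = $line =~ /^\s*(\d+)\s+(\S+)/ or next;
    kill 'TERM', $pid if etime_to_secs($etime) > $max_secs;
}
```

Run from cron every minute or so, this would cap a runaway at roughly a minute and a half, which is good enough for my purposes.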
There are some odd things afoot now, in the Villa Straylight.