There's a reason the unsafe signals are called "unsafe". It's been my loud opinion for a long time that Perl should never have had unsafe signals ... there is never an excuse for Perl dumping core, and with unsafe signals, Perl dumps core. If you're lucky.
-- Chip Salzenberg, Free-Floating Agent of Chaos
Actually it is best to have both, with deferred as the default. Why? Because sometimes it is better to fail completely and restart the process than to hang forever. This is primarily useful in production environments where time is the critical factor (e.g., trading systems).
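(For the record, Perl 5.8.1 and later already ships both behaviors: %SIG handlers are deferred by default, and unsafe delivery is a per-process opt-in through the PERL_SIGNALS environment variable, documented in perlrun. A minimal sketch:)

    use strict;
    use warnings;

    # Default since 5.8.1: "safe" signals. The C-level handler only sets
    # a flag; Perl runs this sub between opcodes, when the interpreter's
    # internals are in a consistent state.
    $SIG{USR1} = sub { print "got SIGUSR1 (deferred delivery)\n" };
    kill 'USR1', $$;    # delivered at the next safe point

    # The old, immediate behavior is a per-process opt-in, set before
    # perl starts:
    #   $ PERL_SIGNALS=unsafe perl script.pl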
Update: I should have been clearer: the scripts are mostly for maintenance and information gathering, not for the trading itself.
The maintenance windows tend to be very short. If a script hangs on, say, a gethostbyaddr(), then abort the call and retry the gethostbyaddr(); if the script goes down, start the entire job over.
If the process normally takes 10 minutes and the script hangs at 9 minutes 59 seconds, then I would rather risk restarting that hanging operation than have to start all over again. If it goes down, then I will start it all over again.
I'm using "trading systems" as an example..
You would use unsafe signals in a trading system?
Have you gone totally mad?
Unsafe signals can hit at any time, and leave the Perl internals in an unknown, unsafe, unusable state. Then the unsafe signal handler proceeds to execute Perl code using those internals! A trading system is exactly the kind of high-stakes deployment where everything must be safe!
If you want a fast restart, then you should have a watchdog process outside the target, which can kill it (with SIGKILL if necessary) and restart it. That way you get a fast response without putting yourself at risk of a big financial bath.
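A minimal watchdog sketch along those lines, where all the recovery logic lives in a separate supervising process and no Perl code ever runs from an asynchronous signal handler (worker.pl and the 600-second budget are hypothetical placeholders):

    use strict;
    use warnings;
    use POSIX qw(WNOHANG);

    my $budget = 600;    # hypothetical hard deadline: 10 minutes

    while (1) {
        defined(my $pid = fork()) or die "fork failed: $!";
        if ($pid == 0) {
            exec 'perl', 'worker.pl';    # placeholder for the real job
            die "exec failed: $!";
        }

        # Poll for exit until the deadline; the watchdog never runs
        # application code inside a signal handler.
        my $deadline = time() + $budget;
        my $reaped   = 0;
        while (time() < $deadline) {
            if (waitpid($pid, WNOHANG) == $pid) { $reaped = 1; last }
            sleep 1;
        }

        if ($reaped) {
            last if $? == 0;    # clean exit, job done
            warn "worker died with status $?, restarting\n";
        }
        else {
            warn "worker blew its ${budget}s budget, sending SIGKILL\n";
            kill 'KILL', $pid;    # cannot be caught, blocked, or ignored
            waitpid($pid, 0);     # reap before restarting
        }
    }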
-- Chip Salzenberg, Free-Floating Agent of Chaos