Forking Required, Fork Bombs are Not
by tradez (Pilgrim)
on Dec 07, 2004 at 16:47 UTC
tradez has asked for the wisdom of the Perl Monks concerning the following question:
Hola, my fellow Clergy,
The issue I come to you with today is a mix of distributed computing, a favorite of all my SETI friends, and forking (which until 5.8 was a favorite of no one). I am currently using Proc::Queue as my fork management system and I am loving it. I do see downtime while processes switch off, though, which I would like to eliminate by allowing my children to spawn children. That is scary enough that I thought I would bounce it off the wall first. Consider the following:
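Roughly, the current two-pass setup looks like this (a simplified sketch, not the real code: the queue size, hostnames, and paths are stand-ins, and I'm assuming the target directories already exist):

    use strict;
    use warnings;
    use Proc::Queue size => 10, qw(run_back);

    my @boxes = qw(box1 box2 box3);    # stand-ins for the real remote hosts

    # First pass: one queued child per box runs the remote script,
    # then copies the result files back to the local machine.
    for my $box (@boxes) {
        run_back {
            system("ssh $box /path/to/remote_script.pl") == 0
                or die "remote script failed on $box\n";
            system("scp -r $box:/path/to/results /local/incoming/$box") == 0
                or die "copy from $box failed\n";
            0;    # child's exit status
        };
    }
    1 while wait != -1;    # drain the queue

    # Second pass: one queued child per box crunches the copied files locally.
    for my $box (@boxes) {
        run_back {
            system("/path/to/local_crunch.pl", "/local/incoming/$box") == 0
                or die "local crunch failed for $box\n";
            0;
        };
    }
    1 while wait != -1;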
This all works great. The only problem is that the first two steps in the first run_back (running the remote script and getting the files back from the remote box) cause a lot of local idle time. What I would like to do, instead of having the second run_back, is have the first run_back launch 30 procs to do the COPY while the next job in the Proc::Queue line goes and instantiates the scripts on the next remote box. Does this logic work? What is my best path to follow? Oh, let your wisdom shine down upon me.
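What I'm picturing is something like this (again just an untested sketch: I'm assuming Proc::Queue's fork_now for the copy children so they skip the queue limit, and that the remote script leaves its output in numbered chunk files; hosts and paths are still stand-ins):

    use strict;
    use warnings;
    use Proc::Queue size => 10, qw(run_back fork_now);

    my @boxes        = qw(box1 box2 box3);    # stand-in hostnames
    my $copy_workers = 30;

    for my $box (@boxes) {
        run_back {
            # Step 1: kick off the remote script. The wait here is the
            # idle time, but now it only ties up this one queue slot.
            system("ssh $box /path/to/remote_script.pl") == 0
                or die "remote script failed on $box\n";

            # Step 2: fan the copy out over grandchildren. fork_now skips
            # the queue, so these don't count against (or deadlock on)
            # the slot limit.
            my @kids;
            for my $n (1 .. $copy_workers) {
                my $pid = fork_now;
                die "fork failed: $!" unless defined $pid;
                if ($pid == 0) {
                    # grandchild: copy one chunk, exit with scp's status
                    exec("scp", "$box:/path/to/out/chunk$n.dat",
                                "/local/incoming/$box/")
                        or die "exec scp failed: $!";
                }
                push @kids, $pid;
            }

            # Hold this queue slot until all 30 copies finish; meanwhile
            # the queue has already moved on to the next box.
            my $failed = 0;
            for my $pid (@kids) {
                waitpid $pid, 0;
                $failed++ if $?;
            }
            $failed ? 1 : 0;    # nonzero exit if any copy failed
        };
    }
    1 while wait != -1;

If I read the Proc::Queue docs right, a plain fork inside the child would go through the module's overridden global fork and get queued against the size limit, which is why I reached for fork_now for the grandchildren; the nested fork is the part that scares me, so please tell me if this is a fork bomb waiting to happen.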
"Every official that come in
Cripples us leaves us maimed
Silent and tamed
And with our flesh and bones
He builds his homes"
- Zack de la Rocha