I’d program the system so that it first put those 70-odd commands into a list, then launched a “configurable n” number of children (yes, use system_detached ...), and then waited for any one of them to complete before popping the next command off the list and starting it. That way, the number of workers alive at any moment never exceeds n, no matter how many commands remain to be executed. (In the design I am thinking of, the children don’t themselves take work from this list. The parent process uses it to create and launch the next command-to-do.)

I think you’ll find that there is a “sweet spot” number of children that should be allowed to be active at one time, and that this number (found by experimentation ...) is likely to be fairly small. The total time required to do the total amount of work will be at or near its minimum around this point. If, instead, you “just throw children at it,” the total time might be much longer, because the children compete with one another (especially for I/O). In the general case, the performance curve of such things develops “an elbow” at the so-called thrash point: it’s nice and linear up to that point, then it all goes sour exponentially fast. Regulating the number of workers, independently of the amount of work to do, is a reliable and controllable way to prevent that. Yeah, it means waitpid().

Incidentally, on Unix/Linux systems, the trusty xargs command ordinarily has a -P number_of_children option which allows you to specify the size of a worker pool. On such systems, the need for this entire Perl script(!) might have been avoided.
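To make the shape of that parent-managed loop concrete, here is a minimal Perl sketch, assuming a Unix-style fork/waitpid (on Win32 you would spawn with the system(1, ...) hack instead and reap the value it returns with waitpid). The command strings and the $MAX_WORKERS value are placeholders, not anything from the original post:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $MAX_WORKERS = 4;    # the "configurable n": find the sweet spot by experiment
    my @commands = map { "some_command --chunk $_" } 1 .. 70;   # placeholder work list

    my %alive;              # pid => command, the workers currently running

    while ( @commands or %alive ) {

        # Top up the pool: the parent pops the next command and launches it,
        # never letting more than $MAX_WORKERS children run at once.
        while ( @commands and keys(%alive) < $MAX_WORKERS ) {
            my $cmd = shift @commands;
            defined( my $pid = fork() ) or die "fork failed: $!";
            if ( $pid == 0 ) {                 # child: run exactly one command
                exec $cmd or die "exec '$cmd' failed: $!";
            }
            $alive{$pid} = $cmd;               # parent: note the new worker
        }

        # Block until any one child exits, then loop around to refill the pool.
        my $done = waitpid( -1, 0 );
        delete $alive{$done} if $done > 0;
    }

If you’d rather not hand-roll the bookkeeping, CPAN’s Parallel::ForkManager wraps up exactly this pattern: new($n) to cap the pool, start/finish around each job, and wait_all_children at the end.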
In reply to Re: Surprised by limitation of Win32 "system(1" hack by sundialsvc4