in reply to fork always 'n' processes.
I do not readily see the advantage of using forks, especially on the client side, and especially since the server is handling requests one at a time. The client is always going to be stuck in a rather long I/O wait for the server to respond, so you simply need to throttle how many requests the client has outstanding at one time. (Each time a reply arrives, it can check a counter and submit a new request.)
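The counter idea above can be sketched without any real network code. This is a hypothetical simulation of the bookkeeping only: a real client would block in select() (or use IO::Select) waiting for replies, whereas here a "reply" simply arrives once per loop iteration.

```perl
#!/usr/bin/perl
# Sketch of counter-based throttling: never more than $max_in_flight
# requests outstanding; each reply frees a slot for the next request.
use strict;
use warnings;

my $max_in_flight = 3;
my @queue         = map { "req$_" } 1 .. 10;   # stand-in request queue
my $in_flight     = 0;
my $handled       = 0;

while ( @queue or $in_flight ) {
    # Top up to the limit while requests remain.
    while ( $in_flight < $max_in_flight and @queue ) {
        my $req = shift @queue;
        $in_flight++;
        # ... send $req to the server here ...
    }
    # A reply "arrives": decrement the counter, freeing a slot.
    $in_flight--;
    $handled++;
}
print "$handled replies handled\n";
```

Every request eventually gets a reply, so the script reports all ten handled while never exceeding three in flight.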
Re^2: fork always 'n' processes.
by Anonymous Monk on Mar 20, 2018 at 16:48 UTC
Thanks for answering.
1. I have a queue of tasks that a socket is feeding me;
2. I would like to process these tasks 'n' processes at a time;
3. 'n' processes should always be active and running;
4. until the end of the queue has been reached.
I am also thinking of a 'scalable' way to use more clients to process the task queue.
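The four requirements above describe a classic fork-based worker pool: prime the pool with 'n' children, and each time one exits, start a replacement until the queue is drained. A minimal core-Perl sketch, with a plain @tasks list standing in for whatever the socket delivers:

```perl
#!/usr/bin/perl
# Keep exactly $n child processes busy until the task queue is empty.
use strict;
use warnings;

my $n     = 3;
my @tasks = map { "task$_" } 1 .. 10;   # stand-in for the socket-fed queue
my %kids;                               # pid => task being processed

sub spawn {
    my ($task) = @_;
    defined( my $pid = fork() ) or die "fork failed: $!";
    if ( $pid == 0 ) {
        # child: do the real work on $task here, then exit
        exit 0;
    }
    $kids{$pid} = $task;                # parent: remember the worker
}

# Prime the pool with up to $n workers.
spawn( shift @tasks ) while keys %kids < $n and @tasks;

# Each time a child exits, start a replacement, so $n stay active
# until the queue is drained.
while (%kids) {
    my $pid = wait();
    delete $kids{$pid};
    spawn( shift @tasks ) if @tasks;
}
print "all tasks done\n";
```

The blocking wait() is what keeps the pool topped up: the parent sleeps until any child finishes, reaps it, and immediately forks the next task if one remains.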
TheMagician
You should elaborate on your requirements. Is this intended to be portable? Windows/Linux? Does it need to be Perl code? There are utilities that can help run parallel tasks (e.g. xargs). Do you need the jobs as separate processes, or might a threaded version suit you as well?
Have you searched for modules (e.g. Parallel::ForkManager)? Do you actually need a robust, efficient solution or is this homework?