Keep It Simple, Stupid
PerlMonks
Re: Forking Multiple Threads — by sundialsvc4 (Abbot)
on Feb 08, 2012 at 14:04 UTC ( [id://952506] )
Wince! 500 of anything?!

Kindly consider that "adding more threads" to any soup does not speed things up: it slows things down by some amount, except to the extent that the hardware (on both ends of a communication link) actually can overlap I/O and computation. (And if you thereby overload resources, especially memory, the whole thing goes to hell in a handbasket very quickly.)

I suggest that you add some measurements to your request queues. Measure the time that a request actually sits in the queue before being sent to the host; then, measure the time the request takes to be returned. Now, experiment with what happens as you reduce ... I suggest, drastically reduce ... the number of processes and/or threads: the so-called "multiprogramming level" of your system. Now you can objectively measure the result.

Let me predict what you will find. I suggest that the actual performance curve will follow the "bent knee" pattern that is typical of systems subject to "thrashing": processing time rises more-or-less linearly until it hits the wall, and then the curve goes straight up into an exponentially-bad increase. You are probably already there. (Notice that I am talking about "request completion time" every bit as much as, if not more than, simply "how much smoke is coming out of the ventilation vents of your CPUs.")

I/O requests of this type are typically asynchronous: you can start a lot of them, but you don't have to dedicate a thread to wait for the completion of each one. You can use a select() type of mechanism and perhaps use only one thread for the whole shebang. Networks run in terms of milliseconds; CPUs, in terms of nanoseconds.
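The queue-timing suggestion above amounts to stamping each request at three points and subtracting. A minimal sketch, using Time::HiRes and a hypothetical per-request hash (the field names here are invented for illustration, not from the original poster's code):

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Hypothetical request record: stamp it when enqueued, when a
# worker actually picks it up, and when the reply comes back.
my %req = ( host => 'example.com', enqueued_at => time );

# ... later, when a worker dequeues the request and sends it:
$req{sent_at}   = time;
my $queue_wait  = $req{sent_at} - $req{enqueued_at};

# ... later still, when the reply arrives:
$req{done_at}   = time;
my $turnaround  = $req{done_at} - $req{sent_at};

printf "waited %.3f s in queue, %.3f s on the wire\n",
    $queue_wait, $turnaround;
```

Plotting queue_wait against the number of workers is what exposes the "bent knee": past the saturation point, queue wait dominates total completion time.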
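The select() suggestion can be sketched in Perl with IO::Select: one process opens every connection, then a single loop services whichever sockets become readable. The host list and the HEAD request here are stand-ins, assumed for illustration only:

```perl
use strict;
use warnings;
use IO::Select;
use IO::Socket::INET;
use Time::HiRes qw(time);

# Hypothetical target list; substitute your own hosts.
my @hosts = ('example.com:80', 'example.org:80');

my $sel  = IO::Select->new;
my %meta;                       # socket => { host, sent_at }

for my $host (@hosts) {
    my $sock = IO::Socket::INET->new(PeerAddr => $host, Timeout => 5)
        or next;                # skip hosts we cannot reach
    print $sock "HEAD / HTTP/1.0\r\nHost: $host\r\n\r\n";
    $meta{$sock} = { host => $host, sent_at => time };
    $sel->add($sock);
}

# One process, one select() loop, services every outstanding request:
# no thread ever sits blocked waiting on a single socket.
my $deadline = time + 10;
while ($sel->count and time < $deadline) {
    for my $sock ($sel->can_read(1)) {
        my $m = delete $meta{$sock};
        sysread $sock, my $buf, 8192;
        printf "%s replied after %.3f s\n", $m->{host}, time - $m->{sent_at};
        $sel->remove($sock);
        close $sock;
    }
}
```

With this shape, the "multiprogramming level" is one, yet hundreds of requests can be in flight, because the waiting happens inside the kernel's select() rather than in parked threads.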
In Section: Seekers of Perl Wisdom