Re: Design advice: Classic boss/worker program memory consumption

by sundialsvc4 (Abbot)
on May 20, 2014 at 23:34 UTC


in reply to Design advice: Classic boss/worker program memory consumption

Another trick is to reduce the parent's role strictly to managing the children. Spin off the work of creating the %dataset and of determining the number of workers to a single child ... call it the project manager (PM). The PM then informs the parent how many children are needed, and the parent spawns them. Worker processes can, say, ask the PM for another unit of work, which the PM doles out and sends to them. There is also another child process which is the recipient and filer of completed work units.

Now, you have a dumb parent who has two gifted children (the PM and the filer), as well as a variable number of hired grunts. No large chunks of memory get duplicated on fork, and that should eliminate your memory problem. The parent's only job is to see to it that its children stay alive. The children talk among themselves. Like any good bureaucrat, the parent is responsible for everything but basically does nothing ... :-)
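For illustration, the shape of that layout in Perl might be something like the following minimal sketch (Unix-ish fork()/pipe(); the sub names, the worker count, and the stubbed-out child-to-child IPC are all invented, not anything from the original post):

    #!/usr/bin/perl
    use strict;
    use warnings;

    pipe(my $from_pm, my $to_parent) or die "pipe: $!";

    my $pm_pid = fork() // die "fork: $!";
    if ($pm_pid == 0) {                        # --- project manager child ---
        close $from_pm;
        my %dataset = build_dataset();         # the big hash lives ONLY here
        my $n_workers = 4;                     # decide however is appropriate
        print {$to_parent} "$n_workers\n";     # tell the dumb parent the head-count
        close $to_parent;
        serve_work_units(\%dataset);           # answer the workers' requests
        exit 0;
    }
    close $to_parent;

    chomp(my $n_workers = <$from_pm>);         # parent learns how many to spawn
    close $from_pm;

    my @kids = ($pm_pid, spawn(\&filer_loop)); # the "filer" child
    push @kids, spawn(\&worker_loop) for 1 .. $n_workers;

    waitpid($_, 0) for @kids;                  # parent's only job: mind the kids

    sub spawn {
        my ($body) = @_;
        my $pid = fork() // die "fork: $!";
        if ($pid == 0) { $body->(); exit 0 }
        return $pid;
    }

    # Stubs standing in for the real logic (e.g. a UNIX socket connecting
    # the PM, the filer, and the workers):
    sub build_dataset    { return (unit1 => '...', unit2 => '...') }
    sub serve_work_units { }
    sub filer_loop       { }
    sub worker_loop      { }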


Re^2: Design advice: Classic boss/worker program memory consumption
by shadrack (Acolyte) on May 21, 2014 at 05:09 UTC

    Thanks. Although my sense is that your solution as described is actually a bit more complex than solution 1, a variation of the core idea might be the way to go (at least for me). The solution I'm thinking of is identical to the one you've described up to the point where the parent spawns the workers. At this point the PM, instead of hanging around and answering requests from the children, sends the entire %dataset to the parent, then quits. The parent then continues to run the show exactly as it does today (remember, for me, this code is not only written, but well-tested).

    Some inefficiency will be introduced by the need for the PM to communicate the entire %dataset to the parent via IPC. I'm thinking that, since this only happens once, it probably won't have a huge impact on performance, though obviously, I'll have to test it.
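    For instance, the handoff could be as small as the following sketch, using Storable over a pipe (assuming %dataset holds plain Perl data that Storable can serialize; build_dataset() is a stand-in for the real construction code):

        use strict;
        use warnings;
        use Storable qw(store_fd fd_retrieve);

        pipe(my $rd, my $wr) or die "pipe: $!";

        my $pm_pid = fork() // die "fork: $!";
        if ($pm_pid == 0) {                  # --- PM child: build, ship, quit ---
            close $rd;
            my %dataset = build_dataset();   # the big hash is built here only
            store_fd(\%dataset, $wr);        # serialize it down the pipe...
            close $wr;
            exit 0;                          # ...then get out of the way
        }
        close $wr;

        # ... spawn the workers HERE, while the parent is still small ...

        my $dataset = fd_retrieve($rd);      # parent inflates the hash exactly once
        close $rd;
        waitpid($pm_pid, 0);

        sub build_dataset { return (unit1 => 'work', unit2 => 'more work') }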

    At any rate, it's a really good idea. Thanks!

Re^2: Design advice: Classic boss/worker program memory consumption (less async)
by tye (Cardinal) on May 21, 2014 at 13:53 UTC

    Having the parent (which neither builds the data nor does any of the tasks) "dole out" tasks to the children on behalf of the one child that builds the data adds a lot of complexity vs. just having a pipe for the children to read from. It can also easily reduce concurrency. The significant increase in complexity comes from this one process trying to manage competing tasks that then need to be handled asynchronously.

    To keep the code simple, you could have the parent be responsible only for spawning children. One child builds the hash and sends out jobs on a pipe from which all of the other children read (fixed-length) jobs. Those children then write their responses, each no larger than "the system buffer", to a "return" pipe, each response sent with one syswrite().

    This leaves only one tiny bit of "async" fiddling to worry about, confined to the one child, and in a way that can be handled simply in Perl while requiring only two simple pipes.

    But to be able to have children pick up tasks from a single shared pipe, you have to use fixed-length task data (no larger than the system buffer, PIPE_BUF, which is 4KB on Linux) so that each read grabs exactly one whole task. To be able to use a single pipe for responses from all of the workers, each response needs to be written as one chunk (with a fixed-length prefix giving the length of the chunk, the whole thing totaling no more than the system buffer size) using a single syswrite(), so that responses from different workers cannot interleave. (And not all worker systems require a response to be sent back to the holder of the huge hash.)
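    Here is one way the two-pipe scheme could look, as a self-contained sketch (the job size, job format, and worker count are invented for illustration; a real version would also want EINTR handling and less toy-like error recovery):

        use strict;
        use warnings;

        use constant JOB_LEN  => 64;     # fixed job size, well under PIPE_BUF
        use constant PIPE_BUF => 4096;   # Linux's atomic-write limit for pipes

        pipe(my $job_rd,  my $job_wr)  or die "pipe: $!";
        pipe(my $resp_rd, my $resp_wr) or die "pipe: $!";

        my @workers;
        for my $n (1 .. 3) {
            my $pid = fork() // die "fork: $!";
            if ($pid == 0) {                            # ----- worker child -----
                close $job_wr;  close $resp_rd;
                while (1) {
                    my $got = sysread($job_rd, my $job, JOB_LEN);
                    die "sysread: $!" unless defined $got;
                    last if $got == 0;                  # dispatcher closed the pipe
                    die "short read" if $got != JOB_LEN;
                    (my $task = $job) =~ s/\0+\z//;     # strip the NUL padding
                    my $reply = "worker $n did [$task]";
                    # ONE syswrite per response => responses cannot interleave
                    my $msg = pack('N', length $reply) . $reply;
                    die "response too big" if length($msg) > PIPE_BUF;
                    syswrite($resp_wr, $msg) == length($msg) or die "syswrite: $!";
                }
                exit 0;
            }
            push @workers, $pid;
        }
        close $job_rd;  close $resp_wr;                 # dispatcher keeps the other ends

        my @jobs = ('frob the widget', 'grease the sprocket', 'audit the flange');
        for my $task (@jobs) {
            my $rec = pack("a" . JOB_LEN, $task);       # NUL-pad to the fixed length
            syswrite($job_wr, $rec) == JOB_LEN or die "syswrite: $!";
        }
        close $job_wr;                                  # EOF tells the workers to quit

        for (1 .. @jobs) {                              # collect length-prefixed replies
            sysread($resp_rd, my $len_buf, 4) == 4 or die "bad length prefix";
            my $len = unpack('N', $len_buf);
            sysread($resp_rd, my $reply, $len) == $len or die "short reply";
            print "$reply\n";
        }
        waitpid($_, 0) for @workers;

    Because every write to either pipe is atomic (it fits within PIPE_BUF and goes out in a single syswrite()), a blocking fixed-size read always yields exactly one whole job, and replies from different workers arrive on the return pipe intact.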

    There might be a bit of complexity added by this "just spawns children" parent needing to keep handles open to both ends of both pipes if we expect it to be able to replace both types of special children. I'd have to actually write the code to be sure of the consequences of that. I'd probably simplify it a bit by having the death of the "build the big hash" child cause the parent to re-initialize by exec()ing itself.

    There could still be some added complexity just from having one more process that needs to keep handles to one end of each of the two pipes open. But I think that would only impact the "shut down cleanly" code. Again, until I swap all of the little details in (probably by just writing the code), I'm not sure of the full consequences. (Update: Yeah, the complexity comes because, if workers send back responses, you'll need to add a way for the one "build the tasks" child to tell the parent that it is time to shut down or else a way for the parent to tell the one child "all workers are finished".)

    (And getting this approach to work reliably on Windows is mostly a whole separate problem because you can't just use fork() and because Windows pipes don't have the same guarantees -- though you can use "message read mode", which should work but is just a separate problem to solve.)

    - tye        
