
On handling multiple generations and data between them

by atcroft (Abbot)
on Aug 18, 2002 at 12:16 UTC ( #190976=perlquestion )

atcroft has asked for the wisdom of the Perl Monks concerning the following question:

A friend came to me with a problem regarding multiple generations (levels) of child processes when forking that has stumped my poor store of information, and so I present the query to thee, hoping to increase in knowledge to answer. The questions follow the background information below.

My friend is writing a client-server process that involves multiple levels of children. While I am not privy to the internals of the project, I can provide the following basic information on the flow, which he agreed to allow me to post.

  • Level 0 is the process run from the command-line, which waits for a connection, gets the request, and hands it to a level 1 child for processing. This level should also be the only one to handle updating the data store (which I believe may currently be a tied hash).
  • Level 1 is a process which looks for data locally to respond to the request and, if necessary, may spawn multiple level 2 children to look data up from other systems. Because of time constraints, and because some of those systems may be unavailable or heavily loaded, this child should wait only for the first responding level 2 child before answering the request, then reap the remaining level 2 children and pass update information back to the level 0 parent if necessary.
  • Level 2 is a process which makes a request to an external system. It should respond back with the data it receives in response to the query, or a code indicating that the external system was unavailable.
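As a minimal sketch of the level 1 "first responder wins" requirement described above (the source names `hostA`/`hostB` and the `level2_lookup` stand-in are my own illustrations, not part of the actual project), the level 1 process might look like this:

```perl
use strict;
use warnings;

sub level2_lookup {
    my ($source) = @_;
    return "answer-from-$source";    # stand-in for a query to an external system
}

sub level1_handle {
    my (@sources) = @_;
    pipe(my $rd, my $wr) or die "pipe: $!";
    my @kids;
    for my $src (@sources) {
        my $pid = fork() // die "fork: $!";
        if ($pid == 0) {                 # level 2 child
            close $rd;
            print {$wr} level2_lookup($src), "\n";
            exit 0;
        }
        push @kids, $pid;
    }
    close $wr;                           # keep only the read end in the parent
    my $first = <$rd>;                   # block until the first responder writes
    kill 'TERM', @kids;                  # dismiss the stragglers...
    waitpid($_, 0) for @kids;            # ...and reap every level 2 child
    close $rd;
    return $first;
}

print level1_handle(qw(hostA hostB));
```

Sending TERM to a child that has already exited is harmless, so the parent can simply signal and then reap the whole list.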

Part 1
How can the SIGCHLD resulting from the exit of a level 2 child be kept from being seen by the level 0 (grand)parent? My guess was to redefine the handler routine for SIGCHLD, but is there anything at that point that could cause the signal to propagate back to the level 0 process, other than the possibility that it is the last of the level 2 processes being waited upon by the level 1 process that spawned them? Even in that case, I would think it would be a different instance of SIGCHLD.

Part 2
How can the data from a level 2 child best be returned to its level 1 parent without blocking the parent from getting the data from another child that responds more quickly? Or from the level 1 child to the level 0 (grand)parent without blocking it from responding to further incoming requests? Could or should either of these be handled in the SIGCHLD handler(s)? From the way it was described, a large number of level 1 children might exist at times, each spawning a number of level 2 children, so I am concerned that using pipe() or the forking form of open() might result in opening too many file handles. I am also not sure what OS this will be loaded on (various forms of *nix certainly, although there was mention of possibly running it on some form of Windows, or other systems as well), so I am not sure IPC::Shareable would be an option either.

Part 3
Would it be possible for the SIGCHLD handler defined at a particular parent-child level (0-1 or 1-2) to interact with or initiate changes in the data held at that parent level (0 or 1, respectively)?

Any wisdom and/or instruction in these matters would be greatly appreciated.


Replies are listed 'Best First'.
Re: On handling multiple generations and data between them
by Zaxo (Archbishop) on Aug 18, 2002 at 17:19 UTC

    Part 1:
    The grandparent will never see SIGCHLD from a grandchild. SIGCHLD is only delivered to the parent pid of a process. If the original parent has already exited, the init process inherits the kids.
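    A small self-contained demonstration of this point (the three-level structure is my own illustration): the top process installs a counting SIGCHLD handler, forks a child, and that child forks and reaps a grandchild. Only the child's own exit is signalled at the top.

```perl
use strict;
use warnings;

my $chld_seen = 0;
$SIG{CHLD} = sub { $chld_seen++ };     # grandparent's handler just counts

my $child = fork() // die "fork: $!";
if ($child == 0) {
    $SIG{CHLD} = 'DEFAULT';            # the middle process reaps on its own
    my $gc = fork() // die "fork: $!";
    exit 0 if $gc == 0;                # grandchild exits immediately
    waitpid($gc, 0);                   # its SIGCHLD lands here, not above
    exit 0;
}
waitpid($child, 0);                    # only $child's exit signals the top
select undef, undef, undef, 0.2;       # let any pending signal be delivered
print "SIGCHLDs seen at level 0: $chld_seen\n";
```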

    Part 2:
    In Many-to-One pipe, I demonstrated a technique for having many child processes speak to the parent over a single pipe by taking advantage of the duplication of file handles on fork. That method seems just right for your friend's problem. There is no overuse of file handles, and the parent only needs to read on one, making four-arg select unnecessary.
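    A minimal sketch of the technique (the message text and child count are illustrative): create one pipe before forking, let every child inherit the write end, and have the parent read from the single read end. Short writes stay atomic, so lines from different children do not interleave.

```perl
use strict;
use warnings;

pipe(my $rd, my $wr) or die "pipe: $!";
my @kids;
for my $n (1 .. 3) {
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                   # every child inherits the same write end
        close $rd;
        print {$wr} "child $n reporting\n";   # short writes are atomic
        close $wr;
        exit 0;
    }
    push @kids, $pid;
}
close $wr;             # parent must close its copy, or the read never sees EOF
my @lines = <$rd>;     # one read end collects every child's output
close $rd;
waitpid($_, 0) for @kids;
print scalar(@lines), " messages on one pipe\n";
```

Note the parent closing its own copy of the write end: until every writer (including the parent) has closed it, the read loop will never see end-of-file.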

    I agree that starvation for open file descriptors or pids is a danger. At level 2, I'd suggest ordering the data sources, best first, and accepting however many children you can get. If you get none, sleep and retry. A level 2 child which cannot open a socket due to fd starvation should also sleep and retry. Each level 1 process represents a unique query, so the sleep-and-retry strategy is probably best for all its resource grabbing.
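    The sleep-and-retry idea might be sketched like this (the `fork_retry` helper and retry count are my own illustration): fork() sets $! to EAGAIN when it fails for lack of processes, and only that failure is worth retrying.

```perl
use strict;
use warnings;
use Errno qw(EAGAIN);

# Returns a pid (0 in the child), or undef after $tries failed attempts.
sub fork_retry {
    my ($tries) = @_;
    for (1 .. $tries) {
        my $pid = fork();
        return $pid if defined $pid;
        die "fork: $!" unless $! == EAGAIN;   # only retry resource shortage
        sleep 1;                              # back off before trying again
    }
    return undef;
}

my $pid = fork_retry(5);
die "could not fork after retries" unless defined $pid;
exit 0 if $pid == 0;        # the level 2 work would go here
waitpid($pid, 0);
print "forked and reaped child $pid\n";
```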

    This architecture is supposed to be possible on Windows, under new-enough perl, but I keep hearing of problems. Try a skeleton version of the program and see if it works.

    Part 3:
    Be careful what you do in signal handlers. A signal handler must avoid making system calls which alter the kernel's global state. The usual suspect is malloc. Make sure that any variable the handler modifies is defined, of the correct type, and has enough storage already for what you write to it. I would try to avoid signal handlers as much as possible, relying on wait or waitpid to reap exited kids. Where signal handlers are unavoidable, use them to set predefined globals which the running process can interpret for more dangerous kinds of operations.
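    A minimal sketch of the set-a-flag pattern (the event loop here is a stand-in, not the project's real loop): the handler only flips a pre-existing scalar, and the main loop does the actual reaping with waitpid and WNOHANG.

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

my $reap_needed = 0;
$SIG{CHLD} = sub { $reap_needed = 1 };   # nothing else: no I/O, no allocation

my $pid = fork() // die "fork: $!";
exit 0 if $pid == 0;                     # child exits straight away

my %status;
until (exists $status{$pid}) {
    if ($reap_needed) {
        $reap_needed = 0;
        # the riskier work (hash updates, logging) happens out here
        while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
            $status{$kid} = $? >> 8;
        }
    }
    select undef, undef, undef, 0.05;    # stand-in for the real event loop
}
print "child $pid exited with status $status{$pid}\n";
```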

    After Compline,

Re: On handling multiple generations and data between them
by Ovid (Cardinal) on Aug 18, 2002 at 18:00 UTC
    1. If the level 1 child exits while its level 2 children are still running, those orphans are reparented (on most Unixes, to init), and the level 0 process loses any clean way to account for them. Either the level 0 process must track which children are its own, or else the level 1 child must not exit until all of its children are accounted for. (kill may help.)
    2. The level 1 child should use select and sysread to read from all kids in parallel, or else the level 2 children should call getppid and then send a signal when they are ready to write. Using exit signals is not safe, because a child's exit may block indefinitely if its write back to its parent filled a pipe buffer that must be read before more can be written.
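    The select-and-sysread approach might be sketched with IO::Select like this (the source names `alpha`/`beta` and the message text are my own placeholders): each level 2 child gets its own pipe, and the level 1 parent services whichever handle is ready, never blocking on a slow child.

```perl
use strict;
use warnings;
use IO::Select;

my %pending;                              # fileno => read handle
for my $src (qw(alpha beta)) {
    pipe(my $rd, my $wr) or die "pipe: $!";
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                      # level 2 child: write and exit
        close $rd;
        syswrite $wr, "data-from-$src";
        exit 0;
    }
    close $wr;
    $pending{ fileno $rd } = $rd;
}

my $sel = IO::Select->new(values %pending);
my @got;
while (%pending) {
    for my $fh ($sel->can_read) {         # blocks until some child is ready
        my $n = sysread $fh, my $buf, 4096;
        if ($n) {
            push @got, $buf;              # data arrived without blocking on others
        } else {                          # EOF: that child closed its end
            $sel->remove($fh);
            delete $pending{ fileno $fh };
            close $fh;
        }
    }
}
wait() for 1 .. 2;                        # reap both children
print join(",", sort @got), "\n";
```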

    More advice. Either plan on the final result not being portable to Windows, or else use POE. (The latter is not guaranteed to work there, but at least has a chance.) Give careful consideration to race conditions and plan carefully for handling the inevitable errors; this is going to be tricky.



Re: On handling multiple generations and data between them
by PhiRatE (Monk) on Aug 19, 2002 at 00:46 UTC
    Once again side-stepping the directly asked question to discuss the broader issue (I always do this :)

    How is it that an architecture of this complexity was designed by someone unfamiliar with Unix signal semantics?

    SIGCHLD and its kin are covered in any basic Unix text, so my initial concern would be that the designer/implementor lacks understanding of many other issues that may prevent other (as yet unposted) aspects of the architecture from working as designed.

    Further to this, it appears that this massive forking of processes and hierarchy of control is somehow related to efficiency. While I'm the first to stand up and shout for joy at the effectiveness of fork() in unixland, Linux copy-on-write land in particular, the Apache group and others can tell you straight up that it is not the be-all and end-all of efficiency.

    For a start, there is no mention of connection or process pooling in the architecture as described, something the Apache team soon recognised as a necessity under heavy load.

    There is no discussion of statistical optimisation or caching, both of which should be at the core of any serious performance design.

    There is no discussion of why threads were dismissed as an option. Given that threads operate within the same memory space, and have their own synchronisation semantics which may well be portable, it would seem a vastly more effective option for data sharing than processes and pipes.

    In short, as usual, Not Enough Information for help beyond the most basic (which any decent unix text would have told you).

    I also note with amusement the concept of using a fork()ing architecture under windows (Windows process creation is stunningly slow in comparison to unix).

Approved by Dog and Pony
Front-paged by Aristotle