Parallel::ForkManager leaves a zombie process from last child

by makow2 (Initiate)
on May 30, 2013 at 18:06 UTC ( #1036110=perlquestion )
makow2 has asked for the wisdom of the Perl Monks concerning the following question:

Hello. Simple code:
#!/usr/bin/perl
use HTTP::Daemon;
use Parallel::ForkManager;

$daemon = HTTP::Daemon->new(
    LocalPort => 8080,
    LocalAddr => "127.0.0.1",
    Listen    => 64,
    ReuseAddr => 1,
) or die "$!";

$pm = Parallel::ForkManager->new(3);

while (1) {
    $inputcon = $daemon->accept();
    $pm->start and next;        # parent loops on; the child falls through
    do_client_stuff($inputcon);
    $pm->finish();              # child exits here
}

sub do_client_stuff {
    my ($inputcon) = @_;
    $request = $inputcon->get_request;
    print $request . "\n";
    $inputcon->send_error(403);
}
Almost everything is OK, but the last child process leaves a zombie on the system. Running wget just once is enough to create a <defunct> process. Is there any solution for this? I don't want to switch from ForkManager to Proc::Queue or something like that. Thanks. Makowik /Sorry for my English/

Re: Parallel::ForkManager leaves a zombie process from last child
by RichardK (Vicar) on May 30, 2013 at 18:16 UTC

    Have you called  $pm->wait_all_children;?

    It will tidy up the zombies.

    UPDATE: ignore this -- your code never ends, so there's nowhere to call wait_all_children from; I guess your problem must be something else.
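
    For reference, a minimal sketch of the pattern wait_all_children is meant for, assuming a finite list of jobs rather than the OP's infinite accept loop:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(3);

    for my $job (1 .. 10) {
        $pm->start and next;   # parent: next job; child: falls through
        print "child $$ handling job $job\n";
        $pm->finish;           # child exits here
    }

    # The parent blocks here until every child has been reaped,
    # so no zombies survive past this point.
    $pm->wait_all_children;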

      I could add it after the 'while', but it's hard to call it there: the script only exits the 'while' loop when I kill it. The whole job is inside the 'while' loop.
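
      One way around that, as a drop-in replacement for the OP's loop (a sketch, assuming a newer Parallel::ForkManager than existed at the time of this thread; recent versions provide a non-blocking reap_finished_children() method):

      while (1) {
          my $inputcon = $daemon->accept();
          # Non-blocking: reap any children that have already exited,
          # so a finished child does not linger as a zombie until the
          # next start() happens to reap it.
          $pm->reap_finished_children;
          $pm->start and next;
          do_client_stuff($inputcon);
          $pm->finish;
      }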
Re: Parallel::ForkManager leaves a zombie process from last child
by vsespb (Hermit) on May 30, 2013 at 20:15 UTC
    I have never used this module, but I don't see signal handlers in its code. You should probably handle SIGINT (Ctrl-C) yourself. Also, see this related ticket: https://rt.cpan.org/Public/Bug/Display.html?id=35659
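
    A sketch of that idea, assuming the goal is a clean shutdown on Ctrl-C (it reuses the OP's $daemon, $pm, and do_client_stuff):

    my $shutdown = 0;
    $SIG{INT} = sub { $shutdown = 1 };            # SIGINT interrupts accept()

    while (!$shutdown) {
        my $inputcon = $daemon->accept() or next; # undef on interrupted accept
        $pm->start and next;
        do_client_stuff($inputcon);
        $pm->finish;
    }
    $pm->wait_all_children;   # reap everything before exiting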
      If I kill the script and end the job, there's no problem. The problem occurs while the script is running, so I don't care about Ctrl+C. I don't want a zombie for every client connection: new client connection -> child -> do something -> end child process -> wait for another connection. The last child is always a zombie:

       5989 pts/5    S+     0:00  \_ grep test.pl
       5975 pts/4    S+     0:00  \_ /usr/bin/perl ./test.pl
       5987 pts/4    Z+     0:00      \_ [test.pl] <defunct>
Re: Parallel::ForkManager leaves a zombie process from last child
by runrig (Abbot) on May 30, 2013 at 20:39 UTC
    This was already asked and answered on StackOverflow; you just have not accepted the answer. It's bad form to cross-post and not mention it.

    Summary: Parallel::ForkManager reaps its processes when the number of processes has reached the max and you try to start a new one. Since you are in an infinite loop, there is always a 'last' process that remains unreaped until a new one starts (all of them remain unreaped until you start process max+1). If your loop ever finished, you could call wait_all_children() and that last process would be reaped when it finishes. That is how P::FM works. If you want different behaviour, use something else.
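    A small self-contained demonstration of that timing (a sketch; the sleep lengths are arbitrary):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(3);

    for my $i (1 .. 4) {
        # The 4th start() finds the pool full, so P::FM reaps the
        # children that have already exited before forking again.
        $pm->start and next;
        sleep 1;        # child pretends to work
        $pm->finish;
    }

    # Child 4 exits about a second from now and stays a zombie
    # until the wait below (observe with: ps ax | grep defunct).
    sleep 3;
    $pm->wait_all_children;
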

    Also, why is this a problem? You seem to be under the misapprehension that having one zombie process temporarily hanging around is bad. If your program let many such processes accumulate, that might become a problem, but that does not seem to be the case... so, what is the problem?

Re: Parallel::ForkManager leaves a zombie process from last child
by vsespb (Hermit) on May 30, 2013 at 20:54 UTC
    Anyway, this seems to work better:
    while (1) {
        $pm->start and next;                      # fork up to 3 children, then block
        while ($inputcon = $daemon->accept()) {   # each child accepts in its own loop
            do_client_stuff($inputcon);
        }
        $pm->finish();
    }
    It reuses processes, and I don't see zombies.

      Run a process-monitor now and see how many threads are running at 100% CPU utilization ... without me dumpster-diving into the guts of P::FM right now, I think you'll find one, if not many.

        Run a process-monitor now and see how many threads are running at 100% CPU utilization
        Actually none. Everything is OK. CPU is 0% for all processes.

      You find one yet-to-be-reaped process wasteful, but you're OK with having N blocked processes??? I don't think you know what you want!!!

      That said, this is a better solution. You fork *before* a connection comes in, so there's less lag in handling a response, and since you reuse the child process, you don't waste time forking repeatedly. One might wonder why P::FM is used at all, but it causes children that die for whatever reason to be restarted, so you always have a full pool of children waiting!
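
      Putting the pieces together, a sketch of the complete prefork variant (adapted from the code in this thread, with strict/warnings added and the request printed via as_string for readable output):

      #!/usr/bin/perl
      use strict;
      use warnings;
      use HTTP::Daemon;
      use Parallel::ForkManager;

      my $daemon = HTTP::Daemon->new(
          LocalAddr => "127.0.0.1",
          LocalPort => 8080,
          Listen    => 64,
          ReuseAddr => 1,
      ) or die "$!";

      my $pm = Parallel::ForkManager->new(3);

      # Keep a pool of 3 children. start() blocks once the pool is
      # full; if a child dies, it is reaped and replaced on the next
      # pass through the loop.
      while (1) {
          $pm->start and next;
          while (my $inputcon = $daemon->accept()) {
              do_client_stuff($inputcon);
          }
          $pm->finish;
      }

      sub do_client_stuff {
          my ($inputcon) = @_;
          my $request = $inputcon->get_request;
          print $request->as_string, "\n" if $request;
          $inputcon->send_error(403);
          $inputcon->close;
      }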
