
Should I call waitpid when using Parallel::ForkManager to fork in an infinite loop?

by unlinker (Monk)
on Aug 16, 2009 at 08:31 UTC
unlinker has asked for the wisdom of the Perl Monks concerning the following question:

I am using Parallel::ForkManager to fork inside an infinite loop like this:
while (1) {
    $pm->start and next;
    # ... do child tasks here ...
    $pm->finish;
}
Since the loop (ideally) goes on forever, I have no opportunity to make the cleanup call $pm->wait_all_children.
In such a situation, should I write a run_on_finish callback that calls waitpid on the exited process like this:
$pmgr->run_on_finish( sub {
    my ($pid, $exit_code, $ident) = @_;
    waitpid $pid, 0;
} );
Is there a better way to clean up than this? Thank you for your attention.

Replies are listed 'Best First'.
Re: Should I call waitpid when using Parallel::ForkManager to fork in an infinite loop?
by ikegami (Pope) on Aug 16, 2009 at 08:58 UTC

    That's not a problem. start reaps children as well. Consider the following example:

    my $max_children = 5;
    my $pm = Parallel::ForkManager->new($max_children);
    for my $i (0..9) {
        $pm->start and next;
        # ...
        $pm->finish;
    }
    $pm->wait_all_children;
    • The 6th start will reap a child.
    • The 7th start will reap a child.
    • The 8th start will reap a child.
    • The 9th start will reap a child.
    • The 10th start will reap a child.
    • wait_all_children will reap the remaining 5 children.

    With your infinite loop, it's no different. You'll be reaping a child every time you create one (once you've created $max_children children).

    It's a bug to use waitpid in a run_on_finish handler since the process has already been reaped by then.
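The point above can be shown without Parallel::ForkManager at all: once a child has been reaped, a further waitpid on that pid returns -1 ("no such child"). A minimal sketch in plain Perl:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Why waitpid inside run_on_finish is a bug: by the time the callback
# runs, the manager has already reaped the child, and a second waitpid
# on an already-reaped pid returns -1.
my $pid = fork() // die "fork failed: $!";
exit 0 if $pid == 0;                 # child exits immediately

my $first  = waitpid($pid, 0);       # reaps the child: returns $pid
my $second = waitpid($pid, 0);       # already reaped: returns -1
print "first=$first second=$second\n";
```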

      Is that true? I thought that $pm->finish did the reap, and allowed the manager to start another process (assuming that $max_procs processes have been started already)?

      So the OP's code would maintain $max_procs children at all times (because of the while (1)), and as you said, no explicit cleanup or reaping need be added, because of the wonderfulness of Parallel::ForkManager!

      Just a something something...

        I thought that $pm->finish did the reap

        finish is only executed in the child. It can't do any reaping.
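To see why finish runs only in the child, it helps to look at the fork() return values underneath the `$pm->start and next` idiom. A stripped-down model in plain Perl (not Parallel::ForkManager itself):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The parent receives the child's pid from fork() (true, so it skips
# the loop body); the child receives 0 (false) and falls through to
# the body and the exit, which is essentially all finish amounts to.
# finish therefore runs only in the child and cannot reap anything.
for my $i (1 .. 3) {
    my $pid = fork() // die "fork failed: $!";
    next if $pid;                    # parent: "start" was true, loop on
    print "child $i (pid $$) doing its work\n";
    exit 0;                          # the child's "finish"
}
1 while wait() != -1;                # parent reaps, as start()/wait_all_children would
```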

Re: Should I call waitpid when using Parallel::ForkManager to fork in an infinite loop?
by unlinker (Monk) on Aug 16, 2009 at 15:14 UTC
    Thanks both. On some further work, I now have another question (I should really have added it to the original question): while I call finish promptly, the run_on_finish callback seems to be called much later. Is there some way to get Parallel::ForkManager to call run_on_finish as soon as, or very soon after, finish is called? Thank you once again.

      run_on_finish is called in the parent process immediately after reaping the child (to get its exit code). It surprises me that it would be called "much later", since the parent constantly checks whether a child has ended, provided you use the formula in the documentation.

      The only reason it wouldn't be responsive is if you have slow code that executes in the parent. This would delay start getting called, which would delay checking if children have ended. Note that start will actually reap multiple children if more than one have ended.
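The delay described above can be modelled with plain fork/waitpid (not Parallel::ForkManager itself; start_child and $on_finish are names made up for this sketch): if reaping only ever happens inside the start call, then children that exit while the parent is blocked sit as zombies, and their on-finish callbacks fire only when the next start happens.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

my @callbacks_fired;

sub start_child {
    my ($on_finish) = @_;
    # Non-blocking reap of every child that has already exited --
    # the only place reaping happens, as in P::FM's start().
    while ((my $done = waitpid(-1, WNOHANG)) > 0) {
        $on_finish->($done);
    }
    my $pid = fork() // die "fork failed: $!";
    exit 0 if $pid == 0;    # child "works" and finishes immediately
    return $pid;
}

start_child(sub { push @callbacks_fired, $_[0] });  # first child
sleep 1;                    # child has exited, but the parent is "busy"
print "callbacks so far: ", scalar(@callbacks_fired), "\n";
start_child(sub { push @callbacks_fired, $_[0] });  # now the first child is reaped
print "after next start: ", scalar(@callbacks_fired), "\n";
1 while wait() != -1;       # clean up the remaining child
```

Nothing is printed for the first child until start_child is called again, which is the same shape as the batched run_on_finish calls reported below.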

        Allow me to post the complete code to explain what I see happening. The context is a worker application that watches a beanstalk message queue, forks a child to handle the request and immediately goes back to watching the queue. Let me paste the code and then describe what I see:
        #!/usr/bin/perl
        use strict;
        use warnings;
        use Beanstalk::Client;
        use Parallel::ForkManager;

        my $clnt = Beanstalk::Client->new({
            server => '',
            debug  => 1,
        }) || die "Cannot Connect to Queue Manager";

        my $pmgr = Parallel::ForkManager->new(20);

        $pmgr->run_on_finish( sub {
            my ($pid, $exit_code, $ident) = @_;
            if ($clnt->delete($exit_code)) {
                print "Deleted Job $exit_code\n";
            } else {
                print "Error Deleting Job $exit_code: " . $clnt->error . "\n";
            }
        } );

        $pmgr->run_on_start( sub {
            my ($pid, $ident) = @_;
            if ($clnt->bury($ident)) {
                print "Buried Job $ident\n";
            } else {
                print "Error burying Job $ident: " . $clnt->error . "\n";
            }
        } );

        INFINITE_LOOP: while (1) {
            sleep 5;
            my $job = $clnt->reserve();  # blocks until a message is found in queue
            if (! $job) {
                print "Error Reserving - Possible Deadline Approaching: " . $clnt->error . "\n";
                next INFINITE_LOOP;
            }
            print "Reserved Job " . $job->id . "\n";
            $pmgr->start($job->id) and next;
            sleep 120;  # Job work would go here
            $pmgr->finish($job->id);
        }
        exit;
        When I run this code, here is what I find: (1) When the messages are coming at intervals of, say, 10 seconds, run_on_finish is never called. (2) After the 10th or 12th message or so, suddenly (with no pattern that I can discern) run_on_finish is called repeatedly to reap all 10 (or 12) processes that have ended. (3) Again, for about 10-12 messages there is no run_on_finish, and then suddenly all finished tasks are reaped at once. I would appreciate it if you could help me understand what's going on.

Node Type: perlquestion [id://788991]
Approved by ikegami