PerlMonks
Parallel::ForkManager never reaching run_on_finish()

by swafo (Initiate)
on Jul 24, 2013 at 00:02 UTC ( [id://1045971]=perlquestion )

swafo has asked for the wisdom of the Perl Monks concerning the following question:

I am trying to create a Perl daemon that sits and runs non-stop.
I want it to keep 20 child processes running at a time. When one finishes, it should die and a new one should be spawned in its place.

Here is the code I have now:

    ...
    use constant NUM_CHILDREN => 20;

    my $num_children = 0;
    my $child_index  = 0;

    my $pm = Parallel::ForkManager->new(NUM_CHILDREN);

    $pm->run_on_start( sub {
        my ($pid, $ident) = @_;
        write_file(DIR."/daemon_${ident}.pid", $pid);
        $num_children++;
        msg("$ident) Starting $pid");
    });

    $pm->run_on_finish( sub {
        my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $job) = @_;
        $num_children--;
        msg("${ident}) Process completed: @_");
        if ($daemon->{'identity'}{'status'} > 0) {
            startChild($ident);
        }
    });

    # reaper subroutine
    sub REAP {
        while (1) {
            my $id = waitpid(-1, WNOHANG);
            if ($id == -1) { return; }
            if ($id > 0)   { msg("JUST REAPED $id"); }
        }
        $SIG{CHLD} = \&REAP;
    }
    $SIG{CHLD} = \&REAP;

    # start all the child processes
    startChildren();

    sub startChildren {
        for (1..NUM_CHILDREN) {
            startChild();
        }
        $pm->wait_all_child;
        msg('All children are done.');
    }

    sub startChild {
        my $child_id = 'child_'.++$child_index;
        my $job = shift(@{$daemon->{jobs}});
        $job->{process_id} = $child_id;
        $pm->start($child_id) and next;
        process($job);
        $pm->finish(0, $job);
    }
    ...

Things to note:
- the work in process() IS getting completed;
- all processes are getting REAPed;
- but none of them ever reach the $pm->run_on_finish() callback;
- I never get the 'All children are done.' message;
- if I log inside $pm->run_on_wait() I get a BUNCH of entries.

What am I doing wrong that prevents these processes from reaching run_on_finish() ?

If things are indeed running the way they should be, then what do I need to do to make this work the way I want? What I want: start 20 child processes, have each of them die in due course, and immediately have a new one start up, via the startChild() function. Each child may be dealing with a fair amount of data, which is why I want them to die, so as to free up that memory.

I do not want to use a while(1) {} loop in the master section of the code as that tends to spike the CPU at times.
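The pattern being described (N workers, respawn each one as it exits, no busy loop) can be sketched in core Perl alone: a *blocking* wait() in the master sleeps until some child dies, so the master consumes no CPU between exits. This is an illustrative sketch, not the poster's code; NUM_CHILDREN is reduced and process($job) is replaced by an immediate exit so the demo finishes quickly.

```perl
#!/usr/bin/perl
# Sketch: keep NUM_CHILDREN workers alive, respawning each as it exits.
use strict;
use warnings;

use constant NUM_CHILDREN => 4;   # 20 in the original question
my $to_spawn = 8;                 # total jobs for the demo, then stop

sub spawn_child {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # ... real work (process($job)) would go here ...
        exit 0;
    }
    return $pid;
}

my %alive;
$alive{ spawn_child() } = 1 for 1 .. NUM_CHILDREN;
$to_spawn -= NUM_CHILDREN;

# Blocking wait(): the master sleeps here until a child exits,
# so there is no while(1) CPU spike.
while (%alive) {
    my $pid = wait();
    last if $pid == -1;                 # no children left (safety net)
    delete $alive{$pid};
    $alive{ spawn_child() } = 1 if $to_spawn-- > 0;   # immediate replacement
}
print "all jobs done\n";
```

Parallel::ForkManager does essentially this for you internally; the sketch only shows that the respawn-on-exit loop itself needs no polling.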

Any feedback would be most welcome.

Replies are listed 'Best First'.
Re: Parallel::ForkManager never reaching run_on_finish()
by Loops (Curate) on Jul 24, 2013 at 00:33 UTC

    $pm->wait_all_child; should instead be $pm->wait_all_children.

The other, larger problem is that the signal handler for child termination is gumming up the works: Parallel::ForkManager does its own waitpid() bookkeeping, and your REAP handler collects the dead children first, so the module never sees them exit. If you just comment out $SIG{CHLD} = \&REAP; the code works okay and run_on_finish() is properly called.
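Here is a minimal core-Perl demonstration of the conflict (no Parallel::ForkManager involved; the names are illustrative). A SIGCHLD handler that calls waitpid(-1, WNOHANG) reaps the child first, so a later waitpid() for that same pid, the kind of call a module would make, comes back with -1 and the exit status is lost:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(WNOHANG);

my $reaped_by_handler = 0;

# A reaper like the one in the question: it collects every dead
# child the moment SIGCHLD is delivered...
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        $reaped_by_handler++;
    }
};

my $pid = fork();
die "fork failed: $!" unless defined $pid;
exit 7 if $pid == 0;    # child exits with a status the parent wants

# Sleep in short slices until the handler has fired.
select(undef, undef, undef, 0.05) until $reaped_by_handler > 0;

# ...so when "the module" tries to collect the child itself, the
# child is already gone: waitpid returns -1, exit status lost.
my $got = waitpid($pid, 0);
printf "waitpid returned %d (handler reaped %d child)\n",
    $got, $reaped_by_handler;
```

That second waitpid() returning -1 is exactly why run_on_finish() never fires: the module never gets to observe any child's termination.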

      Thank you so much.
      I knew it had to be something small I was missing... that's what you get when you stare at the code too long.

      Again, thank you very much!

Node Type: perlquestion [id://1045971]
Approved by ww