
Feeding processes through one pipe

by Sergeyk (Novice)
on May 07, 2012 at 14:31 UTC
Sergeyk has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to send data to child processes through a single pipe: one parent, one pipe, multiple child processes. Right now this code has the following problems:

1) The parent immediately writes everything to the pipe and then just waits for the child processes to finish.
2) The child processes all begin reading from the pipe right away; some of them hit EOF immediately and die, so the processes end up with very uneven loads.

I decided I need semaphores. Can anyone tell me how to implement semaphores here, or suggest an alternative solution?
#!/usr/bin/perl
$file_name = 'config';
if ( !open CISCOFILE, $file_name ) {
    die "Couldn't open router config file! ($!)";
}
@cisco_list = <CISCOFILE>;

use POSIX qw(:signal_h :errno_h :sys_wait_h);
$SIG{CHLD} = \&REAPER;

sub REAPER {
    my $pid;
    $pid = waitpid(-1, &WNOHANG);
    if ($pid == -1) {
        # no child waiting. Ignore it.
    }
    elsif (WIFEXITED($?)) {
        $exit_value  = $? >> 8;
        $signal_num  = $? & 127;
        $dumped_core = $? & 128;
        # print "$pid dead. exit_value=$exit_value, signal_num=$signal_num, dumped_core=$dumped_core\n";
        $kids{"$pid"} = "$pid dead. exit_value=$exit_value, signal_num=$signal_num, dumped_core=$dumped_core\n";
    }
    else {
        print "false warn $pid.\n";
    }
    $SIG{CHLD} = \&REAPER;
}

use IO::Handle;
my ($reader, $writer);
pipe $reader, $writer;
$writer->autoflush(1);

%kids = ();
$SIG{INT} = sub { die "$$ dying\n" };

for (1 .. 10) {
    unless ($child = fork) {
        die "cannot fork: $!" unless defined $child;
        squabble();
        exit;
    }
    $kids{"$child"} = "$child start \n";
}

@key_arr = keys(%kids);
foreach $string (@key_arr) {
    print $kids{"$string"};
}

close $reader;
foreach $string (@cisco_list) {
    print $writer "$string";
}
close $writer;

#-----Waiting for child processes----
$flag = 0;
while ($flag == 0) {
    print "\n--------------------\n";
    sleep 5;
    $flag = 1;
    @key_arr = keys(%kids);
    foreach $string (@key_arr) {
        if ($kids{"$string"} =~ /start/) { $flag = 0; }
        print $kids{"$string"};
    }
}

#------Child process function-------
sub squabble {
    close $writer;
    open(SUBINTFILE, ">", "child $$.txt")
        or die "Can't open file for writing $!";
    select SUBINTFILE;
    while ($line = <$reader>) {
        chomp($line);
        print "$line\n";
        sleep 1;
    }
    close $reader;
}
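To directly answer the semaphore question: a file lock can serve as a crude cross-process semaphore, so that only one child reads from the shared pipe at a time. Below is a minimal sketch under my own assumptions (the lock-file name pipe.lock, three children, and nine dummy jobs are all stand-ins, not from the original code). Two details matter: each child must open the lock file itself after the fork, because flock locks belong to the open file description and an inherited handle would not give mutual exclusion; and the read must be unbuffered, because a buffered <$reader> would let one child's stdio buffer swallow lines meant for its siblings.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Unbuffered line read: byte-at-a-time sysread, so a child never pulls
# more than one line out of the shared pipe per turn.
sub read_line {
    my ($fh) = @_;
    my $line = '';
    while (sysread($fh, my $c, 1)) {
        $line .= $c;
        last if $c eq "\n";
    }
    return length($line) ? $line : undef;
}

pipe(my $reader, my $writer) or die "pipe: $!";

my @pids;
for (1 .. 3) {                              # 3 children: my own choice
    my $pid = fork;
    die "cannot fork: $!" unless defined $pid;
    if ($pid == 0) {                        # --- child ---
        close $writer;                      # children only read
        # Open the lock file ourselves: flock locks live on the open
        # file description, so an inherited handle would not exclude.
        open my $lock, '>>', 'pipe.lock' or die "lock: $!";
        while (1) {
            flock($lock, LOCK_EX);          # acquire the "semaphore"
            my $line = read_line($reader);
            flock($lock, LOCK_UN);          # release it
            last unless defined $line;      # real EOF: writer is closed
            chomp $line;
            print "child $$ got: $line\n";
        }
        exit 0;
    }
    push @pids, $pid;
}

close $reader;                              # parent only writes
print {$writer} "job $_\n" for 1 .. 9;      # stand-in for the config lines
close $writer;                              # drained pipe now yields EOF
waitpid($_, 0) for @pids;
unlink 'pipe.lock';
```

Because every child blocks on the flock before touching the pipe, lines are handed out one at a time and no child sees a premature EOF; EOF only arrives once the parent has closed the write end and the pipe is drained.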

Replies are listed 'Best First'.
Re: Feeding processes through one pipe
by kennethk (Abbot) on May 07, 2012 at 15:14 UTC

    You could roll this all yourself, using flock to control semaphore access across threads, but this is a problem that has been solved already. I'd recommend checking out threads and Thread::Semaphore if you want to roll your own job queue. Alternatively, there are several available, including TheSchwartz and POE-based solutions.

    #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.
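For the threads + Thread::Semaphore route mentioned above, a minimal sketch follows; it requires a perl built with thread support, and the use of Thread::Queue as the job feed, along with the worker/job counts, are my own additions for illustration:

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Semaphore;
use Thread::Queue;      # my addition: a queue is a natural job feed here

my $sem = Thread::Semaphore->new(1);   # binary semaphore for the critical section
my $queue = Thread::Queue->new;
my $handled : shared = 0;              # shared counter, guarded by the semaphore

$queue->enqueue("line $_") for 1 .. 9; # stand-in for the config lines
$queue->enqueue(undef) for 1 .. 3;     # one end-marker per worker

my @workers = map {
    threads->create(sub {
        while (defined(my $job = $queue->dequeue)) {
            $sem->down;                # only one worker in here at a time
            $handled++;
            print "worker ", threads->tid, " handling $job\n";
            $sem->up;                  # let the next worker in
        }
    });
} 1 .. 3;
$_->join for @workers;
print "handled $handled jobs\n";
```

The queue replaces the raw pipe, so there is no EOF race at all: each dequeue atomically hands one job to one worker, and the undef markers shut the workers down cleanly.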

Re: Feeding processes through one pipe
by sundialsvc4 (Abbot) on May 07, 2012 at 20:20 UTC

    Unfortunately for you, you dropped straight into “implementation mode” on this particular project ... creating a “solution” in terms of Unix pipes and what-not, and then encountering a problem and then immediately setting-out to debug it ... all without first checking yourself and asking, “Wait a minute, hasn’t this whole thing surely been done before?   Am I really, like, the first human on this planet to have tried to do this?”

    Had you done so, alas, you would have very quickly discovered how very thoroughly the answers were:   “Yes, and No.”

    In the purely abstract sense, your actual requirement consists of sending “requests” to a pool of “worker processes,” such that the exact methodology for doing so is almost entirely unimportant to your requirement “so long as it works.”   You therefore now find yourself, I am sorry to say, in the unenviable (but very common) position of having attempted to re-invent not only one but several dozen possible wheels.

    Actum Ne Agas:   Do Not Do A Thing Already Done.

    It is a very tough “lesson learned.”   And I surely would soften the blow if I could.   Trust me, if you can, that I do not mean to shame you.

    If you start or end anything with Perl, then start and end here:   Start with the assumption that anything you are now setting out to do, has already been done, and that your true objective therefore is to discover it.   (And if this notion turns your entire perspective topsy-turvy, then (lo!!) I have just returned to you three of your work-days and all of your weekends.)

Re: Feeding processes through one pipe
by Anonymous Monk on May 07, 2012 at 16:09 UTC
Re: Feeding processes through one pipe
by seefurst (Initiate) on May 07, 2012 at 21:25 UTC
    It's funny. I was going to ask a question about pipes as well. For this, however, I would recommend Parallel::ForkManager. Great stuff for controlling forks.
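A minimal sketch of that approach follows. Parallel::ForkManager is a CPAN module, not core perl, and the job lines and child count here are my own stand-ins:

```perl
use strict;
use warnings;
use Parallel::ForkManager;   # CPAN module, not in core perl

my @cisco_list = map { "line $_\n" } 1 .. 9;   # stand-in for the config file
my $pm = Parallel::ForkManager->new(3);        # at most 3 children at once

for my $line (@cisco_list) {
    $pm->start and next;     # forks; the parent continues the loop
    chomp(my $job = $line);  # everything below runs in the child
    print "child $$ handling: $job\n";
    $pm->finish;             # child exits here
}
$pm->wait_all_children;
```

Note that this sidesteps the shared pipe entirely: each child receives its job at fork time, so there is no read race to arbitrate and no semaphore needed.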
Re: Feeding processes through one pipe
by Sergeyk (Novice) on May 08, 2012 at 04:35 UTC
    Thanks for the answers. Perhaps my mistake was trying to solve this problem using examples from the Perl Cookbook :-). I found all the parts for my home-made wheel in that book. For my code, I found a solution using IPC::Shareable: share an array among the processes, with shlock/shunlock to control access to it. Which of these modules is the most universal without much low-level work: TheSchwartz or POE?
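For reference, a minimal sketch of that IPC::Shareable arrangement; it requires the CPAN module and working SysV shared memory, and the glue key 'jobs' plus the child/job counts are my own illustrative choices:

```perl
use strict;
use warnings;
use IPC::Shareable;    # CPAN module; backed by SysV shared memory

# Share the job list among processes; shlock/shunlock guard each pop.
tie my @jobs, 'IPC::Shareable', 'jobs', { create => 1, destroy => 1 }
    or die "tie failed: $!";
push @jobs, map { "line $_" } 1 .. 9;     # stand-in for the config lines

my @pids;
for (1 .. 3) {
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {                      # child: the tie is inherited
        while (1) {
            (tied @jobs)->shlock;         # exclusive lock on the segment
            my $job = shift @jobs;        # take one job, atomically
            (tied @jobs)->shunlock;
            last unless defined $job;     # list empty: we're done
            print "child $$: $job\n";
        }
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;

(tied @jobs)->shlock;
my $left = scalar @jobs;                  # should be 0 once children finish
(tied @jobs)->shunlock;
print "jobs left: $left\n";
```

Each shift happens under the lock, so every line goes to exactly one child and the list simply runs dry; there is no EOF to race on, which fixes the original problem directly.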

Node Type: perlquestion [id://969269]
Front-paged by Corion