http://www.perlmonks.org?node_id=857447

cmv has asked for the wisdom of the Perl Monks concerning the following question:

Monks-

I apparently don't understand what is going on with pipes here in Unix; please help.

The intent is to create two child processes from a Perl script. The script writes to the first child process, which passes the data (via a named pipe) to the second child process, which sends it to stdout. The script then reads the data from child 2 and prints it.

use strict;
use warnings;
use FileHandle;

# Create named pipe...
my $pipe = "/tmp/mypipe";
system("ksh -c 'if [ ! -p $pipe ]; then mkfifo $pipe; fi'");

# Setup input process...
open(IN, '|-', "cat - > $pipe") || die "Can't open IN process: $!";
IN->autoflush(1);

# Setup output process...
open(OUT, '-|', "tail -f $pipe") || die "Can't open OUT process: $!";
OUT->autoflush(1);

# Send input...
my $input = '12345678901234567890123456789012345678901234567890';
print STDERR "Sending input...\n";
print IN $input;

# Read output...
print "Reading...\n";
my $line;
while ($line = <OUT>) { print $line }
I've verified the pieces of this code work, but when I put it all together, I get nothing. What am I doing wrong?

Thanks

-Craig

Re: Pipes: Why does this fail?
by JavaFan (Canon) on Aug 26, 2010 at 14:06 UTC
    Two things: first, don't use tail -f on the pipe. Just open it as if it were a file. Second, as long as you don't actually write a newline to the pipe, the <OUT> will block.

    Note that with those changes, your program will still hang (after printing out 12345678901234567890123456789012345678901234567890) as you never write more to the pipe, and the <OUT> will wait till you do.
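
    For reference, here is a minimal sketch of those two changes applied to the original script (it assumes the /tmp/mypipe FIFO already exists):

    use strict;
    use warnings;
    use IO::Handle;

    my $pipe = "/tmp/mypipe";

    # Writer child, as before...
    open(my $in, '|-', "cat - > $pipe") or die "Can't open IN process: $!";
    $in->autoflush(1);

    # Open the FIFO directly instead of running tail -f on it...
    open(my $out, '<', $pipe) or die "Can't open $pipe for reading: $!";

    # ...and terminate the record with a newline so <$out> can return it.
    print $in "12345678901234567890123456789012345678901234567890\n";

    print while <$out>;   # prints the line, then blocks waiting for more data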

      JavaFan++

      Thanks for the quick help. I've modified the test script (see below) and it now works as expected.

      One question though. Using the tail -f line below causes the script to fail; reverting to the cat line makes it work. I don't understand why.

      Thanks

      -Craig

      use strict;
      use warnings;
      use FileHandle;

      # Create named pipe...
      my $pipe = "/tmp/mypipe";
      system("ksh -c 'if [ ! -p $pipe ]; then mkfifo $pipe; fi'");

      # Setup input process...
      open(IN, '|-', "cat - > $pipe") || die "Can't open IN process: $!";
      IN->autoflush(1);

      # Setup output process...
      #open(OUT, '-|', "tail -f $pipe") || die "Can't open out process: $!";
      open(OUT, '-|', "cat $pipe") || die "Can't open out process: $!";
      OUT->autoflush(1);

      my $i = 0;
      print STDERR "Sending input...\n";
      print IN "This is message $i\n";

      # Read output...
      print "Reading...\n";
      my $line;
      while ($line = <OUT>) {
          $i++;
          print $line;
          print IN "This is message $i\n";
          sleep 1;
      }
        My guess would be that 'tail -f' first tries to seek to the end - but a named pipe isn't seekable. Why it doesn't die, I do not know.
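
        A quick way to check that guess (just a sketch; it assumes the /tmp/mypipe FIFO from the scripts above exists):

        use strict;
        use warnings;
        use Fcntl qw(O_RDONLY O_NONBLOCK SEEK_END);

        # Non-blocking open, so this doesn't wait for a writer on the FIFO...
        sysopen(my $fh, '/tmp/mypipe', O_RDONLY | O_NONBLOCK)
            or die "Can't open FIFO: $!";

        # Seeking on a FIFO fails with ESPIPE ("Illegal seek")...
        seek($fh, 0, SEEK_END) or print "seek failed: $!\n";
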
Re: Pipes: Why does this fail?
by ikegami (Patriarch) on Aug 26, 2010 at 16:34 UTC
    Your variable names leave much to be desired: you're outputting to IN and inputting from OUT. And why use globals instead of lexicals? Finally, flushing an input handle won't do much.
    use strict;
    use warnings;
    use IO::File;

    my $pipe = "/tmp/mypipe";
    system('mkfifo', '--', $pipe) if !-p $pipe;

    open(my $to_child, '|-', "cat > " . text_to_shell_lit($pipe))
        or die "Can't create process 1: $!";
    $to_child->autoflush(1);

    open(my $fr_child, '-|', "cat < " . text_to_shell_lit($pipe))
        or die "Can't create process 2: $!";

    my $input = '12345678901234567890123456789012345678901234567890';
    print($to_child $input);
    close($to_child);

    while (<$fr_child>) {
        print;
    }

    Note that adding the close solves your problem, since it signals eof to the first cat, which causes it to close its output pipe, which signals eof to the second cat, which causes it to close its output pipe, which signals eof to your <>, which causes it to return.

    I hope that cat is only used as an example!
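
    If cat really is just a placeholder, the FIFO can also be read and written directly from Perl. A sketch (assuming /tmp/mypipe exists; the writer runs in a forked child, because each open on a FIFO blocks until the other end is opened):

    use strict;
    use warnings;

    my $pipe = "/tmp/mypipe";

    defined(my $pid = fork()) or die "Can't fork: $!";
    if ($pid == 0) {                       # child: writer
        open(my $w, '>', $pipe) or die "Can't open $pipe for writing: $!";
        print $w "This is message $_\n" for 0 .. 4;
        close($w);                         # signals eof to the reader
        exit 0;
    }

    # parent: reader
    open(my $r, '<', $pipe) or die "Can't open $pipe for reading: $!";
    print while <$r>;
    waitpid($pid, 0);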

Re: Pipes: Why does this fail?
by kennethk (Abbot) on Aug 26, 2010 at 14:19 UTC