Re^2: STDOUT redirects and IPC::Open3

by salva (Monsignor)
on Oct 19, 2011 at 16:53 UTC


in reply to Re: STDOUT redirects and IPC::Open3
in thread STDOUT redirects and IPC::Open3

    open STDOUT, ">&=1";
    xopen \*STDOUT, ">&" . fileno $kid_wtr;

That is also very buggy:

File descriptor 1 may be closed, which would cause open STDOUT, ">&=1" to fail (not an uncommon situation; mod_perl2, for instance, closes it).
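
A minimal standalone sketch of this first point (my own illustration, not code from the node being discussed):

    # simulate an environment that has already closed fd 1 (e.g. mod_perl2)
    close STDOUT;
    if (open STDOUT, ">&=1") {
        print STDERR "re-attaching STDOUT to fd 1 succeeded\n";
    }
    else {
        # the failure described above
        print STDERR "re-attaching STDOUT to fd 1 failed: $!\n";
    }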

If file descriptor 1 is not closed and it is not STDOUT, then it is probably attached to some other, unrelated file handle, say FOO. The xclose call will affect both STDOUT and FOO, as they share the same file descriptor, breaking any code that uses FOO in the parent process.

IMO, the right solution would be to change the system 1, $cmd hack so that it attaches to the child process whatever file handles are currently at STDIN, STDOUT and STDERR (or NUL: when they are closed), irrespective of their file descriptor numbers. I think that can be done on Windows with the CreateProcess function, passing the handles inside the STARTUPINFO structure argument.
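
A rough sketch of just the handle-gathering half of that idea (the CreateProcess/STARTUPINFO call itself is left out, and the NUL handling shown is only an assumption about how "NUL: when closed" could look):

    # collect whatever file handles currently sit behind STDIN, STDOUT and
    # STDERR, without caring which file descriptor numbers they occupy
    my @std = (\*STDIN, \*STDOUT, \*STDERR);
    my @child_fh;
    for my $slot (0 .. 2) {
        my $fh = $std[$slot];
        if (defined fileno $fh) {
            $child_fh[$slot] = $fh;            # use the handle as it is now
        }
        else {
            # the handle is closed: substitute the NUL: device
            open my $nul, ($slot == 0 ? '<' : '>'), 'NUL'
                or die "can't open NUL: $!";
            $child_fh[$slot] = $nul;
        }
    }
    # @child_fh would then have to be converted to native handles and placed
    # in the hStdInput/hStdOutput/hStdError fields of STARTUPINFO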


Re^3: STDOUT redirects and IPC::Open3
by Eliya (Vicar) on Oct 19, 2011 at 22:33 UTC
    File descriptor 1 may be closed, which would cause open STDOUT, ">&=1" to fail

    I'm aware of that — which is why I mentioned that you shouldn't check for errors here, but let the call just fail silently. I also mentioned that if file descriptor 1 is closed, the dup behind the subsequent open will pick the then free file descriptor 1 anyway, because it's the lowest available (this is the way dup works — and this is also why you need "&" and not "&=" in that open statement).

    The idea behind the open STDOUT, ">&=1" statement is simply to make sure STDOUT is associated with file descriptor 1 (to trigger the "special" behavior of open I mentioned, which results in dup'ing the descriptor of the child's side of the pipe to descriptor 1).  This will happen either way, when the call succeeds or when it fails.
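
    A tiny standalone illustration of that dup behavior (my own sketch, separate from the complete example below):

        close STDOUT;                             # frees fd 1
        open my $dup, ">&", \*STDERR or die $!;   # plain dup of STDERR's fd 2
        printf STDERR "dup landed on fd %d\n", fileno($dup);   # prints 1, the lowest free fd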

    If file descriptor 1 is not closed and it is not STDOUT, then it is probably attached to some other, unrelated file handle, say FOO. The xclose call will affect both STDOUT and FOO, as they share the same file descriptor, breaking any code that uses FOO in the parent process.

    Not sure what xclose you're referring to, and why you're worried about breaking a file descriptor in the parent.  Closing a file descriptor in the child does not render the parent's descriptor dysfunctional (actually, it's a pretty common and healthy practice to close unneeded dups of file descriptors after a fork).

    Try this and you'll see what I mean:

    #!/usr/bin/perl -w
    use strict;

    close STDOUT;

    open FOO, ">", "/dev/tty" or die $!;
    printf STDERR "fileno(FOO): %d\n", fileno(FOO);

    open STDOUT, ">", "dummyfile" or die $!;

    pipe my $rdr, my $wtr;
    printf STDERR "fileno(pipe-r): %d\n", fileno($rdr);
    printf STDERR "fileno(pipe-w): %d\n", fileno($wtr);

    if (fork) {
        close $wtr;
        my $r = <$rdr>;
        chomp($r);
        print STDERR "r = <<$r>>\n";
        print FOO "FOO still working\n";
    } else {   # child
        close $rdr;
        printf STDERR "[child] fileno(STDOUT) initially: %d\n", fileno(STDOUT);

        # comment this line out (and edit "&=" below), and you'll see echo
        # will no longer write to the pipe
        open STDOUT, ">&=1";
        printf STDERR "[child] fileno(STDOUT) after &=1: %d\n", fileno(STDOUT);

        open STDOUT, ">&".fileno($wtr) or die $!;
        printf STDERR "[child] fileno(STDOUT) finally: %d\n", fileno(STDOUT);

        exec "/bin/echo", "foobar";
    }

    __END__
    fileno(FOO): 1
    fileno(pipe-r): 4
    fileno(pipe-w): 6
    [child] fileno(STDOUT) initially: 3
    [child] fileno(STDOUT) after &=1: 1
    [child] fileno(STDOUT) finally: 1
    r = <<foobar>>
    FOO still working

    The general issue is that the child's side of the pipe must be accessible via file descriptor 1 before the exec, otherwise no normal exec'ed program (echo here) will send its standard output to it.

    I've strace'd the system calls Perl issues under the hood in the various cases, and I can't see any problem with what's happening due to the extra open STDOUT, ">&=1" statement.

    (Note that I'm addressing the Unix side of the issue only.)

      Note that I'm addressing the Unix side of the issue only

      Oops, for some reason, I got the impression the thread was about Windows.

      I can still see one issue on Unix when passing '-' as the command: in that case, the FOO handle (the one with fileno(FOO) == 1) may still be in use by the Perl code running in the child. That can easily be worked around, though, by checking that the command is not '-'.

      Anyway, if you just want a Unix solution, why not simply use POSIX::dup2?
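
      Something along these lines, i.e. a sketch only, assuming $kid_wtr is the child's end of the pipe (as in the snippet quoted at the top) and that this runs in the child after the fork:

          use POSIX ();
          # put the pipe's write end directly onto fd 1, whatever fd it had before
          POSIX::dup2(fileno($kid_wtr), 1) or die "dup2: $!";
          # re-attach the Perl-level STDOUT handle to fd 1 as well
          open STDOUT, ">&=1" or die "fdopen STDOUT: $!";
          exec "/bin/echo", "foobar" or die "exec: $!";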
