Re^3: STDOUT redirects and IPC::Open3
by Eliya (Vicar)
on Oct 19, 2011 at 22:33 UTC
> File descriptor 1 may be closed, which would cause open STDOUT, ">&=1" to fail
I'm aware of that, which is why I mentioned that you shouldn't check for errors here, but just let the call fail silently. I also mentioned that if file descriptor 1 is closed, the dup behind the subsequent open will pick the then-free file descriptor 1 anyway, because it's the lowest available (this is how dup works, and it's also why you need "&" rather than "&=" in that open statement).
The idea behind the open STDOUT, ">&=1" statement is simply to make sure STDOUT is associated with file descriptor 1 (to trigger the "special" behavior of open I mentioned, which results in dup'ing the descriptor of the child's side of the pipe to descriptor 1). This happens either way, whether the call succeeds or fails.
> If file descriptor 1 is not closed and it is not STDOUT, then it is probably attached to some other unrelated file handle, say FOO. The xclose call will affect both STDOUT and FOO, as they share the same file descriptor, breaking any code using FOO in the parent process.
Not sure what xclose you're referring to, and why you're worried about breaking a file descriptor in the parent. Closing a file descriptor in the child does not render the parent's descriptor dysfunctional (actually, it's a pretty common and healthy practice to close unneeded dups of file descriptors after a fork).
Try this and you'll see what I mean:
The general issue is that the child's side of the pipe must be accessible via file descriptor 1 before the exec, otherwise no normal exec'ed program (echo here) will send its standard output to it.
I've strace'd the system calls Perl issues under the hood in the various cases, and I can't see any problem with what's happening due to the extra open STDOUT, ">&=1" statement.
(Note that I'm addressing the Unix side of the issue only.)