Re: How to find all open STDERR and STDOUT dups?
by tlm (Prior) on Mar 31, 2009 at 20:06 UTC ( [id://754519] )
I see from the replies I've gotten so far that I did not explain the problem well enough, so here's a second attempt. The application is a CGI script that is meant to perform a lengthy calculation. In its normal operation, when first accessed, it forks a child (call it C) that will perform the calculation and cache the result. The parent (call it P) just returns a short response that includes a job id, and exits immediately. In pseudo-perl, the logic looks like this:
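The pseudo-perl the post refers to did not survive; what follows is a hypothetical reconstruction of the logic as described, not the original code. The job-id scheme, cache path, and placeholder calculation are all invented for illustration.

```perl
use strict;
use warnings;

# Illustrative sketch only: job id, cache location, and the
# lengthy_calculation() placeholder are assumptions.
my $job_id = time() . ".$$";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {
    # Child (C): perform the lengthy calculation and cache the result.
    my $result = lengthy_calculation();
    open my $cache, '>', "/tmp/jobs/$job_id" or die "cache: $!";
    print {$cache} $result;
    close $cache;
    exit 0;
}

# Parent (P): return a short response that includes the job id, then exit.
print "Content-type: text/plain\n\n";
print "job_id=$job_id\n";
exit 0;

sub lengthy_calculation { sleep 1; return "done\n" }    # stand-in
```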
Upon receiving the initial response, the client can use the included job id to poll the server periodically for the job's percent completion, and eventually to retrieve the finished results. This lets the client give the user some feedback on progress.

I noticed recently that the client was freezing after sending the request, displaying no indication of progress; after a long stretch of apparent inactivity, it would display the finished results all at once. The immediate cause was that the parent (P) was lingering as a zombie after exiting (with Apache as its parent), which kept the connection alive until the child (C) finished. After a lot of trial and error, I narrowed the problem down to a few open() statements in Parse::RecDescent. If I comment out these statements, the code works fine again: P's process terminates immediately after it exits, and the client receives the job id right away, soon enough to be useful.

I want to avoid another lengthy debugging ordeal in the future, should I ever use a module that leads to a similar case of leftover filehandles. What I need is a way to implement close_all_dups_to_stdout_and_stderr. Without it, the defunct P lingers as a zombie until C terminates, which defeats the purpose of forking the child in the first place. It is this lingering P that causes the HTTP connection to remain open far too long.

Now, L~R, the docs for fileno do in fact suggest that it would come in handy here, but, to my surprise, it does not work as advertised. Below is the line in the original module's code, followed immediately by two debugging lines that I've added:

The output from the last two lines is:

I'm not sure how to reconcile this with the docs for fileno. BTW, if anyone cares to verify all of this, the sticking points are in Parse::RecDescent, v. 1.94, lines 2847, 2865, and 2876.
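One detail from perldoc -f open may bear on the fileno surprise, though without the elided lines above this is only a guess: Perl has two dup-style open modes, and they behave differently under fileno. The following standalone sketch (not the Parse::RecDescent code) shows the difference:

```perl
use strict;
use warnings;

# '>&' calls dup(2): a new descriptor number sharing the same open file.
open( my $dup,   '>&',  \*STDOUT ) or die "dup: $!";
# '>&=' makes an alias: the same descriptor number as STDOUT.
open( my $alias, '>&=', \*STDOUT ) or die "alias: $!";

print STDERR "STDOUT: ", fileno(STDOUT), "\n";   # typically 1
print STDERR "dup:    ", fileno($dup),   "\n";   # a different fd number
print STDERR "alias:  ", fileno($alias), "\n";   # same number as STDOUT
```

The upshot is that comparing fileno values only identifies '>&='-style aliases; a '>&'-style dup has a different descriptor number even though it holds the same open file, so a fileno-equality test would miss it.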
But even if fileno behaved as advertised, to implement close_all_dups_to_stdout_and_stderr in a general way I need a way to find all the open filehandles, so that I can test them with fileno against STDERR and STDOUT. This is what I'd like to figure out how to do cleanly.

almut, I had the same idea of using lsof, but, here again, the results surprised me. I tried the following (somewhat brutal) experiment:
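The experiment itself is elided above; as a point of comparison, here is one way the descriptor enumeration could be sketched. This is a Linux-specific assumption (scanning /proc/$$/fd rather than parsing lsof output), and matching on device/inode is a heuristic: a dup shares both with the original handle, but so would a second, independent open of the same file.

```perl
use strict;
use warnings;
use POSIX ();

# Identify the files behind the standard output handles.
my @out = stat(\*STDOUT);
my @err = stat(\*STDERR);

# Collect this process's open descriptor numbers (Linux-specific).
opendir( my $dh, "/proc/$$/fd" ) or die "opendir: $!";
my @fds = grep { /^\d+$/ } readdir $dh;
closedir $dh;

for my $fd (@fds) {
    next if $fd <= 2;                         # leave the standard trio alone
    my @st = stat("/proc/$$/fd/$fd") or next; # fd may already be gone
    # A dup shares device and inode with the handle it was duped from.
    if (   ( $st[0] == $out[0] && $st[1] == $out[1] )
        || ( $st[0] == $err[0] && $st[1] == $err[1] ) )
    {
        POSIX::close($fd);                    # close the dup by number
    }
}
```

Closing by descriptor number with POSIX::close sidesteps the problem of locating the Perl-level handles that own those descriptors, at the cost of pulling the rug out from under any such handle.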
Bottom line: the problematic handle remains open even after this. And so does STDOUT, for that matter. I'm still scratching my head about this one as well. Cluebricks welcome.

BTW, moving the loading of PRD to after the fork did not help (and would be a very inconvenient solution in any case).

I hope this clarifies the situation.

the lowliest monk
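For contrast, the conventional daemon-style detach in the child is worth noting, though it is a common idiom rather than anything the post adopts, and it highlights the problem rather than solving it: it re-points only the three standard handles, so a module's private '>&'-style dups of the original descriptors can still hold the client socket open.

```perl
use strict;
use warnings;
use POSIX qw(setsid);

# Common post-fork detach idiom (not the post's fix): in the child,
# re-point the standard handles at /dev/null and leave the session.
sub detach_child {
    open STDIN,  '<', '/dev/null' or die "STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "STDOUT: $!";
    open STDERR, '>', '/dev/null' or die "STDERR: $!";
    setsid() or die "setsid: $!";   # detach from the controlling session
}
```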
In Section: Seekers of Perl Wisdom