in reply to Re: Untangling Log Files
in thread Untangling Log Files
open $fh{$proc}, '>', "$proc.log" or die $!;
Uhm... I don't care for that. It's very likely that there will be more pids in each file than the number of open file descriptors your resource limits allow... and if an open fails you die, probably half-done and with no way to pick up where you left off.
I don't have an immediate fix, though someone else suggested closing one handle at random, which would, I guess, work. (So long as you changed your open to '>>' and remembered to delete the closed handle from your hash.)
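Something like this is what I have in mind. Untested, and note I'm guessing that each log line starts with the pid; the OP didn't show the actual format, so adjust the regex to taste.

# Random-eviction filehandle cache (untested sketch;
# the leading-pid line format is an assumption, not from the OP).
use strict;
use warnings;

my %fh;
my $max_open = 200;    # stay well under the descriptor limit

while (my $line = <>) {
    my ($pid) = $line =~ /^(\d+)/ or next;

    unless ($fh{$pid}) {
        # At the cap? Close one handle at random before opening another.
        if (keys %fh >= $max_open) {
            my $victim = (keys %fh)[rand keys %fh];
            close $fh{$victim};
            delete $fh{$victim};   # forget it, or we'd print to a closed handle
        }
        # '>>' so a reopened pid appends rather than clobbering its file
        open $fh{$pid}, '>>', "$pid.log" or die "can't open $pid.log: $!";
    }
    print { $fh{$pid} } $line;
}
close $_ for values %fh;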
Personally, I'd probably take a less elegant (call it more braindead) approach, as this seems to be a one-off thing anyway, and just close the filehandle and open a new one whenever the pid changed from the previous record.
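That version would look something like the below (same caveats: untested, and the leading-pid format is assumed). It only ever holds one handle open, at the cost of a close/open every time consecutive records come from different pids.

# One handle at a time: close and reopen on every pid change.
use strict;
use warnings;

my ($last_pid, $out);

while (my $line = <>) {
    my ($pid) = $line =~ /^(\d+)/ or next;

    if (!defined $last_pid or $pid ne $last_pid) {
        close $out if $out;
        # '>>' because the same pid can show up again later in the input
        open $out, '>>', "$pid.log" or die "can't open $pid.log: $!";
        $last_pid = $pid;
    }
    print $out $line;
}
close $out if $out;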
Update: Well, I just re-read the OP and now I think I may have misinterpreted the bit about "30 processes" the first time around. If there are only 30 pids in the log files, then I like your approach just fine and my criticisms are all moot.
"My two cents aren't worth a dime.";