Re: Capturing STDOUT and STDERR of system command, with pure perl?
by chrism01 (Friar) on Aug 24, 2007 at 14:41 UTC
Re: Capturing STDOUT and STDERR of system command, with pure perl?
by xdg (Monsignor) on Aug 24, 2007 at 15:31 UTC
use IO::CaptureOutput qw/capture/;

my ($stdout, $stderr);
capture sub {
    system($command);
} => \$stdout, \$stderr;
There is also a capture_exec command, but it does some sort of shell escaping which you may not want.
-xdg
Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
Re: Capturing STDOUT and STDERR of system command, with pure perl?
by zentara (Archbishop) on Aug 24, 2007 at 14:46 UTC
If you are sticking to *nix type systems, IPC::Open3 should work. Here is a general example using a shell, but you could open a new IPC::Open3 for each command. Of course, this approach ties up file descriptors, but so does IPC::Run.
#!/usr/bin/perl
use warnings;
use strict;
use IPC::Open3;
use IO::Select;

my $pid = open3( \*WRITE, \*READ, \*ERROR, "/bin/bash" );

my $sel = IO::Select->new();
$sel->add( \*READ );
$sel->add( \*ERROR );

while (1) {
    print "Enter command\n";
    chomp( my $query = <STDIN> );

    # send query to bash
    print WRITE "$query\n";

    foreach my $h ( $sel->can_read ) {
        my $buf = '';
        if ( $h eq \*ERROR ) {
            sysread( ERROR, $buf, 4096 );
            if ($buf) { print "ERROR-> $buf\n" }
        }
        else {
            sysread( READ, $buf, 4096 );
            if ($buf) { print "$query = $buf\n" }
        }
    }
}

waitpid( $pid, 1 );    # zombie prevention
Re: Capturing STDOUT and STDERR of system command, with pure perl? (files)
by tye (Sage) on Aug 24, 2007 at 16:32 UTC
"the usual solution would be"
I'm a bit surprised that nobody called you on rejecting the obvious solution by simple fiat. Why is qx($command 2>&1) unacceptable?
If you run into problems with IPC::Open2, the most portable solution is to redirect your own STDOUT and STDERR to file(s) before using system. If you end up wanting to capture both STDOUT and STDERR but separately, then sending at least one of them to a file is much better than trying to get Perl to drain two file handles (which can easily result in deadlock).
If you need to restore your original STDOUT and STDERR after the command's output has been gathered, then use the example code found in open's documentation for saving and then restoring them.
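A minimal sketch of that approach, using only core modules (File::Temp for the scratch file is an assumption; any writable path would do), following the dup/restore example in perldoc -f open:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ( undef, $tmpfile ) = tempfile( UNLINK => 1 );

# Save the real STDOUT/STDERR, then point both at the temp file
open my $oldout, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
open my $olderr, '>&', \*STDERR or die "Can't dup STDERR: $!";
open STDOUT, '>',  $tmpfile     or die "Can't redirect STDOUT: $!";
open STDERR, '>&', \*STDOUT     or die "Can't dup STDOUT: $!";

# The child inherits the redirected descriptors
system( $^X, '-e', 'print "to stdout\n"; warn "to stderr\n"' );

# Restore the originals, then read back what the child wrote
open STDOUT, '>&', $oldout or die "Can't restore STDOUT: $!";
open STDERR, '>&', $olderr or die "Can't restore STDERR: $!";

open my $in, '<', $tmpfile or die "Can't read $tmpfile: $!";
my $captured = do { local $/; <$in> };
close $in;
print $captured;
```

Both streams land in $captured (their relative order depends on the child's buffering). Sending STDOUT and STDERR to two separate temp files works the same way and avoids the drain-two-handles deadlock entirely.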
It's a weird caveat, I know, but the command I'm executing could contain anything, including its own file descriptors.
# command that redirects STDERR to file
$command = 'myapp --flag=somearg 2>>/var/log/myapp.err';
In such a case, if I just add new descriptors to the command, the original descriptor is apparently ignored, and the new one is used instead (on OSX anyway; YMMV).
# NOTHING is written to the original .err file
# instead, STDERR is sent to STDOUT
qx($command 2>&1)
So I needed a way to execute the command unaltered, then catch the output. I respect the constructive criticism though.
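For what it's worth, that override is easy to demonstrate with a toy command (the inner perl -e stands in for myapp here; a Bourne-style shell is assumed, since it applies redirections left to right and the last one for a descriptor wins):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A command that already redirects its own STDERR...
my $command = qq{$^X -e 'warn "boom\\n"' 2>/dev/null};

# ...still has its STDERR captured once 2>&1 is appended: the
# appended redirection is applied last, so it overrides 2>/dev/null.
my $out = qx($command 2>&1);
print $out;    # "boom" shows up here, not in /dev/null
```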
At any rate, xdg's suggestion works well, and I may end up taking your other advice and temporarily redirecting the script's stdout/err just to avoid the girth of one more dependency :)
__________
Systems development is like banging your head against a wall...
It's usually very painful, but if you're persistent, you'll get through it.
Re: Capturing STDOUT and STDERR of system command, with pure perl?
by FunkyMonk (Chancellor) on Aug 24, 2007 at 14:38 UTC
open my $oldout, ">&STDOUT" or die "Can't dup STDOUT: $!";
open OLDERR, ">&", \*STDERR or die "Can't dup STDERR: $!";
open STDOUT, '>', "foo.out" or die "Can't redirect STDOUT: $!";
open STDERR, ">&STDOUT" or die "Can't dup STDOUT: $!";
select STDERR; $| = 1; # make unbuffered
select STDOUT; $| = 1; # make unbuffered
print STDOUT "stdout 1\n"; # this works for
print STDERR "stderr 1\n"; # subprocesses too
open STDOUT, ">&", $oldout or die "Can't dup \$oldout: $!";
open STDERR, ">&OLDERR" or die "Can't dup OLDERR: $!";
print STDOUT "stdout 2\n";
print STDERR "stderr 2\n";
Are any of these methods suitable when used coupled with the open FH, "<", \$string method?
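(For the record, the in-memory variant captures only Perl-level writes; a child started with system() has no OS-level descriptor for the handle. A quick sketch:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $buf = '';
open my $fh, '>', \$buf or die "Can't open in-memory handle: $!";
print {$fh} "hello\n";    # Perl-level output lands in $buf
close $fh;

# $buf now holds "hello\n", but the handle has no real file
# descriptor, so output from system()'d children can't reach it.
```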
Perhaps I needed to be more specific. I want to capture the stdout/err of the command I'm executing from the script, not the stdout/err of the script itself.
Re: Capturing STDOUT and STDERR of system command, with pure perl?
by Anonymous Monk on Jan 21, 2009 at 08:46 UTC
If you want to return a variable containing the output of the last issued command, rather than have it print to the screen, you could do something like...
use strict;
use warnings;

sub SafeSystem
{
    my ($command, $error) = @_;

    # Run the command, redirecting standard and error output to a log file
    my $result = system($command . " 1>C:/output 2>&1");

    # Read the captured output back in
    open(SYS_OUT, "<", "C:/output") or die "SafeSystem couldn't open C:/output: $!";
    my $output = join "", <SYS_OUT>;
    close SYS_OUT;

    # Did the command succeed?
    die "$error: '$command' exited with status $result\n" if $result != 0;

    return $output;
}

# Run a command, but capture its output in a variable rather than
# printing it to STDOUT
my $thecmd = "ping www.google.com";
my $res = SafeSystem($thecmd, "Error");
Simplified to this...
# To get the numerical result (in most cases, the exit status) of the
# command, use the $res variable...
my $thecmd = "dir";
my $res = system("$thecmd 1>C:/output 2>&1");
print $res;

# To get the string output of the command, just read back the file at C:/output...
open(SYS_OUT, "<", "C:/output") or die "Could not open the output: $!";
my $output = join "", <SYS_OUT>;
close SYS_OUT;
print $output;