Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
Hello Monks,
Using fork and exec, is there a way I can start several processes and, for each process started, read its output line by line?
If I find the search word I am looking for, I would like to stop all the other forked processes immediately.
So for example, if I start three processes with fork: Dir C:, Dir P:, Dir Q:
and read line by line, then if I find "desktop" in any of the lines, I will stop all the other processes immediately.
If someone could provide a working example, that would be really helpful. I would learn a lot about fork + exec.
Thank you.
Re: forking and monitoring processes
by zentara (Cardinal) on Jan 08, 2005 at 12:30 UTC
#!/usr/bin/perl
use warnings;
use strict;
use Parallel::ForkManager;

my @dirs = qw( 1 2 3 4 5 6 7 8 9 );
my $max_tasks = 3;
my $pm = Parallel::ForkManager->new($max_tasks);
$|++;
my $start = time();

for my $dir (@dirs) {
    my $pid = $pm->start and next;     # parent records the child's pid and moves on
    printf "Begin $dir at %d secs.....\n", time() - $start;

    # do your processing here:
    # push all the $dir/files into an array
    # and search through them line by line, e.g.
    #
    #   if ($line =~ /desktop/) {
    #       print $filename, ' -> ', $., "\n";
    #       $pm->finish;   # exits this child, so nothing after it runs
    #   }

    printf "done at %d secs!\n", time() - $start;
    $pm->finish;                       # child exits here
}
$pm->wait_all_children;                # wait for every child before reporting
print " all done\n";
Show us your code, and we will help you correct it.
Re: forking and monitoring processes
by revdiablo (Prior) on Jan 08, 2005 at 06:52 UTC
You can use a forking form of open, and IO::Select. Here is an example:
use strict;
use warnings;
use IO::Select;

my %pid;                           # $fh => [ pid, command name ]
my $s = IO::Select->new();

for (qw(proc1 proc2 proc3)) {
    my $pid = open my $fh, "-|", $_    # fork and read the command's output
        or warn "Could not fork&open '$_': $!"
        and next;                      # warn() returns true, so next fires only when open fails
    $pid{$fh} = [ $pid, $_ ];
    $s->add($fh);
}

while (my @ready = $s->can_read) {
    for my $fh (@ready) {
        if (eof $fh) {                 # this child has finished
            delete $pid{$fh};
            $s->remove($fh);
            next;
        }
        my $line = <$fh>;
        if ($line =~ /desktop/) {
            chomp $line;
            print "Found 'desktop' in '$line' from $pid{$fh}[1]\n";
            kill 15, map { $_->[0] } values %pid;   # SIGTERM every remaining child
        }
    }
}
Update: changed to a hash instead of an array for storing the PIDs
Another update: fixed bugs introduced by last-minute changes to the hash structure.
my $pid = open my $fh, "-|", $_
This pipe-opens the process to the filehandle, right? I wasn't sure what the "-" in "-|" meant.
Also, after looking at each line, I would like to store the output of each child process in a different file. So for process 1, I want to store the output in proc1.txt, and so on. But when processing each line, I am only aware of the pid and not the process running, so I was wondering how I can write to proc1.txt when all I know is the pid.
Thanks.
But when processing each line, I am only aware of the pid and not the process running
That's why I changed to a hash of arrays to store the PIDs and process information. You can get the program name by $pid{$fh}[1], or put more information into that array as necessary.
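For the proc1.txt part, here is one possible way to wire it up, roughly following the loop above (the %out hash and the proc1/proc2/proc3 placeholder commands are illustrative, not part of the example above; substitute your real commands):
use strict;
use warnings;
use IO::Select;

my (%pid, %out);                   # %out holds one output filehandle per child
my $s = IO::Select->new();

for (qw(proc1 proc2 proc3)) {      # placeholder commands
    my $pid = open my $fh, "-|", $_
        or warn "Could not fork&open '$_': $!"
        and next;
    $pid{$fh} = [ $pid, $_ ];
    open $out{$fh}, '>', "$_.txt"              # e.g. proc1.txt
        or die "Cannot write $_.txt: $!";
    $s->add($fh);
}

while (my @ready = $s->can_read) {
    for my $fh (@ready) {
        if (eof $fh) {
            delete $pid{$fh};
            $s->remove($fh);
            next;
        }
        my $line = <$fh>;
        print { $out{$fh} } $line;             # log this child's line to its own file
        if ($line =~ /desktop/) {
            kill 15, map { $_->[0] } values %pid;   # SIGTERM every child
        }
    }
}
The key point is that the filehandle, not the pid, is the lookup key, so anything you store alongside the pid (here, the command name and an output filehandle) is available when a line arrives.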
Update: your question about -| can be fully answered by reading open's documentation. The short answer is it opens a process for reading.
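To make the direction concrete, a tiny example (ls and sort are just stand-in commands; on Windows you would substitute something like dir):
# '-|' forks and runs the command, and you READ its output:
open my $rh, '-|', 'ls -l' or die "pipe-open for reading failed: $!";
print "got: $_" while <$rh>;
close $rh;

# '|-' forks and runs the command, and you WRITE to its standard input:
open my $wh, '|-', 'sort' or die "pipe-open for writing failed: $!";
print $wh "$_\n" for qw(pear apple mango);
close $wh;    # the sorted list appears on STDOUT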
Re: forking and monitoring processes
by zentara (Cardinal) on Jan 10, 2005 at 20:48 UTC
Hi, here is a script that will work for you. I use the @ARGV trick to loop through each line in a directory full of files, do a regex match, and cancel all the forks if the regex matches. You don't need to worry about different filehandles this way. (This works very fast, so you may get two matches by the time the forks can be shut down, but that shouldn't be a problem.)
#!/usr/bin/perl
use warnings;
use strict;
use Parallel::ForkManager;

my $dir = shift || '.';
my @dirs = get_sub_dirs($dir);
my $max_tasks = 3;
my $pm = Parallel::ForkManager->new($max_tasks);
$|++;
my $start = time();

for my $dir (@dirs) {
    my $pid = $pm->start and next;
    printf "Begin processing $dir at %d secs.....\n", time() - $start;

    # push all the $dir/files into @ARGV and search through them
    # line by line
    @ARGV = <$dir/*>;
    while (<ARGV>) {
        close ARGV if eof;                 # resets $. for each file
        if (/desktop/) {
            print "$ARGV: $. :$_\n";
            $pm->finish;                   # exits this child right away
        }
    }
    printf ".... $dir done at %d secs!\n", time() - $start;
    $pm->finish;
}
$pm->wait_all_children;
print " all done\n";
exit;
##########################################################
sub get_sub_dirs {
    my $dir = shift;
    opendir my $dh, $dir or die "Error: $!";
    # readdir returns bare names, so test them relative to $dir
    my @dirs = grep { -d "$dir/$_" && !/^\.\.?$/ } readdir $dh;
    closedir $dh;
    # return full paths so the caller's glob works from any directory
    return map { "$dir/$_" } @dirs;
}
Hello,
Could you please tell me what @ARGV = <$dir/*> does?
I actually want to execute a command ... should I use backticks instead to capture the output?
so @output = `Start_Process.exe param1 param2`;
Then I would parse through each line of @output to see if I find the error messages I am looking for, and then close out all the other forks immediately using close ARGV?
Thanks.
It "globs" all the files in $dir into @ARGV, which is a special array for input. It's advantage over a regular array is that you can go thru it "line-by-line" without having to open and close each file, which you would have to do with a regular array of files. Your @output plan sounds about right, but there are usually a few glitches to work out, so test,test,test. :-)