in reply to General perl question. Multiple servers.
In effect, the perl script does mechanically what you are now doing manually (you can be replaced by a perl script ;). Of course, that assumes the 150 machines are all doing the same thing and all store their results under the same path/file.name. Failing that, the list of machines to scan must include whatever details are needed to locate the log file on each one.
For doing the harvest, there's nothing inherently wrong with just running an appropriate command in back-ticks, like my $log = `ssh $host cat /log/path/my_process.log` (this assumes you have public-key authentication in place, so the userid running the script won't need to supply a password for each connection). If the overhead of launching a shell 150 times bothers you, you could do it like this:
(updates: added check for success from the chdir call, and added the first print statement, which does a chdir in the subshell.)

    use strict;
    use warnings;
    use POSIX;

    my $today = strftime( "%Y%m%d", localtime );
    mkdir "/path/harvest/$today";
    chdir "/path/harvest/$today" or die $!;
    my @remote_machine_list = ... ; # (fill in the ...)

    # Open a pipe to a single subshell and feed it one ssh command per host,
    # instead of launching a fresh shell for each of the 150 machines.
    my $shell_pid = open my $shell, "|-", "/bin/sh" or die $!;
    print $shell "cd /path/harvest/$today\n";
    print $shell "ssh $_ cat /log/path/my_process.log > $_.log 2>> harvest.errlog ||"
               . " echo $_ failed >> harvest.errlog\n" for ( @remote_machine_list );
    print $shell "exit\n";
    waitpid $shell_pid, 0;
You might need to add stuff to that, like setting an alarm handler ($SIG{ALRM}) in case ssh hangs on a given host. On each iteration, if the "ssh $_ ..." command works, its output is stored locally in "$_.log", and any stderr output is appended to the local file "harvest.errlog". If the ssh fails, a line saying so is appended to "harvest.errlog" as well.
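To show the alarm idea concretely, here is a minimal sketch of a per-host timeout wrapped around the back-ticks approach. The host names and the 30-second limit are placeholder assumptions, not anything from the original setup:

```perl
use strict;
use warnings;

my @hosts = qw( web01 web02 );   # hypothetical host list, for illustration

for my $host (@hosts) {
    my $log = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };  # fires if ssh hangs
        alarm 30;                                    # allow 30 seconds per host
        my $out = `ssh $host cat /log/path/my_process.log`;
        alarm 0;                                     # cancel the pending alarm
        $out;
    };
    if ( !defined $log ) {
        warn "$host: $@";   # "timeout\n" or whatever else went wrong
        next;
    }
    # ... store $log locally, e.g. in "$host.log" ...
}
```

Note that for plain connection hangs, passing ssh its own -o ConnectTimeout=10 option is simpler and avoids leaving an orphaned ssh process behind when the alarm fires.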
But there are also a variety of CPAN modules in the Net::SSH namespace that you might find preferable.
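For instance, a sketch of the same single-host fetch using the Net::OpenSSH module from CPAN (the host name and timeout value here are placeholders; this is an illustration, not a drop-in replacement for the script above):

```perl
use strict;
use warnings;
use Net::OpenSSH;    # install from CPAN

my $host = 'web01';  # hypothetical host name
my $ssh  = Net::OpenSSH->new( $host, timeout => 30 );
die "connect to $host failed: " . $ssh->error if $ssh->error;

# capture() runs the remote command and returns its stdout as a string
my $log = $ssh->capture('cat /log/path/my_process.log');
die "remote command on $host failed: " . $ssh->error if $ssh->error;
```

One nice property of this route is that the module reuses a single SSH master connection for multiple commands to the same host, and you get real error reporting via $ssh->error instead of parsing a shared errlog file.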