in reply to How Could I Speed Up An Archive Script?
Since each $schema is on a different server, why not perform the FTP downloads in parallel? A single FTP session is unlikely to use all of your machine's bandwidth, so you might save some time by running several FTPs at once. Also, one of the longer FTP sessions could still be downloading (relatively light on the CPU) while another $schema has moved on to the compression stage (heavy on CPU, light on disk).
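A minimal sketch of one way to do that with plain fork (no extra modules). Here archive_schema() and the schema names are placeholders standing in for your per-$schema download-and-compress code:

```perl
use strict;
use warnings;

my @schemas = qw(schema1 schema2 schema3);   # placeholder names
my @pids;

for my $schema (@schemas) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                 # child process
        archive_schema($schema);     # your existing per-schema work
        exit 0;
    }
    push @pids, $pid;                # parent remembers each child
}

waitpid($_, 0) for @pids;            # wait for every child to finish

sub archive_schema {
    my ($schema) = @_;
    print "archiving $schema in pid $$\n";   # stand-in for FTP + zip
}
```

Modules like Parallel::ForkManager from CPAN wrap the same idea with a cap on how many children run at once, which is worth considering if the schema list is long.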
Before you do the $ftp->get on the files, you have an if statement that is going to be TRUE for everything. if($file1 !~ /.stderr/ || $file1 !~ /.log/) says "if NOT .stderr OR if NOT .log"; a file can't match both patterns, so at least one side is always true and the test passes every time. You probably wanted && (and escaped, anchored dots, e.g. /\.stderr$/), so that only files which are neither .stderr nor .log get downloaded. I doubt the always-true version is what you really wanted.
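A small sketch of the corrected test, assuming the intent is to skip those two extensions (want_file is a hypothetical helper name):

```perl
use strict;
use warnings;

# Return true only for files that are neither .stderr nor .log.
# Note the && (both conditions must hold) and the escaped, anchored dots.
sub want_file {
    my ($file) = @_;
    return ($file !~ /\.stderr$/ && $file !~ /\.log$/);
}

print "fetching app.txt\n"  if want_file("app.txt");    # fetched
print "fetching run.log\n"  if want_file("run.log");    # skipped
print "fetching x.stderr\n" if want_file("x.stderr");   # skipped
```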
After you've compressed your files, you write the resultant .zip file out to the local hard drive, copy it to two other locations based on $schema, then delete the local .zip file. Why not just write it to where you want it with $zip->writeToFileNamed( "G:/some folder/$schema/$logname" ); instead? Then a single copy to the second location is all that's left.
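Roughly like this, using Archive::Zip (which I'm assuming from your writeToFileNamed call). Temp directories here stand in for your two destination drives:

```perl
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);
use File::Copy qw(copy);
use File::Temp qw(tempdir);

# Stand-ins for "G:/some folder/$schema" and the second drive;
# in the real script these would be your two destination paths.
my $dest1   = tempdir(CLEANUP => 1);
my $dest2   = tempdir(CLEANUP => 1);
my $logname = "archive.zip";

my $zip = Archive::Zip->new();
$zip->addString("example contents", "example.txt");   # placeholder member

# Write the archive straight to the first destination --
# no local temp copy, no later unlink.
$zip->writeToFileNamed("$dest1/$logname") == AZ_OK
    or die "zip write failed";

# Then copy the finished file to the second location.
copy("$dest1/$logname", "$dest2/$logname")
    or die "copy failed: $!";
```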
I don't think you're going to speed up the compression much. You might wish to include a few more print statements so you know where in your script the process is. Statements like print "Starting FTP $host\n"; and print "Finished FTP, starting compression on $schema\n"; would help. Maybe even a print "Writing compressed file $logname to disk\n"; too. That will help you figure out where your bottleneck is.
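If you prefix each message with a timestamp, you get rough stage timings for free. A small sketch (progress is a hypothetical helper name):

```perl
use strict;
use warnings;

# Print a progress message prefixed with the current time, and return
# the formatted line so callers can log it elsewhere too.
sub progress {
    my ($msg)  = @_;
    my $line   = scalar(localtime) . " - $msg";
    print "$line\n";
    return $line;
}

progress("Starting FTP host1");
progress("Finished FTP, starting compression on schema1");
progress("Writing compressed file schema1.zip to disk");
```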
- - arden.
p.s. please include <READMORE> tags on large sections of code. It really matters if your node gets front-paged!