note
roboticus
<p>[smart_amorist]:</p>
<p>Doing a stat on every file is too time-consuming. Rather than rescanning the entire directory tree and checking every file's age, I'd suggest keeping a text file containing the filename and last-modified date of every file known. Then you can quickly ignore most of the files without doing a stat. For the files that *are* candidates for removal, do the stat and update your text database. If a file is new (it wasn't in the last scan), it's obviously not older than the previous scan, so you can simply enter it into your table with the previous run date. Something like:</p>
<c>
# Read the text database from the last run
my %Files;
open my $FH, '<', 'my_database' or die "Can't read my_database: $!";
while (<$FH>) {
    chomp;
    my ($YYYYMMDD, $FName) = split ' ', $_, 2;
    $Files{$FName} = $YYYYMMDD;
}
close $FH;

# Retrieve last runtime
my $LASTRUN = $Files{TheLastRunTime};
my $OLDEST_FILE_DATE = function_getting_cutoff_date();

# Scan the file tree
for my $FName (function_retrieving_file_list()) {
    if (! exists $Files{$FName}) {
        # New file: it can't predate the previous scan, so just
        # add it to the database with the previous run date
        $Files{$FName} = $LASTRUN;
    }
    elsif ($Files{$FName} lt $OLDEST_FILE_DATE) {
        # Last recorded "modified" time is too old:
        # ...code to stat the file, check its real last date,
        # update the entry, and call the archiver as needed...
    }
    else {
        # Nothing to do: we know this file was touched within
        # the cutoff period, so leave it until next time.
    }
}

# Rewrite the database file for next time
$Files{TheLastRunTime} = function_returning_now_as_YYYYMMDD();
rename 'my_database', "my_database.$Files{TheLastRunTime}";
open $FH, '>', 'my_database' or die "Can't write my_database: $!";
for my $K (keys %Files) {
    print $FH "$Files{$K} $K\n";
}
close $FH;
</c>
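<p>For the elided "check and archive" branch, one possible sketch (the <c>archive_file</c> stub and the sub name are my inventions, not part of the original): stat the candidate, convert its real mtime to YYYYMMDD, refresh the database entry, and archive only if the file is genuinely still too old.</p>

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# Stand-in for your real archiver (hypothetical placeholder).
sub archive_file { print "would archive: $_[0]\n" }

# Stat the candidate, record its true last-modified date, and
# archive it only if that date is still before the cutoff.
# Returns 1 if the file was archived, 0 otherwise.
sub check_and_archive {
    my ($fname, $cutoff, $files) = @_;
    my @st = stat $fname or return 0;          # file vanished? skip it
    my $mtime = strftime('%Y%m%d', localtime $st[9]);  # $st[9] is mtime
    $files->{$fname} = $mtime;                 # refresh the text database
    if ($mtime lt $cutoff) {
        archive_file($fname);
        return 1;
    }
    return 0;
}
```

<p>Note that refreshing the entry even when the file is *not* archived is what lets the next run skip the stat for files that were recently touched.</p>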
<p>If you typically have 10,000 files out of 190,000 needing to be archived, then you're saving many stat calls--on some operating systems, anyway. I don't know the details of yours, but if the stat calls take a significant amount of time, this approach will save you 180,000 stat calls per run.</p>
<p>Hope this helps...</p>
<p>...[roboticus]</p>
<p><i>When your only tool is a hammer, all problems look like your thumb.</i></p>
1056216