Aristotle
<h4>Some more nooks and crannies for the last example</h4>
<p>'Cause I'm just itching to add a few bits. :)</p>
<code>#!/usr/bin/perl -w
use strict;
use Getopt::Std;
use File::Find;
my %opt;
@ARGV > 0 and getopts('n:s:m:', \%opt) and not (keys %opt > 1) or die << "USAGE";
Shows the biggest files residing in one or several directory trees.
usage: $0 [-n num] [-s size] [-m size] directory [directory ...]
    -n  show <num> files
    -s  show biggest files totalling <size>
    -m  show all files bigger than <size>
use only one option at a time
default is 20 biggest files
USAGE

my ($switch, $param) = %opt;
$switch = '' unless defined $switch;    # no option given: fall through to the default

my %size;
find(sub { $size{$File::Find::name} = -s if -f }, @ARGV);

my @sorted = sort { $size{$b} <=> $size{$a} } keys %size;

my $count = 0;
if($switch eq 's') {
    for (@sorted) {
        last if ($param -= $size{$_}) < 0;
        $count++;
    }
}
elsif($switch eq 'm') {
    for (@sorted) {
        last if $size{$_} < $param;
        $count++;
    }
}
else {
    $count = $param || 20;
}
$count = @sorted if $count > @sorted;

printf "%10d %s\n", $size{$_}, $_ for @sorted[0 .. $count - 1];
</code>
<h3>Even more advanced uses</h3>
The <tt>preprocess</tt> and <tt>postprocess</tt> predicates of [cpan://File::Find] let you do some really wild stuff. To make use of them, you have to use the extended syntax of calling <tt>find()</tt>. To specify extra options, you have to pass a hash as the first parameter, rather than just a subroutine reference. The simplest case is exactly equivalent to using the subref shorthand:
<code>find( { wanted => \&print_if_dir, }, @dirs);
# or
find( { wanted => sub { print if -d }, }, @dirs);
</code>
Both of the new extra directives, <tt>preprocess</tt> and <tt>postprocess</tt>, take a subroutine reference, just like the standard <tt>wanted</tt> one in the above examples. With that out of the way, let's get to the juicy stuff:
<h4><tt>preprocess</tt></h4>
<tt>find()</tt> passes this routine an array with the entire contents of a directory immediately upon entering the directory and expects it to return the list of interesting files. Any omitted files will not be passed to the <tt>wanted</tt> function and omitted directories <em>will not even be descended into</em> by <tt>find()</tt>. This predicate makes [cpan://File::Find] the most powerful tool for all your directory traversal needs. To warm up, here's a silly example that does the same as the previous examples, that is, print only the names of directories:
<code>find( {
    preprocess => sub { return grep { -d } @_ },
    wanted     => sub { print },
}, @dirs);
</code>
<p>As you (should) know, Perl stores the parameters passed to a subroutine in the special array <tt>@_</tt>. [perlfunc:grep|grep] tests all elements of a list passed to it (here: the list of parameters, and thus filenames) against the expression and then returns a new list containing only the elements for which that expression is true. Here, the expression tests whether the entry is a directory, so the result is a list which does not contain any files, symlinks or anything else besides directories. We return this new list, causing <tt>find()</tt> to forget all the files, symlinks and everything else. It will not pass them to our <tt>wanted</tt> function, and so we can just print everything we get passed into there. Obviously, this is a contrived example.</p>
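<p>If the way <tt>grep</tt> filters the list is not obvious, here is the same idea in isolation, outside of <tt>find()</tt> entirely (the entry names are made up for illustration):</p>
<code>my @entries = qw(bin etc README TODO);
# keep only the entries that are directories on the current system
my @dirs_only = grep { -d } @entries;
</code>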
<p>So, what really interesting stuff can we do with the <tt>preprocess</tt> directive? Let's just try to implement the <tt>-mindepth</tt> and <tt>-maxdepth</tt> options offered by GNU find. Of course, you don't <em>need</em> <tt>preprocess</tt> to do that. The naive way would be to check the depth of the current location in the directory tree within the <tt>wanted</tt> function and bail if we're too deep or not deep enough. However, this is wasteful: what if you are traversing a very deep tree with thousands of directories and several hundred thousand files? The <tt>wanted</tt> function will likely spend most of its time saying "no, not deep enough", "no, too deep", "no, no, too deep", "too deep, next one", throwing away files over and over. The biggest problem here is that even if you <b>only</b> want the files at depth 2-3, <tt>find()</tt> will happily descend down to level 15, giving <tt>wanted</tt> all the directories and files it encounters en route, oblivious to the fact that we are only throwing them all away, waiting for the directory traversal to back out up to level 3 again. The solution is to use a <tt>preprocess</tt> routine to cull all directories from the list once we reach the maxdepth, preventing <tt>find()</tt> from descending any further and getting lost in areas of the tree we aren't interested in anyway. So without further ado:</p>
<code>my ($min_depth, $max_depth) = (2, 3);

find( {
    preprocess => \&preprocess,
    wanted     => \&wanted,
}, @dirs);

sub preprocess {
    my $depth = $File::Find::dir =~ tr[/][];
    return @_ if $depth < $max_depth;
    return grep { not -d } @_ if $depth == $max_depth;
    return;
}

sub wanted {
    my $depth = $File::Find::dir =~ tr[/][];
    return if $depth < $min_depth;
    print;
}
</code>
Let's see what happens here. We find out how deep we currently are by counting the forward slashes in the full pathname of wherever we are, <tt>$File::Find::dir</tt>. If we are below the maximum depth, then we want to look at all files. If we are <b>at</b> the maximum depth, we ditch all directories, so <tt>find()</tt> will not descend any further. If we somehow got too deep, we return nothing, causing <tt>find()</tt> to back out of the directory immediately. Finally, in <tt>wanted</tt> we examine the depth again, in order to avoid processing files below the minimum depth. Because <tt>find()</tt> needs to descend into these directories we cannot avoid it passing names for directories that are too far up the tree to our <tt>wanted</tt> function.
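<p>A side note on the depth counting, in case the <tt>tr[/][]</tt> idiom is unfamiliar: in scalar context <tt>tr///</tt> returns the number of characters it matched, and with an empty replacement list it leaves the string unchanged, so this is just a fast way to count slashes. A quick sketch with a hypothetical path:</p>
<code>my $path = '/usr/local/share/doc';
my $slashes = $path =~ tr[/][];  # 4, and $path is untouched
</code>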
<h4><tt>postprocess</tt></h4>
<p>This one is a lot less involved; mainly because it neither takes nor returns anything. It is simply called before <tt>find()</tt> backs out of a directory, which means the entire subtree below it has been processed. In other words, it is safe to mess with the directory without unintentionally confusing <tt>find()</tt>.</p>
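<p>A minimal sketch (with a hypothetical starting directory) makes the calling order visible: the <tt>postprocess</tt> line for a directory only ever appears after the <tt>wanted</tt> lines for everything beneath it.</p>
<code>use File::Find;
find( {
    wanted      => sub { print "seen: $File::Find::name\n" },
    postprocess => sub { print "done: $File::Find::dir\n" },
}, 'some/tree');
</code>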
<p>The following utility script makes use of this to remove empty directories. It doesn't try to check whether they're empty, because that's relatively complicated (we have to pay attention to the special <tt>.</tt> and <tt>..</tt> entries) and [perlfunc:rmdir|rmdir] will not remove a non-empty directory anyway. So we just let it fail harmlessly.</p>
<code>#!/usr/bin/perl -w
use strict;
use Getopt::Std;
use File::Find;
@ARGV > 0 and getopts('a:', \my %opt) or die << "USAGE";
Deletes any old files from the directory tree(s) given and
removes empty directories en passant.
usage: $0 [-a maxage] directory [directory ...]
-a maximum age in days, default is 120
USAGE
my $max_age_days = $opt{a} || 120;
find({
    wanted      => sub { unlink if -f $_ and -M _ > $max_age_days },
    postprocess => sub { rmdir $File::Find::dir },
}, @ARGV);
</code>
<h3>Conclusion</h3>
<p>As if [cpan://File::Find] was not already good enough, these two extra predicates give you the power to do literally anything. <tt>preprocess</tt> lets you control <tt>find()</tt>'s behaviour in any way conceivable, and <tt>postprocess</tt> makes it easy to do all manner of cleanup tasks without requiring a second directory traversal. Combining these powers makes it very easy to write astonishingly powerful file handling scripts with very little effort.</p>
<p><b>Update:</b> fixed a couple typos in the text, rearranged a few sentences for clarity. No changes to actual content.</p>
<p><b>Update:</b> fixed code per reply below.</p>
<p align="right"><i>Makeshifts last the longest.</i></p>