Hi,
I wrote a command-line utility using your module that makes it easy to delete duplicate files.
#!/usr/bin/perl -w
use strict;
use File::Find::Duplicates;
$|++; # AutoFlush the Buffer
usage() unless @ARGV;
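# find_duplicate_files, as used here, returns a hash keyed by file size,
# each value an arrayref of paths whose contents are identical.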
my %dupes = find_duplicate_files(@ARGV);
die "No duplicates found!\n" unless keys %dupes;
print "############ Duplicate File Report & Removal Utility ##########
+##\n";
my $i = 1;
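# Each group of identical files becomes a numbered block of commented-out
# "push @delete, ..." lines; uncommenting a line flags that file for deletion.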
foreach my $fsize (keys %dupes) {
    print "#" x 64 . " " . $i++ . "\n";
    print map {
        -l $_ ? "# push \@delete, '$_'; # symlinked to " . readlink($_) . "\n"
              : "# push \@delete, '$_';\n"
    } @{ $dupes{$fsize} };
    print "\n";
}
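# Finish the report with the statement that actually deletes the flagged
# files when the report is run back through perl.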
print "unlink \@delete;\n";
sub usage {
    (my $script_name = $0) =~ s#.*/##; # $0 = full path to script
    print <<END;
Generates a Report on Duplicate Files.
Usage: $script_name [List of Directories]
END
    exit;
}
### POD ###
=head1 Name

dupes - a command line utility to report on all duplicate files, even if they
have different names. This is good for mp3s and multiple drafts of documents
that may have been backed up in different places.

=head1 Synopsis

  dupes [list of directories to search recursively]

=head1 From an empty buffer in Vim

The following commands will fill the buffer with a report of all duplicate
files.

  :%!dupes [list of directories]

B<or>

  !!dupes [list of directories]

The report generated by the above commands is yet another perl script that can
be edited, allowing you to flag certain files for removal.

The following command will run the report and remove all flagged files.

  :%!perl

Nothing is deleted unless you flag the file by uncommenting the line.
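
For illustration only, a report for two groups of duplicates might look
roughly like this (the paths are made up, and one line has been uncommented
to flag that copy for deletion):

  ################################################################ 1
  # push @delete, '/home/user/music/song.mp3';
  push @delete, '/backup/music/song.mp3';

  ################################################################ 2
  # push @delete, '/home/user/draft.doc';
  # push @delete, '/tmp/draft.doc'; # symlinked to /home/user/draft.doc

  unlink @delete;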

If you don't understand how the report works, the following commands should
explain it.

  perldoc -f push
  perldoc -f unlink

=head1 AUTHOR

Kingsley Gordon, E<lt>kingman@ncf.caE<gt>

last modified: Thu Jul 4 15:11:26 EDT 2002

=cut