PerlMonks  

Re: Find duplicate files.

by lemming (Priest)
on Jun 02, 2001 at 20:42 UTC


in reply to Find duplicate files.

Interesting. I just went through a similar problem of combining four computers' worth of archives (about 9,000 files). In some cases I had near-duplicates due to slight doc changes and the like, so I wanted a bit more information. I had a second program do the deletes; a sketch of that second step follows the listing below.

I couldn't go by dates, due to bad file management.

Note that the open statement uses the three-argument version; I had some badly named files such as ' ha'. I wish I could remember the name of the monk who pointed out the documentation to me.

#!/usr/bin/perl
# allstat.pl
use warnings;
use strict;
use File::Find;
use File::Basename;
use Digest::MD5;

my %hash;
my @temp;

while (my $dir = shift @ARGV) {
    die "Give me a directory to search\n" unless (-d "$dir");
    File::Find::find(\&wanted, "$dir");
}
exit;

sub wanted {
    return unless (-f $_);
    my $md5;
    my $base = File::Basename::basename($File::Find::name, "");
    my $size = -s "$base";
    if ($size >= 10000000) {    # They slowed down the check enough that I skip them
        if ($size >= 99999999) { $size = 99999999; }
        $md5 = 'a' x 32;        # At this point I'll just hand check, less than a dozen files
    }
    else {
        $md5 = md5file("$base");
    }
    if ($File::Find::name =~ /\t/) {    # Just in case, this screws up our tab-delimited file
        warn "'$File::Find::name' has tabs in it\n";
    }
    printf("%32s\t%8d\t%s\t%s\n", $md5, $size, $File::Find::name, $base);
}

sub md5file {
    my ($file) = @_;
    unless (open FILE, "<", "$file") {
        warn "Can't open '$file': $!";
        return -1;    # Note we don't want to die just because of one file.
    }
    binmode(FILE);
    my $chksum = Digest::MD5->new->addfile(*FILE)->hexdigest;
    close(FILE);
    return $chksum;
}
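
The second program that did the deletes isn't shown in the post. As a minimal sketch (not lemming's actual script), assuming the tab-delimited format printed above (md5, size, full path, basename), something like this would group the candidates for hand review:

#!/usr/bin/perl
# dupreport.pl - hypothetical companion to allstat.pl: reads its
# tab-delimited output and groups files sharing both MD5 and size.
use warnings;
use strict;

my %by_key;    # "md5:size" => [ list of full paths ]

while (my $line = <>) {
    chomp $line;
    my ($md5, $size, $path, $base) = split /\t/, $line, 4;
    next unless defined $base;          # skip malformed lines
    push @{ $by_key{"$md5:$size"} }, $path;
}

for my $key (sort keys %by_key) {
    my @paths = @{ $by_key{$key} };
    next unless @paths > 1;             # only report real duplicates
    print "Duplicate set ($key):\n";
    print "    $_\n" for @paths;
}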

Re: Re: Find duplicate files.
by bikeNomad (Priest) on Jun 02, 2001 at 21:13 UTC
    For just comparing files, it might be enough to compute a CRC32 and use it along with the file size. This has the added advantage of allowing quick scans through ZIP files for dupes as well (using Archive::Zip, of course), because ZIP files already contain crc32 values. A rough sketch of scanning ZIP members this way follows the benchmark below.

    The CRC32 found in Compress::Zlib runs 82% faster than Digest::MD5 on my system, using the following benchmark program:

    #!/usr/bin/perl -w
    use strict;
    use IO::File;
    use Compress::Zlib ();
    use Digest::MD5;
    use Benchmark;

    use constant BUFSIZE => 32768;

    sub crc32 {
        my $fh = shift;
        binmode($fh);
        sysseek($fh, 0, 0);    # rewind
        my $buffer = ' ' x BUFSIZE;
        my $crc = 0;
        while ($fh->sysread($buffer, BUFSIZE)) {
            $crc = Compress::Zlib::crc32($buffer, $crc);
        }
        return $crc;
    }

    sub md5 {
        my $fh = shift;
        seek($fh, 0, 0);       # rewind
        my $md5 = Digest::MD5->new();
        $md5->addfile($fh);
        return $md5->digest;
    }

    foreach my $file (@ARGV) {
        my $fh = IO::File->new($file);
        next if !defined($fh);
        binmode($fh);
        Benchmark::cmpthese(-10, {
            "crc32 $file" => sub { crc32($fh) },
            "md5 $file"   => sub { md5($fh) },
        });
    }
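
    As a rough sketch of the ZIP idea (not part of the original post): each ZIP member already carries a CRC-32 and an uncompressed size in the archive's central directory, so duplicates inside archives can be indexed without re-reading any file data. The key format here is an assumption.

    use strict;
    use warnings;
    use Archive::Zip qw( :ERROR_CODES );

    my %seen;    # "crc32:size" => first place we saw it

    foreach my $zipfile (@ARGV) {
        my $zip = Archive::Zip->new();
        next unless $zip->read($zipfile) == AZ_OK;
        foreach my $member ($zip->members()) {
            next if $member->isDirectory();
            my $key   = sprintf "%08x:%d", $member->crc32(), $member->uncompressedSize();
            my $where = "$zipfile!" . $member->fileName();
            if (exists $seen{$key}) {
                print "$where duplicates $seen{$key}\n";
            }
            else {
                $seen{$key} = $where;
            }
        }
    }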
Re^2: Find duplicate files.
by mimosinnet (Beadle) on Apr 02, 2012 at 14:13 UTC

    I am new to Perl (and to writing code) and I have just attended an excellent course organized by Barcelona_pm. I have rewritten lemming's code as an exercise in using Moose. To improve speed, following the suggestions above, files with the same size are identified first and MD5 values are calculated only for those files; a small usage sketch follows the two listings below. Because this is baby-code, please feel free to recommend any RTFM $manual that I should review to improve the code. Thanks for this great language!

    (I have to thank Alba from Barcelona_pm for suggestions on how to improve the code).

    This is the definition of the "FileDups" class:

    package FileDups;

    use Digest::MD5;
    use Moose;
    use namespace::autoclean;

    has 'name'     => (is => 'ro', isa => 'Str',  required => 1);
    has 'pathname' => (is => 'ro', isa => 'Str',  required => 1);
    has 'max_size' => (is => 'ro', isa => 'Num',  required => 1);
    has 'big'      => (is => 'rw', isa => 'Bool', required => 1, default => 0);
    has 'unread'   => (is => 'rw', isa => 'Bool', required => 1, default => 0);
    has 'dupe'     => (is => 'rw', isa => 'Bool', required => 1, default => 0);
    has 'md5'      => (is => 'ro', isa => 'Str',  lazy => 1, builder => '_calculate_md5');
    has 'size'     => (is => 'ro', isa => 'Num',  lazy => 1, builder => '_calculate_size');

    sub _calculate_size {
        my $self = shift;
        my $size = -s $self->name;
        if (-s $self->name > $self->max_size) {
            $size = $self->max_size;
            $self->big(1);
        }
        return $size;
    }

    sub _calculate_md5 {
        my $self = shift;
        my $file = $self->pathname;
        my $size = $self->size;
        my $chksum = 0;
        if ($size == $self->max_size) {
            $chksum = 'a' x 32;
        }
        else {
            my $fh;
            unless (open $fh, "<", "$file") {
                $self->unread(1);
                return -1;    # return -1 and leave the subroutine if the file cannot be opened
            }
            binmode($fh);
            $chksum = Digest::MD5->new->addfile($fh)->hexdigest;
            close($fh);
        }
        return $chksum;
    }

    1;

    And this is the main program, which lists duplicate files, big files and unread files.

    #!/usr/bin/env perl
    # References:
    # http://drdobbs.com/web-development/184416070
    use strict;
    use warnings;
    use File::Find;
    use lib qw(lib);
    use FileDups;
    use Data::Dumper;

    # %dup: md5 => [ [size, name, pathname], ... ]; @object: array of FileDups objects
    my (%dup, %sizes, @object, $number_files, $number_size_dups);
    my $max_size = 99999999;                 # Size above which md5 will not be calculated
    my $return   = "Press return to continue \n\n";
    my $line     = "-" x 70 . "\n";

    while (my $dir = shift @ARGV) {          # Find and classify files
        die "\"$dir\" is not a directory. Give me a directory to search\n" unless (-d "$dir");
        File::Find::find(\&wanted, "$dir");
    }
    print "\n";

    foreach (@object) {                      # Calculates md5 for files with equal size
        if ($sizes{$_->size} == "1") {
            $number_size_dups += 1;
            print "$number_size_dups Files with the same size \r";
            $_->dupe(1);                     # The object has another object with the same size
            $_->md5;                         # Calculates md5
        }
    }

    foreach (@object) {                      # Creates a hash of md5 values
        if ($_->dupe == 1) {                 # for files with the same size
            if (exists $dup{$_->md5}) {
                push @{$dup{$_->md5}}, [$_->size, $_->name, $_->pathname];
            }
            else {
                $dup{$_->md5} = [ [$_->size, $_->name, $_->pathname] ];
            }
        }
    }

    print "\n\nDuplicated files\n $line $return";
    my $pausa4 = <>;

    foreach (sort keys %dup) {               # sort hash by md5sum
        if ($#{$dup{$_}} > 0) {              # more than one entry under the same md5
            printf("\n%8s %10.10s %s\n", "Size", "Name", "Pathname");
            foreach ( @{$dup{$_}} ) {        # each element is a reference to [size, name, pathname]
                printf("%8d %10.10s %s\n", @{$_});    # dereference the array reference
            }
        }
    }

    my $r1 = &list_files("Big files", "big", @object);       # List big files
    my $r2 = &list_files("Unread files", "unread", @object); # List unread files

    sub wanted {
        return unless (-f $_);
        my $file = FileDups->new(name => $_, pathname => $File::Find::name, max_size => $max_size);
        $number_files += 1;
        print "$number_files Files seen\r";
        if ($file->size == $max_size) {          # Identifies big files
            $sizes{$file->size} = "0";           # We do not check md5 for big files
        }
        elsif (exists $sizes{$file->size}) {     # There is more than one file with this size
            $sizes{$file->size} = "1";
        }
        else {
            $sizes{$file->size} = "0";           # This is a new size value, not duplicated
        }
        push @object, $file;                     # Puts the object in the @object array
    }

    sub list_files {                             # List objects according to criteria:
        my ($title, $criteria, @object) = @_;    # (a) big files; (b) unread files
        print "\n \n $title \n" . $line;
        my $pausa = <>;
        foreach (@object) {
            if ($_->$criteria) {
                printf(" %10.10s %s\n", $_->name, $_->pathname);
            }
        }
        print $line;
    }
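
    A small usage sketch of the class on its own (the file name here is hypothetical and assumed to exist in the current directory; the class sizes the file via its 'name' attribute and opens it via 'pathname'). Because 'md5' is a lazy attribute, the digest is only computed when it is first requested, which is what lets the main program skip hashing files whose sizes never collide.

    use strict;
    use warnings;
    use lib qw(lib);
    use FileDups;

    my $f = FileDups->new(
        name     => 'report.doc',      # hypothetical file in the current directory
        pathname => './report.doc',
        max_size => 99999999,
    );
    printf("%s  %d bytes  md5 %s\n", $f->pathname, $f->size, $f->md5);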
Re^2: Find duplicate files.
by Anonymous Monk on Oct 10, 2008 at 03:36 UTC
    Thanks to lemming's code above for generating MD5 hashes; it became the first part of finding duplicates for me. I used the following code to find duplicates and show them. Running the same code again with 'remove' will move all the duplicates to a ./trash/ subdirectory. It's a little too specific to my own needs, but it might be a nice start for someone else needing the same. It went through 25k files, finding 11k duplicates and moving them to a ./trash/ directory, in about 60 seconds. The code below takes the output of lemming's code above; an example invocation follows it.
    #!/usr/bin/perl -w
    # usage: dupDisplay.pl fileMD5.txt [remove]
    # input file has the following form:
    # 8e773d2546655b84dd1fdd31c735113e  304048  /media/PICTURES-1/mymedia/pictures/pics/20041004-kids-camera/im001020.jpg  im001020.jpg
    # e01d4d804d454dd1fb6150fc74a0912d  296663  /media/PICTURES-1/mymedia/pictures/pics/20041004-kids-camera/im001021.jpg  im001021.jpg
    use strict;
    use warnings;

    my %seen;
    my $fileCNT = 0;
    my $origCNT = 0;
    my $delCNT  = 0;
    my $failCNT = 0;
    my $remove  = $ARGV[1] ? 'remove' : '';

    print "\n\n ... running in NON removal mode.\n\n" if !$remove;

    open IN,      "<", $ARGV[0]             or die ".. we don't see a file to read: $ARGV[0]";
    open OUT,     ">", "$ARGV[0]_new.temp"  or die ".. we can't write the file: $ARGV[0]_new.temp";
    open OUTdel,  ">", "$ARGV[0]_deleted"   or die ".. we can't write the file: $ARGV[0]_deleted";
    open OUTfail, ">", "$ARGV[0]_failed"    or die ".. we can't write the file: $ARGV[0]_failed";

    print "\n ... starting to find duplicates in: $ARGV[0]\n";
    if (! -d './trash/') {
        mkdir './trash/' or die " !! couldn't make trash directory.\n $! \n";
    }

    while (<IN>) {
        my $line = $_;
        chomp $line;
        $fileCNT++;
        my ($md5, $filesize, $pathfile, $file) = split /\t+/, $line, 4;
        if (exists $seen{"$md5:$filesize"}) {
            my $timenow   = time;
            my $trashFile = './trash/' . $file . "_" . $timenow;   # moves duplicate to trash with timestamp extension
            # if (! unlink($pathfile)) { print OUTfail "$pathfile\n"; $failCNT++; }
            if ($remove) {
                if (! rename $pathfile, $trashFile) { print OUTfail "$pathfile\n"; $failCNT++; }
            }
            $seen{"$md5:$filesize"} .= "\n $pathfile";
            $delCNT++;
            print " files: $fileCNT  originals: $origCNT  files to delete: $delCNT  failed: $failCNT \r";
        }
        else {
            $seen{"$md5:$filesize"} = "$pathfile";
            printf OUT ("%32s\t%8d\t%s\t%s\n", $md5, $filesize, $pathfile, $file);
            $origCNT++;
            print " files: $fileCNT  originals: $origCNT  files to delete: $delCNT  failed: $failCNT \r";
        }
    }

    foreach my $key (keys %seen) {
        print OUTdel " $seen{$key}\n";
    }

    print " files: $fileCNT  originals: $origCNT  files to delete: $delCNT  failed: $failCNT \n\n";
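
    As a rough usage sketch (the scanned directory here is an assumption; fileMD5.txt matches the usage comment above), the two scripts chain together like this:

    perl allstat.pl /media/PICTURES-1 > fileMD5.txt
    perl dupDisplay.pl fileMD5.txt              # report only
    perl dupDisplay.pl fileMD5.txt remove       # move duplicates into ./trash/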
