http://www.perlmonks.org?node_id=383834


in reply to Verifying data in large number of textfiles

Using a consistent algorithm may provide you with a consistent set of identical "rips" from your webpage. Just for a moment, let's assume that unlikely scenario holds.

You will need to combine a method for iterating over all files in a directory with your comparison and sorting logic.

The first line of each file may not give you a reliable basis for comparison. I would suggest hashing each file's full contents with Digest::MD5 instead. The following untested code is mostly ripped from the module's docs:

use strict;
use warnings;
use Digest::MD5;

my %seen    = ();
my $dirname = "/path/to/files";

# Parse over files in directory
opendir(DIR, $dirname) or die "can't open $dirname: $!";

# Take a careful look at each file in $dirname
while (defined(my $file = readdir(DIR))) {
    my $path = "$dirname/$file";
    next unless -f $path;    # skip . and .. and anything that is not a plain file

    open(FILE, $path) or die "Can't open '$path': $!";
    binmode(FILE);

    # make a $hash of each file's contents
    my $hash = Digest::MD5->new->addfile(*FILE)->hexdigest;
    close(FILE);

    # store this $hash and compare it with all others seen so far
    unless ($seen{$hash}++) {
        # this is a unique file
        # do something with it here - perhaps move it to a /unique location
    }
}
closedir(DIR);
...code is untested
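
As for the "do something with it here" comment, one option is to shuttle the first copy of each unique file off to its own directory. Here is a rough, equally untested sketch of how that unless block might look using the core File::Copy module. The /path/to/unique directory is hypothetical, and $path, $file, %seen and $hash are just the variable names used in the code above:

use File::Copy;                       # core module, exports move()

my $unique_dir = "/path/to/unique";   # hypothetical destination directory

unless ($seen{$hash}++) {
    # first time this digest has been seen - keep a copy of the file
    move($path, "$unique_dir/$file")
        or warn "Could not move '$path' to '$unique_dir': $!";
}

If you would rather keep the originals where they are, swap move() for copy(), or simply record $path in a list keyed on $hash and deal with the duplicates afterwards.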

SciDude
The first dog barks... all other dogs bark at the first dog.