
Re: Recursive search for duplicate files

by sh1tn (Priest)
on Nov 27, 2007 at 13:35 UTC ( #653218=note )

in reply to Recursive search for duplicate files

A much better way is to use MD5 hashes for file comparison.

Re^2: Recursive search for duplicate files
by moritz (Cardinal) on Nov 27, 2007 at 13:41 UTC
    If used naively, that doesn't work out well for large files, because each file has to be read from disk in its entirety.

    If you care about performance, you might hash just the first 5% (or the first 1k, or whatever) and check for collisions; only if two partial hashes collide do you need to go back and hash the entire file.

      Agreed. Another performance measure is to compare file sizes first, before doing any hashing at all.
