Re: •Re: Interlaced duplicate file finder
by abell (Chaplain) on Jan 07, 2003 at 00:31 UTC ( [id://224803] )
Most of the complication is in place to reduce file reading to a bare minimum. Say you have two 1 GB files. The size is exactly the same, but the files are very different. I wouldn't want to read and digest both files just to find out they differ, when it's enough to read a few bytes at the same position. My program deals rather well with these cases.

It starts by reading a small chunk from all files of the same size and uses that chunk as a key to partition the group of files. If any subset still contains more than one file, it reads another chunk starting from another (preferably far away) position and iterates. It's more or less the naive "real life" way of comparing things. If you have two books with blank covers, to check whether they are different you first compare their sizes. If those match, you open the same page in both and check whether the pages differ. Only if the books keep looking the same do you need to read on to the end.

Moreover, by comparing the bytes themselves instead of hashes, you don't even risk false positives. As small as the risk of a collision may be, it will most surely happen to the files for your presentation due tomorrow.

Package Finder::Looper takes care of the iteration. Each call to $looper->next returns a new pair ( start, length ) within a given range, so that consecutive calls sample from different parts of the file. That's the "interlaced" part (which I should maybe have called "interleaved", but hey! this side of the world it's not the best time for choosing names in foreign languages).

Having said this, the program probably needs some tweaking to better exploit filesystem, buffering and head-positioning optimizations.
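To make the idea concrete, here is a minimal self-contained sketch of the approach. The SampleLooper package is a hypothetical stand-in for Finder::Looper, and the 4096-byte chunk size and the "jump to a far slot" ordering are just illustrative choices, not the actual module's behaviour.

    use strict;
    use warnings;

    package SampleLooper;
    # Stand-in for Finder::Looper: each call to next() returns a
    # ( start, length ) window inside a file of $size bytes, visiting
    # the windows in a spread-out order rather than front to back.

    sub new {
        my ( $class, $size, $chunk ) = @_;
        my $n    = int( ( $size + $chunk - 1 ) / $chunk ) || 1;
        my $half = int( $n / 2 ) || 1;
        my ( @order, %seen );
        for my $i ( 0 .. $n - 1 ) {
            for my $slot ( $i, $i + $half ) {
                next if $slot >= $n or $seen{$slot}++;
                push @order, $slot;
            }
        }
        return bless { size => $size, chunk => $chunk, order => \@order }, $class;
    }

    sub next {
        my ($self) = @_;
        my $slot = shift @{ $self->{order} };
        return unless defined $slot;
        my $start = $slot * $self->{chunk};
        my $len   = $self->{size} - $start;
        $len = $self->{chunk} if $len > $self->{chunk};
        return ( $start, $len );
    }

    package main;

    # Partition a group of same-sized files by the bytes found at each
    # sampled window.  Groups still holding more than one file once
    # every window has been compared are sets of identical files.
    sub find_dups {
        my ( $size, @files ) = @_;
        return [ \@files ] if $size == 0;        # all empty files are equal
        my $looper = SampleLooper->new( $size, 4096 );
        my @groups = ( \@files );
        while ( my ( $start, $len ) = $looper->next ) {
            my @next_groups;
            for my $group (@groups) {
                my %by_chunk;
                for my $file (@$group) {
                    open my $fh, '<:raw', $file or next;
                    seek $fh, $start, 0;
                    my $buf;
                    read $fh, $buf, $len;
                    close $fh;
                    $buf = '' unless defined $buf;
                    push @{ $by_chunk{$buf} }, $file;
                }
                # Singleton subsets are unique files and can be dropped.
                push @next_groups, grep { @$_ > 1 } values %by_chunk;
            }
            @groups = @next_groups;
            last unless @groups;                 # nothing left to compare
        }
        return \@groups;
    }

A driver would stat every candidate file, bucket the names by size, and call find_dups once for each bucket holding at least two files; everything the real program adds on top of this is about choosing the windows and buffer sizes more cleverly.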
Cheers
Antonio Bellezza

The stupider the astronaut, the easier it is to win the trip to Vega - A. Tucket