Re^10: Indexing two large text files by BrowserUk (Pope)
on Apr 12, 2012 at 18:16 UTC
In nearly all cases, I'd say that "put your filtering file in a hash and process the other file against it" is such a superior algorithm that it's worth trying, even if you suspect it's going to force swapping to disk.
I agree with your thoughts on the use of hashes for filtering.
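For reference, the "filter file in a hash" approach being discussed looks something like the following sketch. The file contents are inlined as scalars (via in-memory filehandles) purely so the example is self-contained; the filenames, data, and tab-delimited layout are invented for illustration:

```perl
use strict;
use warnings;

## Stand-ins for the two disk files; in real use these would be
## opened by name with open my $fh, '<', $filename.
my $filterData = "alpha\ngamma\n";
my $bigData    = "alpha\t1\nbeta\t2\ngamma\t3\ndelta\t4\n";

## Load every key from the (smaller) filtering file into a hash.
my %wanted;
open my $ffh, '<', \$filterData or die $!;
while ( my $key = <$ffh> ) {
    chomp $key;
    $wanted{ $key } = 1;
}
close $ffh;

## Stream the big file serially; one O(1) hash lookup per line,
## keeping only lines whose first field appears in the hash.
open my $bfh, '<', \$bigData or die $!;
while ( my $line = <$bfh> ) {
    my ( $key ) = split /\t/, $line, 2;
    print $line if exists $wanted{ $key };
}
close $bfh;
```

The win is that the big file is only ever read serially, and each lookup is a single hash probe; the discussion below is about what happens when that hash no longer fits in RAM.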
The problem is that hashes are just about the ultimate in 'cache unfriendly data structures'.
Once built, even if only a relatively small percentage of the total structure needs to be swapped out at any given time, their nature means that for nearly every lookup you will need to swap in the bit that is out (and therefore swap out a bit that is currently in).
In turn, that has a disastrous effect upon the usually good performance of reading the other file serially from disk.
And if you start swapping during hash construction, before the final doubling of the buckets, then copying the keys from the last intermediate stage into the final hash just sends the disk head nuts. It can take forever.
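One way to sidestep those repeated doublings, if you can estimate the key count up front, is to preallocate the buckets with an lvalue keys() before loading. A minimal sketch; the ten-million figure and the key names are invented for illustration:

```perl
use strict;
use warnings;

my %lookup;

## Using keys() as an lvalue tells perl to allocate at least this
## many buckets now (rounded up to a power of two), so the hash is
## never rebuilt -- and the keys never recopied -- as it fills.
keys %lookup = 10_000_000;

## Loading then proceeds without any intermediate doublings.
$lookup{"key$_"} = $_ for 1 .. 5;
```

This doesn't help if the presized hash itself exceeds RAM, but it does remove the "copy everything into a bigger table" step that is so punishing once swapping has begun.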
It's a great way to check if your disk is working properly :)
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.