PerlMonks |
Re: Bloom::Filter Usage by demerphq (Chancellor)
on Apr 20, 2004 at 18:31 UTC ( [id://346743] )
Divide and conquer. You say you have two fields that form the key, one of which is unique. Take the right-hand digit of the number and sort your records into 10 files by that digit. (Insert hand-waving about how this will probably work out to roughly even-sized output files.) Now do your dupe checks on the resulting files.

The thing to remember about Perl hashes is that they grow in powers of two; that is, they double when they become too small. So divide your file enough that you stay within reasonable bounds. Dividing by 10 has worked for me with equivalent-sized data loads.

There are other approaches, like using DB_File or some kind of RDBMS, but I actually think you will end up with a simpler and probably more efficient system overall if you just find some way to scale the data down. Splitting data into bite-sized chunks is an ancient and honorable programming tradition. :-)

Oh, another approach is to use a Trie of some sort. If your accounts are dense then overall it can be a big winner in terms of space, and it is very efficient in terms of lookup.
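A minimal sketch of the divide-and-conquer dupe check, with made-up account numbers (the real data would be bucketed into 10 files on disk; here an in-memory hash of buckets stands in for the files):

```perl
use strict;
use warnings;

# The right-hand digit of the account number picks the bucket (0-9),
# so the dedupe hash for any one bucket only ever holds roughly a
# tenth of the keys, and never doubles past that.
sub bucket_of {
    my ($acct) = @_;
    return substr($acct, -1);
}

# Phase 1: partition the records into ten buckets.
my @records = qw(1001 2003 1001 9174 2473 2003 8880);
my %bucket;
push @{ $bucket{ bucket_of($_) } }, $_ for @records;

# Phase 2: dupe-check each bucket independently with a small hash.
my @dupes;
for my $d (sort keys %bucket) {
    my %seen;
    for my $acct (@{ $bucket{$d} }) {
        push @dupes, $acct if $seen{$acct}++;
    }
}
print "dupes: @dupes\n";    # dupes: 1001 2003
```

For the real data set, phase 1 would write each record to one of ten files and phase 2 would read each file back in turn, so only one bucket's hash is ever in memory at once.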
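And a sketch of the Trie alternative, hand-rolled as nested hashes keyed one digit per level (so dense account numbers share prefix nodes, and a lookup costs one hop per digit):

```perl
use strict;
use warnings;

# Root of the digit trie: each node is a small hash keyed by one
# digit; the "-end" key marks that a complete account number ends here.
my %trie;

# Insert an account number; returns true if it was new,
# false if we have seen it before (i.e. it is a dupe).
sub trie_insert {
    my ($acct) = @_;
    my $node = \%trie;
    $node = $node->{$_} //= {} for split //, $acct;
    my $is_new = !$node->{-end};
    $node->{-end} = 1;
    return $is_new;
}

trie_insert('1001');                # new
my $dupe = !trie_insert('1001');    # second insert flags the dupe
print $dupe ? "dupe\n" : "new\n";
```

The space win shows up when the key space is dense: many accounts sharing the same leading digits share the same nodes, instead of each paying for a full hash key.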
---
demerphq First they ignore you, then they laugh at you, then they fight you, then you win.
In Section
Seekers of Perl Wisdom