PerlMonks
You shouldn't need to care about collisions, as long as the keys are random and not pathologically crafted to force collisions.
A hash normally just grows when it has too many entries, so access is still ~ O(1)! IMHO your problem is not really speed but size.

I once had the problem of a giant hash which was constantly swapping, so each access was limited by the speed of my hard disk (an ugly bottleneck). I was able to solve that by transforming it into a HoH, splitting the old key into two halves, i.e.

    $new{long}{_key} = $old{long_key};

This worked because I was able to make the algorithm run through sorted keys, i.e. the "lower" hash needed to be loaded into RAM only once, when the corresponding upper key long was processed. This way Perl only needed to keep two hashes in memory. This is quite scalable... the only limitation is then the size of your hard disk.

So, within the limited info you gave, my recommendation is: arrange your accesses (reads and writes) so that "long_keys" with the same "start" are bundled. If order doesn't matter this should be easy!

HTH! :)
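A minimal sketch of the key-splitting idea described above. The prefix length (3 here) and the sample keys are assumptions for illustration; in practice you'd pick a split that groups related keys under the same upper key:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical flat hash with "long" keys (illustrative data, not from the post).
my %old = (
    'abc_001' => 1,
    'abc_002' => 2,
    'xyz_001' => 3,
);

# Rebuild it as a HoH: the first part of the key becomes the upper key,
# the remainder becomes the lower key.
my %new;
for my $long_key ( sort keys %old ) {
    my $upper = substr $long_key, 0, 3;   # e.g. 'abc'  (assumed split point)
    my $lower = substr $long_key, 3;      # e.g. '_001'
    $new{$upper}{$lower} = $old{$long_key};
}

# Same value reachable via the two-level lookup:
# $new{abc}{_001} corresponds to $old{abc_001}
```

Because the keys are processed in sorted order, all entries sharing an upper key arrive together, so (with the upper level tied to disk) only one inner hash needs to live in RAM at a time.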
Cheers Rolf (addicted to the Perl Programming Language)

In reply to Re: Small Hash a Gateway to Large Hash? by LanX