PerlMonks  

Re^3: How to remove duplicates from a large set of keys

by demerphq (Chancellor)
on Feb 10, 2005 at 08:50 UTC (#429627)


in reply to Re^2: How to remove duplicates from a large set of keys
in thread How to remove duplicates from a large set of keys

Lookup in a hash of 1 million keys is roughly the same as lookup in a hash of 10 keys. :-) (Assuming you are still within physical memory.)

It's creating the hash that's the problem. It takes a long time, especially if you don't know up front how many records you are storing.

---
demerphq


Re^4: How to remove duplicates from a large set of keys
by nite_man (Deacon) on Feb 10, 2005 at 09:04 UTC
    Lookup in a hash of 1 million keys is roughly the same as lookup in a hash of 10 keys
    Can you explain, please?

    ---
    Michael Stepanov aka nite_man

    It's only my opinion and it doesn't have pretensions of absoluteness!

      Hash lookup in the type of hash structure used in Perl is O(1). The cost is basically the time to calculate the hash value of the key (which does not depend on the size of the hash), followed by the application of a bitmask to obtain an index into an array of linked lists, which are normally very short, followed by a key comparison on each element in the linked list until the actual key is found or the end of the list is reached. Under non-pathological circumstances each list should hold 0 or 1 elements. The end result is that the lookup time (memory swapping aside) is O(1) and thus independent of the number of keys in the hash.

      It's true that under pathological circumstances you could end up with a bucket holding a million keys, but Perl has a number of heuristics to prevent this from happening in practice. Building the hash is more expensive because Perl cannot know the final size required and must always have a power-of-two number of buckets, so as the hash grows the bucket array needs to be expanded and the keys remapped, which costs time. But that cost is incurred purely while storing. Once built, a Perl hash should behave in O(1) time, or rather time proportional to the length of the key being looked up.

      ---
      demerphq
