PerlMonks  

Re^5: elsif chain vs. dispatch

by ikegami (Pope)
on Apr 27, 2009 at 20:34 UTC ( #760429 )


in reply to Re^4: elsif chain vs. dispatch
in thread elsif chain vs. dispatch

The degenerate case has nothing to do with the ratio of used buckets to the total number of buckets.

The degenerate case occurs when the number of elements in the hash (0+keys(%hash)) is much greater than the number of buckets in use (0+%hash), because most of the keys hash to the same value.

Locating a key in the degenerate case is a linear search since they're all in the same bucket.
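A minimal sketch of the two numbers being compared here (assuming a pre-5.26 perl, where a hash evaluated in numeric context yields its used-bucket count; on 5.26+ a hash in numeric context returns its key count instead, and the bucket figure moved to Hash::Util::bucket_ratio):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %hash = map { $_ => 1 } 'aa' .. 'zz';   # 676 keys

my $n_keys = 0 + keys(%hash);   # number of elements in the hash
my $n_used = 0 + %hash;         # buckets in use (pre-5.26 semantics)

# In a healthy hash these are the same order of magnitude; in the
# degenerate case $n_keys is much greater than $n_used, and locating
# a key becomes a linear scan of one long bucket chain.
printf "%d keys over %d used buckets\n", $n_keys, $n_used;
```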


Re^6: elsif chain vs. dispatch
by Marshall (Prior) on Apr 27, 2009 at 21:02 UTC
    If you let Perl grow the hash, this super-degenerate case will be detected and Perl will add bits to the hash key. The number of buckets starts at 8, then doubles: 16, 32, 64, etc. The 9th entry hashing to the same value with 8 buckets would regenerate the entire hash. Now, I suppose some case could be constructed where, at each bit addition, the same thing not only recurs but becomes harder for earlier versions of Perl to detect!

    I think my general advice about checking these parameters (#buckets used, #total buckets, and #total entries) is a good one when dealing with very large or performance-sensitive hashes.
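On recent perls those three figures can be read directly; a sketch (assuming perl 5.26+, where Hash::Util::bucket_ratio exposes the old "used/total" string that scalar(%hash) used to return):

```perl
use strict;
use warnings;
use Hash::Util qw(bucket_ratio);   # perl 5.26+; on older perls,
                                   # scalar(%hash) gives the same string

my %hash = map { $_ => 1 } 1 .. 1000;

my ($used, $total) = bucket_ratio(%hash) =~ m{^(\d+)/(\d+)};
my $entries = 0 + keys %hash;

# The three parameters in question: if $entries is far larger than
# $used while $total is ample, the keys are piling up in a few
# buckets and lookups are degrading toward a linear search.
printf "entries=%d, buckets used=%d, total buckets=%d\n",
       $entries, $used, $total;
```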

      The 9th entry to same hash value with buckets =8 would re-gen the entire hash.

      That doesn't prevent the degenerate case since you could end up with 9 entries in the same bucket of a 16 bucket hash after the split.

      But that would mean having N/4 keys hashing to the same bucket isn't detected, which means the worst case is still Θ(N). In fact, if there is an ε > 0 such that more than εN keys must hash to a single bucket before Perl reorganizes the hash, the worst-case lookup is still Θ(N).
        Yes, if I understand your point correctly: there is no absolute guarantee that all the keys won't hash to the same value, no matter how many buckets there are! Correct!

        However in a practical sense, I think that you are going to be hard pressed to come up with a realistic example for this user's input data.

        Of course there is a "trick" here. Even if the hash table has to compare say 16 things to get a result, it is still going to be very fast!

        The idea that, say, 256 keys will hash to an identical hash table entry is very unlikely. But "very, very seldom" doesn't mean "never".

        But as the hash grows, the probability of this decreases exponentially.

Re^6: elsif chain vs. dispatch
by Marshall (Prior) on Apr 27, 2009 at 21:54 UTC
    Completely correct! Yes, this could happen. If it keeps happening, the 17th entry would cause the hash to be re-sized, and again on the 33rd.

    It sounds like Perl 5.8.3+ has made some improvements! Great!

    For Perl versions earlier than that, and even on Perl 5.8.3, I don't think a user will know more than #buckets, #buckets used, and #total entries (i.e., the user wouldn't know the maximum number of entries in any one bucket). But given those three things, a user can make a judgment call about increasing the hash table size, and is able to do so.
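The "able to do so" part is Perl's documented way to pre-extend a hash: assigning to keys(%hash) requests a minimum number of buckets up front, avoiding repeated doubling and re-hashing as the hash grows. A small sketch (the 100_000 figure is just an illustrative guess at the expected data size):

```perl
use strict;
use warnings;

my %hash;
keys(%hash) = 100_000;   # pre-allocate buckets (perl rounds the
                         # request up to a power of two internally)

# Filling the hash now triggers no incremental bucket doubling for
# roughly the first 100_000 entries.
$hash{"key$_"} = $_ for 1 .. 50_000;

print scalar(keys %hash), " entries stored\n";
```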
