Re: Re: Re: A short meditation about hash search performance
by Boots111 (Hermit) on Nov 16, 2003 at 20:40 UTC
I am just referring to the two posts immediately above this, but I must point out that pg is correct, despite what the points on either node may say...
The number of entries in a hashtable is a variable (usually called n), and the pathological case of inserting everything into the same bucket gives O(n) access for a simple hashtable.
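A quick sketch of that pathological case (in Python for illustration; the class and its degenerate hash function are hypothetical, not Perl's actual implementation). When every key lands in one bucket, lookup degrades to a linear scan of the whole chain:

```python
# Minimal separate-chaining hashtable. The hash function here is
# deliberately degenerate: every key maps to bucket 0, so all entries
# pile into a single overflow chain and lookup becomes O(n).

class ChainedHash:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _index(self, key):
        return 0  # pathological: every key collides in one bucket

    def insert(self, key, value):
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)  # overwrite existing key
                return
        chain.append((key, value))

    def lookup(self, key):
        # Worst case: walks the entire chain -- O(n) comparisons.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

h = ChainedHash()
for i in range(1000):
    h.insert(f"key{i}", i)

# All 1000 entries sit in bucket 0; finding the last one scans them all.
assert len(h.buckets[0]) == 1000
assert h.lookup("key999") == 999
```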
The only way in which Abigail would be correct is if there were a guarantee that the overflow chain would NEVER exceed one billion entries.
It is possible that rehashing will prevent overflow chains from growing too large, but then one must consider the cost of rehashing the table. While that cost is not paid on every insertion, it is a very large cost when it is paid, and thus must be amortized across all calls to insert.
In general, one could get O(1) access to a hash by ensuring that the overflow chains reach at most a constant length, but this requires rehashing when chains get too long, which makes individual insertions cost more than O(1).
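That trade can be sketched concretely (again in Python; the class, its MAX_LOAD constant, and the move counter are hypothetical illustration, not anyone's production code). The table doubles its bucket count whenever the average chain length exceeds a constant bound, so a well-distributed hash keeps chains short, and counting the total entries moved by all rehashes shows the amortized cost staying linear in the number of inserts:

```python
# Separate-chaining hashtable that rehashes into a doubled table whenever
# the load factor (entries per bucket) exceeds a constant bound. Individual
# rehashing inserts are expensive, but the doubling schedule keeps the
# TOTAL rehash work across n inserts proportional to n -- O(1) amortized.

class ResizingHash:
    MAX_LOAD = 2.0  # average chain length allowed before rehashing

    def __init__(self):
        self.nbuckets = 8
        self.buckets = [[] for _ in range(self.nbuckets)]
        self.size = 0
        self.rehash_moves = 0  # total entries moved by all rehashes

    def _index(self, key, nbuckets=None):
        return hash(key) % (nbuckets or self.nbuckets)

    def insert(self, key, value):
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))
        self.size += 1
        if self.size / self.nbuckets > self.MAX_LOAD:
            self._rehash()

    def _rehash(self):
        # The expensive step: every existing entry is moved into a
        # table with twice as many buckets.
        self.nbuckets *= 2
        new = [[] for _ in range(self.nbuckets)]
        for chain in self.buckets:
            for k, v in chain:
                new[self._index(k, self.nbuckets)].append((k, v))
                self.rehash_moves += 1
        self.buckets = new

h = ResizingHash()
n = 10_000
for i in range(n):
    h.insert(i, i)

# Chains stay at constant length, so lookups are O(1)...
assert max(len(chain) for chain in h.buckets) <= 2
# ...and total rehash work is linear in n: O(1) amortized per insert.
assert h.rehash_moves < 3 * n
```

The doubling schedule is what makes the amortization work: each rehash moves roughly as many entries as all previous rehashes combined, so each entry is moved only O(1) times on average over the table's lifetime.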
At heart it is a question of trading one cost for another...
Computer science is merely the post-Turing decline of formal systems theory.