in reply to Re: "Just use a hash": An overworked mantra?

in thread "Just use a hash": An overworked mantra?

Hash is not O(1)

Are you sure?

- "*Many hash table designs also allow arbitrary insertions and deletions of key-value pairs, at constant average (indeed, amortized) cost per operation.*" -- http://en.wikipedia.org/wiki/Hash_table (see also http://en.wikipedia.org/wiki/Big_O_notation)
- "*Thus, everyone knows that hash table queries run in amortized constant time. That is, as the number of keys increases, the average time necessary to recover a key-value pair does not increase.*" -- http://lemire.me/blog/archives/2009/08/18/do-hash-tables-work-in-constant-time/
- "*Hash tables, sometimes also known as associative arrays, or scatter tables, are a data structure that offers fast lookup, insertion, and deletion of (key, value) pairs. In a well-designed implementation of hash tables, all of these operations have a time complexity of O(1), rather than the O(n) that linked lists, association lists, and vectors would require for some of them.*" -- http://www.math.utah.edu/~beebe/software/hash/hash.html
- "*Perl hashes provide constant-time lookups. They are implemented as true hash tables (with automatic retuning as necessary), not associative arrays.*" -- http://stackoverflow.com/questions/3774904/what-is-the-time-complexity-for-looking-up-an-item-in-a-perl-hash (see also http://www.sysarch.com/Perl/sort_paper.html)

I'll grant that an individual lookup may take more than constant time (a resize, or a long probe chain), but in the aggregate it is well established and accepted that hash operations are amortized O(1), where 'n' is the number of elements in the hash.
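To make the "individual operation may be slow, aggregate is O(1)" point concrete, here is a toy open-addressing hash table with geometric resizing. This is a minimal sketch in Python for illustration only, not how Perl implements its hashes internally: because the capacity doubles each time it fills past a load threshold, the total cost of all resizes over n insertions is O(n), so each insert is amortized O(1) even though the insert that triggers a resize is O(n) by itself.

```python
# Toy hash table: linear probing + doubling. Illustrates amortized O(1)
# insert/lookup; deletion (tombstones) is omitted for brevity.
class MiniHash:
    def __init__(self):
        self._cap = 8
        self._size = 0
        self._slots = [None] * self._cap   # each slot: None or (key, value)

    def _probe(self, key, slots, cap):
        # Linear probing: start at hash(key) % cap, step until we find
        # either the key itself or an empty slot.
        i = hash(key) % cap
        while slots[i] is not None and slots[i][0] != key:
            i = (i + 1) % cap
        return i

    def _resize(self):
        # Doubling keeps the load factor under 2/3, so probe chains stay
        # short; this single pass over the old table costs O(size).
        new_cap = self._cap * 2
        new_slots = [None] * new_cap
        for entry in self._slots:
            if entry is not None:
                new_slots[self._probe(entry[0], new_slots, new_cap)] = entry
        self._cap, self._slots = new_cap, new_slots

    def put(self, key, value):
        # Resize before exceeding 2/3 load: (size + 1) / cap > 2/3.
        if (self._size + 1) * 3 > self._cap * 2:
            self._resize()
        i = self._probe(key, self._slots, self._cap)
        if self._slots[i] is None:
            self._size += 1
        self._slots[i] = (key, value)

    def get(self, key):
        entry = self._slots[self._probe(key, self._slots, self._cap)]
        return entry[1] if entry is not None else None
```

Over 1000 inserts into this table, only about log2(1000 / 8) ≈ 7 resizes ever happen, and their combined work is proportional to the final size -- that is exactly the amortized argument the quotes above are making.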

Dave

In Section: Meditations