in reply to "Just use a hash": An overworked mantra?
But just because hash key lookups/increments are O(1) operations, that doesn't make them computationally cheap.
......
An array lookup is also O(1), but it turns out to be a computationally much cheaper task to perform these array operations.
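The relative cost is easy to measure with the core Benchmark module. A rough sketch of the 0..999 counting task (structure names are just illustrative; absolute numbers will vary by machine):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Same 0..999 counting task, once with an array and once with a hash.
my @counts_a = (0) x 1000;
my %counts_h = map { $_ => 0 } 0 .. 999;

cmpthese( -1, {
    array => sub { $counts_a[ int rand 1000 ]++ for 1 .. 1000 },
    hash  => sub { $counts_h{ int rand 1000 }++ for 1 .. 1000 },
} );
```

Both loops do the same O(1)-per-step work; the array version skips hashing the key and walking a bucket, which is where the constant-factor difference comes from.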
not correct
Hash is not O(1).
Also, this example of 0..999 integers looks too artificial to me, and it is too obvious to use an array here, so, for me, it does not deserve to even start the discussion in the first place :)
I do appreciate the good English of the original post, however, which I, personally, do not possess
:) :)
Re^2: "Just use a hash": An overworked mantra?
by davido (Cardinal) on Dec 23, 2011 at 19:19 UTC

Hash is not O(1)
Are you sure?
 "Many hash table designs also allow arbitrary insertions and deletions of key-value pairs, at constant average (indeed, amortized) cost per operation." -- http://en.wikipedia.org/wiki/Hash_table
 http://en.wikipedia.org/wiki/Big_O_notation
 "Thus, everyone knows that hash table queries run in amortized constant time. That is, as the number of keys increases, the average time necessary to recover a key-value pair does not increase." -- http://lemire.me/blog/archives/2009/08/18/do-hash-tables-work-in-constant-time/
 "Hash tables, sometimes also known as associative arrays, or scatter tables, are a data structure that offers fast lookup, insertion, and deletion of (key, value) pairs. In a well-designed implementation of hash tables, all of these operations have a time complexity of O(1), rather than the O(n) that linked lists, association lists, and vectors would require for some of them." -- http://www.math.utah.edu/~beebe/software/hash/hash.html
 "Perl hashes provide constant-time lookups. They are implemented as true hash tables (with automatic retuning as necessary), not associative arrays." -- http://stackoverflow.com/questions/3774904/what-is-the-time-complexity-for-looking-up-an-item-in-a-perl-hash
 http://www.sysarch.com/Perl/sort_paper.html
I'll grant that an individual lookup might consume greater than O(1) time, but in the aggregate it's fairly well established and accepted that hash operations are O(1) operations, where 'n' is the number of elements in the hash.
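One quick way to sanity-check the amortized-O(1) claim is to compare per-lookup cost at two very different hash sizes (a rough Benchmark sketch; the variable names are illustrative):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my %small = map { $_ => 1 } 1 .. 1_000;
my %large = map { $_ => 1 } 1 .. 1_000_000;

# If lookups are amortized O(1), per-lookup cost should be roughly
# independent of how many keys the hash holds.
cmpthese( -1, {
    'n=1e3' => sub { my $v; $v = $small{ 1 + int rand 1_000 }     for 1 .. 1000 },
    'n=1e6' => sub { my $v; $v = $large{ 1 + int rand 1_000_000 } for 1 .. 1000 },
} );
```

Cache effects will make the larger hash somewhat slower in practice, but nothing like the thousand-fold gap that O(n) lookups would predict.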

yes. I am sure.
first, the third link you pointed to reads:
Thus—formally—hash tables often run in time O(log n).
next, if the question is "is it possible to create an algorithm that, given arbitrary strings of finite length, performs associative store/lookup in O(1)", then the answer is "yes", but these are unrealistic algorithms.
but, back to reality, real-life algorithms are O(log N) in the best case, and I'm afraid Perl's is worse than that. And, yet, have you ever heard of the so-called "hash complexity attack" that was security-fixed by seed randomization in 5.8.0? To inform you, pre-5.8.0 hashes looked like O(1) but actually were O(n) (or maybe even worse in the worst-case scenario).
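That worst case is easy to sketch with a toy model (not Perl's actual internals): when a pathological hash function sends every key to the same bucket, each lookup degrades to a linear scan of that bucket, which is O(n):

```perl
use strict;
use warnings;

# Toy single-bucket "hash" (illustrative only, not Perl's real internals):
# a pathological hash function makes every key collide.
sub bad_hash { 0 }

my @buckets = ( [] );
push @{ $buckets[ bad_hash($_) ] }, [ $_, length $_ ] for map { "key$_" } 1 .. 1000;

sub lookup {
    my ($key) = @_;
    # With all 1000 keys in one bucket, this loop is an O(n) scan.
    for my $pair ( @{ $buckets[ bad_hash($key) ] } ) {
        return $pair->[1] if $pair->[0] eq $key;
    }
    return undef;
}
```

A complexity attack works by feeding a program keys crafted to collide like this; per-key seed randomization makes the colliding set unpredictable to an attacker.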
next, your "stackoverflow.com" link contains simply wrong information. IOW, I do not see any proof there that convinces me.
And your last link does not even mention hashes.
Would you bother to explain what it is doing here? TIA!
and finally, do you see something strange in your phrase "hash operations are O(1) operations, where 'n' is the number of elements in the hash"?
I could be a bit joking, but I do not trust phrases like that.
you're explaining a parameter 'n' that is not present in your formula; what other mistakes/typos have you made?
:)