Does "preallocating hash" improve performance? Or "using a hash slice"? by vr (Pilgrim)
on Feb 15, 2017 at 19:39 UTC
vr has asked for the wisdom of the Perl Monks concerning the following question:
I was reading this article: http://www.sysarch.com/Perl/sort_paper.html (actually, mentioned here: Re: Sorting geometric coordinates based on priority), and found this:
I liked the idiom and thought it would be nice to remember and use sometimes. Of course, I had used hash slices before, but mostly because they look so concise; and somehow, because of that, I felt the resulting code must indeed be more efficient. And now there is an additional optimization through the "magical" use of keys as an lvalue, which forces scalar context on the array. Actually, the documentation for keys mentions this optimization, but I had missed it before:
Used as an lvalue, keys allows you to increase the number of hash buckets allocated for the given hash. This can gain you a measure of efficiency if you know the hash is going to get big.
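For concreteness, here is a small sketch of that lvalue-keys preallocation combined with a hash slice (my own example, not the article's code):

```perl
use strict;
use warnings;

my @words = map { "key$_" } 1 .. 10_000;

my %h;
keys(%h) = scalar @words;   # lvalue keys: preallocate buckets up front
@h{@words} = ();            # hash slice: create all the keys in one assignment

printf "%d keys\n", scalar keys %h;   # prints "10000 keys"
```

The assignment to keys(%h) only sizes the bucket array (Perl rounds it up to a power of two internally); the slice assignment then fills the hash without triggering repeated rehashing as it grows.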
Then I thought it strange that assigning to a large hash slice should still require this "preallocation". So I ran this test:
log is there to imitate at least some payload (useful work) and to create longer hash keys (in case that matters). Each sub returns a reference, so that Perl can't notice the hash is never used and skip the work. And here are the results:
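The original script isn't shown above; a benchmark along the lines described (log to generate longish keys, subs returning a reference so the work can't be skipped) might look like this sketch of mine:

```perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# log() defaults to $_, so this yields ~10k distinct float-string keys
my @keys = map { log } 2 .. 10_000;

sub slice_nopre {
    my %h;
    @h{@keys} = ();            # slice assignment, no preallocation
    return \%h;
}

sub slice_pre {
    my %h;
    keys(%h) = scalar @keys;   # preallocate buckets first
    @h{@keys} = ();
    return \%h;
}

sub loop_nopre {
    my %h;
    $h{$_} = undef for @keys;  # one key at a time
    return \%h;
}

cmpthese( -2, {
    slice_nopre => \&slice_nopre,
    slice_pre   => \&slice_pre,
    loop_nopre  => \&loop_nopre,
} );
```

cmpthese prints a rate table comparing the three variants; the question is whether slice_pre comes out measurably ahead.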
No meaningful difference at all. So, are my tests flawed, or do the claims about the efficiency of slices and preallocation not hold any water?