I spent some effort on this idea of pre-allocating Perl hashes. I thought it was cool and that it would accomplish a lot, but I found out differently. In one application, pre-sizing a 128K-key hash table made no significant difference versus letting the hash grow "naturally".
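For anyone who wants to try the comparison themselves, here is a minimal benchmark sketch. It uses the lvalue form of keys() to pre-size the hash; the key count of 128_000 and the two sub names are just illustrative choices, not from the original measurement.

<code>
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $N = 128_000;

cmpthese( -3, {
    natural => sub {
        my %h;
        $h{$_} = 1 for 1 .. $N;    # hash doubles its buckets as it grows
    },
    presized => sub {
        my %h;
        keys(%h) = $N;             # pre-allocate buckets before inserting
        $h{$_} = 1 for 1 .. $N;
    },
} );
</code>

Note that keys(%h) = $N only reserves buckets; keys %h in scalar context still reports the number of keys actually stored, not the allocation.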

The Perl hash function has changed over the years, but the low-level C implementation appears to be solid. Intel integer multiplication has gotten faster over the years, so using shifts and additions instead of a multiply doesn't make as much difference as it used to. The low-level Perl mem-to-mem copies also appear to be fast enough; this is more apparent with larger data sizes to be copied.

My conclusion: with fewer than 128K keys, don't worry about it unless there is some extreme performance requirement for this hash.


In reply to Re: Does "preallocating hash improve performance"? Or "using a hash slice"? by Marshall
in thread Does "preallocating hash improve performance"? Or "using a hash slice"? by vr
