in reply to Re^4: Does "preallocating hash improve performance"? Or "using a hash slice"?
in thread Does "preallocating hash improve performance"? Or "using a hash slice"?

Okay, let me try responding to this again. Blow by blow this time.

I hope perl is not so inefficient that it copies all the content of a list when it needs to process it, when simply pointing to the already allocated elements would be enough.
Even more so with arrays.

Um. The example contains arrays. So if the previous sentence was not talking about arrays, what was it talking about?

I'd also expect some COW mechanism to remove a lot of allocating and copying.

CopyOnWrite generally applies to entire data segments of a process that are cheaply shared with another process; that's obviously not applicable here.

Perl does have some internal flags and references with "COW" in their names, whereby the copying of scalars is avoided (by aliasing) until and unless they are written to; but as the argument lists (destination & source) to op_aassign are inherently readonly, that does not apply here either.
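For illustration, this is roughly what that scalar-level CoW looks like from userland (a rough sketch; assumes a perl built with CoW enabled, 5.20 or later, and Devel::Peek installed):

    use strict;
    use warnings;
    use Devel::Peek;

    my $src  = 'x' x 1_000_000;   # one long string
    my $copy = $src;              # buffer shared, scalar flagged IsCOW
    Dump( $copy );                # FLAGS line should include IsCOW

    substr( $copy, 0, 1, 'y' );   # first write forces the private copy
    Dump( $copy );                # IsCOW gone; the buffer has been duplicated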

Keys might be more trouble though, as all data stored in them have to be stringified first (and I wouldn't be surprised to learn that hash keys always hold a copy of the initial string, since as far as I can tell they are not standard scalar values).

Since CoW is not applicable to the destination and source lists, most of that is irrelevant; but I can confirm that hash keys are not normal scalars, and even if they already exist in memory as scalars, their text will be copied into the HV structure.
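One quick way to see that key copying (a sketch; assumes Devel::Size is installed, and exact numbers will vary by perl build):

    use strict;
    use warnings;
    use Devel::Size qw( total_size );

    my @keys = map { "key_$_" x 10 } 0 .. 99_999;   # longish key strings
    my %hash;
    @hash{ @keys } = ( 1 ) x @keys;

    printf "keys array: %d bytes\n", total_size( \@keys );
    printf "hash:       %d bytes\n", total_size( \%hash );
    # The hash carries its own copy of every key string, so its total_size
    # is large even though the values are just small integers.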

I agree that the memory usage, and number of copies is certainly higher when you go all the way to slicing, but I don't expect "at least 4 times more" memory.

For the statement @hash{ @keys } = @values; here is the memory usage:

C:\test>p1
print mem;;
9,356 K
$#keys = 10e6; $#values = 10e6; $keys[ $_ ] = "$_", $values[ $_ ] = $_ for 0 .. 10e6;;
print mem;;
2,000,088 K
@hash{ @keys } = @values;;
print mem;;
4,394,716 K
print size( \%hash );;
783106748

So, final memory usage (4,394,716 K) minus initial memory usage (9,356 K) = memory used by the two arrays, the final hash, and all the intermediate allocations (the stack, smaller versions of the hash during construction, and other bits & bobs): 4,385,360 K, or 4490608640 bytes.

And, 4490608640 / 783106748 = 5.734350586901084908030954676437. Your expectations are wrong.

I can't see any value in going further.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: Does "preallocating hash improve performance"? Or "using a hash slice"?
by huck (Parson) on Feb 21, 2017 at 21:29 UTC

    Going farther has been rather enlightening for me. I never realized what keys %h = 10e6; would do, or that apparently $#h = 10e6; would do the same, and I am known to read the documentation at times just because I'm bored. I did look up the keys %h = 10e6; reference in the docs after you used it, though.
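
    For anyone following along, the two preallocations look roughly like this (just a sketch, using the same names as above):

        my %h;
        keys( %h ) = 10e6;   # lvalue keys: preallocate the hash buckets

        my @h;               # a separate array, despite the same name
        $#h = 10e6;          # pre-extend the array to index 10e6 (all undef)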

    so thanks

Re^6: Does "preallocating hash improve performance"? Or "using a hash slice"?
by Eily (Monsignor) on Feb 22, 2017 at 13:34 UTC
    I can't see any value in going further.

    I actually wasn't expecting you to come back to this with so much detail and technical information, so thanks for that.

    The part I got wrong is that I thought you meant that all the data (all the strings, not just the SV*s) was duplicated four times; this is also why I started talking about COW, because I didn't understand why perl would need to copy the strings so many times. I got confused by "two copies of all the keys and values", where I failed to understand that "keys and values" was referring to their SV*s. So by "4 times more memory", I meant 4 times more than the total_size of the hash, not just the structure size.

    So, for those occasions when the destination list is entirely defined by one array, and the source list entirely defined by another array, it would be possible to only pass references to those two arrays on the stack, and thus avoid large-scale allocations; but it would require special casing, and probably another opcode.

    This is what my "even more so with arrays" was about: I didn't understand the need to have all the data duplicated so often. Again, it's the "all the data" part that I got wrong. Indeed, my post doesn't make much sense from that point on.
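
    For what it's worth, the memory-friendly (if usually slower) alternative today is simply not to build the two big lists at all, e.g. (sketch):

        # One pair at a time: only a single key and value are on the stack
        # per iteration, at the cost of running the loop opcodes 10e6 times.
        $hash{ $keys[ $_ ] } = $values[ $_ ] for 0 .. $#keys;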

    Your expectations are wrong.

    Clearly I've been underestimating the proportion of structure data relative to actual data, though you did say yourself that key strings are always copied into the hash. But they are not allocated or copied 4 times. With $keys[ $_ ] = "$_" x 20 in your code (so strings with a mean length of around 140 characters) I get:

    8 096 Ko
    3 228 772 Ko
    6 396 092 Ko
    2091995858
    Where 6396092*1024/2091995858 = 3.131.

    Thanks again for taking the time to detail your answer.

    NB: I got the mem sub from BrowserUK's post here, if anyone is interested. The numbers are separated by \xFF (not a space), so /(\S+ Ko)$/ worked for me.
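
    For anyone who doesn't want to dig that post up, a rough Windows-only equivalent of such a mem sub might look like this (hypothetical reconstruction, not BrowserUK's actual code; it just parses tasklist output for the current process):

        # Hypothetical: report this process's memory usage column from tasklist.
        sub mem {
            my ( $line ) = grep { /\S/ } qx[ tasklist /NH /FI "PID eq $$" ];
            my @fields   = split ' ', $line;
            return join ' ', @fields[ -2, -1 ];   # e.g. "9,356 K" (or "... Ko" on a French system)
        }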