Re^2: Is there a difference in this declaration? (insignificant)
by tye (Sage) on May 09, 2014 at 14:57 UTC
"but may be significant in looping code"
No, not really. You've fallen for the classic fallacy that Benchmark's overblown attempts to "eliminate overhead" can often lead to. The huge values in the "rate" column are a good indicator.
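For context, the kind of isolated micro-benchmark that produces such huge rates might look something like this (a reconstruction under my own assumptions, not the original poster's code):

```perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# Compare the two hash-declaration styles in isolation.
# A negative count asks Benchmark for (at least) that many CPU
# seconds per case. Rates in the millions per second are a red
# flag that we are timing something far too small to matter.
cmpthese( -2, {
    plain  => sub { my %h;      },
    assign => sub { my %h = (); },
} );
```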
Let's test your theory by actually writing looping code and seeing how "significant" this difference can be. We'll have to come up with a loop that usefully declares a hash inside of it, that can complete something close to 6 million iterations each second, and that still gets enough useful work done that almost no other code is required to produce a useful result (as other code would further dilute the relative speed-up and thus reduce its significance).
When talking about a Perl operation that can happen 6 million times each second, it is pretty much impossible for that single operation to account for a non-trivial percentage of a useful script's run time. This is classic "micro-optimization", a fool's errand.
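To put that rate in perspective (my own back-of-the-envelope arithmetic, not figures from the thread):

```perl
use strict;
use warnings;

# At 6 million operations per second, a single operation costs
# roughly 167 nanoseconds; even eliminating it entirely saves
# almost nothing per iteration of any loop that does real work.
my $ns_per_op = 1e9 / 6e6;
printf "%.0f ns per operation\n", $ns_per_op;
```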
So, for a declaration of a hash to be useful, surely you have to insert something into the hash. Since it is a fresh declaration, you're also going to need to use the hash or else you'll be building up close to 6 million new hashes each second and will quickly run out of memory. And this needs to somewhat simulate useful code as speeding up useless code is not "significant", it is theory at best and more often just pointless. :)
So, here is looping code that does nothing but add two entries to the hash. It isn't useful, but it is pretty darn minimal. Truly useful code is surely going to have to do more than this for the hash declaration to be a useful part of it.
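The original script is not shown in this copy; a minimal sketch along the same lines, timing the loop directly with Time::HiRes, might look like this (the iteration count, keys, and values are my own illustrative choices):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw( time );

my $iters = 1_000_000;    # illustrative count, not the original's

# Time a loop whose body just declares a hash and adds two entries.
sub time_loop {
    my( $body ) = @_;
    my $start = time();
    $body->() for 1 .. $iters;
    return time() - $start;
}

my $plain  = time_loop( sub { my %h;      $h{a} = 1; $h{b} = 2; } );
my $assign = time_loop( sub { my %h = (); $h{a} = 1; $h{b} = 2; } );

printf "plain:  %.3fs\nassign: %.3fs\n", $plain, $assign;
```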
Above is a typical result from a run of the script. In my experience, a 10% speed-up would be characterized as "something I'm quite unlikely to even notice" which falls a long way from "significant".
The speed difference is small enough that I even got this result when I ran the script a few times to verify that my first results weren't atypical:
Note that the "with assignment" code is the one that ran faster that time.
Finally, a quick demonstration of why I think Benchmark.pm's attempts to "eliminate overhead" are overblown. With all of the insertions commented out, a typical result is:
While your original code on my computer gives:
...and takes noticeably longer to run. Benchmark has to try running the code, over and over, in a tight loop with increasing repetition counts, because it keeps getting back time measurements that are too close to "the time it takes to run empty code" for the result to be considered meaningful enough to even be reported.
When that happens, the results are nearly guaranteed to have no practical value.
Note that none of this is meant as much of a criticism of what you wrote. Based on the numbers you got, it certainly might have been possible for the change to have a significant impact. Your statement was quite conservative. But my experience led me to doubt that such could happen, so I did a quick test to check it.
This case is actually rather close to the edge of what makes it possible for a real, useful script to end up 20% faster (the minimum to be noticeable, IME) from this change alone (though such a script would likely still be rather contrived). It is certainly extremely unlikely.
The speed difference certainly looks to be insignificant to me.