|laziness, impatience, and hubris|
I've spent the last few weeks optimizing, reoptimizing, and re(n)optimizing a math module (Math::Combinator) I've been working on (to be released here soon), and had to wonder exactly how fast certain things were, and how fast they were compared to each other. These things are the fundamental components of most programming languages: arrays, hashes, subroutines (by reference), and subroutines (by value). So, duh, Benchmark.
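For anyone who wants to reproduce this, here's a sketch of the kind of harness I'm describing. The names (Nothing, Array, Hash, SubVal, SubRef) and the bodies are my reconstruction, not the exact code; the "Nothing" case is meant to capture the loop and Benchmark overhead on its own:

```perl
use strict;
use warnings;
use Benchmark qw(timethese cmpthese);

my $A = 1_000;    # inner-loop size, as in the post

my @array = (0 .. $A - 1);
my %hash  = map { $_ => $_ } 0 .. $A - 1;

sub by_value { my @args = @_; return $args[0] }    # copies @_ out
sub by_ref   { my ($r)  = @_; return $r->[0]   }   # dereferences

# -1 means "run each case for at least 1 CPU second"
my $results = timethese(-1, {
    Nothing => sub { my $x; for my $i (0 .. $A - 1) { $x = $i } },
    Array   => sub { my $x; for my $i (0 .. $A - 1) { $x = $array[$i] } },
    Hash    => sub { my $x; for my $i (0 .. $A - 1) { $x = $hash{$i} } },
    SubVal  => sub { my $x; for my $i (0 .. $A - 1) { $x = by_value($i) } },
    SubRef  => sub { my $x; for my $i (0 .. $A - 1) { $x = by_ref(\@array) } },
});
cmpthese($results);    # print the relative-speed table
```

cmpthese() gives you the percentage comparison directly, which sidesteps some of the hand interpretation below.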
For A = 1,000 and B = 10,000 I get these results, which I find fairly satisfying for accuracy:
Here's my interpretation of things (the Relative column translates to the number of multiplication operations that could be performed in the time it takes to perform the Function):
Question #1: Interpretation? My instinct is to subtract Nothing from the rest and use that as the actual base, thereby ignoring the overhead of Benchmark and the for loops.
Question #2: Those for loops: I want to get rid of caching effects in the hash and array lookups. Is that a valid concern? Are the for loops a valid solution?
Question #3: Pass by reference is slower than pass by value. Is that to be expected in this case? I thought bouncing things off the stack was supposed to be slow.
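My current guess on #3, which someone should check: Perl puts arguments in @_ as aliases either way, so "pass by value" only costs extra when you copy @_ out, while a reference costs a dereference on every access. For small payloads the deref overhead can plausibly win. A toy comparison (names are mine, not from the module):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @data = (1 .. 1_000);

# iterates @_ directly, so nothing is copied despite "by value"
sub sum_by_value { my $t = 0; $t += $_ for @_;  return $t }

# one scalar passed, but every access pays a dereference
sub sum_by_ref   { my ($r) = @_; my $t = 0; $t += $_ for @$r; return $t }

cmpthese(-1, {
    ByValue => sub { sum_by_value(@data) },
    ByRef   => sub { sum_by_ref(\@data)  },
});
```

Which one wins here should depend on list size: flattening @data onto the stack grows with the list, the dereference cost is per access. I'd be curious whether that matches what others see.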
Question #4: What have I forgotten that completely invalidates my whole process?