As a pseudo-followup to the discussion on Testing a number for oddness, I decided I'd subject my computer to spending the better part of yesterday running benchmarks on the basic perl operators. Just thought I'd share the results with the monastery.
To begin with, here's the code I used:
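My original script hasn't survived in this copy, so here is a scaled-down sketch of the structure the next paragraph describes, built on the core Benchmark module. The subroutine bodies, operator selection, and iteration counts below are illustrative stand-ins, not the originals (the real run used 100 × 100 × 100,000 trials per operator).

```perl
use strict;
use Benchmark qw(timethese);

my ($x, $y) = (42, 5);

# One subroutine per operator; each applies the operation in a tight
# loop so that subroutine-call overhead is amortized over many trials.
my %ops = (
    'add' => sub { my $r; $r = $x + $y for 1 .. 1000 },
    'mul' => sub { my $r; $r = $x * $y for 1 .. 1000 },
    'mod' => sub { my $r; $r = $x % $y for 1 .. 1000 },
);

# The real benchmark wrapped calls like these in 100 inner and 100
# outer loop iterations; here the count is scaled down so the sketch
# finishes quickly.
timethese(100, \%ops);
```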
Doing the math shows that, for each operation, I've run a billion (10^9) trials. That's 100 runs through the inner loop, times 100 runs through the outer loop, times 100000 calls to the appropriate subroutine, per operation. Each operation took nearly an hour to run. I've tried to be as "fair" as possible with this benchmark; if anyone has suggestions on a Better Way To Do It, I'd be willing to let a new benchmark run monopolize my box for another day or two. ;)
I've sorted the various operators by type (comparison, bitwise, mathematical) and sorted each of these categories from "quickest" to "slowest", based on total CPU time.
Disclaimer: I hardly know anything about perl's internals, so you should consider my thoughts below as mere conjectures, to be corrected by a more knowledgeable monk. Chances are that these numbers are platform-dependent as well; your mileage may vary. These figures were tested on a 233 MHz Pentium I/MMX machine, running perl 5.6 on linux 2.2.17.
There is only about a 1.7% difference between the fastest and the slowest comparison operators. The only operation that appears noticeably slower than the others here is !=. I thought it was odd that != took 5 times as much system time as the other operations... any monk out there have an idea why this might be the case? Since >= and < took almost exactly the same amount of time, I conjectured that $foo >= $bar might get translated to $bar < $foo by the compiler, but this appears not to be the case:
Nor is it true the other way around:
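The deparsed output I originally posted isn't preserved in this copy, but the check can be reproduced with the core B::Deparse module: if perl rewrote one comparison into the other at compile time, both subs would deparse to the same operator.

```perl
use strict;
use B::Deparse;

my $deparse = B::Deparse->new;

# Deparse each comparison back into Perl source and inspect which
# operator actually survived compilation.
my $ge = $deparse->coderef2text(sub { $_[0] >= $_[1] });
my $lt = $deparse->coderef2text(sub { $_[1] <  $_[0] });

print "ge sub: $ge\n";
print "lt sub: $lt\n";
# Each sub deparses with its own operator intact, so >= is not being
# silently rewritten as < (or vice versa).
```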
These tests show that bitwise negation (the not benchmark) is 11.5% faster than any of the other bitwise operators. This makes sense, since negation is a unary operator (it works on only one value), as opposed to the others, which all take two operands. The differences among the binary operators are less striking; there's only a 3.2% difference between the fastest (right shift) and the slowest (exclusive or). Both left shift and right shift use more system time than the rest; again, I have no idea why this is the case.
The exponentiation operator (**) took a good 18.5% longer than any other mathematical operator. This makes intuitive sense: it simply takes more work to compute a power than to perform any of the other operations. For the sake of discussion, I'll treat ** as an outlier for now, and focus on analyzing the other operators in this category.
The addition operator is, unsurprisingly, the fastest of the bunch. I found it mildly surprising that multiplication is only 0.6% slower than addition. A typical assumption I've made in the past is that a multiplication "should" take longer than a comparable addition, but this appears not to be the case.
Subtraction runs about 1.9% slower than addition. I assume this is because computing a subtraction involves negating the second operand and adding it to the first; this probably happens in hardware, rather than in perl, but it's still a nice thing to know for speed-critical optimizations.
The modulus operator (%) does appear to be slower than bitwise and (&), which contradicts the results I got in Testing a number for oddness. I think the benchmarks presented here are more accurate than the original ones, because I let them run longer and because I think they are better crafted: there are now fewer subroutine calls per operation measured, and I don't use the time-consuming rand() function.
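For concreteness, here's a sketch of the two oddness tests being compared (the subroutine names are mine, not from the original thread); they always agree, so the question is purely which operator is faster.

```perl
use strict;

# Oddness via the modulus operator: remainder after dividing by 2.
sub is_odd_mod { my $n = shift; return $n % 2 }

# Oddness via bitwise and: test the lowest bit directly.
sub is_odd_and { my $n = shift; return $n & 1 }

for my $n (7, 12) {
    printf "%d: mod says %d, and says %d\n",
        $n, is_odd_mod($n), is_odd_and($n);
}
```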
The slowest remaining operator is ordinary division. My original guess was that division might be slower because, in many cases (such as 42/5), the result is non-integer, so perl has to use the (mildly slower) floating-point hardware to find it. On the other hand, I realized that I didn't use integer in my benchmark, so all of my mathematical operations were presumably done in floating point anyway. Is there any monk out there with a better explanation for why division takes longer than the other mathematical operators?
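A quick illustration of what the integer pragma would have changed, had I used it: within its lexical scope, division is done with integer arithmetic and truncates.

```perl
use strict;

# Without the pragma, / produces a floating-point result.
my $float = 42 / 5;                      # 8.4

# "use integer" is lexically scoped; inside this block, / truncates
# toward zero like C integer division.
my $int = do { use integer; 42 / 5 };    # 8

print "without integer: $float\n";
print "with integer:    $int\n";
```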