Benchmark.pm: Does subroutine testing order bias results?
by jkeenan1 (Deacon) on Jul 12, 2004 at 02:41 UTC

jkeenan1 has asked for the wisdom of the Perl Monks concerning the following question:
Does the order in which Benchmark.pm tests various subroutines bias the results which Benchmark reports?
This is the inference I am drawing from repeated tests using Benchmark, and I would like to know if other users have experienced the same phenomenon.
The specific situation: I am preparing an update of my CPAN module List::Compare. I have been tweaking its internals in the hope of getting a speed boost, and would like to know for certain whether the *cumulative* result of these tweaks is a speed up of the operation of the module *as a whole*.
To test this with Benchmark, I did the following:
3. Benchmarked these two subroutines with varying numbers of iterations, with the following results. (For simplicity, I'm only going to show the most critical measurement: the 'usr' time.)
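For readers unfamiliar with Benchmark.pm, here is a minimal sketch of the kind of comparison described above. The subroutine bodies are hypothetical stand-ins (the real listc() and mistc() wrapped the old and revised List::Compare, respectively); the point is that timethese() runs the named subroutines in string-sorted order of their keys, so 'listc' is always timed before 'mistc':

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Hypothetical stand-ins: in the actual test, listc() exercised the
# old List::Compare and mistc() the revised version.
sub listc { my $x = 0; $x += $_ for 1 .. 5000; return $x }
sub mistc { my $x = 0; $x += $_ for 1 .. 5000; return $x }

# timethese() calls the subroutines in string comparison order of
# their names, so listc() is always benchmarked before mistc() here.
timethese( 1000, {
    listc => \&listc,
    mistc => \&mistc,
} );
```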
Note that in each case the older -- and presumably slower -- module outperformed the newer, revised module. This ran contrary to my expectations, as each modification I tried out in the newer version had itself been benchmarked and only included in the newer version if it clearly proved to be faster.
I started to wonder: what would happen if I simply reversed the order in which Benchmark tested the two modules? To do this, I aliased mistc() to a new subroutine whose name sorts before 'listc' in ASCII order:
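One way to do that aliasing is a typeglob assignment; this is a sketch with a hypothetical stand-in body for mistc(), and 'alistc' is an assumed name chosen only because it sorts before 'listc':

```perl
use strict;
use warnings;

# Hypothetical stand-in for the wrapper around the revised module.
sub mistc { my $x = 0; $x += $_ for @_; return $x }

# Typeglob alias: alistc() is the very same code as mistc(), but its
# name sorts before 'listc', so Benchmark will now time it first.
*alistc = \&mistc;

print alistc( 1, 2, 3 ), "\n";    # identical behavior, earlier name
```

Because the alias points at the same code reference, any timing difference between the two orderings can only come from the ordering itself, not from the code being timed.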
Note that, with one exception (the second case above), the first subroutine to be tested ran faster than the second -- even though in this case the first subroutine was *exactly the same code* as the second, slower-running subroutine in the first case above.
It almost seems as if Benchmark -- or Perl -- is getting tired when the subroutine it is testing involves a fair amount of computation. But, in any event, on the basis of this admittedly small sample I would seriously doubt whether Benchmark is capable of telling me accurately whether the older or newer version of my module is faster.
I googled the archives at comp.lang.perl.modules on this, but couldn't come up with anything. I then supersearched the perlmonks archives; other peculiarities of Benchmark have been reported, but I couldn't find anything on this problem.
Which leads to these questions:
1. Have other users experienced similar problems?
Thank you very much.