http://www.perlmonks.org?node_id=278228


in reply to calculate average from range of hash values (optimize)

Hmm, well this happens to be very similar to a problem that I have been working on. With respect to this node, the primary difference between our problems is that I can't simply return undef if start_pos is not in the hash. In other words, my problem is to find the average of the values whose keys fall within a range, regardless of whether the first and last keys actually exist.

Of the solutions posted here, the Zaxo/aristotle(1)/browseruk(1) variant fits. For the reasons described in my reply to Aristotle I've ruled his second one out, and I've ruled browserUk's second out because my data set isn't contiguous at all. Likewise, I didn't include the two variants that take arrays as arguments, as they don't scale well for scattered data sets. (IMO; their authors are welcome and encouraged to adjust their code and add it to the benchmark.)

My initial reaction to this node and your code was that it was far more complex than it needed to be. For this reason I put it in the benchmark even though it violates my rule by returning undef if the start pos doesn't exist. One might guess that this requirement would make your code infinitely fast in this case (as it does browserUk's variant), but even then it's one of the slowest in my benchmark. Furthermore, as far as I can tell this code is incorrect: the sort statement does a lexicographic sort, not a numerical sort, and thus would include 11 in the average of 1..3 in a data set containing 1..11. (Your test case happens to hide this, a good example of the difficulty of doing a really good comparative benchmark.)
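The lexicographic-sort trap is easy to demonstrate on its own (a standalone sketch, not code from the node in question):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Perl's default sort compares as strings, so "11" sorts between
# "10" and "2" -- a range walk over string-sorted keys would pick
# up 11 while scanning the range 1..3.
my @keys = (1 .. 11);
my @lex  = sort @keys;               # string (lexicographic) sort
my @num  = sort { $a <=> $b } @keys; # numeric sort

print "lex: @lex\n"; # lex: 1 10 11 2 3 4 5 6 7 8 9
print "num: @num\n"; # num: 1 2 3 4 5 6 7 8 9 10 11
```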

My first solution was dem1. It's a naive single pass through the data set, filtering out keys whose values are out of range and keeping a count and sum for the final calculation. The second solution, dem2, uses the sum() routine from List::Util. The third is a Perl algorithm written to be easily translatable to C; it uses caching to speed itself up, doing a binsearch and a linear walk to do its business. The fourth version is the third translated as directly to C as possible. For these versions I would normally use a class (or a tie) to handle the cache, so that the cache could be spoiled when an insert occurs. Instead the code currently takes a fourth parameter to force a cache reset.
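The sub names and signatures below are illustrative reconstructions of the approaches just described, not the actual benchmark code (which follows in the replies):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum);

# dem1-style: naive single pass, filtering out-of-range keys and
# keeping a running count and sum.
sub avg_range_naive {
    my ($data, $lo, $hi) = @_;
    my ($sum, $count) = (0, 0);
    for my $k (keys %$data) {
        next if $k < $lo or $k > $hi;
        $sum += $data->{$k};
        $count++;
    }
    return $count ? $sum / $count : undef;
}

# dem2-style: let List::Util::sum do the adding over a hash slice.
sub avg_range_sum {
    my ($data, $lo, $hi) = @_;
    my @in = grep { $_ >= $lo && $_ <= $hi } keys %$data;
    return @in ? sum(@{$data}{@in}) / @in : undef;
}

# demc-style idea: cache a numerically sorted key list, binsearch for
# the first key >= $lo, then walk linearly while keys stay <= $hi.
# The fourth argument forces a cache reset, standing in for spoiling
# the cache on insert.
{
    my @cache;
    sub avg_range_cached {
        my ($data, $lo, $hi, $reset) = @_;
        @cache = sort { $a <=> $b } keys %$data
            if $reset or !@cache;
        my ($l, $r) = (0, scalar @cache);
        while ($l < $r) {          # lower-bound binary search
            my $m = int(($l + $r) / 2);
            if ($cache[$m] < $lo) { $l = $m + 1 } else { $r = $m }
        }
        my ($sum, $count) = (0, 0);
        for (my $i = $l; $i < @cache && $cache[$i] <= $hi; $i++) {
            $sum += $data->{ $cache[$i] };
            $count++;
        }
        return $count ? $sum / $count : undef;
    }
}
```

All three return undef when no key falls in the range, per the rule above.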

When you benchmark, the data set you test against is critical. Does it represent your actual data? Have you tested how the behaviour changes with it? For instance, in the results I show below the cached C/Perl versions massively outperform the others, but that would almost certainly not be true if the $data hash were changing regularly. Similarly, the behaviour of ari1 over the different test sets is interesting, as it shows how sensitive it is to the sparseness of the data set involved. All of which goes to show that truly optimizing this requires far more information about the typical data: is it sparse or dense, and how often does it change between lookups? It would be nice to know a bit more about the usage pattern. Anyway, I've attempted to allow for a decent range of behaviour testing in my version of the benchmark, although I haven't touched the spoiled-cache aspect.
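For anyone who wants to run this kind of comparison themselves, the harness is essentially Benchmark::cmpthese over the candidate subs. A minimal sketch, with a made-up data set and only one candidate (not the actual harness posted below):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical stand-in data set with a gap in the keys; the real
# benchmark uses (1..100, 10000..30000, 100000..150000).
my %data = map { $_ => $_ } (1 .. 100, 10_000 .. 10_500);

sub avg_naive {
    my ($lo, $hi) = @_;
    my ($sum, $n) = (0, 0);
    for (keys %data) {
        next if $_ < $lo or $_ > $hi;
        $sum += $data{$_};
        $n++;
    }
    return $n ? $sum / $n : undef;
}

# Run each candidate a fixed number of times and print a rate table;
# a negative count would mean "run for that many CPU seconds" instead.
cmpthese(100, {
    naive => sub { avg_naive(1, 10_500) },
    # drop the other variants in here to compare them ...
});
```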

In order to make this easier to read I'll post the final results here, and I'll put the code and overall results in two replies with readmores. Here's the overall benchmark of the average over the keys (1..100, 10000..30000, 100000..150000) with the range 1..150000.

Testing 1 - 150000
ari1 = 0.500910 orig = 0.500190 demc = 0.500910 ofix = 0.500910
buk = 0.500910 dem1 = 0.500910 demcp = 0.500910 dem2 = 0.500910
Benchmark: running ari1, buk, dem1, dem2, demc, demcp, ofix, orig, each for at least 2 CPU seconds...
     ari1:  2 wc secs ( 2.16 usr + 0.00 sys = 2.16 CPU) @    3.70/s (n=8)
      buk:  2 wc secs ( 2.18 usr + 0.00 sys = 2.18 CPU) @    3.66/s (n=8)
     dem1:  2 wc secs ( 2.03 usr + 0.00 sys = 2.03 CPU) @    5.41/s (n=11)
     dem2:  3 wc secs ( 2.24 usr + 0.00 sys = 2.24 CPU) @    3.57/s (n=8)
     demc:  2 wc secs ( 2.03 usr + 0.00 sys = 2.03 CPU) @ 1424.99/s (n=2897)
    demcp:  2 wc secs ( 2.09 usr + 0.00 sys = 2.09 CPU) @   14.33/s (n=30)
     ofix:  2 wc secs ( 2.20 usr + 0.00 sys = 2.20 CPU) @    3.18/s (n=7)
     orig:  2 wc secs ( 2.06 usr + 0.00 sys = 2.06 CPU) @    3.39/s (n=7)
        Rate   ofix   orig   dem2    buk   ari1   dem1  demcp   demc
ofix  3.18/s     --    -6%   -11%   -13%   -14%   -41%   -78%  -100%
orig  3.39/s     7%     --    -5%    -7%    -8%   -37%   -76%  -100%
dem2  3.57/s    12%     5%     --    -3%    -4%   -34%   -75%  -100%
buk   3.66/s    15%     8%     3%     --    -1%   -32%   -74%  -100%
ari1  3.70/s    16%     9%     4%     1%     --   -32%   -74%  -100%
dem1  5.41/s    70%    59%    52%    48%    46%     --   -62%  -100%
demcp 14.3/s   351%   322%   302%   291%   288%   165%     --   -99%
demc  1425/s 44746% 41896% 39853% 38784% 38428% 26236%  9842%     --

Anyway, this was fun. :-)


---
demerphq

<Elian> And I do take a kind of perverse pleasure in having an OO assembly language...