Statistics::Descriptive::LogScale

Not long ago I asked a question that remains unanswered here, on Stack Overflow, and after repeated CPAN and Google searches.

So I couldn't resist and started a GitHub project to cover this need. My module provides a Statistics::Descriptive::Full-compatible interface; however, it doesn't keep all of the data. Instead, the data range is divided into logarithmic intervals, and a hit count is stored for each interval. This way, a statistical distribution's properties can be estimated roughly without using much memory.
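The core idea can be sketched in a few lines of plain Perl. This is only an illustration of the approach, not the module's actual internals; the bucket base and midpoint reconstruction below are assumptions:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative log-scale histogram: each value maps to a bucket whose
# index is int( log(x) / log(base) ); only counts are stored per bucket.
my $base = 1.01;    # bucket width ratio (assumed, not the module's default)
my %count;          # bucket index => hit count

sub add_value {
    my ($x) = @_;
    # A real implementation must handle zero and negative values; skipped here.
    my $idx = int( log($x) / log($base) );
    $count{$idx}++;
}

# Reconstruct an approximate mean from bucket midpoints.
sub approx_mean {
    my ( $sum, $n ) = ( 0, 0 );
    while ( my ( $idx, $c ) = each %count ) {
        my $mid = ( $base**$idx + $base**( $idx + 1 ) ) / 2;  # bucket midpoint
        $sum += $mid * $c;
        $n   += $c;
    }
    return $n ? $sum / $n : undef;
}

add_value($_) for ( 1, 2, 3, 4, 5 );
printf "approx mean: %f\n", approx_mean();    # close to 3, within bucket error
```

Memory usage is proportional to the number of non-empty buckets, not to the number of samples.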

Synopsis

```perl
#!/usr/bin/perl -w

use strict;

use Statistics::Descriptive::LogScale;
my $stat = Statistics::Descriptive::LogScale->new();

while (<>) {
    chomp;
    $stat->add_data($_);
};

# This can also be done in O(1) memory, precisely
printf "Average: %f +- %f\n",
    $stat->mean, $stat->standard_deviation;

# This requires storing actual data, or approximating
foreach (0.5, 1, 5, 10, 25, 50, 75, 90, 95, 99, 99.5) {
    printf "Percentile($_): %f\n", $stat->percentile($_);
};
```

Current state

The module is still under development, but it already shows consistent results for some common distributions. There's example/summary.pl to demonstrate percentiles, sums, etc., and examples/sample_generator.pl for quick-and-dirty random data generation (it's REALLY dirty, but it works).

It follows the Statistics::Descriptive::Full interface as closely as possible. Currently, mode and frequency_distribution are still unimplemented. I've also added standardized and central moments (i.e. expectations of arbitrary powers of the given random variable).
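To clarify what a standardized moment is, here is a naive reference implementation over raw data (the module approximates this from its histogram instead of storing every point; the function name here is just for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The k-th standardized moment is E[ ((x - mean) / sigma)^k ].
# Naive raw-data version, using population variance.
sub std_moment {
    my ( $k, @data ) = @_;
    my $n = @data or return undef;
    my $mean = 0;
    $mean += $_ / $n for @data;
    my $var = 0;
    $var += ( $_ - $mean )**2 / $n for @data;
    my $sigma = sqrt($var);
    my $sum = 0;
    $sum += ( ( $_ - $mean ) / $sigma )**$k / $n for @data;
    return $sum;
}

# Skewness (3rd standardized moment) of a symmetric sample is 0.
printf "skewness: %f\n", std_moment( 3, 1, 2, 3, 4, 5 );
```

The 2nd standardized moment is always 1 by construction, so the interesting ones start at k=3 (skewness) and k=4 (kurtosis).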

Why this module

The module was initially targeted at performance analysis. Knowing that 99% of requests finished in 0.1s +- 0.01s is more useful than just having an average request time of 0.1s +- 1s (standard deviation), which is what I observed more than once while trying to "analyze" our web applications' performance.

However, broader usage may exist; e.g. a long-running application may want to keep track of various useful numbers without leaking memory.

Ideally, this module could also become a short way of saying "I'm not sure why I need statistics, but it's nice to have, and simple." For those who know why they need statistics, there's R.

Controversy

There are a few points I'm not sure about:

• std_moment($power) or standardized_moment($power)? The latter is soooo long! And I'd like to stay consistent with Statistics::Descriptive::{Full,Discrete,Weighted} if they ever add such a method.
• Arbitrary function expectation. I've added a sum_of( $coderef, $min, $max ) method to estimate the sum of $coderef->($_) over the part of the sample between $min and $max. Is the name clear? Should I add it at all? In most cases it looks like moments are enough.
• Estimating precision loss. I could store the cumulative drift in one or more special variables to estimate the error caused by approximation. Should I? If so, are sum_error() and mean_error() good method names?
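For the sum_of question above, here is a sketch of the intended semantics over raw data. This is my reading of the proposed method, written as a standalone function; the module itself would approximate this from its buckets rather than iterate over individual points:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Reference semantics for the proposed sum_of( $coderef, $min, $max ):
# sum $coderef->($_) over the sample points falling within [$min, $max].
sub sum_of {
    my ( $code, $min, $max, @data ) = @_;
    my $sum = 0;
    for (@data) {
        next if $_ < $min or $_ > $max;
        $sum += $code->($_);
    }
    return $sum;
}

my @sample = ( 1, 2, 3, 4, 5 );
# Sum of squares of the values in [2, 4]: 4 + 9 + 16 = 29
printf "%d\n", sum_of( sub { $_[0]**2 }, 2, 4, @sample );
```

With $coderef returning powers of ($_ - $mean), this generalizes the central moments, which is why moments alone may indeed be enough in most cases.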

Before things go horribly wrong

I'm planning to release the module to CPAN in the near future, so please tell me if I'm missing something.

UPDATE: I admit I couldn't wait any more and uploaded the module to CPAN.