
in reply to Evolving a faster filter?

If you can measure the speed and selectivity of each filter, then you don't need to actually run the different orderings to pick the fastest one. You can just compute the expected (average) cost of each ordering.
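For example (hypothetical numbers, just to illustrate the arithmetic): suppose filter A costs 5 units per item and passes 10% of items on, while filter B costs 1 unit and passes 90%. Then:

    # Hypothetical per-filter measurements:
    #   A: 5 units/item, passes 10%;  B: 1 unit/item, passes 90%.
    my $a_then_b = 5 + 0.10 * 1;   # 5.1 units per item
    my $b_then_a = 1 + 0.90 * 5;   # 5.5 units per item

So the slow-but-selective filter goes first here, and you learn that without ever timing either ordering end-to-end.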

Sure, use real runs to measure the cost and selectivity. But why slow down your determination of the fastest ordering by only trying different orderings in Production?

As the measured average speed and average selectivity of each filter are updated from logs of Production work, do the simple calculation to pick the ordering that is likely fastest on average and push that ordering to Production.

If there is large variability in the speed or selectivity of a single filter, then you might want to compute based on more than just a single average of each. Of course, that gets significantly more complicated, and it would probably be quite difficult to get consistently better results from it anyway.

Update: The computation code is pretty simple. Below is an untested version that prints the predicted cost for the "average" case each time it finds a cheaper ordering; I thought that would be interesting to watch as I fed it patterns of slow-but-selective and fast-but-unselective filter measurements:

    use Algorithm::Loops qw< NextPermuteNum >;

    sub cost {
        my( $costs, $trims ) = @_;   # per-filter cost and selectivity (fraction passed)
        my @order = 0 .. $#$costs;
        my $best = 0;
        do {
            my $size = 1;            # fraction of items still in play
            my $cost = 0;            # predicted cost of this ordering
            for my $i ( @order ) {
                $cost += $size * $costs->[$i];
                $size *= $trims->[$i];
            }
            if( ! $best || $cost <= $best ) {
                print "$cost: @order\n";   # report each new cheapest ordering found
                $best = $cost;
            }
        } while( NextPermuteNum(\@order) );
    }
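For example (hypothetical measurements, not from the thread above), you could feed it three filters' per-item costs and pass rates:

    # cost per item for each filter, then the fraction of items each passes
    cost( [ 5, 1, 2 ], [ 0.10, 0.90, 0.50 ] );

It prints each ordering that beats the previous best along with its predicted per-item cost, ending with the cheapest ordering.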

- tye