Given the amount of Perl code involved outside the sort, that makes sense. Thanks. Note that sorting is still faster than the original:
use strict;
use warnings;
use Benchmark qw( cmpthese );

for our $n ( map 10**$_, 1, 3, 6 ) {
    print "Benchmarking $n values\n";
    our @n = map int( rand $n ), 1 .. $n;
    cmpthese -log $n, {
        orig => q[
            my $t = $n >> 1;    # threshold; was missing from this snippet
            my $c = 0;
            for my $elem (@n) {
                if ( $elem > $t ) {
                    $c++;
                }
            }
            my $pct = ( @n - $c ) / @n * 100;
        ],
        sort => q[
            my $t = $n >> 1;
            @n = sort { $a <=> $b } @n;
            my $c = 0;
            # bound the scan so it can't run off the end of @n
            $c++ while $c < @n && $n[ $c ] <= $t;
            my $pct = ( @n - $c ) / @n * 100;
        ],
        grep => q[
            my $t = $n >> 1;
            my $pct = grep( { $_ > $t } @n ) / @n * 100;
        ],
    };
}
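As a side note, the `$c++ while ...` scan in the sort variant still touches up to n elements per run; once the list is sorted, a binary search finds the same count in O(log n). A minimal sketch (the `count_le` helper name is mine, not from the code above):

```perl
use strict;
use warnings;

# Count elements <= $t in a sorted array ref using binary search,
# returning the index of the first element greater than $t.
sub count_le {
    my ( $t, $aref ) = @_;
    my ( $lo, $hi ) = ( 0, scalar @$aref );
    while ( $lo < $hi ) {
        my $mid = ( $lo + $hi ) >> 1;
        if ( $aref->[$mid] <= $t ) { $lo = $mid + 1 }
        else                       { $hi = $mid }
    }
    return $lo;    # number of elements <= $t
}

my @sorted = ( 1, 3, 3, 7, 9, 12 );
print count_le( 3, \@sorted ), "\n";    # prints 3
```

Whether this beats the linear scan in practice depends on n; for the small thresholds here the constant factors may dominate.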
Benchmarking 10 values
Rate orig sort grep
orig 272566/s -- -11% -41%
sort 306129/s 12% -- -34%
grep 462968/s 70% 51% --
Benchmarking 1000 values
Rate orig grep sort
orig 4332/s -- -42% -44%
grep 7459/s 72% -- -4%
sort 7795/s 80% 5% --
Benchmarking 1000000 values
^C
There's a lot of variation in the results due to the random input. The first run I did actually showed sort as the fastest, although it usually isn't.
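One way to tame that variation is to seed the RNG before building @n, so every run benchmarks the same input and differences reflect the code rather than the data. A quick sketch (the seed value 42 is arbitrary, not from the script above):

```perl
use strict;
use warnings;

# Same seed => same sequence from rand, so @a and @b are identical.
srand 42;
my @a = map int( rand 100 ), 1 .. 5;

srand 42;
my @b = map int( rand 100 ), 1 .. 5;

print "@a" eq "@b" ? "identical\n" : "different\n";    # prints "identical"
```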