You are correct, if pedantic. :-) Just out of curiosity, under what circumstances would it be easier or faster to return something other than -1,0,1 ?
perl> cmpthese -1, {
'<=>' => q[@s1=sort{$b<=>$a} 1..10000],
'-' => q[@s2=sort{$b - $a} 1..10000]
};;
          Rate  <=>    -
<=>    33.8/s   -- -82%
-       183/s 441%   --
There are probably good reasons why this shouldn't be done outside of golf, though.
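For context, sort only inspects the sign of the comparator's return value, so any negative/zero/positive number is acceptable, not just -1/0/1 — which is why the raw subtraction works at all. A minimal sketch (the values are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# sort only cares whether the comparator returns a negative, zero, or
# positive value, so a raw difference like $b - $a is a valid comparator
# for small integers (here it returns values such as -2, 1, 4, ...).
my @descending = sort { $b - $a } 3, 1, 4, 1, 5;
print "@descending\n";    # 5 4 3 1 1

# The caveat: subtraction is only trustworthy while the difference keeps
# its mathematical sign. With values large enough to lose integer
# precision (platform-dependent, e.g. beyond 2**53 on builds without
# 64-bit integers), the difference can collapse to a wrong or zero
# result, whereas <=> compares the values themselves.
my @also_descending = sort { $b <=> $a } 3, 1, 4, 1, 5;
```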
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" may be good enough for the now, and perfection may be unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
That result surprised me greatly, because sort is optimized for the {$a <=> $b}, {$b <=> $a}, {$a cmp $b} and {$b cmp $a} blocks (in the sense that it recognises those blocks and does the comparison internally instead of calling the block).
But then I realized that the current sort implementation also takes advantage of long runs of increasing/decreasing values - and in particular, already-sorted arrays.
So I decided to run the benchmark again, this time with shuffled values:
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark 'cmpthese';
use List::Util 'shuffle';
our @data = shuffle 1 .. 10000;
cmpthese -1, {
    '<=>' => q[@s1 = sort { $b <=> $a } @data],
    ' - ' => q[@s2 = sort { $b - $a } @data],
};
__END__
         Rate    -  <=>
-      36.5/s   -- -30%
<=>    52.3/s  43%   --
That's what I'd expect.