Pathologically Eclectic Rubbish Lister PerlMonks

### Re: Re: ${Schwartzian transform} ?

by liz (Monsignor)
 on Sep 18, 2003 at 12:22 UTC ( #292369=note )

in reply to Re: ${Schwartzian transform} ?

Instead of 3 a's, I've used 80 a's (more or less a normal line of text). On my iBook I get this:
```
use strict;
use warnings;

use Benchmark qw /cmpthese/;

our @array = map {sprintf "a" x 80 . ":%03d" => $_} 0 .. 999;
our (@a, @b);

for (my $i = @array; -- $i;) {
    my $j = rand ($i + 1);
    @array [$i, $j] = @array [$j, $i];
}

cmpthese -5 => {
    ST  => '@::a = map  {$_ -> [0]}
            sort {$a -> [1] <=> $b -> [1]}
            map  {[$_ => substr $_, 81]} @array',
    Liz => '@::b = map  {${$_ -> [0]}}
            sort {$a -> [1] <=> $b -> [1]}
            map  {[\$_ => substr $_, 81]} @array',
};

foreach my $i (0 .. 999) {
    die unless $a [$i] eq $b [$i];
}
__END__
      Rate   ST  Liz
ST  26.7/s   -- -10%
Liz 29.6/s  11%   --
```
which looks rather significant to me.

Your original benchmark doesn't show anything significant on my iBook, other than that my approach is consistently faster, but by less than 0.5%.

Liz

Replies are listed 'Best First'.
Re: ${Schwartzian transform} ?
by Abigail-II (Bishop) on Sep 18, 2003 at 12:44 UTC
In general, 10% doesn't look significant to me for a benchmark. It has been shown in the past that unless the difference is large, Benchmark results aren't well reproducible from platform to platform. Even changing the compiler with which perl was compiled (leaving everything else the same) can change the order of the outcome of a benchmark. For instance, if I run your version of the benchmark, with a lead string of 80 a's, the original ST appears to be slightly *faster*:
```
Benchmark: running Liz, ST for at least 5 CPU seconds...
       Liz:  5 wallclock secs ( 5.28 usr +  0.01 sys =  5.29 CPU) @ 99.05/s (n=524)
        ST:  5 wallclock secs ( 5.30 usr +  0.00 sys =  5.30 CPU) @ 100.94/s (n=535)
      Rate Liz  ST
Liz 99.1/s  -- -2%
ST   101/s  2%  --
```
Of course, if speed is a concern, you wouldn't use ST at all, you'd use the GRT (Guttman-Rosler Transform). Here's a benchmark with GRT included:
```
#!/usr/bin/perl

use strict;
use warnings;

use Benchmark qw /cmpthese/;

our @array = map {("a" x 80) . sprintf ":%03d" => $_} 0 .. 999;
our (@a, @b, @c);

for (my $i = @array; -- $i;) {
    my $j = rand ($i + 1);
    @array [$i, $j] = @array [$j, $i];
}

cmpthese -5 => {
    ST  => '@::a = map  {$_ -> [0]}
            sort {$a -> [1] <=> $b -> [1]}
            map  {[$_ => substr $_, 81]} @array',
    Liz => '@::b = map  {${$_ -> [0]}}
            sort {$a -> [1] <=> $b -> [1]}
            map  {[\$_ => substr $_, 81]} @array',
    GRT => '@::c = map  {substr ($_, 4) . ":" . substr ($_, 0, 3)}
            sort
            map  {substr ($_, 81) . ":" . substr ($_, 0, 80)} @array'
};

foreach my $i (0 .. 999) {
    die unless $a [$i] eq $b [$i] && $a [$i] eq $c [$i]
}

__END__

Benchmark: running GRT, Liz, ST for at least 5 CPU seconds...
       GRT:  5 wallclock secs ( 5.33 usr +  0.00 sys =  5.33 CPU) @ 198.87/s (n=1060)
       Liz:  5 wallclock secs ( 5.31 usr +  0.00 sys =  5.31 CPU) @ 98.87/s (n=525)
        ST:  6 wallclock secs ( 5.31 usr +  0.01 sys =  5.32 CPU) @ 100.38/s (n=534)
      Rate  Liz   ST  GRT
Liz 98.9/s   --  -2% -50%
ST   100/s   2%   -- -50%
GRT  199/s 101%  98%   --
```
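The reason the GRT wins is that it packs the key into a fixed-width string prefix, lets `sort` run with its default lexicographic comparison (no comparison sub at all), and then strips the prefix back off. A minimal sketch of the same packing idea, using made-up data (sorting numbers by absolute value), not the benchmark's strings:

```perl
use strict;
use warnings;

my @nums = (-7, 3, -1, 10);

# Pack:   fixed-width key ("%03d") in front, original value behind it.
# Sort:   plain string sort compares the packed keys byte by byte.
# Unpack: strip the 4-character "NNN:" prefix again.
my @by_abs = map  { substr $_, 4 }
             sort
             map  { sprintf ("%03d:", abs $_) . $_ } @nums;

print "@by_abs\n";   # -1 3 -7 10
```

The key must be packed so that string comparison agrees with the intended order; here zero-padding to a fixed width does that for non-negative integer keys up to 999.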

Abigail

I think the real answer is that the Schwartzian Transform is only applicable when you truly have an expensive key extraction that swamps the overhead of making copies, creating references and dereferencing them, etc. Your non-copying approach only matters when the expense of copying is significant. It doesn't change the fundamental purpose of the transform, which is to save on the O(n lg n) portion at the expense of the O(n) portion.
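For contrast, here is a sketch of the case the ST does pay off in, with a deliberately expensive key sub (the data and the `expensive_key` name are made up for illustration):

```perl
use strict;
use warnings;

# Artificially expensive key: digit sum of the numeric suffix.
sub expensive_key {
    my ($str) = @_;
    my $sum = 0;
    $sum += $_ for split //, ($str =~ /(\d+)\z/)[0];
    return $sum;
}

my @data = ("item:12", "item:901", "item:34");

# The naive sort calls expensive_key twice per comparison,
# i.e. O(n log n) times in total ...
my @naive = sort { expensive_key ($a) <=> expensive_key ($b) } @data;

# ... while the ST calls it exactly once per element.
my @st = map  { $_ -> [0] }
         sort { $a -> [1] <=> $b -> [1] }
         map  { [$_ => expensive_key ($_)] } @data;

print "@st\n";   # item:12 item:34 item:901
```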

If you do have expensive copying, or just a not-very-expensive key extraction, then a better approach would be to eliminate both the copying and all of the reference stuff in one shot, by using indexes instead. Try adding this code into the benchmark:

```
    Idx => '@pre = map {substr $_, 81} @array;
            @::d = @array[sort {$pre[$a] <=> $pre[$b]} 0..$#array]',
```
I get:
```
Benchmark: running GRT, Idx, Liz, ST for at least 5 CPU seconds...
       GRT:  5 wallclock secs ( 5.23 usr +  0.02 sys =  5.25 CPU) @ 67.24/s (n=353)
       Idx:  5 wallclock secs ( 5.25 usr +  0.03 sys =  5.28 CPU) @ 57.77/s (n=305)
       Liz:  5 wallclock secs ( 5.26 usr +  0.02 sys =  5.28 CPU) @ 37.50/s (n=198)
        ST:  5 wallclock secs ( 5.20 usr +  0.03 sys =  5.23 CPU) @ 36.52/s (n=191)
      Rate   ST  Liz  Idx  GRT
ST  36.5/s   --  -3% -37% -46%
Liz 37.5/s   3%   -- -35% -44%
Idx 57.8/s  58%  54%   -- -14%
GRT 67.2/s  84%  79%  16%   --
```
I haven't tried that with huge arrays to see if the expense of slicing starts to add up, though.
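Pulled out of the benchmark strings, the index approach reads like this as a self-contained sketch (five made-up elements instead of the shuffled 1000):

```perl
use strict;
use warnings;

my @array = map { ("a" x 80) . sprintf ":%03d", $_ } (5, 2, 9, 0, 7);

# Extract every key exactly once, into a parallel array ...
my @pre = map { substr $_, 81 } @array;

# ... then sort the *indexes* and slice the original array:
# no copies of the long strings, no references to chase.
my @sorted = @array[ sort { $pre[$a] <=> $pre[$b] } 0 .. $#array ];

print join (" ", map { substr $_, 81 } @sorted), "\n";   # 000 002 005 007 009
```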
