in reply to A better non-existant string...

Single- and double-quoted strings both have to be parsed at compile time. Both get compiled into constants and so there is no speed difference between the two at run time.

Now a string like "We have $n things" actually gets compiled into "We have ".$n." things". The string is not reparsed each time you hit that line of code.
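
You can check this for yourself with the B:: compiler backends. A rough sketch (the exact output depends on your perl and B::Deparse versions):

    $ perl -MO=Deparse -e 'print "just a constant";'
    print 'just a constant';

Deparse hands the double-quoted literal back single-quoted because, by the time the compiler is done, the quoting style is gone and only the constant is left. Dumping the optree for the "We have $n things" example (with -MO=Concise or -MO=Terse, say) shows the string already broken into constant and concatenation ops, i.e. the compiled "We have ".$n." things" form; the exact ops vary between perl versions.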

I had to resort to a 3200-byte string before I was finally able to detect a 20% slow-down for double-quoted strings. This means that if your script has four thousand 3200-byte strings enclosed in double quotes, then you could make it start up 0.2 seconds faster by changing all of those double quotes to single quotes. No matter how long the script ran, no further speed benefit would be achieved.
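
(Rough arithmetic, using the re-run numbers further down in this thread: compiling one 3200-byte double-quoted string runs at roughly 4800/s versus roughly 6200/s for the single-quoted version, a difference of about 0.05 ms per string, so four thousand of them adds up to something on the order of 0.2 seconds of one-time compile cost.)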

So "optimizing" 0-byte strings to use single quotes instead of double quotes is just silly. The added complexity to your mental decision tree probably costs more time than you ever save. (:

        - tye (but my friends call me "Tye")

Re: (tye)Re: A better non-existant string...
by spaz (Pilgrim) on Jan 23, 2001 at 06:42 UTC
    I was going to disagree, but continue reading to see why I'm not. My code is below; have I done something wrong?
    #!/usr/bin/perl

    use Benchmark;

    sub double {
        $teststring = '';
        if( $teststring ne "" ) {
            return 1;
        } else {
            return 0;
        }
    }

    sub single {
        $teststring = '';
        if( $teststring ne '' ) {
            return 1;
        } else {
            return 0;
        }
    }

    timethese( 1000000, {
        double => 'double( )',
        single => 'single( )'
    } );
    Which gave these results on 3 consecutive trials:
    Benchmark: timing 1000000 iterations of double, single...
        double:  3 wallclock secs ( 2.14 usr +  0.00 sys =  2.14 CPU)
        single:  1 wallclock secs ( 1.86 usr +  0.00 sys =  1.86 CPU)
    Benchmark: timing 1000000 iterations of double, single...
        double:  1 wallclock secs ( 1.86 usr +  0.00 sys =  1.86 CPU)
        single:  2 wallclock secs ( 1.92 usr +  0.00 sys =  1.92 CPU)
    Benchmark: timing 1000000 iterations of double, single...
        double:  1 wallclock secs ( 1.89 usr +  0.00 sys =  1.89 CPU)
        single:  1 wallclock secs ( 1.86 usr +  0.00 sys =  1.86 CPU)
    The way I understand the Benchmark module, system load at the time of benchmarking doesn't influence the numbers. Is that statement correct? Does anybody know what's going on?

      It isn't the start-up time. It is probably an effect of the working set "settling in", or any number of other things that can affect benchmark numbers. In general, a difference of 5% or less isn't something I would consider "real", as running the benchmark an hour later could certainly swing the answer that much. Here is the code I used:

      use Benchmark qw(cmpthese);

      my $str= "This is a test, " x 200;
      my $single= "'".$str."'";
      my $double= '"'.$str.'"';

      # Each sub eval's its 3200-byte quoted literal, so perl re-parses and
      # re-compiles it on every call; that is what makes the single- vs.
      # double-quote compile cost visible here at all.
      cmpthese( -3, {
          a_double => sub { eval $double },
          b_single => sub { eval $single },
          c_double => sub { eval $double },
          d_single => sub { eval $single },
      } );
      I don't have my original results (20% difference), but a re-run gave this:
                   Rate a_double c_double b_single d_single
      a_double 4773/s       --      -1%     -23%     -23%
      c_double 4830/s       1%       --     -22%     -22%
      b_single 6170/s      29%      28%       --      -0%
      d_single 6172/s      29%      28%       0%       --
              - tye (but my friends call me "Tye")