http://www.perlmonks.org?node_id=406156


in reply to use File::Slurp for (!"speed");

"reading a file smaller than a disk block (because, hell: we're not using slurping for any kind of sizable file... who would be doing that and still be concerned about performance?)"

I think there is some territory between a file "smaller than a disk block" and "any kind of sizable file." You only test one end of the spectrum, and nothing in the middle. Let's see how File::Slurp holds up with a bit more testing.

First, the benchmark code:

    use File::Slurp;
    use Benchmark qw(cmpthese);

    cmpthese(-2, {
        fs => sub { $x = read_file "foo" },
        is => sub { $x = is("foo") },
    });

    # The idiomatic slurp: localizing @ARGV and $/ makes <> read the
    # whole named file in one gulp.
    sub is { local (@ARGV, $/) = $_[0]; <> }

Note that I put the idiomatic slurp into a subroutine. Much of File::Slurp's performance hit in your comparison was due to Perl's slow subroutine calls, so adding the same overhead to the idiomatic slurp makes for a more apples-to-apples comparison. Feel free to post results with the idiomatic slurp inlined, if you want, but all that will change is the point at which File::Slurp overtakes the idiomatic slurp.

Now, some test runs:

    $ perl -e 'print "x"x500' > foo
    $ perl benchmark
           Rate    fs    is
    fs  30210/s    --  -33%
    is  44886/s   49%    --

    $ perl -e 'print "x"x5_000' > foo
    $ perl benchmark
           Rate    fs    is
    fs  27499/s    --  -26%
    is  37057/s   35%    --

    $ perl -e 'print "x"x50_000' > foo
    $ perl benchmark
           Rate    is    fs
    is  11275/s    --  -14%
    fs  13094/s   16%    --

    $ perl -e 'print "x"x500_000' > foo
    $ perl benchmark
          Rate    is    fs
    is   277/s    --  -15%
    fs   325/s   17%    --

    $ perl -e 'print "x"x5_000_000' > foo
    $ perl benchmark
          Rate    is    fs
    is  29.5/s    --  -17%
    fs  35.6/s   21%    --

As we can see, the idiomatic slurp is faster for the 500- and 5,000-byte files, but once we get into the 50,000-byte range, File::Slurp takes the lead. You may consider 50k too big to slurp, but I certainly do not. In fact, I can even imagine circumstances where slurping the 5 MB file would be reasonable.

Don't get me wrong. I'm not trying to disagree with your main point. In your particular case, switching to File::Slurp was probably not the right idea. But I certainly can see the case for slurping files that are 50k or more, and in this case, File::Slurp is faster.