http://www.perlmonks.org?node_id=406004

I apologize up front for sounding nit-picky or petty. This is essentially a public rebuttal to a private comment, which went something along the lines of: "We should start using File::Slurp because it is 'way more efficient' [than just undefing $/ locally]."
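For context, the pure-Perl idiom in question is tiny. Here is a minimal sketch; the `slurp` helper name and the file path are mine, purely for illustration:

```perl
use strict;
use warnings;

# Minimal sketch of the "locally undef $/" slurp idiom.
# When $/ (the input record separator) is undef, a single <$fh>
# read returns the entire file instead of one line.
sub slurp {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    local $/;              # undef $/ for the enclosing scope only
    my $contents = <$fh>;  # one read grabs everything to EOF
    close $fh;
    return $contents;
}
```

The `local` is the whole trick: `$/` is restored automatically when the sub exits, so nothing else in the program sees the change.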

Well, first of all... this was a bit frustrating because it was said person's first day on the job, and he had yet to read more than a smattering of the roughly one million lines of code under the aegis of "the application". He had essentially no context for where the performance bottlenecks lie or what sort of priorities we had. For example, as priorities go: stability far outranks speed. local $/; is pure Perl, while File::Slurp is (to us) untested, foreign code that is highly platform-specific, for whatever damn reason. But anyway, I'm willing to cut the guy some slack... he's a smart guy, and he's new. He was just trying to establish his place among the alpha geeks.

But what about his argument? Well, I couldn't really give a damn about the microsecond that might be gained by doing raw I/O when reading a file smaller than a disk block (because, hell: we're not using slurping for any kind of sizable file... who would be doing that and still be concerned about performance?). However, I've recently had some dealings with the author of said module, and, frankly, I wanted to see if his work stood up to the standard that he seems to set for everyone else (even when it is not an appropriate standard... but I digress). As one of my colleagues put it: "Man... if you're gonna act like that, you'd better never, never make a mistake."

So, put up or shut up time:

I kept the benchmark super simple, and tried to control for the most obvious sources of error. I use freshly created, separate (but identical in content) files for local $/ and for File::Slurp. I run each once, so that you can see the difference caused by the additional compile time of File::Slurp, and then I run each 2000 times so that you can see the actual performance of the file reading. Then I repeat the 2000-iteration run, just to smooth out any problems that could potentially be caused by caching or whatnot:
[me@host test]$ echo -e foo\\nbar\\nbaz > a
[me@host test]$ time perl -e '$x = do { local (@ARGV, $/) = "a"; <> }; print $x'
foo
bar
baz

real    0m0.031s
user    0m0.000s
sys     0m0.000s
[me@host test]$ time perl -e '$x = do { local (@ARGV, $/) = "a"; <> } for 1..2000; print $x'
foo
bar
baz

real    0m0.074s
user    0m0.020s
sys     0m0.060s
[me@host test]$ time perl -e '$x = do { local (@ARGV, $/) = "a"; <> } for 1..2000; print $x'
foo
bar
baz

real    0m0.074s
user    0m0.040s
sys     0m0.030s
[me@host test]$ echo -e foo\\nbar\\nbaz > b
[me@host test]$ time perl -e 'use File::Slurp; $x = read_file "b"; print $x'
foo
bar
baz

real    0m0.066s
user    0m0.020s
sys     0m0.010s
[me@host test]$ time perl -e 'use File::Slurp; $x = read_file "b" for 1..2000; print $x'
foo
bar
baz

real    0m0.136s
user    0m0.090s
sys     0m0.040s
[me@host test]$ time perl -e 'use File::Slurp; $x = read_file "b" for 1..2000; print $x'
foo
bar
baz

real    0m0.138s
user    0m0.090s
sys     0m0.050s
[me@host test]$

So the short answer is that File::Slurp is about twice as slow (on a small file) as the simple Perl built-in method for reading a file all at once. I'll reserve my rant about why something that is built into the language really necessitates an overly complicated module... I just wanted to make a point about the often-heard "File::Slurp is faster!" argument. (And, perhaps, to deflect a well-heaved stone back in the general direction of a glass house.)
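If you want a steadier comparison than wall-clock `time`, the core Benchmark module will run each variant for a fixed amount of CPU time and report rates. This is only a sketch: it creates its own three-line sample file, and it skips File::Slurp entirely if the module isn't installed.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Create a small sample file like the "a" in the transcript above.
open my $out, '>', 'a' or die "write a: $!";
print $out "foo\nbar\nbaz\n";
close $out;

# File::Slurp may not be installed; only benchmark it if it loads.
my $have_slurp = eval { require File::Slurp; 1 };

my %tests = (
    'local $/' => sub { my $x = do { local (@ARGV, $/) = 'a'; <> } },
);
$tests{'File::Slurp'} = sub { my $x = File::Slurp::read_file('a') }
    if $have_slurp;

# -1 means "run each sub for at least 1 CPU second", which smooths
# out the noise that a handful of `time` runs can't.
cmpthese(-1, \%tests);
```

Note that this measures only the per-call cost, not File::Slurp's extra compile time, which the one-shot runs above were designed to show.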

------------ :Wq Not an editor command: Wq