If you want to slurp the entire file, don't read it line by line and build up a giant $printout string. That results (internally) in a lot of copying and reallocating of larger memory chunks as the string grows. ...Perl's pretty smart, but you could almost look at that technique as an optimized version of Schlemiel the Painter's Algorithm: as you append, Perl has to allocate a larger chunk of memory and move the entire string to the new, larger location.
Better to just slurp it in one step using one of the following:
    my $string = do {
        open my $fh, '<', 'filename' or die $!;
        local $/ = undef;
        <$fh>;
    };
...or...
    use File::Slurp;
    my $text = read_file('filename');
...or...
    use File::Slurp;
    my @lines = read_file('filename');
Any of the above ought to be relatively efficient, though the File::Slurp versions are easier on the eyes. But if reading the file line by line (your existing approach) is too slow, slurping is only going to yield incremental savings, at best. Either way, you've got to touch the entire file. Line-by-line approaches tend to work out well, and they scale well.
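For reference, a minimal line-by-line sketch (the filename and the processing step are placeholders, not part of your code):

    open my $fh, '<', 'input.txt' or die "Can't open input.txt: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        # process $line here; only one line is held in memory at a time
    }
    close $fh;

The point is that memory use stays flat no matter how big the file gets, because each iteration reuses $line instead of growing a single string.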
You're probably already aware of this, but if the file size is large enough to swamp physical memory, slurping is not a good approach.
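If you want to guard against that case, one option is to check the file size with the -s filetest first and fall back to line-by-line reading. This is just a sketch; the 100 MB threshold is arbitrary and should be tuned to your available memory:

    my $file = 'input.txt';    # placeholder filename
    if ( -s $file < 100 * 1024 * 1024 ) {
        my $string = do {
            open my $fh, '<', $file or die $!;
            local $/ = undef;
            <$fh>;
        };
        # work with $string
    }
    else {
        open my $fh, '<', $file or die $!;
        while ( my $line = <$fh> ) {
            # work with $line
        }
    }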