PerlMonks  

Re: printout of large files. loop vs. flush at once

by davido (Cardinal)
on Apr 18, 2013 at 17:17 UTC


in reply to printout of large files. loop vs. flush at once

If you want to slurp the entire file, don't read it line by line and then build up a giant $printout string. That will result (internally) in lots of copying and reallocation of ever-larger memory chunks as the string grows. ...Perl's pretty smart, but you could almost look at that technique as an optimized version of Schlemiel the Painter's Algorithm. As you append, Perl has to allocate a larger chunk of memory and move the entire string to the new, larger location.
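A minimal, self-contained sketch of the pattern being warned against (the temp file and its contents are made up for illustration; the real case would be a much larger file):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Hypothetical demo file standing in for the large file being printed.
my ( $out, $path ) = tempfile();
print {$out} "line $_\n" for 1 .. 3;
close $out;

# The append-as-you-go pattern: each .= may force Perl to grow
# $printout's buffer and copy everything accumulated so far.
open my $in, '<', $path or die "open: $!";
my $printout = '';
while ( my $line = <$in> ) {
    $printout .= $line;    # repeated reallocation as the string grows
}
close $in;
print $printout;
```

In practice Perl's buffer over-allocation softens the quadratic worst case, but the copying is still wasted work if the goal was simply "the whole file in one string".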

Better to just slurp it in one step using one of the following:

my $string = do {
    open my $fh, '<', 'filename' or die $!;
    local $/ = undef;
    <$fh>;
};

...or...

use File::Slurp; my $text = read_file('filename');

...or...

use File::Slurp; my @lines = read_file('filename');

Any of the above ought to be relatively efficient, though the File::Slurp methods are easier on the eyes. But if reading the file line by line (your existing approach) is too slow, slurping will yield only incremental savings at best; either way, you've got to touch the entire file. Line-by-line approaches tend to work well, and they scale well.
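For contrast, a self-contained sketch of the line-by-line approach (again with a made-up temp file): only the current line is held in memory, which is why it scales to files of any size.

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Hypothetical demo file standing in for the real (possibly huge) input.
my ( $out, $path ) = tempfile();
print {$out} "record $_\n" for 1 .. 5;
close $out;

# Line-by-line processing: memory use stays constant regardless of
# file size, since only one line is buffered at a time.
open my $fh, '<', $path or die "open: $!";
my $count = 0;
while ( my $line = <$fh> ) {
    $count++;              # stand-in for whatever per-line work you need
    print $line;
}
close $fh;
```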

You're probably already aware of this, but if the file size is large enough to swamp physical memory, slurping is not a good approach.


Dave
