Re: File reading efficiency and other surly remarks

by lhoward (Vicar)
on Aug 26, 2000 at 05:10 UTC


in reply to File reading efficiency and other surly remarks

Reading a file all at once will always be faster than reading it one line at a time. The problem with the all-at-once approach is that if your file is large, it will consume a large amount of memory by loading the whole file into memory at once. If you want the efficiency of the all-at-once method without the memory-use problem, you can use the read/sysread functions to read from the file a block at a time. The only problem with this is that detecting line breaks isn't handled automatically for you. The code below is taken from an earlier perlmonks discussion about reading files a block at a time; this isn't my code so I can't take credit (or blame) for it.
open(FILE, "<file") or die "error opening $!"; my $buf=''; my $leftover=''; while(read FILE, $buf, 4096) { $buf = $leftover.$buf; my @lines = split(/\n/, $buf); $leftover = ($buf !~ /\n$/) ? pop @lines : ""; foreach (@lines) { # process one line of data } } close(FILE);
This example uses a read-block size of 4096 bytes. The optimal value will depend on your OS and filesystem's block size (among other things).
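If you'd rather not hard-code the 4096, you can ask the filesystem for its preferred I/O block size at runtime. A minimal sketch, not from the original post: element 11 of Perl's stat list is st_blksize, which may be zero or unsupported on some platforms, hence the fallback.

open(FILE, "<file") or die "error opening file: $!";
my $blksize = (stat(FILE))[11] || 4096;   # st_blksize; fall back to 4096 if unavailable
my $buf = '';
my $leftover = '';
while (read FILE, $buf, $blksize) {
    # split into lines and carry $leftover exactly as in the example above
}
close(FILE);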

Replies are listed 'Best First'.
RE (tilly) 2 (blame): File reading efficiency and other surly remarks
by tilly (Archbishop) on Aug 26, 2000 at 07:58 UTC
    Note: the open statement should have the filename in the debugging die on failure, like it says in perlstyle. Also, there are enough levels of buffering that I don't know that worrying about an "optimal block size" really makes sense. And finally, just letting Perl handle the line-by-line reading is probably faster and more reliable, IMO. It will do that buffering behind the scenes for you.
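    For concreteness, a minimal sketch of what that looks like (my illustration, not tilly's code): the filename in the die message, and Perl doing the buffering and line splitting itself.

    open(FILE, "<file") or die "error opening file: $!";   # filename in the message, per perlstyle
    while (<FILE>) {
        # process one line of data; Perl buffers the underlying reads for you
    }
    close(FILE);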

    OTOH I have used similar code when working with binary data. So the general technique is good to know.
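    A sketch of that binary-data variant (the filename and the 512-byte record size here are made up for illustration):

    open(FILE, "<data.bin") or die "error opening data.bin: $!";
    binmode(FILE);                              # no newline translation on binary data
    my $record;
    while (sysread(FILE, $record, 512)) {       # read fixed-size records instead of lines
        # unpack and process one record
    }
    close(FILE);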

      Good point. Since I also mentioned reading chunks at a time, I'll emphasize that this is not a good idea if you are going to split each chunk into lines.

      When you use Perl's <FILE>, Perl itself is reading the file as chunks and splitting them into lines to give to you. I can assure you that you can't do this faster in Perl code than Perl can do it itself. And the Perl code has been tested a lot more than any similar code you might write.

      Yes, Tilly already said all of this. I just didn't think he said it strongly enough (and I felt guilty for suggesting chunk-by-chunk after not fully understanding a previous reply).

              - tye (but my friends call me "Tye")
        I have done some benchmarking of "line at a time" vs. "chunk at a time with manual split into lines" vs. "line at a time w/ lots of buffering". "Chunk at a time with manual split into lines" is clearly the fastest, by almost 2 to 1 over the other two methods. I've included my benchmarking program and results below:
        Benchmark: running BufferedFileHandle, chunk, linebyline, each for at least 3 CPU seconds...
        BufferedFileHandle:  3 wallclock secs ( 3.22 usr + 0.08 sys = 3.30 CPU) @ 2.73/s (n=9)
                     chunk:  4 wallclock secs ( 2.89 usr + 0.32 sys = 3.21 CPU) @ 4.36/s (n=14)
                linebyline:  4 wallclock secs ( 3.25 usr + 0.06 sys = 3.31 CPU) @ 2.72/s (n=9)
        #!/usr/bin/perl
        use Benchmark;
        use strict;
        use FileHandle;

        timethese(0, {
            'linebyline'         => \&linebyline,
            'chunk'              => \&chunk,
            'BufferedFileHandle' => \&BufferedFileHandle,
        });

        sub linebyline {
            open(FILE, "file");
            while (<FILE>) {
            }
            close(FILE);
        }

        sub chunk {
            my ($buf, $leftover, @lines);
            open(FILE, "file");
            while (read FILE, $buf, 64*1024) {
                $buf = $leftover . $buf;
                @lines = split(/\n/, $buf);
                $leftover = ($buf !~ /\n$/) ? pop @lines : "";
                foreach (@lines) {
                }
            }
            close(FILE);
        }

        sub BufferedFileHandle {
            my $fh = new FileHandle;
            my $buffer_var;
            $fh->open("file");
            $fh->setvbuf($buffer_var, _IOLBF, 64*1024);
            while (<$fh>) {
            }
            $fh->close;    # the original closed FILE here, which was the wrong handle
        }
        I'd be very interested to see your results that show differently.

        Edit: replaced CODE tags with PRE tags around long lines
