http://www.perlmonks.org?node_id=864959


in reply to help reading from large file needed

Are there other techniques I should use to traverse a large file like this and which might offer methods to move forward, back, go to beginning, etc.?

See seek. It works best if the records are fixed length, since then no index is needed at all: the offset of record $n is just $n times the record length. If they are not, creating an index that maps record number to file position is very simple and makes for quite fast access.
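For the fixed-length case, a minimal sketch (the file name and the 64-byte record length here are invented for illustration):

use strict;
use warnings;

# Fixed-length records: record $n starts at byte $n * $RECLEN,
# so no index is required.
my $RECLEN = 64;                           # hypothetical record length
open my $fh, '<:raw', 'records.dat' or die $!;

sub fetch_fixed {
    my $recnum = shift;
    seek $fh, $recnum * $RECLEN, 0 or die $!;
    read( $fh, my $rec, $RECLEN ) == $RECLEN
        or die "record $recnum out of range";
    return $rec;
}

print fetch_fixed( 12_345 );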

I have a file that is 3.6GB and contains 40e6 records. I index it like this:

perl -e"BEGIN{binmode STDOUT}" -ne"print pack'Q',tell STDIN" syssort >syssort.idx

That takes just a couple of minutes to run. Note that tell reports the position after the line just read, which is why the index starts with an explicit zero: that way entry $n is the byte offset of record $n, each entry is a fixed 8 bytes (pack 'Q' is a 64-bit unsigned quad) living at byte $n * 8, and the final entry is the file's total length. I can then randomly access the records in that file using:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];

our $N //= 1000;                       # -N=nnn on the command line; defaults to 1000 lookups

open IDX, '+<:raw', 'syssort.idx' or die $!;
open DAT, '+<:raw', 'syssort'     or die $!;

my $start = time;
for ( 1 .. $N ) {
    my $recnum = int rand 40e6;        # pick a random record number
    seek IDX, $recnum * 8, 0;          # each index entry is 8 bytes
    my $idx;
    read IDX, $idx, 8;
    my $pos = unpack 'Q', $idx;        # byte offset of the record in the data file
    seek DAT, $pos, 0;
    chomp( my $record = <DAT> );
#    printf "Record %d: '%s'\n", $recnum, $record;
}
my $elapsed = time - $start;
printf "$N random records read in %.3f seconds (%6f/s)\n",
    $elapsed, $elapsed / $N;

__END__
c:\test>syssort-idx -N=1e4
1e4 random records read in 2.223 seconds (0.000222/s)

c:\test>syssort-idx -N=1e5
1e5 random records read in 21.332 seconds (0.000213/s)

c:\test>syssort-idx -N=1e3
1e3 random records read in 0.218 seconds (0.000218/s)

c:\test>syssort-idx -N=1e3
1e3 random records read in 0.226 seconds (0.000226/s)

At 0.2 milliseconds per record, it is fast enough for most purposes.
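A handy side effect of the index holding the leading zero and the final end-of-file offset is that any two consecutive entries bound one record, so a record can also be fetched with an exact-sized read instead of a readline. A minimal sketch along the lines of the script above:

use strict;
use warnings;

open my $idx, '<:raw', 'syssort.idx' or die $!;
open my $dat, '<:raw', 'syssort'     or die $!;

sub fetch {
    my $recnum = shift;
    seek $idx, $recnum * 8, 0 or die $!;
    read( $idx, my $pair, 16 ) == 16 or die "record $recnum out of range";
    my( $from, $to ) = unpack 'Q2', $pair;   # this record's start and the next's
    seek $dat, $from, 0 or die $!;
    read $dat, my $rec, $to - $from;
    return $rec;                             # still carries its trailing newline
}

print fetch( 12_345 );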


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.