http://www.perlmonks.org?node_id=433614


in reply to Displaying/buffering huge text files

This builds an index of a million-line file in around 20 seconds, and accesses and prints 10,000 lines at random in under 1 second.

If the 20 seconds is too long for startup, you could read just enough to populate your listbox with the first 1000 lines or so, then push the rest of the index building off into a background thread to finish. A sketch of that split follows.
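Something like this, as a minimal, untested sketch assuming Perl's ithreads (populate_listbox() is a hypothetical stand-in for whatever your GUI code does): the main thread indexes the first 1000 lines immediately, the worker reopens the file and seeks to where the main thread stopped, and join() hands the remainder of the index back.

#! perl -slw
use strict;
use threads;

my $file = $ARGV[ 0 ];
open my $fh, '<', $file or die $!;

## Index just enough lines to fill the listbox straight away.
my $index = pack 'd', 0;
for ( 1 .. 1_000 ) {
    defined( <$fh> ) or last;
    $index .= pack 'd', tell $fh;
}

## populate_listbox( $index );   ## hypothetical GUI call

## Finish the rest of the index in a background thread.
my $resume = tell $fh;
my $worker = threads->create( sub {
    open my $bg, '<', $file or die $!;
    seek $bg, $resume, 0;
    my $tail = '';
    $tail .= pack 'd', tell $bg while <$bg>;
    return $tail;
} );

## ... run your event loop / keep the GUI responsive here ...

$index .= $worker->join;

Reopening the file in the worker avoids sharing a buffered filehandle between threads, which is where most of the surprises live.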

The indexing could be sped up by a more intelligent indexer that reads larger chunks and searches them for newlines, rather than reading one line at a time. For example:
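A sketch of that approach, assuming 1 MB chunks and opening the file in :raw so the recorded offsets are plain byte positions:

#! perl -slw
use strict;

open my $fh, '<:raw', $ARGV[ 0 ] or die $!;

my $index = pack 'd', 0;
my $base  = 0;
my $buf;
while ( my $read = read( $fh, $buf, 1024 * 1024 ) ) {
    my $pos = 0;
    while ( ( my $nl = index( $buf, "\n", $pos ) ) >= 0 ) {
        $pos = $nl + 1;
        ## Record the absolute offset of the start of the next line.
        $index .= pack 'd', $base + $pos;
    }
    $base += $read;
}

index() scans the buffer at C speed, so the per-line overhead of the readline loop disappears; lines that span a chunk boundary are handled for free, because offsets are only recorded where a newline is actually found.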

The index requires just 8 MB of RAM. That could be halved by using 'N' or 'V' as your pack format, rather than 'd', if your files will never go over 4 GB. By using 'd' you are good for files up to around 8,500 terabytes, which should see you through the next couple of machine changes or so:)
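For illustration, the 'V' variant only changes the pack template and the record width; every 'd' and 8 in the script below becomes 'V' and 4:

my $index = pack 'V', 0;
$index .= pack 'V', tell FILE while <FILE>;

## ... and when fetching, 4-byte records instead of 8:
my $line = unpack 'V', substr $index, $i * 4, 4;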

#! perl -slw
use strict;

$| = 1;

open FILE, '<', $ARGV[ 0 ] or die $!;

print 'Before indexing: ', time;

## Record the byte offset of the start of every line.
my $index = pack 'd', 0;
$index .= pack 'd', tell FILE while <FILE>;

print 'After indexing: ', time;
print 'Size of index: ', length $index;

## Fetch and print 10,000 lines picked at random.
for my $i ( map{ int rand( length( $index )/8 ) } 1 .. 10_000 ) {
    my $line = unpack( 'd', substr $index, $i*8, 8 );

    seek FILE, $line, 0;
    chomp( $_ = <FILE> );
    printf "\r$line : '%s'", $_;
}

print "\nAfter reading 10,000 random lines: ", time;

__END__

P:\test>433953 data\1millionlines.dat
Before indexing: 1109146372
After indexing: 1109146392
Size of index: 8000008
1865840 : '00186585'
After reading 10,000 random lines: 1109146393
