PerlMonks
Re^2: Displaying/buffering huge text files

by spurperl (Priest)
on Feb 23, 2005 at 08:35 UTC


in reply to Re: Displaying/buffering huge text files
in thread Displaying/buffering huge text files

First of all, thanks for the detailed answer. One can always expect that from you :-)

Now, indexing may indeed be the way to go. In fact, I may need to do it in C++. I ran an indexing pass myself: it took 14 seconds on a 3e6-line file (the file itself is ~60 MB) using STL streams, and 8 seconds using C file access (fgets(), ftell()). The memory consumption is pretty good, only 12 MB for these 3e6 offsets (in a vector). Accessing random lines with fseek() is, as expected, almost immediate.
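The C-style indexing pass described above might look like the sketch below: one fgets()/ftell() loop records the byte offset of every line start, and fseek() then jumps straight to any line. The demo file name and the 4 KB line-length limit are illustrative assumptions, not from the post.

    // Sketch of the fgets()/ftell() indexing pass: offsets[n] is the byte
    // offset where line n starts, so any line is one fseek() away.
    #include <cstdio>
    #include <string>
    #include <vector>

    // One pass over the file, recording each line's starting offset.
    std::vector<long> build_index(std::FILE* f) {
        std::vector<long> offsets;
        char buf[4096];                      // assumes lines shorter than 4 KB
        std::fseek(f, 0, SEEK_SET);
        long pos = std::ftell(f);
        while (std::fgets(buf, sizeof buf, f)) {
            offsets.push_back(pos);
            pos = std::ftell(f);
        }
        return offsets;
    }

    // Random access: seek straight to line n and read it back.
    std::string read_line(std::FILE* f, const std::vector<long>& idx, size_t n) {
        char buf[4096];
        std::fseek(f, idx[n], SEEK_SET);
        if (!std::fgets(buf, sizeof buf, f)) return "";
        std::string s(buf);
        while (!s.empty() && (s.back() == '\n' || s.back() == '\r')) s.pop_back();
        return s;
    }

    int main() {
        // Build a small demo file so the sketch runs stand-alone.
        std::FILE* w = std::fopen("demo.txt", "wb");
        std::fputs("alpha\nbeta\ngamma\n", w);
        std::fclose(w);

        std::FILE* f = std::fopen("demo.txt", "rb");
        std::vector<long> idx = build_index(f);
        std::printf("%zu lines indexed\n", idx.size());
        std::printf("line 2: %s\n", read_line(f, idx, 2).c_str());
        std::fclose(f);
    }

One long per line is what gives the ~12 MB figure for 3e6 lines (4 bytes each on a 32-bit build); a 64-bit long would double that.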

This is getting close to acceptable, and I'm starting to believe that it is indeed possible to keep the index in memory. (I don't need to handle files > 4 GB.)

Another problem that I forgot to mention is that the file is being updated "live" and I should keep up with it. I can get notifications, and can probably just append new index entries (the file always grows, never shrinks).
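Since the file only grows, keeping the index current could be as simple as remembering where the last scan stopped and resuming the fgets()/ftell() loop from there on each notification. The sketch below assumes that; the partial-last-line handling and the file names are illustrative, not from the post.

    // Sketch of incrementally extending the line index as the file grows.
    #include <cstdio>
    #include <cstring>
    #include <vector>

    struct LiveIndex {
        std::vector<long> offsets;  // byte offset of each complete line's start
        long end = 0;               // where the previous scan stopped
    };

    // Append offsets for any complete lines written since the last call.
    void catch_up(std::FILE* f, LiveIndex& ix) {
        char buf[4096];             // assumes lines shorter than 4 KB
        std::clearerr(f);           // clear the EOF flag left by the last scan
        std::fseek(f, ix.end, SEEK_SET);
        while (std::fgets(buf, sizeof buf, f)) {
            long pos = std::ftell(f);
            size_t len = std::strlen(buf);
            if (len > 0 && buf[len - 1] == '\n') {
                ix.offsets.push_back(ix.end);   // complete line: index it
                ix.end = pos;
            } else {
                break;              // partial last line: re-read next time
            }
        }
    }

    int main() {
        std::FILE* w = std::fopen("live.txt", "wb");
        std::fputs("line 1\nline 2\n", w);
        std::fflush(w);

        std::FILE* f = std::fopen("live.txt", "rb");
        LiveIndex ix;
        catch_up(f, ix);
        std::printf("after first scan: %zu lines\n", ix.offsets.size());

        std::fputs("line 3\n", w);  // the writer appends while we watch
        std::fflush(w);
        catch_up(f, ix);            // a change notification would trigger this
        std::printf("after catch-up: %zu lines\n", ix.offsets.size());

        std::fclose(f);
        std::fclose(w);
    }

Each catch-up pass costs only as much I/O as the newly appended data, so the index never has to be rebuilt from scratch.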

Replies are listed 'Best First'.
Re^3: Displaying/buffering huge text files
by BrowserUk (Patriarch) on Feb 23, 2005 at 09:22 UTC

    Yes, appending to the index should cause no problems at all. As you're limiting yourself to under 4 GB, using pack 'J' (Perl's native unsigned 32-bit integer, which saves a little conversion) rather than 'd' speeds up the indexing by around 4x, giving under 5 seconds for the 1e6-line file.

        #! perl -slw
        use strict;

        $| = 1;

        open FILE, '<', $ARGV[ 0 ] or die $!;

        print 'Before indexing: ', time;
        my $index = pack 'J', 0;
        $index .= pack 'J', tell FILE while <FILE>;
        print 'After indexing: ', time;
        print 'Size of index: ', length $index;

        for my $i ( map{ int rand( length( $index )/4 ) } 1 .. 10_000 ) {
            my $line = unpack( 'J', substr $index, $i*4, 4 );
            seek FILE, $line, 0;
            chomp( $_ = <FILE> );
            printf "\r$line : '%s'", $_;
        }

        print "\nAfter reading 10,000 random lines: ", time;

        __END__
        P:\test>433953 data\1millionlines.dat
        Before indexing: 1109148435
        After indexing: 1109148440
        Size of index: 4000004
        1087640 : '00108765'
        After reading 10,000 random lines: 1109148441

    Almost quick enough that you could avoid dropping to C++ :)

    Win32 also supports Memory Mapped Files natively, complete with Copy-on-Write where applicable. I also think that tye's Win32API::File may give you access to some, if not all, of the APIs required to use them. I'm not sure that it would work out any quicker than indexing, though, and the fact that the file is being written to may cause problems.
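    For comparison, here is what the memory-mapped approach looks like, sketched with POSIX mmap() rather than the Win32 CreateFileMapping/MapViewOfFile pair the post refers to (an assumed substitution, for portability of the example); the idea is the same either way: map the file and scan for '\n' in memory instead of reading it.

        // Sketch: build a line-offset index by scanning a memory-mapped file.
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstdio>
        #include <vector>

        // Return the starting offset of every line in the mapped file.
        std::vector<size_t> index_mapped(const char* path) {
            std::vector<size_t> offsets;
            int fd = open(path, O_RDONLY);
            if (fd < 0) return offsets;
            struct stat st;
            fstat(fd, &st);
            const char* p = static_cast<const char*>(
                mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
            if (p != MAP_FAILED) {
                offsets.push_back(0);               // line 0 starts at byte 0
                for (off_t i = 0; i < st.st_size; ++i)
                    if (p[i] == '\n' && i + 1 < st.st_size)
                        offsets.push_back(i + 1);   // next line starts after '\n'
                munmap(const_cast<char*>(p), st.st_size);
            }
            close(fd);
            return offsets;
        }

        int main() {
            std::FILE* w = std::fopen("mapped.txt", "wb");
            std::fputs("x\nyy\nzzz\n", w);
            std::fclose(w);
            std::vector<size_t> idx = index_mapped("mapped.txt");
            std::printf("%zu lines, line 2 starts at byte %zu\n",
                        idx.size(), idx[2]);
        }

    As the post notes, a mapping of a fixed length does not automatically cover data appended later, which is one reason the live-update requirement may favour plain indexing.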


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.
      You could look at Win32::MMF, which claims to provide native Memory Mapped File Service for shared memory support under Windows.


      acid06
      perl -e "print pack('h*', 16369646), scalar reverse $="
