This is an excellent question with excellent answers. May I suggest you put the whole thing into a module and upload it to CPAN? It might be useful for many others as well.

I like the idea of only indexing every 10th or 25th line, then skipping forward on read. Most OSes will read a whole block at a time anyway, so for most files you will be pulling several lines from disk in one go; you might as well make use of them. Of course, if it's in a module, the skipping could even be handled transparently (and customized by setting a parameter), and the user could just do a $file->GetLine(100_000) without worrying about what's going on underneath.

One more idea: you could read and index only $n lines initially, then provide a callback routine that can be called regularly to read and index $m more lines, until the file is fully indexed. That way, a text editor can display the first few lines very quickly, then continue indexing in the background by calling your callback routine from a separate thread or from the main thread's GUI loop.

In reply to Re: Displaying/buffering huge text files
by crenz
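Since the thread is about Perl, the two ideas above (a sparse byte-offset index plus callback-driven incremental indexing) could be sketched roughly as follows. The package name SparseLineFile and the methods GetLine and index_more are made up for illustration; this is not an existing CPAN module, just one way such a module might look.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package SparseLineFile;

sub new {
    my ($class, $path, %opt) = @_;
    my $self = {
        every => $opt{every} || 25,  # index every 25th line by default
        index => [0],                # byte offset of line 0, every, 2*every, ...
        done  => 0,                  # true once the whole file is indexed
        pos   => 0,                  # byte offset where indexing resumes
        line  => 0,                  # number of lines indexed so far
    };
    open $self->{fh}, '<', $path or die "open $path: $!";
    return bless $self, $class;
}

# Callback-style incremental indexing: index at most $m more lines.
# Returns true while there is still work left, so a GUI can keep
# calling it from an idle handler until it returns false.
sub index_more {
    my ($self, $m) = @_;
    return 0 if $self->{done};
    my $fh = $self->{fh};
    seek $fh, $self->{pos}, 0;
    while ($m-- > 0) {
        my $l = <$fh>;
        if (!defined $l) { $self->{done} = 1; last; }
        $self->{line}++;
        $self->{pos} = tell $fh;
        push @{ $self->{index} }, $self->{pos}
            if $self->{line} % $self->{every} == 0;
    }
    return !$self->{done};
}

# Fetch line $n (0-based): seek to the nearest indexed line below it,
# then skip forward line by line. Returns undef past end-of-file.
sub GetLine {
    my ($self, $n) = @_;
    # Make sure the index reaches line $n (or EOF); in an editor this
    # work would normally already have happened in the background.
    1 while $self->{line} < $n && $self->index_more(1000);
    my $slot = int($n / $self->{every});
    $slot = $#{ $self->{index} } if $slot > $#{ $self->{index} };
    my $fh = $self->{fh};
    seek $fh, $self->{index}[$slot], 0;
    my $l;
    for (0 .. $n - $slot * $self->{every}) {
        $l = <$fh>;
        return undef unless defined $l;
    }
    return $l;
}

1;
```

With something like this, the editor would create the object, paint the first screenful immediately, and call $f->index_more(1000) from its idle loop; random access via $f->GetLine($n) then costs one seek plus at most every-1 skipped lines.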