http://www.perlmonks.org?node_id=162638


in reply to Speed differences in updating a data file

This does not answer your question (the previous answers should give enough information anyway), but I thought you might find a third method interesting (it's always useful to have many tricks in the bag): try Tie::File, by Dominus. It lets you manipulate a file as a regular Perl array, with each file record (one line by default) corresponding to an array element, without loading the whole file into memory, so it is particularly good for dealing with big files.
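
For example, a minimal sketch (assuming a hypothetical file "data.txt" with one record per line; adjust the filename to taste):

    use strict;
    use warnings;
    use Tie::File;

    # Tie the array to the file; each element is one line of data.txt.
    tie my @lines, 'Tie::File', 'data.txt'
        or die "Cannot tie data.txt: $!";

    # Reads and writes go straight to disk, without slurping the file.
    $lines[0] = 'new first line';        # rewrite a record in place
    push @lines, 'appended record';      # add a record at the end
    splice @lines, 3, 1 if @lines > 3;   # delete the fourth record

    untie @lines;                        # flush and release the file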

Hope this helps,


Re:x2 Speed differences in updating a data file
by grinder (Bishop) on Apr 28, 2002 at 17:10 UTC
    The other day, in Performing a tail(1) in Perl (reading the last N lines of a file), Chmrr showed that Tie::File is much, much slower than simpler read-by-line methods for reading a file. That test fetched the end of the file, so I suspect that while the file is not loaded into memory, it is still scanned sequentially from beginning to end. It would be wise to benchmark the different approaches before settling on the technique to use.
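
    A rough sketch of such a comparison with the Benchmark module (not from the original thread; the filename big.log and the tail size are just placeholders):

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);
        use Tie::File;

        my $file = 'big.log';   # hypothetical test file
        my $n    = 10;          # number of trailing lines wanted

        cmpthese(-5, {
            # Slice the last $n elements of the tied array.
            tie_file => sub {
                tie my @lines, 'Tie::File', $file or die "tie: $!";
                my @tail = @lines[ -$n .. -1 ];
                untie @lines;
            },
            # Read line by line, keeping a sliding window of $n lines.
            read_lines => sub {
                open my $fh, '<', $file or die "open: $!";
                my @tail;
                while (my $line = <$fh>) {
                    push @tail, $line;
                    shift @tail if @tail > $n;
                }
                close $fh;
            },
        });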


    print@_{sort keys %_},$/if%_=split//,'= & *a?b:e\f/h^h!j+n,o@o;r$s-t%t#u'