Efficient file handling (was Re^3: trouble parsing log file...) by jarich (Curate)
on Nov 23, 2006 at 14:59 UTC
I thought I'd reply rather than --ing your post just because I disagreed.
I cannot think of any meaning of the phrase "more efficient" which would render your statement correct.
All the reading I've ever done on the matter says that parsing a file line by line is extremely efficient. What happens is this: the operating system reads a chunk of the file into memory; Perl breaks that chunk up on newlines (or whatever the value of $/ is); then we iterate over each line until we run out, and the process repeats. We can parse a file line by line as follows:
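A minimal sketch of such a loop (the sample file is created inline so the snippet is self-contained; in real code you would open an existing log instead):

```perl
use strict;
use warnings;

# Create a tiny sample file so this sketch runs on its own;
# in real code you would open an existing log instead.
my $file = 'sample.log';
open my $out, '>', $file or die "Cannot write $file: $!";
print {$out} "line $_\n" for 1 .. 3;
close $out;

# Read it back one line at a time: each <$fh> hands back a single
# $/-delimited record out of the chunk the OS has buffered.
open my $fh, '<', $file or die "Cannot open $file: $!";
my $count = 0;
while ( my $line = <$fh> ) {
    chomp $line;
    $count++;    # ... process $line here ...
}
close $fh;
unlink $file;
print "read $count lines\n";    # read 3 lines
```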
If we choose to stop reading the file at any point (perhaps we've found what we want) and call last, then we read only as much of the file as necessary. This makes it efficient time-wise, and because we hold only one chunk of the file in memory at a time, it's efficient memory-wise as well.
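For instance, scanning for the first ERROR line (the sample data here is invented for the sketch) touches only the lines up to the match:

```perl
use strict;
use warnings;

# Invented sample data: 100 lines, with an ERROR on line 3.
my $file = 'sample.log';
open my $out, '>', $file or die "Cannot write $file: $!";
print {$out} $_ == 3 ? "ERROR disk full\n" : "ok $_\n" for 1 .. 100;
close $out;

# Stop at the first match: Perl never splits the rest of the
# file into lines for us.
open my $fh, '<', $file or die "Cannot open $file: $!";
my ( $lines_read, $found ) = ( 0, undef );
while ( my $line = <$fh> ) {
    $lines_read++;
    if ( $line =~ /^ERROR/ ) {
        chomp( $found = $line );
        last;
    }
}
close $fh;
unlink $file;
print "found '$found' after reading $lines_read lines\n";
```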
Alternatively, my reading has said that "dumping the file to an array" and parsing it line by line is very inefficient. This is the case whether we do it like this:
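The first variant presumably looks something like this slurp-into-an-array loop (the sample file is invented so the sketch runs on its own):

```perl
use strict;
use warnings;

# Sample file invented so the sketch runs on its own.
my $file = 'sample.log';
open my $out, '>', $file or die "Cannot write $file: $!";
print {$out} "line $_\n" for 1 .. 3;
close $out;

open my $fh, '<', $file or die "Cannot open $file: $!";
my @lines = <$fh>;    # list context: the WHOLE file is split and stored
close $fh;
unlink $file;

foreach my $line (@lines) {
    chomp $line;
    # ... process $line here ...
}
print scalar(@lines), " lines held in memory\n";    # 3 lines held in memory
```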
or like this:
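The second variant presumably reads the filehandle directly in the foreach; the foreach's list context still slurps everything before the first iteration runs (same invented sample file):

```perl
use strict;
use warnings;

# Sample file invented so the sketch runs on its own.
my $file = 'sample.log';
open my $out, '>', $file or die "Cannot write $file: $!";
print {$out} "line $_\n" for 1 .. 3;
close $out;

open my $fh, '<', $file or die "Cannot open $file: $!";
my $count = 0;
foreach my $line (<$fh>) {    # <$fh> is flattened to a full list first
    chomp $line;
    $count++;
}
close $fh;
unlink $file;
print "iterated over $count lines\n";    # iterated over 3 lines
```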
This is because the file system still gives Perl the file on a chunk by chunk basis, and Perl still splits it up on $/, but Perl has to do this for the whole file even if we're only going to look at the first 10 lines. Worse, Perl now has to store the entire file in memory, rather than just a chunk. So this is the least efficient way to handle a file in Perl.
It is, however, very useful when we need random access to the whole file; for example, when sorting it or pulling out random quotes.
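For example, once the whole file is in an array, sorting it or picking a random line is trivial (sample data invented for the sketch):

```perl
use strict;
use warnings;

# Invented sample data, deliberately out of order.
my $file = 'quotes.txt';
open my $out, '>', $file or die "Cannot write $file: $!";
print {$out} "$_\n" for qw(cherry apple banana);
close $out;

open my $fh, '<', $file or die "Cannot open $file: $!";
chomp( my @lines = <$fh> );    # slurp and strip newlines in one go
close $fh;
unlink $file;

my @sorted = sort @lines;              # needs every line at once
my $quote  = $lines[ rand @lines ];    # random access by index
print "first sorted: $sorted[0]; random quote: $quote\n";
```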
I'd love to hear why, if you think I'm mistaken in my understanding of this matter.