Memory use when reading a text file
by abingham (Initiate) on Jul 24, 2002 at 11:44 UTC
abingham has asked for the wisdom of the Perl Monks concerning the following question:
Below is my code snippet...
...a simple job: open a text log file, read through it line by line, and then close it.
I am actually trying to extract statistics from our mail logs, so the final code will count up the number of messages to and from each address, store the counts in a hash, and output them to a CSV file at the end. However, that's getting ahead of myself.
The problem is that when I run this against a 46 MB file, my Windows XP workstation uses up all 1 GB of RAM plus swap and goes ape. That seems like a lot of memory just to read through a file, even if the machine caches the entire file in memory. With this code I am not storing any of the data or manipulating it in any way!
Any ideas from the Perl Monks?
Is there a more efficient way of reading through files?