http://www.perlmonks.org?node_id=184781

abingham has asked for the wisdom of the Perl Monks concerning the following question:

Below is my code snippet...

...a simple job: open a text log file, read through it line by line, then close it.

I am actually trying to extract statistics from our mail logs, so the final code will count up the number of messages to and from each address, store the tallies in a hash, and output them to a CSV file at the end. However, that's getting ahead of myself.
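For what it's worth, the tally-into-a-hash idea could look something like the sketch below. The field layout of the actual mail log is unknown here, so the `from=<addr>`/`to=<addr>` tokens and the sample lines are purely hypothetical placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical log lines -- the real field layout is unknown, so assume
# each record carries "from=<addr>" and "to=<addr>" tokens.
my @lines = (
    'from=<a@x.com> to=<b@y.com>',
    'from=<a@x.com> to=<c@z.com>',
);

# Tally every address seen into a hash, as the post describes.
my %count;
for my $line (@lines) {
    $count{$1}++ while $line =~ /(?:from|to)=<([^>]+)>/g;
}

# Emit the hash as CSV (address,count), sorted for stable output.
for my $addr (sort keys %count) {
    print "$addr,$count{$addr}\n";
}
```

On the sample data this prints one `address,count` row per distinct address.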

The problem is that when I run this on a 46MB file, my WinXP workstation uses up all 1GB of RAM plus swap and goes ape. This seems like a lot of memory just to read through a file, even if the machine caches the entire file in memory. With this code I am not storing any of the data or manipulating it in any way!

Any ideas from the Perl Monks?

Is there a more efficient way of reading through files?

# process.pl
# andy bingham
# 14 April 2002
#
open(INP, "log.txt") or die "cannot open input";
foreach $line (<INP>) {    # read in each line of INP
    # processing code goes here
}
close(INP);
#
# DONE
#
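If it helps frame the question: `foreach $line (<INP>)` evaluates the filehandle in list context, which slurps every line of the file into a list before the loop even starts, while a `while` loop reads one line at a time in constant memory. A minimal sketch of the streaming version (the sample file written here is just so the snippet is self-contained and runnable):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Write a hypothetical sample log so the snippet is self-contained.
my $log = "sample_log.txt";
open(my $out, '>', $log) or die "cannot write $log: $!";
print $out "line $_\n" for 1 .. 1000;
close($out);

# while (my $line = <$in>) reads ONE line per iteration, so memory use
# stays flat regardless of file size; foreach (<$in>) would expand the
# entire file into a list first.
open(my $in, '<', $log) or die "cannot open $log: $!";
my $count = 0;
while (my $line = <$in>) {
    $count++;    # per-line processing goes here
}
close($in);
unlink $log;

print "read $count lines\n";
```

The lexical filehandles and three-argument `open` are just the more modern idiom; the memory behaviour comes entirely from `while` versus `foreach`.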