I've been slurping text files into a string in a loop and parsing the text, but I noticed that Perl's memory footprint on Windows always stays as large as the largest file slurped so far (i.e., it never drops back down), even if I undefine the string on each pass through the loop. Is this a Windows problem, or is there a way to resolve it in Perl? A (rare) few of the files are 100K+, so this causes problems. I've simplified the code, and even in this simple case the effect is there:
#!C:/Perl/bin/perl -w
use strict;

my $dir = 'c:/mydir/';

# Open the directory and get the filenames.
opendir( my $dh, $dir ) or die "Cannot open directory: $!";
my @thefiles = readdir($dh);
closedir($dh);

my $maxsize = 0;

# Cycle through each of the files.
foreach my $file (@thefiles) {
    next if $file eq '.' or $file eq '..';

    my $filesize = -s $dir . $file;
    $maxsize = $filesize if $filesize > $maxsize;
    print "$file - $maxsize - $filesize\n";

    # Slurp the whole file into $html, then throw it away.
    my $html      = '';
    my $slurpfile = $dir . $file;
    open( my $fh, '<', $slurpfile ) or die "couldn't open $slurpfile: $!";
    $html = do { local $/; <$fh> };
    close($fh);
    undef $html;
}
Basically, I open the directory and get a list of every file in it. Then each file is opened individually and its contents are slurped into $html as a single string. I immediately undefine the string and repeat the loop. I can't understand why the memory isn't freed. It should actually be freed in three places each time through the loop, shouldn't it? (1) when I initialize $html to '', (2) when I slurp the next file's contents into $html, and (3) when I undef $html.
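To make sure I'm describing the same thing, here is just that allocate-and-release cycle in isolation, with the three points marked (the path is made up):

use strict;
use warnings;

my $path = 'c:/mydir/somefile.txt';   # hypothetical file

my $html = '';                        # (1) $html starts as an empty string
open( my $fh, '<', $path ) or die "couldn't open $path: $!";
$html = do { local $/; <$fh> };       # (2) the slurp replaces whatever was there
close($fh);
undef $html;                          # (3) the string is explicitly thrown away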
As it cycles through the thousands of files, I can watch the running maximum file size and the memory allocated to the perl process increase in tandem.
I need to slurp the files whole for various reasons. I wouldn't mind this leak on its own, but I have to process millions of files, and it slows things down considerably.
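One obvious workaround would be to push the slurping out into a separate perl process, so whatever memory it grabs goes back to the OS when it exits. A rough sketch of what I mean, where process_batch.pl is a hypothetical helper that slurps and parses the files named on its command line:

my @batch;
foreach my $file (@thefiles) {
    next if $file eq '.' or $file eq '..';
    push @batch, $dir . $file;
    if ( @batch >= 100 ) {
        # process_batch.pl is hypothetical: it would slurp and parse each
        # named file, then exit, returning its memory to the OS.
        system( 'perl', 'process_batch.pl', @batch ) == 0
            or die "batch failed: $?";
        @batch = ();
    }
}
if (@batch) {
    system( 'perl', 'process_batch.pl', @batch ) == 0
        or die "batch failed: $?";
}

But spawning a new interpreter every hundred files seems heavy-handed, and I'd much rather fix this inside a single process. Any suggestions?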