Excuse my French, but why the **censored** would you store the whole file in memory just to count the lines? Why hardcode the filename in the script? Why bother to open and close the file, when <> is so handy?
Sorry, please forgive the tirade; I don't know what came over me. It must be the ghost of Abigail-II... Certainly, TIMTOWTDI. (I find many of my cow-orkers skip over the "gather requirements" phase of programming and jump headfirst into the shallow end of the implementation pool.)
While playing with this, I wanted to see how some similar-looking schemes behave. For instance, don't do this either:
perl -e 'print scalar(()=<>),"\n"' filename
I tried this on a 300MB file: it took a long time (I waited several minutes before killing it), used lots of memory, and started swapping to disk. The `()=<>` forces list context, so every line of the file is read into a throwaway list just to take its length.
I tried the following on the same 300MB file; it took about 10 seconds and never went above 2MB of memory (note the single quotes, so the shell doesn't interpolate $_ and $.):
perl -pe '}{$_=$.' filename
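In case the `}{` trick looks like line noise: -p wraps your code in a read-print loop, and the injected `}{` closes that loop early and opens a bare block, so the implicit print fires exactly once at the end. Here's a sketch of the expansion (run `perl -MO=Deparse` on the one-liner to see the exact form):

```perl
#!/usr/bin/perl
# Roughly what perl -pe '}{$_=$.' becomes once the -p wrapper is
# applied around the code. The '}{' terminates the while loop and
# starts a bare block, which runs once after all input is consumed.
while (<>) {
}
{
    $_ = $.;    # $. is the last input line number, i.e. the count
}
continue {
    print;      # -p's implicit print; fires once, for the bare block
}
```

Like the one-liner, this prints the count with no trailing newline, since $_ is just the number.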
Inside a script, you could do this:
#!/your/perl/here -p
}{$_=$.
(yes, that compiles and runs too), though you may prefer the more conventional
#!/your/perl/here
use strict;
use warnings;
while (<>) {}
print "$.\n";
If you want to get fancy, and feed it more than one file at a time, keeping track of each file, try this:
#!/your/perl/here
use strict;
use warnings;
my $file_count = @ARGV;
while (<>) {}
continue {
    if (eof) {
        # print file names when counting multiple files
        print "$ARGV: " if $file_count > 1;
        print "$.\n";
        close ARGV;    # resets $. for the next file
    }
}
Someone will ask me for command-line options to leave off the filenames and to provide summary statistics across multiple files. I'll leave that to OMAR. (Wow, there really is an OMAR! But he doesn't write much. :()
-QM
--
Quantum Mechanics: The dreams stuff is made of