would it be faster to use an array?
If you mean "faster to use an array instead of a hash for collecting the data", then no, it would not be faster. I would, however, split each line into an array while processing it.
The idea is to key the hash by the filenames. So, every time you come across, for instance, "/some/dir/file.html", you increment a count and add to a running sum. The code might look something like this (untested):
while (<LOG>) {
    my @part = (split ' ', $_)[5, 8];   # grab the filename and the value to sum.
    $hash{ $part[0] }[0]++;             # increase the count.
    $hash{ $part[0] }[1] += $part[1];   # increase the sum.
}
Note that the values of the hash are arrayrefs in order to store both the count
and the sum associated with each filename. After you've munged your logs into raw data, you'll traverse the hash you created and compute the stats you want. Something like (again, untested):
for my $key (sort keys %hash) {
    my $avg = $hash{$key}[1] / $hash{$key}[0];   # sum / count.
    print "$key\t$avg\n";
}
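To see the two halves working together, here is a small, self-contained sketch you can actually run. The sample lines are made up, arranged only so that the `(split ' ', $_)[5,8]` slice picks out a filename and a number, as in the snippets above; adjust the indices to match your real log format.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical log lines; field 6 holds the filename and field 9 the
# value to sum, matching the [5,8] slice used in the loop above.
my @lines = (
    "a b c d e /some/dir/file.html f g 100",
    "a b c d e /some/dir/file.html f g 300",
    "a b c d e /other/page.html f g 50",
);

my %hash;
for (@lines) {
    my @part = (split ' ', $_)[5, 8];
    $hash{ $part[0] }[0]++;             # count per filename.
    $hash{ $part[0] }[1] += $part[1];   # running sum per filename.
}

for my $key (sort keys %hash) {
    my $avg = $hash{$key}[1] / $hash{$key}[0];
    print "$key\t$avg\n";
}
```

Run against the sample data, this prints an average of 50 for "/other/page.html" and 200 for "/some/dir/file.html". In your script you'd replace the `@lines` array with the `while (<LOG>)` loop.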
-sauoq
"My two cents aren't worth a dime.";