Good answers. Mind you, that does show a degree of unwarranted laziness in cleaning up your code for posting! In fact, if you'd cleaned up your code you quite likely would have fixed the problem.
However, to continue the lesson, let's see what the code might look like if you follow the advice. Consider:
#!/usr/bin/perl
use strict;
use warnings;
my $inputData = <<IN;
1
2
3
4
5
6
7
8
9
10
IN
open my $fIn, '<', \$inputData;
chomp (my @lines = <$fIn>);
printf "Average: %.2f\n", average(@lines);
printf "Largest: %d\n", largest(@lines);
sub average {
    # Return the arithmetic mean of the values passed in
    my $sum = 0;
    $sum += $_ for @_;
    return $sum / @_;
}
sub largest {
    # Start with the first value, then keep any larger value we find
    my $max = shift @_;
    for my $value (@_) {
        $max = $value if $value > $max;
    }
    return $max;
}
Prints:
Average: 5.50
Largest: 10
Note that I've stripped out the code dealing with getting a file name, validating it and reading a file from disk. Instead I "open" a string as a file. That gives me a quick test framework so I can easily make changes to the code and test them.
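If you later want to run against a real file again, only the open line changes. Here's a minimal sketch of the idea (the file name realdata.txt is just a placeholder):
#!/usr/bin/perl
use strict;
use warnings;

my $testData = "1\n2\n3\n";

# In-memory "file" for quick testing: pass a reference to the string
open my $testIn, '<', \$testData or die "Can't open test data: $!";

# The disk version differs only in the open:
# open my $testIn, '<', 'realdata.txt' or die "Can't open realdata.txt: $!";

while (my $line = <$testIn>) {
    chomp $line;
    print "Read: $line\n";
}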
$fIn is a lexical file handle (variables declared with my are lexical). Lexical file handles have the advantage that strict can check their usage, so you have fewer problems with misspelled file handle identifiers. When a lexical file handle goes out of scope (execution leaves the block the variable was declared in) the file is closed for you, so you mostly avoid files staying open too long if you forget a close. Aside from that, lexical file handles work pretty much the same as the old bareword file handles.
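As a rough illustration of the scoping (data.txt is just an example file name), the handle below is visible only inside the block, and the file is closed automatically when the block ends:
#!/usr/bin/perl
use strict;
use warnings;

my $firstLine;

{
    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
    $firstLine = <$fh>;
    # $fh goes out of scope here so the file is closed for us
}

print $firstLine if defined $firstLine;

# Under strict a typo such as <$fhh> is a compile time error,
# whereas a misspelled bareword handle like FHH only fails at run time.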
Note that we declare @lines where we assign content to it and we don't need $avg at all.
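For comparison, here's a hypothetical before/after (not your exact code) showing the declaration folded into the assignment and the throw-away $avg variable removed:
#!/usr/bin/perl
use strict;
use warnings;

my @numbers = (1 .. 10);

# Before: separate declaration, then assignment, plus a variable used only once
# my @lines;
# @lines = @numbers;
# my $avg = average(@lines);
# printf "Average: %.2f\n", $avg;

# After: declare @lines where it is assigned and pass the sub call straight to printf
my @lines = @numbers;
printf "Average: %.2f\n", average(@lines);

sub average {
    my $sum = 0;
    $sum += $_ for @_;
    return $sum / @_;
}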
Premature optimization is the root of all job security