This is how I would do that, with Data::Dumper output for comparison.
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

open my $fh, '<', $ARGV[0] or die $!;
my @lines = <$fh>;
close $fh;

my %data;
for my $line (@lines) {
    # split ' ' (a single space) splits on whitespace runs and also
    # discards leading whitespace, unlike split /\s+/
    my @fields = split ' ', $line;
    $data{ $fields[1] }{ $fields[0] } = $fields[2];
}
for my $key (keys %data) {
    print "$key = $data{$key}\n";
    for my $subkey (keys %{ $data{$key} }) {
        print "\t$subkey = $data{$key}{$subkey}\n";
    }
}
print Dumper(\%data);
Here is a sample run and the data.
my@mybox:~/sandbox
$ ./2.pl data.txt
jones = HASH(0x40017524)
	jenny = circle
	ted = circle
knight = HASH(0x4001738c)
	ted = triangle
$VAR1 = {
          'jones' => {
                       'jenny' => 'circle',
                       'ted' => 'circle'
                     },
          'knight' => {
                        'ted' => 'triangle'
                      }
        };
my@mybox:~/sandbox
$ cat data.txt
ted jones square
ted jones circle
ted knight triangle
jenny jones circle
The key differences, following Perl Best Practices:
- Don't use bareword file handles.
- Use the three-argument form of open.
- Use indirect file handles.
- Close file handles explicitly, and as soon as possible.
- Avoid C-style for statements.
- Don't use unnecessary parentheses for builtins and "honorary" builtins.
And also the general idea of not making global what can be local in scope ($key, $subkey, and @fields).
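As a minimal sketch of the filehandle points above (the temp file is just scaffolding so the example runs standalone):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Set up a throwaway file so the example is self-contained.
my ($out, $path) = tempfile();
print {$out} "hello\n";
close $out;

# Three-argument open with a lexical (indirect) handle, not a bareword FH;
# the handle is scoped here and closed as soon as we are done with it.
open my $in, '<', $path or die "Cannot open $path: $!";
my $line = <$in>;
close $in;

print $line;
```

A lexical handle like $in also closes itself when it goes out of scope, but closing explicitly makes the intent obvious and frees the descriptor immediately.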
I did violate the "Prefer line-based I/O to slurping" rule, because we both know the sample data is tiny.
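For completeness, here is a sketch of the line-based alternative; it reads from an in-memory handle (opening on a scalar reference, a standard Perl idiom) so it runs without data.txt:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An in-memory handle stands in for data.txt so the sketch is self-contained.
my $text = "ted jones square\nted knight triangle\n";
open my $fh, '<', \$text or die $!;

my %data;
while (my $line = <$fh>) {              # one line at a time, no @lines array
    my @fields = split ' ', $line;
    $data{ $fields[1] }{ $fields[0] } = $fields[2];
}
close $fh;

print "$data{jones}{ted}\n";            # prints "square"
```

With a while loop only one line is in memory at a time, which is what you would want for a large file.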