I don't know how time-sensitive your project is, but even if you can afford to let your program run for 10 minutes, it's always nice to have something faster than that :D. You can actually speed things up a lot by *not* building all three hashes. johngg had a good point when he said that you should base your search on the smallest hash; going even further, you only need to build that one. This would give something like the following (it's pseudo-Perl, but you should get the idea):
my %h1 = $csv1->csv_to_hash; # with $csv1 the smallest file
my %temp;
while (my $line = $csv2->next)
{
    my $key = $line->key;
    # Ignore lines that don't also exist in %h1
    # That's way less data to build into the second hash
    next unless exists $h1{$key};
    my $value = $line->value;
    $temp{ $key } = [ $h1{$key}, $value ];
}
%h1 = (); # Free the data from %h1; the matching pairs are all in %temp anyway
my %out;
while (my $line = $csv3->next)
{
    my $key = $line->key;
    # As before, ignore non-relevant keys
    next unless exists $temp{$key};
    # We grab the array ref stored in %temp (not a copy of the array),
    # so pushing onto it also modifies the array inside %temp.
    # That's fine, since %temp is cleared right after this loop.
    my $values_array = $temp{$key};
    my $value = $line->value;
    push @$values_array, $value;
    # Add the values into a brand new hash
    # so that it contains only the needed keys
    $out{$key} = $values_array;
}
%temp = (); # The important keys have been copied to %out
You should also consider using Text::CSV if you're not already doing so :)
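For completeness, here's a rough sketch of what the pseudocode above could look like with Text::CSV. It assumes each file has the key in the first column and the value in the second (adjust the indices to your actual layout), and the helper names (`for_each_row`, `join_three`) are just ones I made up for the example:

```perl
use strict;
use warnings;
use Text::CSV;

# Stream a CSV file row by row, calling $cb->($key, $value) for each row.
# Assumes the key is in column 0 and the value in column 1.
sub for_each_row {
    my ($file, $cb) = @_;
    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $row = $csv->getline($fh)) {
        $cb->($row->[0], $row->[1]);
    }
    close $fh;
}

# Join three CSV files on their key column, keeping only the keys
# present in all three. $smallest should be the smallest file.
sub join_three {
    my ($smallest, $second, $third) = @_;

    # Only the smallest file is loaded into a hash up front
    my %ref;
    for_each_row($smallest, sub { $ref{ $_[0] } = $_[1] });

    # Second pass: keep only keys that also exist in %ref
    my %temp;
    for_each_row($second, sub {
        my ($key, $value) = @_;
        return unless exists $ref{$key};
        $temp{$key} = [ $ref{$key}, $value ];
    });
    %ref = (); # matching pairs are all in %temp now

    # Third pass: same idea, producing the final hash
    my %out;
    for_each_row($third, sub {
        my ($key, $value) = @_;
        return unless exists $temp{$key};
        push @{ $temp{$key} }, $value;
        $out{$key} = $temp{$key};
    });
    %temp = ();

    return \%out;
}
```

Each key in the returned hash maps to a three-element array ref holding the values from the three files in order.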
Edit: oh, and my own example above is guilty of this too, but you should avoid variable names that are identical except for a number as much as possible. So maybe rename %h1 to %ref, if you don't already have better (distinct) names available for your hashes.