http://www.perlmonks.org?node_id=732753


in reply to Memory Efficient Alternatives to Hash of Array

You are trying to process 4-5 GB of data in Perl, which (on a 32-bit build) can probably only address 2-3 GB of data internally. That won't work, which means that you want to keep the data on disk. When processing data on disk you should always ask whether sorting helps. In this case sorting puts all of the error rates for a given tag right next to each other, so use the Unix sort utility or Sort::External to sort your data, then process your file in one pass. That pass could have a snippet like this in it:
    my $last_key = "";
    my @last_error_rates;
    while (my $line = <DATA>) {
        my ($key, $error_rate) = split /\s+/, $line;
        if ($key ne $last_key) {
            # We just crossed a key boundary, do processing.
            process_block($last_key, @last_error_rates) if $last_key;
            $last_key = $key;
            @last_error_rates = ();
        }
        push @last_error_rates, $error_rate;
    }
    # Don't forget the final block!
    process_block($last_key, @last_error_rates);