http://www.perlmonks.org?node_id=596074

Marsel has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I'd like to merge several data matrices into one. The inputs are text files, each with a header as the first line giving the names of the columns, and on each following line an identifier followed by the numerical data for that row.
The columns differ from one file to the next (but not always ...) and the rows are almost the same in all the files (but not always ...). The total size is about 10,000 columns and 55,000 lines.
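To make the format concrete, here is a tiny hypothetical example (file names, sample names, and values are invented, not from the real data); columns are tab-separated:

```
file1.txt:
Probe	S1	S2
p1	1.2	3.4
p2	5.6	7.8

file2.txt:
Probe	S2	S3
p1	3.4	9.0
p3	2.2	4.4

desired merged output:
Probe	S1	S2	S3
p1	1.2	3.4	9.0
p2	5.6	7.8	
p3		2.2	4.4
```

Rows missing from a file simply get empty cells in the merged matrix.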

The simplest way to explain what I want to do is the script below:
```perl
#!/usr/bin/perl -w
use strict;
use warnings;

my(%data);
foreach my $filename (@ARGV) {
    open(GSE, "$filename") or die "Cannot open $filename: $!";
    my(@samples);
    T: while(my $ligne = <GSE>) {
        $ligne =~ s/[\r\n]//g;
        my @t = split(/\t/, $ligne);
        # First line of each file: header with the sample (column) names.
        if($. == 1) {
            shift(@t);          # drop the "Probe" column header
            @samples = @t;
            next T;
        }
        # Data line: identifier, then one value per sample.
        my $probe = shift(@t);
        for(my $i = 0; $i < @t; $i++) {
            $data{$probe}{$samples[$i]} = $t[$i];
            $data{'samples'}{$samples[$i]}++;   # remember every sample seen
        }
    }
    close(GSE);   # closing the handle also resets $. for the next file
}

# Print the merged matrix: union of all samples across all files.
my @samples = keys %{$data{'samples'}};
print "Probe\t" . join("\t", @samples) . "\n";
foreach my $probe (keys %data) {
    unless($probe eq 'samples') {
        print "$probe\t" . join("\t", @{$data{$probe}}{@samples}) . "\n";
    }
}
```

But in fact, even with 8 GB of RAM and 8 GB of swap, it's too much, due to the space needed to store a hash, I think. So do I have to leave Perl and go to C, or is there another solution?
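For scale, a back-of-envelope sketch of why the in-memory hash blows up at these dimensions (the ~100 bytes of per-entry overhead is an assumption about Perl's hash bookkeeping, not a measured figure):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# 55,000 probes x 10,000 samples = 550 million hash-of-hash cells.
my $cells = 55_000 * 10_000;

# Assumed rough cost per stored value: hash entry + key + scalar overhead.
my $bytes_per_entry = 100;

my $gb = $cells * $bytes_per_entry / 2**30;
printf "roughly %.0f GB needed\n", $gb;   # far beyond 8 GB RAM + 8 GB swap
```

Even if the per-entry cost were several times smaller, the total would still dwarf 16 GB, so the problem is the data structure rather than the language.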


Thanks a lot in advance for any clues,

Marcel