perlquestion
ultibuzz
<p>Hello Monks,<br><br>I need to duplicate-check arrays; the array sizes range from 5 million to 1 billion elements.<br>I use the following code:<br><code>
if ($file =~ $spec_text) {
    my $file_date = (split /\./, $file)[3];
    open(my $in, '<', $file) or die("open failed: $!");
    my @rows;
    while (<$in>) {
        chomp;
        my @eles = split /;/, $_;
        push @rows, "$eles[0];$eles[1];$file_date";
    }
    close $in;
    print scalar(@rows), "\n";
    my @non_dupe_rows = do { my %seen; grep !$seen{$_}++, @rows };
    print scalar(@non_dupe_rows), "\n";
}</code><br>This code needs 203 seconds for 9 million elements on our server, but I need it faster: in total I have to parse approximately 15 billion elements (across all files).<br>Ideas on how to speed it up are greatly welcome.<br><br>kd ultibuzz</p>
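One possible speedup, sketched below under assumptions (the `$file_date` value and the in-memory sample input stand in for the real filename parsing and file handle): deduplicate while reading, so the intermediate row array is never built, each row string is constructed once, and memory stays bounded by the number of *unique* rows rather than all rows. The third argument to `split` also stops it from splitting fields past the two that are actually used.

```perl
use strict;
use warnings;

my $file_date = '20240101';          # stand-in for the date parsed from the filename
my $data = "a;b;x\na;b;y\nc;d;z\n";  # sample input in place of the real file
open my $in, '<', \$data or die "open failed: $!";

my (%seen, $total, $unique);
while (my $line = <$in>) {
    chomp $line;
    my ($f1, $f2) = split /;/, $line, 3;   # LIMIT=3: skip splitting unused fields
    my $row = "$f1;$f2;$file_date";
    $total++;
    $unique++ unless $seen{$row}++;        # count each distinct row only once
}
close $in;

print "$total\n";    # total rows read
print "$unique\n";   # unique rows seen
```

This keeps the two counts the original code prints without ever holding `@rows` or `@non_dupe_rows`; whether it helps enough at the 15-billion-element scale depends on how many rows are unique, since `%seen` still grows with the distinct keys.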