http://www.perlmonks.org?node_id=884368


in reply to Re^2: statistics of a large text
in thread statistics of a large text

Your guess is wrong.

You asked for advice on handling large amounts of data (~ 1 GB). With that much data your code will fail because it will run out of memory long before it finishes. By contrast, the approach I describe should succeed in a matter of minutes.

If you wish to persist in your approach, you can tie your hash to an on-disk data structure, for instance using DBM::Deep. Do not be surprised if your code then takes a month or two to run on your dataset. (A billion seeks to disk takes about 2 months, and you're going to wind up with, order of magnitude, about that many seeks.) This is substantially slower than my approach.
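
For what it's worth, the tie itself is only a couple of lines. Here is a minimal, untested sketch; the file name 'ngrams.db' and the "n-gram: line_number" input format are assumptions for illustration, not details from your code:

    #!/usr/bin/perl
    # Tie the hash to an on-disk structure with DBM::Deep.
    use strict;
    use warnings;
    use DBM::Deep;

    tie my %lines_for, 'DBM::Deep', 'ngrams.db';

    while (my $line = <>) {
        my ($n_gram, $line_number) = ($line =~ /(.*): (\d+)$/);
        next unless defined $n_gram;                 # skip malformed lines
        push @{ $lines_for{$n_gram} }, $line_number; # every update hits the disk
    }

The memory problem goes away, but every one of those pushes is a disk operation, which is where the months come from.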

If my suggestion fails to perform well enough, it is fairly easy to use Hadoop to scale your processing across a cluster. (Clusters are easy to set up using EC2.) This approach scales as far as you want; in fact, it is the technique that Google uses to process copies of the entire web.
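
To give a flavour of what that looks like (illustrative only; the bigram size, the pre-numbered input format, and the jar/path names in the comment are assumptions, not details from this thread), a Hadoop Streaming mapper is just a Perl script that reads STDIN and prints tab-separated key/value pairs. Hadoop sorts by key and hands the stream to a reducer much like the merge script further down in this thread.

    #!/usr/bin/perl -w
    # mapper.pl -- untested Hadoop Streaming sketch.
    # Assumes each input record already carries its global line number as
    # "line_number<TAB>text", attached in a preprocessing pass, because a
    # mapper only sees one split of the input and cannot number lines itself.
    use strict;

    while (my $record = <STDIN>) {
        chomp $record;
        my ($line_number, $text) = split /\t/, $record, 2;
        next unless defined $text;
        my @words = split ' ', $text;
        # Emit word bigrams; adjust the loop for other n-gram sizes.
        for my $i (0 .. $#words - 1) {
            print "$words[$i] $words[$i+1]\t$line_number\n";
        }
    }

    # Run with something like (jar name and paths are assumptions):
    #   hadoop jar hadoop-streaming.jar -input corpus/ -output ngrams/ \
    #       -mapper mapper.pl -reducer reducer.pl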

Re^4: statistics of a large text
by perl_lover_always (Acolyte) on Jan 26, 2011 at 17:13 UTC
    Yes, you are right. After testing, I can see that it runs out of memory!
Re^4: statistics of a large text
by perl_lover_always (Acolyte) on Jan 27, 2011 at 09:59 UTC
    Thanks! In the third step of your approach, how can I merge $line_number into @line_number quickly, given that my file is now even bigger than before? Any advice on that?
      Don't worry too much about micro-optimization. The key is to take advantage of the fact that all of an n-gram's entries are bunched together in the sorted file, so you never have to track much information at once. I would do it something like this (untested):
      #! /usr/bin/perl -w
      use strict;

      my $last_n_gram = "";
      my @line_numbers;

      while (<>) {
          my ($n_gram, $line_number) = ($_ =~ /(.*): (\d+)$/);
          next unless defined $n_gram;   # skip malformed lines
          if ($n_gram ne $last_n_gram and @line_numbers) {
              # The n-gram changed: emit the previous group, sorted numerically.
              @line_numbers = sort {$a <=> $b} @line_numbers;
              print "$last_n_gram: @line_numbers\n";
              @line_numbers = ();
          }
          $last_n_gram = $n_gram;
          push @line_numbers, $line_number;
      }

      # Flush the final group.
      if (@line_numbers) {
          @line_numbers = sort {$a <=> $b} @line_numbers;
          print "$last_n_gram: @line_numbers\n";
      }
      This assumes that you're going to run it as reduce_step.pl intermediate_file > final_file.
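      For example (made-up data), if intermediate_file contains the lexically sorted lines

          the cat: 12
          the cat: 3
          the dog: 7

      then the script prints

          the cat: 3 12
          the dog: 7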
        Since you are more expert in memory usage and related issues, I have a question: why, when I load my 5 GB file, which has about 7M records of two columns, and build two hashes from two different files of the same format and size, do I run out of memory even with a large amount of RAM (50 GB)?