in reply to Re^2: statistics of a large text
in thread statistics of a large text
You asked for advice on handling large amounts of data (~1 GB). With that much data, your code will run out of memory long before it finishes. By contrast, the approach that I describe should succeed in a matter of minutes.
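For concreteness, here is a minimal sketch of the sort-then-stream counting I have in mind (the record format and file names are assumptions for illustration). The heavy lifting is done by the system sort, which merges on disk and never needs the whole dataset in memory; the Perl pass then only has to compare adjacent lines:

    #!/usr/bin/perl
    # count_sorted.pl - count identical adjacent lines in one pass.
    # Run as:  sort records.txt | perl count_sorted.pl > counts.txt
    use strict;
    use warnings;

    my ($prev, $count);
    while (my $line = <STDIN>) {
        chomp $line;
        if (defined $prev && $line eq $prev) {
            $count++;
        }
        else {
            print "$count\t$prev\n" if defined $prev;
            ($prev, $count) = ($line, 1);
        }
    }
    print "$count\t$prev\n" if defined $prev;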
If you wish to persist in your approach, you can tie your hash to an on-disk data structure, for instance using DBM::Deep. Do not be surprised if your code then takes a month or two to run on your dataset. (At roughly 5 ms per random seek, a billion seeks to disk takes about 5 million seconds, which is about 2 months. And you're going to wind up with, to within an order of magnitude, about that many seeks.) This is substantially longer than my approach.
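For the record, the tie itself is a one-liner; a minimal sketch, assuming a simple word-count loop and a made-up database file name:

    use strict;
    use warnings;
    use DBM::Deep;

    # Tie the hash to an on-disk database: %count now lives in
    # counts.db rather than in RAM. The catch is that every
    # increment below becomes disk traffic, which is where the
    # billion seeks come from.
    tie my %count, 'DBM::Deep', 'counts.db';

    while (my $line = <STDIN>) {
        chomp $line;
        $count{$_}++ for split ' ', $line;
    }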
If my suggestion fails to perform well enough, it is fairly easy to use Hadoop to scale your processing across a cluster. (Clusters are easy to set up using EC2.) This approach scales as far as you want; Hadoop is an open-source implementation of MapReduce, the technique that Google uses to process copies of the entire web.
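With Hadoop Streaming, the mapper and reducer can themselves be plain Perl filters reading STDIN and writing STDOUT. A minimal word-count sketch (script names and job details are assumptions, not a tested job):

    #!/usr/bin/perl
    # mapper.pl - emit "word<TAB>1" for every word seen
    use strict;
    use warnings;
    while (my $line = <STDIN>) {
        chomp $line;
        print "$_\t1\n" for split ' ', $line;
    }

    #!/usr/bin/perl
    # reducer.pl - Hadoop hands the reducer its input sorted by key,
    # so identical words arrive on adjacent lines and can be summed
    # in one streaming pass (the same trick as the sort-based approach)
    use strict;
    use warnings;
    my ($prev, $sum) = (undef, 0);
    while (my $line = <STDIN>) {
        chomp $line;
        my ($word, $n) = split /\t/, $line;
        if (defined $prev && $word ne $prev) {
            print "$prev\t$sum\n";
            $sum = 0;
        }
        $prev = $word;
        $sum += $n;
    }
    print "$prev\t$sum\n" if defined $prev;

Submitting the job is then roughly "hadoop jar hadoop-streaming.jar -input in -output out -mapper mapper.pl -reducer reducer.pl -file mapper.pl -file reducer.pl" (exact paths depend on your installation).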
Replies are listed 'Best First'.

Re^4: statistics of a large text
    by perl_lover_always (Acolyte) on Jan 26, 2011 at 17:13 UTC
Re^4: statistics of a large text
    by perl_lover_always (Acolyte) on Jan 27, 2011 at 09:59 UTC
        by tilly (Archbishop) on Jan 27, 2011 at 15:05 UTC
            by perl_lover_always (Acolyte) on Feb 10, 2011 at 11:13 UTC
                by BrowserUk (Patriarch) on Feb 10, 2011 at 12:44 UTC
                    by perl_lover_always (Acolyte) on Feb 10, 2011 at 13:40 UTC
                        by BrowserUk (Patriarch) on Feb 10, 2011 at 14:13 UTC
                            by perl_lover_always (Acolyte) on Feb 10, 2011 at 14:21 UTC