
Re^3: a large text file into hash

by tilly (Archbishop)
on Jan 28, 2011 at 16:55 UTC

in reply to Re^2: a large text file into hash
in thread Reaped: a large text file into hash

Let's see: 18 GB with a billion rows means, say, 30 passes, each of which has to both read and write, and streaming data at 50 MB/sec that takes about 6 hours. It should not be doing all of those passes to disk, and your disk drive is likely faster than that. But in any case that is longer than I thought it would take. Sorry.
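For anyone who wants to check that 6-hour figure, here is the back-of-the-envelope arithmetic (the 18 GB, 30 passes, and 50 MB/sec numbers are the assumptions above, not measurements):

```perl
use strict;
use warnings;

# Estimate total I/O time for an external merge sort:
# 30 passes over an 18 GB file, each pass reading AND writing
# the whole file, at a sustained 50 MB/sec.
my $file_gb  = 18;
my $passes   = 30;
my $mb_per_s = 50;

my $total_mb = $file_gb * 1024 * $passes * 2;   # read + write per pass
my $seconds  = $total_mb / $mb_per_s;
my $hours    = $seconds / 3600;

printf "Roughly %.1f hours of pure streaming I/O\n", $hours;
```

Which comes out to a little over 6 hours of disk time before any CPU work is counted.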

The last step should make the file much smaller. How much smaller depends on your data.

Anyway, back to Search::Dict. It works by doing a binary search for the n-gram you are looking up: you give it the n-gram and it will find the line for you. However, it is a binary search. If you have a billion rows, it has to do about 30 probes. Some of those will be cached, but a lot will be seeks. Remember that seeks take about 0.005 seconds on average. So if 20 of those are seeks, that is 0.1 seconds per lookup. Doesn't sound like much, until you consider that 100,000,000 of them will take 115 days.
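To make the Search::Dict usage concrete, here is a self-contained sketch. The n-grams and the tab-delimited key/count format are made up for illustration; in practice the file would be your billion-line sorted file on disk, which is exactly where each binary-search probe can turn into a seek:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);
use Search::Dict;

# Build a tiny sorted "n-gram<TAB>count" file to look things up in.
my ($out, $path) = tempfile();
print {$out} "apple pie\t3\nbrown fox\t7\nquick brown\t5\n";
close $out;

open my $in, '<', $path or die "open: $!";

# look() binary-searches the sorted file and positions the handle at
# the first line >= the key.  Each probe may be a disk seek when the
# page is not cached -- hence ~30 probes for a billion lines.
my $pos = look $in, 'brown fox', 0, 0;   # dict order off, case fold off
die "lookup failed\n" if $pos < 0;

my $line = <$in>;
chomp $line;
my ($ngram, $count) = split /\t/, $line;
print "$ngram => $count\n";
```

Search::Dict has shipped with Perl for a long time, so there is nothing to install; the cost is purely in the seeks.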

By contrast, 100 million 50-byte rows is 5 GB. If you stream data at 50 MB/second (current drives tend to be faster than that, though your code may be slower), then you'll need under 2 minutes to stream through that file.
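Putting the two strategies side by side, with the same numbers as above (100 million lookups, ~20 uncached seeks each at 0.005 s, versus one sequential pass over 5 GB at 50 MB/sec):

```perl
use strict;
use warnings;

# Seek-bound: 100M lookups, ~20 real seeks each, 5 ms per seek.
my $lookups    = 100_000_000;
my $seeks_each = 20;
my $seek_s     = 0.005;
my $seek_days  = $lookups * $seeks_each * $seek_s / 86_400;

# Stream-bound: 100M rows * 50 bytes, read once at 50 MB/sec.
my $stream_s   = (100_000_000 * 50) / (50 * 1024 * 1024);

printf "seeking: %.0f days, streaming: %.0f seconds\n",
    $seek_days, $stream_s;
```

About 115 days versus about 95 seconds. Seeks, not bytes, are what kill you.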

If you have two of these files, both in sorted form, it really, really makes sense to read them both and have some logic to advance in parallel. Trust me on this.
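The "advance in parallel" logic is a classic merge join: one sequential pass over each sorted file, no seeks at all. A minimal sketch, with made-up data held in in-memory filehandles so it runs as-is (real code would open the two sorted files instead), summing counts for keys that appear in both:

```perl
use strict;
use warnings;

# Two sorted "n-gram<TAB>count" inputs, opened as in-memory files
# for demonstration.  Replace with open ..., '<', $filename in practice.
my $data_a = "brown fox\t7\nquick brown\t5\nred dog\t2\n";
my $data_b = "brown fox\t4\nlazy dog\t9\nquick brown\t1\n";
open my $fa, '<', \$data_a or die $!;
open my $fb, '<', \$data_b or die $!;

my @both;                      # keys seen in both inputs, counts summed
my $la = <$fa>;
my $lb = <$fb>;
while (defined $la and defined $lb) {
    chomp(my $x = $la);
    chomp(my $y = $lb);
    my ($ka, $ca) = split /\t/, $x;
    my ($kb, $cb) = split /\t/, $y;
    if    ($ka lt $kb) { $la = <$fa> }    # key only in A: advance A
    elsif ($ka gt $kb) { $lb = <$fb> }    # key only in B: advance B
    else {                                # key in both: combine, advance both
        push @both, "$ka\t" . ($ca + $cb);
        $la = <$fa>;
        $lb = <$fb>;
    }
}
print "$_\n" for @both;
```

Each file is read exactly once, front to back, so the whole job runs at streaming speed no matter how many keys you look up.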

