PerlMonks  

Re: a large text file into hash

by tilly (Archbishop)
on Jan 27, 2011 at 18:55 UTC ( [id://884633] )


in reply to Reaped: a large text file into hash

You can't load it into an in-memory hash because you have too much data. You could tie the hash to disk, but that will take a long time to load.
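
Just to illustrate what "tie the hash to disk" means, here is a minimal sketch using DB_File (one of several DBM modules you could tie to; the file name and keys are made up). Every store and fetch goes to disk, which is exactly why loading a billion entries this way is slow:

    use strict;
    use warnings;
    use Fcntl;      # for O_RDWR and O_CREAT
    use DB_File;

    # Keys and values live in an on-disk B-tree instead of RAM.
    my %ngrams;
    tie %ngrams, 'DB_File', 'ngrams.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
        or die "Cannot tie ngrams.db: $!";

    $ngrams{'in the'} = '17,42,99';    # written to disk, not held in memory
    print $ngrams{'in the'}, "\n";

    untie %ngrams;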

Did you try my suggestion of using Search::Dict?

I assume that you want it in a hash because you are planning on doing further processing on it. If that is the case, then I am going to strongly recommend that you try to think about your processing in terms of the map-reduce paradigm that I suggested, because your data volume is high enough that you really will benefit from doing that.

It takes practice to realize that, for instance, you can join two data sets by mapping each to key/value pairs where the key is the thing you are joining on, while the value is the original value plus a tag saying which data set it came from. Then sort the output. Then it is easy to pass through the sorted data and do the join (sketched below).
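
To make that concrete, here is a rough sketch of the tag/sort/join idea. The file names, the tab-delimited layout, and the call to the external sort command are my own assumptions for illustration:

    use strict;
    use warnings;

    # Map step: key each record on the join field and tag it with its source.
    open my $out, '>', 'tagged.txt' or die "Cannot write tagged.txt: $!";
    for my $src ( [ A => 'left.txt' ], [ B => 'right.txt' ] ) {
        my ($tag, $file) = @$src;
        open my $in, '<', $file or die "Cannot open $file: $!";
        while (my $line = <$in>) {
            chomp $line;
            my ($key, $rest) = split /\t/, $line, 2;
            print {$out} "$key\t$tag\t$rest\n";
        }
    }
    close $out;

    # Sort step: an external sort keeps memory use flat no matter how big the file is.
    system('sort', '-o', 'tagged_sorted.txt', 'tagged.txt') == 0
        or die "sort failed";

    # Join step: one pass over the sorted stream, pairing A and B records per key.
    open my $in, '<', 'tagged_sorted.txt' or die "Cannot open tagged_sorted.txt: $!";
    my ($current, @from_a, @from_b);
    my $emit = sub {
        return unless defined $current;
        for my $left (@from_a) {
            print "$current\t$left\t$_\n" for @from_b;
        }
        @from_a = @from_b = ();
    };
    while (my $line = <$in>) {
        chomp $line;
        my ($key, $tag, $rest) = split /\t/, $line, 3;
        if (!defined $current or $key ne $current) {
            $emit->();
            $current = $key;
        }
        $tag eq 'A' ? push(@from_a, $rest) : push(@from_b, $rest);
    }
    $emit->();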

You have to learn how to use this toolkit effectively. But it can handle any kind of problem you need it to - you just need to figure out how to use it. And your solutions will scale just fine to the data volume that you have.

Re^2: a large text file into hash
by perl_lover_always (Acolyte) on Jan 28, 2011 at 10:37 UTC
    Thanks, I'm trying to use your suggested method! The first step created an 18 GB file, and sorting it took a lot of time! I could finally sort it, and I'm now on the third step, which is creating the last file of $ngram: @line_number, and trying to see how I can access it using Search::Dict.
    My main usage is that I can have two big files in that form and then calculate some statistics, such as Mutual Information, from them. So as long as I have the line numbers of each n-gram for both files, I'll try to see how to handle it using Search::Dict.
      Let's see: 18 GB with a billion rows, so let's say 30 passes, each of which has to both read and write; streaming data at 50 MB/sec, that takes about 6 hours. It should not be doing all of those passes to disk, and your disk drive is likely to be faster than that. But in any case that is longer than I thought it would take. Sorry.
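      (Worked out: 18 GB × 30 passes × 2 for read plus write ≈ 1,080 GB of I/O; at 50 MB/sec that is about 21,600 seconds, or roughly 6 hours.)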

      The last step should make the file much smaller. How much smaller depends on your data.

      Anyway, back to Search::Dict: it works by doing a binary search for the n-gram you are looking up, so you can give it the n-gram and it will find the matching line for you. However, it is a binary search. If you have a billion rows, it has to do about 30 probes. Some of those will be cached, but a lot will be seeks. Remember that seeks take about 0.005 seconds on average, so if 20 of those are seeks, that is 0.1 seconds per lookup. That doesn't sound like much, until you consider that 100,000,000 of them will take about 115 days.
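
      For reference, a single lookup with Search::Dict could look like the minimal sketch below. The file name and the n-gram are made up, and the index file is assumed to be sorted on the n-gram with a tab before the line-number list:

          use strict;
          use warnings;
          use Search::Dict;

          my $ngram = 'in the';    # hypothetical key to look up
          open my $fh, '<', 'ngram_index_sorted.txt'
              or die "Cannot open ngram_index_sorted.txt: $!";

          # Binary search: position the handle at the first line >= $ngram.
          look($fh, $ngram) >= 0 or die "look failed: $!";

          if (defined(my $line = <$fh>)) {
              chomp $line;
              my ($key, $line_numbers) = split /\t/, $line, 2;
              print "$line_numbers\n" if defined $key and $key eq $ngram;
          }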

      By contrast, 100 million 50-byte rows is 5 GB. If you stream data at 50 MB/second (current drives tend to be faster than that, though your code may be slower), you'll need under 2 minutes to stream through that file.
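      (Worked out: 100,000,000 rows × 50 bytes = 5 GB; 5 GB ÷ 50 MB/sec = 100 seconds, which is well under 2 minutes.)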

      If you have two of these files, both in sorted form, it really, really makes sense to read them both and advance through them in parallel. Trust me on this.
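
      A minimal sketch of that parallel advance (a merge join of two sorted files) is below. The file names and the ngram-tab-count layout are assumptions for illustration, and both files are assumed to be sorted in an order consistent with Perl's cmp:

          use strict;
          use warnings;

          open my $fh_a, '<', 'counts_a_sorted.txt' or die "Cannot open counts_a_sorted.txt: $!";
          open my $fh_b, '<', 'counts_b_sorted.txt' or die "Cannot open counts_b_sorted.txt: $!";

          # Read one record as [ ngram, payload ], or return undef at end of file.
          my $next = sub {
              my $fh = shift;
              defined(my $line = <$fh>) or return;
              chomp $line;
              return [ split /\t/, $line, 2 ];
          };

          my $rec_a = $next->($fh_a);
          my $rec_b = $next->($fh_b);
          while ($rec_a and $rec_b) {
              my $cmp = $rec_a->[0] cmp $rec_b->[0];
              if    ($cmp < 0) { $rec_a = $next->($fh_a) }    # n-gram only in file A
              elsif ($cmp > 0) { $rec_b = $next->($fh_b) }    # n-gram only in file B
              else {
                  # Present in both files: this is where the statistic would be computed.
                  print join("\t", $rec_a->[0], $rec_a->[1], $rec_b->[1]), "\n";
                  $rec_a = $next->($fh_a);
                  $rec_b = $next->($fh_b);
              }
          }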
