Re: Optimizing Iterating over a giant hash

by jethro (Monsignor)
on Dec 25, 2009 at 23:47 UTC


in reply to Optimizing Iterating over a giant hash

You might use a divide-and-conquer approach.

Store the data sequentially in files, one file for each distinct $entry value. If that number is too big (more than the number of files you can have open at any one time), you might group several entry values per file with a suitable scheme, e.g. a cheap hash of the value.

The difference from DBM::Deep is that you write to these files sequentially, which should be much faster because of buffering. A sketch of this write phase is below.
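A minimal sketch of that write phase, assuming tab-separated records arriving on STDIN; the bucket count and the bucket_N.tmp naming are made up for illustration:

    use strict;
    use warnings;

    my $buckets = 16;   # assumption: comfortably below the open-file limit
    my %fh;             # one handle per bucket, opened lazily

    while (my $line = <STDIN>) {            # assume one record per line
        chomp $line;
        # hypothetical record format: entry<TAB>rest
        my ($entry) = split /\t/, $line, 2;
        # cheap checksum of $entry groups several entry values per bucket
        my $bucket = unpack("%32C*", $entry) % $buckets;
        unless ($fh{$bucket}) {
            open $fh{$bucket}, '>>', "bucket_$bucket.tmp"
                or die "Can't open bucket_$bucket.tmp: $!";
        }
        print { $fh{$bucket} } "$line\n";   # sequential, buffered append
    }
    close $_ for values %fh;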

After that, you can reread and work through the files one by one. A single file should now fit into memory without swapping (which is probably what makes your application slow at the moment).
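And a matching sketch of that second pass, loading one bucket file at a time into a hash; process_group() is a hypothetical stand-in for whatever your original loop does per entry:

    use strict;
    use warnings;

    sub process_group {                  # stand-in for the real per-entry work
        my ($entry, $records) = @_;
        printf "%s: %d records\n", $entry, scalar @$records;
    }

    for my $file (glob "bucket_*.tmp") {
        my %hash;                        # only one bucket in memory at a time
        open my $in, '<', $file or die "Can't open $file: $!";
        while (my $line = <$in>) {
            chomp $line;
            my ($entry, $rest) = split /\t/, $line, 2;
            push @{ $hash{$entry} }, $rest;
        }
        close $in;
        process_group($_, $hash{$_}) for sort keys %hash;
        unlink $file or warn "Can't remove $file: $!";
    }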

