Re: Optimizing Iterating over a giant hash

by jethro (Monsignor)
on Dec 25, 2009 at 23:47 UTC


in reply to Optimizing Iterating over a giant hash

You might use a divide-and-conquer method.

Store the data sequentially into files, one file for each distinct $entry value. If there are too many distinct values (more than the number of files you can keep open at any one time), group the entry values into buckets with a suitable hashing scheme (see the sketch below).
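
A minimal sketch of that first pass, assuming records arrive as tab-separated ($entry, $key, $value) triples on STDIN; the bucket count of 64, the bucket_*.tmp file names, and the byte-checksum bucketing are illustrative choices, not from the original post:

    use strict;
    use warnings;

    my $NUM_BUCKETS = 64;    # illustrative; keep below your open-file limit

    my %fh;                  # one write handle per bucket, opened lazily

    sub bucket_fh {
        my ($entry) = @_;
        # cheap byte checksum of the entry string, folded into a bucket number
        my $bucket = unpack('%32C*', $entry) % $NUM_BUCKETS;
        $fh{$bucket} ||= do {
            open my $out, '>', "bucket_$bucket.tmp"
                or die "open bucket_$bucket.tmp: $!";
            $out;
        };
        return $fh{$bucket};
    }

    while (my $line = <STDIN>) {
        chomp $line;
        my ($entry, $key, $value) = split /\t/, $line, 3;
        next unless defined $value;    # skip malformed lines
        print { bucket_fh($entry) } "$entry\t$key\t$value\n";
    }
    close $_ for values %fh;

Because every record for a given $entry lands in the same bucket file, each bucket can later be processed independently of the others.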

The difference from DBM::Deep is that you write to these files sequentially, so the writes should be much faster because of buffering.

After that you can reread and work through the files one by one. A single file should now fit into memory without resorting to swapping (which is probably what makes your application slow at the moment).
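
A minimal sketch of that second pass, reading back the bucket files written above; process_chunk() is a hypothetical stand-in for whatever work the original loop does per entry:

    use strict;
    use warnings;

    for my $file (glob 'bucket_*.tmp') {
        open my $in, '<', $file or die "open $file: $!";
        my %chunk;    # only this bucket's slice of the giant hash is in RAM
        while (my $line = <$in>) {
            chomp $line;
            my ($entry, $key, $value) = split /\t/, $line, 3;
            next unless defined $value;
            $chunk{$entry}{$key} = $value;
        }
        close $in;
        process_chunk(\%chunk);    # replace with the real per-entry work
    }

    sub process_chunk {
        my ($chunk) = @_;
        # stub only: report how many keys each entry collected
        printf "%s: %d keys\n", $_, scalar keys %{ $chunk->{$_} }
            for sort keys %$chunk;
    }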

