in reply to Optimizing Iterating over a giant hash
You might use a divide-and-conquer approach.
Store the data sequentially into files, one file for each distinct $entry value. If that number is too big (bigger than the number of files you can have open at once), group the entry values into a smaller number of bucket files with a suitable scheme, as sketched below.
The difference from DBM::Deep is that you write to these files sequentially, which should be much faster because of buffering.
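
A minimal sketch of the write phase, assuming one tab-separated "entry\tvalue" record per input line; the bucket count, the file names, and using Digest::MD5 to group entry values are my own choices, not something taken from your code:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my $buckets = 64;    # keep this well below your open-file limit
    my @fh;
    for my $i (0 .. $buckets - 1) {
        open $fh[$i], '>', "bucket_$i.txt" or die "bucket_$i.txt: $!";
    }

    while (my $line = <STDIN>) {                      # assumed record format: "entry\tvalue\n"
        my ($entry) = split /\t/, $line, 2;
        my $i = unpack('N', md5($entry)) % $buckets;  # stable grouping of entry values
        print { $fh[$i] } $line;                      # sequential, buffered write
    }
    close $_ for @fh;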
After that, you can reread and work through the files one by one. Each file should now fit into memory on its own, so you avoid the swapping that is probably what makes your application slow at the moment.
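
And a sketch of the second pass, under the same assumptions about the record format; process() just stands in for whatever you currently do per key of the giant hash:

    use strict;
    use warnings;

    for my $file (glob 'bucket_*.txt') {
        my %hash;                                  # only one bucket's data in memory at a time
        open my $in, '<', $file or die "$file: $!";
        while (my $line = <$in>) {
            chomp $line;
            my ($entry, $value) = split /\t/, $line, 2;
            push @{ $hash{$entry} }, $value;       # rebuild just this bucket's slice of the hash
        }
        close $in;
        process(\%hash);
    }

    sub process {
        my ($href) = @_;
        # ... the per-entry work from your original loop goes here ...
    }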