in reply to Re^2: Parallel::Forkmanager and large hash, running out of memory
in thread Parallel::Forkmanager and large hash, running out of memory
A problem like that one could be handled by scanning all the files ahead of time and pushing the lookup values into a database table. This avoids having each worker "look for" the answers it wants, which could largely defeat your efforts at parallelization. A pre-scanner could loop through the directory, query the table to see if it has seen a particular file before, and if not, grab the lookups and store them. On each run, it would only consider new files. (In the same table, you could also note whether a particular file had already been processed, and something like a SHA-1 hash could be used to recognize changes.)
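A minimal sketch of such a pre-scanner, using only core modules. A real implementation would use a database table (e.g. via DBI); here a tied SDBM file stands in for it, and the directory, file names, and `scan_new_files` helper are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;
use Digest::SHA;
use File::Temp qw(tempdir);

# Create a scratch directory with two sample data files (illustration only).
my $dir = tempdir(CLEANUP => 1);
for my $name (qw(a.txt b.txt)) {
    open my $fh, '>', "$dir/$name" or die "open: $!";
    print {$fh} "data for $name\n";
    close $fh;
}

# The "seen" table: path => SHA-1 digest, persisted on disk between runs.
my %seen;
tie %seen, 'SDBM_File', "$dir/seen_db", O_RDWR | O_CREAT, 0666
    or die "tie: $!";

# Return only the files that are new or whose contents have changed
# since the last scan; everything already recorded is skipped.
sub scan_new_files {
    my ($scan_dir) = @_;
    my @new;
    for my $path (sort glob "$scan_dir/*.txt") {
        my $sha = Digest::SHA->new(1)->addfile($path)->hexdigest;
        next if defined $seen{$path} && $seen{$path} eq $sha;  # unchanged
        $seen{$path} = $sha;   # record (or update) the digest
        push @new, $path;      # queue for the single-process handler
    }
    return @new;
}

my @first  = scan_new_files($dir);   # both files are new
my @second = scan_new_files($dir);   # nothing has changed

printf "first pass: %d new, second pass: %d new\n",
    scalar @first, scalar @second;
# prints "first pass: 2 new, second pass: 0 new"
```

Because the digest is stored alongside the path, re-running the scanner after a file is modified would pick that file up again automatically.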
This, once again, reduces the problem to a single-process handler that can be run in parallel with itself on the same and/or different systems.
By all means, if you have now hit upon a procedure that works, I am not suggesting that you rewrite it.