Re: Parallel::Forkmanager and large hash, running out of memory

by sundialsvc4 (Abbot)
on Apr 24, 2013 at 19:59 UTC


in reply to Parallel::Forkmanager and large hash, running out of memory

Strange as it may initially seem to suggest it, perhaps the very best approach to this problem would be to write a simple-minded program that finds one TSV file and converts it to RDF, and then, if necessary, to spawn (say, from the command line) as many concurrent copies of that one simple-minded program as you have CPUs.   Reduce the problem to a simple subset that can be parallelized among a collection of one or more processes, none of which has to care whether other instances of itself exist.   One solitary instance can solve the problem; n instances merely do it faster.   Q.E.D.
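
A minimal sketch of that idea, assuming a hypothetical per-file converter script named tsv2rdf.pl (only the fan-out driver is shown; how the work is divided is up to you):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $ncpus = shift // 4;          # number of worker processes to spawn
    my @files = glob('*.tsv');       # the work to be divided

    # Deal the files out round-robin, one bucket per worker.
    my @bucket;
    push @{ $bucket[ $_ % $ncpus ] }, $files[$_] for 0 .. $#files;

    for my $b (@bucket) {
        next unless $b && @$b;
        defined( my $pid = fork() ) or die "fork failed: $!";
        if ( $pid == 0 ) {           # child: convert its share of the files
            for my $f (@$b) {
                system( 'perl', 'tsv2rdf.pl', $f ) == 0
                    or warn "tsv2rdf.pl failed on $f\n";
            }
            exit 0;
        }
    }
    1 while wait() != -1;            # parent: reap all the children

Much the same effect can be had with no driver at all, e.g. ls *.tsv | xargs -n 1 -P 8 perl tsv2rdf.pl with an xargs that supports -P.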


Re^2: Parallel::Forkmanager and large hash, running out of memory
by mabossert (Beadle) on Apr 24, 2013 at 21:57 UTC

    I wish it were that simple... Unfortunately, the needed lookup values are distributed across a couple thousand files, and it is not possible to predict which file a given value will be found in.

    As-is, I am running the code in parallel using Parallel::ForkManager, which seems to be working just fine... as long as I don't run out of memory ;-)
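
    For reference, the basic Parallel::ForkManager pattern being described looks roughly like this (process_one_file is a hypothetical stand-in for the real worker). Note that every forked child starts with a copy of whatever the parent has already built up, and fork()'s copy-on-write sharing erodes quickly in Perl because reference counting writes to the shared pages; a large hash in the parent is therefore effectively duplicated per worker.

        use strict;
        use warnings;
        use Parallel::ForkManager;

        my $pm = Parallel::ForkManager->new(8);   # at most 8 children at once

        for my $file (glob('*.tsv')) {
            $pm->start and next;      # parent gets the child's PID and moves on
            # --- child process from here down ---
            process_one_file($file);  # hypothetical per-file worker
            $pm->finish;              # child exits here
        }
        $pm->wait_all_children;

        sub process_one_file {
            my ($file) = @_;
            # TSV-to-RDF conversion would go here
        }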

      A problem like that could be handled by scanning all of the files ahead of time and pushing the lookup values into a database table.   That would avoid the need to search for the answers you want at processing time, which could otherwise largely defeat your efforts at parallelization.   A pre-scanner could loop through the directory, query the table to see whether it has seen a particular file before, and, if not, grab that file's lookups and store them; each run would then only consider new files.   (In the same table, you could also note whether a particular file has already been processed, and something like an SHA-1 digest could be used to recognize changed files.)
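
      A minimal sketch of such a pre-scanner, assuming SQLite through DBI and Digest::SHA for the digests (the seen table and the store_lookups routine are invented for illustration):

          use strict;
          use warnings;
          use DBI;
          use Digest::SHA;

          my $dbh = DBI->connect( 'dbi:SQLite:dbname=prescan.db', '', '',
                                  { RaiseError => 1 } );
          $dbh->do(q{
              CREATE TABLE IF NOT EXISTS seen (
                  path      TEXT PRIMARY KEY,
                  sha1      TEXT NOT NULL,
                  processed INTEGER NOT NULL DEFAULT 0
              )
          });

          for my $path (glob('data/*.tsv')) {
              my $sha1 = Digest::SHA->new(1)->addfile($path)->hexdigest;

              my ($known) = $dbh->selectrow_array(
                  'SELECT sha1 FROM seen WHERE path = ?', undef, $path );
              next if defined $known && $known eq $sha1;   # seen and unchanged

              # New or changed file: (re)record it and mark it unprocessed.
              $dbh->do( q{INSERT OR REPLACE INTO seen (path, sha1, processed)
                          VALUES (?, ?, 0)}, undef, $path, $sha1 );

              store_lookups($dbh, $path);   # extract this file's lookup values
          }

          sub store_lookups {
              my ($dbh, $path) = @_;
              # parse the TSV and insert its lookup values here
          }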

      This, once again, reduces the problem to a single-process handler that can be run in parallel with itself on the same and/or different systems.

      That said, if you have now hit upon a procedure that works, I am not suggesting that you rewrite it.
