Re^3: Using functional programming to reduce the pain of parallel-execution programming (with threads, forks, or name your poison)

by tilly (Archbishop)
on Oct 24, 2006 at 21:57 UTC


in reply to Re^2: Using functional programming to reduce the pain of parallel-execution programming (with threads, forks, or name your poison)
in thread Using functional programming to reduce the pain of parallel-execution programming (with threads, forks, or name your poison)

Yes, that only works with command line programs.

But it is pretty easy to convert function calls to command line programs.

In any case, I wouldn't use Parallel::Simple, because I have always valued the ability to control how many children I have at once. (Generally there is a fairly fixed amount of potential parallelism. Using that many kids gets maximum throughput; go higher or lower and your overall throughput drops.) Therefore, if you really don't want to convert function calls, I'd use something like Parallel::ForkManager and build my own solution.
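
For illustration, a minimal sketch of that approach, assuming a hypothetical per-job function do_work() and an arbitrary concurrency cap of 4:

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my @jobs = (1 .. 20);                     # hypothetical work items
    my $pm   = Parallel::ForkManager->new(4); # cap the number of children

    for my $job (@jobs) {
        # start() waits until a slot is free, then forks; it returns
        # the child's pid in the parent and 0 in the child.
        $pm->start and next;    # parent moves on to the next job
        do_work($job);          # child does the work
        $pm->finish;            # child exits
    }
    $pm->wait_all_children;

    sub do_work { my ($job) = @_; sleep 1 }   # hypothetical stand-in

Tuning the cap to the machine's actual parallelism is the point: the constructor argument is the only knob you need.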


Re^4: Using functional programming to reduce the pain of parallel-execution programming (with threads, forks, or name your poison)
by tphyahoo (Vicar) on Oct 25, 2006 at 16:12 UTC
    > Therefore if you really don't want to convert function calls then I'd use something like Parallel::ForkManager and build my own solution.

    I did this at Using DBM::Deep and Parallel::ForkManager for a generalized parallel hashmap function builder (followup to "reducing pain of parallelization with FP")

    The difficulty was getting my mapped result set back into a hashref that I could return at the end of the function, after all my child processes finished. I wound up "re-agglomerating" my hash by storing the result of each function call on the hard drive with DBM::Deep.

    I ended up needing a separate DBM::Deep file for each element of the hash; I was unable to make this work with a single DBM::Deep file, even though I tried to take advantage of its locking support.
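
    A minimal sketch of that per-element arrangement (the input hash, file names, and expensive_fn() are hypothetical placeholders):

        use strict;
        use warnings;
        use DBM::Deep;
        use Parallel::ForkManager;

        my %input = (a => 1, b => 2, c => 3);   # hypothetical input
        my $pm = Parallel::ForkManager->new(4);

        for my $key (keys %input) {
            $pm->start and next;
            # One DBM::Deep file per key sidesteps contended
            # writes to a single shared file.
            my $db = DBM::Deep->new("result_$key.db");
            $db->{result} = expensive_fn($input{$key});
            $pm->finish;
        }
        $pm->wait_all_children;

        # Re-agglomerate the children's files into one hash.
        my %output;
        for my $key (keys %input) {
            my $db = DBM::Deep->new("result_$key.db");
            $output{$key} = $db->{result};
        }

        sub expensive_fn { my ($x) = @_; return $x * 2 }  # hypothetical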

    I wonder if there are other ways, including perhaps your earlier suggestion to use IPC::Open3, which might bypass having to store anything on the hard drive at all. To be continued...
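
    One disk-free possibility, sketched under the assumption that each child returns one key's result: give every child an ordinary pipe and serialize the result back through it with Storable. (expensive_fn() is a hypothetical stand-in, and unlike the version above this sketch does not cap concurrency.)

        use strict;
        use warnings;
        use Storable qw(freeze thaw);

        my %input = (a => 1, b => 2, c => 3);   # hypothetical input
        my (%output, @readers);

        for my $key (keys %input) {
            pipe(my $reader, my $writer) or die "pipe: $!";
            defined(my $pid = fork()) or die "fork: $!";
            if ($pid == 0) {                    # child
                close $reader;
                print {$writer} freeze({ $key => expensive_fn($input{$key}) });
                close $writer;
                exit 0;
            }
            close $writer;                      # parent keeps the read end
            push @readers, $reader;
        }

        for my $reader (@readers) {
            local $/;                           # slurp the child's whole payload
            my $pair = thaw(<$reader>);
            @output{ keys %$pair } = values %$pair;
        }
        wait() for 1 .. keys %input;            # reap the children

        sub expensive_fn { my ($x) = @_; return $x * 2 }  # hypothetical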
