in reply to Parallel::Runner and Amazon SQS Issue
Not taking the time to analyze this thing too closely ... what you need to do is fork off a controlled number of workers (threads or child processes), each of which (by means of a properly locked, shared variable) fetches “the next chunk” of data and sends it. All of the workers do this until they discover that there are no more chunks to send, at which point they all terminate. When all of the children have terminated, the parent process ends.
The two control parameters for this process are: how many children do you want to fork, and how many rows of data do you want each of them to send at one time?
Notice that the children, once spawned, persist until the entire job is done. At the top of the loop, each one must lock the shared variable, test its value, and then either “unlock and exit” if the job is done, or “increment the value and unlock” if it is not. It then retrieves, packages, and sends its chunk, and loops back to do it again. (There are several ways to do this, and many packages to choose from; only the concept is what I’m driving at here.) Taken together, it is fairly unpredictable which worker will send the “next” chunk, but, working together as a team for however long it takes, they will get the job done.
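A minimal sketch of that loop, here in Python purely to illustrate the concept (the chunk size, worker count, and `send_chunk` stub are all made up for the example; in Perl you would reach for something like `threads::shared` with `lock()`, or a fork-based pool):

```python
import threading

CHUNK_SIZE = 100   # rows per send (illustrative control parameter)
NUM_WORKERS = 4    # how many children to spawn (illustrative)
TOTAL_ROWS = 1000  # pretend size of the data set

next_row = 0                 # shared "next chunk" cursor
lock = threading.Lock()      # guards next_row
sent_chunks = []             # records each (start, end) range "sent"

def send_chunk(start, end):
    # Stand-in for packaging rows start..end-1 and sending them to the queue.
    sent_chunks.append((start, end))

def worker():
    global next_row
    while True:
        # Top of the loop: lock the shared variable and test it.
        with lock:
            if next_row >= TOTAL_ROWS:
                return                      # job done: unlock and exit
            start = next_row
            next_row += CHUNK_SIZE          # claim a chunk: increment and unlock
        # Outside the lock: retrieve, package, and send the claimed chunk.
        send_chunk(start, min(start + CHUNK_SIZE, TOTAL_ROWS))

workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()
for w in workers:
    w.join()                                # parent waits for all children

print(len(sent_chunks))  # → 10
```

Note that only the claim on the cursor is done under the lock; the actual send happens outside it, so the workers overlap their sends, and which worker sends the “next” chunk is unpredictable, exactly as described above.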
Re^2: Parallel::Runner and Amazon SQS Issue
by stonecolddevin (Parson) on Sep 13, 2012 at 16:11 UTC