Reliable asynchronous processing by Codon (Friar)
on Jul 07, 2005 at 21:31 UTC
Codon has asked for the wisdom of the Perl Monks concerning the following question:
I am on the eve of designing an extensive application that will need to be extremely efficient. I am going to need to do asynchronous (parallel) processing. The results of these sub-processes will need to be collected and "collated" by the parent process and sent back to the user. I am looking for some technology that will be able to support this.
I cannot use fork() because of the overhead of spawning (and reaping) multiple real processes, and Perl threads are not ready for prime time. This seems to leave me in a bit of a bind.
Do any of the monks have suggestions for technology that can support this model?
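For concreteness, here is a minimal sketch of the scatter-gather model described above, using plain fork() plus pipes, with IO::Select to collect the children's answers as they arrive; the task names and the fake "work" are placeholders, not part of the original question:

```perl
use strict;
use warnings;
use IO::Select;

my @tasks = qw(alpha beta gamma);    # hypothetical task names
my $sel   = IO::Select->new;
my %task_for;                        # filehandle => task name

for my $task (@tasks) {
    pipe(my $reader, my $writer) or die "pipe: $!";
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: do the work and write the result up the pipe.
        close $reader;
        print {$writer} "result for $task\n";    # stand-in for real work
        close $writer;
        exit 0;
    }
    # Parent: keep the read end and remember which task it belongs to.
    close $writer;
    $sel->add($reader);
    $task_for{$reader} = $task;
}

# Collate replies in whatever order they become ready.
my %collated;
while ($sel->count) {
    for my $fh ($sel->can_read) {
        chomp(my $line = <$fh>);
        $collated{ $task_for{$fh} } = $line;
        $sel->remove($fh);
        close $fh;
    }
}
wait for @tasks;    # reap the children
print "collated ", scalar(keys %collated), " results\n";
```

The parent never blocks on any single slow child: IO::Select returns whichever pipes are readable, so results are gathered asynchronously even though the workers are ordinary processes.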
Update: My concern about fork() is really about the amount of data I intend to have cached in shared memory. When a child exits and is reaped, Perl's global destruction walks every variable to free it, including memory that is actually shared with the parent; touching those copy-on-write pages forces the Linux kernel to copy them just so Perl can clear them. So the concern is less the fork() itself than the teardown that happens when children are reaped.
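One common way around that teardown cost, sketched below under the assumption that the children only read the cached data, is to have each child leave via POSIX::_exit(), which skips END blocks, DESTROY methods, and global destruction, so exiting never dirties the copy-on-write pages (the %big hash here is a stand-in for the real cache):

```perl
use strict;
use warnings;
use POSIX qw(_exit);

# Stand-in for the large cache built before fork(); after fork() the
# child shares these pages with the parent via copy-on-write.
my %big = map { $_ => 'x' x 100 } 1 .. 10_000;

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # Child: read-only access does not copy the shared pages.
    my $n = scalar keys %big;
    # _exit() bypasses Perl's global destruction entirely, so exiting
    # never touches (and therefore never copies) the shared pages.
    _exit($n == 10_000 ? 0 : 1);
}

waitpid($pid, 0);
my $child_ok = ($? >> 8) == 0;
print $child_ok ? "child exited cleanly via _exit\n" : "child failed\n";
```

The trade-off is that anything normally done at exit (flushing filehandles, running destructors) is skipped, so the child must finish its own cleanup explicitly before calling _exit().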
Sr. Software Engineer, DAS Lead