Re: gigantic daemons
by esh (Pilgrim) on Sep 03, 2003 at 05:58 UTC
If you happen to be on Linux/Unix, and you happen to be running all the daemons on the same server, and they all happen to be forked from the same parent process, and they all happen to load a bunch of the same data, then you may be able to share memory by loading the data before the fork. This reduces the total memory footprint on the system.

I use this practice to good effect with mod_perl processes by loading up dozens of megabytes of cached data in the parent process initialization before Apache forks off the children to handle the incoming HTTP requests. Even though there are 30 child processes, they all share the same memory when accessing the cached data.

Note that Linux/Unix fork shares the memory (copy-on-write) until one of the processes decides to write to it, at which point the operating system copies the data into a new block and lets the process muck in its own sandbox without affecting the other processes, which still see the old copy of the data.

If the above assumptions do not happen to apply to your situation, I recommend you look seriously at the cost of developer time to optimize the memory vs. the cost to purchase more memory. Perl is notoriously memory hungry.

-- Eric Hammond
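A minimal sketch of the load-before-fork pattern, for anyone who wants to try it outside mod_perl. The build_cache and serve_requests subs here are hypothetical stand-ins for whatever your daemons actually do; the point is only the ordering: populate the big structure in the parent, then fork, and have the children treat it as read-only so its pages stay shared.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load the large shared data in the parent, BEFORE forking, so the
    # children inherit the same pages copy-on-write.
    my %cache = build_cache();

    my @pids;
    for my $i (1 .. 30) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: read-only access keeps the pages shared; writing
            # to %cache here would trigger a private copy of those pages.
            serve_requests(\%cache);
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;

    sub build_cache {
        # Hypothetical stand-in for real initialization, e.g. slurping
        # dozens of megabytes of data files into memory.
        return map { ("key$_" => "value$_") } 1 .. 100_000;
    }

    sub serve_requests {
        my ($cache) = @_;
        # Hypothetical stand-in for the daemon's work loop: reads only.
        my $v = $cache->{key42};
    }

One caveat worth knowing: Perl's own housekeeping (reference counts, hash reorganization) can dirty pages even on logically read-only access, so the sharing in practice is good but not perfect.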