PerlMonks  

Re: gigantic daemons

by esh (Pilgrim)
on Sep 03, 2003 at 05:58 UTC ( [id://288518] )


in reply to gigantic daemons

If you happen to be on Linux/Unix, and you happen to be running all the daemons on the same server, and they all happen to be forked from the same parent process, and they all happen to load a bunch of the same data, then you may be able to share memory by loading the data before the fork. This reduces the total memory footprint on the system.

I use this practice to good effect with mod_perl processes by loading up dozens of megabytes of cached data in the parent process initialization before Apache forks off the children to handle the incoming HTTP requests. Even though there are 30 child processes, they all share the same memory when accessing the cached data.

Note that on Linux/Unix, fork shares memory between parent and child until one of the processes writes to it. At that point the operating system copies the affected data into a new block and lets the writing process muck about in its own sandbox without affecting the other processes, which still see the original copy of the data.
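The preload-then-fork pattern described above can be sketched roughly as follows (load_big_dataset and do_work are illustrative stand-ins, not anything from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Load the shared data once, in the parent, BEFORE forking.
# Each child inherits these pages and shares them with the parent
# until it writes to them (copy-on-write).
my %cache = load_big_dataset();

my @pids;
for my $i (1 .. 4) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: read-only access keeps the pages shared.
        do_work(\%cache, $i);
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;

sub load_big_dataset {
    # Stand-in for loading a large cache in the parent.
    return map { $_ => "value $_" } 1 .. 1000;
}

sub do_work {
    my ($cache, $n) = @_;
    print "child $n sees ", scalar keys %$cache, " keys\n";
}
```

If each child had called load_big_dataset() itself after the fork, every process would carry its own private copy of the data instead of sharing one set of pages.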

If the above assumptions do not happen to apply to your situation, I recommend you look seriously at the cost of developer time to optimize the memory vs. the cost to purchase more memory.

Perl is notoriously memory hungry.

-- Eric Hammond

Re: Re: gigantic daemons
by zby (Vicar) on Sep 03, 2003 at 07:19 UTC
    If you happen to be on Linux/Unix, and you happen to be running all the daemons on the same server, and they all happen to be forked from the same parent process, and they all happen to load a bunch of the same data, then you may be able to share memory by loading the data before the fork.
    The terminology here is Copy On Write. Linux does it, but AFAIK not all other Unixes do.
Re: Re: gigantic daemons
by Anonymous Monk on Sep 03, 2003 at 06:28 UTC
    Thanks for the information. I gather that only one copy of perl is actually loaded, am I right?

    Running on Linux -- check
    Same server (well, same host) -- check
    Same parent -- check
    Same data -- Well, not exactly, but the amount of data these daemons use is small.

    I am not loading entire web pages, I am only doing HEAD requests, and one-at-a-time at that. The data storage is on a PostgreSQL server. Could it be I am misleading myself about actual memory consumption? I am looking at "ps -aux" and "gnome-system-monitor". Is the memory shown perhaps not actually physically allocated (or swapped, maybe)?

      I gather that only one copy of perl is actually loaded, am I right?
      Yes, one copy of the perl compiler/interpreter and probably one copy of your Perl source code (unless you "require", "do", or other similar things at run time).

      Could it be I am misleading myself about actual memory comsumption?
      Yes, it is possible that most of the parent/child memory listed by ps is shared amongst them.

      I tend to use the "top" command to watch the shared memory usage. It seems that ps should have an option to show shared memory, but I couldn't get it to work right off.

      What really matters is how much free memory you have on the system when everything is running. Try a before and after snapshot using something like "free", and make sure to look at the free memory +/- buffers/cache.
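      On Linux, the shared-page count that "top" reports can also be read directly from /proc/<pid>/statm. A rough sketch (field layout per the proc(5) man page; Linux-specific, so it won't work elsewhere):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_PAGESIZE);

# Report resident and shared memory for a process, read from
# /proc/<pid>/statm. Per proc(5), the fields are:
#   size resident shared text lib data dt   (all in pages)
my $pid = shift // $$;    # default to this process
open my $fh, '<', "/proc/$pid/statm"
    or die "cannot read /proc/$pid/statm: $!";
my ($size, $resident, $shared) = split ' ', scalar <$fh>;

my $page_kb = sysconf(_SC_PAGESIZE) / 1024;
printf "pid %d: resident %d KB, shared %d KB\n",
    $pid, $resident * $page_kb, $shared * $page_kb;
```

      Summing the "shared" column over the parent and all children, then subtracting it once, gives a rough idea of how much the processes actually cost beyond a single copy.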

      -- Eric Hammond

        Thanks, Eric. I seem to get similar results with "free" and "top". I do, indeed, "use" several modules in the daemons, which must explain why they aren't shared as I think they should be. (Even so, 5.6MB seems excessive...). Thanks very much for the help.
