PerlMonks  

Re: Re: module memory usage

by diotalevi (Canon)
on Dec 22, 2003 at 06:02 UTC ( id://316313 )


in reply to Re: module memory usage
in thread module memory usage

POSIX imports over 560 symbols, each imported symbol costing about 120 bytes of memory. According to this book, that works out to 140KB of memory overhead that could be avoided with two colons and a little more typing. Not a lot in a small one-off script, but that piles up fast when you have 50-100 mod_perl/Apache child processes screaming for attention from the CPU.
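A minimal sketch of the two-colons approach (POSIX::floor and POSIX::ceil are just illustrative picks, not functions the post mentions): load the module with an empty import list so none of its symbols are copied into your namespace, then call what you need fully qualified.

```perl
use strict;
use warnings;

# Empty import list: POSIX is compiled and loaded, but no symbols
# are exported into main::, so you skip the per-symbol memory cost.
use POSIX ();

# Fully qualified calls -- the "2 colons and a little more typing".
my $down = POSIX::floor(3.7);
my $up   = POSIX::ceil(3.2);
print "$down $up\n";   # prints "3 4"
```

The same `use Module ();` trick works for any exporting module, not just POSIX.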

You'll also want to realize that the qualified name adds a handful of additional dereferences and a hash lookup per call, when you could have spent the memory on a local access instead.
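If you do want the cheaper unqualified call for a hot code path, a common middle ground (again using floor/ceil purely as illustrative examples) is to import only the symbols you actually use, rather than all 560-odd:

```perl
use strict;
use warnings;

# Import just these two symbols: two symbol-table entries in main::,
# instead of the full POSIX export list.
use POSIX qw(floor ceil);

# Unqualified calls -- no package-qualified lookup at the call site.
print floor(3.7), " ", ceil(3.2), "\n";   # prints "3 4"
```

This keeps the memory cost proportional to what you import while avoiding the qualified-name lookup the note describes.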

--
  Devil

Re: Re: Re: module memory usage
by stvn (Monsignor) on Dec 22, 2003 at 07:14 UTC
    Devil,

    Quite true, the classic CPU versus memory tradeoff. As with all "optimizations", you almost never get something for free.

    Personally, I would rather waste a CPU cycle or two, since they tend to be very cheap, and for what I do (mod_perl web apps) a picosecond here or there is usually okay, while running into disk swap for lack of memory is not.

    In the end, I stick with the old mantra:

    Premature optimization is the root of all evil.

    - C.A. Hoare (although usually attributed to Donald Knuth)

    -stvn
      Nah, this stuff is in the realm of micro-optimizations. That's why I signed my note with 'Devil' for 'Devil's advocate'.
Re: Re: Re: module memory usage
by Anonymous Monk on Dec 22, 2003 at 07:33 UTC
    What kind of genius has 50-100 mod_perl/Apache child processes?

      I recently built a site which uses mod_perl to segment content to different user groups, and has a CMS backend. At first I had MaxClients set to 50, which turned out to be not nearly enough; from there we moved it to 100, which also wasn't enough. We settled on 125.
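      For reference, `MaxClients` is the Apache (1.3 / 2.x prefork) directive capping simultaneous child processes. A sketch of the final setting described above; the other values here are illustrative assumptions, not from the post:

      ```
      # httpd.conf (prefork) -- MaxClients 125 is the value from this post;
      # the rest are hypothetical companion settings for a mod_perl server.
      StartServers        10
      MinSpareServers      5
      MaxSpareServers     20
      MaxClients         125    # hard cap on concurrent child processes
      MaxRequestsPerChild 1000  # recycle children to bound memory growth
      ```

      Capping children matters precisely because of the per-process memory costs discussed upthread: 125 mod_perl children each carrying avoidable import overhead adds up.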

      Of course this does not mean that there are always 125 child processes serving 125 concurrent requests. But on average, there are between 50-100 processes running at any given time serving concurrent requests. The site regularly gets anywhere between 900-1400 unique homepage hits per day, most of which are concentrated between 8 a.m. and noon. And being an intranet site, it is actually read/used by its users. Most sessions run approx. 20 minutes to an hour.

      If I could have, I would have load balanced it, yadda yadda yadda, but economics and politics dictated that it had to run on a single dedicated Linux box with a shitload of RAM (I wasn't even allowed to put the MySQL DB on its own machine, it shares the same box).

      The site has had no issues, its response time is excellent, and there have been no complaints at all. Take note that this was a transparent replacement of an existing static HTML site, so users' expectations on performance were pretty high.

      While I was pretty happy with the results, I would not call myself a genius.

      ;-)

      -stvn

      UPDATE: Oops, the site is actually running on FreeBSD; it was late last night when I posted this. I like Linux, but I gotta give credit where credit is due. (FreeBSD 4.7-RELEASE-p22 to be exact)
