http://www.perlmonks.org?node_id=998686

alain_desilets has asked for the wisdom of the Perl Monks concerning the following question:

A year ago, we migrated from mod_cgi to mod_perl, and this has helped greatly to increase performance.

But one downside of mod_perl is that processes that serve the requests stay up for a long time, and may gradually accumulate a lot of crud. You know, things like objects whose memory cannot be released until the end of the process.

So, I was wondering if it's possible to configure mod_perl so that the worker processes automatically close down once they have served N requests (say, N=50).

I just spent 30 mins looking for something like this on the web, and I didn't find anything that remotely resembles that. Does anyone know of something like that?

Thx.

Replies are listed 'Best First'.
Re: Forcing modperl processes to restart after N requests
by McA (Priest) on Oct 12, 2012 at 18:03 UTC

    Hi!

    The simplest solution is to let Apache handle it: there is a configuration parameter, MaxRequestsPerChild. On default installations it is set relatively high; tweak it for your mod_perl-centric processes.
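    A minimal httpd.conf sketch of that setting (assuming the prefork MPM; note that Apache 2.4 renamed the directive to MaxConnectionsPerChild, keeping the old name as an alias):

    ```
    # Recycle each worker process after it has served 50 requests,
    # releasing whatever memory the embedded Perl interpreter accumulated.
    MaxRequestsPerChild 50
    ```

    A value of 0 means "never recycle", which is why long-running mod_perl children can grow without bound on default installations.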

    Look at http://perl.apache.org/docs/1.0/guide/performance.html#Measuring_the_Memory_of_the_Process for some hints on memory usage.

    Look at the documentation for $r->child_terminate. With that method you can force the Apache child to exit gracefully at the end of the current request. Combined with a process-global counter, you could implement your own controlled way of exiting after a certain number of requests. That way you could be sure that a certain number of Perl-handled requests were served (when using Apache for both static and dynamic content).
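    A rough sketch of that counter approach in a mod_perl 2 response handler (the package name, counter, and threshold are illustrative; $r->child_terminate is the real method, provided by Apache2::RequestUtil and effective under the prefork MPM):

    ```perl
    package My::Handler;   # illustrative package name
    use strict;
    use warnings;
    use Apache2::RequestRec  ();
    use Apache2::RequestUtil ();   # provides $r->child_terminate
    use Apache2::Const -compile => qw(OK);

    my $requests_served = 0;   # package-level: persists across requests in this child
    my $MAX_REQUESTS    = 50;  # recycle the child after this many

    sub handler {
        my $r = shift;
        # ... generate the response here ...
        if (++$requests_served >= $MAX_REQUESTS) {
            # Finish this request normally, then have the child exit
            # gracefully instead of picking up another request.
            $r->child_terminate;
        }
        return Apache2::Const::OK;
    }
    1;
    ```

    Unlike MaxRequestsPerChild, which counts every request the child handles, a counter like this only counts requests that reach your Perl handler.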

    Best regards
    McA

    P.S.: The mod_perl mailing list is the right place for these questions. Thorsten Förtsch regularly answers quickly and very competently. (http://foertsch.name/)

      Sounds like MaxRequestsPerChild is what I need. Thx.

      Strangely enough, I didn't come across it, even after 30 minutes of googling things like "mod_perl forcing process to restart".

Re: Forcing modperl processes to restart after N requests
by flexvault (Monsignor) on Oct 12, 2012 at 15:29 UTC

    alain_desilets,

    Have you tested how long it takes to refresh the web-server?

    We have a "slow time" window during which we refresh all of our high-usage servers. We have some persistent Perl applications that we could refresh while the clients waited for the server to respond. Unfortunately, with Apache2, mod_perl would go crazy trying to reconnect to the persistent Perl application, which produced errors while refreshing the socket. For us, the downtime could be measured in seconds. 'Refresh' didn't work correctly, but a 'stop' followed by a 'start' worked fine.

    Just a suggestion.

    Good Luck

    "Well done is better than well said." - Benjamin Franklin

Re: Forcing modperl processes to restart after N requests
by Anonymous Monk on Oct 12, 2012 at 18:15 UTC
    As an aside, the Plack subsystem (often used with FastCGI) supports this through an obscure parameter called "harakiri." Aptly named, but its documentation is hard to spot. It lets you terminate the worker after a specified number of requests.

      nothing which is well documented is obscure :)

      PSGI kill -> PSGI::Extensions

      psgix.harakiri: A boolean which is true if the PSGI server supports harakiri mode, that kills a worker (typically a forked child process) after the current request is complete.

      psgix.harakiri.commit: A boolean which is set to true by the PSGI application or middleware when it wants the server to kill the worker after the current request.
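      A minimal Plack middleware sketch of the same idea (the package name, counter, and threshold of 50 are illustrative; the psgix.harakiri keys are from the PSGI::Extensions spec quoted above, and a server that supports them, such as Starman, is assumed):

      ```perl
      package Plack::Middleware::RecycleAfterN;   # illustrative name
      use strict;
      use warnings;
      use parent 'Plack::Middleware';

      my $served = 0;   # per-worker request counter

      sub call {
          my ($self, $env) = @_;
          my $res = $self->app->($env);
          # After 50 requests, ask the server to kill this worker --
          # but only if it advertises harakiri support.
          if (++$served >= 50 && $env->{'psgix.harakiri'}) {
              $env->{'psgix.harakiri.commit'} = 1;
          }
          return $res;
      }
      1;
      ```

      The worker finishes the current request normally; the server reaps it afterwards and forks a fresh one.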

      That's a really good example of a self-documenting method name... ;-) Thank you for that comment.
        It was a good name, even cute, but the devil himself to find in the documentation at the time.