http://www.perlmonks.org?node_id=998587


in reply to Catalyst/FastCGI and Perpetually Growing Perl Processes

What's the best approach?

Find the memory leak and resolve it once and for all (Devel::NYTProf, Devel::Leak, Devel::LeakTrace, WeakRef)?

Scheduled periodic restart with Catalyst::Restarter or some such?

although it just seems to me that the processes are living a long time and never releasing memory back to the system.

Yeah, that is kind of the point of FastCGI: long-lived processes :)
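
For what it's worth, the classic leak those tools (Devel::Leak, Devel::LeakTrace, WeakRef, ...) are meant to catch is a reference cycle that keeps refcounts from ever reaching zero. A minimal sketch using Scalar::Util::weaken (the core replacement for WeakRef); the data-structure names here are made up:

    use Scalar::Util qw(weaken);

    sub make_pair {
        # Parent and child point at each other; without weakening one
        # link their refcounts never reach zero, so every call in a
        # long-lived FastCGI worker leaks another pair.
        my $parent = { name => 'parent' };
        my $child  = { name => 'child', parent => $parent };
        $parent->{child} = $child;

        # Break the cycle: the back-reference no longer counts toward
        # $parent's refcount, so both go away when the caller is done.
        weaken($child->{parent});

        return $parent;
    }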


Replies are listed 'Best First'.
Re^2: Catalyst/FastCGI and Perpetually Growing Perl Processes
by jeffthewookiee (Sexton) on Oct 15, 2012 at 13:31 UTC
    Is it necessarily a memory leak though? It seems possible it's just a side effect of having a very long-running Perl process that's never going to release memory back to the system. mod_perl/Apache seem to anticipate this by having each web process serve only X requests before being respawned, restarting its memory pool from scratch.
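
    The FastCGI equivalent would be to have each worker exit voluntarily after N requests and let the process manager (mod_fcgid, mod_fastcgi, or FCGI::ProcManager) respawn it. A rough sketch of the idea against the raw FCGI module, not Catalyst's actual engine; the 500-request limit is arbitrary:

        use FCGI;

        my $max_requests = 500;   # arbitrary recycle threshold
        my $handled      = 0;

        # Bind to the standard FastCGI streams and environment.
        my $request = FCGI::Request();

        while ($request->Accept() >= 0) {
            # ... hand the request off to the application here ...

            # Exit after $max_requests; the process manager notices the
            # dead worker, starts a fresh one, and the old one's memory
            # goes back to the operating system.
            last if ++$handled >= $max_requests;
        }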

      Is it necessarily a memory leak though?

      Yes, unintended memory growth is always a leak.

      It seems possible it's just a side effect of having a very long-running Perl process that's never going to release memory back to the system.

      While that is possible (operating system memory managers are tricky), it is very unlikely. With proper scoping, memory usage should be stable: it may peak during requests, but it should level off rather than keep growing.

      If it doesn't, and you didn't code a giant %CACHE hash, then it is definitely a leak (unintended memory growth). See Re^3: Scope of lexical variables in the main script for some links and discussion.
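
      By a giant %CACHE hash I mean something like the following (hypothetical controller and helper names): a file-scoped hash keyed on unbounded input grows for the life of the worker even though, strictly speaking, nothing is being leaked:

          package MyApp::Controller::Foo;   # hypothetical
          use strict;
          use warnings;

          # File-scoped, so it lives as long as the FastCGI process does.
          my %CACHE;

          # Stand-in for whatever expensive work is being cached.
          sub expensive_lookup { my ($key) = @_; return "result for $key" }

          sub lookup {
              my ($self, $key) = @_;

              # Unbounded: every distinct $key ever requested stays in
              # memory until the process exits, which looks exactly like
              # a leak from the outside.
              $CACHE{$key} //= expensive_lookup($key);

              return $CACHE{$key};
          }

      Bounding the cache (Cache::LRU, CHI with a size limit, or an external store like memcached) or keeping per-request data in lexicals scoped to the request keeps the footprint stable.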