http://www.perlmonks.org?node_id=1037857


in reply to Multithreading leading to Out of Memory error

It is also relevant to consider whether these could be processes instead of threads. With processes, each worker gets an entirely separate memory-management context, which the operating system tears down completely when the process exits. With threads, both Perl’s memory manager and Perl’s own particular flavor of quasi-threads (ithreads) implementation are in effect throughout: all of the threads run in the same process-level memory context (with suitable complications), and that context is not torn down when a thread finishes. (My knowledge of the perlguts of the thread implementation is minimal; others here are experts and gurus.)
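
By way of illustration, here is a rough sketch of the process-per-batch alternative. Everything in it is a stand-in for your actual job: the data/*.txt glob, the batch size of 50, and process_file() itself. The point is only that whatever a child leaks is reclaimed wholesale by the OS when it exits:

    use strict;
    use warnings;

    # Hypothetical file list and per-file work; substitute your own.
    my @files = glob("data/*.txt");

    while (my @batch = splice(@files, 0, 50)) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: anything it leaks vanishes when it exits,
            # because the OS tears down the whole process context.
            process_file($_) for @batch;
            exit 0;
        }
        waitpid($pid, 0);    # parent reaps before starting the next batch
    }

    sub process_file {
        my ($file) = @_;
        open my $fh, '<', $file or die "open $file: $!";
        while (<$fh>) { }    # stand-in for the real per-file work
        close $fh;
    }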

It would be useful to know whether the same behavior occurs when there is only one thread, and/or when the processing is done sequentially in the main thread. Does it, or does it not, foul up after processing a certain number of files? Does altering the number of threads change the point at which it hoses up? You should also note exactly which Perl version you are using (perl -v), since the ithreads implementation has changed over the years.
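
One crude way to run that experiment is to process the files sequentially and watch whether the resident set size grows as you go. This sketch reads VmRSS out of /proc/self/status, so it is Linux-specific, and the file list and process_file() are again placeholders:

    use strict;
    use warnings;

    my @files = glob("data/*.txt");
    my $count = 0;

    for my $file (@files) {
        process_file($file);
        printf "%6d files processed, RSS = %s kB\n", $count, rss_kb()
            if ++$count % 100 == 0;
    }

    sub rss_kb {
        open my $fh, '<', '/proc/self/status' or return 'n/a';
        while (<$fh>) { return $1 if /^VmRSS:\s+(\d+)/ }
        return 'n/a';
    }

    sub process_file {
        my ($file) = @_;
        open my $fh, '<', $file or die "open $file: $!";
        while (<$fh>) { }    # stand-in for the real per-file work
        close $fh;
    }

If the sequential run grows without bound too, the leak is in the per-file code rather than in the threading.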

Yes, “committing hara-kiri” is a legitimate way to forestall memory-leak problems, especially in processes whose internals are unknown. (The technique is useless for threads, for the reasons described above.) FastCGI and mod_perl programs are sometimes deliberately arranged to process some n requests before they voluntarily terminate, at which point the parent process wakes up, reaps the child, then launches another copy until the pool of worker processes is restored. (Some separate provision would need to be made for the parent to be aware of end-of-job.)
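
A bare-bones sketch of that respawn pattern, adapted from FastCGI-style request limits to this file-processing job, might look like the following. The pool size, the jobs-per-child limit, and process_file() are illustrative names and numbers, not anything prescribed:

    use strict;
    use warnings;

    my $MAX_WORKERS    = 4;
    my $JOBS_PER_CHILD = 200;    # child terminates voluntarily after this many
    my @files = glob("data/*.txt");
    my %kids;

    while (@files or %kids) {
        # Keep the pool full while work remains.
        while (@files and keys %kids < $MAX_WORKERS) {
            my @batch = splice(@files, 0, $JOBS_PER_CHILD);
            my $pid = fork() // die "fork failed: $!";
            if ($pid == 0) {
                process_file($_) for @batch;
                exit 0;          # hara-kiri: the OS reclaims everything
            }
            $kids{$pid} = 1;
        }
        # Reap one finished child, then loop around to respawn.
        my $done = waitpid(-1, 0);
        delete $kids{$done} if $done > 0;
    }

    sub process_file {
        my ($file) = @_;
        open my $fh, '<', $file or die "open $file: $!";
        while (<$fh>) { }        # stand-in for the real per-file work
        close $fh;
    }

End-of-job falls out naturally here because the parent owns the work list; a long-running FastCGI parent has no such list, which is why the separate end-of-job provision is needed there.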