Memory limitation of a perl process in Unix

by asinghvi (Acolyte)
on Feb 20, 2004 at 22:57 UTC

asinghvi has asked for the wisdom of the Perl Monks concerning the following question:

Hello Perl gurus. I'm running perl 5.6.1 on Solaris 2.8, 64-bit.
I run a perl script that forks two children. Each child reads big flatfiles into hashes within hashes and, after processing (aggregation, modification), writes the results back into other flatfiles.
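A minimal sketch of that pattern follows; the pipe-delimited record layout is an assumption for illustration, not the actual file format:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch: slurp records into a hash-of-hashes, aggregate, write back.
    # Memory grows with the number of distinct key/subkey pairs seen.
    my %agg;
    open my $in, '<', 'input.flat' or die "input.flat: $!";
    while (my $line = <$in>) {
        chomp $line;
        my ($key, $subkey, $value) = split /\|/, $line;   # assumed layout
        $agg{$key}{$subkey} += $value;
    }
    close $in;

    open my $out, '>', 'output.flat' or die "output.flat: $!";
    for my $key (sort keys %agg) {
        for my $subkey (sort keys %{ $agg{$key} }) {
            print {$out} join('|', $key, $subkey, $agg{$key}{$subkey}), "\n";
        }
    }
    close $out;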

Today, I got an "Out of Memory" error in one of the children. The process size had climbed to 3.28 GB.
I want to know if there is a memory limitation of a perl process in Unix.
I have already verified the following:
- Our server does not have a process size limitation.
- Yesterday, exactly the same script reached 3.15 GB and completed successfully.
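One way to double-check such limits from within Perl is the CPAN module BSD::Resource (a sketch, assuming that module is installed):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use BSD::Resource;   # CPAN wrapper around getrlimit(2)

    # getrlimit returns the (soft, hard) limits for a resource;
    # RLIM_INFINITY means the OS imposes no limit on this process.
    my ($soft, $hard) = getrlimit(RLIMIT_DATA);
    printf "data segment soft limit: %s\n",
        $soft == RLIM_INFINITY ? 'unlimited' : $soft;
    printf "data segment hard limit: %s\n",
        $hard == RLIM_INFINITY ? 'unlimited' : $hard;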
Any help is appreciated.

Replies are listed 'Best First'.
Re: Memory limitation of a perl process in Unix
by Abigail-II (Bishop) on Feb 20, 2004 at 23:23 UTC
    There's no arbitrary limit that Perl imposes on itself. It basically has two types of constraints: whatever constraints the OS imposes on it (ulimit, process size limit, total memory available, etc.), and a limit that depends on the pointer size. With 32-bit pointers, you won't be able to go over 2 or 4 GB; with 64-bit pointers, you can go further.
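    You can check which pointer size a given perl was built with through the standard Config module:

        use Config;
        # ptrsize is the pointer width in bytes: 4 means 32-bit addressing
        # (at most 4 GB of address space), 8 means 64-bit addressing.
        print "pointer size: $Config{ptrsize} bytes\n";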

    Also note that if Perl dies with an "Out of Memory" error while using 3.28 GB, it doesn't mean 3.28 GB is the limit. It means the limit (at that moment in time) lies somewhere between 3.28 GB and 3.28 GB plus whatever amount was being requested when the allocation failed.

    If my processes were dying with "out of memory" errors while using over 3 GB of memory, I'd first look at the program and see whether I could gain more by redesigning it than by chasing imposed limits.

    Abigail

Re: Memory limitation of a perl process in Unix
by BrowserUk (Pope) on Feb 20, 2004 at 23:18 UTC

    The message means that perl asked the OS for more memory and got a refusal. That basically only happens when you have run out of both physical memory and swap space (assuming swapping is enabled). The point (memory consumed) at which this happens depends not just on the data size/memory used by your process, but also on the memory in use by other processes on the system. So if the other processes--system and user--are using more memory today than yesterday, your process will hit the limits of your system earlier.

    Update: Tilly pointed out that the above information could be misleading to those coming along later. This reply is aimed specifically at the OP, who is using a 64-bit OS and is therefore unlikely to be affected by any address-space limitations. Anyone using a 32-bit OS may encounter an OS-dependent 2 or 4 GB limit for a given (perl) process even though their RAM + swap totals more than 2/4 GB.
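    A crude way to watch how close a long-running job gets to that moving limit is to have it log its own size as it runs. A rough sketch using the POSIX ps(1) vsz field (exact output varies by platform, so treat the number as an approximation):

        # Ask ps(1) for this process's virtual size in kilobytes.
        sub vsz_kb {
            my $out = `ps -o vsz= -p $$`;
            return $out =~ /(\d+)/ ? $1 : undef;
        }

        my $kb = vsz_kb();
        printf "current process size: %s KB\n", defined $kb ? $kb : 'unknown';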


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
    Timing (and a little luck) are everything!
Re: Memory limitation of a perl process in Unix
by perrin (Chancellor) on Feb 20, 2004 at 23:18 UTC
    Perl doesn't have an inherent limit on process size. I ran into something similar to your situation on Linux, where I was hitting a limit; it turned out to be the OS, not Perl. I suspect it is the OS in your case as well.
Re: Memory limitation of a perl process in Unix
by Roger (Parson) on Feb 21, 2004 at 00:01 UTC
    Perhaps you should take a step back and look at your algorithm again; maybe there is another approach that will consume less memory without sacrificing much processing speed. One possibility is sketched below.
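    For instance, if the whole structure does not need to live in RAM at once, a tied on-disk hash trades some speed for a much smaller footprint. A sketch using DB_File (bundled with Perl), reusing the assumed pipe-delimited layout from the question:

        use strict;
        use warnings;
        use Fcntl;     # O_RDWR, O_CREAT
        use DB_File;   # on-disk hash bundled with Perl

        # Tied DBM hashes hold a single level of keys, so the two-level
        # key is flattened to "key|subkey". (CPAN's MLDBM can layer
        # nested structures on a DBM file if nesting must be kept.)
        my %agg;
        tie %agg, 'DB_File', 'agg.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
            or die "agg.db: $!";

        open my $in, '<', 'input.flat' or die "input.flat: $!";
        while (my $line = <$in>) {
            chomp $line;
            my ($key, $subkey, $value) = split /\|/, $line;
            $agg{"$key|$subkey"} += $value;   # aggregated on disk, not in RAM
        }
        close $in;
        untie %agg;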

Re: Memory limitation of a perl process in Unix
by mpeppler (Vicar) on Feb 21, 2004 at 17:33 UTC
    The data space addressable by a single process under Solaris in 32-bit mode is approximately 3.7 GB (yes, I know you are running 64-bit Solaris, but my guess is that your perl binary is compiled in 32-bit mode).

    Under 32-bit Linux (at least the non-Enterprise versions), the limit is just under 2 GB for a single process because of the way Linux maps memory. On RH Advanced Server one can use a special kernel variable (mapped_base) to increase the addressable memory of a process to 2.7 GB.
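    An easy way to confirm how a perl binary itself was built is to ask it from the shell:

        # 32-bit builds report ptrsize=4; 64-bit builds report ptrsize=8.
        perl -MConfig -le 'print "$_=$Config{$_}" for qw(archname ptrsize use64bitall)'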

    Michael
