Avoiding Out of Memory Problem

by EchoAngel (Pilgrim)
on Mar 17, 2005 at 21:10 UTC ( [id://440544]=perlquestion )

EchoAngel has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks, I have revisited my Perl package. It's supposed to build a complex hash structure from an existing hash structure. My script recently crashed on larger files, so I've been examining my code and trying to be more efficient in my use of variables and data structures. But it seems that even if I do all this work, a still larger file will eventually crash the script. Any solutions for this? Move to another language? Upgrade to a different Perl version? Most people have told me that, because of how the OS works, there is nothing I can do to release memory back to it the way I could in C.

Replies are listed 'Best First'.
Re: Avoiding Out of Memory Problem
by samtregar (Abbot) on Mar 17, 2005 at 21:25 UTC
    Instead of building your hash structure in memory, you could build it on disk. If you use a tied system like DB_File your code probably won't have to change very much (see the sketch below). It will be slower, but you won't have to worry about running out of memory, just disk space.
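
    A minimal sketch of the idea, following the DB_File synopsis; the filename and key here are made up for illustration. Note that plain DB_File stores flat string values, so a deeply nested structure would need a serializing layer such as MLDBM on top:

        use strict;
        use warnings;
        use Fcntl;      # for O_RDWR, O_CREAT
        use DB_File;

        # Tie %h to an on-disk Berkeley DB file: entries are written to
        # disk as they are stored, not held in RAM.
        my %h;
        tie %h, 'DB_File', 'structure.db', O_RDWR|O_CREAT, 0666, $DB_HASH
            or die "Cannot open structure.db: $!";

        $h{some_key} = 'some value';    # goes to disk
        print "$h{some_key}\n";

        untie %h;                       # flush and close the database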

    -sam

Re: Avoiding Out of Memory Problem
by Joost (Canon) on Mar 17, 2005 at 21:55 UTC
    Most people have told me that, because of how the OS works, there is nothing I can do to release memory back to it the way I could in C.

    Perl can reuse memory from free()d structures even if that memory is never released to the OS, and on some systems (Linux, for instance) undef $var returns most of the memory to the OS under certain circumstances.

    Not that that will help you if you allocate more than the available memory in the first place.
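
    For instance (a toy illustration; the 50 MB size is arbitrary):

        use strict;
        use warnings;

        # Allocate a large string, then free it with undef. The memory may
        # not go back to the OS, but perl's allocator can reuse it for the
        # next allocation instead of growing the process further.
        my $big = 'x' x 50_000_000;     # roughly 50 MB
        undef $big;                     # freed for reuse within perl

        my $next = 'y' x 50_000_000;    # likely reuses the freed space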

Re: Avoiding Out of Memory Problem
by FitTrend (Pilgrim) on Mar 17, 2005 at 21:54 UTC

    Good point. However, if disk I/O and/or time to complete the task is an issue, you may want to consider a hybrid system that processes larger files differently. You would need to find the sweet spot that separates a large file from a small one; then it's simply a matter of stat()ing the file for its size and choosing the processing method accordingly, as in the sketch below.
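
    Something like this, where the 100 MB threshold and the two processing routines are hypothetical stand-ins:

        use strict;
        use warnings;

        my $file = shift @ARGV or die "usage: $0 file\n";

        # The "sweet spot" you would find by testing on your own data.
        my $threshold = 100 * 1024 * 1024;    # e.g. 100 MB

        my $size = -s $file;
        die "Cannot stat $file: $!" unless defined $size;

        if ($size > $threshold) {
            process_on_disk($file);       # e.g. the tied DB_File approach
        }
        else {
            process_in_memory($file);     # fast path for small files
        }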

    I too have a program that collects statistics and stores them in a hash, but I routinely write the data to disk at a set interval to avoid running out of memory (see the sketch below). Based on what I've learned from that experiment, your hashes must be quite large to exhaust a machine's memory.
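
    A sketch of that interval-based flushing, assuming tab-separated key/value input and a hypothetical write_stats() that appends each batch to disk:

        use strict;
        use warnings;

        my %stats;
        my $count = 0;

        while (my $line = <>) {
            chomp $line;
            my ($key, $value) = split /\t/, $line;
            $stats{$key} += $value;

            # Flush every 100_000 records so the hash never grows unbounded.
            if (++$count % 100_000 == 0) {
                write_stats(\%stats);   # hypothetical: append batch to disk
                %stats = ();            # release entries for reuse
            }
        }
        write_stats(\%stats) if %stats; # write the final partial batch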

    This of course depends on your ultimate goal for your application.
