tuning hash memory usage

by jmason (Novice)
on Nov 20, 2000 at 19:03 UTC ( [id://42496]=perlquestion )

jmason has asked for the wisdom of the Perl Monks concerning the following question:

Hi all,

I'm running into a wee memory problem. Basically I'm working on a perl project, webmake, which, keeping it simple, does the following: it maintains a list of content items (think files), does all manner of substitutions on them, and writes them to an output directory. It also tracks their dependency graphs so it knows what to rebuild next time around.

This is modelled as a hash of instances of a Content class; each instance holds the filename (a string) and another hash, mapping dependency filenames (strings) to modification dates (integers).

The problem is, when it runs with a 5000-file data set, it's taking up a hefty chunk of RAM; at peak it hits about 50 MB. I think the 5000 instances of the latter hash are a big part of this.

Typically a Content item contains one dependency. Is there any way, or any alternative Tie:: implementation, that can handle a 1-element hash in a minimal amount of RAM?
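
One rough way to see where the memory is going is to measure a single instance with the CPAN module Devel::Size; a minimal sketch of the structure described above, with the field names invented for illustration:

    use strict;
    use Devel::Size qw(total_size);   # CPAN module

    # Roughly the shape described above: a filename plus a hash mapping
    # dependency filenames to modification times (usually one entry).
    my $content = {
        filename => 'index.html',
        deps     => { 'header.tmpl' => 974745600 },
    };

    # total_size() follows references, so this includes the inner hash.
    printf "one Content instance: %d bytes\n", total_size($content);
    printf "5000 such instances:  roughly %d KB\n",
        total_size($content) * 5000 / 1024;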

Replies are listed 'Best First'.
Re: tuning hash memory usage
by lhoward (Vicar) on Nov 20, 2000 at 19:57 UTC
    I've encountered this before. I have a solution, but it is far from optimal. I ended up trading memory for CPU utilization. In my case this tradeoff worked well, but depending on how you're using and accessing the data it may not be appropriate for you. With that disclaimer out of the way:

    What I did was serialize all the objects in the hash. When I needed to access them I re-inflated them into full-fledged objects. Depending on your data you could even use on-disk storage (like a tied DBM file, or even a real DB) for the objects, only pulling into memory the ones you need. When I used this technique I did not go as far as on-disk storage for the data elements, but just by serializing them I cut my memory usage by nearly 75%; of course, it took about twice as long to run. I used Storable, but there are several other good Perl modules out there that can serialize your objects.
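
    A minimal sketch of that freeze-on-store, thaw-on-access idea using Storable (the hash and routine names here are only illustrative):

        use strict;
        use Storable qw(freeze thaw);

        my %content;   # filename => frozen Content data

        # Store: flatten the structure into one compact string.
        sub put_content {
            my ($name, $obj) = @_;
            $content{$name} = freeze($obj);
        }

        # Access: re-inflate only the object you need, when you need it.
        sub get_content {
            my ($name) = @_;
            return thaw($content{$name});
        }

        put_content('index.html',
            { filename => 'index.html',
              deps     => { 'header.tmpl' => 974745600 } });
        my $obj = get_content('index.html');   # pay CPU instead of RAM

    Tying %content to DB_File (or another DBM module) would push the frozen strings out to disk as well, trading still more speed for even less RAM.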

Re: tuning hash memory usage
by gaspodethewonderdog (Monk) on Nov 20, 2000 at 23:36 UTC
    Having dealt with sorting, rebuilding and other sorts of fun on data sets well into the 300+MB range, with only 128MB or so of memory, I can give you a few pointers/suggestions.

    1) Expect Perl to use much more memory to store your data than the raw data size would suggest. I have found it generally safe to assume that 50MB of data will make the program require 70MB to 100MB. Perl favors using more memory than CPU cycles to solve a problem.

    2) For significantly large amounts of information it is best to use multiple arrays rather than hashes (or worse, hashes of hashes) or objects; see the sketch below. Unless you have plenty of memory you won't be able to keep it all resident, and swapping will kill your performance.

    3) Remove all unneeded data. If all you need is a list of search/replace information and file names, but you also have dates, file sizes, and other 'useless' information, then get rid of it and write functions to derive that information from what you are storing in memory.

    Unless you have a specific reason for using objects for this work, I'd suggest trying a more basic solution. There are certainly tons of cool things you can do with objects, but if you don't have the system resources (memory) to work with them, then you have to try something else. I suppose some form of caching might help, and you might even be able to let the OS do this by making the swap file big enough, but if you need to check a lot of objects quickly I'm not certain this would give you a big win.

    Best of luck!
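
    A minimal sketch of the parallel-array layout from point 2, covering only the common single-dependency case (all of the names are invented for illustration):

        use strict;

        # One flat array per field, indexed by item number, plus one small
        # hash to find an item's index by filename.
        my (@filename, @dep_name, @dep_mtime);
        my %index_of;

        sub add_item {
            my ($file, $dep, $mtime) = @_;
            push @filename,  $file;
            push @dep_name,  $dep;
            push @dep_mtime, $mtime;
            $index_of{$file} = $#filename;
        }

        sub dep_for {
            my ($file) = @_;
            my $i = $index_of{$file};
            return ($dep_name[$i], $dep_mtime[$i]);
        }

        add_item('index.html', 'header.tmpl', 974745600);
        my ($dep, $when) = dep_for('index.html');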

      3) Remove all unneeded data. If all you need is a list of search/replace information and file names, but you also have dates, file sizes, and other 'useless' information, then get rid of it and write functions to derive that information from what you are storing in memory.

      Yes, a good tip. I've found that changing some flags (is_this_type_of_content, is_that_type, etc.) so that undef is the default value, instead of setting the default explicitly, saves a lot of memory -- the key and value simply don't have to be stored at all.

      I think I'll have to serialize the dependencies array into something much smaller, probably using (at the easy end) a serialize-into-string trick and (at the heavier end) some kind of Flyweight-pattern-style trickery.
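
      A minimal sketch of the serialize-into-string trick for the dependency list, assuming filenames never contain a NUL byte (the routine names are invented):

          use strict;

          # Pack the filename => mtime pairs into one scalar per Content
          # item instead of keeping a per-item hash.
          sub deps_to_string {
              my %deps = @_;
              return join "\0", %deps;
          }

          sub string_to_deps {
              my ($str) = @_;
              return split /\0/, $str;   # back to a flat key/value list
          }

          my $packed = deps_to_string('header.tmpl' => 974745600);
          my %deps   = string_to_deps($packed);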

Re: tuning hash memory usage
by Dominus (Parson) on Nov 20, 2000 at 20:08 UTC
    Perhaps you could change the internal representation of a Content object so that it is initially a two-element array, and only becomes a hash when it has two or more dependencies.

    This would require adding code to the Content class, but if your encapsulation is properly done, no other part of the program will need to know about the change.
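
    One way a promotion like that could look, applied here to the dependency member (a sketch only; the class and method names are invented, not taken from webmake):

        use strict;

        package Content;

        sub new {
            my ($class, $filename) = @_;
            # 'deps' starts empty; the first dependency is stored as a
            # cheap two-element array, and only a second one creates a hash.
            return bless { filename => $filename, deps => undef }, $class;
        }

        sub add_dep {
            my ($self, $name, $mtime) = @_;
            my $d = $self->{deps};
            if (!defined $d) {
                $self->{deps} = [ $name, $mtime ];        # common 1-dep case
            } elsif (ref $d eq 'ARRAY') {
                $self->{deps} = { $d->[0] => $d->[1], $name => $mtime };
            } else {
                $d->{$name} = $mtime;                     # already a hash
            }
        }

        sub dep_mtime {
            my ($self, $name) = @_;
            my $d = $self->{deps};
            return undef unless defined $d;
            return ref $d eq 'ARRAY'
                ? ($d->[0] eq $name ? $d->[1] : undef)
                : $d->{$name};
        }

        package main;

        my $c = Content->new('index.html');
        $c->add_dep('header.tmpl', 974745600);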

      A good trick! I'll try it out. If that doesn't help I might just marshal some of the hashed data into a string and see if that helps ;)
Re: tuning hash memory usage
by tedv (Pilgrim) on Nov 21, 2000 at 00:44 UTC
    Although it's not clear to me from the problem description, often you don't need the full data structure throughout the algorithm. If possible, try doing your work as soon as you put an entry into the hash, and then delete the entry when you're done (a sketch of this pattern follows below). The total data size might be 5000 items, but if you only have 200 entries in your hash at any given time, that's much more acceptable.

    -Ted
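
    A minimal sketch of that process-as-you-go pattern; every routine name below is a placeholder rather than webmake's API:

        use strict;

        my %content;   # only ever holds the items currently being worked on

        for my $file (list_source_files()) {
            $content{$file} = load_content($file);   # put the entry in ...
            process_entry($file, \%content);         # ... do the work while
                                                     #     the entry is live ...
            delete $content{$file};                  # ... then drop it again
        }

        # Placeholders standing in for the real routines:
        sub list_source_files { return ('index.txt', 'about.txt') }
        sub load_content      { return { filename => $_[0] } }
        sub process_entry     { }
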
Re: tuning hash memory usage
by dws (Chancellor) on Nov 21, 2000 at 09:16 UTC
    Why does each Content instance keep a hash to map filenames to timestamps, rather than keeping a single global hash to do the same? If you removed that hash from Content, a Content would only have to keep a list of dependent files.

    Are you calculating the full closure for dependency files, or just keeping a 1-deep list? You might find that it pays, in memory savings, to rederive the full closure on demand.
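
    A sketch of what that sharing might look like, with the hash and routine names invented for illustration:

        use strict;

        # One program-wide map of filename => modification time; each
        # Content item keeps only a list of dependency names and looks
        # the times up here.
        my %mtime_of;

        sub note_dependency {
            my ($content, $dep_file) = @_;
            push @{ $content->{deps} }, $dep_file;
            $mtime_of{$dep_file} = (stat $dep_file)[9]
                unless exists $mtime_of{$dep_file};   # stat each file once
        }

        sub dep_mtime { return $mtime_of{ $_[0] } }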

      Good question... Er, I've just taken a look, and it doesn't keep a hash, it's an array. oops. So in effect it is just a list of dependent files and my specification of the problem was wrong. sorry ;)

      Are you calculating the full closure for dependency files, or just keeping a 1-deep list?

      It is a full closure -- but that's not what's taking the memory here. I plan to refactor that alright -- but not just yet...
