PerlMonks
Re: How to save memory, parsing a big file.
by duff (Parson) on Mar 01, 2006 at 10:28 UTC ( id://533673 )
I don't know if this is still the case, but it used to be that when perl grew a data structure for you, it would double the amount of memory it was using each time, even if you really only needed just one more element. You could try to give your hash(es) a good number of buckets to start with by assigning to keys, thusly:

    keys(%hash) = 500;

where 500 is the number of buckets you think your hash is likely to have (you'll have to determine this empirically).
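A minimal, self-contained sketch of the technique above. The hash name and the sizes here are hypothetical; 500 stands in for whatever bucket count you determine empirically for your data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical hash for illustration.
my %count;

# Used as an lvalue, keys() preallocates buckets so perl does not have
# to grow (and roughly double) the hash repeatedly during a bulk load.
# perl rounds the request up to the next power of two (here, 512).
keys(%count) = 500;

# Bulk load: no incremental reallocation until the buckets are outgrown.
$count{"word$_"}++ for 1 .. 300;

print scalar(keys %count), "\n";   # prints 300
```

Note that preallocating buckets does not reserve memory for the keys and values themselves; it only spares perl the repeated bucket-array growth while you fill the hash.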
In Section: Seekers of Perl Wisdom