http://www.perlmonks.org?node_id=1062823


in reply to Perl Program - Out of memory?

A few things superficially come to mind:

Pursuing the last thought, I frankly do suspect that a lot of this “nested loop” logic could indeed be expressed as a query, one which might well produce many thousands of rows as a “cross-product” of several smaller constituent tables.   (“And so what ... that’s what SQL servers do for a living.”)   That could drastically reduce both the complexity and the memory footprint of your code, which would then only have to consume the record-set presented to it.
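
As a sketch of that idea (the table and column names here are invented for illustration, not taken from the original program), the nested loops might collapse into a single join that the database engine evaluates, while the Perl side merely streams the rows one at a time:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema: three small tables whose cross-product the
    # nested loops were building in memory.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=data.db', '', '',
                            { RaiseError => 1 } );

    my $sth = $dbh->prepare(q{
        SELECT a.name, b.name, c.name
        FROM   a
        JOIN   b ON b.group_id = a.group_id
        JOIN   c ON c.group_id = a.group_id
        ORDER  BY a.name, b.name, c.name
    });
    $sth->execute;

    # Fetch one row at a time: the full cross-product never has to
    # sit in Perl's memory all at once.
    while ( my $row = $sth->fetchrow_arrayref ) {
        # ... process @$row ...
    }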

Note also that “SQL” doesn’t have to imply “a server.”   The SQLite database system, for instance, stores an entire database in a single ordinary file, and it runs quite nicely on everything from mainframes to cell phones.   My essential notion here is that maybe you can shove “all that data” out of (virtual ...) memory and into file(s).
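
And as a sketch of that “into files” notion (the hash, file, and table names here are hypothetical), a large in-memory hash can be spilled into a single SQLite file once, freed, and then looked up from disk:

    use strict;
    use warnings;
    use DBI;

    my %big_hash = map { "key$_" => $_ } 1 .. 1000;   # stand-in data

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=spill.db', '', '',
                            { RaiseError => 1, AutoCommit => 0 } );
    $dbh->do('CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)');

    # Bulk-load inside one transaction; SQLite writes everything to
    # the file, after which the in-memory copy can be released.
    my $ins = $dbh->prepare('INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)');
    $ins->execute( $_, $big_hash{$_} ) for keys %big_hash;
    $dbh->commit;
    %big_hash = ();   # free the RAM

    # Later lookups come from the file, not from virtual memory:
    my ($v) = $dbh->selectrow_array(
        'SELECT v FROM kv WHERE k = ?', undef, 'key42' );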

Re^2: Perl Program - Out of memory?
by Laurent_R (Canon) on Nov 16, 2013 at 12:16 UTC

    Yes, it is also not clear to me whether sorting the hash keys is necessary. But if it is, you might consider sorting the keys once, before entering the nested loops, storing them in arrays, and then walking through those arrays of sorted keys rather than through the hash keys. I do not know whether that alone will reduce memory usage sufficiently, but it will certainly cut the run time considerably: the keys used in the innermost loops might otherwise be sorted millions or even billions of times, which is a huge waste of CPU power, since the program would spend most of its running time sorting the same data again and again. To restate: this will certainly also save some memory, but I have no idea whether it will be enough to solve your memory problem.
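
    A minimal sketch of that suggestion (the hash names are made up; the real program presumably has its own): sort each key list exactly once, before the loops, and iterate over the resulting arrays.

        use strict;
        use warnings;

        my ( %h1, %h2, %h3 );
        # ... populate the hashes ...

        # Sort once, up front, instead of calling sort inside every
        # iteration of the enclosing loops.
        my @keys1 = sort keys %h1;
        my @keys2 = sort keys %h2;
        my @keys3 = sort keys %h3;

        for my $k1 (@keys1) {
            for my $k2 (@keys2) {
                for my $k3 (@keys3) {
                    # ... work with $h1{$k1}, $h2{$k2}, $h3{$k3} ...
                }
            }
        }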

    One additional point: how many elements do you have in each of your hashes?