STOP Trading Memory for Speed
by PetaMem (Priest)
on Sep 25, 2002 at 17:39 UTC
Perl's memory requirements have already been the subject of several discussions; nevertheless, I'd like to reiterate parts of some small discussions I heard at YAPC::Munich:
Some people (myself included) argued that Perl's memory appetite is far too greedy. This is a well-known issue, and in the past the only answer to it was "Well - it's a design decision. Just live with it."
Those who cannot live well with it got interesting support from Nicholas Clark's talk "When perl is not quite fast enough". As a side note, he mentioned that over the past few years the speed of processors has increased far more than the speed (bandwidth) of memory.
He is absolutely right about this, as he is with his assumption that this trend will continue. Given that, a short discussion started in which the audience quickly realized that the design decision "Trading Memory for Speed" may very well backfire - if it hasn't done so already.
Another talk - "Perl 5.10" by Hugo van der Sanden - also touched on this issue. Hugo's point here was to offer runtime options for requesting lower memory usage in exchange for some speed. He rejected compile-time options for building a less memory-hungry perl, because that would mean more code to maintain.
My own considerations, which started because of the special requirements of our AI applications, led me to quantify the tradeoff we now have to make:
Our machines are 32-bit, and for cost reasons no big iron (64-bit, with more RAM than 32-bit can address) will be used for our development. So the maximum memory (at reasonable speed - not segmented) is 4GB. Unfortunately this is not enough for our Perl application, so we have to tie hashes to files located on high-performance RAID IO subsystems; even so, that IO is slower than in-memory access and bandwidth by a factor of about 60.
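For those who haven't had to do this: a minimal sketch of what "tying a hash to a file" looks like, here using DB_File with a Berkeley DB btree (the filename and data are illustrative only; our actual setup uses a RAID IO subsystem and much larger data):

    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    # Tie the hash to an on-disk DB file instead of holding it in RAM.
    # Every store and fetch now goes through the IO subsystem.
    my %lexicon;
    tie %lexicon, 'DB_File', 'lexicon.db', O_RDWR|O_CREAT, 0666, $DB_BTREE
        or die "Cannot open lexicon.db: $!";

    $lexicon{'Haus'} = 'noun';        # disk write instead of memory write
    print $lexicon{'Haus'}, "\n";     # disk read  instead of memory read

    untie %lexicon;

The upside is that the hash can grow far beyond 4GB; the downside is exactly the factor-of-60 IO penalty described above.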
Given a Perl whose hash lookups were a factor of 10 slower but which carried only 20 to 30% memory overhead (not the roughly 1000% we see now), we could keep everything in memory - and since disk IO currently costs us a factor of 60, we would still run about 6 times faster (60/10) than we do now.
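The overhead figure is easy to check for yourself. A rough sketch using Devel::Size (not mentioned in the talks, just my suggestion for measuring): store a bunch of short key/value pairs and compare the payload bytes with what the hash actually occupies.

    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    # Compare raw payload bytes with the hash's real memory footprint.
    my %h;
    my $payload = 0;
    for my $i (1 .. 10_000) {
        my ($k, $v) = ("key$i", "val$i");
        $h{$k} = $v;
        $payload += length($k) + length($v);
    }
    printf "payload: %d bytes, hash: %d bytes, overhead: %.0f%%\n",
        $payload, total_size(\%h), 100 * (total_size(\%h) / $payload - 1);

For short keys and values like these, the overhead per entry dwarfs the data itself; with longer strings the percentage drops, but it never gets anywhere near 20-30%.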
Memoize this for the next design-decision...
Update: clarified what I mean by "big iron".