PerlMonks
Re: Re: STOP Trading Memory for Speed

by PetaMem (Priest)
on Oct 02, 2002 at 10:25 UTC ( #202226 )


in reply to Re: STOP Trading Memory for Speed
in thread STOP Trading Memory for Speed

Hi,

Nice logical analysis and argumentation; unfortunately it is not quite correct, as the "two different points" are entangled and therefore have to be discussed together.

Let me try again:

There is a gap between memory speed and CPU speed. This gap has grown over the past and will probably continue to grow - or, let's say, at least remain constant. It certainly will not shrink or invert as long as we have traditional von Neumann machines rather than systolic arrays or the like.

Therefore: yes, there are algorithms that gain speed by trading memory for it, even if memory is slower than the processing element. But our computers are not theoretical Turing machines with infinite resources; there are hard limits. One of these hard limits is the 4GB direct addressing of 32-bit architectures.

If you hit this limit, your design decisions become severely constrained. Often you have to fall back on an implementation that uses the "remaining features" of your hardware - in our case, the (relatively) enormous amount of disk space compared with RAM.

So: IF the gap between memory and CPU speed continues to grow (the most likely scenario), trading memory for clock cycles may very well backfire. Admittedly this is a long-term view and should only be seen as an auxiliary argument for rethinking Perl's memory-usage design philosophy (or dogma?).

The more important argument is the real-world situation: by trading very (too) much memory for clock cycles, you hit the hard limits sooner, which forces you onto features of today's hardware that are inferior to the ones the hard limit puts out of reach.

So my whole argument is: IMHO the design decision "memory for speed" is weighted too far in one direction. Perl would gain a big leap in flexibility if a pragma (or whatever) could state something like "use LessMem;" - every user could then decide whether the resulting gain in available memory and loss in speed is acceptable for his applications. Those of us who hit the hard limits of today's hardware could live with such a perl until a 64-bit machine with 40GB of RAM becomes common hardware. And even then, some might wish not to hit THAT hard limit...

As for the remark in this thread about the development resources we'd be willing to invest in this: well, we have no knowledge of the perl internals, but this problem is critical to our application, so yes - we'll probably try some inline C, a C hash lib, and if none of these help, we'll dig into perlguts. :-)

Bye
 PetaMem


use less 'memory';
by Joost (Canon) on Oct 02, 2002 at 11:35 UTC
    Perl would gain a big leap in flexibility if a pragma (or whatever) could state something like: "use LessMem;"

    The perl porters are aware of this, as the documentation for the less pragma shows:

    NAME
        less - perl pragma to request less of something from the compiler

    SYNOPSIS
        use less;  # unimplemented

    DESCRIPTION
        Currently unimplemented, this may someday be a compiler directive
        to make certain trade-offs, such as perhaps

            use less 'memory';
            use less 'CPU';
            use less 'fat';

    Of course, as it is unimplemented, it is still useless in practice, but you could try convincing the perl 5 porters that it should at least do something in 5.10.

    -- Joost
    downtime n. The period during which a system is error-free and immune from user input.
Re^3: STOP Trading Memory for Speed
by Aristotle (Chancellor) on Oct 02, 2002 at 13:12 UTC
    You say:
    Yes, there are algorithms that gain speed by trading memory for it, even if memory is slower than the processing element. But our computers are not theoretical Turing machines with infinite resources; there are hard limits. One of these hard limits is the 4GB direct addressing of 32-bit architectures.
    I said:
    You do have a point in complaining about the high overhead approach forcing you to take refuge in disk IO, but that's an entirely different argument from saying that trading memory for clock cycles will backfire when an application fits in memory.

    You're not saying anything I hadn't already covered. Yes, your points are correct, but you made the same mistake as the original poster of applying the "too much memory forces me onto disk" argument to the "cache misses are getting more and more costly" statement.

    These have nothing to do with each other. To argue in favour of the latter, you have to show an example where trading code size for data size actually accelerated an application that had not hit the hard limit to begin with.

    Note I did not comment on the "cache misses are getting more and more costly" statement. Although I don't want to follow the suggestion of trading code size for data size - not only because I believe the disparity is not yet large enough to justify the trade in most cases, but mainly because I frequently find data-intensive designs much easier to maintain than code-intensive ones - I know that the statement is true and will at some point in the future require attention.
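    The "cache misses are getting more costly" point can be made concrete with a small C sketch (my illustration, not code from the thread): both functions below perform identical arithmetic on the same matrix, but the column-major version strides 4 KB between consecutive accesses and typically runs several times slower purely due to cache misses.

    ```c
    #include <assert.h>
    #include <stdlib.h>

    enum { ROWS = 1024, COLS = 1024 };

    /* Walks memory sequentially: adjacent addresses, cache-friendly. */
    long sum_row_major(const int *m) {
        long s = 0;
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                s += m[r * COLS + c];
        return s;
    }

    /* Same arithmetic, but each step jumps COLS ints (4 KB):
     * nearly every access can miss the cache. */
    long sum_col_major(const int *m) {
        long s = 0;
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                s += m[r * COLS + c];
        return s;
    }

    int main(void) {
        int *m = malloc(sizeof(int) * (long)ROWS * COLS);
        if (!m) return 1;
        for (long i = 0; i < (long)ROWS * COLS; i++)
            m[i] = 1;
        /* Identical results; only the memory access pattern differs. */
        assert(sum_row_major(m) == (long)ROWS * COLS);
        assert(sum_col_major(m) == sum_row_major(m));
        free(m);
        return 0;
    }
    ```

    The results are identical; the cost difference is entirely in how the memory hierarchy is used - which is why the size of a working set, not just the number of instructions, decides real-world speed.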

    Makeshifts last the longest.
