Re: Out-Of-Date Optimizations? New Idioms? RAM vs. CPU
Replies are listed 'Best First'.
I'd say that algorithms that favour memory over CPU benefit more from CPU caches than the other way around. The larger the cache, the faster the average memory access is, because it decreases the chance of a cache miss.
When the first PCs came along, you would try to keep as much in memory as possible because disk access was so very slow compared to the CPU. (I once rewrote a Clipper (dBase III) program that took > 30 minutes to run in another language where I could just use RAM, and the run time dropped to 15 seconds ;-).
Lately, CPUs have become much faster. So much faster that RAM (other than the CPU L1 and L2 caches) has become very slow compared to the CPU. So now you should be trying to keep everything in the CPU caches.
One way to achieve this is to not keep temporary values in memory, but to calculate them again and again (as long as the working set stays in the L1 and L2 caches).
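To make the trade-off concrete, here is a minimal sketch (my own illustration, not from the post above) of the two styles: looking a value up in a precomputed table, which costs memory and cache lines, versus recomputing it each time, which costs a few ALU instructions but touches no extra memory.

```c
#include <assert.h>

/* Version 1: look the square up in a precomputed table (memory-bound).
 * Every lookup may pull another cache line of the table into the cache. */
static int squares[1000];

void init_squares(void) {
    for (int i = 0; i < 1000; i++)
        squares[i] = i * i;
}

int square_lookup(int i) { return squares[i]; }

/* Version 2: recompute the square each time (CPU-bound).  The operand
 * is already in a register, so no extra memory is touched at all. */
int square_recompute(int i) { return i * i; }
```

Which version wins depends entirely on whether the table fits in cache and how expensive the recomputation is; for something as cheap as a multiply, recomputing is almost always the better bet on a modern CPU.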
Probably not a technically correct view of what's happening, but a model that I'm working with. I'm open to anyone correcting this model.
P.S.: Yes, my background is experimental physics, sometime long ago.
If you recalculate something so that it doesn't stay in memory, it won't stay in the cache either. The cache is a memory cache: what's there is also in main memory.
CPUs have become faster, but main memories have become bigger. Nowadays, computers tend not to swap; if your server swaps on a regular basis, you might want to do some tuning. Memory I/O is faster than disk I/O, and the ratio of memory I/O to disk I/O is greater than the ratio of cache to memory. Maybe not much of a data point, but of the servers with resource problems I've seen, more of them benefited from getting more memory than from more or faster CPUs. Most computers have more than enough CPU cycles, but usually they can use more main memory.
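A back-of-the-envelope check on that ratio claim; the latency figures below are order-of-magnitude assumptions of mine, not measurements:

```c
/* Ballpark latencies (assumed, order-of-magnitude only):
 *   L2 cache hit  ~ 10 ns
 *   main memory   ~ 100 ns
 *   disk seek     ~ 10 ms = 10,000,000 ns
 */
double cache_to_memory_ratio(void) { return 100.0 / 10.0; }  /* ~10x      */
double memory_to_disk_ratio(void)  { return 1e7 / 100.0; }   /* ~100,000x */
```

So a cache miss costs you roughly an order of magnitude, while hitting disk costs five; under these assumptions, avoiding swap matters vastly more than tuning for cache.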
True, but nowadays I think of swap as something to keep a computer from crashing during peak loads, rather than something you would need during "normal" operations. If your computer needs swap for "normal" operations (other than as an optimization), then you have a problem. And indeed, then it doesn't matter, because you have bigger problems.
But I meant more the case when everything can fit in RAM, and you want to make it still faster.
Could you recommend any good books/articles/etc. on system performance tuning? I've read 'System Performance Tuning' and 'Web Performance Tuning' from O'Reilly but didn't find them all that useful. Thanks.
A better way to improve usage of cache without going through a lot of careful tuning is to keep actively accessed data together, and avoid touching lots of memory randomly.
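The classic illustration of "keep actively accessed data together" is traversal order over a 2-D array; this sketch (my example, with a typical 64-byte cache line assumed) computes the same sum both ways:

```c
/* Summing a 2-D array row-by-row (cache-friendly, sequential) versus
 * column-by-column (cache-hostile, strided).  Both return the same sum;
 * the row-major walk touches memory in the order it is laid out, so
 * each (typically 64-byte) cache line is used fully before eviction. */
#define N 512

long sum_row_major(int a[N][N]) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];        /* consecutive addresses */
    return s;
}

long sum_col_major(int a[N][N]) {
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];        /* stride of N * sizeof(int) bytes */
    return s;
}
```

The column-major version touches a new cache line on almost every access once N * sizeof(int) exceeds the line size, which is exactly the "touching lots of memory randomly" pattern to avoid.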
My understanding (from my view somewhere in the bleachers) is that Parrot's garbage collection will provide both benefits.
Incidentally correcting a point you made in your original post, the importance of Parrot having lots of registers is not to make efficient use of cache. It is to avoid spending half of the time on stack operations (estimate quoted from my memory of elian's statement about what JVM and .NET do). In a register-poor environment, like x86, you come out even. In a register-rich environment you win big. (Yes, I know that x86 has lots of registers - but most are not visible to the programmer and the CPU doesn't always figure out how to use them well on the fly.)
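To see where the stack-operation overhead comes from, here is a hedged sketch (my own toy, not Parrot's actual bytecode or dispatch) of the same statement, a = b + c, on a stack machine versus a register machine:

```c
/* The stack VM dispatches four opcodes and shuffles an operand stack;
 * the register VM expresses the whole statement as one three-address
 * operation.  Fewer dispatches and no stack traffic is the win. */
int run_stack_vm(int b, int c) {
    enum { PUSH_B, PUSH_C, ADD, STORE_A, HALT };
    int code[] = { PUSH_B, PUSH_C, ADD, STORE_A, HALT };
    int stack[4], sp = 0, a = 0;
    for (int pc = 0; ; pc++) {
        switch (code[pc]) {
        case PUSH_B:  stack[sp++] = b; break;
        case PUSH_C:  stack[sp++] = c; break;
        case ADD:     sp--; stack[sp - 1] += stack[sp]; break;
        case STORE_A: a = stack[--sp]; break;
        case HALT:    return a;
        }
    }
}

int run_register_vm(int b, int c) {
    /* One instruction: r0 <- r1 + r2.  No operand stack at all. */
    int r1 = b, r2 = c;
    int r0 = r1 + r2;
    return r0;
}
```

In a register-rich environment the VM's r0, r1, r2 can live in hardware registers; in a register-poor one they spill to memory, which is why the advantage roughly evens out on x86.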
Before someone pipes up and says that we should focus on x86, Parrot is hoping to survive well into the time when 32-bit computing is replaced by 64-bit for mass consumers. Both Intel and AMD have come out with 64-bit chips with far more registers available to the programmer than x86 has. That strongly suggests that the future of consumer computing will have lots of registers available. (Not a guarantee though; the way that I read the tea leaves is that Intel is hoping that addressing hacks like PAE will allow 32-bit computing to continue to dominate consumer desktops through the end of the decade. AMD wants us to switch earlier. I will be very interested to see which way game developers jump when their games start needing more than 2GB of RAM.)
And a good way to ruin cache hits is to use a garbage-collecting language.
In reply to Aristotle: The theory (and I haven't profiled this myself, just passing on received wisdom) is that when the GC goes off to clear out old memory, it has to read it into the cache to do so. If the memory were released as soon as it were finished with, then the page could just be discarded as necessary. Of course, the effect on processor cache is just one factor, and it may be that good GC systems can make up for this in other ways, but I don't like them anyway. I much prefer deterministic release of resources. I first heard of this theory from comments by Linus Torvalds, if you'll excuse the name dropping, and it seems to make sense to me. Of course, it may be that the pages visited by the GC are pages that are going to be needed real soon. A good reminder that the first rule of optimisation is: don't, and the second is: do some profiling first.