I think of it this way:
- When the first PCs came along, you tried to keep as much as possible in memory, because disk access was so slow compared to the CPU. (I once rewrote a Clipper (dBase III) program that took over 30 minutes; in another language where I could just use RAM, it ran in 15 seconds ;-).)
- Lately, CPUs have become much faster. So much faster, in fact, that main RAM (everything outside the CPU's L1 and L2 caches) has become very slow relative to the CPU. So now you should try to keep everything in the CPU caches.
- One way to achieve this is not to keep temporary values in memory, but to recalculate them again and again (as long as the working set stays within the L1 and L2 caches).
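As a toy sketch of that last point (an illustration of the two access patterns, not a benchmark; the function names and the placeholder `f` are mine): summing some `f(x)` over a large array can either materialize every intermediate value in a temporary buffer, paying for the extra memory traffic, or recompute `f(x)` inline so the only data streaming through the cache is the input itself.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Placeholder for some cheap per-element computation. */
static double f(double x) { return x * x + 1.0; }

/* Version 1: store every intermediate f(x) in a temporary buffer,
 * then sum it. The buffer adds n * sizeof(double) bytes of extra
 * memory traffic (write it out, read it back in). */
double sum_with_buffer(const double *a, size_t n) {
    double *tmp = malloc(n * sizeof *tmp);
    if (!tmp) return 0.0;
    for (size_t i = 0; i < n; i++) tmp[i] = f(a[i]);
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += tmp[i];
    free(tmp);
    return s;
}

/* Version 2: recompute f(x) on the fly. The working set is just
 * the input array, which streams through the cache once; the
 * recomputation costs a few ALU cycles instead of a RAM round trip. */
double sum_recompute(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += f(a[i]);
    return s;
}
```

Both give the same answer; which one is faster depends on how expensive `f` is compared to a cache miss, which is exactly the trade-off the model above is about.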
Probably not a technically correct view of what's happening, but it's the model I work with. I'm open to anyone correcting it.
P.S.: Yes, my background is experimental physics, from a long time ago.