No such thing as a small change
I still strongly disagree.
For instance, you say that those currently having to use the smaller (4- to 32-way) clusters to get their work done will soon no longer need to deal with the latency, restricted bandwidth, and topological problems involved with clusters, nor the complexity and expense of cluster-based software solutions, because they'll be able to use simpler, cheaper, cluster-in-a-box solutions. That misses the huge point that people use clusters to achieve a combination of performance and reliability.
Take, for instance, the lowly webserver cluster. Even if you're serving a million dynamic pages per day, your webserver requirements are pretty modest. Two, maybe three decent pizza boxes cover it. However, you want more machines than that. Maybe you have five or so. Why the extra machines? Because if something goes wrong with one of them, you can just pull it out of service and work on it. You want both performance and reliability, and reliability is achieved through redundant hardware.
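The core of that redundancy can be sketched in a few lines. This is a toy model, not any real load balancer's code; the host names and the health-check are invented for illustration, and real balancers (nginx, HAProxy, hardware appliances) do the same thing continuously and far more carefully:

```python
# Toy sketch of webserver-cluster redundancy. Host names and the
# health-check are hypothetical; a real load balancer probes backends
# continuously rather than at request time.

def pick_backend(backends, is_healthy):
    """Return the first healthy backend, skipping failed machines."""
    for host in backends:
        if is_healthy(host):
            return host
    raise RuntimeError("no healthy backends: the whole cluster is down")

# Five pizza boxes. Pulling one out of service just means its health
# check fails; traffic keeps flowing to the remaining four.
pool = ["web1", "web2", "web3", "web4", "web5"]
down = {"web1"}  # web1 yanked for maintenance
print(pick_backend(pool, lambda h: h not in down))  # web2
```

The performance side comes from having several boxes share the load; the reliability side comes from the fact that any one of them can disappear without the service noticing.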
Replacing a small cluster like this with one machine would be a stupid idea unless that one machine was engineered for an insane degree of reliability. At which point it is going to cost so much more that you'd be dumb to choose that route. (Unless the machine was already available for some other reason...)
Because of this, I guarantee that small clusters of machines are not going away any time soon. Moore's Law may deliver machines whose performance exceeds the need, but the needs of commodity users guarantee that there is no pressure to make those machines reliable enough for many business uses. Businesses will achieve that reliability through clustering instead.
About what Google will and will not do, this will make you happy. Their request to chip designers (which looks like it is being heeded) is for more multi-threading on chip and less speculative execution. The reason is that any work a computer does takes electricity. If a chip spends a lot of effort on code analysis and speculative out-of-order execution, it uses more electricity per completed instruction than it would running multiple threads in parallel. Given that declining hardware costs make electricity an ever bigger component of total cost, power efficiency favors multi-threaded cores over aggressively speculative single-threaded ones.
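The arithmetic behind that argument can be made concrete. The numbers below are purely invented to show the shape of the calculation, not measurements from any real chip:

```python
# Illustrative arithmetic only: the wattages, instruction rates, and
# useful-work fractions are made up, not measured from real hardware.

def joules_per_useful_instruction(watts, ips, useful_fraction):
    """Energy spent per instruction that actually contributes to the
    result, discounting speculative work that gets thrown away."""
    return watts / (ips * useful_fraction)

# A speculative core burns power on mispredicted, discarded work...
speculative = joules_per_useful_instruction(watts=100, ips=2e9,
                                            useful_fraction=0.7)
# ...while simpler multi-threaded cores keep more of their work useful,
# even at a lower raw instruction rate.
threaded = joules_per_useful_instruction(watts=80, ips=1.6e9,
                                         useful_fraction=0.95)
assert threaded < speculative  # fewer joules per completed instruction
```

The point is not the specific numbers but the denominator: energy per *completed* instruction is what a datacenter operator pays for, and discarded speculative work inflates it.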
That said, Google does not have people do a lot of multi-threaded programming. Their approach consists of having people write a lot of single-threaded code and then combining it into easily parallelizable chunks. Yes, I'm aware that this is similar to your proposal for how to actually write multi-threaded code. Yes, I'm aware that pugs is implementing something like this. However, my impression remains that Perl is a bad language to try this with, simply because it is so darned hard to prove that arbitrary code won't have side-effects and can therefore be safely parallelized.
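A minimal sketch of that model, in the MapReduce style. The word-count example and function names are mine, not Google's actual code; the point is that each chunk is ordinary single-threaded code, and a framework fans the chunks out:

```python
# Sketch of "write single-threaded chunks, let the framework
# parallelize them."  Hypothetical word-count example, not Google's API.
from collections import Counter
from multiprocessing import Pool

def count_words(document):
    """Ordinary single-threaded code: no locks, no shared state."""
    return Counter(document.split())

def word_count(documents, parallel_map=map):
    # The framework supplies the parallelism via parallel_map; the
    # programmer never wrote a thread. Each chunk runs independently,
    # then the partial results are merged.
    partials = list(parallel_map(count_words, documents))
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    docs = ["to be or not to be", "to thine own self be true"]
    with Pool() as pool:  # swap in Pool.map to run chunks in parallel
        print(word_count(docs, pool.map)["to"])  # 3
```

Because `count_words` provably touches nothing but its argument, the framework is free to run any number of copies at once. That is exactly the side-effect-freedom property that is so hard to establish for arbitrary Perl code.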
I could be wrong about that, and I'll applaud loudly if people prove me wrong by succeeding very well at it.
(Random note: another kind of "easily parallelizable chunk" is a piece of SQL. Databases have moved toward parallel query execution, and will move further that way as time goes on. Again, the programmer just writes single-threaded code, but behind the scenes it is parallelized. In this case, however, the programming language does none of the work.)
A final note about personal experience. I only bring up mine when I think it really is relevant to the point at hand. Which has nothing to do with breadth, and a lot to do with specifics. You definitely have been in computers longer than I have, and it sounds like you've done more kinds of things with them than I have. But that won't change whether or not my experience has any bearing on a specific example at hand. And if it does, I'll mention it. And if it doesn't, I won't.
You'll note that I wasn't the one who brought my experience up in this thread. You were. And you brought up a whole ton of experience, then said nothing relevant to the point you were responding to: my claim that even if the PC heads towards multi-threaded implementations, that doesn't mean that all programming is headed towards being multi-threaded. If you think I was trying to "put you in your place" based on my depth of experience, then I'd suggest re-reading the thread. You made a claim that I thought was silly, and I responded honestly to it. (For the record, I think that any claim of the form "the future of programming is X" is silly. Programming is going in too many directions at once to be characterized so simply.)
You'll also note that you didn't say that your wide experience contradicts the point I made about the different kinds of computing devices out there: even if the PC is headed towards many parallel threads of execution, that does not mean one can simply say that threading is the future of programming.