Sorry, but that's just silly.
It is a basic economic fact that the price per unit of performance for commodity hardware is far, far lower than for big servers. Clusters let businesses take advantage of this to get the performance and reliability they want at a much better price point.
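To make the price-per-performance point concrete, here is a toy calculation with entirely made-up numbers (the prices and benchmark scores below are illustrative assumptions, not real quotes):

```python
# All figures are hypothetical, chosen only to illustrate the argument.
big_iron_price = 1_000_000      # dollars for one large SMP server (assumed)
big_iron_perf = 100             # arbitrary benchmark units (assumed)

node_price = 5_000              # dollars per commodity node (assumed)
node_perf = 1                   # benchmark units per node (assumed)
nodes = 100                     # cluster sized to match aggregate performance

cluster_price = node_price * nodes
cluster_perf = node_perf * nodes

# Dollars per unit of performance for each option.
print(big_iron_price / big_iron_perf)   # 10000.0
print(cluster_price / cluster_perf)     # 5000.0
```

Even with the big machine priced at only twice the cluster's cost per unit of performance, the cluster wins; in practice the gap is often much larger, which is the whole economic argument.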
That 64-bit versus 32-bit is irrelevant can be trivially demonstrated. Big 64-bit servers are old news; the big Unix vendors went through that transition a decade ago. (I don't know when IBM's mainframes made the switch, but I think it was earlier than that.) Yet over the last decade big iron not only failed to replace clusters, it actually lost ground to them. Why? Because clusters are a lot cheaper.
Now I'm not denying that big machines offer performance advantages over clusters. You have correctly identified some of those advantages, and I grant that there are plenty of problems that can only be solved on a big machine. If you have one of those problems, then you absolutely must swallow the price tag and buy big iron. But if you can get away with it, you're strongly advised to get a cluster.
Most problems do not have to run on a huge machine, and clusters deliver equivalent performance far more cheaply than a big machine does. Neither fact seems likely to change in the foreseeable future, and as long as both remain true, clusters are going to remain with us.