Duh. That doesn't mean a program that tries to use lots of physical memory is going to page-fault any less. The original problem statement:
By loading several million records into an array, the virtual-memory footprint of this application blossomed to about 44 megabytes, which caused about 75,000 page faults to occur just in loading and sorting that “memory” list. Although there was enough RAM to allow that to happen without physical I/O, Linux discarded a corresponding amount of file-buffer space ... and Linux depends a great deal on its buffers for good performance.
So your "solution" won't prevent the problem. Granted, pre-allocating can reduce heap fragmentation and eliminate a few copies, but it isn't going to make several million records stop requiring many megabytes, suddenly leaving room for most of the file buffers to stay resident. And a fragmented heap is likely to have less impact on the number of pages that need to be kept swapped in than it has on the total heap size. So I stand by my prediction that such games are unlikely to have a major impact in this type of situation.
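To make the point concrete, here's a minimal sketch (in Python, purely as an illustration, since the thread doesn't pin down a language): pre-allocating avoids the reallocation copies that happen as a list grows, but the final footprint is the same order of magnitude either way, so the pressure on the file-buffer cache doesn't go away.

```python
import sys

n = 1_000_000

# Grown incrementally: the list is over-allocated and copied
# several times as it grows.
grown = []
for i in range(n):
    grown.append(i)

# Pre-allocated: one allocation up front, no reallocation copies.
prealloc = [None] * n
for i in range(n):
    prealloc[i] = i

# Either way the list itself needs ~8 bytes per slot on a 64-bit
# CPython build, so the overall footprint is roughly the same.
print(sys.getsizeof(grown), sys.getsizeof(prealloc))
```

(This measures only the list object itself, not the record objects it points to, which dominate the real footprint; the comparison still shows that pre-allocation changes *how* the memory is obtained, not *how much* is needed.)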