I still disagree. In real-world cases, tightly integrated designs have often been viable a decade or more before modularized designs came to market. This is not a question of building a throw-away (though your solution will be superseded once raw performance isn't as important as, say, customizability), it is a question of being able to create the right product at the right time.
Let's look at an analogous situation comparing languages rather than development techniques. Suppose that Perl is 10x slower than C, but it is clearly preferable to accomplish a specific task in Perl. Since Moore's law doubles hardware performance roughly every 18 months, closing a 10x gap takes a little over three doublings, so we can project that it will be feasible to do that task in C about 5 years before it is feasible to do it in Perl. The C solution will be doomed: eventually someone will do it in Perl, and the Perl solution will be more flexible and customizable and will win. However, the C solution will not be a throw-away; it will be able to meet a need before the Perl one could. Furthermore, with the 5-year head start on development, the C project could easily remain better than the Perl one for several years after CPUs become good enough for the Perl solution. So for perhaps a decade, despite the obvious reasons to prefer Perl, everyone will use a solution built in C.
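To make the arithmetic explicit, here is a rough back-of-the-envelope sketch. The 18-month doubling period and the 10x slowdown are assumptions for the sake of the example, not measured numbers:

    import math

    slowdown = 10         # assumed Perl-vs-C performance gap
    doubling_years = 1.5  # assumed Moore's law doubling period (~18 months)

    # Doublings needed to close a 10x gap: log2(10) ~= 3.32
    # Catch-up time: 3.32 doublings * 1.5 years/doubling ~= 5 years
    lag_years = math.log2(slowdown) * doubling_years
    print(f"C's head start: about {lag_years:.1f} years")

Run it and you get about 5.0 years, which is where the figure above comes from.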
The same thing happens with modularization. For instance, in the minicomputer market, the first successful operating systems (Apollo, VMS, etc.) were tightly integrated with the hardware. Eventually they all lost to Unix, which was significantly more modular in design. But they were not throw-away solutions, and for many years they were what everyone used.