Pathologically Eclectic Rubbish Lister
"allow software to be used in applications where any failure is simply not acceptable"
That's a helluva concept... I would take the condition "failure is not acceptable" to mean something like "completion/closure of the software project is not expected within your lifetime". ;^) Keep in mind that the "final release" of an application is not when all the bugs are fixed. It's when people stop using the application (so the remaining bugs that surely still exist will never be found, let alone fixed).
The causes of software failure constitute an open set. They include not only things that are arguably wrong in the code design, but also varieties of data or operating conditions that no programmer could anticipate, no matter how careful or thorough the design and testing.
Apart from that, the evolutionary facts of life, as applied to hardware, OS's, programming languages and user needs, preclude the possibility of stasis in any application. Code quality is like internet security -- an ongoing process rather than a finite state to be achieved.
Your complaints about metaphors and lack of substance are well founded, I'm sure. But how can you expect substance, "working code", stuff that can be "directly applied", etc., in the context of talking about programming in general? Either you spout generalities or you delve into the details of a specific app (where the app usually requires some amount of app-specific QA/QC) -- or else you try to make some general point using some particular example, which turns out to be irrelevant to the majority of readers, so they can't "directly apply" it.
In order to learn the best way to write code, and in order to improve your skill as a programmer, you have to write code, fix code, review and critique code, and be involved enough in each particular application domain to know what the code should be doing -- i.e. to know what users of the code need to get done. This is not a matter of having a formula or cookbook for writing code that can't break; it's a matter of being able to figure out what to do when it does break. Because it will break.
(I know, I'm ignoring some areas of "general" software design where it really pays to learn from books, documentation, release notes, etc -- things like "how to write C code so that you don't allow input buffer overflows", "how to write cgi code so that you don't expose your server to malicious or accidental damage", and so on. But I would argue that these should be addressed in an application-specific way -- it seems unproductive to try speaking or learning about them "in