|We don't bite newbies here... much|
God forbid that developers, in either the corporate or individual sense of the word, should ever become susceptible to the litigious nature of modern western culture. A woman sues McDonald's because she burned her crotch while driving with a cup of their coffee between her thighs, and it didn't explicitly state that it was hot. So now the rest of us have to put up with lukewarm coffee.
That said, I also think that the "This software is not warranted, explicitly nor implicitly, as fit for any purpose" clause is a hugely detrimental cop-out, very damaging to the industry and the profession. Imagine a car that came with a "this car may not go around corners safely" sticker.
I wholeheartedly agree that the best route to affordable, timely and warrantable (though not necessarily warranted) software is code re-use. Re-used components become more reliable over time, and reliable components greatly contribute to reliable applications.
However, I think that before "re-usable code" or "software components" will make a real impact upon the quality of applications, several things will need to come into being.
A 0-BA nut from one manufacturer will fit a 0-BA bolt from another because of such defined component standards. Similarly, watches keep (roughly) the same time, tyres fit wheels, plugs fit sockets, paper fits envelopes and printers, clothes and shoes almost fit people (a one-sided standard). Etc., etc.
Given that these three could be satisfied, I think it would become possible to start writing reliable, warrantable software at a reasonable cost of development.
The key to making this work, I believe, is to delegate the runtime enforcement of the input parameters of a component's functions or methods to the hardware.
Given the increasing power of today's processors, very large instruction words and super-scalar architectures, it should be eminently possible to define an architecture-independent, standardized mechanism for declaring the formal parameters of a function or method to the processor, along with the constraints and assertions that need to be applied to them. The same is true for a standardized method of presenting the passed parameters to the processor. The processor could then enforce the contract between the caller and the called, efficiently and reliably, at the hardware level.
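No such hardware mechanism exists today, but the contract it would enforce can be sketched in software. The following is a minimal, illustrative Python sketch (the decorator name and predicates are my own invention, not any standard): constraints are declared alongside the formal parameters, and every call is checked against them before the body runs.

```python
import inspect
from functools import wraps

def contract(**constraints):
    """Attach a named predicate to each parameter and check it on every
    call -- a software stand-in for the hardware-level enforcement
    described above."""
    def decorate(func):
        sig = inspect.signature(func)

        @wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for name, predicate in constraints.items():
                value = bound.arguments[name]
                if not predicate(value):
                    raise ValueError(f"contract violated: {name}={value!r}")
            return func(*args, **kwargs)
        return wrapper
    return decorate

# Declared constraints: the denominator must be non-zero.
@contract(numerator=lambda n: isinstance(n, (int, float)),
          denominator=lambda d: d != 0)
def divide(numerator, denominator):
    return numerator / denominator
```

Here `divide(10, 2)` succeeds, while `divide(1, 0)` is rejected before the division is ever attempted; the point of the hardware proposal is that this checking would be free of the per-call interpreter overhead this sketch incurs.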
The process of independent validation of the contract enforcement of any particular piece of hardware is considerably simpler and cheaper than doing the same for every piece of software written. With a standardized, cross-vendor calling mechanism defined, the verification of compilers and interpreters against this standard would also become a realistic proposition. Once the compilers and interpreters can be validated, it reduces, if not completely removes, the various commercial and legal risks and prejudices against re-using components from third-party sources. With standardisation at the hardware level comes language and platform independence, and the removal of many barriers.
It would become possible to write applications in such a way that the end user could purchase or download individual components from any source and the application could simply use them. Whether the components were obtained in binary form for their specific platform, or in source form and self-compiled, the application would simply use the local copy of the component through load-time or run-time binding. This would also have the effect of introducing competition at the component level, reducing the distribution size of applications, and allowing the appropriate language to be used for each component without the fear of interoperability problems.
Given that the latest hardware is getting close to the point where it is possible to render almost film-quality 3D graphics in real time, the next generation (or maybe the one after that) should easily have enough performance to allow for runtime contract enforcement of parameter constraints without imposing an unacceptable burden on most applications. Back in the days of Pascal, it was possible to have the compiler add code to perform runtime array bounds checking. Unfortunately, this was usually disabled in production code because of the performance penalty, but the hardware has moved on a long way since then.
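The Pascal situation has a direct modern analogue: Python's assert statements perform exactly this kind of runtime check, and running the interpreter with the -O flag strips them out, just as Pascal's bounds checks were compiled out of production builds. A small illustration (the function is hypothetical, for demonstration only):

```python
def get_element(items, index):
    # Runtime bounds check, in the spirit of Pascal's compiler-inserted
    # array checks. Running Python with -O removes this assert, mirroring
    # the way such checks were disabled in production Pascal code.
    assert 0 <= index < len(items), f"index {index} out of bounds for {len(items)} items"
    return items[index]
```

The trade-off is identical to the one described above: the check is cheapest to remove precisely where failures are most expensive, which is the argument for pushing it down into hardware where it costs nothing to leave on.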
My thoughts on this go much deeper and have evolved over a long time, but this probably isn't the right place for taking them further.
One thing that is Perl-related: given the flexible typing mechanisms presented in the last Apocalypse, and the apparent cross-language capabilities of the Parrot engine, it would only(!) require the addition of a constraint-specification mechanism to the language(s) and an (optional) run-time checking mechanism in the engine to approach a software simulation of these possibilities. I still think that the hardware element would be required for it to have a real effect on the reliability of software in general, but the simulation in software would be a good way of (dis)proving my theories.
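To make the constraint-specification idea concrete: Perl 6, as designed, lets you declare a subset type such as a constrained integer with a `where` clause. The following sketch simulates that notion in Python (the `Constrained` class and its behaviour are my own illustration, not Parrot's actual mechanism): a base type plus a predicate, checked whenever a value is admitted.

```python
class Constrained:
    """A constrained type in the spirit of the Apocalypse's flexible
    typing: a base type plus a 'where' predicate. Values are checked
    at the point they are admitted, which is what an engine-level
    run-time checking mechanism would do automatically."""

    def __init__(self, base, where):
        self.base = base
        self.where = where

    def __call__(self, value):
        # Admit the value only if it has the base type and satisfies
        # the declared constraint.
        if not isinstance(value, self.base) or not self.where(value):
            raise TypeError(f"{value!r} fails the declared constraint")
        return value

# Rough analogue of a Perl 6 'subset EvenInt of Int where { $_ % 2 == 0 }'
EvenInt = Constrained(int, lambda n: n % 2 == 0)
```

With this, `EvenInt(4)` passes the value through unchanged, while `EvenInt(3)` is rejected; an engine-level implementation would apply the same test to every parameter binding without the programmer writing explicit calls.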
Counter-points to my thoughts eagerly and gratefully received, via a different communication mechanism if it is too far off-topic for this forum.
Examine what is said, not who speaks.
1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3) Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke.