PerlMonks |
In the mid-eighties, I was briefly involved in the development of an IBM internal language called PLS86. A descendant of the PL/I language that IBM started developing in the late 50s and early 60s, it was targeted at the x86 platform, and one of its major design goals was to allow the production of "mathematically provably correct programs". The language was briefly considered for use in the development of IBM's then top-secret, and as-yet unnamed, OS/2. (So inventive. Not!) Anyway, the language was dropped (for use in developing OS/2), not least because IBM's then partner in this development, MS, refused to use it, and so C was used instead.

I was friends with one of the guys who did considerable work on the "mathematical proof" part of the language. I don't recall all of the details (probably because I didn't really understand much of it), but there was a point in the development of the language, which went ahead anyway, where the provability requirement was quietly dropped. The reason came down to the simple statement: it wasn't possible! Some of the bits I remember that lie behind that conclusion:

- Even though it was a totally pre-compiled language, without the dynamic features of some other languages (eg. eval in perl, INTERPRET in REXX etc.), unless the program is entirely statically linked there is no way to ensure that you will not get failures due to the unavailability, or incorrect runtime linking, of dynamically bound code segments.

- In order to prove the program correct, it becomes necessary to add additional code to verify the inputs to every part of an algorithm, and more code to verify the outputs. However, there are many problems with this.
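That last point can be sketched as a pre-/post-condition discipline. Everything in this snippet (the function, the checks, the names) is my own hypothetical illustration, not anything from PLS86; it just shows how the verification code itself becomes more code that someone must trust:

```python
# A sketch of the "verify inputs, verify outputs" overhead described above.
# All names here are hypothetical illustrations.

def checked_isqrt(n):
    """Integer square root, with explicit pre- and post-condition checks."""
    # Pre-condition: verify the input before the algorithm proper runs.
    if not isinstance(n, int) or n < 0:
        raise ValueError("pre-condition failed: n must be a non-negative int")

    # The algorithm itself (Newton's method on integers).
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2

    # Post-condition: verify the output. But this check is itself code
    # that can be wrong, and something must verify *it*, and so on.
    if not (x * x <= n < (x + 1) * (x + 1)):
        raise AssertionError("post-condition failed")
    return x

print(checked_isqrt(10))  # -> 3
```

The regress is the problem: the pre- and post-condition code roughly doubles the surface area that has to be proved correct.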
That isn't the complete story, but it's all I can remember for now. What it means is that for any real-world, costed project, it becomes necessary to define a level of reliability that is 'acceptable', and then design your development and testing criteria to achieve that. Testing is good, but it is not a silver bullet for bugs: it is impossible to achieve 100% coverage with tests.

The first major project I was involved in testing (the GUI component of OS/2) had over 700 APIs, with an average of 4 parameters per API and some with as many as 13! Many of those parameters are themselves (pointers to) structures which can consist of as many as 40+ discrete fields, and each field can (for example) be 32 individual booleans; or integer values with ranges of +-2**31 or 0 to 2**32; or floats ranging from something like 1e-308 to 1e+308. Do the math! Even without considering the effects of interdependencies between the APIs--you can't draw a line until you've obtained a presentation space, and you can't obtain a presentation space until you've obtained a device context, and so on--the numbers become astronomical. Testing everything simply isn't possible given the projected life span of the earth, never mind that of human developers:)

Given an experienced programmer developing yet-another-version of some project that s/he has done several of before, they will probably be able to make some educated guesses about where the edge cases are, and thereby reduce the set of tests to a manageable size. However, it was proved fairly comprehensively (to my satisfaction anyway), by some work done by another bit of IBM around the same time, that even experienced developers make bad guesses as to where the edge cases are when they move to projects that are even slightly different to those they have done before. In fact, that particular team showed that they make worse than random choices!
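To put some numbers on "Do the math!": even one structure-type parameter of 40 fields at 32 bits each defeats exhaustive enumeration. A quick back-of-the-envelope sketch (the test-throughput figure is my own generous assumption, purely for illustration):

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# The billion-tests-per-second rate is an assumption for illustration.

# One structure-type parameter: 40 fields of 32 bits each.
states_per_struct = 2 ** (40 * 32)          # 2**1280 distinct values

# Suppose we could somehow run a billion tests per second, non-stop.
tests_per_year = 10 ** 9 * 60 * 60 * 24 * 365

years_needed = states_per_struct // tests_per_year
print(f"years to enumerate one such parameter: more than 10**{len(str(years_needed)) - 1}")
```

And that is one parameter of one API, before any of the 700 APIs are combined in sequences.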
And the more experienced they were (in similar but different project types), the worse their guesses were. The upshot was that we ended up developing a random test case generator. The logic went like this. If you have a few computers generating random but valid sequences of API calls, along with other code to test the resultant programs, then you can use statistics--the number of programs generated -v- the number of them that failed--to determine the rate at which bugs are being found.

By re-running every test case generated, both good and bad, each time a new build was released, you get an effective regression test. You also get a statistical measure of the rate at which earlier bugs are being fixed, which of them are re-appearing, etc. You can also break the statistics down by component, developer, day of the week, etc. This allows you to target your efforts where they are of greatest benefit.

The effect of this was amazing and salutary. There had been many, many test cases written prior to the RTG being available. Within a month it was shown that all of the test cases produced by programmers/testers targeting their efforts according to their (or their superiors') best judgment had covered less than 15% of the total APIs (with 10% having been duplicated over and over) and 5% of the possible parameter combinations, and had found less than 1% of the possible bugs. Don't ask me how they arrived at this last statistic--it was way too much voodoo for me--but I can tell you that within two months, they were beginning to predict, with amazing accuracy, the number of new bugs that would be found, old bugs that would be cured, and old bugs that would re-surface in any given week (with twice daily builds). It was right around this time that the project got moved from the UK to the US and I never heard much more about it. You might find this article interesting: Coming: Failsafe software.
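The core of such a generator--randomly choosing only calls whose prerequisites are currently satisfied--can be sketched like this. The call names and the dependency model are my own toy simplification, not the real OS/2 APIs:

```python
import random

# A toy random test-case generator in the spirit described above.
# Each "API call" declares what state it requires and what it provides.
CALLS = {
    "open_device_context":      (set(),   {"dc"}),
    "open_presentation_space":  ({"dc"},  {"ps"}),
    "draw_line":                ({"ps"},  set()),
    "close_presentation_space": ({"ps"},  set()),
}

def random_valid_sequence(length, seed=None):
    """Generate a random but *valid* sequence of API calls."""
    rng = random.Random(seed)
    state, seq = set(), []
    for _ in range(length):
        # Only calls whose prerequisites are met are candidates.
        legal = [name for name, (needs, _) in CALLS.items() if needs <= state]
        call = rng.choice(legal)
        seq.append(call)
        state |= CALLS[call][1]
        if call == "close_presentation_space":
            state.discard("ps")
    return seq

print(random_valid_sequence(6, seed=42))
```

Feed each generated sequence to the system under test, record pass/fail, and replay the whole archive against every new build: the pass/fail counts over time give the bug-discovery and regression statistics described above.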
The only way the software industry is going to move out of the metaphorical iron- or maybe bronze-age, is when we start using computers to assist us in our work. It's a strange thing. If you describe a problem in almost any field of endeavour to a programmer, he will nearly always suggest a way that he could improve or solve that problem using a computer--except when the field of endeavour is his own! Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham

"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

If I understand your problem, I can solve it! Of course, the same can be said for you.

In reply to Re: Software Design Resources
by BrowserUk