|more useful options|
I want to be on record as saying that I agree with everything that's being said about testing. Testing more, especially if the tests are systematized so that they can be reused productively in the future, is almost always a good thing. I do want to point out one thing, however.
I've met with groups who have tested extensively and then interfaced with a system I support. Something didn't go right, and rather than trying to help find the problem, their attitude was "We tested extensively, it broke long after installation, what did YOU change?" Now, these groups had no visibility into how, or how much, we tested our part; they were simply made arrogant by their assumption that their own testing was exhaustive.
In two recent cases of this, the problems came from conditions their testing didn't cover: one was a subtle timing problem, the other an input that was very large.
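To illustrate the "very large input" failure mode (the specifics below are my own toy sketch, not from either incident): a function can pass every small-case unit test and still fall over at a scale the suite never exercises.

```python
def total(xs):
    # Naive recursive sum: perfectly correct for the sizes
    # a typical unit-test suite covers.
    if not xs:
        return 0
    return xs[0] + total(xs[1:])

# The small-case tests all pass...
assert total([]) == 0
assert total([1, 2, 3]) == 6

# ...but a very large input blows Python's recursion limit,
# a failure the small-case tests never hint at.
try:
    total(list(range(100_000)))
    handles_large_input = True
except RecursionError:
    handles_large_input = False
```

The passing assertions give exactly the false confidence described above: the tests were real and the code was "tested," yet the failure lived entirely outside the tested region.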
The humble tester must recall Dijkstra's words:
Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
The programmer must be aware of the limits of their testing and not let adherence to good practice breed overconfidence. The right attitude is "Gosh, I think it works, and I'm more sure because it passed my tests."
Update: 09/27/2002 17:02 EDT: Noticed that my wording was awkward. I referred to "In both cases" above without properly introducing this. I changed the wording to "In two cases of this I had recently...".