|Problems? Is your data what you think it is?|
OK - in response to your challenge, I've got one word to say to you - Exception Flows ;-)
Say you have a function/method, as part of your interface, that writes a file out somewhere. This isn't the main role of the function/method, it's just part of the work it does for you. The normal flow is to open the file, write whatever, and close the file.
But there is also code to deal with being unable to open the file, not all the data being written, or the file failing to close.
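To make the three failure points concrete, here is a minimal sketch (names hypothetical, shown in Python rather than Perl for brevity - the shape is the same in any language): the open, the write, and the close can each fail independently, and each is a distinct path through the code.

```python
import os

def save_report(path, data):
    """Hypothetical helper: writing the file is a side effect of some
    larger task. Each of open/write/close is a separate failure path."""
    try:
        fh = open(path, "w")      # can fail: missing dir, bad permissions
    except OSError as e:
        raise RuntimeError(f"could not open {path}: {e}")
    try:
        fh.write(data)            # can fail: disk full, I/O error
    finally:
        fh.close()                # close itself can fail on some systems
```

The normal flow is one line of happy-path testing; the other branches only exist because of how this particular implementation chose to report trouble.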
How these failures get communicated back to the client can vary too - from not saying a word, through throwing an exception, to calling exit().
My opinion is that the developer is required to set up test cases for each of these failure modes, and this requires a white box approach. That the code writes a file may not be apparent from the interface (whether it should be is a different point); only by looking at the code do you _know_ that it writes a file, and that it does different things for different failures. Even if the interface doco states that it writes a file, detailing the different failure paths is almost certainly not part of the interface design or doco (but may well be part of the interface _implementation_ doco).
The function/method writer can test all the normal stuff just by reference to the interface, but setting up file-open failures and checking that the function does the right thing needs more detailed knowledge related directly to the code. This is where tools like Devel::Cover can help - exception flows pretty much stick out straight away as bits of code, related directly to your chosen implementation of the interface, that you have got to write test cases for.
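The Perl tool named above is Devel::Cover; as a language-neutral sketch of what "setting up a file-open failure" looks like (names hypothetical, shown in Python using unittest.mock), you can force open() to blow up and check the function does the right thing - the kind of branch a coverage tool flags as never exercised by black-box tests:

```python
from unittest import mock

def write_config(path, text):
    """Hypothetical writer that translates any I/O failure into one error."""
    try:
        with open(path, "w") as fh:
            fh.write(text)
    except OSError as e:
        raise RuntimeError(f"write failed: {e}")

# White-box test: we know from reading the code that it calls open(),
# so we make open() fail and verify the failure path is taken.
with mock.patch("builtins.open", side_effect=OSError("disk full")):
    try:
        write_config("app.conf", "x=1")
        raised = False
    except RuntimeError:
        raised = True
assert raised
```

The point is the same as with Devel::Cover: the failure branches are tied to the chosen implementation, so the tests that reach them can only be written with the code in front of you.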
...reality must take precedence over public relations, for nature cannot be fooled. - R P Feynman
In reply to Re^4: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)