|P is for Practical|
A related question: what do you do when the test results are themselves unknown? For sufficiently complex problems, if we could do the calculation by hand, we wouldn't need a computer.
For example, I once worked doing runway analysis for cargo jets: it took a group of our programmers two hours to hand-check a single number on a single page. There were perhaps a hundred numbers like that on each page, and hundreds of pages in a single manual, for a single type of airplane, for a single client. Granted, it could be hand-checked faster than two hours: that figure was largely because the programmers were unfamiliar with all the charts and algorithms they had to wade through. But the problem is by no means simple; there is a lot of room for error at multiple stages, and hand-checking remains prohibitively slow.
We were provided a binary program by a formally trained performance engineer whom we had contracted to do these calculations correctly, but we had to take those numbers, format them, and present them on paper to the pilots who flew the planes.
Technically, from a legal standpoint, proving the correctness of the numbers we provided wasn't our job. On the other hand, both our company's reputation and the physical safety of the pilot and crew were on the line, which made it our problem in practice. In theory, our documents were just a "guideline" for the pilot; in practice, the pilot needed those numbers to comply with both legal and safety requirements. If he already knew what those numbers should be, he wouldn't need our product at all.
I eventually quit that job because I couldn't find a way to verify the correctness of the numbers we were generating. These numbers were critical to the flight of the plane: how much weight to load under given flight conditions, at what speed the pilot should lift off the runway, at what speed he should abort a takeoff, and at what height he should stop climbing and level off to avoid hitting obstacles in case of an engine failure. While a good pilot could probably compensate for any gross errors, I decided I didn't want to risk contributing to a flight disaster.
Can anyone think of a better testing approach I could have used? I used to sanity-check the strangest airport destinations I could think of (ones surrounded by mountains, at unusual elevations, or with really short runways) and look for errors so obvious that even I could find them. But lacking institutional support (e.g. a testing department, code reviews, someone else testing my code) and formal training (in both testing methods and aircraft performance engineering), I really couldn't find a way to do my job to the level I felt it needed.
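In hindsight, one way to formalize those ad-hoc spot checks is what's sometimes called metamorphic or property-based testing: instead of knowing the "right" answer for any one airport, you assert relationships that must hold between answers (heavier planes never rotate at a lower speed, thinner air never helps, V1 ≤ Vr ≤ V2). Here's a minimal sketch in Python; `takeoff_speeds` is a made-up toy stand-in for the real calculation, and the names and numbers are purely illustrative:

```python
def takeoff_speeds(weight_kg, elevation_ft, oat_c):
    """Toy stand-in model returning (v1, vr, v2) in knots.
    Illustrative only; a real tool would call the vendor's calculation."""
    base = 120.0 + weight_kg / 2000.0           # heavier -> faster rotation
    density_penalty = elevation_ft / 1000.0     # thinner air -> faster
    temp_penalty = max(0.0, oat_c - 15.0) / 2   # hot day -> faster
    vr = base + density_penalty + temp_penalty
    return (vr - 5.0, vr, vr + 10.0)            # keep v1 <= vr <= v2

def check_properties(speeds_fn):
    """Metamorphic checks: no hand-computed 'right answers' required."""
    cases = [(w, e, t) for w in (50_000, 70_000, 90_000)   # kg
                       for e in (0, 5_000, 9_000)          # ft elevation
                       for t in (-10, 15, 40)]             # deg C
    for w, e, t in cases:
        v1, vr, v2 = speeds_fn(w, e, t)
        # Ordering: decision speed <= rotation speed <= climb-out speed.
        assert v1 <= vr <= v2, (w, e, t)
        # Monotonicity: more weight never lowers the rotation speed.
        assert speeds_fn(w + 1_000, e, t)[1] >= vr, (w, e, t)
        # Higher elevation (thinner air) never lowers the rotation speed.
        assert speeds_fn(w, e + 1_000, t)[1] >= vr, (w, e, t)
    return len(cases)

print(check_properties(takeoff_speeds))  # -> 27 cases checked
```

The appeal is that each property is something a non-expert can defend ("adding cargo can't make rotation slower"), so a lone programmer without a performance-engineering background can still catch whole classes of sign and interpolation errors.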
As a result, I've gained an appreciation for testing and formal correctness analysis. I'm always curious how people improve their testing methodology, and especially how they convince management to let them do it. These days I work on things like billing systems, where nothing falls out of the sky if a bill goes out wrong.