I wasn't talking about boundary conditions (of course those should be tested!), I was talking about some QA tester (because I've had them) saying something to the effect of "the web application wasn't working because it didn't remember the information I input even though I didn't hit the submit button, but clicked on an unrelated link instead".
Now one might argue that this deficiency in web applications can and should be overcome, but it's an assumption in the environment. Unless the specs specifically mention that this is to behave differently, that's the way it should work.
I wasn't arguing that having brain-dead QA testers wouldn't provide some insights and challenge what should or should not be tested. I was just saying that the more ignorant the testers, the more false bugs you're going to get. Now it's up to you to decide whether that quantity of false bugs is worth the occasional gem that might turn up.
And as far as my sig goes... exposure should be a part of risk calculation. When did you ever read an article about how to avoid being attacked by a pig? I can recall seeing several (even some tax-funded) articles about shark bites, attacks, and prevention. Why is more effort expended on the one and not the other? Because of its perceived threat. The same applies even to choosing an operating system (like your example): if one is attacked more than the other, that raises its comparative risk.
-- More people are killed every year by pigs than by sharks, which shows you how good we are at evaluating risk. -- Bruce Schneier