I'm not limiting myself to exceptions and throws - I'm talking about exception flows - the parts of the code that get executed when something goes wrong. That may result in an exception getting thrown, but only if the designer (probably not me) says I can throw - should the design be at that level of polish. Maybe the designer says - thou shalt not throw an exception in your implementation - I might disagree, but that's what I _have_ to do. I don't think I'd have a job long if I said "screw you, I'm throwing one anyway."
What I am referring to is testing the decisions I made in implementing the design. _If_ I chose to use an on-disk file as part of my solution, and _if_ I wrote code to handle different I/O failures, I would want to make sure I wrote unit test cases to check that I correctly implemented the handling of those failures. Maybe I just handle them and don't tell the caller anything; maybe I log it and carry on. Whatever I chose to do, I should test that I implemented it correctly. How wise my on-disk solution is - that's a different question.
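To make that concrete, here is a minimal sketch of the kind of test I mean (Python; `load_settings` and the `settings` logger are hypothetical names of mine, not anything from the OP's code) - the decision under test is "swallow the I/O failure, log it, fall back to a default":

```python
import logging
import unittest
from unittest import mock

log = logging.getLogger("settings")


def load_settings(path):
    """Hypothetical helper: read settings from disk; on any I/O failure,
    log a warning and fall back to an empty default instead of raising."""
    try:
        with open(path) as fh:
            return fh.read()
    except OSError as exc:
        log.warning("could not read %s: %s", path, exc)
        return ""  # the implementation decision being tested


class LoadSettingsIOFailureTest(unittest.TestCase):
    def test_unreadable_file_is_logged_and_defaulted(self):
        # Force the failure deterministically rather than waiting for a real disk error.
        with mock.patch("builtins.open", side_effect=OSError("disk error")):
            with self.assertLogs("settings", level="WARNING"):
                self.assertEqual(load_settings("settings.cfg"), "")


if __name__ == "__main__":
    unittest.main()
```

The helper itself is beside the point; what matters is that the failure path gets exercised on purpose, so the "handle it quietly" decision is actually verified.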
In short: for the things I can test from the interface/design point of view, I agree with you completely, and I should test them black-box. For the things that are artifacts of my implementation but not part of the interface/design, I should test black-box if I can and white-box where I need to. It's all about making sure you tested "everything" - at least it is for me; unit testing my code to as close to 100% coverage as possible is what I aim for. In the OP's case, calling bad SQL wasn't something their testing had caught. Somewhere, if they ran a coverage report, they might see where that SQL was being injected, flagged as a piece of code that hadn't been tested - a very white-box approach.
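For what it's worth, one way to get that kind of report in Python is coverage.py (the usual route is `coverage run -m pytest` followed by `coverage report -m`); the sketch below drives it from code instead, assuming a hypothetical test module `test_orders` - both names are placeholders, not the OP's:

```python
import unittest

import coverage

# Measure which lines actually run while the existing tests execute.
cov = coverage.Coverage()
cov.start()

unittest.main(module="test_orders", exit=False)

cov.stop()
cov.save()

# Lines that never ran (e.g. the branch that builds the bad SQL)
# show up in the "Missing" column of this report.
cov.report(show_missing=True)
```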
...reality must take precedence over public relations, for nature cannot be fooled. - R. P. Feynman