Re^3: TDD with Coverage Analysis. Wow.

by xdg (Monsignor) on Aug 28, 2005 at 06:07 UTC


in reply to Re^2: TDD with Coverage Analysis. Wow.
in thread TDD with Coverage Analysis. Wow.

I think this is a good example of the gray zone. You can do various contortions, but what are you really proving in doing so? That open can return a false value?

The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.
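
For concreteness, forcing that failure is usually just a matter of handing the code a path that can't exist. A minimal sketch with Test::More (the slurp function here is hypothetical, standing in for whatever wraps the open):

    use strict;
    use warnings;
    use Test::More tests => 1;
    use File::Spec;

    # Hypothetical function under test: dies if it can't open its input.
    sub slurp {
        my ($file) = @_;
        open my $fh, '<', $file or die "Can't open $file: $!";
        local $/;
        return scalar <$fh>;
    }

    # Force the failure with a path that can't exist, then check the result.
    my $bogus = File::Spec->catfile('no_such_dir', 'no_such_file');
    eval { slurp($bogus) };
    like $@, qr/^Can't open \Q$bogus\E:/,
        'slurp dies with a useful message on a missing file';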

To reinforce the point another way: one can improve the coverage metric just by removing the "or die" phrase, and letting the program blow up on its own should an error ever actually occur. This makes the program less robust and at least arguably lower in quality -- but the coverage metric goes up. So coverage does not equal quality.
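
To illustrate (hypothetical read_config helpers; the exact accounting depends on the tool, e.g. whether Devel::Cover reports it under branch or condition coverage):

    use strict;
    use warnings;

    # Version A: the "or die" guard shows up in the coverage report; unless
    # a test forces open() to fail, its false leg is never exercised and the
    # percentage drops.
    sub read_config_checked {
        my ($file) = @_;
        open my $fh, '<', $file or die "Can't open $file: $!";
        local $/;
        return scalar <$fh>;
    }

    # Version B: drop the guard and this spot reports full coverage -- but a
    # failed open now surfaces only as a warning and an undef return, so the
    # metric went up while the code got worse.
    sub read_config_unchecked {
        my ($file) = @_;
        open my $fh, '<', $file;
        local $/;
        return scalar <$fh>;
    }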

If there's a requirement to fail a certain way on error, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Replies are listed 'Best First'.
Re^4: TDD with Coverage Analysis. Wow.
by dws (Chancellor) on Aug 28, 2005 at 19:04 UTC

    If there's a requirement to fail a certain way on error, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

    Lacking a requirement to fail a certain way, a lot of people, myself among them, will often toss in an or die and be done with it, without ever testing that failure case to see how it behaves functionally. And, for many customers, "fail gracefully" is an implicit requirement. Coverage analysis points out where we've taken half-steps, and suggests where a few more unit (or functional) tests might be needed.

    It's not about getting to 100%, though that does become tempting when you're handed a color-coded chart. It's about adequate test coverage.
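
    For anyone who hasn't run it, a typical Devel::Cover pass over a prove-based suite looks roughly like this (a sketch; flags and report location may vary by version):

        $ cover -delete
        $ HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t/
        $ cover    # prints a summary and writes an HTML report under cover_db/

    The statement, branch, and condition columns in that report are exactly where those untested "or die" half-steps show up.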

Re^4: TDD with Coverage Analysis. Wow.
by adrianh (Chancellor) on Aug 28, 2005 at 09:50 UTC
    The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

    Oh yes. Coverage is a tool to help you to create good test suites. Not the other way around.
