PerlMonks
TDD with Coverage Analysis. Wow.

by dws (Chancellor)
on Aug 27, 2005 at 07:39 UTC

The short version of a longer story is that blakem loaned me his copy of Ian Langworth and chromatic's Perl Testing: A Developer's Notebook, and good things happened.

I've been reworking some old code TDD-style, commenting out the code, and only commenting lines back in when a unit test proved the need. Where the old code smells, I've been rewriting it, but only where there's test coverage. Thinking through testing has forced me to harden the code in some weak areas, and has flushed out a few problems that have been latent for the 5 years the code has been in production.

Though this approach has been working quite well, I've had nagging concerns about the quality of the tests. So I took the authors' advice and hooked up Devel::Cover.

Wow. Nifty HTML charts with links to more nifty charts that point to code, branches, and conditions that the tests aren't exercising. I'd used code coverage tools years before, but never together with TDD. It's like discovering that my car has a pop-up GPS display.

The change to my development rhythm was immediate. "Write a test, watch it fail, then write the code to make the test pass; repeat" now includes periodically stopping to run the coverage tool to look for gaps in coverage, and to catch where untested code has crept in. It's a change in rhythm that I'm liking a lot.

One of the amusing things that coverage analysis turned up is that statement coverage in my existing tests was nearly 100%. The gap was in a few places where the tests needed to do different things depending on the platform. Since pausing the tests to swap hardware is rather difficult, I'll have to accept life without a perfect score there, unless someone knows a trick. Coverage in non-test code was good on statements, but mediocre on branches, particularly when the branches involved error handling. Simulating errors in unit tests can be a nuisance, and I'd been avoiding it.

The book assumes that you're working in either an ExtUtils::MakeMaker or a Module::Build world. I'm not, yet, so I had to tweak their examples to get something that would work in my old-school Makefile. Here's what I came up with:

    # (recipe lines are indented with tabs)
    test:
    	prove t/*.t

    cover:
    	cover -delete
    	HARNESS_PERL_SWITCHES=-MDevel::Cover prove t/*.t
    	cover
The cover target spits out HTML files.

Neat stuff. I recommend it. If monks have run into any Devel::Cover limitations, I'd love to hear about them.

Re: TDD with Coverage Analysis. Wow.
by xdg (Monsignor) on Aug 27, 2005 at 11:13 UTC

    Devel::Cover is pretty amazing. I had a similar revelation a year or so ago. Since then, one big thing I've learned is that it's important to keep in mind that 'coverage is not correctness' -- your coverage statistic is just a metric and getting too focused on it can be a distraction. Or put another way, it's a development tool, not a quality metric.

    For example, (as you hinted) what kind of coverage does this get:

    open( my $fh, ">", $output ) or die_with_error();

    If you're a coverage junkie, then you might put yourself through all sorts of contortions to eliminate that red spot (e.g., creating a non-writable directory for output). But why? You can test die_with_error() on its own. You don't really need to test that your code can successfully fail an open call. On the other hand, if instead of dying your code did some special handling, like retrying the write a few times before giving up, then going through those contortions might be appropriate. But that's human context that Devel::Cover can't give you.

    Fortunately, Devel::Cover does try to check for some types of "uncoverable" code. E.g.:

    my $value = $some_other_value || undef;

    It's smart enough to know that you'll never get undef to be true. But what about this:

    my $filename = $some_filename || default_filename();

    Sometimes, you can code around these things, but I don't think it's worth diminishing readability for coverage. Here's one way for the example above:

    my $filename = $some_filename ? $some_filename : default_filename();

    That's not bad, but what if the initial condition is a subroutine call:

    # original
    my $filename = prompt_for_filename() || default_filename();

    # can't do this
    my $filename = prompt_for_filename() ? prompt_for_filename() : default_filename();

    # coverage-happy version
    my $prompted_filename = prompt_for_filename();
    my $filename = $prompted_filename ? $prompted_filename : default_filename();

    perl-qa had interesting discussions about this kind of stuff. A good one to read is testing || for a default value. There, some people are advocating for some way to flag lines as uncoverable with comments or an external file, to "make the red go away" once they've checked a line and are convinced it's really not coverable.

    Other things that have popped up "red" for me along these lines:

    • OS-dependent stuff (as you said)
    • perl version or perl config specific code
    • throwing in a wantarray for the future when I haven't used it that way yet (though I either ought to follow YAGNI or actually test this -- but it cropped up when I was emulating caller)
    • 'switch' type code with a default that shouldn't ever be reached
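    The last bullet might look like this in practice -- a minimal sketch, where the dispatch table and event names are made up for illustration:

    ```perl
    use strict;
    use warnings;

    # "Switch"-type code with a default that shouldn't ever be reached.
    # If every caller is already validated, the die branch never runs
    # in tests, and Devel::Cover flags it red.
    my %dispatch = (
        start => sub { "started" },
        stop  => sub { "stopped" },
    );

    sub handle_event {
        my ($event) = @_;
        my $code = $dispatch{$event}
            or die "unknown event '$event'";    # the "can't happen" default
        return $code->();
    }

    print handle_event('start'), "\n";    # prints "started"
    ```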

    So, my advice is use it as a tool to reveal where you thought you had written tests to cover something but hadn't. But don't let coverage become the end goal for its own sake.

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      For example, ... what kind of coverage does this get:
      open( my $fh, ">", $output ) or die_with_error();
      If you're a coverage junkie, then you might put yourself through all sorts of contortions to eliminate that red spot ... But why?

      One reason to make sure that the error branch is tested is documentation: you're showing the (test) reader how the method under test behaves when it encounters that error. Having tested die_with_error() isn't sufficient. That leaves the reader having to read the test, the code being tested, and whatever methods or subs that code invokes.

      It might also be that, in the context of the line of code above, invoking die_with_error() is the wrong thing to do. And, admit it, that "or" branch might never get invoked during testing unless you force the issue.

      Besides, the contortions here are minor. If $output is an argument to the method you're testing, injecting a bogus file path is trivial. And if doing that involves too many other side effects, it's a hint that extracting that line (and possibly some others around it that are involved in setting up for output) into a separate, more easily testable method might simplify the code.
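      That bogus-path injection might look like this in a test -- a sketch, where write_report and its error message are hypothetical stand-ins for the method under test:

      ```perl
      use strict;
      use warnings;
      use File::Spec;

      # Hypothetical sub under test: opens $output for writing, dies on failure.
      sub write_report {
          my ($output) = @_;
          open( my $fh, ">", $output ) or die "can't open $output: $!";
          print {$fh} "report\n";
          close $fh;
          return 1;
      }

      # A path inside a directory that doesn't exist forces open() to fail,
      # exercising the error branch without any mocking.
      my $bogus = File::Spec->catfile( 'no_such_dir_xyz', 'out.txt' );
      eval { write_report($bogus) };
      print $@ =~ /can't open/ ? "error branch covered\n"
                               : "unexpectedly succeeded\n";
      ```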

        I think this is a good example of the gray zone. You can do various contortions, but what are you really proving in doing so? That open can return a false value?

        The reasons you give may well be valid from a particular point of view (and I'm largely sympathetic to them) -- but they are really unrelated to coverage. One should force the failure and test the result if these other things are important, not because one is aiming for 100% coverage.

        To reinforce the point another way: one can improve the coverage metric just by removing the "or die" phrase and letting the program blow up on its own should an error ever actually occur. This makes the program less robust and arguably lower quality -- but the coverage metric goes up. So coverage does not equal quality.

        If there's a requirement to fail an error a certain way, then by all means, write the test and generate the error -- but then one is generating the error to show that the requirement is satisfied, not to meet a coverage goal for its own sake.

        -xdg


      open( my $fh, ">", $output ) or die_with_error();

      Mock open()? That's what I would do ...


      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: TDD with Coverage Analysis. Wow.
by tlm (Prior) on Aug 27, 2005 at 13:22 UTC

    D::C is cool. I also agree with xdg's observations. It can definitely seduce one into obsessing over achieving 100% coverage on all fronts, which, given the current limitations of D::C, is not always reasonable.

    In addition to the ones already mentioned, one (admittedly pretty darn obscure) situation I haven't figured out how to cover is code like this:

    $SIG{ __DIE__ } = sub {
        if ( $^S ) {
            # exceptions being caught
        }
        else {
            # uncoverable?
            die;
        }
    };
    In tests, the __DIE__ handler would always run within the dynamic scope of an eval, for obvious reasons, so $^S (aka, $EXCEPTIONS_BEING_CAUGHT) will always be true.

    Update: I added a die statement to the second branch of the conditional. Thanks to davidrw, whose reply alerted me to the omission.

    the lowliest monk

      Can you invoke it directly in the test script as &{$SIG{__DIE__}} and test accordingly from there?

      Also, ditto on the above. We started using Devel::Cover recently at work, and besides being very cool in itself, it is a great thing in terms of forcing you to really know your own code in order to write thorough test coverage for it. On more than one occasion it has made me rethink/rework code, and it has also revealed hidden bugs.

      As for coverage --> 100%, for me it is also usually the error conditions (e.g. $dbh->prepare() or do { ... }) that are the holes... and while 100% isn't strictly necessary, it is a nice feeling to get a completely green board ;)


      As for another workaround for one of xdg's points (this might not be possible, depending on what the function actually does), this might work in some cases:
      # original
      my $filename = prompt_for_filename() || default_filename();

      # potential workaround
      use constant DEFAULT_FILENAME => 'foo.txt';
      my $filename = prompt_for_filename() || DEFAULT_FILENAME;

        Can you invoke it directly in the test script as &{$SIG{__DIE__}} and test accordingly from there?

        Only if the !$^S branch doesn't die (and doesn't call something that does). I should have made that clearer in my original post. Will fix. Thanks.

        # potential workaround use constant DEFAULT_FILENAME => 'foo.txt'; my $filename = prompt_for_filename() || DEFAULT_FILENAME;

        I don't see how the second alternative of the || could ever be false (i.e. the coverage would never be 100%).

        Update: Ah, I see.

        the lowliest monk

      Perhaps you could test the non-$^S branch by embedding the code under test in a separate little script, and arrange to have that code executed in a separate process using an IPC::Run::<something> call and/or system(). You could then capture any error output/exit code as needed to verify correct behavior?
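      A minimal sketch of that subprocess idea, with the "script" inlined via -e and the exit codes as arbitrary markers I made up for illustration:

      ```perl
      use strict;
      use warnings;

      # The child perl exits 42 if its __DIE__ handler sees $^S false,
      # and 7 if it sees $^S true.
      my $handler = q{$SIG{__DIE__} = sub { exit($^S ? 7 : 42) };};

      # die outside any eval: $^S is false in the handler
      system( $^X, '-e', $handler . q{ die "boom"; } );
      my $outside = $? >> 8;

      # die inside an eval: $^S is true in the handler
      system( $^X, '-e', $handler . q{ eval { die "boom" }; } );
      my $inside = $? >> 8;

      print "outside eval: $outside, inside eval: $inside\n";
      ```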

      --DrWhy

      "If God had meant for us to think for ourselves he would have given us brains. Oh, wait..."

        Perhaps you could test the non-$^S branch by embedding the code under test in a separate little script, and arrange to have that code executed in a separate process using an IPC::Run::<something> call and/or system()? You could then capture any error output/exit code as needed to verify correct behavior?

        Yes, I did try something like this, and it worked, but Devel::Cover did not detect it. So the branch is not really "uncoverable," as I originally wrote, but rather "uncoverable as far as Devel::Cover can tell." :-)

        the lowliest monk

Node Type: perlmeditation [id://487104]
Approved by gmax
Front-paged by Old_Gray_Bear