PerlMonks  

Lessons learned from getting to 100% with Devel::Cover

by leriksen (Curate)
on Jul 30, 2004 at 03:58 UTC

There has been a marked surge of interest in coverage testing recently, and Perl has excellent support for it via Paul Johnson's Devel::Cover.

I installed this module about two months ago and, despite articles like How to Misuse Code Coverage that caution against trying to achieve 100% coverage, I thought I'd have a bash at it and see what I would learn.

It wasn't easy to get to 100% coverage, especially when different platforms are taken into consideration. But I did learn a lot, and I found numerous issues with what I had (before the exercise) thought was solid, well-designed and well-tested code (there were already >700 test cases across the 2 libraries).

In no particular order, the things I found were:

  • where parts of my API were hard to use
  • inconsistencies in the usage of the API or its return values
  • that even with 100% coverage, there are still bugs - I fixed one just yesterday
  • it highlighted bad implementation and design decisions
  • highlighted several redundant areas - along the lines of 'if these preconditions are met, then it is impossible to reach this part of the code'
  • bugs continued to be found right up to the very end of generating coverage test cases, which was surprising - I thought that the last lot would be just to prove expected behaviour
  • bug fixes that broke legacy behaviour were quickly identified. This allowed informed decisions, like whether to try a non-breaking fix or to fix the code that expected the broken behaviour
  • most coverage tests exercised abnormal execution paths - and those proved the most bug-ridden, probably because of the mind-set '...but that'll probably never happen', which meant we weren't being as rigorous as we should have been

The lessons I learnt (or had reinforced) were:

  • testing is good, more testing is better, immense testing is great
  • even if everything is tested, it might still be wrong
  • fixing the remaining bugs as they arise is made ridiculously easy in most cases, because it is so well tested. Fixes that break existing behaviour are highlighted as test failures - without test cases you might never know
  • remaining bugs seem like they will be mostly limited to data scenarios you haven't accounted for. Our last one was multiple CRs (\r) in the data stream - we weren't expecting that

Whilst I wouldn't say that getting 100% coverage is required for everyone, just the exercise of attempting it can be revealing.

We've just had one of our developers finish a new library, and that currently has 92% coverage. We plan to get the rest of us familiar with the new API by having each developer extend the coverage tests by 1-2% each. Eventually everyone will have had to write a little code using the API, which is the best way to learn it.

use brain;

Re: Lessons learned from getting to 100% with Devel::Cover
by Jaap (Curate) on Jul 30, 2004 at 06:07 UTC
    I don't really get it. What do you mean when you say you test 100%? 100% of what? Of all tests possible ever?

      Devel::Cover measures the amount of your code that is exercised by your test plan. If you get 100% coverage then every line of the code being tested is run as part of the test plan.
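For readers who haven't tried it, this is roughly how that measurement is collected on a standard CPAN-style distribution (a sketch; the exact build target depends on your setup):

```shell
# Clear any previous coverage database, run the test suite under
# Devel::Cover instrumentation, then generate the coverage report.
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover make test
cover    # summarises coverage and writes an HTML report under cover_db/
```

The report breaks the totals down per file into statement, branch, condition, subroutine and pod coverage.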

      --
      <http://www.dave.org.uk>

      "The first rule of Perl club is you do not talk about Perl club."
      -- Chip Salzenberg

        It's more than just every line. Devel::Cover tests every line, every branch (if/else, etc.), and every conditional (A or B or C).
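A small sketch of the difference (a hypothetical function; the trailing comments note which calls satisfy which coverage criterion):

```perl
use strict;
use warnings;

# Statement coverage needs only one call; branch coverage needs both
# the if and the else taken; condition coverage of ($x || $y)
# additionally needs $y actually evaluated (i.e. a call with $x false).
sub classify {
    my ($x, $y) = @_;
    if ($x || $y) {
        return 'some';
    }
    else {
        return 'none';
    }
}

# classify(1, 0) - if-branch via $x true ($y never evaluated)
# classify(0, 1) - if-branch via $y true ($x false, $y evaluated)
# classify(0, 0) - else-branch ($x and $y both false)
```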

        In my view it's great for reminding one to test edge cases. How often do we write something along the lines of:

        some_function() or die;
        my $variable ||= 42;

        How many test suites actually test that the die happens, or test the case where the variable doesn't get set in advance? In using Devel::Cover just a little, I've been more thoughtful about where and how I sprinkle defensive logic and I've also made a point of testing to ensure the defenses actually hold in practice.
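Testing those defensive paths might look something like this with Test::More (the function name here is invented for illustration):

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical function: dies when handed an undefined value.
sub must_have_value {
    my ($value) = @_;
    defined $value or die "no value supplied\n";
    return $value;
}

# Happy path - the branch most suites already cover.
is( must_have_value(7), 7, 'defined value passes through' );

# Exercise the 'or die' branch: without a test like this,
# Devel::Cover reports the right-hand side of the 'or' as uncovered.
eval { must_have_value(undef) };
like( $@, qr/no value supplied/, 'die fires on undef' );

# Exercise both sides of ||=: an unset and an already-set variable.
my $unset;
$unset ||= 42;
is( $unset, 42, '||= supplies the default when false' );

my $set = 7;
$set ||= 42;
is( $set, 7, '||= leaves a true value alone' );

done_testing();
```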

        -xdg

        postscript: my "test that die happens" comment is poorly phrased, probably a bad example, and justly critiqued below by belg4mit. As adrianh and stvn suggest, the coverage test is really testing the or part and whether some_function is falsifiable, not whether die works. If the code coverage tells us that the die phrase is never executed, it's a reminder that we're making an assumption (that some_function might return false), but not testing that assumption. In practice, the coverage test for some_function itself would probably show that there are untested failure cases that explain why some_function never returns false.


        Not to be confused with every path of execution through said code.

         

Re: Lessons learned from getting to 100% with Devel::Cover
by McMahon (Chaplain) on Jul 30, 2004 at 14:49 UTC
    Thanks for writing this down. I passed a link to this node to the agile-testing mail list, who I'm sure will appreciate it.
Re: Lessons learned from getting to 100% with Devel::Cover
by pbeckingham (Parson) on Jul 30, 2004 at 15:02 UTC

    I recently had issues with Devel::Cover (Devel::Cover Looks the Other Way), and while my project and test suite are still under development, I got coverage from 41.3% to 52% just by upgrading Devel::Cover from version 0.40 to 0.47 on Perl 5.8.0.

    It is always worth checking the version numbers of your modules, especially those labelled "alpha".

      This is very true - sometimes I had to write if or unless statements in less idiomatic ways so that Devel::Cover could keep up. I expect almost all of those issues will abate as it progresses through beta to becoming a core module - it deserves to be there.

      use brain;

Re: Lessons learned from getting to 100% with Devel::Cover
by stvn (Monsignor) on Jul 30, 2004 at 15:36 UTC

    Excellent Meditation!

    Let me start out by saying I admire your drive to get to 100%; it is not an easy thing to do. Personally I think that 100% is usually overrated and that >95% is acceptable in most cases, but you make a good argument.

    Of the things you said, I think this:

    even if everything is tested, it might still be wrong
    and this:
    fixing the remaining bugs as they arise is made ridiculously easy in most cases, because it is so well tested. Fixes that break existing behaviour are highlighted as test failures - without test cases you might never know
    are two of the most important points.

    Test/Code coverage is no silver bullet, as the How to Misuse Code Coverage article pointed out. But as your next point says, it is sure nice to have those tests when you are fixing bugs.

    Again, excellent post :)

    -stvn
Re: Lessons learned from getting to 100% with Devel::Cover
by shenme (Priest) on Jul 30, 2004 at 22:54 UTC
    highlighted several redundant areas - along the lines of 'if these preconditions are met, then it is impossible to reach this part of the code'
    This bothers me a bit, or at least how you might have reacted bothers me.

    While I love realizing that I've been overly cautious and have redundant checks, I also wouldn't want to be, umm, underly cautious either. There have been several times when I realized that it would be _useful_ for later code to check that earlier code hadn't mangled necessary preconditions. Read these as 'assertions'.

    How _did_ you react - clean up redundancies or total removal?

    --
    I'm a pessimist about probabilities; I'm an optimist about possibilities.
    Lewis Mumford

      You raise a good point.

      Redundancy isn't bad in the least. Since computers, math and all things logical are "incomplete" (google "halting problem turing godel"), it's impossible to remove all bugs, and (this is my belief) there is a direct correlation between complexity and bugginess (gotta love those scientific terms). Since there will always be bugs, redundancy is used to "detect" and possibly "ignore" them. Critical systems, such as those used on the F-16, have numerous redundant subsystems that are checked against each other to determine the "correct" result. Of course, even with highly redundant systems, it is possible that a bug will produce a uniformly false result.

      However, what is considered "critical" when flying faster than sound quickly becomes "overkill" when running a Perl script. In the end you have to decide just how important it is that the script absolutely runs.

        Actually, redundancy is bad. Redundancy is, in fact, a great evil. Redundancy hinders the propagation of changes.

        Redundancy and failover systems can be good in hardware, but that scenario is not comparable to redundancy in code on any level.

        Makeshifts last the longest.

      Maybe I made it sound a little more impressive than it really is.

      Say you have a function/method.
      Say you have a number of preconditions that need to be met in that function/method before proceeding deeper into the sub's logic.
      Devel::Cover points out which unexercised paths among those preconditions need tests written for them. You write those tests, so that the preconditions are 100% covered.
      Deeper inside the function/method, Devel::Cover shows you other untested paths.

      Looking at these, it quickly becomes apparent that it is not possible to specify a set of parameters to that function/method that both passes the preconditions and progresses down the untested path - therefore the untested branch is redundant; it can never be exercised.
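A boiled-down sketch of that situation (the function and its precondition are invented for illustration):

```perl
use strict;
use warnings;

# Hypothetical: the precondition guarantees $count >= 1, so the
# "nothing allocated" branch below can never be reached from outside.
sub allocate_items {
    my ($count) = @_;

    # Precondition - reject anything that isn't a positive integer.
    die "count must be a positive integer\n"
        unless defined $count && $count =~ /^\d+$/ && $count > 0;

    my @items = (0) x $count;

    if (@items) {
        return \@items;
    }
    else {
        # Devel::Cover flags this branch as never taken: no argument
        # can pass the precondition yet leave @items empty.
        die "internal error: nothing allocated\n";
    }
}
```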

      use brain;

Re: Lessons learned from getting to 100% with Devel::Cover
by hesco (Deacon) on Oct 28, 2008 at 01:08 UTC
    I'm not bothered in the least by having a 'redundant' path in my code of the form:

    if (1) {
        $obj->method();
    }
    else {
        print STDERR "False is true, War is peace and slavery is freedom. We should never have reached this condition.\n";
        die;
    }
    Somewhere among all those Best Practices articles I've read, I picked up the habit of never writing a conditional without an else clause. I can elsif my way through lots of potential logical branches, but if I fail to code some exception throwing into a final else branch of the dispatch code, I will never know if I truly covered all the bases.

    I tried this '100%' exercise with a module last year and got coverage of over 90%; most of the uncovered execution paths were along the lines of the example above.

    -- Hugh

    if( $lal && $lol ) { $life++; }

Node Type: perlmeditation [id://378586]
Approved by davido
Front-paged by hossman