Testing at the right granularity

by polettix (Vicar)
on Apr 19, 2005 at 16:39 UTC

Wise Monks, I'd like to hear your thoughts about the granularity one should keep when writing tests.

I'm programming a small application dealing with some trees, in which the value in a node is the plain sum of the values in its children. Following advice, mostly from here, I'm producing tests as I go (even if I have to admit that I first write the function, then the test, contrary to the suggestion), but then I came to the present meditation.

Using Test::More, each single test checks something "atomic": is the return value true, is this value equal or similar to that one, and so on. To check the results of the calculation, I basically test whether the value in each node is what I'm expecting, thus traversing the tree and doing a test for each node.
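
To make it concrete, a stripped-down version of such a test might look like this (a sketch: value(), children() and build_test_tree() are simplified stand-ins, not my real API):
use Test::More qw( no_plan );

my $tree = build_test_tree();    # hypothetical fixture builder

# Emit one test per internal node: its value must equal the sum of
# its children's values. Leaves have nothing to check.
sub check_sums {
    my ($node, $path) = @_;
    my @kids = $node->children;
    if (@kids) {
        my $sum = 0;
        $sum += $_->value for @kids;
        is( $node->value, $sum, "node at $path sums its children" );
    }
    my $i = 0;
    check_sums( $_, $path . '/' . $i++ ) for @kids;
}

check_sums( $tree, 'root' );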

As a consequence, the number of tests is likely to explode, and this is where I'm looking for some other meditations. I think there's nothing really bad in test bloat, because if any value is incorrect I'll be able to find exactly what went wrong and where, but I wonder how a final user would benefit from knowing this instead of a simple "this function doesn't work".

On the other hand, I feel like I'm cheating by doing all these repetitive tests, having lots of "ok..." where I should really have one, namely the one related to the function I'm testing. I'm tempted to pack the comparisons inside a function and test only the outcome of that function for a single "ok", but I fear this would make troubleshooting harder for me.

For the moment I'll stick to the bloated approach, because it'll be a limited-distribution module, but I'm still curious: what granularity do you adopt in your test suites?

Flavio (perl -e "print(scalar(reverse('ti.xittelop@oivalf')))")

Don't fool yourself.

Replies are listed 'Best First'.
Re: Testing at the right granularity
by merlyn (Sage) on Apr 19, 2005 at 16:48 UTC
    I keep writing tests until I'm happy with the result of Devel::Cover, in covering all of the regular execution paths, and as many of the irregular paths as I can get at from outside the box.
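
    For the curious, the usual incantation looks something like this (a sketch; adjust to your own build setup):
    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover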

    And from there, I write a test for every bug I uncover that the earlier tests didn't show.

    I'm pretty sure this strategy hits that sweet "80/20" spot, giving me the most return on investment.

    -- Randal L. Schwartz, Perl hacker
    Be sure to read my standard disclaimer if this is a reply.

Re: Testing at the right granularity
by ambs (Pilgrim) on Apr 19, 2005 at 16:42 UTC
    The thinner the granularity you choose, the better. That's my opinion (and other people may state other things). The truth is that I like to test every function I write against every case it can be faced with.

    Alberto Simões

      I agree, but I'm still dubious about the granularity problem I posed. As a matter of fact, I wasn't talking about coverage, corner cases and so on, but more about how a single "logical" test can result in a bunch of "atomic" tests.

      To exaggerate, suppose I have some function result that I have to check against the string "ciao a tutti":

      my $result   = function_under_test();
      my $expected = "ciao a tutti";
      Now, I can take the direct path:
      is($result, $expected, "strings are equal");
      But I can go much deeper:
      my @result   = split //, $result;
      my @expected = split //, $expected;
      is( scalar(@result), scalar(@expected), "length match" );
      for ( my $i = 0; $i < @expected; ++$i ) {
          is( $result[$i], $expected[$i], "$i-th char match" );
      }
      I repeat: this is an exaggeration; I'm not saying that I do my tests this way. Nonetheless, I sometimes feel that maybe a more compact test (like the first) would do equally well as the bloated one.

      Flavio (perl -e "print(scalar(reverse('ti.xittelop@oivalf')))")

      Don't fool yourself.
        Nonetheless, I sometimes feel that maybe a more compact test (like the first) would do equally well as the bloated one.

        As long as they're testing the same things, yes. In my opinion, your second snippet of code is by far inferior to the first: it is more verbose and less clear. Keep in mind that writing tests still means following good coding practices. In this case, the simplest and most obvious solution is usually the best.

        If they were testing different things, though, then it's not so clear. I would lean towards testing as much as possible, as ambs said. Note that I am talking about this in terms of semantic tests, not in terms of number of "physical" test cases you can squeeze out. Update: dragonchild seems to also touch on this same idea in Re^3: Testing at the right granularity.

        If you're concerned about putting a lot of unrelated tests together, and thinking they may get drowned out in the noise, consider putting them in separate test files. This allows you to have your cake and eat it too. You get logical groupings, but also high granularity.

        I think the second set of tests misses the point of what a test suite should do, which is limited to making bugs visible. That's it! The analysis of the bug (which your second set of tests begins to do) should come after. All you want to do at this stage is make your program fail a test: the simpler the path there, the better. (Of course, if your program does not fail despite your efforts, good for you; grab a beer.) It's like putting an inflated inner tube under water to quickly find the location of air leaks (if any). That's all you'd expect from this test. You don't expect it to tell you the diameter of the hole, or how it got there, or what it will take to fix it, or anything else.

        the lowliest monk

        a single "logical" test can result in a bunch of "atomic" tests

        I agree with the other replies to this, but just to play devil's advocate...

        There can be a value in doing the things the second way, but it depends on factors that only you can judge. It comes down to a question of information gathering.

        If having a multitude of tests means you are able to pinpoint the exact point of failure in your code ("Hmm, the eighth character is uppercase, I guess that means someone optmi^broke the tau_gamma() routine again") then the tests are worth doing.

        If only one of the atomic tests fail and the information is useful (helps you zoom in faster to the problem area), then do it that way.

        What you will usually find, though, is that if the "logical" test fails, you will have a cascade of failing "atomic" tests as well, all pointing to the same problem. Here, no new information is being added to the picture. If this is the case, then just do the logical test and be done with it.

        Remember to test for the boundary conditions. In Perl, I find this often means dealing with empty arrays (), references to empty arrays and hashes ([] and {}, respectively), empty strings, undef, and the values '0', '0 but true' and 0e0.
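
        For instance, a quick battery over those values might look like this (count_items() is a made-up function, purely for illustration):
        use Test::More tests => 6;

        # Made-up example: count the "items" in whatever we're handed.
        sub count_items {
            my ($thing) = @_;
            return 0 unless defined $thing;
            return scalar @$thing      if ref $thing eq 'ARRAY';
            return scalar keys %$thing if ref $thing eq 'HASH';
            return length $thing;
        }

        is( count_items(undef),        0,  'undef counts as empty' );
        is( count_items([]),           0,  'empty array ref' );
        is( count_items({}),           0,  'empty hash ref' );
        is( count_items(''),           0,  'empty string' );
        is( count_items('0'),          1,  'false-but-defined string' );
        is( count_items('0 but true'), 10, 'the classic "0 but true"' );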

        - another intruder with the mooring in the heart of the Perl

Re: Testing at the right granularity
by mojotoad (Monsignor) on Apr 19, 2005 at 18:53 UTC
    dragonchild has it right -- in my tests on tree structures, I will stringify the two trees in question and simply compare the strings. If your tree object collections do not have built-in stringification, then use something like Data::Dumper to stringify them.
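
    Something along these lines (a sketch; $got_tree and $expected_tree are assumed to come from your test setup):
    use Test::More tests => 1;
    use Data::Dumper;

    # Canonicalize the dumps so hash key ordering can't cause spurious failures
    local $Data::Dumper::Sortkeys = 1;
    local $Data::Dumper::Indent   = 1;

    is( Dumper($got_tree), Dumper($expected_tree),
        'whole tree matches the expected structure' );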

    Now if this test fails, you, as the developer, will no doubt be interested in such questions as whether all nodes are incorrect or only a specific subset of nodes, etc. So perhaps the node-level testing could be a fallback upon failure of the overall stringified comparison test.

    Cheers,
    Matt

Re: Testing at the right granularity
by dragonchild (Archbishop) on Apr 19, 2005 at 18:05 UTC
    Provide a way to stringify your tree, then test that stringification against some $expected. If this means you have to write a stringify function, that's not a bad thing. If your tests want to do it, then your users will, as well.
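
    For a tree like yours, the stringification can be a three-line recursion; for instance (value()/children() and the expected string are invented for illustration):
    # "value(child,child,...)" notation, depth-first
    sub stringify {
        my ($node) = @_;
        my @kids = $node->children;
        return $node->value unless @kids;
        return $node->value . '(' . join( ',', map { stringify($_) } @kids ) . ')';
    }

    is( stringify($tree), '10(4(1,3),6)', 'tree has the expected shape and sums' );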
      Your answer seems to imply that you suggest sticking to the "abbreviated" test flavour. Is this correct? And if so... why? I'm looking for motivations to take one path or another, so forgive me all these questions.

      Flavio (perl -e "print(scalar(reverse('ti.xittelop@oivalf')))")

      Don't fool yourself.
        Are you wondering how many ok() calls to make or how many scenarios to test? It doesn't matter how many ok() calls you make per test scenario so long as you test enough scenarios.
Re: Testing at the right granularity
by demerphq (Chancellor) on Apr 19, 2005 at 21:27 UTC

    I think it really depends on what the module does. The objective of your test suite should be that you can safely modify code and know whether the changes you have made affect existing functionality. Exactly how you reach that goal from a testing point of view really depends on the functionality. For some things you want a whole load of micro tests, one for each little possible change; tests like these, if they are well coupled to the functionality, make working out what went wrong easier. OTOH, having a bunch of coarse tests that WILL fail if things change will give you sufficient warning of a mistake to make enhancements a lot easier, without necessarily requiring the precision of the micro tests.

    In Data::Dump::Streamer I use a small number of fine-grained tests for simple things, with the rest tested via straight string comparisons of the output. Even a single one of the coarse-grained tests represents a whole whack of fine-grained tests.
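
    Roughly like this (a sketch; the structure under test and the known-good dump are placeholders):
    use Test::More tests => 1;
    use Data::Dump::Streamer;

    # One coarse-grained string comparison standing in for a pile of micro tests
    my $got = Dump( $structure )->Out;
    is( $got, $expected_dump, 'output matches the known-good dump' );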

    ---
    demerphq

Re: Testing at the right granularity
by jhourcle (Prior) on Apr 20, 2005 at 02:30 UTC

    I take the path of least work. If it's less work for me to produce more 'ok's, then that's the path I take. I'm not going to go out of my way to produce extra 'ok's just to run up the number -- but likewise, I'm not going to go out of my way to keep the number down, either.

    For instance, I have some routines which do transformations of query parameters for a search engine. Some of the functions handle the translation of spectrum parameters. I probably could have done something to write out one 'ok' for the whole thing, but it was easier for me to use a small helper, wave_equiv(), which generates 3 'ok's per call.
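
    Something of this rough shape, say (the field names here are invented for illustration; the real routine checks our actual spectrum parameters):
    # Hypothetical sketch: one helper emitting several ok()s per scenario
    sub wave_equiv {
        my ( $got, $expected, $label ) = @_;
        is( $got->{wavemin},  $expected->{wavemin},  "$label: minimum wavelength" );
        is( $got->{wavemax},  $expected->{wavemax},  "$label: maximum wavelength" );
        is( $got->{waveunit}, $expected->{waveunit}, "$label: wavelength units" );
    }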

    I have other tests that check whether a query is equivalent, which currently results in 28 'ok's per test case -- and that's bound to go up as we add more valid search parameters. All I care about is that I don't have any errors. I couldn't care less how many non-errors I have, so long as the number of errors stays at 0.

Specification-based testing
by audreyt (Hermit) on Apr 20, 2005 at 08:09 UTC
    The Haskell world uses test generators like QuickCheck, which generate huge numbers of random tests, so all you need to do is declare your program's properties, and QuickCheck will prove or disprove them.

    Test::LectroTest is QuickCheck ported over to Perl.
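
    A property there reads something like this (my_add() is a stand-in for whatever function you're actually testing):
    use Test::LectroTest;

    sub my_add { my ( $x, $y ) = @_; return $x + $y; }    # stand-in function

    # LectroTest generates many random ($x, $y) pairs and checks
    # that the property holds for all of them.
    Property {
        ##[ x <- Int, y <- Int ]##
        my_add( $x, $y ) == my_add( $y, $x );
    }, name => "my_add() is commutative";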

Re: Testing at the right granularity
by Anonymous Monk on Apr 20, 2005 at 09:08 UTC
    Don't care about the number of 'ok's. If you're running Test::Harness (which is usually what happens when you do 'make test'), you get three levels of granularity already: each individual test, each test file, and a total.

    Someone installing a module first looks at the total (and for that matter, that's what CPAN.pm looks at as well: 'install Module' only does the install if all tests pass). Only if something fails might he or she look into the details. But if everything is ok, the average user is unlikely to care whether you produced 2000 ok's, or 1 ok indicating there were 2000 passes.

    Just remember that not testing 1 case is worse than testing 100 cases twice.

Re: Testing at the right granularity
by petdance (Parson) on Apr 20, 2005 at 19:24 UTC
    As a consequence, the number of tests is likely to explode,

    On the other side, I feel like I'm cheating doing all these repetitive tests, and having lots of "ok..." where I should really have one,

    It sounds like you've adopted a mindset that looks at the raw numbers at the end of the run, instead of looking at the actual testing.

    Having a large number of tests doesn't mean anything.

    There's no "cheating" if the number of tests goes up.

    There's no sin in re-testing the same thing in multiple files. For example, there's nothing wrong, and indeed everything right, with checking the result of every single constructor call, as in

    my $foo = Foo::Bar->new();
    isa_ok( $foo, 'Foo::Bar' );
    in every single file. For that matter, there's nothing wrong, and indeed everything right, with checking the result for each line of text:
    while ( my $line = <> ) {
        my $foo = Foo::Bar->parse( $line );
        isa_ok( $foo, 'Foo::Bar' );
        ...
    }
    Don't worry about the test numbers. Keep throwing tests at it.

    xoxo,
    Andy

      My fear in the original post derived from a couple of sentences I had read. One was in a module's documentation, talking about the testing in a similar module ("there are only 5 tests..."); the other stated that "a few dozen tests should suffice".

      Unfortunately I cannot produce references, but these sentences stayed in my mind when I came to testing, and having a final count of around 200 for something that wasn't actually that great made me think I was kinda cheating.

      Anyway, I like the attitude you suggest: I'll stop worrying about these foolish things and keep going.

      Flavio (perl -e "print(scalar(reverse('ti.xittelop@oivalf')))")

      Don't fool yourself.
