PerlMonks  

Re: Toggling test plans with vim

by tphyahoo (Vicar)
on Aug 09, 2006 at 10:55 UTC (#566376)


in reply to Toggling test plans with vim

Can someone explain to me why counting your tests is such a big deal?

I always just use no_plan.


Re^2: Toggling test plans with vim
by Hofmator (Curate) on Aug 09, 2006 at 11:11 UTC
    Counting your tests protects you against unexpected dying in your test script. With no_plan, Test::Harness doesn't know how many tests to expect, so it can't tell you when you exited prematurely.

    -- Hofmator

      Dying exits with a non-zero exit status and so will be caught by Test::Harness. But it is possible, if unlikely, for a test to exit early in such a way that, without a plan, Test::Harness won't notice the problem.

      I always use a plan more because I want to notice when a different number of tests gets run, for example due to a bug I've introduced into the *.t file or a loop not running the expected number of iterations.

      Not that I really get this whole "ok 6" idea for a test suite. I'd rather output results, include the correct output, and assert that the generated output matches the correct output.

      - tye        
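The behaviour described above can be seen in a short sketch (the file name and test bodies are made up for illustration). Under no_plan, the "1..N" plan line is computed at END from however many tests actually ran, which is exactly why a premature exit can slip past the harness:

```perl
use strict;
use warnings;
use Test::More 'no_plan';

ok 1, 'first test';
ok 1, 'second test';

# Under no_plan, the "1..2" plan line is only printed at END,
# computed from however many tests ran. If exit(0) were called
# after the first ok(), Test::More would happily print "1..1"
# instead, and Test::Harness would see nothing wrong. With
# "use Test::More tests => 2;" the plan is printed up front,
# so a missing test is reported as a failure.
```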

Re^2: Toggling test plans with vim
by Ovid (Cardinal) on Aug 09, 2006 at 11:11 UTC

    It can protect you when your tests end unexpectedly. For example, let's say you're running 30 tests. At some point, another programmer on your team writes a bad function which calls exit. Your tests might end prematurely and having a test plan catches that. Or maybe you've just updated a CPAN module which calls exit when it shouldn't or finds some other way of terminating your tests early. Again, having a test count will protect you. It's quite possible that a test can terminate early without any tests failing.

    Another example is when someone does something like this (assumes Scalar::Util::looks_like_number() has been imported):

    foreach my $num ( $point->coordinates ) {
        ok looks_like_number($num), "... and $num should be a number";
    }

    If that returns a different number of coordinates from what you expect, having a test plan will catch that. Admittedly, this should actually look something like this:

    ok my @coordinates = $point->coordinates,
        'coordinates() should return the coordinates';
    is scalar @coordinates, 3,
        '... and it should return the correct number of them';
    foreach my $num ( @coordinates ) {
        ok looks_like_number($num), "... and each should be a number ($num)";
    }

    With that, because you're explicitly counting the number of coordinates, the test plan is not as necessary. However, as with the exit example, a test plan not only helps out when the code is less than perfect, it also helps out when the tests are less than perfect. It's such a small thing to update, and when it actually catches a problem, you'll be grateful.
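Fleshing out the defensive version above into a self-contained script (the Point package here is a made-up stand-in; in a real suite $point would come from the module under test):

```perl
use strict;
use warnings;
use Test::More tests => 5;
use Scalar::Util qw(looks_like_number);

# Made-up stand-in for whatever object the real suite exercises.
{
    package Point;
    sub new         { bless { coords => [ 1, 2, 3 ] }, shift }
    sub coordinates { @{ $_[0]->{coords} } }
}

my $point = Point->new;

ok my @coordinates = $point->coordinates,
    'coordinates() should return the coordinates';
is scalar @coordinates, 3,
    '... and it should return the correct number of them';
ok looks_like_number($_), "... and each should be a number ($_)"
    for @coordinates;
```

Note that the plan (tests => 5) and the explicit count (is scalar @coordinates, 3) guard different things: the count catches coordinates() returning the wrong number of values, while the plan catches the whole script ending early.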

    Some people argue, "yeah, but I never write code that stupid so this doesn't apply to me!". That's fine. If updating the test plan is too much work for them, so be it. Me, I know I make mistakes and I expect others to make mistakes. If we didn't make mistakes, we wouldn't need the tests in the first place.

    (Trivia quiz: if the above tests were in a CPAN module, how might those tests fail?)

    Cheers,
    Ovid

    New address of my CGI Course.

      I've never been caught out by an errant exit(), but rather by some kind of fault from bad XS. It looks like the script just exited, but in reality something really horrible occurred.

      ⠤⠤ ⠙⠊⠕⠞⠁⠇⠑⠧⠊

      Wouldn't having a tests_complete() function at the end of the test also solve (most of) these problems?
      -- gam3
      A picture is worth a thousand words, but takes 200K.
        That seems like a good idea to me. Why wouldn't this be a reasonable alternative to keeping track of a test count:
        use Test::More 'declare_done';
        # tests here....
        done_testing();
        I've declared that my plan is to declare when I'm done testing. At the end of the test script, I do just that. Now, if the script exits early and done_testing() hasn't been called, we know there was a problem, and we know where it is, because we know the last test that was run.
        Maybe you should suggest it to the maintainer(s) of Test::More?
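As a historical footnote to this suggestion: Test::More later grew exactly this feature, done_testing(), in version 0.88 (no special import needed). A minimal sketch, assuming Test::More 0.88 or newer:

```perl
use strict;
use warnings;
use Test::More;    # no up-front count needed

ok 1, 'first test';
ok 1, 'second test';

done_testing();    # emits "1..2"; if the script dies or exits
                   # before reaching this line, the missing plan
                   # line flags the premature end to the harness
```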
