PerlMonks  

Re^10: Self Testing Modules

by BrowserUk (Pope)
on Dec 20, 2005 at 12:24 UTC ( #518046 )


in reply to Re^9: Self Testing Modules
in thread Self Testing Modules

Assertion failed would be a lot more useful than Test Failed

If I were suggesting reducing the error text to a bare "Assertion failed", I might agree, but I am not.

Any Assert mechanism worth its salt is going to give you a full Carp::croak-style traceback, so you get not only which line in the module failed, but also:

  1. which line (not just the test number)
  2. in which test file
  3. an all-points-bulletin of the execution path between the two.

Which makes the need to have, and laboriously maintain, test numbers within your testcases redundant. This is all the standard stuff you expect to see from an Assert mechanism in any language.
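For instance, a minimal sketch of such an assert, built on nothing but the core Carp module (the `assert` name and interface here are my own invention, not any existing module's):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Carp;

# Bare-bones assert: on failure, Carp::confess dies with a full stack
# backtrace -- the file and line of the failing assertion, plus every
# call frame between the test file and the point of failure.
sub assert {
    my ( $ok, $desc ) = @_;
    confess "Assertion failed: $desc" unless $ok;
    return 1;
}

assert( 1 + 1 == 2, 'addition works' );    # passes silently
eval { assert( 2 * 2 == 5, 'oops' ) };
print $@;    # traceback: file, line, and full call path
```

No test numbers anywhere: the traceback carries the file and line instead.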

However, this being Perl--a dynamic language with full introspection, right down to the names of the variables involved in the assertion and even the text of the source code if wanted--the assert mechanism can provide even more useful information. It also saves the testcase writer from having to come up with textual descriptions for the tests, descriptions that result in the wild variations for the same situations I posted above.
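As a sketch of what that introspection could buy (the `ASSERT` sub here is hypothetical; a fuller version might use B::Deparse or PadWalker to show the variables themselves, which is omitted here), an assert can quote the source text of the failed expression, so the testcase needs no hand-written description at all:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Carp;

# Sketch: on failure, re-read the caller's source file and quote the
# exact line of the failed assertion, so no textual description of
# the test needs to be written or maintained.
sub ASSERT {
    my ($ok) = @_;
    return 1 if $ok;
    my ( undef, $file, $line ) = caller;
    open my $fh, '<', $file
        or confess "Assertion failed at $file line $line";
    my @src = <$fh>;
    chomp( my $text = $src[ $line - 1 ] );
    $text =~ s/^\s+//;
    confess "Assertion failed at $file line $line: $text";
}

my $answer = 42;
ASSERT( $answer == 42 );     # passes
# ASSERT( $answer == 41 );   # would die quoting: "ASSERT( $answer == 41 );"
```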

Your point about regression tests is well taken, but it also reinforces my point about different tests having different audiences. I mentioned 'developer mode' and 'user mode' controlling the volume and type of information that the test harness displays. There's no reason not to have an intermediate 'regression mode' also. In reality, these are all just 'levels of information', and provided the full information is available, suppressing some levels of it for different audiences is trivial--but it has to be there in the first place. The problem with the current toolset is that this information is either never available in the first place, or is silently dropped unasked.
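Those 'levels of information' might be sketched as nothing more than a numeric threshold (all names here--`TEST_VERBOSITY`, `diag`, the mode labels--are hypothetical, not part of any existing harness):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical verbosity levels: 0 = user, 1 = regression, 2 = developer.
# The full diagnostic is always *gathered*; the mode only controls how
# much of it is *shown* -- suppression is trivial once the data exists.
my %LEVEL = ( user => 0, regression => 1, developer => 2 );
my $mode  = $ENV{TEST_VERBOSITY} || 'user';

# Print the message only if the current mode is at or above its level;
# returns the message when shown, nothing when suppressed.
sub diag {
    my ( $level, $msg ) = @_;
    return unless $LEVEL{$mode} >= $LEVEL{$level};
    print "# $msg\n";
    return $msg;
}

diag( user       => 'test 2 failed' );
diag( regression => 'in t/foo.t at line 8' );
diag( developer  => 'got 4, expected 5; path: main -> Foo::bar' );
```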

Given the interests you describe when you get a failure from a module you are using, you want to see developer-type information. And that's fine: if the test harness has that ability, you can simply turn it on and get it. A 'just-a-user' user need never see it if they do not share your 'developer as user' interest.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^11: Self Testing Modules
by chromatic (Archbishop) on Dec 20, 2005 at 19:46 UTC
    Which makes the need to have, and laboriously maintain, test numbers within your testcases redundant.

    I can't think of the last time I had to maintain a single test number within my test suite. Which testing tools are you using?

      I built a patch for a couple of modules and found myself having to go through and renumber all the testcases after the position I inserted my test.

      More recently, I tried to track down the source of a failure reported as "Hmmm. Looks like test 27 failed", or similar, and found myself having to plod through the test file, counting manually, to work out which was test 27. Less than easy, as there were a lot of tests commented out.

      Either way, a line number of the failing testcase, and a traceback to the point of failure would be a lot more useful.


        I built a patch for a couple of modules and found myself having to go through and renumber all the testcases after the position I inserted my test.

        Then that module has a pretty darn odd way of doing test cases. I've never had to explicitly number a test. There isn't even an API for explicit test numbers in Test::Builder!

        More recently, I tried to track down the source of a failure reported as "Hmmm. Looks like test 27 failed", or similar, and found myself having to plod through the test file, counting manually, to work out which was test 27. Less than easy, as there were a lot of tests commented out.
        Either way, a line number of the failing testcase, and a traceback to the point of failure would be a lot more useful.

        Good test descriptions help with this and any standard Test::Builder based module will report the line number of the failing test case.

        #! /usr/bin/perl
        use strict;
        use warnings;

        use Test::More 'no_plan';

        is 1+1, 2, 'addition works';
        is 2*2, 5, 'oops - silly me';

        __END__
        # outputs
        ok 1 - addition works
        not ok 2 - oops - silly me
        #     Failed test 'oops - silly me'
        #     in foo.t at line 8.
        #          got: '4'
        #     expected: '5'
        1..2
