PerlMonks  

Re^9: Self Testing Modules

by demerphq (Chancellor)
on Dec 20, 2005 at 09:13 UTC ( #517994=note )


in reply to Re^8: Self Testing Modules
in thread Self Testing Modules

[Assertion failed] would be a lot more useful than [Test Failed]

I think this comment indicates that you don't have the same expectations of a test framework as I do. When I use a test framework I'm testing behaviour, such as the output of a function under certain circumstances. I'm not concerned with implementation; I'm concerned with output and side effects. It really doesn't matter to me that Module.pm(257) expected to get a SCALAR ref and got something different; what matters to me is how that affected the output, or whether it prevented something from happening that should have happened.

As an example, in DDS I have assertions in various places, as well as "insane case" handlers on the important branch points. However, knowing that one of these assertions failed doesn't help me nearly as much as knowing what input set led to the assertion failing. Because of the structure of DDS I can end up executing those assertions on almost any input, so knowing that the assertion failed simply tells me that "for some unknown input, something doesn't do the right thing". That doesn't help me nearly as much as knowing that "for a stated input, something doesn't do the right thing". When a test reports a failure I can easily identify the input code responsible, and then work towards the logic failure that caused it. Simply knowing that a logic failure occurred doesn't help me.
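The point above can be sketched as a data-driven test where each failure names the input that produced it, not just the internal assertion that fired. This is a minimal illustration: dump_structure and the cases are invented stand-ins, not DDS's real API.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

# A toy stand-in for the routine under test (hypothetical).
sub dump_structure {
    my ($ref) = @_;
    die "not a reference" unless ref $ref;
    return ref $ref;
}

# Each case records the input set alongside the expected result.
my @cases = (
    { input => [ 1, 2, 3 ],  want => 'ARRAY'  },
    { input => { a => 1 },   want => 'HASH'   },
    { input => \"scalar",    want => 'SCALAR' },
);

for my $case (@cases) {
    # The test name carries the input description, so a failure report
    # says *which* input misbehaved, not merely that an assertion fired.
    is( dump_structure( $case->{input} ), $case->{want},
        "dump_structure handles $case->{want} refs" );
}

done_testing();
```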

When I download a module and it fails its tests, the first thing I do is open up the test file to see what was happening; then I look at the relevant code in the module, and usually I can find a fix right away, with the definition of "fix" being that I can patch it so it passes its tests. I then continue on my way, satisfied that I know what's going on. On the other hand, when I install a module and it fails in use, I have to take considerable time to figure out why, because I don't know whether it's me doing something wrong or the module; furthermore, since I don't know exactly what it should be doing, it's much harder to fix.

So I guess, to me, in simple terms, regression tests are about why the code fails, not so much what code failed. Or in other words, they're about identifying initial states that lead to failure, not about the exact internal cause of it.

---
$world=~s/war/peace/g

Replies are listed 'Best First'.
Re^10: Self Testing Modules
by BrowserUk (Pope) on Dec 20, 2005 at 12:24 UTC
    Assertion failed would be a lot more useful than Test Failed

    If I were suggesting reducing the error text to a bare "Assertion failed", I might agree, but I am not.

    Any Assert mechanism worth its salt (e.g.) is going to give you a full Carp::croak-style traceback, so you not only get which line in the module failed, but also get

    1. which line (not just test no).
    2. in which test file
    3. an all-points bulletin of the execution path between the two.

    Which makes the need to have, and laboriously maintain, test numbers within your testcases redundant. This is all the standard stuff you expect to see from an Assert mechanism in any language.
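Such an assert is easy to sketch on top of the core Carp module, whose confess() dies with a complete stack trace. The assert and half functions here are illustrative inventions, not an existing API.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Carp ();

# A minimal assert: on failure, confess() reports the failing line in
# the module, the calling line in the test file, and every frame between.
sub assert {
    my ($ok, $msg) = @_;
    Carp::confess("Assertion failed: $msg") unless $ok;
    return 1;
}

# A toy routine guarded by the assert (hypothetical).
sub half {
    my ($n) = @_;
    assert( $n % 2 == 0, "expected an even number, got $n" );
    return $n / 2;
}

print half(8), "\n";    # prints 4
eval { half(7) };       # trips the assertion; eval traps the die
print $@ =~ /Assertion failed/ ? "caught traceback\n" : "no error\n";
```

Because confess() carries the whole call chain, no test number is needed to locate the failure: the file and line of both the assert and its caller are in the message.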

    However, this being Perl, a dynamic language with full introspection right down to the names of the variables involved in the assertion, and even the text of the source code if wanted, the assert mechanism can provide even more useful information. It can also save the testcase writer from having to come up with textual descriptions for the tests, descriptions that result in the wild variations for the same situations I posted above.

    Your point about regression tests is well taken, but it also reinforces my point about different tests having different audiences. I mentioned 'developer mode' and 'user mode' controlling the volume and type of information that the test harness displays. There's no reason not to have an intermediate 'regression mode' also. In reality, these are all just levels of information, and provided the full information is available, suppressing some levels of it for different audiences is trivial--but it has to be there in the first place. The problem with the current toolset is that this information is either never available in the first place or silently dropped unasked.
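The 'levels of information' idea could be sketched as a single reporter that always has the full detail but filters it per audience. The TEST_LEVEL variable and the mode names are assumptions for illustration, not an existing convention.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map each audience to how much detail it wants to see (illustrative names).
my %level = ( user => 0, regression => 1, developer => 2 );
my $mode  = $level{ $ENV{TEST_LEVEL} // 'user' } // 0;

# Full information is always *available*; lower modes merely suppress it.
sub report {
    my ($min_level, $text) = @_;
    print $text, "\n" if $mode >= $min_level;
}

report( 0, "not ok 27 - round-trip a coderef" );              # everyone
report( 1, "#   failing input: sub { 42 }" );                 # regression mode
report( 2, "#   trace: Module.pm line 257 <- t/basic.t 31" ); # developer mode
```

Run plainly it prints only the summary line; with TEST_LEVEL=developer it prints all three, without the producer of the information changing at all.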

    The way you describe your interests when you get a failure from a module you are using, you want to see developer-type information. And that's okay: if the test harness has that ability, you can simply turn it on and get it. A 'just-a-user' user need never see it if they don't share your 'developer as user' interest.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Which makes the need to have, and laboriously maintain, test numbers within your testcases redundant.

      I can't think of the last time I had to maintain a single test number within my test suite. Which testing tools are you using?

        I built a patch for a couple of modules and found myself having to go through and renumber all the testcases after the position where I inserted my test.

        More recently I tried to track down the source of a failure reported as "Hmmm. Looks like test 27 failed", or similar, and found myself having to plod through the test file, counting manually, to work out which was test 27. Less than easy, as there were a lot of tests commented out.

        Either way, a line number of the failing testcase, and a traceback to the point of failure would be a lot more useful.
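For comparison, modern Test::More (0.88 and later) already addresses part of this: a failing test reports its file and line, and done_testing() removes the need for a fixed numbered plan. A minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

is( 2 + 2, 4, 'arithmetic still works' );
# If the comparison above failed, Test::More would print, e.g.:
#   not ok 1 - arithmetic still works
#   #   Failed test 'arithmetic still works'
#   #   at t/example.t line 7.
# so the failing testcase is located by file and line, not by counting.

done_testing();    # replaces a fixed 'tests => N' plan entirely
```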

