This isn't really a response to the above post, but more a follow-up to your previous questions about "my problem" with the Test::* modules, slightly prompted by your statement above:

I'm in favor of reducing duplication in test code.

I think a good part of my problem is that I don't see what I would consider proper segregation between the types (or purposes) of tests within typical test suites that utilise the Test::* framework.

A few examples, which probably won't be well organised. I may borrow specific examples from particular modules, but that's not a critique of those modules or their authors, just an attempt to ground the examples in some reality.

To start with, let's imagine we are writing an OO module.

  • Starting with the tests that get shipped with modules and run pre- and post-installation.

    What should these do?

    I'll assert that these should be minimal. And that most modules test too much and the wrong things at this time.

    Taking these two test types in reverse order:

    1. Post-installation tests.

      My assertion is that these should do just three things; a minimal sketch follows the list.

      1. Check that the installation has succeeded:

        The module can be found and loaded along with all its compile-time dependencies.

        To this extent, just use-ing the module in the normal way would, IMO, be better than use_ok(). The standard error text reports a lot of very useful information that is simply discarded by the Test::* wrappers.

      2. Check that an instance of the object can be instantiated given known good instantiation parameters.

        The purpose is to test the installation--not unit or integration test the module. See later.

      3. Check that the instance is correctly destroyed.

        Any destructor code gets called correctly when an instance goes out of scope with no remaining references.

        I'm not sure that this really deserves to be here--nor was I sure how one would check it in Perl, though the weaken trick in the sketch below is one way--but it is typical to test this at this time.

      And that should be it.
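
      By way of illustration, here is a minimal sketch of such a post-installation test. My::Module and its new() parameters are hypothetical, and Scalar::Util's weaken is just one way of observing destruction:

        use strict;
        use warnings;
        use Scalar::Util 'weaken';

        # 1. The module loads, along with all its compile-time
        #    dependencies. A plain `use` preserves the standard error
        #    text on failure, where use_ok() would swallow it.
        use My::Module;

        # 2. An instance can be created from known-good parameters.
        my $obj = My::Module->new( foo => 1 );
        die "instantiation failed\n" unless defined $obj;

        # 3. The instance is destroyed when the last strong reference
        #    goes away.
        my $watch = $obj;
        weaken $watch;              # $watch won't keep the object alive
        undef $obj;                 # drop the last strong reference
        die "instance not destroyed\n" if defined $watch;

        print "ok - installation checks passed\n";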

    2. Pre-installation tests.

      The only things that should be run at this point are what I would class as simple environmental checks (see the sketch after this list). These might include such things as:

      • The version of Perl available.
      • The availability and versions of dependent modules.
      • The type and version of the OS if this is important.
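
      For example, with ExtUtils::MakeMaker most of that can simply be declared rather than tested. The module name, version numbers, and the OS check here are hypothetical:

        # Makefile.PL
        use 5.006;                  # minimum perl; older perls die here
        use strict;
        use ExtUtils::MakeMaker;

        # Only if the OS genuinely matters; "OS unsupported" is the
        # conventional message for an intentional bail-out.
        die "OS unsupported\n" if $^O eq 'MSWin32';

        WriteMakefile(
            NAME         => 'My::Module',
            VERSION_FROM => 'lib/My/Module.pm',
            PREREQ_PM    => {
                'LWP' => '5.76',    # availability + minimum version
            },
        );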

    What should these not do?

    I assert: pretty much anything else. That is, they should not test

    1. the module's parameter handling.
    2. nor its error/exception handling.
    3. nor its functionality.
    4. nor any of the above for its dependent modules.
    5. nor any of the above for Perl's built-ins.

    Items 1 through 3 are all tests that should have been run and passed prior to the module being shipped--by the author. It doesn't make sense to re-run these tests at installation time, unless we are checking that electronic pixies haven't come along and changed the code since it was shipped.

    That's slightly complicated by the fact that authors will rarely be in a position to validate their modules in all the environments they will run in, but Perl's own build-time test suite should (and does) test most factors that vary across different OSs and compilers. If the Perl installation has passed its test suite, then there is little purpose in re-checking these things for every module that is installed.

    For example: if a module has a dependency upon LWP to fetch URLs from the Internet, there is no reason to run tests that exercise LWP on every installation. LWP is ubiquitous and has its own test suite. Modules that use LWP should only need to ensure that it is available, accessible, and of a sufficiently high version for their needs. Everything beyond that is duplication. The module's system test should go out to a live Internet site and fetch data in order to do an end-to-end test, but that should only need to be run by the author prior to shipping.
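
    That availability-and-version check need be no more than a versioned use (5.76 is an arbitrary example minimum):

      use LWP 5.76;   # dies at compile time if LWP is missing or too old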

    I don't have a problem with shipping the full test suite with the module. In the event of failures that cannot be explained through other means, having the user run the test suite in their environment and feed the results back to the author makes good sense. But running the full test suite on every installation doesn't.

Different test phases each have a different target audience

And that brings me back to my main problem with the Test::* framework. Its main function appears to be to capture, collate, and present, in potted form, the results of running the entire test suite; but at whom is this aimed?

As a module user, I am only concerned with "Did it install correctly?".

As a module developer, I am only concerned with "Did my last change work?" and "Did it break anything else?"; and if either answer is bad, "Where did the failure occur?"--and preferably "Take me there!".

And to my mind, the summary statistics and other output presented by Test::Harness serve neither target audience. They are 'more than I needed to know' as a module user, and 'not what I need to know' as a developer.

About the only people they serve directly are groups like the Phalanx project (please stop using the cutesy 'Kwalitee'--it sounds like a '70s marketing slogan, right up there in the 'grates like fingernails on a blackboard' stakes with "Kwik-e-mart", "Kwik print" and "Magick" of all forms)--and possibly corporate QA/third-party code-approval departments.

I have misgivings about the need to put my unit tests in a separate file in order to use the Test::* modules. It creates couplings where none existed. Codependent development across files, below the level of a specified and published interface, is bad. It is clumsy to code and use. It means that errors are reported in terms of the file in which they are detected, rather than the file--and line--at which they occurred. That makes tracking a failure back to its point of origin (at least) a two-stage affair. In-line assertions that can be dis/en-abled via a switch on the command line, or in the environment, serve the developer so much better at the unit-test level; a sketch follows.
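
To illustrate, here is a minimal sketch of the kind of in-line assertion I mean. My::Assert, assert(), and the MY_ASSERTIONS environment variable are hypothetical names, not an existing module:

  package My::Assert;
  use strict;
  use warnings;
  use Carp 'confess';
  use Exporter 'import';
  our @EXPORT_OK = ( 'assert' );

  # A compile-time constant, so when assertions are disabled the
  # check folds away to (nearly) nothing in production runs.
  use constant ENABLED => $ENV{MY_ASSERTIONS} ? 1 : 0;

  sub assert {
      my( $ok, $msg ) = @_;
      return 1 if !ENABLED or $ok;
      my( undef, $file, $line ) = caller;
      confess "Assertion failed: $file($line): $msg";
  }

  1;

A call site then reports its own file and line, plus the offending variable's contents:

  use My::Assert 'assert';
  my $uri = get_uri();    # hypothetical
  assert( !ref( $uri ),
      "\$uri(" . ( ref( $uri ) || $uri ) . ") should be a plain scalar" );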

And finally, I have misgivings about having to reduce all my tests to boolean ok/not-ok calls. The only purpose this seems to serve is the accumulation of statistics, in which I see little benefit anyway. I realise that extra parameters are available to log a textual indication of the cause of the failure, but again: whom do these serve?

Not the user, they don't care why it failed, only that it did.

And not the developer. Having to translate stuff like:

  • Did all the bytes get saved?
  • URI should be a string, not an object.
  • URI should be string, not an object
  • URI shouldn't be an object
  • URI shouldn't be an object
  • URI should be a plain scalar, not an object
  • URI shouldn't be an object
  • Find one form, please
  • Should have five elements

back to the file and line number of the failing code; what the inputs were that caused the failures; why they failed; and what to do about it; just does not seem logical to me. Most of those should just be asserts that live in the modules, stay permanently enabled, and report the file & line number (and preferably the parameters of the assertion, in terms of both the variables' names and their contents) when they fire. Test::Harness would then capture that information and present either "An assertion failure occurred" when in 'user mode', or the full assertion text, including all the information, when run in 'developer mode'. That is,

Assertion failed: Module.pm(257): "$var(My::Module=HASH(0xdeadbeef)) != 'SCALAR'"

would be a lot more useful than

t\test03: test 1 of 7 failed: "URI should be a plain scalar, not an object"

I'm grateful to eyepopslikeamosquito (who started this sub-thread) for mentioning the Damian's module Smart::Comments. From my quick appraisal so far, I think that it is likely to become a permanent addition to my Perl file template. It appears to be similar in operation to another module that I have mentioned, Devel::StealthDebug, but is possibly a better implementation. It seems to me that it would make an ideal way of incorporating unit tests into code in a way that is transparent during production runs, but easily enabled for testing. With the addition of a Test::Harness-compatible glue module that simply enables the tests, captures/analyses the output, and converts it to boolean ok/not-ok checks, it might be close to what I've been looking for.
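
A minimal sketch of that idea, using two features as I read them from the Smart::Comments docs--the -ENV switch, which leaves the comments inert unless the Smart_Comments environment variable is set, and the ### assert: directive. The fetch() routine is hypothetical:

  use strict;
  use warnings;
  use Smart::Comments -ENV;

  sub fetch {
      my( $uri ) = @_;

      ### assert: !ref $uri

      # ... fetch and return the content ...
      return "content for $uri";
  }

  fetch( 'http://example.com/' );

  # Production run (comments inert):  perl fetch.pl
  # Test run (assertions live):       Smart_Comments=1 perl fetch.pl

The glue module would then need to do little more than set Smart_Comments, run the file, and translate a clean exit into ok and a die into not ok.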


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

In reply to Re^8: Self Testing Modules by BrowserUk
in thread Self Testing Modules by Sheol
