How To Test

by John M. Dlugosz (Monsignor)
on May 15, 2011 at 04:13 UTC

John M. Dlugosz has asked for the wisdom of the Perl Monks concerning the following question:

Trying to make my CPAN module do all the up-to-date and "correct" stuff, I've learned that some (many) module boilerplates and defaults are not correct. Someone pointed me to this article, which mentions things like “running of tests under AUTOMATED_TESTING” with the expectation that the reader already knows what that means. There seems to be a community of testers that share war stories and develop Best Practices and invent new modules.

So how does one find that community? I've read the Test::Tutorial documents, Test::Simple and Test::More, and gleaned some things from the test files generated by module-starter (which I'm told are incorrect). But that's just the tip of the iceberg, it would seem.

Where is the documentation on the details of these environment variables and other settings, how the tools and CPAN use testing, etc.? Is there a tutorial not just for writing a single test case, but for setting up the module correctly?

A more specific question: I got a CPAN Testers report back concerning a failure on Windows, because I'm testing that a function dies where it is documented to throw, and furthermore that Carp is giving information relating to the correct caller (the call at the user-facing API). Since the file names are different on Windows, this detail was wrong. But that's no reason for someone to reject the module if a dependency pulls it in. The talk of "advanced testing" is about module details and POD, but it would seem that some picky details that can be tested should be "warnings", not "fatal" failures or part of the critical feature set.

I have "ok" and "not ok". What about "somewhat OK"?

In this particular case, I'd like CPAN testing on all versions and platforms to tell me whether the warning exists, so I don't want to skip the test. But a failure there should not mark the module as being bad on that platform/version.
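One mechanism that comes close to "somewhat OK" is Test::More's TODO block: a failing test inside it still appears in the TAP output (as "not ok ... # TODO"), but the harness does not count it against the suite. A minimal sketch, where My::Module, its frobnicate(), and the test file name basic.t are hypothetical stand-ins:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::Module;    # hypothetical module under test

    # The hard requirement: the function must die on a bad value.
    eval { My::Module::frobnicate( size => -1 ) };
    like( $@, qr/\bsize\b/, 'dies and names the offending parameter' );

    TODO: {
        # A failure here is still reported to CPAN Testers, but it does
        # not mark the distribution as broken on that platform.
        local our $TODO = 'Carp file/line details differ on Windows';
        like( $@, qr/ at .*\bbasic\.t line \d+/,
            'carp points at the user-facing call site in the test file' );
    }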

Replies are listed 'Best First'.
Re: How To Test
by eyepopslikeamosquito (Archbishop) on May 15, 2011 at 08:47 UTC

    There seems to be a community of testers that share war stories and develop Best Practices and invent new modules. So how does one find that community? Where is the documentation on the details of these environment variables and other settings, how the tools and CPAN use testing, etc.?
    For more information on this community of testers, see qa.perl.org and wiki.cpantesters.org. A lot of their work is done on the perl-qa mailing list and at annual perl qa hackathons.

    Like you, I struggled to find definitive references for the latest innovations and conventions the (very active) perl-qa community was concocting. I wrote a short node, Perl CPAN test metadata, a while back which might be of use, especially "The Oslo Consensus" section and References. See also RFC: How to Release Modules on CPAN in 2011.

Re: How To Test
by Corion (Patriarch) on May 15, 2011 at 07:59 UTC

    Simply don't use weirdo testing ideas like Test::NoWarnings. This module only ever causes grief when a harmless warning pops up.

    You can make testing as hard on yourself as you like. Include no tests, or a test.pl that only outputs

        1..1
        ok 1
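
    A test.pl producing exactly that output (a one-test plan that always passes) is just two print statements:

        #!/usr/bin/perl
        # test.pl - the smallest possible "suite": plan one test, pass it
        print "1..1\n";
        print "ok 1\n";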

    You are quite vague about your situation, code-wise. There are no "somewhat OK" tests. Write all your tests so that they pass, fail, or skip if inapplicable. If you can't test a function that way, either skip it completely, split it up, or make it parametrizable.

    As an example, I'm currently writing Win32::Wlan. This module is ugly to test, because it tries to fetch information from the Wlan surroundings. This means that I will skip tests in many situations (a sketch of the skip logic follows the lists below):

    • Not on Windows
    • No Wlan API available (it starts with XP SP3 or so)

    Even if these conditions pass, the subsequent tests are still split up into two parts:

    1. Tests that output diag information about the currently seen networks. I use these tests locally for manual inspection of the results.
    2. Tests with canned results for the API calls. I use these tests to get reproducible results even when the environment changes.
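
    A sketch of that skip logic, assuming a hypothetical wlan_available() probe (the real Win32::Wlan interface may differ):

        use strict;
        use warnings;
        use Test::More;

        # Guard 1: the module is Windows-only.
        plan skip_all => 'Win32::Wlan only works on Windows'
            if $^O ne 'MSWin32';

        # Guard 2: the Wlan API itself may be absent (pre-XP SP3, say).
        # wlan_available() is a hypothetical probe, not the documented API.
        my $have_wlan = eval { require Win32::Wlan; Win32::Wlan::wlan_available() };
        plan skip_all => 'No Wlan API available on this system'
            unless $have_wlan;

        plan tests => 1;
        ok( $have_wlan, 'Wlan API is reachable' );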
      You are quite vague about your situation, code-wise. There are no "somewhat OK" tests. Write all your tests so that they pass, fail, or skip if inapplicable.
      I didn't think I was vague.

      Function throws an exception if a particular parameter is a particular bad value: check. Exception string mentions the specific parameter in question: check. Exception carps at the correct user-facing call: fail.

      The last thing will not prevent the module from being used, and should not fail an installation. It should not get the module flagged as broken during CPAN testing. But I do want to find out about it during CPAN testing, as I want to exploit its ability to test on all platforms and versions.

      So, what "situation" exists that I can use to determine whether or not the test runs?

        You can't have both. A test either fails or passes. If it fails during CPAN testing, it will get marked as "FAIL". If it fails elsewhere, it will get marked as "FAIL". If it passes, it will get marked as "PASS". If you have a "FAIL", you will likely be sent mail by the CPAN testers, so if you want to learn about these things, that will happen.

        If you are testing for things that are not relevant, you will paint yourself into a corner where users can't automatically install your module with passing tests. If you think the test is relevant, then include it. If you don't think it is relevant, skip it.

        If you want to run long tests, or tests that just check the integrity of your documentation, add these as "author tests" under the xt/ directory of your distribution. I'm not sure how these tests are run by Makefile.PL / Build.PL - I would assume that you need to set some environment variable for these to be run.
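
        One common convention (mirroring the canonical Test::Pod snippet) is to guard such author tests with the RELEASE_TESTING environment variable:

            # xt/pod.t - runs only when the author asks for it
            use strict;
            use warnings;
            use Test::More;

            plan skip_all => 'Author test: set RELEASE_TESTING=1 to run'
                unless $ENV{RELEASE_TESTING};

            eval "use Test::Pod 1.00";
            plan skip_all => 'Test::Pod 1.00 required for testing POD' if $@;

            all_pod_files_ok();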

Re: How To Test
by MidLifeXis (Monsignor) on May 17, 2011 at 13:41 UTC

    I have started using AUTOMATED_TESTING and RELEASE_TESTING variables.

    AUTOMATED_TESTING is set any time I have an automated process doing my build -- places where prompting or other environmental considerations cannot be assumed.
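
    For instance, anything that would normally prompt can fall back to a default when AUTOMATED_TESTING is set (prompt() here is the one exported by ExtUtils::MakeMaker):

        use strict;
        use warnings;
        use ExtUtils::MakeMaker qw(prompt);

        # No human is watching an automated build, so take the safe
        # default silently instead of blocking on STDIN.
        my $run_live = $ENV{AUTOMATED_TESTING}
            ? 0
            : ( prompt( 'Run the live network tests?', 'n' ) =~ /^y/i ? 1 : 0 );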

    RELEASE_TESTING cranks up my testing level to include kwalitee tests (POD, tidy, standards checks, etc.). These get run prior to a release (well, as part of the release), before the code is shipped off (on passage of all tests) to the distribution system. They are also turned on periodically during development to ensure that I have not broken anything.

    I also make certain tests optional (using require and skip/skip_all) based on whether the current environment has a specific module installed (my development environment may have modules installed for development and testing that are not, or should not be, in production, for example).
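
    For instance (Test::Differences here stands in for any development-only module):

        use strict;
        use warnings;
        use Test::More;

        # Skip the whole file if the optional module is absent here.
        eval { require Test::Differences; Test::Differences->import; 1 }
            or plan skip_all => 'Test::Differences not installed';

        plan tests => 1;
        eq_or_diff( [ 1, 2, 3 ], [ 1, 2, 3 ], 'deep structures match' );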

    Anything that I can automate is a good thing :-)

    If you are targeting cross platform, you need to ensure that your tests are also cross platform. In the case where a test fails on Windows due to path differences, I would argue that your test is wrong. You can build your path names in a portable way in your tests just as well as you can in your modules.
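
    For example, File::Spec assembles the separator for whatever OS the test runs on, so a path-sensitive check need not hard-code '/':

        use strict;
        use warnings;
        use Test::More tests => 1;
        use File::Spec;

        # 't/data/config.ini' on Unix, 't\data\config.ini' on Windows.
        my $expected = File::Spec->catfile( 't', 'data', 'config.ini' );

        # $message stands in for output captured from the code under test.
        my $message = "config not found at $expected";
        like( $message, qr/\Q$expected\E/, 'message names the file portably' );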

    Take this all with a grain of salt, as my exposure to testing is not formal, but based on lots of reading, asking questions, and "fixing" complaints I have with the behaviors of tests I have had the pleasure (take that as you will) of using.

    --MidLifeXis

Re: How To Test
by sundialsvc4 (Abbot) on May 16, 2011 at 13:03 UTC

    Bear in mind also that when people are installing your module on whatever environment it may be, they really don’t know much about how your module does what it does. They just want: “It Just Works.™” If your module is adversely affected by “something that is brought in,” then that will happen to the end-user also. Whereas the tester might know what to do, and of course you know what to do and why it’s happening, it might well be a complete (and incomprehensible) show-stopper for the Gentle Reader.

    The testers subject your module to all kinds of real-world environments, including ones that you don’t personally have. It is, to some extent, your call what to do in all of the use-cases they draw your attention to.
