http://www.perlmonks.org?node_id=368518

samtregar has asked for the wisdom of the Perl Monks concerning the following question:

It's no secret that I use Test::More's no_plan and hate counting my tests. The only reason not to use no_plan, as far as I can divine, is the worry that your tests might stop part way through without producing a non-zero exit code. For example, the module being tested might have an exit(0) hidden deep in its bowels. I'd like to think that nothing I test would be so rude, but there's no easy way to guarantee it.

I'd like to solve this problem once and for all by writing Test::Finished. This module would add an extra test to the test run which would pass if the test script ran to completion and fail otherwise. The thing is, I'm not sure how to write it, or even if it's possible.

One possibility is to make Test::Finished a source filter. It could add code to the end of the script which sets a $FINISHED variable. Test::Finished could install an END{} block which checks $FINISHED and fail()s if it's not set.
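
A bare-bones, untested sketch of the END{}/$FINISHED half of that idea (Test::Finished doesn't exist yet, so every name here is hypothetical; the source filter's only job would be to append the final assignment to the script automatically):

package Test::Finished;
use strict;
use warnings;
use Test::Builder;

our $FINISHED = 0;

END {
    # Load this after Test::More so this END block runs before
    # Test::Builder's and the extra test is counted in the plan.
    Test::Builder->new->ok( $FINISHED, "test script ran to completion" );
}

1;

# ...and the last statement of the .t file (added by the filter) would be:
#   $Test::Finished::FINISHED = 1;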

Another possibility would be to override CORE::exit() and anything else that can exit silently (are there others?). This wouldn't work if the code being tested wants to override exit(), but that's fairly unlikely.

Is there a better way? Or perhaps this problem is already solved in a way I don't know about? Enlighten me, Monks.

-sam

Replies are listed 'Best First'.
Re: How can I write Test::Finished? (auto count)
by tye (Sage) on Jun 21, 2004 at 19:28 UTC

    exec is enough to make me think this is a bad idea. And there are other reasons why this wouldn't really address the "problem".

    Instead write Test::AutoPlan that you use like:

    perl -MTest::AutoPlan -e0 t/*

    and it would run each of your t/* files and then modify them to record the current plan count, notifying you how the counts had changed.

    Instead of writing your t/* files like:

    use Test::More ( ... ); ...

    you'd write them like:

    use Test::AutoPlan qw( Test::More ... ); ...

    and perl -MTest::AutoPlan t/* would change that to

    use Test::AutoPlan 23 qw( Test::More ... ); ...

    and you could even make it so "make plan" updates the test count plans.

    The Test::AutoPlan code goes something like this:

    my $plan;
    sub VERSION { $plan = $_[1]; }

    sub import {
        my $self = shift(@_);
        goto &UpdateTests   if ! @_;
        my $module = shift(@_);
        my @plan = "no_plan";
        if( defined($plan) ) {
            @plan = ( tests => $plan );
        } elsif( "Test::Simple" eq $module ) {
            @plan = ( tests => 0 );
        }
        unshift @_, $module, @plan;
        my $import = $module . "::import";
        undef $plan;
        goto &$import;
    }

    sub UpdateTests {
        for my $test ( @ARGV ) {
            my $plan = RunTestsAndCount( $test );
            @ARGV = $test;
            $^I = ".old";
            while( <> ) {
                s{
                    ^( [\w\s]* (?<![\w:]) Test::AutoPlan )
                    (\s+\d+)?
                }{ $1 . " " . $plan }ex;
                print;
            }
        }
    }

    The first argument could be Test::Simple instead of Test::More (or whatever other modules you decide to support).

    - tye        

      I might have misunderstood, but I think this would require me to run

         perl -MTest::AutoPlan t/*

      every time I add a new test. That's definitely an improvement over having to manually adjust a magic number, but it's still a long way from the no-work solution I'm looking for.

      -sam

        Well, you could use it how you like. I'd run "make plan" every time I released a new version of the module. You could even make "make dist" run "make plan" (for your modules).

        If I'm about to make a lot of incremental changes to some test plans, then I could remove the number from the "use Test::AutoPlan" line to revert to plan-less testing.

        I'd think even you would do "make test" every time you added new tests so I'm not sure why replacing that with "make plan; make test" (or perhaps just "make plan test") is such a hardship for you. I'd think the "work" is doing the counting. If you can't handle typing those few extra characters, then I don't know how you managed to produce a module in the first place. (:

        [ Update: Actually, "make plan" should tell you if any tests failed so you could just always use "make plan" instead of "make test" and fool people into thinking that you use plans when you never do. If you have "make plan" be smart enough to not update test files if they already report the correct plan number, then this seems like it might even be little enough work / change for you (if I can be so presumptuous as to make such a guess). ]

        But it sounds like you don't care at all about the types of failures that plans are meant to catch, so perhaps you should just add:

        ok( 1, "NO, I don't *WANT* a plan" ) if rand() < 0.5;

        to the end of all of your test files to prevent those annoying patches from coming in.

        - tye        

      Hear hear!

      ------
      We are the carpenters and bricklayers of the Information Age.

      Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose

      I shouldn't have to say this, but any code, unless otherwise stated, is untested

Re: How can I write Test::Finished?
by adrianh (Chancellor) on Jun 21, 2004 at 18:42 UTC
    The only reason not to, as far as I can divine, is the worry that your tests might stop part way through without producing a non-zero exit code.

    It can also be a useful sanity check when you know the number of tests. For example, a plan catches it when you accidentally cut and forget to paste, or when the number of tests is determined at runtime by a dodgy piece of logic.

    I do think the file is the wrong level of granularity though, which is why I tend to have them at the method level (via Test::Class) or block level (via Test::Block) if I do use them.
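
    For anyone unfamiliar with the block-level flavour, the idea (sketched from memory of Test::Block's interface, so treat the details as an approximation rather than a reference) is roughly:

    use Test::More 'no_plan';
    use Test::Block qw($Plan);

    {
        local $Plan = 2;               # this block promises exactly two tests
        pass('first test in the block');
        pass('second test in the block');
    }   # Test::Block complains here if the count didn't match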

    Overriding exit() sounds like the better choice. Sticking something at the end of the script could cause problems with test scripts that have multiple or non-obvious exit points. You'd have to be careful about things like Test::Builder's skip_all.

    The only other things that I can think of that would exit cleanly would be POSIX::_exit or a piece of XS code. However AFAIK neither of those would run the Test::Builder END block so Test::Harness would pick up on the lack of a test footer.

    That said, I'd just not sweat it: use no_plan and don't worry.

      exec could exit cleanly.


      ---
      demerphq

        First they ignore you, then they laugh at you, then they fight you, then you win.
        -- Gandhi


Re: How can I write Test::Finished?
by Zaxo (Archbishop) on Jun 21, 2004 at 19:07 UTC

    You can do the test in a child process. Then you will have SIGCHLD to tell you when the child goes away. Since you may need a pipe to return the result of the test code, you can also have SIGPIPE to play with. If you set a timeout with alarm, you can check that the child is still running with kill 0, $cpid.

    That ought to make a pretty flexible bag of tricks for testing really suspect code. Mostly useful on POSIX systems.

    I'm not sure how useful the system exit status will be, but it can be collected by the signal handlers. Somebody who would exit from a module method is as likely as not to exit(0).
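
    A rough, untested outline of that arrangement (the test file name and the 60-second cutoff are invented, and it polls with sleep and kill 0 rather than alarm, purely for brevity):

    use strict;
    use warnings;
    use POSIX ':sys_wait_h';

    my $done = 0;
    $SIG{CHLD} = sub { $done = 1 while waitpid(-1, WNOHANG) > 0 };

    defined( my $cpid = fork() ) or die "fork failed: $!";
    if ( $cpid == 0 ) {
        exec $^X, 't/suspect.t' or die "exec failed: $!";   # child runs the tests
    }

    my $waited = 0;
    until ($done) {
        sleep 1;                                 # SIGCHLD interrupts this early
        die "child still running after ${waited}s, giving up\n"
            if ++$waited > 60 and kill( 0, $cpid );
    }
    printf "child exit status: %d\n", $? >> 8;   # set by waitpid in the handler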

    After Compline,
    Zaxo

      Ok, so that will tell me when the child exited and what its return code was, but how does that help me distinguish between a rogue exit(0) and a complete test run?

      -sam

        untested code:
        if (open CHILD, "-|") {
            while (<CHILD>) {
                if (/^SUPERSECRETSTRING\n/) {
                    exit 0;
                }
                print;
            }
            close CHILD;        # to update $?
            exit $? >> 8;       # oops, child exits, so we do too
        }
        ... rest of tests here ...
        print "SUPERSECRETSTRING\n";   # probably as an END block or DESTROY method

        -- Randal L. Schwartz, Perl hacker
        Be sure to read my standard disclaimer if this is a reply.

        The child can send, say, SIGUSR1 to the parent just before a normal exit. Or if you have a pipe, print "Done" over it. Sadly, there is little you can do that a really malicious rogue can't imitate.

        After Compline,
        Zaxo

Re: How can I write Test::Finished?
by dws (Chancellor) on Jun 22, 2004 at 05:38 UTC

    I used to hate counting tests. Now I only dislike it mildly. The benefit of catching oopses has thus far outweighed the minor nuisance of keeping the numbers current. It's easy enough to run a .t by hand, quickly note how many tests it expects, note the number of the last test, and then edit the file to adjust the number. Two of my coworkers have an emacs macro that does this automagically.

    One side-benefit of having explicit counts is that you can periodically traverse the code base to count tests (using File::Find or File::Find::Rule, and a regex), then chart the number over time.
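
    For instance, a quick census along those lines might look like this (untested, and the regex only spots literal "tests => N" plans):

    use strict;
    use warnings;
    use File::Find::Rule;

    my $total = 0;
    for my $file ( File::Find::Rule->file->name('*.t')->in('t') ) {
        open my $fh, '<', $file or die "can't open $file: $!";
        while (<$fh>) {
            $total += $1 if /\btests\s*=>\s*(\d+)/;   # only literal plans
        }
    }
    print scalar localtime, ": $total planned tests\n";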

Re: How can I write Test::Finished?
by demerphq (Chancellor) on Jun 22, 2004 at 10:53 UTC

    I have to admit I don't understand why you would ever have to count tests. When I'm building tests I just add the new ones to the test files, run the tests, note how many tests it whines about being unexpected, and then add that number to the plan. Never seemed like a lot of stress to me. :-)

    Oh, I also usually run my test suites during development without using the harness so I can see everything. In fact, I've written a test framework in DDS that will automatically generate a passing test from a failing test, so all I have to do is review the results; if they were correct, I cut and paste the new results into my test file.

    Also, another approach would be to build a framework like this:

    use Test::More;

    my @tests = (
        'is($obj->accessor, "Some value", "accessor")',
    );
    plan tests => 2 + @tests;   # declare the plan before any test runs

    use_ok('Some_Module');
    $obj = Some_Module->new();
    isa_ok($obj, 'Some_Module');

    eval $_ for @tests;

    In short, there are lots of ways to be Lazy without being lazy about counting your tests. :-)



Re: How can I write Test::Finished?
by schwern (Scribe) on Jun 22, 2004 at 19:53 UTC

    Overriding CORE::exit() is probably the simplest thing to do; I'm planning on building that into Test::More once I'm convinced there's a real need. POSIX::_exit() is another one. Override them so they set a flag and check for that flag in an END block (END blocks are still run even when exit()ing).
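
    A minimal, untested sketch of that flag-and-END-block approach (Test::NoExit is an invented name and this is not Test::More's actual code; POSIX::_exit() would need its own wrapper, since it skips END blocks entirely):

    package Test::NoExit;
    use strict;
    use warnings;
    use Test::Builder;

    my $exited = 0;

    BEGIN {
        # The override only affects exit() calls compiled after this point,
        # so load this before the module under test.
        *CORE::GLOBAL::exit = sub {
            $exited = 1;
            CORE::exit( @_ ? $_[0] : 0 );
        };
    }

    END {
        # exit() still runs END blocks, so we get a chance to complain here.
        # (Load this after Test::More so this runs before Test::Builder's END.)
        Test::Builder->new->ok( 0, "something called exit() during the tests" )
            if $exited;
    }

    1;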

    I don't think there's anything you can do about a direct call to CORE::exit() but that's about as unlikely as they come.

    Filters are just asking for trouble.

    I've never run into this problem in the real world so I'm less than convinced it needs addressing. Has an exit(0) bitten anyone else?

    -- Michael G Schwern
      That sounds reasonable and it's probably the easiest thing that could possibly work.

      BTW, how come exit(1) gets ignored by Test::Harness? I've got this in bad.t:

      use Test::More qw(no_plan);
      ok(1);
      exit(1);
      ok(1);

      But when I run a make test:

      $ make test
      PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib -I/usr/local/lib/perl5/5.6.1/i686-linux -I/usr/local/lib/perl5/5.6.1 -e 'use Test::Harness qw(&runtests $verbose); $verbose=0; runtests @ARGV;' t/*.t
      t/bad....ok
      All tests successful.
      Files=1, Tests=1,  0 wallclock secs ( 0.02 cusr +  0.01 csys =  0.03 CPU)

      What gives? I've got Test::Harness v2.32 and Test::More v0.47.

      -sam

Re: How can I write Test::Finished?
by theorbtwo (Prior) on Jun 24, 2004 at 17:47 UTC

    I think the best way to fix this is in a different layer -- in the harness.

    Make Test::Harness understand a magical 0..__END__ plan: it would then expect a __END__ line followed by EOF, and consider Bad Things to have happened if the output ends with no __END__ line in sight or, conversely, if there is output past the __END__.
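
    Purely as an illustration, the output of a test under such a scheme might read like this (the 0..__END__ plan is the proposal above, not anything Test::Harness actually supports today):

    0..__END__
    ok 1 - module loads
    ok 2 - does the thing
    __END__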


    Warning: Unless otherwise stated, code is untested. Do not use without understanding. Code is posted in the hopes it is useful, but without warranty. All copyrights are relinquished into the public domain unless otherwise stated. I am not an angel. I am capable of error, and err on a fairly regular basis. If I made a mistake, please let me know (such as by replying to this node).