PerlMonks
Test::More and fork

by cees (Curate)
on Jun 22, 2005 at 16:23 UTC ( [id://469077] )

cees has asked for the wisdom of the Perl Monks concerning the following question:

I am having some issues getting Test::More to work properly when the test script needs to fork to do some of the tests. I require this in the test scripts for my Data::FormValidator::Filters::Image module which needs to simulate a file upload. I have the test suite working by just printing "ok 1" directly (See here), but would rather use Test::More if I can.

Here is some example code that shows the problem:

    use Test::More tests => 2;

    ok(1, "First test");

    $pid = open( CHILD, "|-" );
    $SIG{PIPE} = 'IGNORE';

    if ($pid) {    # parent
        print CHILD "test\n";
        close CHILD;
        exit 0;
    }

    # child process
    ok(1, "Second test");

The output is as follows:

    1..2
    ok 1 - First test
    ok 2 - Second test
    # Looks like you planned 2 tests but only ran 1.

Notice that the test did execute, and it did print to STDOUT, but Test::More didn't see it, hence it wasn't counted.

Here is the same test without using Test::More:

    print "1..2\n";
    print "ok 1 - First test\n";

    $pid = open( CHILD, "|-" );
    $SIG{PIPE} = 'IGNORE';

    if ($pid) {    # parent
        print CHILD "test\n";
        close(CHILD);
        exit 0;
    }

    print "ok 2 - Second test\n";

Obviously this second test just prints directly to STDOUT, but running both scripts through Test::Harness shows that the plain version passes while the Test::More version fails:

    # perl -MTest::Harness -e 'runtests("test_plain_fork.t", "test_more_fork.t")'
    test_plain_fork....ok
    test_more_fork.....ok 2/2# Looks like you planned 2 tests but only ran 1.
    test_more_fork.....dubious
            Test returned status 1 (wstat 256, 0x100)
            after all the subtests completed successfully
    Failed Test      Stat Wstat Total Fail  Failed  List of Failed
    -------------------------------------------------------------------------------
    test_more_fork.t    1   256     2    0   0.00%  ??
    Failed 1/2 test scripts, 50.00% okay. 0/4 subtests failed, 100.00% okay.

I have a feeling this is all because of Test::Builder, which has this comment in the docs:

It's ok for your test to change where STDOUT and STDERR point to, Test::Builder's default output settings will not be affected.

If that is the problem, is there any way around it? If it isn't the problem, what am I doing wrong?

Replies are listed 'Best First'.
Re: Test::More and fork
by diotalevi (Canon) on Jun 22, 2005 at 16:34 UTC

    You could try changing where Test::Builder writes to.

        my $builder = Test::More->builder;

        # Set each of the Test::Builder output settings
        $builder->output( ... );
        $builder->failure_output( ... );
        $builder->todo_output( ... );

      Do you have any idea how I would actually use that? I tried a few things (like redirecting all output to IO::String), but I still get the same results.

      Thanks for your input...

        I assumed telling it to write to a closed or invalid handle would be enough. Or have it write to something opened to File::Spec->devnull.
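
        A minimal sketch of this suggestion, using a scalar-backed filehandle instead of devnull so the redirected output is actually observable (recent Test::Builder versions also accept a scalar ref directly). Note that, as the rest of the thread shows, redirecting the output changes where the results go but does not by itself fix the plan-count problem after a fork:

```perl
use strict;
use warnings;
use Test::More tests => 1;

my $builder = Test::More->builder;

# redirect normal test output into an in-memory scalar so we can
# inspect exactly what Test::Builder writes
my $captured = '';
open my $capture_fh, '>', \$captured or die "open: $!";
$builder->output($capture_fh);

ok(1, "First test");

# put output back on STDOUT and replay what was captured,
# so the harness still sees the result line
$builder->output(\*STDOUT);
print $captured;
```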
Re: Test::More and fork
by bluto (Curate) on Jun 22, 2005 at 18:12 UTC
    This may be too simple for your needs, but one way to do this is to have the parent do all of the 'ok' printing (e.g. have it wait for the child to die and then examine the exit code). If the child has fairly simple output you need to check, you can just send it to a temporary file and have the parent parse it once the child has exited. I've found that if I design my tests right I don't need anything more complex than this, but YMMV.

    Update: I meant to say I've gotten this method to work fine with Test::More.
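
    A minimal sketch of this approach, assuming the child can report success through its exit status (the `1 + 1` check stands in for whatever the child really does; `POSIX::_exit` is used so the child skips Test::Builder's own end-of-plan check, which it inherits across the fork):

```perl
use strict;
use warnings;
use POSIX ();
use Test::More tests => 1;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: do the real work and report success via the exit status
    my $sum = 1 + 1;                      # stand-in for the real checks
    # _exit skips END blocks, so the child's inherited Test::Builder
    # doesn't complain about an unfinished plan
    POSIX::_exit($sum == 2 ? 0 : 1);      # 0 = success
}

# parent: reap the child and turn its exit status into a single test
waitpid($pid, 0);
my $child_status = $? >> 8;
is($child_status, 0, "child finished successfully");
```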

Re: Test::More and fork
by adrianh (Chancellor) on Jun 23, 2005 at 17:32 UTC
    If it isn't the problem, what am I doing wrong?

    The test output isn't the problem :-)

    Test::Builder does some automatic checks when a process exits that, amongst other things, check that the number of tests run matches the number of tests in the plan.

    After you fork the parent process doesn't know that the child process ran another test. This means when the parent process exits, it comes up one test short and outputs the error you see.

    To get over this you can disable the exit checking by doing:

    Test::More->builder->no_ending(1);

    in the parent process - see the Test::Builder docs for details.
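
    Applied to the original example, the fix can be sketched like this (the `no_ending` call is set before the parent exits; everything else follows the code from the question):

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok(1, "First test");

# skip Test::Builder's end-of-process plan check, which would
# otherwise report "planned 2 tests but only ran 1" in the parent
Test::More->builder->no_ending(1);

$SIG{PIPE} = 'IGNORE';
my $pid = open(my $child, "|-");
die "fork failed: $!" unless defined $pid;

if ($pid) {                # parent
    print {$child} "test\n";
    close $child;
    exit 0;
}

# child: STDIN is the read end of the parent's pipe;
# the remaining test runs here
my $line = <STDIN>;
ok($line eq "test\n", "Second test");
```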

      Brilliant! adrianh++

      I missed that bit in the docs, but it works like a charm. I guess I was confused that Test::Harness couldn't see the correct number of valid tests either, but it must parse some of the end report that Test::More generates, instead of counting the tests itself.

      There is just one limitation (that doesn't affect me), but after the fork, you can't perform any more tests in the parent, because it will mess up the test numbering (since it doesn't see the tests the child has performed). However, you could disable automatic numbering and manage that yourself to get around it. For me it is not a big deal, since I don't need to perform any more tests in the parent after the fork.
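
      For anyone who does need tests in the parent after the fork, a sketch of the manual-bookkeeping idea: bump the parent's test counter by the number of tests the child ran, using Test::Builder's `current_test` setter. This assumes you know exactly how many tests the child performs, and the child uses `POSIX::_exit` to skip its own inherited end-of-plan check:

```perl
use strict;
use warnings;
use POSIX ();
use Test::More tests => 3;

ok(1, "test 1 - parent, before fork");

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    ok(1, "test 2 - child");
    POSIX::_exit(0);   # skip the child's own end-of-plan check
}

waitpid($pid, 0);      # make sure the child's output is done first

# tell the parent's Test::Builder that the child already ran one test,
# so the parent's next test is numbered 3 and the plan check passes
my $builder = Test::More->builder;
$builder->current_test($builder->current_test + 1);

ok(1, "test 3 - parent, after fork");
```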

      In case anyone is interested in why I needed this: I had to test a CGI file upload, and the following lets me simulate one perfectly (basic idea stolen from the CGI.pm test suite):

      my $req = &HTTP::Request::Common::POST(
          '/dummy_location',
          Content_Type => 'form-data',
          Content      => [
              test   => 'name2',
              image1 => ["t/image.jpg"],
          ]
      );

      $ENV{REQUEST_METHOD} = 'POST';
      $ENV{CONTENT_TYPE}   = 'multipart/form-data';
      $ENV{CONTENT_LENGTH} = $req->content_length;

      if ( open( CHILD, "|-" ) ) {
          print CHILD $req->content;
          close CHILD;
          exit 0;
      }

      # at this point, we're in a new (child) process
      # and CGI.pm can read the POST params from STDIN
      # as in a real request
      my $q = CGI->new;
        I guess I was confused that Test::Harness couldn't see the correct number of valid tests either, but it must parse some of the end report that Test::More generates, instead of counting the tests itself.

        The reason T::H gives an error is that when T::B figures out it has a bad test count, it exits with a non-zero value, which T::H then picks up as suspicious (hence the "Test returned status 1" error from T::H).

        This makes it easier for other things like Aegis to figure out which test script borked automatically.

Node Type: perlquestion [id://469077]
Approved by Old_Gray_Bear