PerlMonks  

Running a set of tests twice

by dragonchild (Archbishop)
on Dec 09, 2005 at 04:35 UTC ( #515453=perlquestion )

dragonchild has asked for the wisdom of the Perl Monks concerning the following question:

For my rework of PDF::Writer, I'm developing a set of tests that each PDF::Writer subclass has to pass in order to be considered "working". Using Test::PDF, I can verify that each subclass correctly renders the set of commands against a control PDF.

Here's the problem: I don't want to have copies of identical tests for each subclass. Instead, I want to run the same set of tests against each renderer, but I still want comprehensive output. So, I want something like:

Overall:
  t/foo.t ... ok
  t/bar.t ... ok
Subclass1:
  t/001.t ... ok
  t/002.t ... failed 5/22
    failure message here
Subclass2:
  t/001.t ... failed 1/2
    failure message here
  t/002.t ... failed 3/22
    failure message here
Then, something useful in the summary. (I haven't even thought that far.)

First off: has anyone done this? Second, how easy would this be to do with Module::Build? (I'm not even going to think about doing this with EU::MM.) Or am I really looking for an extension to TAP and a subclass of Test::Harness::Straps?
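On the TAP/harness side of the question: in modern perls, TAP::Harness (the successor to Test::Harness::Straps) makes it fairly easy to run the same test file several times under different labels and inspect the aggregated results. A minimal sketch, with two assumptions not in the original post: the subclass under test is passed to the test file through a hypothetical RENDERER_CLASS environment variable, and the test file is generated on the fly here just to keep the example self-contained.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw( tempdir );
use TAP::Harness;

# Write a tiny parameterised test file for the sketch; a real suite
# would keep t/001.t etc. on disk instead.
my $dir  = tempdir( CLEANUP => 1 );
my $test = "$dir/001.t";
open my $fh, '>', $test or die "can't write $test: $!";
print $fh <<'EOT';
use Test::More tests => 1;
# RENDERER_CLASS is a hypothetical variable name, not part of PDF::Writer
ok( defined $ENV{RENDERER_CLASS}, "got a class to test" );
EOT
close $fh;

my $harness = TAP::Harness->new( { verbosity => -1 } );

for my $class (qw( Subclass1 Subclass2 )) {
    # the test file runs in a child process, so it sees this variable
    local $ENV{RENDERER_CLASS} = $class;
    # the second element of the array ref is the label the harness shows
    my $agg = $harness->runtests( [ $test, "$class: 001" ] );
    printf "%s: %s\n", $class, $agg->all_passed ? 'ok' : 'FAILED';
}
```

The per-class labels give the "Subclass1: ... / Subclass2: ..." grouping asked for above, and the TAP::Parser::Aggregator object returned by runtests can feed whatever summary report you want to build.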


My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Replies are listed 'Best First'.
Re: Running a set of tests twice
by xdg (Monsignor) on Dec 09, 2005 at 12:05 UTC

    You might want to look at Test::Class and the section on "Extending Test Classes by Inheritance".

    For a more manual approach: when I find myself needing identical tests, what I've often found helpful is putting my testing functions in a helper module; each function takes an object to be tested (or even just a class name) and a prefix for the test labels.

    # file: t/helper.pm
    package t::helper;
    use strict;
    use warnings;
    use base 'Exporter';
    use Test::More;

    our @EXPORT = qw( run_all_tests );

    sub run_all_tests {
        my ($obj, $prefix) = @_;
        diag "Starting tests for $prefix";
        isa_ok( $obj, "Parent::Class", "$prefix: object" );
        ok( $obj->true(), "$prefix: true() is true" );
        # more tests ...
    }

    1;
    # file: t/001.t
    use strict;
    use warnings;
    use Test::More 'no_plan';
    use t::helper;

    my @cases = (
        Parent::Class->new(),
        Sub::Class->new(),
    );

    for my $o ( @cases ) {
        run_all_tests( $o, ref $o );
    }

    Does that do what you were looking for? Or did you mean something different by "comprehensive output"?

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re: Running a set of tests twice
by randyk (Parson) on Dec 09, 2005 at 05:32 UTC
    For the aspect of avoiding duplicate tests, you might want to take a look at how mod_perl approaches this - many of the apr and apr-ext tests use the same test subroutine in the appropriate module in TestAPRlib to run the actual tests.
Re: Running a set of tests twice
by ambrus (Abbot) on Dec 09, 2005 at 09:00 UTC

    I've run into a similar problem before. See this perl bug report, where I made every test carry all of its parameters in its name.

    Also see the diag function in Test::More.
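For reference, diag simply echoes its arguments to STDERR prefixed with "# ", so the harness displays them without counting them as tests. A tiny sketch (the label text is made up for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# goes to STDERR as "# testing renderer Subclass1", invisible to the TAP count
diag "testing renderer Subclass1";

ok( 1, "dummy test" );
```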

      This link requires a username/password; if you don't have an account, use guest/guest to view.
Re: Running a set of tests twice
by adrianh (Chancellor) on Dec 10, 2005 at 22:32 UTC
    First off - has anyone done this?

    Yup. One of the motivations behind Test::Class was solving exactly this sort of problem. Getting the tests running is pretty simple:

    # create a base class that contains your generic tests
    {
        package BaseClass::Test;
        use base qw( Test::Class );
        use Test::More;

        sub class_under_test {
            my $self  = shift;
            my $class = ref $self;
            $class =~ s/::Test$//;
            return $class;
        }

        my $o;

        sub create : Test( setup => 1 ) {
            my $self  = shift;
            my $class = $self->class_under_test;
            $o = $class->new;
            isa_ok $o, $class;
        }

        sub answer : Test {
            is $o->answer, 42;
        }

        # because we don't want our abstract test class to run
        __PACKAGE__->SKIP_CLASS( 1 );
    }

    my @classes_to_test = qw( Foo Bar Ni );

    # make a bunch of subclasses of our base test class
    eval "package ${_}::Test; use base 'BaseClass::Test'"
        foreach @classes_to_test;

    # run everything
    Test::Class->runtests;

    __END__
    # will output something like
    1..6
    ok 1 - The object isa Bar
    ok 2 - answer
    ok 3 - The object isa Ni
    not ok 4 - answer
    #   Failed test 'answer'
    #   in foo.pl at line 44.
    #          got: 'infinity'
    #     expected: '42'
    ok 5 - The object isa Foo
    ok 6 - answer
    # Looks like you failed 1 test of 6.

    Where it currently falls down is test reporting: it's a little tricky to see which particular subclass failed a test. You either have to add it to the test description explicitly, or set TEST_VERBOSE, which is a little too verbose. Making this easier is on the to-do list.
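Until then, the explicit route can be as simple as folding the class name into each test description. A minimal Test::More sketch of the idea (Foo and Bar are hypothetical stand-in classes, not from the original post):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# hypothetical stand-in classes so the sketch runs on its own
{ package Foo; sub new { bless {}, shift } sub answer { 42 } }
{ package Bar; sub new { bless {}, shift } sub answer { 42 } }

for my $class (qw( Foo Bar )) {
    my $o = $class->new;
    # a failure here reads "Failed test 'Foo: answer'", naming the class
    is( $o->answer, 42, "$class: answer" );
}
```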

Node Type: perlquestion [id://515453]
Approved by mrborisguy
Front-paged by Old_Gray_Bear