Re: Test::Class and test organization

by Herkum (Parson)
on Mar 21, 2006 at 17:46 UTC ( #538240 )


in reply to Test::Class and test organization

Your tests are set up like this:

sub test1 : Test(5) {
    my $self = shift;
    # Do stuff here with $self->{foo} and $self->{bar}
    # that were passed in at new()
}

Every time Test::Class runs test1, it knows to add 5 to the expected test count.

The problem I see with your structure here,

my $test = Test::Floober->new( foo => 2, bar => 5 );
my $test2 = Test::Floober->new( foo => 'abcd', bar => [ 2 .. 5 ] );

Test::Class->runtests( $test, $test2 );
is that you are calling specific tests by hand and asking Test::Class to run them. Instead, write all your tests as methods in your test module and let Test::Class run them all.

For example, in your test module

sub new_scalar_values : Test(5) {
    my $self = shift;
    
    my $floober = Floober->new( foo => 2, bar => 5 );
    isa_ok($floober, 'Floober');
    # Create Custom checks here for foo and bar  
}

sub new_array_references_values : Test(5) {
    my $self = shift;
    
    my $floober = Floober->new( foo => 2, bar => [ 1 .. 5 ] );
    isa_ok($floober, 'Floober');
    # Create Custom checks here for foo and bar
   
}

When you run your tests with Test::Class, it will call ALL of your test methods: no more managing individual scripts!
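For instance, the driver can be a single .t file. This is a minimal sketch; Test::Floober is the hypothetical test module from the snippets above, not a real distribution:

```perl
# t/run_all.t - a minimal driver script (Test::Floober is the
# hypothetical Test::Class subclass sketched above)
use strict;
use warnings;
use Test::Floober;

Test::Floober->runtests;    # finds and runs every :Test method
```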

I recommend Perl Testing: A Developer's Notebook for really learning a lot about writing tests.


Re^2: Test::Class and test organization
by dragonchild (Archbishop) on Mar 21, 2006 at 18:42 UTC
    I may have been unclear. I think I'm trying to create a 2-D grid of tests. I have a series of tests that need to be run every time I have a hash, just to verify that the hash is working correctly. (DBM::Deep is a tied class.) I will want to pass in different control data, but run the same tests using that control data. So, sometimes the control data will be 3 key/value pairs. Sometimes, it will be 4000 k/v pairs. But, the actual tests are the same.

    How would you recommend I organize that?


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

      Could you share some examples of the other dimension in your grid? What I think you're saying is that you have a set of input data that you want to use repetitively in different tests -- but I'm not clear on whether these different tests are classes or subsets of functionality of the classes.

      I'm not sure this necessarily calls for Test::Class. What about factoring all the test cases into a helper module:

      # t/Data.pm
      package t::Data;

      our @cases = (
          {
              label => "3 pairs",
              input => [ 'a' .. 'f' ],    # arrayref: 6 elements = 3 key/value pairs
          },
          # etc...
      );

      1;

      Create all the separate .t files using that package for input data:

      # t/42_somefunctionality.t
      use Test::More;    # no plan here!
      use t::Data;

      my $tests_per_case = 13;
      plan tests => $tests_per_case * @t::Data::cases;

      for my $case ( @t::Data::cases ) {
          # 13 tests here for each case
      }

      In the Test::Class paradigm, I think this would be done with a superclass, and the individual test classes would inherit from it:

      # t/DBM/Deep/Test.pm
      package t::DBM::Deep::Test;
      use base 'Test::Class';
      use Test::More;

      my @cases = (
          # all test data here
      );

      sub startup : Test(startup) {
          my $self = shift;
          $self->{cases} = \@cases;
      }

      1;
      # t/DBM/Deep/Feature1.pm
      package t::DBM::Deep::Feature1;
      use base 't::DBM::Deep::Test';
      use Test::More;

      sub a_simple_test : Tests {
          my $self = shift;
          for my $c ( @{ $self->{cases} } ) {
              # tests here
          }
      }

      1;
      # runtests.t
      use t::DBM::Deep::Feature1;
      use t::DBM::Deep::Feature2;
      # etc...

      Test::Class->runtests();

      This is similar to how something like CGI::Application recommends using an application-wide superclass to provide the common DBI connection and security but individual subclasses for different parts of the application. I'm not sure exactly how to get the plan right, though.
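      One hedged possibility for the plan problem: Test::Class lets a method declare a bare :Tests attribute, meaning "count unknown", so whatever assertions actually run are simply counted. The method below is illustrative only; the name and checks are assumptions:

      ```perl
      # In a subclass of t::DBM::Deep::Test; the bare :Tests attribute
      # tells Test::Class the count is not known in advance, so it just
      # counts the assertions that run, however many cases there are.
      sub wide_hash_behaves : Tests {
          my $self = shift;
          for my $case ( @{ $self->{cases} } ) {
              ok( defined $case, 'case is defined' );    # real checks here
          }
      }
      ```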

      For another approach (similar to the first one I mentioned), you might want to look at how I structured the tests for Pod::WikiDoc. I put all the test cases as individual files in subdirectories, and then my *.t files called on some fixture code to run a callback function on each test case in a directory. (This is almost exactly what Test::Base is designed to do, but I wanted to avoid that dependency.)
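      A rough sketch of that file-per-case pattern (the t/cases/ layout and .txt extension are assumptions for illustration, not Pod::WikiDoc's actual layout):

      ```perl
      # Each file under t/cases/ is one test case; a callback-style
      # loop runs the same checks on every file found.
      use strict;
      use warnings;
      use Test::More;

      my @files = glob 't/cases/*.txt';
      plan skip_all => 'no case files found' unless @files;
      plan tests => scalar @files;

      for my $file (@files) {
          open my $fh, '<', $file or die "Can't read $file: $!";
          my $content = do { local $/; <$fh> };
          ok( length $content, "$file has content" );    # real checks go here
      }
      ```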

      Is any of this helpful for what you're trying to do?

      -xdg

      Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

        Here's a basic list of the discrete tests:
        • Basic functionality (Does a single level hash or array work?)
        • Embedded functionality (Does a HoH, HoA, AoH, AoA, and deeper work?)
        • What about wide hashes and arrays (4000+ keys)?
        • What about deep hashes and arrays (4000+ levels)? Does every level behave appropriately?
        I then want to cross-reference those tests against:
        • What happens if we use filters on the keys? The values?
        • What about changing some of the internal key values?
        • Does changing the hashing function make a difference?
        • Does cloning work?
        • What about importing and exporting from standard Perl data structures? tied datastructures?
        • How about if I turn locking on and off?
        • What about creating the db using tie vs. DBM::Deep->new?
        • How about concurrent access?
        • What about turning autobless on and off? What about locking? autoflush?

        It's almost a N-dimensional matrix of possibilities, and I want to be able to cover as many as possible.
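        That cross-product can be generated mechanically. A minimal sketch, where the shape names and option sets are illustrative placeholders rather than DBM::Deep's actual API:

        ```perl
        use strict;
        use warnings;

        # Illustrative axes: data shapes crossed with option sets.
        my @shapes  = ( 'flat_hash', 'hoh', 'wide_hash' );
        my @options = ( { locking => 0 }, { locking => 1 }, { autobless => 1 } );

        # Build one fixture per (shape, option-set) pair.
        my @fixtures;
        for my $shape (@shapes) {
            for my $opt (@options) {
                push @fixtures, { shape => $shape, %$opt };
            }
        }
        print scalar @fixtures, " fixtures\n";    # 3 shapes x 3 option sets = 9
        ```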

        Now, 99.9% of these tests will not be run when the user installs. I plan on picking some representative examples that provide 95%+ code coverage in under 30 seconds on a standard machine and using those for installation tests. That suite will also be the tests that I run on a standard basis before committing changes.

        However, I need a suite of tests that I can run overnight (if necessary) that will completely and utterly crush this code and show me that every single edge case I can think of has been covered. Then, when a bug is found, I can demonstrate that the bug has been covered in every one of these scenarios. I need to do this so I can have the level of confidence in DBM::Deep that I have in MySQL or Oracle.

        A perfect example is the autobless feature. I found a few bugs in autobless in a few situations, so I fixed them. Recently, however, I found a bug with autobless when it came to exporting. Had I been using this comprehensive test suite, I would have found that bug already (and others like it, I suspect).


Re^2: Test::Class and test organization
by adrianh (Chancellor) on Mar 22, 2006 at 11:30 UTC
    The problem I see with your structure here ... is that you are trying to call specific tests and ask it to run them. You should write all your tests as methods in your module and then run them all.

    Nothing wrong with it :-) T::C was explicitly designed to make object-based setting of fixtures possible where class-based stuff was too inflexible.
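    A small sketch of what that object-based style looks like (Test::Floober and its constructor argument are hypothetical): each instance carries its own fixture data in $self, and runtests() runs the same test methods against every instance passed to it:

    ```perl
    # Same test methods, different fixture data per instance.
    my @instances = map { Test::Floober->new( pair_count => $_ ) } ( 3, 4000 );
    Test::Class->runtests(@instances);
    ```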
