
in reply to Re^2: Test::Class and test organization
in thread Test::Class and test organization

Could you share some examples of the other dimension in your grid? What I think you're saying is that you have a set of input data that you want to reuse across different tests -- but I'm not clear on whether these different tests are classes or subsets of functionality of the classes.

I'm not sure this necessarily calls for Test::Class. What about factoring all the test cases into a helper module:

# t/Data.pm
package t::Data;

our @cases = (
    {
        label => "3 pairs",
        input => [ 'a' .. 'f' ],
    },
    # etc...
);

1;

Then create the separate .t files, each using that package for input data:

# t/42_somefunctionality.t
use Test::More;   # no plan here!
use t::Data;

my $tests_per_case = 13;
plan tests => $tests_per_case * @t::Data::cases;

for my $case ( @t::Data::cases ) {
    # 13 tests here for each case
}

In the Test::Class paradigm, I think this would be done with a superclass, and the individual test classes would inherit from it:

# t/DBM/Deep/Test.pm
package t::DBM::Deep::Test;
use base 'Test::Class';
use Test::More;

my @cases = (
    # all test data here
);

sub startup :Test(startup) {
    my $self = shift;
    $self->{cases} = \@cases;
}

1;
# t/DBM/Deep/Feature1.pm
package t::DBM::Deep::Feature1;
use base 't::DBM::Deep::Test';
use Test::More;

sub a_simple_test :Tests {
    my $self = shift;
    for my $c ( @{ $self->{cases} } ) {
        # tests here
    }
}

1;
# runtests.t
use t::DBM::Deep::Feature1;
use t::DBM::Deep::Feature2;
# etc...

Test::Class->runtests();

This is similar to how something like CGI::Application recommends using an application-wide superclass to provide the common DBI connection and security handling, with individual subclasses for different parts of the application. I'm not sure exactly how to get the plan right, though.
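One hedged way to get the plan right: Test::Class lets a class declare how many tests a method will run via num_method_tests(), so once the case data is known the subclass could declare its total up front. A minimal sketch, assuming the classes above (the case_count() accessor is hypothetical -- you would add it to the superclass alongside @cases):

# t/DBM/Deep/Feature1.pm (sketch)
package t::DBM::Deep::Feature1;
use base 't::DBM::Deep::Test';
use Test::More;

my $tests_per_case = 13;    # assumed, as in the earlier example

sub a_simple_test :Tests {
    my $self = shift;
    for my $c ( @{ $self->{cases} } ) {
        # 13 tests here for each case
    }
}

# Declare the total before runtests() computes the plan.
__PACKAGE__->num_method_tests(
    a_simple_test => $tests_per_case * t::DBM::Deep::Test->case_count,
);

1;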

For another approach (similar to the first one I mentioned), you might want to look at how I structured the tests for Pod::WikiDoc. I put all the test cases as individual files in subdirectories, and then my *.t files called on some fixture code to run a callback function on each test case in a directory. (This is almost exactly what Test::Base is designed to do, but I wanted to avoid that dependency.)
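The shape of that fixture code is roughly as follows -- a hedged sketch, not the actual Pod::WikiDoc code; the directory layout, file format, and the commented-out convert() call are all illustrative:

# t/21_wiki_blocks.t (sketch)
use strict;
use warnings;
use Test::More;

my $case_dir = 't/wiki_blocks';

opendir my $dh, $case_dir or die "can't open $case_dir: $!";
my @case_files = sort grep { /\.txt\z/ } readdir $dh;
closedir $dh;

plan tests => scalar @case_files;

run_case("$case_dir/$_") for @case_files;

# Each case file holds an input section and an expected-output section,
# separated by a marker line; one assertion per file.
sub run_case {
    my $path = shift;
    open my $fh, '<', $path or die "can't read $path: $!";
    my $content = do { local $/; <$fh> };
    my ( $input, $expected ) = split /^---\n/m, $content, 2;
    # is( convert($input), $expected, $path );  # convert() is the code under test
    pass($path);    # placeholder assertion so the sketch runs as-is
}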

Is any of this helpful for what you're trying to do?

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re^4: Test::Class and test organization
by dragonchild (Archbishop) on Mar 22, 2006 at 03:51 UTC
    Here's a basic list of the discrete tests:
    • Basic functionality (Does a single level hash or array work?)
    • Embedded functionality (Does a HoH, HoA, AoH, AoA, and deeper work?)
    • What about wide hashes and arrays (4000+ keys)?
    • What about deep hashes and arrays (4000+ levels)? Does every level behave appropriately?
    I then want to cross-reference those tests against:
    • What happens if we use filters on the keys? The values?
    • What about changing some of the internal key values?
    • Does changing the hashing function make a difference?
    • Does cloning work?
    • What about importing and exporting from standard Perl data structures? Tied data structures?
    • How about if I turn locking on and off?
    • What about creating the db using tie vs. DBM::Deep->new?
    • How about concurrent access?
    • What about turning autobless on and off? What about locking? autoflush?

    It's almost an N-dimensional matrix of possibilities, and I want to be able to cover as many of them as possible.
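    For concreteness, the matrix can be written down as data and the cross-product enumerated -- a minimal sketch with invented axis names:

    use strict;
    use warnings;

    # Each axis of the matrix and its possible settings (illustrative):
    my %axes = (
        filter_keys => [ 0, 1 ],
        locking     => [ 0, 1 ],
        autobless   => [ 0, 1 ],
        create_via  => [ 'new', 'tie' ],
    );

    # Build the full cross-product, one hashref of toggles per case.
    my @combos = ( {} );
    for my $axis ( sort keys %axes ) {
        @combos = map {
            my $base = $_;
            map { +{ %$base, $axis => $_ } } @{ $axes{$axis} };
        } @combos;
    }

    printf "%d combinations to cover\n", scalar @combos;    # 2*2*2*2 = 16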

    Now, 99.9% of these tests will not be run when the user installs. I plan on picking some representative examples that provide 95%+ code coverage in under 30 seconds on a standard machine and using those for the installation tests. That suite will also be the set of tests I run on a regular basis before committing changes.

    However, I need a suite of tests that I can run overnight (if necessary) that will completely and utterly crush this code and show me that every single edge case I can think of has been covered. Then, when a bug is found, I can demonstrate that the bug is covered in every one of these scenarios. I need to do this so I can have the level of confidence in DBM::Deep that I have in MySQL or Oracle.

    A perfect example is the autobless feature. I found a few bugs in autobless in a few situations, so I fixed them. Recently, however, I found a bug with autobless when it came to exporting. Had I been using this comprehensive test suite, I would have found that bug already (and others like it, I suspect).


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

      It sounds like what you really may need is something along the lines of Test::Trait (not yet written) rather than Test::Class.

      When I was writing tests to fix Sub::Uplevel, I needed to test a lot of variations of nested regular and upleveled function calls. As I had just read Higher Order Perl, I realized that my test cases could be described by a tree, so I wrote a recursive function to generate and check all my test cases. While the patch isn't incorporated yet, the problem and patch are posted on RT.

      If you think your problem is really N-dimensional -- meaning that you want to test all combinations of the individual variations you described -- then you might want to consider doing something similar and just traverse all the combinations, along the lines of the sketch below.
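      Here is a minimal sketch of that kind of recursive traversal, in the spirit of the Higher Order Perl approach (the dimension names and the run_one_case() body are invented for illustration):

      use strict;
      use warnings;

      # Each dimension and its settings; one tree level per dimension.
      my @dimensions = (
          [ filter_keys => 0, 1 ],
          [ locking     => 0, 1 ],
          [ create_via  => 'new', 'tie' ],
      );

      # Recurse through the dimensions, accumulating one toggle per level;
      # each leaf of the recursion is one fully-specified test case.
      sub traverse {
          my ( $dims, %so_far ) = @_;
          if ( !@$dims ) {
              run_one_case( \%so_far );
              return;
          }
          my ( $name, @settings ) = @{ $dims->[0] };
          traverse( [ @{$dims}[ 1 .. $#$dims ] ], %so_far, $name => $_ )
              for @settings;
      }

      # Stand-in for the real per-case tests at each leaf.
      sub run_one_case {
          my $case = shift;
          print join( ' ', map { "$_=$case->{$_}" } sort keys %$case ), "\n";
      }

      traverse( \@dimensions );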

      I think the challenge will be trying to describe the test variations sufficiently orthogonally that you can easily combine them. (E.g. concurrent access can't be toggled on/off as easily as key filters.) I think you're going to want to figure that out before worrying about organizing your test files and data.

      Personally, my initial thought on this is to implement the core discrete tests in a utility module. Then I would create a test script along these lines:

      use Test::More;
      use t::CoreTests qw( run_core_tests core_test_plan );

      sub generate_cases {
          # returns AoH with toggles for which variations should be used for each case
          # e.g. { filter_keys => 1, filter_values => 0, change_values => sub { } }
          #
          # adjust the complexity of cases generated to suit your needs
      }

      my @cases = generate_cases();

      plan tests => core_test_plan() * @cases;

      run_core_tests( $_ ) for @cases;

      Of course, the core test utility would need to know what to do with the various toggles.
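      As a hedged sketch of the shape of such a utility -- t::CoreTests is the hypothetical module from the script above, and the DBM::Deep options in the comments are only illustrative:

      # t/CoreTests.pm (sketch)
      package t::CoreTests;
      use strict;
      use warnings;
      use base 'Exporter';
      use Test::More;

      our @EXPORT_OK = qw( run_core_tests core_test_plan );

      use constant TESTS_PER_CASE => 13;

      sub core_test_plan { return TESTS_PER_CASE }

      sub run_core_tests {
          my $case = shift;
          my $desc = join ',', map { "$_=$case->{$_}" } sort keys %$case;

          # Build the fixture according to the case's toggles, e.g.:
          # my %args;
          # $args{autobless} = 1 if $case->{autobless};
          # my $db = DBM::Deep->new( file => $tmpfile, %args );

          # ...then run the same discrete tests against it:
          pass("core test $_ under $desc") for 1 .. TESTS_PER_CASE;
      }

      1;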

      -xdg

      Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      It sounds like what you really want is different tests with varying levels of detail, but run as one big test script. The thing about tests is that it costs nothing to keep making them; if you make one big test script, you might spend as much time debugging your test script as you would your module.

      Why not start with a strategy of making a number of different tests, each in its own file, with each test at a different layer of complexity? Start with small, simple tests as your core and write them first. As you need to add complexity to your tests, focus each test on one aspect of your module. Your tests are easier to maintain because they are limited in scope, and limiting the scope also makes it easier for you to focus on that one piece. I cannot imagine, with the things you are working with, trying to test everything at once. For example, these could be the test files:

      basic.t       # Basic functionality (Does a single level hash or array work?)
      embedded.t    # Embedded functionality (Does a HoH, HoA, AoH, AoA, and deeper work?)
      wide_data.t   # What about wide hashes and arrays (4000+ keys)?
      deep_data.t   # What about deep hashes and arrays (4000+ levels)? Does every level behave appropriately? 
      filter.t      # What happens if we use filters on the keys? The values?
      internals.t   # What about changing some of the internal key values?
      change_wide.t # Does changing the hashing function make a difference?
      change_deep.t #
      clone.t       # Does cloning work?
      import.t      # What about importing and exporting from standard Perl data structures? Tied data structures?
      export.t      #
      locking.t     # How about if I turn locking on and off?
      dbm_deep.t    # What about creating the db using tie vs. DBM::Deep->new?
      concurrent.t  # How about concurrent access?
      auto_bless.t  # What about turning autobless on and off? What about locking? autoflush? 
      

      This approach may be more redundant; however, by building it in layers you will be able to identify more easily when you have broken your code, because the simple stuff will start breaking right away, and you can focus your immediate efforts on fixing that before you attempt to address your very specific and detailed tests.
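      To make the layering concrete, the bottom layer might be as small as this -- a hedged sketch; the DBM::Deep calls follow its documented hash-style interface, but treat the details as illustrative:

      # basic.t (sketch)
      use strict;
      use warnings;
      use Test::More tests => 3;
      use File::Temp qw( tempfile );
      use DBM::Deep;

      my ( undef, $file ) = tempfile();
      my $db = DBM::Deep->new( $file );    # empty temp file is initialized as a db

      $db->{key} = 'value';
      is( $db->{key}, 'value', 'single-level hash store/fetch' );

      $db->{nested} = { a => 1 };
      is( $db->{nested}{a}, 1, 'one level of nesting round-trips' );

      delete $db->{key};
      ok( !exists $db->{key}, 'delete removes a key' );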

        This is exactly where the current test suite is. I'm trying to move beyond that. :-)

        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?