PerlMonks
Re^4: Test::Class and test organization

by dragonchild (Archbishop)
on Mar 22, 2006 at 03:51 UTC ( #538395 )


in reply to Re^3: Test::Class and test organization
in thread Test::Class and test organization

Here's a basic list of the discrete tests:

  • Basic functionality (Does a single level hash or array work?)
  • Embedded functionality (Does a HoH, HoA, AoH, AoA, and deeper work?)
  • What about wide hashes and arrays (4000+ keys)?
  • What about deep hashes and arrays (4000+ levels)? Does every level behave appropriately?
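As a sketch of what the first discrete test might look like, here is a minimal Test::More script. A plain hash and array stand in for the DBM::Deep tie, which is assumed rather than exercised here (the real test would do something like tie %h, 'DBM::Deep', $file):

```perl
use strict;
use warnings;
use Test::More;

# Sketch of "basic functionality": does a single-level hash or array work?
# A plain hash/array stands in where the real suite would tie to DBM::Deep.
my %h;
$h{name} = 'dragonchild';
is( $h{name}, 'dragonchild', 'single-level hash store/fetch' );

my @a;
push @a, 'first';
is( $a[0], 'first', 'single-level array push/fetch' );

done_testing();
```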
I then want to cross-reference those tests against:
  • What happens if we use filters on the keys? The values?
  • What about changing some of the internal key values?
  • Does changing the hashing function make a difference?
  • Does cloning work?
  • What about importing and exporting from standard Perl data structures? tied data structures?
  • How about if I turn locking on and off?
  • What about creating the db using tie vs. DBM::Deep->new?
  • How about concurrent access?
  • What about turning autobless on and off? What about locking? autoflush?

It's almost an N-dimensional matrix of possibilities, and I want to be able to cover as many as possible.
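One way to build that matrix is to enumerate every combination of toggles as a list of case hashrefs, one dimension at a time. This is just a sketch; the dimension names below are illustrative stand-ins, not a statement of DBM::Deep's real option names:

```perl
use strict;
use warnings;

# Sketch: turn N on/off dimensions into 2**N test cases, each a hashref
# of toggles. Dimension names here are illustrative only.
sub generate_cases {
    my @dims  = @_;
    my @cases = ( {} );
    for my $dim (@dims) {
        # split every existing case into an off- and an on- variant
        @cases = map {
            my $c = $_;
            map { +{ %$c, $dim => $_ } } 0, 1;
        } @cases;
    }
    return @cases;
}

my @cases = generate_cases(qw( filter_keys locking autobless autoflush ));
printf "%d cases\n", scalar @cases;   # 2**4 = 16
```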

Now, 99.9% of these tests will not be run when the user installs. I plan on picking some representative examples that provide 95%+ code coverage in under 30 seconds on a standard machine and using those for installation tests. That suite will also be the tests that I run on a standard basis before committing changes.
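The shape of that install-time subset might look like the following sketch. The real selection would be coverage-driven (e.g. informed by Devel::Cover reports); a random sample is shown only to make the idea concrete:

```perl
use strict;
use warnings;
use List::Util qw( shuffle );

# Sketch: run only a representative sample of the full case matrix at
# install time. Random sampling is a placeholder for coverage-driven
# selection of the 95%+-coverage subset.
my @all_cases = ( 1 .. 1000 );                     # stand-in for the full case list
my @install   = ( shuffle @all_cases )[ 0 .. 49 ]; # 50 of 1000 cases
printf "running %d of %d cases\n", scalar @install, scalar @all_cases;
```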

However, I need a suite of tests that I can run overnight (if necessary) that will completely and utterly crush this code and show me that every single edge case I can think of has been covered. Then, when a bug is found, I can demonstrate that the bug has been covered in every one of these scenarios. I need to do this so I can have the same level of confidence in DBM::Deep that I have in MySQL or Oracle.

A perfect example is the autobless feature. I found a few bugs in autobless in certain situations, so I fixed them. Recently, however, I found a bug with autobless when it came to exporting. Had I been using this comprehensive test suite, I would have found that bug already (and others like it, I suspect).


My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?


Re^5: Test::Class and test organization
by xdg (Monsignor) on Mar 22, 2006 at 05:54 UTC

    It sounds like what you really may need is something along the lines of Test::Trait (not yet written) rather than Test::Class.

    When I was writing tests to fix Sub::Uplevel, I needed to test a lot of variations of nested regular and upleveled function calls. As I had just read Higher Order Perl, I realized that my test cases could be described by a tree, so I wrote a recursive function to generate and check all my test cases. While the patch isn't incorporated yet, the problem and patch are posted on RT.
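The tree-of-cases idea can be sketched with a short recursive generator. The function and names below are illustrative, not the actual RT patch: each call chain of a given depth is built by prepending either a plain or an upleveled frame to every shorter chain:

```perl
use strict;
use warnings;

# Sketch: recursively enumerate every chain of nested calls up to a given
# depth, each frame either a plain or an upleveled call. Illustrative only.
sub call_chains {
    my ($depth) = @_;
    return ( [] ) if $depth == 0;   # one empty chain at the base
    my @chains;
    for my $kind (qw( plain uplevel )) {
        push @chains, map { [ $kind, @$_ ] } call_chains( $depth - 1 );
    }
    return @chains;
}

my @chains = call_chains(3);
printf "%d chains of depth 3\n", scalar @chains;   # 2**3 = 8
```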

    If you think your problem is really N-dimensional (meaning that you want to test all the combinations of the individual variations you described), then you might want to consider doing something similar and just traverse all the combinations.

    I think the challenge will be trying to describe the test variations sufficiently orthogonally that you can easily combine them. (E.g. concurrent access can't be toggled on/off as easily as key filters.) I think you're going to want to figure that out before worrying about organizing your test files and data.

    Personally, my initial thought on this is to implement the core discrete tests in a utility module. Then I would create a test script along these lines:

        use Test::More;
        use t::CoreTests qw( run_core_tests core_test_plan );

        sub generate_cases {
            # returns AoH with toggles for which variations should be used for each case
            # e.g. { filter_keys => 1, filter_values => 0, change_values => sub { } }
            #
            # adjust the complexity of cases generated to suit your needs
        }

        my @cases = generate_cases();

        plan tests => core_test_plan() * @cases;

        run_core_tests( $_ ) for @cases;

    Of course, the core test utility would need to know what to do with the various toggles.
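    One hypothetical shape for that toggle handling: translate each case's toggles into constructor options before exercising the store. The option names below (locking, autobless, set_filter) follow DBM::Deep's documented interface, but treat this mapping as an assumption, not the module's real test code:

```perl
use strict;
use warnings;

# Hypothetical helper: map a case's toggles onto DBM::Deep-style options.
# This is a sketch of the dispatch idea, not DBM::Deep's actual test code.
sub options_for_case {
    my ($case) = @_;
    my %opts;
    $opts{locking}   = 1 if $case->{locking};
    $opts{autobless} = 1 if $case->{autobless};
    # key/value filters would be installed after construction, e.g.:
    # $db->set_filter( store_key => sub { ... } ) if $case->{filter_keys};
    return \%opts;
}

my $opts = options_for_case( { locking => 1, autobless => 0 } );
# the DBM::Deep->new( file => ..., %$opts ) call itself is left out here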

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re^5: Test::Class and test organization
by Herkum (Parson) on Mar 22, 2006 at 13:42 UTC

    It sounds like what you really want is different tests with varying levels of detail, but run as one big test script. The thing about tests is that it costs nothing to keep making them; if you make one big test script, you might spend as much time debugging your test script as you would your module.

    Why not start with a strategy of making a number of different tests, each one in its own file, and each test at a different layer of complexity? Start with small, simple tests as your core and write them first. As you add complexity to your tests, focus each test on one aspect of your module. Your tests are easier to maintain because they are limited in scope, and that limited scope also makes it easier for you to focus on that one piece. I cannot imagine, with the things that you are working with, trying to test everything at once. For example, these could be a list of test files:

    basic.t       # Basic functionality (Does a single level hash or array work?)
    embedded.t    # Embedded functionality (Does a HoH, HoA, AoH, AoA, and deeper work?)
    wide_data.t   # What about wide hashes and arrays (4000+ keys)?
    deep_data.t   # What about deep hashes and arrays (4000+ levels)? Does every level behave appropriately? 
    filter.t      # What happens if we use filters on the keys? The values?
    internals.t   # What about changing some of the internal key values?
    change_wide.t # Does changing the hashing function make a difference?
    change_deep.t #
    clone.t       # Does cloning work?
    import.t      # What about importing and exporting from standard Perl data structures? tied datastructures?
    export.t      #
    locking.t     # How about if I turn locking on and off?
    dbm_deep.t    # What about creating the db using tie vs. DBM::Deep->new?
    concurrent.t  # How about concurrent access?
    auto_bless.t  # What about turning autobless on and off? What about locking? autoflush? 
    

    This way might be more redundant, however, by building it in layers you will be able to more easily identify when you broke your code because the simple stuff will start breaking right away and you can focus your immediate efforts on fixing that before you attempt to address your very specific and detailed tests.
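    As a sketch of how small one of those files could start, here is a minimal embedded.t. A plain hash stands in for the DBM::Deep tie, which is assumed rather than exercised here:

```perl
use strict;
use warnings;
use Test::More;

# Sketch of embedded.t: do nested structures round-trip? A plain hash
# stands in where the real test would tie to DBM::Deep.
my %h;
$h{outer}{inner} = 42;          # HoH
$h{list}[2]      = 'third';     # HoA
is( $h{outer}{inner}, 42,      'HoH store/fetch' );
is( $h{list}[2],      'third', 'HoA store/fetch' );

done_testing();
```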

      This is exactly where the current test suite is. I'm trying to move beyond that. :-)

