in reply to Stupid, but fun, ideas in testing

That's nice if you're optimizing to minimize test runtime. However, I usually optimize to maximize test independence. I want each *.t file to run in its own interpreter from a known starting point.

That said, I do often look for ways to run similar tests in one main loop where the test data comes from a data structure within the .t file or from outside the .t file in some way.
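For instance, a table-driven .t file in that style might look something like this (the case data and the length() check are only placeholders):

    use strict;
    use warnings;
    use Test::More;

    # Hypothetical table of cases: [ input, expected length, description ]
    my @cases = (
        [ 'foo',          3, 'short word'        ],
        [ '',             0, 'empty string'      ],
        [ 'hello world', 11, 'string with space' ],
    );

    plan tests => scalar @cases;

    for my $case (@cases) {
        my ( $input, $expected, $desc ) = @$case;
        is( length $input, $expected, $desc );
    }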

How does Test::Class handle the independence issue? Load all the modules and then fork for each class?

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re^2: Stupid, but fun, ideas in testing
by Ovid (Cardinal) on Nov 24, 2007 at 16:27 UTC
    You could probably hack that separation into Test::Class easily enough, but it's not part of its design. By deliberately allowing different tests to run in the same process, you can easily find hidden assumptions about state. Maintaining a reasonable state is what the startup/setup/teardown/shutdown attributes are for. See this thread for Adrian Howard's thoughts on this topic.
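    A rough sketch of those fixture hooks, with My::Widget standing in for whatever class is under test:

        package My::Test::Widget;
        use base qw(Test::Class);
        use Test::More;
        use My::Widget;    # assumed class under test

        # Runs before every test method; build a fresh fixture.
        sub make_widget : Test(setup) {
            my $self = shift;
            $self->{widget} = My::Widget->new;
        }

        # Runs after every test method; throw the fixture away.
        sub cleanup : Test(teardown) {
            my $self = shift;
            delete $self->{widget};
        }

        sub creation : Test(1) {
            my $self = shift;
            isa_ok( $self->{widget}, 'My::Widget' );
        }

        1;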

    Cheers,
    Ovid

    New address of my CGI Course.

      Maintaining a reasonable state is what the startup/setup/teardown/shutdown attributes are for.

      But teardown doesn't reset %INC or undo changes to the symbol tables.

      Sometimes, when I refactor code into a separate module, I might wind up taking a function or class method call along with it and forget to use or require the corresponding module. If I only test that new module after loading the original, I might never notice that the new module doesn't load something. Only testing the new module on its own (assuming that's a valid use case) would pick that up. Admittedly, that's a simplistic example for the sake of argument, but I think it makes the case that Test::Class shouldn't be used for optimization without giving some good thought to what the tradeoffs are.
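      The sort of thing that catches it is a bare load test run as its own *.t file, so nothing else has populated %INC first (My::New::Module is just a stand-in for the refactored module):

          # t/00-load.t
          use strict;
          use warnings;
          use Test::More tests => 1;

          # In a clean interpreter, this fails if the module forgot a
          # use/require that some other test file happened to load first.
          require_ok('My::New::Module');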

      Of course, I maintain weird modules like Sub::Uplevel and Class::InsideOut, so maybe I just tend to be extra careful.

      -xdg


      By deliberately allowing different tests to run in the same process, you can easily find hidden assumptions about state.

      One good way to shake out dependencies is to run the tests in semi-random order. "Semi" in this case means at least temporarily reproducible, to allow for debugging. A switch to run the tests in reverse order is a trivially easy way to flush out some problems. Seeding the random number generator with the date (but not the time of day) before randomizing the test order keeps the order stable for same-day debugging.
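      A sketch of that kind of runner (REVERSE_TESTS is just an invented switch):

          use strict;
          use warnings;
          use List::Util qw(shuffle);
          use Test::Harness qw(runtests);

          # Seed with the date (days since the epoch), not the time of day,
          # so the shuffled order stays the same all day for debugging.
          srand( int( time / 86400 ) );

          my @tests = sort glob('t/*.t');
          @tests = $ENV{REVERSE_TESTS} ? reverse @tests : shuffle @tests;

          runtests(@tests);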

Re^2: Stupid, but fun, ideas in testing
by jasonk (Parson) on Nov 24, 2007 at 16:21 UTC

    It doesn't on its own; it runs the tests in whatever test classes you have loaded. So if you have two test classes named App::Test::Foo and App::Test::Bar, then you can run them together...

    use App::Test::Foo;
    use App::Test::Bar;

    Test::Class->runtests;

    In this case they will not be run independently; they will run in the same interpreter, and I don't think the order they run in (App::Test::Foo first or App::Test::Bar first) is defined.

    If you want them to run independently, you still have to create independent test scripts for them...

    # Foo.t
    use App::Test::Foo;
    Test::Class->runtests;

    # Bar.t
    use App::Test::Bar;
    Test::Class->runtests;

    Generally I don't think of Test::Class as a replacement for traditional *.t files, but as a helper for them. It's a great help if you have things like unit tests where you have to set up fixtures first and tear them down afterward, or if you have a bunch of similar classes that need their common features tested as well as their own individual unit tests. A sketch of that shared-features case follows below.
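    For the "similar classes" case, one common trick is a shared base test class whose test methods are inherited, so every subclass gets the common checks for free. Something along these lines, where App::Foo, App::Bar, and the new/isa checks are all placeholders:

        package App::Test::Base;
        use base qw(Test::Class);
        use Test::More;

        # Each subclass says which class it exercises.
        sub class_under_test { return }

        # Inherited by every subclass, so the common API gets checked
        # once per class under test.
        sub common_api : Test(2) {
            my $self  = shift;
            my $class = $self->class_under_test
                or return 'abstract base class';    # skip for the base itself
            can_ok( $class, 'new' );
            isa_ok( $class->new, $class );
        }

        package App::Test::Foo;
        use base qw(App::Test::Base);
        use App::Foo;
        sub class_under_test { 'App::Foo' }

        package App::Test::Bar;
        use base qw(App::Test::Base);
        use App::Bar;
        sub class_under_test { 'App::Bar' }

        1;

    Load these in a single runner and call Test::Class->runtests, or keep one .t per class if you want the process-per-class isolation xdg is after.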


    We're not surrounded, we're in a target-rich environment!