http://www.perlmonks.org?node_id=477256


in reply to Wanted, more simple tutorials on testing

    Basically, I don't want to test for things I know.

Yes, you do. The edge/weird/invalid cases all need tests, but so do the standard simple cases. Coverage of the simple/known stuff ensures that the basic building blocks don't change behavior. It also gives you finer-grained resolution for debugging when tests fail -- for example, if a weird edge case starts failing, you're going to assume the edge-case code has a problem, but what if it's really because you tweaked a core function and the change is just rearing its head here? Good tests on that core function would point to that immediately.

As for your actual test creation, my first thought is that I'm not sure you need (or want) to use WWW::Mechanize (note also the existence of Test::WWW::Mechanize). For one thing, it requires a running web server and is only needed for testing the GUI screens, but here it seems like you really want to get tests down for the back-end functionality.
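If you do get to the front end later, a minimal Test::WWW::Mechanize script against a running instance might look like the sketch below; the URL, page title, and page text are placeholders, not anything from the original post:

    use strict;
    use warnings;
    use Test::More tests => 3;
    use Test::WWW::Mechanize;

    # Placeholder URL for a locally running copy of the app.
    my $url = 'http://localhost:8080/wiki';

    my $mech = Test::WWW::Mechanize->new;
    $mech->get_ok( $url, 'front page fetches' );
    $mech->title_like( qr/wiki/i, 'title mentions the wiki' );
    $mech->content_contains( 'Headline', 'rendered page shows expected text' );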

For your actual individual tests, it seems like you need: 1) the input text, 2) whether the result should come back from HTML::Lint as valid or invalid, and optionally 3) a string to compare the result against. You can set that up in a data structure and then iterate over it, performing the tests -- roughly like this...
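A minimal sketch of that loop, assuming a hypothetical back-end function My::Wiki::convert() that turns one chunk of wiki text into HTML (the module name, the exported function, and the sample cases are all made up for illustration):

    use strict;
    use warnings;
    use Test::More;
    use HTML::Lint;

    use My::Wiki qw(convert);   # hypothetical module under test

    my @cases = (
        { input => "= Headline",  valid => 1, expect => "<h1>Headline</h1>" },
        { input => "<b>unclosed", valid => 0 },    # no 'expect' -- lint check only
    );

    # One lint check per case, plus one string comparison wherever 'expect' is given.
    my $planned = 0;
    $planned += 1 + ( defined $_->{expect} ? 1 : 0 ) for @cases;
    plan tests => $planned;

    for my $c (@cases) {
        my $html = convert( $c->{input} );

        # Wrap the fragment in a minimal document so any document-level
        # complaints from HTML::Lint don't drown out errors in the fragment.
        my $page = "<html><head><title>t</title></head><body>$html</body></html>";

        my $lint = HTML::Lint->new;
        $lint->parse( $page );
        $lint->eof;
        my $errors = () = $lint->errors;    # count the complaints

        if ( $c->{valid} ) {
            is( $errors, 0, "lint-clean: $c->{input}" );
        }
        else {
            ok( $errors > 0, "lint errors expected: $c->{input}" );
        }

        is( $html, $c->{expect}, "output matches: $c->{input}" )
            if defined $c->{expect};
    }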

Re^2: Wanted, more simple tutorials on testing
by xdg (Monsignor) on Jul 22, 2005 at 20:03 UTC

    I'll second this. Your code is still just code -- it only does what it's told. Write your tests to define what you want it to do -- and that includes handling unexpected stuff. The hard part is actually defining what you want it to do, not writing tests.

    At the granular level, much of your code converts a string of input to a string of output. Set up a test loop that does that and define your inputs and outputs in a data structure. You don't need to worry about the web parts yet -- test at a simpler level. E.g.

    use strict;
    use Test::More;

    my @cases = (
        { input => "= Headline",  output => "<h1>Headline</h1>", label => "Single =" },
        { input => "== Headline", output => "<h2>Headline</h2>", label => "Double =" },
    );

    plan tests => 1 + @cases;

    use_ok( "My::Module" );   # assume it exports 'convert'

    for my $c (@cases) {
        is( convert( $c->{input} ), $c->{output}, $c->{label} );
    }

    Once you've got a framework like that in place, just keep adding tests for all the basic, most granular elements. Then I'd suggest considering more complicated cases in similar groups -- e.g. nesting of items inside each other, ordering of nested tags, missing whitespace, etc. (A good time for a separate test file.) Once you have a category, it should be easier to imagine lots of variations. The key is to start small -- focus on small, simple combinations before working up to larger units.
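    For instance, a separate test file for the nesting category could reuse the same table-driven loop with multi-line inputs. The file name, the wiki markup, and the expected HTML below are only illustrative guesses, not part of the original post:

        # t/nesting.t -- hypothetical second test file for the "nesting" category
        use strict;
        use Test::More;
        use My::Module;   # assume it exports 'convert'

        my @cases = (
            { label  => "bulleted list under a heading",
              input  => "= Head\n* one\n* two",
              output => "<h1>Head</h1><ul><li>one</li><li>two</li></ul>" },
            { label  => "bold inside a list item",
              input  => "* some '''bold''' text",
              output => "<ul><li>some <b>bold</b> text</li></ul>" },
        );

        plan tests => scalar @cases;

        is( convert( $_->{input} ), $_->{output}, $_->{label} ) for @cases;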

    You might also try thinking about the problem in terms of a state-tree. As your code receives each chunk of input, it enters certain states. Are you testing each of the states that different chunks of input lead it to?
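    One way to make that concrete is to tag each case with the parser state it exercises and add a meta-test that fails until every state you can name has at least one case. Everything here -- the state names, the markup, the expected HTML -- is an invented illustration:

        use strict;
        use Test::More tests => 1;

        my @cases = (
            { state => "heading",    input => "= Head", output => "<h1>Head</h1>" },
            { state => "list_start", input => "* item", output => "<ul><li>item</li></ul>" },
            # ... more cases as you think of them ...
        );

        # This check will keep failing until every named state has a case --
        # which is the point: it's a running reminder of what still needs covering.
        my @must_cover = qw( heading list_start list_continue list_end paragraph );
        my %seen       = map { $_->{state} => 1 } @cases;
        my @missing    = grep { !$seen{$_} } @must_cover;

        ok( !@missing, "every parser state has at least one test case" )
            or diag( "uncovered states: @missing" );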

    Then get malicious. Intentionally try to break your code or make it give improper results -- this is when you may want to use Test::Exception to check how errors are handled, too. You know your code better than any random end-user, so you should be much more likely to generate malignant input than the "million monkeys" that might bang on your code later. (You'd be surprised how often ideas for this will occur to you during the simple tests of expected behavior.)
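    A sketch of that kind of test, assuming (purely for illustration) that convert() is supposed to die on undef input, survive binary garbage, and not pass script tags through verbatim -- none of those contracts come from the original post:

        use strict;
        use Test::More tests => 3;
        use Test::Exception;
        use My::Module;   # assume it exports 'convert'

        # Assumed contract: undef input is a caller error and should die.
        dies_ok { convert(undef) } "undef input raises an error";

        # Assumed contract: garbage input shouldn't crash the converter.
        lives_ok { convert("\x00\xff<<<[[[") } "binary garbage doesn't kill it";

        # Assumed contract: raw script tags shouldn't come through unescaped.
        unlike( convert("<script>alert(1)</script>"),
                qr/<script>/,
                "script tags don't pass through verbatim" );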

    The point of all of this is to focus on exploring the desired behaviors of your code, both when input is as expected and when it's not. If you do that well at a granular level, then any sort of "real world" input is much more likely to fit some pattern you've already tested.

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.