in reply to When Test Suites Attack

The more I work with Perl test suites, the more I dislike the is_same( $got, $want ) methodology.

I've toyed with writing Perl tools so my modules could instead use the methodology of t/feature.t (a Perl script), t/feature.ok (the expected output), and "perl t/feature.t | diff t/feature.ok -".

Then often "fixing" the test suite is as simple as "perl t/feature.t > t/feature.ok" (once you've verified that the changes are correct).
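The driver for that check/approve cycle could be as small as a helper like the following. This is a hypothetical sketch of my own, not tye's actual tooling; the check_against_ok name and the APPROVE environment switch are invented for illustration:

```perl
use strict;
use warnings;

# Hypothetical sketch of the check/approve cycle described above.
# Compare a script's captured output against its .ok file; with
# $ENV{APPROVE} set, rewrite the .ok file instead of failing --
# the programmatic equivalent of "perl t/feature.t > t/feature.ok".
sub check_against_ok {
    my ($got, $ok_file) = @_;

    my $want = '';
    if (open my $fh, '<', $ok_file) {
        local $/;               # slurp the whole expected-output file
        $want = <$fh>;
    }

    return 1 if $got eq $want;  # output matches the approved copy

    if ($ENV{APPROVE}) {        # approve the new output wholesale
        open my $fh, '>', $ok_file or die "can't write $ok_file: $!";
        print {$fh} $got;
        return 1;
    }
    return 0;                   # genuine mismatch
}
```

In practice you'd still run "perl t/feature.t | diff t/feature.ok -" first, to see what changed, before flipping the approve switch.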

Even better is to not use plain 'diff' but something that knows how to ignore or transform variant parts of the output (something that I'll probably write up in more detail at some later date; a combination of simple 'quoting', simplistic templates, and simple reverse templating).
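As a taste of what that might look like (a minimal sketch of my own, not tye's promised write-up; the normalize name and the placeholder strings are invented), you can run both the actual and expected output through a normalizer before diffing:

```perl
use strict;
use warnings;

# A minimal sketch of "quoting" the variant parts of test output:
# anything that legitimately differs between runs is replaced with a
# stable placeholder, so a plain diff afterwards only flags real changes.
sub normalize {
    my ($text) = @_;

    # Memory addresses (e.g. in stringified references) vary per run.
    $text =~ s/\b0x[0-9a-fA-F]+\b/<ADDR>/g;

    # Timestamps vary per run.
    $text =~ s/\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/<WHEN>/g;

    return $text;
}
```

Anything the normalizer maps to the same placeholder is treated as equal by the subsequent plain diff; the reverse-templating idea would go further and check that the variant parts at least have the right shape.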

Update: BTW, this "easy to 'approve' new UT output" feature isn't the only reason I prefer this style of UT validation. It also means that when a test fails, someone can simply send you the output from "t/feature.t" and you've probably got all of the information you'd otherwise be hunting for in the debugger (if you could even get a debugger into the environment where the problem reproduces), or trying to extract by adding "debugging" prints. By using 'diff' to validate the test, you've already figured out what is important to display.

It also encourages you to make all of your inner workings have "dump to text" modes, which is often very handy in other phases of maintenance, or even when adding new features. Sure, sometimes Data::Dumper or similar is enough, but custom dumping usually cuts more to the heart of the situation and so is valuable, usually in addition to using a general-purpose dumper.
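For instance (a hypothetical class of my own, purely to illustrate the point), a custom as_text method can report the two or three facts that actually matter in one stable, diff-friendly line, where a general-purpose dumper would print every field:

```perl
use strict;
use warnings;

package Queue;

sub new { bless { items => [], max => $_[1] }, $_[0] }
sub add { push @{ $_[0]{items} }, $_[1]; return $_[0] }

# A "dump to text" mode: one line that cuts to the heart of the
# object's state (fill level and contents), rather than a raw
# structure dump of every internal field.
sub as_text {
    my ($self) = @_;
    return sprintf "Queue[%d/%d]: %s\n",
        scalar @{ $self->{items} },
        $self->{max},
        join(', ', @{ $self->{items} });
}

package main;

my $q = Queue->new(3);
$q->add('alpha')->add('beta');
print $q->as_text;    # Queue[2/3]: alpha, beta
```

The same method then serves the test suite, log output, and ad-hoc debugging alike.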

- tye        

Replies are listed 'Best First'.
Re^2: When Test Suites Attack (text; diff)
by BrowserUk (Pope) on Oct 29, 2005 at 07:58 UTC
    Sure, sometimes Data::Dumper or similar is enough...

    It's for things like this that a just-a-dumper dumper is valuable. The last thing you want is for the dumper to arrive at arbitrary decisions about whether the substructure under key A is a duplicate of the one under key D on one pass, and vice versa on another, just because the keys came out of the hash in different orders.

    You'd probably have to apply a sort option to get comparable output from a hash anyway, but for this kind of thing you want to see what's there in detail, not an abbreviated reminder that this bit at the bottom is the same as that bit way back up near the top. Especially in heavily self-referential structures.

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: When Test Suites Attack (text; diff)
by Ovid (Cardinal) on Oct 29, 2005 at 06:01 UTC

    This sounds very interesting. I'd love to see more concrete examples of it.


    New address of my CGI Course.

      I used something similar to what's described above in HTTP::WebTest's tests. I had to test whether text output matched given samples, and it too was a PITA to update the tests each time I changed something. The solution was to write those text samples to files and to add a special test mode in which the samples are updated from the test results. Take a look at my module's tests, and particularly at HTTP::WebTest::SelfTest (all on CPAN). Here is the relevant bit from HTTP::WebTest::SelfTest. The subroutine compare_output acts more or less like Test::More's is, with the difference that it only works for text and the expected result is stored in a file.
      sub compare_output {
          my %param = @_;

          my $check_file = $param{check_file};

          my $output2 = ${$param{output_ref}};
          my $output1 = read_file($check_file, 1);

          _print_diff($output1, $output2);
          _ok(($output1 eq $output2) or defined $ENV{TEST_FIX});

          if(defined $ENV{TEST_FIX} and $output1 ne $output2) {
              # special mode for writing test report output files
              write_file($check_file, $output2);
          }
      }

      # ok compatible with Test and Test::Builder
      sub _ok {
          # if Test is already loaded use its ok
          if(Test->can('ok')) {
              @_ = $_[0];
              goto \&Test::ok;
          }
          else {
              require Test::Builder;
              local $Test::Builder::Level = $Test::Builder::Level + 1;
              Test::Builder->new->ok(@_);
          }
      }
      So my workflow was: change something, run the tests, see that they fail as I expect (by inspecting the diffs), then run the tests again in self-update mode.

      Ilya Martynov,
      CTO IPonWEB (UK) Ltd
      Quality Perl Programming and Unix Support UK managed @ offshore prices -
      Personal website -

Re^2: When Test Suites Attack (text; diff)
by adrianh (Chancellor) on Oct 30, 2005 at 12:25 UTC
    I've toyed with writing Perl tools so my modules could instead use the methodology of t/feature.t (a Perl script), t/feature.ok (the expected output), and "perl t/feature.t | diff t/feature.ok -".

    This sounds very similar to the kind of test framework you can build up with Test::Base.