From time to time, I remind people that they need to write tests for their work. My first real experience writing unit tests was when I uploaded CGI::Safe to the CPAN. That was fairly daunting: some of what I was doing was complex, and I had never written tests before. Now, due to incessant prodding from chromatic, I've gone XP and have started writing unit tests before writing code.
If you've ever read anything about a new programming language, you know that you can't just read about it, you have to do it. Once you actually start getting hands-on experience, then, and only then, do you start to grok the depths (if any) of whatever you're getting into.
If you've not written tests before, the following small test script won't make much sense to you, but I'll point out the highlights that led to revelations for me.
#!/usr/bin/perl
use strict;
use warnings;
$|++;
use Test::More tests => 11;
use lib ('../lib');
use Foo::Test::Engine;
my $site = 'dummy';
my $engine = Foo::Test::Engine->new;
ok( defined $engine,
'Engine->new should return something' );
ok( $engine->isa('Foo::Test::Engine'),
'and it should be an Engine object' );
ok( $engine->{_template}->isa('Template'),
'$engine->{_template} should be a Template object' );
$engine->_process_page;
ok( !$engine->error, '_process_page successful: ' . ($engine->error || '') );
my $template = $engine->_get_base_template;
ok ( defined $template, 'base template is defined' );
like( $engine->headers, qr/Content-Type: text\/html/,
"Headers are being created" );
# we're instantiating another engine object because the tests of private
# methods have already processed the output, thus creating double output
my $engine2 = Foo::Test::Engine->new;
my $output = $engine2->output;
ok( defined $output, "Looks like we got some output" );
unlike( $output, qr/Template Not Found/, "All templates were found" );
my $output_length = length $output;
my $template2 = $engine2->get_template;
ok( $template2->isa('Template'),
    "get_template() should return a 'Template' object");
my $test_output;
$template2->process( \*DATA, {}, \$test_output ) || die $template2->error;
like( $test_output, qr/good/,
    '$test_output should have the word "good" in it');
is( length $output, $output_length, 'Length of $output should not change' );
__DATA__
good
Now, this is very simple. I only have 11 tests, but they are for a fairly consistent API for my Engine object. Some of my tests are dependent on the internals and I dither on whether or not that's a good thing, but the beauty of this is pretty straightforward: many of the tests handle methods that I haven't yet written. I put stubs in there that merely return some hardcoded data. Thus, I can test my API, and when I need to actually write the methods, I don't have to worry about how consistent my API is. I've already figured this out.
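To make that concrete, here's a minimal sketch of what such a stub might look like; the hardcoded header string is an assumption for illustration, chosen only so the headers test above can pass:
package Foo::Test::Engine;

sub headers {
    # stub: hardcoded value until the real header logic is written;
    # the like() test on headers above can already run against this
    return "Content-Type: text/html; charset=ISO-8859-1\n\n";
}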
By having a clearer idea of my API, I can better know what I need to do and I write cleaner code. A case in point: when I first started on this project in the PTE (pre-testing era), I was very proud of the fact that the core script that drives an entire site was only about 40 lines long. By starting over (I wasn't very far into the project) and doing my tests up front, the problems with my previous API shone like beacons. Now, assuming that very little changes, my core script to drive an entire site can be reduced to four lines of code!
use Foo::Test::Engine;
my $engine = Foo::Test::Engine->new;
print $engine->headers;
print $engine->output;
Further, once I find I need to rework the inner magic of my code, I am confident that I can make nice, sweeping changes and still run my tests and know instantly what went wrong. As this is a research project for my work, I've repeatedly been forced to rework things to try new ideas.
Pure bliss :) Tests. Don't leave home without 'em.
Cheers,
Ovid
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.
Re: The Joy of Test
by dws (Chancellor) on Apr 11, 2002 at 20:08 UTC
First, congrats on taking the Testing plunge.
Some of my tests are dependent on the internals and I dither on whether or not that's a good thing, ...
The XP folks call this "code smell". It's a signal that some restructuring or refactoring is needed in the code.
This happens to me a lot when I build post facto test cases, and find that a class is too dependent on, for example, there being a database underneath it, which can make it cumbersome to test. A solution to that particular problem is to restructure the class to make use of a "database interface" class that can be passed in either at object creation time or to a method that populates an instance of the object. This makes it easy to swap in a "dummy" (testing) database interface during testing. This change takes code that looks like:
$calendar->loadFromDatabase();
and reworks it to look like
$calendar->loadFrom($calendarDatabase);
where $calendarDatabase is either a wrapper class that sits atop some "real" database, or an instance of TestCalendarDatabase, which exists solely to provide reproducible testing data.
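To sketch what that swap might look like in Perl (the Calendar and TestCalendarDatabase packages here are hypothetical, invented purely for illustration):
package TestCalendarDatabase;
sub new { bless {}, shift }
sub fetch_events {
    # canned, reproducible data instead of a real query
    return ( { date => '2002-04-11', title => 'dummy event' } );
}

package Calendar;
sub new { bless { events => [] }, shift }
sub loadFrom {
    my ($self, $db) = @_;
    # any object with a fetch_events() method will do, real or fake
    $self->{events} = [ $db->fetch_events ];
    return $self;
}

package main;
my $calendar = Calendar->new;
$calendar->loadFrom( TestCalendarDatabase->new );
Because loadFrom() only cares that its argument responds to fetch_events(), the test database and the real wrapper are interchangeable.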
There's another interesting way to do this called mock objects. A brief description: "We propose a technique called Mock Objects in which we replace domain code with dummy implementations that both emulate real functionality and enforce assertions about the behaviour of our code." (From Endo-Testing) It's Java-focused, but since Perl has a great tradition of borrowing interesting tools and ideas from other languages I don't think that is a problem ;-)
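For a rough idea of what a hand-rolled mock might look like in Perl (the Mock::Mailer package and everything in it are invented for illustration):
package Mock::Mailer;
sub new  { bless { sent => [] }, shift }
sub send {
    my ($self, %msg) = @_;
    # record the call instead of actually sending mail
    push @{ $self->{sent} }, \%msg;
    return 1;
}
sub sent_count { scalar @{ $_[0]{sent} } }

package main;
use Test::More tests => 2;
my $mailer = Mock::Mailer->new;
# the code under test would receive $mailer and call send() on it
$mailer->send( to => 'ovid@example.com', body => 'hi' );
is( $mailer->sent_count, 1, 'exactly one message was "sent"' );
is( $mailer->{sent}[0]{to}, 'ovid@example.com',
    'and it went to the right address' );
The mock both stands in for the real mailer and lets the test assert on how it was used.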
Since I tend to approach problems like this from a code generation standpoint, I find this very intriguing. What if we were to create metadata to configure how the different methods in our objects are supposed to work? Then the code that uses these objects can always depend on a consistent environment and we can test the interesting stuff -- processes that use the objects -- with full confidence.
I haven't implemented anything with this idea yet, but it -- along with generating most of my unit testing code -- has been in the back of my mind for a while now. Testing sizable data-backed systems is hard, and any boost will be greatly welcomed.
Chris
M-x auto-bs-mode
What if we were to create metadata to configure how the different methods in our objects are supposed to work?
You've just cracked the thought barrier that leads to formal methods. For each chunk of code, you create a list of assertions that impose constraints on what the code should do. Then you go through and make sure your code actually obeys those assertions.
Formal methods take the idea a step farther by writing the assertions in mechanically-readable form, then running them through a postulate-matching engine to do the gruntwork of making sure everything checks.
As a trivial example, let's use a mock-language that uses types to impose assertions, and nail down a common blunder in C programming:
pointer: a value that refers to another object
non-null-pointer: a pointer whose value is not zero
string: a sequence of characters in format F
sp: a pointer to a string
nnsp: a non-null-pointer to a string
boolean: TRUE | FALSE
boolean strcmp (nnsp a, nnsp b): takes two non-null
string pointers and returns TRUE if the strings are equal
which makes the error in the following code stanza:
sp x, y;
boolean b;
b = strcmp (x,y);
stand out like a sore thumb. strcmp() demands non-null pointers, but the code above is only giving it regular pointers. The code above meets some, but not all, of strcmp's required assertions, and a formal type engine would point out the error.
Strongly-typed languages build an assertion-tester into the compiler so programmers can build and verify code in the same way. The hints about memory allocation are useful, too. But that's not the only way to handle assertions.
Even though Perl doesn't use strong typing, we can build our own testable assertions about what the program should be doing, and weave that right into our error-handling code.
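As a minimal sketch of what that weaving might look like (the assert() helper and the get_price() stub are invented for illustration):
sub assert {
    my ($condition, $message) = @_;
    die "Assertion failed: $message\n" unless $condition;
}

sub get_price { 42.50 }    # stub standing in for real input

my $price = get_price();
assert( defined $price, 'price must be defined' );
assert( $price >= 0,    'price must be non-negative' );
# past this point, the code may safely assume both conditions hold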
So while I think your use of the test module is cool, I'd challenge you to crank the quality up one more notch, and make your test code part of the program itself.
When you code to the assertions, you find yourself structuring programs so that no given operation can possibly fail. Usually, you end up with a framework like so:
create a data structure filled with default values
known to conform to the required assertions.
collect the input that will instantiate this structure.
iterate over the input {
if (this input is valid) {
put the input into the structure.
} else {
put a conforming error value into the structure.
}
}
## at this point, we can assume that the structure conforms
## to the assertions, whether the input was valid or not
and if you make "this structure will be consumable by any client" one of your assertions, you don't have to branch your code to handle error conditions. Simple example:
my %templates = (
    1   => "template one: data = #DATA#",
    2   => "template two: data = #DATA#",
    3   => "template three: data = #DATA#",
    ERR => "bad template reference, but the data = #DATA#",
);
my %data = (
    1   => "data value 1",
    2   => "data value 2",
    3   => "data value 3",
    ERR => "bad data reference",
);
for (1..20) {
    my $t = $templates{ int rand(5) } || $templates{'ERR'};
    ## assertion: we always get a template that will be usable
    ## in the substitution below.
    my $d = $data{ int rand(5) } || $data{'ERR'};
    ## assertion: we always get a value that will be usable
    ## in the substitution below.
    $t =~ s/#DATA#/$d/;
    print $t, "\n";
}
The assertions guarantee that the main code always works, even if the inputs are bad. The structure of the default values makes both kinds of failure visible, without having to obscure the main-line code behind a bunch of tests and conditional branching, and multiple failures like the ones in this example are a bitch to handle with binary if-else branching.
Guaranteed success: Try it -- it's addictive. ;-)
A solution to that particular problem is to restructure the class to make use of a "database interface" class that can be passed in either at object creation time or to a method that populates an instance of the object.
I like this idea a lot, and I do use a database wrapper for most code I write now. At some point I will probably write another wrapper to do what you mention.
I do have another suggestion, which is the approach I've been using lately. I'm trying to follow the MVC concept where it makes sense. Hence I have broken down the various data types into Model modules. They have a dependency on each other like: Change needs Event needs Provider. So I started at the top (Provider) and wrote the module & tests. It works great. Then I wrote the Event module. In the test for it, I use the Provider object I just finished to create dummy entries in the database. Then I run my tests on this known data and verify all is well. Then I delete everything at the end of the test. All is well, and I've just used up a few auto_increment IDs - no big deal.
The disadvantage to this approach is that during development you'll most likely end up with quite a few invalid entries left in the database. In my case it's no big deal to clean them up by hand, but you should keep this in mind when using this approach.
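A rough sketch of the create/test/clean-up pattern described above (the Provider class and its methods are hypothetical stand-ins for the real Model modules):
use Test::More tests => 2;
use Provider;    # hypothetical Model module

# set up known data in the real database
my $provider = Provider->new( name => 'Test Provider' );
$provider->save;
ok( $provider->id, 'provider saved and got an auto_increment id' );

# exercise the code under test against this known row
is( Provider->fetch( $provider->id )->name, 'Test Provider',
    'round trip through the database works' );

# tear down: delete the dummy row, leaving only a used-up id behind
END { $provider->delete if $provider }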
Re: The Joy of Test
by gmax (Abbot) on Apr 11, 2002 at 19:43 UTC
Thanks, Ovid.
You're one step ahead of me in this respect. I have already figured out that sub stubs (for which I even write the documentation before hacking the code) are the right path toward implementing a good design, but I didn't think of integrating them into my test.pl.
Instead, I was endlessly running test cases that were doing the same thing you are saying, only more complicated.
I am going to try out your suggestion, which I foresee could simplify my coding practice. However, I have a doubt that may be just a technical quibble. I am used to making small test scripts and running them against different aspects of the module I am building. Is there any ready-to-use idiom to make a test.pl script by assembling several small ones together?
Keep on the good work.
_ _ _ _
(_|| | |(_|><
_|
gmax asked if there was a "ready-to-use idiom to make a test.pl script by assembling several small ones together?"
You can look at the documentation for Test::Harness. This module will allow you to run tests from several different sources and will return results based upon their output to STDOUT. Here's a script from the docs to have Test::Harness test itself, using all test scripts in the "t" directory.
$ cd ~/src/devel/Test-Harness
$ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
$verbose=0; runtests @ARGV;' t/*.t
Using /home/schwern/src/devel/Test-Harness/blib
t/base..............ok
t/nonumbers.........ok
t/ok................ok
t/test-harness......ok
All tests successful.
Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)
Cheers,
Ovid
Join the Perlmonks Setiathome Group or just click on the link and check out our stats.
$verbose=0; runtests @ARGV;' t/*.t <----
only scripts named *.t will get run. This is useful, as you can stop a test script in the t/ dir from running simply by renaming it to, say, widget.test
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
Re: The Joy of Test
by drewbie (Chaplain) on Apr 12, 2002 at 14:23 UTC
Further, once I find I need to rework the inner magic of my code, I am confident that I can make nice, sweeping changes and still run my tests and know instantly what went wrong. As this is a research project for my work, I've repeatedly been forced to rework things to try new ideas.
Pure bliss :) Tests. Don't leave home without 'em.
You summed up perfectly the reason we need tests! Code changes; bugs are fixed and introduced. Bugs are bad, so we need to know when they spring to life. Tests are a simple way to quickly & easily verify your code works as expected. As we all know, laziness & hubris are among the virtues espoused by Perl programmers. You put in a little time up front to save a lot of time & headache further down the line. I hate repetition, and anything that reduces it is good in my book. Without tests, there is always a chance that you've missed something. I was sucked in by the excellent article on perl.com, which should be required reading for newbie testers.
IMHO, there is no good excuse NOT to write tests. It could probably be argued that not writing them is a bit negligent on the programmer's part (not to start a flame war...). And with the wonderful modules that the ingenious Michael Schwern has given us, tests are so simple. Witness Test::Simple, Test::More, Test::Harness, and most recently Test::Inline (write tests inline in POD!). Is there really a good reason not to write tests? I definitely realize that some things are hard to test (like web app controllers - I'd love to hear how you test those), but I do think everything deserves at least a simple test or two. Even if it's just use_ok('My::Module::Name');
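Even that minimal test is a complete, runnable script:
use Test::More tests => 1;
use_ok('My::Module::Name');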