PerlMonks |
Using Test::More to make sense of documentation
by ELISHEVA (Prior)
on May 01, 2009 at 11:39 UTC
Sometimes I run into documentation that is hard to read. It may be poorly written, but more often it just uses jargon or concepts I'm not familiar with, or focuses on a use case different from my own. The only way to really understand how the module or Perl feature works is to experiment. One easy way to experiment is to try different things on the command line using perl -e '...'. This works well for simple documentation questions, but it quickly becomes awkward for more complex ones.
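For instance, if you are not sure whether sort compares numerically or as strings by default, you can simply ask perl (a made-up example; substitute whatever you are unsure about):

    perl -e 'print join(",", sort(2, 10, 1)), "\n"'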
Another way is to write a short script with lots of sample calls and syntax. That eliminates repeated setup, and you can keep a history and notes, but it still isn't perfect. You still need to visually check the answers and comment out incantations that didn't work, and you may have to write a lot of print statements to see and compare results. That gets tiresome, especially when you decide that you "might have had it right with an earlier example after all".

Fortunately, you can skip all that work by getting a bit creative with Test::More. Test::More is normally used to test code you wrote, but tests are so easy and fast to write that you can also use it to make sense of what other people wrote. A further advantage is that it checks the right and wrong answers for you, so rerunning all of your experiments takes a single command.

If the output you want to test is a simple scalar, you can make do with knowing exactly two subroutines: is and isnt. First you try something that you think will work, using either is($got, $expected, $name) or is(eval { ... }, $expected, $name), like this:
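Here is a minimal sketch of such an experiment file (the file name and the particular incantations are invented for illustration; the 'no_plan' bit is explained below):

    # myexperiments.pl - a scratch file for documentation experiments
    use strict;
    use warnings;
    use Test::More 'no_plan';    # no fixed test count - see the discussion below

    # guess: sprintf("%03d", ...) zero-pads to three digits
    is( sprintf('%03d', 7), '007', 'sprintf %03d zero-pads' );

    # guess: hex() copes with a leading "0x"
    is( hex('0xff'), 255, 'hex understands a 0x prefix' );

    # eval {} keeps the run going even when an incantation dies outright
    is( eval { sqrt(16) }, 4, 'sqrt returns a plain number' );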
If it doesn't work, you get a nice little error message like this:
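Suppose one of the guesses turns out to be wrong, say is( sprintf('%3d', 7), '007', 'plain %3d zero-pads too' ). The report would look roughly like this (exact wording and line numbers vary with the Test::More version):

    not ok 4 - plain %3d zero-pads too
    #   Failed test 'plain %3d zero-pads too'
    #   at myexperiments.pl line 16.
    #          got: '  7'
    #     expected: '007'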
Once you've determined that a particular incantation doesn't work, you just change is to isnt or comment the test out. Using isnt, however, means that you never need to worry about going back to check whether incantation X really did work after all and you just didn't look at it carefully enough the first time. The test still runs; it just now expects the wrong answer.
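Continuing the hypothetical example above, the wrong guess stays in the file as a record of the experiment, simply flipped:

    # plain %3d turned out to space-pad, not zero-pad, so flip is to isnt
    isnt( sprintf('%3d', 7), '007', 'plain %3d does not zero-pad' );
    is(   sprintf('%3d', 7), '  7', 'plain %3d pads with spaces instead' );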
Using Test::More for experimenting is a little different from using it for formal testing, in two main ways:
First, the plan. At the top of the Test::More documentation it says "gotta have a plan". A "plan" is just a hard-coded count of the number of tests to run. For formal test situations this is a good idea, because it is easy to temporarily comment out important tests while debugging and then forget to put them back. In experimental mode, however, adding and commenting out tests is the whole point of the game, and it would be awfully tedious to change the test count every time you came up with a new incantation to try out. Declaring the plan as 'no_plan' (or calling done_testing() at the end of the script) sidesteps the problem.

Second, the harness. In formal testing we normally run tests with a special command, like prove, or with a custom harness built on App::Prove, the internal guts of prove. Formal testing needs prove and App::Prove because they do things like recursively search directories for test files, limit printouts to failed tests, and calculate success percentages. When we are experimenting, we may not need such statistics. Scripts using Test::More run just as easily with perl myexperiments.pl; the only difference is that you will see lots more output and won't get statistics. Here's a comparison of the output when all tests are successful:
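For a file with three passing experiments, the two runs compare roughly like this (timings and other version-dependent details trimmed):

    $ perl myexperiments.pl
    ok 1 - sprintf %03d zero-pads
    ok 2 - hex understands a 0x prefix
    ok 3 - sqrt returns a plain number
    1..3

    $ prove myexperiments.pl
    myexperiments.pl .. ok
    All tests successful.
    Files=1, Tests=3, ...
    Result: PASS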
Another advantage of using Test::More (or scripts in general) is that it gets one thinking in terms of sets of experiments rather than singleton incantations. Suppose we want to learn more about how regular expressions work. We might start with an experiment file like this:
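One possible version of that file, probing how greedy and non-greedy quantifiers treat a run of repeated characters (the file name and specific guesses are invented for illustration):

    # regexperiments.pl - how do quantifiers treat runs of repeated characters?
    use strict;
    use warnings;
    use Test::More 'no_plan';

    # guess: a greedy e+ grabs the whole run of e's
    is( ('abcdeee' =~ /(e+)/)[0],  'eee', 'greedy e+ captures the whole run' );

    # guess: the non-greedy version stops after a single character
    is( ('abcdeee' =~ /(e+?)/)[0], 'e',   'non-greedy e+? captures a single e' );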
But the next day we want to know whether the same thing will work if there are two or more repeating patterns. Which one will it find? We could repeat the whole mess using cut and paste ... or we could replace 'abcdeee' and 'eee' with $input and $output and then put the whole mess into a subroutine. Doing this lets us run the same experiments (or additional ones) over and over with a variety of different inputs. Like this:
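A sketch of the parameterized version (the subroutine name and the extra inputs are invented for illustration):

    # run the same repetition experiments against any input/expected pair
    sub try_repeats {
        my ($input, $output) = @_;

        # guess: the first repeated character wins, greedily
        is( ($input =~ /((\w)\2+)/)[0], $output,
            "finds <$output> as the repeated run in <$input>" );
    }

    try_repeats('abcdeee', 'eee');   # the original single-run case
    try_repeats('xxabcde', 'xx');    # a run at the start
    try_repeats('aabbbcc', 'aa');    # two runs: which one is found?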
The above discussion only covers testing subroutines and regular expressions with simple scalar outputs, but the technique can also be adapted to experiment with routines that return more complex data structures: arrays, hashes, and references to the same. Comparing anything via references can be a problem, because is and isnt only compare the references themselves, not the data stored inside them. Fortunately, Test::More's own is_deeply handles nested structures, and CPAN has many, many more modules for working with complex data; Test::Deep and Test::Differences are two popular examples, and hopefully you can find ones that work for you.
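For example, is_deeply compares nested structures element by element; a hypothetical experiment with split might look like this:

    use strict;
    use warnings;
    use Test::More 'no_plan';

    # guess: a limit of 3 stops splitting after two commas
    is_deeply( [ split /,/, 'a,b,c,d', 3 ],
               [ 'a', 'b', 'c,d' ],
               'split with a limit keeps the tail in one field' );

    # guess: a delimiter that never matches yields a single field
    is_deeply( [ split /;/, 'abc' ],
               [ 'abc' ],
               'split with no delimiter present returns the whole string' );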
Unfortunately, none of the tools for comparing advanced data structures have an easy way to test for "not equal", as in the is/isnt combination available for scalars. Commenting the test out, skipping it (see Test::More for details), or changing the expected value to the one that was actually produced may be your only options.

A final note: this use of Test::More doesn't need to be limited to documentation for Perl syntax and modules. Using the various versions of Inline, you can test parameter combinations from non-Perl documentation as well.

Best, beth

Update: fixed nonsensical misplaced phrase in first paragraph.