PerlMonks
Using Test::More to make sense of documentation

by ELISHEVA (Prior)
on May 01, 2009 at 11:39 UTC

Sometimes I run into documentation that is hard to read. It may be poorly written, but more often it just uses jargon or concepts I'm not familiar with or focuses on a use case different from my own. The only way to really understand how the module or Perl feature works is to experiment.

One easy way to experiment is to try different things on the command line using perl -e '...'. This works well for simple documentation questions, but not for more complex ones:

  • it is easy to mess up quoting on the command line and get garbage results.
  • if setup is needed, I have to do setup again and again.
  • command line history is short lived. I can't go back next week and see what worked and didn't.
  • I can't annotate command line history with what I thought I got vs. what I actually did get. Knowing both can help me understand the syntax or module better.

Another way is to write a short script with lots of sample calls and syntax. That way you eliminate repeated setup. You can also keep a history and notes, but it still isn't perfect. One still needs to visually check the answers and comment out incantations that didn't work. You may also have to write a lot of print statements to see and compare results. That can get tiresome, especially if you decide that you "might have had it right with an earlier example after all".

Fortunately, you can skip all that work by getting a bit creative with Test::More. Test::More is normally used to test code you wrote, but tests are so easy and fast to write, that you can also use it to make sense of what other people wrote. A further advantage is that it checks the right and wrong answers for you, so all you need to do is press a button with all your experiments.

If the subroutine output you want to test is a simple scalar, you can make do with knowing exactly two subroutines: is and isnt. First you try something that you think will work, using either is(test expression, ...) or is(eval {...},...), like this:

    is($actual, $expected, $descriptionOfWhatYouTried);

    # to check the effect of parameters passed to a subroutine
    is(substr('abcdef', 0, 2), 'ab', q{tried substr(..,0,2)});

    # to check what got captured by a regular expression
    is(eval { 'abcdef' =~ /^(\w+)/; $1 }, 'ab', q{tried /^\w+/});

If it doesn't work, you get a nice little error message like this:

    not ok 1 - tried /^\w+/
    #   Failed test 'tried /^\w+/'
    #   in Monks/Snippet.pm at line 6.
    #          got: 'abcdef'
    #     expected: 'ab'
    1..1
    # Looks like you failed 1 test of 1.

Once you've determined that a particular incantation doesn't work, you just change is to isnt or comment the test out. Using isnt, however, means that you never need to worry about going back and checking whether incantation X really did work after all and you just didn't look at it carefully enough the first time. The test still runs; it just now expects the wrong answer.

    # changed is to isnt
    # This doesn't work. Captures 'abcdef' instead of just 'ab'
    isnt(eval { 'abcdef' =~ /^(\w+)/; $1 }, 'ab', q{tried /^\w+/});

Using Test::More for experimenting is a little different from using it for testing. There are two main differences:

  • test plans. You may prefer not to have a plan.
  • running tests. You may prefer to use perl rather than prove.

At the top of the Test::More documentation it says "gotta have a plan". A "plan" is just a hard-coded count of the number of tests to run. For formal test situations this is a good idea, because it is easy to temporarily comment out important tests while debugging and then forget to put them back. However, in experimental mode, adding and commenting out tests is the whole point of the game. It would be awfully tedious to have to change the test count every time you came up with a new incantation to try out.
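A minimal sketch of what such a plan-free experiment file looks like (the two checks below are illustrative examples of mine, not from the post; newer Test::More releases also offer done_testing() as an alternative to no_plan):

```perl
use strict;
use warnings;

# qw(no_plan) tells Test::More not to expect a fixed test count,
# so experiments can be added or commented out without bookkeeping.
use Test::More qw(no_plan);

# two throwaway experiments, just to show the shape of the file
is(length('abcdef'), 6, q{length counts characters});
is(substr('abcdef', 0, 2), 'ab', q{substr(EXPR, 0, 2) takes the first two chars});
```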

In formal testing we normally use a special command, like prove, to run tests, or we create a custom harness using App::Prove, the internal guts of prove, as a base. Formal testing relies on prove and App::Prove because it needs to do things like ignore successful tests, recursively search directories for test files, limit printouts to failed tests, and calculate success percentages.

But when we are experimenting, we may not need such statistics. Scripts using Test::More can run just as easily via perl MyExperiments.pl. The only difference is that you will see lots more output and won't get statistics. Here's a comparison of the output when all tests are successful:

    # using perl MyExperiments.pl
    ok 1 - using /(..)/
    ok 2 - using /(.)\1/
    ok 3 - using /((.)\1)/;$2
    ok 4 - using /((.)\1)/;$1
    ok 5 - using /(.)(\1+)/;"$1$2"
    ok 6 - using /(..)/
    ok 7 - using /(.)\1/
    ok 8 - using /((.)\1)/;$2
    ok 9 - using /((.)\1)/;$1
    ok 10 - using /(.)(\1+)/;"$1$2"

    # using prove MyExperiments.pl
    MyExperiments.pl....ok
    All tests successful.
    Files=1, Tests=10,  0 wallclock secs ( 0.05 cusr +  0.04 csys =  0.09 CPU)

Another advantage of using Test::More (or scripts in general) for experiments is that it gets one thinking in terms of sets of experiments, rather than singleton incantations. Suppose we want to learn more about how regular expressions work. We might start with an experiment file like this:

    use strict;
    use warnings;
    use Test::More qw(no_plan); # number of tests is constantly changing

    # changed is to isnt:
    # this doesn't work, results in 'ab'
    isnt(eval {'abcdeee' =~ /(..)/;$1}, 'eee', q{using /(..)/});

    # changed is to isnt:
    # this printed out only 'e'
    isnt(eval {'abcdeee' =~ /(.)\1/;$1}, 'eee', q{using /(.)\1/});

    # changed is to isnt:
    # this printed out undef
    isnt(eval {'abcdeee' =~ /((.)\1)/;$2}, 'eee', q{using /((.)\1)/;$2});

    # changed is to isnt:
    # this also printed out undef
    isnt(eval {'abcdeee' =~ /((.)\1)/;$1}, 'eee', q{using /((.)\1)/;$1});

    # BINGO! this worked
    is(eval {'abcdeee' =~ /(.)(\1+)/;"$1$2"}, 'eee', q{using /(.)(\1+)/;"$1$2"});

But the next day we want to know if the same thing will work right if there are two or more repeating patterns. Which one will it find? We could repeat the whole mess using cut and paste ... or we could replace 'abcdeee' and 'eee' with $sInput and $sOutput and then put the whole mess into a subroutine. Doing this lets us run the same experiments (or additional ones) over and over with a variety of different inputs. Like this:

    use strict;
    use warnings;

    # we have no plan!
    # usually want to pick and choose tests anyway
    # so the number is not fixed
    use Test::More qw(no_plan);

    #============================================================
    # VARIOUS EXPERIMENTS
    #============================================================

    #---------------------------------------------------
    # GOAL: find run of repeating letters
    #---------------------------------------------------
    sub regexRepeat {
        my ($sInput, $sOutput) = @_;

        #-----------------------------------------
        # add tests here to try out different things
        # if it doesn't work, change is to isnt
        #------------------------------------------

        # changed is to isnt:
        # this doesn't work, $sInput='abcdeee' results in 'ab'
        isnt(eval {$sInput =~ /(..)/;$1}, $sOutput, q{using /(..)/});

        # changed is to isnt:
        # $sInput='abcdeee' printed out only 'e'
        isnt(eval {$sInput =~ /(.)\1/;$1}, $sOutput, q{using /(.)\1/});

        # changed is to isnt:
        # $sInput='abcdeee' printed out undef
        isnt(eval {$sInput =~ /((.)\1)/;$2}, $sOutput, q{using /((.)\1)/;$2});

        # changed is to isnt:
        # $sInput='abcdeee' also printed out undef
        isnt(eval {$sInput =~ /((.)\1)/;$1}, $sOutput, q{using /((.)\1)/;$1});

        # BINGO! this worked
        is(eval {$sInput =~ /(.)(\1+)/;"$1$2"}, $sOutput, q{using /(.)(\1+)/;"$1$2"});
    }

    #============================================================
    # HERE IS WHERE I SELECT EXPERIMENTS
    #============================================================
    regexRepeat('abcdeeef', 'eee'); # only one to find
    regexRepeat('abbb;eeef', 'bbb'); # should find first

The above discussion only covers testing subroutines and regular expressions with simple scalar outputs, but the experimental technique can also be adapted to experiment with routines that output more complex data structures: arrays, hashes, and references to the same. Comparing anything via references can be a problem because is and isnt compare only the references themselves, not the data stored inside them.

However, CPAN has many, many modules for working with complex data and hopefully you can find a few that work for you. This is just a small sampling:

  • is_deeply (bundled with Test::More) does a very simple walk of the data structures. It ignores blessings, which may or may not bother you.
  • Test::Deep provides tools for more complex structures where a simple walk through the data structure is not enough. Its main drawback is that it automatically imports the world. Some people do not like things that pollute namespaces so blithely.
  • Test::Differences uses Data::Dumper to compare strings and complex data structures. Best used with Perl 5.8 and up. Before Perl 5.8, it didn't compare data structures with hashes properly.
  • Data::Match takes an entirely different strategy: using regular expressions as the model for comparing components of a data structure.
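To make the reference problem concrete, here is a small sketch (the data and names are mine, not from the post) contrasting isnt on raw references with is_deeply on their contents:

```perl
use strict;
use warnings;
use Test::More qw(no_plan);

my $got      = { name => 'ab', pos => [0, 2] };
my $expected = { name => 'ab', pos => [0, 2] };

# is/isnt compare the references themselves (two distinct addresses),
# so two separate but identical structures look "not equal" to them.
isnt($got, $expected, q{is/isnt see two different reference addresses});

# is_deeply walks both structures and compares the contents instead.
is_deeply($got, $expected, q{is_deeply compares contents, not addresses});
```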

Unfortunately, none of the tools for comparing advanced data structures have an easy way to test for "not equal", as in the is/isnt combination available for scalars. Commenting out, skipping tests (see Test::More for details), or changing the expected value to the one that was actually produced may be your only options.
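One core-only workaround for the missing "not deeply equal" check, sketched here as my own suggestion rather than anything from the post: serialize both structures deterministically with Data::Dumper and hand the resulting strings to isnt:

```perl
use strict;
use warnings;
use Data::Dumper;
use Test::More qw(no_plan);

# make the dumps deterministic so string comparison is meaningful
$Data::Dumper::Sortkeys = 1;  # stable hash-key order
$Data::Dumper::Indent   = 0;  # one-line output

my $got      = { first => 'ab',  rest => ['c', 'd'] };
my $expected = { first => 'eee', rest => ['c', 'd'] };

# "not deeply equal": the dumped strings differ, so isnt passes
isnt(Dumper($got), Dumper($expected), q{structures differ, as expected});
```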

A final note: this use of Test::More doesn't need to be limited to testing documentation for Perl syntax and modules. Using the various versions of Inline you can test various parameter combinations for non-Perl documentation as well.

Best, beth

Update: fixed nonsensical misplaced phrase in first paragraph.

Replies are listed 'Best First'.
Re: Using Test::More to make sense of documentation
by BrowserUk (Patriarch) on May 01, 2009 at 12:20 UTC

    Jeez! You do like to do things the hard way. Get yourself a REPL.

    c:\test>p1
    [0] Perl> $a = 'abcdeee';;
    [0] Perl> $a =~ /(..)/ and print $1;;
    ab
    [0] Perl> $a =~ /(.)\1/ and print $1;;
    e
    [0] Perl> $a =~ /((.)\1)/ and print $2;;
    [0] Perl> $a =~ /((.)\1)/ and print $1;;
    [0] Perl> $a =~ /(.)(\1+)/ and print "$1$2";;
    eee
    [0] Perl> $a = 'abcdeeef';;
    [0] Perl> $a =~ /(.)(\1+)/ and print "$1$2";;
    eee
    [0] Perl> $a = 'abbb;eeef';;
    [0] Perl> $a =~ /(.)(\1+)/ and print "$1$2";;
    bbb

    Most of that just requires a cursor-up and a couple of character edits.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I only like to do things the hard way when it makes things easier. :-)

      The only limitation of command line testing addressed by a REPL (read-eval-print loop) is the quoting problem. Both command lines and REPLs are excellent tools for quick clarifications. They are less helpful when one wants to explore what subroutine X::foo really does with its parameters, especially if you only use that language feature or X::foo every few months.

      Command lines also have up-cursors and history. However, the history of command lines and REPLs is basically a session transcript and goes away after N lines of input. You can't go back to last week or last month's tests (unless you use it very rarely). You could increase N so that a longer history can be kept, but the history isn't categorized by topic the way a named file can be.

      Best, beth

        However, the history of command lines and REPLs is basically a session transcript and goes away after N lines of input.

        Hm. I do a lot of experiments with my REPL. There is almost always a copy running in my system somewhere. With a command line history of 500 lines and a 1000 line console, that's usually more than enough to record the experiments for as long as I need them.

        On the rare occasions that I wish to keep something, I have a habit of C&Ping the relevant bits of the console log into the script as a comment or after an __END__ tag.

        But mostly I'm not interested in the things I tried that failed, only that which worked. And that ends up in whatever script I was doing my experiments for. So if I want to find it again, I just grep *.pl for it.

        If I used your method, I would still end up grepping for it as I would find it onerous--if not impossible--to come up with enough meaningful names to accurately catalog all the experiments I do with my REPL.

        Still, we all have our own ways of working, and if yours works for you, that's all that matters. It does seem awfully laborious though.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Using Test::More to make sense of documentation
by spectre9 (Beadle) on May 01, 2009 at 19:15 UTC

    If you're running under Unix or Linux, the script command might help you. It creates a file (by default, named typescript) that contains everything you type and all output from the command.

    "Playback" of the file does not have all the nice animation delays I remember from good old ANSI Animation, but it is exceedingly simple to use. Combine it with a history command before you exit the logging, and your probably covered.

    Personally, I find most of my needs are covered by having a decidedly large history buffer with "savehistory" set (I use tcsh). Since I am rigorous about how I set up my home directory and where I 'play' around, it's not too hard to recreate the same results, even when executing a perl -e with a use Module; included.

    Create a location for saving the histories, date them ISO style, and you should be good to go, with a nice grep finding you many good hits.

    "Strictly speaking, there are no enlightened people, there is only enlightened activity." -- Shunryu Suzuki
Re: Using Test::More to make sense of documentation
by Jenda (Abbot) on May 01, 2009 at 18:08 UTC

    There are two great "side-effects". First, if you do end up finding a bug, you already have a test script to send the module maintainer (and possibly to use while fixing the problem) and second, you end up with a test script for a possibly undertested module. Which you may send the maintainer as well to include in the distribution.

    If the documentation is lacking, it's likely the tests are as well.

    Jenda
    Enoch was right!
    Enjoy the last years of Rome.

Re: Using Test::More to make sense of documentation
by JavaFan (Canon) on May 01, 2009 at 13:24 UTC
    Test::More is typically used to compare two thingies, and report whether they are equal or not. And that's what you seem to be doing in your post.

    IMO, that's a hard way of learning things. I'd rather see what is calculated, not just whether what was calculated actually matches what I think it might calculate (especially when you are learning-by-trying: by the time you can make a reasonable prediction of what the output will be, you're almost done learning).

    For regexes, for instance, inspecting $& is far more informative than guessing what $1 will be. And, for failures, I can learn far more from the output of use re "debug"; than I can from Test::More saying not ok.

      The goals of experimentation are different from the goals of testing. In test mode, you propose an output and verify that the actual output is "as expected". This, as you note, takes a lot of understanding of the module.

      However, in experimental mode, you start with a goal and consider various possible inputs and incantations that might result in the desired output. It is exploratory, not predictive. For failures, Test::More::is prints out a lot more than just "not ok". In fact, it prints out the right result, so it can be a good way to "see what happens".

      An alternative see-what-happens technique is the command line or a REPL (read-eval-print loop). Both are very good tools for small clarifications. REPLs don't run into quoting problems like the command line does. However, both have several other limitations: repeated setup, lack of annotations, inability to repeat what you did last week, inability to repeat en masse a batch of trials with different inputs, and so on.

      $& - agreed. But that wasn't the point of the example and I apologize if it wasn't clear. The point was really to show

      • how to test an incantation where the goal isn't simply the output of a subroutine (via eval)
      • an experimental approach. The intent was to give a sampling of the kinds of mistakes one might make while feeling one's way to understanding how capturing and back-references work.
      • being able to keep an annotated history of what does and doesn't work. A command line or REPL provides some short-term history, but no annotations you can go back to weeks later.
      • the ability to repeat experiments en-masse for new input by encapsulating a set of tests in a subroutine.

      Best, beth

        I've sometimes used the inverse of this (reading the tests to understand the intended usage), but had never thought of it as an exploratory technique. Further consideration suggests that these experimental test files might be useful to the author, in at least two cases:

        1. when reporting a code bug or patch.
        2. when reporting a documentation bug or patch.

        The first of these is recommended practice, but the second might also be useful. When receiving a documentation patch or question, it's sometimes hard to tell why the person asking the question doesn't understand my <sarcasm>perfectly clear and understandable</sarcasm> documentation. I can see the test file containing your experiments being very useful in making assumptions and misconceptions slightly clearer.

        While some might argue that this isn't the best way to learn an interface, I think you have added another tool to my learning toolbox.

        G. Wade
Re: Using Test::More to make sense of documentation
by jplindstrom (Monsignor) on May 02, 2009 at 16:18 UTC
Re: Using Test::More to make sense of documentation
by roho (Chancellor) on May 07, 2009 at 08:45 UTC
    I think your idea is very innovative. I wish more people were willing and able to think outside the box and find novel ways to use existing features to their advantage. Keep up the good work!

    "Its not how hard you work, its how much you get done."

Node Type: perlmeditation [id://761266]
Approved by ww
Front-paged by Arunbear