What goes in a test suite?

by Marza (Vicar)
on Aug 10, 2002 at 07:25 UTC

Marza has asked for the wisdom of the Perl Monks concerning the following question:

In Touching it when it ain't broke, Ovid suggested using a test suite for code maintenance.

Sounds like a reasonable suggestion, but what would go in a test suite? I have never used one myself and haven't a clue as to what would be valid tests. I would like to create one, as I am starting to write more in-depth scripts.

Re: What goes in a test suite?
by chromatic (Archbishop) on Aug 10, 2002 at 16:46 UTC

    There are at least two kinds of tests: unit tests and integration tests. Unit tests exercise individual pieces of your code (functions, modules) in sufficient isolation to demonstrate that the implementation works as you expect. Integration tests explore the project as a whole (as far as possible), to show that the individual pieces work together to provide the desired behavior.

    Start with Test::Tutorial, then my Introduction to Test::More. You might also like a longer and slightly modified version of Test::Tutorial, and the Testing Evangelism article. A very detailed example can be found in the Test::MockObject discussion.
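    As a concrete example, a small unit test built with Test::More might look like this (Math::Simple and its add() function are hypothetical stand-ins for your own code):

        use strict;
        use warnings;
        use Test::More tests => 2;

        use Math::Simple;    # hypothetical module under test

        # unit tests: exercise one function in isolation
        is( Math::Simple::add(2, 2),  4, 'add() handles positive integers' );
        is( Math::Simple::add(-1, 1), 0, 'add() handles a negative operand' );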

Re: What goes in a test suite?
by adrianh (Chancellor) on Aug 10, 2002 at 14:08 UTC

    If you've not come across them already take a look at Test::Simple, Test::More and Test::Tutorial.

    You might also find Test::Class of interest (but since I wrote that one I may be biased :-)
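    For the curious, Test::Class groups tests into methods on a test class; a minimal sketch (My::Widget is a hypothetical module) looks something like this:

        package My::Widget::Test;
        use base 'Test::Class';
        use Test::More;

        # each method marked ":Test(n)" runs as n tests
        sub creation : Test(2) {
            my $widget = My::Widget->new;    # hypothetical class under test
            ok( $widget, 'constructor returns something true' );
            isa_ok( $widget, 'My::Widget' );
        }

        package main;
        My::Widget::Test->runtests;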

    As to what you test - I tend to write all my code "Test First". Basically this means:

    1. I write the test for what I want my code to do.
      This (of course) fails since there isn't any code yet.
    2. I then write code until my test passes.
    3. Then, since my test passes, I know I've finished and can move on to the next piece of functionality that needs testing.

    Works very well for me. I get into the coding "flow" much faster this way and end up with very robust code.
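    Concretely, a first iteration of that loop might look like this (Counter and its next() method are hypothetical stand-ins for whatever you're building):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # step 1: written before Counter.pm exists, so this fails first
        use_ok( 'Counter' );

        # step 2: write Counter.pm until these pass
        my $c = Counter->new;
        is( $c->next, 1, 'first call to next() returns 1' );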

    Test suites also make changing your code a much less stressful business since you immediately know if you've broken anything.

    I always have a window open that sits there doing a make test every time any file in my project changes. Instant feedback. Very handy.

    This style comes from Extreme Programming (also known as XP - just to be confusing). If you're interested take a look at extremeprogramming.org.

    Retrofitting a test suite on an existing code base can be a much more painful task. Two approaches that have worked for me are:

    1. Testing the documentation (Test::Tutorial has an example of this).
    2. Write tests for any bit of code that you need to change. That way you can check that you don't break anything as you make the change, and you will gradually build up a set of tests for the whole code base.
      I always have a window open that sits there doing a make test every time any file in my project changes.

      how do you do this?

        I have this sitting in my ~/bin
        #! /usr/bin/perl -w
        # onchange file ... command
        # run command if any of the given files/directories change
        use strict;
        use warnings;
        use File::Find;
        use Digest::MD5;

        my $Command = pop @ARGV;
        my $Files   = [@ARGV];

        my $Last_digest = '';

        # digest the name and modification time of every file that
        # still exists; a different digest means something changed
        sub has_changed {
            my $files = shift;
            my $ctx   = Digest::MD5->new;
            find( sub { $ctx->add( $File::Find::name, ( stat($_) )[9] ) },
                  grep { -e $_ } @$files );
            my $digest      = $ctx->digest;
            my $has_changed = $digest ne $Last_digest;
            $Last_digest = $digest;
            return $has_changed;
        }

        # poll once a second, re-running the command on any change
        while (1) {
            system($Command) if has_changed($Files);
            sleep 1;
        }

        I have this in my .cshrc

        alias testwatch "onchange Makefile.PL Makefile */*.pod */*.pm *.pm t test.pl 'clear; make test \!*'"

        I then type % testwatch or % testwatch TEST_VERBOSE=1 in the root directory of any module I'm messing with. This won't work 100% of the time but hits that 80/20 spot for me.

        I have a little todo list for an improved test monitor that I will, when some of that mythical free time comes along, implement. Now that we have the lovely Test::Harness::Straps, it's not even that difficult.

Re: What goes in a test suite?
by BrowserUk (Patriarch) on Aug 11, 2002 at 03:53 UTC

    Most effort in too many test suites goes into ensuring that the code does what it is thought it should do, especially when it is given correct data.

    The two most often omitted testing strategies are:

    1. What does it do when given plausible, wrong data? (A small sketch follows this list.)

      The classic example of this seemingly continues to plague internet applications: the infamous buffer overflow condition.

    2. How clearly defined are the design criteria to start with?

      Most older monks will have seen the cartoon strip for the child's swing. It reads something like:

      • How the customer envisaged it
      • How the pre sales analyst captured it
      • How the salesman sold it
      • How the systems analyst scoped it
      • How the planner project-planned it
      • How the budgeting dept costed it
      • How the programmer coded it
      • How the tester tested it
      • How it was finally produced

      That's only a paraphrase of the original, and it is probably funnier with the pictures, but it serves my purpose in two ways.

      First, the joke itself won't detract too much. Second, it emphasises my real point.

      Everyone sees, interprets, reads & prioritises according to their own particular bias.

      As a result, the transition from paper to product often leaves many, if not most, of those people disappointed with the results. Many believe the final arbiter in this process is the programmer; after all, he's the one that actually produces the product. In reality, he is constrained from above and below in that hierarchy. If he takes too long in producing it, management will call him over-time, budgeting will call him over-budget, the salesman will call him pedantic, and the customer won't pay. If he varies from the spec too far, the analysts will rein him in through pre-sales, and the salesman may try to intervene on his behalf.

      However, if the tester says it works, the salesman will believe him, management will rubber-stamp it, the analysts are on their next project, and budgeting will raise the invoice. The tester's spec wins out. Of course, the tester is often the programmer, in which case all bets are off, unless the programmer is also most or all of the other job titles as well. In that case, the specs are, or ought to be, the same. The only fly in that ointment is the customer.

      It is therefore vitally important that the customer likes what the tester is testing for. That means that before any of the other roles get to view or act upon the spec, the tester should refer it, as the pre-sales analyst wrote it up, back to the customer and ensure that they agree. The mechanism for ensuring this agreement should be the test plan.

      The tester's greatest skills come not from devising test data to exercise boundary conditions, nor from his analysis of the algorithms used to provide verification of outputs--though both are important. His greatest skill is in devising written and verbal wordings of the verification criteria for meeting the spec. By doing this with simple, concise and precise language, he can effectively 'tie the spec down' into a near-immutable object. The benefit of this is that at the same time as he reduces, as far as is practicable, the room for ambiguities, he also enables simple agreement on the specification variations that often arise out of the waterfall process and even, though less frequently, as a result of bottom-up driven changes.
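    As promised in point 1 above, here is a minimal sketch of testing plausible, wrong data (My::App and parse_age() are hypothetical; the point is that bad input should be rejected cleanly, not mangled or overflowed):

        use strict;
        use warnings;
        use Test::More tests => 3;

        use My::App qw(parse_age);    # hypothetical module and function

        # the correct data most suites already cover
        is( parse_age('42'), 42, 'plain integer accepted' );

        # the plausible, wrong data most suites forget
        ok( !defined parse_age('forty-two'),     'words rejected, not coerced' );
        ok( !defined parse_age('9' x 1_000_000), 'absurdly long input rejected' );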

    To sum up: let your tester be an independent person, with independent authority and an equal voice. Don't sideline either the role or the task into a ghetto of afterthought and slack space. If you do, you open the door to the salesman overselling, the analyst over-analysing, the designer over-designing, the planner under-planning, the coster under-costing, the programmer badly coding, and the underwhelmed customer going elsewhere at the trot.

    ---

    Throughout this, I have referred to the tester and the other job functions as 'he'. This is for simplicity only. In my experience, female testers often outshine their male counterparts in several ways. The two most common of these are, again in my experience, that they are generally better at sticking to the task at hand and not drifting off in new and interesting, but unspecified, directions (a particular failing of mine), and that they seem better at handling changes, interruptions, restarts and random variations than most guys. Possibly because they are less prone to righteous indignation and anger than us? Suffice it to say that my choice of gender nouns was not by way of bias.

Re: What goes in a test suite?
by r.joseph (Hermit) on Aug 10, 2002 at 08:07 UTC
    What I have done in the past is simply create a script (or a collection of them) that runs, many times and in many different permutations, what a user would do.

    The advantage of having something like this is that every time a change is made to the code base, you can re-run the test suite, which can contain as many test cases as you want; it runs them all for you and reports on what succeeded and what did not.

    Constant testing (like, every 5 minutes) is a foundation of Extreme Programming (XP), a programming philosophy which I have come to enjoy using, in moderation of course :-).

    r. j o s e p h
    "Violence is a last resort of the incompetent" - Salvor Hardin, Foundation by Issac Asimov
Re: What goes in a test suite?
by Abigail-II (Bishop) on Aug 22, 2003 at 12:33 UTC
    but what would go in a test suite?

    Anything that is important for your product. Here are a few points you may want to think about. Not all points are relevant in all cases though.

    1. Correctness is usually important, and one of the first things that is tested. This could range from simple things like "if I plug X and Y into this function, the result should be Z" (typically the things that are tested when "make test" of a CPAN module is run) to "if I push the red button after I disable the blue switch, while standing on my head humming the national anthem, does it make a decent decaf?".
    2. System resources. How CPU-intensive is it? How much memory does it use? If your product has always hummed along on a Sparc 5, but a new minor release suddenly requires a 6-processor 3800, then your customers won't be happy, even if the product passes all correctness tests. Besides CPU and memory usage, there's disk usage, I/O usage, shared memory usage, etc., to consider.
    3. Robustness. What does your program do when something unexpected happens, like a file that suddenly disappears, a network that goes down, a remote server that no longer answers, a disk that's no longer responsive? How does the program behave when it receives signals? What does it do when I cut-and-paste a million characters into a small text field? How does it behave on garbage input data? Or on no input data? What happens if the program crashes?
    4. User interface. If you have a GUI program with many screens, is the user interface consistent, or does each screen need its own manual? Is the user interface simple? Can I do things that need to be done often with one keystroke or mouse-click, or do I need control-alt-shift-meta-cokebottle all the time? Does your product conform to the house style? Is each background the right shade of orange, or do you still have screens in green, the house style of 2001? Are all the logos correct? [1]
    5. Regression tests. Adding a feature in one part of the software shouldn't have an effect on another, unrelated, part of the software. You can lose customers that way, because people do get upset if the software suddenly behaves differently. I'd be mighty upset if the next release of vi suddenly gave a different meaning to 'a'. (A small sketch follows this list.)
    6. Stress testing. Testing a single instance of a product is one thing, but what happens if two people use the product at the same time? Or 10? Or 1000? How does the product behave if run continuously for 24 hours? Or a week?
    7. Security. Can the software be used to compromise security in one way or another? This not only means gaining unlawful access to the box the software runs on, but also access to data (for instance, in a database).
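    As promised in point 5, a regression test is often just yesterday's bug report turned into a permanent test; a minimal sketch (format_name() and the bug number are hypothetical):

        use strict;
        use warnings;
        use Test::More tests => 2;

        use My::App qw(format_name);    # hypothetical function

        # the behaviour as originally specified
        is( format_name('Ada', 'Lovelace'), 'Lovelace, Ada', 'basic formatting' );

        # regression: hypothetical bug #123 once broke single-word names;
        # this test stops a later change from quietly undoing the fix
        is( format_name('Madonna', ''), 'Madonna', 'single names survive' );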

    [1] Don't laugh. I've worked for a company that was once threatened with legal action because a button displayed the old name of the product - a name that our company no longer had the right to use.

    Abigail

Re: What goes in a test suite?
by Marza (Vicar) on Aug 16, 2002 at 03:39 UTC

    Thank you everybody! Excellent information all around. ++ for everybody!

    I will dive into this after we get the corporate HQ moved! :( Ugh, that is going to be a looooooongggg weekend! Oh well, that is why they pay me!

Re: What goes in a test suite?
by Anonymous Monk on Aug 22, 2003 at 09:14 UTC
    My high-school programming teacher kept demanding a precondition and postcondition for every function we wrote. A test suite should ensure the postcondition holds whenever the precondition does ;)

      That's certainly one thing that tests should do. But there are many others too (e.g., do you get appropriate error reporting when a pre-condition is not met on a function, does the application as a whole exhibit appropriate behaviour, etc.).

      You might also want to look at Design By Contract as an alternative way of using pre-conditions, post-conditions and invariants in software development.
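      A minimal sketch of testing both sides of such a contract (sqrt_int() is a hypothetical function whose precondition is a non-negative integer):

          use strict;
          use warnings;
          use Test::More tests => 2;

          use My::Math qw(sqrt_int);    # hypothetical function

          # precondition met: the postcondition should hold
          is( sqrt_int(9), 3, 'postcondition holds for valid input' );

          # precondition violated: expect a clean, reported failure
          eval { sqrt_int(-1) };
          like( $@, qr/non-negative/, 'violated precondition raises an error' );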
