
Regression testing

by jplindstrom (Monsignor)
on Sep 26, 2000 at 16:44 UTC ( #34002=perlquestion )
jplindstrom has asked for the wisdom of the Perl Monks concerning the following question:

I just decided I need to start using regression testing. (I overrode a method in a subclass and missed that the method had already been overridden with different behaviour than I expected. Oops!)

I don't know much about it beyond the purpose: catching the moment when you introduce bugs into previously working code.

The problem I'm facing right now is that all examples I have seen so far are pretty simple.

I want to test not only single modules, but a group of objects working together. On their own they are fairly simple, but together and with various input their behaviour is more complex. I simply have a hard time finding things to pinpoint for testing.
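For concreteness, here is a sketch (with made-up class and method names, and Test::More from CPAN) of the kind of interaction test I mean: two trivial objects that are simple alone but share state when wired together.

```perl
#!/usr/bin/perl
# interaction.t -- sketch of testing two objects together.
# Counter and Reporter are hypothetical classes defined inline
# for illustration; in practice they would live in their own .pm files.
use strict;
use warnings;
use Test::More tests => 3;

package Counter;
sub new   { bless { n => 0 }, shift }
sub bump  { $_[0]{n}++; $_[0] }
sub count { $_[0]{n} }

package Reporter;
sub new    { bless { src => $_[1] }, $_[0] }
sub report { "count=" . $_[0]{src}->count }

package main;

my $c = Counter->new;
my $r = Reporter->new($c);

is($r->report, 'count=0', 'reporter sees a fresh counter');
$c->bump->bump;
is($c->count, 2,          'counter bumps independently');
is($r->report, 'count=2', 'reporter reflects the shared state');
```

Each class gets its own .t file for its behaviour in isolation; a file like this one pins down only the hand-off between them.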

Any ideas? Gotchas? Lessons learned?

Of course, URLs and books are nice too :)

Re: Regression testing
by dchetlin (Friar) on Sep 26, 2000 at 16:58 UTC

    Well, certainly take a long look at the regression tests in the Perl core. They're not in the best shape (I swear there are at least 5 different methods for doing the ok/not ok thing), but they are extremely thorough, and in general show the right ways to test things.

    I don't think you'll find too many examples of different (separable) functionalities being tested together. I'm not sure I can think of a particularly easy or sensible way to do that. Would there really be no way of testing certain functionality without needing to test two objects together? I can't think of any example where a test in the core checks more than one distinct functionality at a time.

    The general point of regression testing is to define functionality by testing the building blocks -- once you have the simple operations guaranteed, they'll fit together correctly.

    Most important is maintaining the test suite. Any time you fix a bug, write a test for it. Any time you change functionality, write a test for it. Any time you add documentation, write a test to make sure that things are behaving the way that you're documenting them. In fact, that's a good way to start to fill out your test suite -- go through your documentation and write a test for everything you assert about your modules.
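    A sketch of that documentation-driven approach, using Test::More (the My::Slug module and its POD claims are invented for illustration): each sentence the docs assert becomes one test.

```perl
#!/usr/bin/perl
# docs.t -- one test per documented claim.
# Suppose the POD for a hypothetical My::Slug module promises:
#   "slug() lowercases the input and turns runs of whitespace into dashes"
use strict;
use warnings;
use Test::More tests => 3;

package My::Slug;
sub slug {
    my ($s) = @_;
    $s = lc $s;
    $s =~ s/\s+/-/g;
    return $s;
}

package main;

# Each documented claim becomes an assertion:
is(My::Slug::slug('Hello World'), 'hello-world', 'lowercases and dashes');
is(My::Slug::slug('a   b'),       'a-b',         'squeezes runs of whitespace');
is(My::Slug::slug('plain'),       'plain',       'leaves single words alone');
```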

      Any time you fix a bug, write a test for it.

      And be sure to test the new (or modified) test by running it against the non-fixed, buggy version of the code and making sure the test fails as you'd expect (a la the eXtreme Programming model). THEN fix the code and make sure it passes the test. Otherwise, it's easy to mistakenly write a test that your "fix" passes because it doesn't really test the condition you thought it tested...
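      A sketch of that fail-first workflow (the mean() bug here is invented for illustration): the test is written against the buggy version, seen to fail, and only then is the code fixed.

```perl
#!/usr/bin/perl
# bugfix.t -- write the test against the BUGGY code first.
# Hypothetical bug: mean() divided by a hard-coded 2 instead of the
# list length, so it happened to work for two-element lists only.
use strict;
use warnings;
use Test::More tests => 2;

sub mean {
    my @n = @_;
    return undef unless @n;
    my $sum = 0;
    $sum += $_ for @n;
    # The buggy version read: return $sum / 2;
    # Run this test against that first and watch it fail, THEN fix:
    return $sum / @n;
}

is(mean(4, 6),    5, 'the case the bug hid behind still passes');
is(mean(1, 2, 3), 2, 'the case the buggy version got wrong');
```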
RE (tilly) 1: Regression testing
by tilly (Archbishop) on Sep 26, 2000 at 17:43 UTC
    As the comment says, think Loose Coupling. :-)

    It has been understood since the early '70s that interactions between the parts of your software are the source of the hardest-to-track bugs, and inevitably this is where all sizeable software projects run into problems. Pick up The Mythical Man-Month for details.

    What this means is that you should define your software in terms of small components with well-defined behaviour and simple interfaces. You can then hook them together while keeping interaction between components to a minimum.
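    A tiny sketch of that idea (function names invented for illustration): the tokenizer knows nothing about the summing code; they meet only through a plain list, so each can be tested and replaced on its own.

```perl
#!/usr/bin/perl
# Loose coupling sketch: two components joined by a simple interface
# (a plain list of tokens), each testable in isolation.
use strict;
use warnings;
use Test::More tests => 2;

sub tokenize { return split ' ', $_[0] }           # string -> list
sub total    { my $t = 0; $t += $_ for @_; $t }    # list   -> number

is_deeply([ tokenize('1 2 3') ], [ 1, 2, 3 ], 'tokenizer tested alone');
is(total(tokenize('1 2 3')), 6, 'composed through the simple interface');
```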

    As for books, I am a big fan of Steve McConnell. Code Complete is a classic that explains a lot about how to do this factoring at a low level, and it offers references on topics it doesn't cover (like testing).

      The Pragmatic Programmer also has a lot of good things to say about this.

      The majority of it involves building in tests as you go. Rather than writing a huge Windows application (as an example of something that's typically harder to test than a command-line application) and then retrofitting test-suite code, you should be designing the application with testing in mind.

      This might mean building in hooks where input can be taken from a file or a socket, rather than requiring a mouse to be clicked (there *are* scripting tools for testing, but that's a slightly different track).
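      One way that hook can look in Perl (names invented for illustration): the input routine takes a filehandle as a parameter, so the real application can pass a socket or \*STDIN while the test passes an in-memory handle. (Opening a handle onto a scalar reference, as below, needs perl 5.8 or later; under older perls IO::String does the same job.)

```perl
#!/usr/bin/perl
# Instead of reading from a GUI or STDIN directly, take a filehandle
# as a parameter, so tests can feed input from memory.
use strict;
use warnings;
use Test::More tests => 2;

sub next_command {
    my ($fh) = @_;
    my $line = <$fh>;
    return undef unless defined $line;
    chomp $line;
    return lc $line;    # normalize case
}

# In the test, open a handle onto a plain string:
open my $fh, '<', \"QUIT\nsave\n" or die $!;

is(next_command($fh), 'quit', 'first command, normalized');
is(next_command($fh), 'save', 'second command');
```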

      Another fundamental concept is that the test code you write while debugging should remain. If the bug was there once, it's likely that it can recur. You've already written the test code, so why throw it away? Instead, incorporate it into the overall test suite.

      Lotsa good ideas in that book...


      There is a sequel to Code Complete that I found to be somewhat better, called Debugging the Development Process (also by Steve McConnell). He repeats at least half of Code Complete, so it's a bit frustrating to read both; given the two, I would recommend Debugging the Development Process.
Re: Regression testing
by Fastolfe (Vicar) on Sep 26, 2000 at 20:17 UTC
    Confucius say, "He who writes and ignores documentation, documents his ignorance in what is written."

    Just kidding.. :)

        That was my intention. :)
