http://www.perlmonks.org?node_id=749581

On my most-recent project, I did something I had not properly done before:   I built an ongoing test-suite during the development process. As each new feature was added to the code, and sometimes before, I immediately wrote a test-case for it. And I didn't proceed until that test, and all preceding tests, “ran clean.”

This had immediate effects:

  1. Initial code-writing took about twice as long as before.
    • Overall development time was far less.
  2. During this process, “regression” of previously-written code happened at unexpected (and frequent) times.
  3. Writing tests in this way, and at this time, forces you to think more slowly and more deliberately about the code that you write, and thus to write much better code.
    • The “payback” was so obvious, and so immediate, that I became somewhat obsessed, so to speak, with writing and maintaining those test cases. I found myself feeling quite nervous about “putting my full weight upon” a new piece of code that wasn't tested. (Also, once you get into it, writing new tests is engaging and fun.)
  4. The bugs that were uncovered were usually nasty and subtle. I would not have discovered them except under the least-desirable circumstances.
    • The bugs that weren't uncovered were, for the most part, “merely annoying.”
  5. When the code was finished, it was reliable, and I didn't worry about doing demos.
    • In spite of all this, a demo is still the best test-case you can ever have.
    • The probability of a bug appearing during a demo is in direct proportion to your smugness that it won't, plus 100%.   :-D

Fortunately for all of us, Perl makes it very easy to write tests. A test program is simply a Perl program, written using a test-module such as Test::Most, Test::Memory::Cycle, Test::Exception, or Test::Class. You see test-suites run every time you install anything from CPAN, so it's a very well-developed system. When you start writing test-suites of your own, you really appreciate what it means when you see hundreds or even thousands of test-cases flying by during one of those installs. The fact that “CPAN is highly reliable” is anything but an accident.
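
A minimal test file can be just a few lines. Here is a sketch (My::Module and its add() routine are hypothetical stand-ins for real code):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::Most tests => 3;    # declare up-front how many tests will run

    use My::Module;               # the (hypothetical) code under test

    # Something you expect to succeed:
    is( My::Module::add( 2, 2 ), 4, 'add() sums two integers' );

    # A boundary condition:
    is( My::Module::add( 0, 0 ), 0, 'add() handles zeroes' );

    # Something that should blow up; dies_ok comes from the
    # Test::Exception functions that Test::Most bundles:
    dies_ok { My::Module::add( 'x' ) } 'add() dies on non-numeric input';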

What I do (and it's not the only way nor necessarily the best way to do it) is simply to create a t subdirectory, for “tests,” with number-prefixed subdirectories and number-prefixed test-programs in each. (This is done because the test files and directories will be traversed in-order.) Then, I run these with the command:   prove -r.
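
Concretely, the layout might look something like this (the names are purely illustrative):

    t/
        01-basics/
            01-load.t
            02-constructor.t
        02-reports/
            01-summary.t

    $ prove -r

prove recurses the tree, runs each .t file in order, and prints a summary of passes and failures at the end.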

A test-case is very simple, really:   it's both a list of things that you do expect to happen, and things that you don't. It's a test of things that should cause an exception to occur, and things that shouldn't. It's a test of the “edge cases,” the sublime, and the ridiculous.
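
In code, each of those categories is a one-liner. A sketch (parse_date() and its error message are hypothetical), using the exception helpers that Test::Most bundles:

    use Test::Most;

    # A thing you DO expect to happen:
    is( parse_date('2009-03-10')->year, 2009, 'a sane date parses' );

    # A thing that SHOULD cause an exception:
    throws_ok { parse_date('not a date') }
        qr/unparseable/, 'garbage input raises the expected error';

    # A thing that should NOT cause an exception (an edge case):
    lives_ok { parse_date('2000-02-29') } 'leap-day edge case survives';

    done_testing();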

For web-based applications you can go further. Modules such as Test::WWW::Mechanize can perform an entire web-interaction sequence. Or you have things like Test::WWW::Declare, which is based around the notion of defining a “flow.”
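
For example, a whole log-in sequence can be scripted and checked in a handful of lines. (The URL, form fields, and page text below are made up; get_ok, content_contains, and submit_form_ok are real Test::WWW::Mechanize methods.)

    use Test::Most;
    use Test::WWW::Mechanize;

    my $mech = Test::WWW::Mechanize->new;

    # Fetch the login page and make sure it arrives intact.
    $mech->get_ok( 'http://localhost/login', 'login page loads' );
    $mech->content_contains( 'Sign in', 'login form is present' );

    # Fill in and submit the first form on the page.
    $mech->submit_form_ok(
        { form_number => 1,
          fields      => { user => 'alice', password => 'secret' } },
        'login form submits',
    );
    $mech->content_contains( 'Welcome', 'we landed on the logged-in page' );

    done_testing();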

My point here is not to enumerate all of the things that you can do with Perl testing. (There are, today, 805 CPAN modules whose name begins with “Test::” ...) Rather, it is:  “You should do this. It really works.” Sure, I'm not the first person to say it. I'm just one who took about twenty years to really listen.   ;-) ;-) :-D

Re: Testing IS Development
by moritz (Cardinal) on Mar 10, 2009 at 23:38 UTC
    As a humble contributor to an open source project of which I've only grasped a small part, a test suite also gives me one thing: confidence.

    When I change something and the test suite doesn't show a new failure, I can be pretty sure that my patch doesn't screw things up very badly.

    Without this confidence I would hesitate much more before sending patches, let alone pushing changes to a repository directly.

    These days I regularly contribute to Rakudo (a Perl 6 compiler), and we don't close any bug reports unless there are tests for them in the test suite. As a consequence the number of reported regressions clearly decreased, while areas that can't be tested yet (due to limitations in our tools) still regress now and then.

    So apart from confidence, it also makes me proud, because I know that our product is of good quality.

    Don't underestimate the benefits of these "soft" advantages, especially when the project is driven by volunteers. If they don't feel good during development for too long, they'll simply jump off and look for a different project. Having a test suite to ward off frustrating experiences is a way to keep them on board.

Re: Testing IS Development
by tilly (Archbishop) on Mar 10, 2009 at 17:10 UTC
    While testing is good, people who are getting into testing may find this use.perl blog post by Ovid of interest. Basically, it says that you shouldn't feel forced to adopt strict test-driven development.

      I followed your link... and read this:

      I wrote the new Test::Harness that ships with Perl's core. I've written Test::Most and Test::Aggregate. I maintain or have co-maintainership on a number of modules in the Test:: namespace. I was invited to Google’s first Automated Testing Conference in London and gave a lightning talk (my talk on TAP is about 42 minutes into that). I was also at last year's Perl-QA Hackathon in Oslo, Norway, and I'll be at this year's Perl-QA Hackathon in Birmingham, UK. I was also one of the reviewers on Perl Testing: A Developer's Notebook.

      In short, I'm steeped in software testing. I've been doing this for years. When I interview for a job, I always ask them how they test their software and I've turned down one job, in part, because they didn't.

      This is an excellent and well-balanced post (written all of two days ago...), and you should all go right now and read it. (Yes, before you watch any more episodes of “The Prisoner.” You'll understand my reference when you've read it.) Uhh... and go ahead and finish re-reading the book, Code Complete, while you're at it.

      It makes a difference ... a big difference. And as soon as you follow all of the threads that are (already) attached to this one, you'll see what I mean... and you'll get a sense of proper balance. Like all things in our profession, it is a balancing act, not a religion. Nevertheless, it is a best practice, and it makes a very measurable difference.

      Yes, that's a really good Ovid article. I was about to say something similar here myself... I'm certainly sold on the value of test-driven development, but while a large body of working tests makes it much easier to make small improvements to the code, it does have the draw-back of acting as a kind of institutional weight on the design, making big changes much harder in many cases.
        ... while a large body of working tests makes it much easier to make small improvements to the code, it does have the draw-back of acting as a kind of institutional weight on the design, making big changes much harder in many cases.

        I expect the same is true of a large body of working code -- the kind I expect to see accompanying a large body of working tests.

Re: Testing IS Development
by zby (Vicar) on Mar 10, 2009 at 16:18 UTC
    What about Test::Class?

      Needless to say, I promptly added a reference to Test::Class in the original text. Don’t know how I overlooked that one.

Re: Testing IS Development
by stonecolddevin (Parson) on Mar 10, 2009 at 19:10 UTC

    The benefit of test-driven development is that you *know* your code will work before you ship it. That is, of course, contingent on having written sufficiently thorough and proper tests.

    Regardless, tests encourage saner, more to-the-point code that can easily be passed around to colleagues to be reviewed for errors, etc. This works twofold: you get better tests, and code reviews of the code you are testing.

    ++ to you for good practices!

    meh.
      The benefit of test-driven development is that you *know* your code will work before you ship it. That is, of course, contingent on having written sufficiently thorough and proper tests.
      I find that a significant portion of the "bugs" I make come from making wrong assumptions, which can be caused by many things: wrong or unclear documentation, bad communication between the product owner and the dev team, me not checking facts with the right people, etc. Making the wrong assumptions usually means the tests are wrong. So while I write code that passes all tests, I still don't have code that "works".

      Don't get me wrong. I do see value in tests. But I've been programming for way too long to see the silver bullet in anything. Only for trivial code will "passes all tests" mean "the code is bugfree". A test suite is just a tool. Just like use strict and use warnings. They're all just modest tools in writing code.

        Granted. Now, having said that, here is an assumption that we all tend to make, albeit without firm basis:   the assumption that the code we've just written is “right.” We trust our own experience, and let's face it, our own gut instinct. Granted, that experience/instinct is by now very well-honed, and therefore trustworthy ... but there is still plenty of room for “a digital computer to do for us what a digital computer does best,” namely, to grind through an onerous procedure in just a few seconds.

        So, yes. It is “just a tool.” It is also, “a darn good one.” We have no dispute there (nor anywhere). It is a tool that, I now realize, I have not yet availed myself of to a sufficient degree. I guess that the cobbler's children tend to have no shoes.

        I find that a significant portion of the "bugs" I make come from making wrong assumptions

        True. OTOH, I find that writing tests forces me to codify and explicitly state my assumptions (even if not in a form the typical end-user would understand), which, in turn, forces me to think about and identify those assumptions.

        By making me consciously aware of my assumptions, it serves as a first step towards finding and correcting those which are incorrect. It can also prove useful in implementing the correct behaviours once incorrect assumptions are identified - fixing the tests, if they are well-written, often clarifies what the code needs to do differently.
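
        (Purely by way of illustration, with a hypothetical order_total() routine: a one-line test like the following pins an assumption down, so that when the business rule turns out to differ, there is one obvious place to revisit.)

            use Test::Most;

            # Stated assumption: a discount can never push a total below zero.
            ok( order_total( 10.00, discount => 15.00 ) >= 0,
                'discounts never produce a negative total' );

            done_testing();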

        product owner and dev team should write some tests
      Building an "ongoing test-suite, during the development process" does not necessarily mean TDD. See the article mentioned by tilly above (even though he also misrepresents the OP in the TDD aspect).
        I don't think that I misrepresented the OP at all given that I didn't represent him as having said anything in particular.

        My thinking was that people who read what he said and then went out to learn more about how to do this wonderful thing were going to read about test driven development, try to do that, and some would get turned off by it. Which could then lead to throwing out the baby with the bathwater.

Re: Testing IS Development
by JavaFan (Canon) on Mar 10, 2009 at 22:04 UTC
    Fortunately for all of us, Perl makes it very easy to write tests.
    Yes, but that's not the whole story. I write tests for my code. And at $WORK, I can even make time to write tests. And while writing the code to do a test is usually easy, I still find it takes a long time to create tests. It's not the test itself, which is just calling the code. It's coming up with good test data, which on the one hand tests all the aspects of the new code, and on the other hand can be run without taking too many resources. Tests that take an hour to finish, or that bring the test server to its knees loading the test set, are out of the question. So are test sets consisting of one-row tables.

    But while I do write tests at work (and the current project even allocates time for it), not everyone on the business side is convinced this is right. This frustrates some of my coworkers. It doesn't frustrate me; I can see the POV of the business. If I spend XX hours a year writing tests, I cannot spend those XX hours writing other tools that help streamline the work of other departments. So, the company invests XX development hours in getting tests. But I cannot quantify what the company gets back (yes, I can *describe* what they get back, but I cannot put a value on it).

    Sure, if there are tests, it's (hopefully) easier to change the code, or at least to avoid breaking it. But in our company, code "lives" an average of about 2.5 years; after that, code is obsolete, rewritten or replaced, so tests have only a limited shelf life. I can say the quality of what I deliver increases if I have tests for it. I cannot say the *value* of "XX hours of writing code + YY hours of writing tests" is more than that of "(XX + YY) hours of writing code". And a better value for the company means the company makes more profit. And if the company makes more profit, I get a bigger bonus (bonuses at $WORK are strongly correlated to the profit $WORK makes).

Re: Testing IS Development
by ack (Deacon) on Mar 13, 2009 at 18:45 UTC

    Wow! Great post and a great thread from everyone! I have been facing a bit of a challenge lately at work trying to help our leadership find better...but minimally intrusive...strategies for producing better systems (especially flight software systems).

    We always produce a large, strategic suite of tests (we call them, somewhat conventionally, Baseline Functional Tests (BFTs)) that are developed once the code and hardware are produced and have been through module- or subsystem-level testing. In short, our tests always "follow" after our development.

    The price consistently seems to be twofold: (1) finding the "big" errors is hit-and-miss, and (2) we have to go back and make significant changes to our tests after we fix the errors.

    This results in quite a bit of wasted time (in my opinion) and leaves our leadership wondering "did we find enough of the big errors?"

    I have read a number of posts here over the past couple of years and had been somewhat swirling around the notion that perhaps a more test-driven development approach would help. The comment that “Initial code-writing took about twice as long as before ... overall development time was far less” is actually at the heart of almost every discussion I've had around here on the matter: the fear that things will take so much longer. But your sub-comment on overall development time is at the heart of what I've been thinking, and your articulation of that point strengthens my resolve.

    This post and the subsequent thread have renewed my thinking and strengthened my resolve to move more aggressively in this direction to see what it brings.

    Thanks to everyone for the great dialog; it really helps...as the Monks always do.

    ack Albuquerque, NM

      It was, to “li'l ol' me,” absolutely staggering what a (positive) difference it made.

      It really felt to me like I had just wandered into the hunting-fields for the very first time with a really good hunting dog. :-D

      “Wow! How'd all those pheasants get in here?!”

      The rigorous, early testing-process flushed out an unbelievable number of errors. But it also made the resulting code much more reliable.

      When we're in school, we're taught a lot about data structures, but we're really not taught about ... code structures! Animators and engineers know about the concept of “degrees of freedom,” but we neglect to teach software engineers that exactly the same concept applies to their work ... only “eversomuch more-so.”

      Any “unanticipated movement,” lurking anywhere in the moving-parts-laden structures that we universally build, has the potential to de-stabilize the whole thing, and to express its movement in any number of well-concealed indirect ways. Only by systematically ferreting out these “movements” early, under (admittedly artificial) conditions designed to minimize the number of moving-parts that might have been layered on top of them, can we ever hope to say ... with any degree of assurance ... that “even though I do not yet know what is causing this problem, I can say that it probably isn't this-or-that.” Otherwise we spend too much time trying to deduce where the problem(s) actually lie.

      Fixing them, once we find them, is usually trivial. But by then, the economic damage has already been done.

Re: Testing IS Development
by boblawblah (Scribe) on Mar 10, 2009 at 15:54 UTC
    Convinced me.