http://www.perlmonks.org?node_id=533257

I've hacked quite a bit of code by now, whether my own or other people's. I've developed a few small projects from concept to working beta for in-house production use. What I haven't done is develop installation scripts to automate the propagation of useful code onto other people's servers.

Discussion in another node led me to read some perldoc for modules in the Test:: namespace. What I could use at this point is a better idea of what a testing plan is and how one is developed. I have a friend who rails against being asked to write code without a testing plan in place, but I'm wondering how one would go about planning tests for code which hasn't been written yet. Surely a testing plan is something more than the number of tests you anticipate running during installation. Isn't it?

I'm guessing one would want to test the installation environment to make sure it will support the script and its dependencies. What else would one test?

-- Hugh

UPDATE

Well folks, that was great. I didn't write a lick of code yesterday, but I did work my way through the entire book linked below on Extreme Perl. Fascinating read. It's like I'm just now starting to learn how to code, perhaps 26 years since that first assembler class in college. I think I'll spend some time with some more perldoc from Test::* and then try my hand at writing a test suite for my current project. At this rate, it may be a while before I get back to writing new code as part of that project. Thanks to all the respondents below. That was very educational, and it's like starting all over now, in a way.

Replies are listed 'Best First'.
Re: What is a testing plan?
by friedo (Prior) on Feb 28, 2006 at 05:39 UTC
    The Extreme Programming people say that the tests are the specification. In other words, if your code passes the tests, it is correct by definition. (Even if your tests are in fact incorrect. :) ). I usually don't write tests before I write code; I'm just not used to thinking that way, but when I do, I think in terms of a specification. What's this class supposed to do? What will its interface be? At the very least, let's assume it's a traditional OO Perl class with a blessed hashref implementation and a constructor. That gives us some good information to start a simple test file:

        use strict;
        use warnings;
        use Test::More tests => 3;    # Test::More needs a plan (or 'no_plan')

        use_ok( 'My::New::Module' );

        my $obj = My::New::Module->new;
        ok( $obj );
        isa_ok( $obj, 'My::New::Module' );

    We've already tested some very important things: We can load the module, the constructor returns a true value, and that value is an object of the type we've been expecting.

    If you're the type of person who can plan your modules completely ahead of time, go ahead and write simple tests for each method that this object will have, making sure they return the correct values and alter the object's internal state properly.
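
    For instance, continuing the test file above (and bumping the plan to tests => 5), a hypothetical name() accessor-mutator -- a method invented here purely for illustration -- might get a pair of tests like this:

        # suppose the class will grow a name() accessor-mutator (hypothetical)
        is( $obj->name('Widget'), 'Widget', 'name() returns the value it is given' );
        is( $obj->name,           'Widget', 'name() keeps that value in the object state' );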

    Personally, I tend to develop modules and tests in parallel, writing a feature, making sure it works to my satisfaction, and then writing a test to make sure it continues to work that way. Sometimes I get lazy and don't write the tests until the very end.

    The most important thing is not to get too bogged down in dogma. Experiment a lot and do what works best for you and increases your productivity. Don't worry yourself if you're not strictly adhering to "red, green, refactor." Development methodologies should be taken as advice, not instructions. You must always find your own way.

      friedo wrote: The Extreme Programming people say that the tests are the specification.

      I see that a lot, but it always concerns me because I don't think it really holds true. Consider the following tests:

          can_ok $object, 'copy_for_xmission';
          ok my $clone = $object->copy_for_xmission,
              '... and copying should be successful';
          isa_ok $clone, 'Some::Class';
          ok !defined $clone->ssn,
              '... but the SSN should not be copied';

      It's very common to see that in tests and it's perfectly appropriate. Now in reading the test output, we can see that the SSN is not copied on clone, but we don't know why. There are plenty of reasons why we might not copy the SSN over, but the tests tend to reflect what is happening, not why it's happening. As a result, tests document behavior, not business rules.

      The problem with this is institutional knowledge. In many companies I've worked for, documentation is almost an afterthought. Another programmer looking at this can see the behavior, but when asked to extend or alter that behavior, they can easily make mistakes without an understanding of the underlying business reasons for it. Programmers often claim that you should simply be able to read the code (or tests) to know what's going on, but that ignores the fact that the larger the system, the larger the number of assumptions which might creep into the code without explanation.

      Cheers,
      Ovid


        As a result, tests document behavior, not business rules.

        But that's not quite what friedo said. He wrote that "the tests are the specification". A specification just describes behavior -- the reasons why that is the desired behavior provide helpful context for interpreting behavior or generalizing it to new situations, but they aren't the specification itself.

        Assumptions creep into code because requirements are rarely well-specified. Therefore, they become open to interpretation -- which is why having that context and institutional knowledge documented becomes important. All TDD does is encourage developers to be explicit about how they are interpreting a requirement or specification, before they write the code.

        When people say tests or code are the documentation -- it's a limited form, but it has the advantage over other types of documentation that it describes, by definition, what actually happens, rather than what was desired or intended, which may or may not have been updated and which the program may or may not fulfill. But that's still just documentation of the spec, not the requirements.

        What I find, personally, is that TDD helps bring focus. It's not a documentation technique; it's a task-management technique. There's a clearly defined concept of "done", and the code I write is limited to achieving "done" as directly as possible. The clarity of thinking through how to prove that something does what I expect also usually means that by the time I write actual code, I have a much better idea of what I need to write. That, too, saves time and improves the quality of what I write.

        -xdg

        Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re: What is a testing plan?
by chromatic (Archbishop) on Feb 28, 2006 at 05:38 UTC

    I see two parts to a testing plan. First, what do you want the code to accomplish from the user's point of view? Second, what do you want the individual pieces of the code to do in isolation?

    For example, if you're building a program to process information retrieved from a remote server, the first type of test needs to explore the program as a whole -- probably including network connectivity and whatever output your program produces. The second type of test is much smaller in scope. You might test that a subroutine or module that parses the information does so appropriately, returning or creating the right data structures.
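
    As a rough sketch of the second, smaller kind of test -- the module name, subroutine, and sample data here are all hypothetical, just to show the shape such a test might take:

        use strict;
        use warnings;
        use Test::More tests => 2;

        use_ok( 'My::Response::Parser' );    # hypothetical module that parses the server output

        # feed it a canned chunk of "remote server" output, with no network involved,
        # and check that the right data structure comes back
        my $parsed = My::Response::Parser::parse( "name=fred\nrole=admin\n" );
        is_deeply(
            $parsed,
            { name => 'fred', role => 'admin' },
            'parse() turns raw server output into the expected hash',
        );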

    If you're still not sure how to plan tests for code you haven't written, ask yourself how you know what code you need to write. Once you know that, ask yourself "What's the first test I could write to exercise that?"

Re: What is a testing plan?
by swngnmonk (Pilgrim) on Feb 28, 2006 at 15:36 UTC

    The size of your test plan is really a function of the size of your project, and the level of complexity. I'll offer my own experiences in writing tests while working on Krang a while back.

    Krang is a large system, written almost entirely in object-oriented perl. Each Krang::* package we wrote has a corresponding test file, responsible for testing all aspects of that package's interface, ensuring that we have a basis for making sure future revisions don't break existing functionality. At least, that's the boilerplate text.

    In reality, fully testing packages is HARD. Especially higher-level ones, which depend greatly on lower-level packages. By the time you start testing higher-level functionality, you've got a very complex system, and shining a light into every last corner to confirm that things are OK is a lot of work.

    What I took away from the project is this - the more time you put into testing your lower-level modules, the more payoff you get in the end. These packages are generally easier to test comprehensively, and it simplifies your higher-level tests - you can work from the basis of knowing your underlying structure is stable.

    So what to test? Simple. Every time I create a new package, I do the following:

    • POD out the initial API.
    • Write tests to instantiate the object -- rudimentary isa_ok() and can_ok() tests (there's a sketch of these first steps after this list). Obviously, these fail.
    • Code the constructor & accessor-mutators, so that the above tests pass.
    • Write tests for the initial functionality.
    • Write code so the above tests pass.
    • Repeat as needed.
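
    A minimal sketch of what those first few steps might produce -- the package name, its methods, and the test values are invented here purely for illustration:

        # t/widget.t -- first tests for a hypothetical Krang::Widget package
        use strict;
        use warnings;
        use Test::More tests => 4;

        use_ok( 'Krang::Widget' );

        # instantiation plus rudimentary isa_ok()/can_ok() checks; these fail
        # until the constructor and accessor-mutators exist
        my $widget = Krang::Widget->new( title => 'test widget' );
        isa_ok( $widget, 'Krang::Widget' );
        can_ok( $widget, qw( title save delete ) );

        # a first cut at testing functionality, written before the code that makes it pass
        is( $widget->title, 'test widget', 'title() returns what new() was given' );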

    Some of you will recognize this as being right out of Kent Beck's book on Test Driven Development, and you're right. It's worked quite well for me in several areas:

    • Writing tests is a great way to discover whether your planned API is useful, or clunky.
    • When I'm done with 1.0, I have a full test suite for my code.
    • My level of fear when it comes to developing 1.01, 1.1 or 2.0 is greatly reduced - the tests let me know if I've broken things.

    As a side note - in addition to Test::Harness and Test::More, check out Test::MockObject - I've had some great luck with it, and I think it's going to be how I solve the always-painful problem of testing CGI applications in the future.
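
    To give a feel for the approach, here's a rough sketch of using Test::MockObject to stand in for a CGI query object -- the application module My::App and its handle_request() routine are made up for the example:

        use strict;
        use warnings;
        use Test::More tests => 2;
        use Test::MockObject;

        # build a mock that answers param() the way a CGI.pm query object would
        my $query = Test::MockObject->new;
        $query->mock( param => sub {
            my ( $self, $name ) = @_;
            my %params = ( action => 'search', keyword => 'perl' );
            return $name ? $params{$name} : keys %params;
        } );

        use_ok( 'My::App' );    # hypothetical CGI application module

        # hand the mock to the handler instead of a real CGI object --
        # no web server or HTTP request required
        my $output = My::App::handle_request( $query );
        like( $output, qr/search results/i, 'the search action produces a results page' );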

Re: What is a testing plan?
by xdg (Monsignor) on Feb 28, 2006 at 16:36 UTC

    It's well worth considering the XP and test-driven approaches, even if you don't go that far yourself. Extreme Perl has some nice examples.

    -xdg


Re: What is a testing plan?
by t'mo (Pilgrim) on Mar 02, 2006 at 04:25 UTC

    One thing you might want to consider is that testing shouldn't just cover your expected outputs given certain inputs, but should also test the "behavior" of the code. Your tests should prove that the code behaves a certain way, e.g., that an object from Module::Foo delegates work to another object from Module::Bar.

    A co-worker taught me that you can tell if your tests are of this type if you can describe each test using the word "should". For example, "Module::Foo should call Module::Bar::x", "should populate webpage template with values", "should throw exception if Oracle error code XYZ received", etc.
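
    A rough sketch of what one of those "should" tests might look like, using Test::MockObject to watch the delegation -- Module::Foo, Module::Bar, and their methods are just stand-ins here:

        use strict;
        use warnings;
        use Test::More tests => 1;
        use Test::MockObject;

        use Module::Foo;    # stand-in for the class under test

        # "Module::Foo should call Module::Bar::x"
        my $bar = Test::MockObject->new;    # canned object playing the Module::Bar part
        $bar->set_true( 'x' );

        my $foo = Module::Foo->new( helper => $bar );    # hand the mock in as the collaborator
        $foo->do_work;

        ok( $bar->called( 'x' ), 'Module::Foo should call Module::Bar::x when doing its work' );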

    This shouldn't detract from having tests for the output, but from what I've seen recently, most tests involving actual data values (as opposed to behaviors) should be thrown in the "integration test" category.

    Some relevant links: http://blog.daveastels.com/files/sdbp2005/BDD%20Intro%20slides.pdf, http://behaviour-driven.org/BehaviourDrivenProgramming.