http://www.perlmonks.org?node_id=328949

I've read lots of tutorials and various praises for the "Testing Paradigm" and the thought of having hundreds of tests I could just instantly run to tell me if my change affected anything fills me with glee, but I Just Don't Get It.

I understand the basic concept of testing, I think, and I can easily apply it to the examples they use in a textbook: ok(add(2,2),4); or whatever simple-ass method you're testing. My problem comes when I try to extend/abstract the idea.
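For reference, here is what that textbook example looks like as a real, runnable Test::More script. (One wrinkle: ok() only checks truth, so ok(add(2,2), 4) would actually treat the 4 as the test *name*; is() is the form that compares got against expected.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# The function under test -- a stand-in for whatever simple sub you have.
sub add { return $_[0] + $_[1] }

# is() compares the actual result against the expected value and
# reports a diagnostic if they differ.
is( add(2, 2),  4, 'add(2,2) gives 4' );
is( add(-1, 1), 0, 'add(-1,1) gives 0' );
```

Run it with perl (or prove) and each is() prints an "ok"/"not ok" line against the declared plan of 2 tests.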

For example, something I've been working on now is a web application for manipulating a specific type of server. I have no bloody idea how to write tests for it. Sure, I could test some of the classes that it uses and so forth, but how do I test the application itself? Print out the HTML and grep it for some specific phrase? What if I change the HTML? It's just the display layer, isn't it? Do I use a dumper to dump the actual variables I'm passing to the template? Isn't that looking at the implementation?

I'm just confused, I guess what I'm really looking for is "real world" examples of tests for complex things, things you can't just test by calling a function and comparing its output to a constant.

Updated: Changed the "can" to a "can't" in that last line.

Replies are listed 'Best First'.
Re: Yet another meditation on testing
by Abigail-II (Bishop) on Feb 14, 2004 at 10:09 UTC
    All tests revolve around the same thing: you have a piece of functionality (often a subroutine, but it could also be an operator or a program). You give whatever you are testing some known input, and you check whether the output is what you expect. If there's a difference, the test fails.

    As for a templating system, you can do several things. You could consider the template to be part of the product under test, which means you need to modify the tests whenever you change the template. Or you could consider the templates to be input, which means you probably have to multiply the number of tests by a factor of X, and your tests have to be very generic. Or you could have two sets of tests: one for the template you are using (meaning you need to change those tests if the template changes), and another set for the functionality, using several different templates as input. I'd go for the last option - it's more work, but it covers things best.
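    A sketch of the "templates as input" half of that, using HTML::Template (the render_user sub, the variable names, and the inline templates are all invented for illustration):

```perl
use strict;
use warnings;
use Test::More;

eval { require HTML::Template; 1 }
    or plan skip_all => 'HTML::Template not installed';

# The code under test: fills a template from a hashref.
# die_on_bad_params is off so templates may use a subset of the params.
sub render_user {
    my ($tmpl_text, $user) = @_;
    my $tmpl = HTML::Template->new(
        scalarref         => \$tmpl_text,
        die_on_bad_params => 0,
    );
    $tmpl->param( name => $user->{name}, email => $user->{email} );
    return $tmpl->output;
}

my $user = { name => 'abigail', email => 'abigail@example.com' };

# Generic tests: the same data pushed through several different
# templates, treating the templates themselves as input.
for my $tmpl (
    '<p><TMPL_VAR NAME=name></p>',
    'Name: <TMPL_VAR NAME=name>, mail: <TMPL_VAR NAME=email>',
) {
    like( render_user( $tmpl, $user ), qr/abigail/,
          'output contains the user name' );
}

done_testing();
```

    The tests stay generic because they only assert things that must hold no matter which template is plugged in.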
    I'm just confused, I guess what I'm really looking for is "real world" examples of tests for complex things, things you can't just test by calling a function and comparing its output to a constant.
    Don't get the impression that writing tests for complex things is easy. It might be in some cases, but it often isn't. For some projects, I spend more than 80% of the time writing the test cases.

    Abigail

Re: Yet another meditation on testing
by simonm (Vicar) on Feb 14, 2004 at 18:04 UTC
    Web application testing is definitely a trouble area for testing fanatics, due to the issues you're describing.

    I think the best practical advice I have for you is to use WWW::Mechanize. Each test script can model a certain type of common interaction, with test code for things like "load the home page", "follow the link that says Search", "fill in a query term and submit", and "does the current page have a link to the page we expect to find?"
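    A sketch of that kind of script (the base URL, link text, and form field are all made up -- substitute whatever your application actually serves; the guards make the script skip cleanly when the module or a target site isn't available):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

eval { require WWW::Mechanize; 1 }
    or plan skip_all => 'WWW::Mechanize not installed';
my $base = $ENV{TEST_BASE_URL}
    or plan skip_all => 'set TEST_BASE_URL to run these tests';

my $mech = WWW::Mechanize->new( autocheck => 0 );

# "load the home page"
$mech->get($base);
ok( $mech->success, 'home page loads' );

# "follow the link that says Search"
$mech->follow_link( text => 'Search' );
ok( $mech->success, 'Search link works' );

# "fill in a query term and submit" -- the form number and the
# field name 'q' are invented; use the ones your HTML really has.
$mech->submit_form(
    form_number => 1,
    fields      => { q => 'perl testing' },
);
ok( $mech->success, 'search form submits' );

# "does the current page have a link to the page we expect to find?"
ok( $mech->find_link( text_regex => qr/Results/ ),
    'results page is linked' );

done_testing();
```

    Because WWW::Mechanize drives the application through HTTP the way a browser would, these tests survive cosmetic HTML changes as long as the links and forms they name still exist.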

    If you were starting from scratch, and looking to build a more-testable web application, I might suggest the approach laid out in XP for Web Projects: structure the core of your web application to accept and return XML, and then use stylesheets to transform that into your HTML interface. That way, you can test your application logic using command-line XML tests, then test your stylesheets independently. (It also sets you up to support alternative interfaces such as automated web services.)

Re: Yet another meditation on testing
by hardburn (Abbot) on Feb 14, 2004 at 18:30 UTC

    What if I change the HTML? It's just the display layer, isn't it? Do I use a dumper to dump the actual variables I'm passing to the template? Isn't that looking at the implementation?

    <plug type="shameless">

    This is exactly what I wrote HTML::Template::Dumper for. It only works with HTML::Template, and I'm not sure how well a similar module would work for more complex template systems like TT or HTML::Mason. However, if your application only uses HTML::Template, you should be able to easily extend your program to use the Dumper version so you can inspect the raw data structure being sent to the template engine.
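    A sketch of the swap (the DUMP_TMPL switch and the inline template are invented for this example; only the constructor/param/output interface that HTML::Template documents is used, which the Dumper subclass shares -- see the module's docs for the exact dump format):

```perl
use strict;
use warnings;
use Test::More;

eval { require HTML::Template::Dumper; 1 }
    or plan skip_all => 'HTML::Template::Dumper not installed';

# Pick the real engine normally, the Dumper version under test.
my $class = $ENV{DUMP_TMPL} ? 'HTML::Template::Dumper' : 'HTML::Template';

my $tmpl_text = '<p>Hello, <TMPL_VAR NAME=name></p>';
my $tmpl = $class->new( scalarref => \$tmpl_text );
$tmpl->param( name => 'hardburn' );

# With HTML::Template you get rendered HTML; with the Dumper class you
# get a serialized dump of the parameters instead, which a test script
# can inspect without ever parsing HTML.
my $out = $tmpl->output;
like( $out, qr/hardburn/, 'the parameter we set shows up in the output' );

done_testing();
```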

    ----
    : () { :|:& };:

    Note: All code is untested, unless otherwise stated

      That was indeed the module I was thinking of when I wrote this meditation, but you didn't answer my question about the suitability of using such a module. It seemed to me that testing the data structure sent to the template would be kind of like testing the code that makes up the subroutine. But on second thought, perhaps those tests, combined with other tests, would be enough.

        IMHO, the general rules of good coding practices should be relaxed when applied to test scripts and debugging. If anything, attempting to parse your way through the raw HTML makes your tests more dependent on implementation details than reading the data structure alone does. The things your tests check are going to be some result that's inside that data structure anyway, and going through the HTML just creates extra work. You don't necessarily have to check the entire data structure, just the pieces you're interested in.

        What HTML::Template::Dumper will do is tie you to using HTML::Template. For the organization I work for, this isn't much of a problem, because HTML::Template best fits the way we operate (e.g., strict separation of code and data). Though this isn't an itch I personally have the need to scratch, I would like to see other modules for different template systems and work with the authors to integrate them where we can.

        ----
        : () { :|:& };:

        Note: All code is untested, unless otherwise stated

Re: Yet another meditation on testing
by dws (Chancellor) on Feb 16, 2004 at 05:29 UTC
    I'm just confused, I guess what I'm really looking for is "real world" examples of tests for complex things, things you can't just test by calling a function and comparing its output to a constant.

    If you approach development by considering the test first--thinking through how to structure a method, class, or API such that it's possible to call functions and compare output--you'd be surprised how far you can get. Part of the trick is being clear about what you're testing. You want to test a small number of things at a time, faking whatever infrastructure those things live on top of.

    Take your example. You're building a web application that manipulates a specific kind of server. The "obvious" way to test such a thing is to simulate an input (a link click or a button press), then verify that the right HTML comes back. But these kinds of tests end up testing too much stuff at once, and get messy quickly. It's far easier to break things down and test parts in semi-isolation. Not knowing how your application is structured, I can speculate that you have a layer that abstracts communication to the remote server. This layer can be tested in isolation, even in isolation from a real remote server, if there's a way to swap in a "mock" object for the remote server (e.g., by using a fake socket that's under the control of the test code). Then, you can write test cases that verify that if you tickle the abstraction API, the right bits get delivered to the socket. And you can test that the server abstraction correctly handles various simulated responses from the remote server.

    Then, you can test whatever layer drives the remote server abstraction by swapping in a mock implementation of the abstraction layer. And so on up the chain, small piece by small piece, until you're at the level of delivering test-driven URIs to the top layer of code, and are verifying that it's emitting the HTML you expect.

    The trick is to figure out how to structure the code so that you can swap in mock implementations of underpinnings. This is a lot easier to do if you approach development by figuring out the test case first, but can still be done on legacy code with some amount of restructuring.
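    A minimal sketch of that mock-object arrangement (ServerLink, MockConn, and the RESTART protocol are all invented for illustration -- the point is only that the connection is passed in, so the test can substitute a fake):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

# A hypothetical abstraction layer over the remote server.
package ServerLink;
sub new {
    my ($class, %args) = @_;
    return bless { conn => $args{conn} }, $class;
}
sub restart_service {
    my ($self, $name) = @_;
    $self->{conn}->send_line("RESTART $name");
    my $reply = $self->{conn}->recv_line;
    return $reply =~ /^OK/ ? 1 : 0;
}

# A mock connection, fully under the control of the test code:
# records what was sent and plays back a canned reply.
package MockConn;
sub new {
    my ($class, %args) = @_;
    return bless { sent => [], reply => $args{reply} }, $class;
}
sub send_line { push @{ $_[0]{sent} }, $_[1] }
sub recv_line { $_[0]{reply} }
sub sent      { @{ $_[0]{sent} } }

package main;

my $conn = MockConn->new( reply => 'OK restarted' );
my $link = ServerLink->new( conn => $conn );

ok( $link->restart_service('httpd'), 'OK reply reported as success' );
is( ($conn->sent)[0], 'RESTART httpd',
    'the right bits went down the wire' );
ok( !ServerLink->new( conn => MockConn->new( reply => 'ERR down' ) )
        ->restart_service('x'),
    'error reply reported as failure' );
```

    The same pattern repeats up the chain: to test the layer above ServerLink, swap in a mock ServerLink.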

      Automated testing, a tenet of the buzzword-compliant "XP" Extreme Programming, can be seductively dangerous, though. Beware: you will write tests first, but they clearly won't encompass the whole problem... or worse, you will write tests later and unknowingly write tests that all pass. A good test suite is very difficult to build, and the need for manual testing can never be eliminated.

      Far too often in the software industry, an emphasis on automated testing and formal test organizations (i.e., "testers by occupation") results in poor manual unit testing. A developer really has to understand all of the corner cases before manually unit testing, and to write effective unit tests he has to be even sharper.

      Anyhow, be warned -- automated testing is great stuff -- but it is not a substitute for the real thing. Your test cases passing doesn't mean there are no bugs!

        Anyhow, be warned -- automated testing is great stuff -- but it is not a substitute for the real thing.

        First, we weren't talking about "automated" testing. Second, what do you mean by "real thing"? Actual use? Ad hoc testing?