http://www.perlmonks.org?node_id=752409

stinkingpig has asked for the wisdom of the Perl Monks concerning the following question:

I've read a lot on this and other sites about testing, but I seem to be missing something fundamental between "it's a good idea" and "here's how to go about it in a real project." I've watched screencasts and presentations, I've read articles and module instructions, and the examples are always abstracted so far from my use case that I don't understand how to get from here to there.

I have a number of projects where I'm interested in automatic testing, but I'll focus on one for this posting. The code base is a set of monitoring and maintenance routines for a Windows server product. It's about 9,000 lines of code that does a lot of direct SQL work, Windows event viewer work, Windows services work, and file system work... then it records statistics in RRD files and generates some HTML and email reports. This is a free tool which is nominally open source, although I am the sole developer. The target environments run the tool as a compiled .exe, so they won't be running the test harness at all (e.g. as done during CPAN install).

In a perfect world, it would have little to say. Some of its test conditions only occur rarely, and others that I intend to add are "once-in-a-blue-moon" situations... but indicative of the kind of extreme problem that needs to be jumped on right now.

So, my question: is the idea in testing this sort of real-world application to fabricate a complete set of environmental inputs and then test the program's reactions to them? Or is there a higher level of abstraction that I'm missing? Given the number of interfaces being discussed, I've got some workload concerns with building and resetting a test case for every possible situation. How do other Perl developers do this?

Re: how to begin with testing?
by ELISHEVA (Prior) on Mar 22, 2009 at 19:24 UTC

    At a very, very high level testing involves (a) defining sample inputs (b) defining expected outputs (c) comparing actual outputs to expected outputs. A very good starting point for understanding the basic testing pattern is the documentation included in Test::Simple.
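
    For instance, the whole pattern fits in a few lines of Test::Simple; the parse_threshold() routine below is a made-up stand-in for one of your own small functions. Running the script prints the "ok"/"not ok" lines that all the later tools build on:

        use strict;
        use warnings;
        use Test::Simple tests => 2;

        # Hypothetical helper: turn "85%" into the number 85.
        sub parse_threshold {
            my ($string) = @_;
            return $string =~ /^(\d+)%$/ ? $1 : undef;
        }

        # (a) sample input, (b) expected output, (c) compare.
        ok( parse_threshold('85%') == 85,     'a percentage string is parsed' );
        ok( !defined parse_threshold('junk'), 'garbage input is rejected'     );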

    However, as you are noticing, applying that scenario to real world testing problems isn't quite so simple. Reading through your example, it seems you have two separate issues:

    • how to emulate the real world environment in a controlled fashion without spending too much time setting up and tearing down the test environment.
    • how to organize suites of tests so that you can easily target problem areas

    There are thousands of modules in the CPAN Test namespace and figuring out how they might apply to your testing needs can be overwhelming.

    First, the standard test environment is centered around Test::More and App::Prove. In its standard usage, you define one or more test suite files. These files are normal script files except that they end in .t rather than .pm or .pl. A test suite is simply a collection of assertion statements stored in a single file and run start to finish. When written in a traditional fashion, the beginning of the file stores set up instructions. This is followed by a sequence of assertions. The test suite script ends with the tear down code. The scripts are run singly or as a group using the prove command.
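
    To make that layout concrete, here is a small, self-contained suite file in the traditional shape; the sample log file is invented purely for illustration. Saved as t/01-basic.t, it can be run with prove t/, which runs every .t file it finds there and summarizes the results:

        # t/01-basic.t
        use strict;
        use warnings;
        use Test::More tests => 3;
        use File::Temp qw(tempdir);

        # --- set up: a scratch directory and a sample input file ---
        my $dir  = tempdir( CLEANUP => 1 );
        my $file = "$dir/sample.log";
        open my $fh, '>', $file or die "cannot write $file: $!";
        print {$fh} "ERROR disk full\nINFO all quiet\n";
        close $fh;

        # --- assertions ---
        ok( -e $file, 'sample log file was created' );

        open my $in, '<', $file or die "cannot read $file: $!";
        my @errors = grep { /^ERROR/ } <$in>;
        close $in;

        is( scalar @errors, 1,           'exactly one ERROR line found'      );
        like( $errors[0], qr/disk full/, 'the ERROR line mentions the cause' );

        # --- tear down: tempdir( CLEANUP => 1 ) removes the directory for us ---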

    Out of the box these tools are primarily suitable for testing modules containing a set of functions with minimal set up and tear down. Your testing needs are more complicated, but fortunately there are several modules designed to help you gain more control over setup, teardown, and selective execution of tests.

    • To better manage set up and tear down for a collection of test script files, you can define your own version of the 'prove' command - see App::Prove for details (a small sketch follows this list).
    • If you have a lot of tests defined between setup and tear down of your controlled environment, you may want to consider working with one of the modules for selectively running tests: Test::Steering, Test::Tagged, Test::Less or Test::Usage.
    • There are also several modules designed to help with setting up and tearing down controlled database environments.
    • Finally, there are modules that can speed up the process of setting up inputs and comparing actual and expected outputs.
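
    As a sketch of that first point: App::Prove can be driven from an ordinary script, which gives you one place to do global set up and tear down around an entire run. The setup/teardown subs below are placeholders for whatever your environment actually needs:

        #!/usr/bin/perl
        # run_tests.pl -- a thin wrapper around prove
        use strict;
        use warnings;
        use App::Prove;

        setup_environment();        # e.g. create a scratch database, copy fixtures

        my $prove = App::Prove->new;
        $prove->process_args( '-r', 't' );   # the same switches prove itself accepts
        my $all_passed = $prove->run;

        teardown_environment();     # e.g. drop the scratch database

        exit( $all_passed ? 0 : 1 );

        sub setup_environment    { print "# setting up test environment\n" }
        sub teardown_environment { print "# tearing down test environment\n" }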

    If time spent writing tests is a problem, you may also want to check out some of the answers on the recent thread Lazy test writing?.

    Best, beth

Re: how to begin with testing?
by moritz (Cardinal) on Mar 22, 2009 at 19:20 UTC
    When I read your title, "how to begin with testing?", my first thought was "begin with simple things", and I'll try to explain what I mean by that.

    Nearly all applications have parts that are very easy to test; even if your application does mostly database work, it might contain a few utility functions that don't access the database; testing them should be an easy start, and not require too much initialization.

    In your application this kind of testing is probably not very rewarding; so the next step is to set up a test database, and write a script or module that inserts some fixed dummy data sets. If your tests modify that data, make sure the test database is reset each time before the tests are run.
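
    A minimal sketch of such a module, using DBD::SQLite as a stand-in (against the real product you would point DBI at a dedicated test instance instead, and the events table here is invented for illustration):

        package TestDB;              # e.g. saved as t/lib/TestDB.pm
        use strict;
        use warnings;
        use DBI;

        # Hand back a brand-new in-memory database with fixed dummy data,
        # so every test run starts from the same clean state.
        sub fresh_dbh {
            my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
                                    { RaiseError => 1, AutoCommit => 1 } );
            $dbh->do('CREATE TABLE events (id INTEGER PRIMARY KEY, level TEXT, msg TEXT)');
            $dbh->do( 'INSERT INTO events (level, msg) VALUES (?, ?)', undef, @$_ )
                for [ 'ERROR', 'disk full' ], [ 'INFO', 'all quiet' ];
            return $dbh;
        }

        1;

    A test script then just does use TestDB; my $dbh = TestDB::fresh_dbh(); and works against known data.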

    Then, when you find a bug somewhere in your application, you first extend your dummy data sets so that you can reproduce the bug with the dummy data, and then write tests for it.

      moritz,
      The title doesn't really match what is being asked. A more appropriate title might have been "How to retrofit a test suite to a complex application". From that perspective, I might address the question from a different angle. The rest is aimed more at the OP than at you, but it is meant in the context of your response.

      What are you hoping to accomplish? Do you have some significant changes coming up and you want to be sure you don't break anything? Do you suspect that over time the application has evolved and you are unsure if all the business requirements are being met? Perhaps you think there is a lot of "dead code" that can be pruned. Do you have the luxury of devoting yourself to this full time or do you need to take a more iterative approach?

      Now that you know what your motivation is, is testing where you need to begin? There is a good chance that, if there is no test suite, the requirements are also missing or ill defined. Backing into requirements can be as much "fun" as backing into testing. Since there are a myriad of different types of tests you can run (see Software testing), you will at a minimum want to make sure you are meeting your business requirements. If you can, don't look at or even think about your code when documenting these requirements.

      Since this is an existing, large and complex application, you can start by writing tests for individual subroutines. Keep your business requirements handy as a checklist. If the subroutine assumes some other function performed correctly, write the test that way too (even if you haven't looked at that code yet). In other words, don't be afraid to write a test that you know, or suspect, will fail. The test describes the correct behavior, so failures will help you find bugs.
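
      One way to keep such known-failing tests in the suite without drowning in noise is Test::More's TODO blocks: the test still runs and its failure is reported, but it does not fail the suite, and prove flags it when it unexpectedly starts passing. A small sketch, where days_to_keep() is an invented routine with a deliberate off-by-one bug:

          use strict;
          use warnings;
          use Test::More tests => 1;

          # Invented routine with a suspected off-by-one bug.
          sub days_to_keep { my ($retention) = @_; return $retention - 1 }

          TODO: {
              our $TODO;
              local $TODO = 'suspected off-by-one bug, not fixed yet';
              is( days_to_keep(7), 7, 'the full retention window is honoured' );
          }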

      A missing test suite is an indicator of other problems (like lacking requirements). It may be beneficial to read through Re: Refactoring a large script to see if any of those other things apply as well.

      Cheers - L~R

Re: how to begin with testing?
by Bloodnok (Vicar) on Mar 22, 2009 at 23:26 UTC
    On this thread, no one has, AFAICT, (yet) identified the, to my mind, seminal work by Langworth & chromatic - Perl Testing: A Developer's Notebook (sorry for the UK URL).

    I'm sorry to say that I have to disagree with zwon - there are immense benefits to writing test harnesses for existing code - the principal one being the ease with which potential regressions can be caught and fixed before the code goes live - especially in cases where the maintenance is the responsibility of a (virtual) test team and the original developer(s) are no longer available. The sad thing is that, like Configuration Management (CM), most employers/clients don't/won't see the benefits until the problem has bitten them (frequently, more than once).

    You say ...compiled .exe, so they won't be running the test harness at all... - just because your end-user won't run the test harness shouldn't preclude your writing one and running the deliverable against it - testing is an essential part of quality assurance and altho' it won't, indeed can't, demonstrate that there aren't any defects, it should give you the confidence that, assuming you've written good tests, there is a significantly reduced probability of their existence.

    From my own POV, I start by identifying and writing tests for...

    • corner cases aka boundary/limit conditions
    • typical operational/use cases

    Just my 10 pen'orth...

    A user level that continues to overstate my experience :-))
      On this thread, no one has, AFAICT, (yet) identified the, to my mind, seminal work by Langworth & chromatic - Perl Testing: A Developer's Notebook.

      ++ for this only. I wasn't aware that this kind of book existed so I ordered it immediately :-)

      (sorry for the UK URL)

      No need to be sorry, pound to euro exchange rates are very pleasing at the moment ;-)

      --
      seek $her, $from, $everywhere if exists $true{love};

      I second that... Perl Testing was both easy to read and, IMHO, did a lot for the quality of my own development. Really good value for money. (-: Talking about money, was that a URL that earns you money when people buy? Never mind. :-)

Re: how to begin with testing?
by gwadej (Chaplain) on Mar 23, 2009 at 13:33 UTC

    There's been a lot of great advice here, but I thought I would chime in with a different reference. I found the book Working Effectively with Legacy Code by Michael Feathers to be quite good for this particular issue.

    One thing that Feathers covers that most other testing books don't is that the normal unit testing approach does not usually work well with legacy code (which he defines as code with no tests). He suggests a tactic he calls characterization tests. Instead of trying to verify correct behavior as you would with unit tests, characterization tests focus on current behavior.

    Before making changes to legacy code (bug fixes, feature additions, refactorings, etc.), you would start by characterizing the current code, in the area of interest, with tests. Then, you can modify the tests and code in something approximating normal unit testing.
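
    A characterization test in Test::More terms might look like the sketch below; format_subject() is an invented stand-in for a legacy routine, and the expected strings would be pasted in from what the current code actually produces, warts and all:

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Imagine this lives deep inside the legacy code base.
        sub format_subject {
            my ( $host, $errors ) = @_;
            return uc($host) . ': ' . ( $errors ? "$errors error(s)" : 'OK' );
        }

        # Not "what it should do" -- just pinning down what it does today.
        is( format_subject( 'web01', 0 ), 'WEB01: OK',         'current no-error subject line' );
        is( format_subject( 'web01', 3 ), 'WEB01: 3 error(s)', 'current error subject line'    );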

    One of the most important points in the book is the explanation of the different tactics that you need to use for legacy code and new code that can be properly unit tested.

    G. Wade

      I'm not familiar with that specific book, but when you mentioned 'Legacy Code', I was reminded of Peter J. Scott's book Perl Medic: Transforming Legacy Code. It also deals with being handed existing code and having to maintain it. The third chapter covers tests -- it's not as complete as chromatic's book, but it does give an overview of the Perl modules on CPAN (at the time) that handle testing.

        All three books complement each other. If you can read all three, I recommend it.

Re: how to begin with testing?
by zwon (Abbot) on Mar 22, 2009 at 18:07 UTC

    I personally wouldn't write tests for an already written and working application; it's a lot of work, and if the application already does its job, then why bother? But if I eventually find some bug, I'd write a test that demonstrates the bug before I fix it. My opinion is that tests are most useful if you write them before the code; writing a complete test suite after the code is a waste of time.

      While your advice for when to write tests seems spot on—before writing (much) code, and when bugs are found—your advice for when not to write them seems suspect. Even when a complete, working application is in place, it's appropriate to buttress it with tests—they will help with maintenance, so that changes can be seen not to break existing behaviour, and, well, testing, perhaps revealing that an application that seemed to be both complete and working is neither. Since writing tests, especially in Perl, can be very cheap, and need not be done all at once, it seems harmless at worst.

        Ok, maybe I made this too categorical. Of course tests would be useful at any stage of a program's life cycle. I just think the cost-to-usefulness ratio of developing a complete test suite after the program is ready would be too high. You say that writing tests is very cheap; hmm, for me it's usually more than half of the time I spend on a program, so I'd say it's rather expensive.

        As for maintenance: yes, I would write tests when I have to maintain a program without tests, but again those would be individual tests, not a complete test suite; nobody would pay me for that. The only case where I would write a full test suite is if I'm going to seriously refactor some old stuff, but I would do it just before the refactoring and not right after writing the program, because I don't yet know what the requirements will be, if there will be any at all.

      Why do you think that writing tests after you have working code is a waste of time?

      As already mentioned, having tests for the code at least helps spot regressions when you have to modify the code later. But, moreover, even if you have a completely functional application, writing tests can help you uncover areas where there are possible misconceptions before they are found in the live app. Even if the app is working, that doesn't mean it's working as it is supposed to, but you may not realize that without taking the time to write tests against the application.

      My humble opinion,

      Agreed, with one exception - that being if you want to re-factor the code or make major architectural changes. Then you need tests that exercise the application thoroughly so that you can be sure that your changes don't break anything.

      As for how to write an application in such a way as to be easily tested - instead of a huge monolith, break it into chunks so that each little bit of functionality can be easily tested. For example, if your application has a config file, then instead of scattering configgish stuff throughout the app, abstract it out into a module that provides functions for reading and parsing the file, and for getting at the information, which can be tested entirely separately from the main body of the app.
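
      A rough sketch of what that abstraction might look like (My::Config and the key=value file format are invented for illustration); the point is that a test can feed it a temporary file without starting the rest of the app:

          package My::Config;
          use strict;
          use warnings;

          # Read a simple key=value file once, then answer get() queries.
          sub new {
              my ( $class, $path ) = @_;
              my %cfg;
              open my $fh, '<', $path or die "cannot open $path: $!";
              while ( my $line = <$fh> ) {
                  next if $line =~ /^\s*(#|$)/;                       # comments, blanks
                  my ( $key, $value ) = $line =~ /^(\w+)\s*=\s*(.*?)\s*$/ or next;
                  $cfg{$key} = $value;
              }
              close $fh;
              return bless { cfg => \%cfg }, $class;
          }

          sub get { my ( $self, $key ) = @_; return $self->{cfg}{$key} }

          1;

      A test can then write a two-line config to a File::Temp file, construct My::Config on it, and check get() without touching the rest of the application.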

Re: how to begin with testing?
by dsheroh (Monsignor) on Mar 23, 2009 at 12:22 UTC
    On the question itself:

    In that situation, I would add tests as convenient in any available free time (hah!) until a bug was found or the code needed to be changed for any other reason.

    If the change is triggered by a bug, create one or more test cases to demonstrate the bug and prove both that it exists and that you know how to trigger it before making any changes to the code. (Proponents of test-driven development would classify "it should have a new feature, but doesn't" as a bug and perform this step for any change. While I find TDD to be a useful practice in many cases, you can still do good testing without embracing it.) These tests will tell you when you have fixed the bug.
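
    Such a bug-demonstration test is usually tiny. In the sketch below, service_summary() is an invented stand-in for the routine with the bug; both assertions are expected to fail until the fix goes in, at which point the file becomes a permanent regression test:

        # t/bug-empty-service-list.t  (hypothetical bug: dies on an empty list)
        use strict;
        use warnings;
        use Test::More tests => 2;

        # Stand-in for the real routine that exhibits the bug.
        sub service_summary {
            my (@services) = @_;
            die "no services configured\n" unless @services;   # <-- the bug
            return scalar(@services) . ' services checked';
        }

        my $summary = eval { service_summary() };
        is( $@, '',                         'an empty service list does not die' );
        is( $summary, '0 services checked', 'an empty list is reported as zero'  );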

    Regardless of the cause of the change (bug, new feature, whatever), create additional tests for each subroutine before making any changes to that subroutine. These tests should verify that, for a full range of both valid and invalid inputs, you get the expected output. These tests will tell you whether you broke anything new in the course of making the changes (and you may discover some additional bugs in the process of writing the tests).
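
    For the "full range of inputs" part, even a plain Test::More file goes a long way; percent_used() below is an invented routine of the kind worth fencing in before touching it:

        use strict;
        use warnings;
        use Test::More tests => 4;

        # Invented helper: percentage of disk used, or undef for bad input.
        sub percent_used {
            my ( $used, $total ) = @_;
            return undef unless defined $total && $total > 0;
            return sprintf '%.1f', 100 * $used / $total;
        }

        is( percent_used( 50, 200 ), '25.0',    'typical valid input'       );
        is( percent_used( 0,  200 ), '0.0',     'boundary: nothing used'    );
        ok( !defined percent_used( 10, 0 ),     'invalid input: zero total' );
        ok( !defined percent_used( 10, undef ), 'invalid input: no total'   );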

    By creating tests as needed before making any changes, you will eventually build up a solid test suite for your code, or at least for the portions which are subject to change.

    On the side question of users not running the tests:

    That's not really a major issue in most cases. Tests are tools for the developer's use and, in most cases, knowing that they run successfully on your machine(s) is sufficient.

    But there are always the exceptions, when some environmental issue brings out a bug on a user's system which doesn't occur on yours. In these cases, you need to identify the environmental issue and attempt to duplicate it on your development system in order to debug it - this holds regardless of whether you're doing automated testing or not.

    However, if you have your test suite, you can turn that into a compiled .exe and ask the affected user(s) to run it. If any tests fail there, then that gives you a head start towards isolating the problem and identifying its underlying cause. And, as with any bug, once you're able to emulate the cause and trigger the bug in your test suite, it will stay there and help to avoid the introduction of similar bugs (or re-introduction of the same one) in later versions of the code.

Re: how to begin with testing?
by sundialsvc4 (Abbot) on Mar 23, 2009 at 13:05 UTC

    “Testing” in Perl is an especially poignant example of TMTOWTDI. A search of CPAN on the word “test” produced 5000 hits, and a search of module names beginning with Test:: turns up 806 entries today.

    What has become absolutely clear to me is that “punching a few buttons” and from that saying “okay, it seems to work...” isn't enough. Also, “you really don't know ‘your own’ code!” No... the only way to test is to do so very explicitly and continuously.

    When you start doing this aggressively, you realize just how unstable code that you believed to be “production ready” actually was, all this time. (And that's a very sobering thing to realize about your profession.) But when you start building things out of code that is being aggressively tested while you build it, and/or you start flushing-out the problems in production code before users have a chance to find them, you start “waking up in the night” much less often.

Re: how to begin with testing?
by gokuraku (Monk) on Mar 23, 2009 at 18:48 UTC
    The way I see it, you have two issues to be addressed: one is that you want automated testing in some situations, but overall you also want testing to actually happen on this application. There are a few different ways to go about this, but you first need to work out an overall strategy. As someone mentioned: what is your goal? What do you want to achieve by "testing"?

    Most QA organizations will produce documentation that records the overall strategy (the what will be done) and plans that record the how. You don't need anything that formal, but having even a napkin sketch of what you want to do will help, because it gives you a way to see and track your goals. Most testing examples are high level because everyone's needs, environment and code are different. What I test is different from what someone else tests; we may use the same underlying techniques of unit and boundary testing, but the methods we use will differ. Since you've been reviewing information on testing already, I'll go with the premise that you have the basics down and know that you need to test, but still need to come up with a how.

    One way to handle this, if your project is open source, is to open it up for people to test. Getting general feedback from people willing to test your project will give you different views on how it's used, and if you don't have users giving you feedback, it's a good way to get a view from outside your own. Looking at sites like the Software Testing Club, Test Republic or SQA Forums, you may find a few souls willing to test out your product.

    Automation, on the other hand, is something you really need to plan out; it is basically a software project in itself. If you have web interfaces that need testing, you need GUI test runs; if it's web based and not too complex, there are plenty of open source web tools to use. There are quite a few books out there on automation, including a recent one by Elfriede Dustin, Implementing Automated Software Testing. If you have a sufficiently complex project, then look at a framework you can run so that when you check in code you can do a build and have the framework run your tests; this is very useful for regression testing.

    Expect that you will probably need to adjust your tests: yes, you will write them AFTER you write code, and anyone who suggests otherwise is being disingenuous. Tests that go stale are only as useful as they were the day they were written; proper testing needs as much care as writing code. As for the last part of your post, you only need as much of a testing framework as will let you exercise the code you are trying to test. Most QA organizations would love to have a mirror of production for the code being released; it's not always possible, so you go for the most bang for your buck and time. Set up whatever will let you test your code in a real-world environment, set up scripts or situations that exercise it, and if you need to, think as well about what you can do to put stress and load on the system.

    If this is just you, my suggestion would be to try to get others to test for you, or to provide beta code for people to use before you release. The more people you can get to help you out, the more eyes and different perspectives you get.

Re: how to begin with testing?
by jeepj (Scribe) on Mar 25, 2009 at 12:38 UTC
    Before going too deeply into Perl-specific questions about testing, I think you must define what you want to test.

    In our organization, the software is tested tons of times, with several approaches.

    - Unit tests: you want to test a specific subset of your application, and this subset is often related to a specific use case. These tests mainly verify that a given piece of functionality works as expected. In that case, all inputs and outputs from other applications or databases are simulated.

    - End-to-end tests: you take the full range of applications, in a controlled environment, and any real use case in your business can be tested. This complements the unit tests, as the interactions between your applications are checked.

    - Performance/robustness: your application is tested with all transactions captured from a real environment, and you verify that resource consumption is not too high and that there are no crashes or core dumps. This is purely technical, but it ensures that the application is ready to handle real traffic.

    - Non-regression: you verify from one delivery to the next that the same inputs generate the same outputs. Your expected results must be adapted or re-validated when functionality changes.

    and there are others. These are just some examples of what testing can be, to give you some hints. Also, some tests can be easily automated, while some need to remain manual. Another idea is to create a script each time a bug is found and add it to a set of automated tests, to ensure that no future development will break something that was already fixed.

    Anyway, all tests must be performed in a controlled environment (data, configuration, links...), to ensure that no modification in the environment will interfere with your tests.
Re: how to begin with testing?
by Anonymous Monk on Mar 25, 2009 at 23:26 UTC
    If you've got an open source application, and a user base using it, you have another option -- let the users help. Perhaps you can cultivate an alpha test list, people who get the new releases prior to anyone else. They'll test with real data, knowing the risks.