saintbrie has asked for the wisdom of the Perl Monks concerning the following question:

My question has to do with testing redundancy. If you have a good set of functional tests, is it necessary to write unit tests? Why bother writing unit tests if you have a strong set of functional tests? Are unit tests just a waste of time if they check only intermediate operations rather than the final output of a program?

I'm the only one working on a large project which has essentially no tests for any of the code. In an ideal world, I'd have both unit tests and functional tests, and I'd have written the unit tests before I wrote any code and wouldn't be asking this question. But I didn't. And now I have limited time to work on the project.

My desire for testing has led me to stop development until I have a robust set of tests that cover most of the code. I've set to work methodically through each of the scripts with WWW::Mechanize, building functional tests for each use of each script. I figure functional tests should expose anything that unit tests would show, and by skipping unit testing I save myself hours of writing tests that are just redundant. Am I wrong in my assumption?
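The kind of functional test I'm writing looks roughly like this. (A minimal sketch only: the URL, the form layout, and the expected content are placeholders for whatever your own scripts actually serve.)

```perl
use strict;
use warnings;
use Test::More tests => 3;
use WWW::Mechanize;

# Hypothetical script under test -- substitute your own URL.
my $mech = WWW::Mechanize->new( autocheck => 0 );

$mech->get('http://localhost/cgi-bin/search.pl');
ok( $mech->success, 'search page loads' );

# Hypothetical form and field names.
$mech->submit_form(
    form_number => 1,
    fields      => { query => 'camel' },
);
ok( $mech->success, 'search form submits' );
like( $mech->content, qr/results/i, 'response page mentions results' );
```

One such .t file per use of each script builds up the functional suite described above.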

Replies are listed 'Best First'.
Re: Functional and Unit Test Redundancy
by matija (Priest) on May 08, 2004 at 19:51 UTC
    How much unit testing will be useful depends in large part on how general your units are.

    In a closely coupled system, a unit that never directly receives a user's input has its inputs "protected" by other units. It can rely (in an ideal world) on its inputs being in a range it can handle - and it won't contain any checks to verify that this is so (why should it - the other units have handled it). Such systems are more efficient, but more difficult to extend and maintain. They can, however, be tested through functional testing alone - because in a way, the whole script is one big unit.

    If, on the other hand, your project consists of more general units, which are only loosely coupled with other units, then each unit must be able to check that its assumptions about the inputs (and the state of the system) are valid. That means the unit should be tested for conditions which cannot happen when it is coupled with the other units.
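    Such a unit can be exercised directly for those "impossible" inputs. A minimal Test::More sketch - the subroutine and its error message are hypothetical stand-ins for a real unit:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical unit: in the coupled system, callers always pass
# a positive quantity -- but the unit still checks its assumption.
sub order_total {
    my ($price, $qty) = @_;
    die "quantity must be positive\n" unless defined $qty && $qty > 0;
    return $price * $qty;
}

is( order_total(2.50, 4), 10, 'normal input computes total' );

# Test the condition that "cannot happen" when coupled with other units.
eval { order_total(2.50, -1) };
like( $@, qr/quantity must be positive/,
      'unit rejects input the rest of the system never sends' );
```

    A functional test can never reach that second case, because the other units filter it out before it arrives.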

    Yes, loosely coupled systems are less "efficient", but they are much easier to extend and maintain.

    If your project is large, you will be much happier if you find a way to break it down into more-or-less independent units. If you don't, eventually your project will behave like a bowl of spaghetti: you won't be able to fix one end of the project without something moving at the other end. If you do, however, you will need to implement unit testing, too.

Re: Functional and Unit Test Redundancy
by xdg (Monsignor) on May 08, 2004 at 21:45 UTC

    This probably depends a lot on your own personal development style and whether you are coding by yourself or in a team. Clearly, if you're part of a larger team, having each developer unit test makes a lot of sense. If you alone are the project, then you may get by fine with functional testing, particularly if your project is small. If your project has to be "bulletproof", then more rigorous testing helps you gain confidence in the system. If you're doing something that should work if used correctly, and the penalties of incorrect usage are the tough luck of the user, then a lighter, top-down testing approach may be sufficient. As always, TIMTOWTDI, and the right approach will be different in each situation.

    That said, the advocates of test-driven development (which I'm starting to explore) would say that bottom-up unit-testing during development is the way to go -- you write your tests to express clearly what results you want and then code to that and build up from there. In the end, so what if your tests are "redundant" -- they were useful along the way to shape the logic of the system.

    There is some logic in the test-first approach that applies even if you aren't doing test-driven development. In particular, if you are continually tuning or refactoring your code, running unit tests on the affected code is much faster than re-running a full testing suite after each change.
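    For example, with the standard `prove` runner (from Test::Harness), you can point it at just the test file covering the code you touched. The file and directory names here are hypothetical:

```shell
# While refactoring one module, re-run only its unit tests (fast):
prove t/unit/price.t

# Before a release, re-run everything recursively, functional tests
# included (slow):
prove -r t/
```

    The tighter the loop between an edit and its unit test, the cheaper each refactoring step becomes.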

    As a separate note, it's worth thinking about what you mean by "redundant". Just because a code routine or statement is executed more than once doesn't make tests redundant. My view is that redundancy means that tests are:

    • Independent -- the success of a test does not depend on the prior success of another test. (e.g., initializing the system, creating a certain state, etc.)
    • Duplicative -- the tests identify the same failure condition.

    A unit test may well be independent of a functional test, but the failure of the functional test doesn't necessarily tell you the same thing as the failure of a unit test. This is an issue of granularity -- when a failure occurs, how fast can you track down where and why it occurred? Thus, back to the opening paragraph: if your project is simple enough that you have good transparency into a functional failure, then you probably don't need unit testing. But if a failing functional test leads you to go "Huh?" and start digging through code to find the problem, then some more granular unit tests would probably help you debug things more quickly. (Assuming that speed/efficiency are important to you.)


    Code posted by xdg on PerlMonks is public domain. It has no warranties, express or implied. Posted code may not have been tested. Use at your own risk.

Re: Functional and Unit Test Redundancy
by bfilipow (Monk) on May 08, 2004 at 22:15 UTC

    "If you have a good set of functional tests, is it necessary to write unit tests?".

    YES - unit tests should be applied to each module before the functional tests, even before the whole system/program is ready. And they should save you a lot of debugging time.

    As matija suggested above, a good set of functional tests guarantees that the product works fine for the time being. However, consider future extensions - individual units may become exposed to inputs you have never dreamt of.

    But if you are limited in your time - consider writing "release notes".

      Could you give an example of what you would consider good "release notes" or good use of "release notes"?

        IMO, "release notes" are the part of the documentation which lists known bugs/issues of the product: all the minor and not-so-minor flaws which there were no resources to fix in this version, but which should be fixed in future releases. The "RN" should be a summary, with details included in a more appropriate part. The lack of unit tests for any module should be mentioned here.

        In your case, the part of the documentation that describes tests should have a discussion of what the functional tests could not cover and why.

        There is one more problem you should bear in mind when turning down unit tests - any reuse of any part of your code becomes questionable. It could be easier to write it from scratch than to reuse yours.

        Mind that it is my private opinion of what good "release notes" are.

Re: Functional and Unit Test Redundancy
by petdance (Parson) on May 09, 2004 at 05:09 UTC
    Redundancy is good. Don't think in terms of optimizing the tests. You don't really care how long they take to run, or whether the test code could be cleaned up.

    Every little test you have in there adds to your army of guardian angels.


Re: Functional and Unit Test Redundancy
by torin (Sexton) on May 11, 2004 at 16:03 UTC

    If your tests never fail, then functional tests are all you need. But if something fails in your functional test, how do you go about determining where it is failing? This is where unit tests excel. It definitely gives _me_ the warm fuzzies when I can verify that changes to one specific module still do what the tests expect them to do.

    In theory, the functional tests are the redundant ones, since if all the unit tests pass, then of course all the functional tests will pass as well. But then, we know that theory doesn't always map correctly to reality.