in reply to Re^2: When Test Suites Attack
in thread When Test Suites Attack

Yes, I do. But, the developer(s) involved in the code being integrated shouldn't be doing integration tests. Those should be done by someone else, preferably a dedicated tester (though an uninvolved developer would do). And, integration tests are no substitute for unit tests, system tests, and user-acceptance tests. Each tests a different view and a different slice of the product.

My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Re^4: When Test Suites Attack
by BrowserUk (Pope) on Oct 30, 2005 at 21:27 UTC

    That's a "big shop" attitude, but where possible, I totally agree with you.

    However, and this is where we may disagree, I see a tendency in XP and TDD, or maybe just in some practitioners of those, to place too high an emphasis upon unit testing. There is a tendency for unit tests to be too all-encompassing; too extensive; too mandated; too important. Effectively, I see unit tests being used as a substitute for, and largely overlapping the bounds of, integration testing and another form of testing that seems to be out of favour currently--functional verification.

    Under the schemes of testing I grew up with, unit tests were the programmer's own sanity checks of his own code. They were entirely within his mandate, his responsibility, and for his benefit. They were written by the programmer, run by the programmer, and acted upon by the programmer. They served his purpose, not that of the organisation. They were, by implication, white-box tests.

    Once the programmer says his code is ready, then the responsibility and purview moves away from the programmer to the tester, and into the realms of Functional Verification. These are black-box tests, written to specification by a third party, code unseen; and run by a third party. These tests are for the organisation.
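    To make the distinction concrete, here is a minimal sketch using Test::More (the clamp routine and its spec are invented for illustration): the white-box test targets a boundary branch the author knows is in the code, while the black-box FV tests are written from the documented contract alone.

```perl
use strict;
use warnings;
use Test::More tests => 3;

# Invented routine under test: clamp a value into a range.
sub clamp {
    my ($val, $lo, $hi) = @_;
    return $lo if $val < $lo;
    return $hi if $val > $hi;
    return $val;
}

# White-box (programmer's sanity check): targets the boundary
# branch the author knows is in the implementation.
is( clamp(5, 5, 10), 5, 'lower boundary is not clamped' );

# Black-box (functional verification): written from the spec alone --
# "returns a value no smaller than lo and no larger than hi".
is( clamp(-3, 0, 10), 0,  'below range clamps to lo' );
is( clamp(99, 0, 10), 10, 'above range clamps to hi' );
```

    In practice the FV tests would live in a separate suite, written and run by someone who has not seen the source.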

    Integration tests are used when modules written by disparate groups come together. Their form is dependent upon the nature of the coming together. Where there is a definite caller/called relationship, the UT and FV tests of the calling code become an effective IT of the interface between the calling and called code.

    In other situations, there is a more peer-level relationship between the modules, and there may not be any (in-house) application whose FV & UT will perform this role. In these circumstances, there is a need to write an in-house integration suite. This may also take the form of a demonstration application and/or user acceptance test.

    System test comes when a complete system is put together. This may or may not happen in-house; it may be a real application or a demonstration; it may form part of a user acceptance test or a contractual obligation.

    The trick to a successful and cost-effective test programme is to minimise the overlap between these levels whilst ensuring coverage.

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      That's a "big shop" attitude, but where possible, I totally agree with you. While I cut my teeth in the big shops, it's actually the attitude I take when I write CPAN distros. I work in a company with two programmers and two owners, one of whom moonlights as the third programmer. About half of what we write are CPAN modules and/or patches to CPAN modules that directly benefit our projects. The other half is the proprietary algorithms, DB interactions (most of which could benefit from some ORM), and glue code. As such, we are very oriented towards decoupling functionality and CPAN'ing as much as possible. OSS, for us, is as much about outside testing as all the other benefits combined.

      Since we tend to write CPAN distros, our test suites are "unit tests", but they are also API tests. We use TDD wherever possible, so we immediately use and write documentation for our APIs. This is very much akin to the "integration tests" you speak of. I can't tell you how often I have added features and/or changed APIs (and rewritten tests) because I can't test a specific scenario or the test "just feels wrong."
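      A test-first API test for a CPAN-style module might look like the sketch below. My::Stack and its methods are invented for the example; in real TDD the test file is written first and fails until the implementation catches up -- the implementation is inlined here only so the sketch is self-contained.

```perl
use strict;
use warnings;
use Test::More tests => 4;

# Invented implementation, inlined for self-containment. In a real
# distro this would live in lib/My/Stack.pm and be written *after*
# the tests below already exist and fail.
package My::Stack;
sub new  { bless { items => [] }, shift }
sub size { scalar @{ $_[0]{items} } }
sub push { my ($self, $v) = @_; $self->{items}[ $self->size ] = $v }
sub pop  { my ($last) = splice @{ $_[0]{items} }, -1; $last }

package main;

# The tests double as the first documentation of the API we wish we had.
my $stack = My::Stack->new;
is( $stack->size, 0,  'new stack is empty' );
$stack->push(42);
is( $stack->size, 1,  'push grows the stack' );
is( $stack->pop,  42, 'pop returns the last pushed value' );
is( $stack->size, 0,  'pop shrinks the stack' );
```

      Saved as t/stack.t in a distro layout, prove t/ runs it alongside the rest of the suite.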

      We are also starting to write each other's tests. For one, it means that the API has to pass someone else's eyes. stvn will think of usages that I just don't conceive of, and vice versa. Plus, it's so much easier to write code to pass tests than it is to write the tests in the first place. We did this for Perl6::Roles and we liked it a lot.

      Our UAT tends to come about because many of our modules are heavily used (Excel::Template, Tree::Simple, DBD::Mock, etc.). If something doesn't pass muster, we hear about it.

      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?