PerlMonks: Perl Meditation
Re: Regression testing with dependencies
by mstone (Deacon) on May 29, 2002 at 23:29 UTC
What you're talking about -- though indirectly -- is the need for configuration management.
All software starts from some basic set of assumptions. One of the main selling points for high level languages like Perl is that they give programmers a nice, coherent set of basic assumptions from which to work. But language is only part of the picture. Programs also rely on assumptions about their operating system, filesystems, libraries, devices, and a whole layer of formats and protocols that let them communicate with the rest of the world. That set of assumptions is collectively known as a 'configuration'.
Simple programs -- ones that spend all their time working with values in their own address space -- don't make many assumptions about configuration above and beyond the language. In essence, any code that compiles will run. More complex programs -- ones that communicate with other parts of the system -- inherit a whole load of configuration-dependent assumptions for every piece they rely on.
(And please note that in this context, 'simple' and 'complex' have no relation to the functional code itself. A genetic annealing simulation with no serious I/O would be 'simple', while a hit counter that calls a database would be 'complex'.)
Configuration management is the art of nailing down all the assumptions made by a given piece of software. Some of those assumptions will be internal to the language, as you mentioned, while others will involve software the programmer doesn't control. Some will reside on the machine where the program executes, while others may live on other machines (a program that connects to a departmental database server, for instance). The configuration schedule for a given program should tell you exactly how to build an environment in which that program will run as expected.
As Chromatic mentioned, you can use mock objects to build abstraction barriers into your program, and design by contract gives you a way to specify the conditions a dependency has to meet in order to work and play nicely with your software. In the long run, though, there's no way to escape your dependency on.. well.. your dependencies.
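Design by contract doesn't need special tooling -- in plain Perl you can spell the contract out as runtime assertions at the component's boundary. A minimal sketch (every name here -- Shipper, ship_widgets, the backend's send method -- is hypothetical, invented for illustration):

```perl
use strict;
use warnings;

package Shipper;
use Carp;

sub new { my ($class, %args) = @_; bless { %args }, $class }

sub ship_widgets {
    my ($self, $count, $dest) = @_;

    # Preconditions: the contract the caller has to meet.
    croak "count must be a positive integer"
        unless defined $count && $count =~ /^\d+$/ && $count > 0;
    croak "destination required"
        unless defined $dest && length $dest;

    my $tracking = $self->{backend}->send($count, $dest);

    # Postcondition: the contract we promise in return.
    croak "backend returned no tracking id"
        unless defined $tracking && length $tracking;

    return $tracking;
}

1;
```

A dependency that breaks its half of the deal now fails loudly at the boundary instead of corrupting state three calls later.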
As to the specific problem you mentioned in your reply -- testing a CC system without actually shipping 10,000 widgets to Sri Lanka -- design by contract is your friend.
Yes, your high-level component relies on low-level systems to fulfill its contract. So to unit-test that high-level component, set it into a test harness where all the low-level systems are fakes that, by design, either meet or violate the required contract. Your unit tests will confirm that the high-level component does fulfill its contract when all its dependencies fulfill theirs, and that it fails in the required way when some or all of its dependencies fail. The problem of making sure the component works properly in the live system will then fall under the heading of integration testing, not unit testing.
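A sketch of that harness in Perl. Everything here is illustrative -- the OrderProcessor, gateway, and shipper interfaces are invented for the example, not taken from any real CC system:

```perl
use strict;
use warnings;

# Hypothetical high-level component: charge the card, then ship.
package OrderProcessor;
sub new { my ($class, %deps) = @_; bless { %deps }, $class }
sub place_order {
    my ($self, $qty) = @_;
    $self->{gateway}->charge($qty * 10)
        or return { ok => 0, reason => 'charge failed' };
    $self->{shipper}->ship($qty)
        or return { ok => 0, reason => 'shipping failed' };
    return { ok => 1 };
}

# Fakes that fulfill their contracts by design...
package GoodGateway;  sub new { bless {}, shift }  sub charge { 1 }
package GoodShipper;  sub new { bless {}, shift }  sub ship   { 1 }

# ...and a fake that violates its contract by design.
package DeadGateway;  sub new { bless {}, shift }  sub charge { 0 }

package main;

my $happy = OrderProcessor->new(
    gateway => GoodGateway->new, shipper => GoodShipper->new );
my $broken = OrderProcessor->new(
    gateway => DeadGateway->new, shipper => GoodShipper->new );

print $happy->place_order(3)->{ok}
    ? "fulfills its contract when dependencies do\n" : "BUG\n";
print $broken->place_order(3)->{reason} eq 'charge failed'
    ? "fails the required way when a dependency fails\n" : "BUG\n";
```

Note that no widgets go anywhere: both failure and success live entirely inside the harness.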
As to your objection that mock objects can end up being as numerous and as large as the actual system, you're on the right track, but you haven't learned to love the idea yet.
Instead of thinking of mock objects as an annoying adjunct to proper testing, think of them as a living design reference. Build the mock object first, and run it through its paces before building the actual production object. In fact, you should build a whole family of mock objects, starting with the simplest, no-dependencies-canned-return-value version possible, and working up through all the contractual rights and obligations the production object will have to observe. For every condition in the contract, write objects that both meet and violate that condition, so you can be sure all the bases are covered.
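One way that family might start, sketched in Perl (the Inventory interface and its in_stock method are hypothetical, chosen just to show the progression):

```perl
use strict;
use warnings;

# Step 1: the simplest possible mock -- no dependencies, canned return.
package Inventory::Canned;
sub new      { bless {}, shift }
sub in_stock { 42 }                       # always the same answer

# Step 2: same canned answer, but now enforcing one clause of the
# contract: callers must name a SKU.
package Inventory::Strict;
use Carp;
sub new      { bless {}, shift }
sub in_stock {
    my ($self, $sku) = @_;
    croak "sku required" unless defined $sku && length $sku;
    return 42;
}

# Step 3: a deliberate contract violator, to prove callers cope.
package Inventory::Broken;
sub new      { bless {}, shift }
sub in_stock { undef }                    # breaks the 'defined result' clause

package main;
for my $class (qw(Inventory::Canned Inventory::Strict)) {
    my $n = $class->new->in_stock('WIDGET-001');
    print "$class says $n in stock\n";
}
```

Each step in the family pins down one more contractual clause, so by the time you write the production object, its obligations are already executable.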
What you're really doing there is working out your structural code first -- the stuff that keeps all the modules working and playing nicely together -- and putting the actual data manipulation off for last. That isn't as much fun as slamming down 1.5 times the lethal dose of caffeine and blasting your way through a 40-hour hack session, but it does tend to produce better-structured code.
In the end, you may end up with far more mock-code than production code, but the mock code will be easier to build and evolve, and every assumption in the production code will be directly testable from one of the mock units. In effect, you do all your debugging during the design stage, and by the time you get to production code, there's nothing left to test.
BTW - a 'smoke test' is a live test in a (presumably) safe environment: plug it in, turn it on, and see if it starts to smoke.