I've been given the job of "improving the quality" of over a million lines of old code (mostly C++, but some Perl and Java too). Most of this code currently has no tests and was not designed with testability in mind. After reading these nodes, here is my initial plan of attack:
1. Add new tests whenever you find a bug: i) run test (should fail); ii) fix bug; iii) rerun test (should now pass). That is, grow a regression test suite over time.
2. Risk. Find code "hot spots" by talking to people and examining the bug archives. Add tests for hot spots and areas known to be complex, fragile or risky. Refactor where appropriate (first writing a regression test suite).
3. Add new tests based on what is important to the customer.
4. Documentation-driven tests. Go through the user manual and write a test for each example given there. This also improves the quality of the documentation and hopefully focuses on the customer view of the system.
5. Where you fear a bug. A test that never finds a bug is poor value.
6. Write tests for the more "stable" interfaces. If an interface changes often, you must keep changing the test suite in step.
7. Boundary conditions.
8. Stress tests.
9. Make tests big enough to be useful, but not so big they are slow. A test that takes too long to run is a barrier to running it often.
10. Use code coverage tools to guide where to write tests.
11. Write some mock objects, where appropriate, to reduce dependencies, make tests more predictable, simulate errors and so on.
12. Do some manual code reviews.
13. Do some code reviews using tools, such as static (e.g. lint) and dynamic (e.g. valgrind) code analysers.
14. Don't let the sheer size of the task get you down! Incomplete is better than nothing at all.