Please let me know if I've missed important ideas or strategies and which of the points below you consider to be the most important.
I think your list pretty much has it covered, and I completely agree with chromatic's point about writing tests for new functionality.
The other strategy that I'd add is making sure that you write tests around any code that you're going to improve.
If you've got some gnarly code that you want to refactor, write some tests to exercise the behaviour that you want to preserve before you change the code.
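To make that concrete, here's a rough sketch of a "pin down the current behaviour" test in Python. The function and its behaviour are entirely made up for illustration; the point is that the tests record what the code does *now*, before you touch it:

```python
import unittest

# Hypothetical legacy function we intend to refactor. The names and
# behaviour are illustrative only, not from the original discussion.
def format_price(cents):
    # gnarly-but-working implementation we want to preserve
    dollars = cents // 100
    remainder = cents % 100
    return "$" + str(dollars) + "." + str(remainder).zfill(2)

class TestFormatPriceBehaviour(unittest.TestCase):
    """Pin down the existing behaviour before changing the code."""

    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(format_price(1205), "$12.05")

if __name__ == "__main__":
    unittest.main()
```

Once those tests pass against the old code, you can refactor freely: as long as they keep passing, you know the behaviour you cared about has survived.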
In particular, I'm interested to know the best way to add tests to existing code.
The advice I would give is to remember that tests exist to help improve the code. They're not an end in themselves.
A counterproductive practice that I've seen is to go through a large piece of legacy code and add developer tests for everything. Because the legacy code wasn't driven by tests, doing this produces a test suite that is brittle in the face of change. When you get to the refactoring, you'll find yourself continually throwing away many of the new tests, so you get no benefit from them.
In my experience it's much more effective to build the test suite around the changes you make to the code. Add tests when you add new features. Add tests around code that you're refactoring. Add tests to demonstrate bugs. Just following those three rules naturally builds a test suite around the most important code.
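The "add tests to demonstrate bugs" rule is worth a small sketch. Suppose a bug report says a slugify routine mangles consecutive spaces (everything here is a made-up example, not from the original post): you first write a test that reproduces the failure against the old code, then change the code until it passes, and the test stays around as a regression guard:

```python
import unittest

def slugify(title):
    # Fixed implementation. The hypothetical buggy version did
    # title.lower().replace(" ", "-"), which turned "Hello  World"
    # into "hello--world".
    return "-".join(title.lower().split())

class TestSlugifyRegression(unittest.TestCase):
    def test_consecutive_spaces_collapse(self):
        # This test demonstrated the bug: it failed against the old
        # implementation and now documents the intended behaviour.
        self.assertEqual(slugify("Hello  World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

The nice side effect is that every fixed bug leaves a test behind in exactly the code paths that have proven troublesome.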