in reply to Test Case Generator
There are several kinds of generated tests I use:
Crash and burn tests: To do this testing you need to have documentation on the valid value ranges for a function or method's parameters. The very act of writing up these tests can point out problems in documentation and incomplete code even before you write the test generation module. There are two types:
- You randomly generate data within the valid range for each parameter that can be passed to a function, then pass it to the function to make sure it does not die on good data. I also like to pass boundary-condition data in addition to mid-range randomly generated data.
- If the function is supposed to be doing its own bounds checking, you generate out of bounds data to make sure it DOES die. If it doesn't throw the expected error message or return value, it fails the test.
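Both halves of a crash-and-burn test can be driven by one small harness. A minimal sketch in Python, where `clamp_percent` is a hypothetical function under test whose documented valid range is 0-100:

```python
import random

def clamp_percent(n):
    """Hypothetical function under test: accepts 0-100, raises on anything else."""
    if not 0 <= n <= 100:
        raise ValueError("out of range")
    return n

def crash_and_burn(fn, lo, hi, trials=100):
    """Good data (boundaries plus random mid-range values) must not die;
    out-of-bounds data must die, since fn does its own bounds checking."""
    for n in [lo, hi] + [random.randint(lo, hi) for _ in range(trials)]:
        fn(n)  # any exception here is a test failure
    for n in (lo - 1, hi + 1):
        try:
            fn(n)
        except ValueError:
            continue  # it DID die, as expected
        raise AssertionError(f"{fn.__name__}({n}) accepted out-of-range input")

crash_and_burn(clamp_percent, 0, 100)
```

A function that silently accepts out-of-range input makes `crash_and_burn` raise an `AssertionError`, which is exactly the failure the second bullet describes.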
Consistency tests: these verify that the outputs of two function/method calls (or repeated calls to a single method) are mutually consistent. Some examples:
- Making sure that two calls to a toggle function return the original value.
- Verifying that $oFoo->isWidget($oAllegedWidget) returns true if $oAllegedWidget is a member of the array returned by $oFoo->getAllWidgets().
- Round trip testing. For example, one could call all the getters of an object to collect its data and then pass that data to the constructor to create a new object. One then verifies that all of the getters on the old and new objects return the same values. This is a good way to make sure that constructors are storing data in the slots it is supposed to go in.
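The round trip test above can be sketched in a few lines of Python. `Point`, its getters, and the deliberately broken subclass in the usage note are all hypothetical stand-ins for whatever class is under test:

```python
class Point:
    """Hypothetical class under test."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def get_x(self): return self.x
    def get_y(self): return self.y

def round_trip_test(obj):
    # Read all the data back out through the getters...
    data = (obj.get_x(), obj.get_y())
    # ...feed it to the constructor, and compare getter for getter.
    copy = type(obj)(*data)
    assert copy.get_x() == obj.get_x(), "x stored in the wrong slot"
    assert copy.get_y() == obj.get_y(), "y stored in the wrong slot"

round_trip_test(Point(3, 7))
```

A constructor that swaps its two slots passes a single construction but fails the round trip, because the second trip through the constructor swaps them again.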
It should be stressed that the quality of consistency testing is VERY dependent on the initial values of the test object. For instance, if all of the parameters passed to a constructor have the same value and are stored unchanged, then the round trip test described above has little value: the getters all return the same value, so they can't be used to verify that data is being stored in the right slots. Automated consistency testing should usually be coupled with (a) a few carefully designed test-pattern objects with handcrafted sets of return values for their function calls, and (b) code that sanity-checks randomly generated data/objects to make sure that they will create useful test objects. For example, one could verify that each parameter passed to a constructor has a different value.
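The sanity check in (b) can be as simple as rejection sampling: keep drawing random parameter sets until one has no repeats. A sketch, with the name `distinct_params` and the value range chosen here for illustration:

```python
import random

def distinct_params(n, lo=0, hi=1000):
    """Draw n random constructor parameters, rejecting any set containing
    repeats, so that round trip tests can tell the slots apart."""
    while True:
        params = [random.randint(lo, hi) for _ in range(n)]
        if len(set(params)) == n:  # every parameter is distinct
            return params
```

With a wide enough value range relative to n, the rejection loop almost always succeeds on the first draw, so the cost of the check is negligible.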
Static/stable result tests: Some methods and functions are expected to return specific values no matter what data is passed to them. Here the test generator produces random input and verifies that it has no effect on the return value. For example, a constructor for a singleton class should return the same object no matter how many times it is called.
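A minimal sketch of the singleton example, where `Config` is a hypothetical singleton class and the random arguments are the generated junk input that must have no effect:

```python
import random

class Config:
    """Hypothetical singleton: construction always yields the same object."""
    _instance = None
    def __new__(cls, *args):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

def stable_result_test(trials=50):
    first = Config()
    for _ in range(trials):
        # Randomly generated arguments must not change the result.
        assert Config(random.randint(0, 99)) is first, "singleton broke"

stable_result_test()
```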
Environment sensitivity tests: this involves generating perturbations in the environment, e.g. changing environment variable values or other aspects of the execution environment around the object in various ways, to make sure that the object maintains its expected state in a variety of execution contexts. For example, one might want to verify that the object continues to be well behaved regardless of whether it is created and run via the command line or from a daemon.
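One cheap perturbation to generate is random mutation of environment variables. A sketch, assuming a hypothetical function `greeting` whose return value should not depend on the environment; the `FUZZ_ENV_*` variable names are invented for the test:

```python
import os
import random

def greeting():
    """Hypothetical function under test: must ignore the environment."""
    return "hello"

def env_sensitivity_test(fn, expected, trials=20):
    saved = dict(os.environ)
    try:
        for _ in range(trials):
            # Perturb the environment: set or delete a random variable.
            key = f"FUZZ_ENV_{random.randint(0, 4)}"
            if key in os.environ and random.random() < 0.5:
                del os.environ[key]
            else:
                os.environ[key] = str(random.random())
            assert fn() == expected, f"behavior changed after perturbing {key}"
    finally:
        # Always restore the original environment for later tests.
        os.environ.clear()
        os.environ.update(saved)

env_sensitivity_test(greeting, "hello")
```

Restoring the environment in a `finally` block matters: a perturbation test that leaks its perturbations contaminates every test that runs after it.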
Load testing: This involves generating various load levels to make sure that the object performs within tolerance ranges at those load levels.
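The simplest form of this is a loop over generated load levels with a wall-clock budget per level. A sketch, where `work` is a hypothetical function under test and the levels and budget are illustrative tolerances, not recommendations:

```python
import time

def work(n):
    """Hypothetical function under test: does O(n) work."""
    return sum(range(n))

def load_test(fn, levels, budget_s):
    """Run fn at each generated load level; fail if any run blows its budget."""
    for level in levels:
        start = time.perf_counter()
        fn(level)
        elapsed = time.perf_counter() - start
        assert elapsed < budget_s, f"level {level}: {elapsed:.3f}s over budget"

load_test(work, [1_000, 100_000, 1_000_000], budget_s=2.0)
```

Wall-clock budgets make this kind of test sensitive to the machine it runs on, so tolerances need generous margins or the test will flap.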
One danger in any automated test generation project is that the test harness itself can be buggy. That's another reason why it is important to combine any generated test suites with handcrafted ones. Then again, handcrafted test suites can also be error-prone (do I have a bug? 5+4 didn't add up to 10! Oops, that was a typo in my expected result!). Each can therefore act as a check on the other.