http://www.perlmonks.org?node_id=752409

stinkingpig has asked for the wisdom of the Perl Monks concerning the following question:

I've read a lot on this and other sites about testing, but I seem to be missing something fundamental between "it's a good idea" and "here's how to go about it in a real project." I've watched screencasts and presentations, I've read articles and module instructions, and the examples are always abstracted so far from my use case that I don't understand how to get from here to there.

I have a number of projects where I'm interested in automated testing, but I'll focus on one for this posting. The code base is a set of monitoring and maintenance routines for a Windows server product. It's about 9,000 lines of code that does a lot of direct SQL work, Windows event viewer work, Windows services work, and file system work... then it records statistics in RRD files and generates some HTML and email reports. This is a free tool that is nominally open source, although I am the sole developer. The target environments run the tool as a compiled .exe, so they will never run the test harness themselves (the way a test suite runs during a CPAN install).
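To give a feel for the shape of the code (this is a simplified, hypothetical fragment with made-up names like check_queue_backlog and the queue table, not the actual source), a typical check talks straight to its data sources and side effects in one pass, so there's no obvious seam where a test could slip in fake data:

    use strict;
    use warnings;
    use DBI;
    use RRDs;    # ships with RRDtool

    # Hypothetical sketch of the current style: query, record, alert,
    # all inline against the live environment.
    sub check_queue_backlog {
        my ($config) = @_;
        my $dbh = DBI->connect( $config->{dsn}, $config->{user}, $config->{pass},
                                { RaiseError => 1 } );
        my ($backlog) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM queue WHERE processed = 0'
        );
        RRDs::update( $config->{rrd_file}, "N:$backlog" );
        warn "ALERT: queue backlog is $backlog\n"    # stand-in for the email/HTML report
            if $backlog > $config->{threshold};
        return $backlog;
    }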

In a perfect world, the tool would have little to say. Some of its test conditions occur only rarely, and others that I intend to add cover "once-in-a-blue-moon" situations... rare, but indicative of the kind of extreme problem that needs to be jumped on right now.

So, my question: is the idea in testing this sort of real-world application to fabricate a complete set of environmental inputs and then test the program's reactions to them? Or is there a higher level of abstraction that I'm missing? Given the number of interfaces involved, I have some concerns about the workload of building and resetting a test fixture for every possible situation. How do other Perl developers do this?
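For what it's worth, the closest picture I can form of "fabricating the environment" is something like the sketch below, where DBD::Mock stands in for the real database (the module choice, SQL, and numbers are mine, purely for illustration):

    use strict;
    use warnings;
    use Test::More tests => 1;
    use DBI;    # DBD::Mock must be installed; the dbi:Mock: DSN loads it

    # Fabricate the database side of the environment: the mock handle
    # returns canned rows instead of talking to a real SQL Server.
    my $dbh = DBI->connect( 'dbi:Mock:', '', '', { RaiseError => 1 } );
    $dbh->{mock_add_resultset} = {
        sql     => 'SELECT COUNT(*) FROM queue WHERE processed = 0',
        results => [ ['count'], [42] ],
    };

    my ($backlog) = $dbh->selectrow_array(
        'SELECT COUNT(*) FROM queue WHERE processed = 0'
    );
    is( $backlog, 42, 'check sees the canned backlog figure' );

Multiplying that by the event viewer, services, and file system interfaces, each needing its own stand-in and its own set of canned situations, is exactly where my workload worry comes from. Is that the direction people actually take?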