Where I am working, we have a requirement to provide an automatic regression test suite for an existing live application. The application is C++, not perl (though there is some perl glue already deployed for application monitoring). I'd quite like to do this using Perl, as I think the language is ideally suited. I also have permission to use any CPAN modules of my choice. I will still need to convince colleagues of the wisdom of this choice.

I use the Test::More and Test::Harness mechanisms in unit tests for modules used by the application glue. But, the full regression test is a much bigger prospect. On the automation side, I will need to be doing the following:

It occurs to me that there is a fundamental difference between an automation "step" and a test. If a test fails, I want the script to carry on and run more tests. If a step fails, I want it to pause requiring manual intervention. The pause could eventually time out, bailing the run, allowing the whole suite to be run in "lights out mode" unattended over a weekend.
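The step/test split can be sketched as a small helper: a failed test is simply reported and the run continues, but a failed step blocks, polls for manual recovery, and aborts the whole run on timeout. Everything below (the name `step`, the polling interval, the default timeout) is a hypothetical sketch, not an existing module; in a Test::Harness script the final `die` would become Test::More's `BAIL_OUT`.

```perl
use strict;
use warnings;

# Hypothetical step() helper: a failed *test* is reported and the run
# carries on, but a failed *step* waits for manual intervention and,
# if nobody fixes it before the timeout, aborts the whole run.
sub step {
    my ($desc, $code, $timeout) = @_;
    $timeout = 300 unless defined $timeout;   # default: wait 5 minutes
    return 1 if $code->();                    # step succeeded, carry on

    warn "STEP FAILED: $desc - waiting ${timeout}s for intervention\n";
    my $deadline = time + $timeout;
    while (time < $deadline) {
        sleep 5;                              # an operator may fix things by hand
        return 1 if $code->();                # recovered, resume the run
    }
    die "Bailing out: step '$desc' not recovered in ${timeout}s\n";
}

# Usage sketch (daemon_running() is a placeholder you would write):
# step('application daemon is up', \&daemon_running, 600);
```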

My plan is to put the automation steps in first, and add the tests afterwards.

I'd be very interested in hearing from anybody who has attempted or achieved anything like this before. What tools have already been written that can help me? What problems and gotchas should I be careful to avoid?


Oh Lord, won’t you burn me a Knoppix CD ?
My friends all rate Windows, I must disagree.
Your powers of persuasion will set them all free,
So oh Lord, won’t you burn me a Knoppix CD ?
(Misquoting Janis Joplin)

Replies are listed 'Best First'.
Re: Automated regression testing
by Old_Gray_Bear (Bishop) on Oct 13, 2006 at 16:48 UTC
    If you haven't already, go take a look at "Perl Testing: A Developer's Notebook" (Langworth & chromatic), particularly the last chapter ("Testing Everything Else"). There are several tools and tricks that can be applied to testing non-Perl code. Basically, you can treat your C++ as a black box, and wrap it in an eval-block. For a given stimulus, you have an expected response that you can check for (by examining $@, any temporary files that get written, databases modified, etc.) in a Test::More test harness.
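    That black-box pattern can be sketched like this; `./myapp` would be the real C++ binary, with `echo` standing in here so the example actually runs:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Black-box sketch: drive the application binary as an external command
# and check its observable response.  Substitute the real C++ binary
# (e.g. './myapp') for the 'echo' stand-in used here.
my $app = 'echo';

my $out = `$app hello`;                 # the stimulus
my $rc  = $? >> 8;                      # the child's exit status

is($rc, 0, 'application exited cleanly');
like($out, qr/hello/, 'expected response to the stimulus');

# The same pattern extends to side effects: temporary files written,
# database rows changed, log lines emitted, and so on.
```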

    If this were me confronted with this project, I'd strongly consider going down the 'Test' path in parallel with the 'Automation' path. There will be enough cross-talk between the two to make this a very useful approach. The Test path will give you the means to observe and report on your C++ code in action, which the Automation path can use to make go/no-go and stop-and-fix decisions.

    I Go Back to Sleep, Now.


Re: Automated regression testing
by radiantmatrix (Parson) on Oct 13, 2006 at 15:15 UTC

    Test scripts intended for the Test::Harness framework can be talked into handling all of what you require; they are, after all, just Perl programs.

    Basically, you're looking at using Conditional tests like Test::More supports. It works like this:

        SKIP: {
            skip "Wasn't able to start the 'foo' daemon", $num_tests
                unless try_start('/bin/foo')
                    or wait_for('Please start foo', '5m');

            # ... the tests that depend on foo
        }

    If you need to bail out completely, you can use the BAIL_OUT function provided by Test::More:

        eval {
            try_start('/bin/foo')
                or wait_for('Please start foo', '5m');
            die "Unable to find a running foo" unless is_running('/bin/foo');
        };
        if ($@) { BAIL_OUT($@) }

    Please note that these are pretty high-level suggestions, but I hope they can start you thinking in the right direction.

    A collection of thoughts and links from the minds of geeks
    The Code that can be seen is not the true Code
    I haven't found a problem yet that can't be solved by a well-placed trebuchet
Re: Automated regression testing
by g0n (Priest) on Oct 14, 2006 at 12:17 UTC
    In an attempt to get around the problems with QA described in The purpose of testing, I have been working on something very similar. The chief difference between what you are describing and what I've been working on is that I am seeking to generate automated tests and formal test scripts (in MS-Word) from the same metadata.

    I did very much what you are describing, wrote the automation steps first and the tests later. Also, the distinction you make between steps failing and tests failing is one that I've followed.

    Probably the most significant thing I found was that the automation steps ended up being reused for all sorts of things - in fact, the test framework is not in use yet, but I've written several support tools (for component & log file monitoring, enabling/disabling components etc) using the automation classes which save me a lot of work from day to day. Since the automation classes also get much more use this way, they've improved considerably in error detection & recovery in ways that they might not if they had only been used for testing.

    For that reason I'd recommend not coupling your UI with the automation - instead have packages containing automation functions that return success or failure, and write the user interaction code separately - possibly as part of your test script.
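    The decoupling might look something like this sketch; the package and method names are made up for illustration, and the automation body is stubbed out so the sketch is self-contained:

```perl
use strict;
use warnings;

# Sketch of keeping automation separate from the UI: the automation
# package only reports success/failure, and the caller (a test script,
# a monitoring tool, ...) decides whether to prompt, retry, or bail.
package MyApp::Automation;   # hypothetical package name

sub restart_component {
    my ($class, $name) = @_;
    # Real code would stop/start the component and verify it came back;
    # here we just pretend, to keep the sketch self-contained.
    return { ok => 1, detail => "$name restarted" };
}

package main;

my $status = MyApp::Automation->restart_component('feed_handler');
if ($status->{ok}) {
    print "step succeeded: $status->{detail}\n";
} else {
    # The *caller* chooses the policy: prompt an operator, retry, bail...
    warn "step failed: $status->{detail}\n";
}
```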

    I also had a similar problem of convincing other people this was a good way forward - I got around that by designing the whole thing up front, then initially only implementing those classes I needed for a simple demonstration test case to get buy in.

    Much of what you need is built into perl (backticks to run command line utilities, normal filehandling for reading log files, pattern matches for spotting log messages etc) - which is one reason why perl is ideally suited to the task.
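    A minimal, self-contained illustration of that point: plain filehandles and a regex are enough to scan a log for error messages. The log file here (the throwaway name `demo.log`) is written by the example itself.

```perl
use strict;
use warnings;

# Write a tiny log file so the sketch is self-contained; in real use
# you would be reading the application's own log.
my $log = "demo.log";
open my $wr, '>', $log or die "open $log: $!";
print $wr "2006-10-14 12:17:03 INFO  started\n",
          "2006-10-14 12:17:09 ERROR connection refused\n";
close $wr;

# Ordinary filehandling plus a pattern match spots the log messages.
my @errors;
open my $rd, '<', $log or die "open $log: $!";
while (my $line = <$rd>) {
    push @errors, $1 if $line =~ /ERROR\s+(.*)/;
}
close $rd;
unlink $log;

print "found ", scalar @errors, " error(s): @errors\n";
# -> found 1 error(s): connection refused
```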

    If you are going to have a very large volume of tests and associated test data, I'd suggest keeping the test and its associated test data together for maintainability. I initially did this by storing them both in a database, but later found that XML files were more flexible and easily maintained. If you use the Test::Harness framework for your tests (and I see no reason not to - I didn't really have the option) you could perhaps put the test data for each file in a __DATA__ block, or in a separate file with the same base name, or some other mechanism, as long as there is an easily spotted connection between them.
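    A sketch of the data-driven approach: in a real .t file the table below would sit in a __DATA__ block and be read from the DATA filehandle; a heredoc is used here so the sketch is self-contained, and `normalise` is an invented routine standing in for whatever is under test.

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Invented routine under test: collapse whitespace, lowercase.
sub normalise { my $s = shift; $s =~ s/\s+/ /g; lc $s }

# In a real .t file this table would live after __DATA__ and be read
# from the DATA filehandle; each line is "input|expected".
my $table = <<'END';
Hello   World|hello world
FOO|foo
END

open my $dh, '<', \$table or die "open: $!";
while (my $line = <$dh>) {
    chomp $line;
    my ($input, $expected) = split /\|/, $line;
    is(normalise($input), $expected, "normalise('$input')");
}
```

    Adding a case is then a one-line edit to the data, with no new test code to write.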


    "If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
    John Brunner, "The Shockwave Rider".

Re: Automated regression testing
by reneeb (Chaplain) on Oct 14, 2006 at 08:10 UTC
    If it is a command line application, you can use Test::Cmd or Test::Expect. If it is a GUI, you can use Win32::GuiTest on Windows and X11::GUITest on Linux.

    If you want to test the code directly, you could write XS code that interfaces with your C++ code. In the Perl script you can then use the modules mentioned above to produce TAP output.
Re: Automated regression testing
by Anonymous Monk on Oct 13, 2006 at 20:55 UTC
    Perl Testing: A Developer's Notebook.

    Edited by planetscape - removed link to copyrighted content

    Considered by grep: Reap: Copyrighted Material
    Unconsidered by Arunbear: keep (and edit) votes prevented reaping (Keep: 1, Edit: 2, Reap: 9)