Earlier, I wrote about cleaning up all my CVS repositories in one pass. Now I want to run all the tests in all those projects, and I want to do it every day.

I want to go through all the directories and run their test suites, and I want to do this from a script so I can generate nightly reports. Instead of checking a distribution by hand only when I work on it, I want to check it periodically and automatically. Some of my modules rely on network services, and since I don't control those, sometimes the tests fail because something changed on the other end. Still, there is no end to the list of reasons this sort of thing is useful.

Most of that work is just a simple matter of programming (SMOP), but there are some tricky parts. No matter: the time it took me to write the script was significantly shorter than checking the 10 or so projects I cared about that day. Not only that, I get to use the script over and over again.

Perl distributions assume that they are going to be run from a Makefile and that someone is going to look at the output: not the results, the output. The Test::Harness stuff is narrowly focussed for just that purpose. No one really cares that much about using it for anything else, so it mostly lives a peaceful life. I just need it to be the tiniest bit different though, so I just reuse some bits from the private interface to Test::Harness and throw away the output. Don't try this at home, kids!

(Note: some of this originally appeared on Use.perl) I want to run the Test::Harness magic from a program, but it wasn't really designed for that, and it outputs things when it should be quiet. This problem is even a documented To Do item in the Test::Harness docs. No matter: I just turn STDOUT and STDERR into big globs of nothing, then run the private method _run_all_tests(), which is really just runtests() without the report.
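In isolation, the glob trick looks like this. This is a minimal demonstration, not a piece of the real program, and noisy() is a made-up stand-in for Test::Harness:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# a stand-in for Test::Harness: it chatters on STDOUT, but it
# also returns a value we actually want
sub noisy { print "chatter you don't want to see\n"; return 42 }

my $result = do {
	local *STDOUT;   # STDOUT is now an empty glob: prints go nowhere
	local *STDERR;   # same for warnings and errors

	noisy();
	};

# the localization ends with the block, so printing works again
print "noisy() returned $result\n";
```

The localized globs disappear at the end of the do block, so the rest of the program gets its STDOUT and STDERR back without any explicit cleanup.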

That turning off STDOUT even works surprises me. Remember that Test::Harness gets messages from the test files through STDOUT. Heck, it was a couple of seconds of typing and it worked. I tried it on some tests that had failures, and it caught them. Schwern tells me I should do it right by using Test::Harness::Straps, and he's right, of course. I basically have to rewrite _run_all_tests(), which already does everything I want but is just slightly annoying. So, I'll get right on that the next time I have some free time for coding.
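The point of the Straps approach is getting per-file results I can inspect instead of one big report. Here's a rough, hand-rolled sketch of that idea: count_tap() and its interface are my invention for illustration, not the Test::Harness::Straps API, but it counts TAP lines the same basic way:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# count passes and failures in raw TAP output: the sort of
# per-file analysis Test::Harness::Straps gives you for free
sub count_tap {
	my @lines = split /\n/, shift;

	my %totals = ( ok => 0, failed => 0, max => 0 );

	foreach my $line ( @lines ) {
		if(    $line =~ /^1\.\.(\d+)/ ) { $totals{max} = $1 }  # the plan
		elsif( $line =~ /^not ok\b/  )  { $totals{failed}++ }
		elsif( $line =~ /^ok\b/      )  { $totals{ok}++ }
		}

	return \%totals;
	}

my $tap = <<'TAP';
1..3
ok 1 - loads
not ok 2 - network fetch
ok 3 - cleanup
TAP

my $totals = count_tap( $tap );
printf "%d/%d failed\n", $totals->{failed}, $totals->{max};
```

With something like this per test file, an overall report is just a matter of summing the hashes.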

_run_all_tests() needs to know where to find things that only MakeMaker knows about (the build library: blib), so I create an ExtUtils::MM object and peek at things I probably shouldn't know about. It likes to complain too, so I have to do that after STDERR is wiped away (temporarily). I could have just faked this since I already know what those paths are, but maybe someone who wants to use this has a different set-up. I hate hard-coding things in programs (mostly because Randal slaps me on the wrist whenever I do. If you think he's tough here, try pair programming with him ;)

What a little bundle of ugliness, but it's better than fixing the modules or doing a lot of Test::Harness::Straps work, at least for now. The first goal is working code.

	my( $totals, $failed ) = eval {
		local @INC = @INC;
		local *STDOUT;   # Shut up Test::Harness!
		local *STDERR;   # Yeah, you too!

		my $MM = ExtUtils::MM->new();   # and you!

		unshift @INC, map { File::Spec->rel2abs($_) }
			@{ $MM }{ qw( INST_LIB INST_ARCHLIB ) };

		Test::Harness::_run_all_tests( @test_files )
		};

The report

The report I get is very simple. It's a one line message for each directory it checks. Later, when I convert things to Test::Harness::Straps, I can do an overall report.
/Users/brian/Dev/orn-weblogs ... No tests failed
/Users/brian/Dev/Palm/Magellan/NavCompanion ... 1/1 ( 100.0% ) failed
/Users/brian/Dev/perlbrowser ... Prerequisites not found! Skipping.
/Users/brian/Dev/Pod/LaTeX/TPR ... No tests failed
/Users/brian/Dev/Polyglot ... No tests failed
/Users/brian/Dev/release ... Could not even run tests.
/Users/brian/Dev/scriptdist ... No tests failed

Most things pass all their tests, some have missing prerequisites, and some fail tests. Some things don't have tests. Some things have tests, but I've broken the code so badly I can't run them (like the release project, which is in the middle of a major overhaul). Now I know what I need to do to get everything to the same level.

Some things are missing prereqs because I have a new laptop and I simply haven't installed them. That's easy to fix (unless the prereq is ImageMagick ;).

Again, I'm doing all of this to avoid abandoned code. Most of the time those "abandoned" projects just need the tiniest bit of attention to stay current. I can periodically check all of my projects at once, automatically, and with very little effort.

The program

I wrote this in one pass, and since it's working I haven't messed around with it. Eventually I want to turn this into something where the test harness runs all of the test files in all of the directories and gives one report, but that's going to take some work (and a lot of chdir()s).

Remember what I said about ugly kludginess!

If you want to run it, give it a list of directories to check. Those should be the directories with a Makefile.PL and a t directory. I run it with a command line and save the output to a file. I can then grep the file to list things that I need to look at.

find ~/Dev -name Makefile.PL | perl -pe 's|/Makefile.PL$||' | xargs daily_build > daily.txt
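The grep step is nothing fancy; the patterns just match the report strings shown earlier. Using a stand-in daily.txt built from the sample report above:

```shell
# a sample report file in the same format as the output above
cat > daily.txt <<'EOF'
/Users/brian/Dev/scriptdist ... No tests failed
/Users/brian/Dev/release ... Could not even run tests.
/Users/brian/Dev/Polyglot ... No tests failed
/Users/brian/Dev/Palm/Magellan/NavCompanion ... 1/1 ( 100.0% ) failed
EOF

# everything that needs attention: anything that did not pass cleanly
grep -v 'No tests failed' daily.txt

# just the distributions with failing tests
grep -v 'No tests failed' daily.txt | grep 'failed'
```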

Just the code

#!/usr/bin/perl -sw
use strict;

use ExtUtils::MM;
use File::Spec;
use File::Spec::Functions qw(catfile);
use Test::Harness;
use Test::Manifest;

our( %Missing, $v );   # -v on the command line turns on verbose output

foreach my $arg ( @ARGV ) {
	print "$arg ... ";
	print "\n" if $v;

	do { message( "Could not change to [$arg]: $!" ); next }
		unless chdir $arg;
	do { message( "There is no Makefile.PL! Skipping." ); next }
		unless -e "Makefile.PL";

	my @output = run_makefile_pl();

	if( my @missing = find_missing_prereqs( @output ) ) {
		message( "Prerequisites not found! Skipping." );
		if( $v ) {
			message( "\t\t" . join "\n\t\t", @missing );
			@Missing{ @missing } = (1) x @missing;
			}
		next;
		}

	@output = run_make();

	my @test_files = get_test_files();

	unless( @test_files ) {
		message( "Found no test files! Skipping." );
		next;
		}

	message( "Found tests ==> " . join " ", @test_files, "\n" ) if $v;

	open STDERR, ">> /dev/null";

	message( "Testing ==> " ) if $v;

	my( $totals, $failed ) = test_files( @test_files );

	unless( defined $totals and defined $failed ) {
		message( "Could not even run tests." );
		next;
		}

	if( keys %$failed ) {
		my( $max, $fail, $string ) = ( 0, 0, '' );

		foreach my $key ( keys %$failed ) {
			my $hash = $failed->{$key};

			# don't report more failures than there were tests
			$hash->{failed} = $hash->{max}
				if $hash->{failed} > $hash->{max};

			$string .= sprintf "\t\t%-20s failed %d/%d ( %3d%% )\n",
				$key, @{ $hash }{ qw( failed max percent ) };

			$max  += $hash->{max};
			$fail += $hash->{failed};
			}

		message( sprintf "%d/%d ( %3.1f%% ) failed",
			$fail, $max, $fail / $max * 100 );
		message( $string ) if $v;
		}
	else {
		message( "No tests failed" );
		}
	}

if( $v and keys %Missing ) {
	print "-" x 73, "\n", "Missing modules\n";
	print join "\n\t", sort keys %Missing;
	print "\n\n";
	}

sub message {
	chomp( my $message = shift );
	print "\t" if $v;
	print "$message\n";
	}

# use the test_manifest file if there is one, otherwise take
# everything in t/
sub get_test_files {
	my @files = do {
		if( -e Test::Manifest::manifest_name() ) {
			Test::Manifest::get_t_files();
			}
		else {
			find_all_t_files();
			}
		};
	}

sub find_all_t_files { glob "t/*.t" }

sub run_makefile_pl { `perl Makefile.PL 2>&1` }

sub run_make { `make 2>&1` }

sub find_missing_prereqs {
	my @output = @_;   # all of the Makefile.PL output lines

	my %missing = ();

	if( grep { m/not found/ } @output ) {
		chomp @output;
		%missing = map {
			s/^W.*ite\s+|\s+[\d._]* not found.$//g;
			( $_, 1 );
			} grep { m/not found/ } @output;
		}

	return keys %missing;
	}

sub test_files {
	my @test_files = @_;

	eval {
		local @INC = @INC;
		local *STDOUT;   # Shut up Test::Harness!
		local *STDERR;   # Yeah, you too!

		my $MM = ExtUtils::MM->new();   # and you!

		unshift @INC, map { File::Spec->rel2abs($_) }
			@{ $MM }{ qw( INST_LIB INST_ARCHLIB ) };

		Test::Harness::_run_all_tests( @test_files )
		};
	}
--
brian d foy <bdfoy@cpan.org>

Re: Parallel maintenance on many projects, part II: The Testing
by eyepopslikeamosquito (Bishop) on Sep 26, 2004 at 23:16 UTC
    Any reason you are not using the Test::Harness prove command? For my automated nightly tests, I use a very simple script that runs the prove command against a bunch of directories.

      I tried prove(1) a long time ago. It didn't work right for me, so I didn't look at it again. Andy has probably fixed whatever it was, but even if it worked, it doesn't do much for me. It's probably more useful and valuable to people who haven't developed a testing process that works for them.

      Almost everything prove(1) does I can do with `make test` and some command line magic, since it's basically a shorter way of typing a lot of things. It can run tests in random order, but who's writing tests that depend on each other? (That's what it's supposed to catch, I think ;)

      I want to get the numbers into variables so I can do something with them, such as shove them into a database. With something more fancy, I should only see interesting output (things that need attention) and I should be able to see historical reports. External programs like prove(1) aren't good for that sort of thing.

      --
      brian d foy <bdfoy@cpan.org>
Re: Parallel maintenance on many projects, part II: The Testing
by pg (Canon) on Sep 26, 2004 at 18:40 UTC
    "Now I want to run all the tests in all those projects, and I want to do it every day."

    Just curious: do all those projects get modified or updated daily? If not, maybe only test the ones with CVS check-ins.

    An interesting issue will be project dependency. You might want to test a project if something it depends on has changed, even though the project itself has not. That could start something fun...

    I really like what you are doing.

    Recently I headed the system testing of a project, and much of the testing involved GUI interfaces, so a lot of it had to be done manually. I would like to hear from others what they do in this case.

      Checking only things that you think have changed is a heck of a lot of coding for significantly less benefit. That extra coding introduces even more bugs and more maintenance time, but doesn't add any benefit. Anything I can catch by testing only things that have changed I can catch by testing everything.

      Never assume that the world is a perfect place. If I can test everything every day with as much effort (or even less) than it takes to be clever about it, I'll test everything every day. Computers are designed to do mindless tasks over and over again.

      Unless you are looking to waste time, once you accomplish the task, move on to the next thing on your To Do list. :)

      I like Joel Spolsky's take on this sort of thing: Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it.

      --
      brian d foy <bdfoy@cpan.org>

        I believe the approach you took is right in your case, and I really like the quote at the end of your post.

        However, in general, I think it depends on the size of the project and the amount of effort required to test everything. There are projects that could not possibly be tested in one day, or one week, or even one month. In general, there has to be a way to tailor the test cases. Don't misunderstand me: I am just trying to add some thoughts, not saying anything is wrong in your particular case.