http://www.perlmonks.org?node_id=11133352


in reply to Let's try for a better CPAN experience

Do you really need to run a gazillion slow tests on every single computer your dist is installed? ... Don't run extensive "timeout" tests unless you absolutely have to ... Make your tests as short and fast as possible ... Make more of your tests Author-only tests ...
Thanks for taking the time to report your real world experiences in this area. Very much appreciated.

Though I agree with Your Mother's sentiment (namely, I refuse to criticize devs willing to do things I am not), I felt very sad about the reception my Perl CPAN test metadata proposal received back in 2010. Actually, I still feel my suggested "declarative trumps imperative" approach to CPAN test metadata is the best general approach to this tricky problem ... though it appears to have little support from the people actually doing the work.

Re^2: Let's try for a better CPAN experience
by eyepopslikeamosquito (Archbishop) on Jun 02, 2021 at 07:53 UTC

    In case you're interested: though I was unable to sell my test metadata ideas to the perl-qa folks, I was allowed to implement a simple test metadata scheme at work, mostly for C++ but also for Perl and other languages. We used identical test metadata names across all languages and all types of tests (not just unit tests) ... and integrated them with our build and release tools.

    In practice, the most popular and useful metadata was Smoke, with a value of Smoke=1 indicating a Smoke Test. Smoke tests need to be robust and fast because if they fail, the change is automatically rejected by our build tools.
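
    By way of illustration, here's a toy Perl sketch of what that kind of declarative metadata might look like. The "# META:" comment convention, the flag names, and My::Module are all invented for this post, not the actual scheme we used; a tiny driver then runs only the tests that declare Smoke=1.

        # t/basic_load.t (hypothetical): a fast, robust smoke test that
        # declares its metadata in a comment the build tools can grep for.
        # META: Smoke=1 Timeout=30
        use strict;
        use warnings;
        use Test::More tests => 1;
        use_ok('My::Module');    # the change is rejected if even this fails

        # run_smoke.pl (hypothetical): run only the tests declaring Smoke=1.
        use strict;
        use warnings;
        use TAP::Harness;

        my @smoke;
        for my $t (glob 't/*.t') {
            open my $fh, '<', $t or die "open $t: $!";
            while (<$fh>) {
                if (/^#\s*META:.*\bSmoke=1\b/) { push @smoke, $t; last }
            }
        }
        TAP::Harness->new({ verbosity => 0 })->runtests(@smoke);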

    We also learnt that it's vital to quarantine intermittently failing tests quickly and to fix them quickly ... only returning them to the main build once they're reliable again. If you don't do that, people start ignoring test failures! You need a mindset of zero tolerance for test failures, aka No Broken Windows.
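
    Mechanically, quarantining can be as simple as another metadata flag that the main build skips; the Quarantine=1 name and the META comment convention below are again invented for illustration.

        # Exclude quarantined tests from the main build (toy sketch).
        use strict;
        use warnings;

        sub is_quarantined {
            my ($test_file) = @_;
            open my $fh, '<', $test_file or die "open $test_file: $!";
            while (<$fh>) {
                return 1 if /^#\s*META:.*\bQuarantine=1\b/;
            }
            return 0;
        }

        my @main_build = grep { !is_quarantined($_) } glob 't/*.t';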

    An interesting metadata extension is to keep metrics on the test suite itself. Is a test providing "value"? How often does it fail validly? How often does it fail spuriously? How long does it take to run? Who writes the "flakiest" tests? ;-)
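
    A crude way to start gathering such metrics is to log the result and duration of every test run and mine the log later; the file name and layout here are just for illustration.

        # Append one line per test run: timestamp, test, result, seconds.
        use strict;
        use warnings;
        use Time::HiRes qw(time);

        open my $log, '>>', 'test_metrics.csv' or die "open: $!";
        for my $t (glob 't/*.t') {
            my $start  = time;
            my $passed = system($^X, $t) == 0;    # exit status 0 means pass
            printf {$log} "%s,%s,%s,%.3f\n",
                scalar(localtime), $t, $passed ? 'PASS' : 'FAIL', time - $start;
        }
        close $log;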

    See also: Effective Automated Testing