PerlMonks  

Creating a co-operative framework for testing

by gmax (Abbot)
on Oct 23, 2006 at 10:28 UTC ( #579983=perlmeditation )

Hi Monks,

I recently applied for a QA developer position at an open source company and, to my great delight, started working there about one month ago. One of the reasons for my hiring was my active participation in the community: blogging, writing articles, submitting bug reports, answering forum and newsgroup questions. My background as a tester was considered, of course, but the emphasis was on my community links.

Inviting an open source community to Quality Assurance tasks

Thus, one of my tasks is now to organize a framework for cooperation between the Quality Assurance department and the community.

It is not an easy task to tackle. I can see how much an active community can contribute (bug reports, test cases, code patches, usability feedback), but I can also see the clash between a QA department populated by professional testers and the erratic behavior of a large community. Somehow I have to find a way of reconciling these two worlds, and make them work harmoniously toward a common goal.

My plan (yes, in spite of the task being so tough, I do have a plan) is to motivate potential contributors with durable benefits, to get the best possible outcome from this relationship.

During the past twelve months, the company has promoted such cooperation with a contest: win an iPod if you submit many bugs, and/or write articles or blog posts about our products. The results were quite good in the beginning, but they faded away after a few weeks. Some of the prizes are yet to be awarded. Why did this policy fail to provide the expected results? IMO, because it was targeting the wrong quality of a potential contributor: it was appealing to competitiveness, instead of tickling pride and the desire for recognition, which work much better in an open source environment. Above all, it did not address the main aspect of open source success, i.e. mutual benefit.

Why does someone contribute to an open source project? There are many reasons, but the paramount one should be striving for improvement. The main point of open source is being able to modify something that barely works for you into something that fits your needs appropriately. Thus the mutual benefit: when you add a feature or fix a bug in an open source product that affects your work, you make that product better for you, but you are also improving its overall value.

The challenge of testing

Among the activities that make up the quality of a software product, testing is perhaps the most visible one. There are other elements, such as policies for coding, internal code reviews, and several organizational issues that will affect the final quality. But testing is a key element in software development. Thus, my company has a huge regression test suite that does a good job of keeping each build free of bugs.

As everyone knows, testing is never enough. Despite all the efforts from the developers and the QA engineers, there is always a bug that escapes their attention and affects the user. Why? Surely because testing everything is impossible, but also because developers and QA engineers are highly trained people and they approach problems from a different perspective than the final users.

Common users take the product for what they need, and just throw at it the commands that will solve their problem, regardless of any other concern that the developers could take into account. This naive behavior is what uncovers the most appalling bugs.

The challenge, then, is to combine the scientific approach of a QA department with the cleverness of willing contributors, who are able to find bugs that elude most of the professionals.

The lessons of Perl

That's why (finally) I am coming here to ask for advice. My job is not that of a pure Perl developer (although Perl is widely used in our testing infrastructure), so this problem is not related to Perl as a language, but to Perl as a community. I was accustomed to testing before using Perl, but it is in the Perl community that I found the most efficient way of testing.

I can see that the Perl community has a very good testing infrastructure. I see how it is organized, how it works, but the reasons why it is so good escape me.

Seeking advice

Here are the questions:

  • Why is the Perl testing infrastructure so effective?
  • If I wanted to export some of the qualities of Perl testing to a non-Perl product, what should I focus on?
  • What motivates a (QA) contributor?

So, fire away. Any insight on this matter could be valuable.

Thanks in advance.

 _  _ _  _  
(_|| | |(_|><
 _|   

Re: Creating a co-operative framework for testing
by brian_d_foy (Abbot) on Oct 23, 2006 at 11:34 UTC

    * Why is the Perl testing infrastructure so effective?

    Because Schwern made it that way, and it was good. :)

    * If I wanted to export some of the qualities of Perl testing to a non-Perl product, what should I focus on?

    Getting Schwern or Ovid interested in that product. That's most effective when you change the product to be World of Warcraft, I think.

    * What motivates a (QA) contributor?

    If you're asking how to get people to work for free, well, there is no answer. It's different for every person. If you're assuming that you will get work for free, I think you're already off to a bad start. iPods might be interesting, but they aren't special or valuable enough to make most people do that much work. If the product is really useful and people like it enough or think they can't live without it, they'll help with it.

    I contribute to open source because I'm too stupid to realize that for the same pay I could watch TV all day, get through my Netflix list, or maybe read a book. :)


    Update: I wrote this post just before going off to bed, and then lay in bed thinking that it probably is more flippant than I meant it to be. Writers should be given free prescriptions of sleeping pills for this very reason.

    I think things such as Perl's testing culture congeal around a few alpha personalities. There's no scientific reason these things happen, and a lot of it is by accident. I created Test::Pod because I could. It was easy and it was fun. From that, Andy Lester got interested, and eventually created Test::Pod::Coverage, and eventually took over Test::Pod. Both of those ended up as CPANTS metrics. Although it pains me to say it, I think Malcolm Gladwell's The Tipping Point applies here. That's not a prescription for success. It just points out that some things succeed for no other reason than somebody does it and somebody else likes it enough to do it too. For every time that happens, though, many other things don't catch on. I've written plenty of test modules, but I bet most people can't name any of them. Test::Pod is the one that made it.

    Perl's testing culture matured at a particular time. Test had been around for quite a while, although most CPAN contributors seemed to find it just as easy to print "ok $test\n"; as to use a module. Test::Simple did what a lot of people were thinking they should do, but didn't: write an ok function. Schwern actually did it, though, and there were plenty of other people around who were waiting to use it. If he had made it earlier, maybe it would just be sitting there on CPAN collecting dust. Who knows.

    Update 2: For stvn: sure, there are a lot of people involved with testing now, but I credit Schwern with starting the whole thing. All that stuff you mention came later.

    --
    brian d foy <brian@stonehenge.com>
    Subscribe to The Perl Review

      Uhm, I think you are missing a LOT of names on your list there.

      What about petdance and the Phalanx project? And chromatic and his test-first evangelism, which he spreads through http://www.perl.com (Test Code Kata, and several articles on Test::Builder)? They have done a lot for testing in the Perl community as well.

      And then there is nothingmuch and the many people over in #perl6 who contributed to make Test::TAP::Model and Test::TAP::HTMLMatrix. Work which is being expanded upon by several others (sorry for not knowing more specific names) to create things like:

      • The smartlinks between the Perl 6 test suite and the Synopses (a technology which has also been shared with Parrot and IIRC is being used to link to the Parrot PDDs)
      • An automated smoke server, which IIRC already has a spinoff product, whose name I cannot recall at the moment.
      The efforts going on here (IMO) are modernizing and expanding the already excellent Perl testing tools and infrastructure (we wouldn't want to stagnate, would we? ;)

      And then there is AdamK and his work on PITA, which opens up the world of ridiculously large scale testing as well.

      I could go on and on, the perl-qa list is quite active with many regularly contributing members. My point really is that Perl's testing culture is so strong, because Perl's testing community is so strong and very active. New tools are constantly being developed, and existing tools are constantly being improved. Keeping the community active keeps it strong, and new shiny toys are a great way of attracting even more talent.

      UPDATE

      For stvn: sure, there are a lot of people involved with testing now, but I credit Schwern with starting the whole thing. All that stuff you mention came later.
      Yes, I know it came later; this is my point entirely. Schwern was just another Perl hacker in a long line of Perl hackers who have furthered the culture of Perl testing. This is not meant in any way to discount Schwern's contribution to it all, for it was and is very significant in both its scope and timing. However, he built atop the shoulders of those who came before him, and today many others are building atop his shoulders. I believe the Perl testing culture is so strong specifically because it is so active. This activity keeps it fresh and therefore keeps people interested, which in turn spurs interest from others, and the cycle continues.

      -stvn
Re: Creating a co-operative framework for testing
by xdg (Monsignor) on Oct 23, 2006 at 15:23 UTC

    There's a little bit of an XY Problem here. Your task is to "organize a framework for cooperation between the Quality Assurance department and the community". But your questions are about the Perl testing infrastructure, which has little to do, directly, with community involvement.

    In my view, the Perl testing infrastructure is effective for several reasons:

    • Running tests was made part of the module installation cycle: make; make test; make install

    • Writing tests was made easy, through a good framework (Test::Simple, Test::More and friends) and good instructions (e.g. Test::Tutorial).

    • Lots of evangelism and infrastructure support (as noted in other responses).

    However, this is all about module authors writing tests, not about community feedback into the testing process. I count myself lucky to have bug reports that include a patch -- and I almost never get a bug report that includes a test file demonstrating the bug. I wrote "The value of test-driven bug reporting" to encourage more of it.

    So, back to your task -- coordinating QA and community -- I think you need to look beyond the Perl testing infrastructure. You need to look at collaborative projects -- perhaps in Perl, perhaps elsewhere -- and see what works and what doesn't.

    For example, there was/is the Phalanx project. Here in NY, the local Perl group collaborated to work on two Phalanx modules, and had a devil of a time getting their work incorporated back into the modules by the author. That's led to a general disinterest in repeating that process. People want some sort of sense of feedback and accomplishment from their work.

    I think a better example is Pugs -- audreyt's Perl 6 interpreter. Commit bits are handed out freely. And there's a real emphasis on automated testing with something like 18,000 tests (if I recall correctly). Look at the Smoke Reports -- in particular, drill into the details and look at some of the graphical test output. This may be a model to emulate.

    For your task, I offer these suggestions:

    • Put your test suite into a repository and give out commit bits liberally. Make it easy for people to contribute tests.

    • Create automated smoke tests against the repository so people can quickly see that their tests are included and what the results are.

    • Write good, easy documentation to help people get started writing tests in your framework.

    • If using Perl, consider tools like Perl::Critic to help identify contributed tests with poor style that your QA department should clean up. Don't use it as a gateway to make contributions harder, just use it to QA your QA.

    • Track and publicize contributions: aim for reputation-based reward rather than monetary or other awards. (Look at the XP system on Perl Monks for example.)

    I hope this sparks some useful thinking. Best of luck.

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Re: Creating a co-operative framework for testing
by chromatic (Archbishop) on Oct 23, 2006 at 16:17 UTC

    One of the big changes in Perl testing came about when the pumpkings adopted a policy of applying only patches that included tests. (This of course was only possible when people started taking test failures seriously enough to fix them.)

Re: Creating a co-operative framework for testing
by adrianh (Chancellor) on Oct 23, 2006 at 17:22 UTC
    Why is the Perl testing infrastructure so effective?

    One reason that springs to mind is that it's not monolithic, and it doesn't force you to do things. We have an infrastructure that you can poke into and build on top of without having to follow any particular process.

    While the parallels are not exact you might find The Zen of Comprehensive Archive Networks an interesting read.

Re: Creating a co-operative framework for testing
by zby (Vicar) on Oct 24, 2006 at 09:35 UTC
Re: Creating a co-operative framework for testing
by ruoso (Curate) on Oct 26, 2006 at 09:46 UTC

    I know this wasn't exactly your question, but I think this will be of help...

    One of the things that makes the Perl testing framework really nice is TAP: a test can be anything that just prints "ok" or "not ok" for each test (Test::Harness actually expects that to be a Perl program, but this is easily work-aroundable (does that word exist?)).

    So, here is what I use to work with Test-Driven-Development in whatever-language:

    #!/bin/sh
    make clean all check && perl -MTest::Harness -e '
        @tests = <./test/t*>;
        $Test::Harness::Switches = qw();
        *Test::Harness::Straps::_command_line = sub { return $_[1] };
        runtests(@tests)'
    daniel

      BTW, for C code, I also use this _test.h file... which is very helpful to me...

      #ifndef LOADED_TEST_H
      #define LOADED_TEST_H
      #include <stdio.h>
      #include <string.h>   /* needed for strcmp in is_str */
      #include <unistd.h>
      #define plan(numtests) printf("1..%i\n",(numtests));
      #define ok(bool,testname) printf("%s - %s\n",(bool)?"ok":"not ok",testname);
      #define pass(testname) printf("ok - %s\n",testname);
      #define fail(testname) printf("not ok - %s\n",testname);
      #define skip(testname,reason) printf("ok - %s # Skipped: %s\n",testname,reason);
      #define is_int(got,expected,testname) printf("%s - %s (expected:%i, got:%i)\n",((got)==(expected))?"ok":"not ok",testname,expected,got);
      #define is_short(got,expected,testname) printf("%s - %s (expected:%hu, got:%hu)\n",((got)==(expected))?"ok":"not ok",testname,expected,got);
      #define is_flt(got,expected,testname) printf("%s - %s (expected:%f, got:%f)\n",((got)==(expected))?"ok":"not ok",testname,expected,got);
      #define is_str(got,expected,testname) printf("%s - %s (expected:%s, got:%s)\n",(strcmp((got),(expected))==0)?"ok":"not ok",testname,expected,got);
      #endif
      daniel
Re: Creating a co-operative framework for testing
by wjw (Deacon) on Oct 27, 2006 at 16:32 UTC

    A couple of thoughts here... I have in the past donated time to community projects, (not always code related) and at some point asked myself "why?" The following is the best I could come up with

    • I wish to put my time/effort into something I believe in (i.e. something larger than myself)
    • I wish to be respected by those I respect
    It really boils down to that. I would have a hard time "believing" in something that I might win a prize for doing. I do something for tangible gain, or I do something for intangible gain. I get a sense of "cheesy" when the two are mixed, as they seem to be in what you are attempting. PLEASE! I am not criticizing what you are attempting in any way. Just telling you about my gut reaction to it. Hope that might be useful.

    ...the majority is always wrong, and always the last to know about it...
Re: Creating a co-operative framework for testing
by Anonymous Monk on Oct 27, 2006 at 20:02 UTC
    I addressed this a bit in my YAPC talk this year, "The Reluctant Web Tester". I worked[1] at Yahoo! Global Search at the time; most of Yahoo!'s web development is in PHP (Rasmus himself works for us), and there's a lot of time pressure to get features in and get bugs fixed fast. Developers tend to not have a lot of spare cycles.

    PHP itself has a pretty decent testing package, but it's more like Test::Class than Test::More: you create a class to do the testing, lots of setup, etc. So we didn't get much in the way of buy-in.

    My job was to build tools to get the programmers doing tests on the search platform, across a large number of international sites. The key was Keeping It Trivial To Do. I put together a Perl application called simple_scan (available as App::SimpleScan on CPAN) that took a URL, a regex, and a 'Y' or 'N' as its input; it then generated a Test::More-based Perl program that actually did the testing.

    This was the first bar: no writing programs to write a test. The second bar was, for instance, running 20 queries against 20+ sites. Obviously cut-n-pasting 20 identical tests was unappealing, and I wanted to stay away from the idea that you were writing a program. So I came up with the idea of doing combinatorial substitution: define a variable that has the servers you want to test, and another one which has the queries you want to run, and simple_scan does all the work of generating the unique combinations. So 3 lines of input can now generate 400+ tests, all of which are monitored via the standard TAP tools.

    The lesson of all this is to make sure that you provide testing in a way that is compatible with the goals of the programmer on the ground: if a programmer finds it really easy to write and run tests, he or she will write them. If there's any friction between "I should test this" and "test is running", you'll find that there are no tests.

    The other lesson is that you don't have to make them write Perl to take advantage of the Perl testing tools.

    [1] I'm working on development tools now; simple_scan's worked out so well that I ran out of things to do for Search!

      Oop, should have logged in first. Oh well.

Node Type: perlmeditation [id://579983]
Approved by Corion
Front-paged by dbwiz