I've just been asked to advise management on organisational quality issues. There seems to be general agreement that the test labour content across most development projects is around 50% (when people do honest accounting). The central organisational issue, then, is who does that 50%, and how do we manage it?

Management are especially eager to learn good answers to the following questions:

  1. What is the best tester to developer ratio?
  2. Who should QA report to?
  3. How skilled should "testers" be? And how do you hire them, keep them, and motivate them?

As detailed in the References section below, I've done a first cut at googling for resources that might help me answer these questions. If you know of other good resources, please let us know.

Here's what we currently do:

  1. We have roughly one tester per ten developers.
  2. QA is part of and reports to Development. Most development project teams have a QA resource from day one in the project.
  3. Most of our "testers" have decent programming skills: they ensure requirements and designs are testable, design test strategies, write test plans, write test automation harnesses, and so on.

How do you do it?



Re: Quality, Developers and Testers: Organisational Issues
by kvale (Monsignor) on Jun 11, 2005 at 07:37 UTC
    I would imagine that the ideal ratio of testers to developers would depend on the development strategy used.

    In an XP strategy that emphasizes test-driven development, developers write tests first and all developers are in effect testers as well, at least at the unit level. In this process, relatively few pure testers are needed to test integration of components and systems.
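    A minimal sketch of that test-first rhythm in Perl, using Test::More: the test cases below would be written before the function exists, and the code then written to make them pass. (`slugify` is a made-up example function, not anything from this thread.)

```perl
use strict;
use warnings;
use Test::More tests => 2;

# In test-driven development the is() cases below come first;
# slugify() is then written to make them pass.
sub slugify {
    my ($title) = @_;
    $title = lc $title;
    $title =~ s/[^a-z0-9]+/-/g;   # collapse punctuation and spaces to dashes
    $title =~ s/^-|-$//g;         # trim leading/trailing dashes
    return $title;
}

is( slugify('Hello, World!'),  'hello-world', 'punctuation collapsed' );
is( slugify('  Perl  Monks '), 'perl-monks',  'whitespace trimmed' );
```

    With this discipline every developer is, as kvale says, also a unit-level tester; the dedicated testers can concentrate on integration and system behaviour.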

    By contrast, a traditional software development process does the requirements analysis, architecture development and coding up front, and testing doesn't ramp up until significant portions of the system are coded. This process needs more testers, who examine both low- and high-level behaviour. Because testing comes in at a later phase, the proportion of testers needed may also change over time.


Re: Quality, Developers and Testers: Organisational Issues
by dragonchild (Archbishop) on Jun 11, 2005 at 18:26 UTC
    Effective use of testers (and any other development resource) is only possible when you have a clear-cut process. The process is the important thing; without it, you have cowboys with keyboards doing whatever the last manager who spoke to them said to do.

    There are many, many ways to go about setting up a good testing strategy. These are the things I feel are common to them all:

    • A tester has veto over a developer. If the tester says it fails, no-one can override him/her. Period.
    • A tester has veto over a designer. If the tester says the design is untestable, no-one can override him/her. Period.
    • Developers should write unit tests. Testers should write system tests.
    • As many tests as possible should be automated. Ideally, the entire application is smoke-tested within 15 minutes of any checkin, with the whole team being notified of any test failures.
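
    The 15-minute smoke test in the last point can be driven by something quite small. Here is a hedged sketch using TAP::Harness to run every test under a directory; `run_smoke` is a hypothetical helper, and hooking it into your VCS post-checkin event and notifying the team are left as placeholders.

```perl
use strict;
use warnings;
use TAP::Harness;

# run_smoke: run every TAP test under $dir, return 1 on success, 0 on failure.
# In a real setup a VCS post-commit hook would call this and notify the
# whole team (mail, chat, ...) whenever it returns 0.
sub run_smoke {
    my ($dir) = @_;
    my @tests = glob "$dir/*.t";
    return 1 unless @tests;    # nothing to run counts as a pass here
    my $harness = TAP::Harness->new({ verbosity => -3 });  # -3 = silent
    my $agg     = $harness->runtests(@tests);
    return $agg->has_errors ? 0 : 1;
}
```

    The point is less the code than the policy: the suite runs automatically on every checkin, and a red result is everyone's problem within minutes, not the testers' problem next week.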

    Beyond that ... I prefer a test-first, code-later approach, but that's because of where and how I work. Other companies prefer to have a design-code-test-recode-retest methodology, and it must work for them because they're still in business. *shrugs* YMMV.

Re: Quality, Developers and Testers: Organisational Issues
by jplindstrom (Monsignor) on Jun 11, 2005 at 11:29 UTC
    It all depends on your situation: how complex the software setup is, how the development is being done, whether the software is operated in-house, operated by an external party, or delivered to end-users, etc. Do the testers work closely with development? Are there separate test systems that need to be taken care of? Is there custom hardware involved?

    But in my experience one tester to ten developers seems a bit low.

    Something that has worked well for us is to have the testers and developers at the same level, both reporting to the project manager. Each iteration, the software is stabilised and programmer-tested before an internal release to the testers. They then deploy the software in their own environment.

    To get this to work, it's important to practice the release process during the early iterations (short iterations with almost no features) to get all the not-so-obvious problems out of the way. It also helps the testers to get familiar with the basics of the software, and to identify where some things may need more operations documentation.


      I completely agree. It really depends on the situation, the complexity of the application, and the deployment scenario. I don't think there is any formula that can define the ratio. What matters most is what is tested, what is not tested, and proper communication about both. When Microsoft can release an OS with 55000 known bugs, that says a lot about management's perspective, which in turn depends on market conditions and so on. Ideally there should be a separation between the development and test teams, so that neither influences the other. And, as said above, practising the release process during the early iterations catches the maximum number of bugs in the early phases; regular code reviews can also help a lot.