Re: What goes in a test suite?

by BrowserUk (Patriarch)
on Aug 11, 2002 at 03:53 UTC


in reply to What goes in a test suite?

Most effort in too many test suites goes into ensuring that the code does what it is thought it should do, especially when it is given the correct data.

The two most often-omitted testing strategies are:

  1. What does it do when given plausible but wrong data?

    The classic example of this, the infamous buffer overflow, seemingly continues to plague internet applications. (A sketch of this kind of test follows after this list.)

  2. How clearly defined are the design criteria to start with?

    Most older monks will have seen the cartoon strip about the child's swing. It reads something like:

    • How the customer envisaged it
    • How the pre sales analyst captured it
    • How the salesman sold it
    • How the systems analyst scoped it
    • How the planner project-planned it
    • How the budgeting dept costed it
    • How the programmer coded it
    • How the tester tested it
    • How it was finally produced

    That's only a paraphrase of the original, and it is probably funnier with the pictures, but it serves my purpose in two ways.

    First, the joke itself won't detract too much from the argument. Second, it emphasises my real point:

    Everyone sees, interprets, reads & prioritises according to their own particular bias.

    As a result, the transition from paper to product often leaves many, if not most, of those people disappointed with the result. Many believe the final arbiter in this process is the programmer; after all, he's the one who actually produces the product. In reality, he is constrained from above and below in that hierarchy. If he takes too long producing it, management will call him over time, budgeting will call him over budget, the salesman will call him pedantic and the customer won't pay. If he varies too far from the spec, the analysts will rein him in through pre-sales, and the salesman may try to intervene on his behalf.

    However, if the tester says it works, the salesmen will believe him, management will rubber-stamp it, the analysts are already on their next project and budgeting will raise the invoice. The tester's spec wins out. Of course, the tester is often the programmer, in which case all bets are off, unless the programmer also fills most or all of the other job titles as well. In that case, the specs are, or ought to be, the same. The only fly in that ointment is the customer.

    It is therefore vitally important that the customer likes what the tester is testing for. That means that before any of the other roles get to view or act upon the spec, the tester should refer it, as the pre-sales analyst wrote it up, back to the customer and ensure that they agree. The mechanism for ensuring this agreement should be the test plan.

    The tester's greatest skills come not from devising test data to exercise boundary conditions, nor from his analysis of the algorithms used to verify outputs (though both are important). His greatest skill is in devising written and verbal wordings of the verification criteria for meeting the spec. By doing this in simple, concise and precise language, he can effectively 'tie the spec down' into a near-immutable object. The benefit is that while he reduces, as far as is practicable, the room for ambiguity, he also enables simple agreement on the specified variations that often arise out of the waterfall process and, less frequently, from bottom-up driven changes.
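To make point 1 concrete, and to show what it looks like when the test names double as plainly worded verification criteria, here is a minimal Test::More sketch. It is not from the original post: parse_age() and the spec behind it are invented purely for illustration.

<code>
use strict;
use warnings;
use Test::More tests => 5;

# Hypothetical routine under test: returns the age as an integer,
# or undef for anything it cannot accept.
sub parse_age {
    my ($input) = @_;
    return undef unless defined $input && $input =~ /^\d+$/;
    return undef if $input > 150;    # reject implausibly large values
    return 0 + $input;
}

# The happy path: the case most suites already cover.
is( parse_age('42'), 42, 'a plain integer age is accepted' );

# Plausible but wrong data: the oft-omitted cases.
is( parse_age('forty-two'), undef, 'spelled-out numbers are rejected' );
is( parse_age('-3'),        undef, 'negative ages are rejected' );
is( parse_age('1e2'),       undef, 'scientific notation is rejected, not silently converted' );
is( parse_age('999'),       undef, 'implausibly large ages are rejected' );
</code>

Run under prove, a failure names the exact criterion that was broken, in wording the customer has already seen and agreed to.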

To sum up, let your tester be an independent person, with independent authority and an equal voice. Don't sideline either the role or the task into a ghetto of afterthought and slack space. If you do, you open the door to the salesman overselling, the analyst over-analysing, the designer over-designing, the planner under-planning, the coster under-costing, the programmer badly coding, and the underwhelmed customer going elsewhere at the trot.

---

Throughout this, I have referred to the tester, and possibly other job functions, as 'he'. This is for simplicity only. In my experience, female testers often outshine their male counterparts in several ways. The two most common of these are, again in my experience, that they are generally better at sticking to the task at hand rather than drifting off in new and interesting, but unspecified, directions (a particular failing of mine), and that they seem better at handling changes, interruptions, restarts and random variations than most guys, possibly because they are less prone to righteous indignation and anger than we are. Suffice it to say that my choice of gender nouns was not by way of bias.
