Most effort in too many test suites goes into ensuring that the code does what it is thought it should do, especially when it is given the correct data.
The two most often omitted testing strategies are:
- What does it do when given plausible, wrong data?
The classic example of this, the infamous buffer overflow condition, seemingly continues to plague internet applications.
- How clearly defined are the design criteria to start with?
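The first point above is worth making concrete. As a minimal sketch (the function, field names and limits here are my own illustrative assumptions, not anything from a real application), consider a routine that accepts an age field from a form. The happy path is what most suites test; the plausible-but-wrong inputs are what usually go untested:

```python
def parse_age(field: str) -> int:
    """Parse an age field from a form, rejecting plausible but wrong input."""
    if len(field) > 3:                  # no human age needs four digits;
        raise ValueError("field too long")  # also blocks oversized input outright
    if not field.isdigit():
        raise ValueError("not a number")
    age = int(field)
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# What most suites check: correct data, correct answer.
assert parse_age("42") == 42

# What is usually omitted: plausible, wrong data. Each of these should be
# rejected cleanly, not silently accepted or allowed to corrupt anything.
for bad in ["-1", "999", "42; DROP TABLE users", "A" * 10_000]:
    try:
        parse_age(bad)
    except ValueError:
        pass                            # rejected cleanly: the desired outcome
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

The length check first is the point of the exercise: in a language without bounds checking, the equivalent omission is exactly the buffer overflow mentioned above.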
Most older monks will have seen the cartoon strip about the child's swing. It reads something like:
- How the customer envisaged it
- How the pre sales analyst captured it
- How the salesman sold it
- How the systems analyst scoped it
- How the planner project-planned it
- How the budgeting dept costed it
- How the programmer coded it
- How the tester tested it
- How it was finally produced
That's only a paraphrase of the original, and it is probably funnier with the pictures, but it serves my purpose in two ways.
First, the joke itself won't detract too much. Second, it emphasises my real point.
Everyone sees, interprets, reads & prioritises according to their own particular bias.
As a result, the transition from paper to product often leaves many, if not most, of those people disappointed with the results. Many believe the final arbiter in this process is the programmer; after all, he's the one who actually produces the product. In reality, he is constrained from above and below in that hierarchy. If he takes too long to produce it, management will call him over time, budgeting will call him over budget, the salesman will call him pedantic, and the customer won't pay. If he varies too far from the spec, the analysts will rein him in through pre-sales, and the salesman may try to intervene on his behalf.
However, if the tester says it works, the salesmen will believe him, management will rubber-stamp it, the analysts are on to their next project, and budgeting will raise the invoice. The tester's spec wins out. Of course, the tester is often the programmer, in which case all bets are off, unless the programmer also fills most or all of the other job titles as well. In that case, the specs are, or ought to be, the same. The only fly in that ointment is the customer.
It is therefore vitally important that the customer likes what the tester is testing for. That means that before any of the other roles get to view or act upon the spec, the tester should refer it, as the pre-sales analyst wrote it up, back to the customer and ensure that they agree. The mechanism for ensuring this agreement should be the test plan.
The tester's greatest skill comes not from devising test data to exercise boundary conditions, nor from his analysis of the algorithms used to verify outputs--though both are important. His greatest skill is in devising written and verbal wordings of the verification criteria for meeting the spec. By doing this in simple, concise and precise language, he can effectively 'tie the spec down' into a near-immutable object. The benefit is that while he reduces, as far as is practicable, the room for ambiguity, he also enables simple agreement of the specified variations that often arise out of the waterfall process and even, though less frequently, out of bottom-up-driven changes.
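One way to 'tie the spec down' in practice is to make each verification criterion a short, named, executable check, so the wording the customer agreed to maps one-to-one onto code. The feature (a report generator) and the two criteria below are purely illustrative assumptions of mine, not anything from the post:

```python
# Hypothetical acceptance criteria for a report generator. Each key is the
# exact wording agreed with the customer; each value tests precisely that.
ACCEPTANCE_CRITERIA = {
    "every report line is at most 80 characters":
        lambda report: all(len(line) <= 80 for line in report.splitlines()),
    "the report ends with a totals line":
        lambda report: report.rstrip().splitlines()[-1].startswith("TOTAL"),
}

def verify(report: str) -> list[str]:
    """Return the criteria a report fails to meet (empty list means it passes)."""
    return [name for name, check in ACCEPTANCE_CRITERIA.items()
            if not check(report)]

sample = "item A  3\nitem B  4\nTOTAL    7\n"
print(verify(sample))   # an empty list: the sample meets both criteria
```

Because the criteria are plain sentences, any of the other roles--or the customer--can read and agree to them; because they are also functions, a specified variation is just an agreed edit to one entry.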
To sum up, let your tester be an independent person, with independent authority and an equal voice. Don't sideline either the role or the task into a ghetto of afterthought and slack space. If you do, you open the door to the salesman overselling, the analyst overanalysing, the designer over-designing, the planner under-planning, the coster under-costing, the programmer badly coding, and the underwhelmed customer going elsewhere at the trot.
Throughout this, I have referred to the tester, and possibly the other job functions, as 'he'. This is for simplicity only. In my experience, female testers often outshine their male counterparts in several ways. The two most common of these, again 'in my experience', are that they are generally better at sticking to the task at hand and not drifting off in new and interesting, but unspecified, directions (a particular failing of mine), and that they seem better at handling changes, interruptions, restarts and random variations than most guys. Possibly because they are less prone to righteous indignation and anger than us? Suffice it to say that my choice of gender nouns was not by way of bias.