Testing 1...2...3...

by raybies (Chaplain)
on Dec 07, 2010 at 20:55 UTC ( [id://875884] )

Is it just my luck to work with so many "choice" programmers, or is it difficult for programmers to internalize the principle "If it hasn't been tested, it doesn't work"?

Replies are listed 'Best First'.
Re: Testing 1...2...3...
by mr_mischief (Monsignor) on Dec 07, 2010 at 21:14 UTC

    Be careful with absolute statements. If you haven't tested it, how do you know it doesn't work? I'd say you know that no more than they know that it does, and the hyperbole doesn't help them understand. It just sounds like you're willing to either exaggerate (which you are) or to ignore the simple rules of logic (in which case testing wouldn't do any good anyway).

    I think the usual ways of getting around all this "sort of meets spec" thinking and getting people to write tests miss the obvious part of the problem. If you really want the software tested and you really want it to meet specs, then there's a solution staring you in the face. Spec the tests.

    If you specify what the tests are, and specify that the software must meet the tests, then the software isn't fit for spec until it passes the tests. You may be a test first person, a test last person, a write then test per feature person, or whatever type of test-minded person you care to be. If the spec is written in terms of what tests specifically must pass and how the tests will be written, then there's no way to claim the software is to spec until the tests pass.
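
    A minimal sketch of what "spec the tests" might look like in Perl with Test::More. (The module My::Report and its methods are made up purely for illustration; the point is that each test is a numbered clause of the spec.)

        # spec.t -- hypothetical acceptance spec expressed as Test::More tests
        use strict;
        use warnings;
        use Test::More tests => 3;

        use_ok('My::Report');                          # SPEC 1: module loads cleanly

        my $r = My::Report->new(format => 'csv');
        is($r->header, 'id,name,total',                # SPEC 2: agreed column order
           'CSV header matches the spec');

        is($r->render([]), '',                         # SPEC 3: empty input, empty output
           'empty data set produces no rows');

    The software isn't "to spec" until prove spec.t passes; the spec and the acceptance criteria are the same document.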

      And, very importantly, spec tests should not be written by the same people who implement the product/feature. We all feel great about the myriad of tests that are on CPAN, and we nearly pee in our pants anytime someone mentions "unit tests". But tests written by the authors feel like the police policing themselves, and banks judging for themselves whether they behaved ethically.
Re: Testing 1...2...3...
by JavaFan (Canon) on Dec 07, 2010 at 22:11 UTC
    "If it hasn't been tested, it doesn't work."
    What kind of tests are you referring to? Unit tests? System tests? Acceptance tests? Usability tests? Stress tests? ADA compliance tests? Colour-blindness tests? Security tests? Cross-platform tests? Data-consistency tests? Data-validity tests? Code reviews? Internal auditing? External auditing? Backwards compatibility tests? Switch-over tests? Restore-from-backup tests? Disaster recovery tests? Disk-full tests? "What-happens-if-I-yank-a-cable" tests? "Let's-change-the-password-halfway-through-the-procedure" tests? "Send-it-random-data-for-24-hours" tests? "On-call-phone" tests? Power-consumption tests? Power-failure tests? Fire drills? Climate tests? Copyright violation tests? Patent violation tests?

    What's the last project you did where you performed at least half of the tests I listed? (Except for the last three, I've done all the tests I listed, but never on the same project.)

      Heh. Great list. There are other tests too, like Protocol/Standards Certification tests, Code-Coverage tests, Timing-Delay/Jitter tests, Quality/Customer/Play Tests and Performance Testing.

      There are also various models of testing, including Golden-model Comparison testing, Blackbox Testing, Whitebox Testing, Directed Tests, Corner-case Testing, Hardware/Software Emulator Tests and Random Stimulus (you mentioned this, though why limit it to 24 hours? ;)).

      And then of course the tests that test the tests that test the tests that test the tests... :)

      Personally, I came to programming as a vehicle for testing hardware models (which were actually software models of what would be made into hardware... heh). And the majority of the tests you mention above were necessary to ensure that the cost of developing a chip was kept to a minimum. In that model, it pays to test things exhaustively because the cost of additional tapeouts (essentially bugfixes on a chip) is prohibitively high (around a million bucks a turn at the time). In that development paradigm, the price of a bug was so high that even the dopiest management saw the value in frontloading the test teams early--and we divided those who wrote the tests from those who did the development. On the best-tested projects we developed test suites in parallel with the developers from day one of the design.

      Perhaps I'm paranoid, but working with my latest developers has involved nothing close to that level of testing discipline. And "amazingly enough" the software is remarkably brittle.

      One nice thing about Perl is that it lets me throw a lot of data at the test object quickly, without getting in its way.
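
      For example (the run_dut command and record format below are invented for illustration), a random-stimulus driver that floods a device-under-test wrapper for a fixed stretch of wall-clock time can be just a few lines of Perl:

          # Hypothetical random-stimulus driver for a DUT wrapper.
          use strict;
          use warnings;

          my $end = time + 60 * 60;     # one hour of stimulus
          srand(42);                    # fixed seed, so a failure can be replayed

          open my $dut, '|-', './run_dut --stdin'
              or die "can't start DUT: $!";
          while (time < $end) {
              my $addr = int rand 0xFFFF;
              my $data = int rand 0xFF;
              printf {$dut} "WRITE %04X %02X\n", $addr, $data;
          }
          close $dut or die "DUT exited abnormally: $?";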

Re: Testing 1...2...3...
by ambrus (Abbot) on Dec 08, 2010 at 14:44 UTC

    I know my code doesn't work. I don't need tests to prove that. Writing tests would be a waste of time, for even if they did say that the code works, that would just indicate that the test is buggy.

Re: Testing 1...2...3...
by PeterPeiGuo (Hermit) on Dec 08, 2010 at 03:30 UTC

    Talking about the philosophical aspect of this, the purpose of testing is not to prove that the application works, but rather to find the bugs so that the application can be improved. You can improve an application over time, but you can never make it 100% correct (there is no way to prove or measure that anyway). Expect a certain amount of issues and don't be surprised.

    Talking about the psychological or behavioral aspect of this matter, when testers find bugs, they should be proud that they helped, and view it as a success of the teamwork between testers and programmers. For testers to blame programmers for not programming well, or for programmers to blame testers for not testing well, is unhealthy behavior, and both are equally unhealthy.

    Peter (Guo) Pei

Re: Testing 1...2...3...
by sundialsvc4 (Abbot) on Dec 08, 2010 at 14:19 UTC

    I was programming for a quarter-century before I actually sat down and built acceptance tests while I was writing code. The experience really startled me: it took a lot more time, and it took a lot less time.

    I admit it: I hate to test my work. “If it compiles clean, I’m done.” I still have to force myself to do more.

    There is a tradeoff to be struck. There are no absolutes. You cannot test everything, nor do you necessarily have to. What seems to matter the most are the low-level primitives ... the stuff upon which everything else in the program’s whole world depends. It is easy to redo a piece of sheetrock, but if the house has foundation problems it may as well be torn down.

    But, yes, it is a mark of a good, healthy program-development culture that folks do test as they build. (For one thing, it suggests that they have given some thought to what they are actually doing, or going to do.) What CPAN routinely does is a good example.

      I've developed a similar take on test coverage. We test "library" code heavily with Test::More (mainly integration tests, some unit tests). The utilities and apps we write that call them are tested much more casually. We're a small team, and we find the balance to be reasonable for our environment.

      We have been disciplined about writing first the POD, then the tests, then the code. I've had the luxury of a fairly inexperienced guy on the team, so I haven't had to fight years of experience, at least with him... The poor guy thinks this is normal.

      I've said it before, but I have found the single biggest payoff (even more than writing tests) has been a disciplined approach to consistent POD. Consistent testing has been probably the second most significant gain.
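
      As a rough sketch of that order of operations (the module Acme::Trim and its behaviour are invented for the example): the POD describes the contract, the test file is written from the POD, and only then does the sub get filled in.

          # lib/Acme/Trim.pm -- POD written first, describing the contract
          =head1 NAME

          Acme::Trim - strip leading and trailing whitespace

          =head1 SYNOPSIS

              my $clean = Acme::Trim::trim("  hello  ");   # "hello"

          =cut

          package Acme::Trim;
          use strict;
          use warnings;

          sub trim {
              my ($s) = @_;
              $s =~ s/^\s+|\s+$//g;
              return $s;
          }

          1;

          # t/trim.t -- written from the POD, before the implementation
          use strict;
          use warnings;
          use Test::More tests => 3;
          use Acme::Trim;

          is(Acme::Trim::trim('  hello  '), 'hello', 'strips both ends');
          is(Acme::Trim::trim('hello'),     'hello', 'leaves clean input alone');
          is(Acme::Trim::trim("\t\n"),      '',      'all-whitespace becomes empty');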

Re: Testing 1...2...3...
by Anonymous Monk on Dec 07, 2010 at 21:18 UTC

    It could be worse... they could try it out, but then ignore the horrid mess of a result and sub/commit it anyway.

    Is morale so low that your programmers have no pride in their work, and can't be bothered to care about quality?

      That's a great question to ask. Perhaps that's a key in motivating good test/coding: Identify the areas in which developers can take pride in their work, and then promote those areas.

      In my particular case, I think it's because the originally developed code (which is C-wrapped Fortran) has been around for decades now, and it's a massive, strung-together project (and it uses Motif... bleh) that is required to maintain backwards compatibility with some legacy hardware that's been around since WWII. So what's there to take pride in, if you're the third-generation developer on it?

      (Well, that's the attitude... I suppose I should attack that attitude, rather than just my original point of attack, which is configuration management and test-automation for maintainability's sake.)

        You said --
        ... required to maintain backwards compatibility with some legacy hardware that's been around since WWII. So what's there to take pride in, if you're the third generation developer on it?
        Well, the Old Folks take pride in the fact that We even remember anything about that particular breed of hardware/software. (Said by someone who is currently working with some folks who want to drive an IBM 1403 model 3, built circa 1961, from a Linux box -- it's for a museum display: type your message, press <print>, and listen ...)

        The Kids get a chance to show how good their deductive/analytical/archaeological skills are. Plus, putting the project on your CV makes sure that your Resume will be remembered by the next Hiring Manager. ("You worked on WHAT? I thought they'd all be replaced by PCs by now. Why, I remember Back In the Day ...")

        And everybody gets to be amazed that something designed and built in the 1940s is still around and doing useful work.

        Plus you get some delicious war-stories to trot out at the buddies-n'-brew on Friday night. ("You won't believe how LOUD those things are until you've been in the same room with one....")

        ----
        I Go Back to Sleep, Now.

        OGB

Re: Testing 1...2...3...
by dpuu (Chaplain) on Dec 20, 2010 at 23:09 UTC
    Tests are simply one way of adding redundancy to your code, the idea being that a concept is less likely to be reified incorrectly if it is implemented in two different ways. There are other ways to add this redundancy, for example assertions (assuming you have a tool that does model-checking) or static type-system annotations.

    Whenever you add redundancy, you also add inertia -- resistance to change -- because now each change that you make must be made in (at least) two places: ideally two different ways in those two places.

    Use-case testing is almost always necessary because it is unlikely that any specification can fully capture the nuances of the requirements. Unit testing is somewhat more fungible.
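
    A small illustration of that kind of redundancy in Perl: a fast "production" routine cross-checked against a slow but obviously correct golden model over random inputs. (Both subs are invented for the example.)

        use strict;
        use warnings;
        use List::Util qw(sum0);
        use Test::More tests => 1;

        # Production version: closed-form sum of 1 .. $n.
        sub fast_sum { my ($n) = @_; return $n * ($n + 1) / 2 }

        # Golden model: dumb, slow, obviously correct.
        sub slow_sum { my ($n) = @_; return sum0(1 .. $n) }

        my @mismatches = grep { fast_sum($_) != slow_sum($_) }
                         map  { int rand 10_000 } 1 .. 500;

        ok(!@mismatches, 'fast_sum agrees with the golden model on 500 random inputs')
            or diag "disagreed on: @mismatches";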

    --Dave
    Opinions my own; statements of fact may be in error.
