PerlMonks  

Re^4: Testing IS Development

by JavaFan (Canon)
on Mar 11, 2009 at 13:31 UTC


in reply to Re^3: Testing IS Development
in thread Testing IS Development

True. OTOH, I find that writing tests forces me to codify and explicitly state my assumptions (even if not in a form the typical end-user would understand), which, in turn, forces me to think about and identify those assumptions.
That is only added value if you don't think about assumptions when you are coding.

I generally do. I don't suddenly consider assumptions more when I code tests than when I write code. And I'm not talking about assumptions like "snow is always white". I'm talking about "assume the data we're interested in is in table X in database Y on server Z", and I assume that because the company wiki says so. But then it turns out that table Z.Y.X is obsolete, and the data currently lives in tables A, B, C in database D on server E. Testing is not going to find that, because when you write your tests, you make mock data based on table Z.Y.X. The tests succeed. The code would have worked fine if the report had indeed used data from table Z.Y.X. But since the assumption is wrong, the entire chain falls.
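The trap described above can be sketched in a few lines of Perl (the table and column names here are invented for illustration, echoing the hypothetical Z.Y.X from the post): when the mock database handle is built from the same wrong assumption as the code under test, the two agree with each other and the suite passes, while the real database would reject the query.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Hypothetical report code, written to match the (outdated) company
# wiki: it assumes the data lives in table X, database Y, server Z.
sub fetch_report_rows {
    my ($dbh) = @_;
    return $dbh->selectall_arrayref('SELECT id, amount FROM Y.X');
}

# Mock handle built from the *same* wrong assumption, so the test
# can never notice that Z.Y.X is obsolete in production.
{
    package MockDBH;
    sub new { bless {}, shift }
    sub selectall_arrayref {
        my ($self, $sql) = @_;
        return [ [ 1, 100 ], [ 2, 250 ] ] if $sql =~ /FROM Y\.X/;
        die "no such table\n";
    }
}

my $rows = fetch_report_rows( MockDBH->new );

# Passes, yet the report would still fail against the real database.
is_deeply( $rows, [ [ 1, 100 ], [ 2, 250 ] ], 'report rows fetched' );
```

The mock faithfully verifies the code's behavior, but both sides of the comparison descend from the same wiki page, so the shared assumption is never exercised.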


Re^5: Testing IS Development
by sundialsvc4 (Abbot) on Mar 11, 2009 at 15:06 UTC

    The tests of which you speak here really move into the realm of process data integrity, not the specific testing of any particular application. Like any “manufacturing production-line,” the shop must have the means to validate where the data is actually coming from, and that the correct parameters were specified to the applications that were run. This is an ongoing part of the daily production process.

    This presupposes, of course, that the applications themselves are “known good”; the whole exercise is essentially worthless if they're not. In other words, they do have a test-suite, it does validate the handling of the data that is flowing through each application, and it does also check that invalid data will be detected and rejected. Each time an application is deployed to the production environment (by the personnel who are responsible for that ... not the developers themselves), it must clear all tests.

    So, the two concerns are complementary to each other, not exclusive.

      The tests of which you speak here really move into the realm of process data integrity, not the specific testing of any particular application.
      That may be true for the specific example I gave*, but remember, I only raised the issue after the following statement was made:
      The benefit of test driven development is, you *know* your code will work before you ship it.
      which, IMO, is so far from the truth that I wouldn't hire a programmer who thinks that way. I'd rather have a programmer who's unsure than one who's convinced he's right when he isn't.

      *A programmer might also assume the area of a circle is 22/7 times the radius, and write his/her tests accordingly.
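The footnote's trap is easy to make concrete (the function and test below are invented for illustration): if the test author holds the same faulty belief as the implementer — here, the footnote's "22/7 times the radius" — the test encodes the wrong formula too, and the suite passes while both are wrong.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Faulty assumption shared by the code and its test:
# area = 22/7 * r  (the correct formula is pi * r**2).
sub circle_area {
    my ($r) = @_;
    return 22 * $r / 7;
}

# The test was written from the same assumption, so it passes
# even though the formula is wrong.
is( circle_area(7), 22, 'area of a circle with radius 7' );
```

The test is green, and stays green, because it validates the code against the assumption rather than against reality — exactly the failure mode being debated above.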

        I respectfully dissent. CPAN, for instance, wouldn't be CPAN without “all those tests.” After all, we don't need to be dealing with somebody else's bugs:   we have plenty enough of our own.

        Perhaps we can take the viewpoint of Thomas Edison's quote:  “I know a hundred ways to build a light bulb that don't work.” In our case, “we know a hundred ways and places that the code doesn't fail.” This does not, of course, mean that the software is defect-free; we know it has plenty of defects lurking in there somewhere. But the tests that we do have give us a good foundation for judging where the defects are much less likely to be.

        I would also offer the opinion that this becomes a lot more important when you have a large number of developers working on the same project:   there is no longer a single person who “lives, breathes, and sleeps with this piece of code every day,” and who therefore has a gut instinct about it. More than just a few people now need a basis for determining that the code is (and remains) reliable. When a bug happens, all of them have to dig for it, and having some objective sense of where not to start digging (first) is very helpful.

        *A programmer might also assume the area of a circle is 22/7 times the radius, and write his/her tests accordingly.
        Yes, but (s)he will at least detect when the assumption, correct or incorrect, no longer holds. See here for a real-world example where simple testing most likely would have prevented a serious bug.
        --
        No matter how great and destructive your problems may seem now, remember, you've probably only seen the tip of them. [1]
