PerlMonks

Re: A danger of test driven development.

by dragonchild (Archbishop)
on Oct 03, 2005 at 11:19 UTC ( #496872=note )


in reply to A danger of test driven development.

Sounds like you didn't read all of the TDD / XP documentation. Whenever two pieces of code disagree, verify that the simpler one is correct (because it's simpler to verify), then check the more complex one. This is kinda like checking that the computer is still plugged in when the monitor mysteriously goes blank.
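
For instance (a hypothetical conversion function, invented here for illustration): when a test fails, the heuristic says to reread the one-line expectation first, because often the test itself encoded the wrong answer:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical example: the implementation (more complex) and the
# test expectation (simpler) disagree, so verify the simple piece first.
sub celsius_to_fahrenheit {
    my ($c) = @_;
    return $c * 9 / 5 + 32;
}

# Had this expectation said 221, the bug would have been in the test,
# not the sub: 100 * 9/5 + 32 really is 212.
is( celsius_to_fahrenheit(100), 212, '100C converts to 212F' );

done_testing();
```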

I'm also not sure you were truly doing TDD if you had code you needed to change to pass the tests. You shouldn't have to change code to pass tests - the code should be written literally line by line to pass tests you wrote five minutes earlier. I literally write can_ok( $CLASS, 'foo' );, watch that fail, then write sub foo {}, watch it pass, then write the first test for foo().
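
A minimal sketch of that red-green micro-cycle with Test::More (the Counter class and its foo() method are hypothetical, made up just for this illustration):

```perl
use strict;
use warnings;
use Test::More;

package Counter;          # hypothetical class, not from the post
sub new { bless {}, shift }
sub foo { }               # empty stub, written only after can_ok() failed

package main;
my $CLASS = 'Counter';

# Step 1: this test is written first, and fails until the stub exists.
can_ok( $CLASS, 'foo' );

# Step 2: only now do you write the first behavioral test for foo(),
# which then drives the real implementation, one small step at a time.
is( scalar( $CLASS->new->foo ), undef, 'foo() is still just a stub' );

done_testing();
```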

It sounds like you wrote too much untested code at once.


My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Replies are listed 'Best First'.
Re^2: A danger of test driven development.
by pg (Canon) on Oct 03, 2005 at 15:14 UTC
    "You shouldn't have to change code to pass tests"

    Huh? Then I guess that you have to change your test to let your code pass ;-)

Re^2: A danger of test driven development.
by Perl Mouse (Chaplain) on Oct 03, 2005 at 12:01 UTC
    Whenever two pieces of code disagree, verify that the simpler one is correct (cause it's simpler to verify), then check the more complex one.

    I did, and the test looked ok.

    You shouldn't have to change code to pass tests

    Really? You never change code? All the code you write is correct, always? You never make typos, swap arguments, popped instead of shifted? Lucky you, you don't need any tests!

    It sounds like you wrote too much untested code at once.

    No, I didn't. I did write big enough chunks of code between test runs to actually have compilable, no non-sense code, though.

    Perl --((8:>*
      I obviously didn't make myself understood. The point is that you write the test, write the code that passes the test, and then that piece of code is done until the spec changes (or you refactor). You shouldn't have untested code sitting around such that writing the next test forces you to go back and fix code.

      The point is that while you might have written tests before code, you wrote code that wasn't tested until you wrote the test for it. I think your idea of "no non-sense code" is unreasonable. About half the code I write between runs is what I think you would consider "non-sense" code. A lot of the time, I will have code that I know is wrong because I haven't yet written the test that demonstrates its "wrong"-ness. Until I write the test that demonstrates what's wrong, forcing me to refactor, I don't refactor. Otherwise, I would have written code before that code's test, which means I'm not doing TDD anymore.

      Now, I don't ship code in that state (though I do check it in). The test suite doesn't yet fully implement the spec, so the feature isn't done. But you literally write the minimal, simplest code that will work. Unoptimized, unvalidating code. Crappy CS101 code. Then you write the tests that expose the weaknesses, forcing the refactor to correct code. But only with tests.
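
      A sketch of what that discipline looks like (the mean() function is hypothetical, not from the thread): the first cut is the unvalidating CS101 version, and it stays that way until a test exposes the weakness and forces the refactor:

```perl
use strict;
use warnings;
use Test::More;

# The crappy-but-passing first cut: no validation whatsoever.
sub mean {
    my @nums = @_;
    my $total = 0;
    $total += $_ for @nums;
    return $total / @nums;    # happily divides by zero on an empty list
}

is( mean( 2, 4, 6 ), 4, 'simplest code that passes the first test' );

# Only when this test is written does the weakness get exposed,
# forcing the refactor -- not before.
ok( !defined eval { mean() }, 'empty list is not yet handled: it dies' );

done_testing();
```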


      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
