PerlMonks  

Re: What efforts go into a programming project? (Somewhat OT)

by bikeNomad (Priest)
on Aug 10, 2001 at 19:35 UTC (#103909)


in reply to What efforts go into a programming project? (Somewhat OT)

You might look into Extreme Programming; XP practitioners would argue that the split between design and coding is rather arbitrary -- that much of the time spent "coding" is actually spent thinking about design. They would also say that 10% of time for actual testing is too high, since you'd spend a lot of time up front building automated test suites (you build the test code first, then design/code until the tests pass). Once all your testing is automated, the time spent actually running tests is quite small. Of course, the time spent writing the test suites will be significant.
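The test-first loop described above can be sketched as follows (Python here purely for illustration, since the thread contains no code; `add_tax` and its behaviour are invented for the example):

```python
# Test-first: the test is written before the code it exercises.
# Calling test_add_tax() before add_tax exists raises NameError --
# that failing test is what drives the design of add_tax().

def test_add_tax():
    # The test pins down the interface and expected behaviour first.
    assert abs(add_tax(100.0, 0.08) - 108.0) < 1e-9
    assert add_tax(50.0, 0.0) == 50.0

# Then you design/code until the test passes.
def add_tax(price, rate):
    """Return price with a tax rate applied (rate as a fraction)."""
    return price * (1.0 + rate)

test_add_tax()  # silent once the implementation satisfies the test
```

Each new feature repeats the same cycle: a failing test first, then just enough code to make it pass.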


Re: Re: What efforts go into a programming project? (Somewhat OT)
by echo (Pilgrim) on Aug 10, 2001 at 20:10 UTC
    you build the test code first, and then design/code until the tests pass

    I've tried that and found it didn't work out all that well. In order to write a sensible test, the feature more or less has to be completely designed. I found myself ping-ponging between the code and the test as I was refining the design. My experience is that test suites can't really be written efficiently until a significant amount of design and coding has been done.
    Otherwise I agree with you: automated test suites are the way to go.

      My take on this testing thing is that XP promotes one level of design (project design) while programmers take it to mean a different level (code design).

      When XP says "design the tests first", it means tests on the level of "how will we determine that the code satisfies requirement X", not "this algorithm keeps pumping out numbers that are off by 1/100... I'd better run the debugger and find out what's going on."

      Project design tests should be written first. Users and Programmers do not view requirements in the same light, so when they try to discuss requirements in a shared language, projects fail. Designing project-level tests first forces the Users to properly define their requirements and then forces the Programmers to translate them effectively into terms that they understand - the Users have their requirements and the Programmers have their tests, and both are looking at the exact same thing in their own language.

      This way, the Code Design will be driven by the tests - i.e. the actual requirements, rather than what the Programmers think the requirements are. At the same time, the Users know what results they can expect, and can compare this with what they want and be satisfied that all is well. This results in projects that are done right the first time.

      Code-design-level tests must necessarily be designed and run at design/coding time - this is also the kind of testing that first jumps into programmers' heads when they read about XP, and is, IMHO, why this aspect of XP is so often taken to task.



      ---Apsyrtes
      I've tried that and found it didn't work out all that well. In order to write a sensible test, the feature more or less has to be completely designed.

      That's the point. :)

      You have a feature you want to write. Instead of asking yourself "How can I get my code to do this?", you ask yourself "How can I write a test that proves my code can't do this?" In the process of writing that test, you will discover the design of the code that will make the test succeed. This is the Zen beauty of test-first design. Then you add more tests until you can no longer think of ways to break what you've written.

      This ensures that you don't do too much work. So long as you only test those things you need, and you don't write anything that isn't contributing to making a test pass, you're not over-designing, or succumbing to bloat.
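That "add tests until you can't break it" loop might look like this in practice (a Python sketch with invented names; each assertion represents one attempt to break the code):

```python
def word_count(text):
    """Count words -- written just well enough to pass the tests so far."""
    return len(text.split())

# Tests accumulated one at a time, each trying to break the code.
assert word_count("one two three") == 3
assert word_count("") == 0            # edge case: would break a naive
                                      # version using text.count(" ") + 1
assert word_count("  padded  ") == 1  # split() already handles stray spaces
```

When you can no longer invent an assertion that fails, you stop writing code: the tests define "done".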

Re: Re: What efforts go into a programming project? (Somewhat OT)
by Sherlock (Deacon) on Aug 10, 2001 at 21:07 UTC
    Well, it appears to me that testing is the part of this process people most disagree about. Not being very knowledgeable in the realm of eXtreme Programming (XP), I'd be interested to hear the take on testing from someone who is.

    In your post, bikeNomad, you state: you build the test code first, and then design/code until the tests pass.

    Does this imply that when doing XP, you only perform black-box tests? What about white-box tests? These can't really be designed until you've written the code, or at least done an exhaustive job on the design and know exactly what your code will look like.

    Since you can't really ever perform black box or white box tests exhaustively, I feel that you really need to do a good job of both. Does XP provide for this? I only ask because your post seemed to imply that it did not.


    To get back to the original post, however, I think you've gone a little overboard on the time spent doing design. Don't get me wrong, I think the more time you can dedicate to design, the better (more maintainable, higher quality, etc.) the product you'll develop. On the other hand, it seems like you're skimping on some other areas, specifically coding and testing (primarily testing). You can have an exquisite design, but if you're not testing against the requirements, your final product probably won't be as good as you'd hoped.

    To draw these two points (lack of testing time & black/white box testing) together, let me give you my opinion of how testing should be done.

    Black Box Testing:
    These should be set up when the requirements are created (just as bikeNomad mentioned). You should know, even before creating your project, what it should produce given various inputs, which is the entire nature of black box tests.
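As a sketch, black-box tests can be written as input/output pairs lifted straight from the requirements, before and independent of any implementation (Python; `shout` and the behaviour table are invented for illustration):

```python
# Black-box: test cases come straight from the requirements document.
# They never inspect the implementation -- only its inputs and outputs.
REQUIRED_BEHAVIOUR = [
    # (input arguments, expected output)
    (("hello",), "HELLO"),
    (("",), ""),
    (("MiXeD",), "MIXED"),
]

def shout(s):
    # Any implementation at all could sit here; the tests don't care.
    return s.upper()

for args, expected in REQUIRED_BEHAVIOUR:
    assert shout(*args) == expected
```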

    White Box Testing:
    Your white box tests should be designed to ensure complete path coverage (ensuring that every line of code is executed). This can't be done until your software has been written. This is very important because you can't possibly test your program against all inputs using Black Box Testing. This technique is helpful in finding different types of errors.
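A toy illustration of path coverage (Python; `classify` is invented): black-box tests drawn from the spec might happen to exercise only one branch, so white-box tests are added after reading the code, one per path, until every line has been executed.

```python
def classify(n):
    """Label an integer; there are three distinct paths through the code."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# One test per path -- together they execute every line at least once.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```

In a real project a coverage tool (e.g. coverage.py for Python) would report which lines or branches the suite missed, rather than tracing paths by hand.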

    Now that I've rambled on long enough, I think your assumptions look pretty good. I'd add a little time to testing, however. I hate to take time away from design, but I don't think you can really take any more away from anywhere else. Hopefully, this will help you.

    Good luck,
    - Sherlock

    P.S.
    For anyone unfamiliar with the terms Black/White Box testing, see http://www.scism.sbu.ac.uk/law/Section5/chap3/s5c3p23.html. This was just the first site I found - I'm sure you could find better ones if you looked a little.

    Skepticism is the source of knowledge as much as knowledge is the source of skepticism.
      At first, you can only do black-box tests (since, as you point out, there's no code yet). There's certainly room for white-box testing in XP, but it comes later. After all, since your team designed it, coded it, and wrote the tests, any test added after some code has been written can't help but be white-box!

      I think a lot of the distinction between white-box and black-box testing assumes that the people writing and performing tests are different from the people designing and coding. With XP, of course, it's the same people making and running the unit tests.

      Coverage analysis would help to find what you've missed tests for. It's hard to err by having too many tests.

      In XP, you keep adding tests for new features/behavior before you add the new features themselves. At that time, you could add more tests for boundary cases in the code you just wrote if you think it's going to be a problem.

      In XP, there are two kinds of tests - "Acceptance Tests" and "Unit Tests". The Acceptance tests ensure that the code conforms to the requirements, end-to-end. The Unit tests are written against each individual part of the code as it is being developed. You also take significant steps to isolate the code that is being tested from the rest of the program.
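One way to picture the isolation step mentioned above is substituting a stub for the unit's collaborator, so the unit test exercises only the code under test (a Python sketch; all names are invented):

```python
def total_price(cart, tax_service):
    """Unit under test: depends on a tax service it doesn't own."""
    subtotal = sum(cart)
    return subtotal + tax_service.tax_for(subtotal)

class StubTaxService:
    # Stands in for the real (external, perhaps slow) tax lookup,
    # cutting total_price off from the rest of the program.
    def tax_for(self, amount):
        return amount * 0.5  # a fixed, predictable rate for testing

assert total_price([10.0, 20.0], StubTaxService()) == 45.0
```

An acceptance test, by contrast, would wire up the real tax service and check the same behaviour end-to-end against the requirement.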

      Unit Tests are mostly white-box in that they are written in conjunction with the source-code. The first test proves that you can't get away with just doing nothing. After you have written the first test, you will examine your code, and write a second test that illuminates a failure condition within that code. You then work until that test ceases to fail, and continue the cycle until you can't think of anything more to test.
