PerlMonks  

Programming strategy with no on-going testing

by punkish (Priest)
on Mar 13, 2005 at 21:54 UTC ( #439149=perlmeditation )

strategy monks,

This weekend I indulged myself in a programming exercise with a difference. Usually, as I program, I keep on testing. So, for example, if I am making a web-based app, I will write a bit of code, load it in the browser and test it, fix it if needed, and then go forward. Call it my "training wheels."

Except, this weekend I worked without any training wheels. I wrote the entire app (I was up until some horrendous hour) and didn't test it even once. Of course, when I finally did test it, I had many errors to fix. However, that got me thinking -- just as in an immersion class for a foreign language, where English is not allowed at all, would it make me more careful and a better programmer if I didn't use any training wheels?

I am a relative noob at Perl, so I seek advice and ministrations from wiser monks. How do you go about it?

  • Do you lock yourself in a room and code till you finish, or do you ensure you have plenty of distractions in the way of cats and music and family, etc.?
  • Do you go at it in one shot without getting up, or do you break it up into small phases?
  • And, most important, do you test as you go, or have you become so good that you don't make silly little mistakes such as misplaced punctuation and the like?

How do you write a lot of code while making sure that each smaller component piece is bullet-proof?

Update: I realize that "no on-going testing" in the title goes against the spirit of my final question above. The only way to write a large body of error-free code is to write and test lots of constituent small bits of code. I guess by "testing" I mean more the constant "load it in the browser and let Perl complain and tell you what is wrong" instead of preemptively not writing what is wrong in the first place.

--

when small people start casting long shadows, it is time to go to bed

Replies are listed 'Best First'.
Re: Programming strategy with no on-going testing
by brian_d_foy (Abbot) on Mar 14, 2005 at 00:27 UTC

    I wouldn't call testing "training wheels", and I wouldn't compare it to brain rot from using a calculator. Test code makes you think more, in fact.

    As for the rest of the stuff, you have to find how you work best, and do that. Don't necessarily try to copy how someone else works.

    For me, I try to create small, compartmentalized code that I can easily test. With small bits, I don't need a big block of time to work on it (more breaks without losing context), can write focussed tests, and can make easier changes. A long hacking session isn't necessarily bad, but if you have to do that to get anything done, you aren't being effective.

    --
    brian d foy <bdfoy@cpan.org>
Re: Programming strategy with no on-going testing
by jhourcle (Prior) on Mar 13, 2005 at 22:56 UTC

    There is too much variability in the premise of the question. If I'm doing a short, one-off script, or something that's similar to things I've done before (or uses concepts that I've used dozens of times before), I'm probably not going to do any testing.

    If it's something that's going to run for years, but I'll have tight control of, and there aren't any major repercussions if it fails, I'll do some basic testing, but it's not going to be exhaustive.

    If it's going to be released into the wild, or I'm doing it under a contract, and I'll have no control over how it gets used, I'm going to do my damnedest to make sure that every possible permutation gets tested.

    I find that it's a lot like optimizing code -- you have to weigh the overall risk of something failing (the chance of it happening, the overall impact, etc.) against the amount of time the tests are going to take. If writing tests is going to take 4x longer than writing the initial code, you're not always going to do it. (Like what I was working on last week... but because of the end product, I needed it bulletproof, so I wrote the tests.) If I had been on a tight deadline, I might not have done such exhaustive testing. (Much of the time spent writing my tests went into discovering that the logic in my test scripts was flawed, not the actual program itself.)

    I find that a few things help, such as using a text editor that's language-aware (the one I use handles syntax coloring, string coloring, and matching ({[]}) characters), which can give me a few clues before things have gotten too far out of line. Even the occasional perl -wc can be enough to make sure you haven't completely screwed up. For the most part, I try to get things working whenever I reach a good stopping point for the day -- sometimes, there isn't a good point for the day. If my brain feels fried in the afternoon, I'll sometimes leave stuff 'till the next morning, or even over the weekend... it just depends on the situation.

    I think there is no one good answer for this -- yes, more experience helps, but you have to evaluate what it is that you're trying to do, and what the correct level of testing is for that particular program/module/function/whatever. If I'm on a roll programming, I'll keep on writing for quite some time (maybe days), without any debugging. Of course, if I'm massively stuck, I might go for days without any debugging, too.

Re: Programming strategy with no on-going testing
by TedPride (Priest) on Mar 13, 2005 at 22:39 UTC
    It's impossible to write a program of any size without bugs, and finding a bug inside a finished program is far more difficult than finding it in a component part. Personally, I'd just use structured testing and leave perfection to someone else. The more programming you do, the better you'll get at it, and time spent searching for a bug is just time wasted.

    EDIT: Your comparison between language immersion and writing a large program all at once is invalid, since you get real-time feedback during the language immersion. You don't spend the whole class trying to pronounce words on your own and then get a list of everything you did wrong at the end.

      Your comparison between language immersion and writing a large program all at once is invalid, since you get real-time feedback during the language immersion. You don't spend the whole class trying to pronounce words on your own and then get a list of everything you did wrong at the end.

      You are only partially correct, in my view. While I may get real-time feedback, I may have absolutely no idea what to make of that feedback. I remember my first visit to the Netherlands: my friend Wim Bloemen took me to a family gathering... I was the only one in a room of maybe 20 Dutch family members who had no clue what the others were chattering about (except, by some ESP, I always knew when they were talking about me). It was a fascinating experience as, for once, I was a lingual minority. In such situations, the real-time feedback becomes just like a Perl golf regular expression... a bunch of line noise.

      In any case, my point was -- am I making myself prone to committing errors because I am always depending upon the perl -w switch to complain? It's like constantly depending on the calculator and hence losing the ability to compute basic math quickly in one's head.

      --

      when small people start casting long shadows, it is time to go to bed
Re: Programming strategy with no on-going testing
by friedo (Prior) on Mar 13, 2005 at 22:33 UTC
    I tend to over-test. Generally, I'll write a small piece of code, usually less than ten lines, run the program to make sure nothing broke, and then continue.

    It may not be as efficient as writing larger chunks all at once, but it lets me narrow down the problem to a very small section when something breaks.

Re: Programming strategy with no on-going testing
by revdiablo (Prior) on Mar 14, 2005 at 03:21 UTC

    I think you might be causing some confusion with the terminology you're using. When people speak of testing code, more and more they mean in an automated fashion. In other words, test suites that exercise all of the code for you, and check for expected results. There's a difference between that and trying out the program in an ad-hoc way after you make changes. Sure, both can be said to be testing the code, but generally the automated test suite variety is what people call "testing."

    If your post is about testing as in writing test suites, then I think the implication in your post -- that "testing" is a crutch -- is wrong. As others have already written, any non-trivial piece of code will tend to have some bugs, and anything you can do to find those bugs is helpful. Furthermore, a good test suite can help ensure changes to one part of the code don't cause problems in other parts. I definitely think test suites help with those things, so I think they're useful tools.

    If, on the other hand, you meant trying to go through the program after each change, then it's not so clear. Certainly, you will tend to "smoke out" new bugs, but only if you do the right sequence of events. It could end up giving a false sense of security when things appear to go right. On the balance, though, I think it's worthwhile.

    So I guess my point is that whichever type of testing you were talking about, it is useful. Writing a lot of code all at once may be fun, and if it works on the first try, very satisfying. But my experience says that's not a very common occurrence, and it's probably best to try the code as it's being written, so problems can be fixed as they're created.

Re: Programming strategy with no on-going testing
by bradcathey (Prior) on Mar 14, 2005 at 02:02 UTC

    Since most of my web applications are similar in nature (get user input, validate, write to database, retrieve for edit or page display) I tend to write all the code, run it, and then fix all the errors (which tend to be about the same: strict and database errors).

    Having said that, whenever I introduce a new piece of logic (a routine or piece of code I haven't used) I will write a test distilling the code down to the absolute basics. I want to eliminate the less obvious errors before they happen and save the routine errors for that initial run attempt.

    Writing code, in some ways, is akin to writing copy for a brochure or a letter. It's sometimes best to just write in broad strokes to keep the spirit of the piece flowing, and then to come back and run the spell checker and make my edits. I.e., don't let the technical detail interrupt that creative flow.


    —Brad
    "The important work of moving the world forward does not wait to be done by perfect men." George Eliot
Re: Programming strategy with no on-going testing
by TedPride (Priest) on Mar 13, 2005 at 23:56 UTC
    "In any case, my point was -- am I making myself prone to committing errors because I am always depending upon the perl -w switch to complain? It's like constantly depending on the calculator and hence losing the ability to compute basic math quickly in one's head."

    You're confusing diagnosing errors with committing errors. Using -w all the time might reduce your ability to quickly FIND an error, but it won't reduce your ability to PREVENT an error. Given that -w exists, why make life more painful? Find the errors and move on.

Re: Programming strategy with no on-going testing
by hok_si_la (Curate) on Mar 14, 2005 at 04:15 UTC
    Greetings Punkish,

    Though I am relatively new to Perl, I feel inclined to share with you what several if not all of my old college profs shared with us as students. Certainly arguments can be made for both sides, but nonetheless here is what we were taught.

    Many of my professors used analogies for coding. One of the more common ones was the process of building a car. Manufacturers do not just throw one together and run through the testing afterwards. Each individual component is tested for various weaknesses before the vehicle is put together. Once assembled, various systems within the vehicle are tested as a whole.

    Most of them swore by the following method for larger projects. (I am making the assumption you are one of the 99% of us that do not create SRS documentation with milestones.)

    Break your code up into the smallest possible functional parts, and test those parts as you program. For instance, if you were inserting a record into a DB from a text file, first check to ensure you have successfully created a session with your DB table, then actually test your insertion method. In languages like Perl this requires writing stubs for all of the other necessary code snippets. So if you were inserting a line from a text file, then store the test info in a scalar variable and plug away. After you have coded your app in the above manner you can stress, top-down, bottom-up, incremental, black-box, or white-box test your entire program to look for joint weaknesses.

    Sorry if any of you found this response long winded,
    hok_si_la
Re: Programming strategy with no on-going testing
by nerfherder (Monk) on Mar 14, 2005 at 08:00 UTC
    Do you lock yourself in a room and code till you finish, or do you ensure you have plenty of distractions in the way of cats and music and family, etc.?
    Distractions? What for?
    Do you go at it in one shot without getting up, or do you break it up into small phases?
    Yeeeessssss.
    And, most important, do you test as you go, or have you become so good that you don't make silly little mistakes such as misplacing punctuation and the like?
    Test as you go by creating test scenarios to verify desired output. If you perceive that you would be wise to become more aware of the perils of punctuation in Perl programming, it is advisable to do so.
    How do you write a lot of code while making sure that each smaller component piece is bullet-proof?
    In practical application, the method of "test-driven" programming or whatever you want to call it is just the way you end up doing it when it has to get done. To help figure out "what's right for you", ask yourself: would you compose a symphony (or even try to figure out how to play some catchy tune) without picking up an instrument to see if you're on the right track?

    <reggae>The wise man build his house on the rock; the foolish man build his house on the sand </reggae> :-)
Re: Programming strategy with no on-going testing
by artist (Parson) on Mar 14, 2005 at 11:32 UTC
    This weekend I indulged myself in a programming exercise with a difference. Usually, as I program, I keep on testing. So, for example, if I am making a web-based app, I will write a bit of code, load it in the browser and test it, fix it if needed, and then go forward

    I do something similar. This is what I am planning to do: if your needs are fixed, define them first. If they are flexible, then find a suitable level and define them. Then write non-browser tests (as in a module). Write a small piece of code at a time, and run the tests against your code.

Re: Programming strategy with no on-going testing
by Anonymous Monk on Mar 14, 2005 at 12:23 UTC
    And, most important, do you test as you go, or have you become so good that you don't make silly little mistakes such as misplaced punctuation and the like?
    A lot will depend on how "expensive" tests are. The price of a test set comes in two ways: creating the test set, and running it. Take for instance the early stages of your new project. After writing the first bits and pieces, you don't have anything yet that can stand on its own. If you want to test the bits and pieces, you may have to create a lot of scaffolding to get them running; this makes creating the test suite relatively more expensive -- and so does running it, since each failed test needs to be examined: was it a bug in the bits and pieces, or did the scaffolding fail? And if your test involves smashing a $100,000 car against a concrete wall, it does make sense to limit the number of test runs as much as possible.

    Like everything in programming practices: it's a trade-off.

Re: Programming strategy with no on-going testing
by 5mi11er (Deacon) on Mar 15, 2005 at 17:01 UTC
    I've noticed no one's hit your question square on (at least in my opinion), so I'll try.

    No matter how "good" or experienced you get you'll always make silly little mistakes. Punctuation problems are probably going to end up as a large percentage of the problems you'll "create" over the course of your programming lifetime. There are tools that will help minimize them, such as the colorized editors etc. But they will still happen.

    As most everyone else has said, small chunks and lots of ongoing testing are better. It will generally be more efficient. I remember times in my past when, after writing lots of code without testing, I spent the next two days debugging all of it. No, it's much more efficient to test a little bit than it is to have to debug for a LONG time.

    On the other hand, if you're not very experienced, then you will likely have more logic problems to deal with. You will get better at being able to "run the code through your head" to discover logic problems before the output shows you that the code is not doing what you expected or wanted. It will just take time, and "methodology" is really not going to help speed this up very much. But logic problems still reinforce the write-a-small-bit-and-test methodology: a lot of flawed logic written at once will take longer to sort out than short bits, so it still follows that you want to write short amounts and test them often.

    Hopefully I've gotten to the heart of your questions,

    -Scott

Re: Programming strategy with no on-going testing
by steelrose (Scribe) on Mar 16, 2005 at 18:51 UTC
    I take an approach that's slightly different from what I've seen here, though I guess my answer is probably that I test chunks of code. I first write comments that describe the logic of the program, from start to finish (this keeps my code well documented). I then go in and write the code for a block of comments, and test it. I find that it helps me remain focused on the big picture that way, and makes it easy to pick up a project that I've begun but had to put aside for a length of time. I'm particular about commenting code; I hate going into code that is "self-documenting" because the programmer thought it didn't need any explanation.

Node Type: perlmeditation [id://439149]
Approved by BazB
Front-paged by hsmyers