No such thing as a small change
Re^5: Does anybody write tests first? by BrowserUk (Pope)
on Feb 25, 2008 at 22:44 UTC
And program code is code. Therefore, if you write no code at all, you'll have no bugs. Of course, you'll also have no features.
Is that a facetious reply? Or did you genuinely think I was not aware of that obvious consequence? :)
On a more serious note. A step of project design that was common years ago, but that seems to be missing from too many shops these days, is risk/benefit analysis. It is entirely possible, and surprisingly common, that once a project has been shown to be possible, and the predicted development effort costed, the biggest ROI possible is to not do the project at all.
The point should be clear. You write just the code required to implement the features you need. And do just as much as is required to test those features.
Writing extra code or tests now, to hedge against future possibilities, is wrong. There are three possible outcomes of that extra effort--no matter how little extra it is.
So, simplistic math puts your chances of predicting the future correctly, and so benefiting from the extra effort expended, at 33%. If you believe that your powers of prescience can do substantially better, give up programming and start playing the stock market or visiting casinos. But keep quiet about it, because your local military psy-ops team are likely to come looking for you in the middle of the night if they get wind of it :)
So is your objection to writing tests first as opposed to after the fact? Or to hacky, poorly-designed tests, regardless of whether they were written first or last? My hypothesis would be that tests are more likely to be designed well when they are viewed by the developer as an integral part of the development of program code rather than something to be added afterwards -- at least with respect to individual developers.
My objection (to typical Perl/CPAN test suites) is the prevalent methodology. It is really hard to make a cogent argument on this subject in the abstract.
It can be typified by the test suite for DBM::Deep. Let me say here that I think dragonchild has done an amazing job with this module, and his test suite is extensive and thorough. What I am going to be critiquing here is the effort that has gone into its construction, and its opacity for those coming along to use it after the fact.
Certainly incomplete, but in essence, DBM::Deep allows you to create Perlish hashes and arrays on disk.
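For concreteness, a minimal sketch of that interface (file names here are placeholders; see the DBM::Deep docs for the full constructor options):

```perl
use strict;
use warnings;
use DBM::Deep;

# A hash that lives on disk: writes persist across runs of the program.
my $db = DBM::Deep->new( 'foo.db' );
$db->{config}{retries} = 3;              # nested structures work transparently
print $db->{config}{retries}, "\n";

# An on-disk array, selected via the type option.
my $list = DBM::Deep->new( file => 'list.db', type => DBM::Deep->TYPE_ARRAY );
push @$list, 'first entry';
print scalar @$list, "\n";
```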
Okay, so now let's think about a testing strategy to cover that lot. My initial thoughts are:
More would be required, but this is just a reply to a SOPW reply (to a SOPW reply...).
For repeatability, I seed the PRNG with srand.
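A minimal sketch of that seeding (the default seed and the key-generation loop are illustrative, not from any particular suite):

```perl
use strict;
use warnings;

# Seed the PRNG so every run produces the identical "random" test data.
# Accept a seed on the command line to explore new sequences, but always
# print it so a failing run can be replayed exactly.
my $seed = @ARGV ? shift : 12345;
srand( $seed );
print "# seed: $seed\n";

# Generate some reproducible random keys for the tests to exercise.
my @keys = map { join '', map { ('a'..'z')[ rand 26 ] } 1 .. 8 } 1 .. 5;
print "$_\n" for @keys;
```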
For regression testing, I redirect the terminal output to a file and compare against an earlier capture using diff.
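That capture-and-compare step might look like this sketch (the log file names and the inline "test output" are placeholders; File::Compare is a core module):

```perl
use strict;
use warnings;
use File::Compare qw( compare );    # core module; compare() returns 0 when identical

# Redirect STDOUT to a log for the duration of the tests, then restore it.
open my $saved, '>&', STDOUT        or die "dup stdout: $!";
open STDOUT, '>', 'this_run.log'    or die "redirect: $!";

print "test 1: ok\n";               # stand-ins for the real test output
print "test 2: ok\n";

open STDOUT, '>&', $saved           or die "restore stdout: $!";

if ( !-e 'known_good.log' ) {
    rename 'this_run.log', 'known_good.log' or die "rename: $!";
    print "baseline captured\n";
}
elsif ( compare( 'this_run.log', 'known_good.log' ) == 0 ) {
    print "no regressions\n";
}
else {
    system 'diff', 'known_good.log', 'this_run.log';
}
```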
This strategy allows me to add temporary debug trace without completely screwing up the rest of the testing.
I can drop into the debugger, set a breakpoint, skip over the early tests and walk through the failing test.
At any time I can enable/disable asserts to stop at the point of failure or just log and run on.
At any time I can enable/disable full trace back or just top-level caller traceback.
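One way to get those toggles is a compile-time DEBUG constant plus Carp; the ASSERT sub here is hypothetical glue for illustration, not any module's API:

```perl
use strict;
use warnings;
use Carp qw( carp confess );

# Compile-time flag: with MY_DEBUG unset, DEBUG is a constant 0 and any
# "DEBUG and ..." statement is folded away entirely by the compiler.
use constant DEBUG => $ENV{MY_DEBUG} || 0;

# Hypothetical assert: at DEBUG >= 2 a failure dies with a full stack
# trace (confess); otherwise it logs with the top-level caller (carp)
# and runs on.
sub ASSERT {
    my( $ok, $msg ) = @_;
    return if $ok;
    DEBUG >= 2 ? confess( "Assertion failed: $msg" )
               : carp(    "Assertion failed: $msg" );
}

DEBUG and ASSERT( 1 + 1 == 2, 'sanity' );        # costs nothing when DEBUG is 0
DEBUG and ASSERT( 2 + 2 == 5, 'arithmetic' );    # logs or dies, per DEBUG level
print "done\n";
```

Guarding each call with `DEBUG and` is what makes the disabled case truly free: the whole statement is constant-folded out at compile time.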
There have been several replies that say "you can do that too with Test::*/prove/TAP". That's fine (though many of the can-do-that-too's seem to be very recent additions, on the basis of my encounters), but I still question what those tools give me that is extra and useful.
And does that make up for all the things--print, debugger, traceback, remoteness--that they take away? IMO, the only extra they give is a set of statistics that I have no interest in and can see no benefit from.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.