in reply to Does anybody write tests first?
In fact, I have code in production just a couple of weeks into development, and the scripts actually being run live in my t/ directory. I've run a module this way for weeks at a time, doing production work from the t/ directory while I continue to develop it: adding features, monitoring output, refactoring early efforts, and tightening up the pieces I still have to handle manually, until it's ready for prime time.
I find it very helpful to watch a test fail, and then to resolve each error in turn as it presents itself, until the test passes.
I code in vim, using Konsole on a GUI desktop to access a command line. Konsole permits multiple tabs, each in its own directory, with its own tab label, perhaps even connected to its own server. I'll use three adjacent tabs, all in my sandbox/My-New-Project/ directory. In the first I run vim lib/My/New/Project.pm. The second invokes vim t/20-my-new-project.t. The third is used to run `time perl t/20-my-new-project.t`.
If a database is involved, a fourth tab gives me a psql or mysql prompt to the affected database, even when my test scripts are monitoring the results in the database directly. If I'm writing a CGI script, I'm likely to have a browser open to localhost/t/testscript-my-new-project.cgi as well.
I don't write tests first so much as alongside. Usually just a test or three at a time, then the code they exercise, then a couple more tests, and some additional code. By the time I've built a ->new() method, there may be a dozen or two tests making sure that all the sanity checks are working.
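A minimal sketch of what those constructor sanity-check tests look like, using core Test::More. The module and its checks here are hypothetical and inlined so the example is self-contained; in practice the package would live in lib/My/New/Project.pm and the tests in t/20-my-new-project.t:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;

# Hypothetical module, inlined for the sake of the sketch.
package My::New::Project;

sub new {
    my ($class, %args) = @_;
    # The kind of sanity checks the tests below exercise.
    die "name is required\n" unless defined $args{name};
    die "limit must be a non-negative integer\n"
        if defined $args{limit} && $args{limit} !~ /^\d+$/;
    return bless { name => $args{name}, limit => $args{limit} // 100 }, $class;
}

package main;

# A test or three for the constructor, written alongside the code.
ok(  My::New::Project->new(name => 'demo'), 'new() with a name succeeds' );
is(  My::New::Project->new(name => 'demo')->{limit}, 100,
     'limit defaults to 100' );
ok( !eval { My::New::Project->new() }, 'new() without a name dies' );
ok( !eval { My::New::Project->new(name => 'demo', limit => -1) },
     'new() rejects a bad limit' );
```

Each test names the behavior it guards, so when one fails the diagnostic tells you which sanity check broke.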
I've found that I spend far less time debugging this way than I used to. No more chasing errors around. When I have three or five tests that reasonably exercise an internal method, changes are less scary.
I also use Devel::Cover to examine the coverage offered by my test suite. When I get stuck or bored moving forward on implementing a design, I might spend some time looking at how I can improve that coverage.
My first supposedly Test-Driven-Development project got me working code and a well-populated t/ directory. But I was aghast at how few branches were actually exercised by my tests when, months later, I applied Devel::Cover to it. Now I run my test suite under Devel::Cover from time to time as I go, and fill in the holes. It leaves me far more confident in the results I produce.
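Running the suite under Devel::Cover is only a couple of commands. A sketch, assuming Devel::Cover and prove are installed and the suite lives in t/ with the module in lib/:

```shell
# Clear out results from any previous coverage run.
cover -delete

# Run the whole t/ directory with Devel::Cover loaded into each test process.
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -l t/

# Summarize statement, branch, and subroutine coverage from cover_db/;
# add "-report html" for a browsable report.
cover
```

The branch and condition columns in the summary are where the holes tend to hide, in my experience.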
And as one client mentioned to me, it left her far more confident in my work as well. I sent her the test results. The well-named tests (named after bugs or variations in the data) left her confident not only that the module I built for her would appropriately process the test data she had sent me, but that it would also handle the 5,000 records the module was written to process.