No, I don't. I have tried it a few times, but it's not appealing to me. Basically, what you attempt to do with the "write tests first" paradigm is delegate the most important checks to the machine: that the code you have written is correct.
Now, correctness is an elusive word to define, because the correctness of a program depends on its requirements, and requirements are seldom, if ever, fully consistent or complete. Nevertheless, the point is that by writing tests you describe the procedure for checking that your code is correct, and then you can automate those correctness checks by having the machine run them.
The benefit is that if you can fully formalize what correct means, you can check the program time and again -- or rather, you can just run the test framework and the machine will report its findings. Sounds like a good application of laziness.
However, this only works if you can formalize -- if you can write comprehensive tests in the first place. There are several reasons why that is difficult, or even impossible.
First, there's the question of what input to use. In all but trivial cases (i.e. no input, or a constant expression), the input space is too large to search comprehensively. So you settle for corner cases and the usual sources of error. Setting aside how you come up with these (perhaps it's a skill you develop as you write tests), how do you know you have them all nailed down? How do you know you are testing with the right input, or that you have covered all the corner cases? It seems to me that if you knew this, you wouldn't need to test; you could simply prove (by deductive or inductive reasoning) that the piece of code always works.
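To make the corner-case hunt concrete, here is a minimal sketch using Test::More. The clamp() function and the specific boundary values are made up for illustration; the point is that the list of cases is only ever as complete as your imagination.

```perl
# A hypothetical clamp() and the corner cases you might think to test.
use strict;
use warnings;
use Test::More;

sub clamp {
    my ($x, $lo, $hi) = @_;
    return $lo if $x < $lo;
    return $hi if $x > $hi;
    return $x;
}

# The usual suspects: a value inside the range, the boundaries
# themselves, and values just outside them.
is(clamp(5,  0, 10), 5,  'value inside the range');
is(clamp(0,  0, 10), 0,  'lower boundary');
is(clamp(10, 0, 10), 10, 'upper boundary');
is(clamp(-1, 0, 10), 0,  'just below the range');
is(clamp(11, 0, 10), 10, 'just above the range');

done_testing();
```

Nothing here tells you whether five cases are enough, or whether the interesting failure lives at some input you never wrote down.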
Second, the language you write tests in is the same language you use to write the application. This is natural, since you want easy integration and executable test specifications, but it is probably not the best choice. I know I can't express certain things in Perl that I would need for writing tests -- or more specifically, for formalizing how to check correctness. I don't mean fancy things such as pi calculus or some such nonsense; I mean, for example, being able to write a quantified expression without having to put it in the form of an implicit or explicit loop.(*)
Third, the usual benefit cited is that writing tests forces you to think about the interface. Thinking about the interface and sketching different alternatives is indeed a very good thing, though why it has to be disguised as testing eludes me.
What I normally do is start by writing documentation. I write the synopsis, some or most of the description, and then start documenting the not-yet-existing public interface. For each function or method I think would be useful (and, in general, for every function and method), I write a line or two of example code showing how to call it, describe what kinds of input it expects and accepts, what kind of output it gives (or how it changes the state of the object, say), and what error conditions to expect.
Sounds ridiculous? Try it. What you have to do while writing is actively think about what you actually want from the module or piece of code, how you should build the interface, and, most importantly, how to distribute functionality among the modules. Often I notice that I'm missing an abstraction or a class, and then I repeat the same steps for that. It's not a silver bullet, of course. It doesn't always work.
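For what it's worth, here is what that documentation-first scribbling might look like in POD, for a module that does not exist yet. All the names (My::Stack, push_item, pop_item) are invented for the sake of the sketch:

```perl
=head1 NAME

My::Stack - a simple stack of items

=head1 SYNOPSIS

    my $s = My::Stack->new;

    $s->push_item($x);       # put an item on top
    my $y = $s->pop_item;    # remove and return the top item;
                             # dies if the stack is empty
    my $n = $s->count;       # how many items are on the stack

=head1 DESCRIPTION

A last-in, first-out container. C<pop_item> on an empty stack is
an error, not a silent C<undef> -- that decision surfaced only
while writing this paragraph.

=cut
```

Half of the design questions (what happens on an empty pop? should count exist at all?) show up while writing the synopsis, before a single line of implementation exists.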
The usual argument against this is that you then have to keep the documentation in sync with the code, or that the two never really match, or that it's a whole lot of extra work. But the function of documentation is not to replace the source code, or to describe how the program works, as you probably already know. It's to supply the pieces that get lost when you encode your ideas in a programming language: the whys. The intent and purpose.
Coming back to testing, the fundamental difficulties in instructing the machine to check the validity and correctness of source code are that 1) you cannot encode everything in executable source code, and 2) the dumb machine will only check the things you have explicitly told it to check. Since it's clear that you don't know exactly what you're doing (otherwise you wouldn't have to write tests), it's wishful thinking to expect the machine to report errors in places you hadn't thought possible.
One important use of testing I do agree with: checking for implicit assumptions about the run-time environment, for example when installing the code on an entirely different kind of platform from the one it was developed on. But this is only because those assumptions are not documented anywhere -- and even if they were, there would be too many to take into account.
(*): "For all x, P(x)," where P(x) is some Boolean statement about x. If I'm writing tests, I don't want to loop over the entire input space checking P(x) for each x. I want to just write the condition down and have the machine deduce whether it holds. But if we had that, we wouldn't need human programmers in the first place.
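A sketch of what the footnote complains about: in plain Perl the only way to "check P(x) for all x" is a loop over some finite slice of the input space -- here, random sampling in the style of property-based testing. The property P is deliberately trivial (abs($x) is never negative) and is made up for illustration; a thousand passing samples are evidence, not a proof.

```perl
use strict;
use warnings;

# The property P(x): the absolute value of x is never negative.
sub P { my ($x) = @_; return abs($x) >= 0; }

# We cannot write "for all x, P(x)" and have it deduced;
# we can only sample the input space and loop.
for my $trial (1 .. 1000) {
    my $x = int(rand(2_000_001)) - 1_000_000;   # some x in [-1e6, 1e6]
    die "P(x) failed for x = $x\n" unless P($x);
}
print "P(x) held for 1000 sampled inputs\n";
```

The quantifier has been flattened into a loop plus a random-number generator, which is exactly the transformation the footnote objects to having to perform by hand.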