more useful options
just using the module in the normal way would (IMO) be better than use_ok(). The standard error texts report a lot of very useful information that is simply discarded by the institutional wrappers.
Possibly this has changed at some point, but to the best of my recollection use_ok has always reported the same error messages that a plain use would supply as part of its test diagnostics.
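To illustrate the point, here's a minimal sketch of the two styles side by side (core modules are used so it runs anywhere; they're stand-ins for whatever you're actually loading):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# use_ok() loads the module at compile time inside an eval and, on
# failure, replays perl's own "Can't locate ..." error as diagnostics:
BEGIN { use_ok('List::Util') }

# The plain-use alternative: if this line failed, perl itself would
# abort compilation and print the same error, @INC search path and all.
use Scalar::Util qw(blessed);
ok( defined &blessed, 'plain use imported blessed()' );
```

Either way the underlying error text is the same; the difference is only whether it arrives as a compile-time abort or as test diagnostics.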
I don't have a problem with shipping the full test suite with the module. In the event of failures that cannot be explained through other means, having the user run the test suite in their environment and feed the results back to the author makes good sense. But running the test suite on every installation doesn't.
Here my experiences differ from yours. I have lost count of the number of times that modules test suites have saved my bacon by failing due to some bizarre platform/version/infrastructure issue.
I've found running the tests post-install much less useful because I have to spend time chasing down the dependencies, running their tests, etc. Pre-installation tests are one of the things that make modular distributions work for me rather than against me. I can't imagine not doing it.
And to my mind, the summary statistics and other output presented by Test::Harness serve neither target audience. They are 'more than I needed to know' as a module user, and 'not what I need to know' as a developer.
While I see what you mean, I don't find the situation quite as dire as you seem to.
As a module user I personally find the test harness output just about right. Enough info to let me know stuff is happening while it runs. Summary info that gives me a pointer to where things are broken if stuff breaks.
As a module author there are a few things that I wish were slightly easier. Writing a patch for prove to run test scripts in most-recently-modified order has been on my list for ages. Along with more flexible ways of running/reporting Test::Class based test suites.
That said, none of these itches have been irritating enough for me to scratch. The current set up hits some kind of 80/20 sweet spot for me most of the time. When it doesn't I can easily roll a custom test runner that gives me what I need.
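For what it's worth, the mtime-ordering itch can be scratched in a few lines on top of TAP::Harness rather than patching prove itself (a sketch, not a finished runner; the t/*.t glob is whatever your layout uses):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use TAP::Harness;

# -M gives a file's age in days, so ascending -M order means
# most-recently-modified first: freshly edited tests run before the rest.
my @tests = sort { -M $a <=> -M $b } glob 't/*.t';

my $harness = TAP::Harness->new( { verbosity => 0 } );
$harness->runtests(@tests);
```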
I have misgivings with need to put my unit tests in a separate file in order to use the Test::* modules.
Then don't :-) Stick 'em in modules, subroutines, closures, etc. Whatever allows you to write your tests in a useful way.
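For instance (a toy sketch; add() and the RUN_TESTS switch are invented for illustration), Test::More is perfectly happy to run from inside an ordinary subroutine in the same file as the code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub add { return $_[0] + $_[1] }

# The tests live next to the code in a plain sub, loaded and run only
# on demand -- no separate *.t file required.
sub self_test {
    require Test::More;
    Test::More::is( add( 2, 2 ),  4, 'add() sums two small ints' );
    Test::More::is( add( -1, 1 ), 0, 'add() handles a negative' );
    Test::More::done_testing();
}

self_test() if $ENV{RUN_TESTS};
```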
It creates couplings where none existed.
A bad test suite that's evolved over time, hasn't been maintained, hasn't had commonalities factored out, with functionality spread over dozens of files can certainly be a royal pain.
Alternatively, when done well, separating out different aspects of a module into different scripts/classes can help make the behaviour of classes easier to see and maintain. If all of FooModule's logging behaviour is tested in logging.t then I have a much easier time of it when I come to test/tweak/maintain/debug the logging behaviour.
Codependent development across files, below the level of a specified and published interface, is bad. It is clumsy to code and use.
I certainly don't find it so. Quite the opposite.
It means that errors found are reported in terms of the file in which they are detected, rather than the file--and line--at which they occurred. That makes tracking the failure back to the point of origin (at least) a two-stage affair. In-line assertions, that can be dis/en-abled via a switch on the command line or in the environment, just serve the developer so much better at the unit test level.
Inline assertions mean that errors found are reported in terms of the file and line where they occurred, rather than in terms of the context that caused the error to occur. That makes tracking the failure back to the point of origin (at least) a two-stage affair. Automated tests that allow continual, repeatable regression tests just serve the developer so much better at the unit test level.
Actually I don't believe the "so much better" bit :-)
Inline assertions are a useful tool, and I've found design by contract to be an effective way of producing software. However, in my experience, they perform complementary tasks to tests.
Personally I find tests a more generally effective tool since I've found it easier to incrementally drive development via tests than I have via assertions/contracts.
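To make the complementary roles concrete, here's a sketch using Carp::Assert (reciprocal() is an invented example): the assertion guards the call site and fires with the faulty file and line, while the test pins the behaviour down so it gets re-checked on every run.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Carp::Assert;    # assertions compile away when NDEBUG/PERL_NDEBUG is set
use Test::More tests => 1;

sub reciprocal {
    my ($n) = @_;
    # Inline contract: blows up at the point of misuse.
    assert( $n != 0, 'reciprocal() needs a non-zero argument' ) if DEBUG;
    return 1 / $n;
}

# Regression test: states the behaviour we rely on, rerun automatically.
is( reciprocal(4), 0.25, 'reciprocal of 4 is 0.25' );
```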
And finally, I have misgivings about having to reduce all my tests to boolean, is_ok()/not_ok() calls.
I'm sorry - I'm not understanding your point here. Whether you have an inline assertion/contract or an external test you are still describing a bit of behaviour as a Boolean ok/not_ok, aren't you?
The only purpose this seems to serve is the accumulation of statistics which I see little benefit in anyway.
The benefit is not the accumulation of statistics. It's about checking that my new bit of code does what I want it to do, or that bug #126 is fixed, or that the new version of Log::Log4perl doesn't break the existing installation, etc. For me testing is all about answering questions, not meaningless stats.
Not the user, they don't care why it failed, only that it did.
Depends on the user. I'm usually very interested in why it fails because I need to get the damn thing to work - even when I didn't develop it :-)
It depends on what information you're after. Most of the time I am far more interested in what caused the error than where it occurred. If I know and can reproduce the cause then I can easily find out where an error occurred. I find doing the opposite considerably harder.
I'm grateful to eyepopslikeamosquito (who started this sub-thread) for mentioning the Damian's module Smart::Comments.
If you like this I suspect you'll really like design by contract. If you've not played with them already take a look at Class::Contract and Class::Agreement, and give Meyer's "Object Oriented Software Construction" a read.
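For readers who haven't met Smart::Comments yet, the idea is that specially marked comments become diagnostics while the module is loaded, and revert to plain comments the moment you drop the use line (a tiny sketch):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Smart::Comments;    # comment this line out and the ### lines are inert

my $total = 0;
$total += $_ for 1 .. 5;

### Checking the running total...
### $total
```

With the module loaded, the `### $total` line reports the variable's name and current value on STDERR; without it, the script runs identically with zero overhead.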