It is the number of tests executed that counts, not the number of statements. So, for planning purposes, the test plan for your first loop should be 3; for the second, 8.
That being said, I run tests with no_plan, simply because keeping the test count up to date is annoying and I don't think it adds much to the tests anyway. I'd rather put that time into test coverage instead.
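If the annoyance is updating the count by hand, newer versions of Test::More also offer done_testing(), which checks the count at the end of the run instead of requiring it up front. A minimal sketch (the two assertions are just placeholders):

```perl
use strict;
use warnings;
use Test::More;    # no plan declared up front

is( 1 + 1, 2, 'addition works' );
like( 'flurble', qr/url/, 'pattern matches' );

done_testing();    # verifies the number of tests actually run
```

This gives you most of the safety of a fixed plan (a script that dies halfway still fails) without the bookkeeping of tests => N.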
Oh dear, I took "a test" to mean one test script. Hmm, what's a hundred times fifty? I feel that tests should really be relatively small, because code should be broken up into pieces; so if your project consists of ten classes, maybe each should be separated out and tested in its own module/distro/package. I have a couple of packages with maybe forty or fifty different test files each, and today I feel that this is a failure in development.
I voted 10K - 100K. Back in 1980 or so I wrote virtually all the microcode for the instruction set of a bit-sliced machine. By far the largest amount of microcode was the test set: millions of lines of source. Almost all of that was, of course, written automatically from templates.
For me, writing tests is all about the war between sloth and discipline. Anything that makes it easier to write tests, like not having to update that dumb number all the time, means more tests get writ.
At the beginning of a project I start writing tests before coding; when the deadline draws near and my stress level starts peaking, I begin to omit them. A bad habit, but inevitable when a manager has made some interesting promises.
I am afraid I haven't gotten into testing (yet), though I have written some applications here and there, and I am not underestimating the importance of testing, since it seems to hold a sizable part of the development cycle; I have yet to understand how to go about it. As a biologist, randomness is a nice pond for discoveries, but as a programmer, uncharacterized code behavior can be vexing and quite disruptive to one's mental resources... Suggestions?
Excellence is an Endeavor of Persistence.
Chance Favors a Prepared Mind.
Here's one way to get started quickly with testing, in about 10 lines of code.
I recently wrote my first tests to drive a Java CLI app, and the investment was repaid quickly when they were easily ported to a second project w/ a similar CLI.
I spent less time manually testing, and was more confident about changing the code because I could quickly test it.
The program has a simple command-line menu of about 10 items. I tested the program manually using that menu, which became tedious during development. So I looked at the excellent book:
Perl Testing, A Developer's Notebook
The section on Testing Interactive Programs in chapter 9 worked for me.
(I had previously read the book and written a few tests.
If you are new to testing, start at the beginning instead of the end, if necessary).
The man pages for the test modules are also helpful.
I needed Test::Expect for this solution.
That required me to make a unique and consistent prompt string in the programs under test, which was a good thing.
My first test looked like this:
use Test::More tests => 3;
use Test::Expect;

expect_run(
    command => "java -cp '.' src/TestRoster",
    prompt  => " >: ",
    quit    => "0\n",
);
expect_send( '97', 'Ask for help' );
expect_like( qr/-Help about the menu items.+Terminate.+/sm, '... shows help text' );
Test 1: print the program's Help menu:
Initialize with expect_run():
the command to run the program under test;
the expected prompt;
and how to terminate it.
Send the request from the test program (97); see the response at the console (the program's help text); write a regex in the test program to match the response.
You now have a test.
Run that Perl test program individually; or use 'prove' to run one or more tests, and to show a summary of the output.
Write a test before you write the code for the next action.
Run it, see it fail; write the code; run the test; repeat until no failures.
You are now doing test-driven development.
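That loop, in miniature (add() is an invented example function, not part of the roster program above):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Step 1: write the test first. Run at this point, before add()
# exists, it fails with "Undefined subroutine &main::add".
is( add( 2, 3 ), 5, 'add() sums two numbers' );

# Step 2: write just enough code to make the test pass, then
# re-run and repeat until there are no failures.
sub add {
    my ( $x, $y ) = @_;
    return $x + $y;
}
```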
Very interesting! I have not grown rationally in my coding, and I fail at writing tests. Normally I code on my own, at work as at home, and seven times out of ten it ends with a quick and lazy hack: -w, use strict, use this, use that, abracadabra! The other three times, my lack of test-writing skill makes the difference between me and a serious professional coder.
It reminds me of the time when I was stuck writing a recursive dir walker. atcroft, it needs one more option:
Wow! I was surprised when I looked at the current poll and found it to be my earlier suggestion. Just wow.
My original reason for the idea was that I had started a small project of my own in late January: I wanted to write something to standardize the abbreviations used in a lengthy list of customer addresses where I work. I started with a listing of US Postal Service abbreviations and began writing expressions to check for common variations and replace them. In doing so, I realized I could use the Test::* framework to generate tests to make sure none of the patterns I had created were too lax or too aggressive; plus it would give me a reason to play with testing, something that I too have not done as much of as I should. When my generated count of tests exceeded 1.0e4, I began to wonder how many tests others had created/generated/written, and thus the question I posted. (As for the project: I ran out of to-its with a full 10+ minute run of the tests (exceeding 5.8e5 of them, including, at that point, 9 failures), probably several dozen expressions left to write, and the eventual intent to try my hand at morphing it into some kind of usable module.)
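For anyone curious what generating tests from a table of abbreviations can look like, here is a hedged sketch: the normalize() routine and its three patterns are invented stand-ins for illustration, not the actual project code.

```perl
use strict;
use warnings;
use Test::More;

# Invented stand-in for the real normalizer under test
sub normalize {
    my ($addr) = @_;
    $addr =~ s/\bStreet\b/ST/gi;
    $addr =~ s/\bAvenue\b/AVE/gi;
    $addr =~ s/\bBoulevard\b/BLVD/gi;
    return $addr;
}

# Each table row fans out into one generated test
my %usps = (
    Street    => 'ST',
    Avenue    => 'AVE',
    Boulevard => 'BLVD',
);

for my $long ( sort keys %usps ) {
    is( normalize("123 Main $long"), "123 Main $usps{$long}",
        "$long becomes $usps{$long}" );
}

done_testing();
```

With a real table of USPS abbreviations and a list of observed variant spellings, the same cross-product loop is how a modest data file blows up into tens of thousands of generated tests.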
613,026 tests (generated by the script), taking 11m:50s to run, and all passed. I'm surprised (on all counts)!
By the way, can anyone recommend modules that actually do what I originally intended when I started this: give them an address part (such as an address line, or a US state) that may use a common abbreviation, and get back one in which the common but non-standard abbreviations have been replaced? Just curious.
$ prove t/library/past*.t
t/library/pastpattern.t ............. ok
t/library/pasttransformer.t ......... ok
t/library/pasttransformerdynamic.t .. ok
t/library/pastwalker.t .............. ok
t/library/pastwalkerdynamic.t ....... ok
All tests successful.
Files=5, Tests=2161, 18 wallclock secs ( 0.38 usr 0.05 sys + 16.40 cusr 0.66 csys = 17.49 CPU)
2,161 as of right now. That number is probably about to go up significantly as I've recently re-factored the code in question a lot, and I haven't written new tests for some of the new functionality.
It's Not Quite a Perl project, although it is a Not Quite Perl 6 project. It's an optimization and analysis framework, originally for Parrot Abstract Syntax Trees (which Parrot's Compiler Tools framework uses), but now rather easy to extend to any tree-like data structure.