PerlMonks  

Re^2: Self Testing Modules

by BrowserUk (Pope)
on Dec 18, 2005 at 21:28 UTC ( #517622=note )


in reply to Re: Self Testing Modules
in thread Self Testing Modules

it doesn't scale well as your module (and its tests) grow

Would you explain that?


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^3: Self Testing Modules
by eyepopslikeamosquito (Bishop) on Dec 18, 2005 at 21:42 UTC

    When writing a lot of tests, having one large monolithic block of tests can become difficult to manage. For example, with Test::More, it is more manageable to write many small .t files, rather than one large one.
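For instance, a small, focused .t file might look like the following sketch. My::Math and add() are hypothetical stand-ins for the module under test; add() is inlined here so the example runs on its own.

```perl
#!/usr/bin/perl
# t/add.t - one small, focused test file (hypothetical example)
use strict;
use warnings;
use Test::More tests => 3;

# add() stands in for a function exported by the (imaginary) module
# under test; it is inlined so this sketch is self-contained.
sub add { return $_[0] + $_[1] }

is( add( 2,  2 ), 4, 'add() sums small positive integers' );
is( add( -1, 1 ), 0, 'add() handles a negative operand' );
is( add( 0,  0 ), 0, 'add() handles zero' );
```

Each such file declares its own plan and can be run directly with perl, or collected with the rest of the suite by Test::Harness.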

      many small .t files

      I cannot tell you how much I disagree with that.

      You're suggesting that it is okay for the module to be a single file, but the tests for that module have to be littered around the file system in lots of tiny files?

      So now to test my module, I need a whole support harness to find, run, accumulate and report on those tests, adding layers to what should be the simplest code possible commensurate with getting the task done, and pushing a damn great wedge between the 'edit' and the 'run' in the development cycle.

      It takes the greatest benefit of 'interpreted' languages, the removal of the 'compile' step, and throws it away by substituting the 'test' step. And for what benefit?

      You now have to sit and wait for it to re-run all the tests for code that you haven't changed, in order to gain a pretty number telling you "XX.X% passed", when what you really want to know is:

      Failure at line nn.

      And preferably, be dropped back into your editor with that line highlighted. Running any tests after that failure is pointless, because, like the warnings Perl issues when you omit a semicolon or brace, only the first one is real and the others are as often as not cascade failures from it.

      It makes no sense to me at all.
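      A minimal sketch of the inline, fail-fast style argued for above, using the "modulino" idiom (the module name, double(), and the test values are all hypothetical): the tests live in the module file itself, run only when the file is executed directly, and die at the first failure with a line number.

```perl
package My::Module;   # hypothetical module with inline self-tests
use strict;
use warnings;

sub double { return $_[0] * 2 }

# Run the self-tests only when this file is executed directly
# (perl My/Module.pm), not when it is loaded with 'use'.
__PACKAGE__->self_test unless caller;

sub self_test {
    for my $case ( [ 1, 2 ], [ 5, 10 ], [ -3, -6 ] ) {
        my ( $in, $want ) = @$case;
        my $got = double($in);
        # Stop at the first failure and report where it happened.
        die "Failure at line " . __LINE__ . ": double($in) gave $got, wanted $want\n"
            unless $got == $want;
    }
    print "all self-tests passed\n";
}

1;
```

      Edit, run, see the first failure: no harness, no extra files.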



        Personally I think it makes a lot of sense to split up your tests into separate files, preferably based on the type of testing that will occur. When your tests start being more code than what is being tested, having it all in one file doesn't make a lot of sense. At the very least, if your code is 100 lines and your tests 1000 (not an unreasonable ratio, IMO), then embedding your tests in the same file means 900 lines of code that will be read, parsed and compiled for no reason every time you use the module.

        And frankly, if you are using the test framework to get a pretty number telling you "XX.X% passed", then you aren't using the test framework properly. The point is to find out what failed, not what passed. With the tests broken down into files grouped in some sensible way, a failure from a given file can itself be enough information to start investigating. Having the actual test name is even better.
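        As a small illustration of that last point (parse_date() is a made-up function standing in for real code under test): an unnamed test only tells you "test 2 failed", while a named one tells you what broke.

```perl
use strict;
use warnings;
use Test::More tests => 2;

# parse_date() is a hypothetical stand-in for code under test:
# it splits 'YYYY-MM-DD' into a hashref of named parts.
sub parse_date {
    my %d;
    @d{qw(y m d)} = split /-/, $_[0];
    return \%d;
}

# A descriptive name appears in the failure report, so a broken
# test points straight at the behaviour that regressed.
is( parse_date('2005-12-18')->{y}, '2005', 'parse_date extracts the year' );
is( parse_date('2005-12-18')->{d}, '18',   'parse_date extracts the day' );
```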

        Anyway, I don't expect you'll change your views based on my comments, but hopefully other readers will learn something from this.

        ---
        $world=~s/war/peace/g

        From Perl Testing: A Developer's Notebook by chromatic and Ian Langworth, page 40:

        There's no technical reason to keep all of the tests for a particular program or module in a single file, so create as many test files as you need, organizing them by features, bugs, modules, or any other criteria.

        In my experience, this is sound advice. Dividing a large number of tests into smaller (cohesive) units is essentially just divide and conquer, the only fundamental technique we have to fight complexity. It also helps ensure that each test runs in isolation, independent of the others. To give a specific example, notice that WWW::Mechanize, written by Phalanx leader petdance, contains 3 lib .pm files and 49 .t files.

        Apart from all that, a single large .t file makes developing new tests inconvenient, because after adding a new test you must re-run all the others (intolerable if the single .t file contains stress tests taking hours to run). Or are you recommending that we not use the standard Test::Harness/Test::More framework?
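        With the standard framework, re-running everything is not actually required: prove (which ships with Test::Harness) can run a single file or directory, so a slow stress-test file can be skipped during normal development. A sketch, with hypothetical file names:

```shell
# Run the whole suite, with lib/ added to @INC:
prove -l t/

# Re-run only the file you are working on:
prove -l t/10-parse.t

# Keep the long-running stress tests in their own file
# and invoke them only before release:
prove -l t/90-stress.t
```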
