PerlMonks  

Re^3: Auto-compile checking??? WTF?

by stevieb (Canon)
on Apr 16, 2016 at 11:27 UTC ( id://1160627 )


in reply to Re^2: Auto-compile checking??? WTF?
in thread Auto-compile checking??? WTF?

Unit testing is one of the most useful tools of a good developer (in any language). Before, or as, you code, you simply write code that exercises your software just like a user would (while throwing in all the edge cases you can think of) to make sure your code is doing the right thing.

You write these tests as you go, so as your codebase grows, you can run your tests to ensure you haven't broken something you wrote earlier.

See Test::More. That'll get you well on your way. Then, go to MetaCPAN and browse to the t/ directory of random modules and read the unit tests. Some modules have clear tests with good descriptions; others have no tests at all.
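For the unfamiliar, a test file is just a Perl script that makes assertions and reports the results. A minimal sketch using Test::More (the checks here are trivial placeholders, not from any real module):

```perl
# t/basic.t - minimal Test::More sketch; the assertions are
# placeholders purely for illustration.
use strict;
use warnings;
use Test::More;

# ok() checks a boolean; is() compares a got value to an expected one
ok( 1 + 1 == 2,         'arithmetic still works' );
is( lc('PERL'), 'perl', 'lc() lowercases a string' );

done_testing();
```

Run it directly with `perl t/basic.t`, or run a whole t/ directory with `prove`.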

The most important thing about writing tests is to write them now. Not after you get those two new features added, not next week: they should be part of your coding regimen. Working this way helps ensure that future changes don't quietly break code you've already written.

Replies are listed 'Best First'.
Re^4: Auto-compile checking??? WTF?
by nysus (Parson) on Apr 16, 2016 at 11:40 UTC

    I suppose I kind of do testing already: I run my finished code, look to see whether the results are as expected, and maybe throw in a warning if they aren't. Admittedly, this testing method is very crude and not very reliable. Thanks for the advice.

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
    $nysus = $PM . ' ' . $MCF;

      You're halfway there, then. Instead of using a single script that you keep changing, once you get your 'test' to run, copy it to a file with a .t extension, and that's it.

      Of course, using Test::More is a good choice for the assertions. Here's the difference between a normal check script and a real test:

      use warnings;
      use strict;

      use My::Module;

      my ($x, $y) = My::Module->pairs();
      print "ok" if $x == $y;

      test:

      use warnings;
      use strict;

      use Test::More;

      use_ok('My::Module');

      my ($x, $y) = My::Module->pairs();
      is($x, $y, "pairs() returns a pair that match");

      done_testing();

      The latter will tell you exactly what was expected and what actually happened if there's a failure; the former won't. It's not much extra effort. So if you're testing your code with one-off scripts, save them as test files instead, and that part of your code will be checked every time you run your suite.

      Example test output from above on fail:

      ok 1 - use My::Module;
      not ok 2 - pairs() returns a pair that match
      #   Failed test 'pairs() returns a pair that match'
      #   at pairs.t line 11.
      #          got: '1'
      #     expected: '2'
      1..2
      # Looks like you failed 1 test of 2.

      ...because someone made a typo in the pairs() function:

      sub pairs {
          return (1, 2);
      }

      In addition to stevieb++'s posting:

      Test scripts can do more: you feed your code not only with expected input, testing for correct output, but also with intentionally bad input, testing for correct error handling.

      "Intentionally bad input" can be something simple as a string passed to a function that expects a number, or some random garbage instead of a well-formed input file. Or, it can be a test for a known error in an older version.
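      As a sketch of that idea, here is an example using only Test::More from the core distribution; validate_age() is a made-up function defined inline for illustration, not part of any real module:

```perl
# Sketch: testing error handling with intentionally bad input.
# validate_age() is a hypothetical function, defined here only
# so the test file is self-contained.
use strict;
use warnings;
use Test::More;

sub validate_age {
    my ($age) = @_;
    die "age must be a non-negative integer\n"
        unless defined $age && $age =~ /^\d+$/;
    return $age;
}

# good input: correct output
is( validate_age(42), 42, 'valid age passes through' );

# bad input: the function should die with a useful message
eval { validate_age('forty-two') };
like( $@, qr/non-negative integer/, 'string input is rejected' );

eval { validate_age(undef) };
like( $@, qr/non-negative integer/, 'undef is rejected' );

done_testing();
```

      The eval/like pattern keeps the test running after the expected die; modules like Test::Exception wrap the same idea more conveniently.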

      For example, have a look at the tests that come bundled with DBD::ODBC:

      • 01base.t to 90_trace_flags.t, odbc_describe_parameter.t, and sql_type_cast.t are "just" tests for the API functions: load the module, and test that the functions basically work.
      • pod-coverage.t and pod.t are "documentation tests" that check that everything is documented and that the documentation itself is syntactically correct. Not much effort for the author: just load Test::Pod or Pod::Coverage and let them do their job.
      • The most important tests are the rt_XXXX.t tests. They all test for known errors from the bug tracking system, making sure that the overall "make test" will fail if one of the known errors is accidentally reintroduced or "reinvented".
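      A regression test in that style might look like the following sketch; the ticket number and the trim() helper are invented for illustration, not taken from DBD::ODBC:

```perl
# t/rt_12345.t - sketch of a regression test. The ticket number
# and the trim() helper are hypothetical examples.
# RT #12345: trim() used to strip interior whitespace as well.
use strict;
use warnings;
use Test::More;

sub trim {
    my ($s) = @_;
    $s =~ s/^\s+|\s+$//g;   # strip leading/trailing whitespace only
    return $s;
}

# replay the exact input from the original bug report
is( trim('  hello world  '), 'hello world',
    'RT #12345: interior whitespace is preserved' );

done_testing();
```

      Naming the file after the ticket means a future "make test" failure points straight at the bug that came back.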

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
