RFC: Tutorial "Introspecting your Moose code using Test Point Callbacks"
by ait (Friar) on Jan 01, 2011 at 18:16 UTC
Note: This RFC has been superseded by RFC Using Test-Point Callbacks for Automated Diagnostics in Testing
Happy new year fellow Monks!
I'd like to start the new year by clearing out my pending list with PM, starting with a tutorial I've been meaning to write about a technique I've been using to test Moose code (though it can actually be used for any library).
Anyway, please have a look and see if this makes any sense, or have I been re-inventing the wheel here? Maybe there are better, or more standard "best practice", ways of doing this. Or perhaps this could make it into a test module of some sort? It's been working well for me, and hopefully it may be useful for others as well, and worthy of a tutorial. Your comments are very welcome.
Update: Moved the RFC here instead of my scratchpad
Update: Changed the title, and updated the prose using comments and ideas spawned from the discussion below up to 20110102-1600UTC
Using Test-Point Callbacks for Automated Diagnostics in Testing
What is a "Test-Point" and why should I care?
When a test fails, it usually emits at most a diagnostic message about the overall result of the test or some observable side-effect available to the test code. It is then up to the programmer to raise the debugging level and re-run the tests in the hope that the diagnostic messages will give some clue as to what went wrong inside a specific method or subroutine. If the diagnostic messages don't help, the programmer then dives into the code, placing warn/debug statements, or fires up the debugger, placing breakpoints, watches, etc., to determine under what conditions the particular test fails.
Many times, breakpoints and debug-level messages are left in the code because they are innocuous at runtime, and they serve precisely as reminders that a given spot is a good place to stop and inspect something when testing and debugging. It is these spots in the code that we will call "Test-Points" in the context of this tutorial. They are places in the code that help you diagnose a failure, somewhat akin to assertions, but targeted not at unexpected results or side-effects, but rather at expected flows, values, etc. given certain conditions created by a specific test. Anywhere you place a permanent debug statement, or have found the need to place a debugger breakpoint, qualifies as a potential TP.
It is important to note that, in contrast to assertions, Test-Points (TPs) are mainly used to give the test code the ability to inspect the inside of methods at specific places when running a specific test, and just like debug statements and pre-defined breakpoints, they can be safely left in the code. TPs are analogous to the test points found on printed circuit boards, which are sometimes used by special self-test/diagnostic circuitry such as I2C, commonly found in newer Philips appliances.
By using the TP techniques described here, you can evaluate partial results within a sub, monitor a local variable, object attribute, or side-effect at a given spot, or even make tests conditional on the intermediate results of a running test.
To illustrate the method, we will be using Moose code as an example, although this technique can be applied to any and all Perl code.
As mentioned above, the technique is based on the use of callbacks at certain parts of the code which we have defined as Test-Points. The particular code example below uses a single callback defined as a class attribute. In reality, you would probably protect this attribute with specific read and write methods, configuration options, environment variables and so forth, but for the sake of simplicity, we will just let the testing code use the pre-defined setter to map the single callback to a code reference in the test code itself.
Any potential hook in software can lead to unscrupulous use by crackers and malware. It could also be used inadvertently by a hacker who finds some other use for your TPs, not realizing that they are meant only for testing and that no code should depend on the callbacks whatsoever. In the example below, you will notice that these callbacks are designed just to send scalar values in anonymous hashes, giving the testing software a chance to inspect values inside a sub, much like watches. They could also be used bi-directionally to force certain behavior in a specific test, for example to simulate some data inside the subroutine, but this is strongly discouraged.
Setting up the Callback
This example Moose class implements the TP technique by setting up a single callback attribute (tp_callback), which is validated to be a code reference. In the BUILD method, we initialize the callback to simply return, making it innocuous unless someone points this code reference elsewhere. In the example, this initialization happens at creation time (BUILD is invoked from the class's new()), though the callback can also be repointed later on.
In real life, you would probably want to limit this to creation time and wrap the actual calls to the callback in a debug-level or diagnostic-mode condition of some sort, the same way debug messages are conditioned by level, as well as making the attribute private by means of specific read/write methods and other mechanisms. One way to do this in Moose, for example, would be to verify the debugging level (by checking a configuration parameter or environment variable) in a code block at the beginning of your class that modifies the META before creation; that way, the whole callback mechanism is simply non-existent unless the correct configuration is set. An example of this is given at the end.
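The original listing is not reproduced here, but the idea can be sketched in a few lines. This is a minimal, hypothetical stand-in using plain core-Perl OO instead of Moose (so it runs with no CPAN dependencies); the class name MyClass is invented, while the tp_callback attribute name and the "initialize to a no-op" behavior follow the prose above:

```perl
package MyClass;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    my $self = bless {}, $class;
    # Initialize the callback to a no-op so every Test-Point is
    # innocuous unless the test code points it elsewhere.
    $self->{tp_callback} = $args{tp_callback} // sub { return };
    return $self;
}

# Combined accessor/mutator that validates the value is a code
# reference, mirroring the CodeRef constraint described for the
# Moose attribute.
sub tp_callback {
    my ($self, $cb) = @_;
    if (defined $cb) {
        die "tp_callback must be a code reference" unless ref $cb eq 'CODE';
        $self->{tp_callback} = $cb;
    }
    return $self->{tp_callback};
}

1;
```

In Moose itself this would instead be an attribute declaration along the lines of `has 'tp_callback' => (is => 'rw', isa => 'CodeRef')`, with the default set in BUILD.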
Creating the Test Point
To create a TP, all you have to do is call the global callback with two parameters (a recommended practice, although the parameter list is up to you). The first parameter is the TP name, which allows the testing code to identify which TP is being called. In the testing code, a simple dispatch table can be used to route each TP, by name, to its servicing sub. The second parameter is optional, and we recommend it be a single anonymous hash containing key-value pairs of the things you want to inspect.
Of course, references could also be sent, for example to allow the test code to perform more elaborate actions, but this is not recommended unless you really know what you're doing and there is no other way to accomplish it using the regular inputs of the method in question. The overall idea of the Test-Point is diagnostics (seeing the inner workings of a sub in order to test it better, or at a finer grain), though in some particular situations it might be useful to force certain things and analyze the behavior. For example, if something is not working as expected, forcing a value may allow the testing code to diagnose the problem and pinpoint exactly what failed and why.
As can be seen, the TPs are basically innocuous unless the main callback is pointed at some other code that actually evaluates it. This callback code will live in your test code, as you can see in the test code example below. In real life, you would probably guard the callback altogether by wrapping it in a conditional of some sort, in combination with some create-time magic, to prevent the use of this mechanism by other code (see the security notes above, and the create-time META manipulation example at the end).
The test code below services the Test-Points. The general structure of the test file is just like any other, except that the callbacks are handled in an event-driven fashion, meaning that they may be called at any time during the execution of a regular test. You can think of this as a software/hardware interrupt, with the TP subs akin to the interrupt's service routines.
This particular test example has four main sections. The first is just the basic declarations and common use_ok tests of any test file. The second is the declaration of the dispatch table that maps the Test-Point names to their servicing subroutines. The third is the standard tests you would usually perform on this class, and the fourth is the Test-Point service subroutines.
TP names should be unique, and could include, for example, the sub name as part of the name, or could be numbered with some special TP scheme such as those used on circuit boards. Also, bear in mind that a TP may be reached by different tests, meaning that more than one test may invoke the callback. This can be addressed by conditioning the callback in the test code, or in the class code itself (i.e., invoke the TP only if $x == $y).
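The dispatch-table structure described above can be sketched as a single self-contained test script. Everything here is hypothetical (the inline Widget class, its process method, and the TP name process_doubled stand in for the article's class, which is not reproduced here); only Test::More, which ships with Perl, is used:

```perl
use strict;
use warnings;
use Test::More tests => 3;

# Inline stand-in for the class under test.
package Widget;
sub new { bless { tp_callback => sub { return } }, shift }
sub tp_callback {
    my ($self, $cb) = @_;
    $self->{tp_callback} = $cb if ref $cb eq 'CODE';
    return $self->{tp_callback};
}
sub process {
    my ($self, $n) = @_;
    my $doubled = $n * 2;
    # Test-Point: exposes an intermediate value mid-call.
    $self->{tp_callback}->('process_doubled', { doubled => $doubled });
    return $doubled + 1;
}

package main;

# Section 2: dispatch table mapping TP names to service subs.
my %tp_dispatch = (
    process_doubled => \&tp_process_doubled,
);

my $w = Widget->new;
# Route every callback through the dispatch table; unknown TPs
# are silently ignored.
$w->tp_callback(sub {
    my ($tp_name, $args) = @_;
    my $handler = $tp_dispatch{$tp_name} or return;
    $handler->($args);
});

# Section 3: a standard test. The TP service sub below fires in the
# middle of this call, like an interrupt service routine.
is( $w->process(5), 11, 'process returns doubled plus one' );

# Section 4: Test-Point service subroutine.
sub tp_process_doubled {
    my ($args) = @_;
    ok( defined $args->{doubled}, 'TP delivered the intermediate value' );
    is( $args->{doubled}, 10, 'intermediate value is correct' );
}
```

Note the test plan counts the assertions made inside the TP service sub as well: they run during, not after, the call to process().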
The final implementation details depend very much on your specific needs, so you must adapt this technique accordingly. The technique described here is intended to be simple and to introduce the subject of Test-Points using callbacks. Other options include using multiple callbacks or, to be a bit more functionally purist, passing a code reference as a parameter of a subroutine call, effectively mapping the callback to a specific test call. This is commonly used in functional programming and is well described in Mark Jason Dominus' great book "Higher-Order Perl".
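The more functional variant just mentioned, passing the callback per call rather than storing it on the object, can be sketched like this (the function summarize and the TP name sum_ready are invented for illustration):

```perl
use strict;
use warnings;

# The Test-Point callback is an explicit parameter, mapping it to
# this one specific call instead of to the whole object.
sub summarize {
    my ($numbers, $tp) = @_;
    $tp //= sub { return };               # innocuous default
    my $sum = 0;
    $sum += $_ for @$numbers;
    $tp->('sum_ready', { sum => $sum });  # Test-Point
    return $sum / @$numbers;
}

# A test supplies its own callback for just this invocation.
my $seen_sum;
my $mean = summarize([2, 4, 6], sub {
    my ($name, $args) = @_;
    $seen_sum = $args->{sum} if $name eq 'sum_ready';
});
```

Production callers simply omit the second argument and the TP costs one no-op call.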
How to run the examples
Download the examples to the same directory and run with:
prove -v test_class.t
You should see something like the example results below using bash shell:
Avoiding TP abuse by Meta manipulation in Moose
If you are paranoid about possible exploits or inadvertent use of the TP technique (although beware that even Moose provides some hooks by itself), you could use more advanced techniques to protect against inadequate use. Nevertheless, the example given is more to prevent inadvertent use (stopping someone from writing code that uses your callbacks for anything other than testing) than for actual security.
Note that the class is much the same as the original example above. However, a code block at the top of the class modifies the class' meta information dynamically, preventing a programmer from using it incorrectly unless a specific debug level is set in the environment. Explaining this code in detail is beyond the scope of this tutorial; to completely understand it, please check the POD for Moose and Class::MOP.
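The article's version does this with Moose META manipulation via Class::MOP, which is not reproduced here. As a rough, hypothetical stand-in in plain core Perl, the same effect (the TP machinery vanishing unless the environment enables it) can be approximated with a compile-time constant, since Perl folds away `if CONSTANT` blocks when the constant is false; the class name Guarded, the method compute, and the threshold of 5 are invented, while MYDEBUG_LEVEL follows the prose below:

```perl
package Guarded;
use strict;
use warnings;

# Compile-time flag taken from the environment. When false, the
# conditioned TP call below is optimized away entirely.
use constant DIAG => ( ($ENV{MYDEBUG_LEVEL} // 0) >= 5 );

sub new { bless { tp_callback => sub { return } }, shift }

sub tp_callback {
    my ($self, $cb) = @_;
    # Refuse to install a callback unless diagnostics are enabled,
    # so other code cannot latch onto the hook in production.
    return $self->{tp_callback} unless DIAG;
    $self->{tp_callback} = $cb if ref $cb eq 'CODE';
    return $self->{tp_callback};
}

sub compute {
    my ($self, $n) = @_;
    my $square = $n * $n;
    # Conditioned Test-Point: compiled out when DIAG is false.
    $self->{tp_callback}->('compute_square', { square => $square }) if DIAG;
    return $square + 1;
}

1;
```

This is weaker than the META approach (the attribute still exists; it is merely inert), but it illustrates the same create-time, environment-driven gating.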
The test code is almost exactly the same as the previous one except for the conditional on setting-up the callback.
How to run the examples
Download the examples to the same directory and run with:
prove -v cond_tp.t
Test the TP conditionals by setting MYDEBUG_LEVEL to 5 or above. You should see something like the example results below using the bash shell: