|P is for Practical|
I think you are more likely to get feedback if you move the code from your scratchpad to the body of your RFC. Just surround it with <readmore> tags so it doesn't distract attention from your introductory question. The comments here won't make any sense if you delete or modify your scratchpad contents at some point in the future, and posting the draft here will make it easier for people coming after you to learn from the dialog in these replies.
The metaphor of a circuit board with test points appeals to me, but I'm having trouble envisioning situations where I would actually use this. I think the tutorial would be significantly strengthened by a more in-depth discussion of when this approach is most valuable, especially in light of alternative design, debugging, and testing techniques.
The main difficulty is that one needs to know in advance where one would want to have permanent test points. To make something part of the class itself, it needs to be something one is testing for long term reasons. That rules out testing motivated by "I'm not sure I understand how this Perl function/syntax works" and testing motivated by the hunt for logic errors.
Once a logic error is fixed, there isn't much point in keeping the code used to find it sitting around. What one needs instead is preventive measures. An in-code comment discussing any tricky points of logic will hopefully keep someone from erasing the fix and regressing the code back to its buggy state. A regression test should also be defined as a double check, to ensure that future maintainers do in fact read the comments and don't reintroduce the error.
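To make that concrete, here is a minimal sketch of such a regression test using Test::More. The `normalize_year` function and its two-digit-year bug are hypothetical stand-ins for whatever logic was actually fixed:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical function that once mishandled two-digit years; the
# name and the bug are invented for illustration.  The in-code
# comment explains *why* the fix looks the way it does; this test
# makes sure a future maintainer can't quietly undo it.
sub normalize_year {
    my ($year) = @_;
    # Bug fix: two-digit years below 70 belong to 20xx, not 19xx.
    return $year >= 100 ? $year
         : $year >= 70  ? 1900 + $year
         :                2000 + $year;
}

is( normalize_year(99), 1999, 'two-digit 99 maps to 1999' );
is( normalize_year(5),  2005, 'two-digit 05 maps to 2005 (the old bug)' );
```

The test names the old bug explicitly, so anyone who breaks it again sees at once what regressed.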
A permanent test usually looks at object state or the interaction of object state and the environment. This raises the question: is there a way you can design your object so that test points aren't needed, or at least needed less frequently? I suspect the answer is sometimes yes and sometimes no. The tutorial should discuss the situations where design cannot eliminate the need for test points.
I would tend to think that one most needs permanent test points when one needs to test state in real time during a process that interacts with a changing environment. For example, if a user complains that hitting a certain key causes the software to spew garbage, one might want to have them run the software in a debugging mode and see what output gets generated by the embedded test points.
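A sketch of what such an embedded, runtime-enabled test point might look like. The `Widget` class, the `handle_key` method, and the `MYAPP_DEBUG` environment variable are all invented names for the sake of the example:

```perl
use strict;
use warnings;

package Widget;

sub new { return bless { buffer => '' }, shift }

# Permanent test point: always compiled in, but silent unless the
# program is run with MYAPP_DEBUG set.  Support can then say "run it
# again with MYAPP_DEBUG=1 and send me the output".
sub _test_point {
    my ( $self, $label, %state ) = @_;
    return unless $ENV{MYAPP_DEBUG};
    warn sprintf "TESTPOINT %s: buffer=[%s] %s\n",
        $label, $self->{buffer},
        join ' ', map { "$_=$state{$_}" } sort keys %state;
}

sub handle_key {
    my ( $self, $key ) = @_;
    $self->{buffer} .= $key;
    $self->_test_point( 'after keypress', key => $key );
    return $self->{buffer};
}

package main;
my $w = Widget->new;
$w->handle_key($_) for split //, 'abc';
print $w->{buffer}, "\n";   # abc
```

Because the output goes to STDERR via `warn`, the debugging chatter stays separate from the program's normal output.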
However, if one is using test points as part of a static test suite, I'd suspect the need for them reflects a weakness in the design more than an actual need for a test point. Sometimes, if we are working with a legacy design, we may not have any option but to stick with it, so test points might be helpful there as well. However, if one can come up with a design that eliminates the need for test points, I think that is good. Such designs tend to be more modular and easier to maintain.
In my own case, I try to define objects so that any information I need to test the state of the object is publicly accessible at least in read only form. I also try to break down long complex functions into small testable units. I test each of the component steps and not just the entire mega routine. This reduces the need for intermediate test points within a routine, except when I'm trying to track down a logic error. As discussed above, tracking down a logic error doesn't justify a permanent test point.
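For illustration, a hypothetical `Invoice` class along these lines, with read-only access to state and the "mega routine" broken into small, individually testable steps (the 5% tax rate is just an assumption for the example):

```perl
use strict;
use warnings;

package Invoice;

sub new {
    my ( $class, %args ) = @_;
    return bless { items => $args{items} || [] }, $class;
}

# Read-only accessor: returns a copy of the item list, so callers can
# inspect state for testing but can't change which items we hold.
sub items { return @{ $_[0]{items} } }

# Each step of the calculation is its own small, testable unit.
sub subtotal {
    my $self = shift;
    my $sum  = 0;
    $sum += $_->{price} * $_->{qty} for $self->items;
    return $sum;
}
sub tax   { my $self = shift; return $self->subtotal * 0.05 }   # assumed 5% rate
sub total { my $self = shift; return $self->subtotal + $self->tax }

package main;
my $inv = Invoice->new(
    items => [ { price => 10, qty => 2 }, { price => 5, qty => 1 } ] );
print $inv->subtotal, "\n";   # 25
```

A test suite can then check `subtotal`, `tax`, and `total` independently, with no need to peek inside a long routine mid-flight.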
I also work hard to build my software from immutable objects wherever possible. Instead of littering each class with setter methods, I'll write a class that gathers input, passes it to a constructor and spits out the immutable object. The input gathering class will need careful tests on state, but the other objects can be checked after creation and then we're done. However, I wouldn't need a test point to check state because my state is normally exposed via the aforementioned readonly methods.
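A minimal sketch of that builder-plus-immutable-object pattern, with invented `PointBuilder` and `Point` classes:

```perl
use strict;
use warnings;

package PointBuilder;    # mutable: gathers and validates input

sub new   { return bless { x => undef, y => undef }, shift }
sub set_x { $_[0]{x} = $_[1]; return $_[0] }
sub set_y { $_[0]{y} = $_[1]; return $_[0] }

# All the careful state checking happens here, once, at build time.
sub build {
    my $self = shift;
    die "x and y are both required\n"
        unless defined $self->{x} && defined $self->{y};
    return Point->new( x => $self->{x}, y => $self->{y} );
}

package Point;           # immutable: no setters, only read-only accessors

sub new { my ( $class, %args ) = @_; return bless {%args}, $class }
sub x   { return $_[0]{x} }
sub y   { return $_[0]{y} }

package main;
my $p = PointBuilder->new->set_x(3)->set_y(4)->build;
print $p->x + $p->y, "\n";   # 7
```

Only `PointBuilder` needs careful state tests; once `build` succeeds, a `Point` can never drift into an invalid state.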
Exposing object state via a read-only method can't hurt the integrity of the data. The only possible downside is that some developers are fond of hunting down and using undocumented methods. If a method exposes private data, I most likely don't want it to be part of the public API, because the organization of private data can change over time.
Maybe there are better or actually more standard "best practice" ways of doing this?
I don't know that there are "better" techniques, but there are related techniques. This technique bears some similarity to assertions. Assertions are also permanent additions to the code that let one take a peek at potential problem areas deep within the code. There are, however, two key differences - one that adds options and another that takes them away:
You could, of course, modify your technique slightly and enable testing of automatic data. Just define your test point routine to accept parameters passed in by the method. However, that means you would have to anticipate what parameters were being passed in and design separate test point methods for each combination of passed-in parameters. At that point you are getting awfully close to the very problem you are trying to avoid - having to define and sprinkle special-case debug/warning statements all through your code.
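A rough sketch of that parameterized variant; all names here are illustrative:

```perl
use strict;
use warnings;

package Summer;

sub new { return bless { testing => $_[1] || 0 }, $_[0] }

# Test point that accepts the method's lexical ("automatic") data as
# parameters.  Every new combination of locals needs a matching call,
# which is how the sprinkled-debug-statement problem creeps back in.
sub _test_point {
    my ( $self, $label, %locals ) = @_;
    return unless $self->{testing};
    print "TESTPOINT $label: ",
        join( ', ', map { "$_=$locals{$_}" } sort keys %locals ), "\n";
}

sub compute {
    my ( $self, $n ) = @_;
    my $running = 0;
    for my $i ( 1 .. $n ) {
        $running += $i;
        $self->_test_point( 'loop', i => $i, running => $running );
    }
    return $running;
}

package main;
my $summer = Summer->new(0);       # test points silent
print $summer->compute(4), "\n";   # 10
```

Passing a hash of named locals keeps one `_test_point` signature for everything, but the caller still has to decide, at every call site, which locals are worth reporting.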
You would still retain one benefit: the ability to cleanly encapsulate your debugging code. However, there are alternatives, especially for temporary debugging code. I often surround lengthy temporary debugging code with paired #DEBUG comments.
When I am ready to remove it or comment it out, I just grep my source files for #DEBUG.
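For example (the surrounding code is invented; only the #DEBUG convention comes from above):

```perl
use strict;
use warnings;

my @records = ( 'alpha', 'beta' );

# ... normal code ...

#DEBUG
warn "records before merge: @records\n";
warn "record count: " . scalar(@records) . "\n";
#DEBUG

# ... normal code resumes ...
print scalar(@records), "\n";   # 2
```

A quick `grep -n '#DEBUG' lib/*.pm` (adjust the glob to your source layout) then lists every block that needs to be removed or commented out.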
Update: Expanded comments on design vs. test points.