Re^4: "Practices and Principles" to death

by BrowserUk (Patriarch)
on Mar 01, 2008 at 04:05 UTC ( [id://671336] )


in reply to Re^3: "Practices and Principles" to death
in thread "Practices and Principles" to death

I'm not sure bounding in like a bungee boss and saying "I'm here to challenge the status quo! The prevailing wisdom doesn't always work!" is the way to do that, which is why I responded to BrowserUk so strongly.

Unfortunately, as is so often the case, you were so busy responding strongly that you either: a) didn't bother to read what I wrote; or b) read it, and decided that it was easier (or perhaps more entertaining) to take minutiae from my posts and blow them out of proportion than to deal with the argument itself.

For example: read back and you'll find that string eval was just a supporting example for a wider point--an accusation, if you prefer--that, despite recognising the importance of test code, the Test::* user, by using the Test::* modules to create their test suites, is forced to apply different standards to the code in their test suite than they would apply (and you would advocate) for their production code.

I do not have a problem with the use of string eval--subject to sensible precautions. As I've pointed out before (and I seem to recall, though cannot find, that you may have done similarly), when you get down to base level, all Perl code is handled through string eval: use is require is do is eval. Only two things differ: the source of the string and the timing of the evaluation.
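
To be concrete about that chain (setting aside @INC searching, %INC bookkeeping and scoping differences; the path below is purely illustrative), perlfunc itself reduces each step roughly to the next:

    use Some::Module @args;
    # is, for practical purposes,
    BEGIN { require Some::Module; Some::Module->import( @args ); }
    # and require-ing the file that resolves to comes down to something very close to
    do '/path/to/Some/Module.pm';
    # which in turn behaves much like reading the file and string-evaluating its text:
    eval do { local $/; open my $fh, '<', '/path/to/Some/Module.pm' or die $!; <$fh> };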

Indeed, I've spoken up, on many occasions in this place, against the unthinking paranoia that converts the legitimate cautions surrounding string eval--"if you cannot trust the source of the code, it can be dangerous; if you repeatedly evaluate the same code, it can be slow"--into "string eval is evil". Provided the code being evaluated originates from within your own filesystem/organisation, and is provenanced with the same credentials, there is no greater risk in evaluating that code at runtime than in evaluating a perl source file at compile time.

And provided that you do not evaluate identical code more than once (think Memoize, or a hash lookup), it is no slower than doing the same thing at compile time--and far faster than trying to replicate the Perl parser using Perl code (C vs. Perl) or (for example) Parse::RecDescent.
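
For instance, a hash keyed on the code string--a sketch only; the sub name and cache are mine, nothing from Test::Builder--ensures each distinct snippet is compiled exactly once and merely called thereafter:

    my %compiled;    # code string => compiled code ref
    sub cached_eval {
        my( $code, @args ) = @_;
        $compiled{ $code } ||= eval "sub { $code }"
            or die "compilation failed: $@";
        return $compiled{ $code }->( @args );
    }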

And it is this last point that I was making about Test::Builder's use of string eval. If you are, at the lowest level, going to kick the responsibility of performing a comparison test off to Perl's parser (via string eval), why bother interspersing all the layers between those comparisons and the Perl parser? You mentioned that you had attempted a dispatch table solution but that string eval proved to be faster. You also challenged me (though I'm pretty sure it was more of a dare than a challenge) to suggest an alternative that would deal with all the edge cases and caveats that have been evolved into T::B. Well, here is an idea for you: let Perl do it.

How? How about this? (And I know before posting that you will find a reason for not using it, based upon my crude implementation--perhaps something to do with supporting ancient builds?)

package Assert;

require Exporter;
our @ISA    = 'Exporter';
our @EXPORT = qw[ assertOk assert ];

my $nTests = 0;

# Run the supplied block in the caller's context and report ok/not ok,
# the running test number, the description, and the caller's file(line).
sub assertOk (&$;) {
    my( $package, $file, $line ) = caller;
    my( $code, $desc ) = @_;

    warn sprintf "%s %d # %s [%s(%d)]\n",
        ( $code->() ? 'ok' : 'not ok' ),
        ++$nTests, $desc, $file, $line;
}

1;

To produce:

#! perl -slw
use strict;

use Assert;

my $fred = 1;
assertOk { $fred eq 'fred' } 'Check $fred';
assertOk { 2 == 1 } 'Check math';

$fred = 'fred';

assertOk { $fred eq 'fred' } 'Check $fred';
assertOk { 2 == 2 } 'Check math';

__END__

[ 3:14:54.52]C:\test>t-Assert.pl
not ok 1 # Check $fred [C:\test\t-Assert.pl(7)]
not ok 2 # Check math [C:\test\t-Assert.pl(8)]
ok 3 # Check $fred [C:\test\t-Assert.pl(12)]
ok 4 # Check math [C:\test\t-Assert.pl(13)]

As far as I am able to discern, because the code block is run in the context of the calling code, the interpretation of any variables--be they tied, overloaded or whatever--should be identical to the way they would be interpreted if executed at the same point in the calling code. Assuming that you arrive at similar conclusions (no doubt you'll let me know if not), go back and look at the shenanigans that similar code goes through before being passed back to string eval. And also consider the less than stellar syntax that it requires.

One possible objection to this is that it cannot (easily; ignoring B::* for the moment) automatically produce a comment that shows the exact code being tested. I have two answers to that:

  1. If you gave me the file/lineno (as shown above) in a 'standard' form--inline with the failure notification, rather than spread across a variable number of lines as currently--then when I run the test from within my editor, I can have it parse the results and take me directly to the failing line of code (see the sketch after this list). That is far more useful than printing the code in the output, as the real thing is in context and is editable.
  2. If you look at what Smart::Comments does, you'll see that it achieves listing the actual failing code. Of course, it is a source filter and so (in some circles) verboten. But is it really worse than string eval?
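
For what it's worth, the editor hook I describe in point 1 is only a few lines of work against the format shown above (the filehandle and the jump_to handler are hypothetical stand-ins for whatever the editor provides):

    while ( my $line = <$test_output> ) {
        next unless $line =~ /^not ok \d+ # .* \[([^(]+)\((\d+)\)\]\s*$/;
        jump_to( file => $1, line => $2 );    # hand the failing location to the editor
    }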

But my primary objections to Test::* are:

  • Why are we converting boolean test results into cutesy 'ok'/'not ok' strings in the first place?

    The only two reasons I can see are:

    1. They are easy for a harness to parse.

      This is Perl. We have P(C)RE at our disposal. Are they really that much easier to parse than, for example, the following? (A parsing sketch follows this list of objections.)

      /path/file(23): Error: some relevant error text here
      /path/file(32): Warning: some other relevant text here
      /path/file(64): Passed: more text
    2. They are easy to count which allows for the production of those oh-so-useful statistics.

      Whilst I can see some value in allowing a test suite to continue after a failure rather than dying on the spot, I can see little value in having the statistics.

      Imagine an employee who spent the day yelling: "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "Oops! I screwed up", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly", "I did that correctly".

      And finished with "I did my job correctly 95% of the time today". It's like food packaging that says it is "95% fat free". That means it is 5% fat.

  • They remote me from the code being tested.

    test harness: failure; prove (with options): file/lineno; construct a file to emulate the test so that I can use print or perl -de1; run that file; track back to source of failure; edit file; repeat.

  • Most damning, they prevent me from using the full power of perl to construct my test suite because I am always having to service the needs of the test tool infrastructure by reducing all my tests to simple yes/no answers.
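
To illustrate the parsability point from the first objection above (a sketch only; the filehandle and the tallying are mine), that alternative file(lineno) format yields to a single pattern just as readily as 'ok'/'not ok' lines do:

    my %tally;
    while ( <$results> ) {
        next unless m{^ ([^(]+) \( (\d+) \) : \s+ (Error|Warning|Passed) : \s+ (.*) $}x;
        my( $file, $line, $status, $text ) = ( $1, $2, $3, $4 );
        $tally{ $status }++;    # or jump straight to $file($line), as in the earlier sketch
    }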

I do not expect a meaningful response to this, because that would require you to actually consider my arguments rather than using spoiling tactics, like exploiting a typo, to dismiss them.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^5: "Practices and Principles" to death
by chromatic (Archbishop) on Mar 01, 2008 at 06:57 UTC
    I do not expect a meaningful response to this, because that would require you to actually consider my arguments rather than using spoiling tactics, like exploiting a typo, to dismiss them.

    I've long known that American isn't your native language. (The "Uk" in your nickname gives it away.)

    sub assertOk (&$;) { my( $package, $file, $line ) = caller;

    This does not respect the level of the invoking code in the callstack. Every test function is exportable and composable, and as such has to set the level at which to print the call trace appropriately.
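
    A rough illustration of the problem (the helper below is hypothetical, not taken from either module): if someone composes assertOk into a higher-level test function,

        sub assert_positive {
            my( $value, $desc ) = @_;
            assertOk { $value > 0 } $desc;   # caller() inside assertOk now reports
        }                                    # this helper's file and line

    then failures are reported against the helper rather than against the test script. Test::Builder's convention is that wrappers bump a level setting (local $Test::Builder::Level = $Test::Builder::Level + 1;) so the reporting code knows how many call frames to walk back up.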

    my( $code, $desc ) = @_; warn sprintf "%s %d # %s [%s(%d)]\n",

    This prints to the wrong filehandle, so diagnostics might not appear in the output stream.
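
    Presumably the point is that a harness reads test results from STDOUT, while warn writes to STDERR; the minimal change, as a sketch, is along the lines of:

        print {*STDOUT} sprintf "%s %d # %s [%s(%d)]\n",
            ( $code->() ? 'ok' : 'not ok' ), ++$nTests, $desc, $file, $line;

    keeping warn and STDERR for diagnostics only.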

    ( $code->() ? 'ok' : 'not ok' ), ++$nTests, $desc, $file, $line ; }

    This doesn't check the test numbers for sanity.

    And also consider the less than stellar syntax that it requires.

    Besides the camel case, it's not too ugly.

    If you are, at the lowest level, going to kick the responsibility of performing a comparison test off to Perl's parser (via string eval), why bother with interspersing all the layers between those comparisons and the Perl parser?

    "Hey, if you're going to calculate the Fibonacci function in every Haskell tutorial, why bother writing a program? Why not just look it up in a book?" I'm sure you could trivialize any program to its least interesting part and dismiss it completely for being most uninteresting. I get it. You don't care about any of the other features Test::Builder provides. That's fine. I don't care. You don't have to use them.

    Those other features solve plenty of problems for plenty of other people using plenty of other languages and frameworks. You don't have to care about that, either. I can't understand why you do, but that's fine too.

    They remote me from the code being tested.

    Yes. That's the point. That's exactly why TAP exists, and it's why we can use the Perl test infrastructure to manage Parrot's tests, for example, which are written in Perl 1, Perl 5, Perl 6, Tcl, Lua, Pheme, C, Ruby, PIR, PASM....

    Most damning, they prevent me from using the full power of perl to construct my test suite because I am always having to service the needs of the test tool infrastructure by reducing all my tests to simple yes/no answers.

    That's half true. It's true in that all good tests ultimately have a yes or no answer. It's false in that the entire point of extracting Test::Builder is so that people could use the full power of Perl to construct test suites without having to service the needs of the test tool infrastructure and without having to worry about the interaction of other parts of the test suite they might also want to use.
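
    For concreteness, the delegation pattern being described looks roughly like this (a sketch, not lifted from any particular CPAN module; the widget method is invented):

        package My::Test::Widgets;
        use strict;
        use warnings;
        use Test::Builder;
        use Exporter 'import';
        our @EXPORT = qw( widget_ok );

        my $tb = Test::Builder->new;    # the shared builder instance

        sub widget_ok {
            my( $widget, $desc ) = @_;
            $tb->ok( $widget->is_valid, $desc );   # reduce to a boolean, report via the builder
        }

        1;

    Because every such module reports through the same Test::Builder instance, it interoperates with Test::More, the harness, and whatever other test modules the suite pulls in.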

Re^5: "Practices and Principles" to death
by shmem (Chancellor) on Mar 01, 2008 at 13:11 UTC
    Nice! Here's a slightly more convoluted version that makes assertOk return whatever the passed code block returns:
    package Assert;

    require Exporter;
    our @ISA    = 'Exporter';
    our @EXPORT = qw[ assertOk assert ];

    my $nTests = 0;

    sub assertOk (&$;) {
        my( $package, $file, $line ) = caller;
        my( $code, $desc ) = @_;
        my( $scalar, @list );

        warn sprintf "%s %d # %s [%s(%d)]\n",
            ( ( wantarray ? ( @list = $code->() ) : ( $scalar = $code->() ) )
                  ? 'ok' : 'not ok' ),
            ++$nTests, $desc, $file, $line;

        warn $@ if $@;

        wantarray ? @list : $scalar;
    }

    1;
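
    With that change the assertion can sit inline in a chain of test steps and its result can be reused, along the lines of (fetch_rows being an invented stand-in):

        my @rows = assertOk { fetch_rows() } 'fetch_rows returns something';
        assertOk { @rows == 3 } 'got exactly three rows';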

    --shmem

    _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                  /\_¯/(q    /
    ----------------------------  \__(m.====·.(_("always off the crowd"))."·
    ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
