PerlMonks
Re^4: Universal test flag

by chromatic (Archbishop)
on Jul 19, 2012 at 17:44 UTC


in reply to Re^3: Universal test flag
in thread Universal test flag

Obviously your world seems more perfect than ours, but the hard truth in the commercial world is that there is always a tradeoff between quality and the amount of money and time available.

I've been in management for five years now. I've been doing automated testing for fourteen years now.

Now that we're past the credential-waving protocol, let me paraphrase what I said, lest it be waved away under the flag of "We don't have time to do it right. We're doing Serious Business".

Of course, code must be good enough to be tested but it surely doesn't have to be perfect.

If you run your code differently under testing than you do in deployment, your tests probably aren't testing what you care about.

I didn't use the word "perfect". Please don't read into what I wrote; you'll confuse things.


Re^5: Universal test flag
by ait (Friar) on Jul 19, 2012 at 18:46 UTC

    Yeah, I should have answered separately, since I was mostly replying to zwon; I apologize for that. But you shouldn't have turned this into a pissing contest either.

    If you run your code differently under testing than you do in deployment, your tests probably aren't testing what you care about.

    I understood you the first time around, but it's quite common to have several tests for the same piece of code, or to have a test behave differently under certain test conditions (static, offline, live, etc.). More commonly this is done with SKIP blocks in the test code, but sometimes you don't have that option.
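For readers who haven't used the idiom: a SKIP block in Test::More lets one test file cover several of those conditions. A minimal sketch, where the LIVE_TESTS environment variable and the sum() function are invented for illustration, not taken from anyone's post:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical function under test, defined inline so the
# example is self-contained.
sub sum { my $t = 0; $t += $_ for @_; return $t }

is( sum(1, 2, 3), 6, 'sum works offline' );

SKIP: {
    # Skip the "live" test unless the environment opts in;
    # LIVE_TESTS is an assumed convention, not a standard one.
    skip 'set LIVE_TESTS=1 to run live tests', 1
        unless $ENV{LIVE_TESTS};

    is( sum(40, 2), 42, 'sum works against live data too' );
}
```

Run plainly, the second test is reported as skipped; run with LIVE_TESTS=1, both execute.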

    I found tobyink's approach the best because it would even allow running the app for integration testing with certain parts shut off. I don't see this as any different from skipping tests in the test suite.
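The kind of flag being discussed can be sketched as a compile-time constant derived from an environment variable; the MYAPP_TEST name and the send_alert() function below are hypothetical, not taken from tobyink's actual post:

```perl
use strict;
use warnings;

# Hypothetical universal test flag: folded into a constant at
# compile time, so the branch can be optimized away when off.
use constant UNDER_TEST => !!$ENV{MYAPP_TEST};

sub send_alert {
    my ($msg) = @_;
    # Under integration testing, shut off the live side effect
    # but keep the rest of the code path identical.
    return "suppressed: $msg" if UNDER_TEST;
    return "sent: $msg";    # stand-in for a real mail or API call
}

print send_alert('disk full'), "\n";
```

The constant makes the flag's state fixed for the life of the process, which is closer to "a different build" than to a runtime conditional.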

      I found tobyink's approach the best because it would even allow running the app for integration testing with certain parts shut off. I don't see this as any different from skipping tests in the test suite.

      The question is what's in control, the tests or something external.

      There may be confirmation bias here, but every time I've modified my code to add a branch to distinguish between testing and deployment, it's bitten me later. I'm even leery of using separate Catalyst configurations for testing and deployment because those differences exist (I do use those configurations and usually very productively, but they introduce a risk I'm still not completely comfortable with).

      As an intermediate step between "We don't have tests for this code" and "We have great tests for this code and we've refactored the code to be simple and testable and clean", debugging sections in the code are workable. I could live with them, as long as there's a plan to replace them with something better in the near future.

      You wrote about management and business priorities and you're right to mention that. Getting some working tests is better than having no working tests, but even so, I just can't convince myself to have a lot of confidence in those tests until the differences between the code running under testing and the code running when deployed are minimal.

      (That's why I use very, very minimal mock objects.)
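The "very, very minimal mock objects" point can be shown with a hand-rolled mock that implements only the one method the code under test calls; Mock::Mailer and notify_user() are invented for this sketch:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# A minimal hand-rolled mock: one method, plus a record of calls.
package Mock::Mailer;
sub new     { bless { sent => [] }, shift }
sub deliver { my ($self, $to) = @_; push @{ $self->{sent} }, $to; 1 }

package main;

# Hypothetical function under test: it notifies a user via
# whatever mailer it is handed, so a mock can stand in cleanly.
sub notify_user {
    my ($mailer, $user) = @_;
    return $mailer->deliver($user);
}

my $mailer = Mock::Mailer->new;
notify_user($mailer, 'alice@example.com');
is_deeply( $mailer->{sent}, ['alice@example.com'],
    'notify_user delivered exactly one message to the right address' );
```

Because the mock has no behavior beyond recording the call, the test exercises notify_user() itself rather than a simulation of the mailer.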
