Re^3: Universal test flag

by ait (Hermit)
on Jul 18, 2012 at 13:42 UTC


in reply to Re^2: Universal test flag
in thread Universal test flag

chromatic and zwon:

Obviously your world seems more perfect than ours, but the hard truth in the commercial world is that there is always a tradeoff between quality and the amount of money and time available. Most projects today are constrained in both, and you must quickly adapt to constantly changing business conditions, so refactoring always takes second place, after you get the (ever-changing) functionality pinned down.

Testing, on the other hand, is paramount to make sure each piece does what it should and that all the pieces fit together when several people are working on the same project. Of course, code must be good enough to be tested, but it surely doesn't have to be perfect.

Sure, software is perfectible if you have enough time and resources, but the cold hard truth is that in most situations customers are not willing to pay the extra money for perfect code, and their budget only allows for "good enough". IMHO, the truly successful business projects are the ones that deliver on time and on budget with code good enough to get business flowing, creating the cash flow necessary to eventually perfect the code.

Re^4: Universal test flag
by chromatic (Archbishop) on Jul 19, 2012 at 17:44 UTC
    Obviously your world seems more perfect than ours, but the hard truth in the commercial world is that there is always a tradeoff between quality and the amount of money and time available.

    I've been in management for five years now. I've been doing automated testing for fourteen years now.

    Now that we're past the credential-waving protocol, let me paraphrase what I said, lest it be waved away under the flag of "We don't have time to do it right. We're doing Serious Business".

    Of course, code must be good enough to be tested, but it surely doesn't have to be perfect.

    If you run your code differently under testing than you do in deployment, your tests probably aren't testing what you care about.

    I didn't use the word "perfect". Please don't read into what I wrote; you'll confuse things.

      Yeah, I should have answered separately because I was mostly answering zwon; I apologize for that. But you shouldn't have turned this into a pissing contest either.

      If you run your code differently under testing than you do in deployment, your tests probably aren't testing what you care about.

      I understood you the first time around, but it's quite common to have several tests for the same piece of code, or to have a test behave differently under certain test conditions (static, offline, live, etc.). Most commonly this is done with skip blocks in the test code, but sometimes you don't have that option.
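
      For instance, here is a minimal sketch of the skip-block style I mean. The subs are stubbed so the sketch runs stand-alone, and names like LIVE_TESTS and send_sms are invented for illustration:

          use strict;
          use warnings;
          use Test::More tests => 3;

          # stub implementations so this sketch runs on its own
          sub parse_message     { defined $_[0] && length $_[0] }
          sub gateway_reachable { 1 }
          sub send_sms          { 1 }

          ok( parse_message('hello'), 'parser accepts a plain message' );

          SKIP: {
              # the whole block is skipped unless the environment opts in
              skip 'set LIVE_TESTS=1 to test against the live gateway', 2
                  unless $ENV{LIVE_TESTS};

              ok( gateway_reachable(),     'SMS gateway answers' );
              ok( send_sms('555', 'ping'), 'live message accepted' );
          }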

      I found tobyink's approach the best because it would even allow running the app for integration testing with certain parts shut off. I don't see this as any different from skipping tests in the test suite.
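
      Roughly, the flag approach I'm referring to looks like this (a sketch of the general idea only, not tobyink's actual code; MY_APP_TEST, the module name, and send_sms are all invented):

          package My::Notifier;
          use strict;
          use warnings;

          # one universal flag, consulted wherever a side effect
          # must be shut off while the app runs under test
          use constant UNDER_TEST => !!$ENV{MY_APP_TEST};

          sub send_sms {
              my ($number, $text) = @_;
              return 'skipped (test mode)' if UNDER_TEST;
              # ... talk to the real gateway here ...
              return 'sent';
          }

          1;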

        I found tobyink's approach the best because it would even allow running the app for integration testing with certain parts shut off. I don't see this as any different from skipping tests in the test suite.

        The question is what's in control, the tests or something external.

        There may be confirmation bias here, but every time I've modified my code to add a branch to distinguish between testing and deployment, it's bitten me later. I'm even leery of using separate Catalyst configurations for testing and deployment because those differences exist (I do use those configurations and usually very productively, but they introduce a risk I'm still not completely comfortable with).
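
        (A concrete sketch of what I mean by separate configurations, assuming Catalyst::Plugin::ConfigLoader's local-suffix mechanism and an invented app name MyApp: the base myapp.conf is always loaded, and a suffixed override file is selected per environment.)

            # in a test script: pick myapp_testing.conf as the override
            BEGIN { $ENV{MYAPP_CONFIG_LOCAL_SUFFIX} = 'testing' }

            use Catalyst::Test 'MyApp';   # app boots with the testing overrides
            use Test::More tests => 1;

            ok( get('/'), 'front page responds under the testing config' );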

        As an intermediate step between "We don't have tests for this code" and "We have great tests for this code and we've refactored the code to be simple and testable and clean", debugging sections in the code are workable. I could live with them, as long as there's a plan to replace them with something better in the near future.

        You wrote about management and business priorities and you're right to mention that. Getting some working tests is better than having no working tests—but even so, I just can't convince myself to have a lot of confidence in those tests until the differences between the code running under testing and the code running when deployed are minimal.

        (That's why I use very, very minimal mock objects.)
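
        (For illustration, a generic hand-rolled example of what a very minimal mock can look like; Mock::Gateway and notify are invented names, not my actual code:)

            use strict;
            use warnings;
            use Test::More tests => 2;

            # a minimal mock: records calls, reports success, nothing else
            package Mock::Gateway;
            sub new  { bless { sent => [] }, shift }
            sub send { my $self = shift; push @{ $self->{sent} }, [@_]; 1 }

            package main;

            # the code under test receives the gateway as a dependency
            sub notify { my ($gw, $num, $msg) = @_; $gw->send($num, $msg) }

            my $gw = Mock::Gateway->new;
            ok( notify($gw, '555', 'hi'), 'notify reports success' );
            is_deeply( $gw->{sent}, [ ['555', 'hi'] ],
                'exactly one message recorded' );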

Re^4: Universal test flag
by zwon (Abbot) on Jul 18, 2012 at 17:08 UTC
    the hard truth in the commercial world

    Oh, you apparently think that I'm living off charity and only write abstract examples ;)

    refactoring always takes second place, after you get the (ever-changing) functionality pinned down

    Refactoring is not something you do after implementing functionality; it is something you do to implement functionality in the most efficient way (and so save time and money).

      Oh, you apparently think that I'm living off charity and only write abstract examples ;)

      No sir, I don't make those assumptions ;-)

      What I'm saying is that it's very different, for example, to code a well-thought-out CPAN library than to work on a large system with a team of coders at different levels, under time and money constraints.

      I think that your comments refer more to design than to the process of coding. Refactoring, by its very definition, is a continuous and disciplined process after the fact, so your assumption above is actually wrong and must not be confused with good design. Good design is paramount, and that is not in question.

      Furthermore, to actually be able to refactor you must first have a solid set of tests, so that when you refactor you can guarantee the same functionality, which should have been pinned down beforehand.

        to actually be able to refactor you must first have a solid set of tests

        And the exact problem with what you are trying to do is that it isn't solid. Suppose you later extract the SMS-sending code into a separate module; will your tests help you find out whether you broke anything? No, because the code that might be broken by the change is being skipped in test mode.

        So what I would do is write solid tests first, and refactor the code to pass them.
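
        A sketch of that order of operations, staying with the SMS example (every name here is invented): the test pins the behaviour down through the public interface, with the transport injected, so it keeps passing unchanged when the sending code is later extracted into its own module.

            use strict;
            use warnings;
            use Test::More tests => 1;

            my @wire;                                    # fake transport
            my $transport = sub { push @wire, $_[0]; 1 };

            # code under test: formats the alert and hands it to the transport
            sub alert_user {
                my ($send, $user, $msg) = @_;
                return $send->("$user: $msg");
            }

            alert_user($transport, 'bob', 'disk full');
            is_deeply( \@wire, ['bob: disk full'],
                'alert reaches the transport unchanged' );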
