Re: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)

by dragonchild (Archbishop)
on Oct 23, 2005 at 17:56 UTC ( #502324=note )


in reply to Neither system testing nor user acceptance testing is the repeat of unit testing (OT)

Well written. ++!

I have one nit to pick - unit-testing is still black-box testing. One should be testing against the published interface and verifying that the code performs according to spec. Of course, this assumes that one has both well-defined interfaces and a well-defined spec (including error-cases) to work off of.
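As a minimal sketch of what testing against the published interface looks like (the Toy::Stack package and its spec are invented here purely for illustration): every assertion follows from the spec alone, with no peeking at how the data is stored.

```perl
use strict;
use warnings;
use Test::More;

# Toy module defined inline so the example is self-contained.
# Its hypothetical published spec: push() returns the new size,
# and pop() on an empty stack returns undef (the documented error case).
package Toy::Stack;
sub new { bless { items => [] }, shift }
sub push {
    my ($self, $x) = @_;
    CORE::push @{ $self->{items} }, $x;
    return scalar @{ $self->{items} };
}
sub pop {
    my ($self) = @_;
    return CORE::pop @{ $self->{items} };
}

package main;

# Black-box tests: derived only from the spec above.
my $s = Toy::Stack->new;
is( $s->push(42), 1,     'push returns the new size' );
is( $s->pop,      42,    'pop returns the last item pushed' );
is( $s->pop,      undef, 'pop on an empty stack returns undef (spec error case)' );

done_testing();
```

If the internal representation later changes from an array to a linked list, these tests should pass unmodified — that is the point of testing the interface rather than the implementation.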

TDD has a very similar point of view, changing only that the unit-tests combined with the user stories are the spec. The system/integration testing and UAT phases are more verification of the user stories + validation that the user stories are correct. In the terms of the Pragmatic Programmer, TDD is tracer bullets on steroids, with UAT being the iteration.


My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Re^2: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
by pg (Canon) on Oct 23, 2005 at 18:20 UTC

    Good point! But there are at least two methodologies to pick from: white box and black box. This is especially true with unit testing, and becomes less true with system testing and user acceptance testing.

    For unit testing, most of the time, it is a mixture of white box testing and black box testing. They have different focuses. Black box testing is used to ensure that the interfaces a module/function/procedure provides are correct as spec'd. But white box testing comes in to make sure that each line of code has a reason to exist and, if it exists, that it is correct.
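    A sketch of that mixture (the function and its fast-path threshold are invented for illustration): the first two tests below are black box, derived purely from the spec "return the sum"; the third is white box, sized deliberately to cross an internal threshold you would only know about from reading the code.

```perl
use strict;
use warnings;
use Test::More;
use List::Util ();

# Invented function with a hidden implementation detail: lists
# longer than 100 elements are delegated to List::Util::sum.
sub total {
    my @n = @_;
    return List::Util::sum(@n) if @n > 100;
    my $t = 0;
    $t += $_ for @n;
    return $t;
}

# Black box: follows from the spec alone.
is( total(1, 2, 3), 6, 'sums a small list' );
is( total(),        0, 'empty list sums to zero' );

# White box: 101 elements, chosen only because we read the code
# and know that crosses the fast-path threshold.
is( total( (1) x 101 ), 101, 'long list exercises the delegated branch' );

done_testing();
```

    Without the white-box case, the delegated branch could be completely broken and the suite would still pass.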

    It also has something to do with our ability to define test cases. White box testing depends more heavily on that ability, which, like any other ability human beings possess, is far from perfect ;-)

Re^2: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
by fokat (Deacon) on Oct 24, 2005 at 00:46 UTC

    I mostly agree with you, dragonchild, except for this small piece (which does not deprive you of your well deserved ++):

    unit-testing is still black-box testing

    The blackness of the boxes you test with is orthogonal to the phase of testing you are at. Seriously: you can base your cases on the documented behavior (what the code is supposed to do) and also add cases to stress the inner workings of the code you're looking at.

    If you generalize, you'll see that you cannot ensure you're achieving the desired level of test coverage unless you peek under the hood and verify that the code is being exercised in the intended (or unintended, depending on who you ask) way.

    The point is that you must try to use both black box and white box as long as you can. As you progress in the scope of your tests (i.e., you move from unit testing to system testing, also called integration testing), the number of cases/paths makes the white box approach too resource intensive.

    Otherwise, you may have all the test cases you want, and still miss bugs because you did not see how the actual code did something in particular.

    But even when white box can help provide more effective test cases, this does not mean you can forget about black box testing. If you rewrite a substantial part of the code, chances are your white-box test cases become less effective, because they may now be tickling different code, or the same code in different ways. But your black-box tests will still be verifying that, at the least, the interface is working as expected.

    Update: pg is right on the money (++ too): For unit testing, most of the time, it is a mixture of white box testing and black box testing (...)

    Best regards

    -lem, but some call me fokat

      Oooh. This is where I start to get a little antsy. In my extremely un-humble opinion, white-box testing is a CodeSmell. If you need to look at the code instead of the interface to determine what tests you need to run, your interface isn't correctly designed. Correct interface design and mocking will allow you to black-box all your unit-tests.

      That's a big statement, but I'm willing to be proven wrong. (Yes, that's a challenge.)
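      A sketch of what I mean by mocking (all names invented): if the collaborator that does the real I/O is injected through the interface, the unit test can stay entirely black box by handing in a hand-rolled mock that records calls.

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical class whose interface says: new() takes a 'mailer'
# that responds to send($text); send_summary() returns true on
# success and dies on failure.
package Report;
sub new {
    my ($class, %args) = @_;
    return bless { mailer => $args{mailer} }, $class;
}
sub send_summary {
    my ($self, $text) = @_;
    $self->{mailer}->send($text) or die "mail failed\n";
    return 1;
}

# Hand-rolled mock: records every call, returns a canned result.
package Mock::Mailer;
sub new  { bless { sent => [], ok => $_[1] }, $_[0] }
sub send { my ($self, $text) = @_; push @{ $self->{sent} }, $text; return $self->{ok} }

package main;

my $mock = Mock::Mailer->new(1);
my $r    = Report->new( mailer => $mock );
ok( $r->send_summary('weekly totals'), 'reports success when the mailer succeeds' );
is_deeply( $mock->{sent}, ['weekly totals'], 'exactly one message went out' );

my $bad = Report->new( mailer => Mock::Mailer->new(0) );
eval { $bad->send_summary('x') };
like( $@, qr/mail failed/, 'failure surfaces as an exception, per the interface' );

done_testing();
```

      Note that even the failure path is reachable from the interface here, because the dependency was designed to be swappable. (CPAN modules such as Test::MockObject do the same job more thoroughly.)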


      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
        OK - in response to your challenge, I've got one word to say to you - Exception Flows ;-)

        Say you have a function/method, as part of your interface, that writes a file out somewhere. This isn't the main role of the function/method, it's just part of the work it does for you. The normal flow is to open the file, write whatever, and close the file.

        But there is also code to deal with being unable to open the file, not all the data being written or the file not being able to be closed.

        How these failures get communicated back to the client can vary too - from not saying a word through to throwing an exception or even calling exit().

        My opinion is that the developer is required to set up test cases for each of these failure modes, and this requires a white box approach. That the code writes a file may not be inherently knowable from the interface (whether it should be is a different point); only by looking at the code do you _know_ that it writes a file, and that it does different things for different failures. Even if the interface doco states that it writes a file, detailing the different failure paths is almost certainly not part of the interface design or doco (but may well be part of the interface _implementation_ doco).

        The function/method writer can test all the normal stuff just by reference to the interface, but setting up file-open failures and checking that the function does the right thing needs more detailed knowledge related directly to the code. This is where tools like Devel::Cover can help - exception flows pretty much stick out straight away as bits of code, related directly to your chosen implementation of the interface, that you have got to write test cases for.
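        To make that concrete, here is a sketch (save_config and its undef-on-failure convention are invented): the normal flow is knowable from the interface; the open-failure test only exists because we looked at the code and saw the failure branch.

```perl
use strict;
use warnings;
use Test::More;
use File::Temp qw(tempdir);
use File::Spec;

# Invented function: writes $text to $path, returns 1 on success,
# undef on any failure (one of several possible failure conventions).
sub save_config {
    my ($path, $text) = @_;
    open my $fh, '>', $path or return undef;    # exception flow 1
    print {$fh} $text       or return undef;    # exception flow 2
    close $fh               or return undef;    # exception flow 3
    return 1;
}

# Normal flow: derivable from the interface alone.
my $dir  = tempdir( CLEANUP => 1 );
my $path = File::Spec->catfile( $dir, 'app.conf' );
ok( save_config( $path, "verbose=1\n" ), 'writes a config file' );

# Exception flow: force open() to fail by pointing at a directory
# that does not exist. Devel::Cover would flag this branch as
# untested if we stopped at the normal flow.
my $bad = File::Spec->catfile( $dir, 'no-such-subdir', 'app.conf' );
ok( !defined save_config( $bad, "x\n" ), 'open failure reported as undef' );

done_testing();
```

        The write-failure and close-failure flows are harder still; they typically need a mocked or constrained filesystem, which is exactly the white-box effort being argued about.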

        ...reality must take precedence over public relations, for nature cannot be fooled. - R P Feynman

        Since you've thrown down the gauntlet, let me pick it up.

        You present a false dilemma, suggesting that you only get to do one kind of test. And then you present an argument that black box unit tests should be more important.

        But the fact is that you can do both. And they pick up different kinds of errors. I fully agree that all of the tests of the basic interface should be motivated by the interface specification, not the implementation. That is, they should be black. But corner cases and special cases in the code are a common source of bugs, and one may only reliably figure out where those cases actually are by staring at the implementation. (One may guess without peeking, but one only knows after looking.) So including white box unit tests to smoke out possible bugs in corner cases catches errors that black box testing may not.

        Therefore after you write your black box unit tests, there is value in adding white box tests as well. Given a positive return from doing the work, one should do it.

        This is also why white-box measurements such as coverage percentage (see Devel::Cover) have value.

        UPDATE: minor punctuation and added "white-box" to the last sentence to clarify my point.

        "... to determine what tests you need to run, your interface isn't correctly designed."

        There is a big hole in this logic. Testing should not and cannot depend on the correctness of anything else, regardless of whether it is the interface design or the actual code. There is simply no dependency; we don't expect it, and we avoid it. The whole purpose of testing is to find faults with the application, from requirements to design to coding.

        If we assume that the interface design is great, or depend on that, then the entire purpose of the testing is defeated right there.

        Correct interface design and mocking will allow you to black-box all your unit-tests. That's a big statement, but I'm willing to be proven wrong. (Yes, that's a challenge.)

        I basically agree, with a moderate number of caveats :-)

        • Legacy code. I'm going to carry on adding white-box unit tests to test-free code that hasn't been designed with testing in mind.
        • Code clarity. Sometimes I think the code remains clearer with a couple of private methods than it does by factoring out a separate support class. When that's the case, I'll quite happily write white-box tests if I need to tweak those methods.
        • Can TDD ever be black box? Since the next test is driven both by the spec and by your knowledge of the implementation, you can argue that it's always white box. The kind of unit tests that TDD produces are very different from the sort of unit tests a pure specification-based approach might take (see discussion on the TDD list for example).
Re^2: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
by prad_intel (Monk) on Oct 25, 2005 at 05:35 UTC
    Hi pg and other fellow monks,

    Again, as a representative of the testing community, I welcome your discussion on a topic which needs to be raised almost everywhere in the world that employs testing professionals.

    After being in testing for 2 and a half years I feel very few people have a clear concept of what testing is.

    I have never been outside India, so my comments are based on what I have seen in Bangalore.

    1) Most people here will ask "what tools do you know?" if you say you are a tester - but without being given a product, how will you test anything?
    2) Most people here are confused by testing terminology. Regression testing, which is basically selective re-testing, is misunderstood as executing all the cases.
    3) Many people I know claim fake experience to enter MNCs, thereby decreasing the quality of the product.
    4) There are more testing institutes than software companies in India, which teach testing tools and call it a testing course.
    5) Some companies make developers do system testing on top of unit testing for lack of funds to hire a tester.
    6) Even MNCs are not clear about the positions they offer; some companies are confused between QA and testing.
    7) A QA engineer does testing in my current company, and sometimes testing people are asked to look at the process, which is a QA role.
    8) If you join some Indian students' Yahoo groups, 10% of the members will have a query asking whether to join a testing course and what money they will get back.
    9) Testing is more of an attitude game than a profession.
    10) Every company should spend a day or two testing the product against scenarios which don't appear in the test plan/cases before user acceptance testing.
    11) You can rarely find a good tester in India, because every tester wants to move to development, feeling there is more money and more travel opportunity there.
    12) I have seen many people skip cases. What is the use of writing a case when the people testing it skip it?
    13) I have also noticed people writing duplicate test cases.
    14) Test data is another important aspect of testing, on which most companies constrain themselves if it costs a few $.
    15) When such testers become leads/managers, their management principles contribute to the degradation of quality.
    16) Testers here are rarely allowed to undergo the training the developers undergo, which is not a good sign for the quality of the product.
    17) Freshers are hired, directly given test cases, and asked to execute them. Neither the company nor the freshers worry that they should understand the domain/technology before they get hands on something.
    18) I am a tester by virtue and not by profession. Every other team member wants to jump to development or something else soon.
    19) All testing interviews in India today have one question in common: "why do you like testing, and will you be in testing for a long time?" Gosh, look at the poor state of my virtue.
    You people have seen more than I have and so will have better views; please excuse me if anything here is wrong, because I am naive :-D.

    Regards

    Prad
