Effective Automated Testing

by eyepopslikeamosquito (Chancellor)
on Apr 18, 2015 at 08:50 UTC

I'll be giving a talk at work about improving our test automation. Feedback on the content and general approach is welcome, along with any automated testing anecdotes you'd like to share. Possible talk sections are listed below.

Automation Benefits

  • Reduce cost.
  • Improve testing accuracy/efficiency.
  • Regression tests ensure new features don't break old ones. Essential for continuous delivery.
  • Automation is essential for tests that cannot be done manually: performance, reliability, stress/load testing, for example.
  • Psychological. More challenging/rewarding. Less tedious. Robots never get tired or bored.

Automation Drawbacks

  • Opportunity cost: the bugs you might have found had you spent the same time on more manual testing.
  • An automated test suite needs ongoing maintenance, so test code should be well-designed and maintainable; avoid the common pitfall of "oh, it's only test code, so I'll just quickly cut-and-paste this".
  • Cost of investigating spurious failures. It is wasteful to spend hours investigating a test failure only to find that the code is fine, the tests are fine, and someone just kicked out a cable. This has been a chronic nuisance for us, so ideas are especially welcome on techniques that reduce the cost of investigating test failures (one cheap technique is sketched after this list).
  • May give a false sense of security.
  • Still need manual testing. Humans notice flickering screens and a white form on a white background.
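
One cheap way to reduce the cost of investigating failures is to make every failure carry its own context. A minimal sketch using Test::More, where get_status() is an invented stand-in for whatever your suite actually checks:

    use strict;
    use warnings;
    use Test::More;
    use Sys::Hostname;

    # Log the environment up front so a failure report is self-describing.
    diag("host=" . hostname() . " perl=$] time=" . scalar localtime);

    sub get_status { 'OK' }    # stand-in for the real call under test

    my $got = get_status();
    is($got, 'OK', 'service reports OK')
        or diag('raw status was: ' . explain($got));

    done_testing();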

When and Where Should You Automate?

  • Testing is essentially an economic activity. There are an infinite number of tests you could write. You test until you cannot afford to test any more. Look for value for money in your automated tests.
  • Tests have a finite lifetime. The longer the lifetime, the better the value.
  • The more bugs a test finds, the better the value.
  • Stable interfaces provide better value because it is cheaper to maintain the tests. Testing a stable API is cheaper than testing an unstable user interface, for instance.
  • Automated tests give great value when porting to new platforms.
  • Writing a test for customer bugs is good because it helps focus your testing effort around things that cost you real money and may further reduce future support call costs.

Adding New Tests

  • Add new tests whenever you find a bug (a regression test sketch follows this list).
  • Around code hot spots and areas known to be complex, fragile or risky.
  • Where you fear a bug. A test that never finds a bug is poor value.
  • Customer focus. Add new tests based on what is important to the customer. For example, if your new release is correct but requires the customer to upgrade the hardware of 1000 nodes, they will not be happy.
  • Documentation-driven tests. Go through the user manual and write a test for each example given there.
  • Add tests (and refactor code if appropriate) whenever you add a new feature.
  • Boundary conditions.
  • Stress tests. Make them big, but not too big: a test that takes too long to run is a barrier to running it often.
  • Tools. Code coverage tools tell you which sections of the code have not been tested. Other tools, such as static (e.g. lint) and dynamic (e.g. valgrind) code analyzers, are also useful.
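
To make the first two points concrete, here is a sketch of a regression test pinned to a made-up bug report, plus boundary-condition tests around a valid range; parse_port() is an invented stand-in for the code under test:

    use strict;
    use warnings;
    use Test::More;

    # Stand-in implementation; in practice you would use the real module.
    sub parse_port {
        my ($s) = @_;
        return undef unless defined $s && $s =~ /\A[0-9]+\z/;
        my $n = $s + 0;
        return ($n >= 1 && $n <= 65535) ? $n : undef;
    }

    # Regression test for (hypothetical) bug #1234: empty input used to
    # die instead of returning undef. Keep the bug id in the test name.
    is(parse_port(''), undef, 'bug #1234: empty input returns undef');

    # Boundary conditions around the valid port range.
    is(parse_port('0'),     undef, 'port 0 rejected');
    is(parse_port('1'),     1,     'lowest valid port accepted');
    is(parse_port('65535'), 65535, 'highest valid port accepted');
    is(parse_port('65536'), undef, 'port above range rejected');

    done_testing();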

Test Infrastructure and Tools

  • Single step, automated build and test. Aim for continuous integration.
  • Clear and timely build/test reporting is essential.
  • Quarantine flaky tests quickly; run them separately until they are solid, then return them to the main build. No broken windows.
  • Make it easy to find and categorize tests. Use test metadata.
  • Integrate automated tests with revision control, bug tracking, and other systems, as required.
  • Divide the test suite into components that can be run separately and in parallel; quick test turnaround time is crucial (see the prove sketch below).
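
With a standard Perl toolchain, prove already covers much of this; the directory layout here is only an example:

    # Run fast unit tests and slower integration tests as separate suites,
    # four test files in parallel.
    prove -j4 -r t/unit
    prove -j4 -r t/integration

    # Re-run only the tests that failed last time, and save state again.
    prove -r --state=failed,save t/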

Design for Testability

  • It is much easier/cheaper to write automated tests for systems that were designed with testability in mind in the first place.
  • Interfaces Matter. Make them: consistent, easy to use correctly, hard to use incorrectly, easy to read/maintain/extend, clearly documented, appropriate to audience, testable in isolation.
  • Dependency Injection is perhaps the most important design pattern for making code easier to test (a sketch follows this list).
  • Mock Objects are also frequently useful and are broader than just code. For example, I've written a number of mock servers in Perl (e.g. a mock SMTP server) so as to easily simulate errors, delays, and so on.
  • Consider ease of support and diagnosing test failures during design.
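
As a sketch of dependency injection plus a mock (Notifier and MockTransport are invented for illustration), note how the test never touches a real SMTP server:

    package Notifier;
    use strict;
    use warnings;

    # The transport is injected rather than hard-coded, so a test can
    # pass in a mock instead of a real SMTP client.
    sub new {
        my ($class, %args) = @_;
        return bless { transport => $args{transport} }, $class;
    }

    sub alert {
        my ($self, $msg) = @_;
        return $self->{transport}->send("ALERT: $msg");
    }

    package main;
    use Test::More;

    # A mock transport that simply records what it was asked to send.
    my @sent;
    sub MockTransport::send { my ($self, $body) = @_; push @sent, $body; return 1 }
    my $mock = bless {}, 'MockTransport';

    my $n = Notifier->new(transport => $mock);
    ok($n->alert('disk full'), 'alert reports success');
    is($sent[0], 'ALERT: disk full', 'message reached the injected transport');

    done_testing();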

Test Driven Development (TDD)

  • Improved interfaces and design. Especially beneficial when writing new code. Writing a test first forces you to focus on interface. Hard to test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion.
  • Easier Maintenance. Regression tests are a safety net when making bug fixes: a tested component cannot break, and a fixed bug cannot recur, without a test failing. Essential when refactoring.
  • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation.
  • Debugging. Spend less time in crack-pipe debugging sessions.
  • Automation. Easy to test code is easy to script.
  • Improved Reliability and Security. How does the code handle bad input?
  • Easier to verify the component with memory checking and other tools (e.g. valgrind).
  • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
  • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report.
  • Reduce time spent in System Testing.
  • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
  • Psychological. Instant and positive feedback; especially important during long development projects.
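
A minimal red-green illustration of the cycle (slug() is an invented example): the tests are written first, fail because slug() doesn't exist yet, and the simplest implementation is then added to make them pass.

    use strict;
    use warnings;
    use Test::More;

    # Step 1 (red): written before slug() existed, forcing the interface
    # decision up front.
    is(slug('Hello, World!'), 'hello-world', 'punctuation becomes hyphens');
    is(slug('  spaces  '),    'spaces',      'surrounding whitespace trimmed');

    # Step 2 (green): the simplest implementation that passes.
    sub slug {
        my ($text) = @_;
        $text = lc $text;
        $text =~ s/[^a-z0-9]+/-/g;    # collapse non-alphanumerics to hyphens
        $text =~ s/^-+|-+$//g;        # trim leading/trailing hyphens
        return $text;
    }

    done_testing();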

Replies
Re: Effective Automated Testing
by choroba (Bishop) on Apr 20, 2015 at 08:42 UTC
    Just an anecdote:

    My task was to add a new feature to our product. As the feature was rather complicated, I created some tests while coding it. When the release date was near, the project manager asked me whether I was finished. Almost, I replied, I'm still working on the tests. Don't waste your time, tests aren't part of the task, he said. Nevertheless, I finished the tests as well as the task the next day, still several days before the deadline. At almost the same time, the client changed the requirements and we had to add some additional features. Thanks to the tests, it didn't take me more than an hour. I can't imagine what I'd have done if the tests hadn't been there. Since then, I've created tests several times, and there hasn't been any complaint from the manager about wasting my time.

    Just another Perl hacker
Re: Effective Automated Testing
by einhverfr (Friar) on Apr 21, 2015 at 15:05 UTC

    I have a different view on when and what to test. I would say that a test that breaks when you fix a bug is a bad test. A test that never finds a bug may be a bad test or a good one.

    A lot of people do TDD with the idea that 100% test coverage is something to shoot for in itself. I am not in that camp. To me, contract-oriented design and testing go hand in hand. You don't want to test every possible behavior, because your view of the behavior may be wrong, and there are legitimate areas where you want to reserve the right to change your mind without breaking your tests.

    Instead you want to test guarantees. What do you promise? Why? What are the corner cases there that you need to check? Get those tested. You will usually find that this results in high test quality and coverage (though not 100%), that fixing bugs rarely breaks tests, and that the tests which do break are showing you real bugs.
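
    One concrete reading of this: test the documented promise, not incidental behavior. A sketch (sort_users() and its guarantees are invented for illustration):

        use strict;
        use warnings;
        use Test::More;

        # Hypothetical guarantees: returns users ordered by id, and never
        # modifies its input. Test exactly those promises, and nothing
        # about the (changeable) algorithm or tie-breaking.
        sub sort_users {    # stand-in implementation
            my (@users) = @_;
            return sort { $a->{id} <=> $b->{id} } @users;
        }

        my @in  = ({ id => 3 }, { id => 1 }, { id => 2 });
        my @out = sort_users(@in);

        is_deeply([map { $_->{id} } @out], [1, 2, 3], 'guarantee: ordered by id');
        is_deeply([map { $_->{id} } @in],  [3, 1, 2], 'guarantee: input untouched');

        done_testing();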

Re: Effective Automated Testing
by RonW (Vicar) on Apr 20, 2015 at 20:07 UTC
    A test that never finds a bug is poor value.

    Depends on why it never finds a bug, which highlights the importance of testing the tests.

    Ideally, as long as it can be demonstrated that the tests are correct, you want the tests to find no bugs.

    FWIW, our testing manager complains loudly when his team finds any bugs. In development, we do, of course, run tests, both automated and manual, both our own tests and our "ports" of their tests. Unfortunately, we can't directly run their tests because the testing team uses LabView and we don't. We've asked many times for "run time only" LabView licenses, but, so far, we have not succeeded in explaining to the C-Level managers how LabView would be useful to us.
