Properly design and execute a software test and you're likely to catch a lot of anomalies before they cause problems. Get lazy with testing and you could find yourself encountering nasty bugs. Part of what makes testing difficult is that testers see only part of the picture, which makes it crucial to keep certain design considerations in mind.
At the Software Test Professionals Conference in Phoenix this October, software consultant Doug Hoffman will share his expertise on software testing.
In this Q&A, Hoffman discusses his test-execution model and suggests design considerations.
What are some of the key elements that can affect a software test?
Doug Hoffman: The elements influencing software under test [SUT] behavior come from the test inputs, precondition data, precondition program state and environmental factors.
What are the main reasons a test may not be repeatable?
Hoffman: The primary reason is that one of the important influences was not monitored or controlled. When the test failed the first time, the influence that caused the failure generated a situation where the SUT behavior was anomalous (e.g., a network time-out). It may or may not be a bug in the SUT. The condition may have caused the expected behavior even though the test did not expect it. On the other hand, the condition may have caused the SUT to encounter a bug (e.g., the program should gracefully handle network time-outs).
What are some design considerations to keep in mind when checking for outcomes?
Hoffman: When code has a bug, anything can happen. It may cause the SUT to touch unexpected files, disrupt services, corrupt data unrelated to the expected behavior, etc. There are infinite possibilities, so the tester's job is to choose the most important (or easiest) factors and design tests and oracles (an oracle is a mechanism for recognizing bad behavior) to expose them.
The typical oracle for a test is a check for expected results. It confirms that the feature or function being tested appears to do its job, but that barely scratches the surface of the possible outcomes. Some examples of overlooked outcomes are memory leaks, data corruption unrelated to the expected results, changed environment variables, or putting the SUT into the wrong internal state. Test-specific and generic oracle mechanisms may be implemented and run during or after the test to check for unanticipated side effects.
What are some common testing mistakes you see people make?
Hoffman: A big one is believing that a test passing means there isn't a bug in the area the test is checking. Pass means we didn't notice anything. It provides one data point, but may have missed a potential divide-by-zero case, buffer overflow, incorrect filtering, wrong equation (2+2 = 4 = 2*2), corruption of other data, etc. It gives us confirmation that the function or feature sometimes generates the expected result. The test itself may even be broken in a way that makes it always pass; because it passes, we never investigate, so we never find out.
Similarly, fail doesn't mean there is a bug. The good news is that we're likely to figure out whether a bug was encountered, because fail really means we need to investigate. We usually eventually figure out whether the SUT behavior is expected under the circumstances.
How does your test-execution model help developers?
Hoffman: Software developers may use the model to better understand the things that influence the behavior of their software. It really isn't much help for programmers, but it is extremely valuable for test designers and testers.
The model provides a framework for understanding factors that control SUT behavior so important ones may be monitored, controlled and checked. It goes way beyond the input-process-output model implicitly used by testers.
How does your test-execution model change existing architecture?
Hoffman: The main influence in test architecture is designing better controls, monitoring and checks. The improved control of the SUT reduces the likelihood of false alarms (tests failing when there is no bug in the SUT), and awareness of possible outcomes increases the likelihood that bugs encountered during testing will be caught.
About the author:
Maxine Giza is associate site editor for SearchSOA.com. She can be reached at email@example.com.