More Keys to Effective Agile Testing
Learn other key factors for effective Agile testing.
We'll cover the following
- Ensure developers take primary responsibility for testing their own code
- Measure code coverage
- Beware of abuse of test coverage measures
- Monitor static code metrics
- Write test code with care
- Prioritize maintaining the test suites
- Have the separate test organization create and maintain acceptance tests
- Keep unit tests in perspective
Aside from including testers on the development teams and using automated tests, keep in mind the following keys to effective Agile testing.
Ensure developers take primary responsibility for testing their own code
Integrating testers into development teams can have the unintended consequence of developers not testing their own code—the opposite of what is intended! Developers have primary responsibility for the quality of their work, including testing. Beware of these warning signs:
- Backlog items are closed only toward the end of each sprint (this implies that testing is occurring after coding, and separately).
- Developers move on to other coding tasks before driving previous tasks to the Definition of Done (DoD).
Measure code coverage
Writing test cases before writing the code (“test-first”) can be a useful discipline, but we’ve found that, for new code bases, code-coverage measurement of unit tests combined with downstream test automation is more critical. A unit test code coverage percentage of 70% is a useful, practical level to aim for with new code. Code coverage of 100% by unit tests is rare and usually far past the point of diminishing returns. (There are exceptions for safety-critical systems, of course.)
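One common way to hold the line at a threshold like 70% is to make the build fail when coverage drops below it. As an illustrative sketch only—assuming a Python project using pytest with the pytest-cov plugin, and with `myapp` standing in for your package name:

```ini
# pytest.ini — illustrative configuration, assumes pytest-cov is installed.
[pytest]
# Fail the test run if unit test statement coverage drops below 70%.
addopts = --cov=myapp --cov-fail-under=70
```

Enforcing the floor in CI, rather than reporting it passively, keeps coverage from eroding unnoticed between releases.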
Among the organizations we have worked with, best-of-breed teams typically approach a ratio of roughly 1:1 between test code and production code, counting both unit test code and higher-level test code. Again, this varies by type of software. Safety-critical software will have different standards than business software or entertainment software.
Beware of abuse of test coverage measures
We have found that measures like “70% statement coverage” are prone to abuse more frequently than you might expect. We’ve seen teams deactivate failing test cases to increase their pass ratios or create test cases that always return success.
In cases like these, it’s more effective to fix the system than the person. This behavior suggests that the teams believe that development work is a higher priority than test work. Leadership needs to communicate that test and QA are as important as coding. Help your teams understand the purpose and value of the tests and emphasize that a number like 70% is simply an indicator—it is not the goal itself.
Monitor static code metrics
Code coverage and other test metrics are useful but don't tell the entire quality story. Static code-quality metrics are also important: security vulnerabilities, cyclomatic complexity, depth of decision nesting, number of routine parameters, file size, folder size, routine length, use of magic numbers, embedded SQL, duplicate or copied code, quality of comments, adherence to coding standards, and so on. These metrics provide hints about which areas of code might need more work to maintain quality.
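To make one of these metrics concrete, here is a minimal sketch of cyclomatic complexity computed with Python's standard `ast` module: start at 1 and add one for each branch point. Production linters count more node types and handle edge cases; this version is illustrative only.

```python
# Minimal, illustrative cyclomatic complexity: 1 + number of branch
# points. Real static-analysis tools use a more complete node list.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)


def cyclomatic_complexity(source: str) -> int:
    """Return an approximate cyclomatic complexity for a code snippet."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))
```

A routine whose complexity keeps climbing is a candidate for refactoring and for extra test attention, since each branch point adds a path the tests should exercise.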
Write test code with care
Test code should follow the same code-quality standards as production code. It should use good naming, avoid magic numbers, be well-factored, avoid duplication, have consistent formatting, be checked into revision control, and so on.
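As a small hypothetical illustration of these standards in test code—descriptive test names and named constants instead of magic numbers—with `retries_remaining` standing in for a real production function:

```python
# Hypothetical example: applying production-quality standards to a test.
MAX_RETRIES = 3  # named constant instead of a bare "3" in the assertion


def retries_remaining(attempts_made: int) -> int:
    # Stand-in for the production function under test (an assumption
    # for this example).
    return MAX_RETRIES - attempts_made


def test_retries_remaining_counts_down_from_the_limit():
    # The test name states the behavior; the constant states the intent.
    assert retries_remaining(1) == MAX_RETRIES - 1
```

The same readability that helps future maintainers of production code helps future maintainers of the test suite, who must decide whether a failing test reflects a regression or an outdated expectation.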
Prioritize maintaining the test suites
Test suites tend to degrade over time, and it isn’t uncommon to find test suites in which a high percentage of the tests have been turned off. The team should include review and maintenance of the test suite as an integral cost of its ongoing development work and include test work as part of its DoD. This is essential for supporting the goal of keeping the software close to a releasable level of quality at all times—that is, for keeping defects from getting out of control.
Have the separate test organization create and maintain acceptance tests
If your company still maintains a separate test organization, it’s useful to have that organization assume primary responsibility for creating and maintaining acceptance tests. The development team will still create and run acceptance tests—continuing to do that provides important support for minimizing the gap between defect insertion and defect detection. But it will have secondary responsibility for that kind of work.
We often see acceptance tests performed in a separate QA environment. This is useful when the content of the integration environment is constantly in flux; the QA environment can be more stable.
Keep unit tests in perspective
A risk with Agile testing is overemphasizing code-level (unit) tests and underemphasizing tests of emergent properties such as scalability, performance, and so on—which become apparent when running integration tests of the larger software system. Be sure to include sufficient system-wide testing before a team declares itself done with a sprint.
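Tests of emergent properties often look different from unit tests: they assert a budget on a system-level behavior rather than a single routine's output. The following is an illustrative sketch only; the 0.5-second budget and the `process_batch` stand-in are assumptions for the example, not recommended values.

```python
# Illustrative system-level check that unit tests tend to miss: a
# latency budget on an end-to-end operation. The budget (0.5 s) and
# process_batch are assumptions for the example.
import time

LATENCY_BUDGET_SECONDS = 0.5


def process_batch(items):
    # Stand-in for a real end-to-end operation.
    return [item * 2 for item in items]


def test_batch_stays_within_latency_budget():
    start = time.perf_counter()
    process_batch(range(100_000))
    elapsed = time.perf_counter() - start
    assert elapsed < LATENCY_BUDGET_SECONDS
```

Running checks like this before declaring a sprint done surfaces scalability and performance problems while the responsible changes are still fresh.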