
Test Coverage

Explore how to measure test coverage in your Python projects with pytest and the pytest-cov plugin. Understand different coverage metrics like line, branch, and statement coverage, configure coverage reporting, and apply strategies to improve your test suite's effectiveness to ensure more reliable and maintainable code.

Introduction to test coverage

Test coverage plays a crucial role in ensuring the effectiveness and reliability of software testing. It measures the extent to which the source code of a software application is tested by test cases. High test coverage indicates that a significant portion of the code has been exercised by tests, reducing the risk of undetected bugs and improving overall software quality.

Test coverage quantifies how much of the code is executed by the test suite and highlights untested areas. It helps identify gaps in testing and ensures that all code paths and logic are exercised. It is typically expressed as a percentage, indicating the proportion of code executed by tests.

Test coverage helps ensure that the code is thoroughly tested, reducing the risk of undetected bugs. It provides confidence in our codebase and allows for easier maintenance and refactoring. Test coverage metrics serve as a quantitative measure of code quality and can help in identifying areas for improvement.

Types of test coverage metrics

There are many different test coverage metrics. We will look at some of the most widely used in development.

Line coverage

Line coverage measures the percentage of lines of code that are executed by tests. It determines whether each line of code in the tested codebase has been executed at least once by the test suite. This metric helps identify lines that have not been covered and might indicate potential areas where bugs could exist.

def add_numbers(a, b):
    result = a + b
    return result

def subtract_numbers(a, b):
    result = a - b
    return result
Line coverage

If the test suite only executes the add_numbers function, the line coverage would be 50% because only half of the lines in the code snippet have been executed by tests.
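To make the 50% figure concrete, the sketch below exercises only add_numbers and then computes the line coverage by hand, using the line counts from the snippet above:

```python
def add_numbers(a, b):
    result = a + b
    return result

def subtract_numbers(a, b):
    result = a - b
    return result

# A test suite that only exercises add_numbers
def test_add_numbers():
    assert add_numbers(2, 3) == 5

test_add_numbers()

# Only the 3 lines of add_numbers ran; the 3 lines of subtract_numbers did not.
executed_lines, total_lines = 3, 6
print(f"line coverage: {executed_lines / total_lines:.0%}")
```

Adding a second test that calls subtract_numbers would bring the executed-line count to 6 and the coverage to 100%.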

Branch coverage

Branch coverage measures the percentage of decision points (branches) that are executed by tests. It determines whether each possible branch of a decision point has been taken at least once during test execution. Decision points include conditions, loops, and other control flow structures.

def is_even(number):
    if number % 2 == 0:
        return True
    else:
        return False
Branch coverage

To achieve 100% branch coverage for this code, the test suite needs to execute both branches of the if-else statement, ensuring that the code is tested with both even and odd numbers.
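A minimal way to hit both branches is a pair of tests, one with an even input and one with an odd input; a sketch:

```python
def is_even(number):
    if number % 2 == 0:
        return True
    else:
        return False

# Each test takes a different branch of the if/else
def test_even_number():
    assert is_even(4) is True

def test_odd_number():
    assert is_even(7) is False

test_even_number()
test_odd_number()
print("both branches executed")
```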

Statement coverage

Statement coverage measures the percentage of executable statements that are executed by tests. It determines whether each individual statement in the code has been executed at least once. This metric helps ensure that every line of code is tested, including function calls, assignments, and other statements.

def greet(name):
    if name:
        print(f"Hello, {name}!")
    else:
        print("Hello, anonymous!")

def add_numbers(a, b):
    result = a + b
    return result
Statement coverage

To achieve 100% statement coverage for this code, the test suite needs to execute each line of code, including the print statements in the greet function and the assignment statement in the add_numbers function.
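Three calls are enough to reach every statement: greet with a name, greet without one, and add_numbers; a sketch:

```python
def greet(name):
    if name:
        print(f"Hello, {name}!")
    else:
        print("Hello, anonymous!")

def add_numbers(a, b):
    result = a + b
    return result

# Together these calls execute every statement above
greet("Alice")   # executes the if-branch print
greet("")        # executes the else-branch print
assert add_numbers(1, 2) == 3
```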

Pytest coverage plugin

The pytest-cov plugin provides test coverage analysis for pytest projects. Let’s explore how to set up pytest-cov in the project.

Installation

To install pytest-cov, we use pip:

pip install pytest-cov

Running pytest with coverage

To run pytest with coverage analysis, we use the --cov option followed by the target directory or package:

pytest --cov=<target_directory_or_package>
Command to run pytest with coverage

For example, to run pytest with coverage for the entire project, we use:

pytest --cov=.

Let’s look at the following example:

class MathOperations:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
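A test module that drives every method, including the divide-by-zero branch, might look like the following sketch (the test names are illustrative, and the class is repeated inline so the example is self-contained; in the real project it would be imported from my_module):

```python
class MathOperations:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

ops = MathOperations()
assert ops.add(2, 3) == 5
assert ops.subtract(5, 2) == 3
assert ops.multiply(4, 3) == 12
assert ops.divide(10, 4) == 2.5

# The error branch must be triggered explicitly to reach 100% coverage
try:
    ops.divide(1, 0)
except ValueError as exc:
    assert str(exc) == "Cannot divide by zero"
else:
    raise AssertionError("expected ValueError")
```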

After running the above code, we get the following report on the terminal:

Name           Stmts   Miss  Cover
----------------------------------
my_module.py      11      0   100%
test.py           20      0   100%
----------------------------------
TOTAL             31      0   100%
Coverage report

In my_module.py, all 11 executable statements were executed during testing, resulting in 100% coverage: every line of code in my_module.py was exercised by the accompanying test suite. Likewise, all 20 statements in test.py were executed, leaving nothing untested.

Coverage thresholds

Coverage thresholds allow us to define minimum acceptable coverage levels for our project. By setting coverage thresholds, we can ensure that the tests adequately cover a certain percentage of the codebase. This helps maintain a high standard of test coverage and ensures that critical areas of our code are thoroughly tested.

To set coverage thresholds in the project, we can use the pytest-cov plugin along with pytest’s command-line options or configuration files. The pytest-cov plugin provides options to specify coverage thresholds, such as --cov-fail-under.

For example, to set a coverage threshold of 80% for the project, we can use the following command:

pytest --cov=myproject --cov-fail-under=80
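The same threshold can be baked into the project configuration so that every pytest run enforces it; a sketch of a pytest.ini using pytest's standard addopts setting:

```ini
# pytest.ini
[pytest]
addopts = --cov=myproject --cov-fail-under=80
```

With this in place, plain pytest invocations fail whenever total coverage drops below 80%.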

Generating coverage reports

By default, pytest-cov generates coverage reports in the terminal. However, we can also generate reports in different formats, such as HTML or XML.

Terminal report

The coverage report is displayed in the terminal after running pytest. It provides a summary of coverage metrics, showing, for each file, the number of statements, the number missed, and the resulting coverage percentage (plus branch figures when branch coverage is enabled), along with an overall total.

HTML report

The HTML report offers a more detailed view of coverage. It shows coverage information for each file, including line-by-line coverage highlighting and overall coverage percentages. To generate an HTML report, we use the --cov-report=html option:

pytest --cov=<target_directory_or_package> --cov-report=html
Command to generate an HTML report

This will generate the report in the htmlcov directory; open the index.html file there to view it.

XML report

The XML report is useful for integrating coverage analysis with other tools or generating custom reports. It provides coverage information in an XML format that can be processed programmatically. To generate an XML report, we use the --cov-report=xml option:

pytest --cov=<target_directory_or_package> --cov-report=xml
Command to generate the XML report

This will generate a coverage.xml file in the current directory.

Term-missing report

The --cov-report=term-missing option is used in conjunction with the pytest-cov plugin to generate a coverage report that includes information about the lines of code that are not covered by the tests. This option highlights the specific lines of code that are missing coverage in the terminal output.

When we run pytest with coverage enabled and use the --cov-report=term-missing option, pytest generates a coverage report that displays the coverage percentage along with a Missing column listing the line numbers (and line ranges) that were not executed.

The command for this will look like the following:

pytest --cov=myproject --cov-report=term-missing
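The resulting report adds a Missing column; an illustrative example (the file name and line numbers are made up):

```
Name           Stmts   Miss  Cover   Missing
--------------------------------------------
my_module.py      11      2    82%   15-16
--------------------------------------------
TOTAL             11      2    82%
```

Here, lines 15-16 of my_module.py were never executed, pointing directly at the code that still needs a test.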

Configuration file

To configure pytest-cov, we can create a configuration file named .coveragerc in the root directory of the project. This file allows us to customize various aspects of pytest-cov, such as coverage report format, coverage thresholds, and additional options.

Here’s an example of a .coveragerc file:

# .coveragerc
[run]
source = myproject
omit = tests/*
[report]
show_missing = True
fail_under = 80

In this example:

  1. The [run] section specifies the source directory where the project’s code is located. In this case, myproject is set as the source directory.

  2. The omit option in the [run] section specifies any files or directories that should be omitted from the coverage analysis. In this case, the tests/* directory is excluded from the coverage analysis.

  3. The [report] section configures the behavior of the coverage report.

  4. The show_missing option in the [report] section is set to True, which means the coverage report will display the lines of code that are not covered.

  5. The fail_under option in the [report] section specifies the minimum coverage percentage required for the tests to pass. In this example, the threshold is set to 80%.

To use this configuration file, we can run the pytest command with the --cov-config option, specifying the path to the .coveragerc file:

pytest --cov=myproject --cov-config=.coveragerc

Let’s take a look at the following example:

[run]
source = .
[report]
show_missing = True
fail_under = 90

We add the –addopts = –cov-config=.coveragerc option in the pytest configuration file. This way, pytest runs the command including that option.

Strategies to improve test coverage

Let’s also look at some strategies to improve test coverage.

Identifying untested code segments

Regularly reviewing coverage reports is essential to identify areas of low coverage or untested code segments. Coverage reports provide insights into which parts of the code are not adequately covered by tests.

By analyzing these reports, we can identify gaps in our test suite and prioritize efforts to improve coverage in those areas. For example, we might discover that certain error-handling scenarios or specific modules have low coverage. This awareness allows us to focus on writing additional tests to cover those areas.

Write effective tests

To improve coverage, it’s important to write test cases that specifically target different code areas. This includes writing tests for edge cases, error handling, and boundary conditions. By covering these critical areas, we increase the likelihood of catching bugs and ensure that the tests exercise the full range of expected behaviors.

For example, if we have a function that accepts user input, we can write tests to validate different input values, including valid and invalid inputs, to ensure comprehensive coverage.
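As an illustration, the hypothetical parse_quantity below (the function name and its positive-integer rule are assumptions) is exercised with valid, non-numeric, and non-positive inputs so that every path, including both error paths, is covered:

```python
# Hypothetical function: converts user input to a positive integer quantity
def parse_quantity(text):
    value = int(text)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Valid input exercises the happy path
assert parse_quantity("3") == 3

# Invalid inputs exercise both error-handling paths
for bad in ("abc", "0", "-2"):
    try:
        parse_quantity(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"expected ValueError for {bad!r}")

print("valid and invalid inputs covered")
```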

Handling challenging scenarios

Challenging scenarios, such as exception handling, complex branching, or asynchronous code, often require additional attention to improve coverage. These scenarios can involve specific code paths that are not easily covered by general tests.

By writing specialized tests that target these challenging scenarios, we can increase coverage in those areas. For example, if we have code that handles network failures or complex conditional statements, we can design tests to simulate those scenarios and ensure the corresponding code paths are executed.

Techniques for improving branch and path coverage

To improve branch and path coverage, we can employ various techniques, such as equivalence partitioning, boundary value analysis, and decision tables.

  • Equivalence partitioning involves dividing the input domain into sets of equivalent classes and designing tests to cover each class.

  • Boundary value analysis focuses on testing values at the boundaries of input ranges because these are often where issues occur.

  • Decision tables help create comprehensive test cases by capturing different combinations of inputs and expected outcomes.

By applying these techniques, we can systematically design tests that exercise different branches and paths in our code, leading to improved coverage.
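As an illustration of boundary value analysis, the hypothetical validate_age below (the function and its 18-65 accepted range are assumptions) is tested at, and just outside, each boundary:

```python
# Hypothetical validator: accepts ages from 18 to 65 inclusive
def validate_age(age):
    return 18 <= age <= 65

# Boundary value analysis: test each boundary and its nearest neighbors
cases = {17: False, 18: True, 65: True, 66: False}
for age, expected in cases.items():
    assert validate_age(age) is expected

print("all boundary cases pass")
```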

Test your learning!

Technical Quiz
1.

In pytest, which command-line argument is used to generate a coverage report?

A.

--cov-report

B.

--coverage

C.

--report-coverage

D.

--coverage-report

