Pytest is an excellent testing framework for Python with a wealth of plugins that extend its functionality. This blog post covers using the pytest-testmon plugin to speed up the feedback loop whilst developing your code.
Test, test, test
It is good practice to have a comprehensive suite of unit, integration and regression tests when writing and developing software. Unit tests ensure at a fine-grained level that each of the functions or methods you write works as expected, whilst integration and regression tests ensure that they all function together as expected.
Pytest is a popular alternative to the standard library's unittest for writing and running your test suite and is easy to incorporate into your project. Including and configuring it via `pyproject.toml` might look something like the following, which includes `pytest-cov` to summarise coverage and `pytest-mpl` for testing Matplotlib output. Configuration of coverage is included that omits certain files.
```toml
[project.optional-dependencies]
tests = [
  "pytest",
  "pytest-cov",
  "pytest-mpl"
]

[tool.pytest.ini_options]
minversion = "8.0"
addopts = ["--cov", "--mpl", "-ra", "--strict-config", "--strict-markers"]
log_level = "INFO"
log_cli = true
log_cli_level = "INFO"
testpaths = [
  "tests",
]
filterwarnings = [
  "ignore::DeprecationWarning",
  "ignore::PendingDeprecationWarning",
  "ignore::UserWarning"
]
xfail_strict = true

[tool.coverage.run]
source = ["src"]
omit = [
  "**/_version.py",
  "*tests*",
  "**/__init__*",
]
```
Tests are typically organised with a single test file per Python sub-module and at least one test for each function, as in the sketch below. If you’re not familiar with pytest check out its documentation.
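As a minimal sketch of this layout (the package, module and function names here are illustrative, not from a real project), a sub-module `mypackage/submodule1.py` containing a `function1()` would be covered by `tests/test_submodule1.py`:

```python
"""tests/test_submodule1.py -- illustrative tests for a hypothetical sub-module."""
from mypackage.submodule1 import function1


def test_function1() -> None:
    """Assumes function1() sums a list of numbers."""
    assert function1([1, 2, 3]) == 6
```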
Development cycle
A typical development cycle to add a new feature might involve the following steps, if you are following the Test Driven Development paradigm…
- Write a unit test.
- Write code to add functionality.
- Run test suite to see if it passes.
Alternatively, if fixing bugs you might…
- Add new test to cover the bug.
- Run test to check new test fails.
- Attempt to fix the bug.
- Run test suite to see if it passes.
In either scenario you will typically iterate over the steps a few times until everything is working.
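For the bug-fixing workflow the first step is a test that reproduces the problem and fails. A hedged sketch, assuming a hypothetical bug report that `function1()` raises `TypeError` when passed `None`:

```python
"""A regression test written before fixing a hypothetical bug in function1()."""
from mypackage.submodule1 import function1


def test_function1_handles_none() -> None:
    """Bug: function1(None) raised TypeError; the fix should return 0 instead.

    Running this test first and watching it fail confirms it exercises the bug.
    """
    assert function1(None) == 0
```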
With pytest, running your test suite is simple: you call `pytest` from the root of your repository.
```bash
pytest
```
If you want to only run tests on a specific sub-module of your package you can do so.
```bash
pytest tests/test_submodule1.py
```
This will run all tests within the file `tests/test_submodule1.py`, but you can have even finer-grained control and run an individual test within it. For example, if you are fixing a bug in your `function1()` function you can run the corresponding test, assuming you follow the conventional nomenclature, with the following:
```bash
pytest tests/test_submodule1.py::test_function1
```
This would run only `test_function1()` defined within `tests/test_submodule1.py`, which tests the `function1()` function.
Similarly, if you were developing a new feature and writing `function2()` in `submodule1` you could have a test `test_function2()` and explicitly test it with:
```bash
pytest tests/test_submodule1.py::test_function2
```
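As an aside, pytest can also select tests by keyword expression with its `-k` option, which saves typing full node IDs when several related tests match a pattern:

```bash
# Run every collected test whose name matches the keyword expression
pytest -k "function2"
```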
As you iterate through the development cycle you could manually run your tests, which works fine, but doesn’t account for any integration tests where your functions are used as part of a pipeline.
Commits as Units of Work
In software development, small units of related work are typically aggregated into commits within Git, which gives us a natural point at which to undertake testing, i.e. when we are ready to add our work to the Git history via a commit. Thus, before making a commit we should make sure that our test suite passes. Naturally we could run the whole suite with `pytest` (and even parallelise it so it runs faster with pytest-xdist), but as a software package grows so does the number of tests, and it might take a while to run the test suite. We’ve seen already that it is possible to run individual tests as desired, but this might take a little work if there are several tests that need running when fixing bugs, and regression tests need to be run as well as unit tests.
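As a brief sketch of the parallelisation option, pytest-xdist's `-n auto` flag spawns one worker per available CPU core:

```bash
pip install pytest-xdist
# Distribute the test suite across all available CPU cores
pytest -n auto
```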
Wouldn’t it be great if we could automatically run only the tests that are affected by the changes that we have made when making a commit?
Say Hello to pytest-testmon
This is where pytest-testmon comes in useful. It “selects and executes only tests you need to run” and achieves this by “collecting dependencies between tests and all executed code and comparing the dependencies against changes”. Basically, given the dummy example above, it would identify the tests that involve either `function1()` or `function2()` and only run those tests.
Installation
You can test out `pytest-testmon` by installing it in your virtual environment. Before it becomes useful, though, it has to analyse your code base and build a database of how your tests relate to the package's modules, functions, classes and methods, so after installing it you should run `pytest --testmon` to create this database.
```bash
pip install pytest-testmon
pytest --testmon
```
This creates, by default, the file `.testmondata`, which is an SQLite database of the functions, classes and methods and how they relate to your test suite. Once you have installed pytest-testmon and built your database, subsequent calls to `pytest --testmon` will work out which tests are affected by changes and only run those.
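If you're curious you can peek inside this database, assuming you have the sqlite3 command-line tool installed (the exact tables depend on the pytest-testmon version):

```bash
# List the tables pytest-testmon maintains in its dependency database
sqlite3 .testmondata ".tables"
```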
This is an improvement, as you can now run `pytest --testmon` to ensure that only tests affected by the changes you are making are run, and it helps shorten the feedback loop because you only have to wait for a subset of the tests to run. But you are still running the tests manually before you make your `git commit`. What if there were a way to automate this step?
Hello pre-commit my old friend
I’m a big fan of pre-commit and have written about it many times. We can define a pre-commit local hook to run `pytest --testmon` automatically before each commit is made, which not only removes the task of having to remember to run `pytest --testmon` manually but ensures that our test suite passes before we can make commits.
Add pytest-testmon to package dependencies
You will want to make sure that anyone else who wishes to contribute to your code also uses `pytest-testmon`, to help ensure that pull requests made against your repository continue to pass all tests. Here we add it under the `dev` subset of optional dependencies in the project's `pyproject.toml`, as only those developing the software will want to install it, and we want to avoid installing it unnecessarily in, say, Continuous Integration.
```toml
[project.optional-dependencies]
dev = [
  "pytest-testmon",
]
```
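Contributors can then pull in these development dependencies with an editable install; a sketch assuming the `tests` and `dev` extras defined in this article:

```bash
# Editable install of the package along with its tests and dev extras
pip install -e ".[tests,dev]"
```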
Configure pre-commit
We can now add a local hook to the `.pre-commit-config.yaml` file which will run `pytest --testmon` on each commit. We define the `repo: local` so that it only runs on local systems and not in Continuous Integration, and give it appropriately named `id` and `name` values. The `entry` is the command we wish to run, i.e. `pytest --testmon`, and we set the `language: system`. We also set `pass_filenames: false` so that pre-commit doesn't pass the staged file names as arguments to pytest, which would otherwise restrict test collection to just those files. Finally we restrict it to `files` that end in `\.py$` so that it won't run if you change, say, your documentation files that are written in Markdown (`.md`).
```yaml
repos:
  - repo: local
    hooks:
      - id: pytest
        name: Pytest (testmon)
        entry: pytest --testmon
        language: system
        pass_filenames: false
        files: \.py$
```
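Remember that pre-commit hooks only fire once they have been installed into the repository's Git hooks:

```bash
# Install the Git hook so the local pytest-testmon hook runs on every commit
pre-commit install
```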
pytest configuration
If you don’t already have a `tool.pytest.ini_options` section in your `pyproject.toml`, now would be a good time to add one. You don’t need any specific configuration for `pytest --testmon`, but the options defined here will be used when `pytest --testmon` is run by the pre-commit hook we have just defined. The following is the basic configuration introduced at the start of this article, adding in `pytest-testmon` as a `dev` optional dependency under `project.optional-dependencies`.
```toml
[project.optional-dependencies]
tests = [
  "pytest",
  "pytest-cov",
  "pytest-mpl"
]
dev = [
  "pytest-testmon",
]

[tool.pytest.ini_options]
minversion = "8.0"
addopts = ["--cov", "--mpl", "-ra", "--strict-config", "--strict-markers"]
log_level = "INFO"
log_cli = true
log_cli_level = "INFO"
testpaths = [
  "tests",
]
filterwarnings = [
  "ignore::DeprecationWarning",
  "ignore::PendingDeprecationWarning",
  "ignore::UserWarning"
]
xfail_strict = true

[tool.coverage.run]
source = ["src"]
omit = [
  "**/_version.py",
  "*tests*",
  "**/__init__*",
]
```
Package Upgrades
You may find periodically that `pytest-testmon` reports that the installed packages have changed and the database needs updating. If this is the case, simply run `pytest --testmon` again manually and it should update the database.
Test It Out
Add a new test, fix a bug or add a new feature (with a test of course!), then run `git add -u . && git commit -m "bug: Fixing bug 5"`, and you should find that only the tests affected by the changed code are run as part of the pre-commit hook. 🪄
Summary
Speeding up the feedback loop for software development by pushing the identification of errors in your code further “left”, i.e. earlier in the development cycle, is really useful, and leveraging pytest-testmon as a pre-commit hook is a simple way of achieving this. It helps you capture and fix changes that break your test suite before making commits to your Git history, which helps remove the “fixing tests” commits that litter many projects.