Pytest is an excellent testing framework for Python with a wealth of plugins that extend its functionality. This blog post covers using the pytest-testmon plugin to speed up the feedback loop whilst working on and developing your code.
Test, test, test
It is good practice to have a comprehensive suite of unit, integration and regression tests when writing and developing software. Unit tests ensure at a fine-grained level that each of the functions or methods you write works as expected, whilst integration and regression tests ensure that they all function together as expected.
Pytest is a popular alternative to the standard library's unittest for writing and running your test suite and is easy to incorporate into your project. Including and configuring it via pyproject.toml might look something like the following, which also includes pytest-cov to summarise coverage and pytest-mpl for testing Matplotlib output. Coverage configuration is included that omits certain files.
[project.optional-dependencies]
tests = [
  "pytest",
  "pytest-cov",
  "pytest-mpl"
]

[tool.pytest.ini_options]
minversion = "8.0"
addopts = ["--cov", "--mpl", "-ra", "--strict-config", "--strict-markers"]
log_level = "INFO"
log_cli = true
log_cli_level = "INFO"
testpaths = [
  "tests",
]
filterwarnings = [
  "ignore::DeprecationWarning",
  "ignore::PendingDeprecationWarning",
  "ignore::UserWarning"
]
xfail_strict = true

[tool.coverage.run]
source = ["src"]
omit = [
  "**/_version.py",
  "*tests*",
  "**/__init__*",
]
Tests are typically organised with a single file per Python sub-module, with at least one test for each function. If you’re not familiar with pytest, check out its documentation for more on how to organise your test suite.
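For illustration, a test file might look something like the following minimal sketch; the package layout (src/mypackage/submodule1.py) and the behaviour of function1() are hypothetical, purely to have something concrete to test.
# tests/test_submodule1.py -- a hypothetical example test file
from mypackage.submodule1 import function1


def test_function1():
    """function1() is assumed, for illustration, to double its input."""
    assert function1(2) == 4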
Development cycle
A typical development cycle to add a new feature might involve the following steps if you are following the Test-Driven Development (TDD) paradigm (a short sketch of steps 1 and 2 follows the list).
1. Write a unit test.
2. Write code to add the functionality.
3. Run the test suite to see if it passes.
4. If it fails, return to step 2.
5. If it passes, all is good.
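As a rough sketch of steps 1 and 2, sticking with the hypothetical mypackage.submodule1 from above, you might first write the test and then the code that makes it pass.
# Step 1 -- write the unit test first; it fails because function2() does not exist yet
# tests/test_submodule1.py
from mypackage.submodule1 import function2


def test_function2():
    """function2() is assumed, for illustration, to add two numbers."""
    assert function2(2, 3) == 5


# Step 2 -- write the code to add the functionality
# src/mypackage/submodule1.py
def function2(a: float, b: float) -> float:
    """Return the sum of a and b."""
    return a + b
Running the test suite (step 3) should now pass; if it doesn’t, you return to step 2 and adjust the implementation.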
Alternatively, if fixing bugs you might follow the steps below (again, a short sketch of step 1 follows the list).
1. Add a new test to cover the bug.
2. Run the test to check that the new test fails.
3. Attempt to fix the bug.
4. Run the test suite to see if it passes.
5. If it fails, return to step 3.
6. If it passes, all is good.
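For example, continuing with the hypothetical function1() above, step 1 might add a regression test that captures the reported behaviour; the bug here (mishandling negative input) is purely illustrative.
# tests/test_submodule1.py
from mypackage.submodule1 import function1


def test_function1_negative_input():
    """Regression test for a hypothetical bug: doubling a negative number should stay negative."""
    assert function1(-2) == -4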
In either scenario you will typically iterate over the steps a few times until everything is working, using pytest to run your test suite from the root of your package’s directory.
pytest
But if you have a large test suite you probably want to avoid running all the tests each time. You can do so by telling pytest which file to run tests from by providing it as an argument. Here we run only the test_submodule1.py tests.
pytest tests/test_submodule1.py
You can have even finer-grained control over your tests. For example, if you want to run only an individual test within tests/test_submodule1.py you can do that too by specifying the name of the test you want to run after the test file name, so if you had test_function1() defined in tests/test_submodule1.py you would run the following.
pytest tests/test_submodule1.py::test_function1
Similarly, if you were developing a new feature and writing function2 in submodule1 you could have a test test_function2() and run it explicitly with the following.
pytest tests/test_submodule1.py::test_function2
As you iterate through the development cycle you could manually run your unit tests to find out if they pass or fail, but that doesn’t account for any integration tests where your functions are used as part of a pipeline. Knowing which tests are affected by your changes can be challenging, particularly in a large code base where you want to avoid running the whole test suite each time.
Commits as Units of Work
In software development small units of related work are typically aggregated into commits within Git (or other version control systems), which gives us a natural point at which to undertake testing, i.e. when we are ready to add our work to the Git history via a commit. Thus before making a commit we should make sure that our test suite passes. Naturally we could run the whole suite with pytest (and even parallelise it so it runs faster with pytest-xdist), but as a software package grows so does the number of tests and it might take a while to run the test suite. We’ve seen already that it is possible to run individual tests as desired, but this can take a little work if there are several tests that need running, for example when fixing bugs and regression tests need to be run as well as unit tests.
Wouldn’t it be great if we could automatically run only the tests that are affected by the changes that we have made when making a commit?
Enter pytest-testmon
This is where pytest-testmon comes in useful: it “selects and executes only tests you need to run” and achieves this by “collecting dependencies between tests and all executed code and comparing the dependencies against changes”. Basically, given the dummy examples above, it would identify the tests that involve either function1() or function2() and only run those tests.
Installation
You can install pytest-testmon in your virtual environment and run it manually. Before it becomes useful, though, pytest-testmon has to analyse your code base and build a database of how the tests relate to your package’s modules, functions, classes and methods, so after installing it you should run pytest --testmon to create that database.
pip install pytest-testmon
pytest --testmon
This creates, by default, the files .testmondata, .testmondata-shm and .testmondata-wal, which comprise an SQLite database of the functions, classes and methods and how they relate to your test suite. Once you have installed pytest-testmon and built the database, subsequent calls to pytest --testmon will work out which tests are affected by your changes and only run those.
This is an improvement, as you can now run pytest --testmon to ensure that only tests affected by the changes you are making are run, and it helps shorten the feedback loop because you only have to wait for a subset of the tests to run. But you are still running the tests manually before you make your git commit. We want to automate this step: how can we ensure pytest --testmon is run prior to commits to make sure the tests affected by the changed files pass?
Hello pre-commit my old friend
I’m a big fan of pre-commit and have written about it many times. We can define a local pre-commit hook to run pytest --testmon automatically before each commit is made, which not only removes the need to remember to run pytest --testmon manually but also ensures that our test suite passes before we can make commits.
Configure pre-commit
We define the local hook in the repository’s .pre-commit-config.yaml file, which will run pytest --testmon on each commit. The repo: local is defined so that it only runs on local systems and not in Continuous Integration (CI), where typically the whole test suite will be run anyway, and we give the hook appropriately named id and name values. The entry is the command we wish to run, i.e. pytest --testmon, and we set language: system. Finally, we restrict it to files that end in \.py$ so that it won’t run if you change, say, your documentation files that are written in Markdown (.md).
repos:
  - repo: local
    hooks:
      - id: pytest
        name: Pytest (testmon)
        entry: pytest --testmon
        language: system
        files: \.py$
Package Upgrades
You may find periodically that pytest-testmon reports that the installed packages have changed and the database needs updating. If this is the case, simply run pytest --testmon again manually and it should update the database.
Add pytest-testmon to package dependencies
You will want to make sure that anyone else who wishes to contribute to your code also uses pytest-testmon to help ensure that pull requests made against your repository continue to pass all tests. Here we add it under the dev subset of optional dependencies in the project’s pyproject.toml, as only those developing the software will want to install it, and we want to avoid installing it unnecessarily in, say, Continuous Integration.
[project.optional-dependencies]
dev = [
  "pytest-testmon",
]
If you don’t already have a tool.pytest.ini_options section in your pyproject.toml, now would be a good time to add one (see the example at the start of this article). You don’t need any specific configuration for pytest --testmon, just make sure it’s an optional dependency as above.
Test It Out
Add a new test, fix a bug or add a new feature (with a test of course!), then git add -u . && git commit -m "bug: Fixing bug 5", and you should find that only the tests affected by the changed code are run as part of the pre-commit hook. 🪄
Summary
Speeding up the feedback loop for software development by pushing the identification of errors in your code further “left”, i.e. earlier in the development cycle, is really useful, and leveraging pytest-testmon as a pre-commit hook is a simple way of achieving this. It helps you catch and fix changes that break your test suite before making commits to your Git history, which helps remove the “fixing tests” commits that litter many projects.
Citation
@online{shephard2025,
  author = {Shephard, Neil},
  title = {Pytest Testmon},
  date = {2025-09-24},
  url = {https://blog.nshephard.dev/posts/pytest-testmon/},
  langid = {en}
}