Pytest Fail and Skip

python · testing · pytest

Author: Neil Shephard

Published: April 25, 2024

Pytest is an excellent framework for writing tests in Python. Sometimes, though, tests don’t pass and you might want to mark them as expected failures or skip them.

Pytest has a few decorators to help with this: you can skip tests with @pytest.mark.skip or @pytest.mark.skipif, or allow tests to fail with @pytest.mark.xfail.
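As a quick sketch of how each looks in practice (the test names and bodies here are placeholders, not from the pytest-examples repository):

import sys

import pytest


@pytest.mark.skip(reason="unconditionally skipped")
def test_always_skipped() -> None:
    """Always skipped, whatever the environment."""
    assert True


@pytest.mark.skipif(sys.version_info < (3, 11), reason="needs Python >= 3.11")
def test_conditionally_skipped() -> None:
    """Skipped only when the condition evaluates to True."""
    assert True


@pytest.mark.xfail(reason="known bug")
def test_expected_to_fail() -> None:
    """Runs, but a failure here is reported as XFAIL rather than a failure."""
    assert False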

We’ll use the pytest-examples repository for looking at how these work.

git clone git@github.com:ns-rse/pytest-examples.git
cd pytest-examples

Why?

There are a number of reasons why you may wish to deliberately skip tests or allow them to fail, and Eric covers them nicely. In brief…

  • Incompatible Python or Package Version - some tests don’t pass under a certain version.
  • Platform specific issues - some tests fail on a specific platform (see the sketch after this list).
  • External dependencies - if you haven’t got round to mocking a service.
  • Local dependencies - excluding tests running under Continuous Integration that rely on local dependencies.
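As a sketch of the platform-specific case, a hypothetical POSIX-only test (not from the pytest-examples repository) could be skipped on Windows with pytest.mark.skipif:

import os
import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32", reason="relies on POSIX path separators")
def test_posix_paths() -> None:
    """Hypothetical test that only holds on POSIX systems."""
    assert os.path.join("a", "b") == "a/b"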

Choose your Partner - Skip (to my lou)

If you want to unconditionally skip a test, decorate it with @pytest.mark.skip(). Adding a reason via the reason="<reason>" argument is useful as it helps others, including your future self, understand why. If we use tests/test_divide.py from the pytest-examples repository we can skip the redundant test_divide_unparameterised() function as it’s already covered by the parameterised test that follows.

import pytest
from pytest_examples.divide import divide  # import path assumed from the repository layout

@pytest.mark.skip(reason="redundant - covered by test_divide()")
def test_divide_unparameterised() -> None:
    """Test the divide function."""
    assert divide(10, 5) == 2

When we run the test we are told it is skipped. To keep things fast we run just that test using the command-line form pytest <file>::<test_function>, but your IDE may support running individual tests (in Emacs you can use pytest.el to the same effect).

 pytest tests/test_divide.py::test_divide_unparameterised
======================================= test session starts ============================================
platform linux -- Python 3.11.9, pytest-7.4.4, pluggy-1.5.0
Matplotlib: 3.8.4
Freetype: 2.6.1
rootdir: /mnt/work/git/hub/ns-rse/pytest-examples/main
configfile: pyproject.toml
plugins: durations-1.2.0, xdist-3.5.0, pytest_tmp_files-0.0.2, mpl-0.17.0, lazy-fixture-0.6.3, cov-5.0.0
collected 1 item

tests/test_divide.py s                                                                            [100%]

---------- coverage: platform linux, python 3.11.9-final-0 -----------
Name                        Stmts   Miss  Cover
-----------------------------------------------
pytest_examples/divide.py      16      8    50%
pytest_examples/shapes.py       5      5     0%
-----------------------------------------------
TOTAL                          21     13    38%

====================================== short test summary info =========================================
SKIPPED [1] tests/test_divide.py:9: redundant - covered by test_divide()
====================================== 1 skipped in 0.59s ==============================================

Choose your Partner - Failing (…)

Nothing in the old dance about failing, but you can selectively allow tests to fail using the pytest.mark.xfail() marker. If you know a test is going to fail then, rather than commenting it out, you can mark it as such. If we update the test condition so that we know it will fail, we mark the expected failure as follows.

@pytest.mark.xfail(reason="demonstrate expected failure")
def test_divide_unparameterised() -> None:
    """Test the divide function."""
    assert divide(10, 5) == 3

And running pytest on this shows the expected failure.

 pytest tests/test_divide.py::test_divide_unparameterised
====================================== test session starts =============================================
platform linux -- Python 3.11.9, pytest-7.4.4, pluggy-1.5.0
Matplotlib: 3.8.4
Freetype: 2.6.1
rootdir: /mnt/work/git/hub/ns-rse/pytest-examples/main
configfile: pyproject.toml
plugins: durations-1.2.0, xdist-3.5.0, pytest_tmp_files-0.0.2, mpl-0.17.0, lazy-fixture-0.6.3, cov-5.0.0
collected 1 item

tests/test_divide.py x                                                                            [100%]


---------- coverage: platform linux, python 3.11.9-final-0 -----------
Name                        Stmts   Miss  Cover
-----------------------------------------------
pytest_examples/divide.py      16      6    62%
pytest_examples/shapes.py       5      5     0%
-----------------------------------------------
TOTAL                          21     11    48%

====================================== short test summary info =========================================
XFAIL tests/test_divide.py::test_divide_unparameterised - demonstrate expected failure
====================================== 1 xfailed in 0.59s ==============================================
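One related point not shown in the run above: if an xfail-marked test unexpectedly passes it is reported as XPASS rather than failing the run. Passing strict=True to pytest.mark.xfail turns an unexpected pass into a hard failure, as a sketch:

import pytest

# divide imported as in the earlier example
@pytest.mark.xfail(reason="demonstrate expected failure", strict=True)
def test_divide_unparameterised() -> None:
    """With strict=True an XPASS (unexpected pass) fails the test run."""
    assert divide(10, 5) == 3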

Conditional Skipping/Failing

The pytest.mark.skipif() and pytest.mark.xfail() markers both have a condition argument which allows you to use a Boolean (i.e. a statement that evaluates to True or False) to determine whether they are applied. Any Python expression that evaluates to True or False can be used, and for backwards compatibility strings are still accepted. If the condition argument is used with pytest.mark.xfail() then the reason argument must also be given, indicating why the test is being skipped or is expected to fail.

Here we fail the test only if the Python version is 3.10.* (note the need to import sys and the use of sys.version_info[:2] to extract a tuple of the major and minor Python version).

import sys

@pytest.mark.xfail(sys.version_info[:2] == (3, 10), reason="Skip under Python 3.10")
def test_divide_unparameterised() -> None:
    """Test the divide function."""
    assert divide(10, 5) == 3
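For comparison, a sketch of the pytest.mark.skipif() counterpart, which would skip the test entirely under Python 3.10 rather than run it and expect failure (the test name here is hypothetical):

import sys

import pytest


@pytest.mark.skipif(sys.version_info[:2] == (3, 10), reason="Skip under Python 3.10")
def test_divide_not_under_310() -> None:
    """Hypothetical variant: skipped, not executed, under Python 3.10."""
    assert divide(10, 5) == 2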

Skipping/Failing Parameterised Tests

In many instances you can parameterise tests, and you can use the markers we’ve covered against the whole test. But what if you want to skip not all of the parameterised tests but only specific ones? This is possible because, as covered previously, you can use the pytest.param() function to define your parameters and give them id="some text" to help identify them. pytest.param() also has a marks= option which allows you to apply pytest.mark.* to just that set of parameters, so we can add pytest.mark.xfail() or pytest.mark.skip[if]() to specific sets of parameters.

Instead of placing the marker before the test so that it applies to every set of parameters, you use pytest.param() for each set of parameters and add pytest.mark.xfail() (or other variants) as arguments to the marks option.

Here we mark the parameter set with id of zero division error with marks=pytest.mark.xfail as we know that dividing by zero will raise an error, and so that set of parameters is expected to fail rather than pass.

@pytest.mark.parametrize(
    ("a", "b", "expected"),
    [
        pytest.param(10, 5, 2, id="ten divided by five"),
        pytest.param(9, 3, 3, id="nine divided by three"),
        pytest.param(5, 2, 2.5, id="five divided by two"),
        pytest.param(0, 100, 0, id="zero divided by one hundred"),
        pytest.param(
            10, 0, ZeroDivisionError, id="zero division error", marks=pytest.mark.xfail(reason="Expected to fail")),
    ],
)
def test_divide(a: float | int, b: float | int, expected: float) -> None:
    """Test the divide function."""
    assert divide(a, b) == expected

 pytest tests/test_divide.py::test_divide
====================================== test session starts =============================================
platform linux -- Python 3.11.9, pytest-7.4.4, pluggy-1.5.0
Matplotlib: 3.8.4
Freetype: 2.6.1
rootdir: /mnt/work/git/hub/ns-rse/pytest-examples/main
configfile: pyproject.toml
plugins: durations-1.2.0, xdist-3.5.0, pytest_tmp_files-0.0.2, mpl-0.17.0, lazy-fixture-0.6.3, cov-5.0.0
collected 5 items

tests/test_divide.py ....x                                                                        [100%]

---------- coverage: platform linux, python 3.11.9-final-0 -----------
Name                        Stmts   Miss  Cover
-----------------------------------------------
pytest_examples/divide.py      16      3    81%
pytest_examples/shapes.py       5      5     0%
-----------------------------------------------
TOTAL                          21      8    62%

====================================== short test summary info =========================================
XFAIL tests/test_divide.py::test_divide[zero division error] - Expected to fail
====================================== 4 passed, 1 xfailed in 0.37s ====================================

The condition/reason arguments to both the pytest.mark.skipif() and pytest.mark.xfail() markers are still valid here and can be used to conditionally mark specific sets of parameters to be skipped, or to indicate that they will fail, under certain conditions.

To mark the test with id of five divided by two as expected to fail under Python 3.10 we would do the following (again note the need to import sys and its use as the condition positional argument).

import sys

...

@pytest.mark.parametrize(
    ("a", "b", "expected"),
    [
        pytest.param(10, 5, 2, id="ten divided by five"),
        pytest.param(9, 3, 3, id="nine divided by three"),
        pytest.param(5, 2, 2.5, id="five divided by two", marks=pytest.mark.xfail(sys.version_info[:2] == (3, 10),
                                                                                  reason="Skip under Python 3.10")),
        pytest.param(0, 100, 0, id="zero divided by one hundred"),
        pytest.param(10, 0, ZeroDivisionError, id="zero division error", marks=pytest.mark.xfail),
    ],
)
def test_divide(a: float | int, b: float | int, expected: float) -> None:
    """Test the divide function."""
    assert divide(a, b) == expected
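The same applies with pytest.mark.skipif() inside marks=; a sketch (the test name is hypothetical) that skips one parameter set entirely rather than running it and expecting failure:

import sys

import pytest


@pytest.mark.parametrize(
    ("a", "b", "expected"),
    [
        pytest.param(10, 5, 2, id="ten divided by five"),
        pytest.param(
            5,
            2,
            2.5,
            id="five divided by two",
            marks=pytest.mark.skipif(sys.version_info[:2] == (3, 10), reason="Skip under Python 3.10"),
        ),
    ],
)
def test_divide_subset(a: float | int, b: float | int, expected: float) -> None:
    """Hypothetical cut-down test_divide() with one parameter set conditionally skipped."""
    assert divide(a, b) == expected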

Summary

Pytest has features which support test development by allowing specific tests to fail or be skipped entirely. This helps both with test development and with Continuous Integration, where test results can vary depending on platform and package versions.

This post stems from a suggestion made by @jni@fosstodon.org during some work I have been contributing to the skan package. Thanks to Juan for the prompt/pointer.


Citation

BibTeX citation:
@online{shephard2024,
  author = {Shephard, Neil},
  title = {Pytest {Fail} and {Skip}},
  date = {2024-04-25},
  url = {https://blog.nshephard.dev/posts/pytest-xfail/},
  langid = {en}
}
For attribution, please cite this work as:
Shephard, Neil. 2024. “Pytest Fail and Skip.” April 25, 2024. https://blog.nshephard.dev/posts/pytest-xfail/.