Martin Heinz

Pytest Features That You Need in Your (Testing) Life

Note: This was originally posted at martinheinz.dev

Testing your code is an integral part of development, and quality tests can save you a lot of headaches down the road. There are plenty of guides on testing in Python, and specifically on testing with Pytest, but there are quite a few features that I rarely see mentioned anywhere, yet often need. So, here is a list of tricks and features of Pytest that I can't live without (and soon you won't be able to either).

Testing for Exceptions

Let's start simple. You have a function that throws an exception and you want to make sure that it happens under the right conditions and includes the correct message:

import pytest

def test_my_function():
    with pytest.raises(Exception, match='My Message') as e:
        my_function()
    assert e.type is ValueError  # Checked outside the with block - code after the raising call would never run inside it

Here we can see a simple context manager that Pytest provides for us. It allows us to specify the type of exception that should be raised, as well as the message of said exception. If the exception is not raised in the block, the test fails. You can also inspect more attributes of the exception, as the context manager returns an ExceptionInfo instance that has attributes such as type, value or traceback.
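If you want to assert on the exception's message or other details, it's best to do it after the with block, where the ExceptionInfo object is still available. A minimal self-contained sketch (the raising function is just a stand-in for your own code):

import pytest

def my_function():
    raise ValueError('My Message')  # Stand-in for the real function under test

def test_my_function():
    with pytest.raises(ValueError) as e:
        my_function()
    assert 'My Message' in str(e.value)  # e.value is the exception instance itself
    assert e.traceback  # The full traceback is available as well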

Filtering Warnings

With exceptions out of the way, let's look at warnings. Sometimes you get a bunch of warning messages in your logs from inside libraries that you use. You can't fix them and they really just create unnecessary noise, so let's get rid of them:

import warnings
from sqlalchemy.exc import SADeprecationWarning

def test_my_function():
    # SADeprecationWarning is issued by SQLAlchemy when a deprecated API is used
    warnings.filterwarnings("ignore", category=SADeprecationWarning)
    ...
    assert ...

def test_my_function():
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("ignore")  # ignore annoying warnings
        ...
        assert ...
    # warnings restored

Here we show two approaches - in the first one, we simply ignore all warnings of the specified category by inserting a filter at the front of the filter list. This will cause your program to ignore all warnings of this category until it terminates, which might not always be desirable. With the second approach, we use a context manager that restores all warning filters after exiting its scope. We also specify record=True, so that we can inspect the list of issued (ignored) warnings if need be.
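Pytest also has a declarative way to do this - the filterwarnings marker, which applies the filter only for the marked test. A sketch, assuming the marker accepts a fully qualified warning class path the same way the filterwarnings ini option does:

import pytest

@pytest.mark.filterwarnings("ignore::sqlalchemy.exc.SADeprecationWarning")
def test_my_function():
    ...  # SADeprecationWarning issued here is suppressed, but only for this test
    assert ...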

Testing Standard Output and Standard Error Messages

Next, let's look at the following scenario: you have a command line tool with a bunch of functions that print messages to standard output but don't return anything. So, how do we test that?

def test_my_function(capsys):
    my_function()  # function that prints stuff
    captured = capsys.readouterr()  # Capture output
    assert f"Received invalid message ..." in captured.out  # Test stdout
    assert f"Fatal error ..." in captured.err  # Test stderr

To solve this, Pytest provides a fixture called capsys, which - well - captures system output. All you need to do to use it is add it as a parameter to your test function. Next, after calling the function being tested, you capture the outputs in the form of a tuple - (out, err) - which you can then use in assert statements.
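For completeness, here is a minimal self-contained version with the printing function included (greet is just a made-up example):

def greet(name):
    print(f"Hello, {name}!")  # Made-up function that only prints, returns nothing

def test_greet(capsys):
    greet("World")
    captured = capsys.readouterr()  # Named tuple with .out and .err attributes
    assert "Hello, World!" in captured.out
    assert captured.err == ""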

Patching Objects

Sometimes when testing, you might need to replace objects used in the function under test to provide a more predictable dataset, or to prevent said function from accessing resources that might be unavailable. mock.patch can help with that:

from unittest import mock

import module  # Placeholder for the module under test

def test_my_function():
    with mock.patch('module.some_function') as some_function:  # Used as context manager
        my_function()  # function that calls `some_function`

        some_function.assert_called_once()
        some_function.assert_called_with(2, 'x')

@mock.patch('module.func')  # Used as decorator
def test_my_function(func):
    module.func(10)  # Calls the patched function
    func.assert_called_with(10)  # True

In the first example we can see that it's possible to patch functions and then check how many times and with what arguments they were called. These patches can also be stacked, both in decorator and context manager form, as shown in the sketch below.
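When stacking decorators, keep in mind that they are applied bottom-up, so the mock arguments arrive at the test function in reverse order (module.some_function and module.another_function are made-up targets, just like above):

from unittest import mock

@mock.patch('module.another_function')
@mock.patch('module.some_function')
def test_my_function(some_function, another_function):  # Bottom decorator maps to the first argument
    my_function()  # Function that calls both patched functions

    some_function.assert_called_once()
    another_function.assert_called_once()

Now, for some more powerful uses: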

from unittest import mock

def test_my_function():
    with mock.patch.object(SomeClass, 'method_of_class', return_value=None) as mock_method:
        instance = SomeClass()
        instance.method_of_class('arg')

        mock_method.assert_called_with('arg')  # True

def test_my_function():
    r = mock.Mock()
    r.content = b'{"success": true}'
    with mock.patch('requests.get', return_value=r) as get:  # Avoid doing actual GET request
        some_function()  # Function that calls requests.get
        get.assert_called_once()

The first example in the snippet above is pretty straightforward - we replace a method of SomeClass and make it return None. In the second, more practical example, we avoid depending on a remote API/resource by replacing requests.get with a mock and making it return an object that we supply with suitable data.

There are many more things that the mock module can do for you, and some of them are pretty wild - including side effects, mocking properties, mocking non-existent attributes and much more. If you run into a problem when writing tests, you should definitely check out the docs for this module, because you might very well find a solution there.
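As a small taste, side_effect lets a mock raise an exception or return different values on consecutive calls. A sketch, with module.some_function and my_function again being placeholders for your own code:

import pytest
from unittest import mock

def test_my_function():
    with mock.patch('module.some_function') as some_function:
        # First call raises, subsequent calls return the queued values in order
        some_function.side_effect = [ConnectionError('boom'), 42]

        with pytest.raises(ConnectionError):
            my_function()  # First call - the exception propagates

        my_function()  # Second call - some_function returns 42 instead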

Sharing Fixtures with conftest.py

If you write a lot of tests, then at some point you will realize that it would be nice to have all your Pytest fixtures in one place, so you could share them across test files. This can be solved with conftest.py.

conftest.py is a file that resides at the base of your test directory tree. In this file you can store all your test fixtures, and they are then automatically discovered by Pytest, so you don't even need to import them.

This is also helpful if you need to share data between multiple tests - just create a fixture that returns the test data.

Another useful feature is the ability to specify the scope of a fixture. This is important when you have fixtures that are very expensive to create, for example connections to a database (session scope); on the other end of the spectrum are fixtures that need to be reset after every test case (function scope). Possible values for fixture scope are: function, class, module, package and session.
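A conftest.py with one expensive session-scoped fixture and one cheap function-scoped fixture might look something like this (the SQLite database and sample data are just made up for illustration):

# conftest.py
import sqlite3
import pytest

@pytest.fixture(scope='session')
def db_connection():
    conn = sqlite3.connect('/tmp/test_db.sql')  # Created once for the whole test session
    yield conn
    conn.close()

@pytest.fixture(scope='function')
def test_data():
    # Fresh copy for every single test, so tests can mutate it freely
    return {'name': 'John', 'email': 'john@example.com'}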

Parametrizing Fixtures

We already talked about fixtures in the examples above, so let's go a little deeper. What if you want to create more generic fixtures by parametrizing them?

import pytest, os

@pytest.fixture(scope='function')
def reset_sqlite_db(request):
    path = request.param  # Path to database file
    with open(path, 'w'): pass
    yield None
    os.remove(path)

@pytest.mark.parametrize('reset_sqlite_db', ['/tmp/test_db.sql'], indirect=True)
def test_send_message(reset_sqlite_db):
    ...  # Perform tests that access prepared SQLite database

Above is an example of a fixture that prepares an SQLite testing database for each test. This fixture receives the path to the database as a parameter. This path is passed to the fixture using the request object, whose param attribute is an iterable of all arguments passed to the fixture - in this case just one, the path. Here, the fixture first creates the database file (and could also populate it - omitted for clarity), then yields execution to the test, and after the test is finished, the fixture deletes the database file.

As for the test itself, we use @pytest.mark.parametrize with 3 arguments - the first of them is the name of the fixture, the second is a list of argument values for the fixture, which will become request.param, and finally the keyword argument indirect=True, which causes the argument values to be passed to the fixture through request.param instead of directly to the test.

The last thing we need to do is add the fixture as an argument to the test itself and we are done.
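If the same parameters apply to every test that uses the fixture, you can also bake them straight into the fixture with the params argument instead of using indirect parametrization - every test that requests the fixture then runs once per parameter (the second path below is just an example):

import pytest, os

@pytest.fixture(params=['/tmp/test_db.sql', '/tmp/other_db.sql'])
def reset_sqlite_db(request):
    path = request.param  # Each test using this fixture runs once per path
    with open(path, 'w'): pass
    yield None
    os.remove(path)

def test_send_message(reset_sqlite_db):
    ...  # Runs twice - once for each database path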

Skipping Tests

There are situations when it's reasonable to skip some tests, whether it's because of the environment (Linux vs Windows), internet connection, availability of resources or something else. So, how do we do that?

import pytest, os

@pytest.mark.skipif(os.system("service postgresql status") > 0,
                    reason="PostgreSQL service is not running")
def test_connect_to_database():
    ... # Run function that tries to connect to PostgreSQL database

This shows a very simple example of how you can skip a valid test based on some condition - in this case based on whether the PostgreSQL server is running on the machine or not. There are many more cool features in Pytest related to skipping tests or anticipating failures, and they are very well documented in the official docs, so I won't go into more detail, as it seems redundant to copy and paste the existing content.
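One marker worth a quick mention here is xfail, which marks a test that you expect to fail (for example, because of a known bug), so it's reported as 'xfailed' instead of breaking the whole run (the reason string below is made up):

import pytest

@pytest.mark.xfail(reason="Known bug, fix pending")
def test_broken_feature():
    ...  # Reported as 'xfailed' instead of failing the test run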

Conclusion

Hopefully these few tips will make your testing experience a little more enjoyable and efficient, and therefore motivate you to write a few more tests than before. If you have any questions, feel free to reach out to me, and if you have some suggestions or tricks of your own, please share them here. 🙂

Top comments (6)

dmitrypolo

I'm curious as to why you switch between pytest and unittest in this when the title specifies it is for pytest. Specifically, when it comes to mocking, why did you not illustrate the monkeypatch fixture available in pytest?

def test_my_function(monkeypatch):
    with monkeypatch.context() as mc:
        mc.setattr(module, 'some_func', lambda x: 'foobar')
        res = my_function()  # function that calls `some_function`

        assert res == 'foobar'

Colin Bounouar

Wondering the same, and also why mock requests.get yourself when pytest-responses provides the responses fixture.

There are a lot of pytest fixtures out there; it's always nice to spread knowledge anyway :-)

Martin Heinz

I personally prefer to use unittest mock, that's why I showed it here, but you are right that, considering the title, maybe I should have used monkeypatch. Thanks for showing an example here.

zchtodd

Awesome article... I use pytest pretty regularly but some of these are new to me, like the conditional test skipping. Very useful, thanks.

Martin Heinz

Glad you liked it! :)

Ignacio Vergara Kausel

Good to see material that's not introductory in nature.