Niklas Tiede
Tools to automate Python Tests

Heya fellows,

The code accompanying this post can be found on GitHub.

When a project grows, a good test suite gives you confidence that new code doesn't break existing parts of the application, and it improves the project's maintainability. Small projects have low complexity, so tests only become necessary as a project's size increases. But for the sake of this tutorial we will write a small test to demonstrate the usage of pytest, tox and GitHub Actions.

We will store the tests within a separate folder. Here's the current structure of the project.

├── tests
│  ├──
│  └──

We place a test file within the tests folder; it will contain the test suite. The main() function, which contains the logic for triggering the flags, has to be imported from the tihttp module.

from tihttp import main

def test_GET_body(capsys):
    # run the CLI entry point and capture what it prints to stdout
    main(["-B", ""])
    captured = capsys.readouterr()
    result = captured.out
    # compare against the expected response stored as a fixture file
    with open("tests/jsonplaceholder.json", "r") as f:
        output = f.read()
    assert result == output

We compare the response of a GET request to an API with the expected JSON data. We place a jsonplaceholder.json file into the same folder which contains our expected output. Then we install pytest and let it execute the test.
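One caveat (my addition, not from the original post): comparing raw stdout to a file byte-for-byte is brittle, since any difference in whitespace or key order fails the test. Parsing both sides as JSON makes the comparison structural; a small sketch, assuming the captured output is valid JSON:

```python
import json

def assert_same_json(actual: str, expected: str) -> None:
    """Compare two JSON documents structurally instead of as raw strings."""
    assert json.loads(actual) == json.loads(expected)

# Whitespace and formatting differences no longer matter:
assert_same_json('{"id": 1, "done": false}', '{\n  "id": 1,\n  "done": false\n}')
```

This keeps the fixture file readable (it can be pretty-printed) without breaking the assertion.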

$ pip install pytest
$ pytest

Our test passes. To increase the test's verbosity, -v is a useful flag; furthermore, I like to use the -s flag to see the captured output. Next we add pytest to our extra requirements in the setup.py file.

    'dev': [
        'pytest',  # pip install tihttp[dev]
    ],

This lets us install extra dependencies (testing and linting tools etc.) easily by appending [dev] to the package name.

$ pip install .[dev]            # local install
$ pip install tihttp[dev]       # remote install, PyPI repo
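For context, the 'dev' list lives in the extras_require argument of setup(). A minimal, hypothetical setup.py sketch (name and version are assumptions, not taken verbatim from the project):

```python
from setuptools import setup, find_packages

setup(
    name="tihttp",
    version="0.1.0",  # hypothetical version
    packages=find_packages(),
    extras_require={
        # installed only when requested via `pip install tihttp[dev]`
        "dev": ["pytest", "tox"],
    },
)
```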

We tested all of this with Python 3.7.3. But how does our application behave when executed on a different interpreter version? So let's test it against different Python versions! We use tox, which lets us run the tests in multiple virtual environments.
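Why might behavior differ between versions? Newer stdlib APIs simply don't exist on older interpreters. A hypothetical pytest example (mine, not from the post) that skips itself where the feature is missing instead of failing:

```python
import math
import sys

import pytest

# Hypothetical example: math.isqrt() was added in Python 3.8,
# so this test skips itself on older interpreters.
@pytest.mark.skipif(sys.version_info < (3, 8), reason="math.isqrt requires 3.8+")
def test_isqrt():
    assert math.isqrt(17) == 4
```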

$ pip install tox

Tox needs a recipe to know which virtualenvs to create and which commands to execute. This recipe is named tox.ini:

[tox]
envlist = py36,py37,py38,py39

[testenv]
deps = pytest
commands = pytest

If some of these Python interpreters are missing on your system, install them from the deadsnakes archive:

$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt install python3.6 python3.7 python3.8 python3.9

Now let's test across different interpreters!

$ tox

If you want to test against a specific environment or run only part of the test suite, type:

$ tox -e py38              # test only against Python 3.8
$ tox -e py38 -- tests/    # arguments after -- are passed through to pytest

Ok, we ran the tests locally, but when working in a team, using continuous integration is pretty convenient. We set up an integrate.yaml file within a .github/workflows directory to tell GitHub Actions which jobs to execute. The following workflow file tests across different platforms and Python versions.

name: Python package

on: [push]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8, 3.9]
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Cache pip
        uses: actions/cache@v2
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
            ${{ runner.os }}-
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest pytest-cov
          pip install -r requirements.txt
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest

Don't be intimidated by the length of this job. It just illustrates how powerful GitHub workflows can be. 🥰
