DEV Community

Bas Steins

Posted on • Originally published at bas.codes

A Gentle Introduction to Testing with PyTest

A test is code that executes code. When you start developing a new feature for your Python project, you can formalize its requirements as code. By doing so, you not only document how your implementation is meant to be used, but you can also run all the tests automatically to make sure your code keeps matching your requirements. One tool that assists you with this is pytest, probably the most popular testing tool in the Python universe.

It's all about assert

Let's assume you have written a function that validates an email address. Note that we keep it simple here and use neither regular expressions nor DNS lookups to validate email addresses. Instead, we just make sure that the string to be tested contains exactly one @ sign and only Latin letters, digits, and the characters ., - and _.

import string

def is_valid_email_address(s):
    s = s.lower()
    parts = s.split('@')
    if len(parts) != 2:
        # Not exactly one at-sign
        return False
    allowed = set(string.ascii_lowercase + string.digits + '.-_')
    for part in parts:
        if not set(part) <= allowed:
            # Characters other than the allowed ones are found
            return False
    return True

Now we can state some assertions about our code. For example, we assert that these email addresses are valid:

  • test@example.org
  • user123@subdomain.example.org
  • john.doe@email.example.org

On the other hand, we would expect that our function returns False for email addresses like:

  • not valid@example.org (includes a space)
  • john.doe (no @)
  • john,doe@example.org (includes a ,)

We can check that our function indeed behaves the way we expect:

print(is_valid_email_address('test@example.org'))               # True
print(is_valid_email_address('user123@subdomain.example.org'))  # True
print(is_valid_email_address('john.doe@email.example.org'))     # True
print(is_valid_email_address('not valid@example.org'))          # False
print(is_valid_email_address('john.doe'))                       # False
print(is_valid_email_address('john,doe@example.org'))           # False

The email address examples we come up with are called test cases. For each test case, we expect a certain result. A testing tool like pytest can help automate testing these assertions. Writing down these assertions helps you

  • document how your code is going to be used
  • make sure that future changes do not break other parts of your software
  • think about possible edge cases of your functionalities

To make that happen, we just create a new file for all of our tests and put a few functions in there.

def test_regular_email_validates():
    assert is_valid_email_address('test@example.org')
    assert is_valid_email_address('user123@subdomain.example.org')
    assert is_valid_email_address('john.doe@email.example.org')


def test_valid_email_has_one_at_sign():
    assert not is_valid_email_address('john.doe')

def test_valid_email_has_only_allowed_chars():
    assert not is_valid_email_address('john,doe@example.org')
    assert not is_valid_email_address('not valid@example.org')
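As an aside, pytest can express such tables of inputs and expected results more compactly with its parametrize marker. The following sketch repeats the validator so the snippet is self-contained; in a real project you would import it from validator.py instead:

```python
import string
import pytest

def is_valid_email_address(s):
    # Same validator as above, repeated so the example is self-contained
    s = s.lower()
    parts = s.split('@')
    if len(parts) != 2:
        return False
    allowed = set(string.ascii_lowercase + string.digits + '.-_')
    return all(set(part) <= allowed for part in parts)

# One test function, many cases -- pytest runs and reports each case separately
@pytest.mark.parametrize("address,expected", [
    ('test@example.org', True),
    ('user123@subdomain.example.org', True),
    ('john.doe@email.example.org', True),
    ('john.doe', False),
    ('john,doe@example.org', False),
    ('not valid@example.org', False),
])
def test_is_valid_email_address(address, expected):
    assert is_valid_email_address(address) == expected
```

For this post we stick with the plain test functions above, but parametrize is worth knowing once a test case table grows.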

Running tests

Easy example

So, we have two files in our project directory: validator.py and test_validator.py.

We can now simply run pytest from the command line. Its output should look something like this:

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 3 items

test_validator.py ...                                                    [100%]

============================== 3 passed in 0.01s ===============================

pytest informs us that it found three test functions inside test_validator.py and that all of them passed (as indicated by the three dots ...).

The 100% indicator gives us a good feeling: we are confident that our validator works as expected. However, as outlined in the introduction, the validator function is far from perfect, and so are our test cases. Even without DNS lookups, we would mark an address like john.doe@example as valid, while an address like john.doe+abc@gmail.com would be marked invalid.

Let's add these test cases now to our test_validator.py

...
def test_valid_email_can_have_plus_sign():
    assert is_valid_email_address('john.doe+abc@gmail.com')

def test_valid_email_must_have_a_tld():
    assert not is_valid_email_address('john.doe@example')

If we run pytest again, we see failing tests:

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

test_validator.py ...FF                                                  [100%]

=================================== FAILURES ===================================
_____________________ test_valid_email_can_have_plus_sign ______________________

    def test_valid_email_can_have_plus_sign():
>       assert is_valid_email_address('john.doe+abc@gmail.com')
E       AssertionError: assert False
E        +  where False = is_valid_email_address('john.doe+abc@gmail.com')

test_validator.py:17: AssertionError
_______________________ test_valid_email_must_have_a_tld _______________________

    def test_valid_email_must_have_a_tld():
>       assert not is_valid_email_address('john.doe@example')
E       AssertionError: assert not True
E        +  where True = is_valid_email_address('john.doe@example')

test_validator.py:20: AssertionError
=========================== short test summary info ============================
FAILED test_validator.py::test_valid_email_can_have_plus_sign - AssertionErro...
FAILED test_validator.py::test_valid_email_must_have_a_tld - AssertionError: ...
========================= 2 failed, 3 passed in 0.05s ==========================

Note that we got two Fs in addition to our three dots ... to indicate that two test functions failed.

In addition, we get a new FAILURES section in our output which explains in detail at which point our test failed. That's pretty helpful for debugging.
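One way to make the two new tests pass (a sketch only, still far from a complete validator) is to additionally allow + in the local part and to require at least one dot in the domain part:

```python
import string

def is_valid_email_address(s):
    s = s.lower()
    parts = s.split('@')
    if len(parts) != 2:
        # Not exactly one at-sign
        return False
    local, domain = parts
    allowed = set(string.ascii_lowercase + string.digits + '.-_')
    if not set(local) <= allowed | {'+'}:
        # The local part may additionally contain a plus sign
        return False
    if not set(domain) <= allowed or '.' not in domain:
        # The domain must use allowed characters and contain a TLD-like dot
        return False
    return True
```

With this version, all five tests pass again.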

Note on Designing Tests

Our small validator example is a testament to the importance of designing tests.

We wrote our validator function first and then came up with some test cases for it. Soon we noticed that these test cases were by no means comprehensive: we had missed some essential aspects of validating an email address.

You may have heard of Test Driven Development (TDD), which advocates the exact opposite: getting your requirements right by writing your test cases first and not starting to implement a feature before you feel you have covered all test cases. This way of thinking has always been a good idea, but it has gained even more importance over time as software projects have grown in complexity.

I will write another blog post about TDD soon to cover it in depth.

Configuration

Usually, a project setup is much more complicated than just a single file with a validator function in it.

You may have a Python package structure for your project, or your code relies on external dependencies like a database.

Fixtures

Setup and Tear Down

You might have come across the term fixture in different contexts. For the Django web framework, for example, fixtures refer to a collection of initial data to be loaded into the database. In pytest's context, however, fixtures are functions that pytest runs before and/or after the actual test functions.

We can create such functions using the pytest.fixture() decorator. We do this inside the test_validator.py file for now.

import pytest

@pytest.fixture()
def database_environment():
    setup_database()
    yield
    teardown_database()

Note that setting up the database and tearing it down happen in the same fixture. The yield keyword indicates the point at which pytest runs the actual tests.

To have the fixture actually be used by one of your tests, you simply add the fixture's name as an argument, like so (still in test_validator.py):

def test_world(database_environment):
    assert 1 == 1
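A yield fixture can also hand a value to the test: everything before yield is setup, the yielded value becomes the test's argument, and everything after yield is teardown. Here is a small sketch using an in-memory SQLite database, standing in for whatever real database setup_database() might create:

```python
import sqlite3
import pytest

@pytest.fixture()
def db():
    conn = sqlite3.connect(":memory:")               # setup
    conn.execute("CREATE TABLE users (email TEXT)")
    yield conn                                       # the test receives this value
    conn.close()                                     # teardown, runs after the test

def test_insert(db):
    db.execute("INSERT INTO users VALUES ('test@example.org')")
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```

Each test that requests db gets a fresh database and never has to worry about cleanup.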

Getting Data from Fixtures

Instead of using yield, a fixture function can also return arbitrary values:

import pytest

@pytest.fixture()
def my_fruit():
    return "apple"

Again, a test function requests the fixture by taking the fixture's name as a parameter:

def test_fruit(my_fruit):
    assert my_fruit == "apple"

Configuration Files

pytest can read its project-specific configuration from one of these files:

  • pytest.ini
  • tox.ini
  • setup.cfg

Which file to use depends on what other tooling you use in your project. If you have packaged your project, you can put the configuration into setup.cfg. If you use tox to test your code in different environments, you can put the pytest configuration into tox.ini. The pytest.ini file can be used if you do not want to use any tooling besides pytest.

The configuration file looks mostly the same for each of these three file types:

For using pytest.ini and tox.ini:

[pytest]
addopts = -rsxX -l --tb=short --strict

If you are using the setup.cfg file, the only difference is that you have to prefix the [pytest] section with tool:, like so:

[tool:pytest]
addopts = -rsxX -l --tb=short --strict
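Recent pytest versions (6.0 and later) can also read their configuration from pyproject.toml, using the [tool.pytest.ini_options] table; note that the value is a TOML string here:

```toml
[tool.pytest.ini_options]
addopts = "-rsxX -l --tb=short --strict"
```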

conftest.py

Each folder containing test files can contain a conftest.py file, which pytest reads automatically. This is a good place for your custom fixtures, as they can be shared between different test files.

The conftest.py file(s) can alter the behaviour of pytest on a per-project basis.

Apart from shared fixtures, you can place external hooks and plugins there, or modify the path pytest uses to discover tests and implementation code.
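For example, the my_fruit fixture from above could be moved into a conftest.py next to the test files; every test module in that folder can then request it by name without importing anything:

```python
# conftest.py -- discovered automatically by pytest, never imported explicitly
import pytest

@pytest.fixture()
def my_fruit():
    return "apple"
```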

CLI / PDB

During development, especially when you write your tests before your implementation, pytest can be a very helpful tool for debugging.

We will have a look at the most useful command-line options.

Running Only One Test

If you want to run one particular test only, you can reference that test via the test_ file it is in and the function's name:

pytest test_validator.py::test_regular_email_validates

Collect Only

Sometimes you just want a list of the collected tests rather than executing all test functions.

pytest --collect-only
============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

<Module test_validator.py>
  <Function test_regular_email_validates>
  <Function test_valid_email_has_one_at_sign>
  <Function test_valid_email_has_only_allowed_chars>
  <Function test_valid_email_can_have_plus_sign>
  <Function test_valid_email_must_have_a_tld>

========================== 5 tests collected in 0.01s ==========================

Exit on the first error

You can force pytest to stop executing further tests after a failed one:

pytest -x

Run the last failed test only

If you want to run only the tests that failed the last time, you can do so using the --lf flag:

pytest --lf

Run all tests, but run the last failed ones first

pytest --ff

Show values of local variables in the output

If we set up a more complex test function with some local variables, we can instruct pytest to display these local variables with the -l flag.

Let's rewrite our test function like so:

...
def test_valid_email_can_have_plus_sign():
    email = 'john.doe+abc@gmail.com'
    assert is_valid_email_address('john.doe+abc@gmail.com')
...

Then,

pytest -l

will give us this output:

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

test_validator.py ...FF                                                  [100%]

=================================== FAILURES ===================================
_____________________ test_valid_email_can_have_plus_sign ______________________

    def test_valid_email_can_have_plus_sign():
        email = 'john.doe+abc@gmail.com'
>       assert is_valid_email_address('john.doe+abc@gmail.com')
E       AssertionError: assert False
E        +  where False = is_valid_email_address('john.doe+abc@gmail.com')

email      = 'john.doe+abc@gmail.com'

test_validator.py:18: AssertionError
_______________________ test_valid_email_must_have_a_tld _______________________

    def test_valid_email_must_have_a_tld():
>       assert not is_valid_email_address('john.doe@example')
E       AssertionError: assert not True
E        +  where True = is_valid_email_address('john.doe@example')


test_validator.py:21: AssertionError
=========================== short test summary info ============================
FAILED test_validator.py::test_valid_email_can_have_plus_sign - AssertionErro...
FAILED test_validator.py::test_valid_email_must_have_a_tld - AssertionError: ...
========================= 2 failed, 3 passed in 0.09s ==========================

Using pytest with a debugger

pdb is a command-line debugger built into Python. You can use it together with pytest to debug your test functions' code.

If you start pytest with --pdb, it will start a pdb debugging session right after an exception is raised in your test. Most of the time, this is not particularly useful, as you might want to inspect each line of code before the exception is raised.

Another option is pytest's --trace flag, which sets a breakpoint at each test function's first line. This can become unwieldy if you have a lot of tests, so for debugging purposes a good combination is --lf --trace, which starts a pdb session at the beginning of the last test that failed:

pytest --lf --trace
============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example, configfile: pytest.ini
collected 2 items
run-last-failure: rerun previous 2 failures

test_validator.py
>>>>>>>>>>>>>>>>>>>> PDB runcall (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>
> /Users/bascodes/Code/blogworkspace/pytest-example/test_validator.py(17)test_valid_email_can_have_plus_sign()
-> email = 'john.doe+abc@gmail.com'
(Pdb)

CI / CD

In modern software projects, software is often developed according to Test Driven Development principles and delivered through a Continuous Integration / Continuous Deployment (CI/CD) pipeline that includes automated testing.

A typical setup is that commits to the main/master branch are rejected unless all test functions pass.

If you want to know more about using pytest in a CI/CD environment, stay tuned as I am planning a new article on that topic.
