Welcome to the PyTest refresher guide. This guide will provide an overview of the pytest library, focusing mainly on its usage for writing and managing tests, especially in relation to pytest fixtures.
Overview of PyTest
PyTest is a full-featured testing tool for Python that scales from simple unit tests to complex functional testing. It makes tests more readable and easier to write by using Python's assert keyword directly, without any wrappers around it. Its auto-discovery feature makes it easier to set up and run tests, and its detailed output for failed tests makes debugging a smoother process.
PyTest Fixtures
One of the most powerful features of pytest is its fixture system. A fixture is a function that is run before (and sometimes after) the tests. It is used to set up some state or data that will be used by the tests. Fixtures are modular and can be created to serve specific tests or reused across multiple tests, modules, or even the entire project.
Here's a simple example of a fixture:
import pytest

@pytest.fixture
def example_data():
    return "some data"
In this example, example_data is a fixture that simply returns a string. This fixture can be used in any test that requires this string.
Using Fixtures in Tests
To use a fixture in a test, you simply need to include an argument in your test function with the same name as the fixture:
def test_example(example_data):
    assert example_data == "some data"
In the above test, pytest sees that example_data is a fixture and therefore calls the example_data fixture function and passes the returned value into the test function.
This system allows for powerful modular testing setups and can be used to represent things like database connections, data, mock objects, and state setup/teardown.
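For example, a fixture can use yield to run setup code before the test and teardown code after it. The db_connection name and the plain dict standing in for a real connection below are purely illustrative; this is a minimal sketch of the setup/teardown pattern:
import pytest

@pytest.fixture
def db_connection():
    # Setup: create the resource (a plain dict stands in for a real connection here)
    connection = {"connected": True}
    yield connection  # the test runs while the fixture is paused at this point
    # Teardown: runs after the test finishes, even if the test failed
    connection["connected"] = False

def test_uses_connection(db_connection):
    assert db_connection["connected"]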
Parameterizing Fixtures
Another powerful feature of pytest is the ability to parameterize fixtures. This allows you to run a test multiple times with different inputs from the fixture. Here's an example:
import pytest

@pytest.fixture(params=[0, 1, 2])
def example_data(request):
    return request.param

def test_example(example_data):
    assert 0 <= example_data <= 2
In this example, the test test_example will be run three times, once for each value in params. The current value is accessed via the request.param attribute.
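Parametrized fixtures also accept an ids argument that gives each parameter a readable label in the test report. A small sketch (the labels are arbitrary):
import pytest

@pytest.fixture(params=[0, 1, 2], ids=["zero", "one", "two"])
def example_data(request):
    return request.param

def test_example(example_data):
    # Collected as test_example[zero], test_example[one], test_example[two]
    assert 0 <= example_data <= 2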
Fixture Scope
By default, pytest will call a fixture function for every test that uses it, creating new fixture data for each test. However, pytest allows you to control the scope of a fixture using the @pytest.fixture(scope='...') decorator. The available scopes are:
- function (default): Run the fixture function before each test that uses it.
- class: Run the fixture function once per test class, regardless of how many test methods use it.
- module: Run the fixture function once per module, regardless of how many tests or test classes use it.
- package: Run the fixture function once per package.
- session: Run the fixture function only once per session (i.e., per test run).
For example, here's how to create a fixture with module scope:
import pytest

@pytest.fixture(scope="module")
def example_data():
    return "some data"
In this case, example_data will be called only once per module, and the same data will be used for every test in the module that uses the fixture.
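One way to see this in action is to let two tests in the same module mutate the shared fixture value; because pytest runs tests in the order they appear in the file by default, the second test observes the change made by the first. A minimal sketch (the counter dict is only there to demonstrate the sharing):
import pytest

@pytest.fixture(scope="module")
def example_data():
    return {"calls": 0}

def test_first(example_data):
    example_data["calls"] += 1
    assert example_data["calls"] == 1

def test_second(example_data):
    example_data["calls"] += 1
    # The same dict is reused, so the count from test_first carries over
    assert example_data["calls"] == 2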
Managing Multiple PyTest Fixtures
When dealing with multiple fixtures in PyTest, the order in which they're declared in your test function does not dictate the order of their execution. PyTest manages each fixture independently, setting up and tearing down each in the appropriate order.
import pytest

@pytest.fixture
def fixture_a():
    print("Setting up fixture_a")
    yield
    print("Tearing down fixture_a")

@pytest.fixture
def fixture_b():
    print("Setting up fixture_b")
    yield
    print("Tearing down fixture_b")

def test_example(fixture_b, fixture_a):
    print("Running test_example")
In this scenario, the print statements may execute in the following sequence (pytest does not guarantee a particular setup order for independent fixtures of the same scope):
- Setting up fixture_a
- Setting up fixture_b
- Running test_example
- Tearing down fixture_b
- Tearing down fixture_a
Even though fixture_b is listed first in the test_example function arguments, the order in which fixtures are requested does not dictate the order in which pytest sets them up; pytest determines fixture ordering by scope, by dependencies between fixtures, and by autouse. What is guaranteed is that all fixtures are set up before the test runs and torn down in the reverse order of their setup, ensuring a smooth handling of dependencies between fixtures. This remains true even when the fixtures share the same scope. If the fixtures have differing scopes (e.g., session vs. function), the fixtures with broader scopes are set up prior to those with narrower scopes.
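If your test genuinely needs one fixture set up before another, the reliable way to express that is to make one fixture depend on the other rather than relying on argument order. A minimal sketch:
import pytest

@pytest.fixture
def fixture_a():
    print("Setting up fixture_a")
    yield
    print("Tearing down fixture_a")

@pytest.fixture
def fixture_b(fixture_a):
    # Depending on fixture_a guarantees it is set up first and torn down last
    print("Setting up fixture_b")
    yield
    print("Tearing down fixture_b")

def test_example(fixture_b):
    print("Running test_example")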
Mixing Fixture and Non-Fixture Arguments
A test function can have a mix of fixture and non-fixture arguments. When doing so, the values for the non-fixture arguments must be supplied via the @pytest.mark.parametrize decorator. Here's an example:
import pytest

@pytest.fixture
def fixture_a():
    return "fixture_a value"

@pytest.mark.parametrize("param", ["value1", "value2"])
def test_example(fixture_a, param):
    print("Running test_example with", fixture_a, "and", param)
In this example, the test function test_example uses a fixture fixture_a and a parameterized non-fixture argument param. PyTest will run test_example twice, once with param as "value1" and once with "value2", setting up and tearing down fixture_a for each run.
This allows for very flexible and powerful test setups, combining the benefits of both fixtures and parameterized inputs.
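As a further illustration, if the fixture itself is parametrized too, pytest runs the test once for every combination of fixture parameter and parametrize value. A short sketch (names and values are arbitrary):
import pytest

@pytest.fixture(params=["a", "b"])
def fixture_a(request):
    return request.param

@pytest.mark.parametrize("param", ["value1", "value2"])
def test_combinations(fixture_a, param):
    # Runs four times: ("a", "value1"), ("a", "value2"), ("b", "value1"), ("b", "value2")
    print("Running with", fixture_a, "and", param)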
Using Fixture Instances Across Multiple Fixtures and Tests
When working with PyTest fixtures, you may often find yourself in situations where you need to share an instance of an object between multiple fixtures or tests. PyTest makes this easy and ensures that the same instance is used across your tests, promoting consistency and reducing redundancy.
Consider an example where we have a Database class and a Store class. The Store class uses an instance of the Database class.
class Database:
    def __init__(self):
        self.data = "database data"

class Store:
    def __init__(self, database):
        self.database = database
        self.data = "store data"
We can create PyTest fixtures for these classes and use the instance of Database in both the Store fixture and the test function.
import pytest

@pytest.fixture
def context():
    print("Setting up context")
    return Database()

@pytest.fixture
def store(context):
    print("Setting up store with context:", context.data)
    return Store(context)

def test_example(store, context):
    print("Running test_example")
    assert store.database is context
In the above code, the context fixture sets up and returns an instance of the Database class. This instance is then used in the store fixture to set up a Store instance. In the test_example function, we assert that the Database instance within the Store instance (store.database) is the same as the context instance. This assertion will pass because PyTest ensures that the store fixture and the test_example function receive the same instance of the context fixture.
This example illustrates how PyTest manages instances and dependencies, allowing us to create sophisticated and consistent test scenarios. The ability to share and manage instances across fixtures and tests is a powerful feature that makes PyTest an excellent choice for testing in Python.
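A common way to share fixtures like these across multiple test modules is to define them in a conftest.py file; pytest discovers fixtures there automatically, with no import needed in the test files. A minimal sketch, assuming Database and Store live in a module of your own (the import path is illustrative):
# conftest.py -- fixtures defined here are visible to every test module in this directory
import pytest
from myapp import Database, Store  # illustrative import path; adjust to wherever the classes live

@pytest.fixture
def context():
    return Database()

@pytest.fixture
def store(context):
    return Store(context)

# test_store.py -- no import of conftest needed; pytest injects the fixtures by name
def test_store_uses_shared_database(store, context):
    assert store.database is context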
Common Patterns in PyTest
Parameterized Testing
PyTest allows you to parameterize tests so that you can run the same test multiple times with different arguments.
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
In the example above, test_eval will run twice, once with test_input="3+5" and expected=8, and once with test_input="2+4" and expected=6.
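You can also stack multiple parametrize decorators on one test; pytest then runs the test for every combination of the arguments. A short sketch:
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_combinations(x, y):
    # Runs four times, covering every (x, y) pair: (0, 2), (1, 2), (0, 3), (1, 3)
    assert x < y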
Exception Testing
Testing that certain code raises an exception can be done using pytest.raises as a context manager:
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0
In the above example, the test will pass if 1 / 0 raises a ZeroDivisionError, and fail otherwise.
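pytest.raises can also check the exception message, either through its match argument (a regular expression) or by inspecting the captured exception afterwards. A small sketch; the divide helper is just an illustrative stand-in for your own code:
import pytest

def divide(a, b):
    # Illustrative helper that raises on bad input
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

def test_error_message_with_match():
    # match is treated as a regular expression against the exception message
    with pytest.raises(ValueError, match="divide by zero"):
        divide(1, 0)

def test_error_details_with_excinfo():
    with pytest.raises(ValueError) as excinfo:
        divide(1, 0)
    assert "cannot divide" in str(excinfo.value)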
Skipping or Expecting Test Failures
You can skip tests or expect them to fail under certain conditions:
import pytest
import sys

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

@pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
def test_function():
    ...

@pytest.mark.xfail
def test_known_failure():
    ...
In the examples above, test_the_unknown is unconditionally skipped, test_function is skipped if the Python version is lower than 3.7, and test_known_failure is expected to fail.
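Skipping can also happen at runtime rather than via a decorator, and xfail can be told which exception to expect so that an unrelated error still counts as a real failure. A minimal sketch (the optional numpy dependency is just an example):
import pytest

def test_optional_dependency():
    # Skip this test at runtime if the optional library isn't installed
    numpy = pytest.importorskip("numpy")
    assert numpy.__version__

@pytest.mark.xfail(raises=IndexError)
def test_indexing_bug():
    # Only an IndexError is tolerated here; any other exception fails the test
    [][0]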