- This chapter is part of the series:
- In this section, we will be going through the features of pytest.
- Table of Contents:
4.1 Test Discovery
- Test discovery is the process of automatically discovering tests within the specified locations.
- There are a lot more use cases of test discovery than what's mentioned in this article. We can specify the configurations on a much more granular level.
- However, we are going to look at the default discovery mechanism that we currently have in `pytest`.
- The default implementation is as follows:
  - It looks at the tests in our `tests/` directory. We specified this directory in the `tool.pytest.ini_options` table in our `pyproject.toml` file.
  - It then looks for any module (filename, in this case) that starts with `test_`.
  - Within each module, it looks for functions whose names start with `test_`; it assumes that these are the tests and runs them.
- To know more about `pytest` test discovery, we can read the documentation.
  - Here is a small article: https://docs.pytest.org/en/stable/explanation/goodpractices.html#test-discovery
  - Here is an article on how to change the standard test discovery: https://docs.pytest.org/en/7.1.x/example/pythoncollection.html
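For reference, the `tests/` location mentioned above is configured under the `tool.pytest.ini_options` table. A minimal sketch of what that might look like (the exact keys used in this project are an assumption):

```toml
[tool.pytest.ini_options]
# directories pytest should search when discovering tests
testpaths = ["tests"]
```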
4.2 Parameterising our Tests
- For now, our code has a single test function inside `test_app.py`:

```python
import pytest

from app import do_something_with_somethingelse, SomethingElse


def test_something() -> None:
    somethingelse1: SomethingElse = SomethingElse(1)
    somethingelse2: SomethingElse = SomethingElse(2)
    somethingelse3: SomethingElse = SomethingElse(3)
    assert do_something_with_somethingelse(somethingelse1) == 'some'
    assert do_something_with_somethingelse(somethingelse2) == 'thing'
    assert do_something_with_somethingelse(somethingelse3) == ''
```
- This test does not indicate the behaviour that we are testing. Later on, we will have multiple tests for the different behaviours.
- Not only that, notice that if one `assert` statement fails, the rest of the `assert` statements are never executed and the whole test is considered to have failed.
- To prevent this from happening, `pytest` allows parametrization of tests.
- It has a `@pytest.mark.parametrize` decorator using which we can plug in different values for the same use case (same test function).
- This allows our tests to run multiple `assert` statements: each set of parameters becomes its own test case, so if one fails, the others will still run.
- Using this, our test could look something like:
```python
import pytest

from app import do_something_with_somethingelse, SomethingElse


@pytest.mark.parametrize("dummy_somethingelse_value, dummy_expected_result", [
    (1, 'some'),
    (2, 'thing'),
    (3, ''),
])
def test_something(dummy_somethingelse_value, dummy_expected_result) -> None:
    assert do_something_with_somethingelse(
        SomethingElse(dummy_somethingelse_value)
    ) == dummy_expected_result
```
- The syntax for `parametrize` is:

```python
@pytest.mark.parametrize('label1, label2, ... and so on', [
    (label1value, label2value, ... and so on),
    (label1value, label2value, ... and so on),
    ... and so on
])
def test_dummy(label1, label2, ... and so on) -> None:
    pass
```
- So the main idea behind `mark.parametrize` is that if one of the cases in the `parametrize` list fails, the rest of them still run. Apart from that, it makes our tests look a lot cleaner.
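To see `parametrize` in isolation, here is a minimal self-contained sketch; the `add` function is made up purely for illustration:

```python
import pytest


def add(a: int, b: int) -> int:
    # hypothetical function under test
    return a + b


@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (2, 3, 5),
    (-1, 1, 0),
])
def test_add(a: int, b: int, expected: int) -> None:
    assert add(a, b) == expected
```

Each tuple in the list shows up as its own test case in the report, so one failing pair does not stop the others from running.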
4.3 Skipping Tests
- We can skip tests in `pytest` by using the `@pytest.mark.skip(reason='Any message')` decorator:
```python
@pytest.mark.skip(reason='Feature not implemented yet')
def test_skip_sometest() -> None:
    assert some_unimplemented_feature is not None
```
- When we do TDD (Test-Driven Development), we write tests before we implement the functionality. We usually skip such tests.
- There is also a `skipif` variation of `skip` which lets us skip based on conditions: for example, if the operating system is Windows, or the version of Python is not supported, etc.
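A small sketch of `skipif`; the conditions here are just examples:

```python
import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32",
                    reason="relies on POSIX-only behaviour")
def test_posix_only() -> None:
    assert True


@pytest.mark.skipif(sys.version_info < (3, 8),
                    reason="requires a newer Python version")
def test_needs_newer_python() -> None:
    assert True
```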
4.4 Handling Failing Tests
- We might have tests that are bound to fail. These tests can be used as:
- Tests to assure that things that are supposed to fail (like some virtual scenario), do fail.
- Documentation to better understand the system. To show that certain settings will fail.
- These tests, when executed, will show up as failures and fail the entire build.
- We can avoid that by deliberately marking them as tests that are supposed to fail.
- We can achieve this with the `@pytest.mark.xfail` decorator:
```python
@pytest.mark.xfail
def test_zero_division() -> None:
    assert 1 / 0 == 1
```
- However, we should not mark tests that are expected to raise errors as failing tests. This brings us to the next topic.
4.5 Handling Tests that raise Exceptions
- We might want to make sure certain scenarios raise certain exceptions.
- This can be handled by using the context manager called `pytest.raises(Exception)`:
```python
import pytest


def zero_division() -> float:
    return 1 / 0


def test_zero_division() -> None:
    with pytest.raises(ZeroDivisionError):
        zero_division()
```
- As we can see, an exception is expected when we run `zero_division`. We can handle it with `pytest.raises`. This test will now fail if `ZeroDivisionError` is not raised.
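`pytest.raises` also accepts a `match` argument that checks the exception message against a regular expression. A small sketch (the `parse_positive` helper is hypothetical):

```python
import pytest


def parse_positive(value: str) -> int:
    # hypothetical helper: parse a string and insist the result is positive
    number = int(value)
    if number <= 0:
        raise ValueError(f"expected a positive number, got {number}")
    return number


def test_parse_positive_rejects_zero() -> None:
    # match= asserts on the message, not just the exception type
    with pytest.raises(ValueError, match="positive"):
        parse_positive("0")
```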
4.6 Fixtures in pytest
- Fixtures are used when multiple tests require a certain amount of setup that they all share in common.
- For example, tests that require a database connection.
- In such cases, it is better to create the connection once and share it with the tests, rather than creating a connection for every test.
- So we can create fixtures that get passed to our tests.
- In `unittest`, this is done by the `setUp` method. It creates objects that can be shared throughout the test methods.
- By convention, if we create a file called `conftest.py` and create a fixture in that file, it will be available for all of our tests, based on the `scope` of the fixture.
- So we create `conftest.py` in our `tests/` directory alongside `test_app.py` and create fixtures there as follows:
```python
import pytest


@pytest.fixture(scope="session")
def db_con():
    db_url = "driver+dialect://username:password@host:port/database"
    db = DatabaseLibrary()  # placeholder for a real database library
    with db.connection(db_url) as conn:
        yield conn
```
- Notice that we have scopes in `pytest`. We will look at what scopes are next. Fixture scopes basically define how many times a fixture will run, based on the area it covers.
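A test consumes a fixture simply by naming it as a parameter. A self-contained sketch, with a plain list standing in for an expensive resource such as a database connection (all names here are made up):

```python
import pytest


@pytest.fixture(scope="session")
def fake_records():
    # setup: stand-in for creating an expensive connection
    records = ["alice", "bob"]
    yield records
    # teardown: runs once, at the end of the whole session
    records.clear()


def test_fake_records(fake_records):
    # pytest injects the yielded value as the parameter
    assert "alice" in fake_records
```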
4.6.1 Understanding fixture scopes
- To learn more about fixture `scope`, we can go to: https://betterprogramming.pub/understand-5-scopes-of-pytest-fixtures-1b607b5c19ed
- Scope basically has to do with how far the fixture is shared. There are five types of scopes: `function`, `class`, `module`, `package`, `session`.
. -
function
scope:- This is the default scope without explicitly adding
scope='function'
. - The fixture will be executed per test function.
- This is very heavy task.
  - This fixture is suitable for single-use functions:

```python
import json

import pytest


@pytest.fixture()
def only_used_once():
    with open("app.json") as f:
        config = json.load(f)
    return config
```
  - This fixture is suitable when it handles very lightweight operations, such as returning a constant or a different value every time:

```python
from datetime import datetime

import pytest


@pytest.fixture()
def light_operation():
    return "I'm a constant"


@pytest.fixture()
def need_different_value_each_time():
    return datetime.now()
```
- `class` scope:
  - This scope runs the fixture once per test class.
  - If we have a couple of test functions that do similar things, we can put them in the same class.
  - To know more about grouping tests in classes:
    - Conventions for Python Test Discovery: https://docs.pytest.org/en/stable/explanation/goodpractices.html#test-discovery
    - Group multiple tests in a class: https://docs.pytest.org/en/stable/getting-started.html#group-multiple-tests-in-a-class
    - The parent StackOverflow thread: https://stackoverflow.com/questions/20277058/py-test-does-not-find-tests-under-a-class
    - How to test `unittest.TestCase` based classes: https://docs.pytest.org/en/7.1.x/how-to/unittest.html
    - Changing Python standard test discovery: https://docs.pytest.org/en/7.1.x/example/pythoncollection.html
- Now that we know how to group tests in classes and how test discovery works for classes, we will see how we can use fixtures on these test classes with `scope="class"`.
. -
We can use
@pytest.mark.usefixtures("fixturename")
decorator with the test class.
```python
import logging

import pytest


@pytest.fixture(scope="class")
def dummy_data(request):
    request.cls.num1 = 10
    request.cls.num2 = 20
    logging.info("Execute fixture")


@pytest.mark.usefixtures("dummy_data")
class TestCalculatorClass:
    def test_distance(self):
        logging.info("Test distance function")
        assert distance(self.num1, self.num2) == 10

    def test_sum_of_square(self):
        logging.info("Test sum of square function")
        assert sum_of_square(self.num1, self.num2) == 500
```
The fixture will be executed once per test class before the test methods.
- There is a special usage of the `yield` statement in `pytest` that allows us to run part of the fixture after all the test functions:
  - The code before `yield` acts as setup code.
  - The code after `yield` acts as teardown code.
  - For example, testing a database:

```python
@pytest.fixture(scope="class")
def prepare_db(request):
    # pseudo code
    connection = db.create_connection()
    request.cls.connection = connection
    yield
    connection.close()


@pytest.mark.usefixtures("prepare_db")
class TestDBClass:
    def test_query1(self):
        assert self.connection.execute("..") == "..."

    def test_query2(self):
        assert self.connection.execute("..") == "..."
```
- We can also `yield` values to any test that wants them, and the remaining code will still act as the teardown code.
- `module` and `package` scopes:
  - `scope='module'` runs the fixture once per module (per file).
    - A module may contain multiple functions as well as classes.
    - No matter how many tests are in the module, the fixture is run only once.
  - `scope='package'` runs the fixture once per package (directory).
    - A package contains one or more modules.
    - No matter how many modules there are, the fixture is run only once.
  - An example would be:

```python
import json
import logging

import pytest


@pytest.fixture(scope="module")
def read_config():
    with open("app.json") as f:
        config = json.load(f)
    logging.info("Read config")
    return config
```
We might want to read config only once per module or only once throughout the entire package.
- `session` scope:
  - Every time we run `pytest`, it is considered to be one session.
  - `scope='session'` makes sure the fixture executes only once per session.
  - Session scopes are designed for expensive operations, like truncating a table or loading a test set into a database.
- We can read more about fixture topics such as `autouse` and the execution order of fixtures. We will look at them when required.
4.7 Monkey Patching in pytest
- Monkey patching is the process of changing the declaration of code during runtime.
- This is mocking done right.
- To understand mocking and test-driven development in Python, you can check out my tutorial series on TDD in Python: https://www.youtube.com/watch?v=NAjCDS-qxrQ&list=PLVWk3kKbCAAuPPr_eh4KNKdL2hDSNFH3H
- TL;DW (Too long; didn't watch?): Mocking lets us replace methods with dummy objects during runtime.
4.7.1 Understanding the need to mock
- Why would we want to mock anything?
- To understand this you need to understand the principles of unit testing:
- Unit tests are the fastest tests. Why?
- It is testing the functionality of a single unit, without any external dependencies.
- Say you test a function which calls the database.
- You would only want to test the flow of such a function without making the call to the actual database.
- How do you do that? You basically mock the entire database connection object and replace it with a dummy object.
- You can set whatever values you want that dummy object to return, and create whatever dummy functions are required along with their return types. You can mimic the entire database connection's set of methods if you want.
- What this allows us to do is set dummy values for all the external calls that are within our function and set default results for them.
- With this we can setup a fake scenario without actually making external calls.
- This is why unit tests are fast.
- Integration testing, on the other hand, tests the actual values: we test the actual results from the external calls and evaluate them. That's why integration tests are slower than unit tests.
- This is one of the reasons we use mock.
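The database-mocking idea described above can be sketched with the standard library's `unittest.mock`; every name here (`get_user_name`, `fetch_one`) is made up for illustration:

```python
from unittest.mock import MagicMock


def get_user_name(db, user_id: int) -> str:
    # hypothetical function that would normally hit a real database
    row = db.fetch_one("SELECT name FROM users WHERE id = ?", user_id)
    return row["name"]


def test_get_user_name_with_mock() -> None:
    fake_db = MagicMock()
    # set a dummy return value instead of making a real external call
    fake_db.fetch_one.return_value = {"name": "alice"}

    assert get_user_name(fake_db, 1) == "alice"
    fake_db.fetch_one.assert_called_once()
```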
- Monkey patching gives us a better interface than the barebones `mock` library.
4.7.2 Monkeypatch syntax overview
- Fixtures can also depend on other fixtures. `monkeypatch` is such a fixture, and it is available to all test functions.
- Let's create a fixture:
```python
import sys

import pytest


@pytest.fixture(scope="function")
def capture_stdout(monkeypatch):
    std_output: dict = {'output': '', 'write_count': 0}

    def fake_writer(s) -> dict:
        std_output['output'] += s
        std_output['write_count'] += 1
        return std_output

    monkeypatch.setattr(sys.stdout, 'write', fake_writer)
    return std_output


def test_print(capture_stdout):
    print("Hello")
    # print appends a trailing newline, so we assert on "Hello\n"
    assert capture_stdout['output'] == "Hello\n"
```
- What we did here is replace `sys.stdout.write` with `fake_writer`, which records what it receives in the captured dictionary.
- Every time a test function uses the `capture_stdout` fixture, the environment within the test changes: `sys.stdout.write` is replaced with `fake_writer`.
- Any code that calls `sys.stdout.write` within the test will be calling `fake_writer` instead. In our example, `print` makes the calls to `sys.stdout.write`.
- The captured dictionary is available through the `capture_stdout` parameter and can be used to `assert` results.
- This is how we mock things.
- But why use `monkeypatch` instead of just mocks?
  - At the end of the test, `monkeypatch` makes sure that everything is undone and `sys.stdout.write` goes back to its original value.
  - In `unittest`, we use `mock.patch` as decorators and context managers. This also makes sure that everything is undone at the end of the test.
4.8 Understanding Test Coverage
- We have put the `--cov` option in `pyproject.toml`.
- This shows us test coverage in percentages.
- Here, 100% test coverage means all our tests combined have touched every single line of executable code in `src`.
- Any lesser percentage means that some executable code was not tested by our tests.
- However, having 100% test coverage does not mean that there are no bugs. It all depends. Still, it's good to have 100% code coverage.
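For reference, the `--cov` flags usually live in pytest's `addopts`. A sketch of a plausible `pyproject.toml` entry (the exact values for this project are an assumption, and this relies on the `pytest-cov` plugin being installed):

```toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing"
```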