DEV Community


Introduction about Pytest and its plugins

I. Why Pytest
    1. How do we choose a test automation framework?
    2. What pytest has to offer
II. Pytest signature features
    1. Pytest fixture
    2. Pytest parametrize
    3. Pytest CLI
III. Pytest plugins
    1. Pytest-testrail
    2. Pytest-rerunfailures
    3. Pytest-html
    4. Pytest-bdd

I. Why Pytest
1. How do we choose a test automation framework?

A test automation framework should be capable of handling different types of tests: UI tests, API tests, contract tests, and unit tests (for developers).

It should also support basic features such as reporting, parametrization, and screenshot capture (for UI tests).

Below is a list of features we consider a test automation framework should support:

  • a fine-grained test selection mechanism that lets you be very selective about which tests to launch (tags)
  • parametrization
  • high code reuse
  • test execution logs that are easy to read and analyze
  • easy switching between target environments
  • stopping on the first failure
  • repeating your tests a given number of times
  • repeating your tests until a failure occurs
  • parallel execution
  • integration with third-party software such as test management tools
  • integration with cloud services or browser grids
  • running tests in debug mode or with different log verbosity
  • random test execution order (reproducible thanks to a random seed if problems occur)
  • versioning support
  • integration with external metrics collectors
  • different levels of abstraction (e.g., keyword-driven testing, BDD, etc.)
  • rerunning the last failed tests
  • integration with platforms that let you test against a large combination of OSes and browsers if needed
  • extensibility by writing or installing third-party plugins

2. What pytest has to offer


Please refer to the official pytest documentation for more details: pytest.

But in short: the pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
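For instance, a complete test is just a plain function using Python's built-in assert statement (the inc helper and file name below are illustrative, not from any real project):

```python
# content of test_sample.py -- pytest collects files named test_*.py
# and runs every function whose name starts with "test_"

def inc(x):
    # trivial function under test (illustrative only)
    return x + 1

def test_inc():
    # plain assert: on failure, pytest's assertion introspection shows
    # the values on both sides -- no assertEqual-style API needed
    assert inc(3) == 4
```

Running pytest in that directory collects and runs test_inc with no registration or boilerplate.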

The most important pytest features are:

- simple assertions instead of invented assertion APIs (.not.toEqual or self.assert*)
- auto-discovery of test modules and functions
- an effective CLI for controlling what gets executed or skipped using expressions
- fixtures: managing the lifecycle of long-lived test resources is easy, and together with parametrization they make it easy (and even fun) to implement what is hard and boring in other frameworks
- fixtures as function arguments, a dependency injection mechanism for test resources
- overriding fixtures at various levels
- framework customization thanks to pluggable hooks
- a very large third-party plugin ecosystem

II. Pytest signature features

1. Pytest fixture

You can refer to pytest_fixtures for a detailed explanation. But in short:

A fixture is a reusable function: a test requests it simply by naming it as an input parameter.
import pytest

@pytest.fixture
def input_value():
    return 39

def test_divisible_by_3(input_value):
    assert input_value % 3 == 0

def test_divisible_by_6(input_value):
    # note: this test fails, since 39 % 6 == 3
    assert input_value % 6 == 0

As in Java frameworks (JUnit, TestNG), we have setup and teardown. With a yield fixture, everything before the yield is the setup and everything after it is the teardown:
import pytest

@pytest.fixture
def my_fixture():
    print("\nI'm the fixture - setUp")
    yield
    print("I'm the fixture - tearDown")

def test_first(my_fixture):
    print("I'm the first test")

def test_second(my_fixture):
    print("I'm the second test")

def test_third():
    print("I'm the third test (without fixture)")

pytest.fixture also allows us to set the scope and the autouse flag:

scope can be 'function', 'class', 'module', 'package', or 'session'.

Higher-scoped fixtures are instantiated first (session → function), autouse fixtures are applied automatically without being requested, and fixtures in conftest.py files are resolved from the outermost directory to the innermost.
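A minimal sketch of scope and autouse (fixture names here are illustrative): a session-scoped fixture is set up once for the whole run, while an autouse fixture wraps every test without being listed as a parameter:

```python
import pytest

@pytest.fixture(scope="session")
def db_connection():
    # setup runs once per session, before the first test that requests it
    conn = "fake-connection"
    yield conn
    # teardown runs once, after the last test of the session

@pytest.fixture(autouse=True)
def log_test():
    # autouse: wraps every test in scope without being requested by name
    print("\nsetup before test")
    yield
    print("teardown after test")

def test_uses_db(db_connection):
    assert db_connection == "fake-connection"
```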

2. Pytest parametrize

For more details you can refer to this web page: pytest parametrize

Parametrizing a test runs it against multiple sets of inputs. We can do this by using the following marker:
import pytest

@pytest.mark.parametrize("num, output", [(1, 11), (2, 22), (3, 35), (4, 44)])
def test_multiplication_11(num, output):
    # note: the (3, 35) case fails, since 11 * 3 == 33
    assert 11 * num == output

You could write a function that reads test data from a CSV, JSON, or Excel file and returns it as a list, then use that list as the parameter source for the test function:
def get_data():
    # Retrieve values from CSV
    return ['value1', 'value2', 'value3']

@pytest.mark.parametrize("value", get_data())
def test_sample(value):
    assert value.startswith('value')
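As a sketch of the CSV case (the column names and the in-memory CSV text are assumptions; a real suite would open an actual file such as data.csv):

```python
import csv
import io

# stand-in for open("data.csv") so the sketch is self-contained
CSV_TEXT = "num,output\n1,11\n2,22\n4,44\n"

def get_data():
    # turn each CSV row into a (num, output) tuple usable by parametrize
    reader = csv.DictReader(io.StringIO(CSV_TEXT))
    return [(int(row["num"]), int(row["output"])) for row in reader]

# get_data() -> [(1, 11), (2, 22), (4, 44)]
```

The returned list plugs straight into @pytest.mark.parametrize("num, output", get_data()).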

3. Pytest CLI

For more details, you can refer to the pytest CLI documentation.

You can invoke testing through the Python interpreter from the command line:
python -m pytest [...]

This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.

Sometimes you want to pass a parameter or setting from the command line; for that you can use pytest_addoption(parser) in conftest.py:

import pytest

def pytest_addoption(parser):
    parser.addoption("--name", action="store", default="default name")

@pytest.fixture
def name(pytestconfig):
    return pytestconfig.getoption("name")

def test_print_name(name):
    print(f"\ncommand line param (name): {name}")

def test_print_name_2(pytestconfig):
    print(f"test_print_name_2(name): {pytestconfig.getoption('name')}")

pytest practice\ -q -s --name cuongld

III. Pytest plugins

1. Pytest-testrail

Allows us to integrate with TestRail for test case/test result management.

First we need a testrail.cfg file to store the TestRail info. You can also pass the TestRail info via the command line if you want.

[API]
url =
email =
password = xxxxxxx

[TESTRUN]
assignedto_id = 5
project_id = 10
suite_id = 165
run_id = 881

Secondly, enable TestRail on the pytest command line:

pytest --testrail --tr-config=testrail.cfg practice\

2. Pytest-rerun failures

Official page can be found here : pytest-rerunfailures

To re-run all test failures, use the --reruns command line option with the maximum number of times you’d like the tests to run:
pytest --reruns 5

Failed fixture or setup_class will also be re-executed.

To add a delay between re-runs, use the --reruns-delay command line option with the number of seconds you would like to wait before the next test re-run is launched:
pytest --reruns 5 --reruns-delay 1

To mark individual tests as flaky, and have them automatically re-run when they fail, add the flaky marker with the maximum number of times you'd like the test to run:
@pytest.mark.flaky(reruns=5)
def test_example():
    import random
    assert random.choice([True, False])

Note that when teardown fails, two reports are generated for the case, one for the test case and the other for the teardown error.

You can also specify the re-run delay time in the marker:
@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    import random
    assert random.choice([True, False])

3. Pytest-html

For the official site, please refer to pytest-html.

As the name suggests, this plugin allows us to generate an HTML report for the test results.

You can add custom content to the report by using pytest hooks.

The following example adds the various types of extras using a pytest_runtest_makereport hook, which can be implemented in a plugin or file:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        # always add url to report
        extra.append(pytest_html.extras.url('http://www.example.com/'))
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            # only add additional html on failure
            extra.append(pytest_html.extras.html('<div>Additional HTML</div>'))
        report.extra = extra
4. Pytest-bdd

Official site can be found here : pytest-bdd

In short, we can implement tests in BDD style (as in other BDD tools: Cucumber, behave).

Keep in mind that behave is not compatible with pytest.

So to implement BDD with pytest, you need to use pytest-bdd.

First you need to define feature file:

Feature: Blog
    A site where you can publish your articles.

    Scenario: Publishing the article
        Given I'm an author user
        And I have an article
        When I go to the article page
        And I press the publish button
        Then I should not see the error message
        And the article should be published  # Note: will query the database

Note that only one feature is allowed per feature file.

from urllib.parse import urljoin

import pytest
from pytest_bdd import scenario, given, when, then

@scenario('publish_article.feature', 'Publishing the article')
def test_publish():
    pass

@given("I'm an author user")
def author_user(auth, author):
    auth['user'] = author.user

@given('I have an article')
def article(author):
    return create_test_article(author=author)

@when('I go to the article page')
def go_to_article(article, browser):
    browser.visit(urljoin(browser.url, '/manage/articles/{0}/'.format(article.id)))

@when('I press the publish button')
def publish_article(browser):
    browser.find_by_css('button[name=publish]').first.click()

@then('I should not see the error message')
def no_error_message(browser):
    with pytest.raises(ElementDoesNotExist):
        browser.find_by_css('.message.error').first

@then('the article should be published')
def article_is_published(article):
    article.refresh()  # Refresh the object in the SQLAlchemy session
    assert article.is_published

(The browser, auth, and author fixtures and the create_test_article helper are assumed to be provided elsewhere in the test suite.)
That's it for pytest. I hope you enjoyed this. :-*

Notes: If you feel this blog helped you and want to show your appreciation, feel free to drop by :

This will help me contribute more valuable content.
