Amnish Singh Arora
Testing in Python

Automated testing is the backbone of quality software. Without tests set up in your project, there is no way to ensure that your code works as expected, and it becomes next to impossible to keep the project moving in the right direction as more people start contributing to it. Setting up automated tests is crucial for complex projects, as manual or exploratory testing after each iteration of your product is simply not feasible without a dedicated team of QA analysts.

There are various types of tests, such as end-to-end tests, unit tests, integration tests, and performance tests, each differing in its approach to testing.

In this post, I'll be sharing how I set up both unit and integration testing in my Python project, til-page-builder, and how it helped me detect some hidden issues.

Table of Contents

 1. Unit Testing 👨‍🔬
       1.1. Pytest 🐍
       1.2. pytest-watch 🔎
 2. Integration Tests ⚙️
       2.1. Writing my first integration test
 3. Coverage 📈
 4. Makefile
 5. Conclusion 🎇

Unit Testing 👨‍🔬

Unit testing is an approach to automated testing in which the smallest testable parts of an application, called units, are individually run against test scripts to verify proper operation. These units could be individual functions, classes, or methods that you write to implement your application's functionality.
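As a tiny illustration (the function and test below are made up for this post, not part of the project), a unit test exercises one such function in isolation:

```python
# A unit under test: a small, pure function
def slugify(text):
    """Convert a heading into a lowercase, hyphen-separated id."""
    return "-".join(text.lower().split())


# A unit test verifying that single function in isolation
def test_slugify():
    assert slugify("Testing in Python") == "testing-in-python"
```

Because the unit is isolated, a failure here points directly at `slugify`, not at some distant part of the application.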

Pytest 🐍

After looking through the various unit testing tools available for Python like pytest, unittest (built-in), and nose, I went with pytest for its simplicity and ease of use.

Pytest image

I took the following steps to set up pytest in my project.

1. Installing Pytest

Execute the following command from your terminal.

pip install pytest

2. Setting up pytest configuration

Create a pytest.ini file at the root of your project.

pytest.ini

[pytest]
minversion = 7.0

# Directories to look for the test files
testpaths = tests

# Directories to look for the source code
pythonpath = src

3. Writing my first unit test

I started out by testing the HeadingItem class I was using for generating a TOC after parsing html.

tests/builder/toc_generator/test_heading_item.py

from builder.toc_generator.heading_item import HeadingItem


class TestHeadingItem:
    """
    Test suite for 'HeadingItem' class
    """

    class TestHeadingItemConstructor:
        """
        Tests for 'HeadingItem' class __init__ method
        """

        def test_default_arguments(self):
            """
            Test to verify the default values for HeadingItem constructor
            """
            heading = HeadingItem()

            assert heading.value == HeadingItem.DEFAULT_HEADING_VALUE
            assert heading.id == HeadingItem.generate_heading_id(
                HeadingItem.DEFAULT_HEADING_VALUE
            )
            assert heading.children == HeadingItem.DEFAULT_CHILDREN_VALUE

        def test_value_argument(self):
            """
            Test to verify the supplied value property is correctly set
            """
            sample_heading_value = "This is a sample heading"
            heading = HeadingItem(sample_heading_value)

            assert heading.value == sample_heading_value
            assert heading.id == HeadingItem.generate_heading_id(sample_heading_value)
            assert heading.children == HeadingItem.DEFAULT_CHILDREN_VALUE

        def test_value_and_children(self):
            """
            Test to verify the supplied value and children properties are correctly set
            """
            deep_nested_heading_1 = HeadingItem("1.1.1")
            deep_nested_heading_2 = HeadingItem("1.1.2")

            nested_heading_1 = HeadingItem(
                "1.1", [deep_nested_heading_1, deep_nested_heading_2]
            )
            nested_heading_2 = HeadingItem("1.2")

            top_heading = HeadingItem("1", [nested_heading_1, nested_heading_2])

            # Check for values
            assert top_heading.value == "1"
            assert nested_heading_1.value == "1.1"
            assert nested_heading_2.value == "1.2"

            # Check nested values
            assert top_heading.children[0].value == "1.1"
            assert top_heading.children[1].value == "1.2"

            # Check deep nested values
            assert top_heading.children[0].children[0].value == "1.1.1"
            assert top_heading.children[0].children[1].value == "1.1.2"

            # Check if children are correctly set
            assert nested_heading_1 in top_heading.children
            assert nested_heading_2 in top_heading.children

            # Check if deep nested children are correctly set
            assert deep_nested_heading_1 in top_heading.children[0].children
            assert deep_nested_heading_2 in top_heading.children[0].children

        def test_bad_values(self):
            """
            Check if default values are assigned when 'None' is passed as arguments
            """
            heading = HeadingItem(None, None)

            assert heading.value == HeadingItem.DEFAULT_HEADING_VALUE
            assert heading.id == HeadingItem.generate_heading_id(
                HeadingItem.DEFAULT_HEADING_VALUE
            )
            assert heading.children == HeadingItem.DEFAULT_CHILDREN_VALUE

I created the TestHeadingItem class to group all the tests for the HeadingItem class, and further grouped the tests related to the __init__ method in the TestHeadingItemConstructor class.

4. Running the tests

Now that the tool was configured and the very first tests were in place, it was time to run them with the following command.

pytest

It wasn't a smooth run the first time, as there were some issues with my constructor's default values.

I was trying to generate the item's id before defaulting the value property to an empty string. This is what the fixed function looked like.

constructor
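The screenshot above shows the actual fix. As a rough sketch of the corrected ordering (the id generation below is a made-up stand-in, not the project's real implementation), defaulting value before computing the id means generate_heading_id never receives None:

```python
import re


class HeadingItem:
    DEFAULT_HEADING_VALUE = ""

    @staticmethod
    def generate_heading_id(value):
        # Stand-in slug-style id: lowercase, hyphen-separated
        return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")

    def __init__(self, value=None, children=None):
        # The fix: default 'value' *before* generating the id,
        # so generate_heading_id never receives None
        self.value = value if value is not None else self.DEFAULT_HEADING_VALUE
        self.id = HeadingItem.generate_heading_id(self.value)
        self.children = children if children is not None else []
```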

I finally got the green check from all tests.

Running unit tests

If not for the unit tests, I might not have discovered this edge case for a long time, and by then the problem could have turned into something much harder to debug.

This is how automated, quality tests can save companies millions of dollars and hundreds of hours wasted fixing problems that could have been prevented in the first place.

pytest-watch 🔎

After I was satisfied with the basic setup of unit tests, it was time to look for something that could execute my tests automatically every time a file changed. This makes it really convenient to debug a problem, as you don't have to manually re-run your tests after every little change.

I installed pytest-watch for this purpose and had to make the following additions to the pytest.ini file we discussed above.

[pytest-watch]
# Re-run after a delay (in milliseconds), allowing for
# more file system events to queue up (default: 200 ms).
spool = 200

# Waits for all tests to complete before re-running.
# Otherwise, tests are interrupted on filesystem events.
wait = true

Once it was configured, all I had to do was execute the following command for the utility to start listening for filesystem changes.

ptw

ptw execution

Integration Tests ⚙️

Unit tests do a good job of verifying that individual units of a program behave as expected. But no one knows whether two or more units work together correctly until some integration tests are in place.

Writing my first integration test

To begin with, I added a very general test that ran my program against a markdown file and compared the generated html with the expected snapshot.

I created a dictionary to store the expected snapshots in a separate file.

snapshots.py

"""Html snapshots to be used in integration testing"""

snapshots = {
    "yattag_html": """<!DOCTYPE html>
<html lang="en-CA">
  <head>
    <meta charset="utf-8" />
    <title># TIL *Yattag*</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
  </head>
  <body>
    <h2>Table of Contents</h2>
    ...
    ...
    ...
    <p>Here's a simple code <em>snippet</em> that makes use of this library:</p>
    <p>with tag('p'):</p>
    <p>text("some random text")</p>
  </body>
</html>"""
}

And this is what my first integration test looked like.

test_integration.py

""""This module is responsible for integration testing of this application"""

import os
from snapshots import snapshots
from til_builder_main import App


class TestIntegration:
    """Integration testing suites"""

    DEFAULT_OUTPUT = "til"

    def test_general(self):
        """Compare the generated output with a sample file to verify general expectations"""

        # Load the expected html from snapshot
        expected_html = snapshots["yattag_html"]

        # Run the application with default settings
        app = App()
        app.run("integration/test_samples/til-yattag.md", test_context=True)

        # Verify if the file was generated in the correct location
        assert os.path.isfile(f"{TestIntegration.DEFAULT_OUTPUT}/til-yattag.html")

        # Verify the contents of generated file match with the expected snapshot
        with open(
            f"{TestIntegration.DEFAULT_OUTPUT}/til-yattag.html", "r"
        ) as generated_file:
            generated_html = generated_file.read()

            assert generated_html == expected_html

I feel this could be done better by replacing the keys in snapshots.py with the sample file paths, so that a single test function can iterate over all key-value pairs and run the corresponding comparisons.

This would prevent a lot of code repetition, allowing the developer to add more test files and snapshots without adding more code.

But for now, all that matters is that I was able to get a green check on the integration test as well.

To have pytest pick up the integration tests too, I added their directory to testpaths.

pytest.ini

# Directories to look for the test files
testpaths = tests integration

After adding the integration tests directory to testpaths, I ran

pytest

Run integration tests

Coverage 📈

We also need a way to know how much of our code has been covered by the existing tests. For this purpose, I added another pytest plugin called pytest-cov.

pip install pytest-cov

To check the code coverage, I ran

pytest --cov=src

Code Coverage

Makefile

Even though I was almost done with the basic test setup, I still felt like something was missing. I am mostly used to working with Node.js and JavaScript, where we have the idea of configuring the project's custom scripts in a package.json file.

I wanted to do something similar and found a way using the GNU Make utility.

From the official documentation,

GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files.

Make gets its knowledge of how to build your program from a file called the makefile, which lists each of the non-source files and how to compute it from other files. When you write a program, you should write a makefile for it, so that it is possible to use Make to build and install the program.

I added a Makefile to define all the custom scripts I needed.

Makefile

install:
    pip install -r requirements.txt

format:
    black .

lint:
    pylint src/

test:
    pytest

run-failed-tests:
    pytest --last-failed 

test-watch:
    ptw

coverage:
    pytest --cov=src

Each of them can be executed with

make <script-name>

If you don't have the utility installed, refer to the installation instructions here:
https://www.gnu.org/software/make/#download

Sweet!

Conclusion 🎇

In this post, we talked about the importance of software testing and its various types, setting up pytest and its plugins, writing both unit and integration tests, and using the Make utility to set up custom scripts just like we do with npm.

Hope this helped!
Make sure to check out the other posts.

Image Attribution

Cover Image by vectorjuice on Freepik
