DEV Community

Liz Acosta


Harder, Better, Faster, Stronger Tests With Fixtures

Okay so I really just wanted to reference Daft Punk. With their December 2024 limited re-release of Discovery and screening of Interstella 5555, the French electronic duo have been on my mind. Of course it has made me nostalgic and a little bit sad, yearning for days that seemed simpler.

It is not easy to be unemployed and depressed during the holidays, so guess what? Here’s a tutorial for you to round out what was probably the worst year of my life! (Including the year I was diagnosed with cancer!)

But real talk: Test fixtures can improve your tests by reducing redundancy, isolating scenarios, and increasing performance.

How to use this tutorial

This tutorial is designed to accommodate many different learning styles so you can choose your own adventure:

  • Jump straight to the code: The code for this tutorial can be found here. Use the README to get it up and running locally.
  • But first, who cares about testing?: I’ll try to convince you why writing tests can be fun and list some best practices to keep in mind.
  • What are test fixtures?: An introduction to test fixtures in unittest and how they can help you adhere to unit testing best practices.
  • A little bit of context: If you’re the kind of person who likes to ask a lot of questions or who finds comfort in expectation-setting, I got you, my sweet little anxious overachiever! This section aims to help set you up for success.
  • Test fixtures in action: A walk-through of the tutorial code in which we’ll explore the effects of using different kinds of test fixtures so you can experience that “Aha!” moment first-hand.

But first, who cares about testing? Can’t AI do it?

First of all, I care about testing.

Second of all, yes … but unless you are a sadist who enjoys incident pages at three in the morning trying to debug code a robot wrote, you should probably at least check the robot’s work. (And – you know – no shame if that is your thing.)

You can read my previous posts in this series to learn more about why testing is important, but in summary:

  • Besides the qualitative benefits of software testing such as bug prevention, reduction in development costs, and improved performance, the most compelling benefit of writing tests is that it makes us better engineers. Writing tests forces us to ask ourselves, “What exactly is the expected behavior of this method or application?” When software becomes “difficult to test,” it is usually a good indicator of code smells and an opportunity to refactor a method or reconsider the entire design of a system.
  • A less obvious but still important benefit to writing tests – unit tests especially – is their double duty as quick documentation. Best practices for unit tests call for long, descriptive function names. These function names not only make verbose test output more readable and quick to assess, they also provide documentation for the function under test.
  • Writing tests can be fun. Yeah, you read that correctly. A well-constructed test suite can be just as satisfying a problem to solve as the application code itself. And that feeling when all your tests pass? Or when your tests assist in a smooth refactor or feature implementation? It feels good. Tests can be a quick dopamine win in a profession that can be fraught with midnight debugging sessions and bouts of imposter syndrome.

Unit testing best practices

Now that I’ve successfully convinced you of the benefits of writing tests and you are sufficiently stoked, here are some unit testing best practices to keep in mind.

A good unit test should be:

  • Focused on one specific assertion at a time
  • Independent, isolated, and controlled
  • Relevant and meaningful
  • Repeatable and deterministic
  • Automatic
  • Descriptive
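As a quick sketch of what these practices look like in Python's unittest (the test subject here is a made-up list standing in for real application code, not anything from the tutorial repo):

```python
import unittest

class TestGrumble(unittest.TestCase):
    def test_adding_a_pug_increases_grumble_size_by_one(self):
        """Descriptive name, one assertion, no external dependencies."""
        grumble = []            # made-up stand-in for real application state
        grumble.append("Lily")  # the behavior under test
        self.assertEqual(len(grumble), 1)

if __name__ == "__main__":
    unittest.main()
```

The verbose test name doubles as documentation: a failure report that reads `test_adding_a_pug_increases_grumble_size_by_one ... FAIL` tells you what broke without opening the file.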


The tarot card for Judgement seems appropriate for software testing, so here is senior pug Gary Photoshopped poorly as an angel -- naturally.

What are test fixtures?

In the context of software, a test fixture (also called a “test context”) is used to set up the system state and input data needed for test execution. The purpose of a test fixture is to establish the environment in which the test(s) will be run. Test fixtures can help tests adhere to our unit testing best practices by controlling for variables like databases and data sets, system state, operating system, specific files, and mocks.

Specifically, in Python’s unittest framework, test fixtures are functions or methods that are executed before or after a test or group of tests to establish a testing environment.

Class and method-level test fixtures

Class and method-level fixtures are provided by a TestCase instance and are part of the group of methods concerned with running tests.

Class-level test fixtures are methods that are executed once before (setUpClass) and/or once after (tearDownClass) all of the test methods in a TestCase instance. They look like this:


@classmethod
def setUpClass(cls):
    print("Class-level setup test fixture has been executed!")

@classmethod
def tearDownClass(cls):
    print("Class-level tear down test fixture has been executed!")


An example implementation might look like this:


import unittest

class ExampleTestCaseClassTestFixtures(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        print("Class-level setup test fixture has been executed!")

    @classmethod
    def tearDownClass(cls):
        print("Class-level teardown test fixture has been executed!")

    def test_example_equal(self):
        self.assertEqual(1 + 1, 2)

    def test_example_not_equal(self):
        self.assertNotEqual(1 + 1, 3)


If you were to run the above tests, the output might look something like this:

Class-level setup test fixture has been executed!

test_example_equal (tests.test_example.ExampleTestCaseClassTestFixtures.test_example_equal) ... ok

test_example_not_equal (tests.test_example.ExampleTestCaseClassTestFixtures.test_example_not_equal) ... ok

Class-level teardown test fixture has been executed!

----------------------------------------------------------------------

Ran 2 tests in 0.000s

OK

💡 This example is provided in the repo; as long as you are on the fixtures-tutorial branch, you can run it from the root directory with: python -m unittest tests.examples.test_test_case_fixtures_example.ExampleTestCaseClassTestFixtures -v

Method-level test fixtures are methods that are executed before (setUp) and/or after (tearDown) each test method in a TestCase instance. They look like this:

    def setUp(self):
        print("Method-level setup test fixture has been executed!")

    def tearDown(self):
        print("Method-level teardown test fixture has been executed!")


An example implementation might look like this:

class ExampleTestCaseMethodTestFixtures(unittest.TestCase):
    def setUp(self):
        print("Method-level setup test fixture has been executed!")

    def tearDown(self):
        print("Method-level teardown test fixture has been executed!")

    def test_example_equal(self):
        self.assertEqual(1 + 1, 2)

    def test_example_not_equal(self):
        self.assertNotEqual(1 + 1, 3)

If you were to run the above tests, the output might look something like this:

test_example_equal (tests.test_example.ExampleTestCaseMethodTestFixtures.test_example_equal) ... Method-level setup test fixture has been executed!

Method-level teardown test fixture has been executed!

ok

test_example_not_equal (tests.test_example.ExampleTestCaseMethodTestFixtures.test_example_not_equal) ... Method-level setup test fixture has been executed!

Method-level teardown test fixture has been executed!

ok

----------------------------------------------------------------------

Ran 2 tests in 0.000s

OK


Notice how in this output the setup and teardown messages each appear twice – once for each of the two test methods in the test case.

💡 This example is provided in the repo; as long as you are on the fixtures-tutorial branch, you can run it from the root directory with: python -m unittest tests.examples.test_test_case_fixtures_example.ExampleTestCaseMethodTestFixtures -v

Module-level test fixtures

Module-level test fixtures are functions that are executed before and/or after all the tests in a module are run. These fixtures are typically used for setting up and tearing down resources that are shared across multiple tests within a module. They look like this:

def setUpModule():
    print("Module-level setup test fixture has been executed!")

def tearDownModule():
    print("Module-level teardown test fixture has been executed!")

An example implementation might look like this:

import unittest

def setUpModule():
    print("Module-level setup test fixture has been executed!")

def tearDownModule():
    print("Module-level teardown test fixture has been executed!")

class ExampleTestCaseSecond(unittest.TestCase):

    def test_example_equal(self):
        self.assertEqual(1 + 1, 2)

    def test_example_not_equal(self):
        self.assertNotEqual(1 + 1, 3)

class ExampleTestCaseFirst(unittest.TestCase):

    def test_example_equal(self):
        self.assertEqual(1 + 1, 2)

    def test_example_not_equal(self):
        self.assertNotEqual(1 + 1, 3)


If you were to run the above tests, the output might look something like this:

Module-level setup test fixture has been executed!

test_example_equal (tests.examples.test_module_fixtures_example.ExampleTestCaseFirst.test_example_equal) ... ok

test_example_not_equal (tests.examples.test_module_fixtures_example.ExampleTestCaseFirst.test_example_not_equal) ... ok

test_example_equal (tests.examples.test_module_fixtures_example.ExampleTestCaseSecond.test_example_equal) ... ok

test_example_not_equal (tests.examples.test_module_fixtures_example.ExampleTestCaseSecond.test_example_not_equal) ... ok

Module-level teardown test fixture has been executed!

----------------------------------------------------------------------

Ran 4 tests in 0.000s

OK


I’m sure you’ve already noticed how this output differs from the prior two examples because you’re smart like that!

💡 This example is provided in the repo; as long as you are on the fixtures-tutorial branch, you can run it from the root directory with: python -m unittest tests.examples.test_module_fixtures_example -v

Now that you’ve got a basic understanding of test fixtures, read on to see them in action.

To read more about test fixtures in unittest, refer to the documentation here.

A little bit of context

You can find the code for this tutorial in this repo on the fixtures-tutorial branch.

Build-a-Pug is a Flask app that makes a call to an OpenAI endpoint to generate an image of a pug based on a prompt constructed from user provided input. For this iteration of Build-a-Pug, I’ve added a SQLite database to store the pugs that are built which can be retrieved via the See Your Grumble page (Because that is what a group of pugs is called – a “grumble”!).

SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process, and it reads and writes directly to ordinary disk files. For the sake of this tutorial, you do not need to concern yourself too much with the inner workings of SQLite. All you need to know is that initializing the database requires an extra explicit step and that the database manifests as a single .sqlite file.

Caveats and troubleshooting

Because OpenAI provides access to generated images for only a limited duration, depending on when you view your grumble, some images may return an invalid signature authentication error and appear as broken images. This is because the images are not saved anywhere, and implementing that functionality felt out of scope for this particular tutorial.

To fix this, you can delete the database and initialize a new one. This will, however, mean all your pugs will be permanently lost.

Please don’t deploy this app to production anywhere

This app was initially created as a toy demo app. As I’ve iterated on it, it’s become clear that it needs to be refactored. Obviously it isn’t intended for production, nor is it a good example of how to properly implement a database; however, it does successfully demonstrate how and why test fixtures are useful – especially when you start adding complexity (like databases). Use this app as a learning resource and who knows? Maybe I’ll do a tutorial on refactoring!

How to use the test code

The most successful learning happens when you achieve that “Aha!” experience. I liken it to a magic trick: It’s the moment of wonder and delight when your brain is not only pleasantly surprised, but intrigued. It invites playful curiosity. To try to recreate that experience, I have commented out sections of the test code for you to later uncomment and run so you can see for yourself how different test fixtures impact the test results.

Test fixtures in action


Prerequisites

Setup

  1. Clone the repo: git clone https://github.com/liz-acosta/testing-strategies-for-python.git
  2. Change directory to the project
  3. Check out the fixtures-tutorial branch: git checkout fixtures-tutorial
  4. Install dependencies from Pipfile.lock: pipenv install
  5. Add environment variables by renaming .env_template to .env and replacing the placeholder secrets with real secrets
  6. Initialize the SQLite database: pipenv run init-db
  7. Optional: Delete the database: pipenv run delete-db

Run the app locally

While you don’t need to run the app for this tutorial, seeing it in action can help you understand the tests.

There are some caveats to this app; see the Caveats and troubleshooting section to learn more.

  1. To run the app locally: pipenv run start-app
  2. Navigate to http://localhost:5000/ in your browser

You should get something that looks like this:

A screenshot of the Build-a-Pug app

From here, you can build your own pug:

Screenshot of a created pug named Fiona

Learn more about pugs:

A screenshot of the Pug Facts page

Or check out your grumble:

A screenshot of a grumble of two pugs

Run the tests

Run the tests with method-level test fixtures

The tests we are interested in are located in tests/unit/test_pug.py – and in particular, we want to take a look at the tests that pertain to the database operations.

Locate the test case called TestPugDBWithMethodLevelFixtures and take a look at what the code is doing:


# A method-level test fixture that
# creates and inserts data into a SQLite database before each test in this class
def setUp(self):
    """Create a test database before each test method in this class"""

    self.connection = sqlite3.connect(TEST_DATABASE_FILEPATH)
    self.connection.row_factory = sqlite3.Row

    test_pug_lily = Pug("Lily", "6", "San Francisco", "4:00 PM")
    test_pug_lily.description = "Lily is the best pug"
    test_pug_lily.image = "lily_pug.jpg"

    test_pug_fiona = Pug("Fiona", "2", "San Francisco", "4:00 PM")
    test_pug_fiona.description = "Fiona is the best pug"
    test_pug_fiona.image = "sweet_fiona.jpg"
    test_pugs = [test_pug_lily, test_pug_fiona]

    with open("build_a_pug/schema.sql", "r") as f:
        self.connection.executescript(f.read())

    query = "INSERT INTO pug (name, age, home, puppy_dinner, description, image) VALUES (?, ?, ?, ?, ?, ?)"

    for pug in test_pugs:
        self.connection.cursor().execute(
            query,
            (pug.name, pug.age, pug.home, pug.puppy_dinner, pug.description, pug.image),
        )
        self.connection.commit()

    print(Fore.GREEN + f"Test database: {TEST_DATABASE_FILEPATH} connection created and test data inserted")

# A method-level test fixture that
# closes and deletes the previously created SQLite database after each test in this class
def tearDown(self):
    """Close and delete the test database after each test method in this class"""

    self.connection.close()
    os.remove(TEST_DATABASE_FILEPATH)

    print(Fore.RED + f"Test database: {TEST_DATABASE_FILEPATH} connection closed and deleted")

This code uses method-level test fixtures to:

  1. Create a SQLite database connection, create a table in the database, create a couple of instances of the class Pug, and insert them into the database
  2. Close the database connection and delete the database

💡 There is a lot of SQLite/database boilerplate here you do not need to worry about – just focus on the setUp and tearDown methods and how they impact the tests.
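Stripped of the app-specific details, the pattern can be distilled into a self-contained sketch (the table, columns, and data here are made up for illustration, not the repo's actual schema):

```python
import os
import sqlite3
import tempfile
import unittest

class TestWithTempDatabase(unittest.TestCase):
    def setUp(self):
        """Create and seed a throwaway SQLite database before each test."""
        fd, self.db_path = tempfile.mkstemp(suffix=".sqlite")
        os.close(fd)
        self.connection = sqlite3.connect(self.db_path)
        self.connection.execute("CREATE TABLE pug (name TEXT, age INTEGER)")
        self.connection.execute(
            "INSERT INTO pug VALUES ('Lily', 6), ('Fiona', 2)")
        self.connection.commit()

    def tearDown(self):
        """Close the connection and delete the database file after each test."""
        self.connection.close()
        os.remove(self.db_path)

    def test_seed_data_is_present(self):
        count = self.connection.execute(
            "SELECT COUNT(*) FROM pug").fetchone()[0]
        self.assertEqual(count, 2)

if __name__ == "__main__":
    unittest.main()
```

Because setUp and tearDown run around every test method, each test starts with a pristine two-pug database no matter what the previous test did to it.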

For your convenience, I have color-coded the printed output of the test fixture methods.

Run the tests with: pipenv run pug-unit-tests

If everything went as planned, all the tests should pass and you should see the setup and teardown printed output for each test method run.

Run the tests with class-level test fixtures

Now let’s see what happens when we use class-level fixtures.

  1. Comment out the whole TestPugDBWithMethodLevelFixtures class
  2. Uncomment the class TestPugDBWithClassLevelFixtures
  3. Run the tests: pipenv run pug-unit-tests

Did you get this test failure? AssertionError: 3 != 2

Since the database was created once before all the test methods ran and torn down once after they finished, the pug we added in the test_create_pug test is still in the database and therefore affects the results of the test_get_grumble test.

You probably also noticed that the green and red printed output appeared only once.

Run the tests with module-level test fixtures

Bear with me because this one gets a little tricky.

  1. Comment out the whole TestPugDBWithClassLevelFixtures class
  2. Uncomment the class TestPugDBWithModuleLevelFixtures
  3. Toward the top of the file, uncomment the functions setUpModule and tearDownModule
  4. Run the tests: pipenv run pug-unit-tests

This time we get the same assertion error.

Similarly, the green and red printed output appear only once, but instead of wrapping a particular test case class, they bookend the entire module.

In conclusion

I hope that this tutorial demonstrated how test fixtures can help further refine your tests by enforcing best practices like isolating test cases, controlling variables, and improving performance.

In this particular example, a test database allows us to leave our real database unharmed. While we could set up this test database at the top of our test module, such a potentially draining resource may not be necessary for all of our tests and it could affect test results in unintended ways. We also have the option of setting up our test database at the class level, but as we witnessed, this means the test methods within that test case are reliant on each other and no longer independent.

As our tests are currently written, method-level fixtures seem to serve us best. However, this can change as our test needs evolve, forcing us to truly internalize what the code under test is really intended to do.

Testing, like life, is full of challenges, but it’s also filled with opportunities for growth, clarity, and even a little fun. By embracing tools like test fixtures, we can reduce chaos and gain confidence in the code we write – something that feels especially meaningful during times when we might just need a little dopamine hit to stave off the imposter syndrome. Whether you’re debugging at midnight or just trying to make it through the day, remember that every small step forward counts.

Have you been using test fixtures? How have they helped or hindered your tests?

If you enjoyed this tutorial, please consider buying me a coffee.


Top comments (4)

Ben Link

Unit testing never gets enough love - great tutorial!

Liz Acosta

Hey @linkbenjamin, thanks for this comment!

I agree that unit testing is probably one of the most underrated topics in software. I'll be the first to admit it's not glamorous at all, and I've been amazed at how much stronger my developer skills have become in general. Even as I'm writing proofs of concept, I'm thinking about how I would test it, and in thinking about how I would test it, I end up writing more precise code.

Ben Link

I'm always amazed at how many experienced devs don't ever do it - like, how did you get there without it?

...then I get terrified for how much software is out there in production, untested. 😳

Liz Acosta

I agree! And people wonder why I am sometimes such a luddite! I am like, "I've SEEN what gets pushed to prod!"