Jayson Smith

Originally published at jayson.dev

Setting up your Go tests as table-driven subtests

Go has terrific support for tests, and some folks may be writing them in ways that could be much more efficient and easier to read. I've seen a number of different ways to write tests, but my favorite so far is table-driven subtests. Before we get into what these look like, let's take a look at a simple project that we'll refactor, one step at a time, into table-driven subtests. Note: You may see table-driven subtests written in different styles in the wild. The style covered here is how I do it.

Before Table Driven Subtests (TDSTs)

Let's work with a straightforward adding function that takes in two strings, converts them to integers, and returns their sum if both conversions succeed.

// add.go
package add

import (
    "strconv"

    "github.com/pkg/errors"
)

func add(x, y string) (int64, error) {
    xInt, err := strconv.ParseInt(x, 10, 64)
    if err != nil {
        return 0, errors.Wrap(err, "converting x failed")
    }

    yInt, err := strconv.ParseInt(y, 10, 64)
    if err != nil {
        return 0, errors.Wrap(err, "converting y failed")
    }

    return xInt + yInt, nil
}
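
Quick aside: I'm using github.com/pkg/errors here for the wrapping. If you'd rather skip the dependency, the standard library's fmt.Errorf with the %w verb wraps the error and produces the same "converting x failed: ..." message, so the tests below would still pass. A minimal sketch of the first check, assuming you import fmt instead:

    xInt, err := strconv.ParseInt(x, 10, 64)
    if err != nil {
        // %w wraps err just like errors.Wrap, and the message format is identical.
        return 0, fmt.Errorf("converting x failed: %w", err)
    }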

Here are six cases that we can use to provide some basic coverage of our add function:

  • Zeros
  • Positive Numbers
  • Negative Numbers with a Positive Answer
  • Negative Numbers with a Negative Answer
  • X argument fails to parse
  • Y argument fails to parse

While the example below is set up in a clear, easy-to-read fashion, with each test broken out into Arrange/Act/Assert or Given/When/Then (depending on how you think about it), and while this setup gets us output for every case even when a single one fails...

...it's also super repetitive and unnecessarily verbose. 😅

// ./add_test.go
package add

import (
    "errors"
    "testing"

    "github.com/stretchr/testify/require"
)

func TestAdd_Zeros(t *testing.T) {
    expected := int64(0)

    actual, err := add("0", "0")

    require.NoError(t, err)
    require.Equal(t, expected, actual)
}

func TestAdd_PositiveNumbers(t *testing.T) {
    expected := int64(4)

    actual, err := add("2", "2")

    require.NoError(t, err)
    require.Equal(t, expected, actual)
}

func TestAdd_NegativeNumbers_PositiveAnswer(t *testing.T) {
    expected := int64(5)

    actual, err := add("-5", "10")

    require.NoError(t, err)
    require.Equal(t, expected, actual)
}

func TestAdd_NegativeNumbers_NegativeAnswer(t *testing.T) {
    expected := int64(-8)

    actual, err := add("-5", "-3")

    require.NoError(t, err)
    require.Equal(t, expected, actual)
}

func TestAdd_XFailsParse(t *testing.T) {
    expected := int64(0)
    expectedErr := errors.New(`converting x failed: strconv.ParseInt: parsing "a": invalid syntax`)

    actual, err := add("a", "0")

    require.EqualError(t, expectedErr, err.Error())
    require.Equal(t, expected, actual)
}

func TestAdd_YFailsParse(t *testing.T) {
    expected := int64(0)
    expectedErr := errors.New(`converting y failed: strconv.ParseInt: parsing "z": invalid syntax`)

    actual, err := add("0", "z")

    require.EqualError(t, expectedErr, err.Error())
    require.Equal(t, expected, actual)
}

Instead of having six different test functions, some with increasingly long names like TestAdd_NegativeNumbers_NegativeAnswer to convey the test's purpose, what if I told you we could refactor that into a single test function? That'd really clean things up, wouldn't it? A table test consists of three basic parts: a table, a loop, and a test.

Setting the Table

The table in a table test can be set up in one of two ways: a slice of structs ([]struct) or a map of string to struct (map[string]struct). There's not a whole lot of difference between the two aside from style, though with []struct your tests will always run in the order in which they appear in the table. That always-consistent order can potentially be a bad thing, as it can lead to tests passing in one order but not another if you're not careful with state between tests. I recently ran into a flaky set of tests that were using the map[string]struct method, since maps don't iterate in a consistent order; they're not truly random, but they're inconsistent enough that they may as well be for this purpose. For this guide, the table we'll create will use the []struct method.
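
For reference, here's roughly what the map[string]struct flavor of the table we're about to build looks like; the map key doubles as the test case's name, so the name field disappears:

testCases := map[string]struct {
    x, y     string
    expected int64
    err      error
}{
    // The map key serves as the test case's name.
    "Zeros": {x: "0", y: "0", expected: int64(0)},
}

The only downstream difference is that the loop becomes for name, tc := range testCases, and name is what you hand to the subtest later on.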

Our table will be a collection of all the test cases we want to run, containing everything we need: the test name, inputs, and expected data. In more advanced cases, our tables can include things like test fixtures or helper methods, too, but let's keep it simple while we're learning.

At its most basic, our empty table will look like this (I like naming the variable we assign it to testCases):

testCases := []struct{}{}

From here, we'll need to fill in our test cases. We'll start by defining our fields and their types, just like we would in a standard struct.

testCases := []struct {
    name     string
    x        string
    y        string
    expected int64
    err      error
}{}

Now let's add our first test case and tighten things up a little!

testCases := []struct {
    name     string
    x        string
    y        string
    expected int64
    err      error
}{
    {
        name:     "Zeros",
        x:        "0",
        y:        "0",
        expected: int64(0),
    },
}

Things are still nice, clean, and easy to read! You can also write an individual case without the field names (you can see what that looks like just below), but I tend to leave them in for readability. You may notice that there's no entry here for err: we're not expecting an error for this case, so there's no need to include one!
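
For completeness, here's the same case written without the field names. Every value has to be supplied in declaration order, including an explicit nil for the err we aren't expecting, which is a big part of why I prefer keeping the keys:

testCases := []struct {
    name     string
    x        string
    y        string
    expected int64
    err      error
}{
    // Values are matched by position: name, x, y, expected, err.
    {"Zeros", "0", "0", int64(0), nil},
}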

Time to Loop

One of the great things about table-driven tests is that they're compact, but we need a way to work through them. Here's where our loop comes in. This step is really simple:

for _, tc := range testCases {}

We set up a standard loop over our testCases and assign each test case to tc. That's it for this step; let's tighten things up and see how the full setup looks now.

func TestAdd(t *testing.T) {
    testCases := []struct {
        name, x, y string
        expected   int64
        err        error
    }{
        {name: "Zeros", x: "0", y: "0", expected: int64(0)},
    }

    for _, tc := range testCases {

    }
}

I've compacted the declarations in the struct a little, and since our test values are basic, I've put them all on one line.

The Test

We've got two-thirds of our table-driven test set up; let's give it something to run. (We'll get to the subtests later on.) This step will look basically like our initial tests; we just need to reference the table's fields so that the test knows what goes where.

func TestAdd(t *testing.T) {
    testCases := []struct {
        name, x, y string
        expected   int64
        err        error
    }{
        {name: "Zeros", x: "0", y: "0", expected: int64(0)},
    }

    for _, tc := range testCases {
        actual, err := add(tc.x, tc.y)

        require.NoError(t, err)
        require.Equal(t, tc.expected, actual)
    }
}

Now, you might be thinking that it's a bit involved compared to what we started with, and if this were our only test case, I'd agree with you to a point. Once you get going with these, it makes sense to set them up by default as it makes adding new cases super easy. Let's show this by adding all of our test cases.

func TestAdd(t *testing.T) {
    testCases := []struct {
        name, x, y string
        expected   int64
        err        error
    }{
        {name: "Zeros", x: "0", y: "0", expected: int64(0)},
        {name: "Positive numbers", x: "2", y: "2", expected: int64(4)},
        {name: "Negative numbers with positive result", x: "-5", y: "10", expected: int64(5)},
        {name: "Negative numbers with negative result", x: "-5", y: "-3", expected: int64(-8)},
        {name: "X fails to parse", x: "a", y: "0", expected: int64(0),
            err: errors.New(`converting x failed: strconv.ParseInt: parsing "a": invalid syntax`)},
        {name: "Y fails to parse", x: "0", y: "z", expected: int64(0),
            err: errors.New(`converting y failed: strconv.ParseInt: parsing "z": invalid syntax`)},
    }

    for _, tc := range testCases {
        actual, err := add(tc.x, tc.y)

        require.NoError(t, err)
        require.Equal(t, tc.expected, actual)
    }
}

So easy! "But wait, those don't pass!" You're correct, reader; we need to add a little more to our test section to handle the error cases. Here's what that could look like:

for _, tc := range testCases {
    actual, err := add(tc.x, tc.y)

    if tc.err == nil {
        require.NoError(t, err)
    } else {
        require.EqualError(t, tc.err, err.Error())
    }
    require.Equal(t, tc.expected, actual)
}

Time for the Subtests

One trouble with standard table tests is that when one test case fails, require stops the whole test function, so none of the test cases after it get run, which leaves us in a really poor place. When this happens, we have an incomplete picture of the state of our code, and that can be dangerous! Here's where subtests come in and save the day.

How? Well, each subtest runs independently, and when one fails, test execution continues to the next test case until all are done, assuming you're writing your tests to be atomic. You are writing your tests this way, right? Right?!

Setting things up to run as a subtest is quite simple:

func TestAdd(t *testing.T) {
    testCases := []struct {
        name, x, y string
        expected   int64
        err        error
    }{
        {name: "Zeros", x: "0", y: "0", expected: int64(0)},
        {name: "Positive numbers", x: "2", y: "2", expected: int64(4)},
        {name: "Negative numbers with positive result", x: "-5", y: "10", expected: int64(5)},
        {name: "Negative numbers with negative result", x: "-5", y: "-3", expected: int64(-8)},
        {name: "X fails to parse", x: "a", y: "0", expected: int64(0),
            err: errors.New(`converting x failed: strconv.ParseInt: parsing "a": invalid syntax`)},
        {name: "Y fails to parse", x: "0", y: "z", expected: int64(0),
            err: errors.New(`converting y failed: strconv.ParseInt: parsing "z": invalid syntax`)},
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(tt *testing.T) {
            actual, err := add(tc.x, tc.y)

            if tc.err == nil {
                require.NoError(tt, err)
            } else {
                require.EqualError(tt, tc.err, err.Error())
            }
            require.Equal(tt, tc.expected, actual)
        })
    }
}

Super simple! Just wrap your existing test setup, change some variable names, and you've got subtests! There are a few things to point out here:

  • t.Run takes in the test name (we're using it now!) as well as a function that wraps our test code.
  • The wrapper function takes in its own *testing.T, which we declare as tt. It doesn't have to be renamed, but accidentally reaching for the outer t inside a subtest can make failures misbehave, so we may as well do it and save ourselves some possible pain down the road. (There's one more loop-related wrinkle worth knowing; see the sketch below.)
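
One related wrinkle: if you ever want these cases to run in parallel by calling tt.Parallel(), the closure passed to t.Run captures the tc loop variable, and on Go versions before 1.22 every parallel subtest can end up seeing the table's last case. Shadowing the variable inside the loop avoids that; a quick sketch:

for _, tc := range testCases {
    tc := tc // shadow the loop variable so each closure gets its own copy (unnecessary as of Go 1.22)
    t.Run(tc.name, func(tt *testing.T) {
        tt.Parallel() // let this case run concurrently with the other subtests

        // ...same test body as above...
    })
}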

Let's run our newly set up subtests and see how things go:

$ go test
PASS
ok      github.com/jaysonesmith/tdst    0.024s
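
A nice bonus of naming each case: since every case is now a proper subtest, you can run just one of them with go test's -run flag by giving the parent test name, a slash, and the subtest name (spaces in the name become underscores):

$ go test -run 'TestAdd/Positive_numbers'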

All about the communication

Okay... big deal, that output looks like it does when any other test run passes! Yes! Correct! While they look the same when they pass, it's when the tests fail that subtests really shine! I'll change things in half of the subtests so that we can see what the output looks like when things fail:

$ go test
--- FAIL: TestAdd (0.00s)
    --- FAIL: TestAdd/Positive_numbers (0.00s)
        add_test.go:35:
                Error Trace:    add_test.go:35
                Error:          Not equal:
                                expected: 5
                                actual  : 4
                Test:           TestAdd/Positive_numbers
    --- FAIL: TestAdd/Negative_numbers_with_negative_result (0.00s)
        add_test.go:35:
                Error Trace:    add_test.go:35
                Error:          Not equal:
                                expected: -3
                                actual  : -8
                Test:           TestAdd/Negative_numbers_with_negative_result
    --- FAIL: TestAdd/Y_fails_to_parse (0.00s)
        add_test.go:33:
                Error Trace:    add_test.go:33
                Error:          Error message not equal:
                                expected: "converting y failed: strconv.ParseInt: parsing \"z\": invalid syntax"
                                actual  : "converting y failed: strconv.ParseInt: foo"
                Test:           TestAdd/Y_fails_to_parse
FAIL
exit status 1
FAIL    github.com/jaysonesmith/tdst    0.024s

Woot! We get a super easy-to-read breakdown of every test case that failed, including the parent test, the test case name, the line of the assertion that failed, and some helpful expected/actual output to aid in diagnosing things. Without the subtests, here's what the output would look like:

$ go test
--- FAIL: TestAdd (0.00s)
    add_test.go:35:
                Error Trace:    add_test.go:35
                Error:          Not equal:
                                expected: 5
                                actual  : 4
                Test:           TestAdd
FAIL
exit status 1
FAIL    github.com/jaysonesmith/tdst    0.025s

... Yeah. Not nearly as helpful, eh?

We can get similar behavior to the subtest setup if we use Go's non-fatal reporting (t.Error and friends) or Testify's assert instead of require, since those won't stop the current test when an assertion fails. I usually use assert, but, as a warning, that can also cause nil pointer panics if one part of a test fails and a later assertion depends on that data.
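
If you do run into that, one way to guard against it, sketched below, is to check the boolean that assert's helpers return and bail out of the subtest early instead of letting later assertions run against data you already know is bad:

for _, tc := range testCases {
    t.Run(tc.name, func(tt *testing.T) {
        actual, err := add(tc.x, tc.y)

        // assert helpers return false on failure, so we can stop this
        // subtest early rather than asserting against known-bad data.
        if tc.err == nil {
            if !assert.NoError(tt, err) {
                return
            }
        } else if !assert.EqualError(tt, tc.err, err.Error()) {
            return
        }
        assert.Equal(tt, tc.expected, actual)
    })
}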

Here's how I'd set things up and what our final test will look like:

func TestAdd(t *testing.T) {
    testCases := []struct {
        name, x, y string
        expected   int64
        err        error
    }{
        {name: "Zeros", x: "0", y: "0", expected: int64(0)},
        {name: "Positive numbers", x: "2", y: "2", expected: int64(4)},
        {name: "Negative numbers with positive result", x: "-5", y: "10", expected: int64(5)},
        {name: "Negative numbers with negative result", x: "-5", y: "-3", expected: int64(-8)},
        {name: "X fails to parse", x: "a", y: "0", expected: int64(0),
            err: errors.New(`converting x failed: strconv.ParseInt: parsing "a": invalid syntax`)},
        {name: "Y fails to parse", x: "0", y: "z", expected: int64(0),
            err: errors.New(`converting y failed: strconv.ParseInt: parsing "z": invalid syntax`)},
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(tt *testing.T) {
            actual, err := add(tc.x, tc.y)

            if tc.err == nil {
                assert.NoError(tt, err)
            } else {
                assert.EqualError(tt, tc.err, err.Error())
            }
            assert.Equal(tt, tc.expected, actual)
        })
    }
}

So, what do you think? I personally really enjoy these and more often than not write TDSTs when I'm testing. I hope you learned something and enjoyed this read.

Thank you so much for your time!

Js

P.S. This guide only covers setting up table-driven subtests but doesn't include all the neat things about them! Perhaps I'll write about those, too!
