Salesforce Engineering

Subtesting, skipping, and cleanup in the Go testing.T

&y H. Golang (he/him)
Software engineer at Salesforce (prev MIT), Google Developer Expert in Go, organizer at Boston Golang, resident #sloth enthusiast at everywhere
・9 min read

In my last tutorial, we looked at the basics of writing tests in Go. We saw:

  • 🚦 how the testing.T type is used in all Go tests for managing the status of whether or not a Go test passed
  • 🐛 how we can use tests to catch a bug and fix our code to make a test pass
  • 🗄 how we can use table testing to run test cases that share the same logic together

These form a good foundation for writing test coverage in Go, since virtually every Go repository with automated tests uses the standard library's testing package and its testing.T type. But that's not all the tricks testing.T has up its sleeve!

In this tutorial, we'll look at three more methods on the testing.T type that help you better organize your tests:

  • 💨 t.Run, to give your test cases subtests
  • ⭐ t.Skip, for when we only want to run a test sometimes
  • 🧹 t.Cleanup, for cleaning up state in between tests

Well-organized tests improve your dev team's quality of life: everyone knows where to add new tests, and how to track down and fix a failing one. So let's dive in!

💨 Writing subtests for our Go tests

As a quick recap, in the last tutorial, we wrote test coverage for a function telling us whether a string IsSlothful, which returns true if either it contains the word sloth, or it contains the hibiscus emoji but not the race car emoji.

To make it easier to table-test that function, we made this assertion helper function, which has our testing.T run its Errorf method, causing the test to fail, if IsSlothful doesn't return the expected value for the string we're testing:

func assertIsSlothful(t *testing.T, s string, expected bool) {
    if IsSlothful(s) != expected {
        if expected {
            t.Errorf("%s is supposed to be slothful", s)
        } else {
            t.Errorf("%s is not supposed to be slothful", s)
        }
    }
}

And we ran assertIsSlothful in a loop on a slice of different test cases, like this:

type isSlothfulTestCase struct {
    str      string
    expected bool
}

var isSlothfulTestCases = []isSlothfulTestCase{
    {str: "hello, world!",                               expected: false},
    {str: "hello, slothful world!",                      expected: true},
    {str: "Sloths rule!",                                expected: true},
    {str: "Nothing like an iced hibiscus tea! 🌺",      expected: true},
    {str: "Get your 🌺 flowers! They're going fast! 🏎️", expected: false},
}

func TestIsSlothful(t *testing.T) {
    for _, c := range isSlothfulTestCases {
        assertIsSlothful(t, c.str, c.expected)
    }
}

If one of the test cases fails, we get an error message telling us which string didn't behave as expected. Say we forgot to make checking for the word "sloth" case-insensitive. The error would look like this:

--- FAIL: TestIsSlothful (0.00s)
    sloths_test.go:12: Sloths rule! is supposed to be slothful

We get a descriptive message naming the string we expected to be slothful. But what if our IsSlothful function got a lot more complex, to capture the many nuances of laziness? It would be nice to have a better description of exactly which part of our functionality is being tested. And if we had hundreds of strings to test, it would be nice to zoom in on just one, or a few, test cases.

That's where subtesting can help you. Since Go 1.7, if you call the t.Run(string, func(*testing.T)) method inside a Go test, your testing.T will create a subtest within your test.

To try it out, first let's take our isSlothfulTestCase struct and give it a new string field called testName, a brief description of what each scenario tests. Then, let's add a testName to each of our test cases:

var isSlothfulTestCases = []isSlothfulTestCase{{
    testName: "string with nothing slothful isn't slothful",
    str:      "hello, world!",
    expected: false,
}, {
    testName: `string with the substring "sloth" is slothful`,
    str:      "hello, slothful world!",
    expected: true,
}, {
    testName: `checking for the word "sloth" is case-insensitive`,
    str:      "Sloths rule!",
    expected: true,
}, {
    testName: "strings with the 🌺 emoji are normally slothful",
    str:      "Nothing like an iced hibiscus tea! 🧊🌺",
    expected: true,
}, {
    testName: "the 🏎️ emoji negates the 🌺 emoji's slothfulness",
    str:      "Get your 🌺 flowers! They're going fast! 🏎️",
    expected: false,
}}

Now, let's update our main TestIsSlothful testing loop to use subtesting:

  func TestIsSlothful(t *testing.T) {
      for _, c := range isSlothfulTestCases {
-         assertIsSlothful(t, c.str, c.expected)
+         t.Run(c.testName, func(t *testing.T) {
+             assertIsSlothful(t, c.str, c.expected)
+         })
      }
  }

t.Run takes in two arguments:

  • The name of our subtest, which will be the description we just added in the testName field
  • A func(t *testing.T) containing the code we want to run in the subtest. In this case, the subtest just wraps the call to assertIsSlothful.

Now if we run go test -v, the output looks like this:

$ go test -v
=== RUN   TestIsSlothful
=== RUN   TestIsSlothful/string_with_nothing_slothful_isn't_slothful
=== RUN   TestIsSlothful/string_with_the_substring_"sloth"_is_slothful
=== RUN   TestIsSlothful/checking_for_the_word_"sloth"_is_case-insensitive
    sloths_test.go:12: Sloths rule! is supposed to be slothful
=== RUN   TestIsSlothful/strings_with_the_🌺_emoji_are_normally_slothful
=== RUN   TestIsSlothful/the_🏎️_emoji_negates_the_🌺_emoji's_slothfulness
--- FAIL: TestIsSlothful (0.00s)
    --- PASS: TestIsSlothful/string_with_nothing_slothful_isn't_slothful (0.00s)
    --- PASS: TestIsSlothful/string_with_the_substring_"sloth"_is_slothful (0.00s)
    --- FAIL: TestIsSlothful/checking_for_the_word_"sloth"_is_case-insensitive (0.00s)
    --- PASS: TestIsSlothful/strings_with_the_🌺_emoji_are_normally_slothful (0.00s)
    --- PASS: TestIsSlothful/the_🏎️_emoji_negates_the_🌺_emoji's_slothfulness (0.00s)

Now we get output for each individual subtest, rather than only seeing output for failing scenarios. And we can see that the failing subtest is named TestIsSlothful/checking_for_the_word_"sloth"_is_case-insensitive.

The slash in the name indicates that the test being run is a subtest. And besides naming the part of our top-level test that failed, a subtest's name can also be used with the -run flag of the go test command. If you have a whole lot of subtests and want to zoom in on just one test case, like checking that the word "sloth" is matched case-insensitively, you can run a command like go test -v -run "TestIsSlothful/case-insensitive", and the output will look like this:

$ go test -v -run "TestIsSlothful/case-insensitive"
=== RUN   TestIsSlothful
=== RUN   TestIsSlothful/checking_for_the_word_"sloth"_is_case-insensitive
    sloths_test.go:12: Sloths rule! is supposed to be slothful
--- FAIL: TestIsSlothful (0.00s)
    --- FAIL: TestIsSlothful/checking_for_the_word_"sloth"_is_case-insensitive (0.00s)

Now we run TestIsSlothful like before, but skip all of its subtests whose names don't match "case-insensitive"!

โญ Skipping tests with t.Skip

One of the main benefits of writing automated tests is that you can integrate them into continuous integration (CI) platforms, and then adopt team rules like only merging changes once all tests pass. Rules like that help keep out code that inadvertently breaks functionality through unforeseen interactions between parts of the codebase.

But you might have some complex tests that take a really long time to run, or that rely on external services, where a test would fail if, say, a service your code talks to is unavailable. Especially in the latter scenario, you don't want another service's outage to bring your team's development work to a halt.

You still want to run those more complex tests, but maybe only once in a while, rather than requiring them to pass before every code change. Luckily, there is a convenient tool for this scenario: the t.Skip method!

To try it out, let's say we have a test for code to automate a quadcopter for feeding the lizards at a terrarium.

func TestQuadcopterDelivery(t *testing.T) {
    q := ConnectToQuadcopter("quadcopter-communication-info")
    if err := q.DeliverFood(
        "from-my-desk", "to-terrarium",
    ); err != nil {
        t.Errorf("error delivering food: %v", err)
    }
}

We could run this every time we do a CI test for a pull request, but that could fail if:

  • The quadcopter is out of batteries
  • The quadcopter isn't turned on
  • Your friend is borrowing the quadcopter
  • Someone bumped into the quadcopter
  • The lizards aren't hungry

So the test can't reliably pass every time we push some code.

But let's say we run it at a set time when we know the quadcopter is present and charged, no one's going to get in the way, and the lizards want a snack. A build that runs on a schedule like that is called a nightly build, and you might signal that you're in a nightly build with something like the presence of an environment variable, like NIGHTLY_BUILD, or more accurately in this case, SNACKTIME_BUILD.

To run TestQuadcopterDelivery only in a nightly build, we could modify the code like this:

  func TestQuadcopterDelivery(t *testing.T) {
+     if _, ok := os.LookupEnv("SNACKTIME_BUILD"); !ok {
+         t.Skip("only running this test on snack build")
+     }
+ 
      q := ConnectToQuadcopter("quadcopter-communication-info")
      if err := q.DeliverFood(
          "from-my-desk", "to-terrarium",
      ); err != nil {
          t.Errorf("error delivering food: %v", err)
      }
  }

We check whether the SNACKTIME_BUILD environment variable is present with os.LookupEnv. If its second return value is false, we call t.Skip with a message saying we only run this test case in a snack-time build. Otherwise, we run the test as normal.

So in our CI configuration for regular builds, we don't set that environment variable, but in our CI configuration for snack-time builds, we do. Now your dev team can efficiently build next-generation lizard-feeding! 🦎

🧹 Cleaning up after your test

Finally, when you're writing more complicated tests, sometimes a test makes changes that aren't totally contained within the test itself, like writing files or inserting data into a database. That can cause trouble when you re-run a test, or when another test uses a system the first test changed.

There are different ways to address that, like namespacing your data per test case, mocking systems with tools like Afero's in-memory filesystem, or cleaning up after each test case.

For that last approach, you can use tools like Testify's Suite to handle cleanups, but since Go 1.14, the standard library's testing.T type also has this method:

func (t *T) Cleanup(func())

Now when we write a test, we can specify a function that should run at the end of the test. Say we have a function that appends the gopher 🐹 (actually a hamster) emoji to a file:

func addGopher(filepath string) error {
    f, err := os.OpenFile(filepath, os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    _, err = f.Write([]byte("🐹"))
    return err
}

We might test that like this:

func TestAddGopher(t *testing.T) {
    // set up file to add a gopher to
    path := filepath.Join("test-files", "gopher-added.txt")
    f, err := os.Create(path)
    if err != nil {
        t.Fatal(err)
    }

    if _, err := f.WriteString("Go is awesome!"); err != nil {
        t.Fatal(err)
    }
    f.Close()

    // run addGopher and test that we now have a gopher emoji
    if err := addGopher(path); err != nil {
        t.Fatal(err)
    }

    fileContents, err := os.ReadFile(path)
    if err != nil {
        t.Fatal(err)
    }
    if string(fileContents) != "Go is awesome!🐹" {
        t.Errorf(
            `unexpected file contents %s`, string(fileContents),
        )
    }
}

We create a file at ./test-files/gopher-added.txt, write "Go is awesome!", use addGopher to add the gopher emoji, and then check that the file now has the emoji.

The test is correct, but what if a different test, run later, expected the test-files directory to be empty? That test would now fail without any of our code actually being broken, and when that happens, it can be a real pain to debug.

That's where cleanup after a test comes in. Let's give it a try:

  func TestAddGopher(t *testing.T) {
      path := filepath.Join("test-files", "gopher-added.txt")
      f, err := os.Create(path)
      if err != nil {
          t.Fatal(err)
      }

+     t.Cleanup(func() {
+         if err := os.Remove(path); err != nil {
+             t.Fatalf("error cleaning up %s: %v", path, err)
+         }
+     })

Now, at the end of TestAddGopher, our cleanup function deletes test-files/gopher-added.txt, and our other tests can run without having to worry about leftover data from TestAddGopher!

While that was a contrived example, cleanup matters in a lot of complex tests that work with interdependent systems. If you're coming from a testing framework like Jest in JavaScript, you can use t.Cleanup in scenarios similar to where you would use Jest's afterAll function; a Cleanup function runs after a test and all its subtests complete.

As you can see, in addition to tracking the state of your tests, the testing.T type gives you some great functionality for keeping your automated Go tests well-organized, and you can use it alongside techniques like table testing and CI build setups.

In my next tutorial on Go testing, we'll look at a different package in the standard library that will come in handy if you're a web developer like me: net/http/httptest!
