jess unrein

Don't comment out your unit tests!

There are few things I dislike more than checking out a project and seeing large swaths of the test module commented out. When this happens, you're lucky if there's a TODO at the top of the section that explains why the tests were disabled. Even then, it's a pretty solid assumption that the TODO is out of date. It's also likely that the code those tests targeted no longer has coverage. As a result, if something in that portion of the code suffers a regression, it's going to fail silently.

There are some good reasons why you might not want to run a test. Maybe you have triggers in your codebase that are not currently enabled. Maybe you have a non-breaking failure in your code and you can't afford a delay due to continuous integration failures. Maybe some of your tests are only applicable for older versions of your project. None of these scenarios are ideal, but as developers we're rarely working under ideal circumstances. It sometimes makes sense to ignore portions of your test suite.

However, instead of commenting out your tests, you can reach for the built-in unittest library, which has features for handling problematic tests elegantly.

Skip

If you have a section of code that isn't currently used but that you plan to enable in the future, the skip decorator is appropriate. unittest.skip decorates a test and takes a string describing the reason for the skip. You should always provide a reason for skipping a test.

@unittest.skip("Email temporarily disabled")
def test_send_email(self):
    status = send_email(params)
    self.assertTrue(status)

There's a twofold advantage to skipping tests like this.

First, when you use this decorator, the reason prints out if you run your tests with verbose output (pass the --verbose or -v flag when running your tests). When you run the test file, it acts as a reminder that there might be unused functionality in your codebase. Being aware of which parts of your codebase might not be necessary is a huge first step toward good code health.

Second, when you reenable that piece of code later, it's easier to grep for "skip" (or the specific string you provided as a reason) than to scroll through large blocks of commented-out code to figure out what to reenable. As a bonus, it's a far more readable solution for new people coming into your codebase.
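As a quick illustration of the first point, here's a minimal, runnable sketch (the class and function names are hypothetical) that you can drop into a file and run directly:

import unittest

class EmailTests(unittest.TestCase):
    @unittest.skip("Email temporarily disabled")
    def test_send_email(self):
        self.fail("never runs while the skip decorator is in place")

if __name__ == "__main__":
    # verbosity=2 is the programmatic equivalent of the -v flag; the skip
    # reason appears next to the test name in the output, and the summary
    # line reports how many tests were skipped.
    unittest.main(verbosity=2)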

It's also possible that you want a test to run for an older version of your code, or you only want a test to run under certain circumstances. unittest.skipIf allows you to specify when certain tests should or should not run. This way, the old code is still covered and the entire suite doesn't fail for newer versions.

The first argument to this decorator is a condition: if it evaluates to True, the test is skipped. The second argument is the human-readable reason you're skipping the test.

@unittest.skipIf(mylib.__version__ > (1, 3), "email replaced with push notification")
def test_send_email(self):
    status = send_email(params)
    self.assertTrue(status)

There's also the unittest.skipUnless option, which skips the decorated test unless the condition you pass in evaluates to True. This is an option you can use, I guess. I'm not the boss of you. But it seems to me that unittest.skipIf is almost always more straightforward and readable.

@unittest.skipUnless(mylib.__version__ < (1, 3), "email replaced with push notification")
def test_send_email(self):
    status = send_email(params)
    self.assertTrue(status)

In addition to skipping individual tests, you can skip entire test classes. Both of the following individual tests will be skipped.

@unittest.skip("Functionality not currently supported")
class MyCoolProcessTestCase(unittest.TestCase):
    def test_cool_process(self):
        result = cool_process()
        self.assertTrue(result)

    def test_less_cool_process(self):
        result = not_as_cool()
        self.assertIsNotNone(result)

Expected Failures

Occasionally, when you're working on a deadline, you might have known failures that don't impact your launch. In a perfect world we would fix everything before deploying the new version. Most of us don't have that luxury. For example, maybe you know that timestamps are off because daylight saving time is mishandled. If you know this is the case and you're accounting for it elsewhere, you might want to mark the test as an expected failure.

@unittest.expectedFailure
def test_timestamps_match(self):
    expected_time = ...  # what you expect to happen
    actual_time = ...  # timestamp created in your code
    self.assertEqual(expected_time, actual_time, "DST might interfere")

If a test fails when this decorator is applied, it's not counted as a failure. This tool can be useful but dangerous. Marking tests as expected failures can cause you to miss breaking changes in your code.
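To see this in action, here's a contrived, runnable sketch (the mismatched values are stand-ins for a real bug) showing how an expected failure gets reported:

import unittest

class TimestampTests(unittest.TestCase):
    @unittest.expectedFailure
    def test_timestamps_match(self):
        # Deliberately mismatched values standing in for a known DST bug.
        self.assertEqual("09:00", "10:00", "DST might interfere")

if __name__ == "__main__":
    # The run finishes with "OK (expected failures=1)" rather than a failure,
    # which is exactly why this decorator is easy to lose track of.
    unittest.main(verbosity=2)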

Also, notice that I've passed a short, informative message as the third argument to self.assertEqual. This is an underused option for adding readability and context to your tests. If you often run into wonky circumstances that affect a particular part of your codebase, consider passing in messages that shed light on previous failures and solutions. It might save future-you a headache.

Which one should I use?

unittest.expectedFailure is an option for dealing with non-breaking failures, but I personally recommend erring on the side of using unittest.skip or unittest.skipIf instead. Grouping skipped tests together with descriptive reasons is an easy way to organize and keep track of unused tests. Marking a test as an expected failure is a good temporary solution, but invites opportunity for larger failures in the future. Both are valid and useful options if you keep an eye on your test coverage and make sure your unused tests don't fall too far out of sync with your codebase.
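For example, one way to keep skipped tests organized is to group them at the bottom of a test case with explicit, greppable reasons. Here's a sketch of what that might look like; the functions, feature flag, and reasons are hypothetical stand-ins:

import unittest

# Hypothetical stand-ins for real application code.
SMS_ENABLED = False
params = {"to": "user@example.com"}

def send_push(params):
    return True

def send_email(params):
    return True

def send_sms(params):
    return True

class NotificationTests(unittest.TestCase):
    # Active tests first.
    def test_send_push(self):
        self.assertTrue(send_push(params))

    # Skipped tests grouped at the bottom, each with a reason you can grep for later.
    @unittest.skip("Email temporarily disabled while we switch providers")
    def test_send_email(self):
        self.assertTrue(send_email(params))

    @unittest.skipIf(not SMS_ENABLED, "SMS disabled until the new gateway ships")
    def test_send_sms(self):
        self.assertTrue(send_sms(params))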

Top comments (7)

Ben Halpern

Guilty af on this one.

Aditya Sridhar

This is very useful. Thanks for writing this :)

Cania Consulting

I have found a similar topic: test conditions represent an item or event of a component or system that could be verified by one or more test cases.
You can check it out at youtu.be/PPgHhAsk-EA if you are interested in test conditions.

Abdulrhman Mohamed

I am new. I enjoyed this piece. Question ... What does the unicorn mean here?

jess unrein

You mean the unicorn in the react emoji? I think it's up to interpretation. One of my friends uses it to mean "I super duper extra liked your post." I tend to use it to mean "I learned something cool," or "you've given me something to think about."

Adrian B.G.

Nice, but I imagine that no one will grep for these keywords, not in an env that allowed skips in the first place.

jess unrein

When I was working on a monolithic Django application with literally upwards of 70 different test files, I definitely grepped for keywords I remembered from skipped tests to figure out where certain functionality lived. It's probably not going to be super useful for the three-endpoint web app you have as a side project, but I've found explicit messages and keywords, in conjunction with descriptive naming conventions, very useful in overgrown projects.