Bad Test, Bad

These tests should never have been written. They provide little or no value.

Testing code has been described as an "art form." It is, but it should not be. There are many good patterns that should be followed when writing tests. There is even a lifecycle of tests that deserves some attention. There are also many BAD patterns that developers should be aware of so that they can be avoided (or cleaned up).

Over the years, I've heard of and come across many examples of tests that are incorrect, that violate basic principles of testing, or that do too much or too little. The challenge has been finding actual code examples to support the testing principles I follow.

The purpose of this article is to provide concrete code patterns that are BAD and, where possible, ways to correct these BAD tests.

The code used here is located on GitHub at https://github.com/bob-fornal/bad-test-bad.

Use of BAD Testing Patterns

Given that we are showing code that is BAD: there are sometimes reasons to violate these rules, but the logic behind violating them should be well thought out and described in the test code.

Tests Should Not Be "Commented Out"

Tests should NEVER be commented out just to get code through the pipeline. Tests should only be removed when the code they cover is removed.
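As a minimal, hypothetical illustration of the pattern to avoid: a failing spec silenced by commenting it out instead of fixing the code or removing the feature.

/* BAD PATTERN: Commented-out test */
// Silenced to get a build through the pipeline; the code it covers still exists.
// it('expects user rating to be returned', function() {
//   var result = testableCode.getUserRating('Bob');
//   expect(result).toEqual(100);
// });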

Tests Should Not Be "Well-Factored"

/* BAD PATTERN: Well-Factored */
var testableCode = {
  user: {},
  getUserRating: function() {
    return testableCode.user.rating;
  },
  getUserScore: function() {
    return testableCode.user.score;
  }
};

beforeEach(function() {
  testableCode.user = {
    username: 'Bob',
    rating: 100,
    score: 1.1
  };
});

// more code and tests here...

it('expects user rating to be returned', function() {
  var result = testableCode.getUserRating('Bob');
  expect(result).toEqual(100);
});

it('expects user score to be returned', function() {
  var result = testableCode.getUserScore('Bob');
  expect(result).toEqual(1.1);
});

What does the test in this code do? It retrieves a rating and verifies that the value is 100. Looking at just the test, a few questions should come to mind ...

  1. Where did 'Bob' come from?
  2. Why is the rating 100?

This test is well-factored, meaning in this case that the information needed to understand it has been moved out of the test. Examining this using a slightly different design ...

/* GOOD PATTERN: Keep the reader in the test */
/* GOOD PATTERN: Violate the DRY Principle */
var testableCode = {
  user: {},
  getUserRating: function() {
    return testableCode.user.rating;
  },
  getUserScore: function() {
    return testableCode.user.score;
  }
};

afterEach(function() {
  testableCode.user = {};
});

// more code and tests here...

it('expects user rating to be returned', function() {
  testableCode.user = {
    username: 'Bob',
    rating: 100,
    score: 1.1
  };

  var result = testableCode.getUserRating('Bob');
  expect(result).toEqual(100);
});

it('expects user score to be returned', function() {
  testableCode.user = {
    username: 'Bob',
    rating: 100,
    score: 1.1
  };

  var result = testableCode.getUserScore('Bob');
  expect(result).toEqual(1.1);
});

Keep The Reader In The Test

When writing a test, think about the next developer who will see the test. They do not want to read the entire test suite, and they certainly do not want to read through a suite of test utilities.

If a test breaks, the reader should be able to diagnose the problem by reading the test in a straight line from top to bottom. If they have to jump out of the test to read additional code, the test has not done its job.

Violate The DRY Principle

This pattern of including meaningful information means that code will be repeated, most likely using COPY/PASTE. This is good; the tests include all necessary information.

The goal here is clear, simple tests.

Before blindly applying the DRY Principle to tests, think about what will make the problem obvious when a test fails. Refactoring may reduce duplication, but it also increases complexity and can obscure information when things break.

Testing Too Much

Given some code ...

var testableCode = {
  strings: [],
  getString: function() {
    if (testableCode.strings.length === 0) {
      return "";
    }
    return testableCode.strings.pop();
  },
  setString: function(value) {
    testableCode.strings.push(value);
  }
};

Now, examining a possible test ...

/* BAD PATTERN: Testing Too Much */
it('expects getString to return an empty string', function() {
  testableCode.strings = [];
  expect(testableCode.getString()).toEqual("");

  testableCode.strings = ["Bob"];
  expect(testableCode.getString()).toEqual("Bob");
  expect(testableCode.getString()).toEqual("");
});

The test above is doing too much. There are at least two distinct scenarios shown. Cleaning up these tests should result in something like the following ...

/* GOOD PATTERN: Each test expects once */
describe('it expects getString to return', function() {
  it('an empty string', function() {
    testableCode.strings = [];
    expect(testableCode.getString()).toEqual("");
  });

  it('the "last" value', function() {
    testableCode.strings = ["Dan", "Bob"];
    expect(testableCode.getString()).toEqual("Bob");
  });

  it('an empty string after all other values are removed', function() {
    testableCode.strings = ["Bob"];
    testableCode.getString(); // removes "Bob"
    expect(testableCode.getString()).toEqual("");
  });
});

Each Test Expects Once

Having more than one assert is not automatically a problem (even though having too many is a CODE SMELL). More important than the quantity of asserts is their quality: their semantics. The test name promises that it tests just a single scenario, but the test goes way beyond that and tests other behaviors, each one deserving of its own focused and well-named test case.

Breaking Single Responsibility Principle

(in code-under-test)

This type of test ...

/* BAD PATTERN: Code-under-test can break SRP */
describe("removeSomething", function () {
  describe("where when called", function () {
    beforeEach(function () {
      this.module.remove.and.returnValue(jasmine.helpers.deferredDone());
      this.module.removeSomething();
    });

    it("should call remove action to remove something", function () {
      expect(this.module.remove).toHaveBeenCalledWith({
        "subPathId": "removeSomething"
      });
    });
  });
});

... would allow a developer to follow the "Each Test Expects Once" rule above even while the code-under-test is doing too much, violating the Single Responsibility Principle.
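The original code-under-test is not shown here. As a hypothetical sketch (the names and bodies below are invented for illustration), a removeSomething that still satisfies a single-expectation test might look like this:

/* Hypothetical sketch: code-under-test that violates SRP */
var module = {
  remove: function(options) {
    return Promise.resolve(options);            // stand-in for the real remove action
  },
  analytics: {
    track: function(eventName) {                // stand-in for an analytics client
      console.log('tracked: ' + eventName);
    }
  },
  state: { items: ['a', 'b'] },
  removeSomething: function() {
    // builds the request AND calls the action AND mutates state AND reports analytics
    var promise = module.remove({ "subPathId": "removeSomething" });
    module.state.items = [];
    module.analytics.track('something-removed');
    return promise;
  }
};

// A single "expects once" test on the remove call still passes,
// even though removeSomething carries several unrelated responsibilities.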

See A Different Unit Test Pattern

Duplication Of Code Logic

Given some code ...

var testableCode = {
  getString: function(a, b, len) {
    return (a + b).substr(0, len);
  }
};

Now, examining a possible test ...

/* BAD PATTERN: Duplication of code logic */
it('expects getString to return a + b at len', function() {
  var a = "abc";
  var b = "def";
  var len = 4;
  var result = (a + b).substr(0, len);

  expect(testableCode.getString(a, b, len)).toEqual(result);
});

The test contains the same logic that the code under test uses. If this logic contained a flaw, the test might still succeed. Cleaning up these tests should result in something like the following ...

/* GOOD PATTERN: Use Magic Numbers */
it('expects getString to return a + b at len', function() {
  var a = "abc";
  var b = "def";
  var len = 4;

  expect(testableCode.getString(a, b, len)).toEqual("abcd");
});

This way, if the logic within the code is wrong (or is changed to something incorrect), the test will fail.
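To make the difference concrete, here is a hypothetical flawed implementation: the duplicated-logic test copies the flaw and keeps passing, while the magic-number test catches it.

/* Illustration (hypothetical buggy code): why duplicated logic hides a flaw */
var buggyCode = {
  getString: function(a, b, len) {
    // BUG: the intent was "first len characters", but the start index was dropped
    return (a + b).substr(len);
  }
};

it('still passes, because the test repeats the flawed expression', function() {
  var a = "abc";
  var b = "def";
  var len = 4;
  var result = (a + b).substr(len);             // same flawed logic copied into the test

  expect(buggyCode.getString(a, b, len)).toEqual(result);       // "ef" === "ef"
});

it('fails and exposes the bug, because the expected value is spelled out', function() {
  expect(buggyCode.getString("abc", "def", 4)).toEqual("abcd"); // actual is "ef"
});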

Bad Test Double

This is a case for testing with a "faithful" test-double.

Given some code (there's a bit more setup here) ...

var apiService = {
  base: 'http://www.example.com/',
  testApi: 'testApi/',
  getSimple: function() {
    return fetch(apiService.base + apiService.testApi, {
        method: 'GET'
      }).then(function(response) {
        return response.json();
      });
  },
  getMore: function() {
    return fetch(apiService.base + apiService.testApi, {
        method: 'GET'
      }).then(function(response) {
        return response.json();
      }).then(function(result) {
        result.checked = true;
        return result;
      });
  }
};

var testableCode = {
  getData: function(service) {
    return service.getSimple();
  }
};

This code sets us up to examine some testing paths where the test-double can cause problems.

/* BAD PATTERN: Testing with a mock */
var mockService, mockResult;
beforeEach(function() {
  mockResult = {
    data: "Bob"
  };
  mockService = {
    getSimple: function() {
      return Promise.resolve(mockResult);
    }
  };
});

describe('expect getData to', function() {
  it('return the correct object', function(done) {
    testableCode.getData(mockService)
      .then(function(data) {
        expect(data).toEqual(mockResult);
        done();
      }).catch(function() {
        expect(true).toEqual(false);
        done();
      });
  });
});

In the previous test, the test-double is completely detached from the real service: if the service or the data it returns changes, these tests would continue to pass, and the setup would have to be updated by hand to keep the mock faithful.

Looking at the next example ...

/* OK PATTERN: Hooking into the service */
var mockService, mockResult;
beforeEach(function(_apiService_) {
  mockResult = {
    data: "Bob"
  };
  mockService = _apiService_;
  mockService.getSimple = function() {
    return Promise.resolve(mockResult);
  };
});

describe('expect getData to', function() {
  it('return the correct object', function(done) {
    testableCode.getData(mockService)
      .then(function(data) {
        expect(data).toEqual(mockResult);
        done();
      }).catch(function() {
        expect(true).toEqual(false);
        done();
      });
  });
});

Like the prior example, this test will continue to pass if the service or the data it returns changes. It is still an improvement on the previous test, because the rest of the real apiService remains exposed for use, if needed; only getSimple is replaced. If testableCode changes to use other, unmodified parts of the service, those parts are still the real code.

/* GOOD PATTERN: Hooking into HTTP Request */
var mockService, mockResult;
beforeEach(function(_apiService_) {
  mockResult = {
    data: "Bob"
  };
  mockService = _apiService_;

  // Hook into HTTP Requests here ...
  // Specifically: GET on http://www.example.com/testApi/
  // Leaving this code out; will depend on packages
  //   or frameworks used
});

describe('expect getData to', function() {
  it('return the correct object', function(done) {
    testableCode.getData(mockService)
      .then(function(data) {
        expect(data).toEqual(mockResult);
        done();
      }).catch(function() {
        expect(true).toEqual(false);
        done();
      });
  });
});

This code should be much more resilient to change. For example, getData in testableCode could be changed to use getMore rather than getSimple from the service without the test failing.

Here, the test would only be vulnerable to changes in the API data coming back.
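What "hooking into the HTTP request" looks like depends on the tooling. As one hedged sketch, in an environment where Jasmine can spy on a global fetch (a browser or Node 18+), the beforeEach above might be filled in roughly like this:

/* Sketch: stubbing at the fetch level (assumes Jasmine and a global fetch) */
beforeEach(function() {
  mockResult = {
    data: "Bob"
  };

  // Intercept GET http://www.example.com/testApi/ below the service,
  // so getSimple (and getMore) still execute their real code paths.
  spyOn(globalThis, 'fetch').and.callFake(function(url, options) {
    return Promise.resolve(new Response(JSON.stringify(mockResult)));
  });
});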

Testing Against Bad Data

Given some code ...

var testableCode = {
  isDateAfterToday: (givenDate) => {
    var currentDate = new Date();
    return givenDate > currentDate;
  }
};

Now, examining a possible test ...

/* BAD PATTERN: Testing against bad data */
it('expects isDateAfterToday to return true', function() {
  var futureDate = new Date('2019-10-22');
  expect(testableCode.isDateAfterToday(futureDate))
    .toEqual(true);
});

/* BETTER PATTERN: Testing against bad data */
it('expects isDateAfterToday to return true', function() {
  var futureDate = new Date('3019-10-22');
  expect(testableCode.isDateAfterToday(futureDate))
    .toEqual(true);
});

/* BEST PATTERN: Testing against bad data */
it('expects isDateAfterToday to return true', function() {
  var tomorrow = new Date();
  tomorrow.setDate(tomorrow.getDate() + 1);
  expect(testableCode.isDateAfterToday(tomorrow))
    .toEqual(true);
});

In the BAD PATTERN, the date is hard-coded; we will eventually reach this date (yes, the date used here was found in a recently failing test).

In the BETTER PATTERN, a year is used that we will most likely not reach.

In the BEST PATTERN, we calculate a value that will never be reached: tomorrow.

Testing The Mock

Given some code ...

var testableCode = {
  getString: function(a, b, len) {
    return (a + b).substr(0, len);
  },
  getShortName: function(first, last) {
    return testableCode.getString(last + ", ", first, 10);
  }
};

Now, examining a possible test ...

/* BAD PATTERN: Testing the mock */
beforeEach(function() {
  spyOn(testableCode, "getShortName")
    .and.returnValue("Bob45678901234567890");
});

describe('expects getShortName to return', function() {
  it('a name truncated to 10 characters', function() {
    expect(testableCode.getShortName("Bob", "Last"))
      .toEqual("Bob45678901234567890");
  });
});

The only thing that gets tested in this example is the mock created in the beforeEach. Neither the real getShortName nor the getString it delegates to is exercised here; the test simply confirms that the spy returns the canned value it was configured with. (Knowing to mock at this level at all requires some knowledge of the internals, a form of Gray-Box Testing.)

Cleaning up these tests should result in something like the following ...

/* GOOD PATTERN: Testing appropriate code */
describe('expects getString to return', function() {
  it('a + b at len', function() {
    var a = "abc";
    var b = "def";
    var len = 4;

    expect(testableCode.getString(a, b, len)).toEqual("abcd");
  });
});

describe('expects getShortName to return', function() {
  it('a name truncated to 10 characters', function() {
    expect(testableCode.getShortName("Bob4567890", "Last"))
      .toEqual("Last, Bob4");
  });
});

Here, it is clear that we are testing the code itself, not the mock.

False Positives

Examining a possible test ...

/* BAD PATTERN: False positive */
it('expect the code inside setTimeout to be ignored', function() {
  setTimeout(function() {
    expect(true).toEqual(false);
  }, 1000);
});

The code inside the setTimeout will not run before the test completes.

In most JavaScript test frameworks, specs with no expectations simply pass.

/* BAD PATTERN: False positive */
it('test with no expect will always pass', function() {
  const hello = 'World';
  const life = 42;
});

Dealing with this issue is simple: in both Mocha and Jasmine, an extra parameter can be passed into the spec (usually called done). This flags the test as asynchronous, and the test runner will wait for that function to be called before marking the test as passed (or fail the test after a timeout).

Looking at the examples above in this light ...

/* GOOD PATTERN: Handling Asynchronous Behavior */
it('expect the code inside setTimeout to run', function(done) {
  setTimeout(function() {
    expect(true).toEqual(false);
    done();
  }, 1000);
});

/* GOOD PATTERN: Include expect */
it('test with an expect can pass or fail', function() {
  const hello = 'World';
  const life = 42;
  expect(true).toEqual(false);
});

These tests will now fail.
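As an additional safety net, recent versions of Jasmine support a configuration flag that turns any spec with no expectations into a failure (check the documentation for your version):

/* Optional: fail specs that contain no expectations (recent Jasmine versions) */
jasmine.getEnv().configure({
  failSpecWithNoExpectations: true
});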

Promises

Looking at some code ...

var testableCode = {
  getRejectedPromise: function() {
    return new Promise((resolve, reject) => { 
      setTimeout(() => {
          reject('fail');
      }, 1000);
    });
  }
};

And now, looking at the test ...

/* BAD PATTERN: False positive */
it('expects rejection to occur (should pass)', function() {
  testableCode.getRejectedPromise().then(function(result) {
    expect(result).toEqual('fail');
  });
});

As shown above, this test will give a false positive; the spec finishes before the promise settles. Applying the same done-based fix ...

/* GOOD PATTERN: Handling Asynchronous Behavior */
it('expects rejection to occur (should follow catch)', function(done) {
  testableCode.getRejectedPromise()
    .then(function(result) {
      expect(result).toEqual('pass');
      done();
    })
    .catch(function(result) {
      expect(result).toEqual('fail');
      done();
    });
});

At this point, the test would fail if the promise went through the resolve path; since the promise gets rejected, the catch runs and the test correctly passes.
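If the test framework supports async matchers (Jasmine exposes expectAsync in its 3.x releases, for example), the same intent can be stated more directly. A sketch, assuming that API is available:

/* Alternative sketch: asserting the rejection directly */
it('expects getRejectedPromise to reject with "fail"', async function() {
  await expectAsync(testableCode.getRejectedPromise())
    .toBeRejectedWith('fail');
});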

Testing Private Functionality Directly

Looking at some code ...

function testableCode() {
  function privateFnA() { ... };
  function privateFnB() { ... };

  function publicFnC() { ... };
  function publicFnD() { ... };

  return {
    publicFnC,
    publicFnD,

    testable: {
      privateFnA,
      privateFnB,
      publicFnC,
      publicFnD
    }
  };
}

Here, the tests can directly exercise all of the functionality in this code. The issue is that the private functionality should not be exposed at all, and if the internal design of the code shifts, this leads to significant test refactoring on top of the code refactoring.

The private functionality should have been tested indirectly through the publicly exposed functionality.
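As a hedged sketch of that approach (the helper names and bodies below are invented for illustration), keep the private helper hidden and let the public function's tests cover it:

/* Sketch: private behavior verified only through the public surface */
function testableCode() {
  function normalize(value) {                   // private helper, never exported
    return String(value).trim().toLowerCase();
  }

  function isMatch(a, b) {                      // public function that uses it
    return normalize(a) === normalize(b);
  }

  return {
    isMatch: isMatch                            // only the public surface is returned
  };
}

describe('isMatch', function() {
  it('ignores case and surrounding whitespace', function() {
    // normalize is exercised indirectly; if its behavior regresses, this fails
    expect(testableCode().isMatch('  Bob ', 'bob')).toEqual(true);
  });
});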

Excessive Setup

Excessive setup is more of a CODE SMELL than a pattern where incorrect code can be shown next to correct code. Just be aware of it: when a test's setup grows lengthy, take care to examine why, and document the reasoning if the setup genuinely needs to exist.

Conclusions

The purpose of this article was to provide concrete code patterns that are BAD and, where possible, ways to correct these BAD tests.

These tests should never have been written. They provide little or no value.

Testing code has been described as an "art form." It is, but it should not be. There are many good patterns that should be followed when writing tests. There is even a lifecycle of tests that deserves some attention. There are also many BAD patterns that developers should be aware of so that they can be avoided (or cleaned up).

Top comments (9)

Adalberto Camacho Jr.

Very well written. I loved how you gave bad examples and proceeded to correct them.

And you raise some good points too. I found many testing patterns I learned from experience that I wish someone had told me about when I was just getting started with TDD. Especially the one about not giving too much thought to the DRY principle during tests.

James Thomson

One of the hardest parts I found about TDD was knowing exactly what to test.

You start off thinking you must test every little bit of code, but this just creates major overhead and often quite brittle tests. I've come to the conclusion that it's better to think about it as input/output.

Only testing for what's coming in and what's going out. The internals of how we get that expected output aren't necessary to test. This has helped greatly when writing tests and hopefully is correct.

Other than that, TDD has really helped to encourage keeping functions simple and modular. When something becomes (or looks like it's becoming) difficult to test, it often means the function is becoming too verbose and should be split up into smaller more manageable/testable pieces.

bob.ts

Very well stated!

James Thomson

Thanks! That's reaffirming :)

bob.ts

I was in the same boat and recently have been mentoring an organization on testing ... prompting this article. Thanks!

peerreynders

re: Bad Test Double
Actually your GOOD PATTERN is a potential anti-pattern in itself.

You've taken a fast, well isolated (decoupled) test and slowed it down and made it more difficult to set up and run - you've created an "integrated test" (not to be confused with an integration test).

Integrated Tests Are A Scam (2009)
Clearing Up the Integrated Tests Scam (2015)

The accepted solution to the conundrum you present is contract testing (2011!)

As ridiculous as it may initially sound, a contract test shows that the service implementation behaves correctly in the way the client expects - i.e. it verifies that the test double's behaviour is in alignment with the actual service.

The idea is to maintain the isolation and speed of the original microtests in order to remain viable to be executed quickly and frequently.

Contract tests are run regularly but much less frequently as they require more resources and are more complicated. When the test double's implementation becomes stale the contract test will fail. It is at this point that the test double is updated to make the contract test pass - then the updated test double is deployed to the collection of microtests.

re: Promises

reject('fail');

MDN: Promise.reject()

For debugging purposes and selective error catching, it is useful to make reason an instanceof Error.

Dennis Whalen

Good stuff Bob!

Edwin Klesman

Great set of examples! Love this read

bob.ts

Thank you!