Daniel Macák
5 steps to better unit/integration testing

1. Use setup functions

Setup functions are a much better pattern than test lifecycle methods like beforeEach, beforeAll and the rest. What do I mean by setup functions?

it('should do something with current user', () => {
  const { userService } = setup();
  ...
  expect(userService.getUser()).toBe(...);
});

function setup() {
  const userApi = new UserTestApi();
  const userService = new UserService(userApi);
  return { userService };
}

They do as their name says: they set up everything that's needed to run your tests, including mocks, stubs, populated models, you name it.

Why is using test lifecycle methods an anti-pattern?

  1. They create hidden dependencies for all tests they apply to.
  2. They are inflexible. If you need different behavior for some of your tests, you have to put them in a different describe with a different lifecycle method implementation. The number of describes can really explode if the tests vary a lot.
  3. They are rigid, since they can't be parameterized from the individual tests.

Setup functions don't suffer from these problems. On the contrary, they make them easy to overcome. In the same order:

  1. Hidden dependencies - not a problem. You can have various setup functions, or just parameterize them, and then a test's position in the file doesn't matter. You can move it elsewhere, even outside the top-level describe, and it won't break.
  2. They are flexible for the reasons mentioned above.
  3. They aren't rigid, since you can pass unique arguments from inside each of your tests.

2. Avoid nesting

Nesting is one of the most wanted bandits in any kind of code, as it makes code less readable and harder to reason about. This is no different for tests. If you nest multiple describes and aggravate the problem by layering test lifecycle methods as well, you've got a serious problem on your hands.

describe('My component', () => {
  beforeAll(() => {...});

  describe('in read-only mode', () => {
    beforeEach(() => {...});

    describe('on touch device', () => {
      beforeEach(() => {...});

      it('poor me if you want to change the setup!', () => {...});
    });
  });
});

Reading and thus maintaining this code is hell. If you want to move the actual test case elsewhere, or change the way it's set up, good luck!

The solution is easy. Don't nest, don't divide the setup into multiple layers, just keep it as shallow as you can.

3. Treat your tests same as your production code

We programmers are used to caring a lot about our production code. We refactor it, divide it into neat modules, extract reusable parts, make sure it's commented and nicely readable. But honestly, do you treat your tests the same way? The truth is that you probably don't, at least in my experience across many companies.

We programmers often treat tests as a necessary evil that we want to be done with asap. This is partly due to the tests often being a kind of afterthought. A lot of people tend to write production code, refactor it, test it manually, and only when they think they are finally done do they realize they should write tests for their code as well.

I am not the kind of guy who will tell you to do TDD, because in many situations it simply doesn't make sense. If you want to improve, however, be more conscious of the principles you apply to your production code and apply them to your tests too. Are your tests not DRY? Just extract common steps into reusable functions. Are your test files too long and unwieldy? Split them up into multiple logical units (files). And refactor your tests every time you feel like you don't understand what the hell is going on there.

4. Find the right balance

Finally getting to the juicy stuff. This might be controversial, but I don't care. Let's just forget once and for all about the mythical 100% test coverage. It's a lie and you will never achieve it, whatever your code coverage tool or Uncle Bob says. Even if people accept this, they sometimes hold it as an ideal which they strive towards. I think this leads only to a lot of irrelevant and/or unmaintained tests and therefore to a great time waste.

The bigger your project, the more impractical this ideal is. The reality is that you have to prioritize the same way as you would with the rest of your code base and your project in general. You can't write test for everything the same way as you can't implement all the features you'd like nor perform all the refactorings you wish.

There is an important caveat to this though. Most of us work on projects that are not terribly important, sorry, and agility in development and good real-time monitoring are of more value to us than high code coverage. That said, there are programs that people's lives depend on, where any mistake can be fatal to their users. This can be software used in aviation, medicine or weapon systems. It goes without saying that making sure such a program works correctly is of the essence, and testing everything is absolutely necessary.

So the takeaway is the following: Judge the importance and ROI of your tests, but most importantly use common sense, and choose your desired coverage accordingly.

5. Prefer integration over unit

Again, this is highly contextual, but in my experience, for most apps created today, unit tests make less sense. If you have a unit, be it a class or function, which does something either very technically involved, or encapsulates important business logic on its own, a unit test is useful. But let's be frank here: most of the time, our units are not that interesting, and creating unit tests for them is a waste of time.

One other aspect is that if you want to change your unit, even if you took great care writing your test properly, you are probably gonna break it. And we change units often. There is nothing worse than being slowed down by tests breaking left and right when you know it's just the implementation changing, not the resulting system behaviour. Such DX often leads to people ditching tests altogether.

That's the reason I prefer integration tests, ideally ones that create the whole app in headless mode, so without the UI but otherwise with full functionality! Here is an example:

it('should sign in user', async () => {
  const { userController } = await setup();
  await userController.signIn(...);
  expect(userController.getUser()).toBeDefined();
  ...
});

async function setup() {
  return await initTestApp();
}

In this setup, I like to test the high level system behaviour instead of the implementation details. Because of this architecture, if I change my units, but the resulting behaviour is still the same, the integration tests don't care. I can change the signIn's implementation, but as long as it populates the current user in the model, the test is still gonna pass.

But this is not all! By using this holistic approach instead of the pretty low-level focus on units, we are testing much closer to what really matters to users - that the system works as a whole. This means such tests bring much more value than unit tests, which exist in a completely mocked and therefore detached environment.

That's all folks

Let me know if you agree or if your approach is different. I am happy to discuss.
