Jack Marchant

No excuses, write unit tests

Unit testing can sometimes be a tricky subject, no matter what language you're writing in. There are a few reasons for this:

  • There's a fear that unit testing will take time your team doesn't have
  • Your team can't agree on an acceptable level of test coverage or get stuck bike-shedding*
  • People are frustrated by breaking tests when changing code

First, let's invest a bit of time in understanding what I mean by unit testing. A unit can be any block of code that can be isolated and executed on its own. This can be a function, or even a group of functions, although the latter is more difficult to test because of the extra moving parts.

A function is easily testable if it always produces the same output (return value) when given the same inputs (parameters).

That's great for testing, because we can set expectations based on those return values. The idea is that as long as the test passes, the function still satisfies the requirements in the assertions, regardless of how it arrives at the result.

An example of simple testing:

import { it } from 'mocha';
import { expect } from 'chai';

/**
 * Add numbers together
 *
 * @param {...number} numbers One or many numbers to add
 */
const add = (...numbers) => {
  return numbers.reduce((acc, val) => {
    return acc + val;
  }, 0);
};

it('should add numbers', () => {
  const expected = 15;
  const actual = add(1, 2, 3, 4, 5);

  expect(actual).to.equal(expected); // true
});

/**
 * Subtract numbers from an initial number
 *
 * @param {number}    initialNumber The number we start from when subtracting
 * @param {...number} numbers       One or many numbers to subtract
 */
const minus = (initialNumber, ...numbers) => {
  return numbers.reduce((acc, val) => {
    return acc - val;
  }, initialNumber);
};

it('should minus numbers', () => {
  const expected = 5;
  const actual = minus(15, 5, 3, 2);

  expect(actual).to.equal(expected); // true
});

You can take these tests as far as you like. If we wanted to, we could add tests for what happens when add and minus are passed values that aren't numbers, or decide whether they need to handle negative numbers.
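
For instance, a couple of hypothetical edge-case tests for add might look like this (a sketch only; it assumes add is in scope as above):

import { it } from 'mocha';
import { expect } from 'chai';

it('should handle negative numbers', () => {
  expect(add(-1, -2, 3)).to.equal(0);
});

it('should return 0 when given no numbers at all', () => {
  expect(add()).to.equal(0);
});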

Adding tests for even the simplest functions can provide you with more information about:

  • How hard the function is to use (number of parameters, how well the name communicates the output)
  • Potential risks for the function living in the wild and being used by other developers
  • Whether the function is doing too much, either because you have to mock the world for it to even run, or if you are asserting too many things per function

There's so much to gain from writing tests, and so much to lose if you don't.

You've got time for unit testing

Unit testing your code takes some extra time upfront because, of course, you need to write extra code: the tests.
Then, weeks or months after you've written those tests, you make a change to a tested function and a test breaks. Bugger. Now you've got to go in and fix the test.

I've heard people complain that fixing broken tests is hard, time-consuming and/or a waste of time. My response: where would you rather fix that bug? In production, while people are angry that features are broken, or in a unit test, where the only cost is a slightly longer task?

If you change an API, things should break. If the tests didn't break and that code went out to production, then everywhere else the code is used is now broken: you've got 99 problems, but lucky you, testing ain't one.
I'll tell you what: most teams don't have time to fix bugs in production, yet time is always made for it. Everyone, from managers to developers, knows that fixing production bugs is important, but we keep waiting until bugs reach production before we fix them.

To me, it seems we could move bug fixing earlier in the process and spend more time focusing on code clarity, which promotes understanding of the code. Half the time spent fixing a bug is figuring out how the hell it happened. A unit test would tell you as soon as you changed something and ran the test.

Writing tests gets easier the more you do it. You will find that after a while you start writing code that's easily testable, because you were thinking about how you would test that code while you were writing it! Imagine that!

Just write the damn test

Engineers are renowned for over-engineering. We think in abstractions and it's normal to think too much about what should be a simple solution. The hardest part of that is just realising you might be going too far.

Often, when something new pops up, we reach straight for the cleverest solution instead of addressing the core problem in an efficient manner.

Deciding on team coding best practices is great. Agreeing on test coverage, and on what and how to test, is good. Preventing your team from trying things and learning from mistakes is bad.

Don't let any of that stand in the way of writing the damn test. A good rule of thumb for any new software is:

First, make it work. Then make it right.

This rule can be applied to unit testing in a number of ways, but the most useful approach I've found is to first write the code to make the thing work, preferably as small functions, and then write a test for each.

Now that you've got a tested function, change its internal code and see if the test still passes.
Simply: Write. Test. Refactor.
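
As a sketch of that cycle, here's a hypothetical first pass at add and a refactor of it; the 'should add numbers' test from earlier passes unchanged against both versions:

// Write: make it work with a plain loop first.
const add = (...numbers) => {
  let total = 0;
  for (const n of numbers) {
    total += n;
  }
  return total;
};

// Refactor: same behaviour, expressed with reduce.
// The test only asserts on the return value, so it doesn't change.
const addRefactored = (...numbers) =>
  numbers.reduce((acc, val) => acc + val, 0);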

Dealing with broken tests

After you've been writing tests for a while, you'll start to notice that more of the changes you make break existing tests. This is a good thing. Don't underestimate the power of a broken test.

Firstly, it forces the developer who broke the test to understand a bit more about the context the code runs in: what inputs and outputs are expected, depending on how well it was tested.

Secondly, it forces any API changes to be well thought-out and potentially discussed as a team depending on the size of the change.

Thirdly, and most importantly, you found out in your terminal, as opposed to when a customer tried to do something.

Just like anything, you can go too far with testing, and how deep you go depends on the application.

In my experience, I see no reason good enough not to at least have some unit testing. Run the code in some expected scenarios and see what happens.

It's just like when you deploy your application and start clicking around on buttons, interacting with the app.

You're not going to just deploy your application and forget it even exists!

Or would you? Start unit testing today. Start small and work your way up.

*Bike-shedding refers to spending time on relatively unimportant details while the larger, more important problem goes unaddressed.

Top comments (18)

Jonathan Boudreau

I personally find that integration or end-to-end tests are more worthwhile.

Niels Krijger

Same here. Whitebox testing only verifies that your function works; for a library, standalone module, difficult algorithm or some such, it's great. If your application has a dozen layers and is too complex for any human to understand (cough J2EE cough), you might need it too.

But it does not verify your API or feature at all. In fact, it does the opposite: it gives the developer the feeling he has done a good job because he verified the request against all his own assumptions (stubs, spies, fakes, etc.). But does it actually work in real life? No idea.

Building some JSON API endpoint, I'd use supertest or something similar: run a docker-compose file with your database(s) and mock any external requests using nock (nock.recorder.rec()!).
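
A minimal sketch of that setup (the app import, endpoint and mail-service URL are hypothetical):

import { it } from 'mocha';
import { expect } from 'chai';
import request from 'supertest';
import nock from 'nock';
import app from '../src/app'; // hypothetical Express app

it('should create a user without calling the real mail service', async () => {
  // Intercept the outgoing HTTP call so the test stays self-contained.
  const mailService = nock('https://mail.example.com')
    .post('/send')
    .reply(202);

  await request(app)
    .post('/users')
    .send({ email: 'ada@example.com' })
    .expect(201);

  expect(mailService.isDone()).to.equal(true);
});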

It's quick enough, makes few assumptions, and gives actual verification that it works: 95% code coverage in less than half the amount of test code.

No excuses, don't write unit tests for an API server.

Jiandong Chen

Your API endpoints usually end up calling various underlying classes before returning the result... The number of code paths your integration tests will cover is limited.

Given that integration tests are more expensive to run and more difficult to write, my usual approach is to write more detailed unit tests, plus one or two integration tests to ensure the API works end to end.

Niels Krijger

Please note: the following only applies to API tests ;-) I use unit tests on a daily basis for other things (libraries, client apps).

There are certainly code paths not easily covered by API tests (a failing database connection, for example). In practice I've found the majority of test scenarios can be covered by API tests. A code path in your code should primarily exist to produce different behaviour, and it should be clear to the client app whether that code path worked as intended, i.e. it should produce a different response when it fails.

Here's an extract of some tests from an old project of mine (I still write tests much the same way, only with async/await):

import { expect } from 'chai';
import { expectError } from '../setup';
import loadFixtures from '../fixtures';
import { post } from '../requestHelpers';

describe('POST /users/reset', () => {
  beforeEach(() => {
    return loadFixtures();
  });

  it('should return 204 when user has verified email address', () => {
    return post('/users/reset', 204, { user: 'uSeR1' });
  });

  it('should return 204 when resetting password with email address', () => {
    return post('/users/reset', 204, { user: 'USER-1-a@TesT.org' });
  });

  it('should return 403 when user has not been verified yet', () => {
    return post('/users/reset', 403, { user: 'Unverified' }).then((res) => {
      expect(res.body.error).to.equal('unverified');
    });
  });

  it('should return 404 when username does not exist', () => {
    return post('/users/reset', 404, { user: 'invalid' });
  });

  it('should return 400 when username has not been defined', () => {
    return post('/users/reset', 400, {}).then((res) => {
      expect(res.body.error_details.length).to.equal(1);
      expectError(res, '', 'required', 'should have required property \'user\'');
    });
  });
});

Yes, you do need to make an investment to set up API tests. In the code above you'll notice the loadFixtures and post helper methods actually start/stop the API server. You also want to mock any external APIs' requests (using some library).

Regardless of what you think of the endpoints, it is very clear what the tests do, and they are very brief.

If you commit to adding 1 or 2 happy-flow API tests, you'll have to make that investment anyway, and the majority of the work is then done. All unhappy flows are usually trimmed-down versions of happy-flow code. I'd argue they are much easier to write than unit tests; you focus only on the output, not on how your code works internally or on the behaviour you expect from all the mocked dependencies. On top of that, you're able to refactor your codebase to your heart's content without fear of breaking the API or having to change existing tests.

Using this approach with only a limited number of tests, code coverage lands somewhere in the low 90s and code-path coverage between 80-90%. I usually don't bother with the paths left over; if the database starts erring out, I don't care much how the server reacts. If it fails, it fails, and my infra needs to handle that properly. In those unforeseen circumstances no backend server functions properly anyway, and I'm more concerned with how users experience it, i.e. requests timing out on the client app or 5XX errors: stuff I test on the client app, not the backend API. Uncaught errors in the backend server I always translate to a 500 code. Two common exceptions where I prefer to fail silently are batch jobs and third-party requests to analytics/BI-type services, which should never affect the happy flow.
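
Translating uncaught errors to a 500 can be as small as a catch-all error middleware; a minimal sketch in Express (the error body is just an example):

import express from 'express';

const app = express();

// ...routes...

// Catch-all error middleware: any uncaught error from a route
// handler becomes a plain 500 response. Express requires all four
// parameters for this to be treated as an error handler.
app.use((err, req, res, next) => {
  console.error(err); // log for diagnosis, don't leak details to clients
  res.status(500).json({ error: 'internal_server_error' });
});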

You're talking about classes and many code paths. That hints at a more serious problem: complexity.

I just love Dijkstra's quotes:

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

“Simplicity is prerequisite for reliability.”

There's a famous paper, "Out of the Tar Pit", that talks about this a great deal (although it's a tough read, mind you).

For me (and it took me 6 years to learn this! Stuck in the PHP/Java paradigm), it doesn't make much sense to build a heavily OO-oriented codebase with lots of interacting components, code paths, an ORM, and lots of state to maintain… for a stateless backend API request that is processed in a fixed set of steps.

A request, a response, and code reuse where it makes sense. With client apps becoming much more complex (Redux, React, RxJS, Ember, Angular, React Native, native mobile apps, etc.), backend APIs have actually become much simpler.

I fully agree API tests take longer to run. When I was doing Java I didn't bother with them at all; the whole environment was way too slow to reset each time. When I switched to Node.js I rediscovered them, and in Golang I do much the same now too. Running a single test (reset database, run migration scripts, load test fixtures, start server, run test, stop server) takes 150-250ms. I now have a project with 230 tests and it takes up to 40 seconds to run them all. That's a long wait during development. In practice I hardly ever run the entire test set; I limit myself to a specific test suite or test case while developing. But I do that with unit tests anyway. The build server always guarantees everything works when I push to my feature branch, and I trust that.

Dmitry Diukariev

Yes, my experience tells me the same. A unit test typically verifies a single function, but if you follow good coding practices, most of your functions will be small, and I find that for two-thirds of them it's easy to prove they work as expected. E.g. why would you bother writing a unit test for something like this:

function testMe(a) {
  if (a) {
    return 1;
  } else {
    return 2;
  }
}

So I only write unit tests for the parts that are complex or not obvious, and that's definitely not 100% of the codebase.

It is better to invest that time in integration/end-to-end tests, which save a lot of time on regression testing.

Gabriel Aumala

I believe it's a tradeoff. If you run unit tests that only test small functions, debugging a failure is pretty much trivial: with a quick glance you realise, "Oh, silly me! I typed that wrong!" Perhaps so trivial that you spend more time writing the test than writing the actual code.

If you skip straight to integration tests for a function that calls various underlying classes/modules, you save a lot of time writing tests and quickly get nice coverage, but when a test fails you'll spend more time debugging: you have to pinpoint exactly where the mistake is among a bunch of files.

I think one should be judicious about which test to write in a given situation. You can't generalise and say that one approach will always be more worthwhile than the other.

edA‑qa mort‑ora‑y

In a lot of situations, writing a test first adds absolutely zero overhead. As a programmer you need some way to verify your new feature works anyway, so that verification code might as well be a unit test.

I'd go further and say that unit testing can even save development time by isolating features. Many features become complex when exposed to higher layers, or by the time they make it into the UI. By isolating the actual new functionality at a lower level, you save the effort of testing the feature through those layers.

Note, I draw one big exception to unit testing, and that's the final UI layer. I'm still in favour of structured testing there, but find low-level automation is not cost-effective.

LondonOpenData

"You’re not going to just deploy your application and forget it even exists!" well maybe you are. In fact maybe there's quite a large class of coding, which is in the form of quick experiments never to see the light of day. Maybe many proper applications start life like this (as a not-really-serious experiment) so then I guess there's a question of when to decide to get serious and start adding unit tests to this thing.

My point is, surely it's forgivable not to be unit testing from the very beginning. Or am I just not buying into this properly?

I saw some interesting ponderings at the other end of the spectrum: if you're unit testing yourself up to the eyeballs (going for 100% coverage), maybe you're going too far: labs.ig.com/code-coverage-100-perc...

Chris Koeberle

Once you make unit tests part of your routine, you will likely find that you are much more efficient at writing those quick experiments that never see the light of day, assuming you're working in a context where you're already proficient at writing unit tests.

If your quick experiment is of the "let me see if I can integrate these two unfamiliar technologies" sort, then of course unit tests aren't going to be helpful. But if your quick experiment is (or can be) easily testable, then it most likely makes sense to write the tests.

In essence, I'm convinced that "will it go to prod" is completely orthogonal to "should I write tests." Unit tests are there to make development easier. If you want your development to be easier, make a conscious decision about whether the effort to write the tests is justified.

Tamara Temple

This is the most common excuse given to me for not writing tests, and then holy cow, that sucker is in production and people are looking at it daily... too late to write tests, we need more features now!!

Write the damn tests.

Erebos Manannán

I regularly write tests for the small things I write as experiments, or for small tools, things I "know" will never go to production. How better to test that the experiment/tool/whatever works than with automated tests?

You often save time by setting up automated tests, too: instead of going through some complicated manual process to check your stuff still works after every change, your unit tests get triggered automatically (if correctly set up) and tell you whether the code works.

Additionally, there's a clear benefit to writing code that is easy to test: it's often better in quality thanks to readability, fewer dependencies, fewer "God object"-type issues, and so on.

Jessy

I've only started hearing about unit testing quite recently, since I never learned about tests at school, and the other dev where I work never used them either. But now that I'm starting a new job in a month, it's something I definitely want to learn.

However, I still don't understand how to actually use them in a real-world situation. Every tutorial I find uses very basic examples, usually some kind of addition function. Most of my functions are way more complex than that!

Chuck Woodraska

I feel like a lot of my time is spent writing mock objects for DB data or other outside things. Any suggestions for writing tests that hit external sources?

Jack Marchant

I would suggest that, as a general rule, you shouldn't be spending more than 20-30% of your time writing tests while building a feature. If it's more than that, I would question the architecture decisions in the app.
Sometimes you will have to write mocks, but I find being strict about the interface to an external source is key!
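
A minimal sketch of what being strict about that interface can look like, using a hand-rolled stub instead of a mocking library (all names hypothetical):

import { it } from 'mocha';
import { expect } from 'chai';

// The feature depends on a narrow interface, not on a whole DB client.
const makeGetUserName = (userRepo) => async (id) => {
  const user = await userRepo.findById(id);
  return user ? user.name : null;
};

it('should return the user name', async () => {
  // A stub satisfying just the interface we declared.
  const stubRepo = { findById: async () => ({ id: 1, name: 'Ada' }) };
  const getUserName = makeGetUserName(stubRepo);

  expect(await getUserName(1)).to.equal('Ada');
});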

Erebos Manannán

You really don't say much that can be used to help with your specific use case; it depends heavily on what you're writing and which languages, frameworks and tools you're using.

However, some common solutions:

  • Fixtures: Record known good test data sets somewhere and reuse that in all your tests as much as possible

  • Automatic mocking: Many frameworks and languages provide powerful ways to (mostly) automatically mock certain kinds of things (e.g. standard HTTP requests, database access)

  • Spend some time improving your tools: If you constantly spend time mocking your DB layer, make your DB layer support mocking out of the box, e.g. by defining the mock class and fixture at the same time you create your model classes (see the sketch below)

  • Change tools: If your tools for some reason require an unreasonable effort to set up unit tests, maybe you should find better tools. Of course I'm not suggesting you take your 5-year project and rewrite it from scratch, but you can start a process of migrating to a better framework/language/other tool progressively.

Generally when starting a new project I spend some time making sure it's easy to work with, including making unit tests easy to write.
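
To make the first and third bullets concrete, here's a minimal sketch (names hypothetical): a fixture module, plus an in-memory repo exposing the same interface as the real DB-backed one, so tests can swap it in and seed it from fixtures.

// fixtures/users.js: a known-good data set reused across tests.
export const users = [
  { id: 1, name: 'Ada', verified: true },
  { id: 2, name: 'Grace', verified: false },
];

// An in-memory stand-in for the real repo, with the same interface,
// so tests can use it without touching a database.
export const createInMemoryUserRepo = (seed = []) => {
  const rows = [...seed];
  return {
    findById: (id) => rows.find((u) => u.id === id) || null,
    insert: (user) => {
      rows.push(user);
      return user;
    },
  };
};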


Most languages and frameworks with any reasonable standing will have ways to run unit tests, but sometimes it takes a bit of extra effort to cater to your specific use case.

Nigel Scott

Good article, but I'd go one step further: write the test first, then the code. That way you know you've actually written something that passes the test...
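
A minimal sketch of that flow in the article's mocha/chai style (multiply is a hypothetical example):

import { it } from 'mocha';
import { expect } from 'chai';

// Step 1: write the failing test first.
it('should multiply numbers', () => {
  expect(multiply(2, 3, 4)).to.equal(24);
});

// Step 2: write just enough code to make it pass.
const multiply = (...numbers) =>
  numbers.reduce((acc, val) => acc * val, 1);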

mzzn

Solid point!