Discussion on: No excuses, write unit tests

Jonathan Boudreau

I personally find that integration or end-to-end tests are more worthwhile.

Gabriel Aumala

I believe it is a tradeoff. If you run unit tests that only test small functions, debugging a failure is pretty much trivial. With a quick glance you realize: "Oh, silly me! I typed that wrong!" Perhaps so trivial that you spend more time writing the test than writing the actual code.

If you skip directly to integration tests for a function that calls various underlying classes/modules, you save a lot more time writing tests and quickly get nice coverage, but when a test fails you are going to spend more time debugging. You have to pinpoint where exactly the mistake is among a bunch of files.

I think one should be judicious about which test to write in a given situation. You can't generalize and say that one approach will always be more worthwhile than the other.

Dmitry Diukariev

Yes, my experience tells me the same. A unit test typically verifies a single function, but if you follow good coding practices most of your functions will be small. I find that for two thirds of functions it is easy to prove they work as expected. E.g. why would you bother writing a unit test for something like this:

function testMe(a) {
  if (a) {
    return 1;
  } else {
    return 2;
  }
}

So I only write unit tests for the parts that are complex or not obvious. But that's definitely not 100% of the code base.

It is better to invest time in integration/end-to-end tests, which save a lot of time on regression testing.

Niels Krijger

Same here. Whitebox testing only verifies that your function works; for a library, standalone module, difficult algorithm or some such it's great. If your application has a dozen layers and is too complex for any human to understand (cough, J2EE, cough) you might require it too.

But it does not verify your API or feature at all. In fact, it does the opposite: it gives the developer the feeling they have done a good job and verified the request, given all their assumptions (stubs, spies, fakes, etc.). But does it actually work in real life? No idea.

Building a JSON API endpoint I'd use supertest or something similar: run a docker-compose file with your database(s) and mock any external requests using nock (nock.recorder.rec()!).

It's quick enough, makes few assumptions, and actually verifies that it works: 95% code coverage with less than half the amount of test code.

No excuses, don't write unit tests for an API server.

Jiandong Chen

Your API endpoints usually end up calling various underlying classes before returning the result... The number of code paths your integration tests will cover is limited.

Given that integration tests are more expensive to run, as well as more difficult to write, my usual approach is to write more detailed unit tests, and write one or two integration tests to ensure the API works end to end.

Niels Krijger

Please note: the following only applies to API tests ;-) I use unit tests on a daily basis for other stuff (libraries, client apps).

There are certainly code paths not easily covered by API tests, a failing database connection for example. In practice I've found the majority of test scenarios can be covered by API tests. A code path in your code should primarily exist to produce different behaviour, and it should be clear to the client app whether that code path worked as intended; i.e. it should produce a different response if it is failing.

Here's an extract of some tests from an old project of mine (I still write tests much the same way, only with async/await):

import { expect } from 'chai';
import { expectError } from '../setup';
import loadFixtures from '../fixtures';
import { post } from '../requestHelpers';

describe('POST /users/reset', () => {
  beforeEach(() => {
    return loadFixtures();
  });

  it('should return 204 when user has verified email address', () => {
    return post('/users/reset', 204, { user: 'uSeR1' });
  });

  it('should return 204 when resetting password with email address', () => {
    return post('/users/reset', 204, { user: 'USER-1-a@TesT.org' });
  });

  it('should return 403 when user has not been verified yet', () => {
    return post('/users/reset', 403, { user: 'Unverified' }).then((res) => {
      expect(res.body.error).to.equal('unverified');
    });
  });

  it('should return 404 when username does not exist', () => {
    return post('/users/reset', 404, { user: 'invalid' });
  });

  it('should return 400 when username has not been defined', () => {
    return post('/users/reset', 400, {}).then((res) => {
      expect(res.body.error_details.length).to.equal(1);
      expectError(res, '', 'required', 'should have required property \'user\'');
    });
  });
});

Yes, you do need to make an investment to set up API tests. In the code above you'll notice the loadFixtures and post helper methods actually start/stop the API server. You also want to mock any external API requests (using some library).

Regardless of what you think of the endpoints, it is very clear what the tests do, and they are very brief.

If you commit to adding 1 or 2 happy-flow API tests you'll have to make that investment anyway, and then the majority of the work is done. Unhappy flows are usually trimmed-down versions of the happy-flow code. I'd argue they are much easier to write than unit tests: you only focus on the output, not on how your code works internally or on the behaviour you're expecting from all mocked dependencies. On top of that, with API tests you're able to refactor your codebase to your heart's content without fear of breaking the API or changing existing tests.

Using this approach with only a limited number of tests, code coverage goes to the low 90s and code-path coverage to 80-90%. I usually don't bother with the paths left over; if the database starts erring out I don't care much how the server reacts. If it fails it fails, and my infra needs to solve that properly. In those unforeseen circumstances no backend server functions properly anyway, and I'm more concerned with how users experience it, i.e. requests timing out on the client app or 5XX errors: stuff I test in the client app, not the backend API.

Uncaught errors in the backend server I always translate to a 500 code. Two common exceptions where I prefer to fail silently are batch jobs and third-party requests to analytics/BI-type services that shouldn't ever affect the happy flow.

You're talking about classes and many code paths. That hints at a more serious problem: complexity.

I just love Dijkstra's quotes:

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

“Simplicity is prerequisite for reliability.”

There's a famous paper, "Out of the Tar Pit", that talks about this a great deal (although it's a tough read, mind you).

For me (and it took me 6 years to learn this! Stuck in the PHP/Java paradigm) it doesn't make much sense to build a heavily object-oriented codebase with lots of interacting components, code paths, an ORM, and lots of state to maintain… for a stateless backend API request that is processed in a fixed set of steps.

A request, a response, and code reuse where it makes sense. With client apps becoming much more complex (Redux, React, RxJS, Ember, Angular, React Native, native mobile apps, etc.), backend APIs have actually become much simpler.

I fully agree API tests take longer to run. When I was doing Java I didn't bother with them at all; the whole environment was way too slow to reset each time. When I switched to Node.js I rediscovered them, and in Go I do much the same now too. Running a single test (reset database, run migration scripts, load test fixtures, start server, run test, stop server) takes 150-250ms. I now have a project with 230 tests and it takes up to 40 seconds to run. That's a long wait during development, so in practice I hardly ever run the entire test set and limit myself to a specific test suite or test case, but I do that with unit tests anyway. The build server always guarantees everything works when I push to my feature branch, and I trust that.