Chris Bongers

Posted on • Originally published at daily-dev-tips.com

Jest and recurring actions

Sometimes you want Jest to perform a recurring action between tests.

Some examples: querying a database, clearing storage, clearing mocked data, or resetting a mocked route.

We don't really want to repeat this code in every test, and luckily Jest has a solution for us.

Recurring actions

There are four functions we can hook into:

  • beforeEach: Runs before each test
  • afterEach: Runs after each test
  • beforeAll: Runs before all tests
  • afterAll: Runs after all tests

Let's sketch an example.
We have a database function to call, so the steps we want to achieve are:

  • create the database
  • populate it with mock data
  • clear and re-populate it between tests
  • remove the database when done

This scenario is a perfect case for all four functions to hook into.

The first thing we want is to create our database, for which we'll use the beforeAll function.

beforeAll(() => {
  return createDatabase();
});

The next step is to populate the database with demo data we can alter in our tests.

beforeEach(() => {
  return populateDatabase();
});

Our tests might alter/remove/create elements in this database, so we want to clear it between tests.

afterEach(() => {
  return clearDatabase();
});

And once we are all done, we should remove the database so the next run will be fresh again.

afterAll(() => {
  return removeDatabase();
});

And that's it: these four hooks will now run at the right times.
To showcase this, let's create this sample test file and see when each call is used.

test('user database has Chris', () => {
  expect(db.user.hasName('Chris')).toBeTruthy();
});

test('user database does not have Thomas', () => {
  expect(db.user.hasName('Thomas')).not.toBeTruthy();
});

The firing order is as follows:

  • beforeAll: Creates the database
  • beforeEach: Populates the database
  • Test 1 runs: Finds user Chris
  • afterEach: Clears the database
  • beforeEach: Populates the database
  • Test 2 runs: Can't find Thomas
  • afterEach: Clears the database
  • afterAll: Removes the database

And that's the flow it will take.
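The firing order above can be reproduced outside Jest with a small plain-Node sketch; the runSuite helper and the string labels here are purely illustrative, not Jest's API:

```javascript
// Plain-Node sketch of how Jest schedules the four hooks around tests.
// runSuite and the label strings are hypothetical, just to show the ordering.
function runSuite({ beforeAll, beforeEach, afterEach, afterAll, tests }) {
  const order = [beforeAll]; // runs once, before everything
  for (const test of tests) {
    // each test is wrapped by beforeEach and afterEach
    order.push(beforeEach, test, afterEach);
  }
  order.push(afterAll); // runs once, after everything
  return order;
}

const order = runSuite({
  beforeAll: 'createDatabase',
  beforeEach: 'populateDatabase',
  afterEach: 'clearDatabase',
  afterAll: 'removeDatabase',
  tests: ['test 1: finds Chris', 'test 2: does not find Thomas'],
});

console.log(order.join('\n'));
```

Running this with node prints the same create → populate → test → clear cycle the list describes, ending with the single removeDatabase step.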

These hooks make our tests more manageable and ensure each test starts from a solid, fresh state.

Thank you for reading, and let's connect!

Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on Facebook or Twitter.

Latest comments (5)

Jake Carpenter

I highly recommend against setting up or "seeding" data in shared setup and/or context. Your example of using the before/after actions to set up the database is an excellent use of them. However, the one case of beforeEach() to seed common data is where it can go too far and lead to confusing tests in the future.

It always begins with an innocent case of putting one record in the "context" so each test can assume it is there, but before long the following will happen:

  1. A new user is seeded with specific circumstances for a single new test
  2. That user meets the needs of another set of tests, so it is reused across 3 new tests
  3. A requirement changes and the first test needs to be adjusted, so the seeded data is changed
  4. Now the other 3 tests which used that seeded user are failing, having never intended to be changed

Unfortunately, now the 3 failing tests need to be changed but they did not make the data they needed clear within the test - instead relying on shared setup. This can make for a difficult task of reconciling data requirements for each and adding new seed data.

The best way to solve this is to skip the beforeEach approach of pre-seeded data and instead perform that logic in each test, creating any helper functions needed to make that as easy and clear as possible. Tests can then look like this and be much more up-front about what circumstances they are testing:

test('user database has Chris', () => {
  addUserToDatabase(db, 'Chris');
  expect(db.user.hasName('Chris')).toBeTruthy();
});

test('user database does not have Thomas', () => {
  addUserToDatabase(db, 'arbitrary');
  expect(db.user.hasName('Thomas')).not.toBeTruthy();
});
Chris Bongers

Lovely point Jake!
This was a bit of a hard one to "not overcomplicate", but still make clear.

In general I would never have this run in a specific test, but rather in a setup script that would not be modified and would handle all the needed setup.

In cases of a specific beforeEach, we would only handle mocked APIs that are super specific to a function.

peerreynders

Beware of the Share - the counterpart to Don't Repeat Yourself.

In essence you strongly prefer Delegated Setup over Implicit Setup.

Implicit Setup does mention under when to use it:

"It can however lead to Obscure Tests caused by a Mystery Guest by making the test fixture used by each test less obvious. It can also lead to Fragile Fixture if all the tests in the class do not really need identical test fixtures."

What you are describing is Data Sensitivity.

A new user is seeded with specific circumstances for a single new test

It could be argued that the requirement for a "new user" should prompt the creation of an entirely new suite, i.e. Testcase Class per Fixture or in this context "Suite per Fixture".

Your concern is well founded but I think this is another case where everybody's obvious and favourite "demonstration code" can actually be a smell in the real world. One could argue that microtests shouldn't be using a database at all and where is the repository anyway? Unfortunately it's an example that everybody seems to understand even when its real world utility can be somewhat questionable.

On the whole it doesn't invalidate the usefulness of the suite and test setup (beforeAll, beforeEach) and teardown (afterAll, afterEach) actions; it just can be difficult to find clear and short code examples that demonstrate their usefulness.

Alex Lohr

I really like uvu's pattern of providing a context in the before/after handling and the tests, which allows you to write stuff like:

const test = suite('my test suite');

test.before.each((context) => {
  context.unsubscribe = stream.subscribe((next) => context.data = next);
});
test.after.each((context) => {
  context.unsubscribe();
  context.data = undefined;
});

test('handle data from stream', ({ data }) => {
  ...
});

test.run();

Unfortunately, Jest does not provide such a nice pattern, but you can easily use an object inside a describe block to hold your context for the same effect.

peerreynders

From what I can tell uvu's context is simply a Suite Fixture, which started out as an immutable shared fixture but is now just a shared fixture—i.e. a fixture belonging to the suite that is shared by all of the suite's tests.

However there are lots of fixture styles independent of what any one tool may directly support:

Fresh Fixture Setup: Each test constructs its own brand-new test fixture for its own private use. When teardown (afterEach) is required it's a Persistent Fresh Fixture.

  • In-line Setup: Each test creates its own fresh fixture constructing everything in-line within the test.
  • Delegated Setup: Each test creates its own fresh fixture using suite scoped creation functions.
    • Creation Method: Set up the test fixture by calling functions that hide the mechanics of building ready-to-use "values" behind intent-revealing names.
  • Implicit Setup: Build the test fixture common to several tests in the setUp method (beforeEach in Jest).

Shared Fixture Construction: Reuse the same instance of the test fixture across many tests.

  • Prebuilt Fixture: The shared fixture is built separately from the tests (usually in beforeAll).
  • Lazy Setup: Use lazy initialization of the fixture to create it in the first test that needs it. (Only needed when beforeAll isn't supported)
  • Suite Fixture Setup: Build/destroy the shared fixture in special functions (beforeAll, afterAll) called by the Test Automation Framework before/after the first/last test is called.
  • Setup Decorator: Wrap the test suite with a Decorator that sets up the shared test fixture before running the tests and tears it down after all the tests are done.
  • Chained Tests: Let other tests in a test suite set up the test fixture.


So in Jest/Vitest the Suite Fixture looks something like this:


describe('User suite', () => {
  let context = initialContext();

  beforeAll(async () => {
    context.client = await db.connect();
  });

  beforeEach(async () => {
    context.user = await context.client.insert(
      'insert into users ... returning *'
    );
  });

  afterEach(async () => {
    await context.client.destroy(
      `delete from users where id = ${context.user.id}`
    );
    context.user = undefined;
  });

  afterAll(async () => {
    context.client = await context.client.end();
  });

  test('valid user present', () => {
    expect(context.user).toBeDefined();
    expect(context.user).toHaveProperty('id');
    expect(context.user.id).toBeGreaterThan(0);
  });

  // more tests ...

});

One could argue that context.client is a shared fixture while context.user is a fresh fixture. I think that uvu's api style made context a necessity to simplify shared fixtures. Jasmine always gets criticised for its polluting globals; Jest preserved that style and all of the suite's tests are injected with a single function - a closure that can hold context across the suite's tests.

uvu adopted a less verbose, more modern api style that avoids polluting globals. suite() creates a Suite callable object that contains all the necessary methods to incrementally configure and add tests to the suite prior to running. So in the absence of that overarching closure of a Jasmine/Jest suite, the context parameter threads a shared fixture throughout a suite's tests. In my judgement uvu's approach is cleaner though I wouldn't be surprised if some people prefer the Jasmine/Jest approach.