So, it's been a year since the last article, and a lot has changed. This time we're going to talk about integrating Mock Service Worker (MSW) into our tests. I'll also describe what I tried to implement in my quest for system resilience - what worked out and what didn't.
So, did my tests actually help me?
I can't say the time investment paid off in spades, but one thing's for sure - it definitely wasn't a waste of time.
Here are the main areas where the tests really proved their worth:
- When contracts were lost or changed;
- Fixing the fallout from merge conflicts (given the quirks of our processes, this is the most common scenario);
- Refactoring (it's hard to be objective here since our project's test coverage isn't huge, but before any refactoring, I try to at least cover the code with local tests).
Then again, all those fancy things like Cursor with their powerful autocomplete freed up some time, so why not spend a bit of it on tests?
The Fight for Reliability, or Reality Strikes Back
Let me start with what didn't work out.
The first thing I tried was implementing E2E tests with Playwright. You know, testing business logic in the browser by simulating real user actions.
In our existing project, this turned out to be really tough. The main problem was setting up the initial test data. In my case, that meant the database. It needed to be as small as possible, but still have all the necessary data for testing.
In theory, it sounds simple: take a database, tweak the data, create a Docker image, and boom - you're golden. Well, I got stuck at the very first step of preparing that database. It requires a firm decision and a coordinated effort, meaning help from the backend team and DevOps (who are always busy). In the end, on my project, we shelved the idea for the time being.
I also tried replacing actual backend interaction by mocking API requests directly within Playwright, but that felt like a dead end. Maintaining yet another set of mocks (on top of the MSW we already had) combined with the slow browser startup times just didn't seem rational, unless for some very specific tasks.
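For reference, the Playwright-level mocking I mean boils down to intercepting requests with page.route. A minimal sketch (the endpoint and payload here are made up):

import { expect, test } from '@playwright/test';

test('renders users from a mocked API', async ({ page }) => {
  // Fulfill the request inside the browser context instead of hitting the real backend
  await page.route('**/api/users', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ items: [{ firstName: 'John', lastName: 'Smith' }] }),
    }),
  );

  await page.goto('/users');
  await expect(page.getByText('Smith')).toBeVisible();
});

Every such route is one more mock to keep in sync with the backend, which is exactly the maintenance cost I mentioned.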
About Unit Tests and MSW
All in all, I decided to focus on unit tests (which, in the classic sense, are more like integration tests in our context). They're fast, isolated, simple, and reliable.
To mock network interactions, I set up MSW (Mock Service Worker). This later allowed us to practice contract-first programming and parallel development.
So, first you install MSW with npm install msw --save-dev (the official guide is your best friend here).
Then I moved the Vitest configuration for it into vitest.workspace.js (note: workspace files are deprecated in newer Vitest versions in favor of the projects option - a sketch of the newer equivalent follows the config below). This isn't mandatory, but it's convenient if you need to separate node and browser environments.
import { defineWorkspace } from 'vitest/config';

export default defineWorkspace([
  'packages/*',
  {
    extends: './vite.config.js',
    test: {
      environment: 'jsdom',
      name: 'unit',
      include: ['src/**/*.spec.{ts,js}'],
      deps: {
        inline: ['element-plus'],
      },
      setupFiles: ['./src/mocks/setup.ts'], // path to the MSW setup file
    },
  },
]);
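If you're on a Vitest version where workspace files are already deprecated, the same split can be expressed with the projects option directly in vitest.config.ts. A sketch of the equivalent config (double-check the option names against your Vitest version):

import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    projects: [
      'packages/*',
      {
        extends: './vite.config.js',
        test: {
          environment: 'jsdom',
          name: 'unit',
          include: ['src/**/*.spec.{ts,js}'],
          // deps.inline moved under server.deps in newer Vitest versions
          server: {
            deps: {
              inline: ['element-plus'],
            },
          },
          setupFiles: ['./src/mocks/setup.ts'],
        },
      },
    ],
  },
});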
Since MSW is an independent service, I put everything related to it in a src/mocks folder, so it's easy to remove cleanly if needed. The setup file referenced above looks like this:
import { afterAll, afterEach, beforeAll } from 'vitest';
import { server } from './server.ts';

beforeAll(() => server.listen({ onUnhandledRequest: 'warn' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
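The server.ts it imports is just the standard MSW node setup that collects all the handlers. A minimal sketch (assuming the handlers file shown next lives in src/mocks/user/):

// src/mocks/server.ts
import { setupServer } from 'msw/node';
import { handlers } from './user/handlers.ts';

export const server = setupServer(...handlers);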
user/handlers.ts
import { HttpResponse, http } from 'msw';
import { GET_USERS } from '@/api/constants/APIEndPoints.js';
import { USER_FAKE_RESPONSE } from './fixtures.ts';

export const handlers = [
  http.get('*' + GET_USERS, () => {
    return HttpResponse.json(USER_FAKE_RESPONSE);
  }),
];
Now, whenever a request goes to the URL defined in the GET_USERS constant, it will receive the value stored in USER_FAKE_RESPONSE (the leading '*' lets the handler match the path no matter which origin the request targets).
Interestingly, MSW, especially with its plugins, can generate handlers from an OpenAPI spec (openapi.json), covering all your API requests at once. It can also use faker.js to fill responses with generated fake values.
I'm not a big fan of that approach myself (it can complicate parallel work), so I prefer to create response fixtures and handlers manually, and then fill them in - even using AI helpers sometimes - which results in more human-readable responses.
export const USER_FAKE_RESPONSE = {
  items: [
    { firstName: 'John', lastName: 'Smith' },
    { firstName: 'Willy', lastName: 'Willson' },
  ],
};
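For contrast, here's roughly what the generated-data approach mentioned above could look like, with faker producing the payload instead of a hand-written fixture (a sketch assuming @faker-js/faker is installed):

import { HttpResponse, http } from 'msw';
import { faker } from '@faker-js/faker';
import { GET_USERS } from '@/api/constants/APIEndPoints.js';

export const generatedHandlers = [
  http.get('*' + GET_USERS, () => {
    // Schema-correct but random data on every run
    return HttpResponse.json({
      items: Array.from({ length: 2 }, () => ({
        firstName: faker.person.firstName(),
        lastName: faker.person.lastName(),
      })),
    });
  }),
];

Random values like these are harder to assert against and to share between people working in parallel, which is why I stick with static fixtures.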
Using it in Tests
For a clear example, let's imagine we have a component with a button that fetches users and a block that displays the response. (A detailed test was in the previous article; everything here is schematic.) To make the tests below concrete, here's a hypothetical sketch of such a component - the real one can of course look completely different:
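<!-- OurGetUsersComponent.vue: hypothetical, just enough for the tests below -->
<script setup lang="ts">
import { ref } from 'vue';
// hypothetical import path for the request function spied on in the tests
import { getUsersRequest } from '@/api/users';

const users = ref<Array<{ firstName: string; lastName: string }>>([]);

const fetchUsers = async () => {
  const response = await getUsersRequest();
  users.value = response.items;
};
</script>

<template>
  <button class="button" @click="fetchUsers">Search</button>
  <div v-for="user in users" :key="user.lastName">
    {{ user.firstName }} {{ user.lastName }}
  </div>
</template>

A traditional test for it might look something like this: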
import { shallowMount, flushPromises } from '@vue/test-utils';
import { expect, test, vi } from 'vitest';
import * as USER_API from 'some api folder';
import OurGetUsersComponent from './OurGetUsersComponent.vue'; // hypothetical path

let wrapper;

const createComponent = (params = {}) => {
  wrapper = shallowMount(OurGetUsersComponent, {
    props: {
      ...params.props,
    },
    global: {
      renderStubDefaultSlot: true,
      stubs: {
        ...params.stubs,
      },
    },
  });
};
test('Handling user retrieval when the Search button is clicked', async () => {
  const spyGetUsers = vi.spyOn(USER_API, 'getUsersRequest').mockResolvedValue({
    items: [
      { firstName: 'John', lastName: 'Smith' },
      { firstName: 'Willy', lastName: 'Willson' },
    ],
  });
  createComponent();
  const buttonNode = wrapper.find('.button'); // not a very good selector, but we only have one button
  await buttonNode.trigger('click');
  await flushPromises();
  expect(spyGetUsers).toHaveBeenCalled(); // here you can also check the call parameters
  expect(wrapper.text()).toContain('Smith');
  expect(wrapper.text()).toContain('Willson');
});
That approach works, but what if we need to test the behavior when the server returns an error? For example, when a 500 error triggers a toast notification saying, "The server is temporarily unavailable, please try again later."
This is exactly where MSW comes to the rescue.
import { shallowMount, flushPromises } from '@vue/test-utils';
import { expect, test, vi } from 'vitest';
import { http, HttpResponse } from 'msw';
import { server } from '@/mocks/server';
import { GET_USERS } from '@/api/constants/APIEndPoints.js';
import { USER_FAKE_RESPONSE } from '...fixtures';
import * as USER_API from 'some api folder';
import * as MESSAGE_MODULE from 'utils';
import OurGetUsersComponent from './OurGetUsersComponent.vue'; // hypothetical path

let wrapper;

const createComponent = (params = {}) => {
  wrapper = shallowMount(OurGetUsersComponent, {
    props: {
      ...params.props,
    },
    global: {
      renderStubDefaultSlot: true,
      stubs: {
        ...params.stubs,
      },
    },
  });
};
test('Handling user retrieval when the Search button is clicked', async () => {
  const spyGetUsers = vi.spyOn(USER_API, 'getUsersRequest'); // the implementation already lives in the MSW handler and doesn't need to be duplicated here
  createComponent();
  // it's better to find the button the same way the user does: by its text
  const buttonNode = wrapper.findAll('.button').filter((item) => item.text() === 'Search')[0];
  await buttonNode.trigger('click');
  await flushPromises();
  expect(spyGetUsers).toHaveBeenCalled(); // this step might be redundant, since the result is what matters to the user
  expect(wrapper.text()).toContain(USER_FAKE_RESPONSE.items[0].lastName);
  expect(wrapper.text()).toContain(USER_FAKE_RESPONSE.items[1].lastName);
});
test('Handling server errors when retrieving users', async () => {
  const spyMessage = vi.spyOn(MESSAGE_MODULE, 'showErrorMessage');
  // Override the default handler for this test only:
  // afterEach(() => server.resetHandlers()) in setup.ts will restore it afterwards
  server.use(
    http.get('*' + GET_USERS, () => {
      return new HttpResponse(null, { status: 500 });
    }),
  );
  createComponent();
  const buttonNode = wrapper.findAll('.button').filter((item) => item.text() === 'Search')[0];
  await buttonNode.trigger('click');
  await flushPromises(); // wait for the failed request to settle
  expect(spyMessage).toHaveBeenCalledWith({ message: 'The server is temporarily unavailable, please try again later' });
});
This way, you can make your unit tests a little more honest and your team's capabilities a little broader.
Author: Dmitry Simonov
