I'm trying to learn the TDD approach. I have no trouble with simple code where I don't need to mock or stub any external methods or dependencies, but when it comes to writing tests for code that depends on a database, I get a little confused.
It's clear to me that a unit test should test a small piece of code that does not depend on other services and so on.
Let's assume that I want to unit test a create-user feature.
I know that in TDD I should start by writing a failing test - but I don't know what it should look like. ;)
So, here is an example, a simplified service:
class UserService {
  // TypeORM repository injected through the constructor
  constructor(private readonly userRepository: Repository<User>) {}

  async createAccount(user: Partial<User>) {
    // Saves the user to the database and returns the created entity
    return await this.userRepository.save(user);
  }
}
By now I'm able to spy on a method and check if it is called properly, with the correct params, etc. (I'm using Jest):
it('Should create a user.', async () => {
  const USER = {
    email: 'test@user.com',
    username: 'user',
    password: 'user',
  };
  const SPY = jest.fn(() => USER);

  jest
    .spyOn(userService, 'createAccount')
    .mockImplementation(() => SPY(USER));

  await userService.createAccount(USER);

  expect(SPY).toHaveBeenCalledTimes(1);
  expect(SPY).toHaveBeenCalledWith(USER);
});
And here comes my main question: what should the failing test look like? I mean, if I stub the createAccount method like I did above, it will always pass, even if the method is empty - it only requires the method declaration to exist. Should I instead mock the repository methods so they return what TypeORM is supposed to return? e.g.:
const REPOSITORY_MOCK = jest.fn(() => ({
  save: jest.fn().mockImplementation((user) => user),
}));
I think that I missed something. :/
Or am I trying to overcomplicate it, and I should only test whether the method is called properly, leaving all the database-related stuff to integration tests where I can work against a real database?
I'm looking forward to any kind of help. Thank you in advance.
Cheers, Kuba.
Top comments (11)
I run integration tests against a separate (empty) database. Maintaining fixtures is enough work when you don't have to worry about whether they actually reflect your current data model. Node doesn't really have a formal separation between unit and integration tests like Maven-style Java projects do so it all winds up being "just tests" anyway.
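For illustration, the setup side of that can be as small as this (the connection values are placeholders, and it assumes the TypeORM 0.3+ DataSource API):

import { DataSource } from 'typeorm';

// Placeholder connection details for a dedicated, empty test database.
const testDataSource = new DataSource({
  type: 'postgres',
  host: 'localhost',
  port: 5433,
  username: 'test',
  password: 'test',
  database: 'app_test',
  entities: [User],
  synchronize: true, // keep the test schema in step with the current entities
});

beforeAll(() => testDataSource.initialize());
afterAll(() => testDataSource.destroy());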
I'm pretty sure there is a big distinction between unit tests and integration tests, in any language, framework or paradigm.
With every JS test runner I've used, the way you can tell which is which is that some fail when your external dependencies aren't available.
Ah sorry, my bad, now I think I understand what you said - you meant what that framework calls them.
Basically yes, they are all automatic tests.
I replied for an abstract/conceptual level, how you write them, what they are for and when they are run (and how long they should take).
And to answer the post's question: unit tests are not supposed to hit any dependency outside of the function under test, so that is OK. But you may want to check the other two types of tests as well.
Hi,
I have the same issue.
I'm new to writing test cases for my project - TypeScript, Node.js, MySQL.
Integration Testing:
From my own experience, it is easier to set up a separate database for testing and run several scripts to prepare the database before the run.
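For example, with Jest you can point globalSetup at a script that runs those preparation steps once before the whole suite (the npm script names below are placeholders for whatever prepares your schema and fixtures):

// jest.config.js -> globalSetup: '<rootDir>/test/global-setup.ts'
import { execSync } from 'child_process';

export default async function globalSetup(): Promise<void> {
  execSync('npm run db:test:migrate', { stdio: 'inherit' }); // create / refresh the schema
  execSync('npm run db:test:seed', { stdio: 'inherit' });    // load baseline fixtures
}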
If you really need to run without a database and you are using MongoDB and Mongoose, I know a library called mockgoose which is pretty good for mocking the db.
I'm fairly new to testing. I joined an existing project where the guy who wrote the code has left the company, and now those of us who maintain it don't have full knowledge of the system. And of course, there are no tests to help us easily understand what's happening in the code. So now, when we change a part of the code for a bugfix or a feature change, we first reverse-engineer tests so we have confidence we don't introduce regressions.
I find that for this existing system it's better to use the real DB for testing, a.k.a. make integration tests. I write down what feature X does, then test each piece of functionality instead of worrying mainly about a particular function.
For example, I have a function that deletes a user, but the user model in the database has cascade deletes set up on different tables. I need to run it against the database to make sure the related rows get deleted as a "side effect" - just reading the tested code won't make it obvious what is happening at the database level. For each test, I first create the test records in the database, then call the function under test, then assert the results and clean up the records. With this setup, if the assertion fails the data isn't cleaned up, because I didn't want to create nested describe scopes with before/after for each separate test. But when looking at the test I can easily see what is going on without scrolling.
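A stripped-down sketch of that pattern for the cascade-delete case (deleteUser, the profile repository and the column names are placeholders, not real project code):

it('Deletes the user and its dependent rows via the database cascade.', async () => {
  // Arrange: write the test records straight into the real (test) database.
  const user = await userRepository.save({ email: 'cascade@test.com', username: 'c', password: 'c' });
  await profileRepository.save({ userId: user.id, bio: 'should be cascade-deleted' });

  // Act: run the function under test against the database.
  await userService.deleteUser(user.id);

  // Assert: the dependent row is gone as a side effect of the cascade.
  expect(await userRepository.findOneBy({ id: user.id })).toBeNull();
  expect(await profileRepository.findOneBy({ userId: user.id })).toBeNull();

  // Clean-up happens here on success; on failure the rows are left for inspection.
});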
But mocking has its place for working with 3rd-party libraries where you can't call them.
I've seen that the new trend is to do integration tests with real databases that are spawned for each test run, using Docker integration.
Power of the containers to the rescue!
How can one do that?
A good place to start is learning Docker
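For example, the testcontainers package can spin up a disposable Postgres for the test run - roughly like this, though the exact method names differ a bit between versions:

import { GenericContainer, StartedTestContainer } from 'testcontainers';

let container: StartedTestContainer;

beforeAll(async () => {
  // Start a throwaway Postgres container just for this test run.
  container = await new GenericContainer('postgres:15')
    .withEnvironment({ POSTGRES_PASSWORD: 'test' }) // older versions: .withEnv('POSTGRES_PASSWORD', 'test')
    .withExposedPorts(5432)
    .start();

  const host = container.getHost();
  const port = container.getMappedPort(5432);
  // ...pass host/port to the TypeORM DataSource used by the tests.
});

afterAll(async () => {
  await container.stop();
});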
Time has gone by, and I'm now familiar with the tools. Thanks.