Automated testing during software development involves many different techniques; one that shouldn't be used is mocking. Mocks are a distraction at best and provide false confidence at worst.
What is Mocking?
It is common for software developers to use mocks to simulate the behaviour of code that makes network calls to other services or accesses a database. This enables unit tests that are both:
- Fast because they don’t need to rely on additional services.
- Stable because they avoid availability issues.
This means that mocks are generally used for code with side effects, which is code that relies on or modifies something outside its parameters. This lets us classify functions as:
- Pure: A function without any side effects.
- Impure: A function that contains one or more side effects.
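As a rough illustration of the two definitions above (the function names and the tax example are mine, not the article's), the same calculation can be written both ways:

```python
def add_tax(price: float, tax_rate: float) -> float:
    # Pure: the result depends only on the parameters, and nothing outside is touched.
    return price * (1 + tax_rate)


def add_tax_from_db(price: float, db_connection) -> float:
    # Impure: relies on something outside its parameters (a database query).
    tax_rate = db_connection.execute("SELECT rate FROM tax LIMIT 1").fetchone()[0]
    return price * (1 + tax_rate)
```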
The Problems with Mocks
Mocks aren’t equivalent to the integrations they replace. If you mock a database client then you haven’t tested the integration with the real client. This means that your code may work with the mock but you will still need to do integration testing to make sure it works without mocks.
Feature Parity Is Not Feasible. If you make a quick mock, it won't return useful data. The more time you spend improving the mock, the more useful the data will be. However, it can never be a true representation.
Mocks that aren’t used are a waste of time and effort. If you mock out a database client and don’t use it, then there is no point mocking it. This can occur if some code requires a valid client to initialise but never actually calls it.
How Do We Replace Mocks?
Mocks are used to provide speed and stability but we can manage this in other ways.
Refactor your code! We can remove the need for mocks by separating the pure functions from the impure ones. Pure functions can be unit tested without mocks, and impure functions should only be integration tested.
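A minimal sketch of that separation (the invoice names are illustrative assumptions, not the article's own example): the calculation lives in a pure function that can be unit tested with plain values, while the thin impure wrapper that touches the database is left for integration tests.

```python
def calculate_invoice_total(line_items: list[tuple[float, int]], tax_rate: float) -> float:
    # Pure core: unit test this with plain values, no mocks needed.
    subtotal = sum(price * quantity for price, quantity in line_items)
    return subtotal * (1 + tax_rate)


def save_invoice(order_id: int, line_items, tax_rate: float, db) -> None:
    # Impure shell: covered only by integration tests against a real database.
    total = calculate_invoice_total(line_items, tax_rate)
    db.execute("INSERT INTO invoices (order_id, total) VALUES (?, ?)", (order_id, total))


# Unit test of the pure core, no mock in sight:
assert calculate_invoice_total([(10.0, 2), (5.0, 1)], 0.5) == 37.5
```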
Improve Your Automation! By automating software packaging, deployment, and testing, we can get to integration testing faster instead of relying on unit tests. This also enables continuous delivery and reduces the impact of “it works on my machine”, both of which are beneficial in modern software development.
Summary
Mocking is a short term solution and a long term problem. If you want to deliver software faster then you should spend less time on mocks and more time on refactoring and automation.
If you would like to see more content like this, follow me on Medium.
Let me know your thoughts on Twitter @BenTorvo or by email: ben@torvo.com.au
Top comments (11)
The reason behind mocks is not to enable tests at an early stage but to conceptually decouple the dependencies between backend and frontend teams.
They have nothing to do with unit tests (which validate that a function returns the expected output for a given input; good tests do that multiple times with different inputs to cover every possible situation).
They also have nothing to do with end-to-end tests (which, by definition, need to be done AFTER the integration).
1. Let's say a backend team estimates its tasks at a week and the frontend team estimates theirs at another week. Delivery would then take two weeks.
2. Instead, both teams agree on a model, schema, structure, or response; they mock it as the contract and use it to carry on developing until both parts are finished (and unit tested... hopefully).
This work of defining needs, which leads to a contract, would be needed anyway, so no time is wasted here. The mock can also be auto-generated from the contract, so again, no time wasted (see the sketch after this breakdown).
We've only used a week at this point; then you just need, let's say, one more day to integrate (usually less).
3. The integration process begins. If the backend and frontend implementations are correct from the contract's point of view, it will work on the first try; otherwise one team or the other will need to make further changes to adapt, which is why integrations are estimated according to the complexity of the model.
Note that we reduced the delivery time of that [feature or whatever] by 4 labour days by using mocks.
4. Once it's finished, the QA team will apply integration tests, also called end-to-end tests (which are by no means replaceable by unit tests) and which are not usually a developer responsibility.
The main reasons for using mocks are the ones described above. Nothing more, nothing less. It's convenience.
On the other hand, I don't really know how you plan to call my "pure functions" or "methods" when they are private to their context.
Do you plan to export every single function so you can test them individually? That's ridiculous!
Other reasons for using mocks:
Mocks can either be deleted once the integration is done or stored within a tool to track differences in them for future updates; they are a good place to search quickly for when a change was made, so you can check the email to see the client request 😂
They are also good when doing PoCs, because usually the user will require changes to them before they qualify as "valid" to start developing on top of. If you build the real DB model/schema, migrations, validations, CRUD functions, etc., then when the customer requires a change you'll need to edit all of those steps instead of a single mock, which is clearly a loss of time (and, more often than we would like to admit, garbage will be left in the code from those changes).
The tests performed on functions that use mocks will also work after the integration. If a test fails after the integration step, chances are the issue is either a contract break by one of the teams or in the data (or lack of it), and that is usually the first thing to look at after double-checking the contract implementation.
TLDR;
This is exactly the use case I like, thanks for saving me from having to write a response of my own! The other modification to this use case is to allow the stakeholders access to the frontend early, for feedback and refinement. The mock you use in this case becomes the basis for the data contract that the backend team uses later to complete the API. This allows the finished product to be as closely aligned as possible to the stakeholders’ needs quickly, and with effort limited only to the frontend.
2nd post from you in my feed, and I wholeheartedly agree with this one as well. I still think having basic stubs/mocks for unit tests is good if you practice Test Driven Development. The point is design, not just "does the code work". The key is just to use dependency injection in OOP, or "passing parameters to functions" in FP. In your example above, that'd be database_write being a function passed in: a stub in unit tests, and a real function in integration.
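A minimal sketch of that dependency-injection pattern (the surrounding record_signup function and the stub are illustrative assumptions, not the commenter's original snippets):

```python
def record_signup(email: str, database_write) -> bool:
    # The write dependency is passed in as a function, so the caller decides
    # whether it is a stub (unit test) or a real database call (integration).
    if "@" not in email:
        return False
    database_write("signups", {"email": email})
    return True


# Stub used in unit tests: it just records what would have been written.
written = []

def database_write_stub(table, row):
    written.append((table, row))


assert record_signup("a@example.com", database_write_stub) is True
assert record_signup("not-an-email", database_write_stub) is False
assert written == [("signups", {"email": "a@example.com"})]
```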
I hope you keep writing articles like these.
Agree we should not spend too much time writing mocks for testing. But the point of a mock is to be able to unit test something that cannot (easily) be rewritten to exclude an external dependency. You can say we should only unit test pure functions, but in the real world that's not always an option. For example, in React we work a lot with context providers; these are external dependencies to a lot of the components that you definitely want to unit test individually. In this case we have to mock the context providers, because otherwise the test will throw an error. What we don't need to do is provide a realistic mock, because we're not testing the component's integration with the rest of the app. We just want to test the different internal states of the component. Mocking is not harmful in that case, and I'm sure there are plenty of other valid reasons to use mocks.
I would agree that web frontend work is a reasonable use of mocks. I do believe this is mostly due to limitations of react and other frameworks though, and not because mocking is generally a good idea.
I need to start by stating that there is a difference between unit test mocking and integration test mocking.
Bringing logic into pure functions for unit testing is definitely recommended over mocking. Trying to reduce the dependency graph for any task is great in this regard. Unit testing can't connect to external systems or services, as that is integration.
Then you have integration testing, where different systems can be tested together. You cannot avoid integrating with the real system.
If you build out a good mock system, it not only provides faster, more reliable test automation, it also makes it possible to run tests that aren't possible when using a live system.
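For instance (a hypothetical sketch, not from the comment), a mock can simulate a failure mode that a live system rarely lets you trigger on demand:

```python
class TimeoutDatabase:
    # Mock that simulates an outage, something a live test database
    # rarely lets you trigger on demand.
    def execute(self, query, params=None):
        raise TimeoutError("simulated database outage")


def load_profile(db, user_id):
    # Code under test: it should degrade gracefully when the database is unavailable.
    try:
        return db.execute("SELECT * FROM profiles WHERE id = ?", (user_id,))
    except TimeoutError:
        return None


assert load_profile(TimeoutDatabase(), 42) is None
```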
Is maintaining mocks more work? Yes. Is it always worth it? I think it usually is.
Here is what speed gets you. You refactor code and can be confident it operates as expected.
I have to be honest that I have no interest in mocking a database; this is likely due to a combination of the complexity of what a database does and its stability as a reliable service.
Holy shit, you're on a mission aren't you? Last 3 posts I've seen from you are quite controversial xD I'm not complaining though
> Mocks aren’t equivalent to the integrations they replace.
True, but that is intentional. You can ensure that they are returning the right data via schema validation based on the contract of the service.
In other words, you can ensure that mocks return the same thing (that is, the same schema, not the same data per se) without the complexity of integrating with the real thing.
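A small sketch of that idea (the schema and responses are made up): the same shape check is applied to both the mock and a sample real response, so the mock is guaranteed to match the contract even though the data differs.

```python
RESPONSE_SCHEMA = {"order_id": int, "status": str, "total": float}


def matches_schema(payload: dict, schema: dict) -> bool:
    # Same fields and same types as the contract; the data itself can differ.
    return payload.keys() == schema.keys() and all(
        isinstance(payload[field], expected) for field, expected in schema.items()
    )


mock_response = {"order_id": 1, "status": "shipped", "total": 9.99}
real_response = {"order_id": 58213, "status": "pending", "total": 149.5}

assert matches_schema(mock_response, RESPONSE_SCHEMA)
assert matches_schema(real_response, RESPONSE_SCHEMA)
```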
> If you mock a database client then you haven’t tested the integration with the real client. This means that your code may work with the mock but you will still need to do integration testing to make sure it works without mocks.
If you don't mock, then you have to set up your testing environment to speak to the real service. This requires extra maintenance, and when there is an issue (which is definitely not uncommon) it can be very confusing to know what needs to be fixed.
Without mocking, you need to add more configuration to your testing environment, increase your knowledge of how the service works under the hood, and learn how to debug it. This leads to change amplification, increased cognitive load, and unknown unknowns, which are the three main causes of software complexity.
Lastly, if the connection with the database isn't working, you can catch that via manual testing in the browser, or by a "smoke test" before releasing to production. In both these cases, you can test the database connection implicitly. Integration tests don't need to test that every single connection works, just that everything is working as expected from the vantage point of the user.
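A smoke test in that spirit can be very small (the endpoint and names here are hypothetical): one request against a real deployment that only passes if the app can reach its actual database, so the connection is tested implicitly before release.

```python
import urllib.request


def database_smoke_test(base_url: str) -> None:
    # One request against a real deployment; it only succeeds if the app
    # can reach its real database, so the connection is tested implicitly.
    with urllib.request.urlopen(f"{base_url}/health/db", timeout=5) as response:
        assert response.status == 200


# database_smoke_test("https://staging.example.com")  # run before releasing
```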
> Feature Parity Is Not Feasible.
You can still have feature parity. Mocks can behave however you desire, and that means they can behave exactly like the real service.
However, the point of mocks is to not work just like the service. They should have the same signature and behavior but more sensible data.
> If you make a quick mock then it won’t return useful data. The more time you spend improving the mock the more useful the data will be. However it can never be a true representation.
A mock isn't supposed to be a true representation. It's meant to be more sensible and easier to work with. However, as mentioned above, you still can ensure that the sensible data is still valid data via schema validation.
> Mocks that aren’t used are a waste of time and effort.
Then delete them.
> If you mock out a database client and don’t use it then there is no point mocking it. This can occur if some code requires valid configuration to initialise but doesn’t use it.
Same as above. Also, you don't have to mock out a database client in its entirety.
Agree on everything. I'd also add that test automation, integration tests, and unit tests are not meant to validate service availability; any cloud service gives us tools such as automated health checks, monitoring, and so on that are responsible for that.
If a DB goes down, you'll notice via an instant notification, and a trigger will probably restart that service before you can even log into the dashboard.
On the other hand there are two ways to work with databases.
Code First vs Database First.
The ORM will handle one or the other depending on the initial setup.
If the project qualifies for it, you'll set up database first and have a data specialist build the best possible data structure; otherwise you'll probably use code first and let the ORM do the rest.
It is a [definition -> schema] process on one side or the other, which is why I'd rather use some seeders that match the possible data (according to the schema) and go ahead with that.
The ORM will also validate that the input data matches the schema and will throw an error if it doesn't, which you can double-check through backend unit testing.
Those seeders will always be used in development anyway. I expect people not to copy or export data from real clients stored in the secured production database into other environments, for obvious reasons, so one way or another, yes, you'll need mock data.
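A minimal seeder sketch (the schema and rows are invented for illustration), using an in-memory SQLite database so it is cheap to run in development and in tests:

```python
import sqlite3

SEED_CUSTOMERS = [
    ("Ada Lovelace", "ada@example.com"),
    ("Grace Hopper", "grace@example.com"),
]


def seed(connection: sqlite3.Connection) -> None:
    # Insert data that matches the schema instead of copying rows from production.
    connection.execute(
        "CREATE TABLE IF NOT EXISTS customers ("
        "id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT NOT NULL)"
    )
    connection.executemany(
        "INSERT INTO customers (name, email) VALUES (?, ?)", SEED_CUSTOMERS
    )
    connection.commit()


conn = sqlite3.connect(":memory:")
seed(conn)
assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 2
```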
Can't count the number of times I've heard this same opinion (and I really mean opinion) expressed in different ways. I spent my share of time understanding how mocks sometimes are not the way to go, but having to run a live database for my tests costs money (unless you can use something like SQLite), and running integration tests takes way more time than unit testing. But in general, if you notice that 50% of your test code is just mocking dependencies, you're gonna have a bad time.
Other people also pointed out that mocks can also mean mocking an integration in order to work in parallel with other developers. So not using mocks at all is not a good idea.
When should I not use mocks?