
They sound similar, but choosing the wrong one can slow your entire CI pipeline.
It's Tuesday morning. Your CI pipeline has been flaky for three days. The test suite fails roughly one run in four, always on the same integration test, always with a timeout error pointing at your order-service database client. You spend two hours bisecting commits, finding nothing. A colleague spots it: the staging database has a 200ms latency spike every Tuesday when a backup job runs. Your test has a 150ms timeout. The mock you added last sprint? It only wraps the repository layer. The database driver is still making a real network call.
You added a mock. The real dependency is still leaking in. This is the moment most engineers realize they've been reaching for the wrong tool.
## TL;DR
- API mocking replaces a dependency in code. Service virtualization replaces it on the network.
- If the dependency is inside your process boundary, mock it. If it's outside, virtualize it.
## What is API mocking?
API mocking intercepts a function call or HTTP client inside your application's process and returns a predetermined response instead of hitting a real service.
It lives in the same runtime as your test. When you write jest.mock('axios'), you're telling the test runner to swap the real axios module for a fake one before your code ever runs. No network socket is opened. No port is bound. The fake lives entirely inside the test process.
A minimal example in Node.js:
```javascript
// orderService.test.js
jest.mock('../clients/paymentClient');
const paymentClient = require('../clients/paymentClient');
const { createOrder } = require('../orderService'); // the unit under test

test('creates order when payment succeeds', async () => {
  paymentClient.charge.mockResolvedValue({ status: 'ok', transactionId: 'txn_123' });

  const order = await createOrder({ userId: 'u1', amount: 49.99 });

  expect(order.status).toBe('confirmed');
  expect(paymentClient.charge).toHaveBeenCalledWith({ userId: 'u1', amount: 49.99 });
});
```
The equivalent in Python with unittest.mock:
```python
from unittest.mock import patch

from app.services.orders import create_order  # the unit under test

@patch('app.clients.payment_client.charge')
def test_creates_order_when_payment_succeeds(mock_charge):
    mock_charge.return_value = {'status': 'ok', 'transaction_id': 'txn_123'}

    order = create_order(user_id='u1', amount=49.99)

    assert order['status'] == 'confirmed'
    mock_charge.assert_called_once_with(user_id='u1', amount=49.99)
```
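The same machinery makes failure paths trivial to simulate. A self-contained sketch using `side_effect`; the client object, `create_order_with_retry`, and the retry policy are all hypothetical, not part of the examples above:

```python
from unittest.mock import MagicMock

# Hypothetical collaborator: a payment client whose charge() we control.
payment_client = MagicMock()

# Simulate a gateway timeout on the first call, success on the second --
# the kind of retry path that is hard to trigger against a real service.
payment_client.charge.side_effect = [
    TimeoutError("gateway timed out"),
    {"status": "ok", "transaction_id": "txn_123"},
]

def create_order_with_retry(client, user_id, amount):
    """Toy order flow (assumed): retry the charge once on timeout."""
    for _attempt in range(2):
        try:
            result = client.charge(user_id=user_id, amount=amount)
            return {"status": "confirmed", "transaction_id": result["transaction_id"]}
        except TimeoutError:
            continue
    return {"status": "failed"}

order = create_order_with_retry(payment_client, "u1", 49.99)
print(order["status"])                   # confirmed
print(payment_client.charge.call_count)  # 2
```

Triggering a real gateway timeout on demand would require network chaos tooling; here it is two lines of test setup.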
What mocking is great at:
- Unit tests that need to run in milliseconds
- Isolating a single function or class from its collaborators
- Simulating edge cases that are hard to trigger on a real service (timeouts, 500s, malformed payloads)
- Zero infrastructure: no Docker, no ports, no network config

Where mocking breaks down: fixtures drift. The real payment API ships a new field next week. Your mock still returns the old shape. Tests stay green. The production bug lands on Friday at 6pm. The more your mocks diverge from reality, the more confident your CI is about a lie.
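A minimal sketch of how drift hides inside a green suite; both response shapes below are invented for illustration:

```python
# Hand-written fixture from last sprint (shape invented for illustration).
STALE_FIXTURE = {"status": "ok", "transactionId": "txn_123"}

# What the real payment API returns after a hypothetical v2 release:
# the transaction id moved under a nested "charge" object.
REAL_V2_RESPONSE = {"status": "ok", "charge": {"id": "txn_123", "currency": "USD"}}

def receipt_line(payment_response):
    # Application code that reads the field the mock still provides.
    return f"paid ({payment_response['transactionId']})"

# Against the stale mock, CI stays green:
assert receipt_line(STALE_FIXTURE) == "paid (txn_123)"

# Against the real v2 payload, the same code raises KeyError --
# the Friday-evening bug the green suite never warned about.
try:
    receipt_line(REAL_V2_RESPONSE)
    drifted = False
except KeyError:
    drifted = True
assert drifted
```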
## What is service virtualization?
Service virtualization runs a lightweight server that impersonates a real dependency at the network level: same host, same port, same protocol. Your application can't tell the difference between the real service and the virtual one, because from the network stack's perspective, there is no difference.
It lives outside your application process. You configure the virtual service once, spin it up alongside your app (usually in docker-compose or a CI service block), and your application connects to it exactly as it would connect to production.
A WireMock stub mapping:
```json
{
  "request": {
    "method": "POST",
    "url": "/v1/charges"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": {
      "status": "ok",
      "transactionId": "txn_123"
    }
  }
}
```
docker-compose integration:
```yaml
services:
  app:
    build: .
    environment:
      PAYMENT_SERVICE_URL: http://wiremock:8080
    depends_on:
      - wiremock
  wiremock:
    image: wiremock/wiremock:3.3.1
    volumes:
      - ./stubs:/home/wiremock/mappings
    ports:
      - "8080:8080"
```
Your app talks to `http://wiremock:8080/v1/charges` in tests. It talks to `https://api.payments.io/v1/charges` in production. The application code never changes.
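The same idea can be demonstrated without Docker. The sketch below stands up a toy HTTP server in place of WireMock and points the application at it through `PAYMENT_SERVICE_URL`, the env var from the compose file above; the `charge` helper is an assumed stand-in for your real client, not production code:

```python
import json
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePaymentAPI(BaseHTTPRequestHandler):
    """Toy stand-in for WireMock: a real server on a real port."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)  # consume the request body
        body = json.dumps({"status": "ok", "transactionId": "txn_123"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FakePaymentAPI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The app reads its base URL from the environment, exactly as in compose.
os.environ["PAYMENT_SERVICE_URL"] = f"http://127.0.0.1:{server.server_port}"

def charge(user_id, amount):
    base = os.environ["PAYMENT_SERVICE_URL"]
    req = urllib.request.Request(
        f"{base}/v1/charges",
        data=json.dumps({"userId": user_id, "amount": amount}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

result = charge("u1", 49.99)
print(result["transactionId"])  # txn_123
server.shutdown()
```

Note that `charge` opens a real socket; nothing in the application knows it is talking to a fake.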
What service virtualization is great at:
- Integration and end-to-end tests where multiple services are involved
- Teams where the dependency is owned by another squad: you virtualize their service contract and stop waiting on their staging environment
- Protocol fidelity: gRPC, SOAP, message queues, and binary protocols that mocking libraries struggle with
- Shared test environments where many developers run tests against the same virtual service

Where service virtualization breaks down: setup overhead is real. Someone has to author those stub mappings. In a fast-moving codebase, stubs go stale just like mock fixtures do; they just do it more expensively. A WireMock mapping file that nobody is responsible for maintaining is a slow-motion time bomb.
## Head-to-head: mocking vs service virtualization
| Dimension | API mocking | Service virtualization |
|---|---|---|
| Where it runs | Inside your test process | In a separate network process |
| What it intercepts | Function / module calls | TCP/HTTP connections |
| Setup effort | Low: a few lines in your test file | Medium-high: stub files, Docker config |
| Protocol support | HTTP/REST via HTTP clients | HTTP, gRPC, SOAP, MQ, custom TCP |
| State management | Stateless by default | Can simulate stateful sequences |
| CI speed impact | Fastest: no network I/O | Slightly slower: process startup, but still much faster than a real service |
| Fixture authoring | Manual: you write the fake response | Manual: you write the stub mapping |
| Best for | Unit tests, component isolation | Integration tests, multi-service environments |
| Drift risk | High: easy to forget to update | High: same problem, more infrastructure |
| Example tools | Jest, Mockito, unittest.mock, Sinon | WireMock, Hoverfly, Mountebank, Prism |
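The "stateful sequences" row deserves a concrete illustration. WireMock mappings can join a named scenario, where each matched request moves the stub to a new state; the field names below follow WireMock's scenarios feature, while the scenario and state names are illustrative:

```json
{
  "scenarioName": "payment-lifecycle",
  "requiredScenarioState": "Started",
  "newScenarioState": "charged",
  "request": { "method": "POST", "url": "/v1/charges" },
  "response": { "status": 200, "jsonBody": { "status": "ok", "transactionId": "txn_123" } }
}
```

A second mapping with `"requiredScenarioState": "charged"` could then serve `POST /v1/refunds` only after a charge has occurred, which is behavior an in-process mock has to fake with test-local bookkeeping.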
A simple decision rule:

- Is the dependency inside your process boundary? YES → mock it. NO → virtualize it (or record it; see below).
- Are you testing a single unit in isolation? YES → mock it.
- Are you testing how two or more services interact? YES → virtualize it.
- Is the dependency owned by another team with no reliable staging environment? YES → virtualize it.
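The rule is mechanical enough to encode directly; a toy sketch where the function name and flags are mine, not any standard API:

```python
def choose_test_double(in_process, single_unit, cross_service, unreliable_upstream):
    """Illustrative encoding of the decision rule above."""
    if in_process or single_unit:
        return "mock"
    if cross_service or unreliable_upstream:
        return "virtualize"
    return "mock"  # default to the cheaper tool

print(choose_test_double(in_process=True, single_unit=True,
                         cross_service=False, unreliable_upstream=False))   # mock
print(choose_test_double(in_process=False, single_unit=False,
                         cross_service=True, unreliable_upstream=False))    # virtualize
```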
Common anti-patterns to avoid:
- Over-mocking in integration tests. If you're mocking the HTTP client in an integration test, you're not testing integration; you're testing that your code calls the right URL with the right parameters. Use a virtual service instead.
- Under-mocking in unit tests. Spinning up WireMock to test a single function that calls a payment API is massive overkill. A `jest.mock()` gets you there in three lines.
- Mixing both in the same test. If half your dependencies are mocked at the code level and half are virtualized at the network level, you've created a hybrid environment that's hard to reason about and harder to debug.
## The authoring problem and how Keploy solves it
Here's the uncomfortable truth that the mocking vs service virtualization debate glosses over: both approaches require you to hand-write fake data that goes stale.
Whether you're crafting a `mockResolvedValue({ status: 'ok' })` in Jest or a JSON stub mapping in WireMock, you are making an assumption about what the real service returns. That assumption was valid the day you wrote it. In three months, after the payment API ships v2 with a restructured response body, your assumption is a liability.
You end up maintaining a shadow API. It lives in your test directory, nobody owns it, and it silently diverges from reality while your CI pipeline stays confidently green.
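One cheap defense is to diff the shape of a hand-written fixture against a freshly recorded response. A sketch for flat JSON payloads; the field names are invented:

```python
def drift_report(fixture, recorded):
    """Compare a hand-written fixture against a freshly recorded response.

    Returns the keys present in only one of the two shapes -- a cheap
    smoke test for the shadow-API problem (flat JSON only; nested
    payloads would need a recursive walk).
    """
    return {
        "missing_from_fixture": sorted(recorded.keys() - fixture.keys()),
        "stale_in_fixture": sorted(fixture.keys() - recorded.keys()),
    }

fixture = {"status": "ok", "transactionId": "txn_123"}
recorded = {"status": "ok", "charge_id": "txn_123", "currency": "USD"}

print(drift_report(fixture, recorded))
# {'missing_from_fixture': ['charge_id', 'currency'], 'stale_in_fixture': ['transactionId']}
```

This catches drift only when someone remembers to re-record the real response, which is exactly the step record/replay tooling automates.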
This is the problem Keploy is built to solve: not by picking a side in the mocking vs virtualization debate, but by eliminating the authoring step entirely.
How Keploy's record/replay works:
Instead of writing fixtures by hand, Keploy intercepts real traffic between your application and its dependencies, records the full request/response cycle, and generates test cases and mock files automatically.
Step 1: record real traffic.

```bash
keploy record -c "go run main.go"
```
Make a real API call while Keploy is recording:
```bash
curl -X POST http://localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"userId": "u1", "amount": 49.99}'
```
Step 2: inspect what Keploy captured.
Keploy writes two files for every recorded interaction: a test case and a mock.
`keploy/tests/test-1.yaml`:

```yaml
version: api.keploy.io/v1beta1
kind: Http
name: test-1
spec:
  metadata: {}
  req:
    method: POST
    proto_major: 1
    proto_minor: 1
    url: http://localhost:8080/orders
    header:
      Content-Type: application/json
    body: '{"userId":"u1","amount":49.99}'
    timestamp: 2024-11-12T09:14:32.001Z
  resp:
    status_code: 201
    header:
      Content-Type: application/json
    body: '{"orderId":"ord_789","status":"confirmed","transactionId":"txn_123"}'
    timestamp: 2024-11-12T09:14:32.187Z
```
`keploy/mocks/mock-1.yaml`:

```yaml
version: api.keploy.io/v1beta1
kind: Generic
name: mock-1
spec:
  metadata:
    operation: POST /v1/charges
  req:
    body: '{"userId":"u1","amount":49.99}'
  resp:
    body: '{"status":"ok","transactionId":"txn_123"}'
  created: 2024-11-12T09:14:32.050Z
```
Step 3: replay in CI.

```bash
keploy test -c "go run main.go" --delay 10
```
Keploy replays the recorded HTTP interactions against your app, using the captured mocks instead of hitting real dependencies. Your CI job never needs network access to the payment service. The test data matches production behavior exactly because it came from production behavior.
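In practice the replay step is just another CI command. A hypothetical GitHub Actions fragment; the job name, Keploy installation step, and any privileges the agent needs are assumptions, not taken from the Keploy docs:

```yaml
# Hypothetical CI fragment: replay recorded Keploy tests with no
# network access to the real payment service.
jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # (Keploy installation step omitted)
      - name: Replay recorded Keploy tests
        run: keploy test -c "go run main.go" --delay 10
```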
Where Keploy fits in the picture:
Keploy operates at the network level (like service virtualization) but requires zero authoring (unlike both mocking and service virtualization). It's the answer to "I want network-level fidelity without the stub maintenance overhead."
- Use mocking when you need fast unit tests and you're comfortable owning the fixtures.
- Use service virtualization when you need multi-service integration tests and have a team to maintain stub mappings.
- Use Keploy when you're tired of maintaining either, and you want your test data to stay in sync with reality automatically.
## The verdict
API mocking and service virtualization are not competing philosophies; they're tools for different layers of your test pyramid.
Mock at the unit layer. When you're testing a single function, class, or module in isolation, reach for your language's native mocking library. It's faster to write, faster to run, and simpler to debug. Just accept that you're responsible for keeping those fixtures current.
Virtualize at the integration layer. When you're testing how your service behaves against a real network contract, especially a contract owned by another team, spin up a virtual service. The overhead is worth it because you're testing something real: the actual shape of the HTTP exchange.
Record when you're tired of authoring. If fixture drift is a recurring problem on your team (and on most teams it is), tools like Keploy remove the authoring problem from the equation entirely. Record once, replay forever, re-record when the contract changes.
The CI pipeline that keeps failing on Tuesday mornings doesn't need better mocks. It needs mocks that are actually accurate. That's a data freshness problem, not a coverage problem.
## Try it yourself
If the authoring problem resonates, Keploy takes about five minutes to get running on an existing Go, Node, or Python service.
👉 Keploy quickstart guide: record your first test in under five minutes, no stub files required.
What's your current mocking setup? Are you hand-writing fixtures, using contract tests, or something else entirely? Drop it in the comments; I'm genuinely curious how different teams handle fixture drift.