Angel Lopez Bellmont
Integration Testing Scaffolds: a centralized testing approach for microservice architectures

We all know some version of the testing pyramid: unit tests at the bottom, then integration/contract tests, and finally a thin layer of E2E tests.

But here's the reality in most teams: they nail the unit tests. They implement contract testing between services. They build comprehensive E2E suites with Playwright, Cypress, Selenium...

And still, production breaks in ways none of these tests catch.

There's a gap in the pyramid, and many teams are falling straight through it.

The Blind Spot in Modern Testing

A typical microservice setup already includes:

  • Unit tests → solid and fast
  • Contract tests → schema compatibility guaranteed
  • E2E tests → complete UI workflows covered

So where do failures still happen?

They don't happen in single services. They happen between services.

Things like:

  • Data losing precision as it flows through multiple transformations
  • OAuth tokens failing in certain service combinations
  • Rate-limiters behaving differently under real load patterns
  • Event-driven flows silently dropping messages
  • Transactions that work individually but break when chained
  • Cache invalidation not propagating across service boundaries

Unit tests can't see this. Contract tests can't see this.

And E2E tests? They hit the entire system through the UI, which is too shallow, too slow, and often mocked at the integration points that matter most.

This is where integration API testing should live.

But there's a reason you don't see this in many organizations. It's simply very hard to do, and that has a lot to do with how real companies actually grow and evolve over time.

They grow over many years. They acquire other companies, and each one brings its own tech stack, tools, and ways of working. Systems that started as a single monolith are gradually broken down into microservices, but they don't disappear overnight. Code and decisions made several years ago are still running in production today, often based on assumptions that no longer match the current reality.

In the end, you get a landscape where monoliths and microservices live side by side. You have several programming languages, different databases, message hubs, caches, and many different integration patterns all coexisting. Making good architectural decisions across all of this is genuinely hard.

Integration tests that cross service boundaries require coordination between multiple teams, but this work rarely has a clearly defined owner. With unclear ownership and many competing priorities, cross-cutting testing and infrastructure tends not to emerge naturally, especially because on paper, companies already feel "covered" with unit tests, contract tests, and E2E tests.

Why a Centralized Testing Repository?

The solution proposed in this article is a central-testing-repository: a single repository dedicated to integration tests that verify cross-service behavior after deployment.

This approach has costs. It means maintaining tests separately from service code. It requires coordination between teams. It introduces coupling.

So why choose this?

Because it matches reality. The complexity we just described (monoliths and microservices coexisting, multiple tech stacks, old ERPs, distributed ownership) isn't going to disappear tomorrow. You need system-level confidence now, not after a multi-year migration completes.

That's the scaffold idea: temporary infrastructure that provides immediate value while your architecture evolves. Some companies remove it once they stabilize. Others keep it as a permanent system verification layer. Either works.

There's an organizational benefit too. When something breaks between your legacy ERP and your modern services, who's responsible? With a central-testing-repository, a QA or platform team owns that question. Tests document how systems should interact. When a test fails, you know exactly which service returned bad data.

The Monorepo Insight

Here's a useful way to think about it: a central-testing-repository is essentially a lightweight monorepo, but only for integration tests.

Companies that have fully adopted monorepos (Google, Meta, etc.) get cross-service testing almost for free. All services live in one place, so writing a test that spans multiple services is just... a test. No coordination between repos needed.

But most companies can't migrate to a full monorepo. The organizational cost is too high. The tooling investment is massive. Existing systems don't fit neatly.

The central-testing-repository gives you the testing benefit of a monorepo without requiring the full migration. A single repository that has visibility into all service APIs, can test cross-service workflows, runs against real deployed services, and owns the "glue" that holds your distributed system together.

Yes, this means two PRs when you change an API: one in the service, one in the tests. But that's not a cost; it's the point. That moment of coordination is exactly when you catch breaking changes, before they reach production, not after. A few extra minutes of coordination beats hours of incident response.

There's a balance here: if you distribute your architecture, you need to centralize your verification. The more you split services apart, the more you need a single place that sees how they work together. The services stay distributed. The integration verification becomes monolithic.

Wait, Doesn't Contract Testing Catch This?

If you're familiar with contract testing, you might be thinking: "Can't Pact or similar tools catch these integration issues?"

Not quite. Contract testing and integration testing solve different problems.

Contract testing verifies structure: Can services talk to each other? Is there a field called price? Is it a number? Does the response schema match what consumers expect?

Integration testing verifies behavior: Do services work correctly together? Is the value 19.99 maintained exactly through 5 transformations? Does the tax calculation match between Cart and Invoice? Did the side effect (inventory reservation, email sent) actually happen?

Contract tests verify that services can communicate. Integration tests verify that when they do, the results are correct.

| What | Contract Testing | Integration Testing |
| --- | --- | --- |
| Schema matches? | ✅ | ❌ |
| Values correct? | ❌ | ✅ |
| Side effects happen? | ❌ | ✅ |
| Cross-service consistency? | ❌ | ✅ |
| Timing/ordering correct? | ❌ | ✅ |
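As a toy illustration of that split (field names and values are made up for the example): a contract-style check validates the shape of both responses and passes, while an integration-style check compares the actual values and catches the drift.

```python
# Two hypothetical responses for the same order from two services.
cart_response = {"price": 19.99}
invoice_response = {"price": 19.98}   # lost a cent somewhere in between

# Contract-style check: the field exists and is a number → passes for both.
for resp in (cart_response, invoice_response):
    assert "price" in resp and isinstance(resp["price"], (int, float))

# Integration-style check: the value survived the hop → fails here.
values_match = cart_response["price"] == invoice_response["price"]
print(values_match)  # False
```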

With that distinction clear, let me show you real failures that slip through.

Real Production Failures That Slip Through

These are the kind of bugs that make it to production because nothing in your test suite is looking for them.

Example 1: Tax Calculation Inconsistency

  • Cart Service uses Decimal types and rounds each line: tax = €3.80 per item
  • Invoice Service uses float, truncating only at display time: tax shows as €3.79
  • For 3 items, each service totals from its own representation: Cart shows €11.40, Invoice shows €11.39

Contract test: Both return {tax: number}. Pass.

Integration test: Verifies all services return the same tax amount. Fails.
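This mismatch is easy to reproduce. A minimal sketch, assuming a hypothetical €19.99 item taxed at 19% (the inputs aren't specified above, so these numbers are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP
import math

PRICE, RATE, QTY = "19.99", "0.19", 3

# Cart Service: Decimal arithmetic, rounds each line to the cent (half-up).
line_tax_cart = (Decimal(PRICE) * Decimal(RATE)).quantize(
    Decimal("0.01"), ROUND_HALF_UP
)                                                     # Decimal("3.80")
cart_total = line_tax_cart * QTY                      # Decimal("11.40")

# Invoice Service: float arithmetic, truncates to cents only when displaying.
raw = float(PRICE) * float(RATE)                      # ~3.7981
line_tax_invoice = math.floor(raw * 100) / 100        # 3.79
invoice_total = math.floor(raw * QTY * 100) / 100     # 11.39

print(line_tax_cart, cart_total)        # 3.80 11.40
print(line_tax_invoice, invoice_total)  # 3.79 11.39
```

A contract test sees a number on both sides and passes. An integration test asserting `cart_total == invoice_total` fails on the missing cent.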

Example 2: Date Format Interpretation

  • Order Service (Java) serializes: "01/02/2024" (January 2nd, US format)
  • Shipping Service (Go) parses: February 1st (EU format)

Contract test: Both handle { "orderDate": "string" }. Pass.

Integration test: Creates order, verifies Shipping Service parsed the correct date. Fails.
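The same string parses cleanly under both conventions, which is exactly why nothing errors out. A minimal sketch of the ambiguity (the service names are from the example above; the parsing itself is standard `datetime`):

```python
from datetime import datetime

wire_value = "01/02/2024"  # Order Service means January 2nd (US: MM/DD/YYYY)

as_order_service_meant = datetime.strptime(wire_value, "%m/%d/%Y")
as_shipping_service_read = datetime.strptime(wire_value, "%d/%m/%Y")

print(as_order_service_meant.date())    # 2024-01-02
print(as_shipping_service_read.date())  # 2024-02-01
```

Both parses succeed, so a contract test ("orderDate is a string") passes. An integration test that creates an order and checks the date the Shipping Service actually stored would catch the one-month drift.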

Example 3: Status String Casing

  • Payment Service (C#) returns: status: "COMPLETED"
  • Notification Service (Node.js) expects: "completed"
  • Comparison fails silently. No notification sent.

Contract test: Both handle { "status": "string" }. Pass.

Integration test: Processes payment, verifies notification triggered. Fails.
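The dangerous part is that the comparison is perfectly legal code; it just never matches. A minimal sketch (the service logic here is a stand-in for the behavior described above):

```python
payment_event = {"status": "COMPLETED"}   # Payment Service (C#) output

def should_notify(event: dict) -> bool:
    # Notification Service compares against lowercase, with no normalization.
    return event["status"] == "completed"

print(should_notify(payment_event))  # False: no notification, no error, no log

# The fix an integration test would force: normalize before comparing.
def should_notify_fixed(event: dict) -> bool:
    return event["status"].casefold() == "completed"

print(should_notify_fixed(payment_event))  # True
```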

Same pattern in all three: contract passes, behavior breaks.

Why E2E Tests Can't Fill This Gap

When teams don't have integration API testing, they try to stuff all backend validation into E2E tests.

That's where problems multiply.

E2E tests can catch these problems, but they are a terrible primary tool for backend integration: they're too slow, too flaky, and failures are too hard to interpret. Integration API tests are a much better fit:

| Characteristic | E2E Tests | Integration API Tests |
| --- | --- | --- |
| Speed | Minutes per test | Seconds per test |
| Stability | Flaky (UI changes, timing) | Stable (API contracts) |
| Failure clarity | "Button didn't appear" | "API returned 400: invalid_tax_rate" |
| Async handling | Poor (timeouts, polling) | Native (direct event verification) |
| Parallelization | Limited (browser resources) | Easy (stateless HTTP calls) |
| Maintenance cost | High (selectors, flows) | Low (API contracts) |

The real cost: Teams lose trust in their E2E suites. Tests get marked as "flaky" and ignored. Real bugs slip through because nobody believes the failures anymore.

What Each Test Type Should Own (code confidence, not data confusion)

E2E Tests: The happy path. Does the basic user journey work?

Integration API Tests: The backend complexity. Do services behave correctly together? Payment triggers notification, inventory updates propagate, cancellations refund and restore stock.

Keep E2E thin and reliable. Let integration tests handle the cross-service logic.

When your E2E test fails, is it because a developer broke something, or because the test data is corrupted?

Well-designed integration tests solve this. When a test fails, you know exactly why. You're not debugging data. You're debugging code.

How to Make This Work in Practice

Ownership: Give the repository a clear owner, typically a QA or platform team. Other teams contribute tests, but one group maintains standards and tooling.

Scope: Focus on cross-service flows that matter: checkout, payments, authentication, data sync. Don't duplicate unit or contract tests.

Execution: Run tests post-deploy in a shared environment (staging or a dedicated test environment). Wire them into your CD pipeline so deployments can fail or roll back based on these tests.

Data: Provide stable test fixtures for core entities. Avoid over-randomizing: you want deterministic, explainable failures. Each test creates its own data and cleans up after itself.
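That data discipline can be sketched as a simple pattern. The client below is an in-memory stand-in so the example runs standalone; in a real central-testing-repository it would be an HTTP client pointed at the shared environment, and the fixture would likely live in your test framework (e.g. a pytest yield fixture):

```python
from contextlib import contextmanager

class FakeOrdersApi:
    """In-memory stand-in for a deployed Orders service."""
    def __init__(self):
        self._orders = {}
    def create(self, order_id, total):
        self._orders[order_id] = {"id": order_id, "total": total}
        return self._orders[order_id]
    def get(self, order_id):
        return self._orders.get(order_id)
    def delete(self, order_id):
        self._orders.pop(order_id, None)

@contextmanager
def seeded_order(api, order_id="it-checkout-001", total=59.97):
    # Deterministic fixture: fixed ID and amount, no randomness, so a
    # failure always points at code, not at data. Cleanup is guaranteed.
    order = api.create(order_id, total)
    try:
        yield order
    finally:
        api.delete(order_id)

api = FakeOrdersApi()
with seeded_order(api) as order:
    assert api.get(order["id"])["total"] == 59.97  # the actual test body
assert api.get("it-checkout-001") is None          # data cleaned up
```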

Change workflow: When a service change breaks a central test, don't treat it as noise. It's a signal to discuss the impact with other teams. If tests fail too often for the wrong reasons, fix the test data or contracts rather than skipping the tests.

Conclusion

Integration API testing isn't about replacing existing tests. It's about filling the gap between unit tests, contract tests and E2E tests.

The central-testing-repository approach is a scaffold. You build it because your architecture is evolving, legacy and modern systems coexist, and you need system-level confidence today, not after a multi-year migration completes.

The value is clear for everyone involved. Developers stop debugging production issues that could've been caught in CI. QA owns the quality of API interactions with clear attribution when something breaks. Engineering gets fewer incidents and better ROI than flaky E2E suites. Architects get system-level verification without requiring organizational changes.

Yes, it requires coordination. Yes, it means two PRs sometimes. But the alternative is production failures that no existing test catches.

Start small. Pick your most critical flow. Write one test that follows a request through multiple services. Then expand from there.
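That first test can be very small. A sketch of the shape it takes, with the two services as in-process stubs so the example runs standalone; in the central-testing-repository these would be real HTTP calls to deployed services, with URLs, auth, and polling for async events replacing the direct calls:

```python
class NotificationService:
    """Stub for the downstream service; records what it would send."""
    def __init__(self):
        self.sent = []
    def on_payment(self, payment):
        if payment["status"] == "completed":
            self.sent.append({"order_id": payment["order_id"], "type": "receipt"})

class PaymentService:
    """Stub for the upstream service; triggers the cross-service side effect."""
    def __init__(self, notifier):
        self.notifier = notifier
    def process(self, order_id, amount):
        payment = {"order_id": order_id, "amount": amount, "status": "completed"}
        self.notifier.on_payment(payment)
        return payment

def test_payment_triggers_notification():
    notifier = NotificationService()
    payments = PaymentService(notifier)

    payment = payments.process("order-42", 19.99)

    # The integration assertion: not "did the call succeed" but
    # "did the cross-service side effect actually happen".
    assert payment["status"] == "completed"
    assert any(n["order_id"] == "order-42" for n in notifier.sent)

test_payment_triggers_notification()
```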


Have you built something similar? Let me know in the comments.
