Shazzadur Rahman

Posted on • Originally published at Medium

Monolithic Thinking Is the Biggest Bug in Your Microservice Testing

If you started your QA career testing a monolithic application, you already have a foundation. But that foundation can quietly work against you in a microservice system — because the rules have changed, and most junior QA engineers don't realize it until something breaks in a way they can't explain.

This article will help you understand how microservice systems are built, why they behave differently, and what you need to start thinking about as a QA engineer working in one.


What Even Is a Microservice Architecture?

In a traditional monolithic application, everything lives in one place. The user interface, the business logic, the database — all bundled together. When you test a login feature, you're testing one system.

Microservice architecture breaks that single system into smaller, independently running services. Each service:

  • Handles one specific business capability (e.g., payments, user accounts, notifications)
  • Manages its own database
  • Communicates with other services through APIs or events

Here's what a simplified e-commerce system might look like:

         API Gateway
              │
  ┌───────────┼───────────┐
  ▼           ▼           ▼
User       Order       Payment
Service    Service     Service
  │           │           │
  ▼           ▼           ▼
User DB    Order DB   Payment DB

Each of those services is developed, deployed, and scaled by a separate team — independently of the others.

Flexible? Yes.
Complex to test? Absolutely.


Why Your Old Testing Approach Breaks Here

In a monolith, a login flow looks like this:

User enters credentials → Business logic validates → Database checks → Response returned

That's one system. One place to look when something breaks.

In a microservice system, the exact same login might look like this:

Request hits API Gateway
       ↓
  Auth Service (validates credentials)
       ↓
  User Service (fetches user profile)
       ↓
  Session Service (creates session token)
       ↓
  Response returned

Four services involved in a single login.

If anything fails — a timeout in the Auth Service, a slow response from the Session Service, a data mismatch in the User Service — the whole flow breaks. And the error message the user sees might give you zero clue about which service actually caused it.

This is why junior QA engineers often feel lost in microservice environments. They're testing the output without understanding the system producing it.


The Mindset Shift: From Feature Tester to System Observer

In a monolith, the right question is:

“Does this feature work correctly?”

In microservices, the right question is:

“Which services are involved, and what happens when one of them fails?”

This is the core shift.

You're no longer just validating a response — you're observing a system.

When you pick up a test scenario, train yourself to ask:

  • Which services participate in this flow?
  • What data does each service depend on?
  • What happens if one service responds slowly or not at all?
  • Is the data consistent across services after the transaction completes?

You don't need to know how to build microservices to ask these questions. You just need to understand the architecture well enough to think through the flow.


Step 1: Get the Architecture Diagram (And Actually Read It)

This is the most underrated habit in QA.

Before you write a single test case for a feature, ask your team for:

  • A service interaction diagram (which services talk to which)
  • An event flow diagram (what triggers what)
  • A deployment overview (how services are grouped)

These diagrams tell you things that no test case document will:

  • Where does the data originate?
  • Which service owns a piece of data?
  • How do services notify each other when something happens?

If you're testing a payment flow but you don't know that the Payment Service sends an event to the Notification Service after a transaction, you'll miss testing whether the user gets their confirmation email — because that's not the Payment Service's job.

Without the diagram, you're testing in the dark.


Step 2: Understand How Services Talk to Each Other

Microservices don't communicate the same way in all situations. There are two main patterns, and you need to test them differently.

Synchronous Communication (HTTP / gRPC)

One service directly calls another and waits for a response.

Service A ──── REST API call ───→ Service B
               (waits for response)

What to test:

  • Does the API return the correct data when everything works?
  • What happens when Service B is slow? Does Service A handle the timeout?
  • What error does the user see if Service B is down?
  • Are the API contracts (request/response formats) being respected?
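These synchronous checks translate naturally into automated tests. Here's a minimal Python sketch; `fetch_user_profile`, the URL shape, and the status labels are illustrative assumptions, not a real client:

```python
import requests  # common choice for HTTP-level QA checks

def fetch_user_profile(base_url, user_id, timeout_s=2.0):
    """Call a (hypothetical) User Service endpoint and normalize the
    failure modes a dependent service can surface: 'ok', 'timeout',
    or 'unavailable'."""
    try:
        resp = requests.get(f"{base_url}/users/{user_id}", timeout=timeout_s)
        resp.raise_for_status()
        return "ok", resp.json()
    except requests.exceptions.Timeout:
        # Service B responded too slowly: the timeout path to verify
        return "timeout", None
    except requests.exceptions.RequestException:
        # Service B is down or unreachable: the outage path to verify
        return "unavailable", None
```

A test can then assert on the normalized status instead of letting a raw exception bubble up, which turns the timeout and outage scenarios into explicit, repeatable test cases.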

Asynchronous Communication (Events / Message Queues)

One service publishes an event. Another service picks it up later — independently.

Service A → publishes event to Kafka → Service B consumes it → Service C is triggered

This is trickier to test because there's no instant response. The flow happens over time.

What to test:

  • Is the event published in the correct format?
  • Does Service B actually consume the event and process it correctly?
  • What happens if the event is published but Service B is temporarily down — does the message get processed later?
  • Are there duplicate events being sent? How does the system handle that?
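Because there's no instant response, asynchronous assertions have to poll for the side effect instead of checking once. A small polling helper is often all you need (a sketch; the timeout values are arbitrary):

```python
import time

def wait_for(condition, timeout_s=10.0, interval_s=0.5):
    """Poll `condition` until it returns a truthy value or the deadline
    passes. Asynchronous flows deliver their effects with a delay, so a
    single immediate check would report false failures."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval_s)
    raise TimeoutError(f"condition not met within {timeout_s}s")
```

In practice the condition would query Service B's API or database for the record the event should have produced, e.g. `wait_for(lambda: get_notification(order_id))`, where `get_notification` is whatever read path your system actually exposes.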

For a junior QA engineer, asynchronous flows are where most real bugs hide. If you're not thinking about them, you're missing a big part of the system.


Step 3: Understand Who Owns the Data

In a monolith, there's usually one database. You can query it directly to verify that data was saved correctly.

In microservices, every service has its own database:

Order Service   → Order Database
Payment Service → Payment Database
User Service    → User Database

This creates a testing challenge called eventual consistency.

When an order is placed, the Order Service records it immediately — but the Payment Service might take a few milliseconds (or longer) to update its records. During that window, the data across the two databases is temporarily inconsistent.

What junior QA engineers often miss:

  • Checking the Order Database but forgetting to verify the Payment Database
  • Not accounting for the delay in asynchronous updates
  • Assuming that if one service confirms success, all services have updated

A good habit: after any cross-service transaction, check the relevant database in each service that was involved. Don't just trust the API response.
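That habit can be automated. Here's a sketch of a cross-service consistency check, assuming each service exposes some way to read its own records (`read_order` and `read_payment` are illustrative stand-ins, and the status values are made up):

```python
import time

def records_consistent(read_order, read_payment, order_id, timeout_s=5.0):
    """Poll both services' stores until the order and its payment agree,
    allowing for the eventual-consistency window instead of failing on
    the first mismatch."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        order = read_order(order_id)
        payment = read_payment(order_id)
        if order and payment and order["status"] == "PAID" \
                and payment["status"] == "CAPTURED":
            return True
        time.sleep(0.1)
    return False
```

The key point is the loop: checking each database exactly once, immediately after the API response, is the mistake listed above.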


Step 4: Learn to Follow a Request Through the System

When a bug happens in a microservice system, “it's broken” tells you almost nothing. You need to know where it broke.

Teams handle this with observability tools — tools that track what a request does as it moves through services:

  • Centralized logging (e.g., ELK Stack, Datadog): collects logs from all services in one place
  • Distributed tracing (e.g., Jaeger, Zipkin): shows you the full path of a request across services, with timing
  • Monitoring dashboards: give you a real-time view of service health

Here's what a trace might show you:

Request ID: abc-123
API Gateway        → 5ms
Auth Service       → 12ms
User Service       → 340ms  ← slow!
Session Service    → 8ms
Total: 365ms

That trace tells you the User Service is the bottleneck — not the Auth Service, not the gateway. Without tracing, you'd have no idea.
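You can also reason about traces programmatically. Given spans as (service, milliseconds) pairs, finding the bottleneck is one line (a sketch; real tracing tools expose richer span objects than tuples):

```python
def slowest_span(trace):
    """Return the (service, ms) span with the longest duration --
    the first place to look when a flow is slow."""
    return max(trace, key=lambda span: span[1])

# The trace from above, as data
trace = [
    ("API Gateway", 5),
    ("Auth Service", 12),
    ("User Service", 340),
    ("Session Service", 8),
]
```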

As a QA engineer, you don't need to set up these tools — but you do need to learn how to read them. Ask your team to walk you through how they trace requests. It will make you significantly more effective at diagnosing failures.


Step 5: Test What Happens When Things Go Wrong

This is where microservice testing gets interesting — and where most junior QA engineers don't spend enough time.

In a monolith, you mostly test happy paths and a few negative scenarios. In microservices, failure is a normal part of the system.

Services go down.
Networks get slow.
Events get delayed.

The system is supposed to handle these gracefully. Your job is to verify that it does.

Failure scenarios worth testing:

  • A dependent service is down → Does the system return a clear error? Does it retry?
  • Network latency is high → Does the service time out correctly?
  • An event is published twice → Does the consumer handle duplicates without creating duplicate records?
  • A service restarts mid-transaction → Is the transaction completed, rolled back, or left in a broken state?
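The duplicate-event scenario deserves a concrete picture. Consumers are usually made idempotent by remembering which event IDs they have already processed; this sketch uses an in-memory set, where a real service would use something durable like a database table with a unique constraint:

```python
class IdempotentConsumer:
    """Sketch of duplicate-event handling: each event carries an ID,
    and events seen before are ignored instead of reprocessed."""

    def __init__(self):
        self.processed_ids = set()
        self.records = []

    def handle(self, event):
        if event["id"] in self.processed_ids:
            return False  # duplicate: skip, create no second record
        self.processed_ids.add(event["id"])
        self.records.append(event["payload"])
        return True
```

A QA test for this is simple: publish the same event twice and assert that exactly one record exists afterwards.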

Concrete example:

Your team builds a checkout flow. The Payment Service is unavailable.

Questions to test:

  • Does the Order Service still create an order record? Should it?
  • Does the user see “Payment failed, please try again” — or a generic 500 error?
  • Is there a retry mechanism? How many times does it retry?
  • If the payment eventually succeeds on retry, does the order status update correctly?
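If the team does claim a retry mechanism, you can verify the policy directly by counting attempts against a stub. A minimal retry-with-backoff sketch (the function name and the `ConnectionError` failure mode are illustrative, not a real payment client):

```python
import time

def call_with_retries(operation, max_attempts=3, base_delay_s=0.5):
    """Retry a failing call with exponential backoff, re-raising only
    after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Back off: 0.5s, 1s, 2s, ... between attempts
            time.sleep(base_delay_s * 2 ** (attempt - 1))
```

Wrapping a stubbed payment call in a counter tells you exactly how many retries happen and with what spacing, which is what the checkout questions above are really asking.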

These are not edge cases.
These are real production scenarios.

Test them.


The Testing Pyramid for Microservices

Effective microservice testing doesn't happen at one level — it happens at four:

1. Unit Testing

Developers test individual functions in isolation.
You won't own this, but you should know it exists.

2. Service Testing

Each microservice is tested independently, with its dependencies mocked.
You verify that the service behaves correctly on its own.
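In Python, this level often uses `unittest.mock` to stand in for the other services. A sketch, where `create_order`, `user_client`, and `order_store` are illustrative names rather than a real codebase:

```python
from unittest.mock import Mock

def create_order(user_client, order_store, user_id, items):
    """Order Service logic under test: validate the user via the
    User Service client, then persist the order."""
    if user_client.get_user(user_id) is None:
        raise ValueError("unknown user")
    order = {"user_id": user_id, "items": items, "status": "CREATED"}
    order_store.save(order)
    return order

# Mock the User Service so the Order Service is exercised in isolation.
user_client = Mock()
user_client.get_user.return_value = {"id": 42, "name": "Alice"}
order_store = Mock()

order = create_order(user_client, order_store, 42, ["book"])
```

Because the dependencies are mocked, this test stays fast and deterministic even if the real User Service is down, which is the whole point of this level of the pyramid.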

3. Integration Testing

Two or more services are tested together.
You verify that Service A and Service B communicate correctly under real conditions.

4. End-to-End Testing

You test the complete user workflow across all services.
This is where most QA engineers spend their time, and where most real-world failures are caught.

The further up the pyramid you go, the slower and more expensive the tests become — but also the more realistic.

Balance matters.


What This Means for Your Career

The QA role is evolving.

In modern teams, a QA engineer who only knows how to write UI test scripts has a ceiling.

A QA engineer who understands distributed systems, can read traces, and thinks in flows rather than features — that person is genuinely hard to replace.

You don't need to become a software architect. But you do need to understand the system you're testing well enough to ask the right questions and catch the bugs that only appear at the seams between services.


Final Thought

The hardest bugs in microservice systems don't live inside a single service.

They live in the space between services — in the assumptions each one makes about the others.

If you're only testing the API response in front of you, you're seeing one part of a much larger picture.

Start with the architecture diagram.
Follow the flow.
Test the failures.

That's where the real testing work is.


Shazzadur Rahman

Senior QA Engineer | AI Dashcam & Telematics Testing
