(and why you should stop saying "asynchronous" without clarifying what you mean)
Many engineers say "we use asynchronous communication" — but they're almost always wrong.
What they're actually describing is non-blocking data transfer, not architectural asynchrony.
Let's draw the line between them — and see why mixing the two leads to poor system design.
Table of Contents
- Two Layers of Asynchrony
- The Criteria for Architectural Asynchrony
- Why the Confusion
- Common Myths
- When You Actually Need Architectural Asynchrony
- What to Say Instead of "Asynchronous Call"
- Final Thoughts
Two Layers of Asynchrony
The term "asynchronous" is overloaded. It's used to describe everything from queues to reactive APIs to event-driven systems.
In reality, there are two distinct layers:
- Transport asynchrony — non-blocking calls, reactive APIs, message queues. A property of the transport.
- Architectural asynchrony — one-way event flow with no expectation of response. A property of the interaction model.
If a service waits for a result, it is architecturally synchronous, even if it communicates through a broker.
The Criteria for Architectural Asynchrony
A service can be considered architecturally asynchronous if:
- it does not wait for a reply;
- its logic is independent from other services;
- communication is one-way only.
One direction — one interaction.
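Here is a minimal sketch of what those criteria look like in code, with an in-memory queue standing in for a real broker (the event name and handler are illustrative, not taken from any specific framework):

```python
import queue

# Stand-in for a broker topic; a real system would use Kafka, RabbitMQ, etc.
order_events = queue.Queue()

def publish_order_created(order_id: str) -> None:
    """Architecturally asynchronous: publish the event and move on.

    No reply is expected, the producer's logic does not depend on any consumer,
    and the data flows in one direction only.
    """
    order_events.put({"type": "OrderCreated", "order_id": order_id})

publish_order_created("A-1001")  # fire the event; nothing waits on the outcome
```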
Why the Confusion
Engineers often mistake technical mechanisms — queues, reactive APIs — for architectural properties.
That's why you hear statements like:
"We reduced coupling by switching to asynchronous communication."
— when in fact, they just added a broker between tightly coupled services.
Common Myths
The following myths stem from conflating transport and architectural asynchrony — mistaking non-blocking mechanisms for architectural independence.
Myth 1. Asynchrony reduces coupling
Not necessarily. If your system still follows a request/reply pattern, it remains tightly coupled — regardless of whether you use synchronous HTTP or an asynchronous queue.
There are two forms of coupling to consider:
- Temporal coupling — dependency on immediate availability.
- Logical coupling — dependency on another service’s result.
Queues can remove temporal coupling but leave logical coupling intact.
That’s why replacing HTTP with Kafka doesn’t make your system event-driven — it only moves the waiting into a different layer.
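For contrast, here is a hedged sketch of request/reply pushed through a queue. The broker removes the temporal coupling (the other service may be down while the message waits), but the caller still blocks on the result, so the logical coupling is intact. All names are illustrative:

```python
import queue

# Stand-ins for broker topics.
price_requests = queue.Queue()
price_replies = queue.Queue()

def get_price_via_queue(sku: str) -> float:
    """Architecturally synchronous, despite the broker in the middle."""
    price_requests.put({"type": "PriceRequest", "sku": sku})
    reply = price_replies.get(timeout=5)  # the waiting has moved into the queue layer
    return reply["price"]
```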
Myth 2. Asynchrony increases resilience
Transport asynchrony breaks temporal coupling, reducing cascading failures.
Your system stays responsive even when dependencies are down.
But resilience ≠ reliability.
Asynchrony absorbs transient failures — it doesn’t guarantee recovery.
True resilience requires explicit failure handling: retries, timeouts, idempotency, and dead-letter queues.
Without them, you get either message loss or resource exhaustion.
Queues don’t fix broken systems — they just buy you time to notice.
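As a rough illustration, here is what the minimum of that explicit failure handling might look like on the consumer side. The names, the attempt limit, and the in-memory queues are assumptions for the sketch, not a prescription:

```python
import queue

MAX_ATTEMPTS = 3                 # illustrative limit
retry_queue = queue.Queue()      # messages scheduled for another attempt
dead_letter = queue.Queue()      # messages parked after repeated failures
processed_ids = set()            # enables idempotent re-delivery

def consume(message: dict, handler) -> None:
    if message["id"] in processed_ids:
        return                               # idempotency: duplicates become no-ops
    try:
        handler(message)
        processed_ids.add(message["id"])
    except Exception:
        message["attempts"] = message.get("attempts", 0) + 1
        if message["attempts"] >= MAX_ATTEMPTS:
            dead_letter.put(message)         # don't lose it, don't retry forever
        else:
            retry_queue.put(message)         # re-deliver later, ideally with backoff
```

The dead-letter queue is what keeps repeated failures visible instead of letting them turn into silent message loss or an endless retry loop.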
Myth 3. Asynchrony improves scalability
Architectural asynchrony doesn't make systems scalable — it trades immediate failure for deferred complexity. Queues smooth spikes but also hide overload conditions until it’s too late.
Behind every “scalable” async design lurk retries, idempotency, message ordering, and dead-letter queues. These aren’t optional — they’re the real price of reliability.
And there’s a hidden trap: loss of centralized deadline control. Each service runs on its own clock, and when the client times out, downstream consumers may still process events long after the result is useless.
In synchronous chains, closing the connection stops all work. In event-driven systems, you must propagate deadlines explicitly — or waste resources on obsolete tasks.
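One hedged way to propagate a deadline is to stamp it on the event itself and have every consumer check it before doing any work. The field name and helpers below are illustrative; with Kafka you might carry the same value in a message header:

```python
import time

def publish_with_deadline(event: dict, timeout_s: float) -> dict:
    """Stamp an absolute deadline so consumers inherit the client's time budget."""
    event["deadline"] = time.time() + timeout_s
    return event

def consume(event: dict, handler) -> None:
    if time.time() > event["deadline"]:
        return          # the client has already timed out; skip the obsolete work
    handler(event)      # only spend resources while the result can still be used
```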
Asynchrony doesn’t give you scalability for free. It just gives you the tools — if you know how to use them.
Myth 4. Asynchrony speeds up results
No. This myth applies to both transport and architectural asynchrony.
Asynchrony improves responsiveness (UI doesn't freeze) but often increases end-to-end latency.
Example: a synchronous HTTP call returns in 300ms; an asynchronous queue returns "202 Accepted" in 10ms but delivers the actual result via webhook after 500ms. The UI feels fast, but the result arrives later.
Use synchronous calls when you need immediate results (balance checks, searches). Use asynchronous flows when the task can wait without blocking the UI (reports, batch processing).
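A small sketch of that trade-off, with an in-memory queue standing in for the real transport (the job IDs, queue name, and worker are hypothetical):

```python
import queue
import uuid

report_jobs = queue.Queue()   # stand-in for the real transport
results = {}                  # where the worker eventually puts finished reports

def accept_report_request(params: dict) -> dict:
    """Fast acknowledgement: great responsiveness, but no result yet."""
    job_id = str(uuid.uuid4())
    report_jobs.put({"job_id": job_id, "params": params})
    return {"status": 202, "job_id": job_id}

def worker() -> None:
    """The slow part happens here; end-to-end latency includes this step too."""
    job = report_jobs.get()
    results[job["job_id"]] = f"report for {job['params']}"
```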
Summary of the Myths
Transport asynchrony is a mechanism.
Architectural asynchrony is a semantic property.
Mixing them up leads to flawed reasoning and poor design.
When You Actually Need Architectural Asynchrony
- One-way (fire-and-forget) events — logging, metrics, push notifications.
- Multiple consumers (Pub/Sub) — one event processed by multiple independent services (see the sketch after this list).
- Deferred processing — batch jobs, background tasks, integrations with slow APIs.
- Separation of concerns — each service handles its own data subset, no shared transaction (eventual consistency).
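A minimal fan-out sketch for the Pub/Sub case, with in-memory queues standing in for broker subscriptions (the consumer names are illustrative):

```python
import queue

# One queue per subscriber stands in for broker subscriptions.
subscribers = {
    "email": queue.Queue(),
    "analytics": queue.Queue(),
    "audit": queue.Queue(),
}

def publish(event: dict) -> None:
    # The producer neither knows nor cares what each consumer does with the event.
    for q in subscribers.values():
        q.put(event)

publish({"type": "UserRegistered", "user_id": 42})

for name, q in subscribers.items():
    print(name, "received", q.get_nowait())  # each consumer drains its own copy
```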
If an immediate response is required, architectural asynchrony is not appropriate.
Use synchronous RPC or plain HTTP.
What to Say Instead of "Asynchronous Call"
In engineering practice, "asynchronous call" has become synonymous with non-blocking execution — when a thread doesn't wait for I/O.
But in architectural discussions, "asynchronous" should mean temporal decoupling — services don't depend on each other's immediate availability or response.
Mixing these contexts obscures design intent:
- Transport asynchrony solves blocking (performance problem)
- Architectural asynchrony solves coupling (dependency problem)
To avoid confusion, reserve "asynchronous" for implementation details, and use more precise terms for architectural patterns:
| Term | Meaning | Evaluation |
| --- | --- | --- |
| One-way call | Data transfer with no response expected. UML-compatible. | Optimal. |
| No-reply call | Emphasizes the absence of a reply. | Clear, but less formal. |
| Fire-and-forget | Common in event-driven contexts. | Familiar, but informal for documentation. |
| Deferred call | Processing scheduled for later execution. | Valid in business workflows. |
Recommendation:
Use "one-way call" — it's precise and avoids confusion between transport and architecture.
Final Thoughts
Until engineers distinguish transport asynchrony from architectural asynchrony,
all discussions about "reactivity" and "scalability" will remain superficial.
Transport asynchrony solves the blocking problem.
Architectural asynchrony solves the dependency problem.
Confusing them leads to systems that only appear loosely coupled.
What about your team?
Do you call request/reply over Kafka "asynchronous communication"?
Have you faced distributed timeouts or found that queues didn't actually make things faster?
Share your experience in the comments!