Joel C

Microservices Doesn't Mean Lambda Everything

Here's a scenario that might be familiar.

A team is building on a microservices architecture. Things are moving. There's momentum. Then a long-running background process starts failing — timeouts, cost spikes, instability. Someone raises the concern: Lambda is ephemeral. It's designed for short, simple, one-time tasks. It runs, finishes, goes back to sleep. A long-running daily job is the wrong fit.

The lead's response: No, that's not what Lambda is for.

Then, in the same breath, he repeated the exact same explanation: Lambda is ephemeral; it handles short, quick tasks. And he used that as the justification for keeping the Lambda in place. The DevOps engineer in the room, who shared the same concern, was instructed to make it work regardless.

I was in that room. And what followed was predictable in hindsight: costs climbed, the system became increasingly fragile, and eventually the whole thing came down.

Not because microservices was the wrong pattern. But because the team had confused an architectural philosophy with a compute mandate, and the person with decision-making authority wasn't willing to separate the two, even when the reasoning to do so came out of his own mouth.

That distinction is what this post is about.


What Microservices Actually Means

Microservices is an architectural pattern built around one core idea: breaking a system into small, independently deployable services, each responsible for a specific business capability, each able to be developed, scaled, and maintained on its own.

The benefits are real when applied correctly:

  • Services can be scaled independently based on their specific load

  • Teams can own and deploy individual services without coordinating a monolith release

  • A failure in one service doesn't have to cascade across the entire system

  • Different services can use different technologies where appropriate

Notice what's absent from that list: any mention of the compute those services run on. Microservices says nothing about Lambda, containers, VMs, or bare metal. It describes boundaries and responsibilities, not runtime infrastructure.

This is where teams slip.


The Conflation Problem

When a team goes serverless-first alongside a microservices architecture, there's a seductive logic that takes hold: each microservice is a function, Lambda runs functions, therefore each microservice should be a Lambda.

This sounds coherent. It is not.

A microservice is a unit of business capability with its own data, its own API, its own deployment lifecycle. A Lambda function is a specific compute primitive with specific constraints. It is stateless, ephemeral, time-limited. These are not the same thing, and they don't map cleanly onto each other.

Some microservices are perfectly suited for Lambda: lightweight, event-driven, short-lived operations. An authentication token validator. A webhook processor. A notification dispatcher. These fit the Lambda model well.
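To make the good-fit case concrete, here's a minimal sketch of a webhook processor as a Lambda handler. The handler name, event shape, and response codes are illustrative assumptions, not from any specific service: the point is the shape of the workload — stateless, fast, done in one short invocation.

```python
import json

def handler(event, context=None):
    """Validate and acknowledge an incoming webhook payload.

    Each invocation is independent and returns in milliseconds,
    which is exactly the profile Lambda is built for.
    """
    try:
        payload = json.loads(event.get("body", "{}"))
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "invalid JSON"}

    if "event_type" not in payload:
        return {"statusCode": 422, "body": "missing event_type"}

    # In a real service this would enqueue the payload (e.g. to SQS)
    # so the function returns quickly instead of doing slow work inline.
    return {
        "statusCode": 202,
        "body": json.dumps({"accepted": payload["event_type"]}),
    }
```

Note the design choice in the comment: even here, anything slow gets handed off to a queue. The Lambda itself stays short-lived.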

Others are not. A service that runs a nightly data reconciliation job. A service that processes large files sequentially over an extended period. A service that maintains a persistent connection. Forcing these into Lambda doesn't make your architecture more "micro". It makes it more fragile, more expensive, and harder to debug.
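The mismatch isn't a matter of taste; it's arithmetic. Lambda caps a single invocation at 15 minutes, so a sequential job whose runtime can exceed that either has to be chunked (Step Functions, SQS fan-out) or moved to a longer-lived runtime (ECS/Fargate, Batch, a plain VM). A rough back-of-envelope sketch, with made-up record counts and throughput numbers for illustration:

```python
# AWS Lambda's hard per-invocation ceiling is 15 minutes (900 seconds).
LAMBDA_MAX_SECONDS = 15 * 60

def estimate_runtime(record_count, records_per_second):
    """Rough runtime estimate for a sequential reconciliation job."""
    return record_count / records_per_second

def fits_in_lambda(record_count, records_per_second, safety_margin=0.8):
    """True only if the job comfortably fits one invocation,
    leaving headroom for retries and slow nights."""
    limit = LAMBDA_MAX_SECONDS * safety_margin
    return estimate_runtime(record_count, records_per_second) < limit

# A nightly job over 5M records at 1,000 records/sec needs ~5,000s —
# several times the Lambda ceiling, before a single retry.
```

If that check fails on a normal night, no amount of "making it work" changes the arithmetic; the workload belongs on a different compute primitive.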

The pattern should serve the problem. When it starts working the other way around, you're no longer doing architecture. You are cosplaying at scale.


Complexity Is a Cost

One of the underappreciated principles of system design is that complexity has a price, and that price compounds over time.

Microservices, done well, manage complexity by isolating it. Each service owns its domain cleanly, and the interactions between services are well-defined. Done poorly, microservices multiply complexity: more services means more network calls, more failure points, more observability challenges, more deployment overhead.

The decision to adopt microservices should come with an honest accounting of that overhead. It makes sense when:

  • The system is large enough that different parts genuinely need to scale independently

  • Multiple teams need to work autonomously without stepping on each other

  • Different components have meaningfully different reliability or performance requirements

It adds cost without proportional benefit when:

  • The system is early-stage and the domain isn't well understood yet
  • The team is small and the coordination overhead outweighs the autonomy gains
  • The services are so tightly coupled that deploying one still requires deploying others

Starting with a well-structured monolith and extracting services as genuine need emerges is often the more pragmatic path. It's less exciting to say. It's more honest.


The Principle Underneath All of This

Every architectural decision is a trade-off. Microservices trades simplicity for scalability and autonomy. Lambda trades control and longevity for speed and cost-efficiency at the right scale. These are good trades in the right contexts.

The failure mode isn't choosing one of these patterns. It's choosing them without understanding what you're trading, and then refusing to revisit that choice when the evidence suggests it isn't working.

What made the meeting I described particularly costly wasn't just the wrong technical call. It was that the correct reasoning was present in the room, understood well enough to be articulated, and still didn't change the outcome. When that happens, the problem is no longer technical. It's structural. And structural problems tend to be more expensive.

Good architecture is not a set of decisions made once at the beginning of a project. It's an ongoing conversation between what the system needs and what the current design provides. When that conversation stops, when the architecture becomes fixed and unquestionable, the system starts accumulating the cost of that silence.

Sometimes that cost is technical debt. Sometimes it's a failed deployment. Sometimes it's the whole thing coming down.


What This Looks Like in Practice

A few questions worth asking before committing to any architectural pattern:

  • What problem is this pattern solving for this system, right now?
  • What are the constraints and costs this pattern introduces?
  • Are we using this because it's the right fit, or because it's what we know?
  • What would need to be true for this decision to be wrong? Are we watching for it?

That last question is the most important one. An architecture you can't question is an architecture you can't improve.

Part 3 covers what happens when the person responsible for asking those questions is the one refusing to ask them.


Part 1: Serverless Is Not a Silver Bullet — Understanding What Lambdas Are Actually For
Part 3: Coming soon — When the Lead Is the Bottleneck

This is part of an ongoing series on tech decisions, architecture, and the human dynamics that shape both.
