
Serverless vs Kubernetes: The Final Showdown - Why we decided to give this talk and where to go from here

For the past 8 years I mostly built serverless applications, then joined a company running its workloads on Kubernetes. The transition was hard for me. I stayed in contact with a former colleague who, on the other end of the spectrum, had spent his career building on containers and on-premise k8s in particular, and I often reached out to him to understand Kubernetes and, more than occasionally, to rant about it.

In response, he shared his frustrating experiences in attempting to run serverless applications, both from a cold-start and a cost perspective.

That's how we came to the idea of writing a talk about this and presenting it at conferences.

Every engineering team has heard, or been part of, the endless “Serverless vs. Kubernetes” debate. Strong opinions are thrown around, memes circulate on LinkedIn, and blanket statements spread like wildfire. And like all stereotypes, they are true and false at the same time.

never trust a blanket statement

The truth? Much of the conversation is oversimplified, emotionally charged, and disconnected from the reality teams face when building and operating software at scale.

Our talk, and this article, are our attempt to bring nuance back into the discussion.

Too often we watch people compare these two paradigms the way you'd compare car-sharing to owning a car.

Both models are valid. Both are powerful. And neither can replace the other across all use cases.

The real question is not "Which one is better?" but "Which one is better for this workload, with this team, in this organisation, at this time?"

In our talk we go through several of these aspects and try to debunk the myths around them.

Control vs. Complexity

Kubernetes gives you:

  • precise control over deployments
  • resource scheduling (CPU, GPU, affinity rules)
  • hybrid/on-prem/cloud consistency
  • advanced rollout patterns and self-healing

Serverless gives you:

  • minimal configuration
  • "infrastructure you don't manage"
  • the fastest path to shipping business logic

But control comes at a cost:
A typical Kubernetes stack isn’t just k8s. It’s:

  • Ingress controller
  • Service mesh
  • Metrics stack
  • Logging stack
  • Secrets manager
  • Autoscalers (HPA, VPA, CA, Karpenter...)
  • Backup tools
  • Certificate management
  • Policy/OPA
  • CI/CD integrations

So you don't just have to tame the Kubernetes beast; you also need to understand and manage an ecosystem of several critical open-source tools.

Serverless shifts much of this burden to AWS (or GCP/Azure), leveraging the shared responsibility/fate model.

Shared responsibility model

Automagical scaling

Serverless: “I don’t want to be bothered.”
K8S: “We’re a large org; we must be bothered.”

Scalability in K8S works with autoscalers (HPA, VPA, CA, Karpenter, EKS Auto Mode), but each solves a different slice of the scaling problem. And each introduces configuration overhead.

Autoscaling in K8S
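To make that configuration overhead concrete, here is a minimal sketch of just one of those pieces: a CPU-based HorizontalPodAutoscaler created with the official kubernetes Python client (assuming a recent client version that exposes the autoscaling/v2 API). The web-api deployment name and the thresholds are hypothetical, and in practice most teams would apply the equivalent YAML manifest instead.

```python
# Minimal sketch: a CPU-based HorizontalPodAutoscaler via the official
# "kubernetes" Python client. Names and thresholds are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-api-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-api"  # hypothetical deployment
        ),
        min_replicas=2,
        max_replicas=20,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(type="Utilization", average_utilization=70),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

And that covers only the HPA; VPA, the Cluster Autoscaler and Karpenter each bring their own resources and configuration surface on top.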

Serverless scalability feels magical until you hit concurrency quotas or encounter cold starts, because in reality Lambda does have constraints:

  • Account/Regional concurrency limit
  • Scaling rate = 1,000 new execution environments every 10 seconds
  • Requests per second limit (throughput can grow by up to 10k RPS every 10 seconds)

and you indeed need to understand how they work and how to use (see the sketch after this list):

  • Reserved concurrency
  • Provisioned concurrency

Cost is too often misunderstood

When comparing serverless and K8S on high-volume apps (~4B requests/month), it seems obvious who the winner is, by thousands of dollars.
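To make that concrete, here is a rough back-of-envelope sketch. The price figures are the published x86 Lambda list prices ($0.20 per million requests, $0.0000166667 per GB-second in us-east-1), but the workload profile (512 MB of memory, 200 ms average duration) is an invented assumption, so your numbers will differ.

```python
# Rough back-of-envelope Lambda cost estimate; the workload profile is invented.
requests_per_month = 4_000_000_000      # ~4B requests/month
avg_duration_s = 0.2                    # assumed 200 ms per invocation
memory_gb = 0.5                         # assumed 512 MB

price_per_request = 0.20 / 1_000_000    # $0.20 per 1M requests (list price)
price_per_gb_second = 0.0000166667      # x86 duration price, us-east-1

request_cost = requests_per_month * price_per_request                                  # ~$800
compute_cost = requests_per_month * avg_duration_s * memory_gb * price_per_gb_second   # ~$6,667

print(f"requests: ${request_cost:,.0f}/month")
print(f"compute:  ${compute_cost:,.0f}/month")
print(f"total:    ${request_cost + compute_cost:,.0f}/month")
```

At that volume, a handful of right-sized nodes can usually serve the same traffic for far less raw compute spend.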

But the key missing factor is:

The most expensive resource isn’t compute. It’s people.

If you need:

  • platform engineers
  • k8s experts
  • observability specialists
  • SRE practices
  • on-call rotations
  • tooling ownership

…you must include those in your cost model.

A Kubernetes cluster may be cheaper than Lambda…
but is it cheaper than hiring two expert platform engineers?

I would rather pay for 2 more engineers developing our business logic and adding value to the product.

Long-Running Processes and Application Modernisation

Most detractors of serverless then mention Lambda's 15-minute timeout and Lift & Shift as deal-breakers.

But this assumes you build long-running monolithic workloads, while serverless is, for me, primarily a mindset. Once you start seeing things from a serverless perspective, things look different and you might not even have such problems.

colorful lego monolith

Because serverless offers alternative patterns (a fan-out sketch follows this list):

  • Task chunking
  • SQS fan-out
  • Event-driven pipelines
  • Step Functions (with workflows lasting up to a year)
  • Checkpointing/state transition events
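As a sketch of the first two patterns combined: split a large job into chunks, fan the chunks out over SQS, and let a worker Lambda process one chunk per message so no single invocation comes anywhere near the 15-minute limit. The queue URL, chunk size and process() logic are hypothetical.

```python
# Sketch: chunk a large job and fan the chunks out over SQS so that no single
# Lambda invocation has to run for long. Queue URL and sizes are hypothetical.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/job-chunks"  # hypothetical

def fan_out(items, chunk_size=100):
    """Split `items` into chunks and enqueue one SQS message per chunk."""
    chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
    for start in range(0, len(chunks), 10):  # send_message_batch accepts at most 10 entries
        entries = [
            {"Id": str(start + offset), "MessageBody": json.dumps(chunk)}
            for offset, chunk in enumerate(chunks[start:start + 10])
        ]
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)

def process(item):
    """Hypothetical per-item work; replace with real business logic."""
    print("processing", item)

def handler(event, context):
    """Worker Lambda triggered by SQS: processes one chunk per message."""
    for record in event["Records"]:
        for item in json.loads(record["body"]):
            process(item)
```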

And Lift & Shift does not address the architectural and technical challenges that are pushing your organisation towards modernisation. Serverless and the Strangler Fig pattern do (a façade sketch follows below):

  • API Gateway + Lambda façade
  • Gradually carve out endpoints
  • Migrate at your own pace
  • Avoid rewriting the entire system upfront

Strangler Fig
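As a sketch of what that façade can look like, assuming an API Gateway HTTP API with the v2 payload format, a hypothetical legacy base URL, and /orders as the first carved-out endpoint: one Lambda serves the migrated routes and proxies everything else to the legacy system.

```python
# Sketch: a strangler-fig façade as a single Lambda behind API Gateway.
# Migrated routes are handled here; everything else is proxied to the legacy
# system. LEGACY_BASE_URL and the /orders route are hypothetical, and the
# proxy handles only GET requests for brevity.
import json
import urllib.request

LEGACY_BASE_URL = "https://legacy.internal.example.com"  # hypothetical

def handler(event, context):
    path = event.get("rawPath", "/")  # API Gateway HTTP API, payload format v2

    if path.startswith("/orders"):
        # Already carved out: the new serverless implementation lives here.
        return {"statusCode": 200, "body": json.dumps({"orders": []})}

    # Not migrated yet: pass the request through to the legacy system.
    with urllib.request.urlopen(LEGACY_BASE_URL + path) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode("utf-8")}
```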

Serverless modernization is incremental.
Kubernetes modernization is infrastructural.

Different problems, different solutions.

So… How Do You Choose?

Organizational factors trump technical ones.

You should ask:

  • What is our current team capable of supporting?

  • Is our priority speed, cost, stability, or control?

  • What are the budget constraints (predictability vs. cost per transaction)?

  • What are the company priorities, direction and budget (survival, growth, innovation)?

Project Paradox

The smartest teams don’t argue over technology choices.
They ask: “Which architecture best serves our workload and our organization right now?”

And instead of fighting over paradigms, we should be learning from each other—teams that build scalable serverless systems and teams that run rock-solid Kubernetes platforms.

Both worlds have powerful ideas worth sharing.


Your turn: Help us expand the conversation

I don't want to go into too much detail here because I don't want to spoil the talk. After presenting it at AWS Community Day Poland and the Netherlands, as well as DevFest Hamburg, I hope we will be able to visit many other conferences in 2026.

Talk Q&A

But since the talk sparked a lot of interest at these conferences, I would like to refine and deepen it further.

👉 Which technical aspects would you like us to explore next?

  • Kubernetes autoscaling internals
  • Hybrid architectures (EKS + serverless)
  • Evolutionary architectures
  • EDA patterns and migrating from a monolith to event-driven architecture

Or maybe you disagree with something we said.
Perfect — tell us!

Drop your thoughts, critiques, and questions.
Your input will help shape Version 2 of our talk.
