Alan West

AWS Lambda's Hidden Costs: When to Migrate to Containers (And How)

If you've been building on AWS Lambda for a while, you've probably hit that moment. The one where your monthly bill makes you do a double-take, or where a cold start tanks your API response time right when a customer is watching. There's been a lot of chatter lately about what some developers are calling the Lambda "kiss of death" — that inflection point where serverless stops being your friend and starts quietly draining your budget and your sanity.

I've been there. After running Lambda-heavy architectures across multiple projects, I want to walk through when Lambda still makes sense, when it doesn't, and how to migrate to containers when you've hit that wall.

The Tipping Point: Where Lambda Breaks Down

Lambda is genuinely great for certain workloads. Event-driven, bursty, low-traffic — it shines there. But the problems start creeping in as you scale:

  • Cold starts — That 1-3 second penalty on Java/.NET runtimes isn't theoretical. It's real, and it hits your P99 latency hard.
  • Cost at scale — Lambda charges per invocation and per GB-second. Once you're consistently handling thousands of requests per second, a $20/month container suddenly looks very attractive compared to a $400/month Lambda bill.
  • Vendor lock-in — Your Lambda handlers, IAM roles, API Gateway configs, Cognito integration — it's all deeply AWS-specific. Moving becomes a project in itself.
  • Debugging pain — Distributed tracing across 30 Lambda functions with Step Functions orchestration is... not fun. I've spent entire afternoons chasing issues that would've been a 5-minute debug session in a monolith.
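To make the cost point concrete, here's a back-of-the-envelope model using AWS's published us-east-1 list prices ($0.20 per million requests, ~$0.0000166667 per GB-second for x86). The traffic numbers are illustrative assumptions, and this ignores the free tier and per-millisecond duration rounding:

```javascript
// Rough Lambda cost model. Prices are us-east-1 list prices at time of writing;
// the request rate, memory size, and duration below are illustrative assumptions.
const PER_MILLION_REQUESTS = 0.20;   // USD per 1M invocations
const PER_GB_SECOND = 0.0000166667;  // USD per GB-second (x86)

function lambdaMonthlyCost({ requestsPerSecond, memoryGb, avgDurationSec }) {
  const requests = requestsPerSecond * 60 * 60 * 24 * 30; // ~1 month of traffic
  const gbSeconds = requests * memoryGb * avgDurationSec;
  return (requests / 1e6) * PER_MILLION_REQUESTS + gbSeconds * PER_GB_SECOND;
}

// 100 req/s sustained, 512 MB, 100 ms average duration:
const cost = lambdaMonthlyCost({ requestsPerSecond: 100, memoryGb: 0.5, avgDurationSec: 0.1 });
console.log(cost.toFixed(2)); // prints "267.84"
```

Run the same numbers against a flat-rate container and the crossover point jumps out: sustained traffic is exactly where per-invocation billing stops working in your favor.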

Lambda vs. Containers: An Honest Comparison

Let's look at the same simple API endpoint in both worlds.

Lambda (Node.js)

// handler.js — tied to the AWS Lambda runtime API
import AWS from 'aws-sdk'; // v2 SDK; the .promise() call below is v2-style

const dynamo = new AWS.DynamoDB.DocumentClient();

export const handler = async (event) => {
  const userId = event.pathParameters.id;

  // DynamoDB call — another AWS dependency
  const user = await dynamo.get({
    TableName: process.env.USERS_TABLE,
    Key: { id: userId }
  }).promise();

  return {
    statusCode: 200,
    // API Gateway expects this exact shape
    body: JSON.stringify(user.Item)
  };
};

Container (Express)

// server.js — runs anywhere Docker runs
import express from 'express';
import { getUser } from './db.js'; // your own DB layer

const app = express();

app.get('/users/:id', async (req, res) => {
  const user = await getUser(req.params.id);
  res.json(user); // standard HTTP, no special response shape
});

// Works on ECS, GCP Cloud Run, Fly.io, your laptop
app.listen(process.env.PORT || 3000);

Notice how the container version is just... a normal web app. No proprietary event shapes, no vendor-specific response format. That's the whole point.

Here's the tradeoff matrix:

| Factor | Lambda | Containers (ECS/Cloud Run) |
| --- | --- | --- |
| Cold starts | 100ms–3s depending on runtime | None (always running) |
| Cost at low traffic | Pennies | ~$5–15/mo minimum |
| Cost at high traffic | Scales linearly (expensive) | Flat-ish (much cheaper) |
| Vendor lock-in | High | Low to moderate |
| Debugging | CloudWatch + X-Ray | Standard tooling (any APM) |
| Deploy complexity | Simple (zip/upload) | Moderate (Docker + orchestration) |

The Auth Lock-In Nobody Talks About

Here's something I don't see discussed enough: when people talk about Lambda lock-in, they focus on compute. But if you're using Amazon Cognito for auth, that's another layer of lock-in that makes migration even harder.

Cognito user pools don't export cleanly; password hashes can't be exported at all, so leaving means forcing every user through a password reset. The hosted UI is limited. And if you ever want to move off AWS, your entire auth layer either moves with you or gets rebuilt from scratch.

This is where picking a portable auth solution upfront saves you pain later. A few options worth considering:

  • Auth0 — The established player. Excellent docs, huge ecosystem. But the pricing gets steep fast once you pass the free tier, and per-user pricing adds up.
  • Clerk — Great DX, modern React components. But it's also SaaS with per-user pricing, and you're still dependent on their infrastructure.
  • Authon (authon.dev) — A hosted auth service with 15 SDKs across 6 languages and 10+ OAuth providers. The interesting bit: free plan with unlimited users and no per-user pricing, which is genuinely unusual. It also offers compatibility with Clerk and Auth0 patterns, so migration is less painful. Fair warning though — SSO (SAML/LDAP) and custom domains aren't available yet (both are on their roadmap), so if you need enterprise SSO today, this isn't your pick.

The point isn't which auth service is "best" — it's that choosing a portable one means your compute migration doesn't turn into a compute-plus-auth migration.

Migration Steps: Lambda to Containers

If you've decided to make the move, here's the rough playbook I've followed across a few migrations:

Step 1: Wrap Your Lambda Handlers in Express

You don't need to rewrite everything. Start by wrapping existing handler logic:

// Adapter pattern — reuse Lambda handler logic in Express
import express from 'express';
import { handler as getUser } from './lambdas/getUser.js';

const app = express();
app.use(express.json()); // without this, req.body is undefined for JSON requests
// Convert API Gateway event shape to Express and back
app.get('/users/:id', async (req, res) => {
  const fakeEvent = {
    pathParameters: req.params,
    queryStringParameters: req.query,
    headers: req.headers,
    body: req.body ? JSON.stringify(req.body) : null
  };

  const result = await getUser(fakeEvent);
  res.status(result.statusCode).json(result.body ? JSON.parse(result.body) : null);
});

app.listen(3000);

This is ugly, but it works as a bridge. You can refactor handlers into clean route handlers incrementally.
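Once you have more than a handful of routes, it's worth factoring that bridge into a reusable helper. Here's a sketch of one way to do it — it deliberately ignores binary payloads, multi-value headers, and the Lambda context object, so treat it as a starting point rather than a complete shim:

```javascript
// Generic adapter: turn a Lambda-style handler into an Express route handler.
// A sketch only — no binary bodies, multi-value headers, or Lambda context.
export function wrapLambda(handler) {
  return async (req, res, next) => {
    try {
      const event = {
        pathParameters: req.params,
        queryStringParameters: req.query,
        headers: req.headers,
        body: req.body ? JSON.stringify(req.body) : null
      };
      const result = await handler(event);
      res.status(result.statusCode || 200);
      if (result.headers) res.set(result.headers);
      res.send(result.body);
    } catch (err) {
      next(err); // let Express error middleware handle it
    }
  };
}

// Usage: app.get('/users/:id', wrapLambda(getUser));
```

Each route then becomes a one-liner, and you can delete the wrapper route-by-route as you rewrite handlers into native Express code.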

Step 2: Containerize

A basic Dockerfile gets you running:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
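One small companion file worth adding: a `.dockerignore`, so that `COPY . .` doesn't drag `node_modules`, secrets, or git history into the image. A minimal example (adjust to your repo):

```
node_modules
npm-debug.log
.git
.env
Dockerfile
```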

Step 3: Pick Your Runtime

You've got options that aren't AWS-exclusive:

  • AWS ECS/Fargate — If you want to stay in AWS but ditch Lambda. Familiar console, easy IAM integration.
  • Google Cloud Run — Genuinely excellent. Scale-to-zero like Lambda, but runs containers. My current recommendation for most teams.
  • Fly.io — Great for edge deployment. Simple CLI. I've been impressed with the DX.

Step 4: Migrate Incrementally

Don't big-bang this. Use weighted DNS (Route 53), an ALB with weighted target groups, or API Gateway canary releases to shift traffic gradually:

  • Week 1: Route 10% of traffic to the container
  • Week 2: Monitor latency, errors, costs
  • Week 3: Bump to 50%
  • Week 4: Full cutover (keep Lambda as fallback)
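If you don't have weighted routing at the load balancer, you can do the split at the application layer instead. One approach is deterministic bucketing on a stable key (user ID, session ID), so the same user consistently lands on the same backend throughout the rollout. A sketch — the hash choice and key are assumptions, not a prescription:

```javascript
import { createHash } from 'node:crypto';

// Deterministically bucket a stable key into 0–99, so a given user
// always maps to the same bucket across requests and deploys.
function bucket(key) {
  const hash = createHash('sha256').update(key).digest();
  return hash.readUInt32BE(0) % 100;
}

// Route the first N% of buckets to the new container backend.
function useContainer(key, rolloutPercent) {
  return bucket(key) < rolloutPercent;
}

// Week 1: 10% of users on the container — and always the *same* 10%,
// which makes comparing latency and error rates much cleaner.
const onContainer = useContainer('user-123', 10);
```

Sticky bucketing beats random per-request routing here: if a user's requests bounce between backends, session-level bugs and latency regressions are much harder to attribute.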

When Lambda Is Still the Right Call

I'm not anti-Lambda. It's still my go-to for:

  • Cron jobs — Scheduled tasks that run briefly. CloudWatch Events + Lambda is simple and cheap.
  • Event processing — S3 upload triggers, SQS consumers, DynamoDB streams. This is Lambda's sweet spot.
  • Low-traffic APIs — If you're getting a few thousand requests a day, Lambda is probably cheaper and simpler than running a container 24/7.
  • Prototyping — Spinning up an API in minutes without thinking about infrastructure? Still unbeatable.

The "kiss of death" isn't Lambda itself — it's using Lambda for everything without re-evaluating as your traffic and complexity grow. The best architectures I've worked on use Lambda where it makes sense and containers where it doesn't.

The Bottom Line

If your Lambda bill is climbing, your cold starts are hurting users, or you're spending more time on AWS plumbing than actual features — it might be time to migrate your hot paths to containers. Start with the adapter pattern, containerize incrementally, and pick portable services (auth, databases, queues) so you're not just trading one lock-in for another.

The serverless vs. containers debate isn't binary. Use both. Just be intentional about which workloads go where.
