AWS Lambda cold starts: causes and how to fix them
What happens on a cold start
Cold: Download image → Start runtime → Init code → Execute handler
Warm: Execute handler only
Cold starts happen only when no warm container is available: the first invocation after a deploy, a scale-out to handle more concurrent requests, or a container being reclaimed after sitting idle.
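Because the init phase runs once per container, you can detect cold starts from inside the function with a module-level flag. A minimal sketch (the idea is runtime-agnostic; shown here in Python):

```python
# Module scope runs once per container, during the init phase.
# Every later invocation reuses the same module state, so the first
# call in each container is the only one that sees _cold = True.
_cold = True

def handler(event, context):
    global _cold
    was_cold, _cold = _cold, False
    return {"cold_start": was_cold}
```

Logging that flag gives you a cheap per-function cold-start rate metric.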
Cold start times by runtime
Go: 50-200ms
Node.js: 100-300ms
Python: 100-400ms
Java (JVM): 1000-4000ms ← painful outlier
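These numbers vary with memory size, package size, and region, so measure your own functions. On a cold start, Lambda adds an Init Duration field to the REPORT log line, which CloudWatch Logs Insights exposes as `@initDuration`. A query sketch:

```
filter @type = "REPORT" and ispresent(@initDuration)
| stats count() as coldStarts,
        avg(@initDuration) as avgInitMs,
        max(@initDuration) as maxInitMs
```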
Fixes ranked by ROI
1. Slim the container image (biggest ROI)
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --production

FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "handler.js"]
Large image (~1GB) → 3-8s cold start. Slimmed (~100MB) → 0.5-2s.
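Two things keep the image small: the multi-stage build above, and a `.dockerignore` so `COPY . .` doesn't drag in dev artifacts (excluding `node_modules` here is deliberate — the pruned copy comes from the builder stage instead). A minimal sketch; the entries are examples, adjust for your repo:

```
node_modules
.git
*.md
tests/
.env
```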
2. Move heavy init outside the handler
# Good: runs once per container lifetime, during the init phase
db = create_connection_pool()

def handler(event, context):
    return process(event, db)  # db already exists
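To see why the placement matters, here's a self-contained sketch comparing per-invocation init against init-once. The 200 ms sleep is a stand-in for real connection setup; `create_connection_pool` is the same hypothetical helper as above:

```python
import time

def create_connection_pool():
    # Stand-in for real pool setup: pretend it takes ~200 ms.
    time.sleep(0.2)
    return object()

# Bad: the pool is rebuilt on every invocation, so every request
# (warm or cold) pays the setup cost, and under load you can
# exhaust database connections.
def bad_handler(event, context):
    db = create_connection_pool()
    return "ok"

# Good: built once at module scope, during the init phase.
db = create_connection_pool()

def good_handler(event, context):
    return "ok"  # db already exists
```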
3. Java SnapStart (reduces 2-4s to 100-200ms)
resource "aws_lambda_function" "api" {
  runtime = "java21"

  snap_start {
    apply_on = "PublishedVersions"
  }
}
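SnapStart snapshots only published versions, so you also need `publish = true` on the function and must invoke through a version or alias. A sketch; the resource names are assumptions matching the provisioned-concurrency example in the next section:

```
resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.api.function_name
  function_version = aws_lambda_function.api.version
}
```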
4. Provisioned concurrency (last resort)
resource "aws_lambda_provisioned_concurrency_config" "api" {
  function_name                     = aws_lambda_function.api.function_name
  qualifier                         = aws_lambda_alias.live.name
  provisioned_concurrent_executions = 2  # ~$11/month
}
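The ~$11/month figure checks out for a 512 MB function. A back-of-envelope sketch — the per-GB-second rate is an assumed us-east-1 price at the time of writing, so verify against current AWS pricing:

```python
# Provisioned concurrency billing: you pay per GB-second that the
# configured capacity is allocated, whether or not it serves requests.
RATE_PER_GB_S = 0.0000041667  # assumed us-east-1 rate — verify current price

concurrency = 2
memory_gb = 0.5               # 512 MB
seconds_per_month = 30 * 24 * 3600

gb_seconds = concurrency * memory_gb * seconds_per_month
monthly_cost = gb_seconds * RATE_PER_GB_S
print(f"${monthly_cost:.2f}/month")  # ≈ $10.80
```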
Only use if cold start rate is >1% of invocations AND latency matters for your use case.
Summary
For Python/Node: fix image size + init outside handler. Usually enough.
For Java: enable SnapStart first.