The False Dichotomy
The "serverless vs containers" debate treats these as competing solutions to the same problem. They're not—they solve different problems, and most mature applications use both.
Serverless Functions
Code that runs on-demand. Zero servers to manage.
// Vercel serverless function (App Router route handler)
export async function GET(request: Request) {
  const userId = request.headers.get('x-user-id');
  if (!userId) return new Response('Unauthorized', { status: 401 });
  const data = await db.users.findUnique({ where: { id: userId } });
  return Response.json(data);
}
// Scales from 0 to 100k requests/second automatically
// You pay per invocation (~$0.0000002 each)
Serverless strengths:
- Zero operational overhead
- Scales to zero (no idle cost)
- Near-infinite automatic scaling
- Global edge distribution
- Per-invocation pricing
Serverless limitations:
- Cold starts (50ms-2s)
- Execution time limits (platform-dependent; often seconds by default, 15 minutes max on AWS Lambda)
- No persistent connections (WebSockets need workarounds)
- No in-process state (no caching between requests)
- Local development is harder to replicate
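The "no in-process state" limitation deserves a concrete illustration. Module-level variables do survive between requests on a warm instance, but no instance is guaranteed to be reused, and parallel instances never share memory. An in-memory cache is therefore only ever a best-effort optimization. A minimal sketch (the cache shape and function names are my own):

```typescript
// Module scope lives exactly as long as one warm instance, and no longer.
// Safe as a best-effort cache, unsafe as a source of truth: each parallel
// instance has its own copy, and every cold start resets it to empty.
const cache = new Map<string, { value: string; expiresAt: number }>();

export function cacheSet(key: string, value: string, ttlMs: number): void {
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
}

export function cacheGet(key: string): string | undefined {
  const hit = cache.get(key);
  if (!hit || hit.expiresAt <= Date.now()) {
    cache.delete(key);
    return undefined;
  }
  return hit.value;
}
```

Anything that must be correct across instances (rate limits, sessions, dedupe keys) belongs in an external store like Redis instead.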
Containers
Docker containers running on servers you control (or managed services).
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY dist/ ./dist/
CMD ["node", "dist/server.js"]
# Runs 24/7, handles WebSockets, maintains connections
Container strengths:
- Persistent connections (WebSockets, long polling)
- In-process caching (Redis-like behavior without Redis)
- No cold starts
- Predictable performance
- Longer running tasks (background jobs, ML inference)
- Easier local development (same environment everywhere)
Container limitations:
- Pay for idle time
- You manage scaling (or pay for managed scaling)
- More operational complexity
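The idle-time tradeoff can be put in rough numbers. Using the ~$0.0000002-per-invocation figure from the serverless section and a hypothetical $10/month container, the compute-only break-even is easy to sketch (both prices are illustrative assumptions, not quotes):

```typescript
// Back-of-envelope break-even between per-invocation pricing and a
// flat-rate container. Prices are illustrative assumptions, not quotes.
const perInvocationUsd = 0.0000002; // serverless price from the example above
const containerMonthlyUsd = 10;     // hypothetical small always-on instance

const breakEvenInvocationsPerMonth = containerMonthlyUsd / perInvocationUsd;
// Roughly 50 million invocations/month. Below that, scale-to-zero is cheaper
// on compute alone; above it, the flat rate wins (before counting egress,
// ops time, or cold-start mitigation).
```

The point is not the exact number but the shape: low or spiky traffic favors per-invocation pricing, sustained high traffic favors flat-rate capacity.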
The Architecture Most Apps Use
SaaS web app architecture:
CDN (Cloudflare/Vercel Edge)
↓
API Routes (Serverless)
→ Auth checks, simple CRUD, webhooks
→ Cold start OK because these are stateless
Container Services
→ WebSocket server (real-time features)
→ Background job workers (BullMQ, video processing)
→ Scheduled jobs (reports, cleanup)
→ ML inference (if needed)
Managed Services (neither serverless nor containers)
→ PostgreSQL (RDS, Supabase, Neon)
→ Redis (Upstash, ElastiCache)
→ S3 (file storage)
Concrete Use Cases
Always serverless:
// Webhooks — infrequent, stateless
export async function POST(req: Request) {
  const event = await verifyStripeWebhook(req);
  await processWebhookEvent(event);
  return new Response('OK');
}
// Image resizing on upload
export async function POST(req: Request) {
  const input = Buffer.from(await req.arrayBuffer()); // sharp expects a Buffer, not a Blob
  const resized = await sharp(input).resize(800).toBuffer();
  await s3.upload(resized);
  return new Response('OK');
}
Always containers:
// WebSocket server — persistent connections
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (ws) => {
  // Maintains connection for minutes/hours
});

// Background worker — long-running
import { Worker } from 'bullmq';

const worker = new Worker('jobs', async (job) => {
  await generateReport(job.data); // takes 5+ minutes
}, { connection: redis });
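One operational detail worth handling in any container service: orchestrators restart containers during deploys and scaling, so long-lived processes should drain on SIGTERM rather than drop in-flight work. A minimal sketch with Node's built-in http module (a WebSocket server or job worker would hook shutdown the same way):

```typescript
import { createServer } from 'node:http';

// Containers get recycled on every deploy, so graceful shutdown matters.
const server = createServer((_req, res) => res.end('ok'));
server.listen(0); // port 0 picks any free port; a real service would pin one

process.once('SIGTERM', () => {
  // Stop accepting new connections; the callback fires once in-flight
  // requests have finished, then we exit cleanly.
  server.close(() => process.exit(0));
});
```

Most platforms (Kubernetes, Railway, ECS) send SIGTERM and wait a grace period before SIGKILL, so this window is where you close WebSockets and finish jobs.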
Cold Start Mitigation
If cold starts are a problem for serverless:
// Keep functions warm with a cron
// Vercel: vercel.json
{
  "crons": [{ "path": "/api/ping", "schedule": "*/5 * * * *" }]
}
// Or use provisioned concurrency (AWS Lambda, Vercel Pro)
// Keeps N instances warm at all times
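The handler behind that `/api/ping` path can be trivial; responding at all is what keeps the instance warm (the path itself comes from the cron config above):

```typescript
// Minimal warm-up endpoint hit by the cron. The body is irrelevant;
// the invocation itself keeps an instance alive.
export function GET(): Response {
  return new Response('pong', { status: 200 });
}
```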
The Decision Rule
Ask: Does this need to be always running?
Yes → Container
No → Serverless
Ask: Does this need persistent connections?
Yes → Container
No → Serverless
Ask: Does this run for > 15 minutes?
Yes → Container
No → Either works, but serverless is simpler
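The three questions above can be encoded as a single function, which is a useful shape for architecture docs or review checklists (the type and field names here are my own):

```typescript
type Placement = 'serverless' | 'container';

// The decision rule above, as code. Any "yes" answer forces a container;
// otherwise serverless is the simpler default.
interface WorkloadProfile {
  alwaysRunning: boolean;
  persistentConnections: boolean;
  runsOverFifteenMinutes: boolean;
}

function place(w: WorkloadProfile): Placement {
  if (w.alwaysRunning || w.persistentConnections || w.runsOverFifteenMinutes) {
    return 'container';
  }
  return 'serverless';
}
```

For example, a Stripe webhook profile (all three answers "no") places as serverless, while a WebSocket server (persistent connections) places as a container.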
Most API routes: serverless. Most background workers: containers. Most SaaS apps: both.
Architecture designed for hybrid serverless + container deployment: Whoff Agents AI SaaS Starter Kit deploys to Vercel (serverless) with Railway workers (containers).