
Atlas Whoff

Serverless Functions vs Always-On Servers: Choosing the Right Architecture

Serverless isn't always cheaper. Always-on isn't always simpler.
Here's the honest trade-off analysis.

What Serverless Actually Means

Serverless doesn't mean 'no server'. It means 'no server you manage'.

You write a function. The platform runs it on demand.
You pay per invocation, not per hour of uptime.

Examples: Vercel Edge Functions, AWS Lambda, Cloudflare Workers, Netlify Functions

Serverless: The Real Benefits

// Vercel serverless function
// app/api/send-email/route.ts
import { Resend } from 'resend'

const resend = new Resend(process.env.RESEND_API_KEY)

export async function POST(request: Request) {
  const { to, subject, body } = await request.json()
  await resend.emails.send({ from: 'you@example.com', to, subject, html: body })
  return Response.json({ sent: true })
}
// Deploys automatically. Scales to 0 when idle. Scales to 10k concurrent instantly.

True benefits:

  • Zero ops overhead (no servers to patch or maintain)
  • Scales to zero (no idle cost)
  • Scales up automatically (no capacity planning)
  • Pay per use (great for spiky or low-volume workloads)

Always-On: The Real Benefits

// Express on a $5/mo Fly.io VM
import express from 'express'

const app = express()
app.use(express.json())

app.post('/api/process', async (req, res) => {
  // Can hold connections open, use WebSockets, cache in memory,
  // and run background jobs -- with no cold start.
  res.json({ ok: true })
})

app.listen(3000)

True benefits:

  • No cold starts (consistent latency)
  • Can hold state in memory (caches, connections)
  • WebSockets and long connections work
  • Background jobs run natively
  • Simpler debugging (just SSH in)
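
As a concrete sketch of the in-memory state point: a process-local TTL cache only works because the process outlives individual requests. On serverless, each instance can be recycled at any time, so a `Map` like this would silently reset. (Minimal sketch; the key and TTL values are illustrative.)

```typescript
// Process-local TTL cache. Lives as long as the always-on process does.
const cache = new Map<string, { value: string; expires: number }>()

function getCached(key: string, ttlMs: number, compute: () => string): string {
  const hit = cache.get(key)
  if (hit && hit.expires > Date.now()) return hit.value // warm hit: no recompute
  const value = compute()
  cache.set(key, { value, expires: Date.now() + ttlMs })
  return value
}
```

On a serverless platform you'd reach for Redis or a similar external store to get the same behavior, which is extra cost and latency the always-on server avoids.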

The Cold Start Problem

Serverless cold start latency:
  Node.js Lambda: 100-800ms
  Edge functions (Cloudflare/Vercel): 0-50ms
  Always-on server: <5ms

Cold starts matter for:

  • User-facing APIs (makes requests feel slow)
  • Latency-sensitive operations

Cold starts DON'T matter for:

  • Webhook handlers (user not waiting)
  • Batch processing
  • Background jobs
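
When cold starts do matter, one common mitigation is to initialize expensive clients at module scope so warm invocations reuse them. A sketch (`createDbClient` is a hypothetical stand-in for any costly setup like a DB connection or SDK client):

```typescript
// Pay initialization cost once per cold start, not once per request.
type DbClient = { query: (sql: string) => string }

function createDbClient(): DbClient {
  // Imagine an expensive connection handshake here.
  return { query: (sql: string) => `result for ${sql}` }
}

// Module scope: survives for the lifetime of the warm container.
let client: DbClient | null = null

function getClient(): DbClient {
  if (!client) client = createDbClient() // runs on cold start only
  return client
}
```

This doesn't eliminate the first-request penalty, but it keeps every warm request from paying it again.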

Cost Comparison

Scenario: 1M requests/month, 100ms avg duration

AWS Lambda:
  Compute: ~$2.08
  Requests: $0.20
  Total: ~$2.28/month

Fly.io 256MB VM:
  $1.94/month (shared CPU)
  Can handle ~1M req/month easily

Both cheap. Lambda wins at 0 requests. VM wins at sustained load.
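
The Lambda side of that comparison is reproducible with a back-of-envelope formula. The rates below are AWS's published x86 on-demand prices at the time of writing (roughly $0.0000167 per GB-second plus $0.20 per million requests); the exact figure depends on memory allocation, architecture, and region, so treat the numbers above as approximations.

```typescript
// Back-of-envelope Lambda cost estimate. Verify rates against current AWS pricing.
const GB_SECOND_RATE = 0.0000166667 // USD per GB-second (x86, on-demand)
const REQUEST_RATE = 0.20 / 1_000_000 // USD per request

function lambdaMonthlyCost(requests: number, avgDurationMs: number, memoryGb: number): number {
  const gbSeconds = requests * (avgDurationMs / 1000) * memoryGb
  return gbSeconds * GB_SECOND_RATE + requests * REQUEST_RATE
}
```

At 1M requests/month, 100ms average, and 1 GB memory this comes out just under $2/month, and at zero requests it's exactly $0, which is the scale-to-zero advantage in one line.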

When Serverless Wins

  • Spiky traffic (product launches, viral moments)
  • Low-volume APIs (webhooks, callbacks)
  • Edge-deployed functions (low latency globally)
  • No-DevOps requirement (solo founders)
  • Event-driven backends

When Always-On Wins

  • WebSocket servers
  • Background job workers
  • CPU-intensive operations (video processing, ML inference)
  • Services needing in-memory state
  • Teams comfortable with basic ops

The Practical Answer

Most Next.js apps: serverless by default (Vercel handles it).

Add always-on when you need:

  • Background job processing (BullMQ workers)
  • WebSocket connections
  • Scheduled cron jobs that must run reliably

Hybrid works: Next.js on Vercel (serverless) + a small Fly.io VM for workers.
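
The worker half of that hybrid is essentially a long-lived loop pulling jobs from a queue. A dependency-free sketch of the pattern (real BullMQ workers pull from Redis and block waiting for new jobs; the in-memory array here is purely illustrative):

```typescript
// Sketch of the worker-loop pattern a BullMQ worker implements.
// In production the queue lives in Redis, not in process memory.
type Job = { id: number; payload: string }

const queue: Job[] = []
const processed: string[] = []

function enqueue(job: Job) {
  // On the serverless side, this is the cheap part: push and return.
  queue.push(job)
}

function drain() {
  // A real worker blocks waiting for jobs; here we process what's queued.
  while (queue.length > 0) {
    const job = queue.shift()!
    processed.push(`job ${job.id}: ${job.payload}`)
  }
}
```

The split works because enqueueing is fast and stateless (fine for serverless) while processing is long-running and stateful (which is what the small VM is for).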


The Ship Fast Skill Pack includes a /deploy skill that generates the right infrastructure config for your workload. $49 one-time.
