I'm going to make some Prisma fans angry. That's fine.
I shipped my last three SaaS projects on Next.js + Vercel + Postgres. The first two used Prisma because that's what every tutorial recommends. The third one I switched to Drizzle.
Cold start on a fresh deploy:
- Prisma: 2.8 seconds
- Drizzle: 180 milliseconds
That's not a typo. That's the difference between a user thinking your app is broken and your app feeling instant. And it's not a benchmark cherry-picked from someone else's blog post: that's the p95 from my own Vercel logs across two weeks of production traffic.
If you're building on Vercel (or any serverless platform: Netlify, Cloudflare Workers, AWS Lambda) and you're still reaching for Prisma because that's what the tutorial said, this article is for you.
What "cold start" actually means and why it matters more than you think
Every serverless function on Vercel starts cold the first time it's hit. Cold means the container doesn't exist yet: Vercel has to spin up a fresh Node.js process, load your code, initialize your dependencies, and then run your handler.
Once the container is warm, subsequent requests are fast. But "warm" expires. On Vercel's hobby tier, containers go cold after about 15 seconds of inactivity. On Pro it's longer but still measured in minutes, not hours.
This means:
- The first user to hit any given route after a quiet period waits for cold start
- If your app has 10 routes, you have 10 separate cold-start surfaces
- The bigger your dependency graph, the slower every cold start gets
Prisma is one of the biggest dependencies you can add to a Node.js project. The reason has nothing to do with code quality and everything to do with how Prisma is built.
The real problem: Prisma's architecture wasn't built for serverless
Prisma is two things bolted together. There's a TypeScript layer that generates types and gives you the nice query API. And underneath that, there's a query engine written in Rust, compiled to a native binary that ships in your node_modules.
When your serverless function cold-starts, Node.js has to:
- Load @prisma/client (a few hundred KB of generated TypeScript)
- Locate the correct query engine binary for the runtime architecture
- Spawn a child process for the query engine
- Establish IPC between Node and the engine
- Initialize the connection pool
- Then, finally, run your query
On a beefy dev machine you don't notice. On a Vercel cold start, every one of those steps adds milliseconds, and they compound.
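You can measure that contribution yourself instead of guessing. A minimal sketch; the timeInit helper is my own illustration, not a Prisma or Vercel API, and works with any client constructor you pass it:

```typescript
// Hypothetical helper: measure how long data-layer initialization adds to a cold start.
// Pass it whatever your app runs once per container (new PrismaClient(), drizzle(), ...).
async function timeInit<T>(label: string, init: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const client = await init();
  // Logs once per container: on a warm instance you won't see it again,
  // on every cold start it fires and shows up in your Vercel function logs.
  console.log(`${label} initialized in ${(performance.now() - start).toFixed(0)}ms`);
  return client;
}

// Usage sketch, at module scope in a route handler:
// const prisma = await timeInit("prisma", async () => new PrismaClient());
```

Wrap both clients in a branch of your app for a week and compare the logs; that's how I got the numbers at the top of this post.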
The official Prisma docs acknowledge this. They have a whole page on cold starts. Their suggested fix is Prisma Accelerate, a paid connection pooling service that, conveniently, Prisma sells. The pricing starts free but ramps fast, and now you have another service in your critical path that can fail.
I tried Accelerate. It works. It also adds another network hop, costs money, and doesn't solve the underlying problem; it just hides it behind a cache.
What Drizzle does differently
Drizzle is a query builder, not an ORM in the Prisma sense. There's no query engine. There's no binary. There's no code generation step that you run separately.
When you write:
const users = await db.select().from(usersTable).where(eq(usersTable.id, userId))
That compiles down to a plain SQL string in JavaScript at call time; the runtime is just a thin wrapper that sends SQL strings to whatever database driver you're using. For Neon (which is what I use now), the driver is @neondatabase/serverless: it speaks Postgres over HTTP, no TCP connection pooling needed, designed for serverless from day one.
The whole runtime fits in your function bundle. There's no separate process. There's no IPC. There's no waiting for a Rust binary to wake up. Cold start is essentially "load some TypeScript, send an HTTP request."
That's where the 180ms vs 2.8s gap comes from. It's not magic, and it's not better code; it's that one architecture was designed for long-running servers and the other was designed for serverless.
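Here's what that thin wrapper looks like end to end. A minimal sketch assuming drizzle-orm's neon-http adapter and a DATABASE_URL env var pointing at a Neon database; the table and query are illustrative:

```typescript
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";
import { pgTable, serial, text } from "drizzle-orm/pg-core";
import { eq } from "drizzle-orm";

// Illustrative schema; in a real app this lives in its own module.
export const usersTable = pgTable("users", {
  id: serial("id").primaryKey(),
  email: text("email").notNull(),
});

// One HTTP-speaking client per bundle: no engine process, no IPC, no TCP pool to warm up.
const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql);

// Each call builds a SQL string and sends it as a single HTTP request to Neon.
export async function getUser(userId: number) {
  return db.select().from(usersTable).where(eq(usersTable.id, userId));
}
```

That file plus a schema module is the entire data layer; everything it imports is plain TypeScript that loads in milliseconds.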
"But the DX is worse, right?"
This is what every Prisma defender says. It's worth being honest about.
For 90% of queries, Drizzle's API is just as readable as Prisma's. Find by ID, insert, update, delete with a where clause: these all look basically the same.
// Prisma
const user = await prisma.user.findUnique({ where: { id } })
// Drizzle
const user = await db.query.users.findFirst({ where: eq(users.id, id) })
Where Drizzle wins: you can drop into raw SQL anywhere, mix and match, and the migrations are plain .sql files you can read and edit. When something breaks in production at 2am, you can psql into the database and run the same query.
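The escape hatch is drizzle-orm's sql tagged template; the query below is my own illustration, assuming the db instance from your Drizzle setup:

```typescript
import { sql } from "drizzle-orm";

// Drop to raw SQL for the one query the builder makes awkward.
// Values interpolated through the `sql` tag are sent as bound parameters,
// not concatenated into the string.
const days = 30;
const active = await db.execute(
  sql`select id, email from users where last_seen > now() - make_interval(days => ${days})`
);
```

The same statement, with the parameter filled in, runs verbatim in psql, which is exactly what you want at 2am.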
Where Prisma wins: relations and nested includes are slightly nicer in Prisma. The auto-completion for related fields is a touch better. If you're doing complex graph queries through 4 levels of relations, Prisma still has the edge in ergonomics.
For a typical SaaS app (users, sessions, subscriptions, a few business entities), you will not feel this difference. You will feel the cold start difference every single day.
"But I'm not on serverless"
Fair. If you're deploying to a long-running container (Railway, Fly, Render, your own VPS), Prisma's cold start cost is paid once at process start and then forgotten. The trade-off math is different. Prisma is genuinely fine there.
But if you're on Vercel, Netlify, Cloudflare, or any serverless platform where cold starts happen regularly, you're paying that 2-3 second tax every time the platform decides to recycle a container. Which, for a low-traffic SaaS in its early days, is constantly.
The cruel irony: the smaller your app, the more it gets recycled, the more cold starts your few users hit, the worse Prisma feels. By the time you have enough traffic to keep containers warm, you've already lost the early users who bounced because the first page took 3 seconds to render.
What I actually use now
For the last year, every Next.js project I've shipped uses the same data layer:
- Drizzle ORM for queries and migrations
- Neon for serverless Postgres (scales to zero, free tier is generous, branching for staging is great)
- @neondatabase/serverless as the driver: it speaks Postgres over HTTP, perfect for Vercel
The whole thing is maybe 50 lines of setup. Migrations are SQL files I can read. Cold starts are imperceptible. When something breaks, I can debug it because there's no Rust binary in the middle.
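The migration side is one small config file. A sketch assuming drizzle-kit's defineConfig; the paths are mine, adjust to your project layout:

```typescript
// drizzle.config.ts — read by `drizzle-kit generate` and `drizzle-kit migrate`.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/db/schema.ts", // where the pgTable definitions live
  out: "./drizzle",             // generated .sql migrations land here, readable and editable
  dbCredentials: { url: process.env.DATABASE_URL! },
});
```

Run `drizzle-kit generate` after a schema change and you get a plain .sql file you can review in the PR like any other code.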
The boring conclusion
Prisma isn't bad. The Prisma team is talented and the project is well-maintained. The problem is that Prisma was designed in 2018-2019 when "Node.js backend" meant "long-running Express server on AWS EC2." Serverless was a niche. The architecture reflects that era.
The default stack for Next.js in 2026 is serverless. Vercel is serverless. Cloudflare Workers is serverless. Even traditional hosts are pushing edge runtimes. Recommending Prisma to a beginner who's deploying to Vercel is recommending a tool that was built for a different deployment model and hoping they won't notice the cost.
They notice. They notice when their landing page takes 3 seconds to load on the first visit of the day. They notice when their Lighthouse score is in the 60s. They notice when paying users open a ticket asking why the dashboard "freezes for a few seconds."
If you're starting a new Next.js project on Vercel today, try Drizzle. Spend an afternoon with it. If you hate it, go back to Prisma; you've lost half a day. If you like it, you've saved your users 2.5 seconds on every cold start, forever.
That's the whole pitch. The numbers don't lie.
I write a Next.js + BetterAuth + Polar + Drizzle boilerplate that uses exactly this stack: auth, payments, admin panel, blog, all wired together with cold starts under 200ms. Live demo (admin@yopmail.com / Password123!): https://nextjs-better-auth-polar-neon-boil.vercel.app