Prisma is genuinely good software. The schema DSL is clean, the type generation works well, and for a new project it gets you to a working data layer in an hour. I used it for about a year before I started noticing things.
The first sign was a query that should have taken 5ms taking 80ms. The second was an N+1 that I'd technically solved with include but that was still generating 15 SQL statements. The third was escaping to prisma.$queryRaw for the third time in a week because the query builder couldn't express what I needed.
At that point I stopped fighting the abstraction and started writing SQL directly.
## What Prisma Actually Does to Your Queries
This is a simple query with a filter and pagination:
```typescript
const logs = await prisma.logEntry.findMany({
  where: {
    organizationId: orgId,
    timestamp: {
      gte: from,
      lt: to,
    },
    level: { in: ["error", "fatal"] },
  },
  orderBy: { timestamp: "desc" },
  take: 50,
  skip: page * 50,
});
```
The SQL Prisma generates:
```sql
SELECT
  "public"."log_entries"."id",
  "public"."log_entries"."timestamp",
  "public"."log_entries"."service",
  "public"."log_entries"."level",
  "public"."log_entries"."message",
  "public"."log_entries"."metadata",
  "public"."log_entries"."organization_id",
  "public"."log_entries"."created_at",
  "public"."log_entries"."updated_at"
FROM "public"."log_entries"
WHERE (
  "public"."log_entries"."organization_id" = $1
  AND "public"."log_entries"."timestamp" >= $2
  AND "public"."log_entries"."timestamp" < $3
  AND "public"."log_entries"."level" IN ($4, $5)
)
ORDER BY "public"."log_entries"."timestamp" DESC
LIMIT $6 OFFSET $7
```
This is fine SQL. But notice: it selects every column (including created_at and updated_at that my UI doesn't need), it uses OFFSET pagination (slow on large tables), and I have no control over any of it without escaping to $queryRaw.
The equivalent raw query:
```typescript
const logs = await pool.query(
  `SELECT id, timestamp, service, level, message, metadata
   FROM log_entries
   WHERE organization_id = $1
     AND timestamp >= $2
     AND timestamp < $3
     AND level = ANY($4)
     AND (timestamp, id) < ($5, $6)
   ORDER BY timestamp DESC, id DESC
   LIMIT $7`,
  [orgId, from, to, ["error", "fatal"], cursorTs, cursorId, 50]
);
```
Keyset pagination instead of OFFSET, only the columns I need, and the query is exactly what I want the database to run.
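Keyset pagination does need one extra piece the OFFSET version doesn't: a cursor the client can hand back for the next page. A common approach (this is a sketch of my own, not from the codebase above; encodeCursor/decodeCursor are hypothetical helpers) is to base64-encode the last row's (timestamp, id) pair:

```typescript
// Hypothetical cursor helpers for the keyset query above.
// The cursor is the (timestamp, id) of the last row on the current page,
// base64url-encoded so it survives a round trip through a URL.
interface Cursor {
  ts: string; // ISO-8601 timestamp of the last row on the page
  id: string; // tiebreaker for rows sharing the same timestamp
}

function encodeCursor(c: Cursor): string {
  return Buffer.from(JSON.stringify(c)).toString("base64url");
}

function decodeCursor(raw: string): Cursor {
  const parsed = JSON.parse(Buffer.from(raw, "base64url").toString("utf8"));
  if (typeof parsed.ts !== "string" || typeof parsed.id !== "string") {
    throw new Error("malformed cursor");
  }
  return { ts: parsed.ts, id: parsed.id };
}
```

The decoded ts and id feed $5 and $6 in the query; for the first page you simply omit the (timestamp, id) < (...) predicate.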
## The N+1 Problem Prisma Doesn't Fully Solve
Prisma's include resolves N+1 queries by using IN clauses instead of per-row queries. But "no N+1" doesn't mean "one query":
```typescript
const projects = await prisma.project.findMany({
  where: { organizationId: orgId },
  include: {
    members: true,
    apiKeys: { where: { active: true } },
    _count: { select: { logEntries: true } },
  },
});
```
Prisma executes this as 4 separate queries: one for projects, one for members, one for apiKeys, one for the count. Then it assembles the result in JavaScript.
The raw equivalent is one query. The naive approach would be to chain LEFT JOINs across the one-to-many tables and rely on GROUP BY - but that produces a Cartesian fan-out: if a project has 10 members, 5 API keys, and 100 log entries, the database materializes 10 × 5 × 100 = 5,000 intermediate rows for that one project before collapsing them. COUNT(DISTINCT ...) hides the bug in the results, but performance collapses as the tables grow.
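For concreteness, the naive version looks something like this (a sketch of the anti-pattern, not something to ship):

```sql
-- Anti-pattern: joining three one-to-many tables directly.
-- Each project row is duplicated once per (member, key, log entry)
-- combination before GROUP BY collapses it again.
SELECT
  p.id,
  COUNT(DISTINCT pm.user_id) AS member_count,
  COUNT(DISTINCT ak.id)      AS active_key_count,
  COUNT(DISTINCT le.id)      AS log_entry_count
FROM projects p
LEFT JOIN project_members pm ON pm.project_id = p.id
LEFT JOIN api_keys ak ON ak.project_id = p.id AND ak.active = true
LEFT JOIN log_entries le ON le.project_id = p.id
WHERE p.organization_id = $1
GROUP BY p.id
```

The counts come out right thanks to DISTINCT, which is exactly why this bug tends to survive code review.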
The correct version pre-aggregates each relationship with CTEs before joining:
```sql
WITH member_stats AS (
  SELECT
    project_id,
    COUNT(user_id) AS member_count,
    jsonb_agg(jsonb_build_object('id', user_id, 'role', role)) AS members
  FROM project_members
  GROUP BY project_id
),
key_stats AS (
  SELECT project_id, COUNT(id) AS active_key_count
  FROM api_keys
  WHERE active = true
  GROUP BY project_id
),
log_stats AS (
  SELECT project_id, COUNT(id) AS log_entry_count
  FROM log_entries
  WHERE timestamp > NOW() - INTERVAL '24 hours'
  GROUP BY project_id
)
SELECT
  p.id,
  p.name,
  p.created_at,
  COALESCE(ms.member_count, 0) AS member_count,
  COALESCE(ks.active_key_count, 0) AS active_key_count,
  COALESCE(ls.log_entry_count, 0) AS log_entry_count,
  COALESCE(ms.members, '[]'::jsonb) AS members
FROM projects p
LEFT JOIN member_stats ms ON ms.project_id = p.id
LEFT JOIN key_stats ks ON ks.project_id = p.id
LEFT JOIN log_stats ls ON ls.project_id = p.id
WHERE p.organization_id = $1
ORDER BY p.created_at DESC
```

Each CTE scans and aggregates its table independently. The final join works on already-collapsed rows - no fan-out, no wasted intermediate rows. One round trip, and actually faster than Prisma's 4 queries at scale.
Prisma can't generate this query. $queryRaw can run it, but then you lose the type safety that was the point of using Prisma.
## The Performance Numbers
Same endpoint, same data, same index configuration. 50k rows in the table.
| Query type | p50 | p95 | p99 |
|---|---|---|---|
| Prisma findMany with include | 45ms | 120ms | 310ms |
| 4 separate pg queries | 18ms | 40ms | 95ms |
| Single JOIN query | 6ms | 14ms | 28ms |
The 10x headline comes from the p99 comparison. At p50 it's closer to 7x. Both are real.
The Prisma numbers aren't bad in absolute terms for most applications. They become a problem when you're doing this on every request, at scale, with connection pool pressure from concurrent requests.
## Migrating Without Rewriting Everything
You don't have to replace Prisma everywhere at once. The practical path:
### Step 1: Keep Prisma for writes and simple reads
Prisma is genuinely good for inserts, updates, and single-record lookups by primary key. The query generation for these is optimal and the type safety is useful.
```typescript
// Keep this in Prisma - it's fine
await prisma.user.create({ data: { email, name, organizationId } });
await prisma.project.update({ where: { id }, data: { name } });
const user = await prisma.user.findUnique({ where: { id } });
```
### Step 2: Replace list queries and anything with joins
This is where the overhead compounds. Add a pg pool alongside Prisma:
```typescript
import { Pool } from "pg";
import { PrismaClient } from "@prisma/client";

export const prisma = new PrismaClient();
export const pool = new Pool({ connectionString: process.env.DATABASE_URL });
```
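One operational detail worth flagging when you run two clients side by side: both now hold connections against the same Postgres max_connections limit, so cap the pg pool explicitly instead of relying on defaults. A sketch of the kind of config I mean (the numbers are illustrative, not a recommendation):

```typescript
import { Pool } from "pg";

export const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                       // pg's default is 10; size it with Prisma's pool in mind
  idleTimeoutMillis: 30_000,     // release idle connections back to Postgres
  connectionTimeoutMillis: 5_000 // fail fast instead of queueing forever under pressure
});
```

Prisma has its own connection_limit on the datasource URL; the two pools together need to stay under what the database allows.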
### Step 3: Write a thin query layer
The thing I missed most from Prisma was typed results. With raw SQL, pg types query results as any by default. Fix it:
```typescript
async function queryLogs(params: LogQuery): Promise<LogEntry[]> {
  const result = await pool.query<{
    id: string;
    timestamp: Date;
    service: string;
    level: string;
    message: string;
    metadata: Record<string, unknown>;
  }>(
    `SELECT id, timestamp, service, level, message, metadata
     FROM log_entries
     WHERE organization_id = $1
       AND timestamp >= $2
       AND timestamp < $3
     ORDER BY timestamp DESC
     LIMIT $4`,
    [params.orgId, params.from, params.to, params.limit]
  );
  return result.rows;
}
```
The generic parameter on pool.query<T> types the rows. It's not as ergonomic as Prisma's generated types, but it's enough to catch most mistakes at compile time.
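One caveat: that generic is a compile-time claim only - pg never checks the rows against it at runtime. If a migration renames a column, the types stay green and the bug surfaces downstream. A thin runtime guard at the edge of the query layer catches this early. A hand-rolled sketch (in practice a validation library like zod does the same job):

```typescript
// Minimal runtime check that a row actually has the shape the
// pool.query<T> generic promises. Validates only the fields and
// primitive types the caller relies on.
interface LogRow {
  id: string;
  level: string;
  message: string;
}

function assertLogRow(row: unknown): LogRow {
  const r = row as Record<string, unknown>;
  if (
    typeof r?.id !== "string" ||
    typeof r?.level !== "string" ||
    typeof r?.message !== "string"
  ) {
    throw new Error(`unexpected row shape: ${JSON.stringify(row)}`);
  }
  return { id: r.id, level: r.level, message: r.message };
}
```

Calling result.rows.map(assertLogRow) once at the query-layer boundary turns a silent shape mismatch into a loud error with the offending row in the message.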
If you want SQL-level control with Prisma-level type safety, look into Kysely or Drizzle ORM. Both let you write queries close to SQL while inferring full TypeScript types from your schema - without the ORM magic that makes query optimization hard. Kysely in particular is worth a look if the manual typing in pool.query&lt;T&gt; feels too brittle.
## What You Actually Lose
This is important to say clearly: there are real things you give up.
Schema migrations. Prisma Migrate is good. When you drop Prisma from your query layer you still want a migration tool. I use node-pg-migrate, others use db-migrate or just raw SQL files in a migrations folder with a simple runner. None of them are as polished as Prisma Migrate.
The schema as source of truth. Prisma's schema file makes it easy to see your data model at a glance and generates types from it. With raw SQL you're maintaining types manually or generating them from the database schema with something like pgtyped or zapatos.
Prisma Studio. Minor thing but worth mentioning - having a UI to browse your data is useful during development.
Onboarding speed. New developers on a project with raw SQL need to know SQL. This is not a bad thing, but it's a real cost.
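On the migrations point: the "raw SQL files with a simple runner" option is less scary than it sounds. The core of a runner is just: record applied filenames in a tracking table, apply the rest in filename order. The selection logic is a few lines (a sketch; the naming convention and helper name are mine):

```typescript
// Given all migration files on disk and the filenames already recorded
// in a migrations tracking table, return what still needs to run, in order.
// Relies on a sortable naming convention like 0001_init.sql.
function pendingMigrations(onDisk: string[], applied: string[]): string[] {
  const done = new Set(applied);
  return onDisk
    .filter((f) => f.endsWith(".sql") && !done.has(f))
    .sort(); // lexicographic order == application order, by convention
}
```

The rest is plumbing: run each pending file inside a transaction and insert its name into the tracking table in that same transaction, so a failed migration never gets recorded as applied.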
## When to Keep Prisma
Prisma is the right choice when:
- Your team isn't comfortable with SQL
- You're building a CRUD app where the Prisma query builder covers 90%+ of your needs
- You're early stage and query performance isn't a bottleneck yet
- The productivity gain from the DX outweighs the performance cost
It stops being the right choice when:
- Your most important queries can't be expressed through the query builder
- You're regularly escaping to $queryRaw for anything beyond simple lookups
- Query times are a meaningful part of your latency budget
- You need fine-grained control over indexes, hints, or query plans
The answer for most production systems that have been running for more than a year is: use both. Prisma for the simple stuff, raw SQL for the queries that matter.
What ORM or query approach are you using in production? Anything that changed your mind in either direction? Comments are open.