You ask Claude to "add a posts query and an author field that resolves the user," and 30 seconds later you ship something that looks fine in review:
- A resolver that returns the Prisma `User` model directly — `passwordHash`, `stripeCustomerId`, and `internalNotes` now ride along on every response.
- `Post.author` resolves with `prisma.user.findUnique` per row — staging dies at 500 RPS the first time the timeline query goes live.
- `posts(offset, limit)` for pagination — duplicates and skips appear the moment a new post is inserted mid-scroll.
- `throw new Error('Not found')` — the client parses the `errors[]` string with a regex and breaks the day someone fixes the typo.
The model didn't fail. It pattern-matched on tutorials where GraphQL is a toy with three types and zero ops concerns. Production makes each one a real incident.
A CLAUDE.md at the root of your repo fixes this. Claude Code reads it on every task. Cursor, Aider, and Copilot do the same. Below are four of the thirteen rules I drop into every GraphQL/Apollo project — full set in the free gist linked at the end.
Rule 1 — Never expose ORM models directly — always map to GraphQL types
Why: Returning a Prisma/TypeORM/Sequelize entity from a resolver leaks every column you ever add. AI will gladly return user; and ship passwordHash, internalNotes, and stripeCustomerId to the client. GraphQL's "client asks only for what it needs" is meaningless if the server returns objects that contain everything — the network payload is filtered, but the shape of your domain is not. The next migration that adds a column adds it to the public API.
Bad:
// Resolver returns the raw Prisma model.
const resolvers = {
Query: {
me: async (_p, _a, ctx) => {
return await ctx.prisma.user.findUnique({ where: { id: ctx.userId } });
// → ships passwordHash, stripeCustomerId, internalNotes, ...
},
},
};
Good:
// Explicit mapper. Schema and DB evolve independently.
function toGraphQLUser(u: PrismaUser): GraphQLUser {
return {
id: u.id,
email: u.email,
displayName: u.displayName,
createdAt: u.createdAt.toISOString(),
};
}
const resolvers = {
Query: {
me: async (_p, _a, ctx) => {
const u = await ctx.prisma.user.findUnique({ where: { id: ctx.userId } });
return u ? toGraphQLUser(u) : null;
},
},
};
Rule for CLAUDE.md:
Resolvers return domain DTOs, not ORM entities.
Every GraphQL type has a corresponding `toGraphQL<Type>(entity)` mapper, OR
uses codegen-driven resolver types with explicit field selection.
A lint rule forbids `return await prisma.<model>.find...` directly from a resolver.
Adding a DB column never adds a field to the public schema implicitly.
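The mapper pattern is easy to pin down with a regression test. A minimal sketch, restating the types and mapper from the example above (the test values are made up), that asserts the DTO exposes exactly the public fields and nothing a future migration adds:

```typescript
// Field names restate the Rule 1 example; values are made up for the test.
interface PrismaUser {
  id: string;
  email: string;
  displayName: string;
  createdAt: Date;
  passwordHash: string;     // must never reach the client
  stripeCustomerId: string; // must never reach the client
}

interface GraphQLUser {
  id: string;
  email: string;
  displayName: string;
  createdAt: string;
}

function toGraphQLUser(u: PrismaUser): GraphQLUser {
  return {
    id: u.id,
    email: u.email,
    displayName: u.displayName,
    createdAt: u.createdAt.toISOString(),
  };
}

// Pin the public surface: exactly these keys, never whatever the DB grew.
const dto = toGraphQLUser({
  id: "u1",
  email: "ada@example.com",
  displayName: "Ada",
  createdAt: new Date(0),
  passwordHash: "argon2$...",
  stripeCustomerId: "cus_123",
});
console.log(Object.keys(dto).sort().join(","));
// → createdAt,displayName,email,id
```

A test like this fails the build the moment someone "helpfully" spreads `...u` into the return value.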
Rule 2 — Solve N+1 with DataLoader on every relation — not "later"
Why: Post.author resolved as prisma.user.findUnique({ where: { id: post.authorId } }) runs once per post in the list. A 50-item timeline becomes 51 queries. AI writes this every single time because each resolver looks fine in isolation. Staging doesn't notice. Production absolutely does — the GraphQL endpoint becomes slower than the REST one it replaced, and your DB's connection pool drowns under the burst.
Bad:
const resolvers = {
Post: {
// 1 query per post in the result set. Hello, N+1.
author: (post, _a, ctx) =>
ctx.prisma.user.findUnique({ where: { id: post.authorId } }),
},
};
Good:
// One DataLoader per entity, instantiated per request in `context`.
function createContext(req): Context {
return {
userLoader: new DataLoader<string, User | null>(async (ids) => {
const users = await prisma.user.findMany({ where: { id: { in: [...ids] } } });
const byId = new Map(users.map((u) => [u.id, u]));
return ids.map((id) => byId.get(id) ?? null);
}),
};
}
const resolvers = {
Post: {
// Batched across the whole request. 1 query instead of N.
author: (post, _a, ctx) => ctx.userLoader.load(post.authorId),
},
};
Rule for CLAUDE.md:
Every relation field uses a DataLoader. One loader per entity (`userLoader`,
`commentLoader`, ...), instantiated per-request inside `createContext()`.
Raw ORM calls in field resolvers are forbidden — they go through the loader
or through a method that batches.
A request-level metric counts queries; a regression threshold fails CI on any
endpoint whose query count grows with collection size.
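One subtlety in the loader above deserves its own test: DataLoader's batch contract requires `results[i]` to correspond to `keys[i]`, with a placeholder for misses, and `findMany` guarantees neither order nor completeness. A minimal sketch of that re-alignment as a standalone helper (`alignToKeys` is a name I'm introducing, not from the article):

```typescript
// DataLoader's batch contract: results[i] must match keys[i], with null (or an
// Error) for misses. findMany returns rows in DB order and silently drops
// missing ids, so the loader must re-align via a Map.
interface HasId {
  id: string;
}

function alignToKeys<T extends HasId>(
  keys: readonly string[],
  rows: T[],
): (T | null)[] {
  const byId = new Map(rows.map((r) => [r.id, r]));
  return keys.map((k) => byId.get(k) ?? null);
}

// Rows arrive out of order, and "missing" has no row at all.
const users = [
  { id: "b", name: "Bea" },
  { id: "a", name: "Al" },
];
console.log(
  JSON.stringify(alignToKeys(["a", "missing", "b"], users).map((u) => u && u.id)),
);
// → ["a",null,"b"]
```

If a missing row should be a hard error rather than a null field, return a `new Error(...)` in that slot instead — DataLoader then rejects only the affected `.load()` calls.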
Rule 3 — Pagination uses cursors (Relay spec), not offset / limit
Why: offset: 1000, limit: 20 makes the database scan and discard 1000 rows on every page load. It also breaks under concurrent writes — insert a row mid-scroll and the next page returns the row the user just saw, or skips one entirely. AI defaults to offset because it's three lines of code and the docs use it. Cursor pagination is stable, scales O(limit) instead of O(offset+limit), and is what Apollo and Relay clients expect for normalized cache updates.
Bad:
type Query {
posts(offset: Int = 0, limit: Int = 20): [Post!]!
}
posts: (_p, { offset, limit }, ctx) =>
ctx.prisma.post.findMany({ skip: offset, take: limit, orderBy: { id: "desc" } });
Good:
type Query {
posts(first: Int, after: String): PostConnection!
}
type PostConnection {
edges: [PostEdge!]!
pageInfo: PageInfo!
}
type PostEdge { cursor: String! node: Post! }
type PageInfo { hasNextPage: Boolean! endCursor: String }
posts: async (_p, { first = 20, after }, ctx) => {
const cursor = after ? { id: decodeCursor(after) } : undefined;
const rows = await ctx.prisma.post.findMany({
take: first + 1,
cursor,
skip: cursor ? 1 : 0,
orderBy: { id: "desc" },
});
const hasNextPage = rows.length > first;
const nodes = rows.slice(0, first);
return {
edges: nodes.map((n) => ({ cursor: encodeCursor(n.id), node: n })),
pageInfo: { hasNextPage, endCursor: nodes.at(-1) ? encodeCursor(nodes.at(-1)!.id) : null },
};
};
Rule for CLAUDE.md:
List queries follow the Relay Connection spec: `first/after` (and `last/before`
when bidirectional), returning `{ edges, pageInfo }`.
Cursors are opaque base64 — never raw IDs or offsets — so the encoding can
evolve without breaking clients.
Offset pagination is reserved for admin/internal tools where stability under
concurrent writes does not matter; production user-facing lists never use it.
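The resolver above assumes `encodeCursor`/`decodeCursor` helpers. A minimal sketch of what they might look like — the `post:` prefix is an illustrative choice, not from the article; it lets the server reject foreign cursors and change the encoding later while cursors stay opaque to clients:

```typescript
// Opaque base64 cursors, per the rule above. The "post:" prefix is an
// illustrative versioning tag: decode rejects anything that isn't ours.
function encodeCursor(id: string): string {
  return Buffer.from(`post:${id}`, "utf8").toString("base64");
}

function decodeCursor(cursor: string): string {
  const decoded = Buffer.from(cursor, "base64").toString("utf8");
  if (!decoded.startsWith("post:")) {
    throw new Error("Malformed cursor");
  }
  return decoded.slice("post:".length);
}

console.log(decodeCursor(encodeCursor("1337"))); // → 1337
```

For lists ordered by timestamp rather than id, the encoded payload is typically `createdAt:id` so that ties break deterministically; the opaque wrapper stays the same.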
Rule 4 — Disable introspection and apply query depth/complexity limits in production
Why: GraphQL trades REST's "many small endpoints" for "one endpoint that can do anything" — including melt your DB. { user { friends { friends { friends { friends { ... } } } } } } is a free DoS vector that any anonymous client can craft. Public introspection lets attackers map your entire schema in one request and find the expensive fields. AI never adds these limits because the tutorials don't either; they only matter once you have real traffic, by which point it's too late.
Bad:
const server = new ApolloServer({
typeDefs,
resolvers,
// introspection on by default in dev, often left on in prod by accident.
// No depth limit. No complexity limit. No persisted-query allowlist.
});
Good:
import depthLimit from "graphql-depth-limit";
import { createComplexityRule, fieldExtensionsEstimator, simpleEstimator } from "graphql-query-complexity";
const server = new ApolloServer({
typeDefs,
resolvers,
introspection: process.env.NODE_ENV !== "production",
validationRules: [
depthLimit(7),
createComplexityRule({
maximumComplexity: 1000,
estimators: [
// Cost connections by `first`, fields by 1, expensive resolvers by N.
fieldExtensionsEstimator(),
simpleEstimator({ defaultComplexity: 1 }),
],
onComplete: (cost) => { /* log to metrics */ },
}),
],
});
Rule for CLAUDE.md:
In production: `introspection: false`, `graphql-depth-limit` set to ~7,
`graphql-query-complexity` with field-level cost — bounded by `first` on
connections — capped at a documented maximum.
Public APIs use persisted queries (allowlist of hashed operations) so the
server only ever executes operations shipped from a known client build.
Per-IP / per-token rate limits run in front of the GraphQL endpoint, not
inside it — by the time a request reaches a resolver it has already paid
parse + validate cost.
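The persisted-query allowlist in the rule fits in a few lines. Apollo-style persisted queries identify an operation by the SHA-256 hex digest of its text; in a real setup the digests are extracted from the client bundle at release time — the `allowlist` set and the sample operation below are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

// Persisted-query allowlist sketch. Operations are identified by the SHA-256
// hex digest of their text (the Apollo persisted-query convention). In a real
// build the digests come from the client bundle; this query is made up.
function operationHash(query: string): string {
  return createHash("sha256").update(query, "utf8").digest("hex");
}

const timelineQuery =
  "query Timeline($first: Int!) { posts(first: $first) { edges { node { id } } } }";

const allowlist = new Set<string>([operationHash(timelineQuery)]);

function isAllowed(sha256Hash: string): boolean {
  return allowlist.has(sha256Hash);
}

const h = operationHash(timelineQuery);
console.log(isAllowed(h), h.length); // → true 64
```

Anything not in the set is rejected before execution, which also neuters the hand-crafted deep-nesting queries the depth limit guards against.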
How to Use These Rules
- Drop a `CLAUDE.md` at the root of the repo, next to your `schema.graphql` and your Apollo Server bootstrap.
- Paste the rules. Edit what doesn't fit your stack (Apollo vs Yoga vs Mercurius, Prisma vs Drizzle vs raw SQL, federation vs monolith).
- Restart Claude Code so it picks up the new context file. The same file works for Cursor, Aider, Codex, and Copilot Workspace.
The full set covers schema-first SDL design, deliberate nullability (! is a contract, not a default), authentication-in-context vs authorization-in-resolvers, Zod-validated input types, typed error unions over throw new Error, mutation payloads that return the modified node for cache updates, real pub/sub (Redis/Kafka) for subscriptions across replicas, expand→migrate→contract schema evolution with @deprecated, and the resolver-unit + executable-schema-integration test pyramid.
Get the Rules
Free GraphQL gist with all 13 rules → gist.github.com/oliviacraft/dc210a59317b2beac5050d9f1c256513
The 13 rules above are one chapter of the CLAUDE.md Rules Pack — editions covering Go, Rust, Python, FastAPI, Next.js, React Native, Terraform, Docker, Kubernetes, PostgreSQL, GraphQL, Java, Redis, MongoDB, and more. Production-tested AI guardrails, packaged as drop-in CLAUDE.md files.
→ Get the full pack on Gumroad: oliviacraftlat.gumroad.com/l/skdgt — one-time payment, lifetime updates.