- I don’t let Cursor/Claude “refactor freely” anymore.
- I add cheap guardrails: type-level checks + runtime assertions.
- I catch the dumb bugs: undefined envs, wrong shapes, silent 200s.
- You can copy my 4-file setup in 10 minutes.
## Context
I build small SaaS apps. Usually solo. Usually fast.
AI-assisted coding makes me faster. Also makes me sloppy.
My recurring failure mode: I ask for a “cleanup refactor”. Cursor applies a wide diff. Claude rewrites helpers. Tests don’t exist yet. Then I ship. Then prod logs say something like `TypeError: Cannot read properties of undefined (reading 'split')`.
Brutal part? The code looks cleaner. The bug hides in the seams.
So I stopped arguing with my own workflow. I added guardrails that are boring, loud, and hard to accidentally remove.
## 1) I start by making config impossible to “kind of work”
My favorite bug.
Missing env var. Code still boots. Then it fails 12 minutes later.
I used to do this:
```ts
// ❌ Don't do this
const baseUrl = process.env.NEXT_PUBLIC_BASE_URL || "http://localhost:3000";
```
That default is a trap. It makes staging point at localhost. I’ve done it.
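Part of why `||` is a trap: an empty string is falsy, so even a var that is *set but empty* silently falls back. A quick sketch of the difference (variable names are mine):

```typescript
// Simulate a var that exists in the environment but is set to "".
const setButEmpty: string | undefined = "";

// `||` treats "" as missing, so you silently get localhost in staging.
const fromOr = setButEmpty || "http://localhost:3000";

// `??` only falls back on null/undefined, so "" survives — but a truly
// missing var still sneaks through. Parsing + crashing at startup beats both.
const fromNullish = setButEmpty ?? "http://localhost:3000";
```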
Now I parse env once. At startup. And I crash early.
```ts
// src/env.ts
import { z } from "zod";

const EnvSchema = z.object({
  NODE_ENV: z.enum(["development", "test", "production"]).default("development"),
  NEXT_PUBLIC_BASE_URL: z.string().url(),
  DATABASE_URL: z.string().min(1),
  SUPABASE_SERVICE_ROLE_KEY: z.string().min(1),
});

export const env = EnvSchema.parse({
  NODE_ENV: process.env.NODE_ENV,
  NEXT_PUBLIC_BASE_URL: process.env.NEXT_PUBLIC_BASE_URL,
  DATABASE_URL: process.env.DATABASE_URL,
  SUPABASE_SERVICE_ROLE_KEY: process.env.SUPABASE_SERVICE_ROLE_KEY,
});
```
Two wins.
One: if Claude “helpfully” renames an env var, I’ll know immediately.
Two: when I paste snippets, I import `env` and stop touching `process.env` directly. Less surface area for mistakes.
And yeah, this has saved me on deploy. More than once.
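If you don’t want zod in a tiny script, the same fail-fast idea fits in a few lines. A dependency-free sketch (`requireEnv` is my name, not a library function):

```typescript
// Same rule as the zod version: read each var once, throw immediately
// if it's missing or empty, never fall back to a default URL.
function requireEnv(
  name: string,
  source: Record<string, string | undefined> = process.env
): string {
  const value = source[name];
  if (value === undefined || value === "") {
    // Crash at startup, not 12 minutes into a request.
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// Passing a fake source here just to show the behavior.
const fakeEnv = { DATABASE_URL: "postgres://localhost/app" };
const dbUrl = requireEnv("DATABASE_URL", fakeEnv); // ok
// requireEnv("NEXT_PUBLIC_BASE_URL", fakeEnv);    // would throw immediately
```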
## 2) I force every API response through a single type
Next.js route handlers are easy to mess up during refactors.
AI loves changing shapes.
Yesterday it’s `{ data: ... }`. Today it’s `{ result: ... }`. My client keeps compiling because it’s `any` somewhere. Then runtime explodes.
So I made one response envelope. Everywhere.
```ts
// src/lib/api-response.ts
export type ApiOk<T> = { ok: true; data: T };

export type ApiErr = {
  ok: false;
  error: { code: string; message: string; details?: unknown };
};

export type ApiResponse<T> = ApiOk<T> | ApiErr;

export function ok<T>(data: T): ApiOk<T> {
  return { ok: true, data };
}

export function err(code: string, message: string, details?: unknown): ApiErr {
  return { ok: false, error: { code, message, details } };
}
```
Then I use it in a route handler.
```ts
// src/app/api/user/route.ts
import { NextResponse } from "next/server";
import { ok, err, type ApiResponse } from "@/lib/api-response";

type UserDto = { id: string; email: string };

export async function GET() {
  try {
    // Pretend this comes from the DB.
    const user: UserDto = { id: "u_123", email: "a@b.com" };
    const body: ApiResponse<UserDto> = ok(user);
    return NextResponse.json(body, { status: 200 });
  } catch (e) {
    const body: ApiResponse<UserDto> = err("INTERNAL", "Something broke", e);
    return NextResponse.json(body, { status: 500 });
  }
}
```
Now when AI tries to return `{ user }` or `{ success: true }`, TypeScript complains.
Not always. But often enough.
And when it doesn’t complain, the next guardrail does.
## 3) I validate request + response at runtime (not just types)
Types lie.
Especially when the boundary is JSON.
So I validate input. And I validate output.
I don’t do it for every endpoint. Only the ones I touch a lot.
```ts
// src/lib/schemas.ts
import { z } from "zod";

export const CreateThingSchema = z.object({
  name: z.string().min(1).max(80),
  enabled: z.boolean().default(true),
});

export const ThingSchema = z.object({
  id: z.string().min(1),
  name: z.string().min(1),
  enabled: z.boolean(),
  createdAt: z.string().datetime(),
});

export type CreateThingInput = z.infer<typeof CreateThingSchema>;
export type Thing = z.infer<typeof ThingSchema>;
```
And in the route:
```ts
// src/app/api/things/route.ts
import { NextResponse } from "next/server";
import { z } from "zod";
import { ok, err, type ApiResponse } from "@/lib/api-response";
import { CreateThingSchema, ThingSchema, type Thing } from "@/lib/schemas";

export async function POST(req: Request) {
  try {
    const json = await req.json();
    const input = CreateThingSchema.parse(json);

    // Fake DB write.
    const thing: Thing = {
      id: crypto.randomUUID(),
      name: input.name,
      enabled: input.enabled,
      createdAt: new Date().toISOString(),
    };

    // Output validation catches “helpful” refactors.
    const safeThing = ThingSchema.parse(thing);

    const body: ApiResponse<Thing> = ok(safeThing);
    return NextResponse.json(body, { status: 201 });
  } catch (e) {
    // A ZodError means the caller sent a bad payload; anything else is on us.
    if (e instanceof z.ZodError) {
      const body: ApiResponse<Thing> = err("BAD_REQUEST", "Invalid payload", e);
      return NextResponse.json(body, { status: 400 });
    }
    const body: ApiResponse<Thing> = err("INTERNAL", "Internal error");
    return NextResponse.json(body, { status: 500 });
  }
}
```
That last `ThingSchema.parse(thing)` feels redundant.
It isn’t.
I once had Claude refactor a DTO and “simplify” `createdAt` into a `Date`. Everything compiled. JSON serialization changed. Client parsing broke. It took me 2 hours to find.
Now it dies in the handler.
Fast.
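You can see that failure mode without zod: a hand-rolled output check that insists `createdAt` is an ISO string. A minimal sketch of what the schema parse is buying (`assertIsoCreatedAt` is my name):

```typescript
// A Date object compiles fine through loose types, but it's not the
// ISO string the client was promised. Check the boundary at runtime.
function assertIsoCreatedAt(value: { createdAt: unknown }): void {
  if (
    typeof value.createdAt !== "string" ||
    Number.isNaN(Date.parse(value.createdAt))
  ) {
    throw new Error(`createdAt must be an ISO string, got: ${typeof value.createdAt}`);
  }
}

assertIsoCreatedAt({ createdAt: new Date().toISOString() }); // passes
// assertIsoCreatedAt({ createdAt: new Date() });            // throws: typeof is "object"
```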
## 4) I make Cursor show me a checklist before I accept a diff
This part isn’t code. But it’s still a guardrail.
I keep a tiny `REFACTOR_CHECKLIST.md` in the repo root. Cursor sees it. Claude sees it. And I paste it into chat when I’m about to do a big change.
It’s not motivational. It’s annoying on purpose.
Example of what I keep in mine:
```md
# Refactor checklist (non-negotiable)

- No new `process.env` usage. Import from `src/env.ts`.
- API responses must match `ApiResponse`.
- Any JSON boundary: Zod-parse the input.
- If you changed a response shape: update the schema + the client.
- No silent fallbacks (`"" || default`) for URLs/keys.
- Run `pnpm lint` and `pnpm typecheck`.
```
Yeah, it’s simple.
But it changes the vibe. Cursor stops being “do whatever.” It becomes “do it, but inside the rails.”
And when I ignore it, I can’t pretend I didn’t know.
## Results
I used this setup on 3 refactors over the last 9 days.
Before: I’d usually ship 1 dumb runtime bug per refactor. Missing env var. Wrong response key. A Date sneaking into JSON. Stuff like that.
After: I caught 8 failures locally.
- 3 were missing env vars during `next dev`.
- 2 were response shape mismatches.
- 2 were invalid request payloads I didn’t handle.
- 1 was a `createdAt` type mismatch.
No prod incidents from those refactors. Small win. I’ll take it.
## Key takeaways
- Parse env once. Crash early. Don’t default URLs.
- Use one API envelope. Force it with types.
- Validate JSON boundaries at runtime with Zod.
- Validate outputs too, not just inputs.
- Keep a repo checklist file so AI sees the rules.
## Closing
Cursor + Claude make big diffs feel safe. They aren’t.
I stopped trying to “prompt better” and started adding rails the codebase can enforce.
What guardrail do you rely on most: env parsing, response envelopes, runtime schemas, or something else entirely?