Docat
5 Instruction File Patterns I Wish I Knew From Day 1

You asked your AI to add an API route. It generated `pages/api/users.ts` with `getServerSideProps`. Your project uses Next.js 14 App Router. There is no `pages/` directory.

You asked it to query the database. It wrote Prisma code. You migrated to Drizzle two months ago.

Sound familiar? The AI isn't broken. It just has zero context about your project — so it guesses based on the most common patterns in its training data.

The fix is deceptively simple: a project instruction file. In Claude Code, it's called `CLAUDE.md`. Cursor uses `.cursorrules`. GitHub Copilot has `.github/copilot-instructions.md`. The name doesn't matter -- the patterns do.

I've been refining my instruction files across a dozen projects over the past few months. Here are the 5 patterns that made the biggest difference -- each one with a before/after so you can steal them today.


Pattern 1: Project Conventions (Stop the Style Roulette)

Without explicit conventions, every AI response is a coin flip. camelCase or snake_case? Tabs or spaces? Named exports or default exports? The AI will happily switch between all of them within the same file.

The pattern: Declare your naming, file structure, and formatting rules upfront.

```md
## Conventions
- Naming: kebab-case files, camelCase variables, PascalCase types
- Exports: named exports only, no default exports
- Formatting: Biome (not Prettier, not ESLint)
- Imports: absolute paths via @/ alias, no relative imports above 2 levels
- File size: max 200 lines per file, split if larger
```

Why it works: The AI treats these as hard constraints, not suggestions. Instead of guessing your style from the 3 files it can see, it follows explicit rules consistently. This alone eliminates ~70% of "fix the formatting" follow-up prompts.

Pro tip: Add a "Forbidden" section for things that keep creeping back in:

```md
## Forbidden
- No `any` type (use `unknown` + type guard)
- No default exports
- No barrel files (index.ts re-exports)
- No console.log in committed code (use logger util)
```

The "never do X" framing is surprisingly effective. LLMs respond well to explicit prohibitions because it narrows the generation space -- fewer valid options means fewer wrong ones.
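To make the `unknown` + type guard rule concrete, here's a minimal sketch. The `User` shape and function names are invented for illustration, not from a real project:

```typescript
// Hypothetical sketch: enforcing "no `any`, use `unknown` + type guard".
interface User {
  id: string;
  name: string;
}

// Type guard: narrows `unknown` to `User` at runtime.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const record = value as Record<string, unknown>;
  return typeof record.id === "string" && typeof record.name === "string";
}

// Parsing returns `unknown`, so callers must narrow before touching fields.
function parseUser(json: string): User {
  const data: unknown = JSON.parse(json);
  if (!isUser(data)) {
    throw new Error(`Payload is not a valid User: ${json}`);
  }
  return data; // safely narrowed to User by the guard
}
```

With `any`, a typo like `user.nmae` compiles silently; with the guard, every field access is checked the moment the payload enters your code.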


Pattern 2: Role Definition (Shape the AI's Decision-Making)

This is the pattern most people skip, and it's arguably the most powerful. You're not just telling the AI what to write -- you're telling it who to be when making decisions.

The pattern: Define the kind of developer you want the AI to emulate.

```md
## Role
You are a senior backend engineer who:
- Prioritizes readability over cleverness
- Writes code that a junior dev can understand in 6 months
- Prefers boring, proven solutions over cutting-edge experiments
- Always handles errors explicitly (no silent catches)
- Treats tests as documentation, not afterthoughts
```

Before (no role):

```ts
// AI writes "clever" code (fp-ts-style pipeline; imports omitted)
const getUser = (id: string) =>
  pipe(id, validateUUID, chain(fetchFromCache), alt(() => fetchFromDB), map(sanitize));
```

After (with role):

```ts
// AI writes readable code
async function getUser(id: string): Promise<User> {
  if (!isValidUUID(id)) {
    throw new InvalidIdError(id);
  }

  const cached = await cache.get<User>(`user:${id}`);
  if (cached) {
    return cached;
  }

  const user = await db.users.findById(id);
  if (!user) {
    throw new NotFoundError("User", id);
  }

  return user;
}
```

Same functionality. One version is a puzzle. The other is code your team can actually maintain.

When to customize: Adjust the role based on the project. A quick prototype might want "move fast, skip edge cases." A payment system wants "paranoid about data integrity, validate everything twice." The role shapes every decision downstream.


Pattern 3: Anti-Patterns List (Prevent Recurring Mistakes)

Every codebase has landmines that AI assistants love stepping on. Maybe it's importing from an internal package that's being deprecated. Maybe it's using an ORM method that has a known N+1 problem. Maybe it's that one API endpoint that silently returns 200 even on failure.

The pattern: Document the specific mistakes you've seen the AI make (or that your team keeps making), and explicitly ban them.

```md
## Anti-Patterns (NEVER do these)

### Database
- Never use `findAll()` without pagination — will OOM on large tables
- Never write raw SQL without parameterized queries
- Never cascade delete without explicit confirmation in the code

### API Design
- Never return 200 for errors — use proper HTTP status codes
- Never expose internal IDs in public APIs — use UUIDs
- Never accept unbounded arrays in request bodies — always set maxItems

### State Management
- Never store derived state — compute it from source of truth
- Never mutate function arguments — clone first
- Never use global mutable state — pass dependencies explicitly

### Security
- Never log request bodies (may contain PII/tokens)
- Never hardcode secrets, even in tests — use env vars
- Never trust client-side validation alone — always validate server-side
```

Why this works better than you'd expect: LLMs are trained on millions of code examples, including a lot of bad ones. Without anti-pattern rules, the AI might generate code that's technically correct but uses patterns you've specifically moved away from. The anti-pattern list acts as a negative filter on its training data.
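As a quick illustration of the state-management rules, here's a hypothetical sketch of "never mutate function arguments" (the `Cart` type and function names are invented for the example):

```typescript
// Hypothetical example contrasting in-place mutation with clone-first updates.
interface Cart {
  items: string[];
}

// Anti-pattern: mutates the caller's object in place.
function addItemMutating(cart: Cart, item: string): Cart {
  cart.items.push(item);
  return cart;
}

// Preferred: spread-copy so the input stays untouched.
function addItem(cart: Cart, item: string): Cart {
  return { ...cart, items: [...cart.items, item] };
}
```

The mutating version "works" until two call sites share the same object; the clone-first version makes that entire class of bug impossible.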

Real-world example: I had an AI repeatedly generate `try { ... } catch (e) { return null; }` -- silently swallowing errors. Adding "Never silently catch errors — always log or re-throw with context" to the anti-patterns section killed that behavior immediately.
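A minimal sketch of what "re-throw with context" looks like in practice. The error class and the failing data source are invented for illustration:

```typescript
// Hypothetical sketch: wrap low-level failures instead of returning null.
class FetchUserError extends Error {
  constructor(userId: string, readonly originalError: unknown) {
    super(`Failed to fetch user ${userId}`);
    this.name = "FetchUserError";
  }
}

// Simulated failing backend (stand-in for a real data source).
async function loadFromStore(userId: string): Promise<string> {
  throw new Error(`connection refused (while loading ${userId})`);
}

async function fetchUser(userId: string): Promise<string> {
  try {
    return await loadFromStore(userId);
  } catch (err) {
    // Never `return null` here: re-throw with context so callers
    // see which user failed and why, with the original error attached.
    throw new FetchUserError(userId, err);
  }
}
```

Callers now get a stack trace that names the user and preserves the root cause, instead of a mysterious `null` three layers up.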


Pattern 4: Stack-Specific Instructions (Framework-Aware Generation)

Generic AI coding advice is fine. But every framework has idioms, best practices, and sharp edges that only matter if you're actually using that framework. A React 18 project has completely different rules than a Svelte 5 project, even if they're both "frontend."

The pattern: Add framework-specific rules with version numbers.

```md
## Tech Stack
- Runtime: Node.js 22 (use native fetch, no node-fetch)
- Framework: Next.js 14 App Router (NOT Pages Router)
- ORM: Drizzle (NOT Prisma — we migrated in Q3)
- Auth: Better Auth v1 (NOT NextAuth)
- Styling: Tailwind CSS v4 (use @theme, not tailwind.config)
- Testing: Vitest + React Testing Library

## Framework Rules
### Next.js 14
- Use Server Components by default, add "use client" only when needed
- Data fetching in Server Components, not useEffect
- Use `next/navigation` not `next/router`
- Route handlers go in app/api/[route]/route.ts
- Loading states via loading.tsx, not manual Suspense

### Drizzle ORM
- Schema defined in src/db/schema.ts
- Migrations via `drizzle-kit generate` then `drizzle-kit migrate`
- Always use `.prepare()` for frequently-run queries
- Relations defined with `relations()` helper, not raw joins
```

Why version numbers matter: AI models are trained on code from multiple framework versions. Without `Next.js 14 App Router` spelled out, you'll get a mix of Pages Router (v12) and App Router (v13/14) patterns in the same file. Specifying the version eliminates an entire class of hallucination.

The "NOT X" pattern is gold for recent migrations. If you switched from Prisma to Drizzle last quarter, the AI will still reach for Prisma unless you explicitly block it. `NOT Prisma — we migrated in Q3` gives it both the rule and the reason.


Pattern 5: Testing & Deployment Constraints (Guard the Pipeline)

The AI can write beautiful code that breaks your CI pipeline, fails your linter, or deploys to the wrong environment. This pattern makes the AI aware of your operational constraints.

The pattern: Document your testing expectations and deployment boundaries.

```md
## Testing Requirements
- Every new function needs at least one test
- Test file location: colocated (Button.test.tsx next to Button.tsx)
- Test naming: `describe("functionName")` → `it("should [expected behavior] when [condition]")`
- Mocking: use vi.mock() for external services, never mock internal utils
- No snapshot tests — they break on every UI change and nobody reviews the diff
- Integration tests for API routes, unit tests for utils/helpers

## Deployment
- Platform: Cloudflare Workers (no Node.js APIs: no fs, no path, no process)
- Max bundle size: 1MB (Workers limit)
- Environment variables: accessed via env parameter, NOT process.env
- Database: D1 (SQLite-compatible, no PostgreSQL features)
- Secrets: `wrangler secret put`, never commit .env files

## CI Pipeline
- Biome check runs first — fix formatting before pushing
- Type check: `tsc --noEmit` must pass
- Tests: `vitest run` must pass, no `.skip` or `.only` in committed code
- PR titles: Conventional Commits format (feat:, fix:, refactor:, etc.)
```

The Cloudflare Workers example is instructive. Without these constraints, an AI will happily write `import fs from 'node:fs'` in a Workers project. It'll compile locally and explode in production. The constraint `no Node.js APIs: no fs, no path, no process` prevents this at generation time instead of debug time.
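Here's a rough sketch of the `env` parameter pattern the Deployment rules enforce. The handler shape follows the Workers module format; the `API_KEY` binding name is invented for the example:

```typescript
// Hypothetical sketch: Workers-style handler receiving bindings via `env`,
// never via process.env. `API_KEY` is an invented binding name.
interface Env {
  API_KEY: string;
}

const handler = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Bindings are injected per request; no Node.js globals involved.
    const authorized = request.headers.get("x-api-key") === env.API_KEY;
    return new Response(authorized ? "ok" : "forbidden", {
      status: authorized ? 200 : 403,
    });
  },
};

export default handler;
```

Because configuration arrives as a function argument, code that reaches for `process.env` simply has nothing to grab, which is exactly the failure mode you want at generation time.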

The "no snapshot tests" rule is a taste decision, not a universal truth. That's fine -- your instruction file should encode your team's decisions, not generic best practices. The AI doesn't need to agree. It needs to comply.


Putting It All Together

Here's a minimal but complete instruction file combining all 5 patterns:

```md
# Project Instructions

## Role
Senior TypeScript developer. Prioritize readability and maintainability.
Prefer explicit over clever. Handle all errors.

## Conventions
- Files: kebab-case | Variables: camelCase | Types: PascalCase
- Named exports only. No barrel files.
- Max 200 lines per file.

## Stack
- Next.js 14 App Router + TypeScript strict mode
- Drizzle ORM + D1 database
- Tailwind CSS v4
- Vitest + React Testing Library

## Anti-Patterns
- No `any` type — use `unknown` with type guards
- No `try/catch` that silently returns null
- No `useEffect` for data fetching — use Server Components
- No `process.env` — use Workers env parameter

## Testing
- Colocate test files. Name: `describe → it("should X when Y")`
- Unit tests for logic, integration tests for API routes
- No `.skip` or `.only` in committed code

## Deployment
- Cloudflare Workers — no Node.js APIs (fs, path, process)
- Max 1MB bundle. Secrets via `wrangler secret put`.
```

That's ~30 lines. It takes 10 minutes to write. And it fundamentally changes the quality of every AI-generated code block in your project.


Your Next Steps

  1. Create the file right now. Open your project, add `CLAUDE.md` (or `.cursorrules` or whatever your tool uses), and write just the Conventions section. Even 5 lines make a difference.

  2. Build it iteratively. Every time the AI generates something wrong, don't just fix it -- add a rule to prevent it. Your instruction file is a living document that gets smarter over time.

  3. Share it with your team. Commit it to the repo. Now every developer gets the same AI behavior, not just whoever happens to know the right prompt.

Your instruction file should be personal — built from your own mistakes and preferences. Start with 5 lines today, and add a rule every time the AI gets something wrong. In a week, you'll have a file that feels like a cheat code.

Want a head start? I maintain a collection of 12 starter templates for different project types (Next.js, FastAPI, Rust, Go, etc). Steal what's useful, delete the rest.


What's the ONE rule in your instruction file that saved you the most debugging time? Mine is "Never silently catch errors — always log or re-throw with context". That single line eliminated a whole category of "why is this returning null?" bugs.

Drop yours in the comments — I reply to every one.

Follow @docat0209 for weekly deep-dives on AI-assisted development.
