I used to spend more time editing AI output than writing from scratch.
Not because the code was wrong — it usually worked. But every function had a different naming convention. Comments were either absent or three paragraphs long. Variable names ranged from x to temporaryDataHolderForUserInput.
Sound familiar?
## The Problem Nobody Talks About
We obsess over getting AI to produce correct output. But correctness is table stakes. The real productivity killer is inconsistency.
When every AI response uses a different style, you burn time:
- Reformatting code to match your project
- Rewriting comments to match your team's voice
- Adjusting naming conventions after every generation
- Doing a "style pass" on every pull request
That's not AI-assisted development. That's AI-assisted rough-drafting with a human cleanup crew.
## The Fix: A One-Page Style Guide
I created a simple markdown file, `STYLE.md`, and started including it in every prompt context. Here's the structure:
```markdown
# Project Style Guide

## Naming
- Functions: camelCase, verb-first (getUser, parseConfig)
- Constants: UPPER_SNAKE (MAX_RETRIES, API_TIMEOUT)
- Files: kebab-case (user-service.ts, parse-config.ts)

## Comments
- No obvious comments ("// increment i" → delete)
- Function docstrings only for public APIs
- TODO format: // TODO(name): description

## Code Patterns
- Early returns over nested ifs
- Destructure function params when >2 args
- Errors: throw typed errors, never return null for failures

## Formatting
- Max line length: 100
- Imports: stdlib → external → internal (blank line between)
```
That's it. One page. Takes 10 minutes to write.
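What "including it in every prompt context" looks like depends on your tooling. As a rough sketch (the function name and prompt wording here are my own, not from any particular assistant's API), you can prepend the file's contents to each request:

```typescript
// Prepend the style guide to every request so the model sees "good"
// before it sees the task. Loading STYLE.md (e.g. via fs.readFileSync)
// is left to the caller; this just composes the prompt text.
function withStyleGuide(styleGuide: string, request: string): string {
  return [
    "Follow this project style guide in all generated code:",
    styleGuide.trim(),
    "---",
    request,
  ].join("\n\n");
}
```

Most coding assistants also have a native way to pin a file into context (rules files, workspace instructions); use that if it exists, since it saves tokens on repeat prompts.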
## What Changed
### Before Style Guide

```typescript
// This function gets the user data from the database
async function fetchUserData(userId: string) {
  const result = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
  if (result) {
    if (result.rows.length > 0) {
      return result.rows[0];
    } else {
      return null;
    }
  } else {
    return null;
  }
}
```
### After Style Guide

```typescript
async function getUser(userId: string): Promise<User> {
  const { rows } = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
  if (!rows.length) throw new NotFoundError(`User ${userId} not found`);
  return rows[0];
}
```
Same AI. Same model. Same prompt complexity. The only difference: the style guide was in context.
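`NotFoundError` in the "after" example isn't a built-in; under the "throw typed errors" rule it's a small custom class. A minimal sketch:

```typescript
// A typed error per the "throw typed errors, never return null" rule.
// Object.setPrototypeOf keeps `instanceof` checks working when Error
// is subclassed under older TypeScript compilation targets.
class NotFoundError extends Error {
  readonly status = 404;
  constructor(message: string) {
    super(message);
    this.name = "NotFoundError";
    Object.setPrototypeOf(this, new.target.prototype);
  }
}
```

Callers can then branch on `err instanceof NotFoundError` instead of string-matching error messages.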
## Three Things I Learned
1. Be specific about what you hate. Generic guidance like "write clean code" does nothing. "Early returns over nested ifs" gives the model something to act on.
2. Include counter-examples. I added a "Don't" section after I kept getting the same bad patterns:
```markdown
## Don't
- Don't wrap everything in try/catch — let errors propagate
- Don't add `@param` JSDoc for obvious parameters
- Don't use `any` — use `unknown` and narrow
```
3. Update it when you catch yourself editing. Every time I manually changed the same thing in AI output twice, I added a rule. After two weeks, the guide was dialed in and I barely touched AI output anymore.
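That "no `any`" rule pays off quickly in generated code: the compiler forces a check before anything touches the value. One way to narrow looks like this (the `isUser` guard and `User` shape are illustrative, not from the post's project):

```typescript
interface User {
  id: string;
  name: string;
}

// Type predicate narrowing unknown → User; code that skips the
// check simply won't compile against an `unknown` value.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === "string" &&
    typeof (value as Record<string, unknown>).name === "string"
  );
}

function parseUser(json: string): User {
  const data: unknown = JSON.parse(json);
  if (!isUser(data)) throw new Error("Invalid user payload");
  return data;
}
```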
## The ROI
Before the style guide, I estimated I spent 15-20 minutes per AI coding session on reformatting. After: maybe 2 minutes, mostly edge cases.
For a team, the impact multiplies: five developers each saving 15 minutes a day recover more than six hours of productivity every week.
## Try This Today
- Open a new file called `STYLE.md` in your project root
- Write down your top 10 style preferences (naming, comments, patterns)
- Add 3-5 "Don't" rules based on AI output you've manually fixed recently
- Include the file in your AI assistant's context
- Notice the difference in the first response
The best prompt engineering isn't clever tricks. Sometimes it's just telling the model what "good" looks like in your world.
What rules would you put in your style guide? I'm curious which patterns bother people most.