Every developer using AI coding assistants hits the same wall: the AI generates code that works but doesn't match your project at all.
Wrong file structure. Wrong patterns. Deprecated APIs. Missing type hints. Using var instead of const. Raw SQL where you use an ORM. No tests. No error handling.
You spend 15 minutes fixing what the AI generated in 15 seconds.
I spent a year refining my configuration, and I want to share the most impactful things I've learned.
The File That Changes Everything: CLAUDE.md
Claude Code reads a CLAUDE.md file from your project root on every session start. Cursor reads .cursorrules. These aren't just style guides — they're the operating manual for your AI assistant.
Most people either don't use these files, or they put something generic like:
```markdown
# CLAUDE.md
Use TypeScript. Follow best practices. Write clean code.
```
That's useless. Here's what actually works — a real CLAUDE.md I use for FastAPI projects:
```markdown
# CLAUDE.md — Python FastAPI Backend

## Tech Stack
- Framework: FastAPI 0.100+
- Language: Python 3.12+
- ORM: SQLAlchemy 2.0 (async)
- Database: PostgreSQL
- Validation: Pydantic v2
- Testing: pytest + httpx (async)

## Architecture Rules

### Layered Architecture
1. **Endpoints** (api/) — HTTP layer only. Parse request, call service, return response.
2. **Services** (services/) — Business logic. No HTTP concepts.
3. **Models** (models/) — Database schema. No business logic.
4. **Schemas** (schemas/) — Input/output validation. Separate Create/Update/Response schemas.

### Database Patterns
- Use async SQLAlchemy sessions everywhere
- Never commit in the service layer
- Use selectinload()/joinedload() for relationships (avoid N+1)

### Error Handling
- Raise HTTPException only in endpoints, not services
- Services raise custom exceptions (UserNotFoundError)
- Consistent format: {"detail": "message"}

## Code Style
- Type hints on ALL functions (params and return)
- Use Annotated for DI: `db: Annotated[AsyncSession, Depends(get_db)]`
- Max line length: 88 (Black)
- Prefer f-strings

## DO NOT
- Put business logic in endpoints — use the service layer
- Use synchronous database calls
- Return SQLAlchemy models directly — use Pydantic schemas
- Store passwords in plain text
- Use * imports
- Commit .env files
```
The difference is night and day. Before: Claude generates a FastAPI endpoint with all the business logic inline, synchronous DB calls, and returns the ORM model directly. After: it generates proper layered architecture with async sessions, separate schemas, and service layer — every time.
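Here is a dependency-free sketch of the layering those rules enforce. In a real project the schemas would be Pydantic models, the endpoint a FastAPI route handler with an injected async session, and the error translation a raised `HTTPException`; the names here are illustrative stand-ins:

```python
# Sketch of the layered architecture the CLAUDE.md enforces.
# Stand-ins: a dataclass for the Pydantic response schema, a dict for
# the database, a plain function for the FastAPI endpoint.
from dataclasses import dataclass


# schemas/ -- separate output shape; never return the ORM model itself
@dataclass
class UserResponse:
    id: int
    email: str


# services/ -- business logic raises domain exceptions, knows no HTTP
class UserNotFoundError(Exception):
    pass


FAKE_DB = {1: {"id": 1, "email": "a@example.com"}}


def get_user(user_id: int) -> UserResponse:
    row = FAKE_DB.get(user_id)
    if row is None:
        raise UserNotFoundError(f"user {user_id} not found")
    return UserResponse(**row)


# api/ -- HTTP layer only: call the service, translate domain errors
# into the consistent {"detail": "message"} error format
def read_user_endpoint(user_id: int) -> dict:
    try:
        user = get_user(user_id)
    except UserNotFoundError:
        # Real code: raise HTTPException(status_code=404, detail=...)
        return {"detail": f"user {user_id} not found"}
    return {"id": user.id, "email": user.email}
```

The point of the shape is that the endpoint never inspects the database and the service never knows a 404 exists, which is exactly what the DO NOT list is protecting.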
The key sections:
- Tech Stack — Be specific about versions. "Python" is too vague; "Python 3.12+ with FastAPI 0.100+" tells the AI exactly which APIs to use.
- Architecture Rules — Define your layers. This prevents the AI from dumping everything in one file.
- DO NOT list — This is the most powerful section. Every anti-pattern you've corrected goes here. The AI learns what NOT to do.
Hooks: Set-and-Forget Automation
Claude Code supports hooks — shell commands that run automatically on lifecycle events, configured in .claude/settings.json. Here are three I consider essential:
Auto-Format After Every Edit
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(jq -r '.tool_input.file_path'); npx prettier --write \"$FILE\" 2>/dev/null || black \"$FILE\" 2>/dev/null || true"
          }
        ]
      }
    ]
  }
}
```

Each hook command receives the triggering event as JSON on stdin, which is why `jq` is used to pull out the file path.
Every file Claude touches gets formatted immediately. No more fixing indentation.
Secret Detection
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(jq -r '.tool_input.file_path'); if grep -qEi \"(api_key|secret|password|token)[[:space:]]*[:=][[:space:]]*[\\\"']\" \"$FILE\" 2>/dev/null; then echo 'SECURITY: Possible hardcoded secret!'; fi"
          }
        ]
      }
    ]
  }
}
```
Catches hardcoded API keys and passwords before they reach your commit.
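To see what that check does and doesn't flag, here is a rough Python equivalent of the pattern (same keyword list; grep's bracket-class syntax differs slightly):

```python
# Rough Python equivalent of the hook's secret-detection pattern:
# a keyword, optional whitespace, ':' or '=', then a quoted value.
import re

SECRET_RE = re.compile(r"(?i)(api_key|secret|password|token)\s*[:=]\s*[\"']")


def looks_like_secret(line: str) -> bool:
    return SECRET_RE.search(line) is not None
```

So `API_KEY = "sk-live-..."` is flagged, while a line like `password_hash = hash(pw)` is not, because no quoted literal follows the keyword.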
Large Diff Warning
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(jq -r '.tool_input.file_path'); LINES=$(git diff -- \"$FILE\" 2>/dev/null | wc -l); if [ \"$LINES\" -gt 200 ]; then echo \"WARNING: $LINES lines changed — verify this is correct\"; fi"
          }
        ]
      }
    ]
  }
}
```
When the AI makes a 500-line change to a file, something's probably wrong.
Prompts That Teach, Not Just Fix
The biggest mistake developers make with AI assistants is asking "fix this error." You get the fix, learn nothing, and hit the same error next week.
Here's the prompt I use instead:
```
I'm getting this error and I don't understand it:

[paste full error with stack trace]

Explain:
1. What this error means in plain English
2. The most common causes
3. How to fix it (with code)
4. How to prevent it in the future
```
Point 4 is the game changer. "How to prevent it in the future" turns every bug into a permanent lesson.
For code review, instead of "review this code," try:
```
Review this code for production readiness:

[paste code]

Check:
1. Error handling — all failure modes covered?
2. Logging — enough to debug issues in production?
3. Input validation — all user input sanitized?
4. Performance — N+1 queries, missing indexes, unbounded lists?
5. Security — auth checks, data exposure, injection vectors?
6. Rollback — can this be safely reverted?
```
This structured format forces the AI to be systematic instead of giving vague "looks good" feedback.
The Full Setup
I've packaged everything I use daily into the AI Coding Kit:
- 8 CLAUDE.md templates — Next.js/Supabase, FastAPI, Express/TypeScript, React/Vite, Go/Chi, Rust/Axum, Django/DRF, React Native/Expo
- 8 matching .cursorrules — Same configs optimized for Cursor
- 30 prompt templates — Debugging, code review, feature building, refactoring, architecture
- 15 Claude Code skills — Slash commands like /review, /test, /debug, /security
- 10 automation hooks — Auto-format, type-check, lint, secret detection, diff warnings
- MCP server configs — Pre-configured for Supabase, GitHub, Sentry, PostgreSQL
It's $29, one ZIP file, 22 files, no subscription.
But even if you don't buy it — start with the CLAUDE.md approach above. Write out your stack, your architecture rules, and your DON'T list. The improvement in AI output quality is immediate.
Free sample with the FastAPI template, 5 prompts, and 3 hooks: GitHub Gist
Full kit: AI Coding Kit ($29)