You've probably heard the complaint: "AI tools are powerful, but they don't follow my project's rules."
You give it a task, and it generates code in a style you didn't ask for. Maybe you use Kotlin, but it generates Java. Maybe you enforce a 100-character line limit, but the AI generates 200-character lines. Maybe you have security rules — no hardcoded keys, no internet requests — and the AI ignores all of them.
The frustrating part? The AI isn't being stubborn. It's just guessing.
There's a solution. It's called CLAUDE.md.
## What Is CLAUDE.md?
CLAUDE.md is a project-level configuration file that tells Claude Code — Anthropic's official AI development tool — exactly how your project works.
It's not a secret hack. It's not a workaround. It's a first-class feature of Claude Code. You drop a CLAUDE.md file in your project root, Claude Code reads it before every task, and suddenly the AI respects your rules.
Think of it like .eslintrc for code style, or gradle.properties for Android builds. Except instead of configuring a tool, you're configuring the instructions that Claude Code receives.
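For a feel of the format, here is a minimal sketch of what such a file can look like. The rules shown are illustrative placeholders, not recommendations:

```markdown
# Project Configuration

## Tech Stack
- **Language**: Kotlin only. No Java.

## Code Style
- **Line Length**: Max 120 characters.
```

Even a file this small removes the most common sources of guessing.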
Without CLAUDE.md: AI makes assumptions. It might choose the wrong tech stack, ignore your naming conventions, or generate code that breaks your security rules.
With CLAUDE.md: AI follows your rules exactly, every single time.
## Why This Matters
Here's the real problem with AI code generation at scale: inconsistency.
You run Claude Code 10 times on your project. Each time, you get slightly different output because the AI doesn't know what you actually care about. By task 5, you're explaining the same rules again. By task 10, you're frustrated because the AI should have learned this by now.
But it can't learn from individual tasks. Each session starts fresh. The AI has no memory of your project's rules unless you embed them somewhere it will see them.
CLAUDE.md solves this.
Once you write it, every Claude Code task in that project is informed by the same rules. No re-explaining. No guessing. No inconsistency.
For teams, this is critical. A single project file means every team member — human and AI — is working under the same constraints.
## 4 Effective Patterns That Actually Work
Here are the patterns that produce real results. These aren't theoretical — they're extracted from production projects.
### Pattern 1: Tech Stack Enforcement
The most basic pattern: tell the AI which technologies to use, and which ones to avoid.
```markdown
## Tech Stack
- **Language**: Kotlin only. No Java.
- **UI Framework**: Jetpack Compose only. No XML layouts.
- **Database**: Room. No raw SQLite, no Firebase.
- **HTTP**: Retrofit + OkHttp. No other HTTP clients.
- **Async**: Coroutines only. No RxJava, no manual threading.
```
Why? Because ambiguity costs time. If you don't specify, the AI might use the right framework 70% of the time and a wrong one 30% of the time. Over a 10-task project, that's 3 conflicts you have to fix manually.
With this pattern, you get 100% consistency.
### Pattern 2: Code Style Rules
Code style matters more than people admit. It affects readability, review time, and maintenance cost.
```markdown
## Code Style
- **Indentation**: 4 spaces (not tabs).
- **Line Length**: Max 120 characters.
- **Naming**: camelCase for variables/functions, PascalCase for classes.
- **Blank Lines**: 1 blank line between methods, 2 between logical sections.
- **Comments**: Only for *why*, not *what*. Self-documenting code preferred.
```
Specify these rules explicitly, and the AI generates code that already matches your style. That means no reformatting phase. No lint failures on commit.
### Pattern 3: Security Rules
Security is non-negotiable. Hardcoded keys, unvalidated inputs, and unnecessary network access are the kinds of mistakes that slip past code review if you're not careful.
```markdown
## Security Rules
- **Hardcoded Keys**: NEVER embed API keys, tokens, or passwords in source code. Use environment variables or secure storage.
- **Internet Permission**: Only request INTERNET if the app genuinely needs it. Privacy-first default.
- **Input Validation**: All user inputs must be validated before use. No exceptions.
- **Logging**: Never log sensitive data (API keys, user tokens, PII).
```
When security rules are explicit, the AI treats them as constraints. It won't generate a function that logs your authentication token "just in case." It won't add INTERNET permission "for later use."
### Pattern 4: Deny List (Things AI Should Never Do)
Sometimes it's easier to say what you don't want.
```markdown
## Deny List
- **No @Suppress annotations**: Code warnings should be fixed, not suppressed.
- **No TODO comments without context**: If you add a TODO, link it to an issue.
- **No unused imports**: Clean up after every generation.
- **No MutableState in Views**: Use ViewModel + StateFlow, not mutable composable state.
- **No hardcoded strings in UI**: All strings go in strings.xml.
```
A deny list prevents whole categories of anti-patterns. It's your way of saying: "I know AI sometimes does X. Don't do it here."
## Real Example: A Complete CLAUDE.md for a Habit Tracker
Here's a production-ready example for a simple Android app:
```markdown
# Habit Tracker — Project Configuration

## Tech Stack
- **Language**: Kotlin
- **UI**: Jetpack Compose + Material3
- **Database**: Room (SQLite)
- **Async**: Coroutines + StateFlow
- **Testing**: JUnit (unit), Instrumented (Android tests)

## Code Structure
src/
├── data/
│   ├── entity/       # Room @Entity classes
│   ├── dao/          # @Dao interfaces
│   └── repository/   # Repository pattern (single source of truth)
├── ui/
│   ├── viewmodel/    # ViewModels with StateFlow
│   └── screens/      # Composable screens (pure UI, no logic)
└── App.kt

## Code Style
- **Indentation**: 4 spaces
- **Line Length**: 120 characters max
- **Naming**: camelCase variables/functions, PascalCase classes
- **Imports**: Organize alphabetically, remove unused
- **Comments**: *Why* only. Self-documenting code preferred.

## Architecture Rules
- **Separation of Concerns**: No database logic in ViewModels. No business logic in UI.
- **Repository Pattern**: Always use a Repository as the single source of truth.
- **ViewModel**: Use StateFlow for UI state, never mutable composable state.
- **Data Access**: DAO queries return Flow<T> for reactive updates.

## Security Rules
- **INTERNET Permission**: Only if network access is actually needed. Default: no.
- **Hardcoded Data**: NEVER. Use BuildConfig, environment variables, or secure storage.
- **Logging**: No sensitive data (keys, tokens, user info).
- **Input Validation**: All user inputs validated before database insert.

## Testing Rules
- **Unit Tests**: ViewModel logic, Repository logic, data transformations.
- **No GUI Mocking**: Use instrumented tests for Composable UI.
- **Coverage Target**: 70%+ for critical paths.

## Deny List
- ❌ No @Suppress annotations without justification
- ❌ No hardcoded strings in UI (use strings.xml)
- ❌ No MutableState in Composables (use ViewModel)
- ❌ No direct database calls from UI (go through the Repository)
- ❌ No TODO comments without linked issues
- ❌ No unused imports or variables
- ❌ No LiveData (use StateFlow)
- ❌ No RxJava (use Coroutines)

## Performance Constraints
- **UI Thread**: No database operations on the UI thread (use Coroutines).
- **Memory**: Never load the entire dataset into memory; use pagination for large lists.
- **Recomposition**: Avoid state that causes unnecessary Compose recompositions.

## What to Generate
When generating code for this project:
1. Create the full layered architecture (Entity → DAO → Repository → ViewModel → Screen)
2. Use Room annotations correctly (@Entity, @Dao, @Query, @Insert, etc.)
3. Implement ViewModels with StateFlow for UI state
4. Write Composable screens with @Composable and no business logic
5. Add basic error handling (try/catch for database operations, error states in UI)

## What NOT to Generate
1. ❌ XML layout files (Compose only)
2. ❌ Multiple data sources without Repository abstraction
3. ❌ Direct database calls from ViewModels or UI
4. ❌ Hardcoded configuration or secrets
5. ❌ Unused dependencies in build.gradle.kts
```
This is a real file from a real project. It's specific enough to guide the AI, but not so verbose that it becomes a maintenance burden.
## 3 Common Mistakes With CLAUDE.md
A poorly written CLAUDE.md can be worse than no file at all. Here's how to avoid the most common mistakes.
### Mistake 1: Vague Instructions
❌ Bad: "Write good code."
✅ Good: "Line length max 120 characters. Use camelCase for variables. Every method is either private or public, never protected."
Vague instructions make the AI guess. Specific rules are constraints that produce consistent output.
### Mistake 2: Making It Too Long
A CLAUDE.md should be readable in 2-3 minutes. If it's over 200 lines, you've added noise.
Most projects don't need:
- Extensive history of why decisions were made
- Detailed philosophy statements
- Edge case handling for hypothetical scenarios
Stick to rules that actually affect code generation. Leave the philosophy for architecture docs.
### Mistake 3: Contradictory Rules
❌ Contradiction: "Always use dependency injection" + "No external libraries"
Claude Code will get confused and generate inconsistent code. Review your CLAUDE.md for conflicts before committing it.
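When two rules genuinely conflict, the fix is usually to state the exception explicitly rather than delete one of them. A hedged, illustrative sketch of resolving the contradiction above:

```markdown
## Dependencies
- No external libraries, with one named exception:
  Hilt is allowed, because we require dependency injection.
- Any other new dependency needs team approval first.
```

An explicit exception gives the AI a deterministic rule instead of two rules it has to arbitrate between.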
## The 3-Layer System: Global, Workspace, Project

CLAUDE.md works at three levels:

- **Global** (`~/.claude/CLAUDE.md`): Rules that apply to every project on your machine
  - Preferred languages, linting standards, security baseline
  - Example: "Never install typosquatted packages"
- **Workspace** (`~/workspace/CLAUDE.md`): Rules for multiple projects in a folder
  - Team conventions, corporate standards
  - Example: "All projects use Python 3.13"
- **Project** (`./CLAUDE.md`): Rules specific to this one repository
  - Tech stack for this specific app, architecture decisions, deny lists
  - This is where the most useful rules live
Claude Code reads all three and merges them. Project-level rules override workspace rules, which override global rules.
For teams, this is powerful: the workspace file enforces baseline standards, while each project can add specifics.
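As a sketch of how the override works (paths and rules here are hypothetical), a workspace baseline and a project-level override might look like this:

```markdown
<!-- ~/workspace/CLAUDE.md — workspace baseline -->
## Code Style
- Line length: max 100 characters

<!-- ./CLAUDE.md — project file, wins for this repository -->
## Code Style
- Line length: max 120 characters (legacy codebase, wider lines allowed)
```

For tasks in this repository, Claude Code would apply the 120-character limit; every other project in the workspace keeps the 100-character baseline.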
## The Real Impact
Here's what happens when you use CLAUDE.md properly:
- First task: Claude Code generates code. It's not perfect, but it respects your architecture, style, and security rules.
- Second task: No more re-explaining. The AI already knows your stack, your naming conventions, your denied patterns.
- Tenth task: Your project has consistent style, correct architecture, and zero tech stack conflicts — not because you reviewed every line, but because the AI was informed from the start.
That's not a small thing. For teams, it's the difference between AI tools that slow you down (because they generate code you have to fix) and AI tools that speed you up (because they generate code that fits).
## Get Started
Create a CLAUDE.md file in your project root. Start with the patterns above. Keep it under 150 lines. Be specific about:
- Your tech stack (languages, frameworks, libraries)
- Your code style (indentation, naming, line length)
- Your security constraints (no hardcoded keys, no unnecessary permissions)
- Your deny list (patterns you never want to see)
Then run Claude Code with the same task you've done before. You'll immediately see the difference.
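If you want a concrete starting point, the following shell snippet writes a minimal starter file into your project root. The rules inside are illustrative placeholders; replace them with your own stack and constraints:

```shell
# Create a starter CLAUDE.md in the project root (run from your repo root).
# All rules below are example placeholders — edit them for your project.
cat > CLAUDE.md <<'EOF'
# Project Configuration

## Tech Stack
- **Language**: Kotlin only. No Java.
- **UI**: Jetpack Compose. No XML layouts.

## Code Style
- **Line Length**: Max 120 characters.
- **Naming**: camelCase for variables/functions, PascalCase for classes.

## Security Rules
- Never embed API keys, tokens, or passwords in source code.

## Deny List
- No unused imports.
- No TODO comments without a linked issue.
EOF

echo "Created $(wc -l < CLAUDE.md)-line CLAUDE.md"
```

Commit the file alongside your code so every teammate (and every Claude Code session) picks up the same rules.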
If you want to dive deeper into prompt engineering for AI development — including how to structure requests for maximum consistency and quality — I've published a complete pattern book on Zenn.
For practical examples like the Habit Tracker above, including full Kotlin + Jetpack Compose + Room projects you can customize, check out the app templates on Gumroad.
What rules would you add to CLAUDE.md if you had one? Are there specific AI mistakes that frustrate you every time?