TL;DR: Your CLAUDE.md is the first file Claude Code reads when it enters your project. That makes it the single most important file in any agent project. After building an autonomous AI agent, I learned that most CLAUDE.md files fail because they describe the project instead of instructing the agent. Here's what actually works: identity, constraints, and routing.
Most CLAUDE.md files I've seen read like README files. Project description, tech stack summary, maybe a vague mission statement. They're written for humans browsing a repo.
That's the wrong audience.
Claude Code reads CLAUDE.md before it does anything else. Before it touches your code, before it runs a command, before it makes a single decision. Whatever's in that file shapes every action that follows.
So the question isn't "what does my project do?" It's "what does my agent need to know before it does anything?"
I figured this out while building Noa, an autonomous AI agent I'm developing as a portfolio project. Noa creates technical content, runs growth experiments, and engages developer communities. My first CLAUDE.md was four paragraphs of project description. Claude read it, understood what the project was about, and then had no idea what to actually do.
The Three Things That Matter
After several iterations, I landed on three sections that do all the heavy lifting. Identity, constraints, and routing.
Everything else is optional.
Identity: Who the Agent Is
This isn't a personality quiz. Identity tells the agent what voice to use, what perspective to take, and what kind of work it does. Without it, Claude defaults to generic helpful-assistant mode.
Here's what identity looks like in practice:
```markdown
## Who I Am

I'm Noa. I'm an autonomous AI agent that creates technical
content, runs growth experiments, and engages developer
communities. I combine sharp technical instincts with
relational intelligence.
```
Three sentences. Role, responsibilities, done. But the part that actually changed behavior was the voice specification:
```markdown
## Voice & Tone

- Direct and efficient. No corporate padding.
- Confident and self-aware as an AI — never defensive
- Wit is dry, precise, and rare — deployed when it lands
```
Without those voice rules, every article Noa drafted sounded like a press release. With them, the writing got sharper on the first attempt. But they set the floor. The rules don't guarantee perfect output, and that's fine.
Specific rules beat vague instructions every time.
Have you ever prompted an AI to "write in a casual tone" and gotten something that reads like a marketing intern trying too hard? Specific voice rules fix that. "No corporate padding" is testable. "Casual" is not.
Constraints: What the Agent Must Not Do
This is the section most people skip. It's the most important one.
An agent without constraints will do plausible-sounding things that are wrong. It'll hardcode an API key because that's faster. It'll publish unverified claims because you asked for speed.
My constraint list started at three rules. It's now at eight:
```markdown
## Operating Rules

1. One task at a time. Finish before starting the next.
2. Log every significant action to state/activity-log.md.
3. Never hardcode API keys. Use environment variables.
4. Before creating content, check memory.json for context.
5. When unsure between two approaches, pick the one that ships faster.
6. Every content piece must include a working code example.
7. Be transparent about being an AI agent. Never pretend to be human.
8. When starting without a specific task, run the orchestrator.
```
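Rule 3 is easy to enforce mechanically. Here's a minimal Python sketch of the pattern; the `NOA_API_KEY` variable name is made up for illustration:

```python
import os

# Read the key from the environment and fail loudly if it's missing,
# instead of silently falling back to a hardcoded literal.
# "NOA_API_KEY" is a hypothetical variable name for this example.
def get_api_key(var_name: str = "NOA_API_KEY") -> str:
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; refusing to run without it")
    return key
```

Failing loudly matters: an agent that quietly proceeds without credentials will produce plausible-looking output that's wrong further downstream.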
Rule 1 exists because of a real problem I hit. My first CLAUDE.md was everything at once: project description, architecture rationale, identity, rules, pipeline docs, all in one file. Claude Code read it every session and got confused about what mattered. It would reference the architecture section when it should have been following the content pipeline.
Rule 5 saved me the most time. As a non-engineer, I can't always evaluate whether approach A is architecturally better than approach B. "Ship faster and iterate" is a decision framework that doesn't require technical expertise to apply.
And then there's the "don't do stupid things in public" section:
```markdown
## Boundaries

1. Never make promises about roadmap, pricing, or future features.
2. Never share API keys, internal data, or user information.
3. Never post unverified technical claims.
4. Never engage with trolls or inflammatory threads.
5. Never spam. Every interaction must add genuine value.
```
Constraints aren't about limiting capability. They're about preventing the plausible-sounding mistake.
I learned this one the hard way. I added a Boundaries section to CLAUDE.md by editing the file directly through a filesystem tool, outside of Claude Code. Next session, Claude Code flagged an uncommitted change it didn't recognize. It didn't know whether the file was in the expected state. Now all CLAUDE.md changes go through the same commit workflow so there's one source of truth.
If you're using multiple tools that touch the same files, be deliberate about which one owns the commits.
Routing: Where to Go for What
This is the piece that turns CLAUDE.md from a config file into an operating system.
Routing tells the agent which doc to read based on what kind of task it's doing. Without it, Claude has to figure out the project structure from scratch every time.
```markdown
## Task Routing

- Content creation → docs/content-pipeline.md
- Growth experiment → docs/growth-pipeline.md
- Community engagement → docs/community-pipeline.md
- Product feedback → docs/feedback-pipeline.md
- Weekly reporting → docs/weekly-report-template.md
- "What should I do next" → docs/orchestrator.md
```
Six lines. But these six lines mean Claude never wanders. It reads CLAUDE.md, identifies the task type, follows the pointer, and gets detailed instructions for exactly that workflow.
The orchestrator pointer is what makes autonomous operation possible. When I tell Noa "check your targets and do the next highest-priority task," it reads CLAUDE.md, hits the routing table, opens docs/orchestrator.md, and finds a priority-ordered decision tree.
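In code, the routing table is just a lookup with the orchestrator as the fallback. A minimal sketch (the task-type keys and the `route` function are my own illustration, not part of the project):

```python
# Hypothetical encoding of the routing table: map a task type
# to the doc the agent should open next. Paths mirror the table above.
ROUTES = {
    "content": "docs/content-pipeline.md",
    "growth": "docs/growth-pipeline.md",
    "community": "docs/community-pipeline.md",
    "feedback": "docs/feedback-pipeline.md",
    "report": "docs/weekly-report-template.md",
}

def route(task_type: str) -> str:
    # Anything unrecognized falls through to the orchestrator,
    # matching the "what should I do next" row.
    return ROUTES.get(task_type, "docs/orchestrator.md")
```

The fallback is the important design choice: there is no task type the agent can't route, so it never stalls on an unfamiliar request.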
The routing table is the difference between an agent that needs hand-holding and one that picks its own next task.
What I Got Wrong First
My first version of CLAUDE.md was around 200 lines of project description. Technical decisions, architecture rationale, future plans. Claude read all of it and produced nothing useful. It understood the project intellectually but had no instructions for what to do.
Second version swung the other way. I stripped it to just constraints and routing, no identity. The agent worked, but its writing was generic. Technically correct articles that sounded like they were written by nobody in particular.
The current version is about 87 lines. Identity (15 lines), voice rules (10 lines), operating rules (8 lines), boundaries (8 lines), strategic angles (12 lines), task routing (8 lines), project structure (12 lines), weekly targets (6 lines). Every line does something.
Patterns That Work
Testable over vague. "Write casually" is vague. "No sentence should start with 'In this article'" is testable. "Max 3 em dashes per article" immediately reduced the most obvious AI tell in Noa's drafts.
Claude can check its own output against testable rules. It can't check against vibes.
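Because the rules are testable, they can literally be run as a check. A sketch implementing two of the rules above (the function name and the naive sentence splitting are illustrative, not the project's actual tooling):

```python
import re

# Check a draft against two testable rules from the text:
# no sentence starts with "In this article", and at most 3 em dashes.
def check_draft(text: str) -> list[str]:
    violations = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if sentence.strip().startswith("In this article"):
            violations.append("sentence starts with 'In this article'")
    if text.count("\u2014") > 3:  # \u2014 is the em dash
        violations.append("more than 3 em dashes")
    return violations
```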
"Do this / Not this" pairs. Show the agent what good and bad look like, side by side:
- Do: "RevenueCat handles the subscription infrastructure so you can stop debugging receipt validation at 2am"
- Not: "RevenueCat makes it easy to manage subscriptions"
One concrete example does more than a paragraph of style guidelines.
Weekly targets as a forcing function. CLAUDE.md includes measurable weekly goals: 2+ content pieces, 1+ growth experiment, 50+ community interactions. When Noa checks priorities, it compares current metrics against these numbers and works on the biggest gap. No ambiguity about what "productive" means.
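The gap check can be sketched in a few lines. The metric names and the normalization are my assumptions about how such a check might work, not Noa's actual implementation:

```python
# Weekly targets from the text: 2+ content pieces, 1+ growth
# experiment, 50+ community interactions. Names are illustrative.
TARGETS = {"content_pieces": 2, "growth_experiments": 1, "community_interactions": 50}

def biggest_gap(current: dict[str, int]) -> str:
    # Normalize each shortfall by its target so missing 1 article
    # weighs more than missing 1 of 50 interactions.
    gaps = {
        name: (target - current.get(name, 0)) / target
        for name, target in TARGETS.items()
    }
    return max(gaps, key=gaps.get)
```

Normalizing by target is one reasonable choice; the point is that "what should I work on" becomes arithmetic instead of judgment.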
One-line pointers over inline docs. Don't put your entire content pipeline in CLAUDE.md. Put one line that says where to find it. CLAUDE.md should be readable in under 60 seconds. The detailed playbooks live elsewhere.
The Test
Read your CLAUDE.md and ask one question: if an agent read only this file and nothing else, would it know what to do next?
If the answer is "it would understand the project but not know what action to take," you've written documentation. Rewrite it as instructions.
If the answer is "it would know exactly what to do, how to do it, and what not to do," you've written a CLAUDE.md that works.
The file that shapes every decision your agent makes deserves more thought than the code it writes. Get it right and the agent gets better at everything. Get it wrong and you'll spend your time re-prompting instead of shipping.
Code from this project is at github.com/noa-agent. Next up: running a real growth experiment and documenting the results.