Why Steering Documents Matter
A well-maintained AGENTS.md
is the contract between your codebase and the agent ecosystem. It answers:
- What can I ask this agent to do here?
- What tools, conventions, or workflows are in scope?
A good baseline of steering documents makes AI more predictable and reliable: it nudges the agent to follow your patterns, understand your architecture, and generate code that fits your codebase. It doesn't replace good AI practices, but it improves the out-of-the-box experience.
Remember that steering documents are written for AI agents, not humans. Keep them concise and to the point; for example, see the GPT-5-Codex prompting guide and the terseness of its prompts. Every character adds to the agent's context window, so terseness lets it do more.
How to Test Your Steering Docs
Testing steering documents is similar to testing an onboarding guide: you need to walk through it step by step.
- Create a collection of example prompts. Think of all the tasks an engineer might do inside your monorepo, and make sure you have one or two example prompts per task. Example: “Migrate an email template to our new framework. Here's the previous code: [...]”
- Store them somewhere durable. A shared Google Doc or Confluence page works fine; these are lightweight, editable, and accessible.
- Run the prompts with all your supported tools. Try Claude Code, Cursor, Codex CLI, or whichever agents your team supports. Different agents behave differently, so it's best to test across all of them.
- Observe breakdowns. Where did the agent get lost? What knowledge did it lack to properly complete the task?
- `git reset --hard`, edit your steering doc, and retry until your agent one-shots your test prompt.
This may feel simple, but simplicity is a feature. You’ll uncover real gaps faster than if you over-engineer an evaluation framework. Over time, you can start introducing automated evals, but don’t let that block you from getting started.
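The reset-and-retry loop above can be sketched as a small shell helper. This is a hedged sketch, not a standard workflow: the `AGENT_CMD` variable and the `prompts/` directory of one-prompt-per-file `.txt` files are assumptions, and you would point `AGENT_CMD` at whatever non-interactive command your tool offers (for example, Claude Code's `claude -p`).

```shell
# Sketch: replay each saved test prompt against an agent CLI,
# resetting the working tree between attempts.
# AGENT_CMD is whatever non-interactive agent command you use.
run_prompts() {
  dir="$1"
  for prompt in "$dir"/*.txt; do
    git reset --hard HEAD >/dev/null   # discard the previous attempt
    $AGENT_CMD "$(cat "$prompt")"      # e.g. AGENT_CMD="claude -p"
  done
}
```

Note that the reset at the top of each iteration wipes the previous attempt, so review each diff before letting the next prompt run.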
AGENTS.md and Its Variants
Most tools today recognize `AGENTS.md` as a standard. A notable exception at the time of writing is Anthropic's Claude Code, which only supports `CLAUDE.md`. (Let's hope this changes soon.)
To support Claude Code, I recommend against a symlink. Instead, I approach it like this:

```shell
echo "Read @AGENTS.md" > CLAUDE.md
```
I prefer this approach because it gives us the flexibility to expand `CLAUDE.md` with Claude-specific features (like sub-agents).
```
repo-root/
├── AGENTS.md
├── CLAUDE.md   # contains "Read @AGENTS.md"
└── src/
    └── ...
```
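As an illustration of that flexibility, the `CLAUDE.md` stub can later grow Claude-specific instructions while `AGENTS.md` stays tool-agnostic. The `reviewer` sub-agent below is a made-up example:

```markdown
Read @AGENTS.md

# Claude-specific
For code reviews, delegate to the `reviewer` sub-agent.
```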
Root AGENTS.md as your router
Nested `AGENTS.md` files are the default recommendation for monorepos: the closest `AGENTS.md` to the edited file wins. But I find this approach quite limited on its own! For one, it only works when the agent operates from a sub-folder or a specific file, or when a user `@`-references the file manually. Doing only that:
- Limits the agent's access to examples and patterns in the rest of the codebase.
- Requires the user to know which sub-folder to work from, making discoverability harder.
We can bridge that gap with a root `AGENTS.md` that progressively discloses information to your agent. For example:
```markdown
# Tasks
To create an email, read @emails/AGENTS.md
To create a Go service, read @go/services/AGENTS.md
To add unit tests, read @.agents/unit-tests.md
```
Whenever appropriate, we prefer adding documentation in an `AGENTS.md` contextual to a folder's content. But a general `.agents/` folder is quite valuable for collecting the other kind of content: context too generic to belong to any one folder.
Folder structure example:
```
repo-root/
├── AGENTS.md
├── emails/
│   └── AGENTS.md
├── go/
│   └── services/
│       └── AGENTS.md
└── .agents/
    └── unit-tests.md
```
This way, the root `AGENTS.md` becomes a map, pointing agents to read only the documents relevant to their task. This helps a lot with managing the context window on longer-running tasks.
What about documentation maintained elsewhere?
At a high level, there's nothing wrong with referencing external documentation:

```markdown
To create an email, use the Atlassian MCP server to read https://...
```

The downside is the risk of filling the context window with documentation written with humans in mind. But because you're testing your core prompts, you'll easily see if you're causing the agent to hallucinate.
We recommend a pragmatic approach at first, and expanding later on.
Alternatively, you can ask the agent to summarize the content of the external documentation. Put the output in an `AGENTS.md` file, and tell the agent it can fetch more information from the given URL if needed.
That summary will then need to be kept up to date, like any documentation. And that's something we're hoping to delegate to autonomous agents soon enough...
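For instance, a summarized section might look like this (the URL and the details are hypothetical placeholders, not from any real framework):

```markdown
## Email framework (summary)
- Templates live under emails/templates/, one file per template.
- Use the shared layout component for headers and footers.
- Full reference (fetch only if the summary is insufficient):
  https://wiki.example.com/email-framework
```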
Bring your Platform Teams along
At some point, centralized dev experience teams can’t be experts in everything. Platform and product teams must own their own steering content.
- The central team provides the scaffolding (AGENTS.md, routing, shared configs).
- Each partner team fills in domain-specific instructions (e.g., “how to add observability to a python backend service”).
- This shifts expertise closer to where the work happens, while maintaining a consistent navigation structure.
Supporting User Customization
Not every engineer works the same way. Customization matters.
- Global preferences. Example: tone, tool prioritization. Place these in `~/AGENTS.md`.
- User-repo-specific overrides. Example: which service they own, what their scope is, etc. Introduce `AGENTS.local.md`, `.gitignore` it, and instruct your root `AGENTS.md` to check it first with a line like “If present, prioritize instructions inside @AGENTS.local.md”.
```
repo-root/
├── AGENTS.md
└── AGENTS.local.md   # .gitignored, user-specific
```
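The setup can be sketched as a small, idempotent shell helper. The function name is mine, and it only assumes you run it from the repo root:

```shell
# Sketch: create the user-specific override file and make sure
# it never gets committed. Safe to run more than once.
setup_local_overrides() {
  touch AGENTS.local.md
  # Append to .gitignore only if the entry is not already there.
  grep -qx "AGENTS.local.md" .gitignore 2>/dev/null \
    || echo "AGENTS.local.md" >> .gitignore
}
```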
Conclusion
Steering documents are still pretty new territory for us, so we're more than eager to learn from the community!
- How do you structure steering docs in your monorepos?
- Do you share other configs (beyond steering), when/what?
- Where have you seen agents get lost most?
This space is moving fast; best practices will come from the community as much as from the tools. This is only one approach, and it's bound to evolve.