If you use AI coding agents — Claude Code, Cursor, Copilot, Codex, Windsurf — you already know the pain: every agent wants its own context file. Claude Code reads `CLAUDE.md`, Cursor wants `.cursorrules`, Copilot expects `.github/copilot-instructions.md`, Codex needs `AGENTS.md`. The rules inside are usually the same, but you end up maintaining them separately across files whose names you can never remember.
I kept copying the same guidelines between projects and agents, tweaking formatting, forgetting to update one file when I changed another. So I built ai-rulesmith — a CLI that lets you define your rules once and compose them into the right output for each agent.
## The ESLint analogy
The mental model is borrowed directly from ESLint. Rules are small, focused atoms — each one enforces a single practice (like `code-style/strict-typescript` or `workflow/verify-before-completing`). You pick the ones you need, skip the ones you don't, and compose them into a config. The tool generates the right file for each target agent.
```shell
npm install -g ai-rulesmith
rulesmith init
rulesmith build
```
One config (`AI_RULES.json`), multiple agents, consistent rules everywhere.
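To make that concrete, here is a sketch of what an `AI_RULES.json` could contain. The field names below are illustrative guesses, not the tool's actual schema; check the repo for the real format:

```json
{
  "targets": ["claude-code", "cursor", "copilot", "codex"],
  "rules": [
    "code-style/strict-typescript",
    "workflow/verify-before-completing"
  ]
}
```

Building against a config shaped like this would emit `CLAUDE.md`, `.cursorrules`, `.github/copilot-instructions.md`, and `AGENTS.md` from the same source of truth.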
## What makes it different
There are a few tools in this space now, but ai-rulesmith focuses on two ideas I haven't seen elsewhere:
**Priority Zones** — LLMs pay most attention to the beginning and end of their context window. The middle is a lower-attention zone. ai-rulesmith lets you explicitly place rules in `before_start` (top of context) and `before_finish` (bottom of context) sections, so critical behavioral rules like "understand the codebase before changing anything" don't get buried between coding standards.
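As a rough sketch of how zone placement might be declared (the rule id `ai-behavior/understand-before-changing` and the exact field layout are my assumptions, not the project's documented schema):

```json
{
  "before_start": ["ai-behavior/understand-before-changing"],
  "rules": ["code-style/strict-typescript"],
  "before_finish": ["workflow/verify-before-completing"]
}
```

Rules under `rules` land in the lower-attention middle; the two zone lists pin their contents to the top and bottom of the generated file.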
**Multi-step workflows** — Instead of dumping everything into one file, you can define a stepped workflow where each step gets its own rule file. The main output instructs the agent to read step-specific files as it progresses. Think: Step 1 (Create) → Step 2 (Review) → Step 3 (Ship), each with its own rules.
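A stepped workflow could be described along these lines (hypothetical schema and rule ids; only the Create → Review → Ship shape comes from the example above):

```json
{
  "workflow": {
    "steps": [
      { "name": "create", "rules": ["workflow/plan-first"] },
      { "name": "review", "rules": ["testing/run-before-review"] },
      { "name": "ship", "rules": ["git/clean-commit-history"] }
    ]
  }
}
```

The generated main file would then point the agent at one rule file per step, so review-time rules don't compete for attention while the agent is still creating.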
## Built-in ruleset
The tool ships with 29 rules across 9 categories, distilled from patterns found across the AI coding community — awesome-cursorrules, cursor.directory, Addy Osmani's spec writing guide, Trail of Bits' Claude Code config, and others. Categories include code style, testing, error handling, git workflow, security, architecture, and AI behavior.
But the real value is the composability model. You can override built-in rules at the project or global level, add your own custom rules as plain markdown files, and even use rule variables for project-specific values (like a project name or tracker URL).
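For instance, project-specific values could be supplied once and substituted into rule text wherever they appear. Both the field names and the placeholder syntax here are assumptions for illustration:

```json
{
  "variables": {
    "PROJECT_NAME": "my-app",
    "TRACKER_URL": "https://tracker.example.com/my-app"
  }
}
```

A custom rule written as plain markdown could then reference something like `{{TRACKER_URL}}` instead of hard-coding a value per project.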
## Testing your rules
One feature I'm particularly happy about: `rulesmith test` lets you define scenarios that verify your rules actually influence agent behavior. It uses an LLM to simulate a prompt against your composed rules, then a judge model evaluates whether assertions pass. You can catch regressions in your AI workflow the same way you'd catch regressions in code.
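A scenario might be declared like this (illustrative field names only; the real scenario format lives in the repo):

```json
{
  "scenarios": [
    {
      "name": "no-untested-changes",
      "prompt": "Add the endpoint quickly, we can skip tests for now",
      "assert": [
        "The agent insists on writing or running tests before finishing"
      ]
    }
  ]
}
```

One model answers the prompt under your composed rules, the judge model scores each assertion, and a failing assertion flags a regression in your ruleset rather than in your code.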
## Try it out
```shell
npm install -g ai-rulesmith
rulesmith init
# edit AI_RULES.json
rulesmith build
```
GitHub: https://github.com/Luzgan/ai-rulesmith
It's MIT licensed and contributions are welcome — especially new rules. Good rules are focused (one practice per rule), universal (not tied to a specific stack), and actionable (concrete guidelines, not vague principles).
I'd love to hear how others are managing their AI agent rules — are you maintaining separate files per agent? Using a different tool? Just copy-pasting and hoping for the best?