DEV Community

John Munn
One Memory to Rule Them All: Taming AI CLI Instruction Sprawl

If you’re like me, you probably use multiple AI CLIs in your coding process. Claude, Copilot, Gemini, Codex. Each has its own strengths and weaknesses, but there’s one problem I keep running into:

Your repo starts clean… and then slowly fills with:

  • CLAUDE.md
  • GEMINI.md
  • AGENTS.md
  • .github/copilot-instructions.md

Each one almost the same.
Each one slowly drifting.
Each one technically required.

Tooling isn't the problem; this is an emergent coordination problem.

And right now, most teams are solving it badly, if at all.


The real problem isn’t the files

Every AI CLI made a reasonable choice:

  • “We’ll look for a well-known filename.”
  • “We won’t follow includes or remote references.”
  • “We want deterministic, local context.”

Individually, that’s fine.

Collectively, it creates a mess.

There are too many instruction files, and there's no canonical source of truth among them.

So you end up with:

  • Slightly different guidance per tool
  • Accidental edits to the wrong file
  • Subtle behavioral differences between agents
  • No confidence that your AI tools are operating under the same assumptions

That leads to real instruction drift.


Why this hasn’t been “solved” already

There is no shared standard for AI CLI memory.

Each vendor evolved independently, and each tool treats “memory” a little differently:

  • Guardrails vs working context
  • Agent contracts vs prompt framing
  • Repo‑local vs global scope

None of them defer to a shared spec.
None of them coordinate.

So if you’re waiting for a magic ai.config.md file that everyone respects…

You’ll be waiting a while.


A pragmatic solution: canonicalize, then fan out

Instead of fighting the tools, accept their constraints and add a thin layer of automation.

The approach is simple:

  1. Choose one canonical instruction file
  2. Generate the tool-specific files from it
  3. Automate the sync so drift can’t happen

That’s it.

This keeps every AI CLI happy and gives you one place to think.


The setup

I put together a small repo that does exactly this:

AI CLI Memory Sync Repo
One source of truth for AI behavior across Claude, Copilot and Gemini.

The structure looks like this:

```
.ai/
  INSTRUCTIONS.md        # the only file you edit
scripts/
  ai-sync.mjs            # generates everything else

CLAUDE.md                # generated
GEMINI.md                # generated
AGENTS.md                # generated
.github/
  copilot-instructions.md  # generated
```

You edit .ai/INSTRUCTIONS.md.
Everything else is derived.

Each generated file includes a small header:

DO NOT EDIT — generated from .ai/INSTRUCTIONS.md

That alone eliminates most accidental drift.
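To make the fan-out concrete, here is a minimal sketch of what a generator like `ai-sync.mjs` could look like. The target list, header format, and entry-point check are assumptions based on the description above; the actual script in the repo may differ.

```javascript
// scripts/ai-sync.mjs (sketch) — read the canonical file, fan it out.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname } from "node:path";

const HEADER = "<!-- DO NOT EDIT — generated from .ai/INSTRUCTIONS.md -->\n\n";

// The well-known filenames each CLI looks for (assumed list).
const TARGETS = [
  "CLAUDE.md",
  "GEMINI.md",
  "AGENTS.md",
  ".github/copilot-instructions.md",
];

// Pure step: prepend the warning header to the canonical text.
export function render(canonical) {
  return HEADER + canonical;
}

// Side-effecting step: write the rendered text to every target file.
export function sync(canonicalPath = ".ai/INSTRUCTIONS.md") {
  const canonical = readFileSync(canonicalPath, "utf8");
  for (const target of TARGETS) {
    mkdirSync(dirname(target), { recursive: true });
    writeFileSync(target, render(canonical));
  }
}

// Only run when invoked directly: node scripts/ai-sync.mjs
if (process.argv[1]?.endsWith("ai-sync.mjs")) sync();
```

Keeping `render` pure makes the drift check later on trivial: the same function can be reused to compute what each file *should* contain.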


Automation: make drift impossible

Manual syncing isn't enough; humans forget.

So the repo supports three layers of protection:

1. Live watch mode (local dev)

A file watcher monitors the canonical file and re-syncs on save:

```
npm run ai:watch
```

Change the instructions → all tools update automatically.
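A watcher like this only needs a few lines of Node. This is a sketch, not the repo's actual implementation; the script name `ai-watch.mjs` and the debounce delay are assumptions.

```javascript
// scripts/ai-watch.mjs (sketch) — re-sync whenever the canonical file changes.
import { watch } from "node:fs";
import { execFileSync } from "node:child_process";

export function startWatch(canonicalPath = ".ai/INSTRUCTIONS.md") {
  let timer = null;
  // Editors often fire several change events per save, so debounce briefly.
  return watch(canonicalPath, () => {
    clearTimeout(timer);
    timer = setTimeout(() => {
      execFileSync("node", ["scripts/ai-sync.mjs"], { stdio: "inherit" });
    }, 100);
  });
}

// Only start watching when invoked directly: node scripts/ai-watch.mjs
if (process.argv[1]?.endsWith("ai-watch.mjs")) startWatch();
```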

2. Pre-commit hook (team safety net)

Before every commit:

  • Instructions are re-generated
  • Tool files are staged automatically

No sync, no commit.

3. CI check (last line of defense)

In CI:

```
npm run ai:check
```

If anything is out of sync, the build fails.
No drift reaches main.
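One way to implement a check like `ai:check` without touching the working tree is to re-render the canonical file in memory and compare it to what's on disk. Again a sketch, under the same assumed header string and target list as before; the repo's check may work differently.

```javascript
// scripts/ai-check.mjs (sketch) — fail if any generated file has drifted.
import { readFileSync, existsSync } from "node:fs";

const HEADER = "<!-- DO NOT EDIT — generated from .ai/INSTRUCTIONS.md -->\n\n";
const TARGETS = [
  "CLAUDE.md",
  "GEMINI.md",
  "AGENTS.md",
  ".github/copilot-instructions.md",
];

// Pure comparison: return the targets whose content differs from expected.
// readTarget is injected so this is easy to test without a filesystem.
export function findDrift(canonical, readTarget) {
  const expected = HEADER + canonical;
  return TARGETS.filter((t) => readTarget(t) !== expected);
}

export function check(canonicalPath = ".ai/INSTRUCTIONS.md") {
  const canonical = readFileSync(canonicalPath, "utf8");
  const drifted = findDrift(canonical, (t) =>
    existsSync(t) ? readFileSync(t, "utf8") : null
  );
  if (drifted.length > 0) {
    console.error("Out of sync:", drifted.join(", "));
    process.exitCode = 1; // non-zero exit fails the CI job
  }
}

// Only run when invoked directly: node scripts/ai-check.mjs
if (process.argv[1]?.endsWith("ai-check.mjs")) check();
```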


Why not symlinks?

On macOS/Linux, symlinks work and give you a true single file.

But:

  • Windows support is inconsistent
  • Some tools behave oddly with symlinks
  • Teams tend to trip over it

Generating files is boring, and boring is good.


This is infrastructure glue, not a shiny AI tool

This repo doesn’t:

  • Call any AI APIs
  • Wrap CLIs
  • Add clever abstractions

It does one thing:

Establish a deterministic AI contract for a repo

As we move into a world of multiple agents, multiple copilots, and overlapping AI roles, that contract matters.

Not because it’s fancy.

Because without it, things can fall apart.


Is this overkill?

If you use one AI tool?

Yes.

If you use two or more?

You’ve probably already felt the pain.


The repo

If this resonates, the repo is here:

AI CLI Memory Sync Repo
https://github.com/Tawe/AI-CLI-Memory-Sync-Repo

Steal it. Fork it. Adapt it.

And if you ever find yourself wondering why Claude and Copilot behave differently in the same repo, check your memory files first.

That’s usually where the truth is hiding.

Top comments (3)

Ned C

This solves a real problem. I've been writing about .cursorrules patterns and one thing that keeps coming up is how much time people spend maintaining near-identical instruction files across tools.

The "canonicalize then fan out" approach is the right call. The alternative (manually keeping CLAUDE.md, .cursorrules, and copilot-instructions.md in sync) guarantees drift within a week.

One thing I'd add: the generated files could include tool-specific sections that only apply to that particular CLI. For example, Claude handles negative constraints better than some other tools, so you might want a Claude-specific addendum that gets appended during generation. A shared core plus tool-specific overrides.

The CI check is the real MVP here. Pre-commit hooks are great but they can be bypassed. A failing build can't.

John Munn

Appreciate this; you really nailed the core issue. The “canonicalize then fan out” model is really about preventing drift, and I like the idea of layering tool-specific overrides on top of a shared core. And yeah, the CI check is doing the real enforcement work here; hooks help, but a failing build stops entropy.

Ned C

yeah the CI step is what actually makes this hold. without it the shared core just becomes another config file that drifts and nobody notices until three tools are doing different things