DEV Community


I Keep Telling Claude the Same Things. So He Started Writing Them Down Himself.

Eli_coding on April 10, 2026

A small moment that changed how I think about AI coding tools. If you've used Claude Code for more than a week, you know the pattern. You tell...
Nova Elvaris

This is a lovely accident — and I think the instinct Claude stumbled into is worth formalizing. I've been doing the explicit version of this for months: a single CONVENTIONS.md at the project root, with an opening line in every session that says "read CONVENTIONS.md first and flag any conflict before acting." It catches the same rot you're describing (early returns, naming, library choices) but gives you one file to version-control instead of a growing index the model maintains.

The part that surprised me when I switched: the quality of what you write down matters more than the quantity. Short imperative rules ("logic goes in services, not components") beat long explanations every time. Curious how your index-style memory holds up after a few weeks — does it start contradicting itself, or stay coherent?
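For readers who want to try Nova's setup, a minimal CONVENTIONS.md in that spirit might look like the sketch below. The rule wording is illustrative, assembled from examples mentioned in this thread, not copied from anyone's real file:

```markdown
# CONVENTIONS.md
# Read this file first. Flag any conflict with it before acting.

- Prefer early returns over nested conditionals.
- Logic goes in services, not components.
- Use constants instead of string literals.
- Every new service gets a unit test.
```

Short imperative rules like these are easy to diff and prune, which is exactly the property Nova credits for keeping the file coherent.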

Eli_coding

Love the CONVENTIONS.md approach. Explicitly telling the model to flag conflicts is a game changer. To your point on coherence: yes, the index gets 'noisy' fast if I don't prune it. Does your file stay pretty static, or are you constantly updating it as the project evolves?

Cyber Safety Zone

Really interesting shift in mindset—this is where AI starts feeling less like a tool and more like a junior dev that actually learns from feedback.

The idea of Claude creating its own memory files and turning repeated corrections into reusable rules is a big step toward persistent context, instead of the usual “start from zero every session” problem.

Also love the subtle takeaway: it’s not just about prompting better, it’s about teaching the model intentionally over time.

Eli_coding

That junior dev analogy is spot on. Just like a real junior dev, if you don't give Claude clear feedback, it’ll keep making the same mistakes, but once you document those rules, the growth is exponential. We’re definitely moving into an era of "AI Management" rather than just "AI Prompting". Thanks for sharing that perspective!

Wes

The specific examples of what Claude remembered (dumb components, constants over string literals, unit tests for new services) make this concrete instead of abstract, which is what makes it useful as a write-up.

The part that's missing is what happens six months in when those memories go stale. Projects evolve: you might adopt a new pattern that contradicts an old rule, or drop OnPush for a specific module where it causes more problems than it solves. File-based memories don't know when they've become wrong. They just keep firing. At that point the "collaborator" actively fights legitimate architectural changes because it's faithfully following instructions you gave in a different context. The accumulation this frames as a pure win has a maintenance cost that only shows up later, and the failure mode is subtle: the AI confidently applies an outdated rule instead of doing nothing.

Have you hit a case yet where a saved memory started working against you, and if so, how did you decide it was the memory and not a mistake in your new approach?
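One low-tech guard against the staleness Wes describes is to date-stamp rules and audit them on a schedule. The sketch below assumes a hypothetical convention where each rule in the memory file carries an HTML-comment date stamp; nothing in the original post prescribes this format.

```python
# Sketch: flag memory-file rules for periodic human review.
# Assumes a hypothetical stamp convention like:
#   - Prefer dumb components. <!-- added 2025-04-01 -->
import re
from datetime import date, timedelta

STAMP = re.compile(r"<!-- added (\d{4}-\d{2}-\d{2}) -->")

def stale_rules(text: str, today: date, max_age_days: int = 180):
    """Return rule lines whose stamp is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    stale = []
    for line in text.splitlines():
        m = STAMP.search(line)
        if m and date.fromisoformat(m.group(1)) < cutoff:
            stale.append(line.strip())
    return stale

rules = """\
- Prefer dumb components. <!-- added 2025-04-01 -->
- Use OnPush change detection. <!-- added 2024-01-15 -->
"""
print(stale_rules(rules, today=date(2025, 6, 1)))
```

A stale rule isn't necessarily wrong; the point is that it gets re-read by a human instead of firing silently forever.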

Eli_coding

You’ve perfectly described Context Debt. I’m still early in this experiment, but I can already see that six-month wall approaching.

My plan for the 'hard prune' is low-tech: the moment I find myself arguing with the AI, I’ll manually clear the stale entries or have it regenerate the index based only on the current state. How do you keep your own rules from becoming 'confidently wrong' as your projects evolve?

Ethan Frost

This is exactly the workflow problem I've been wrestling with too. The "ritual" of re-explaining context to Claude every session is real, and CLAUDE.md is a clean solution.

One pattern I've been exploring: sharing skills/prompts/CLAUDE.md snippets across a team via a registry. Started tokrepo.com as an experiment — basically npm for AI agent assets (skills, MCP configs, prompt templates). The idea is you curate your own "ritual" once, then pull it into any project with a single command.

Curious if you've thought about versioning these memory files? Once Claude starts writing them autonomously, keeping them clean over time becomes its own problem.

Ethan Frost

this is exactly the 'skills as versioned files' pattern i've been obsessed with lately. once you start treating learned corrections as markdown you can diff, the next logical step is sharing them across projects (and across people). been putting mine up at tokrepo.com — basically npm-but-for-claude-skills. the compounding effect you describe hits even harder when you can pull in someone else's battle-tested rules instead of discovering each one yourself.

Eli_coding

I love the "skills as versioned files" framing. The moment we start diffing our AI instructions, we stop being prompt engineers and start being AI Systems Architects.

That’s a really compelling vision. I took a look at TokRepo, and the idea of `tokrepo install` for agent skills is exactly the kind of portability we need right now.

My main takeaway from your project is that it moves us away from Prompt Engineering and toward Skill Management. If I can pull in a battle-tested CLAUDE.md snippet for a specific tech stack, I’m saving days of Context Debt accumulation.

One question: how do you envision the merging process? If I pull a skill from TokRepo, but Claude has already written its own local memories, do you see the CLI eventually handling conflict resolution like a Git merge? That would be the holy grail.
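The merge Eli is asking about could start much simpler than a full Git-style three-way merge. Here is a rough sketch of the idea: union two rule lists, keep exact duplicates once, and surface same-topic rules for human review rather than auto-resolving them. The "first two words define a topic" heuristic is purely illustrative, not anything TokRepo actually does.

```python
# Sketch: merge pulled skill rules into locally grown memories.
# Exact duplicates are kept once; rules that share a crude "topic"
# (first two words, lowercased) are flagged as conflicts for a human.
def merge_rules(local: list[str], pulled: list[str]):
    merged = list(local)
    conflicts = []
    topics = {tuple(r.lower().split()[:2]) for r in local}
    for rule in pulled:
        if rule in merged:
            continue  # exact duplicate: keep a single copy
        if tuple(rule.lower().split()[:2]) in topics:
            conflicts.append(rule)  # same topic, different wording
        else:
            merged.append(rule)
    return merged, conflicts

local = ["Prefer constants over string literals.",
         "Write unit tests for new services."]
pulled = ["Prefer dumb components.",
          "Write unit tests for every bugfix."]
merged, conflicts = merge_rules(local, pulled)
```

Routing conflicts to a human instead of picking a winner matches the thread's theme: the model accumulates rules, but pruning and reconciling them stays a deliberate act.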