I Keep Telling Claude the Same Things. So He Started Writing Them Down Himself.

Unprompted creation of local memory files

A small moment that changed how I think about AI coding tools.


If you've used Claude Code for more than a week, you know the pattern.

You tell it something. It does it. Next session — gone. You tell it again. It does it. Next session — gone again.

For me, it was always the same three things:

  • "Keep components dumb. Logic goes in services."
  • "Use constants instead of string literals."
  • "Every new service or helper needs a unit test."

I said these things so many times they started to feel like a ritual. Open Claude, remind Claude, work, close Claude, repeat.

Then one day I was watching the terminal output — and I saw something I didn't expect.


What happened

I was working on a mid-sized Angular project — forms, validators, lazy-loaded modules, the usual stuff.

I'd just corrected Claude again on the dumb components rule. Nothing unusual. But instead of applying the fix and moving on, Claude kept running, and I watched the terminal to see why.

It was writing files.

Not code files. Not component files. Its own internal files.

Right there in the console output, I could see it creating memory.md, then feedback_dumb_components.md, then feedback_unit_tests.md — one by one.

I hadn't asked for any of this.


The files it created

Claude Code had built its own memory index and a set of structured feedback files — one for each rule it had learned from our sessions together.

Here's the actual memory.md it wrote:

# Memory Index
- [Project: Users Management](project_users_management.md) 
  — Angular frontend for Users Management; tabs, forms, validators
- [Feedback: OnPush + forms](feedback_onpush_forms.md) 
  — How to correctly update OnPush components when reactive form state changes
- [Feedback: No inline styles](feedback_no_inline_styles.md) 
  — Never use style attribute in HTML; always use scss files
- [Feedback: Dumb components + constants](feedback_dumb_components.md) 
  — Components must be dumb/agnostic; logic in services/helpers; always use constants
- [Feedback: Unit tests required](feedback_unit_tests.md) 
  — Every new service, helper, or util file must have a .spec.ts unit test file
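
For readers outside Angular, the "OnPush + forms" entry is worth unpacking. Here's a minimal sketch of the pattern that rule describes, assuming a hypothetical AddressFormComponent (the name is mine, not the project's):

```typescript
// Sketch of the "OnPush + forms" rule: with OnPush change detection,
// programmatic form-state changes don't mark the view dirty on their own,
// so the component must call markForCheck() explicitly.
import { ChangeDetectionStrategy, ChangeDetectorRef, Component, OnInit, inject } from '@angular/core';
import { FormControl, ReactiveFormsModule } from '@angular/forms';

@Component({
  selector: 'app-address-form',
  standalone: true,
  imports: [ReactiveFormsModule],
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<input [formControl]="postalCode" /> <span>{{ status }}</span>`,
})
export class AddressFormComponent implements OnInit {
  private readonly cdr = inject(ChangeDetectorRef);
  readonly postalCode = new FormControl('');
  status = '';

  ngOnInit(): void {
    this.postalCode.statusChanges.subscribe((status) => {
      this.status = status;
      // Without this, the OnPush view keeps showing the stale status.
      this.cdr.markForCheck();
    });
  }
}
```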

And here's what it wrote inside feedback_dumb_components.md:

name: Dumb components + constants rule
type: feedback

Components should be dumb and agnostic — they bind signals/inputs from services 
and delegate all logic elsewhere. Business logic, form orchestration, and state 
management belong in services or helper functions, not in component classes.

Never use string literals in TypeScript code where a constant exists or can be created.

Why: User explicitly requested this as a project rule.

How to apply:
- Move any non-trivial logic (subscriptions, form manipulation, state derivation) 
  into a service or helper.
- Before writing a string like 'address', 'postalCode', etc., check if a constant 
  already exists. If not, create one.
- Components should only: inject services, expose signals/computed for the template, 
  and forward events.
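
To make that rule concrete, here's a minimal sketch of a compliant service/component pair. UserFormService, UserFormComponent, and FIELD_KEYS are illustrative names, not the project's actual code:

```typescript
import { ChangeDetectionStrategy, Component, Injectable, inject, signal } from '@angular/core';

// Constants instead of string literals: reuse these wherever a form key
// is needed, so 'address' or 'postalCode' never appear as raw strings.
export const FIELD_KEYS = {
  address: 'address',
  postalCode: 'postalCode',
} as const;

@Injectable({ providedIn: 'root' })
export class UserFormService {
  // Business logic, form orchestration, and state live here.
  private readonly _saving = signal(false);
  readonly saving = this._saving.asReadonly();

  save(): void {
    this._saving.set(true);
    // ...validate, persist, then this._saving.set(false).
  }
}

@Component({
  selector: 'app-user-form',
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<button (click)="save()" [disabled]="saving()">Save</button>`,
})
export class UserFormComponent {
  // Dumb component: inject the service, expose its signals, forward events.
  private readonly userFormService = inject(UserFormService);
  readonly saving = this.userFormService.saving;

  save(): void {
    this.userFormService.save();
  }
}
```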

And feedback_unit_tests.md:

name: Unit tests required for services, helpers, and utils
type: feedback

Whenever a new service, helper, or utility file is created, a unit test file 
(.spec.ts) must also be created for it.

Why: User explicitly requested this as a project rule.

How to apply: After writing a new .service.ts, helper.ts, or utils.ts file, 
immediately create the corresponding .spec.ts file with at minimum a basic 
test suite skeleton covering the main logic.
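
For completeness, the "basic test suite skeleton" that rule asks for is roughly the stock Angular/Jasmine scaffold. UsersService is a placeholder name here:

```typescript
// users.service.spec.ts: the bare-minimum skeleton the rule requires,
// to be extended with tests covering the service's main logic.
import { TestBed } from '@angular/core/testing';
import { UsersService } from './users.service';

describe('UsersService', () => {
  let service: UsersService;

  beforeEach(() => {
    TestBed.configureTestingModule({});
    service = TestBed.inject(UsersService);
  });

  it('should be created', () => {
    expect(service).toBeTruthy();
  });

  // TODO: cover the main logic, e.g. mapping and validation paths.
});
```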

It didn't just remember the rules. It documented why they exist and how to apply them. Like a junior dev writing up notes after a code review — except nobody told it to.


Why this moment stuck with me

Watching it happen in real time in the terminal was different from finding files afterwards.

I saw Claude finish applying my correction, then immediately pivot to: "I should make sure I don't repeat this mistake." And then it built a system to prevent that.

That's not autocomplete. That's not even a smart assistant. That's something closer to a collaborator that takes feedback seriously.

We talk a lot about prompting — how to write better instructions, how to get better output. But what Claude did here is different. It turned my feedback into its own system, without being asked.


What changed in my workflow after this

Since seeing this happen, I've started being more deliberate about corrections. When Claude makes a mistake I've seen before, instead of just fixing it in the moment, I now say:

"This is a pattern I always want you to follow. Make sure you remember this."

And it does. It updates its own memory files.

The result is that my sessions now build on each other instead of starting from zero. The tool gets more useful the longer I work with it — which is exactly what you want from any collaborator.


The honest take

Claude Code still makes mistakes. It still occasionally ignores its own memory. It's not perfect.

But watching it write those files in real time made me realize I'd been thinking about AI tools the wrong way. I was treating each session as isolated.

Claude was treating them as a relationship.

One of us was doing it right.


10 years of Angular. Now learning from my own tools.

Follow @eli_coding on Instagram for weekly posts on Angular, AI and real engineering.

Top comments (11)

Nova Elvaris

This is a lovely accident — and I think the instinct Claude stumbled into is worth formalizing. I've been doing the explicit version of this for months: a single CONVENTIONS.md at the project root, with an opening line in every session that says "read CONVENTIONS.md first and flag any conflict before acting." It catches the same rot you're describing (early returns, naming, library choices) but gives you one file to version-control instead of a growing index the model maintains.

The part that surprised me when I switched: the quality of what you write down matters more than the quantity. Short imperative rules ("logic goes in services, not components") beat long explanations every time. Curious how your index-style memory holds up after a few weeks — does it start contradicting itself, or stay coherent?
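
(A sketch of what a file like that might contain; the rules below are invented for illustration, not Nova's actual CONVENTIONS.md:)

```markdown
# CONVENTIONS.md (read first; flag any conflict before acting)

- Logic goes in services, not components.
- Use constants instead of string literals.
- Every new service or helper gets a .spec.ts.
- Prefer early returns over nested conditionals.
- Stick to the libraries already in package.json; ask before adding one.
```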

Eli_coding

Love the CONVENTIONS.md approach. Explicitly telling the model to flag conflicts is a game changer. To your point on coherence: yes, the index gets 'noisy' fast if I don't prune it. Does your file stay pretty static, or are you constantly updating it as the project evolves?

Cyber Safety Zone

Really interesting shift in mindset—this is where AI starts feeling less like a tool and more like a junior dev that actually learns from feedback.

The idea of Claude creating its own memory files and turning repeated corrections into reusable rules is a big step toward persistent context, instead of the usual “start from zero every session” problem.

Also love the subtle takeaway: it’s not just about prompting better, it’s about teaching the model intentionally over time.

Eli_coding

That junior dev analogy is spot on. Just like a real junior dev, if you don't give Claude clear feedback, it’ll keep making the same mistakes, but once you document those rules, the growth is exponential. We’re definitely moving into an era of "AI Management" rather than just "AI Prompting". Thanks for sharing that perspective!

Wes

The specific examples of what Claude remembered (dumb components, constants over string literals, unit tests for new services) make this concrete instead of abstract, which is what makes it useful as a write-up.

The part that's missing is what happens six months in when those memories go stale. Projects evolve: you might adopt a new pattern that contradicts an old rule, or drop OnPush for a specific module where it causes more problems than it solves. File-based memories don't know when they've become wrong. They just keep firing. At that point the "collaborator" actively fights legitimate architectural changes because it's faithfully following instructions you gave in a different context. The accumulation this frames as a pure win has a maintenance cost that only shows up later, and the failure mode is subtle: the AI confidently applies an outdated rule instead of doing nothing.

Have you hit a case yet where a saved memory started working against you, and if so, how did you decide it was the memory and not a mistake in your new approach?

Eli_coding

You’ve perfectly described Context Debt. I’m still early in this experiment, but I can already see that six-month wall approaching.

My plan for the 'hard prune' is low-tech: the moment I find myself arguing with the AI, I’ll manually clear the stale entries or have it regenerate the index based only on the current state. How do you keep your own rules from becoming 'confidently wrong' as your projects evolve?

Ethan Frost

This is exactly the workflow problem I've been wrestling with too. The "ritual" of re-explaining context to Claude every session is real, and CLAUDE.md is a clean solution.

One pattern I've been exploring: sharing skills/prompts/CLAUDE.md snippets across a team via a registry. Started tokrepo.com as an experiment — basically npm for AI agent assets (skills, MCP configs, prompt templates). The idea is you curate your own "ritual" once, then pull it into any project with a single command.

Curious if you've thought about versioning these memory files? Once Claude starts writing them autonomously, keeping them clean over time becomes its own problem.

Ethan Frost

This is exactly the 'skills as versioned files' pattern I've been obsessed with lately. Once you start treating learned corrections as markdown you can diff, the next logical step is sharing them across projects (and across people). I've been putting mine up at tokrepo.com — basically npm-but-for-claude-skills. The compounding effect you describe hits even harder when you can pull in someone else's battle-tested rules instead of discovering each one yourself.

Eli_coding

I love the "skills as versioned files" framing. The moment we start diffing our AI instructions, we stop being prompt engineers and start being AI Systems Architects.

That’s a really compelling vision. I took a look at TokRepo, and the idea of **tokrepo install** for agent skills is exactly the kind of portability we need right now.

My main takeaway from your project is that it moves us away from Prompt Engineering and toward Skill Management. If I can pull in a battle-tested CLAUDE.md snippet for a specific tech stack, I’m saving days of Context Debt accumulation.

One question: how do you envision the merging process? If I pull a skill from TokRepo but Claude has already written its own local memories, do you see the CLI eventually handling conflict resolution like a Git merge? That would be the holy grail.
