A small moment that changed how I think about AI coding tools.
If you've used Claude Code for more than a week, you know the pattern.
You tell it something. It does it. Next session — gone. You tell it again. It does it. Next session — gone again.
For me, it was always the same three things:
- "Keep components dumb. Logic goes in services."
- "Use constants instead of string literals."
- "Every new service or helper needs a unit test."
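Taken together, those three rules describe one shape of code. A minimal sketch, with invented names (FORM_FIELDS, AddressService are illustrative, not from the actual project) and Angular decorators omitted so it reads as plain TypeScript:

```typescript
// Rule 2: constants instead of string literals.
export const FORM_FIELDS = {
  address: 'address',
  postalCode: 'postalCode',
} as const;

// Rule 1: the logic lives in a service...
export class AddressService {
  // True only when both required fields are filled in.
  isComplete(value: Record<string, string>): boolean {
    return Boolean(value[FORM_FIELDS.address] && value[FORM_FIELDS.postalCode]);
  }
}

// ...while the component stays dumb: inject the service and forward to it.
export class AddressComponent {
  constructor(private readonly address: AddressService) {}

  canSubmit(value: Record<string, string>): boolean {
    return this.address.isComplete(value);
  }
}
```

(Rule 3 would add an address.service.spec.ts next to the service.)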
I said these things so many times they started to feel like a ritual. Open Claude, remind Claude, work, close Claude, repeat.
Then one day I was watching the terminal output — and I saw something I didn't expect.
What happened
I was working on a mid-sized Angular project — forms, validators, lazy-loaded modules, the usual stuff.
I'd just corrected Claude again on the dumb-components rule. Nothing unusual. But after it applied the fix, instead of stopping there, it kept running, and I happened to be watching the terminal.
It was writing files.
Not code files. Not component files. Its own internal files.
Right there in the console output, I could see it creating memory.md, then feedback_dumb_components.md, then feedback_unit_tests.md — one by one.
I hadn't asked for any of this.
The files it created
Claude Code had built its own memory index and a set of structured feedback files — one for each rule it had learned from our sessions together.
Here's the actual memory.md it wrote:
# Memory Index
- [Project: Users Management](project_users_management.md)
— Angular frontend for Users Management; tabs, forms, validators
- [Feedback: OnPush + forms](feedback_onpush_forms.md)
— How to correctly update OnPush components when reactive form state changes
- [Feedback: No inline styles](feedback_no_inline_styles.md)
— Never use style attribute in HTML; always use scss files
- [Feedback: Dumb components + constants](feedback_dumb_components.md)
— Components must be dumb/agnostic; logic in services/helpers; always use constants
- [Feedback: Unit tests required](feedback_unit_tests.md)
— Every new service, helper, or util file must have a .spec.ts unit test file
And here's what it wrote inside feedback_dumb_components.md:
name: Dumb components + constants rule
type: feedback
Components should be dumb and agnostic — they bind signals/inputs from services
and delegate all logic elsewhere. Business logic, form orchestration, and state
management belong in services or helper functions, not in component classes.
Never use string literals in TypeScript code where a constant exists or can be created.
Why: User explicitly requested this as a project rule.
How to apply:
- Move any non-trivial logic (subscriptions, form manipulation, state derivation)
into a service or helper.
- Before writing a string like 'address', 'postalCode', etc., check if a constant
already exists. If not, create one.
- Components should only: inject services, expose signals/computed for the template,
and forward events.
And feedback_unit_tests.md:
name: Unit tests required for services, helpers, and utils
type: feedback
Whenever a new service, helper, or utility file is created, a unit test file
(.spec.ts) must also be created for it.
Why: User explicitly requested this as a project rule.
How to apply: After writing a new .service.ts, helper.ts, or utils.ts file,
immediately create the corresponding .spec.ts file with at minimum a basic
test suite skeleton covering the main logic.
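A minimal skeleton in the spirit of that rule might look like this. NameFormatterService is an invented example, and describe/it here are tiny stand-ins so the snippet runs as plain TypeScript; in a real Angular project they come from Jasmine or Jest:

```typescript
// Stand-ins for the test-runner globals, for illustration only.
const describe = (_name: string, body: () => void) => body();
const it = (_name: string, body: () => void) => body();

// name-formatter.service.ts (hypothetical service under test)
export class NameFormatterService {
  format(first: string, last: string): string {
    return `${last}, ${first}`;
  }
}

// name-formatter.service.spec.ts: the matching skeleton, created
// immediately after the service, covering the main logic.
describe('NameFormatterService', () => {
  const service = new NameFormatterService();

  it('formats names last-name first', () => {
    if (service.format('Ada', 'Lovelace') !== 'Lovelace, Ada') {
      throw new Error('format() broke the expected "Last, First" shape');
    }
  });
});
```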
It didn't just remember the rules. It documented why they exist and how to apply them. Like a junior dev writing up notes after a code review — except nobody told it to.
Why this moment stuck with me
Watching it happen in real time in the terminal was different from finding the files afterwards.
I saw Claude finish applying my correction, then immediately pivot to: "I should make sure I don't repeat this mistake." And then it built a system to prevent that.
That's not autocomplete. That's not even a smart assistant. That's something closer to a collaborator that takes feedback seriously.
We talk a lot about prompting — how to write better instructions, how to get better output. But what Claude did here is different. It turned my feedback into its own system, without being asked.
What changed in my workflow after this
Since seeing this happen, I've started being more deliberate about corrections. When Claude makes a mistake I've seen before, instead of just fixing it in the moment, I now say:
"This is a pattern I always want you to follow. Make sure you remember this."
And it does. It updates its own memory files.
The result is that my sessions now build on each other instead of starting from zero. The tool gets more useful the longer I work with it — which is exactly what you want from any collaborator.
The honest take
Claude Code still makes mistakes. It still occasionally ignores its own memory. It's not perfect.
But watching it write those files in real time made me realize I'd been thinking about AI tools the wrong way. I was treating each session as isolated.
Claude was treating them as a relationship.
One of us was doing it right.
10 years of Angular. Now learning from my own tools.
Follow @eli_coding on Instagram for weekly posts on Angular, AI and real engineering.
Top comments (1)
This is a lovely accident — and I think the instinct Claude stumbled into is worth formalizing. I've been doing the explicit version of this for months: a single CONVENTIONS.md at the project root, with an opening line in every session that says "read CONVENTIONS.md first and flag any conflict before acting." It catches the same rot you're describing (early returns, naming, library choices) but gives you one file to version-control instead of a growing index the model maintains.
The part that surprised me when I switched: the quality of what you write down matters more than the quantity. Short imperative rules ("logic goes in services, not components") beat long explanations every time. Curious how your index-style memory holds up after a few weeks — does it start contradicting itself, or stay coherent?
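For comparison, the single-file version described here might look something like this (a hypothetical sketch in the short-imperative style the comment recommends, not anyone's actual file):

```markdown
# CONVENTIONS.md
Read this first. Flag any conflict before acting.

- Components stay dumb; logic goes in services or helpers.
- Use constants instead of string literals.
- Every new service, helper, or util gets a .spec.ts.
- Prefer early returns over nested conditionals.
```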