I went from 471 lines of agent instructions to 61. It got better.
For six months I kept adding rules to my AI agent's CLAUDE.md file. Every time something went wrong, I wrote a rule to prevent it. The file grew. The agent got worse. More instructions created more conflicts, more edge cases, more confusion.
Deleting 87% of the instructions improved performance. This post covers why that happened and what I learned from rebuilding the system three times.
Here's what's in it:
Why less instruction works better than more. Specific rules conflict with each other. Principles generalize. I went from 'when the user asks X, do Y' to 'operate autonomously on reversible decisions.' The agent started making better calls with less guidance.
The difference between memory and intelligence. My agent has four memory layers: working context, persistent memory files, session logs, and reference docs. What I thought was the hard part (which model, which prompts) turned out to matter less than what the agent carries between sessions.
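The four layers above can be sketched as a simple file-backed layout. This is a minimal, hypothetical sketch: the post names the layers but not their file structure, so the paths, filenames, and loader below are illustrative assumptions, not the actual setup.

```python
from pathlib import Path

# Hypothetical layout: the four layers the post describes, mapped to
# illustrative paths. None of these names come from the actual system.
MEMORY_LAYERS = {
    "working_context": None,                        # rebuilt fresh each session
    "persistent_memory": Path("memory/MEMORY.md"),  # carried between sessions
    "session_logs": Path("logs/"),                  # append-only history
    "reference_docs": Path("docs/"),                # stable background material
}

def load_persistent_layers(layers: dict) -> dict:
    """Read every file-backed layer into the agent's starting context."""
    context = {}
    for name, path in layers.items():
        if path is None:
            context[name] = ""          # working context starts empty
        elif path.is_dir():
            context[name] = "\n".join(
                p.read_text() for p in sorted(path.glob("*.md"))
            )
        elif path.exists():
            context[name] = path.read_text()
        else:
            context[name] = ""          # missing layer: degrade gracefully
    return context
```

The point of the split is that only the persistent layers need to survive a restart; the working context is disposable by design.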
What fails silently and how to catch it. Three things broke over six months, and I didn't notice until much later: the feedback loop, the error registry, and the planning system. Each ran for days while appearing to work. The current setup has a 13-point health check at session start.
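A session-start health check like the one mentioned above can be sketched as a list of named pass/fail probes. The post doesn't enumerate its 13 points, so the three checks below are hypothetical examples of the pattern, not the real list.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Check:
    name: str
    passed: bool
    detail: str = ""

def run_health_checks(root: Path = Path(".")) -> list:
    """Run a few illustrative session-start checks. The idea is that
    failures surface on the first prompt instead of days later."""
    checks = []

    # Hypothetical check 1: the persistent memory file is readable.
    mem = root / "memory" / "MEMORY.md"
    checks.append(Check("persistent memory readable", mem.is_file(), str(mem)))

    # Hypothetical check 2: the session log directory exists.
    logs = root / "logs"
    checks.append(Check("session log directory exists", logs.is_dir(), str(logs)))

    # Hypothetical check 3: the instructions file exists and stays short.
    instructions = root / "CLAUDE.md"
    short_enough = instructions.is_file() and \
        len(instructions.read_text().splitlines()) <= 100
    checks.append(Check("instructions file present and short", short_enough,
                        str(instructions)))
    return checks

def report(checks: list) -> bool:
    """Print one line per check; return True only if everything passed."""
    for c in checks:
        print(("OK  " if c.passed else "FAIL") + f" {c.name} ({c.detail})")
    return all(c.passed for c in checks)
```

Running `report(run_health_checks())` at the top of every session turns silent drift into a visible FAIL line.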
The identity question. There's a real difference between an agent that knows your preferences and one that knows who you are. The former gives you a faster version of what you asked for. The latter starts to anticipate what you actually need. I'm still figuring out where that line gets weird.
The sycophancy risk is real. An MIT study from February 2026 found memory profiles increased sycophancy by 33-45% in Claude and Gemini. The more the model knows about you, the more it tells you what you want to hear. I built the thing the research warns about. Knowing the risk doesn't fix it. But it changes how I use the output.
The agent is running better than ever. The instructions file is shorter than a grocery list. Both of those things are true at the same time.
Full post: https://thoughts.jock.pl/p/how-i-taught-ai-agent-to-think-ep2
Newsletter on AI agents and practical automation: https://thoughts.jock.pl