DEV Community

Olivia Craft
Why Claude Ignores Your Instructions (And How to Fix It With CLAUDE.md)

You write clear instructions. Claude reads them. Then it does something completely different.

Not wrong, exactly. Just... not what you asked for. It adds docstrings you didn't request. It refactors code you told it to leave alone. It "improves" things that were fine. You correct it, it apologizes, and three prompts later it drifts again.

This isn't a bug. It's how large language models process context — and once you understand the mechanism, you can fix it with a single file.


Why LLMs Drift From Your Instructions

Large language models don't "remember" your instructions the way you remember a conversation. They process a context window — a fixed block of text that includes the system prompt, your messages, and the model's own responses.

Here's the problem: as the conversation grows, your original instructions get pushed further from the active generation point. The model's attention shifts toward recent context. Your carefully written rules become background noise.

This is called context drift, and it happens in every long coding session. The model isn't ignoring you on purpose. Your instructions are just losing the attention competition against more recent tokens.

There's a second problem: instruction ambiguity. When you say "keep it simple," Claude interprets that against its training data. Its definition of "simple" might include adding type annotations, error handling, and comments — things that feel helpful but weren't what you meant.

Verbal instructions in chat are inherently lossy. They decay with every new message.
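One way to cut that loss is to pin the ambiguous term down in writing. Here's a sketch of what "keep it simple" might translate to in a CLAUDE.md entry — the specific rules are illustrative, not prescriptive; yours should encode your own definition:

```markdown
## Definition of "simple"
- No type annotations beyond what the compiler requires
- No error handling unless the task asks for it
- No explanatory comments; the code should read on its own
```

Once the definition lives in a file, Claude evaluates "simple" against your rules instead of its training data.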


Passive Rules vs. Enforcement Hooks

Most developers try to fix drift by writing longer prompts. More detail, more examples, more emphasis. This doesn't work because it treats the symptom (bad output) instead of the cause (attention decay).

There are two fundamentally different approaches to controlling AI behavior:

Passive rules are instructions you write in a prompt or system message. They work at the start of a conversation but degrade over time. They have no mechanism to reassert themselves.

Enforcement hooks are structural — they exist in the environment, not just the conversation. Claude Code's CLAUDE.md file is an enforcement hook. It gets loaded into context at the start of every interaction, regardless of conversation length. It doesn't decay because it's re-read, not remembered.

The difference matters. A passive rule says "don't add comments." An enforcement hook says "don't add comments" in a file that Claude re-reads every time it acts on your code.
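Concretely, the hook is just a markdown file at the project root, which Claude Code picks up automatically when it starts working in that directory (the file and folder names below are illustrative):

```
my-project/
├── CLAUDE.md        # loaded into context at the start of every interaction
├── package.json
└── src/
    └── api-client.ts
```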


3 CLAUDE.md Patterns That Actually Stick

After testing dozens of configurations, I've found that these three patterns consistently prevent drift.

Pattern 1: Negative Constraints Before Positive Instructions

Claude's training biases it toward being helpful — which means adding things. Counter this by leading with explicit prohibitions.

Before (drifts after ~5 interactions):

```markdown
## Rules
- Write concise code
- Follow existing patterns
- Use TypeScript
```

After (holds across long sessions):

```markdown
## Rules
# DO NOT
- Do not add comments or docstrings to code you didn't write
- Do not refactor code unrelated to the current task
- Do not add error handling unless explicitly asked
- Do not introduce new dependencies without approval

# DO
- Follow existing patterns in the codebase
- Write concise code — fewer lines is better
- Use TypeScript with strict mode
```

Why this works: negative constraints create hard boundaries. They're easier for the model to evaluate ("did I add a comment? yes → remove it") than vague positive guidance ("is this concise enough?").

Pattern 2: Scope Anchoring With Task Boundaries

Drift happens when Claude expands the scope of a task beyond what you asked. Anchor the scope explicitly.

Before:

```markdown
- Keep changes minimal
```

After:

```markdown
## Scope Rules
- Only modify files directly related to the current request
- One logical change per response — if a task requires multiple changes, list them and wait for approval
- If you notice a bug or improvement outside the current scope, mention it but do NOT fix it
- Never modify test files unless the task specifically involves tests
```

The key phrase is "mention it but do NOT fix it." This gives Claude a pressure release valve — it can still flag issues without acting on them, which satisfies its helpfulness training without corrupting your task.

Pattern 3: Format Enforcement With Concrete Examples

Abstract style rules ("be concise") get reinterpreted. Concrete examples don't.

Before:

```markdown
- Write clean commit messages
```

After:

```markdown
## Commit Messages
Format: <type>: <description in lowercase, max 72 chars>
Types: fix, feat, refactor, test, docs

Example:
  fix: resolve race condition in websocket reconnection

NOT:
  Fixed the race condition bug that was causing websocket reconnection issues
  Updated websocket handler to fix race condition and improve reliability
```

Showing the wrong output is as important as showing the right output. Claude is trained on pattern matching — giving it a negative example is a stronger signal than a positive rule alone.


Before and After: A Real Session

Here's what a typical coding session looks like without and with these patterns.

Without CLAUDE.md:

You: "Add a retry mechanism to the API client"

Claude: adds retry logic, refactors the error handling, adds JSDoc comments to three functions, creates a new RetryConfig type, and suggests moving the client to a separate module

Five changes when you asked for one.

With CLAUDE.md (using all three patterns):

You: "Add a retry mechanism to the API client"

Claude: adds retry logic with exponential backoff to the existing fetch wrapper. Notes: "I noticed the error handling could be simplified, but that's outside the current scope — want me to address it separately?"

One change. One note. No drift.


The Mechanism Behind the Fix

CLAUDE.md works because it changes where instructions live. Instead of existing in conversation context (which decays), instructions exist in the project structure (which persists). Every time Claude Code reads your project, it re-reads CLAUDE.md. Your rules don't compete with recent context — they reset with every interaction.

This is the difference between telling someone a rule once and posting it on the wall where they work.
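Putting the three patterns together, a minimal CLAUDE.md skeleton might look like this (the rules are condensed from the examples above; adapt the specifics to your project):

```markdown
# DO NOT
- Do not add comments, docstrings, or error handling unless asked
- Do not refactor or modify files outside the current task

## Scope Rules
- One logical change per response
- Mention out-of-scope issues, but do NOT fix them

## Commit Messages
Format: <type>: <description in lowercase, max 72 chars>
Example: fix: resolve race condition in websocket reconnection
```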


Start With Structure, Not Hope

If Claude is ignoring your instructions, don't write longer prompts. Write a CLAUDE.md file with explicit negative constraints, scope boundaries, and concrete format examples.

I've packaged 40+ production-tested rules into a drop-in CLAUDE.md file that covers scope control, code style enforcement, commit formatting, and drift prevention. If you want a ready-made starting point: CLAUDE.md Rules Pack.

If you're using Cursor, the same structural principles apply to .cursorrules files. I have a separate pack for that: Cursor Rules Pack v2.

Stop hoping the AI will remember. Make it re-read.
