BLNCraft

4 CLAUDE.md Mistakes That Are Killing Your AI Coding Sessions (And How to Fix Them)

Why your CLAUDE.md is hurting your AI coding sessions

After reviewing dozens of codebases where developers use Cursor and Claude Code, I kept seeing the same CLAUDE.md mistakes over and over. Here is what I found, and how to fix them.


The 4 mistakes that burn your token budget

1. Too long — burns context before actual code

Most CLAUDE.md files I see are 3,000+ tokens of aspirational rules. The AI reads them before every response. That is expensive and often ignored.

Fix: Keep your CLAUDE.md under 800 tokens for the main prelude. Move framework-specific context into separate files using @include.

```markdown
# CLAUDE.md (under 800 tokens)
You are a TypeScript developer. Prefer functional patterns. No default exports.

@include .claude/rules/nextjs.md
@include .claude/rules/testing.md
```

Only the relevant includes load for each file.
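Claude Code resolves these includes internally, so the details are opaque; but the basic idea is that an `@include` directive inlines the referenced file into the prelude. A minimal sketch (illustrative only, not the tool's actual loader, and ignoring the relevance filtering it applies):

```python
from pathlib import Path

def expand_includes(text: str, root: Path) -> str:
    """Replace each `@include <path>` line with that file's contents (one level deep)."""
    out = []
    for line in text.splitlines():
        if line.startswith("@include "):
            # Resolve the path relative to the project root and inline the file.
            path = root / line.removeprefix("@include ").strip()
            out.append(path.read_text())
        else:
            out.append(line)
    return "\n".join(out)
```

The point of the structure is that the expensive, framework-specific text lives outside the prelude and is only pulled in when needed.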


2. Missing glob patterns — rules do not auto-attach

Generic rules like "always write tests" do nothing if your AI does not know when to apply them.

Fix: Use file-scoped rules with globs:

```markdown
# .cursor/rules/testing.mdc
---
globs: ["**/*.test.ts", "**/__tests__/**"]
---
Use describe/it structure. Always mock external dependencies. Coverage threshold: 80%.
```

This rule only fires when you are editing test files. Everything else is unaffected.
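Cursor's matcher is internal, but the attach-by-glob idea can be sketched with Python's `fnmatch` (the rule table and paths below are made up for illustration; note that `fnmatch`'s `*` also crosses `/`, which stricter glob engines treat differently):

```python
from fnmatch import fnmatch

# Hypothetical rule table: glob pattern -> rule file.
# Illustrative only -- not Cursor's real configuration format.
RULES = {
    "*.test.ts": ".cursor/rules/testing.mdc",
    "app/*": ".cursor/rules/nextjs.mdc",
}

def attached_rules(path: str) -> list[str]:
    """Return the rule files whose glob matches the edited file's path."""
    return [rule for pattern, rule in RULES.items() if fnmatch(path, pattern)]

print(attached_rules("src/user.test.ts"))  # only the testing rule fires
print(attached_rules("main.go"))           # no rules attach
```

Scoping this way means a Go file never pays the context cost of your testing conventions.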


3. No framework-specific sections — same rules for Next.js and Go

A monolith CLAUDE.md that covers everything ends up covering nothing. Your Go service does not need React patterns in context.

Fix: Separate rule files per domain, composable via includes:

```
.claude/
  rules/
    nextjs.md      # App Router, RSC, data fetching
    go.md          # error handling, context propagation
    db.md          # migrations, query patterns
    auth.md        # JWT, session, RBAC patterns
    testing.md     # framework-specific testing
```
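As an illustration, `.claude/rules/go.md` might hold only Go-specific guidance, so React patterns never reach your Go service (contents here are a sketch, not a prescription):

```markdown
# .claude/rules/go.md
Wrap errors with fmt.Errorf("...: %w", err); never discard them.
Pass context.Context as the first parameter and propagate cancellation.
Prefer table-driven tests with t.Run subtests.
```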

4. One-file-fits-all rules — multi-rule files get ignored

Long rule files with many concerns get treated as background noise. The AI picks what it thinks is relevant and skips the rest.

Fix: One rule = one concern. Smaller, focused files that auto-attach by glob outperform long catch-all files every time.
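For example, the catch-all testing rule from earlier could be split into two focused, glob-scoped files (file names and contents are illustrative):

```markdown
# .cursor/rules/unit-tests.mdc
---
globs: ["**/*.test.ts"]
---
Use describe/it structure. Mock external dependencies.
```

```markdown
# .cursor/rules/coverage.mdc
---
globs: ["**/*.test.ts"]
---
Coverage threshold: 80%. Flag any untested exported function.
```

Each file now states one concern, so the AI cannot skim past half of it.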


The patterns that actually work

Here is the structure I use across projects:

```
.claude/
  main.md              # ≤800 tokens — team conventions, tech choices
  rules/
    auth/              # 14 rules for auth patterns
    db/                # 18 rules for query + migration patterns
    testing/           # 22 rules for TDD + coverage
    ui/                # 31 rules for component patterns
    deploy/            # 15 rules for CI/CD + deployment
    observability/     # 12 rules for logging + tracing
.cursor/
  rules/
    *.mdc              # glob-scoped, auto-attach
```

What changed when I applied this

  • Token usage per session dropped by ~40% (shorter main context)
  • Rules actually fired at the right time (glob-scoped)
  • New team members could understand the AI setup in under 10 minutes
  • No more "the AI keeps ignoring X" — because X was finally scoped correctly
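If you want to sanity-check that your own prelude stays under the 800-token budget, a crude estimate works (the 4-characters-per-token ratio is a rough heuristic for English prose, not a real tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

prelude = "You are a TypeScript developer. Prefer functional patterns. No default exports."
print(approx_tokens(prelude), "approx tokens")  # well under the 800 budget
```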

TL;DR

  1. Keep CLAUDE.md under 800 tokens, use @include for the rest
  2. Glob-scope every Cursor rule so it fires on the right files
  3. One concern per rules file — no catch-alls
  4. Separate by domain, not by project

I packaged up 162 rules files organized by domain (auth, db, testing, ui, deploy, observability) with 9 CLAUDE.md presets for major frameworks. If you want to skip the iteration: Cursor Rules Pack on Gumroad — use code **BLNCRAFT20** for 20% off launch week.
