TL;DR: Claude Code (CC) has a per-project persistent memory file system. After 60 days of indie work I have 36 memory files that completely changed how the agent operates. Here's the pattern, what to write, what NOT to write, and why it makes a single-CC indie shop feel like a 5-person team.
The problem before memory
Without memory, every CC session starts cold:
- Agent doesn't remember user preferences
- Agent doesn't know which platforms have been blocked
- Agent doesn't know what's already been tried
- Agent asks the same questions repeatedly: "should I use TypeScript? Where's the test directory? Do you want unit tests?"
For indie work where you're shipping fast, this friction is brutal: the user types the same answers 10 times a day.
The memory file structure
CC stores per-project memory in:
~/.claude/projects/<project-id>/memory/
├── MEMORY.md # index, always loaded
├── user_persona.md # who the user is
├── feedback_*.md # rules / preferences
├── state_*.md # current world state
├── project_*.md # project specifics
└── reference_*.md # external systems
Each file has YAML frontmatter:
---
name: feedback memory name
description: One-line searchable description
type: user | feedback | project | reference
---
MEMORY.md is an index of pointers. Each line ~150 chars max. Always loaded into agent context.
Other files are loaded on demand when CC determines relevance.
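As a concrete sketch of how that always-loaded index could be maintained, here is a small Python helper. The file layout, frontmatter format, and 150-char cap come from the post; the function names and parsing logic are my own illustration, not CC internals.

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Parse the YAML-style frontmatter between the leading '---' fences."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def build_index(memory_dir: Path) -> str:
    """Rebuild MEMORY.md: one pointer line per memory file, capped at 150 chars."""
    entries = []
    for f in sorted(memory_dir.glob("*.md")):
        if f.name == "MEMORY.md":
            continue
        meta = parse_frontmatter(f.read_text(encoding="utf-8"))
        desc = meta.get("description", "(no description)")
        entries.append(f"- {f.name}: {desc}"[:150])  # keep the always-loaded index cheap
    return "\n".join(entries) + "\n"

# e.g. build_index(Path.home() / ".claude" / "projects" / "<project-id>" / "memory")
```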
What to write
user_*.md — who the user is
---
name: user persona
description: Communication style, role, expertise
type: user
---
User is an indie iOS developer based in Japan (JST timezone).
Native Chinese speaker. Prefers concise output, root-cause analysis.
Strong Swift / Python / shell. New to React.
feedback_*.md — rules CC must follow
Write these when:
- User corrects you ("no, don't do X")
- User confirms an unusual choice ("yes, that's right")
- A pattern emerges you should always follow
---
name: dont mock database
description: Integration tests must hit real DB, not mocks
type: feedback
---
Integration tests must hit a real database, not mocks.
**Why**: Prior incident where mocked tests passed but prod migration failed.
**How to apply**: When writing integration tests, set up real DB fixtures. Don't use unittest.mock for DB calls.
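A minimal sketch of what "real DB fixtures instead of mocks" can look like in Python, using an in-memory SQLite database. The schema, table, and query here are hypothetical illustrations, not from the post; the point is that the test exercises real SQL, not a `unittest.mock` stand-in.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def real_db():
    """Spin up a real (in-memory) SQLite database for each test instead of
    mocking DB calls. Schema is illustrative only."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
        yield conn
    finally:
        conn.close()

def test_user_insert_roundtrip():
    with real_db() as conn:
        conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
        row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
        assert row == ("a@example.com",)  # real constraint and query paths exercised
```

A mocked version of this test would pass even if the schema or SQL were wrong; the real fixture catches both.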
state_*.md — current world
Write these when:
- A platform's state was hard-won (e.g., "Apple Paid Apps agreement signed 2026-05-06")
- A complex flow was finally figured out
- Live state differs from documented
---
name: ASC IAP pricing flow
description: 7-step CDP automation for $1.99 IAP tier
type: project
---
7-step ASC IAP price tier flow (verified 2026-05-06):
1. Click the "添加定价" (Add Pricing) button
2. Select tier from picker
3. Choose $1.99 (other tiers: $0.99, $2.99, etc.)
4. Click "下一步" (Next) twice
5. Confirm
6. Save
Playwright `click()` doesn't work — use `page.evaluate("...click()")` JS click.
ASC SPA hydration takes 22-25 sec.
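The JS-click workaround can be wrapped in a small helper. `page.evaluate` is real Playwright (Python sync API); the helper name, selector, and commented usage are hypothetical sketches.

```python
def js_click_expr(selector: str) -> str:
    """Build a JS snippet that dispatches a click inside the page context,
    for cases where Playwright's native click() is swallowed (e.g. during
    slow SPA hydration like ASC's 22-25 s)."""
    return f"document.querySelector({selector!r}).click()"

# Hypothetical usage with a Playwright sync-API page:
# page.wait_for_selector("button.add-pricing", timeout=30_000)  # allow for hydration
# page.evaluate(js_click_expr("button.add-pricing"))  # JS click, not page.click()
```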
reference_*.md — external systems
---
name: pipeline bug tracking
description: Where to find bugs in external systems
type: reference
---
Pipeline bugs tracked in Linear project "INGEST".
Slack channel #pipeline for active discussion.
On-call dashboard at grafana.internal/d/pipeline.
What NOT to write
These can be derived from the codebase or git log:
- Code patterns / conventions
- File paths / project structure
- Recent changes
- Specific function signatures
- Debugging fixes
Memory should be about non-obvious context that future-CC needs but can't derive.
The 36-file system after 60 days
My breakdown:
12 user_* files — preferences, communication style
14 feedback_* files — rules CC must follow
8 state_* files — current world state
2 project_* files — project context
0 reference_* files — none yet (will add when integrating with Linear)
Total: 36 files. Each loads in ~50ms. Context cost: ~5-10k tokens per session.
The compound: less friction, more decisive
After 60 days with memory:
- 0 mid-session questions for things memory covers
- CC autonomously decides which platform to use, rather than trying each one in turn
- CC remembers Apple's quirks across sessions (saving 20+ hours of repeat debugging)
- CC adheres to user's communication style without prompting
What this enabled: 4-hour autonomous /autoiter sessions that actually deliver. Without memory, those sessions would be 1 hour of work + 3 hours of "let me explain again how to write the email."
Anti-patterns
Don't write generic AI rules
"Always write tests" — memory loads on every session, this is noise.
Don't write debugging recipes
"To fix the X bug, do Y" — the fix is in git log and the codebase. Memory shouldn't be a wiki.
Don't write status reports
"Project is 60% done" — memory is for permanent rules, not changing state. Use STATUS.md or transparency.html for status.
Don't write what's in CLAUDE.md
CLAUDE.md is loaded on every session anyway. Memory is for what's user-specific or project-specific that CLAUDE.md doesn't cover.
How to start
For a new project:
- Create `~/.claude/projects/<project>/memory/MEMORY.md` (empty index)
- After your first session, ask CC to "save user memory: [what you taught it]"
- After your second session, ask CC to "save feedback memory: [a correction it received]"
- By session 5-10, CC will be saving memories proactively
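The first step above can be scripted. A hedged sketch: the `init_memory` helper is my own, and only the path layout comes from the post.

```python
from pathlib import Path

def init_memory(base: Path, project: str) -> Path:
    """Create the per-project memory directory and an empty MEMORY.md index."""
    memory = base / "projects" / project / "memory"
    memory.mkdir(parents=True, exist_ok=True)
    index = memory / "MEMORY.md"
    if not index.exists():
        index.write_text("")  # empty index; CC fills it in as memories accumulate
    return index

# In practice the base would be ~/.claude:
# init_memory(Path.home() / ".claude", "<project-id>")
```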
Source
The full memory pattern + 36 example files for indie iOS work:
AutoApp Dashboard ($39) includes:
- 36 example memory files (anonymized)
- Memory health check script
- MEMORY.md index template
If you're using Claude Code without memory files, you're paying for friction you can eliminate. 36 files. 50ms load. 20+ hours saved per quarter.