The Frustration We All Know
> **Me:** "Let's continue the payment feature from yesterday"
>
> **Claude:** "I don't have context from previous sessions. Could you explain what you've built so far?"
>
> **Me:** *sighs deeply*
If you use Claude Code, you've been here.
Three hours architecting a solution. Explaining every decision. Finally getting AI to understand your codebase.
Next day? Blank slate.
This Is a Known Problem
MIT Technology Review calls it the "Context Loss Problem":
> "The biggest limitation of AI coding assistants is that LLMs can only hold a limited amount of information in their context window, and they tend to forget what they were doing in longer tasks."
Even with 200K token context windows, the fundamental issue remains:
| What AI Promises | What Actually Happens |
|---|---|
| "I understand your codebase" | Forgets after session ends |
| "I'll follow your patterns" | Reverts to training data defaults |
| "I remember our decisions" | What decisions? |
For solo developers on side projects, this hits harder. Teams might document things. When you're alone, context lives in your head—and dies with the session.
What I Tried (And What Failed)
❌ Longer prompts
Pasting entire codebases. Hit token limits. AI got confused.
❌ Session summaries
Manual notes at session end. Forgot to do it. Notes got stale.
❌ Hoping AI would remember
Narrator: It did not.
What Actually Works
After months of frustration, I found a two-part solution:
Part 1: CLAUDE.md (Built-in Feature)
Claude Code automatically reads CLAUDE.md in your project root. This is your persistent rulebook.
```markdown
# CLAUDE.md

## Project Rules
- TypeScript strict mode, always
- API base URL: https://api.myapp.com/v2
- Auth: JWT with 24-hour expiry
- Never guess business logic—ask first

## Architecture
- /src/services → Business logic
- /src/api → Express routes
- /src/utils → Shared utilities

## Critical: Don't Assume
- Payment amounts → ASK
- API endpoints → ASK
- Security settings → ASK
```
This survives sessions. AI reads it first. Huge improvement.
But it's not enough. CLAUDE.md tells AI the rules—not the reasons behind your code.
Part 2: CodeSyncer (My Solution)
I built an open source tool that stores context inside your code itself.
The idea: What if every AI decision was recorded as a comment tag?
```typescript
/**
 * Payment processor
 *
 * @codesyncer-decision [2026-01-15] Chose synchronous processing
 *   → User feedback showed async confused customers at checkout
 * @codesyncer-inference Minimum $1.00 (Stripe policy)
 * @codesyncer-todo Add refund logic after v1 launch
 */
async function processPayment(userId: string, amount: number) {
  // @codesyncer-why Idempotency key prevents duplicate charges
  // on network retry (learned this the hard way)
  const idempotencyKey = generateKey(userId, amount);
  // ...
}
```
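The `generateKey` helper isn't shown in the snippet above, so here's a minimal sketch of what a deterministic idempotency key generator could look like. This is illustrative only, assuming a hash over stable request inputs; it is not CodeSyncer's or Stripe's actual API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical generateKey: hash stable request inputs so a network retry
// of the same payment produces the same idempotency key, letting the
// payment provider deduplicate the charge.
function generateKey(userId: string, amount: number): string {
  // Caveat: keying only on (userId, amount) would also collapse two
  // legitimate identical payments; a real system would mix in an
  // order or payment-intent id as well.
  return createHash("sha256")
    .update(`${userId}:${amount}`)
    .digest("hex")
    .slice(0, 32);
}
```

Stripe accepts such a key as the `Idempotency-Key` request header (the `idempotencyKey` option in stripe-node); repeated requests with the same key within Stripe's retention window return the original response instead of charging twice.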
Next session, AI reads the code and instantly knows:
- What was decided
- Why it was decided
- What's still pending
No re-explaining. No context loss. The code is the context.
Quick Start
```bash
# Install
npm install -g codesyncer

# Initialize (creates CLAUDE.md structure)
npx codesyncer init

# Watch mode: auto-syncs tags to DECISIONS.md
npx codesyncer watch
```
That's it. 5 minutes to start.
The repo, **bitjaru/codesyncer** on GitHub, describes itself as an "AI-powered multi-repository collaboration system": "Is your AI coding dumb? Make it smarter - persistent project context, controlled inference, and live architecture sync for Claude Code." Its README opens with the problem statement:
🤔 The Problem
Working with AI on real projects? You face these issues:
1. Context is lost every session 😫
- New AI session = Start from scratch
- Explain the same architecture again and again
- "What's the API endpoint?" "How does auth work?" - Every. Single. Time.
2. Multi-repo chaos 🤯
```
my-saas-project/
├── api-server/   (backend)
├── web-client/   (frontend)
└── mobile-app/   (mobile)
```
- AI only sees one repo at a time
- Missing context from other repos → Fragmented code
- "Add login" needs backend API + frontend UI, but AI doesn't know both
3. AI makes dangerous assumptions
- "I'll set the timeout to 30 seconds" - Wait, should be 5!
- "Using /api/v1/..." - Wrong endpoint!
- Guesses business logic, security settings, pricing rules
The Tag System
| Tag | When to Use | Example |
|---|---|---|
| `@codesyncer-decision` | After discussing with AI | Chose REST over GraphQL (team familiarity) |
| `@codesyncer-inference` | AI made an assumption | Page size 20 (standard UX practice) |
| `@codesyncer-todo` | Needs follow-up | Add rate limiting before production |
| `@codesyncer-context` | Business logic explanation | GDPR requires 30-day data retention |
| `@codesyncer-why` | Non-obvious implementation | Using `any` type (external lib has no types) |
These tags become searchable documentation:
```bash
# Find all pending items
grep -r "@codesyncer-todo" ./src

# Find all architectural decisions
grep -r "@codesyncer-decision" ./src
```
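Presumably the `watch` mode does something like the grep above, but structured. As a rough reconstruction (my own sketch, not CodeSyncer's actual implementation), extracting tags and rendering a DECISIONS.md-style summary could look like this:

```typescript
// Sketch: extract @codesyncer-* tags from source text and render a
// DECISIONS.md-style summary, grouped by tag type.
type TagHit = { file: string; line: number; tag: string; text: string };

const TAG_RE = /@codesyncer-(decision|inference|todo|context|why)\b\s*(.*)/;

function scanSource(file: string, source: string): TagHit[] {
  const hits: TagHit[] = [];
  source.split("\n").forEach((lineText, i) => {
    const m = lineText.match(TAG_RE);
    if (m) {
      // Strip a trailing "*/" so block-comment tags read cleanly.
      const text = m[2].replace(/\*\/\s*$/, "").trim();
      hits.push({ file, line: i + 1, tag: m[1], text });
    }
  });
  return hits;
}

function toMarkdown(hits: TagHit[]): string {
  const byTag = new Map<string, TagHit[]>();
  for (const h of hits) {
    if (!byTag.has(h.tag)) byTag.set(h.tag, []);
    byTag.get(h.tag)!.push(h);
  }
  let out = "# DECISIONS\n";
  for (const [tag, group] of byTag) {
    out += `\n## ${tag}\n`;
    for (const h of group) out += `- ${h.text} (${h.file}:${h.line})\n`;
  }
  return out;
}
```

The key property is that the source files stay the single source of truth; the markdown summary is derived output and can be regenerated at any time.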
For Teams: Gas Town
Credit where it's due: Steve Yegge (Google veteran, worked on Grok) built Gas Town to tackle the same problem at enterprise scale.
On GitHub: **steveyegge/gastown** ("Gas Town - multi-agent workspace manager"), a multi-agent orchestration system for Claude Code with persistent work tracking.
Overview
Gas Town is a workspace manager that lets you coordinate multiple Claude Code agents working on different tasks. Instead of losing context when agents restart, Gas Town persists work state in git-backed hooks, enabling reliable multi-agent workflows.
What Problem Does This Solve?

| Challenge | Gas Town Solution |
|---|---|
| Agents lose context on restart | Work persists in git-backed hooks |
| Manual agent coordination | Built-in mailboxes, identities, and handoffs |
| 4-10 agents become chaotic | Scale comfortably to 20-30 agents |
| Work state lost in agent memory | Work state stored in Beads ledger |
Architecture
```mermaid
graph TB
    Mayor[The Mayor<br/>AI Coordinator]
    Town[Town Workspace<br/>~/gt/]

    Town --> Mayor
    Town --> Rig1[Rig: Project A]
    Town --> Rig2[Rig: Project B]

    Rig1 --> Crew1[Crew Member<br/>Your workspace]
    Rig1 --> Hooks1[Hooks<br/>Persistent storage]
    Rig1 --> Polecats1[Polecats<br/>Worker agents]

    Rig2 --> Crew2[Crew Member]
    Rig2 --> Hooks2[Hooks]
    Rig2 --> Polecats2[Polecats]

    Hooks1 -.git worktree.-> GitRepo1[Git Repository]
    Hooks2 -.git worktree.-> GitRepo2[Git Repository]

    style Mayor fill:#e1f5ff
    style Town fill:#f0f0f0
    style Rig1 fill:#fff4e1
    style Rig2 fill:#fff4e1
```
His approach: Run 20-30 AI agents simultaneously with a "Mayor" AI orchestrating them. Git-based state persistence.
Impressive, but the learning curve is steep. He says it requires "Stage 7+ AI-assisted development experience."
Quick Comparison
| | Gas Town | CodeSyncer |
|---|---|---|
| Tagline | "Scale your agents" | "Remember your context" |
| Target | Teams, enterprise | Solo developers |
| Agents | 20-30 simultaneous | 1, deeply |
| Setup time | Hours (Go, complex concepts) | 5 minutes |
| State storage | Git worktrees | Code comments |
Same philosophy: "AI is ephemeral, but context should be permanent."
Different scale: Pick based on your needs.
Real Workflow Example
Here's how my sessions look now:
Day 1
> **Me:** "Let's add Stripe payments"
>
> **Claude:** ⚠️ Payment keyword detected. Let me confirm:
> - Which payment provider?
> - Supported currencies?
> - Minimum amount?
>
> **Me:** "Stripe, USD/EUR/KRW, minimum $1"
>
> **Claude:** ✅ Recording decision...

Claude then creates code with:

```typescript
// @codesyncer-decision [2026-01-17] Stripe for payments
// Supports USD, EUR, KRW. Min $1 (Stripe policy).
```
Day 2 (New Session)
> **Me:** "Continue the payment feature"
>
> **Claude:** *reads codebase* "I see from the code:
> - You're using Stripe (decision from Jan 17)
> - Supporting USD/EUR/KRW
> - Minimum $1
>
> Should I add the webhook handler next?"
>
> **Me:** 🎉
No re-explaining. No lost context. It just works.
The 2026 Bottleneck
AI models are getting smarter every month. GPT-5, Claude 4, Gemini Ultra—they'll all be incredible.
But here's my take:
The bottleneck isn't "smarter AI." It's "better context management."
The devs who master context persistence will ship faster than those waiting for AI to magically remember.
Try It Today
For solo developers:
```bash
npx codesyncer init
npx codesyncer watch
```
For teams:
→ Gas Town on GitHub
Your Turn
Have you struggled with Claude Code forgetting context?
What workarounds have you tried?
Drop a comment below — I'd love to hear how others are solving this.
If this helped, a ⭐ on the repo means a lot. Building in public, feedback welcome!
