
Zaptain Zi

I Built an Experience Layer for Claude Code So It Would Stop Re-Solving the Same Bugs

Coding agents can analyze and solve problems, but they can't accumulate experience. They have impressions, but no notes. RadioHeader fills that gap for Claude Code.


A real story

I was using Claude Code for iOS development when I hit a bug — the app showed a white screen for tens of seconds on launch. Claude did the usual debugging routine — checking logs, reading files, trying various approaches. After a long time, it still hadn't pinpointed the cause. Eventually I remembered that a different project had the same issue a while back. I pointed Claude to that project's old records, and it finally found the root cause — an Xcode scheme setting.

That's when it clicked: coding agents like Claude Code are great at analysis and writing code, but they don't share experience across projects. Claude Code later added a memory system, but the fundamental problem didn't change — it remembers the general direction, not the key details, let alone reusing them across projects.

Project memories are completely isolated. What Project A learned, Project B has no idea about. If I happen to remember "I've seen this before," I can manually point the way. But what if I've forgotten too?

Most existing tools work at the "rules" level. Rules are useful, but they can't capture "I've actually hit this exact bug before and I know how to fix it." I ran into the same thing back when I was doing hardware product development: the kind of know-how you don't get from docs. You have to hit it yourself, solve it, and then it sticks.

I couldn't find a tool that worked at the "experience" level for coding agents. So I built one.


RadioHeader wasn't designed upfront. It was forced into existence by real problems, one step at a time.

Step 1: Teaching the agent to take notes

It started with memory loss within a single project. After working on a project for a while, Claude Code's earlier experience would start to fade. It remembered "I've seen something similar before," but couldn't recall how it was solved.

Put simply: agents need notes, not impressions. The built-in memory captures rough impressions. Structured notes that can be precisely searched are what actually have reuse value.

My idea was straightforward: have Claude actively record experience after solving each problem, in two forms:

  • Experience entries go into the memory/ directory: root cause, fix, caveats, organized by topic for easy retrieval.
  • Task logs go into logs/: one log per problem solved, recording background, process, and conclusions with full context. Filenames follow a deliberate date-topic-author format, with both the agent and the human signing their names for traceability.

Early on I'd manually remind Claude to "sync project info" at key moments; later I automated it.
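As a sketch of what that note-taking convention could look like in code (the function name and signature are my own illustration, not RadioHeader's actual implementation):

```python
from datetime import date
from pathlib import Path

def write_task_log(root: Path, topic: str, agent: str, human: str, body: str) -> Path:
    """Write one task log per solved problem, named date-topic-author
    so every entry stays traceable to who recorded it and when."""
    logs = root / "logs"
    logs.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()          # e.g. 2025-06-01
    slug = topic.lower().replace(" ", "-")    # "White Screen" -> "white-screen"
    path = logs / f"{stamp}-{slug}-{agent}+{human}.md"
    path.write_text(
        f"# {topic}\n\n{body}\n\n---\nSigned: {agent} (agent), {human} (human)\n"
    )
    return path
```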

I called this experience flows back — after completing a task, experience automatically flows back into the memory system. Every bug fix, architecture decision, or counterintuitive gotcha gets written back. Claude Code's hook system makes it automatic: a PostToolUse hook fires when memory is written to trigger a check, and a Stop hook reminds at session end if any new experience was missed.
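A rough sketch of what such a hook body might do (the `tool_name`/`tool_input` keys are assumptions about Claude Code's hook JSON, and the function is mine, not part of RadioHeader):

```python
import json
import sys

def check_memory_write(event: dict) -> "str | None":
    """If the agent just wrote a file under memory/, return a reminder
    to consider promoting the entry to the global layer.
    Treat the exact event field names as assumptions."""
    tool = event.get("tool_name", "")
    path = event.get("tool_input", {}).get("file_path", "")
    if tool in ("Write", "Edit") and "/memory/" in path:
        return "Memory updated: check whether this entry is useful cross-project."
    return None

# A real PostToolUse hook script would read the event from stdin:
#   msg = check_memory_write(json.load(sys.stdin))
#   if msg:
#       print(msg)
```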

Within a single project, problems that used to require re-investigation could now be solved by checking the notes.

But soon I hit the exact scenario from the opening: Project A's notes were invisible to Project B.

Step 2: Breaking down the walls between projects

So I built a global experience hub above all projects: ~/.claude/radioheader/.

The name? I'm a fan of the band Radiohead, and this global hub works like a signal tower that projects tune into when they need information. RadioHeader. That's how it got its name.

The architecture became three layers:

┌─────────────────────────────────┐
│  RadioHeader (global hub)        │  ← shared across all projects
├─────────────────────────────────┤
│  Project memory/ (per-project)   │  ← current project
├─────────────────────────────────┤
│  Session context (ephemeral)     │  ← current conversation
└─────────────────────────────────┘

Each time experience flows back, Claude makes an extra judgment call: is this useful across projects? If so, it goes into the global layer, tagged with the source project. Any project hitting a similar issue later searches here first. With the signal tower metaphor in place, I started calling this process Echo — experience bouncing back to the tower like a signal echo.

After installing RadioHeader, the same white screen bug:

You: App shows white screen for 10+ seconds on launch

Claude: RadioHeader has experience from ProjectA:
        "Xcode scheme Launch configuration causes iOS app white screen,
        check the relevant scheme settings..."

        Let me verify this applies here... ✓ Same pattern. Applying the fix.

No more manual pointing. From the second time onward, the same class of problem goes from minutes to seconds. Not because the AI got smarter — because the answer was already there.

Distillation: from project experience to universal knowledge

After enabling cross-project sharing, a new problem emerged: once experience entries hit the global layer, the original project context created too much search noise and token overhead. For example, an entry reading [source:DarkWriting] iCloud I/O blocks main thread — another project might use CoreData instead of iCloud, but the root cause is identical. Project names and framework-specific details became search obstacles.

So I added a distillation layer called Shortwave. It strips out project names, file paths, and framework details, keeping only the universal knowledge. This isn't just about reducing noise; it's also about privacy. Raw experience entries might contain project paths, internal naming, or even API keys. Shortwave removes all of that.
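A minimal sketch of the mechanical half of that scrubbing (patterns are illustrative, not RadioHeader's real rules; the semantic generalization, like turning "iCloud I/O" into "main-thread I/O", is left to the agent):

```python
import re

# Illustrative scrub patterns; a real pass would cover far more cases.
SCRUB_PATTERNS = [
    (re.compile(r"\[source:[^\]]+\]\s*"), ""),                      # source-project tags
    (re.compile(r"(/Users/\S+|[A-Za-z]:\\\S+)"), "<path>"),         # file paths
    (re.compile(r"\b(sk-[A-Za-z0-9]{8,})\b"), "<key>"),             # API-key-looking tokens
]

def distill(entry: str) -> str:
    """Strip project-specific noise so only the universal lesson remains."""
    for pattern, replacement in SCRUB_PATTERNS:
        entry = pattern.sub(replacement, entry)
    return entry.strip()
```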

That's how the three layers connect: experience flows back to RadioHeader via Echo, then Shortwave strips the project noise and turns it into reusable knowledge that can be broadcast across all projects.

---
id: sw-ios-task-inherits-mainactor
domain: iOS, SwiftUI, Concurrency
tags: white screen | slow launch | startup | 10s+ | Main Actor | Task
---
### Task {} inherits Main Actor in @MainActor context, I/O blocks causing white screen

symptoms: 10s+ white screen on app launch, first load hangs
cause: Task {} created in @MainActor context inherits main thread
fix: use Task.detached(priority:) to move I/O off the main thread

Notice the tags: "white screen", "slow launch", "startup" — these are the words developers actually search for. If you only keep solution keywords like "Task.detached", the entry becomes unfindable. Symptom keywords matter far more than solution keywords. Strip the symptom words and the entry might as well not exist.
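To see why symptom tags matter, here's a toy retrieval scorer (entirely my illustration; RadioHeader's actual search may work differently). It can only surface an entry when the query's words overlap the stored tags, so an entry tagged only with "Task.detached" would never match a "white screen" query:

```python
def score_entry(query: str, tags: list) -> int:
    """Count how many tags contain at least one word from the query.
    Toy substring matcher; real retrieval could use fuzzy or embedding search."""
    words = {w.lower() for w in query.split()}
    return sum(1 for tag in tags if any(w in tag.lower() for w in words))

entries = {
    "sw-ios-task-inherits-mainactor": ["white screen", "slow launch", "startup", "Main Actor", "Task"],
    "sw-rust-borrow-in-loop": ["borrow checker", "E0502", "loop mutation"],
}

query = "app shows white screen on launch"
# Symptom tags "white screen" and "slow launch" match the user's own words.
best = max(entries, key=lambda k: score_entry(query, entries[k]))
```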

Worse than not finding it: finding it and not using it

Then came another headache: Claude would find experience but ignore it.

Early on I wrote in CLAUDE.md: "search RadioHeader when encountering technical problems." Claude did search, and did find relevant results, then completely ignored them and jumped straight into independent analysis. It's like handing a new colleague the team wiki: they open it, glance at it, and decide to figure things out on their own anyway.

I eventually figured it out: behavioral instructions are far more effective than informational descriptions. Writing "there's a knowledge base you can check" in CLAUDE.md doesn't work. Writing "you MUST search, you MUST cite what you find, you are FORBIDDEN from ignoring matches" does. The difference between these two phrasings in an agent system is enormous.

The final solution is a three-step mandatory rule — Search → Apply → Trace:

  1. Search: hit a technical problem, search RadioHeader first
  2. Apply: find relevant experience, must cite and verify it
  3. Trace: need more detail, trace back to the source project's memory

Plus one ban: "finding relevant experience but not citing or applying it, jumping straight to independent analysis, is explicitly prohibited." That single sentence is more effective than the three rules above it.
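For a concrete picture, here's roughly how such a rule could be phrased in CLAUDE.md (my paraphrase, not RadioHeader's exact wording):

```markdown
## RadioHeader (MANDATORY)

1. **Search**: On ANY technical problem, search RadioHeader BEFORE analyzing.
2. **Apply**: If a match is found, you MUST cite it and verify it applies here.
3. **Trace**: If more detail is needed, trace back to the source project's memory.

FORBIDDEN: finding relevant experience and skipping it in favor of
independent analysis.
```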

Step 3: From personal memory to community sharing

The first two steps solved my own repeated-pitfall problem, but my experience was still an island. The problems I encounter — someone else in the world has definitely solved them before. And the pitfalls I've documented might be exactly what someone else needs.

Could I share the distilled shortwave entries? The hard part isn't technical, it's quality governance: everything can't just be dumped into the pool, and I can't manually review it all either.

I went looking for inspiration in biology and found stigmergy, a concept from ant-colony behavior: ants leave pheromones on paths. The more ants walk a path, the stronger the pheromone. Paths nobody walks fade naturally. No one needs to manage anything; good paths emerge on their own.

Applying this to knowledge sharing quality governance:

  • Auto-voting: after Claude uses a community shortwave to solve a problem, it automatically judges whether the entry actually helped. +1 or -1
  • Time decay: scores decrease over time (half-life ~125 days), outdated experience fades naturally (still validating the exact parameters)
  • Weekly cleanup: GitHub Actions aggregates votes, marks high-score entries as verified, archives low-score ones

Publishing has gates too — quality score (≥6/8), privacy scan (no leaked paths or keys), and dedup check. No moderators needed. Good experience floats up, bad experience sinks down.
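The decay rule is simple exponential half-life. A sketch (the 125-day half-life is the post's current, still-being-tuned estimate, and the vote representation is my assumption):

```python
HALF_LIFE_DAYS = 125  # current estimate from the post, still being validated

def decayed_score(votes: list, now_days: float) -> float:
    """Sum votes of (value, day_cast), weighting older votes exponentially
    less, so unused entries fade like evaporating pheromone."""
    return sum(value * 0.5 ** ((now_days - day) / HALF_LIFE_DAYS)
               for value, day in votes)
```

A +1 vote is worth 0.5 after one half-life has passed, so an entry must keep earning fresh votes to stay "verified".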

Current status

This isn't a demo I whipped up to tell a story. It's been running in my own workflow for months. 13 projects, covering 7 domains: iOS/SwiftUI, Rust, backend deployment, network proxies, AI APIs, Claude Code, and hardware products. 205 raw experience entries, 120 distilled shortwaves, 114 published to the community pool. (As of publication.)

Try it

You don't need to understand the full design first. Pick the project where you most often re-solve the same problems, install it, run it for a week, and see if it remembers the second time.

git clone https://github.com/ZaptainZ/radioheader.git
cd radioheader
./install.sh

After that, start Claude Code in any project and it's active.

# Enable community sharing
radioheader community on
radioheader sync

GitHub: ZaptainZ/radioheader


RadioHeader is still being refined, but it's already producing real value in my own projects, so I'm putting it out there. Open source has given me a lot. This is me giving back something I actually built and used.

MIT License. If you're also tired of re-solving the same bugs, give it a try. Found it useful? Star it. Found it useless? Open an issue and roast it.
