I've been shipping features with Claude Code for months now. The velocity is incredible — what used to take days takes an afternoon. But something kept bugging me.
The code diffs on my PRs looked fine. Tests passed. Lint was clean. But every few weeks I'd open a file I hadn't touched in a month and find it importing from three new places, calling services that shouldn't know about each other, and slowly becoming unrecognizable.
The architecture was drifting. Nobody noticed because nobody was looking.
Why code review misses architectural drift
Pull request review is built around lines of code. You see what changed in a file. You don't see how those changes affect the shape of the system.
A reviewer looking at `+ import { db } from '../../prisma'` in a frontend component won't catch that the frontend should never touch the database directly. The line looks reasonable in isolation. The architectural implication is invisible unless you already have the system map in your head.
And in the AI era, the velocity problem makes this worse. An agent can add 14 new imports across 8 files in a single PR. The diff is too big to hold in your head, so you skim and approve.
What I wanted
I wanted something that looked at PRs and said:
"This PR introduces a new dependency from Auth to Billing. Is that intended?"
Not a black-box ML model that flags "drift detected." Just a clear, visual statement of what changed architecturally. Reviewable. Explainable. Reversible.
I also wanted the architecture map to live in the repo — not in Confluence, not in a SaaS dashboard. Versioned, diffable, the single source of truth that travels with the code.
What I built
A Claude Code skill called unkode. Here's what it does:
1. One command generates a YAML architecture map.
/unkode
Claude reads the codebase and writes unkode.yaml — a list of modules, components, their dependencies, and deployment topology. Commit it to main. That's your baseline.
```yaml
architecture:
  - name: Web Application
    path: apps/remix
    kind: frontend
    tech: [TypeScript, React]
    role: Main user-facing app
    depends_on: [API Layer, Authentication]
  - name: API Layer
    path: packages/trpc
    kind: backend
    tech: [TypeScript, tRPC]
    role: Type-safe API between frontend and services
    depends_on: [Database, Authentication]
  - name: PostgreSQL
    type: external
    kind: database
    role: Primary data store
```
2. After every code change, you run /unkode again.
It diffs your changes against the existing YAML and updates only what changed. An incremental sync costs around 500 tokens — pennies.
3. The diagram regenerates automatically.
A deterministic script converts the YAML to Mermaid, written to arch_map.md. Zero AI involved. GitHub renders it natively.
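To make the "zero AI" claim concrete, here is a minimal sketch of what such a deterministic YAML-to-Mermaid transform can look like. This is hypothetical, not the script that ships in the repo: the function names are mine, and `modules` stands in for the parsed `architecture:` list from unkode.yaml (parse it with any YAML library).

```python
def node_id(name: str) -> str:
    # Mermaid node ids cannot contain spaces, so normalize them.
    return name.replace(" ", "_")

def to_mermaid(modules: list[dict]) -> str:
    lines = ["graph TD"]
    for mod in modules:  # declare every node first
        lines.append(f'    {node_id(mod["name"])}["{mod["name"]}"]')
    for mod in modules:  # then draw one edge per dependency
        for dep in mod.get("depends_on", []):
            lines.append(f'    {node_id(mod["name"])} --> {node_id(dep)}')
    return "\n".join(lines)

modules = [
    {"name": "Web Application", "depends_on": ["API Layer"]},
    {"name": "API Layer", "depends_on": []},
]
print(to_mermaid(modules))
# graph TD
#     Web_Application["Web Application"]
#     API_Layer["API Layer"]
#     Web_Application --> API_Layer
```

Because the transform is a pure function of the YAML, the diagram can never silently disagree with the map — and it costs nothing to regenerate.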
4. On every PR, a GitHub Action posts a diff.
Color-coded. Green = new, red = removed, amber = modified.
The reviewer now sees, at a glance: "This PR added a Billing module that depends on Stripe, and added a new dependency from Authentication to Redis." Any reviewer can make a judgment call on that. They don't need to read 40 files.
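The diff itself is also a deterministic transform. A rough sketch of the idea, under my own assumptions about the data shapes (the function name and classification rules here are illustrative, not the repo's actual implementation): compare the baseline map on main against the branch map and bucket each module as added, removed, or modified.

```python
def diff_maps(base: list[dict], head: list[dict]) -> dict:
    # Index both maps by module name for set-style comparison.
    base_by = {m["name"]: m for m in base}
    head_by = {m["name"]: m for m in head}
    return {
        "added":   sorted(head_by.keys() - base_by.keys()),   # green
        "removed": sorted(base_by.keys() - head_by.keys()),   # red
        "modified": sorted(                                   # amber
            name for name in base_by.keys() & head_by.keys()
            if base_by[name].get("depends_on", [])
               != head_by[name].get("depends_on", [])
        ),
    }

base = [{"name": "Authentication", "depends_on": []}]
head = [{"name": "Authentication", "depends_on": ["Redis"]},
        {"name": "Billing", "depends_on": ["Stripe"]}]
print(diff_maps(base, head))
# {'added': ['Billing'], 'removed': [], 'modified': ['Authentication']}
```

That output is exactly the sentence the reviewer needs: a new Billing module, and Authentication gained a dependency.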
Why this approach instead of ML-based drift detection
There are other tools that do drift detection. They use learned embeddings and risk scores. Fine for what they are, but two things bugged me:
They're black boxes. When the tool says "the architecture is drifting," what do I do with that? I can't inspect it. I can't disagree with it. I can't explain it in a standup.
The baseline is theirs, not mine. I don't get to say "actually, the Auth module having a dependency on Redis is fine." I have to live with whatever the model thinks.
Unkode flips this. The YAML is yours. You can edit it by hand. You can override what Claude generated. You can encode deliberate architectural decisions. The diff is visible, the rules are inspectable, nothing is hidden.
The token cost I was worried about
My biggest concern was cost. A tool that eats tokens on every commit is a non-starter.
After several iterations (and burning a fair number of my own tokens testing on large repos), the numbers surprised me:
| Operation | Tokens | Time |
|---|---|---|
| First-time generation on a 1.5M LOC monorepo | ~4,000 | ~3 min |
| Incremental sync on a branch | ~500–1,000 | under a minute |
| Mermaid rendering | 0 | instant |
| PR diff in CI | 0 | seconds |
The Mermaid diagram and the PR diff are pure Python — deterministic transforms on the YAML. They use zero AI tokens. Only the YAML update touches an LLM.
The first-time cost scales more with how well-documented your project is than how big it is. A 1.5M LOC monorepo with clean package boundaries costs roughly the same as a smaller but messier repo.
Installing it
Three steps:
1. Copy the skill into your repo.
```sh
# Claude Code
cp -r unkode/skills/unkode .claude/skills/unkode

# Or for Codex, Cursor, Aider, etc.
cp -r unkode/skills/unkode .agents/skills/unkode
```
2. Generate the baseline.
/unkode
Commit unkode.yaml and arch_map.md to main.
3. (Recommended) Add the GitHub Action.
```sh
mkdir -p .github/actions/unkode
cp unkode/config/github/action.yml .github/actions/unkode/action.yml
cp unkode/config/github/unkode_arch_check.yml .github/workflows/unkode_arch_check.yml
```
From then on, every PR gets a diff diagram as a comment.
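For orientation, a workflow wired this way typically looks something like the sketch below. Treat it as an assumption about the shape, not the shipped file — the real workflow is the `unkode_arch_check.yml` you copied in step 3.

```yaml
# Hypothetical sketch; the actual workflow ships in the unkode repo.
name: unkode arch check
on: [pull_request]
jobs:
  arch-diff:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write   # needed to post the diff as a PR comment
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/unkode   # the composite action copied above
```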
What's next
The tool works today but it's early. A few things I'm still figuring out:
- Hallucination prevention. Right now the agent trusts well-written READMEs a little too much. I'm adding strict path validation so every module in the YAML must resolve to an actual folder.
- Multi-repo view. Useful for monorepos and microservices alike.
- History timeline. Right now git log is the history. A visual timeline of how the architecture evolved would be nice.
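The path-validation idea in the first bullet can be sketched in a few lines. This is my illustration of the check, not the repo's code; `validate_paths` and its rules (external systems are exempt, everything else must be a real directory) are assumptions.

```python
from pathlib import Path

def validate_paths(modules: list[dict], repo_root: str = ".") -> list[str]:
    """Return an error per module whose `path` is not a real directory."""
    errors = []
    for mod in modules:
        if mod.get("type") == "external":  # external systems have no path
            continue
        path = mod.get("path")
        if not path or not (Path(repo_root) / path).is_dir():
            errors.append(f'{mod["name"]}: path {path!r} does not exist')
    return errors
```

Run against a repo root, anything the agent hallucinated from a README fails fast instead of quietly polluting the map.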
If this resonates with problems you've hit — I'd love feedback. The repo is github.com/deepcodersinc/unkode, Apache 2.0, and a star helps if you find it useful ⭐
If you'd like to hear about major updates as they ship, there's a waitlist — no marketing, no spam, just a single notification when a new iteration is ready.
Would love to know: how is your team tracking architecture today? Confluence diagrams that nobody updates? A weekly tech debt meeting? Nothing at all? A better tool? Genuinely curious.