Waruna
I Turned Claude Code Into a Dev Team With One File

Most people drop a few rules into their CLAUDE.md and call it a day. I built an entire engineering workflow — a Tech Lead, specialist agents, a senior code reviewer, and a validation pipeline — all orchestrated from a single global file sitting in ~/.claude/CLAUDE.md.

Here's how I got there, and why it changed how I work with Claude Code.


The Problem: Claude Code Is a Brilliant Intern

Out of the box, Claude Code is smart but unsupervised. It'll jump straight into implementing something without asking clarifying questions. It'll say "done" without compiling. It'll delete a file it shouldn't have touched. It'll assume what you meant instead of asking what you meant.

Sound familiar?

These aren't Claude Code problems — they're process problems. And they're the exact same problems you'd have with a junior developer who has no engineering lead, no code review, and no CI pipeline.

So I gave Claude Code all three.


Global vs Project: The Right Separation

Before I walk through the file, one important design decision: I keep two layers of CLAUDE.md.

Global (~/.claude/CLAUDE.md) defines how work gets done — the process, the roles, the rules. It applies to everything.

Project-level (project-root/CLAUDE.md) defines what we're working with — the tech stack, naming conventions, file structure, dependencies. It's specific to each codebase.

This separation means my workflow stays consistent whether I'm working on a SwiftUI app, a Rails backend, or a Next.js frontend. The process doesn't change. Only the context does.
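The post doesn't show the files themselves, so here is an illustrative sketch of the split — the headings and wording are mine, not the author's actual file:

```markdown
<!-- ~/.claude/CLAUDE.md (global): HOW work gets done -->
## Workflow
Every task flows through: Tech Lead → Specialist → Reviewer → Validation.

<!-- project-root/CLAUDE.md (per project): WHAT we're working with -->
## Stack
SwiftUI, iOS 17+. Views live in Sources/Views, one view per file.
```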


The Pipeline

Here's the core idea: every task flows through a pipeline, just like it would on a real engineering team.

User Task
    → Tech Lead Orchestrator (analyze, route, spec)
        → Specialist Agent (implement)
            → Senior Code Reviewer (review, loop until clean)
                → Validation (build, compile, test — prove it works)
                    → Delivery

No shortcuts on quality. Every path goes through review and validation. Let me break down each piece.


1. The Tech Lead Orchestrator

Every task starts here. No agent works independently without a brief from the Tech Lead. This is the single most important rule in the file, and it solves the "Claude just started coding before understanding the task" problem.

The Tech Lead's job:

  • Read the project's CLAUDE.md before doing anything (context loading)
  • Analyze the task and determine complexity
  • Clarify ambiguities with the user before assigning work
  • Route appropriately — trivial tasks get a fast track, complex ones get a full spec

That context loading step is subtle but critical. Without it, Claude will carry assumptions from a previous task into a new one. Explicitly telling it to re-read the project file at the start of every task prevents drift.
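As a sketch, a Tech Lead section covering those four duties might read like this (the exact wording is my reconstruction, not the author's file):

```markdown
## Tech Lead Orchestrator
Every task starts here. No agent works without a brief from the Tech Lead.

Before doing anything else:
1. Re-read the project's CLAUDE.md to load current context.
2. Analyze the task and classify its complexity.
3. If anything is ambiguous, ask the user before assigning work.
4. Route: trivial task → fast-track brief; everything else → full spec.
```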


2. Fast-Track vs Full Path

Not every task needs the same ceremony. A typo fix shouldn't go through the same process as a new feature.

Fast-track is for trivial changes — single-file fixes, formatting, config tweaks, simple renames. The Tech Lead assigns directly to a specialist with a lightweight brief instead of a full spec.

Full path is for everything else. The Tech Lead writes a detailed spec with acceptance criteria, relevant file paths, and clear deliverables.

Here's the key: fast-track skips the detailed spec, not the review. Both paths converge at the same Senior Code Reviewer and Validation gates. Nothing ships unreviewed, regardless of how simple it seems.

This was a deliberate choice. I've been bitten too many times by "trivial" changes that introduced regressions. Even a one-liner deserves a second pair of eyes.
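In CLAUDE.md terms, the routing rule might be written something like this (a sketch based on the description above, not the author's verbatim file):

```markdown
## Task Routing
Fast-track (typo fixes, formatting, config tweaks, simple renames):
- Lightweight brief, assigned directly to a specialist.

Full path (everything else):
- Detailed spec: acceptance criteria, relevant file paths, deliverables.

Both paths MUST pass Senior Code Review and Validation. No exceptions.
```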


3. The Senior Code Reviewer

This is where quality gets enforced. Every change, from every path, passes through the Senior Code Reviewer before it reaches the user.

The reviewer checks against:

  • Project standards and the project-level CLAUDE.md
  • Bugs, edge cases, and security issues
  • A severity classification: critical (blocker), warning (should fix), nit (optional)

If there are critical issues, the work goes back to the specialist with specific feedback, and the loop repeats until the review passes. Only when no critical issues remain does the code move to validation.

The review loop is what turns Claude Code from "generate and hope" into "generate and verify." It's the difference between code that looks right and code that is right.
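A reviewer section implementing that severity scheme could look roughly like this (my paraphrase of the rules described above):

```markdown
## Senior Code Reviewer
Review every change against project standards and the project CLAUDE.md.
Check for bugs, edge cases, and security issues. Classify each finding:
- critical: blocker — return to the specialist with specific feedback
- warning: should fix before delivery
- nit: optional polish
Pass only when no critical issues remain, then hand off to Validation.
```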


4. Validation: Prove It Works

This is the piece most CLAUDE.md files miss entirely. Claude will happily tell you "the implementation is complete" without ever running the code. That's not good enough.

After review passes, the code must be validated using the appropriate method for the project:

  • Swift/iOS → xcodebuild, run in iOS Simulator
  • Android → Gradle build, run in Android Emulator
  • Web (JS/TS) → npm run build or yarn build
  • Web (Rails) → bundle exec rails test
  • Browser UI → Playwright end-to-end tests
  • CLI tools → Run the command, verify output
  • Libraries → Run the test suite (pytest, rspec, jest, etc.)

If validation fails, it loops back to the specialist with the actual errors. No task is considered done until it has been proven working.

This single addition — "compile it, build it, run it" — eliminated an entire class of bugs I used to catch manually after delivery.
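Expressed as a CLAUDE.md section, the validation mapping above might be written like this (a direct restatement of the list, with layout of my own choosing):

```markdown
## Validation
Never report "done" without proof. Pick the method that fits the project:
- Swift/iOS: xcodebuild, then run in the iOS Simulator
- Android: Gradle build, then run in the Android Emulator
- Web (JS/TS): npm run build or yarn build
- Web (Rails): bundle exec rails test
- Browser UI: Playwright end-to-end tests
- CLI tools: run the command and verify its output
- Libraries: run the test suite (pytest, rspec, jest, ...)
On failure, return the actual error output to the specialist.
```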


5. The Safety Rails

Beyond the pipeline, I have a few rules that apply globally across all agents.

The Golden Rule

NEVER ASSUME. ALWAYS ASK.

If requirements are unclear, ambiguous, or incomplete: stop work, list specific questions, wait for clarification. This applies to every agent at every stage.

It sounds obvious, but without this rule, Claude will fill in the blanks with assumptions — and those assumptions are often wrong. Especially in agentic loops where one bad assumption compounds through the entire pipeline.
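The rule itself is short enough to drop into a global CLAUDE.md almost verbatim; something like:

```markdown
## The Golden Rule
NEVER ASSUME. ALWAYS ASK.
If requirements are unclear, ambiguous, or incomplete:
1. Stop work immediately.
2. List the specific questions that need answers.
3. Wait for the user's clarification before continuing.
This applies to every agent at every stage.
```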

Destructive Actions

Certain operations require explicit user confirmation:

  • Deleting files or directories
  • Renaming files that are imported elsewhere
  • Large-scale refactors across multiple files
  • Removing or replacing dependencies
  • Dropping database tables or destructive migrations

I learned this the hard way. Claude is fast. Sometimes too fast. A confirmation gate on irreversible actions gives you a chance to catch mistakes before they happen.
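As a config fragment, the confirmation gate might read like this (my phrasing of the list above):

```markdown
## Destructive Actions: Ask First
Get explicit user confirmation before:
- deleting files or directories
- renaming files that are imported elsewhere
- large-scale refactors across multiple files
- removing or replacing dependencies
- dropping database tables or running destructive migrations
```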

Iteration Limit

Maximum 3 revision cycles across review and validation combined. If the issue isn't resolved after 3 loops, the Tech Lead escalates to the user with a summary of what's blocking progress. This prevents infinite loops where Claude keeps trying and failing to fix the same issue.
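The escalation rule is compact enough to state directly in the file; a plausible version:

```markdown
## Iteration Limit
Maximum 3 revision cycles across review and validation combined.
After 3 failed loops: stop, escalate to the user with a summary
of what was tried and what is still blocking progress.
```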


6. Standardized Communication

When you have multiple agents handing off work to each other, the format of those handoffs matters. I standardized it:

  • Tech Lead → Specialist: Task description, acceptance criteria as a checklist, relevant file paths.
  • Specialist → Reviewer: Summary of changes, files modified, trade-offs or decisions made.
  • Reviewer → Tech Lead: Critical issues (blockers), warnings (should fix), nits (optional). Clear pass/fail.

This isn't glamorous, but it prevents a lot of wasted cycles. When the reviewer gets a change with no context about what was intended, the review quality drops. When the specialist gets vague feedback like "needs improvement," they guess — and usually guess wrong.
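One way to pin those handoff formats down in the global file (again, a sketch of the structure described, not the author's exact wording):

```markdown
## Handoff Formats
Tech Lead → Specialist: task description, acceptance criteria
  as a checklist, relevant file paths.
Specialist → Reviewer: summary of changes, files modified,
  trade-offs and decisions made.
Reviewer → Tech Lead: findings tagged critical / warning / nit,
  plus an explicit PASS or FAIL.
```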


The Meta Moment

Here's the part that made me smile: I built this file with Claude. I started with my original version, asked Claude to critique it, iterated on the suggestions, and refined the result through conversation.

Claude reviewed its own operating instructions, found gaps, and helped fix them. If that's not eating your own dog food, I don't know what is.


What Changed in Practice

Since adopting this workflow:

  • Fewer "it's done" surprises. The validation step catches compilation errors and test failures before I ever see the output.
  • Clearer communication. When Claude asks me a clarifying question instead of assuming, it saves 10 minutes of undoing wrong assumptions.
  • Consistent quality across projects. Whether I'm in a Swift codebase or a Rails app, the process is the same. Only the project-level context changes.
  • Less babysitting. The review loop catches issues I would have caught manually in a second pass. Now I spend that time on higher-level decisions.

Try It Yourself

The full file lives in ~/.claude/CLAUDE.md. That's the global config — it applies to every project you work on with Claude Code.

For project-specific rules (tech stack, naming conventions, file structure), create a CLAUDE.md in your project root. The global file handles the how, the project file handles the what.

Start with the pieces that solve your biggest pain points. If Claude keeps assuming instead of asking, add the Golden Rule. If you're tired of "done" without proof, add the Validation step. If changes keep shipping with bugs, add the Review loop.

You don't need the whole pipeline on day one. But once you have it, you won't go back to working without it.


If you're using Claude Code and want to see the complete CLAUDE.md file, drop a comment — happy to share the full thing.
