Deeshath
Stop Chatting With Claude. Start Orchestrating It.

Most developers are using Claude Code like an expensive autocomplete. I run 3 features in parallel with it. Here's the exact setup.

When I first started using Claude Code, I was doing the obvious thing:

Ask → Wait → Review → Repeat

One task. One agent. One thread. Me, waiting.

That's basically hiring a genius and making them work one ticket at a time while you watch.

Here's what I actually do now — and why it's a completely different category of tool.

                YOU (Lead / Orchestrator)
                         │
        ┌────────────────┼────────────────┐
        │                │                │
   project-1/       project-2/       project-3/
   (Agent A)        (Agent B)        (Agent C)
   Feature A        Feature B        Feature C

Each:
- Own repo copy / worktree
- Own Claude session
- Own branch + state


The Mental Model That Changed Everything

Claude Code isn't a chat assistant. It has full access to my repo. It reads files, runs commands, writes code, runs tests.

Most people use it like a smarter Stack Overflow. The actual unlock is this:

I'm not writing code anymore. I'm an engineering lead coordinating a team of AI developers.

You stop being the person at the keyboard and start being the person deciding what gets built and whether it's right. The keyboard work? Delegated.

That sounds abstract. Let me make it concrete by walking through my actual workflow.


Step 1: Be Clear on What and Why Before Anything Else

Before I touch anything, I get my intent straight.

I used to jump straight to implementation, and Claude would confidently build the wrong thing. Fast, clean, wrong. Like a contractor who builds the exact wall you described — in the wrong room.

Now: I tell Claude exactly what I want to build and why — the business reason, the constraint, the goal. Not a formal spec doc, not a design review process. Just clarity. What problem are we solving, what does done look like, what are the constraints.

Then I go straight to planning.

What I learned the hard way: "Simple" tasks are exactly where unexamined assumptions cause the most wasted work. Two sentences of context upfront saves an hour of fixing the wrong implementation.


Step 2: Write a Plan Before Writing Code

The plan says how — exact file paths, exact code changes, nothing left to interpretation.

A good plan looks like this:

Task 1: Add user notification feature

Files:
- Modify: src/services/notification_service
- Modify: src/models/user

Changes:
- notification_service: add send_notification method
  [exact diff of what gets added]
- user model: add notification_preferences field
  [exact diff of what changes]

Verify: run tests, confirm passing
Commit: "feat: add user notification"

Every file path is exact. Every change is shown as a diff — not described, shown. No ambiguity about what the agent is supposed to do.

Think of it as writing sheet music, not just saying "play something jazzy." The agent follows the score.

The plan also gets saved to .claude/plans/<feature-name>.md — so if a session gets interrupted (laptop dies, connection drops, you accidentally close the tab like I definitely haven't done), the next session picks up exactly where things left off.

A reviewer subagent reads the plan before execution starts. Its only job: find gaps, missing steps, anything that would cause an agent to get stuck. Fix, re-review, proceed.


Step 3: Isolate — And Know Which Isolation Method to Use

Here's where most people hit a wall they don't understand. Turns out "just run it in parallel" is not a plan.

Git Worktrees — for a single isolated feature

When I'm working on one feature that needs its own branch and clean state:

git worktree add .worktrees/feature-auth -b feature/auth-refactor
cd .worktrees/feature-auth
# install deps, run tests to verify clean baseline

The worktree gets its own directory and branch. Claude can commit, test, and experiment without touching my main working state. Lightweight, clean, no drama.
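Tearing it down when the feature lands is just as lightweight. A minimal cleanup sketch, using the same branch and path as above:

```shell
# from the main repo root, after the feature branch has merged
git worktree remove .worktrees/feature-auth   # delete the worktree directory
git worktree prune                            # drop stale worktree metadata
git branch -d feature/auth-refactor           # remove the branch once merged
```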

Separate repo copies — for working on multiple features in parallel

Git worktrees have a limitation people don't talk about: depending on your stack, you can't run two of them in parallel without the agents stepping on each other.

Worktrees share the underlying git objects, and they only isolate the working tree. Depending on your stack, everything else can still collide: shared process state, directory locks, test runs fighting over the same local resources. Two agents, one worktree setup, instant chaos.

I tried. It was not a fun afternoon.

The solution: full repo copies.

cp -r ~/project ~/project-1
cp -r ~/project ~/project-2
cp -r ~/project ~/project-3

Each copy is a completely independent environment. One agent building feature A, another building feature B, a third on feature C — all simultaneously, each with their own branch, their own state, their own everything.

project/   → my main working copy
project-1/ → Agent working on feature A
project-2/ → Agent working on feature B
project-3/ → Agent working on feature C

While one agent is deep in feature A, I'm reviewing feature B's output. Feature C is running in the background. I'm never waiting on a single thread — and neither is any agent.

To kick one off — open Claude Code in that copy's directory, point it to the plan file, and it runs from there. Each copy is a fully independent Claude Code session.
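Before handing a copy to an agent, I make sure it has its own branch and its own dependency install, so nothing is shared between copies. A minimal sketch (the branch name is illustrative):

```shell
cd ~/project-1
git checkout -b feature/a   # the agent commits here, never to main
# fresh dependency install inside the copy, so no agent shares state with
# another (exact command depends on your stack: npm ci, pip install, ...)
```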

When an agent finishes, I pull the diff out and raise a PR from that copy.
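That extraction is plain git. A sketch, assuming the agent's work is sitting uncommitted on a `feature/a` branch inside `~/project-1` (names are illustrative):

```shell
cd ~/project-1
git add -A                       # stage the agent's work, after review
git commit -m "feat: add feature A"
git push -u origin feature/a     # push the copy's branch to the shared remote
```

From there, if you use the GitHub CLI, `gh pr create --fill` raises the PR from that copy's branch.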

Why not just worktrees for this too?

Worktrees isolate branches and working directories. Copies isolate everything: files, dependencies, caches, process state. For parallel agents actively building and running tests, you need that full isolation. Worktrees can't give it to you; copies can.

Rule of thumb:

  • One feature, one agent, sequential → git worktree
  • Multiple features, multiple agents, parallel → separate repo copies

Step 4: How Each Agent Actually Works Inside a Copy

Each repo copy has one Claude agent. That agent owns its feature end to end. It's not a junior dev you have to babysit — it's a senior dev who happens to need very explicit instructions and will do exactly what you say, for better or worse.

Inside each copy, the agent follows the plan task by task. For each task, I dispatch a fresh subagent with exactly what it needs: the task, the relevant context, the constraints, what to return. Nothing else.

Fresh subagent per task = no context bleed. The agent on task 3 doesn't carry confusion from task 1. Every task starts sharp.

After each task, two review gates before moving on:

Gate 1 — Requirement compliance

Does the code match what was asked? Nothing missing, nothing extra?

If the agent added something clever that wasn't asked for — out. If it skipped something required — back in. "Close enough" fails this gate every time.

Gate 2 — Code quality

Is the implementation actually well-built?

Gate 2 only runs after Gate 1 passes. No point stress-testing the quality of something that built the wrong thing.

Both gates are fresh subagents. Issues found → implementer fixes → reviewer re-checks. Loop until both approve, then next task. No shortcuts.

Meanwhile, project-2 is running the exact same flow for a different feature. And project-3. All simultaneously. All operating from the same coding standards — because each copy carries the full Claude config.

This is the actual parallel. Not just running tests faster. Entire feature tracks shipping at the same time.


The Thing That Makes All of This Work: CLAUDE.md + Skills

Every agent I dispatch — every subagent, every reviewer, every parallel agent in every repo copy — reads my CLAUDE.md before touching anything.

It's the employee handbook for my AI team. Except unlike most employee handbooks, people actually follow this one.

Mine covers:

  • Architecture conventions (where things live, how they're structured)
  • What's forbidden (patterns I've banned and exactly why)
  • Testing strategy (what to test, what not to mock, how tests are structured)
  • Key commands (how to run tests, lint, build)
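A heavily abridged sketch of the shape (the paths and rules here are illustrative, not my literal file):

```markdown
# CLAUDE.md

## Architecture
- Services live in src/services/, one responsibility per service.

## Forbidden
- No direct DB access from route handlers. Why: it bypasses the service
  layer, which is where validation and authorization live.

## Testing
- Integration tests never mock the service layer.

## Commands
- Tests: npm test
- Lint:  npm run lint
```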

Since each repo copy is a full clone of the project, it carries the complete Claude config: CLAUDE.md plus every domain skill configured for that codebase. Skills are reusable instruction sets that auto-trigger based on what you're working on. Think of them as specialised playbooks Claude loads when it detects you're touching a specific part of the system.

Every agent in every copy automatically gets the same standards, the same workflows, the same constraints. No setup per copy. No drift between agents.

This is what keeps multiple agents consistent without me managing each one individually. They're not just isolated from each other — they're all working from the same rulebook.

The rules I include aren't just what — they explain why. If Claude understands the reasoning, it makes better calls in gray areas instead of picking the most literal interpretation of your instructions and running with it off a cliff.

What I cut: anything where I can honestly say "removing this won't cause Claude to make mistakes." Every unnecessary line dilutes the signal.


What I Still Review Myself

Claude Code is fast. It's not a mind reader, and it's not infallible.

Things I always personally review, regardless of what review gates say:

  • Authorization logic — never skip, never assume it's right
  • Anything touching shared state across users or accounts
  • Core business logic with real edge cases

Two hard rules I enforce on every agent, no exceptions:

No auto-commit. Agents stage changes, present the diff, and wait for my explicit sign-off before anything touches git. I've seen agents commit mid-task with half-finished work. Hard no.

Cleanup before done. Any code added during a session that got replaced or made redundant during iteration gets deleted before the task closes. Agents leave behind ghost methods from earlier attempts like forgotten tabs in a browser. The rule: review everything added in the session, delete anything no longer used.
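The fastest way to do that final sweep is to diff the whole branch, not just the latest change. A sketch, assuming the session's work lives on a branch cut from `main`:

```shell
git diff --stat main...HEAD   # every file this session touched
git diff main...HEAD          # full diff: hunt for dead helpers left over
                              # from earlier attempts before closing the task
```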

AI-generated code can be confidently wrong. "Approved" from a reviewer subagent doesn't replace my eyes on critical paths.


What You Actually Walk Away With

Apply these today — regardless of where you are with Claude Code

  • Start with intent, not implementation. Two sentences of what + why before any prompt saves you an hour of wrong output.
  • CLAUDE.md is your team's shared brain. Set it up once. Every agent reads it. The more precise it is, the less you babysit.
  • No auto-commit. No exceptions. Agents stage, you approve, then it goes to git. You're the last line.
  • Worktrees for one feature, repo copies for parallel ones. Don't fight the tool. Each has a job — and conflating them is a bad afternoon.

Once you're running parallel agents

  • Plan with exact diffs, not descriptions. Vague plans produce vague code. The agent follows the score — write the score.
  • Fresh subagent per task. Context bleed is real. Every task starts clean, focused, without baggage from the one before it.
  • Two review gates, always in order. Right requirements first, then quality. Reviewing quality on the wrong feature is just polishing a mistake.

What Actually Changed

I used to be the bottleneck. I'd write code, review it, test it, fix it — all sequential, all me. One thread. Very human. Very slow.

Now I design, delegate, and validate. I'm often waiting on myself to review output, not waiting on Claude to finish. That's a good problem to have.

The actual unlock isn't "Claude writes faster code."

It's: I stopped being the execution bottleneck and became the decision-maker.

The engineers are faster. The leverage is real. The judgment still has to come from you — and honestly, that's the interesting part of the job anyway.


All patterns in this post come from my actual Claude Code setup — workflows and conventions I run daily on a production codebase.


What's your current Claude Code setup? Still single-threaded or already running parallel? Drop it in the comments — curious what workflows people have settled on.
