Sahil Kathpal

Posted on • Originally published at codeongrass.com

How to Keep Parallel Coding Agents from Stepping on Each Other

Running two or three AI coding agents in parallel on the same codebase is a legitimate productivity multiplier — until they silently collide. Without isolation and explicit ownership boundaries, agents overwrite each other's changes, launch conflicting refactors of the same file, and surface confusing approval requests that leave you wondering which session touched what. This guide gives you a concrete, tool-agnostic framework: git worktree isolation per agent, explicit ownership assignment via a shared manifest file, and cross-agent audit tooling so you always know what happened and when to intervene.

TL;DR: Use one git worktree per agent so they can't write to the same working tree. Define explicit file ownership in an AGENTS.md manifest. Use Lazyagent to trace per-tool-call activity across concurrent sessions. Add Loopi for cross-agent critique between plan and implement phases. If you want a unified intervention surface when you're away from your desk, Grass runs all your sessions on an always-on cloud VM and forwards every approval gate to your phone.


Why Parallel Agents Step on Each Other

A thread in r/ClaudeAI captures the failure mode precisely: when running multiple Claude Code agents on the same project, neither agent knows the other exists. One agent refactors src/utils/helpers.ts mid-task while another has a feature branch that depends on the pre-refactor interface. Neither flags a conflict. The human finds out afterward. As one developer put it: "The agent often asks me, did you know this happened or did you approve this change?" — and the answer is always no.

A parallel thread on r/ClaudeCode, How are you managing multiple coding agents in parallel without things getting messy?, confirms this is widespread with no established patterns. The recurring pain points: ownership ambiguity, overlapping file edits, and no recovery path when a run goes sideways.

The structural problem: agents operate with full write access to the working tree by default, have no mechanism to coordinate with peer agents, and have no visibility into what another concurrent session has changed. Careful prompting reduces this — it doesn't solve it. The fix is explicit architecture.


Prerequisites

  • Git 2.5+ (worktree support is stable across all modern versions)
  • Claude Code, Codex, or OpenCode installed and authenticated
  • Node 18+ if you plan to use Lazyagent or Loopi
  • Optional but recommended: Grass for multi-session monitoring and mobile approval forwarding when you're away from your desk

Step 1: Isolate Each Agent in Its Own Git Worktree

A git worktree (git worktree add) checks out a branch into a separate directory — a fully independent working tree backed by the same repository object store. Agents in different worktrees write to different directories. They cannot accidentally overwrite each other's uncommitted changes.

Set up one worktree per agent task:

```shell
# From your main repo root
git worktree add ../myproject-agent-auth  feature/auth-refactor
git worktree add ../myproject-agent-api   feature/api-v2
git worktree add ../myproject-agent-tests feature/test-coverage
git worktree list   # confirm each worktree and the branch it has checked out
```

Start each agent inside its own worktree directory:

```shell
# Terminal 1 — auth agent
cd ../myproject-agent-auth && claude

# Terminal 2 — API agent
cd ../myproject-agent-api && codex

# Terminal 3 — test agent
cd ../myproject-agent-tests && opencode
```

This is the structural foundation. As the Parallel Agentic Development guide from MindStudio notes: even with worktrees, if two agents both have permission to modify a shared utility file, you'll still get a merge conflict when the branches land. Worktrees prevent working-tree contamination — they don't enforce file-level scope. That's Step 2.


Step 2: Define Explicit Ownership in AGENTS.md

Create an AGENTS.md file in the repo root and commit it on every worktree branch. This file tells each agent exactly what it owns, what it must not touch, and what the handoff protocol is when it needs something outside its scope.

```markdown
# AGENTS.md — Parallel Agent Ownership Map

## Active agents

| Agent        | Branch                 | Owns                                  | Must not touch              |
|--------------|------------------------|---------------------------------------|-----------------------------|
| auth-agent   | feature/auth-refactor  | src/auth/**, src/middleware/auth.ts   | src/api/**, src/utils/**    |
| api-agent    | feature/api-v2         | src/api/**, openapi.yaml              | src/auth/**, src/utils/**   |
| test-agent   | feature/test-coverage  | tests/**, *.spec.ts                   | src/** (read-only)          |

## Shared files — single owner rule

- `src/utils/helpers.ts` — owned by api-agent. All others: read-only.
  If modification needed, append to "Pending handoffs" below and surface a permission request.
- `package.json` — test-agent owns devDependencies only. Coordinate with auth-agent for auth deps.

## Handoff protocol

When a task requires modifying a file outside your ownership:
1. Stop. Do not proceed past the boundary.
2. Append an entry to "Pending handoffs" below.
3. Surface a permission request summarizing what change is needed and why.

## Pending handoffs

<!-- agents append here during the session -->
```

Wire this into each agent's context via CLAUDE.md (or the equivalent system prompt file for your agent):

```markdown
# CLAUDE.md

Read AGENTS.md before starting any task. You are operating in a parallel multi-agent setup.
Respect the ownership map exactly. If a task requires modifying a file listed under "Must not touch",
stop immediately, append a note to the "Pending handoffs" section, and surface a permission request.
Do not proceed past an ownership boundary without explicit human approval.
```

The Parallel File Ownership Claude Code Skill implements a more structured version of this pattern — but the AGENTS.md approach works with any agent CLI, requires zero additional dependencies, and is inspectable by humans and agents alike.


Step 3: Audit Per-Agent Tool Calls with Lazyagent

Worktrees and the ownership manifest handle the static layer. What they don't give you is runtime visibility: which tool calls each agent is actually making, in what order, and whether any are crossing the lines you defined.

Lazyagent is a terminal TUI built specifically for this gap. It connects to multiple concurrent Claude Code, Codex, and OpenCode sessions and shows per-agent tool call activity as it happens. The key capability: "The agent tree shows parent-child relationships, so you can trace exactly what a spawned subagent did vs what the parent delegated."

This matters because Claude Code and OpenCode both support spawning subagents. Without tracing, you can't tell whether a file write was initiated by your top-level agent or a subagent it spawned internally — and subagents don't inherit your AGENTS.md constraints unless you explicitly include the ownership manifest in the subagent's initialization prompt.
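One way to close that gap is to give spawned subagents their own scoped definitions rather than relying on inheritance. Claude Code, for instance, reads custom subagent definitions from `.claude/agents/`. The file below is a hypothetical sketch of that pattern, not a drop-in config: the front-matter fields follow Claude Code's subagent docs (verify against your version), and the agent name and paths are this article's running examples:

```markdown
---
name: auth-scoped-worker
description: Subagent that inherits auth-agent's ownership row from AGENTS.md
---
Read AGENTS.md in the repo root before doing anything.
You are a subagent of auth-agent and inherit its ownership row exactly:
you may modify src/auth/** and src/middleware/auth.ts only.
If a task requires any other path, stop and report back to the parent agent
so it can append a "Pending handoffs" entry.
```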

With Lazyagent running, watch for these patterns:

  • Out-of-scope file writes — a tool call targeting a path outside the agent's AGENTS.md ownership column
  • Duplicate reads on the same file — two agents hammering the same file repeatedly usually means they're both blocked on a shared dependency
  • Unconstrained subagent spawns — a spawned agent with no explicit system prompt inherits no ownership rules
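The first two checks can also be automated offline. Here is a minimal Python sketch that replays tool-call events against the ownership map; the `(agent, action, path)` event shape and the glob lists are assumptions for illustration, not Lazyagent's actual log format:

```python
from collections import Counter
from fnmatch import fnmatch

# Hypothetical ownership map mirroring the AGENTS.md table above.
OWNERSHIP = {
    "auth-agent": ["src/auth/*", "src/middleware/auth.ts"],
    "api-agent":  ["src/api/*", "openapi.yaml"],
    "test-agent": ["tests/*"],
}

def audit(events):
    """Flag out-of-scope writes and files read 3+ times by one agent.

    `events` is a list of (agent, action, path) tuples, an assumed
    pre-extracted shape rather than any tool's native output."""
    anomalies = []
    reads = Counter()
    for agent, action, path in events:
        if action == "write":
            owns = OWNERSHIP.get(agent, [])
            if not any(fnmatch(path, pattern) for pattern in owns):
                anomalies.append(f"{agent}: out-of-scope write to {path}")
        elif action == "read":
            reads[(agent, path)] += 1
            if reads[(agent, path)] == 3:   # flag once, on the third read
                anomalies.append(f"{agent}: repeated reads of {path}")
    return anomalies
```

Feeding a morning's event log through `audit` gives you the same anomaly list to review that you would otherwise eyeball in the TUI.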

When Lazyagent surfaces an anomaly, you have three options without interrupting the whole session: let it proceed if the action looks benign, deny the specific pending permission gate, or abort and redirect that one agent.


Step 4: Add Cross-Agent Critique with Loopi

The subtler failure mode in parallel agent workflows isn't file collisions — it's epistemic agreement. If one agent writes a flawed implementation and another reviews it using the same underlying model, you get two agents confidently endorsing the same mistake. The review stage adds no signal.

Loopi solves this by enforcing a Plan → Implement → Review sequence across different CLIs. Each stage runs in a separate agent session with a fresh context and an explicitly adversarial role. The reviewing agent didn't write the code — it critiques it. Loopi's stage gates prevent any agent from auto-approving the previous stage's output.

This maps directly to what OpenAI's practical guide to building agents describes as a decentralized handoff pattern: agents hand off control to each other with explicit state transfer rather than shared memory, where each agent in the chain has a defined role and bounded context.

Use Loopi as the gate before merging any worktree branch back to main:

  1. Plan phase — Claude Code produces a task plan and expected diff
  2. Implement phase — Codex implements against the plan in the worktree
  3. Review phase — OpenCode reviews the actual diff against the plan, surfaces objections

If the review stage returns objections, the implementing agent addresses them before the branch is merged. This cycle catches the category of bugs that neither worktree isolation nor ownership files address: logical errors that a fresh perspective would catch.


Step 5: Define Your Intervention Triggers Before the Run Starts

Knowing when to step in is as important as having the tools to do it. The AI Agent Handoff Protocols framework describes a useful spectrum: from full autonomy to full supervision, with "monitored autonomy" — agents operate freely while humans are alerted on specific triggers — as the practical baseline for parallel coding work.

Define your triggers before launching sessions, not during:

Hard stops — interrupt immediately:

  • An agent attempts a write outside its AGENTS.md ownership scope
  • An agent proposes a schema migration, drop table, or any destructive database operation
  • Lazyagent shows a subagent spawned without an explicit system prompt
  • Two agents produce diffs to the same file within the same 10-minute window

Soft alerts — review before next session starts:

  • A session has consumed 3x the expected token budget with no commits (usually means it's looping)
  • An agent has run 30+ minutes of tool activity with zero git commits
  • Loopi's review stage returns more than three distinct objections on one diff
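The same-file hard stop can be checked mechanically rather than by eye. A sketch under stated assumptions: each branch's recent commits have already been reduced to `(path, unix_timestamp)` pairs, for example from `git log --name-only` output, before calling this function:

```python
def same_file_collisions(edits_a, edits_b, window_secs=600):
    """Return paths both agents touched within `window_secs` of each other.

    `edits_a` / `edits_b` are lists of (path, unix_timestamp) pairs, an
    assumed pre-extracted shape, not raw git output."""
    latest_a = {}
    for path, ts in edits_a:
        latest_a[path] = max(ts, latest_a.get(path, 0))  # keep newest edit per path
    collisions = set()
    for path, ts in edits_b:
        if path in latest_a and abs(ts - latest_a[path]) <= window_secs:
            collisions.add(path)
    return sorted(collisions)
```

A non-empty result for any branch pair is your signal to pause both sessions before the diffs diverge further.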

Write these triggers into the task brief you give each agent at session start. That way the agent knows to surface a permission request when it hits a boundary rather than proceeding silently. Understanding exactly what those gates protect — and where they fall short — is covered in what is an agent approval gate?.


How to Verify the Setup Works

Run a dry-run before you use this framework on a real task.

```shell
# Verify worktree isolation: changes in one worktree don't appear in another
cd ../myproject-agent-auth
echo "// test write" >> src/api/routes.ts    # outside auth-agent's scope
git diff                                     # shows the rogue change in this worktree
(cd ../myproject-agent-api && git status)    # clean — the change didn't bleed across
git restore src/api/routes.ts                # undo the test write

# Verify AGENTS.md is loaded: ask the agent directly
# In your agent session, send:
# "Read AGENTS.md and list every file path you are not permitted to modify."
# It should enumerate the "Must not touch" column for your agent row accurately.
```

For Lazyagent: start two agents on trivial tasks (e.g., "add a comment to a test file"), connect Lazyagent, and confirm you see both session trees with separate tool call logs. If you see one session's events appearing in the other's tree, the session IDs may be configured incorrectly.


Troubleshooting Common Issues

Worktree branches diverge so far that merging becomes expensive.
Keep worktree branches short-lived — one focused task per branch, merged within a working day. For longer-running work, add an explicit sync step at the start of each session: git fetch origin && git rebase origin/main. Rebase rather than merge to keep the branch history linear.

An agent ignores the AGENTS.md ownership rules mid-session.
System prompt influence can drift over very long sessions. Add a PreToolUse hook that checks the ownership map before any file write and blocks, or warns on, writes outside the agent's scope. The hook runs deterministically in the CLI before the tool call executes, so enforcement doesn't depend on the model remembering the rules.
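A sketch of such a hook in Python, assuming Claude Code's documented hook protocol (the pending tool call arrives as JSON on stdin; exit code 2 blocks it and feeds stderr back to the model). Verify the exact schema against your CLI version's hook docs; the denylist here is hypothetical and would in practice be generated from AGENTS.md:

```python
#!/usr/bin/env python3
"""PreToolUse ownership guard (a sketch, not an official API).

Wire it into .claude/settings.json under hooks -> PreToolUse with a
matcher for Write|Edit tools; check your CLI's hook docs for the schema."""
import json
import sys
from fnmatch import fnmatch

# Hypothetical denylist: in practice, generate this from the
# "Must not touch" column of AGENTS.md for the agent in this worktree.
MUST_NOT_TOUCH = ["src/api/*", "src/utils/*"]

def out_of_scope(call: dict) -> bool:
    """True if the pending tool call targets a path this agent must not touch."""
    path = call.get("tool_input", {}).get("file_path", "")
    return any(fnmatch(path, pattern) for pattern in MUST_NOT_TOUCH)

def main() -> None:
    call = json.load(sys.stdin)   # the pending tool call, as JSON
    if out_of_scope(call):
        # stderr is surfaced back to the model; exit code 2 blocks the call
        print("BLOCKED: path is outside this agent's ownership scope. "
              "Add an entry to 'Pending handoffs' in AGENTS.md instead.",
              file=sys.stderr)
        sys.exit(2)

# When deployed as an actual hook, invoke it on script entry:
# if __name__ == "__main__": main()
```

Because the check is plain glob matching, the same `out_of_scope` function can double as the offline audit described in Step 3.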

Lazyagent loses a session after a network interruption.
Lazyagent connects to agents via their local API ports. If sessions are running inside tmux or a remote machine, ensure the relevant ports are forwarded and stable. For remote sessions, Tailscale between the machine and your Lazyagent client is the most reliable path.

Two agents both modify a shared file despite AGENTS.md.
Add a lightweight lock file convention: each agent writes a <filename>.agent-lock file containing its name before editing, and checks for an existing lock before proceeding. It's low-tech but reliable for the small number of genuinely shared files.
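That convention can be sketched in a few lines of Python. The `.agent-lock` naming is this article's convention, not a standard; the key detail is `os.open` with `O_CREAT | O_EXCL`, which makes acquisition atomic so two agents can't both believe they won the lock:

```python
import os

def acquire_lock(path: str, agent_name: str):
    """Try to claim <path>.agent-lock for agent_name.

    Returns (True, agent_name) on success, or (False, current_holder)
    if another agent already holds the lock."""
    lock = path + ".agent-lock"
    try:
        # O_CREAT | O_EXCL is atomic: exactly one caller can create the file
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, agent_name.encode())
        os.close(fd)
        return True, agent_name
    except FileExistsError:
        with open(lock) as f:
            return False, f.read().strip()

def release_lock(path: str) -> None:
    """Remove the lock after the edit lands (e.g. after commit)."""
    os.remove(path + ".agent-lock")
```

Instruct each agent (via CLAUDE.md) to call the acquire step before touching a shared file and to release after committing; a `(False, holder)` result becomes a "Pending handoffs" entry rather than an edit.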

Loopi's review stage takes too long and blocks the pipeline.
Run the review agent on a faster model variant (Sonnet instead of Opus) for latency, or scope the review prompt to only the files that changed rather than the full repository diff.


How Grass Makes This Workflow Better

The framework above runs entirely from your terminal. Where it breaks down: when you're away from your desk with three agents running in parallel, you have no ergonomic way to monitor all three sessions, handle permission gates, or intervene without returning to a laptop.

Grass — a machine built for AI coding agents — runs sessions on an always-on cloud VM and surfaces all of them in a single mobile app. For parallel workflows specifically:

One surface for every active session. You switch between your auth agent, API agent, and test agent from a single screen. Each session shows its current activity, streaming output, and any pending permission requests. You don't need three tmux panes on a laptop to monitor three concurrent runs — it's all in the Grass multi-session dashboard.

Permission gates forwarded to your phone as native modals. When one of your parallel agents hits an ownership boundary you defined — or any tool call that requires approval — Grass delivers the request to your phone with full context: the tool name, the exact command or file path, and a syntax-highlighted preview of what will execute. One tap to approve or deny. The agent waits; your session stays alive.

Sessions persist when your laptop closes. If you're running three worktrees across three agent sessions on a local machine, all three die when the lid closes. Grass runs sessions on persistent Daytona VMs — they stay alive, keep working, and you reconnect from wherever you are.

Agent-agnostic. Claude Code, Codex, and OpenCode are all first-class citizens. If your parallel workflow mixes agents — Claude Code for implementation, OpenCode for the Loopi review phase — you manage both from the same surface with no context switching.

Grass is a recommended addition to this workflow, not a prerequisite. The worktree isolation and ownership framework above works without it. But if you're running parallel agents seriously, one surface for every session is the difference between juggling tabs and staying genuinely in control.

Try Grass at codeongrass.com — free tier includes 10 hours, no credit card required.


Frequently Asked Questions

How do I prevent two Claude Code agents from editing the same file at the same time?
Use git worktrees to give each agent a separate working tree isolated by directory. Then define explicit file ownership in an AGENTS.md file that lists which paths each agent owns and which it must not touch. Include this manifest in each agent's CLAUDE.md or system prompt so the agent enforces the boundary itself.

What is the best tool for auditing what multiple Claude Code agents did in parallel?
Lazyagent is the most purpose-built option available today — it's a terminal TUI that shows per-agent tool call activity, parent-child subagent relationships, and inline diffs per operation across Claude Code, Codex, and OpenCode sessions simultaneously.

How do git worktrees work for parallel AI agent sessions?
git worktree add <path> <branch> creates a new directory with an independent working tree checked out to the specified branch, backed by the same repository. Changes committed in one worktree do not appear in another until branches are merged. Multiple agents can run in separate worktrees without their uncommitted file changes bleeding across sessions.

How do I stop parallel AI coding agents from agreeing with each other instead of catching each other's mistakes?
Use Loopi to enforce a Plan → Implement → Review cycle across different CLI tools. The reviewing agent runs in a fresh session context and didn't write the code it's reviewing — so it critiques rather than self-approves. Running the review stage on a different agent (e.g., OpenCode reviewing Claude Code's output) compounds the independence further.

When should I interrupt a parallel coding agent session?
Interrupt immediately if an agent attempts to write outside its ownership scope, proposes a destructive database operation, or if two agents produce diffs to the same file in the same time window. Softer signals — a session spending 3x expected tokens with no commits, or Loopi's review returning several objections — warrant review before the next session but not an immediate abort.

Can I mix Claude Code, Codex, and OpenCode in the same parallel workflow?
Yes. Worktrees are agent-agnostic — each directory is just a working tree that any CLI can run inside. Loopi is specifically designed for cross-CLI critique cycles where different agents review each other's work. Grass manages Claude Code and OpenCode sessions from the same mobile interface if you need unified oversight across all three.

