Sahil Kathpal

Posted on • Edited on • Originally published at codeongrass.com

Run Multiple Coding Agents in Parallel with Git Worktrees

TL;DR

You can run 2–5 Claude Code, Codex, or Aider sessions simultaneously on the same codebase by using git worktrees to give each agent its own working directory — no duplicate repo clones required. Worktrees share the .git object store, so branches stay cheap and merging is clean. Git worktrees solve filesystem isolation but not runtime isolation (port conflicts, shared databases, .env collisions), so you need a small amount of extra plumbing for those. Grass makes this practical at scale: each agent session is persistent and you can monitor all of them — plus approve tool executions — from a single mobile interface instead of juggling five terminal windows.


Why not just clone the repo five times?

The obvious approach to running parallel agents is cloning the repository into separate directories. It works, but it has compounding costs:

  • Disk bloat: Each clone duplicates all object history. A 2 GB repo with 5 clones means 10 GB on disk.
  • Divergent .git directories: You can't easily git log across all parallel branches without switching between directories.
  • Stale fetch problem: Each clone needs its own git fetch cycle. If agents are working on related features, they'll miss each other's commits until you remember to sync.
  • No shared pack files: Git's delta compression only works within a single object store. Multiple clones miss out on this entirely.

Git worktrees give you multiple working directories — each on its own branch — backed by a single .git store. The object store is shared, fetches propagate everywhere, and a new worktree costs only its checked-out files on disk; the object history is never duplicated.

# One clone, three worktrees for three agents
git worktree add ../project-feat-auth feat/auth
git worktree add ../project-feat-payments feat/payments
git worktree add ../project-feat-search feat/search

Each directory behaves like a full checkout. The agent running inside project-feat-auth sees only that branch. Commits land on feat/auth. Nothing bleeds across.
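You can verify the shared store from any worktree. A small sketch (assuming Git 2.31+ for the --path-format flag): every worktree of the same repository resolves to the same .git directory.

```shell
# Print the shared object store behind a worktree (Git 2.31+ for --path-format).
# Every worktree of one repository resolves to the same .git directory.
shared_store() {
  git -C "$1" rev-parse --path-format=absolute --git-common-dir
}

# Usage: shared_store ../project-feat-auth and shared_store . print the same path.
```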

How to set up git worktrees for parallel agents

Prerequisites

  • Git 2.5+ (worktrees were introduced in 2.5; git worktree remove, used in cleanup below, needs 2.17, so treat 2.17 as the practical minimum)
  • A central repo you've already cloned — this becomes your "main" worktree

Step 1: Create branches for each task

Before creating worktrees, create the branches. Worktrees require a branch that isn't already checked out elsewhere.

cd ~/projects/myapp          # your main worktree
git fetch origin
git branch feat/auth origin/main
git branch feat/payments origin/main
git branch feat/search origin/main

Step 2: Add worktrees

git worktree add ~/projects/myapp-auth feat/auth
git worktree add ~/projects/myapp-payments feat/payments
git worktree add ~/projects/myapp-search feat/search
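A shortcut worth knowing: git worktree add -b creates the branch and the worktree in one command, collapsing steps 1 and 2. A sketch wrapping it in a helper (the myapp- naming is just this article's convention):

```shell
# Create feat/<task> and its worktree in one step.
# Fails if the branch already exists -- a useful guard against clobbering.
add_task_worktree() {
  task="$1"
  start="${2:-HEAD}"          # branch point, e.g. origin/main
  git worktree add -b "feat/$task" "../myapp-$task" "$start"
}

# Usage: add_task_worktree notifications origin/main
```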

Check what you've got:

git worktree list

Expected output:

/home/user/projects/myapp           abc1234 [main]
/home/user/projects/myapp-auth      def5678 [feat/auth]
/home/user/projects/myapp-payments  ghi9012 [feat/payments]
/home/user/projects/myapp-search    jkl3456 [feat/search]

Step 3: Install dependencies per worktree

This is the first place people get tripped up. Your node_modules, .venv, or target/ directory is not shared — each worktree needs its own install.

# Node
for dir in ~/projects/myapp-{auth,payments,search}; do
  (cd "$dir" && npm install) &
done
wait

# Python
for dir in ~/projects/myapp-{auth,payments,search}; do
  (cd "$dir" && python -m venv .venv && .venv/bin/pip install -r requirements.txt) &
done
wait

Running installs in parallel with & + wait cuts setup time significantly.

Step 4: Launch agents in each worktree

Use tmux or screen to give each agent its own pane. Here's a tmux setup that creates a window per worktree:

SESSION="agents"
tmux new-session -d -s $SESSION -n main

for task in auth payments search; do
  tmux new-window -t $SESSION -n "agent-$task"
  tmux send-keys -t $SESSION:"agent-$task" "cd ~/projects/myapp-$task" Enter
  # Claude Code
  tmux send-keys -t $SESSION:"agent-$task" "claude" Enter
done

tmux attach -t $SESSION

Switch between agent windows with Ctrl+b then the window number. Each agent sees its own worktree, its own branch, and its own file state.


The runtime isolation problem git worktrees don't solve

Git worktrees handle filesystem isolation cleanly. They do not handle anything about what your code does when it runs. If you have 3 agents each spinning up a dev server, you have a collision problem.

Port collisions

If your app defaults to port 3000, three agents will fight over port 3000. The second and third will fail to start.

Fix: parameterize ports via environment variables and give each worktree a distinct .env.local (or equivalent):

# myapp-auth/.env.local
PORT=3001
VITE_PORT=3001

# myapp-payments/.env.local
PORT=3002
VITE_PORT=3002

# myapp-search/.env.local
PORT=3003
VITE_PORT=3003

Most dev servers respect PORT. For those that don't, check the framework docs — Next.js uses -p, Vite uses --port, Rails uses -p.
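Writing these files by hand gets tedious past two or three worktrees. A throwaway sketch (assuming the ~/projects/myapp-<task> layout from above) that assigns ports deterministically as base+1, base+2, and so on:

```shell
# Write a distinct PORT/VITE_PORT to each worktree's .env.local.
# Ports are assigned as base+1, base+2, ... in task order.
write_ports() {
  base="$1"; shift
  i=1
  for task in "$@"; do
    port=$((base + i))
    printf 'PORT=%d\nVITE_PORT=%d\n' "$port" "$port" > "$HOME/projects/myapp-$task/.env.local"
    i=$((i + 1))
  done
}

# Usage: write_ports 3000 auth payments search
# -> auth gets 3001, payments 3002, search 3003
```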

If you're using a reverse proxy (nginx, Caddy) to route traffic, add upstream blocks pointing to each port:

upstream agent-auth     { server 127.0.0.1:3001; }
upstream agent-payments { server 127.0.0.1:3002; }
upstream agent-search   { server 127.0.0.1:3003; }

Database collisions

Shared databases are the nastier problem. Three agents running migrations simultaneously against the same Postgres database will step on each other. Schema changes from one agent's feature branch can break the other agents' test suites mid-run.

Options, in order of isolation strength:

1. Separate databases per worktree (recommended for schema-changing work)

# Create per-worktree databases
for task in auth payments search; do
  createdb myapp_$task
done

Update each .env.local:

# myapp-auth/.env.local
DATABASE_URL=postgres://localhost/myapp_auth

# myapp-payments/.env.local
DATABASE_URL=postgres://localhost/myapp_payments

2. Separate schemas in one database (good for lightweight isolation)

CREATE SCHEMA agent_auth;
CREATE SCHEMA agent_payments;
CREATE SCHEMA agent_search;

Set search_path per connection. Some ORMs support this natively; others need a wrapper.
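With libpq-style connection URIs you can often pin the schema in the URL itself via the options parameter, so no code change is needed — driver support varies, so verify with yours (%3D is the URL-encoded =):

```
# myapp-auth/.env.local — one shared database, per-agent schema
DATABASE_URL=postgres://localhost/myapp?options=-csearch_path%3Dagent_auth

# myapp-payments/.env.local
DATABASE_URL=postgres://localhost/myapp?options=-csearch_path%3Dagent_payments
```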

3. Docker Compose per worktree (heaviest, but fully isolated)

If agents need different database versions or Redis instances:

# myapp-auth/docker-compose.yml
services:
  db:
    image: postgres:16
    ports:
      - "5433:5432"   # offset from default 5432
    environment:
      POSTGRES_DB: myapp_auth
# myapp-payments/docker-compose.yml
services:
  db:
    image: postgres:16
    ports:
      - "5434:5432"
    environment:
      POSTGRES_DB: myapp_payments

Run docker compose up -d in each worktree directory before starting its agent.

Shared .env files

Your root .env sits in the main worktree's directory. The auth worktree at ~/projects/myapp-auth won't automatically see ~/projects/myapp/.env.

Strategy: Use a base .env committed to the repo (with non-secret defaults) and a per-worktree .env.local that overrides runtime-specific values (ports, database URLs). Add .env.local to .gitignore.

# Copy the base .env into each worktree
for task in auth payments search; do
  cp ~/projects/myapp/.env ~/projects/myapp-$task/.env
done
# Then edit each worktree's .env.local for the port/DB overrides above

Orchestration patterns: lead agent + worker agents

Running multiple agents in parallel without coordination leads to duplicated work and merge conflicts. A simple orchestration pattern that works well in practice:

The lead + workers model

One agent (the lead) handles planning, architecture decisions, and integration. Worker agents each own a bounded task. The lead reviews and merges worker output.

Lead agent (main worktree)
  ├── Writes the overall spec / task decomposition
  ├── Reviews PRs from worker agents
  └── Handles cross-cutting concerns (auth middleware, shared types)

Worker agent 1 (feat/auth worktree)       → implements auth endpoints
Worker agent 2 (feat/payments worktree)   → implements payment flow
Worker agent 3 (feat/search worktree)     → implements search indexing

In practice, the lead's CLAUDE.md (or equivalent system prompt file) documents what the workers are doing so the lead doesn't duplicate it. Worker agents get scoped prompts:

You are working in the feat/auth worktree. Your task is to implement
JWT authentication endpoints in src/auth/. The shared types live in
src/types/ — do not modify them. When you're done, run the tests in
this worktree only and report results.

Merging worker output

When a worker branch is ready, the lead reviews it from the main worktree:

cd ~/projects/myapp          # main worktree
git fetch origin feat/auth   # get the worker's latest
git diff main...feat/auth    # review the delta
git merge feat/auth          # merge when clean

Because all worktrees share the same object store, this fetch is near-instant — the objects are already local.
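If the lead is merging several worker branches in a row, it's worth gating each merge on the test suite. A hedged sketch — TEST_CMD is whatever runs your tests, and conflicted merges are backed out and left for manual resolution:

```shell
# Merge worker branches one at a time, backing out any merge whose tests fail.
TEST_CMD="${TEST_CMD:-npm test}"

merge_workers() {
  for branch in "$@"; do
    if git merge --no-ff -m "Merge $branch" "$branch"; then
      # Merge applied cleanly -- keep it only if the suite still passes.
      $TEST_CMD || git reset --hard ORIG_HEAD
    else
      # Conflict: undo and leave this branch for manual resolution.
      git merge --abort
    fi
  done
}

# Usage: merge_workers feat/auth feat/payments feat/search
```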


Managing parallel sessions without losing your mind

Here's where the workflow gets genuinely painful without tooling: you have 3–5 terminal windows, each running an agent, each potentially waiting for your approval on a file write or bash command. You miss one, the agent hangs. You switch to the wrong window, you're reading the wrong context.

This is the exact problem Grass is designed for. Each Claude Code session running in a Grass VM is a persistent session — if you close your laptop, the agents keep running. When you come back, you reconnect to each session without losing state. The mobile interface shows you all running sessions, and permission prompts (file writes, bash executions) come through as native modals you can approve or deny in-place. You're not juggling five SSH connections; you have a single pane that shows you what each agent is doing and lets you intervene when it needs you.

For a worktree-based parallel workflow, the session persistence matters especially because parallel tasks often take 20–40 minutes each. Grass's free tier gives you 10 hours to try this without a credit card.


Cleaning up

When a feature is merged, remove its worktree:

git worktree remove ~/projects/myapp-auth
git branch -d feat/auth
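Those two commands pair naturally into a helper (hypothetical myapp-<task> layout again; -d refuses to delete unmerged branches, which is the safety you want here):

```shell
# Remove a finished task's worktree and its branch in one step.
cleanup_task() {
  task="$1"
  git worktree remove "$HOME/projects/myapp-$task"
  git branch -d "feat/$task"    # use -D only if you mean to discard unmerged work
}

# Usage: cleanup_task auth
```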

If the worktree directory was deleted manually (agent went rogue, disk cleanup), prune the stale references:

git worktree prune

FAQ

Can two git worktrees be on the same branch at the same time?

No. Git enforces one worktree per branch. If you try git worktree add with a branch that's already checked out in another worktree, you'll get an error: fatal: 'feat/auth' is already checked out. Create a new branch, add a branch-less checkout at the same commit with git worktree add --detach, or use a bare clone as your object store if you genuinely need the same branch in two places.

Do I need to run npm install separately in every worktree?

Yes. node_modules is not shared between worktrees. Each worktree is an independent working directory. The exception is if you're using a monorepo tool with a shared cache (Turborepo, Nx) — those tools can share build artifacts, but the install itself still runs per worktree. pnpm softens the disk cost too: its content-addressable store links package files across installs, though each worktree still needs its own pnpm install.

How do I run Claude Code in a specific worktree?

cd into the worktree directory and run claude there. Claude Code picks up context from the current directory. If you're using CLAUDE.md for project instructions, put a worktree-specific one in each worktree directory to scope the agent's behavior.
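As an illustration (the contents are hypothetical, shaped like the scoped prompt from the lead + workers section), a worktree-specific CLAUDE.md might read:

```
# myapp-auth/CLAUDE.md
You are working on branch feat/auth in this worktree only.
Scope: src/auth/. Do not modify src/types/ — it is owned by the lead agent.
Run tests in this directory only; never touch other worktrees.
```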

My agents keep hitting rate limits when running in parallel. What can I do?

Anthropic applies rate limits at the organization level, so all five agents draw from the same budget, and aggressive token usage will exhaust it quickly. Options: stagger agent start times, pace each agent's request frequency, or assign cheaper models to bounded worker tasks (Haiku for workers, Sonnet for the lead).
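Staggering is easy to script. A sketch: launch each command in the background with an increasing delay (commands are passed as strings and run through eval, so quote carefully):

```shell
# Run each command in the background, spacing the starts by <gap> seconds.
stagger() {
  gap="$1"; shift
  delay=0
  for cmd in "$@"; do
    ( sleep "$delay"; eval "$cmd" ) &
    delay=$((delay + gap))
  done
  wait   # block until every staggered job finishes
}

# Usage: stagger 60 'cd ~/projects/myapp-auth && claude' \
#                   'cd ~/projects/myapp-payments && claude'
```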

What's the difference between a git worktree and a git submodule?

Completely different things. Submodules embed one repository inside another. Worktrees create multiple working directories for the same repository. For parallel agents on the same codebase, you want worktrees.

Top comments (1)

PEACEBINFLOW

The AGENTS.md ownership manifest is doing something quietly radical: it's giving the agents a shared social contract. Without it, each agent operates in a vacuum, assuming it has full sovereignty over the codebase. That's not a technical bug—it's a missing governance layer. The manifest says "you exist in a system with other actors, here are your boundaries, here's what happens when you need to cross them." That's less like a config file and more like a constitution for a tiny agent society.

What I keep thinking about is the handoff protocol. "Append to Pending handoffs and surface a permission request" is a workflow that requires the agent to recognize it's about to cross a boundary, stop itself, and ask for help. That's a surprisingly sophisticated behavior. Most agents will just barrel through and edit the file because they can. The protocol only works if the agent actually respects it, and that respect isn't enforced by the filesystem—it's enforced by the prompt. Which means the whole system's reliability depends on how consistently the model follows instructions over long sessions where context windows shift and system prompts drift.

The subagent tracing problem in Lazyagent is the detail that makes me nervous about this setup in practice. A spawned subagent doesn't inherit the AGENTS.md constraints unless you explicitly inject them. That means the ownership boundaries you carefully defined for the parent agent are invisible to any subagent it creates. One poorly-constrained subagent spawn is all it takes for an out-of-scope write to slip through. You mention needing to include the manifest in subagent initialization—is that something you've found a reliable way to enforce, or is it still a manual step that's easy to forget in the middle of a long session?