Everyone talks about AI coding agents like the hard part is getting them to write code.
It isn’t.
The hard part is what happens after you have 3, 5, or 10 agents all trying to push work forward in the same codebase.
That’s where things get weird.
Not in a “the models are bad” way.
In a much more boring, painful way:
- two agents touch the same file
- one finishes work that quietly makes another stale
- three branches all look fine on their own and then collide at merge time
- nobody knows what should land next
- humans end up doing project management for the agents instead of actually shipping
I built Switchman because I kept feeling like the tooling around AI coding was missing the actual problem.
We have tools for:
- generating code
- editing code
- opening PRs
- running in isolated environments
But once multiple agents are involved, the real problem becomes:
How do you coordinate parallel software work without turning your repo into chaos?
The moment it clicked for me
At first I thought the problem was simple.
“Just give each agent its own branch.”
Or:
“Just use Git worktrees.”
Or:
“Just let merge conflicts happen in PRs.”
And to be fair, those things help.
But they don’t solve the actual coordination layer.
Worktrees give agents isolated checkouts.
Branches give them isolated histories.
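For concreteness, worktree-style isolation really is a one-liner per agent. This is plain git, not anything Switchman-specific; the repo and branch names are illustrative, and the throwaway repo exists only to make the snippet self-contained:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m init
# One isolated checkout per agent, each on its own branch
git worktree add -q ../agent-1 -b agent-1
git worktree add -q ../agent-2 -b agent-2
git worktree list
```

Cheap, and genuinely useful. But notice everything it doesn't say.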
Neither one answers:
- who should work on what
- how to stop duplicate work early
- how to catch overlap before the end
- how to know what became stale
- how to decide what should land next
- how to merge parallel work safely without babysitting it
That’s the gap I kept running into.
The problem wasn’t “how do I isolate agents?”
It was:
How do I run several agents at once and still trust what’s happening?
What actually breaks when you try this
Once you move past one-agent demos, the failure modes are pretty predictable.
1. Duplicate work
Two agents end up solving the same problem from different angles because nobody assigned clear ownership.
2. Silent overlap
They don’t even have to touch the exact same file. One changes a shared module, another builds on assumptions that are now outdated, and the collision only shows up later.
3. Stale work
One agent finishes something important, and another agent’s “done” work is now not really done anymore.
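Switchman's actual staleness check isn't spelled out in this post, but plain git can approximate the signal: find files the branch changed that main has also changed since the fork point. A minimal, self-contained sketch (the tiny repo and branch names are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
g() { git -c user.name=t -c user.email=t@t "$@"; }
echo base > shared.txt; echo base > other.txt
git add .; g commit -qm base
git checkout -q -b agent-1
echo agent-edit >> shared.txt; git add .; g commit -qm "agent-1 work"
git checkout -q main
echo main-edit >> shared.txt; git add .; g commit -qm "main moved on"
# Files the branch changed vs. files main changed since the fork point
git diff --name-only main...agent-1 | sort > branch-files.txt
git diff --name-only agent-1...main | sort > main-files.txt
# The intersection is the stale-work signal
comm -12 branch-files.txt main-files.txt   # -> shared.txt
```

Here agent-1's branch still merges cleanly in spirit, but its assumptions about shared.txt are already out of date.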
4. Merge queue pain
Even if every branch looks valid on its own, you still need to decide:
- what lands first
- what waits
- what should be retried
- what needs review
- what is too risky to merge yet
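One crude way to think about that ordering: score each finished branch by how many files it touches, and land the smallest blast radius first. This is a toy heuristic, not Switchman's actual queue policy; the demo repo and branches are fabricated so the snippet runs on its own:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
g() { git -c user.name=t -c user.email=t@t "$@"; }
echo base > a.txt; git add .; g commit -qm base
git checkout -q -b agent-1
echo x >> a.txt; git add .; g commit -qm "small change"
git checkout -q main
git checkout -q -b agent-2
echo y >> a.txt; echo z > b.txt; git add .; g commit -qm "bigger change"
git checkout -q main
# Crude risk score: files touched per branch; land the smallest first
for b in agent-1 agent-2; do
  echo "$(git diff --name-only main...$b | wc -l | tr -d ' ') $b"
done | sort -n
```

A real landing queue also has to weigh review state, CI results, and overlap, which is exactly why it becomes a job in itself.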
5. Humans become the scheduler
This is the part nobody advertises.
Without a coordination layer, the human ends up doing all of this manually:
- routing work
- checking overlap
- resolving priority
- deciding merge order
- figuring out what broke what
- telling agents what to retry
At that point, “parallel agents” can actually create more overhead instead of less.
What I think the real workflow is
The workflow is not:
one prompt -> one agent -> one result
That’s too small.
The real workflow for AI-native software teams is closer to this:
- Several goals arrive at once.
- The work gets broken into parallel tasks.
- Agents and humans move at the same time.
- Overlap, drift, and stale work need to be caught early.
- Validation and review need to happen in the right places.
- Someone has to decide what is safe to land, and in what order.
- The repo needs a trusted path back to main.

That’s the workflow I think tools should own.
Not just code generation.
Parallel software change.
What I built instead
Switchman acts more like a control layer than a code-writing tool.
The idea is simple:
- hand out tasks
- claim files before editing
- block overlap early
- detect stale work
- keep work visible
- run checks before landing
- queue finished branches
- land the right work safely
So instead of “let a bunch of agents loose and hope Git sorts it out,” the flow becomes more like:
- here’s who owns what
- here’s what’s blocked
- here’s what got stale
- here’s what should land next
- here’s the exact command to recover when something goes wrong
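To make "claim files before editing" concrete, here's the idea in miniature: a toy ledger where an agent must claim a path before touching it, and a second claim on the same path is refused. This is not Switchman's implementation, just a sketch of the mechanism; the function, file, and agent names are all hypothetical:

```shell
cd "$(mktemp -d)"   # fresh dir so the ledger starts empty
# Toy claim ledger: one "agent path" line per claimed file (claims.txt is hypothetical)
claim() {
  agent=$1; path=$2
  owner=$(grep -m1 " $path$" claims.txt 2>/dev/null | cut -d' ' -f1)
  if [ -n "$owner" ]; then
    echo "BLOCKED: $path already claimed by $owner"
    return 1
  fi
  echo "$agent $path" >> claims.txt
  echo "CLAIMED: $agent -> $path"
}
claim agent-1 src/router.ts            # -> CLAIMED: agent-1 -> src/router.ts
claim agent-2 src/router.ts || true    # -> BLOCKED: src/router.ts already claimed by agent-1
```

The point isn't the five lines of shell; it's that the conflict surfaces before any edit happens, not at merge time.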
That sounds small on paper.
It feels very different in practice.
The part I didn’t expect
The interesting thing is that once you solve the coordination problem, the product stops feeling like “file locking for AI.”
It starts feeling more like operational trust for AI-driven software work.
Because teams don’t really just want faster code generation.
They want to be able to say:
- we can run many agents at once
- we know what they’re doing
- we know what changed
- we know what got invalidated
- we know what is safe to merge
- we know why something was blocked
- we can recover when workflows get messy
That’s a much bigger category.
And honestly, I think that’s where the real value is.
The pushback I hear most
A common response is:
“Why not just use worktrees?”
I think that’s a fair question.
Worktrees are part of the setup.
They’re useful.
They help with isolation.
But they don’t solve the coordination problem on their own.
They don’t decide who should work on what.
They don’t stop duplicate effort.
They don’t detect stale work.
They don’t manage the landing queue.
They don’t explain what should happen next.
They solve isolation.
They do not solve orchestration.
And once you have several agents moving at once, orchestration is the real problem.
What changed recently
The project has evolved a lot from the early version.
It now has much better support for:
- safe landing flows
- synthetic landing branches for multi-branch work
- stale-work recovery
- queue planning
- operator-friendly status
- PR / CI summaries
- policy-aware landing
- repair and self-healing flows
- clearer “why blocked?” explanations
One thing I cared about a lot was the first-run experience.
If a tool like this is going to matter, it can’t just be powerful.
It has to feel understandable in the first few minutes.
So a lot of recent work went into making:
`switchman setup`, `switchman demo`, and `switchman status`
feel much more obvious and less intimidating.
What I think happens next
I think the next wave of developer tools is going to split into two camps.
Camp 1: “better coding”
Tools that help a single developer or agent write code faster.
Camp 2: “better software change”
Tools that help teams coordinate, govern, and land software work safely when lots of parallel work is happening.
I’m much more interested in the second one.
Because as soon as multiple agents become normal, the bottleneck stops being code output.
The bottleneck becomes:
- coordination
- merge risk
- review routing
- stale work
- trust
That’s what I’m building for.
If you’re experimenting with multiple agents
My strong opinion is this:
Don’t stop at isolation.
Isolation is necessary, but it’s not enough.
If you want multiple agents to be genuinely useful in real repos, you need a way to coordinate the work and control the path back to merge.
Otherwise you’re just moving the mess to a later stage.
Try it
If you want to play with it:
```bash
npm install -g switchman-dev
switchman setup --agents 3
switchman demo
```