# The problem with multi-agent AI
Everyone's running AI agents. Most people are running one at a time and calling it "agentic."
We're running 21: coding agents, QA, design, docs, mobile, growth, infra, security, ops, attribution. Actually building a product together.
The hardest part wasn't making them capable. It was making them coordinate.
## What coordination actually needs
After 3 months, here's what we found every agent needs access to:
- **A task board** — what's assigned, what's blocked, what's in review, what's done. With WIP limits (agents hoard work if you don't cap them).
- **A review loop** — nothing ships without a second set of eyes. Even between AI agents.
- **Shared memory** — context about the project that persists across sessions.
- **Presence** — knowing who's working on what right now.
- **Structured messaging** — not a single fire-hose channel. Separate lanes: shipping, reviews, blockers, general.
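Concretely, the task-board piece boils down to one record per task. Here's a sketch of what that record might look like — only `status` and `assignee` are confirmed by the API calls later in this post; the other field names and values are hypothetical:

```json
{
  "id": "task-142",
  "title": "Fix login flow",
  "status": "doing",
  "assignee": "kindling",
  "blocked_by": [],
  "artifacts": []
}
```

The `status` field carries the board state (todo, doing, validating, done), so moving a card is just a field update.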
## What we built
We open-sourced the coordination layer: reflectt-node. It's a local REST server your agents share.
```bash
curl -fsSL https://www.reflectt.ai/install.sh | bash
```
Agents use it like this:
```bash
# Pull next task
curl 'http://localhost:4445/tasks/next?agent=kindling'

# Claim it
curl -X PATCH http://localhost:4445/tasks/<id> \
  -H 'Content-Type: application/json' \
  -d '{"status":"doing","assignee":"kindling"}'

# Post artifact / hand to reviewer
curl -X PATCH http://localhost:4445/tasks/<id> \
  -H 'Content-Type: application/json' \
  -d '{"status":"validating",...}'
```
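Strung together, those calls form a minimal worker loop. This is a sketch, not the canonical client: the `id` field in the `/tasks/next` response and the error-handling flags are my assumptions, not confirmed reflectt-node behavior.

```bash
#!/usr/bin/env bash
# Sketch of an agent worker loop against a local reflectt-node.
# Assumes /tasks/next returns JSON containing an "id" field.
BASE_URL="${BASE_URL:-http://localhost:4445}"
AGENT="${AGENT:-kindling}"

# Pull the next task and extract its id (naive JSON parse, fine for a sketch).
next_task() {
  curl -fsS "$BASE_URL/tasks/next?agent=$AGENT" \
    | sed -n 's/.*"id":"\([^"]*\)".*/\1/p'
}

# Claim a task: move it to "doing" under this agent's name.
claim_task() {
  curl -fsS -X PATCH "$BASE_URL/tasks/$1" \
    -H 'Content-Type: application/json' \
    -d "{\"status\":\"doing\",\"assignee\":\"$AGENT\"}"
}

# Hand a finished task to the review lane.
submit_for_review() {
  curl -fsS -X PATCH "$BASE_URL/tasks/$1" \
    -H 'Content-Type: application/json' \
    -d '{"status":"validating"}'
}
```

An agent cycles through `id=$(next_task)`, `claim_task "$id"`, does the work, then `submit_for_review "$id"` — and goes back for the next card.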
Full source + docs: https://github.com/reflectt/reflectt-node?utm_source=devto&utm_medium=article&utm_term=agentic-team-coordination&utm_campaign=community-seed
## What we learned
- **Coordination becomes the bottleneck faster than you think.** At 3 agents it's manageable. At 21, you need infrastructure.
- **WIP limits matter** — cap agents at 1-2 active tasks or they hoard.
- **The review protocol matters** — who reviews what, what "done" means, how artifacts get posted.
- **Agents can self-organize** if you give them the right tools. They don't need hand-holding on every task.
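The WIP-limit rule can be enforced client-side before an agent claims anything. A hypothetical sketch — the `/tasks?assignee=...&status=doing` list endpoint and its query parameters are assumed here, not taken from the reflectt-node docs:

```bash
#!/usr/bin/env bash
# Hypothetical WIP gate: refuse to claim new work past a cap.
# The /tasks?assignee=...&status=doing endpoint is an assumption.
BASE_URL="${BASE_URL:-http://localhost:4445}"
WIP_LIMIT="${WIP_LIMIT:-2}"

# Count tasks an agent has in "doing" (naive count of "id" keys in the JSON).
active_count() {
  curl -fsS "$BASE_URL/tasks?assignee=$1&status=doing" \
    | grep -o '"id"' | wc -l | tr -d ' '
}

# Succeeds only when the agent is under its WIP limit and may claim more work.
can_claim() {
  [ "$(active_count "$1")" -lt "$WIP_LIMIT" ]
}
```

Dropping `can_claim "$AGENT" || sleep 30` at the top of the worker loop is enough to stop the hoarding behavior described above.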
Happy to answer questions about the architecture.