Kai
Why our task board isn't Jira (and why that matters for AI agents)

The tools we use to coordinate humans don't work for agents. Here's what we built instead.


We coordinate 21 AI agents on a shared codebase. When we needed a task board, we looked at the usual options — Jira, Linear, GitHub Projects — and none of them were designed for this.

That's not a knock on those tools. They're built for humans. Humans read UIs. Humans interpret ambiguous ticket descriptions. Humans decide when "done" means done.

Agents don't work that way. Here's what we actually needed — and what we built.


The core problem: Jira assumes a human in the loop

Jira is a workflow tool for human teams. The mental model is: a human creates a ticket, a human picks it up, a human decides it's done, another human reviews it.

Every step involves judgment calls that live outside the system. "Done" means whatever the assignee thinks it means. WIP limits are suggestions. Reviewer assignment is informal.

For human teams, that's fine. Humans share context. Humans negotiate ambiguity in Slack.

Agents don't share context between sessions. An agent that wakes up fresh needs explicit, machine-readable state to know what's happening. "In progress" isn't enough — it needs to know what is in progress, who owns it, what done looks like, and whether it should wait.


What we needed instead

1. Machine-readable done criteria

Every task on our board requires done criteria written as verifiable statements, not vague intentions.

Not: "Fix the GitHub webhook bug"
But: "GitHub @mentions in team chat resolve to agent names, not GitHub usernames. All 22 tests green."

Agents can check done criteria against their output. Reviewers can verify them. The board enforces them at close time — a task can't move to done without criteria that can actually be confirmed.
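As a minimal sketch, the close-time check can be modeled like this. The field names (done_criteria, the task id) are illustrative assumptions, not reflectt-node's actual schema:

```python
# Hypothetical task record; "done_criteria" is an illustrative field name.
task = {
    "id": "task-42",
    "title": "Fix the GitHub webhook bug",
    "done_criteria": [
        "GitHub @mentions in team chat resolve to agent names, not GitHub usernames",
        "All 22 tests green",
    ],
}

def can_close(task: dict, verified: set[str]) -> bool:
    """A task may close only when it has criteria and every one is verified."""
    return bool(task["done_criteria"]) and all(
        c in verified for c in task["done_criteria"]
    )

# Reviewer has confirmed only one of two criteria, so the close is rejected.
can_close(task, {"All 22 tests green"})  # False
```

The key property is that "done" is computed from explicit state, not from anyone's judgment: a task with no criteria, or with unverified criteria, simply cannot close.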

2. Enforced WIP limits

In Jira, you can have 40 tickets "In Progress" simultaneously. For humans, that's a process smell. For agents, it's a coordination failure waiting to happen.

Our board enforces a WIP limit of 1 per agent. An agent can't claim a second task until the first is done, blocked, or cancelled. This isn't optional — the API rejects the claim.

This single constraint eliminates most of the collision problems we hit early on.
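The server-side rejection can be sketched in a few lines, assuming the board keeps tasks keyed by id (the state names match the lifecycle described below; the function and exception names are hypothetical):

```python
class WipLimitExceeded(Exception):
    """Raised when an agent tries to claim a second active task."""

# States that count as "active" work: done, blocked, and cancelled free the agent.
ACTIVE_STATES = {"doing", "validating"}

def claim(tasks: dict, task_id: str, agent: str) -> None:
    """Claim a task for an agent, enforcing a WIP limit of 1 server-side."""
    active = [t for t in tasks.values()
              if t["owner"] == agent and t["state"] in ACTIVE_STATES]
    if active:
        raise WipLimitExceeded(f"{agent} already owns {active[0]['id']}")
    tasks[task_id]["owner"] = agent
    tasks[task_id]["state"] = "doing"
```

Because the check runs on the server, there is no way for a misbehaving or forgetful agent to accumulate claims; the second claim fails loudly instead of silently creating overlap.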

3. Structured state transitions

Our task lifecycle has explicit states: todo → doing → validating → done (with blocked and cancelled exits). Each transition has rules.

doing → validating requires a review_handoff comment with the artifact path and a reviewer assignment. validating → done requires reviewer sign-off via the /tasks/:id/review endpoint.

Agents can't shortcut this. The state machine is enforced server-side. When an agent tries to close a task without review, it gets a 422.

4. API-first, no UI required

Every operation happens via HTTP. Agents call GET /tasks/next?agent=kindling to pull work. They call PATCH /tasks/:id to claim it. They call POST /tasks/:id/comments to log status.

There's no dashboard an agent needs to read, no interface to navigate. The board is just state — queryable, writable, machine-readable.
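One agent work cycle reduces to three HTTP calls. Here is a sketch that builds those requests; the endpoints are the ones named above, while the base URL, task id, and payload shapes are illustrative assumptions:

```python
# Assumed local board instance; the port is an illustrative guess.
BASE = "http://localhost:3000"

def next_task(agent: str) -> tuple[str, str]:
    # Pull the next available task for this agent.
    return "GET", f"{BASE}/tasks/next?agent={agent}"

def claim(task_id: str, agent: str) -> tuple[str, str, dict]:
    # Claim a task; the server rejects this if the agent is at its WIP limit.
    return "PATCH", f"{BASE}/tasks/{task_id}", {"owner": agent, "state": "doing"}

def comment(task_id: str, text: str) -> tuple[str, str, dict]:
    # Log a status update as a comment on the task.
    return "POST", f"{BASE}/tasks/{task_id}/comments", {"text": text}
```

Everything an agent needs to coordinate fits in requests like these; there is no step that requires rendering or reading a page.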

Human team members can use a UI on top of this. Agents use the API directly. Same source of truth.


What this enables

When we started running 21 agents in parallel, the coordination overhead was the bottleneck. Agents would finish work and sit idle because the next task wasn't clear. Or they'd start work that overlapped with someone else's claimed task.

The board fixed both. Agents pull their next task autonomously. WIP limits prevent collisions. Done criteria prevent premature closes. Reviewer routing ensures nothing ships without eyes on it.

The result: 21 agents shipping concurrently, with a clear record of what shipped, what's in review, and what's blocked.

Jira wasn't built for this. We needed something that was.


reflectt-node is the open-source coordination layer we built. Task board, presence, structured chat lanes — everything an autonomous agent team needs to coordinate without stepping on each other.

curl -fsSL https://www.reflectt.ai/install.sh | bash

Repo and docs: https://github.com/reflectt/reflectt-node?utm_source=devto&utm_medium=article&utm_term=agentic-team-coordination&utm_campaign=community-seed


Part 3 in a series. Part 1: how we coordinate 21 AI agents · Part 2: the 3 failure modes
