How a markdown-backed kanban board fills the awkward gap between agent orchestrators and the humans who still have to approve, review, and decide.
TL;DR
- AI agents can already do useful work end to end. The messy part starts when a human needs to review, approve, or redirect that work.
- Most teams handle that moment with a patchwork of orchestration UIs, Slack messages, GitHub issues, and half-documented decisions.
- kanban-lite offers a simpler pattern: keep shared workflow state in a board that both humans and agents can use — backed by plain markdown in your repo.
- It is not a workflow engine. It is a lightweight human control surface you can put on top of the agent stack you already have.
The Part of Agent Workflows That Still Feels Awkward
Teams are getting better at making AI agents do useful work.
One agent researches. Another drafts. A third evaluates the output and decides whether to retry, escalate, or move on. If you’re building with LangGraph, n8n, Temporal, Claude Code, Codex, or a homegrown orchestration layer, that part is getting easier fast.
The awkward part starts when the work needs a person.
Maybe the draft needs designer review. Maybe a PM has to approve the next step. Maybe a compliance lead needs to answer a checklist before anything can ship.
That is where a lot of agent workflows fall apart — not because the agents fail, but because the human side of the workflow lives in the wrong places.
The agent run is in one UI. The approval request lands in Slack. Comments are buried in email. The final decision gets copied into a ticket. A week later, nobody can easily answer three simple questions:
- What happened already?
- What is waiting on a human?
- Where does the work go next?
Most orchestration tools are good at tracking what machines are doing. They are usually much worse at giving humans a shared place to review, decide, comment, and hand work back.
That missing layer is what I mean by a human control surface.
Why Existing Tools Usually Fall Short
You can absolutely force this workflow into tools you already use. Many teams do.
But each option has an obvious tradeoff once agents and humans start collaborating in the same flow.
Orchestrator UIs
Great for execution graphs, retries, traces, and internal machine state.
Usually not great for non-technical reviewers who just want to say: approve this, send it back, ask legal, or leave feedback next to the work itself.
Slack and email
Fast and familiar.
Also terrible as durable workflow state. A decision made in chat is easy to miss, hard to audit, and annoying for agents to work with reliably.
GitHub Issues or Jira
Useful for human task tracking, but often awkward as the main working surface for agent handoffs.
Yes, agents can already connect to GitHub and sometimes Jira. But that does not make those tools the ideal place to hold the live operational state of a human-in-the-loop (HITL) workflow. They add friction, split context away from the work, and push humans and agents into tools that were not designed to share the same surface cleanly.
The interesting alternative is simpler:
What if the workflow state lived next to the work itself, in a format both humans and agents could read and update directly?
What kanban-lite Actually Is
kanban-lite is not an agent framework, and that is part of the appeal.
It does not replace LangGraph, CrewAI, n8n, Temporal, or whatever runs your agents.
What it gives you is a lightweight, shared control surface for the moments when humans need to step in.
Despite the name, it is not just a board. It is a practical layer for:
- task state
- approvals
- comments
- attachments
- logs
- structured forms
- and explicit human actions.
By default, cards are stored as markdown files with YAML frontmatter under .kanban/boards/<boardId>/<status>/ in your repo.
So instead of a workflow instance being trapped inside a hidden backend, it looks like this:
```markdown
---
id: blog-post-q3-launch-2026-03-26
status: review
priority: high
assignee: maya
dueDate: 2026-03-30
labels: ["marketing", "agent-generated"]
actions:
  retry-draft: "Retry with softer tone"
  approve-publish: "Approve & Publish"
---
# Q3 launch blog post

Initial draft attached.
Needs human tone review before publishing.
```
That card is doing a lot of work at once:
- Humans can read it without opening a proprietary dashboard.
- Agents can update it without scraping UI state.
- The full workflow history can live in Git.
- The same object can be accessed through the web UI, VS Code, CLI, REST API, SDK, MCP tools, or n8n.
That is the core idea: shared workflow state, in a format that is human-readable and machine-actionable at the same time.
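To make "machine-actionable" concrete, here is a minimal Python sketch of reading such a card with only the standard library. The parser is illustrative, not kanban-lite's SDK — it handles just the flat key: value subset of the frontmatter above, and a real integration would use a proper YAML parser.

```python
# Illustrative parser for a card file (NOT kanban-lite's actual SDK).
# Handles only flat "key: value" frontmatter lines; a nested map like
# "actions" above would need a real YAML parser.
def parse_card(text: str) -> tuple[dict, str]:
    """Split a card file into (frontmatter dict, markdown body)."""
    lines = text.splitlines()
    if lines[0].strip() != "---":
        raise ValueError("card must start with a frontmatter block")
    end = lines[1:].index("---") + 1          # index of the closing '---'
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:]).strip()
    return meta, body
```

An agent, a CI script, or a grep-happy human can all read the status field the same way a reviewer reads the board column — one source of truth, two audiences.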
A More Realistic Example Than “Agent Calls Tool”
Imagine a small product team preparing a launch announcement.
A developer asks an agent to draft the post from product notes, changelog items, and customer context.
The agent produces a draft and creates a card in draft.
Then the workflow stops being purely technical.
- The designer needs to review the screenshots.
- The PM needs to confirm the positioning.
- Marketing wants a softer tone.
- Legal needs one required disclaimer.
This is where many teams reach for four separate systems:
- the repo for source material
- Slack for pings
- an issue tracker for approvals
- and the orchestration UI for “what the agent did.”
That works, but it creates a lot of unnecessary surface area.
With kanban-lite, the agent can move the card to design-review, attach the draft, append logs, and expose a clear next action. The designer can comment directly on the card. The PM can click approve-publish. Legal can fill a structured form. The orchestrator can resume when the human signal arrives.
Everybody is looking at the same object.
That is the difference between “human-in-the-loop” (HITL) as a buzzword and a workflow that people can actually operate day to day.
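At the file level, "the agent moves the card to design-review" can be as simple as relocating a markdown file between status directories, given the .kanban/boards/&lt;boardId&gt;/&lt;status&gt;/ layout described above. The helper below is a sketch with illustrative paths and semantics, not kanban-lite's actual tooling:

```python
from pathlib import Path

# Hypothetical helper showing what a status change could look like on disk.
# In practice you would go through one of kanban-lite's interfaces; the
# point is that the operation is just files, so any agent that can edit a
# repo can participate.
def move_card(root: Path, board: str, card: str, src: str, dst: str) -> Path:
    src_path = root / ".kanban" / "boards" / board / src / card
    dst_dir = root / ".kanban" / "boards" / board / dst
    dst_dir.mkdir(parents=True, exist_ok=True)
    text = src_path.read_text()
    # Keep the frontmatter status field in sync with the directory name.
    text = text.replace(f"status: {src}", f"status: {dst}", 1)
    dst_path = dst_dir / card
    dst_path.write_text(text)
    src_path.unlink()
    return dst_path
```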
Why Repo-Native State Matters
This is the part that feels especially strong for agent systems.
Agents are already comfortable with repos, files, markdown, YAML, JSON, and structured edits. Humans are comfortable with boards, comments, assignees, labels, and approvals.
A markdown-backed workflow surface meets both sides in the middle.
Compared with a repo + issue tracker + chat stack, that gives you a few nice properties:
- Less context splitting — the work and the workflow state stay closer together.
- Lower agent friction — files are usually easier for agents to inspect and modify reliably than proprietary issue-tracker workflows.
- Auditability — Git history gives you a useful paper trail by default.
- Portability — you are not locked into one hosted product just to preserve task state.

This is not a claim that GitHub Issues or Jira are bad. They are good at what they are for. It is a claim that once agents are participating directly in the workflow, a repo-native control surface can be a simpler fit.
Two Kinds of Human Signals Matter
One of the more practical ideas in kanban-lite is that it separates two kinds of human interaction that often get blurred together.
- Mutation webhooks: something changed
If a card moves, a comment gets added, an attachment appears, or a form is submitted, kanban-lite emits a state-change event such as task.moved, comment.created, or form.submit.
If a reviewer drags a card from in-progress to review, your orchestrator can react to that change automatically.
- Action webhooks: a human explicitly decided something
Actions are stronger signals.
If the card defines actions like retry-draft or approve-publish, clicking one triggers a board.action.trigger or card.action.trigger webhook with the action name and full card context.
That distinction matters.
- Mutation webhook means: “something about the card changed.”
- Action webhook means: “a human intentionally chose this next step.”
For real workflows, those are not the same thing. Treating them differently makes agent handoffs much cleaner.
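On the orchestrator side, that distinction can be a one-line dispatch. Only the event names (task.moved, comment.created, card.action.trigger) appear above; the payload fields in this sketch are assumptions, not kanban-lite's documented schema:

```python
# Sketch of an orchestrator-side webhook receiver. The "action" field and
# the overall payload shape are assumed for illustration.
def handle_webhook(payload: dict) -> str:
    event = payload.get("event", "")
    if event.endswith(".action.trigger"):
        # Action webhook: a human intentionally chose a next step,
        # so it is safe to resume the paused workflow.
        return f"resume:{payload['action']}"
    # Mutation webhook: something about the card changed. Observe it and
    # update internal state, but do not treat it as an approval.
    return f"observe:{event}"
```

Routing only `.action.trigger` events to the "resume" path is what keeps an accidental drag-and-drop from shipping anything.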
Forms Are a Bigger Deal Than They Sound
A surprising number of human review steps are not really “leave a comment” moments.
They are structured-decision moments.
Think about:
- release sign-offs
- QA checklists
- incident intake
- vendor review
- legal approval
- compliance acknowledgment.
Those steps need required fields, allowed values, and typed data — not just free text.
kanban-lite supports schema-driven forms backed by JSON Schema. Submitted values live on the card, are validated before save, and can emit a form.submit event.
That makes the control surface not just readable, but validatable.
And once you have that, you are no longer just moving cards around. You have a lightweight system for collecting trustworthy human input in the middle of automated workflows.
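As a sketch of what "validated before save" buys you, here is a minimal check of a submission against a JSON-Schema-style schema. Only required fields and enums are enforced here, and the release sign-off schema itself is invented for illustration — a real setup would use a full JSON Schema validator:

```python
# Hypothetical release sign-off schema, written in JSON Schema style.
SIGNOFF_SCHEMA = {
    "required": ["approver", "decision"],
    "properties": {
        "approver": {"type": "string"},
        "decision": {"enum": ["approved", "rejected", "needs-changes"]},
    },
}

def validate_form(schema: dict, values: dict) -> list[str]:
    """Return validation errors; an empty list means the form may be saved."""
    errors = []
    for field in schema.get("required", []):
        if field not in values:
            errors.append(f"missing required field: {field}")
    for field, rule in schema.get("properties", {}).items():
        if field in values and "enum" in rule and values[field] not in rule["enum"]:
            errors.append(f"{field} must be one of {rule['enum']}")
    return errors
```

Typed, validated answers are what let an orchestrator branch on a legal sign-off instead of parsing a free-text comment.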
Not Just a Board, But Also Not Too Much Platform
What makes kanban-lite interesting is that it sits in a useful middle ground.
It is more capable than a simple kanban board because it supports:
- named actions
- comments and attachments
- structured forms
- webhooks
- plugins
- multiple interfaces
- and storage backends beyond plain markdown when you need them.
But it also avoids trying to become your entire orchestration stack.
That restraint is a feature.
You can start with markdown files in Git. If the workflow grows up, you can move to SQLite or MySQL, store attachments in S3, add auth and RBAC, or integrate with n8n. The tool scales outward without forcing a full rewrite of how your humans and agents coordinate.
The Honest Caveats
There are a few caveats worth stating directly.
- It is not a workflow engine. You still need an orchestrator.
- Webhook delivery is fire-and-forget. If you need durable retries and queues, put something like n8n, Make, or Zapier on the receiving side.
- The plugin ecosystem is still young.
- If you need enterprise-grade audit logs, approval chains, or compliance controls out of the box, you will need to build additional layers.
That does not weaken the idea. It makes the positioning cleaner.
The Bigger Point
A lot of discussion about AI agents focuses on planning, routing, tool use, memory, and model choice.
All of that matters.
But once the workflow touches real teams, the harder question is often much less glamorous:
Where does shared operational state live once humans enter the loop?
Where does the PM look? Where does the designer comment? Where does an agent write back when it finishes a step? Where does an auditor reconstruct who approved what?
kanban-lite has a refreshingly practical answer: put that shared state on a board that both humans and agents can use, and keep it in a format that does not fight either side.
Not because kanban is fashionable. Not because everything should live in markdown.
But because one of the most useful things you can do in an agent system is reduce the number of places where workflow truth can hide.
If your team is already wrestling with human-in-the-loop behavior in production, that is the part worth paying attention to.
I’ve been building agent-to-human integration patterns for years — an AI-powered incident response platform at Fidelity, and now through IncidentMind, where I design custom agent systems and MCP servers for operations teams. kanban-lite is the open-source layer that keeps surfacing as the missing piece.
If your team is building HITL into production agent systems — whether you’re hiring or looking for a design partner — I’m reachable on LinkedIn.
Viktor Burdyey — IncidentMind. Previously CTO at EAT24 (acquired by Yelp, $134M) and Senior Platform Engineer at Fidelity.
🔗 kanban-lite on GitHub · npm · Documentation · MIT License


