The $42 Kanban Board
This week I built a kanban board. Nothing special — Express server, React frontend, WebSocket for real-time updates. The twist? It's operated by an AI agent, and every completed task shows exactly what it cost in API tokens.
26 tasks. $42.24 total. And the distribution is fascinating.
The Setup
ClawKanban is a task board that lives inside an AI-powered workflow. An autonomous agent picks up tasks, works them, and moves them to done. Standard kanban stuff — except the agent is the developer.
The board itself was built this way. The agent wrote the server. The agent wrote the React UI. The agent debugged its own WebSocket issues. And now, every task card shows a purple cost badge telling you exactly what it cost to complete.
💲 $0.10 — Add a connection LED indicator
💰 $1.82 — Implement task archiving with date ranges
💰💰 $3.57 — Build human-readable task identifiers
💰💰💰 $11.46 — Add cost-per-task tracking (yes, this feature tracked itself)
Why Cost-Per-Task Matters
If you're running AI agents on real projects, you're burning API tokens constantly. Most people track this at the account level — you log into your provider dashboard, see a daily total, and shrug. That's like tracking engineering costs by looking at your total payroll without knowing who worked on what.
Per-task costing changes how you think about AI work.
Insight 1: Complexity Isn't Linear
Our cheapest task was $0.10 (adding a status LED to the UI). Our most expensive was $11.46 (building cost tracking itself — deliciously meta). But the relationship between "how hard does this sound" and "what it actually costs" is surprisingly loose.
Adding WebSocket support? $0.29. Debugging why the UI collapsed when clicking a comment box? $1.48. The investigation cost more than the infrastructure.
Insight 2: The Investigation Tax
Tasks involving debugging or reverse-engineering are disproportionately expensive. The real-time UI fix ($3.27) wasn't complex — add fs.watch, broadcast changes. But the agent had to read code, form hypotheses, test them, and iterate. That thinking burns tokens.
Compare that with "give the UI some personality" ($1.29) — a creative task with a clear output. The agent just... did it. Dark theme, rounded cards, purple accents, done.
Takeaway: If you want cheap AI work, give it clear specs. If you give it mysteries, expect to pay for the detective work.
Insight 3: Sub-Agents Are Efficient
We use spawned sub-agents for focused tasks — they get a clear brief, do the work, report back. These consistently cost less than equivalent work done in the main conversational session, because they carry less context baggage.
The task identifier feature ($3.57 via sub-agent) involved creating a word dictionary, updating the server, modifying the UI, and backfilling data. That's a lot of work for under $4.
Insight 4: The Median Tells the Real Story
- Average cost: $1.62
- Median cost: $0.90
- Min: $0.10
- Max: $11.46
Most tasks are under a dollar. A few investigation-heavy ones pull the average up. If you're budgeting for AI agent work, plan for $1-2 per atomic task, with occasional spikes for complex debugging.
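Those summary numbers are easy to recompute from the per-task `cost.usd` values. A minimal sketch (the sample array is illustrative, not the real 26 values):

```javascript
// Summarize per-task costs: mean, median, min, max.
// The sample array below is illustrative data, not the actual task costs.
function summarize(costs) {
  const sorted = [...costs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  const mean = costs.reduce((sum, c) => sum + c, 0) / costs.length;
  return { mean, median, min: sorted[0], max: sorted[sorted.length - 1] };
}

console.log(summarize([0.10, 0.29, 0.90, 1.48, 3.57, 11.46]));
```

With a skewed distribution like this one, mean and median diverge sharply, which is exactly the point of Insight 4.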
How It Works
The implementation is surprisingly simple:
1. Session transcripts are the source of truth. Every API call gets logged with timestamps and token counts (input, output, cache reads/writes).
2. Tasks have time windows. When work starts, the agent adds a comment with a timestamp. When it's done, another timestamp. These bookend the work.
3. On completion, match and sum. When a task moves to "done", the server scans transcripts for API calls within that time window, multiplies tokens by model-specific rates, and writes the cost to the task JSON.
```javascript
// Auto-calculate cost when moving to done
if (existing.state === 'done' && !existing.cost) {
  existing.cost = calculateTaskCost(existing);
}
```
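A sketch of what `calculateTaskCost` could look like, following the description above. The transcript shape, the per-model rates, and the explicit `transcript` argument are all assumptions for illustration, not the real implementation:

```javascript
// Sketch of the cost calculation: sum the API calls that fall inside the
// task's work window, priced per model. Rates and the transcript record
// shape are illustrative assumptions.
const RATES_PER_MTOK = {
  'model-a': { input: 3.0, output: 15.0 }, // USD per million tokens
};

function calculateTaskCost(task, transcript) {
  const { startedAt, doneAt } = task; // timestamps bookending the work
  const calls = transcript.filter(
    (c) => c.timestamp >= startedAt && c.timestamp <= doneAt
  );
  let usd = 0, inputTokens = 0, outputTokens = 0;
  for (const c of calls) {
    const rate = RATES_PER_MTOK[c.model];
    usd += (c.inputTokens * rate.input + c.outputTokens * rate.output) / 1e6;
    inputTokens += c.inputTokens;
    outputTokens += c.outputTokens;
  }
  return {
    usd: Number(usd.toFixed(2)),
    inputTokens,
    outputTokens,
    messages: calls.length,
  };
}
```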
4. Visual classification. Costs are bucketed by standard deviation — below mean gets a subtle 💲, above +2σ gets a glowing red 💰💰💰. You can spot expensive tasks at a glance.
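The bucketing step can be sketched in a few lines. Thresholds follow the scheme described above; the intermediate cut points and badge strings are illustrative:

```javascript
// Bucket task costs by distance from the mean, in standard deviations,
// so expensive outliers stand out. Cut points between "below mean" and
// "+2σ" are illustrative assumptions.
function classifyCosts(costs) {
  const mean = costs.reduce((s, c) => s + c, 0) / costs.length;
  const variance = costs.reduce((s, c) => s + (c - mean) ** 2, 0) / costs.length;
  const sigma = Math.sqrt(variance);
  return costs.map((c) => {
    if (c < mean) return '💲';            // below average: subtle badge
    if (c < mean + sigma) return '💰';     // around average
    if (c < mean + 2 * sigma) return '💰💰';
    return '💰💰💰';                       // +2σ and beyond: glowing red
  });
}
```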
The cost data lives right on the task JSON:
```json
{
  "id": "ab70fe9e795b",
  "identifier": "AmberHeronHavana",
  "title": "add kanban task identifiers",
  "state": "done",
  "cost": {
    "usd": 3.57,
    "inputTokens": 510642,
    "outputTokens": 4670,
    "messages": 30
  }
}
```
The Archive View
Completed tasks are grouped by day in an archive column with daily cost totals. It's like a timesheet, except instead of hours it's dollars. You can scan back through the week and see:
- Monday: 8 tasks, $12.40
- Tuesday: 12 tasks, $18.60
It turns abstract API spending into something tangible and actionable.
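Grouping for the archive view is a simple reduction over completed tasks. A sketch, assuming each task carries a `completedAt` ISO timestamp alongside the `cost` object shown earlier (the field name is an assumption):

```javascript
// Group completed tasks by calendar day and total their cost, as in the
// archive column. `completedAt` is an assumed ISO-8601 timestamp field.
function archiveByDay(tasks) {
  const days = {};
  for (const task of tasks) {
    const day = task.completedAt.slice(0, 10); // "YYYY-MM-DD"
    days[day] ??= { count: 0, usd: 0 };
    days[day].count += 1;
    days[day].usd += task.cost?.usd ?? 0;
  }
  return days;
}
```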
What I'd Do Differently
1. Separate investigation from implementation. If I could split "figure out what's wrong" and "fix it" into separate tasks, the cost attribution would be cleaner and the investigation tax more visible.
2. Track cost during execution, not just after. A live token counter on in-progress tasks would let you catch runaway investigations early.
3. Add cost budgets. Flag a task if it exceeds $X. Some tasks aren't worth more than a dollar — knowing that upfront would help the agent prioritize approaches.
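The budget idea could be as small as a predicate checked whenever a task's cost updates; the `budgetUsd` field is hypothetical:

```javascript
// Flag a task whose accumulated cost has passed its (hypothetical)
// per-task budget. Returns false when no budget is set.
function overBudget(task) {
  return task.budgetUsd != null && (task.cost?.usd ?? 0) > task.budgetUsd;
}
```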
The Bottom Line
26 features built and shipped for $42. That's an average of $1.62 per feature. Some were trivial ($0.10 to add an LED indicator), some were substantial ($3.57 to build a full identifier system with 216,000 unique combinations).
But the real value isn't the total — it's the visibility. When every task has a price tag, you start making different decisions:
- "Is this bug worth investigating, or should we just work around it?"
- "Should the agent research this, or should I just tell it the answer?"
- "Is this $0.50 task or a $5 task? Let's scope it accordingly."
Cost transparency turns AI agent work from a black box into a manageable budget. And at $42 for a full-featured kanban board with real-time updates, task archiving, cost tracking, and a dark theme? That's a pretty good deal.
The full source is on GitHub: dooougs/clawkanban. ClawKanban runs on OpenClaw, an open-source AI agent framework. The board, the features, and this cost analysis were all produced by the agent itself.
Total cost of writing this blog post: ask the kanban board. 🐾
