I run 3–4 Claude Code agents across multiple projects every day. One researches markets, another writes code, a third drafts marketing copy. They're great at executing — but managing them was pure chaos.
Who's working on what? Did the researcher finish that competitive analysis? Is the developer blocked waiting for my decision? What should I assign next?
I was manually tracking everything in my head, losing context between sessions, and constantly re-explaining priorities. So I built Mission Control — an open-source task management system designed specifically for humans who delegate work to AI agents. And then I gave it an autonomous daemon.
GitHub: MeisnerDan/mission-control
The Architecture: JSON Files as IPC
The core design decision was local-first, no database. All data lives in plain JSON files:
```
data/
  tasks.json          # Tasks with Eisenhower + Kanban + agent assignment
  agents.json         # Agent registry (profiles, instructions, capabilities)
  inbox.json          # Agent <-> human messages and reports
  decisions.json      # Pending decisions requiring human judgment
  activity-log.json   # Timestamped event log
  ai-context.md       # Generated ~650-token workspace snapshot
```
Why JSON instead of PostgreSQL or SQLite? Because the files serve double duty. The Next.js web UI reads them through API routes. AI agents read them directly from the filesystem. Same source of truth, no sync layer needed.
This means any agent that can read files — Claude Code, Cursor, Windsurf, or a bash script — can participate in the system without an SDK or API client.
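Because the files are the interface, "participating" is just a file read. Here's a minimal sketch of what that looks like, assuming a `{ tasks: [...] }` shape with `kanban` and `assignedTo` fields — check the repo's `tasks.json` for the real schema:

```typescript
// Sketch: an agent reading workspace state straight from disk — no SDK, no API client.
// Field names are illustrative; see tasks.json in the repo for the actual schema.
import { readFileSync } from "node:fs";

interface Task {
  id: string;
  title: string;
  kanban: string;       // e.g. "not-started" | "in-progress" | "done"
  assignedTo?: string;  // agent id
}

export function openTasksFor(path: string, agent: string): Task[] {
  const tasks: Task[] = JSON.parse(readFileSync(path, "utf8")).tasks ?? [];
  // Everything assigned to this agent that isn't finished yet.
  return tasks.filter(t => t.assignedTo === agent && t.kanban !== "done");
}
```

Any runtime that can parse JSON gets the same view the web UI renders.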
From Task Board to Execution Engine
Mission Control doesn't just track tasks — it runs them. There are three layers of execution:
One-Click Execution
Press the 🚀 Launch button on any task card. It spawns a Claude Code session with the agent's persona and task context, then handles everything automatically: marks the task done, posts a completion report to your inbox, and logs the activity. Live status indicators show you what's running.
Continuous Missions
Click the rocket on a project and it runs all tasks until done. As each task completes, the next batch auto-dispatches, respecting dependency chains and concurrency limits. A real-time progress bar shows overall completion with a stop button if you need to intervene.
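The "respecting dependency chains" part reduces to a simple selection rule: a task is dispatchable when every task it depends on is done. A sketch of that batch selection, with illustrative field names (`dependsOn` is an assumption, not necessarily the repo's field):

```typescript
// Sketch: pick the next dispatchable batch for a Continuous Mission.
// A task is eligible when it hasn't started and all its dependencies are done.
interface Task {
  id: string;
  kanban: string;
  dependsOn?: string[]; // illustrative field name
}

export function nextBatch(tasks: Task[], limit: number): Task[] {
  const done = new Set(tasks.filter(t => t.kanban === "done").map(t => t.id));
  return tasks
    .filter(t =>
      t.kanban === "not-started" &&
      (t.dependsOn ?? []).every(dep => done.has(dep))
    )
    .slice(0, limit); // respect the concurrency cap
}
```

As each batch completes, its tasks move to `done`, which unlocks the next wave on the following poll.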
Loop Detection
This one solved a real pain point. Agents sometimes get stuck — retrying the same failing approach over and over. Mission Control auto-detects failure loops after 3 attempts and escalates to you with options: retry with a different approach, skip the task, or stop the mission. No more agents burning tokens on a dead end.
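The escalation logic itself is small — a per-task failure counter checked against a threshold. A minimal sketch (names are illustrative, not the actual Mission Control implementation):

```typescript
// Sketch of loop detection: count consecutive failures per task and
// escalate to the human once the threshold is hit.
const MAX_FAILURES = 3;

type Verdict = "retry" | "escalate";

export function onTaskFailure(
  failureCounts: Map<string, number>,
  taskId: string
): Verdict {
  const count = (failureCounts.get(taskId) ?? 0) + 1;
  failureCounts.set(taskId, count);
  // After the 3rd failure, stop burning tokens and ask the human what to do.
  return count >= MAX_FAILURES ? "escalate" : "retry";
}
```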
The Daemon: Autonomous Agent Orchestration
The daemon takes all of the above and runs it 24/7 without you. It's a background Node.js process that polls for new tasks and spawns Claude Code sessions automatically.
How It Works
1. Poll tasks.json for tasks with kanban: "not-started"
2. Group by assignedTo agent
3. Spawn claude -p sessions with agent persona + task context
4. Monitor execution: capture stdout, enforce timeouts
5. On completion: mark task done, post report to inbox, log activity
6. Loop detection: escalate stuck tasks after 3 failures
7. Repeat on configurable interval
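Steps 1–3 can be sketched in a few lines. This is a condensed illustration, not the daemon's actual code — the real version layers on timeouts, output capture, retries, and reporting:

```typescript
// Condensed sketch of one polling cycle: filter, group by agent, spawn.
import { spawn } from "node:child_process";

interface Task {
  id: string;
  kanban: string;
  assignedTo?: string;
}

// Steps 1–2: collect unstarted tasks, grouped by assigned agent.
export function groupPending(tasks: Task[]): Map<string, Task[]> {
  const byAgent = new Map<string, Task[]>();
  for (const t of tasks) {
    if (t.kanban !== "not-started" || !t.assignedTo) continue;
    const bucket = byAgent.get(t.assignedTo) ?? [];
    bucket.push(t);
    byAgent.set(t.assignedTo, bucket);
  }
  return byAgent;
}

// Step 3: one `claude -p` session per agent (simplified; no concurrency cap here).
function dispatch(byAgent: Map<string, Task[]>) {
  for (const [agent, tasks] of byAgent) {
    const prompt = `You are ${agent}. Work on: ${tasks.map(t => t.id).join(", ")}`;
    spawn("claude", ["-p", prompt], { stdio: "pipe" });
  }
}
```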
Configuration
```json
{
  "polling": { "enabled": true, "intervalMinutes": 5 },
  "concurrency": { "maxParallelAgents": 3 },
  "schedule": {
    "dailyPlan": { "enabled": true, "cron": "0 7 * * *", "command": "daily-plan" },
    "standup": { "enabled": true, "cron": "0 9 * * 1-5", "command": "standup" },
    "weeklyReview": { "enabled": true, "cron": "0 17 * * 5", "command": "weekly-review" }
  },
  "execution": {
    "maxTurns": 25,
    "timeoutMinutes": 30,
    "retries": 1,
    "retryDelayMinutes": 5
  }
}
```
The daemon enforces maxParallelAgents so you don't accidentally spawn 10 Claude sessions and burn through your usage. Failed tasks get retried with a configurable delay. Scheduled commands (daily planning, standups, weekly reviews) run on cron.
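A `maxParallelAgents` cap like this is naturally expressed as a counting semaphore: the daemon takes a permit before spawning and returns it when the session ends. A dependency-free sketch of the idea (illustrative, not the daemon's actual code):

```typescript
// Minimal counting semaphore: at most `permits` holders at once;
// extra acquirers wait in a FIFO queue.
class Semaphore {
  private waiters: (() => void)[] = [];
  constructor(private permits: number) {}

  async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--;
      return;
    }
    // No permit free: park until a holder releases.
    await new Promise<void>(resolve => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next();      // hand the permit directly to a waiter
    else this.permits++;   // nobody waiting: return it to the pool
  }
}

export { Semaphore };
```

Wrapping each spawn in `acquire()`/`release()` guarantees you never exceed the configured cap, no matter how many tasks are pending.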
Security Hardening
When you're spawning AI processes automatically, security matters. The daemon implements:
- **Binary whitelisting** — Only `claude`, `claude.cmd`, or `claude.exe` can be spawned. No arbitrary command execution.
- **Credential scrubbing** — All stdout/stderr is sanitized before logging. API keys, tokens, and secrets are redacted.
- **Prompt fencing** — Task data is wrapped in `<task-context>` delimiters so the agent knows what's user content vs. system instructions.
- **Safe environment** — Child processes only inherit `PATH`, `HOME`, and `TEMP`. No API keys, no credentials.
- **No network listener** — The daemon is a pure local process. Zero network attack surface.
Solving Concurrent Writes with Mutex
When multiple agents write to the same JSON file simultaneously, you get data corruption. Regular file locks (`flock`) don't work well with Node.js async I/O.
The solution: per-file async mutexes.
```typescript
import { Mutex } from "async-mutex";

const fileMutexes = {
  tasks: new Mutex(),
  inbox: new Mutex(),
  decisions: new Mutex(),
  // ... one per file
};

// getTasks/saveTasks are the data layer's JSON read/write helpers.
export async function mutateTasks(fn: (data: TasksFile) => Promise<void>) {
  const release = await fileMutexes.tasks.acquire();
  try {
    const data = await getTasks(); // read current state
    await fn(data);                // apply the caller's mutation
    await saveTasks(data);         // write back atomically under the lock
  } finally {
    release();                     // always release, even if fn throws
  }
}
```
Every API write endpoint goes through this pattern, so two simultaneous writes to tasks.json queue safely instead of corrupting data. Direct file reads skip the mutex entirely — only writes need serializing.
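If you want to see what `async-mutex` is doing under the hood, the same serialization can be built from a promise chain that each writer appends to. A dependency-free sketch of the mechanism — not the library's code:

```typescript
// Minimal mutex via promise chaining: each exclusive section runs only
// after the previous one has settled.
class ChainMutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Advance the chain, swallowing rejections so one failure
    // doesn't deadlock every later writer.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}

export { ChainMutex };
```

Two writers that land simultaneously end up running one after the other, which is exactly the guarantee the read-mutate-write pattern above needs.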
Token Optimization: 50 Tokens vs. 5,400
AI agents pay for every token they read. A naive approach dumps the entire tasks.json into context — that's ~5,400 tokens for a workspace with 50 tasks.
Mission Control's API returns only what agents need:
```
# Get only your in-progress tasks (~50 tokens)
GET /api/tasks?assignedTo=developer&kanban=in-progress

# Sparse fields — return only what you need
GET /api/tasks?fields=id,title,kanban

# Get just the DO quadrant
GET /api/tasks?quadrant=do
```
There's also a compressed context snapshot (ai-context.md) that summarizes the entire workspace state in ~650 tokens. Agents read this first for situational awareness, then query the API for specifics.
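The sparse-fields filter above is a small projection on the server side: drop every key the caller didn't ask for before serializing. An illustrative sketch of that handler logic (names are assumptions, not the repo's actual code):

```typescript
// Sketch of ?fields= handling: project each task down to the requested
// keys so the response carries only the tokens the agent pays for.
interface Task {
  [key: string]: unknown;
}

export function pickFields(tasks: Task[], fields: string[]): Partial<Task>[] {
  return tasks.map(task =>
    Object.fromEntries(
      fields.filter(f => f in task).map(f => [f, task[f]])
    )
  );
}
```

Applied to a 50-task workspace, stripping long fields like notes and descriptions is where most of the token savings come from.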
Testing: 193 Tests Across 5 Suites
For an open-source project, test coverage is a trust signal. Mission Control has 193 automated tests using Vitest:
| Suite | Tests | What It Covers |
|---|---|---|
| Validation | 90 | All 17 Zod schemas — field defaults, constraints, edge cases |
| Daemon | 42 | Security (credential scrubbing, path validation, binary whitelist), config, prompt builder |
| Data Layer | 19 | File I/O, mutex safety, archive operations |
| Agent Flow | 17 | End-to-end: task creation → delegation → inbox → decisions → activity log |
| Security | 25 | API auth, rate limiting, token/origin validation, CSRF protection |
The full suite runs in CI on every push and PR.
What I Learned
JSON files work better than expected. For a single-user, local-first app, the simplicity of JSON + mutex beats the complexity of a database. No migrations, no ORM, no connection pooling. The tradeoff is obvious — it won't scale to 10,000 tasks — but that's a future problem solved by a cloud tier.
Agents need structure, not freedom. Giving an AI agent a blank canvas and saying "do stuff" produces inconsistent results. Giving it a role, specific instructions, a skills library, and a defined reporting protocol produces reliable work.
The daemon changed everything. Before the daemon, I was manually running agents. After the daemon, I wake up to completed tasks and progress reports in my inbox. It's the difference between a tool and an autonomous system.
Loop detection was a necessity, not a luxury. Early on, I had agents stuck in failure loops — retrying the same broken approach, burning tokens, producing nothing. Adding automatic escalation after 3 failures was one of those features that immediately proved its value. Agents need guardrails.
Continuous Missions are the endgame. One-click execution on individual tasks is useful. But running an entire project — with tasks auto-dispatching as dependencies clear, loop detection catching stuck agents, and a progress bar showing overall completion — that's when it starts feeling like you actually have a team.
The pieces work together. The Eisenhower matrix tells agents what matters. The Kanban board tracks where work stands. The inbox carries reports. Per-task notes preserve context across sessions. The compressed ai-context.md gives each new session just enough state to continue seamlessly. I've been running Continuous Missions on the codebase itself — agents pick up and continue work across sessions like a person would, without anything clever. It just works.
Try It
Mission Control is open source under the MIT license.
```bash
git clone https://github.com/MeisnerDan/mission-control.git
cd mission-control/mission-control
pnpm install
pnpm dev
```
Open http://localhost:3000 and click "Load Demo Data" to see it in action.
What's next:
- Docker support for one-command setup
- Cloud sync option for cross-device access
- Mobile companion app
- More agent integrations beyond Claude Code
The API is token-optimized (~50 tokens per request vs. ~5,400 unfiltered — a ~99% reduction), and it works with any AI agent that can read/write local files — not just Claude Code.
If you're a solo dev running AI agents and feeling the coordination chaos — give it a try. Stars and feedback welcome.