Build Your Own AI Agent Team in 5 Minutes with TeamHero


You have Claude. Maybe you've got a few agents doing different things. But if you're honest, it's a mess -- scattered chat windows, no history of who did what, and no way to review work before it goes live.

TeamHero fixes that. It's an open-source platform that gives your AI agents a proper project management layer: task boards, status lifecycles, persistent memory, and a live dashboard. Think of it as Linear or Jira, but for AI agents instead of humans.

This tutorial walks through the full setup. By the end, you'll have a running team with specialized agents, assigned tasks, and a dashboard showing everything in real time.

Prerequisites

  • Node.js 18+ installed
  • Claude Code CLI from Anthropic (install via npm install -g @anthropic-ai/claude-code)
  • A terminal (works on Windows, macOS, Linux)

That's it. No database, no Docker, no cloud account.

Step 1: Initialize Your Project

```shell
npx teamhero@latest init my-team
cd my-team
npm start
```

The server starts and opens a dashboard at http://localhost:3787. You should see an empty team board with just the Orchestrator agent, which comes built in.

The Orchestrator is the coordinator. It reads tasks, delegates them to agents, and manages the review process. You don't work with it directly most of the time -- it runs behind the scenes.

Step 2: Create Your First Agent

Let's add a research agent. Open a new terminal and run:

```shell
curl -X POST http://localhost:3787/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Scout",
    "role": "Researcher & Analyst",
    "mission": "Investigate topics and produce structured reports",
    "description": "Handles all research tasks. Gathers data from multiple sources, cross-references findings, and delivers clear reports with citations.",
    "personality": {
      "traits": ["thorough", "skeptical", "methodical"],
      "tone": "Academic but accessible",
      "style": "Evidence-first, always cites sources"
    },
    "rules": [
      "Verify claims from at least two sources",
      "Flag uncertainty explicitly",
      "Include source links in every report"
    ],
    "capabilities": ["web research", "data analysis", "report writing"]
  }'
```

Refresh the dashboard. Scout appears as a new agent card with its role, personality, and status.

Each agent gets its own folder on disk with markdown files for short-term memory (current context) and long-term memory (accumulated knowledge). This means agents remember what they've worked on across sessions.

Step 3: Add More Specialists

A team needs more than one agent. Here's a developer agent:

```shell
curl -X POST http://localhost:3787/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Dev",
    "role": "Full-Stack Developer",
    "mission": "Write clean, tested, production-ready code",
    "personality": {
      "traits": ["pragmatic", "detail-oriented", "security-conscious"],
      "tone": "Direct and technical",
      "style": "Shows code, explains tradeoffs"
    },
    "rules": [
      "Write tests for all new functionality",
      "Never commit secrets or API keys",
      "Document public interfaces"
    ],
    "capabilities": ["Node.js", "TypeScript", "REST APIs", "testing"]
  }'
```

And a content writer:

```shell
curl -X POST http://localhost:3787/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Pen",
    "role": "Content Writer",
    "mission": "Create clear, engaging written content",
    "personality": {
      "traits": ["creative", "concise", "audience-aware"],
      "tone": "Conversational and clear",
      "style": "Short paragraphs, active voice, concrete examples"
    },
    "rules": [
      "Match tone to the target platform",
      "No filler phrases or jargon without explanation",
      "Always include a call to action"
    ],
    "capabilities": ["blog posts", "documentation", "social media copy"]
  }'
```

Your dashboard now shows four agents: Orchestrator, Scout, Dev, and Pen.

Step 4: Create and Assign a Task

Tasks follow a lifecycle: draft -> working -> pending -> accepted -> closed. This gives you a clear review step before any agent output goes live.
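The lifecycle is linear, so you can sketch it as a simple transition table. This is just an illustration of the flow above, not TeamHero's internal code:

```shell
# Illustrative sketch of the task lifecycle; not TeamHero internals.
next_status() {
  case "$1" in
    draft)    echo "working"  ;;  # orchestrator picks the task up
    working)  echo "pending"  ;;  # agent finished, awaiting review
    pending)  echo "accepted" ;;  # human approved the deliverable
    accepted) echo "closed"   ;;
    *)        echo "unknown"  ;;
  esac
}

next_status draft     # prints: working
next_status pending   # prints: accepted
```

The only transition that requires a human by default is pending to accepted; everything else the orchestrator drives on its own.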

Create a research task and assign it to Scout:

```shell
curl -X POST http://localhost:3787/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Research competitor pricing models",
    "description": "Analyze the pricing strategies of the top 5 AI agent frameworks. Compare free tiers, paid plans, and enterprise options. Deliver a comparison table with recommendations.",
    "assignedTo": "SCOUT_AGENT_ID",
    "status": "draft",
    "priority": "high",
    "type": "research"
  }'
```

Replace SCOUT_AGENT_ID with the actual ID returned when you created Scout (you can also find it on the dashboard or via GET /api/agents).
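If you'd rather script the lookup, you can filter the GET /api/agents response. The payload shape below (a flat JSON array of objects with id and name fields) is an assumption, so adjust the filter to whatever the endpoint actually returns:

```shell
# Stand-in payload; in practice: RESPONSE=$(curl -s http://localhost:3787/api/agents)
# The array-of-{id,name} shape is an assumption, not a documented contract.
RESPONSE='[{"id":"ag_123","name":"Scout"},{"id":"ag_456","name":"Dev"}]'

SCOUT_ID=$(echo "$RESPONSE" | python3 -c '
import json, sys
agents = json.load(sys.stdin)
print(next(a["id"] for a in agents if a["name"] == "Scout"))
')
echo "$SCOUT_ID"
```

Drop the resulting ID into the assignedTo field of the task payload.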

The task appears on the task board in Draft status.

Step 5: Run the Task

When the Orchestrator picks up the task, it launches Scout as a subagent. Scout:

  1. Reads its agent definition (role, rules, personality)
  2. Loads its memory files for context
  3. Executes the work using its available tools (web search, file reading, shell commands)
  4. Saves deliverables to a versioned folder (data/tasks/{id}/v1/)
  5. Logs progress updates
  6. Sets the task status to Pending, flagging it for your review

You can watch this happen in real time on the dashboard. The task card updates as the agent works, and progress logs stream in.

Step 6: Review and Approve

When a task reaches Pending, it's waiting for your review. Open the task on the dashboard to see:

  • The deliverable files the agent produced
  • Progress logs showing what the agent did step by step
  • The agent's confidence notes or flagged uncertainties

You have three options:

  • Accept -- the work is good. Task moves to Accepted, then Closed.
  • Improve -- send feedback. The agent revises and creates a new version (v2, v3, etc.).
  • Hold or Cancel -- pause or abandon the task.

```shell
# Accept the deliverable
curl -X PUT http://localhost:3787/api/tasks/TASK_ID \
  -H "Content-Type: application/json" \
  -d '{"action": "accept"}'

# Or request revisions with feedback
curl -X PUT http://localhost:3787/api/tasks/TASK_ID \
  -H "Content-Type: application/json" \
  -d '{"action": "improve", "comments": "Add a section comparing open-source options"}'
```

Step 7: Use Subtasks and Dependencies

Real projects involve multiple agents working in sequence. Break a parent task into subtasks with dependencies:

```shell
# Create a parent task
curl -X POST http://localhost:3787/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Launch blog post about new feature",
    "description": "Full pipeline: research, write, review",
    "status": "draft",
    "priority": "high",
    "type": "content"
  }'

# Create subtasks under it
curl -X POST http://localhost:3787/api/tasks/PARENT_ID/subtasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Research the topic",
    "assignedTo": "SCOUT_ID",
    "type": "research"
  }'

curl -X POST http://localhost:3787/api/tasks/PARENT_ID/subtasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Write the blog post",
    "assignedTo": "PEN_ID",
    "type": "content",
    "dependsOn": ["RESEARCH_SUBTASK_ID"]
  }'
```

The writing subtask won't start until the research subtask is accepted. When all subtasks finish, the parent task auto-advances to Pending for your review.
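The gating rule behind this is simple: a subtask may start only when every ID in its dependsOn list has been accepted. Here's a sketch of that behavior (illustrative only, with made-up IDs; not TeamHero code):

```shell
# Sketch of dependency gating: a subtask is runnable only if every
# dependency has been accepted. IDs and data shapes are made up.
is_runnable() {
  accepted="$1"; deps="$2"
  for dep in $deps; do
    case " $accepted " in
      *" $dep "*) ;;              # this dependency is satisfied
      *) echo "no"; return ;;     # at least one dependency still open
    esac
  done
  echo "yes"
}

is_runnable "research-sub" "research-sub"   # prints: yes
is_runnable "" "research-sub"               # prints: no
```

TeamHero applies the same check every time a task is accepted, which is what unblocks the next step in the chain.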

Autopilot Mode

For routine or low-risk tasks, skip the manual review:

```shell
curl -X POST http://localhost:3787/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Generate weekly status report",
    "assignedTo": "SCOUT_ID",
    "autopilot": true,
    "type": "operations"
  }'
```

The agent delivers, the orchestrator auto-accepts, and the task closes. You still get full progress logs for auditing. Toggle autopilot off any time to go back to manual review.

Knowledge Base

When an agent produces something worth keeping, promote it to the shared knowledge base:

```shell
curl -X POST http://localhost:3787/api/tasks/TASK_ID/promote
```

This copies the deliverable into a persistent store with metadata (title, tags, category). Other agents can reference it in future work. Stale documents get flagged during round table reviews so nothing rots.

What You Get

After this setup, you have:

  • Specialized agents with distinct roles, personalities, and persistent memory
  • Structured task management with a clear draft-to-closed lifecycle
  • Human review by default with optional autopilot for trusted work
  • Subtask dependencies so multi-step projects execute in order
  • A live dashboard showing agent status, task progress, and deliverables in real time
  • Version history on every deliverable, so you can trace how work evolved
  • A shared knowledge base that grows as your team completes tasks

Everything runs locally on your machine. No cloud dependency, no database to manage, no vendor lock-in. The entire state lives in JSON files you can read, edit, and version control.
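Because the state is plain JSON, putting the whole team under version control is one git init away. A minimal sketch, with placeholder paths and file names standing in for whatever TeamHero actually wrote to your data folder:

```shell
# Snapshot team state with git. The demo-team/ folder and agents.json are
# placeholders; in a real setup you'd run this inside my-team/.
mkdir -p demo-team/data && cd demo-team
echo '{"name":"Scout","role":"Researcher & Analyst"}' > data/agents.json
git init -q
git add data
git -c user.name="demo" -c user.email="demo@example.com" commit -qm "snapshot team state"
git log --format=%s
```

A commit after each accepted task gives you a second layer of history on top of TeamHero's own deliverable versions.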

Try It

TeamHero is MIT-licensed and works on Windows, macOS, and Linux.

```shell
npx teamhero@latest init my-team
cd my-team && npm start
```

GitHub repo with full docs and API reference:

https://github.com/sagiyaacoby/TeamHero

If you run into issues or have ideas for what the management layer for AI agent teams should look like, open an issue on GitHub. Contributions welcome.


Built by Sagi Yaacoby. TeamHero v2.5.1.
