
Lakshmi Sravya Vedantham


I built a Claude Code skill that assembles AI teams and runs them in parallel

You give it a problem. It builds a cross-functional team, assigns missions, and executes everything in dependency-based waves — using real parallel subagents.

That is what Assemble does. Here is exactly what using it looks like.


The problem it solves

AI tasks are still mostly serial. Research, then spec, then plan. Each step waits for the last. But that is not how teams work — teams run in parallel, with dependencies managed by a coordinator.

Assemble brings that model to Claude Code. You describe the problem. A Project Manager agent organizes teams, assigns missions, and runs waves of parallel subagents until everything is done.
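The actual scheduling lives in the PM's prompt, but the wave model is easy to picture in code. A minimal sketch of dependency-based waves (the team names, dependency graph, and `run_team` callable here are illustrative, not part of Assemble):

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_waves(teams, run_team):
    """Execute teams in dependency-based waves.

    `teams` maps a team name to the set of team names it depends on.
    Every team whose dependencies are satisfied runs concurrently.
    """
    done, waves = set(), []
    while len(done) < len(teams):
        wave = [t for t, deps in teams.items()
                if t not in done and deps <= done]
        if not wave:
            raise RuntimeError("circular dependency between teams")
        with ThreadPoolExecutor() as pool:
            list(pool.map(run_team, wave))  # run the whole wave in parallel
        done.update(wave)
        waves.append(wave)
    return waves
```

With `Research` unblocked and `Engineering` and `QA` depending on it, this yields two waves: `Research` alone, then `Engineering` and `QA` together.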


A full walkthrough

Let us build a CLI tool that analyzes git history and outputs a developer personality report.

Phase 1 — Intake

You run /assemble. The PM asks 4 questions, one at a time:

PM: What are we building or solving?

> Build a CLI tool that analyzes git history and outputs a developer
  personality report. Should work on any public GitHub repo.

PM: What constraints matter? (timeline, tech stack, budget, team preferences)

> Python only. No paid APIs. Should run in under 5 seconds on a repo
  with 1000 commits.

PM: What does done look like? What would make this a success?

> A pip-installable CLI: `gitdna analyze owner/repo`
  Outputs a 1-page markdown report with archetype, patterns, and stats.

PM: Any teams or roles you want included or excluded?

> Include Research and Engineering. Skip Design and Infra.
  Add QA — I want a test plan.

After the 4 answers, the PM confirms what it heard:

Got it. Here is what I am working from:

· Goal: pip-installable CLI that analyzes git history → personality report
· Constraints: Python only, no paid APIs, <5s on 1000 commits
· Done: `gitdna analyze owner/repo` produces a markdown archetype report

Teams: Research, Engineering, QA

Moving to Phase 2.

Phase 2 — Organize

The PM builds the project board and shows you everything before touching a single file:

ASSEMBLE — Project Board
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Mission: pip-installable CLI that analyzes git history → personality report
Constraints: Python only, no paid APIs, <5s on 1000 commits
Done looks like: `gitdna analyze owner/repo` outputs a markdown archetype report

[WAVE 1] — Starting immediately

┌ Research Team
│ Mission: Evaluate git parsing libraries and identify commit patterns
│           that map to distinct developer archetypes
│ Tasks:
│   · Survey gitpython, pygit2, dulwich — compare API ergonomics and speed
│   · Identify 5–8 measurable commit signals (message tone, time of day,
│     file churn, commit size, etc.)
│   · Map signals to 4–6 personality archetypes with concrete examples
│   · Document recommended library and signal extraction approach
│ Output: docs/research-notes.md
│ Status: ⏳ Pending approval

[WAVE 2] — Unlocks after Wave 1 approval

┌ Engineering Team
│ Mission: Build the CLI and report generator based on research output
│ Tasks:
│   · Scaffold pip-installable package with pyproject.toml and Click CLI
│   · Implement git history fetcher using recommended library from research
│   · Build signal extractor — reads commits, outputs feature dict
│   · Build archetype classifier — maps features to personality type
│   · Build markdown report generator — 1-page output with stats and narrative
│ Output: docs/implementation-plan.md + working code in src/
│ Depends on: Research
│ Status: 🔒 Locked

┌ QA Team
│ Mission: Design test strategy and acceptance criteria for the CLI
│ Tasks:
│   · Define test fixtures — sample git histories for each archetype
│   · Write unit tests for signal extractor and classifier
│   · Write integration test: clone a real repo, run CLI, validate output shape
│   · Define performance acceptance criteria: <5s on 1000 commits
│ Output: docs/qa-checklist.md
│ Depends on: Research, Engineering
│ Status: 🔒 Locked

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Ready to execute. Approve? (yes / adjust [instruction])

You see every wave, every team, every task — before anything runs. You can say `adjust skip the signal extraction doc` or `adjust add a 5th task to Research` and the board updates. Or just:

> yes

Phase 3 — Execute

Wave 1 kicks off. The PM spawns the Research Team agent and shows you it is running:

Wave 1 — executing...
  → Research Team agent spawned

The Research agent does real work: reads documentation, evaluates libraries, writes a structured findings file to docs/research-notes.md. When it finishes, it sends a structured report back to the PM:

status: complete
confidence: high
completed_tasks:
  - Evaluated gitpython, pygit2, dulwich — gitpython recommended
  - Identified 6 commit signals: message length, time-of-day spread,
    file type mix, commit frequency, churn rate, refactor ratio
  - Mapped to 5 archetypes: The Architect, The Firefighter, The Craftsman,
    The Experimenter, The Finisher
artifacts_written:
  - docs/research-notes.md
key_findings:
  - gitpython is 3x slower than pygit2 on large repos — recommend pygit2
  - Time-of-day spread is the strongest single predictor of archetype
  - Commit message length alone is a weak signal; combine with churn rate
next_step: Engineering can use docs/research-notes.md as input
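Signals like the ones Research identified are straightforward to compute once you have commit metadata. A sketch of two of them, over plain dicts standing in for whatever a pygit2 walk would produce (the field names and the spread metric are my assumptions, not the skill's output):

```python
from datetime import datetime, timezone
from statistics import pstdev

def extract_signals(commits):
    """Compute two of the six signals from commit metadata.

    `commits` is a list of dicts with `message` (str) and `timestamp`
    (unix seconds) -- a stand-in for real git history.
    """
    avg_len = sum(len(c["message"]) for c in commits) / len(commits)
    hours = [datetime.fromtimestamp(c["timestamp"], tz=timezone.utc).hour
             for c in commits]
    return {
        "avg_message_length": avg_len,
        # spread of commit hours: 0.0 means every commit lands at the same hour
        "time_of_day_spread": pstdev(hours) if len(hours) > 1 else 0.0,
    }
```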

The PM updates the board and shows you the Wave 1 checkpoint:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Wave 1 complete.

Research Team   ✅  docs/research-notes.md written

Key findings:
  · Research: pygit2 recommended over gitpython (3x faster on large repos)
  · Research: time-of-day spread is the strongest archetype predictor
  · Research: 5 archetypes defined — Architect, Firefighter, Craftsman,
              Experimenter, Finisher

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Continue to Wave 2? (continue / adjust [instruction] / stop)

You read the findings. You decide.

> continue

Wave 2 kicks off. Engineering and QA run in parallel — both get spawned at the same time. Engineering gets docs/research-notes.md as input. QA gets both research and engineering output.

Wave 2 — executing...
  → Engineering Team agent spawned
  → QA Team agent spawned

Both run simultaneously. Engineering writes the implementation plan and scaffolds the code. QA writes the test strategy based on the research and engineering outputs.

When both return:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Wave 2 complete.

Engineering Team   ✅  docs/implementation-plan.md written
                   ✅  src/gitdna/ scaffolded
QA Team            ✅  docs/qa-checklist.md written

Key findings:
  · Engineering: pygit2 + Click stack confirmed, CLI entry point working
  · Engineering: archetype classifier uses weighted signal scoring
  · QA: 12 unit tests defined, 3 integration scenarios, perf target <5s

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
All waves complete. Moving to Phase 4.
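"Weighted signal scoring" is the simplest classifier shape there is. A sketch, with archetype names from the research output but weights that are entirely made up for illustration:

```python
def classify(signals, weights):
    """Score each archetype by a weighted sum of signals; pick the max.

    `weights` maps archetype -> {signal_name: weight}. The weight values
    are illustrative; the real ones live in Engineering's plan.
    """
    scores = {
        archetype: sum(w * signals.get(name, 0.0) for name, w in ws.items())
        for archetype, ws in weights.items()
    }
    return max(scores, key=scores.get)
```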

Phase 4 — Close

The PM writes docs/executive-summary.md and lists every artifact:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ASSEMBLE — Complete

Artifacts written:
  docs/research-notes.md
  docs/implementation-plan.md
  docs/qa-checklist.md
  docs/executive-summary.md
  src/gitdna/__init__.py
  src/gitdna/cli.py
  src/gitdna/parser.py
  src/gitdna/classifier.py
  src/gitdna/reporter.py
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

You now have: a research-backed library recommendation, a full implementation plan, working scaffolded code, and a test strategy — all from a single /assemble run.
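For a sense of what the scaffolded entry point in `src/gitdna/cli.py` looks like, here is a rough sketch of the `gitdna analyze owner/repo` shape. The plan uses Click; this sketch swaps in stdlib argparse so it stays dependency-free, and the pipeline steps are comments, not real calls:

```python
import argparse

def main(argv=None):
    """Hypothetical shape of the `gitdna analyze owner/repo` entry point."""
    parser = argparse.ArgumentParser(prog="gitdna")
    sub = parser.add_subparsers(dest="command", required=True)
    analyze = sub.add_parser("analyze", help="analyze a repo's git history")
    analyze.add_argument("repo", help="owner/repo on GitHub")
    args = parser.parse_args(argv)
    owner, _, name = args.repo.partition("/")
    # real pipeline: fetch history -> extract signals -> classify ->
    # render the 1-page markdown report
    return f"Analyzing {owner}/{name}..."

if __name__ == "__main__":
    print(main())
```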


Querying at any point

While any of this is running, you can ask:

> What is QA doing?

PM: QA Team — Wave 2, currently in progress.
    Mission: Design test strategy and acceptance criteria.
    Writing: docs/qa-checklist.md
    Status: running (no blockers reported)
> Show project status

PM: [renders full board with current statuses]

No new agents spawned. The PM answers from in-memory board state.


What happens when a team is blocked

If a team fails, the PM does not give up or fill in the gap itself. It retries once with adjusted scope:

⚠️ Infra Team — blocked: no cloud credentials available in environment
Retrying with adjusted scope (local mock environment)...
✅ Infra Team — partial complete. docs/infra-plan.md written (mock config only)

If the retry also fails:

🔴 Infra Team — blocked after retry: mock environment insufficient for VPC config
Downstream teams that depend on this: Engineering

Hold (wait for you to resolve) or continue (proceed without Infra output)?
(hold / continue)

You decide. The PM waits.
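The escalation path is easy to model: run once, retry once with narrower scope, then hand the decision to the user. A sketch where every callable is a stand-in for prompt-driven PM behavior described above:

```python
def run_with_retry(run_team, adjust_scope, ask_user, team):
    """Retry a blocked team once with adjusted scope, then escalate.

    `run_team` raises RuntimeError on a blocker, `adjust_scope` narrows
    the mission (e.g. to a mock environment), and `ask_user` returns
    "hold" or "continue". All names are illustrative.
    """
    try:
        return run_team(team)
    except RuntimeError:
        try:
            return run_team(adjust_scope(team))  # the single retry
        except RuntimeError:
            return ask_user(team)  # blocked after retry: user decides, PM waits
```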


Architecture

Four files:

SKILL.md          ← PM prompt registered with Claude Code plugin system
team-agent.md     ← Contract template every spawned agent receives
team-library.md   ← 8 default teams: missions, roles, output artifacts
status-schema.md  ← Structured report format (status, confidence, findings)

The PM never touches project artifacts directly. It organizes, delegates, and reports. Every team agent writes a real file — no hallucinated summaries.
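The structured report format is what makes delegation auditable: the PM can check each return mechanically. A sketch of that check, using the field names visible in the Wave 1 report (the set of allowed status values is my assumption; status-schema.md may define more):

```python
REQUIRED_FIELDS = {"status", "confidence", "completed_tasks",
                   "artifacts_written", "key_findings", "next_step"}

def validate_report(report):
    """Check a team agent's report dict against the fields shown above."""
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if report["status"] not in {"complete", "partial", "blocked"}:
        raise ValueError(f"unknown status: {report['status']}")
    return True
```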


Install

# Copy support files
mkdir -p ~/.claude/skills/assemble
git clone https://github.com/LakshmiSravyaVedantham/assemble.git /tmp/assemble
cp /tmp/assemble/team-agent.md ~/.claude/skills/assemble/
cp /tmp/assemble/team-library.md ~/.claude/skills/assemble/
cp /tmp/assemble/status-schema.md ~/.claude/skills/assemble/

# Register with Claude Code (requires superpowers plugin)
SKILLS_DIR=~/.claude/plugins/cache/claude-plugins-official/superpowers/5.0.0/skills
mkdir -p "$SKILLS_DIR/assemble"
cp /tmp/assemble/SKILL.md "$SKILLS_DIR/assemble/SKILL.md"

Then open Claude Code and run /assemble.

Repo: https://github.com/LakshmiSravyaVedantham/assemble
