You're already using ChatGPT, Claude, or Gemini.
But have you noticed how quickly chatting with a single AI hits a ceiling?
You ask a question and get one perspective. A real decision needs a product perspective, a technical perspective, a financial perspective, a marketing perspective... and then you still have to synthesize them all yourself.
Agency Orchestrator does exactly this: one sentence in, multiple AI roles collaborate automatically, a full plan out in minutes.
Open-source. Free. Here's the complete guide.
What It Does
I typed one command in my terminal:
ao compose "I'm a programmer looking to start an AI content side hustle, target $3K/month, give me a complete plan" --run
3 minutes later, 5 AI roles had each completed their part:
- 🔭 Trend Researcher — compared 6 niches by competition, revenue ceiling, and AI leverage
- 📱 Platform Analyst — scored 6 platforms, recommended "YouTube + Newsletter" combo
- 💰 Financial Planner — broke down $3K/month into courses ($1,800) + community ($600) + consulting ($600)
- ✍️ Content Strategist — produced 20 topics, 4 headline formulas, a content SOP
- 📋 Execution Planner — mapped out a 90-day action plan, day by day
The output isn't generic "you should try content creation" advice; it's a concrete, executable plan.
Install (2 Minutes)
Requires Node.js 18+.
npm install -g agency-orchestrator
Verify:
ao --version
# v0.5.0
Then download the English role library:
ao init --lang en
That's it. You're ready.
Configure Your AI Model
Agency Orchestrator supports 10 LLM providers. The twist: 7 of them need no API key.
No API Key Needed (use your existing subscription)
| You have... | Config | Cost to you |
|---|---|---|
| Claude Max/Pro ($20/mo) | `--provider claude-code` | $0 extra |
| ChatGPT Plus/Pro ($20/mo) | `--provider codex-cli` | $0 extra |
| GitHub Copilot ($10/mo) | `--provider copilot-cli` | $0 extra |
| Google Account | `--provider gemini-cli` | Free (1,000 req/day) |
| Hermes Agent | `--provider hermes-cli` | Free (🔥 NousResearch open-source) |
| Ollama | `--provider ollama` | Free (local models, fully offline) |
| OpenClaw | `--provider openclaw-cli` | Free |
API Key Providers (pay per token)
| Provider | Setup | Cost |
|---|---|---|
| DeepSeek (recommended) | `export DEEPSEEK_API_KEY="your-key"` | ~$2 lasts a long time |
| OpenAI | `export OPENAI_API_KEY="your-key"` | Per token |
| Any OpenAI-compatible API | Set `OPENAI_BASE_URL` | Varies |
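For the OpenAI-compatible route, setup is just two environment variables. The gateway URL below is a placeholder, and the exact provider name to pass at run time is an assumption here (check `ao --help` for the real list):

```shell
# Route an OpenAI-compatible provider through a custom gateway.
# The URL and key below are placeholders; substitute your own.
export OPENAI_BASE_URL="https://your-gateway.example.com/v1"
export OPENAI_API_KEY="your-key"
# then run something like: ao run workflow.yaml --provider openai
```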
My recommendation: DeepSeek for everyday use (cheap), Claude Code for important tasks (highest quality).
Three Ways to Use It
Way 1: One Sentence → Full Result (Easiest)
ao compose "what you want AI to do" --run
No config. No role selection. AI automatically picks the right roles from 211 available experts, generates a workflow, and executes it.
Examples:
# Business analysis
ao compose "Analyze the feasibility of building an AI budgeting app" --run
# Tech comparison
ao compose "Compare Cursor, Windsurf, and Copilot — give me a recommendation" --run
# Long-form writing
ao compose "Write a deep-dive article on AI Agent trends" --run
# Startup planning
ao compose "Plan an AI education startup with $15K budget" --run
# Code review
ao compose "PR code review covering security and performance" --run
If you only want to generate the YAML without executing:
ao compose "your description"
# Generates workflows/xxx.yaml — run later with ao run
Way 2: Use Built-in Templates
# Product requirements review
ao run workflows/product-review.yaml --input prd_content=@prd.md
# PR code review (3-way parallel → summary)
ao run workflows/dev/pr-review.yaml --input code=@src/main.ts
# Business plan
ao run workflows/strategy/business-plan.yaml --input idea="AI-powered resume builder"
# Collaborative fiction
ao run workflows/story-creation.yaml --input premise="A time travel story" --input style="thriller"
32 built-in templates covering dev, marketing, strategy, legal, HR, and more.
Way 3: Write Your Own YAML (Most Flexible)
```yaml
name: "Market Analysis"
agents_dir: "agency-agents"
llm:
  provider: "deepseek"
  model: "deepseek-chat"
inputs:
  - name: topic
    required: true
steps:
  - id: research
    role: "product/product-trend-researcher"
    task: "Research market trends and competitive landscape for {{topic}}"
    output: market_data
  - id: analysis
    role: "strategy/nexus-strategy"
    task: "Based on {{market_data}}, provide strategic recommendations"
    depends_on: [research]
    output: strategy
  - id: plan
    role: "product/product-manager"
    task: "Based on {{strategy}}, create a product roadmap"
    depends_on: [analysis]
    output: roadmap
```
Run it:
ao run my-workflow.yaml --input topic="AI education"
211 Built-in AI Roles
ao roles
Covers virtually every role in a company:
| Category | Count | Examples |
|---|---|---|
| Strategy | 8 | CEO, Strategy Analyst, Innovation Catalyst |
| Product | 15 | Product Manager, Trend Researcher, User Researcher |
| Engineering | 25 | Architect, Full-stack Dev, Code Reviewer, DevOps |
| Design | 12 | Brand Director, UX Designer, Interaction Designer |
| Marketing | 20 | Growth Hacker, Content Strategist, SEO Expert, Social Media |
| Finance | 10 | Financial Analyst, Budget Planner, Risk Analyst |
| Writing | 18 | Blog Writer, Copywriter, Editor, Technical Writer |
| HR | 8 | Recruiter, Interviewer, Org Development |
| Legal | 6 | Contract Reviewer, Compliance, IP |
| Testing | 10 | QA Engineer, Performance Testing, Security Testing |
| More | 47 | Data Analytics, Customer Success, Project Management... |
Each role ships with a comprehensive system prompt: not just "you are a product manager" but a full professional definition with a workflow, an output format, and mental models.
Key Features
Auto-Parallel DAG
Steps without dependencies run in parallel automatically:
```yaml
steps:
  - id: market              # Layer 1
    task: "Market research"
  - id: user                # Layer 1 (parallel with market)
    task: "User research"
  - id: product             # Layer 2 (waits for both)
    task: "Product planning"
    depends_on: [market, user]
```
market and user run simultaneously. product starts only after both finish.
Variable Passing
Output from one step automatically flows to the next:
```yaml
- id: research
  task: "Research market data"
  output: market_data                        # output variable
- id: analysis
  task: "Analyze based on {{market_data}}"   # reference previous output
  depends_on: [research]
```
Per-Step Model Override (v0.5.0)
Use a cheap model for research, a powerful one for decisions:
```yaml
steps:
  - id: research
    role: "product/product-trend-researcher"
    task: "Research market trends"
    llm:
      provider: deepseek
      model: deepseek-chat   # cheap, good for bulk research
  - id: decision
    role: "strategy/nexus-strategy"
    task: "Make final strategic decision"
    llm:
      provider: openai
      model: gpt-4o          # high quality for key decisions
```
Or override all steps from the command line:
ao run workflow.yaml --provider claude-code
Streaming + Auto-Resume (v0.5.0)
The biggest pain with DeepSeek: long tasks would time out mid-generation (a 60-second server-side limit).
v0.5.0 fixes this completely:
- Streaming — output appears as it's generated, no more waiting
- Auto-resume — if connection drops, automatically continues from where it stopped (up to 3 retries)
- Smart retry — 429 rate limits, 500 server errors, and transient network glitches are all retried with exponential backoff
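"Exponential backoff" simply means the wait doubles between retries. As an illustrative sketch of the pattern (not ao's actual source), with a stand-in `flaky` command that fails twice before succeeding:

```shell
# Illustrative exponential backoff (a sketch of the retry pattern, not ao's code)
attempt=0
flaky() { attempt=$((attempt + 1)); [ "$attempt" -ge 3 ]; }  # fails twice, then succeeds

delay=1
tries=0
until flaky; do
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then echo "giving up"; break; fi
  delay=$((delay * 2))   # wait 2s, then 4s, then 8s, ...
  sleep "$delay"
done
echo "succeeded after $attempt attempts"
```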
Resume & Iterate
Ran a 9-step workflow but the financial analysis in step 7 wasn't detailed enough? Don't re-run everything:
ao run workflow.yaml --resume last --from finance_plan
Only the steps from finance_plan onward are re-run; the previous six steps reuse cached results. That saves time and tokens.
Condition Branching
Route based on previous output:
```yaml
- id: review
  task: "Review plan quality"
  output: review_result
- id: revise
  task: "Revise the plan"
  depends_on: [review]
  condition: "{{review_result}} contains needs revision"
```
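A fuller sketch would pair the revision branch with an approval branch. The `publish` step and its condition string are illustrative additions here; check the project docs for the precise condition grammar:

```yaml
- id: review
  task: "Review plan quality"
  output: review_result
- id: revise
  task: "Revise the plan"
  depends_on: [review]
  condition: "{{review_result}} contains needs revision"
- id: publish
  task: "Format the approved plan for delivery"
  depends_on: [review]
  condition: "{{review_result}} contains approved"
```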
Loop Iteration
Let AI iterate until satisfied:
```yaml
- id: write
  task: "Write the article"
  output: draft
  loop:
    back_to: review
    max_iterations: 3
    exit_condition: "{{review_result}} contains approved"
```
File Input
Pass local file contents to AI:
ao run workflow.yaml --input prd_content=@prd.md
Works Inside Your IDE
Agency Orchestrator integrates with 14 AI coding tools — Cursor, Claude Code, Copilot, Windsurf, Trae, and more.
Install with one command:
npx agency-orchestrator install --tool cursor # or trae, copilot, etc.
Then just tell your IDE: "run a workflow to review this PR" — it handles the rest.
See the integration guides for setup details.
Where Are the Results?
Each run saves to ao-output/:
```
ao-output/
└── market-analysis-2026-04-14T14-38-06/
    ├── metadata.json        # Execution info (timing, tokens, status)
    └── steps/
        ├── 1-research.md
        ├── 2-analysis.md
        └── 3-plan.md
```
Each role's output is a separate Markdown file — easy to read, reference, and share.
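Because every step lands as plain Markdown, you can stitch the newest run into a single report with standard shell tools. The first lines below just fake a finished run for demonstration; point the real commands at your own ao-output/:

```shell
# Demo setup: fake a finished run (skip this, your runs create it for real)
mkdir -p ao-output/demo-run/steps
printf '# Research\n' > ao-output/demo-run/steps/1-research.md
printf '# Plan\n'     > ao-output/demo-run/steps/2-plan.md

# Merge all step outputs from the most recent run into one report
run_dir=$(ls -td ao-output/*/ | head -1)   # newest run directory
cat "$run_dir"steps/*.md > combined-report.md
```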
FAQ
Q: How much does it cost?
The tool itself is free and open-source. Running a full workflow with DeepSeek API costs ~$0.01–0.05. Using Claude Code, Copilot, or other subscription-based models costs nothing extra.
Q: How is this different from just chatting with ChatGPT/Claude?
Chatting gives you one AI's perspective. ao runs multiple specialized roles, each covering its own area of expertise, then synthesizes their output: one person versus a whole team.
Q: How does this compare to CrewAI / LangGraph?
They require Python, API keys, and writing your own role definitions. ao uses zero-code YAML, 211 roles out of the box, and 7 providers need no API key.
Q: What if it disconnects mid-run?
v0.5.0 has streaming + auto-resume + smart retry. If it still fails, use --resume last to continue from where it stopped.
Q: Does it work on Windows?
Yes. v0.5.0 fixed Windows compatibility.
Q: Can I use local models?
Yes. --provider ollama supports all Ollama local models, fully offline.
Get Started
npm install -g agency-orchestrator
ao compose "your idea here" --run
- GitHub: github.com/jnMetaCode/agency-orchestrator
- Role Library: agency-agents (English) / agency-agents-zh (Chinese)
If this is useful, a ⭐ on GitHub helps more people discover the project.
Agency Orchestrator is an open-source multi-agent orchestration tool. Define AI collaboration workflows in YAML, auto-parallel with DAG. 211 expert roles out of the box, 10 LLM providers (7 need no API key).