Last month I watched a team burn $400 in API credits in a single afternoon.
They had four agents working on a research task. Each agent could see every other agent's output. Every time one agent updated its findings, the other three would re-process, generate new outputs, and trigger another round of updates. Four agents, talking to each other in a full mesh, with no termination condition. The token counter looked like a slot machine.
The fix took ten minutes. They didn't need a mesh. They needed a hierarchy — one coordinator agent that dispatched three specialists and merged the results. Same four agents, same task, same quality. One-eighth the cost.
The topology you choose for a multi-agent system — how agents communicate, who talks to whom, who decides — determines whether your system is fast or slow, reliable or brittle, cheap or expensive. Most teams default to chaining agents sequentially and never question it. But agent orchestration is a graph problem, and different tasks demand fundamentally different graph shapes.
I've spent the past year building and studying multi-agent architectures. Here are the 12 topology patterns that cover every coordination scenario I've encountered, grouped by complexity.
Simple Topologies
These are the building blocks. If you're new to multi-agent orchestration, start here.
1. Sequential

    A ──▶ B ──▶ C ──▶ D
Each agent completes its work before passing output to the next. Classic pipeline.
When to use: Step-by-step workflows where order matters — document extraction, then classification, then summarization. Code generation, then review, then testing.
Example: An invoice processing pipeline where Agent A extracts line items from a PDF, Agent B validates amounts against a purchase order, Agent C flags discrepancies, and Agent D generates an approval recommendation. Each step needs the full output of the previous one.
Trade-off: Simple to reason about and debug. But slow — your total latency is the sum of every step. One slow agent blocks everything downstream.
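The pattern reduces to a fold over a list of agents. A minimal Python sketch, with each agent stubbed as a plain function (`extract`, `validate`, `summarize` are hypothetical names; a real agent would wrap an LLM call):

```python
# Stub "agents": each one is a function taking the previous step's output.
def extract(doc: str) -> str:
    return f"items({doc})"

def validate(items: str) -> str:
    return f"validated({items})"

def summarize(validated: str) -> str:
    return f"summary({validated})"

def run_sequential(doc: str, agents) -> str:
    # Output of each step becomes the input of the next -- the pipeline.
    result = doc
    for agent in agents:
        result = agent(result)
    return result

print(run_sequential("invoice.pdf", [extract, validate, summarize]))
# → summary(validated(items(invoice.pdf)))
```

Note that total latency is exactly the sum of the three calls, which is the trade-off described above.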
2. Parallel

           ┌── A ──┐
           │       │
    IN ────┼── B ──┼──▶ MERGE
           │       │
           └── C ──┘
All agents receive the same input and run simultaneously. Results merge at the end.
When to use: Independent analysis from multiple perspectives — sentiment analysis across languages, security scanning with different rule sets, researching a topic from multiple sources.
Example: Running a security audit, a performance profiler, and a code style linter on the same codebase simultaneously. Each analyzer is independent. Results merge into a single report. What took 90 seconds sequentially finishes in 30.
Trade-off: Dramatically faster than sequential when tasks are independent. But you need a merge strategy, and if agents produce conflicting outputs, the merge logic becomes the hard part.
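With `asyncio`, the fan-out is one `gather` call. A sketch with stub analyzers (the function names are hypothetical; real agents would await an LLM or tool call where the `sleep(0)` placeholders sit):

```python
import asyncio

# Stub analyzers: each gets the same input and runs concurrently.
async def security_audit(code: str) -> str:
    await asyncio.sleep(0)  # stand-in for a real LLM/tool call
    return f"security:{code}"

async def perf_profile(code: str) -> str:
    await asyncio.sleep(0)
    return f"perf:{code}"

async def lint(code: str) -> str:
    await asyncio.sleep(0)
    return f"lint:{code}"

async def run_parallel(code: str) -> list[str]:
    # gather() runs all three concurrently and preserves input order,
    # which keeps the merge step deterministic.
    return await asyncio.gather(security_audit(code), perf_profile(code), lint(code))

results = asyncio.run(run_parallel("repo/"))
print(results)  # → ['security:repo/', 'perf:repo/', 'lint:repo/']
```

Here the "merge strategy" is just collecting an ordered list; conflict resolution between analyzers is where real systems spend their effort.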
Structured Topologies
These patterns impose hierarchy or dependency ordering. They handle complex, real-world workflows where "everything runs at once" isn't realistic.
3. Hierarchical

         BOSS
        /  |  \
       /   |   \
     W1    W2   W3
A supervisor agent decomposes a task and delegates sub-tasks to worker agents. Workers report back. The boss synthesizes.
When to use: Task decomposition — "write a research report" becomes "gather data" + "analyze trends" + "draft sections." Project management workflows where a lead agent coordinates specialists.
Example: A research agent receives "analyze the competitive landscape for AI agent frameworks." It spawns three workers: one searches academic papers, one crawls GitHub repos for stars and commit activity, one analyzes pricing pages. The boss merges findings into a structured competitive matrix.
Trade-off: Clean separation of concerns. The boss agent is a single point of failure, though — if it decomposes poorly, every worker goes in the wrong direction. Boss prompt quality is everything.
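A minimal boss/worker sketch. The decomposition is hard-coded here for clarity; in a real system the boss agent would plan the sub-tasks itself, which is exactly why its prompt quality matters. All function names are hypothetical:

```python
# Stub workers: each specialist handles one slice of the task.
def worker_papers(topic):   return f"papers on {topic}"
def worker_repos(topic):    return f"repos for {topic}"
def worker_pricing(topic):  return f"pricing of {topic}"

def boss(task: str) -> str:
    # Decompose: hard-coded split; a real boss would generate this plan.
    workers = [worker_papers, worker_repos, worker_pricing]
    # Delegate: dispatch the task to every worker.
    findings = [w(task) for w in workers]
    # Synthesize: merge worker findings into one report.
    return " | ".join(findings)

print(boss("agent frameworks"))
# → papers on agent frameworks | repos for agent frameworks | pricing of agent frameworks
```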
4. DAG (Directed Acyclic Graph)

    A ──▶ B ──▶ D
     \          ▲
      └──▶ C ───┘
Agents form a dependency graph. An agent runs only when all its dependencies have completed. No cycles.
When to use: Complex workflows with mixed dependencies — data fetching (parallel) feeds into analysis (sequential) which feeds into report generation, but only after a separate validation step also completes.
Example: A CI/CD pipeline where linting and type-checking run in parallel after checkout, unit tests run after linting passes, integration tests wait for both unit tests and a database migration to complete, and deployment triggers only after every test is green.
Trade-off: Maximum flexibility and parallelism where the dependency structure allows it. The cost is complexity — you need a scheduler that understands the graph and handles failures mid-execution. This is the topology that build systems (Make, Bazel) have used for decades.
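Python's standard library ships a DAG scheduler in `graphlib`. A sketch of the CI/CD example above, where each node would be an agent invocation (the stage names are illustrative):

```python
from graphlib import TopologicalSorter

# Dependency graph: each key runs only after all of its listed deps.
deps = {
    "lint": {"checkout"},
    "typecheck": {"checkout"},
    "unit": {"lint"},
    "migrate": {"checkout"},
    "integration": {"unit", "migrate"},
    "deploy": {"unit", "typecheck", "integration"},
}

def run_dag(deps):
    order = []
    ts = TopologicalSorter(deps)  # raises CycleError if the graph has a cycle
    for node in ts.static_order():  # nodes in dependency-respecting order
        order.append(node)          # a real runner executes the agent here
    return order

order = run_dag(deps)
assert order.index("deploy") > order.index("integration")
print(order)
```

A production scheduler would use `TopologicalSorter`'s `prepare()`/`get_ready()`/`done()` protocol instead of `static_order()`, so that independent nodes (here `lint`, `typecheck`, `migrate`) actually run in parallel.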
5. Star

           S1
           |
    S4 ── HUB ── S2
           |
           S3
A central hub agent coordinates all communication. Spoke agents never talk to each other directly — everything routes through the hub.
When to use: Centralized coordination where you need one agent to maintain global state — a customer service router dispatching to specialist agents, or a planning agent that tracks progress across multiple workstreams.
Trade-off: Easy to monitor and control since all traffic flows through one point. But the hub is a bottleneck. If the coordinator agent is slow or hits rate limits, the entire system stalls.
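The hub is a router plus global state. A sketch of the customer-service example with naive keyword routing (a real hub would use an LLM classifier; all names are hypothetical):

```python
# Stub spoke agents: specialists that never talk to each other.
def billing_agent(msg):   return f"billing handled: {msg}"
def shipping_agent(msg):  return f"shipping handled: {msg}"
def fallback_agent(msg):  return f"escalated: {msg}"

SPOKES = {"billing": billing_agent, "shipping": shipping_agent}

def hub(msg: str) -> str:
    # All traffic routes through this one function -- easy to monitor,
    # but also the bottleneck the trade-off above warns about.
    for keyword, agent in SPOKES.items():
        if keyword in msg.lower():
            return agent(msg)
    return fallback_agent(msg)

print(hub("Where is my shipping update?"))
```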
6. Grid

    A1 ── A2 ── A3
     |     |     |
    B1 ── B2 ── B3
     |     |     |
    C1 ── C2 ── C3
Agents are arranged in a matrix with structured communication along rows and columns. Each agent can communicate with its direct neighbors.
When to use: Structured team simulations — rows represent departments (engineering, design, QA) and columns represent features. Each agent has a specific role at a specific intersection.
Trade-off: Excellent for problems that naturally map to two dimensions. Rigid structure means it's overkill for simpler problems, and the communication overhead grows with grid size.
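The defining constraint of a grid is the neighbor relation: an agent at `(row, col)` may only message up, down, left, and right. A minimal sketch of that adjacency rule:

```python
def neighbors(r: int, c: int, rows: int, cols: int) -> list[tuple[int, int]]:
    # Direct row/column neighbors only -- no diagonals, no long-range links.
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(i, j) for i, j in candidates if 0 <= i < rows and 0 <= j < cols]

print(neighbors(1, 1, 3, 3))  # center agent (B2): four neighbors
print(neighbors(0, 0, 3, 3))  # corner agent (A1): only two
```

Edge and corner agents have fewer links, which is worth remembering when you assign roles: the corner positions are the most isolated in the topology.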
Collaborative Topologies
These patterns prioritize interaction quality over structural simplicity. Agents actively engage with each other's outputs.
7. Debate

    PRO ◀──────▶ CON
       \        /
        ▼      ▼
         JUDGE
Two or more agents argue opposing positions. A judge agent evaluates the arguments and renders a decision.
When to use: High-stakes decisions where you need to stress-test reasoning — architecture decisions, risk assessments, legal analysis. Any situation where a single agent's confidence might mask a blind spot.
Example: An architecture decision record where one agent argues for microservices, another argues for a monolith, and a judge evaluates both against concrete constraints — team size, deployment frequency, latency budget. The judge doesn't just pick a winner; it synthesizes a recommendation with trade-offs acknowledged.
Trade-off: Produces higher-quality decisions by exposing weak arguments. Expensive in tokens — you're running 3+ agents on the same problem. Not worth it for routine tasks.
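Structurally, debate is two (or more) advocate calls followed by a judge call. A sketch with stub advocates and a deliberately crude placeholder judge (it just picks the more detailed argument; a real judge is its own LLM call scoring against the constraints):

```python
# Stub advocates: in practice, each is an LLM prompted to argue one side.
def pro(question):  return f"FOR {question}: independent scaling, team autonomy"
def con(question):  return f"AGAINST {question}: operational overhead, latency"

def judge(arguments: list[str]) -> str:
    # Placeholder heuristic only -- a real judge evaluates arguments
    # against concrete constraints, not length.
    return max(arguments, key=len)

q = "microservices"
verdict = judge([pro(q), con(q)])
print(verdict)
```

The token cost is visible in the structure: three agent calls minimum for one decision, which is why the pattern is reserved for decisions worth that multiple.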
8. Mesh

    A ◀──▶ B
    |\    /|
    |  X   |
    |/    \|
    C ◀──▶ D
Every agent can communicate with every other agent. No hierarchy, no fixed paths.
When to use: Collaborative problem-solving where agents need to negotiate, share partial results, and build on each other's work — brainstorming sessions, collaborative writing, complex debugging where each agent sees a different part of the system.
Trade-off: Maximum flexibility and emergent collaboration. But message volume grows quadratically with agent count. Without careful protocol design, mesh systems devolve into noise. Works well with 3-5 agents; breaks down beyond that. This is the topology from my opening story — powerful, but easy to misuse.
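A sketch of a mesh with a message budget, the guard rail missing from the opening story. Every processed message is broadcast to all peers, and the run halts when the budget is spent regardless of queue depth (agents are stubbed as labels; a real system would generate each `reply` with an LLM call):

```python
from collections import deque

def run_mesh(agents, seed_msg, budget=20):
    # Every agent starts with the seed message.
    queue = deque([(a, seed_msg) for a in agents])
    sent = 0
    while queue and sent < budget:
        agent, msg = queue.popleft()
        reply = f"{agent}:{msg[:10]}"  # stub for the agent's response
        sent += 1
        # Full-mesh broadcast: reply goes to every other agent --
        # this is the quadratic growth the trade-off warns about.
        for peer in agents:
            if peer != agent and sent < budget:
                queue.append((peer, reply))
    return sent

print(run_mesh(["A", "B", "C", "D"], "findings v1", budget=20))  # → 20
```

Without the `budget` check, this loop never terminates: each processed message enqueues three more. That is the $400 afternoon in six lines.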
9. Circular (Round-Robin)

    A ──▶ B
    ▲     |
    |     ▼
    D ◀── C
Agents pass work around a ring, each refining or extending the previous agent's output. Multiple rounds.
When to use: Iterative refinement — draft, review, revise, polish. Translation chains where each pass improves quality. Red team / blue team cycles where attack and defense alternate.
Example: A writing pipeline where Agent A drafts, Agent B critiques for logical gaps, Agent C rewrites based on the critique, and Agent D fact-checks. The cycle repeats. Each round sharpens the output. After three rounds, the draft reads like it had a human editorial team.
Trade-off: Natural fit for refinement workflows. Quality improves with each round — up to a point. Diminishing returns kick in, and without a termination condition, the system loops forever. Always set a max-rounds limit.
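A sketch of the ring with the max-rounds limit built in. The four agents are stubs that tag the text so each pass is visible (all names hypothetical):

```python
# Stub agents: each appends a marker so we can see every pass.
def draft(text):      return text + " +draft"
def critique(text):   return text + " +critique"
def rewrite(text):    return text + " +rewrite"
def factcheck(text):  return text + " +factcheck"

def run_ring(seed, agents, max_rounds=3):
    text = seed
    # Hard termination condition -- without it the ring loops forever.
    for _ in range(max_rounds):
        for agent in agents:
            text = agent(text)
    return text

out = run_ring("topic", [draft, critique, rewrite, factcheck], max_rounds=2)
print(out.count("+factcheck"))  # → 2: one fact-check pass per round
```

A refinement: real systems often pair the max-rounds cap with an early-exit check (stop when a round produces no meaningful change) to avoid paying for the diminishing-returns rounds.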
10. Mixture-of-Agents

           ┌── A ──┐
           │       │
    IN ────┼── B ──┼──▶ SYNTHESIZER ──▶ OUT
           │       │
           └── C ──┘
Multiple models (or the same model with different temperatures/prompts) process the same input. A dedicated synthesizer agent combines them into a single response that's better than any individual output.
When to use: Best-of-N generation — multiple agents draft answers, and a synthesizer picks the best parts of each. Content moderation where GPT-4, Claude, and Gemini each classify a piece of content, and majority voting catches edge cases any single model misses.
Trade-off: Consistently outperforms single-agent and simple-parallel approaches on quality benchmarks. The synthesizer is doing the hard intellectual work, though, so its prompt and capability matter enormously. Also 3-4x the token cost of a single agent.
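A sketch of the moderation variant, with majority voting as the synthesis step. The three "models" are stubs returning canned labels; in practice each would be a different model or a different temperature/prompt on the same model:

```python
from collections import Counter

# Stub classifiers standing in for three different models.
def model_a(content):  return "safe"
def model_b(content):  return "safe"
def model_c(content):  return "unsafe"

def synthesize(votes: list[str]) -> str:
    # Majority vote; a synthesizer for generative tasks would instead
    # merge the best parts of each draft with another LLM call.
    return Counter(votes).most_common(1)[0][0]

content = "some user post"
votes = [m(content) for m in (model_a, model_b, model_c)]
print(synthesize(votes))  # → safe
```

The 3-4x cost multiple is visible here too: three full model calls plus the synthesis step for one output.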
Specialized Topologies
These patterns are purpose-built for specific domains.
11. Forest

        ROOT1          ROOT2
       /  |  \         /   \
      A   B   C       D     E
     / \      |
    F   G     H
Multiple independent trees running in parallel. Each tree has its own hierarchy. No cross-tree communication until results merge at the end.
When to use: Parallel hierarchies — multiple teams working on separate features simultaneously, multi-repository analysis where each repo gets its own agent tree, running the same analysis against multiple datasets.
Trade-off: Scales linearly with the number of trees. Each tree is isolated, which is great for independence but means you can't share discoveries across trees mid-execution. Use this when the sub-problems are truly independent.
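A sketch of the multi-repo case: each tree is its own small hierarchy, trees run concurrently, and nothing crosses tree boundaries until the final merge (`run_tree` and its workers are hypothetical stubs):

```python
from concurrent.futures import ThreadPoolExecutor

def run_tree(repo: str) -> dict:
    # One isolated tree: a root delegating to two stub workers,
    # merging locally. It only ever sees its own repo.
    findings = {"deps": f"{repo}: deps scanned", "style": f"{repo}: style checked"}
    return {"repo": repo, "findings": findings}

def run_forest(repos: list[str]) -> list[dict]:
    # Trees run in parallel; no cross-tree communication mid-run.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_tree, repos))

reports = run_forest(["repo-a", "repo-b"])
print([r["repo"] for r in reports])  # → ['repo-a', 'repo-b']
```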
12. Maker (Build-Test-Ship)

    ARCHITECT ──▶ BUILDER ──▶ TESTER ──▶ DEPLOYER
                     ▲          |
                     └── fix ───┘
A specialized engineering pipeline with feedback loops. The builder creates, the tester validates, and failures loop back to the builder for fixes. Only passing work advances.
When to use: Software engineering workflows — code generation with automated testing, infrastructure-as-code with validation, any creative process where output quality must be verified before moving forward.
Example: A code generation workflow where the Builder agent writes a function and the Tester agent runs the test suite. Tests fail. The Tester sends the failure output and stack trace back to the Builder with instructions to fix. Two iterations later, all tests pass and the code advances to deployment.
Trade-off: Built-in quality gates mean output is more reliable than a straight pipeline. The feedback loop adds latency and cost, but catches errors before they propagate. The key design decision is how many retry cycles to allow before escalating.
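A sketch of the feedback loop with that retry bound made explicit. Builder and tester are stubs (the stub builder "fixes" the code on any attempt that receives feedback, standing in for an LLM acting on the failure trace):

```python
def build(spec, feedback=None):
    # Stub builder: the attempt that receives feedback produces good code.
    return "good code" if feedback else "buggy code"

def run_tests(code):
    # Stub tester: returns (passed, failure_output).
    return (True, "") if code == "good code" else (False, "AssertionError: ...")

def maker(spec, max_retries=2):
    feedback = None
    for attempt in range(max_retries + 1):
        code = build(spec, feedback)
        passed, feedback = run_tests(code)
        if passed:
            return code, attempt  # quality gate: only passing work advances
    # The escalation path is the key design decision named above.
    raise RuntimeError("max retries exhausted -- escalate to a human")

code, attempts = maker("add()")
print(code, attempts)  # → good code 1
```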
Choosing the Right Topology
There's no universal best topology. The right choice depends on your constraints:
| Constraint | Best Topologies |
|---|---|
| Minimize latency | Parallel, Forest, DAG |
| Maximize quality | Debate, Mixture-of-Agents, Circular |
| Minimize cost | Sequential, Star |
| Handle complex dependencies | DAG, Grid |
| Need iterative refinement | Circular, Maker |
| Collaboration-heavy | Mesh, Debate |
Start with the simplest topology that fits your problem. Sequential until you need speed. Parallel until you need dependencies. DAG when Parallel gets tangled. Debate only when the decision is worth 3x the tokens. Mesh almost never — and when you do use it, set a message budget.
In practice, production systems compose topologies. A DAG might contain a Debate sub-graph at a critical decision node, with Parallel branches for independent analysis. The twelve patterns are composable primitives, not rigid choices.
One More Thing
All 12 topologies ship as composable primitives in Qualixar OS, with full execution semantics, retry policies, and observability built in. The formal definitions, message-passing protocols, and benchmark results are in the paper (arXiv:2604.06392).
