DEV Community

Hung____ for AWS Community Builders


Understanding Multi-Agent Patterns in Strands Agent: Graph, Swarm, and Workflow

Building complex, useful AI applications often requires more than a single agent. When you need multiple AI agents to collaborate, the question becomes: how should they work together?

Strands Agent offers three distinct orchestration patterns, each designed for different scenarios.

In this post, we'll explore the Graph, Swarm, and Workflow patterns through simple, practical examples using Amazon Bedrock and Amazon Nova.

The Three Patterns at a Glance

| Pattern  | Execution Flow               | Best For                                 |
|----------|------------------------------|------------------------------------------|
| Graph    | LLM decides routing          | Conditional branching & decision trees   |
| Swarm    | Agents hand off autonomously | Collaborative problem-solving            |
| Workflow | Pre-defined DAG              | Repeatable processes with parallel tasks |

The key difference? Who controls the flow:

  • Graph: Developer defines the map, LLM chooses the path
  • Swarm: Agents decide who to hand off to next
  • Workflow: System executes a fixed dependency graph

Pattern 1: Graph

When to use: The Graph pattern is ideal when you have a structured process with conditional branches based on inputs. Specifically, use Graph when:

  • You need intelligent routing where the LLM evaluates context and decides the best path forward
  • Your process has multiple possible paths but the correct path depends on input characteristics (complexity, urgency, user type, content category)
  • You want cycles and loops for retry logic, escalation paths, or iterative refinement when initial attempts fail
  • You need human-in-the-loop approval gates or decision points embedded in the flow
  • You want to maintain control over possible outcomes while allowing AI flexibility within those boundaries

Example Architecture - Pizza Ordering System

Order Taker
    ├─→ Simple Processor → Confirmer
    └─→ Custom Processor → Confirmer

The LLM at the Order Taker node decides whether to route to the simple or custom processor based on order complexity.

from strands import Agent, Graph
from strands.models import BedrockModel

nova_model = BedrockModel(
    model_id="amazon.nova-pro-v1:0",
    region_name="us-east-1"
)

# Define agents (simple_processor, custom_processor, and the
# take_order tool are defined elsewhere in the same way)
order_taker = Agent(
    model=nova_model,
    system_prompt=(
        "Route to 'simple_processor' for standard orders, "
        "'custom_processor' for special requests"
    ),
    tools=[take_order]
)

# Create graph
graph = Graph()
graph.add_agent("order_taker", order_taker)
graph.add_agent("simple_processor", simple_processor)
graph.add_agent("custom_processor", custom_processor)

# Define possible routing paths
graph.add_edge("order_taker", "simple_processor")
graph.add_edge("order_taker", "custom_processor")

result = graph("I want a large pepperoni pizza")

Real-World Use Cases

  • Customer support routing
  • Loan approval workflows with risk assessment branches
  • Multi-step troubleshooting systems
  • Insurance claim processing
  • Medical imaging analysis with specialist referral paths
  • Content moderation (auto-approve clearly safe content, flag borderline cases for review, block violations)

Key advantage: Combines structure with flexibility - you define the possible paths, but the AI makes intelligent routing decisions. This prevents unpredictable behavior while enabling sophisticated decision-making.
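To make "structure plus flexibility" concrete, here is a framework-free sketch of the idea in plain Python (no Strands imports; all names are illustrative): `route_order` stands in for the LLM's routing decision at the Order Taker node, while the developer-defined edge map bounds what it is allowed to choose.

```python
def route_order(order_text: str) -> str:
    """Stand-in for the LLM's routing decision at the order_taker node."""
    special = ["gluten-free", "half-and-half", "extra", "custom"]
    if any(word in order_text.lower() for word in special):
        return "custom_processor"
    return "simple_processor"

# The developer-defined map: node -> allowed next nodes
EDGES = {
    "order_taker": ["simple_processor", "custom_processor"],
    "simple_processor": ["confirmer"],
    "custom_processor": ["confirmer"],
}

def run_graph(order_text: str) -> list[str]:
    path = ["order_taker"]
    node = route_order(order_text)
    if node not in EDGES["order_taker"]:   # the map bounds the router's choice
        raise ValueError(f"illegal route: {node}")
    path.append(node)
    path.extend(EDGES[node])               # each processor has one outgoing edge
    return path
```

However the router decides, the system can only ever traverse paths the developer declared, which is exactly the property the Graph pattern trades on.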

Pattern 2: Swarm

When to use:

  • You have distinct specializations where each agent brings unique expertise to the problem
  • Tasks require iterative collaboration with multiple rounds of back-and-forth between agents
  • Agents need to self-organize and determine when their contribution is complete
  • The optimal sequence of work isn't always predictable upfront
  • The problem is too complex for a single agent but doesn't fit a fixed pipeline

Example Architecture - Blog Post Creation

Researcher → Writer → Editor

Each agent decides when its work is complete and hands off to the next agent.

from strands import Agent, Swarm

researcher = Agent(
    model=nova_model,
    system_prompt="Research the topic, then hand off to 'writer'",
    tools=[research_topic]
)

writer = Agent(
    model=nova_model,
    system_prompt="Write the draft, then hand off to 'editor'",
    tools=[write_draft]
)

# Create swarm
swarm = Swarm()
swarm.add_agent("researcher", researcher)
swarm.add_agent("writer", writer)
swarm.add_agent("editor", editor)
swarm.set_entry_agent("researcher")

result = swarm("Create a blog post about AI multi-agent systems")

Real-World Use Cases

  • Code review systems where security, performance, and style specialists each contribute feedback
  • Content creation pipelines
  • Legal document review and drafting
  • Multi-stage sales processes

Key advantage: Natural collaboration that mimics human teams. Agents autonomously determine when to hand off based on task completion. The system can handle unexpected complexities as agents self-coordinate.
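The handoff loop at the heart of a swarm can be sketched without any framework. In this illustrative plain-Python version (names are not the Strands internals), each "agent" returns the name of the agent it hands off to, or `None` when it judges its task complete:

```python
def researcher(task, notes):
    notes.append(f"facts about: {task}")
    return "writer"                  # hand off once research is done

def writer(task, notes):
    notes.append("draft based on " + notes[0])
    return "editor"

def editor(task, notes):
    notes.append("polished draft")
    return None                      # no handoff: the swarm is done

AGENTS = {"researcher": researcher, "writer": writer, "editor": editor}

def run_swarm(task, entry="researcher", max_handoffs=10):
    notes, current = [], entry
    for _ in range(max_handoffs):    # guard against endless handoff loops
        current = AGENTS[current](task, notes)
        if current is None:
            return notes
    raise RuntimeError("handoff limit reached")
```

Note the handoff cap: because agents self-organize, a production swarm needs a limit on rounds so two agents cannot ping-pong forever.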

Pattern 3: Workflow

When to use: The Workflow pattern is perfect for repeatable processes with clear dependencies. Deploy Workflow when:

  • You have a well-defined, repeatable process that doesn't change between executions
  • You need parallel execution to maximize efficiency and reduce total runtime
  • Process steps have clear inputs and outputs that flow between tasks
  • You want predictable, deterministic behavior every single time
  • You need audit trails and visibility into each step's execution
  • Failure handling requires retry logic for specific tasks without restarting everything

Example Architecture - Email Campaign Pipeline

Load Data → Segment Customers → ┌─→ VIP Emails  ──┐
                                ├─→ Regular Emails├─→ Schedule Campaign
                                └─→ New Emails ───┘
                                    (parallel)

The workflow executes tasks based on a dependency graph (DAG), running independent tasks in parallel.

from strands import Agent, Workflow

# Create workflow (the *_agent objects are defined as in the
# earlier examples)
workflow = Workflow()

# Add tasks
workflow.add_task("load", load_agent)
workflow.add_task("segment", segment_agent)
workflow.add_task("vip_email", vip_email_agent)
workflow.add_task("regular_email", regular_email_agent)
workflow.add_task("new_email", new_email_agent)
workflow.add_task("schedule", schedule_agent)

# Define dependencies (the first argument depends on the second)
workflow.add_dependency("segment", "load")
workflow.add_dependency("vip_email", "segment")
workflow.add_dependency("regular_email", "segment")
workflow.add_dependency("new_email", "segment")
workflow.add_dependency("schedule", "vip_email")
workflow.add_dependency("schedule", "regular_email")
workflow.add_dependency("schedule", "new_email")

result = workflow("Create personalized email campaign")

Real-World Use Cases

  • Data ETL pipelines
  • Batch processing jobs
  • Employee onboarding automation
  • Report generation systems

Key advantage: Deterministic execution with automatic parallelization. Perfect for predictable, repeatable processes where efficiency and reliability are most important.
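The scheduling idea behind this can be sketched in plain Python (no Strands; task names follow the email campaign example): group the DAG into levels where every task's dependencies are already satisfied, so all tasks within a level could run in parallel.

```python
def plan_levels(deps):
    """deps maps task -> set of tasks it depends on; returns parallel levels."""
    remaining = {t: set(d) for t, d in deps.items()}
    levels = []
    while remaining:
        # Everything whose dependencies are all satisfied is ready now
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cycle detected in dependency graph")
        levels.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)   # mark this level's tasks as done
    return levels

# The email campaign pipeline from the diagram above
deps = {
    "load": set(),
    "segment": {"load"},
    "vip_email": {"segment"},
    "regular_email": {"segment"},
    "new_email": {"segment"},
    "schedule": {"vip_email", "regular_email", "new_email"},
}
```

For this pipeline the planner yields four levels, with the three email tasks sharing one level: that level is where a workflow engine reclaims runtime through parallel execution.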

Pattern Combinations

These patterns aren't mutually exclusive. You can:

  • Use a Workflow as a tool within a Graph node
  • Have a Swarm agent invoke a Workflow for a specific task
  • Embed a Graph within a larger Workflow pipeline

Shared State Across Multi-Agent Patterns

Both Graph and Swarm patterns support passing shared state to all agents through the invocation_state parameter. This enables sharing context and configuration across agents without exposing it to the LLM.

How Shared State Works

The invocation_state is automatically propagated to:

  • All agents in the pattern via their **kwargs
  • Tools via ToolContext when using @tool(context=True)
  • Tool-related hooks (BeforeToolCallEvent, AfterToolCallEvent)

Example Usage

from strands import Agent, Graph, Swarm, tool, ToolContext
from strands.models import BedrockModel

nova_model = BedrockModel(
    model_id="amazon.nova-pro-v1:0",
    region_name="us-east-1"
)

# Define shared state with configuration and context
# (db_connection_object is an existing database handle)
shared_state = {
    "user_id": "user123",
    "session_id": "sess456",
    "debug_mode": True,
    "api_key": "secret_key_789",
    "database_connection": db_connection_object
}

# Use with Graph pattern
graph = Graph()
graph.add_agent("analyzer", analyzer_agent)
graph.add_agent("processor", processor_agent)

result = graph(
    "Analyze customer data",
    invocation_state=shared_state
)

# Use with Swarm pattern (same shared_state)
swarm = Swarm()
swarm.add_agent("researcher", researcher_agent)
swarm.add_agent("writer", writer_agent)
swarm.set_entry_agent("researcher")

result = swarm(
    "Create customer report",
    invocation_state=shared_state
)

Accessing Shared State in Tools

@tool(context=True)
def query_customer_data(query: str, tool_context: ToolContext) -> str:
    """Query customer database using shared configuration."""
    # Access invocation_state from tool context
    user_id = tool_context.invocation_state.get("user_id")
    debug_mode = tool_context.invocation_state.get("debug_mode", False)
    db_conn = tool_context.invocation_state.get("database_connection")

    if debug_mode:
        print(f"Querying for user: {user_id}")

    # Use shared context for personalized queries
    results = db_conn.execute(query, user_id=user_id)
    return results

@tool(context=True)
def send_notification(message: str, tool_context: ToolContext) -> str:
    """Send notification using shared API key."""
    api_key = tool_context.invocation_state.get("api_key")
    session_id = tool_context.invocation_state.get("session_id")

    # Use API key from shared state
    notification_service.send(
        message=message,
        api_key=api_key,
        session=session_id
    )
    return "Notification sent successfully"

Important Distinctions

Shared State (invocation_state):

  • Configuration and objects passed behind the scenes
  • Not visible to the LLM in prompts
  • Used for: API keys, database connections, user context, debug flags

Pattern-Specific Data Flow:

  • Data that the LLM should reason about
  • Visible in conversation context
  • Graph: Explicit state dictionary passed between agents
  • Swarm: Shared conversation history and context from handoffs

Best Practice: Use invocation_state for context and configuration that shouldn't appear in prompts, while using each pattern's specific data flow mechanisms for data the LLM should reason about.
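One way to picture the distinction is a framework-free sketch (illustrative names, not the Strands internals): configuration travels out-of-band and must never end up in the prompt, while conversational state is exactly what the prompt is built from.

```python
def build_prompt(task, conversation):
    # Only data the LLM should reason about goes into the prompt
    return "\n".join(conversation + [f"Task: {task}"])

def run_agent(task, conversation, invocation_state):
    prompt = build_prompt(task, conversation)
    # Tools would read configuration out-of-band, as with ToolContext
    secret = invocation_state.get("api_key", "")
    if secret and secret in prompt:     # secrets must stay out of prompts
        raise RuntimeError("configuration leaked into prompt")
    return prompt
```

The prompt sees the conversation and the task, never the API key; that separation is the whole point of keeping `invocation_state` out of the LLM's view.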

Key Takeaways

  1. Graph = Structured routing with AI-driven decisions
  2. Swarm = Autonomous collaboration between specialists
  3. Workflow = Fixed dependencies with parallel execution

Choose based on:

  • How predictable your process is (Workflow > Graph > Swarm)
  • How much autonomy you want to give the AI (Swarm > Graph > Workflow)
  • Whether you need parallel execution (Workflow is best; Swarm and Graph run largely sequentially)
  • Start simple and add complexity only as needed
  • How easy each pattern will be to maintain and debug
  • Performance requirements such as latency and cost per run

The right pattern isn't about which is "best" in absolute terms; it's about matching your specific requirements. Many systems will use all three patterns in different parts of their architecture.
