DEV Community

Jason


A2A Protocol in Action: Four Real-World Collaboration Scenarios

Introduction: Theory Is Not Enough

In our previous four articles (s01-s04), we've covered the A2A protocol fundamentals:

  • s01 explained why agents need communication—a single agent can't handle complex tasks alone, so collaboration is the way forward
  • s02 covered the message type system—task_delegate, status_sync, resource_request, info_request, each with its own purpose
  • s03 explained task delegation mechanisms—how to split large tasks into subtasks and distribute them across agents
  • s04 covered collaborative sessions and session management—how to organize multi-turn conversations and track them

But knowing theory doesn't mean you can use it. Reading a swimming manual doesn't guarantee you can swim—the real learning happens in the water.

This article (s05) is that time in the water. We won't rehash protocol details; instead, we'll walk through four real use cases, each with substantial code examples you can adapt for production.


Scenario 1: Research Task Decomposition

Background: The marketing team needs a comprehensive competitive analysis report. The Research Director receives the task and needs to coordinate three specialized researchers (Market Researcher, Technical Analyst, Financial Analyst) to work in parallel.

Step 1: Task Analysis and Decomposition

// Research Director analyzes the task and creates a decomposition plan
interface TaskDecomposition {
  mainTaskId: string;
  subtasks: SubTaskPlan[];
  dependencies: DependencyGraph;
  resultAggregator: string; // Agent ID responsible for aggregating results
}

const decomposition: TaskDecomposition = {
  mainTaskId: 'research_competitive_2024Q1',
  subtasks: [
    {
      id: 'sub_001',
      title: 'Market Research - Competitor Landscape',
      agentType: 'market-researcher',
      description: 'Research top 5 competitors: product offerings, market share, user base, growth trajectory',
      priority: 'high',
      estimatedDuration: 30, // minutes
      outputFormat: 'structured_report'
    },
    {
      id: 'sub_002',
      title: 'Technical Analysis - Technology Stack',
      agentType: 'technical-analyst',
      description: "Analyze competitors' technical stacks: architecture, scalability, tech innovation",
      priority: 'high',
      estimatedDuration: 25,
      outputFormat: 'technical_report'
    },
    {
      id: 'sub_003',
      title: 'Financial Analysis - Business Model',
      agentType: 'financial-analyst',
      description: "Research competitors' financials: revenue, funding, unit economics, pricing strategy",
      priority: 'medium',
      estimatedDuration: 35,
      outputFormat: 'financial_report'
    }
  ],
  dependencies: {
    'sub_001': [],
    'sub_002': [],
    'sub_003': [],
    // Results aggregation starts only after all three complete
    'aggregation': ['sub_001', 'sub_002', 'sub_003']
  },
  resultAggregator: 'research-director'
};
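The dependencies map above is enough to drive scheduling on its own: a task is ready to run once every prerequisite has completed. Here is a minimal sketch of that readiness check (the `DependencyGraph` shape is assumed to be a plain id-to-prerequisites map, matching the object literal above):

```typescript
// A dependency graph maps each task id to the ids it waits on.
type DependencyGraph = Record<string, string[]>;

// A task is ready when it hasn't completed yet and all of its
// prerequisites are in the completed set.
function readyTasks(graph: DependencyGraph, completed: Set<string>): string[] {
  return Object.entries(graph)
    .filter(([id]) => !completed.has(id))
    .filter(([, deps]) => deps.every(d => completed.has(d)))
    .map(([id]) => id);
}

const graph: DependencyGraph = {
  sub_001: [],
  sub_002: [],
  sub_003: [],
  aggregation: ['sub_001', 'sub_002', 'sub_003']
};

// Initially the three independent subtasks are ready; aggregation is not.
readyTasks(graph, new Set()); // sub_001, sub_002, sub_003
```

Calling this after each completion event tells the director which tasks to dispatch next; with the three empty dependency lists, all research subtasks start immediately, and 'aggregation' only becomes ready at the end.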

Step 2: Subtask Delegation via A2A

// Research Director delegates tasks to three specialized agents
async function delegateResearchTasks(decomposition: TaskDecomposition) {
  const agentIds = await discoverAgentsByType([
    'market-researcher',
    'technical-analyst',
    'financial-analyst'
  ]);

  // Send delegation messages to all agents in parallel
  const delegationPromises = decomposition.subtasks.map(async (subtask, index) => {
    const agentId = agentIds[index];

    const message: A2AMessage = {
      messageId: generateMessageId(),
      type: 'task_delegate',
      from: 'research-director',
      to: agentId,
      timestamp: new Date().toISOString(),
      payload: {
        taskId: subtask.id,
        title: subtask.title,
        description: subtask.description,
        priority: subtask.priority,
        deadline: calculateDeadline(subtask.estimatedDuration),
        outputFormat: subtask.outputFormat,
        callbackChannel: 'research-director' // Where to send results
      },
      headers: {
        correlationId: decomposition.mainTaskId,
        traceId: generateTraceId()
      }
    };

    await agent_send_message({
      agent_id: agentId,
      message: JSON.stringify(message),
      wait_for_reply: false
    });

    console.log(`[Delegated] ${subtask.id} → ${agentId}`);
    return subtask.id;
  });

  const delegatedIds = await Promise.all(delegationPromises);
  console.log(`All ${decomposition.subtasks.length} tasks delegated`);

  // Results arrive asynchronously via the callback channel (see Step 3)
  return delegatedIds;
}
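The delegation code calls `calculateDeadline`, which the article never defines. A plausible implementation (an assumption, not the author's) converts the estimated duration in minutes into an absolute ISO-8601 timestamp, with a small safety buffer:

```typescript
// Hypothetical helper: turn an estimated duration (minutes) into an
// absolute ISO-8601 deadline, padded with a safety buffer.
function calculateDeadline(estimatedMinutes: number, bufferMinutes = 10): string {
  return new Date(Date.now() + (estimatedMinutes + bufferMinutes) * 60_000).toISOString();
}
```

An absolute timestamp (rather than a relative duration) keeps the deadline unambiguous even if the message is delayed in transit.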

Step 3: Parallel Execution and Status Monitoring

// Monitor execution progress and collect results
class ResearchProgressMonitor {
  private results: Map<string, any> = new Map();
  private status: Map<string, 'pending' | 'in_progress' | 'completed' | 'failed'> = new Map();

  async start() {
    // Subscribe to status updates from all agents
    for (const subtask of decomposition.subtasks) {
      this.status.set(subtask.id, 'pending');

      // Simulate status polling (in production, use event-based notifications)
      this.pollStatus(subtask.id, 10000); // Check every 10 seconds
    }
  }

  async pollStatus(taskId: string, intervalMs: number) {
    const timer = setInterval(async () => {
      const status = await queryAgentStatus(taskId);
      this.status.set(taskId, status.state);

      if (status.state === 'completed') {
        clearInterval(timer); // Stop polling once the task is done
        this.results.set(taskId, status.result);
        this.onSubtaskComplete(taskId, status.result);
      }

      if (status.state === 'failed') {
        clearInterval(timer);
        this.onSubtaskFailed(taskId, status.error);
      }
    }, intervalMs);
  }

  onSubtaskComplete(taskId: string, result: any) {
    console.log(`[Complete] ${taskId} received result of ${JSON.stringify(result).length} chars`);

    // Check if all subtasks are done
    const allDone = Array.from(this.status.values())
      .every(s => s === 'completed' || s === 'failed');

    if (allDone) {
      this.aggregateResults();
    }
  }

  async aggregateResults() {
    const aggregated = {
      timestamp: new Date().toISOString(),
      mainTaskId: decomposition.mainTaskId,
      results: Object.fromEntries(this.results),
      summary: this.generateSummary()
    };

    // Send aggregated report to user
    await notify_user({
      title: 'Competitive Analysis Report Ready',
      body: JSON.stringify(aggregated, null, 2)
    });
  }
}

Step 4: Results Aggregation and Report Generation

// Aggregate all research results into a comprehensive report
async function aggregateResearchResults(results: Map<string, any>) {
  const marketReport = results.get('sub_001');
  const technicalReport = results.get('sub_002');
  const financialReport = results.get('sub_003');

  const finalReport = {
    title: 'Competitive Analysis Report - Q1 2024',
    executiveSummary: generateExecutiveSummary(marketReport, technicalReport, financialReport),
    marketAnalysis: marketReport.content,
    technicalAnalysis: technicalReport.content,
    financialAnalysis: financialReport.content,
    recommendations: generateRecommendations(marketReport, technicalReport, financialReport),
    appendices: {
      dataSources: collectDataSources([marketReport, technicalReport, financialReport]),
      methodology: 'Multi-agent parallel research with expert synthesis'
    }
  };

  return finalReport;
}

Key Patterns in This Scenario:

  1. Task decomposition: Large tasks split into parallel subtasks
  2. Agent discovery: Find right agents by type/capability
  3. Parallel delegation: Send tasks to multiple agents simultaneously
  4. Status monitoring: Track progress of all parallel tasks
  5. Result aggregation: Combine outputs from different agents
  6. Deadline management: Set reasonable time limits for each subtask

Scenario 2: Code Review Pipeline

Background: A pull request with 15 files was submitted. We need to run three types of reviews (code quality, security, performance) in parallel, then aggregate results into a final review report.

Step 1: PR Analysis and Review Task Planning

interface ReviewTask {
  agentId: string;
  files: PRFile[];
  focus: string;  // e.g., 'code-quality', 'security', 'performance'
}

interface ReviewerConfig {
  name: string;
  focusPatterns: RegExp[];
  priority: number;
}

// Three types of reviewers
const reviewerConfigs: ReviewerConfig[] = [
  {
    name: 'code-quality-reviewer',
    focusPatterns: [/\.ts$/, /\.js$/, /\.tsx$/],
    priority: 1
  },
  {
    name: 'security-reviewer',
    focusPatterns: [/auth/, /security/, /permission/],
    priority: 2
  },
  {
    name: 'performance-reviewer',
    focusPatterns: [/database/, /api/, /cache/, /query/],
    priority: 3
  }
];

// Analyze PR files and distribute to appropriate reviewers
function prepareReviewTasks(files: PRFile[], configs: typeof reviewerConfigs): ReviewTask[] {
  const tasks: ReviewTask[] = [];

  for (const config of configs) {
    const matchedFiles = files.filter(f => 
      config.focusPatterns.some(pattern => pattern.test(f.path))
    );

    if (matchedFiles.length > 0) {
      tasks.push({
        agentId: '', // Will be resolved by agent registry
        files: matchedFiles,
        focus: config.name.replace('-reviewer', '').replace('-', ' ').toUpperCase()
      });
    }
  }

  return tasks;
}

// Example: 15 files in PR
const prFiles: PRFile[] = [
  { path: 'src/auth/login.ts', additions: 150, deletions: 20 },
  { path: 'src/auth/permissions.ts', additions: 80, deletions: 10 },
  { path: 'src/api/users.ts', additions: 200, deletions: 30 },
  { path: 'src/database/query.ts', additions: 120, deletions: 15 },
  { path: 'src/cache/redis.ts', additions: 60, deletions: 5 },
  // ... more files
];

const reviewTasks = prepareReviewTasks(prFiles, reviewerConfigs);
// Result: code quality → 12 files, security → 3 files, performance → 5 files

Step 2: Concurrent Review Execution

// Run all three reviews concurrently
async function runConcurrentReviews(reviewTasks: ReviewTask[]) {
  // Find available agents for each review type
  const agents = await Promise.all(
    reviewTasks.map(task => discoverAgentByFocus(task.focus))
  );

  // Create review delegation messages
  const reviewMessages = reviewTasks.map((task, i) => ({
    to: agents[i],
    message: {
      type: 'task_delegate',
      payload: {
        taskId: `review_${task.focus.toLowerCase()}_${Date.now()}`,
        title: `${task.focus} Code Review`,
        files: task.files.map(f => f.path),
        reviewCriteria: getReviewCriteria(task.focus),
        outputFormat: 'structured_issues'
      }
    }
  }));

  // Send all messages in parallel
  const results = await Promise.allSettled(
    reviewMessages.map(({ to, message }) => 
      agent_send_message({ agent_id: to, message: JSON.stringify(message) })
    )
  );

  // Handle any failures
  results.forEach((result, i) => {
    if (result.status === 'rejected') {
      console.error(`Review ${i} failed:`, result.reason);
    }
  });

  return results;
}
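`getReviewCriteria` is another helper the snippet assumes. A simple lookup keyed by the focus string that `prepareReviewTasks` produces would suffice (the criteria themselves are illustrative, not from the original article):

```typescript
// Hypothetical criteria table: each review focus maps to the checks
// that reviewer should apply. Keys match prepareReviewTasks' output
// ('code-quality-reviewer' → 'CODE QUALITY', etc.).
const REVIEW_CRITERIA: Record<string, string[]> = {
  'CODE QUALITY': ['naming and readability', 'duplication', 'test coverage'],
  'SECURITY': ['input validation', 'authn/authz checks', 'secret handling'],
  'PERFORMANCE': ['query efficiency', 'caching strategy', 'payload size']
};

function getReviewCriteria(focus: string): string[] {
  return REVIEW_CRITERIA[focus] ?? ['general best practices'];
}
```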

Step 3: Aggregating Reviews into Final Report

// Aggregate review results from all three reviewers
interface ReviewResult {
  focus: string;
  issues: Issue[];
  score: number; // 0-10
  recommendations: string[];
}

interface AggregatedReview {
  prId: string;
  overallScore: number;
  criticalIssues: Issue[];
  warnings: Issue[];
  suggestions: Issue[];
  reviewDetails: ReviewResult[];
  consensus: string; // 'approve' | 'request_changes' | 'reject'
}

function aggregateReviewResults(results: ReviewResult[]): AggregatedReview {
  // Group issues by severity
  const allIssues = results.flatMap(r => r.issues);
  const criticalIssues = allIssues.filter(i => i.severity === 'critical');
  const warnings = allIssues.filter(i => i.severity === 'warning');
  const suggestions = allIssues.filter(i => i.severity === 'suggestion');

  // Calculate overall score (simple average of the reviewers' scores)
  const overallScore = results.reduce((sum, r) => sum + r.score, 0) / results.length;

  // Determine consensus
  let consensus: 'approve' | 'request_changes' | 'reject';
  if (criticalIssues.length > 0) {
    consensus = 'reject';
  } else if (warnings.length > 3) {
    consensus = 'request_changes';
  } else {
    consensus = 'approve';
  }

  return {
    prId: 'PR-1234',
    overallScore,
    criticalIssues,
    warnings,
    suggestions,
    reviewDetails: results,
    consensus
  };
}
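The aggregation above uses issue severity only for the consensus decision, while the score is a plain average. If you want severity to shape the score itself, one option (a sketch, not part of the original pipeline) is to deduct severity-weighted penalties from a perfect 10:

```typescript
type Severity = 'critical' | 'warning' | 'suggestion';

// Hypothetical penalty-based scoring: each issue deducts points
// according to its severity, clamped to the 0-10 range.
const PENALTIES: Record<Severity, number> = { critical: 3, warning: 1, suggestion: 0.25 };

function severityWeightedScore(issues: { severity: Severity }[]): number {
  const penalty = issues.reduce((sum, i) => sum + PENALTIES[i.severity], 0);
  return Math.max(0, 10 - penalty);
}

severityWeightedScore([
  { severity: 'critical' },
  { severity: 'warning' },
  { severity: 'suggestion' }
]); // 10 - 3 - 1 - 0.25 = 5.75
```

Tuning the penalty table is a policy decision; the clamp just keeps pathological PRs from going negative.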

Key Patterns in This Scenario:

  1. Smart file routing: Distribute files to right reviewers based on content type
  2. Concurrent execution: Multiple reviewers work simultaneously
  3. Result merging: Combine outputs with severity-based grouping
  4. Severity-aware verdicts: Critical issues and warning counts drive the final decision
  5. Consensus logic: Automatic approve/reject based on issue severity

Scenario 3: Cross-Team Resource Coordination

Background: A large-scale data migration requires coordinated GPU resources across three teams (ML team, Data team, Backend team). A Resource Coordinator acts as the central hub to manage allocation, scheduling, and conflict resolution.

Step 1: Resource Discovery and Registration

// Resource Coordinator manages all shared resources
class ResourceCoordinator {
  private pools: Map<string, ResourcePool> = new Map();

  constructor() {
    // Initialize resource pools for each team
    this.pools.set('gpu', {
      type: 'gpu',
      totalCapacity: 24,  // 24 GPUs total across teams
      allocated: 0,
      reservations: [],
      allocations: new Map()
    });

    this.pools.set('memory', {
      type: 'memory',
      totalCapacity: 512,  // 512 GB total
      allocated: 0,
      reservations: [],
      allocations: new Map()
    });
  }

  // Teams register their available resources
  async registerResources(team: string, resources: Resource[]) {
    for (const resource of resources) {
      const pool = this.pools.get(resource.type);
      if (pool) {
        pool.totalCapacity += resource.capacity;
        console.log(`[Registered] ${team} added ${resource.capacity} ${resource.type}`);
      }
    }
  }

  // Look up a pool by resource type ('gpu', 'memory', ...)
  getPool(type: string): ResourcePool | undefined {
    return this.pools.get(type);
  }

  // Issue an opaque reservation token for a confirmed allocation
  generateToken(requestId: string): string {
    return `rsv_${requestId}_${Date.now().toString(36)}`;
  }

  // Show current resource status
  getResourceStatus(): ResourceStatus[] {
    return Array.from(this.pools.entries()).map(([type, pool]) => ({
      type,
      total: pool.totalCapacity,
      allocated: pool.allocated,
      available: pool.totalCapacity - pool.allocated,
      utilization: `${((pool.allocated / pool.totalCapacity) * 100).toFixed(1)}%`
    }));
  }
}
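The coordinator leans on `Resource`, `ResourcePool`, and `ResourceStatus` types that are never declared. Shapes consistent with how the class uses them would look like this (field names inferred from the code above, so treat them as assumptions):

```typescript
interface Resource {
  type: string;       // e.g. 'gpu' | 'memory'
  capacity: number;
}

interface Allocation {
  requestId: string;
  quantity: number;
  allocatedAt: string;  // ISO-8601
  expiresAt: string;    // ISO-8601
}

interface ResourcePool {
  type: string;
  totalCapacity: number;
  allocated: number;
  reservations: string[];
  allocations: Map<string, Allocation>;
}

interface ResourceStatus {
  type: string;
  total: number;
  allocated: number;
  available: number;
  utilization: string;  // e.g. '37.5%'
}
```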

Step 2: Resource Request and Allocation

interface ResourceRequest {
  requestId: string;
  teamId: string;
  resourceType: 'gpu' | 'memory';
  quantity: number;
  priority: 'critical' | 'high' | 'normal' | 'low';
  duration: number;  // Expected usage time in ms
  purpose: string;
  expiry: string;
  token?: string;  // For reservation confirmation
}

// Handle resource requests with priority-based allocation.
// The coordinator is a long-lived shared instance passed in by the caller;
// constructing a new one per request would discard all existing allocations.
async function handleResourceRequest(
  coordinator: ResourceCoordinator,
  request: ResourceRequest
): Promise<ResourceAllocation> {
  const pool = coordinator.getPool(request.resourceType);

  // Check if enough resources are available
  if (!pool || pool.allocated + request.quantity > pool.totalCapacity) {
    // Insufficient resources - implement queuing or rejection
    return {
      success: false,
      reason: 'insufficient_resources',
      queuePosition: await queueRequest(request)
    };
  }

  // Generate reservation token
  const token = coordinator.generateToken(request.requestId);

  // Allocate resources
  pool.allocated += request.quantity;
  pool.allocations.set(request.requestId, {
    requestId: request.requestId,
    quantity: request.quantity,
    allocatedAt: new Date().toISOString(),
    expiresAt: request.expiry
  });

  return {
    success: true,
    token,
    allocatedQuantity: request.quantity,
    expiresAt: request.expiry
  };
}
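Every allocation records an `expiresAt`, but nothing in the snippet ever reclaims expired capacity. A periodic sweep could release it back to the pool (a sketch; the minimal `Pool` shape here mirrors the coordinator's pools):

```typescript
interface Allocation { requestId: string; quantity: number; expiresAt: string; }
interface Pool { allocated: number; allocations: Map<string, Allocation>; }

// Release every allocation whose expiry has passed, returning the
// freed quantity to the pool. Returns the ids that were released.
function releaseExpired(pool: Pool, now: Date = new Date()): string[] {
  const released: string[] = [];
  for (const [id, alloc] of pool.allocations) {
    if (new Date(alloc.expiresAt) <= now) {
      pool.allocated -= alloc.quantity;
      pool.allocations.delete(id);
      released.push(id);
    }
  }
  return released;
}
```

Run on a timer (say, once a minute), this gives the TTL behavior described below: resources auto-release even if a team crashes before freeing them.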

Step 3: Cross-Team Coordination via A2A

// Resource Coordinator communicates with team agents
async function coordinateResources(requests: ResourceRequest[]) {
  const results: Map<string, ResourceAllocation> = new Map();

  // Process requests in priority order
  requests.sort((a, b) => {
    const priorityOrder = { critical: 0, high: 1, normal: 2, low: 3 };
    return priorityOrder[a.priority] - priorityOrder[b.priority];
  });

  for (const request of requests) {
    // Send request to resource coordinator agent
    const result = await agent_send_message({
      agent_id: 'resource-coordinator',
      message: JSON.stringify({
        type: 'resource_request',
        payload: request
      }),
      wait_for_reply: true
    });

    results.set(request.requestId, result);

    // Notify team of allocation result
    if (result.success) {
      await agent_send_message({
        agent_id: request.teamId,
        message: JSON.stringify({
          type: 'resource_allocated',
          payload: {
            token: result.token,
            quantity: result.allocatedQuantity,
            expiresAt: result.expiresAt
          }
        })
      });
    }
  }

  return results;
}

Key Patterns in This Scenario:

  1. Central registry: One place to track all shared resources
  2. Priority-based allocation: Critical tasks get resources first
  3. Reservation system: Tokens ensure allocated resources are claimed
  4. TTL management: Resources auto-release after timeout
  5. Cross-team communication: A2A messages coordinate actions

Scenario 4: Long-Running Workflow State Tracking

Background: A compliance audit workflow spans 5 stages over 72 hours. Multiple agents need to track state, handle failures, and provide real-time visibility to stakeholders.

Step 1: Define Workflow State Machine

type WorkflowStage = 
  | 'document_collection' 
  | 'evidence_gathering' 
  | 'analysis'
  | 'review'
  | 'report_generation';

interface WorkflowState {
  workflowId: string;
  currentStage: WorkflowStage;
  stageStatus: 'pending' | 'in_progress' | 'completed' | 'failed' | 'skipped';
  progress: number;  // 0-100
  startedAt: string;
  estimatedCompletion: string;
  stageDetails: StageDetail[];
  errors: WorkflowError[];
}

interface StageDetail {
  stage: WorkflowStage;
  status: string;
  startedAt: string;
  completedAt?: string;
  agentId: string;
  output?: any;
}
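A 72-hour workflow will outlive any single process, so `WorkflowState` has to live somewhere more durable than memory. A minimal file-backed sketch (assuming a Node.js runtime; a production system would use a database):

```typescript
import { writeFileSync, readFileSync, existsSync } from 'fs';

// Persist workflow state as JSON so a restarted orchestrator can resume
// from the recorded stage instead of starting over.
function saveState(path: string, state: object): void {
  writeFileSync(path, JSON.stringify(state, null, 2), 'utf8');
}

function loadState<T>(path: string): T | null {
  if (!existsSync(path)) return null;
  return JSON.parse(readFileSync(path, 'utf8')) as T;
}
```

On startup, the orchestrator checks `loadState` first and re-enters the saved `currentStage` if a state file exists.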

Step 2: Stage Progression via A2A Messages

// Orchestrator manages workflow progression
class WorkflowOrchestrator {
  private workflows: Map<string, WorkflowState> = new Map();

  async startWorkflow(workflowId: string) {
    const state: WorkflowState = {
      workflowId,
      currentStage: 'document_collection',
      stageStatus: 'in_progress',
      progress: 0,
      startedAt: new Date().toISOString(),
      estimatedCompletion: this.calculateETA(5),
      stageDetails: [],
      errors: []
    };

    this.workflows.set(workflowId, state);

    // Start first stage
    await this.executeStage(workflowId, 'document_collection');
  }

  async executeStage(workflowId: string, stage: WorkflowStage) {
    const state = this.workflows.get(workflowId);
    const stageAgent = this.getStageAgent(stage);

    // Notify agent to start stage
    await agent_send_message({
      agent_id: stageAgent,
      message: JSON.stringify({
        type: 'task_delegate',
        payload: {
          taskId: `${workflowId}_${stage}`,
          stage,
          workflowId,
          callback: 'workflow-orchestrator'
        }
      })
    });

    // Update state
    state.stageDetails.push({
      stage,
      status: 'in_progress',
      startedAt: new Date().toISOString(),
      agentId: stageAgent
    });

    await this.broadcastStateUpdate(workflowId);
  }

  async onStageComplete(workflowId: string, stage: WorkflowStage, output: any) {
    const state = this.workflows.get(workflowId);
    const stageDetail = state.stageDetails.find(s => s.stage === stage);

    if (stageDetail) {
      stageDetail.status = 'completed';
      stageDetail.completedAt = new Date().toISOString();
      stageDetail.output = output;
    }

    // Calculate progress
    const completedStages = state.stageDetails.filter(s => s.status === 'completed').length;
    state.progress = (completedStages / 5) * 100;

    // Move to next stage
    const nextStage = this.getNextStage(stage);
    if (nextStage) {
      state.currentStage = nextStage;
      await this.executeStage(workflowId, nextStage);
    } else {
      // Workflow complete
      state.stageStatus = 'completed';
      await this.notifyCompletion(workflowId);
    }

    await this.broadcastStateUpdate(workflowId);
  }
}
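The orchestrator assumes a `getNextStage` helper. With the five stages from Step 1 running strictly in order, a linear lookup is enough (sketch):

```typescript
type WorkflowStage =
  | 'document_collection'
  | 'evidence_gathering'
  | 'analysis'
  | 'review'
  | 'report_generation';

// The five stages run strictly in order; the last stage has no successor.
const STAGE_ORDER: WorkflowStage[] = [
  'document_collection',
  'evidence_gathering',
  'analysis',
  'review',
  'report_generation'
];

function getNextStage(stage: WorkflowStage): WorkflowStage | null {
  const i = STAGE_ORDER.indexOf(stage);
  return i >= 0 && i < STAGE_ORDER.length - 1 ? STAGE_ORDER[i + 1] : null;
}
```

Returning null for the last stage is what lets `onStageComplete` fall through to `notifyCompletion`.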

Step 3: Real-Time State Visualization

// Dashboard agent shows real-time workflow status
async function displayWorkflowDashboard(workflowId: string) {
  const state = await queryWorkflowState(workflowId);

  const dashboard = {
    workflowId,
    overall: {
      progress: `${state.progress.toFixed(1)}%`,
      status: state.stageStatus,
      startedAt: state.startedAt,
      estimatedCompletion: state.estimatedCompletion
    },
    stages: state.stageDetails.map(s => ({
      name: formatStageName(s.stage),
      status: s.status,
      duration: s.completedAt 
        ? calculateDuration(s.startedAt, s.completedAt)
        : 'in progress',
      agent: s.agentId
    })),
    errors: state.errors.map(e => ({
      stage: e.stage,
      message: e.message,
      timestamp: e.timestamp
    }))
  };

  // Send to dashboard display
  await updateDashboard(dashboard);
  return dashboard;
}

Key Patterns in This Scenario:

  1. State machine: Clear definition of workflow stages and transitions
  2. Persistent state: Persist workflow state (e.g. to a database) so it survives agent restarts; the orchestrator's in-memory Map is a simplification
  3. Progress tracking: Real-time progress calculation
  4. Stage callbacks: Agents notify orchestrator on completion
  5. Dashboard integration: Visual status updates for stakeholders

Key Takeaways

Through these four scenarios, you've learned:

  1. Task Decomposition and Delegation: How to coordinate multiple specialized agents
  2. Concurrency and Coordination: How to handle parallel execution and result aggregation
  3. Resource Management: How to implement intelligent resource allocation and scheduling
  4. State Tracking: How to achieve real-time state synchronization for long-running workflows

Most importantly, remember that in multi-agent systems, knowing what not to do is often as important as knowing what to do.

The power of the A2A protocol lies in composition. You can combine the patterns from today to build more complex systems:

  • Research Director + Multiple Specialized Agents + Resource Coordinator + Dashboard
  • Code Review Pipeline + Security Review + Resource Pool Management + Audit Logs

The key is: Start simple, evolve incrementally. Don't design a perfect architecture from the start—get it running first, then optimize.

Build powerful multi-agent systems!


This is the final article in the A2A Protocol Series (s05).
