João Pedro
Multi-Agent Local - Claude Code

A system of specialized agents that work as a chain to research, review, and correct content.

This project demonstrates how to use local agents in Claude Code to create automated workflows. Agents are defined in Markdown files and can be invoked via @agent-name or through the agents API.

What It Does

  • Research: fetch structured information from the web
  • Review: analyze content against clear criteria, highlight errors and suggestions
  • Correct: apply fixes to files based on review feedback

The agents work as a chain: research → review → correct. Each has a custom prompt, a defined tool set, and a structured output format.

Structure

```
.claude/agents/
├── pesquisa.md    # Web search, returns findings + sources
├── revisao.md     # Validates content, lists problems + suggestions
└── correcao.md    # Applies changes to files
```

Each file defines:

  • name: agent identifier
  • description: brief summary (shown in UI)
  • tools: available tools (WebSearch, Read, Grep, Edit, etc.)
  • model: Claude model to use (haiku, sonnet, opus)
  • Custom prompt in Markdown
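As a sketch, this frontmatter can be parsed with a few lines of Python. The file layout is the one described above; the parsing here is simplified and assumes flat `key: value` pairs:

```python
import re

def parse_agent(path: str) -> dict:
    """Split an agent .md file into frontmatter fields plus its prompt."""
    text = open(path, encoding="utf-8").read()
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if not match:
        raise ValueError(f"{path}: missing frontmatter")
    header, prompt = match.groups()
    meta = {}
    for line in header.splitlines():
        # Flat "key: value" pairs only; quoted values are unquoted.
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    meta["prompt"] = prompt.strip()
    return meta
```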

Quick Start

Example: Research Recipe and Refine

```
# Via Claude Code CLI or Web App, invoke the chain:
@agent-pesquisa How to make chocolate cake?
@agent-revisao Review what was researched
@agent-correcao Fix based on review feedback
```

Or via API (Python):

```python
from anthropic import Anthropic

client = Anthropic()

# Research
pesquisa_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/pesquisa.md").read(),
    messages=[{"role": "user", "content": "How to make chocolate cake?"}]
)

# Review result
revisao_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/revisao.md").read(),
    messages=[{"role": "user", "content": pesquisa_response.content[0].text}]
)

# Correct
correcao_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/correcao.md").read(),
    messages=[{"role": "user", "content": revisao_response.content[0].text}]
)
```
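The three calls above repeat the same pattern: each agent's `.md` file becomes the system prompt, and each output feeds the next agent. A minimal sketch of that orchestration (`call_model` is a hypothetical stand-in for the `client.messages.create` call):

```python
from typing import Callable

def run_chain(agent_files: list[str], task: str,
              call_model: Callable[[str, str], str]) -> str:
    """Run agents in sequence; call_model(system_prompt, user_text) -> reply."""
    result = task
    for agent_file in agent_files:
        # Each agent's Markdown file is its system prompt.
        system_prompt = open(f".claude/agents/{agent_file}", encoding="utf-8").read()
        result = call_model(system_prompt, result)
    return result
```

With a real client, `call_model` would wrap `client.messages.create` and the chain becomes `run_chain(["pesquisa.md", "revisao.md", "correcao.md"], "How to make chocolate cake?", call_model)`.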

Available Agents

1. Research (pesquisa.md)

What it does: Fetches structured information from the web

Tools: WebSearch

Input: question or topic

Expected output:

```
**Key findings:**
- [Data 1 with source]
- [Data 2 with source]

**Sources:**
- [Title](URL) - context

**Summary:** [synthesis 1-2 sentences]
```

Example:

```
@agent-pesquisa What is generative artificial intelligence?
```

2. Review (revisao.md)

What it does: Validates content against 6 quality criteria

Tools: Read, Grep

Input: content to review + context (type: code, docs, recipe)

Criteria:

  • [ ] Spelling/grammar
  • [ ] Internal logic
  • [ ] Clarity
  • [ ] Completeness
  • [ ] Formatting
  • [ ] Appropriate tone

Expected output:

```
**Errors found:**
- [Error] → context/line

**Improvement suggestions:**
- [Suggestion + reason]

**Priority:** [High/Medium/Low]
```

Example:

```
@agent-revisao Review this cake recipe about [CONTENT]
```

3. Correct (correcao.md)

What it does: Applies fixes to a file based on review feedback

Tools: Read, Grep, Edit

Input: review feedback + file path

Execution:

  1. Read file with Read
  2. Identify sections to change
  3. Apply Edit operations one at a time
  4. Validate before confirming

Output: "File corrected: [list of changes]"

Example:

```
@agent-correcao Fix /path/file.md based on feedback: [FEEDBACK]
```

Creating New Agents

Minimal structure

Create /home/user/projetos/agents/multi_agents/.claude/agents/your-agent.md:

```markdown
---
name: your-agent
description: "What this agent does in 1 line"
tools: Read, Grep, Edit, WebSearch
model: haiku
---

## Task
You are a [specialty] agent. Your function: [what it does].

## Criteria
- [ ] Criterion 1
- [ ] Criterion 2

## Response Format
**Section 1:** [structure]
**Section 2:** [structure]

No fluff. Direct.
```
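Writing the file by hand is enough, but the same scaffold can also be generated programmatically. A sketch, assuming the frontmatter structure above (`scaffold_agent` and `agents_dir` are my own names, not part of Claude Code):

```python
from pathlib import Path

def scaffold_agent(agents_dir: str, name: str, description: str,
                   tools: str, model: str, prompt: str) -> Path:
    """Write a minimal agent file: YAML frontmatter plus a Markdown prompt."""
    directory = Path(agents_dir)
    directory.mkdir(parents=True, exist_ok=True)
    body = (
        "---\n"
        f"name: {name}\n"
        f'description: "{description}"\n'
        f"tools: {tools}\n"
        f"model: {model}\n"
        "---\n\n"
        f"{prompt}\n"
    )
    path = directory / f"{name}.md"
    path.write_text(body, encoding="utf-8")
    return path
```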

Best practices

Tools:

  • Use haiku (faster/cheaper) for simple tasks
  • Use sonnet or opus for complex logic
  • Include only necessary tools

Prompt:

  • Start with clear function
  • Structure input/output
  • Add criteria/checklist if applicable
  • One responsibility per agent

Examples:

```markdown
---
name: translator
description: Translates content preserving tone
tools: Read
model: haiku
---
Translator agent. Task: translate [SOURCE-LANG] → [TARGET-LANG].

## Rules
- Preserve original tone
- Adapt cultural references
- Keep formatting

## Format
**Original text:** [content]
**Translation:** [result]
```

```markdown
---
name: test-generator
description: Generate test cases for code
tools: Read, Grep
model: sonnet
---
Test agent. Generate test cases for function/class.

## Checklist
- [ ] Happy path
- [ ] Edge cases
- [ ] Error handling
- [ ] Boundary values

## Output
Structured test code, no explanations.
```

Complete Example: Chocolate Cake

Step 1: Research

```
@agent-pesquisa How to make chocolate cake? Find practical recipes.
```

Result:

  • Basic recipe with ingredients/instructions
  • 3+ reliable sources
  • Prep time and tips

Step 2: Review

```
@agent-revisao Review this cake recipe. Criteria:
- Correct proportions
- Logical and clear steps
- Temperatures specified
- Warnings about critical ingredients
```

Result:

  • Identifies 10 issues:
    • Disproportionately high cocoa
    • Hot water without temperature
    • Ambiguous toothpick test
    • Missing cooling steps
    • etc.

Step 3: Correct

```
@agent-correcao Apply these fixes to the recipe:
1. Reduce cocoa from 3/4 to 1/2 cup
2. Specify water 60-70°C
3. Clarify toothpick test
4. Add cooling steps
5. Include warnings about room temp eggs
[...]
```

Result:

  • Refined, testable recipe
  • Proportions verified
  • All warnings included
  • Ready to use

Standard Workflow

```
Original Task
    ↓
@agent-pesquisa (if data needed) → Result 1
    ↓
@agent-revisao (validate result) → Problem list
    ↓
@agent-correcao (apply feedback) → Final result
    ↓
✓ Done
```

Customizable as needed.

Available Tools by Agent

| Tool | Function | Used by |
| --- | --- | --- |
| WebSearch | Search the web | pesquisa |
| Read | Read files | revisao, correcao |
| Grep | Search content | revisao, correcao |
| Edit | Edit file | correcao |
| Write | Create file | [custom] |
| Bash | Execute commands | [custom] |

Configuration

Agents reside in .claude/agents/*.md. Claude Code loads them automatically on startup.

To invoke:

  • CLI: @agent-name prompt here
  • Web: type @ in chat
  • IDE: same syntax

Tips

  1. Specific prompts: the more detailed, the better results
  2. One thing per agent: avoids confusion, reusable
  3. Structured outputs: use consistent format (bullets, tables, blocks)
  4. Validation: always review agent output before applying
  5. Iteration: the research → review → correct chain is the gold standard

Example with English Agents

All agents can be configured for English: just write the prompts and expected output formats in English.

```
@agent-pesquisa What is the capital of Australia?
@agent-revisao Review the response for accuracy and clarity
@agent-correcao Fix as needed
```

Next Steps

  • Create translator agent (Read → translate → Edit)
  • Create test agent (Read code → generate tests)
  • Create documentation agent (Read code → generate docs)
  • Implement automatic feedback loop (review → correct → re-review)
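The last item, an automatic feedback loop, can be sketched as a bounded review/correct cycle. Here `review` and `correct` are hypothetical callables standing in for the revisao and correcao agents:

```python
from typing import Callable

def feedback_loop(content: str,
                  review: Callable[[str], list[str]],
                  correct: Callable[[str, list[str]], str],
                  max_rounds: int = 3) -> str:
    """Review, correct, re-review until no problems remain or rounds run out."""
    for _ in range(max_rounds):
        problems = review(content)
        if not problems:
            break  # review passed, nothing left to fix
        content = correct(content, problems)
    return content
```

The `max_rounds` cap matters: two LLM agents disagreeing can otherwise cycle indefinitely.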
