A specialized agent system whose agents work in a chain to research, review, and correct content.
This project demonstrates how to use local agents in Claude Code to create automated workflows. Agents are defined in Markdown files and can be invoked via `@agent-name` or through the agents API.
What It Does
- Research: fetch structured information from the web
- Review: analyze content against clear criteria, highlight errors and suggestions
- Correct: apply fixes to files based on review feedback
The agents work in a chain: research → review → correct. Each has custom prompts, defined tools, and structured outputs.
Structure
```
.claude/agents/
├── pesquisa.md   # Web search, returns findings + sources
├── revisao.md    # Validates content, lists problems + suggestions
└── correcao.md   # Applies changes to files
```
Each file defines:

- `name`: agent identifier
- `description`: brief summary (shown in the UI)
- `tools`: available tools (WebSearch, Read, Grep, Edit, etc.)
- `model`: Claude model to use (haiku, sonnet, opus)
- A custom prompt in Markdown
Quick Start
Example: Research Recipe and Refine
```
# Via Claude Code CLI or Web App, invoke the chain:
@agent-pesquisa How to make chocolate cake?
@agent-revisao Review what was researched
@agent-correcao Fix based on review feedback
```
Or via API (Python):
```python
from anthropic import Anthropic

client = Anthropic()

# Research
pesquisa_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/pesquisa.md").read(),
    messages=[{"role": "user", "content": "How to make chocolate cake?"}],
)

# Review the research result
revisao_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/revisao.md").read(),
    messages=[{"role": "user", "content": pesquisa_response.content[0].text}],
)

# Correct based on the review
correcao_response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=2000,
    system=open(".claude/agents/correcao.md").read(),
    messages=[{"role": "user", "content": revisao_response.content[0].text}],
)
```
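The three calls share one pattern: load an agent's Markdown file as the system prompt, then send the previous step's output as the user message. A minimal sketch of a reusable wrapper (the names `run_agent`, `run_chain`, and the injectable `call` function are assumptions for illustration, not part of the project):

```python
from pathlib import Path

# Agent files live here, matching the project layout above.
AGENTS_DIR = Path(".claude/agents")

def run_agent(agent: str, prompt: str, call) -> str:
    """Run one agent: load its Markdown prompt as the system message.

    `call(system, prompt)` is any function that sends the request and
    returns the response text, e.g. a thin wrapper around
    client.messages.create (hypothetical injection point for testing).
    """
    system = (AGENTS_DIR / f"{agent}.md").read_text()
    return call(system, prompt)

def run_chain(question: str, call) -> str:
    """research -> review -> correct, feeding each output into the next agent."""
    findings = run_agent("pesquisa", question, call)
    review = run_agent("revisao", findings, call)
    return run_agent("correcao", review, call)
```

With the real SDK, `call` would wrap `client.messages.create` and return `response.content[0].text`; for testing, any `(system, prompt) -> str` function works.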
Available Agents
1. Research (pesquisa.md)
What it does: Fetches structured information from the web
Tools: WebSearch
Input: question or topic
Expected output:
**Key findings:**
- [Data 1 with source]
- [Data 2 with source]
**Sources:**
- [Title](URL) - context
**Summary:** [synthesis 1-2 sentences]
Example:
@agent-pesquisa What is generative artificial intelligence?
2. Review (revisao.md)
What it does: Validates content against 6 quality criteria
Tools: Read, Grep
Input: content to review + context (type: code, docs, recipe)
Criteria:
- [ ] Spelling/grammar
- [ ] Internal logic
- [ ] Clarity
- [ ] Completeness
- [ ] Formatting
- [ ] Appropriate tone
Expected output:
**Errors found:**
- [Error] → context/line
**Improvement suggestions:**
- [Suggestion + reason]
**Priority:** [High/Medium/Low]
Example:
@agent-revisao Review this cake recipe: [CONTENT]
3. Correct (correcao.md)
What it does: Applies fixes to a file based on review feedback
Tools: Read, Grep, Edit
Input: review feedback + file path
Execution:

- Read the file with `Read`
- Identify the sections to change
- Apply `Edit` one change at a time
- Validate before confirming
Output: "File corrected: [list of changes]"
Example:
@agent-correcao Fix /path/file.md based on feedback: [FEEDBACK]
Creating New Agents
Minimal structure
Create /home/user/projetos/agents/multi_agents/.claude/agents/your-agent.md:
```
---
name: your-agent
description: "What this agent does in 1 line"
tools: Read, Grep, Edit, WebSearch
model: haiku
---

## Task
You are a [specialty] agent. Your function: [what it does].

## Criteria
- [ ] Criterion 1
- [ ] Criterion 2

## Response Format
**Section 1:** [structure]
**Section 2:** [structure]
```
No fluff. Direct.
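A quick way to catch mistakes in a new agent file is to sanity-check its frontmatter before use. This is a hypothetical helper, not part of Claude Code: `check_agent_file`, `REQUIRED_FIELDS`, and `KNOWN_MODELS` are illustrative names, and the naive `---` split assumes the prompt body contains no `---` of its own.

```python
REQUIRED_FIELDS = {"name", "description", "tools", "model"}
KNOWN_MODELS = {"haiku", "sonnet", "opus"}

def check_agent_file(text: str) -> list[str]:
    """Return a list of problems with an agent definition (empty = looks valid).

    Expects the minimal structure above: a ----delimited frontmatter block
    of `key: value` lines, followed by the Markdown prompt.
    """
    problems = []
    parts = text.split("---")
    if len(parts) < 3:
        return ["missing '---' frontmatter block"]
    # Parse `key: value` lines from the frontmatter (naive, no YAML dependency).
    fields = {}
    for line in parts[1].strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    for field in REQUIRED_FIELDS - fields.keys():
        problems.append(f"missing field: {field}")
    if fields.get("model") and fields["model"] not in KNOWN_MODELS:
        problems.append(f"unknown model: {fields['model']}")
    if not parts[2].strip():
        problems.append("empty prompt body")
    return problems
```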
Best practices
Tools:
- Use `haiku` (faster/cheaper) for simple tasks
- Use `sonnet` or `opus` for complex logic
- Include only the necessary tools
Prompt:
- Start with clear function
- Structure input/output
- Add criteria/checklist if applicable
- One responsibility per agent
Examples:
```
---
name: translator
description: Translates content preserving tone
tools: Read
model: haiku
---

Translator agent. Task: translate [SOURCE-LANG] → [TARGET-LANG].

## Rules
- Preserve original tone
- Adapt cultural references
- Keep formatting

## Format
**Original text:** [content]
**Translation:** [result]
```

```
---
name: test-generator
description: Generate test cases for code
tools: Read, Grep
model: sonnet
---

Test agent. Generate test cases for a function/class.

## Checklist
- [ ] Happy path
- [ ] Edge cases
- [ ] Error handling
- [ ] Boundary values

## Output
Structured test code, no explanations.
```
Complete Example: Chocolate Cake
Step 1: Research
@agent-pesquisa How to make chocolate cake? Find practical recipes.
Result:
- Basic recipe with ingredients/instructions
- 3+ reliable sources
- Prep time and tips
Step 2: Review
@agent-revisao Review this cake recipe. Criteria:
- Correct proportions
- Logical and clear steps
- Temperatures specified
- Warnings about critical ingredients
Result:
- Identifies 10 issues:
  - Disproportionately high cocoa
  - Hot water without a temperature
  - Ambiguous toothpick test
  - Missing cooling steps
  - etc.
Step 3: Correct
@agent-correcao Apply these fixes to the recipe:
1. Reduce cocoa from 3/4 to 1/2 cup
2. Specify water 60-70°C
3. Clarify toothpick test
4. Add cooling steps
5. Include warnings about room temp eggs
[...]
Result:
- Refined, testable recipe
- Proportions verified
- All warnings included
- Ready to use
Standard Workflow
```
Original Task
    ↓
@agent-pesquisa (if data needed)  → Result 1
    ↓
@agent-revisao (validate result)  → Problem list
    ↓
@agent-correcao (apply feedback)  → Final result
    ↓
✓ Done
```
Customizable as needed.
Available Tools by Agent
| Tool | Function | Used by |
|---|---|---|
| `WebSearch` | Search the web | pesquisa |
| `Read` | Read files | revisao, correcao |
| `Grep` | Search content | revisao, correcao |
| `Edit` | Edit a file | correcao |
| `Write` | Create a file | [custom] |
| `Bash` | Execute commands | [custom] |
Configuration
Agents reside in .claude/agents/*.md. Claude Code loads them automatically on startup.
To invoke:
- CLI: `@agent-name prompt here`
- Web: type `@` in the chat
- IDE: same syntax
Tips
- Specific prompts: the more detailed the prompt, the better the results
- One thing per agent: avoids confusion, reusable
- Structured outputs: use consistent format (bullets, tables, blocks)
- Validation: always review agent output before applying
- Iteration: the research → review → correct chain is the gold standard
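The last two tips combine naturally: the chain can be re-run until the reviewer stops finding problems. A minimal sketch, with hypothetical `review` and `correct` callables standing in for the revisao and correcao agents:

```python
def review_correct_loop(content: str, review, correct, max_rounds: int = 3) -> str:
    """Re-run review -> correct until the reviewer reports no problems.

    `review(content)` returns a list of problems (empty = approved);
    `correct(content, problems)` returns the fixed content. Both are
    assumed wrappers around the revisao/correcao agents; any callables
    with these shapes work. `max_rounds` caps the loop so a stubborn
    reviewer cannot run forever.
    """
    for _ in range(max_rounds):
        problems = review(content)
        if not problems:
            break
        content = correct(content, problems)
    return content
```

In practice, `review` could parse the revisao agent's **Errors found** section into a list, and `correct` could invoke the correcao agent with that feedback.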
Example with English Agents
All agents can be configured for English, with prompts and outputs written in English.
@agent-pesquisa What is the capital of Australia?
@agent-revisao Review the response for accuracy and clarity
@agent-correcao Fix as needed
Next Steps
- Create translator agent (Read → translate → Edit)
- Create test agent (Read code → generate tests)
- Create documentation agent (Read code → generate docs)
- Implement automatic feedback loop (review → correct → re-review)