Most “prompt engineering” advice was written for single-turn chatbots — not for agents running in a loop with tools, memory, and side effects.
Anthropic’s Applied AI team recently shared what they learned from building agents like Claude Code and their research agents. I took those ideas and turned them into a practical guide for people building real systems:
The Art of Agent Prompting: Lessons from Anthropic's AI Team
In the post, I cover:
- Why rigid few-shot / CoT templates can hurt modern agents
- How to think about prompts when the model runs in a tool loop, not a single response
- Giving agents heuristics (search budgets, irreversibility, “good enough” answers)
- Concrete guidance on tool selection and avoiding MCP-style tool collisions
- How to guide the agent’s thinking (planning, interleaved reflection, when to stop)
- A running example: Cameron AI, a personal finance assistant
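To make the heuristics idea concrete, here is a minimal sketch of how a search budget and a "good enough" stop condition might be expressed: soft guidance in the system prompt, backed by a hard cap in the tool loop. Everything here (the prompt text, `run_agent`, `call_model`, `call_tool`, `MAX_TOOL_CALLS`) is my own illustration, not code from Anthropic or the article.

```python
# Hypothetical sketch: agent heuristics as prompt text plus a hard loop guard.
# The soft budget lives in the prompt; the hard cap keeps a confused model
# from over-searching or looping forever.

SYSTEM_PROMPT = """\
You are Cameron AI, a personal finance assistant.
Heuristics:
- Search budget: use at most 3 tool calls per question; prefer data you already have.
- Irreversibility: never execute a transfer without explicit user confirmation.
- Good enough: stop searching once you can give a usefully accurate answer.
"""

MAX_TOOL_CALLS = 3  # hard backstop mirroring the prompt's soft budget

def run_agent(question, call_model, call_tool):
    """Tool loop with a hard cap on tool calls.

    call_model(messages) -> {"tool_call": dict | None, "content": str}
    call_tool(tool_call) -> str
    (Both are stand-ins for whatever client/framework you use.)
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
    for _ in range(MAX_TOOL_CALLS):
        reply = call_model(messages)
        if reply["tool_call"] is None:          # model chose to answer
            return reply["content"]
        result = call_tool(reply["tool_call"])  # side-effectful step
        messages.append({"role": "tool", "content": result})
    # Budget exhausted: force a final answer instead of looping again.
    messages.append({"role": "user",
                     "content": "Budget exhausted; answer with what you have."})
    return call_model(messages)["content"]
```

The design point is the pairing: the prompt shapes *when* the model stops on its own, while the loop guard guarantees termination even when it doesn't.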
If you’re working with LangGraph, custom backends, or just trying to keep your agents from over-searching or looping forever, this might save you some painful iteration.
Read the full article here: The Art of Agent Prompting