
Pax

Posted on • Originally published at paxrel.com

5 AI Agent Patterns Every Developer Should Know in 2026

AI agents are moving from research demos to production systems. But most tutorials still show toy examples. Here are 5 battle-tested patterns that actually work at scale.

1. ReAct (Reasoning + Acting)

The most versatile pattern. The agent thinks about what to do, takes an action, observes the result, and repeats.

import anthropic

client = anthropic.Anthropic()

def run_react_agent(task, tools, max_steps=10):
    messages = [{"role": "user", "content": task}]

    for step in range(max_steps):
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,  # required by the Messages API
            tools=tools,
            messages=messages
        )

        if response.stop_reason == "tool_use":
            # Run the requested tools, then feed the results back as the
            # next user turn so the model can observe and continue.
            tool_results = execute_tools(response)
            messages.extend([
                {"role": "assistant", "content": response.content},
                {"role": "user", "content": tool_results}
            ])
        else:
            return response.content[0].text

    raise RuntimeError(f"Agent did not finish within {max_steps} steps")

When to use: General-purpose tasks — research, data analysis, customer support.
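The loop above leans on an `execute_tools` helper that isn't shown. Here's a minimal sketch of what it might look like, assuming a simple registry mapping tool names to plain Python functions (the `TOOLS` dict and its `get_weather` entry are illustrative, not part of any SDK); it packages results in the `tool_result` shape the Messages API expects back:

```python
# Hypothetical helper assumed by the ReAct loop: run every tool_use
# block in the response and return tool_result blocks for the next turn.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",  # stand-in tool
}

def execute_tools(response):
    results = []
    for block in response.content:
        if block.type != "tool_use":
            continue  # skip text blocks
        try:
            output = TOOLS[block.name](**block.input)
        except Exception as exc:
            # Report tool failures back to the model instead of crashing.
            output = f"Tool error: {exc}"
        results.append({
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": str(output),
        })
    return results
```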

2. Router Pattern

Instead of one agent doing everything, use a lightweight router to dispatch to specialized agents.

class RouterAgent:
    def __init__(self):
        self.agents = {
            "code": CodeAgent(),
            "research": ResearchAgent(),
            "writing": WritingAgent(),
        }

    def route(self, task):
        category = classify_task(task)  # haiku-level model
        # Fall back to a default agent if the classifier returns
        # an unknown category instead of raising a KeyError.
        agent = self.agents.get(category, self.agents["research"])
        return agent.run(task)

Why it works: Specialized agents with focused system prompts can outperform generalist agents by 30-40% on task-specific benchmarks. And routing with a small model costs well under $0.001 per request.
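In production, `classify_task` would be a single call to a haiku-class model with the category list in the prompt. For a testable offline sketch, a keyword heuristic stands in below (the `CATEGORIES` keywords are illustrative assumptions):

```python
# Offline stand-in for classify_task: in production this is one call
# to a small, cheap model; keyword scoring keeps the sketch runnable.
CATEGORIES = {
    "code": ("bug", "function", "refactor", "compile"),
    "research": ("find", "compare", "summarize", "sources"),
    "writing": ("draft", "blog", "email", "rewrite"),
}

def classify_task(task: str) -> str:
    task_lower = task.lower()
    scores = {
        cat: sum(word in task_lower for word in words)
        for cat, words in CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    # Default to a safe category when nothing matches, rather than guess.
    return best if scores[best] > 0 else "research"
```

Either way, the classifier's only job is to emit one of the registry keys — keep its output space that small and routing stays cheap and reliable.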

3. Tool-Use with Fallback Chain

Production agents need graceful degradation. If Tool A fails, try Tool B. If both fail, fall back to the LLM's knowledge.

class FallbackToolAgent:
    def execute_with_fallback(self, tool_name, params):
        # Try the primary tool first. In production, log each failure
        # here rather than swallowing it silently.
        try:
            return self.tools[tool_name](**params)
        except Exception:
            pass

        # Then the secondary tool, if one is registered for this name.
        fallback = self.fallbacks.get(tool_name)
        if fallback:
            try:
                return fallback(**params)
            except Exception:
                pass

        # Last resort: answer from the model's own knowledge.
        return self.llm_fallback(tool_name, params)

Key insight: In production, tools fail 5-15% of the time (API timeouts, rate limits, data issues). Without fallbacks, your agent fails 5-15% of the time too.
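The arithmetic behind that claim is worth making explicit. Assuming failures are independent (a simplification — a provider-wide outage breaks both tools at once), chaining a fallback multiplies failure probabilities:

```python
def chain_failure_rate(*rates):
    """Probability that every tool in a fallback chain fails,
    assuming independent failures."""
    p = 1.0
    for r in rates:
        p *= r
    return p

# One tool at a 10% failure rate: the agent fails 10% of the time.
# Add a fallback that also fails 10% of the time: 0.10 * 0.10 = 1%.
```

So a single fallback turns a 1-in-10 failure into a 1-in-100 failure, and the LLM-knowledge fallback at the end catches most of the rest.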

4. Multi-Agent with Shared Memory

When tasks are complex enough to need multiple agents, they need to share context efficiently.

class SharedMemory:
    def __init__(self):
        self.facts = {}     # key -> {"value": ..., "source": agent}
        self.history = []   # audit log of who added what, and when

    def add_fact(self, agent, key, value):
        self.facts[key] = {"value": value, "source": agent}
        self.history.append({"agent": agent, "action": "added", "key": key})

    def get_context_for(self, agent):
        # _is_relevant is application-specific: tag-based filtering,
        # embedding similarity, or simply "everything" for small teams.
        return {k: v for k, v in self.facts.items()
                if self._is_relevant(k, agent)}

The mistake everyone makes: Sharing the entire conversation history between agents. This explodes token usage. Instead, share structured facts.
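Some back-of-envelope arithmetic shows why (all counts below are illustrative, not measured): with full history sharing, every agent re-reads the combined transcript of every other agent, so context grows with agents × turns; structured facts stay near-constant.

```python
def full_history_tokens(agents, turns_per_agent, tokens_per_turn):
    # Every agent re-reads the combined transcript of all agents,
    # so cost grows quadratically with the number of agents.
    return agents * (agents * turns_per_agent * tokens_per_turn)

def shared_facts_tokens(agents, num_facts, tokens_per_fact):
    # Every agent reads only the distilled fact store.
    return agents * num_facts * tokens_per_fact

# 4 agents, 20 turns each, ~500 tokens/turn → 160k tokens of context.
# The same run distilled to 30 facts at ~40 tokens each → 4.8k tokens.
```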

5. Human-in-the-Loop Checkpoints

The most important pattern for production: let the agent work autonomously, but pause for human approval at critical points.

class CheckpointAgent:
    def __init__(self, checkpoints):
        self.checkpoints = checkpoints  # step names that require sign-off

    async def run(self, task):
        plan = self.create_plan(task)
        result = None  # guard against an empty plan

        for step in plan:
            # Pause for a human before any gated step executes.
            if step.name in self.checkpoints:
                approved = await self.request_human_approval(step)
                if not approved:
                    return self.handle_rejection(step)

            result = await self.execute_step(step)

        return result

Rule of thumb: Any action that's hard to reverse (sending emails, making payments, modifying production data) should have a checkpoint.
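The approval gate itself can be very small. Here's a synchronous sketch (the async version in CheckpointAgent would typically await a UI callback or a Slack response instead); the function name and the injected `ask` parameter are illustrative choices that make the gate testable without stdin:

```python
def request_human_approval(step_name, detail, ask=input):
    """Ask a human to approve a gated step.

    `ask` defaults to the terminal prompt but can be any callable
    that takes a prompt string and returns the human's answer.
    """
    answer = ask(f"Approve step '{step_name}'? ({detail}) [y/N] ")
    # Default to rejection: anything other than an explicit yes is a no.
    return answer.strip().lower() in ("y", "yes")
```

Note the default-deny: an empty or ambiguous answer rejects the step, which is exactly what you want for hard-to-reverse actions.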


Framework Comparison (Quick Reference)

| Framework | Best For | Learning Curve |
| --- | --- | --- |
| LangGraph | Complex workflows with branching | Moderate |
| CrewAI | Multi-agent collaboration | Low |
| Claude Agent SDK | Tool-use agents | Low |
| OpenAI Agents SDK | GPT-based agents | Low |
| Custom (no framework) | Simple agents, full control | Varies |


Originally published on paxrel.com
