Welcome back!
In the last post, we built a custom pattern with conditional logic and tool use. But in many real-world apps, you don’t want to leave everything to agents. You want a human to step in before critical actions — especially in tasks involving judgment, risk, or sensitive data.
That’s where Human-in-the-Loop (HITL) comes in.
Why HITL?
Human-in-the-loop is useful when:
- You want approval or intervention before agents execute a plan
- You want to review intermediate results
- You’re building co-creative tools (e.g., coding assistants, design AI)
- You need accountability (e.g., legal, medical, or finance use cases)
Step 1: Add a Human Agent
AG-2 allows you to include a human as an agent in the loop. You can interact via the terminal or a notebook, or integrate a UI later.
Here’s how to set it up:
```python
from ag2 import Agent, HumanAgent, Conversation
import os

os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# AI agent that drafts the summary
writer = Agent(
    name="writer",
    llm="openai/gpt-4",
    system_message="You write summaries based on user prompts.",
)

# Human reviewer who approves the draft or suggests edits
human_reviewer = HumanAgent(
    name="human_reviewer",
    system_message="You are a human reviewer. Approve or suggest edits.",
)
```
Step 2: Create a Review Flow with a Human
```python
from ag2 import Orchestrator

# Route messages: user -> writer -> human reviewer -> back to user
orchestrator = Orchestrator(
    agents=[writer, human_reviewer],
    rules=[
        {"from": "user", "to": "writer"},
        {"from": "writer", "to": "human_reviewer"},
        {"from": "human_reviewer", "to": "user"},
    ],
)

conv = Conversation(orchestrator=orchestrator)
conv.send("Write a summary of the history of the internet.")
```
Now the human will be prompted to review the AI’s output before it’s finalized.
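To make the rule list concrete, here is a minimal, framework-free sketch of how such a list can drive routing. This is plain Python, not AG-2 internals; `next_hop` is a hypothetical helper.

```python
# The same rule list as above, treated as a routing table.
rules = [
    {"from": "user", "to": "writer"},
    {"from": "writer", "to": "human_reviewer"},
    {"from": "human_reviewer", "to": "user"},
]

def next_hop(sender, rules):
    """Return who receives a message sent by `sender`, or None if unrouted."""
    for rule in rules:
        if rule["from"] == sender:
            return rule["to"]
    return None
```

Following the table from `"user"` walks the full loop: writer, then human reviewer, then back to the user.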
Step 3: Try It in the Terminal
When you run the script from your terminal, AG-2 pauses and asks for human input at the human_reviewer step:

```shell
python hitl_review.py
```
Example flow:
- Writer generates a summary
- Terminal pauses and shows output to the human
- You (the human) type: “Looks good, approve.”
- Flow ends or loops based on what you say
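The loop above can be sketched without any framework at all. In this hypothetical version, `generate` stands in for the writer agent and `get_review` for the terminal prompt; neither is an AG-2 API.

```python
def hitl_review(generate, get_review, max_rounds=3):
    """Run generate -> human review until approval or the round limit."""
    draft = generate(None)  # first draft, no feedback yet
    for _ in range(max_rounds):
        feedback = get_review(draft)
        if feedback.strip().lower().startswith("approve"):
            return draft, "approved"
        # Any other reply is treated as edit instructions for the next draft.
        draft = generate(feedback)
    return draft, "unresolved"
```

The key design choice is that the human's reply does double duty: "approve" ends the flow, and anything else loops back into generation as feedback.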
Step 4: Add HITL to Complex Patterns
HITL also fits inside custom patterns: as a QA reviewer, an escalation point, or a fallback agent when the system gets stuck.
You can mix and match AI + humans freely — even simulate multi-role humans if needed.
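As a sketch of the escalation and fallback roles, here is a plain-Python router. `ai_answer` and `ask_human` are hypothetical stand-ins, and the sensitive-domain set is an assumption for illustration.

```python
# Domains where a human must decide, per the accountability use cases above.
SENSITIVE = {"legal", "medical", "finance"}

def route(task, ai_answer, ask_human):
    """Send sensitive or failed tasks to the human; otherwise use the AI."""
    if task["domain"] in SENSITIVE:
        return ask_human(task)   # escalation point
    answer = ai_answer(task)
    if answer is None:           # the AI could not produce a result
        return ask_human(task)   # fallback agent
    return answer
```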
What’s Next?
In the next post, we’ll explore:
- The AG2 Studio for visual workflows
- Inspecting logs, replays, and debugging sessions
- Deploying and iterating on agent systems without writing code
Keep coding