If you've tried building an AI agent with Claude and Bedrock, you've hit stopReason. Maybe you ignored it, maybe you cargo-culted the pattern from a tutorial. Either way — here's what's actually happening and why it matters for production systems.
## TL;DR

- `stopReason: "tool_use"` → execute the tool, append the result, loop again
- `stopReason: "end_turn"` → the agent is done, return the response
- Always append the assistant message before tool results
- Never use text content checks as your stop condition
## Why this pattern exists
Claude on Bedrock doesn't magically "know" to use tools. You have to implement the loop that lets it. The model's job is to reason about your task and signal what it needs — your code's job is to listen to that signal and respond correctly.
The signal is stopReason. Everything else follows from there.
## The loop, step by step
1. Send message + tool definitions to Claude
2. Read stopReason from response
3a. If "tool_use" → run the tool → append result → go to step 1
3b. If "end_turn" → return the final response
That's it. Here's the full working implementation:
```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Define your tools
tools = [{
    "toolSpec": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order by order ID",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"}
                },
                "required": ["order_id"]
            }
        }
    }
}]

def run_tool(tool_name, tool_input):
    if tool_name == "get_order_status":
        return {"status": "shipped", "eta": "2 days"}
    return {"error": "unknown tool"}

def run_agent(user_message):
    messages = [{"role": "user", "content": [{"text": user_message}]}]
    for _ in range(20):  # Safety cap — not your primary stop mechanism
        response = client.converse(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",
            messages=messages,
            toolConfig={"tools": tools}
        )
        stop_reason = response["stopReason"]
        output_message = response["output"]["message"]

        # Step 1: Always append Claude's response first
        messages.append(output_message)

        # Step 2: Check stopReason — not content type
        if stop_reason == "end_turn":
            # Join every text block, not just the first one
            return "".join(
                block["text"]
                for block in output_message["content"]
                if "text" in block
            )
        elif stop_reason == "tool_use":
            tool_results = []
            for block in output_message["content"]:
                if "toolUse" in block:
                    tool = block["toolUse"]
                    result = run_tool(tool["name"], tool["input"])
                    tool_results.append({
                        "toolResult": {
                            "toolUseId": tool["toolUseId"],
                            "content": [{"json": result}]
                        }
                    })
            # Step 3: Append tool results and loop
            messages.append({"role": "user", "content": tool_results})
    return "Safety cap reached"

print(run_agent("What's the status of order ORD-12345?"))
```
## The three bugs that will waste your afternoon
### Bug 1 — Missing assistant message append
```python
# What most tutorials show (broken)
messages.append(tool_results_message)

# What you actually need
messages.append(output_message)        # Claude's response FIRST
messages.append(tool_results_message)  # Then your tool results
```
Claude needs to see its own previous response to connect tool results back to the tool call it made. Skip this and you'll get confused responses or infinite loops.
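To make the ordering concrete, here's a sketch of the transcript shape after one tool call. The `toolUseId` value (`"tu_01"`) is a made-up placeholder — Bedrock generates real ones:

```python
# Transcript shape the Converse API expects after one tool call.
# "tu_01" is a hypothetical toolUseId; Bedrock generates the real value.

assistant_message = {
    "role": "assistant",
    "content": [
        {"text": "I'll look that up now."},
        {"toolUse": {"toolUseId": "tu_01", "name": "get_order_status",
                     "input": {"order_id": "ORD-12345"}}},
    ],
}

tool_results_message = {
    "role": "user",
    "content": [
        {"toolResult": {"toolUseId": "tu_01",
                        "content": [{"json": {"status": "shipped"}}]}},
    ],
}

messages = [{"role": "user", "content": [{"text": "Status of ORD-12345?"}]}]
messages.append(assistant_message)      # Claude's response FIRST
messages.append(tool_results_message)   # then your tool results

# Roles must alternate: user, assistant, user, ...
assert [m["role"] for m in messages] == ["user", "assistant", "user"]
```

The alternating-role invariant is also why the ordering matters: a `toolResult` arriving without the preceding `toolUse` message breaks the conversation Claude is reasoning over.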
### Bug 2 — Content type check as stop condition
```python
# Breaks silently on complex tasks
if "text" in response["output"]["message"]["content"][0]:
    return response  # WRONG

# Correct
if response["stopReason"] == "end_turn":
    return response  # RIGHT
```
Claude can return text blocks alongside toolUse blocks in the same message. A reply like "I'll look that up now" followed by a tool call puts a text block in position [0], so your agent terminates before the tool ever runs.
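When you do need the text for display, pull it from every block rather than trusting position `[0]`. A small helper — `extract_text` is a name invented here, not part of the Converse API:

```python
def extract_text(message):
    """Join all text blocks from a Converse message, wherever they appear."""
    return " ".join(b["text"] for b in message["content"] if "text" in b)

# A message mixing toolUse and text blocks -- text is NOT guaranteed at [0]
message = {
    "role": "assistant",
    "content": [
        {"toolUse": {"toolUseId": "tu_01", "name": "get_order_status",
                     "input": {"order_id": "ORD-12345"}}},
        {"text": "Looking that up now."},
    ],
}

print(extract_text(message))  # prints "Looking that up now."
```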
### Bug 3 — Iteration cap as primary stop
```python
# This is a safety net, not a strategy
for i in range(10):
    if i == 9:
        break  # Wrong use of cap
```
Caps prevent runaway agents. They don't signal completion. A task that genuinely needs 12 iterations gets cut off at 10. A task that finishes in 3 wastes 7 loops. The model signals completion via stopReason — listen to it.
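One way to keep the cap honest is to treat tripping it as an error rather than a quiet exit. A minimal sketch — the `step` callable stands in for one `converse()` call plus tool handling, and `AgentLoopOverflow` is a name invented here:

```python
# Sketch: the safety cap raises instead of returning silently.
# `step` is a hypothetical stand-in that returns (stop_reason, result).

class AgentLoopOverflow(RuntimeError):
    """The safety cap tripped -- a bug signal, not a normal exit."""

MAX_ITERATIONS = 20

def agent_loop(step):
    for _ in range(MAX_ITERATIONS):
        stop_reason, result = step()
        if stop_reason == "end_turn":  # the model's signal ends the loop
            return result
    raise AgentLoopOverflow(f"no end_turn after {MAX_ITERATIONS} iterations")

# A fake step that signals completion on its third call
calls = iter(["tool_use", "tool_use", "end_turn"])
print(agent_loop(lambda: (next(calls), "final answer")))  # prints "final answer"
```

Raising makes runaway loops loud in monitoring instead of masquerading as a (wrong) answer.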
## Where this goes next
Master the core loop and these patterns become straightforward:
- **Multi-step tool sequencing** — Claude chains multiple tool calls across iterations, reasoning about each result before deciding the next action.
- **Multi-agent systems** — coordinator agents treat subagents as tools. Same tool_use / end_turn cycle, one level up.
- **Human escalation** — on low-confidence results, append a special toolResult that prompts Claude to request human input instead of proceeding.
- **Error propagation** — structured error responses in tool results let Claude decide whether to retry, use a fallback tool, or surface the failure cleanly.
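For the error-propagation case, the Converse API's toolResult block supports a `status` field. A small helper — `make_tool_result` is a name invented here, not part of the article's loop:

```python
# Hypothetical helper: wrap a tool outcome as a Converse toolResult block.
# status="error" tells Claude the call failed, so it can retry or fall back.

def make_tool_result(tool_use_id, result=None, error=None):
    if error is not None:
        return {"toolResult": {
            "toolUseId": tool_use_id,
            "content": [{"text": error}],
            "status": "error",
        }}
    return {"toolResult": {
        "toolUseId": tool_use_id,
        "content": [{"json": result}],
    }}

# On failure, Claude sees a structured error instead of a silent exception
failed = make_tool_result("tu_01", error="order service timeout after 3 retries")
ok = make_tool_result("tu_01", result={"status": "shipped"})
```

Appending `failed` as a user-role message keeps the loop alive and lets the model choose the recovery path.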
## Want to build this in a real AWS Bedrock sandbox?
This is Mission 1 of the CCA-001: Claude Certified Architect track — 22 labs covering agentic loops, MCP servers, multi-agent systems, Bedrock Guardrails, CI/CD with Claude Code, and observability dashboards. Real AWS sandbox, automated validation, no account setup needed.
👉 cloudedventures.com/labs/bedrock-agentic-loop-stopreason
Also join r/ClaudeCertifications on Reddit — building a community for Claude and Bedrock practitioners.
Drop a comment if anything's unclear — what part of the loop trips you up most?