In Part 1, we defined the challenge of productionizing AI agents and introduced our modern stack: Strands for building the agent's logic and Amazon Bedrock AgentCore for running it at scale. Now, we dive into the heart of the development process: crafting the agent's intelligence. This part is a hands-on guide to using the Strands SDK to build a functional customer support agent locally, before we even think about the cloud.
A Deep Dive into the Strands SDK
The philosophy behind Strands is to get out of the developer's way and leverage the reasoning power of modern LLMs. It eschews complex, hardcoded workflows in favor of a simple, model-driven approach. At its core, every Strands agent is composed of three simple components:
- A Model: The LLM that provides the reasoning capabilities. Strands is model-agnostic, supporting models from Amazon Bedrock, Anthropic, OpenAI, and local models via Ollama, among others.
- A Prompt: A system_prompt that defines the agent's persona, its purpose, and its rules of engagement. This is where you shape the agent's behavior.
- A Set of Tools: Python functions or external services that the agent can call to perform actions or retrieve information from the outside world.
These components work together in a continuous agentic loop. When a user sends a message, the agent's LLM analyzes the request in the context of the conversation history and the available tools. It then decides whether to respond directly, ask a clarifying question, or call one or more tools to gather information. If a tool is called, its output is fed back into the loop, allowing the agent to reason over the new information and decide on the next step, continuing until it arrives at a final answer.
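To make this concrete, here is a minimal sketch of an agent that wires the three components together. Treat it as illustrative: the model ID is just an example Bedrock identifier, so substitute one your account can access (Strands also accepts a preconfigured model object instead of a string).
# hello_agent.py -- a minimal sketch; the model ID is an example
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",  # example model ID
    system_prompt="You are a concise writing assistant.",
    tools=[word_count],
)

# A single call runs the full agentic loop: the model may answer directly,
# or call word_count and reason over its output before replying.
response = agent("How many words are in the sentence 'the quick brown fox'?")
print(response)  # the result stringifies to the agent's final answer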
Setting Up the Local Development Environment
Before writing any agent code, let's establish a clean and reproducible development environment. This is a critical first step that prevents common configuration issues down the line.
First, create a project directory and set up a Python virtual environment to isolate our dependencies:
# Create a new directory for your project
mkdir customer-support-agent
cd customer-support-agent
# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate
# On Windows use: .venv\Scripts\activate
Next, install the necessary Python packages. We need strands-agents for the core SDK, strands-agents-tools for useful pre-built tools, and boto3 to interact with AWS services like Amazon Bedrock:
pip install strands-agents strands-agents-tools boto3
Finally, ensure your AWS credentials are configured correctly. The best practice for local development is to use a dedicated AWS CLI profile. This isolates permissions and makes it easy to switch between different AWS accounts or roles. If you haven't already, install the AWS CLI and run:
# Configure a specific profile for our agent
aws configure --profile bedrock-agent
# Verify your identity and check which models you have access to
aws sts get-caller-identity --profile bedrock-agent
aws bedrock list-foundation-models --profile bedrock-agent
Pro Tip: A common pitfall is misconfigured AWS regions or IAM permissions. Ensure your bedrock-agent profile has permissions for bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream (Strands streams responses by default), and that you have requested access to the desired foundation model (e.g., Anthropic's Claude Sonnet 4) in the Amazon Bedrock console for your default region.
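Before writing any agent code, a quick boto3 smoke test against Bedrock's Converse API can surface region, credential, or model-access problems early. This is a minimal sketch; the model ID is an example inference profile, so substitute whichever model you enabled:
# check_bedrock.py -- quick sanity check; the model ID is an example, use one you enabled
import boto3

session = boto3.Session(profile_name="bedrock-agent")
client = session.client("bedrock-runtime")

response = client.converse(
    modelId="us.anthropic.claude-sonnet-4-20250514-v1:0",  # example inference profile ID
    messages=[{"role": "user", "content": [{"text": "Say hello in five words."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
If this prints a greeting, your profile, region, and model access are all in working order.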
Architecting Agent Capabilities: Creating Custom Tools
Tools are how we extend our agent's capabilities beyond the knowledge of its LLM. With Strands, creating a custom tool is as simple as writing a Python function and decorating it with @tool. The decorator is a powerful abstraction that transforms a standard function into a capability that the LLM can discover and call. Strands automatically parses the function's name, parameters (including type hints), and its docstring to create a description for the model. This means the docstring is not just for developers; it's a crucial part of the prompt that guides the LLM's tool selection.
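You can see this in action by inspecting a decorated function. The attribute names below come from the Strands AgentTool interface and may differ across SDK versions, so treat this as an illustrative sketch:
# inspect_tool.py -- illustrative sketch; attribute names may vary by Strands version
from strands import tool

@tool
def greet(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"

# The decorator derives a model-facing spec from the function's name,
# type hints, and docstring.
print(greet.tool_name)  # "greet"
print(greet.tool_spec)  # the schema the LLM sees when deciding which tool to call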
Let's create a tools.py file and define a suite of tools for our customer support agent, inspired by real-world use cases. For this tutorial, these tools will interact with a mock database, but they could easily be adapted to call real APIs.
# tools.py
from strands import tool
import json
# A mock database for demonstration purposes
MOCK_ORDERS_DB = {
"12345": {"status": "Shipped", "estimated_delivery": "2025-10-28"},
"67890": {"status": "Processing", "items":},
}
@tool
def get_order_status(order_id: str) -> str:
"""
Retrieves the current status and details for a given order ID.
Use this tool to answer customer questions about their orders.
"""
print(f"--- Tool: get_order_status called with order_id={order_id} ---")
status = MOCK_ORDERS_DB.get(order_id)
if status:
return json.dumps(status)
return json.dumps({"error": "Order not found."})
@tool
def lookup_return_policy(product_category: str) -> str:
"""
Looks up the return policy for a specific product category.
Valid categories are 'electronics', 'apparel', and 'home_goods'.
"""
print(f"--- Tool: lookup_return_policy called with category={product_category} ---")
policies = {
"electronics": "Electronics can be returned within 30 days of purchase with original packaging.",
"apparel": "Apparel can be returned within 60 days, provided it is unworn with tags attached.",
"home_goods": "Home goods have a 90-day return policy."
}
return policies.get(product_category.lower(), "Sorry, I could not find a policy for that category.")
@tool
def initiate_refund(order_id: str, reason: str) -> str:
"""
Initiates a refund process for a given order ID and reason.
Only use this tool when a customer explicitly requests a refund.
Returns a confirmation number for the refund request.
"""
print(f"--- Tool: initiate_refund called for order_id={order_id} ---")
if order_id in MOCK_ORDERS_DB:
        # hash() returns an int, so derive a short numeric suffix from it
        confirmation_number = f"REF-{order_id}-{abs(hash(reason)) % 1_000_000:06d}"
return json.dumps({"status": "Refund initiated", "confirmation_number": confirmation_number})
return json.dumps({"error": "Cannot initiate refund. Order not found."})
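Because the @tool decorator keeps the underlying function directly callable (true in current Strands releases, but worth verifying against your SDK version), you can unit-test these tools as plain Python before any LLM is involved:
# test_tools.py -- plain-Python checks; assumes @tool-decorated functions remain directly callable
import json
from tools import get_order_status, lookup_return_policy

def test_known_order():
    assert json.loads(get_order_status("12345"))["status"] == "Shipped"

def test_unknown_order():
    assert "error" in json.loads(get_order_status("99999"))

def test_policy_lookup_is_case_insensitive():
    assert "60 days" in lookup_return_policy("Apparel")
Run them with pytest test_tools.py; catching tool bugs here is far cheaper than debugging them through the agentic loop.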
Assembling and Prompting the Agent
With our tools defined, we can now assemble the agent in a main.py file. We will import the Agent class and our custom tools. The most critical step here is crafting the system_prompt. This prompt acts as the agent's constitution, defining its personality, capabilities, and constraints. A well-crafted prompt is specific enough to guide the agent but flexible enough to let it reason effectively.
# main.py
import os
import logging
from strands import Agent
from strands_tools import calculator # Import a pre-built tool
from tools import get_order_status, lookup_return_policy, initiate_refund
# Set AWS profile to use the one we configured
os.environ["AWS_PROFILE"] = "bedrock-agent"
# Enable debug logging to see the agent's thought process
logging.basicConfig(level=logging.INFO)
logging.getLogger("strands").setLevel(logging.DEBUG)
# Define the agent's persona and instructions
SYSTEM_PROMPT = """
You are a helpful and efficient customer support assistant for an online retailer.
Your goal is to resolve customer issues accurately and quickly.
You have access to the following tools:
- get_order_status: To check the status of a customer's order.
- lookup_return_policy: To provide information about return policies for different product categories.
- initiate_refund: To start the refund process for an order.
- calculator: To perform any necessary calculations.
Follow these rules:
1. Be polite and empathetic in all your responses.
2. Before using a tool that requires an order ID, always confirm the order ID with the customer if it was not provided.
3. Do not make up information. If you cannot answer a question with your available tools, say so.
"""
# Instantiate the agent
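# No model is specified here, so Strands falls back to its default Bedrock model;
# pass model=... to the constructor to choose a specific one.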
support_agent = Agent(
tools=[
get_order_status,
lookup_return_policy,
initiate_refund,
calculator
],
system_prompt=SYSTEM_PROMPT
)
def main():
print("Customer Support Agent is ready. Type 'exit' to quit.")
while True:
user_input = input("You: ")
if user_input.lower() in ["exit", "quit"]:
break
# Invoke the agent
response = support_agent(user_input)
print(f"Agent: {response.message}")
if __name__ == "__main__":
main()
Local Testing and the "Aha!" Moment
Now, run the agent from your terminal:
python -u main.py
You can now interact with your agent. Try a few queries to see its reasoning in action:
- Query 1: Hi, what's the status of order 12345? The agent will directly call the get_order_status tool and return the status. The debug logs will show the LLM's decision to use the tool and the parameters it supplied. (A scripted version of this query is sketched just after this list.)
- Query 2: I need to return a laptop. The agent will recognize that "laptop" falls under the "electronics" category and will call the lookup_return_policy tool to provide the correct information.
- Query 3: Can you please refund my order? The agent, following the rules in its system prompt, will ask for the order ID before attempting to call the initiate_refund tool. This demonstrates its ability to follow instructions and engage in multi-turn conversation.
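Beyond interactive testing, you can script a check like Query 1. This is a minimal sketch; it assumes main.py's module-level code runs safely on import (the __main__ guard keeps the REPL loop from starting) and that converting the agent's result to a string yields the final reply text:
# scripted_check.py -- minimal sketch; assumes str(result) yields the reply text
from main import support_agent  # importing main builds the agent but does not start the REPL loop

result = support_agent("Hi, what's the status of order 12345?")
reply = str(result)
assert "Shipped" in reply, f"Unexpected reply: {reply}"
print("Scripted check passed.")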
This local feedback loop is the "aha!" moment: seeing the agent correctly interpret natural language, select the right tool from its toolkit, and take action. With just a few Python files, we have built the brain of a sophisticated AI assistant. In Part 3, we will take this exact code and deploy it to the cloud with Bedrock AgentCore, transforming our local prototype into a scalable, production-ready service.