Imagine an AI that doesn't just answer questions but actually "does things" for you: checking the weather, booking appointments, or analyzing data. These are called agentic AI systems, and they're transforming how we interact with artificial intelligence.
BTW, this is Part 1 of a four-part project series that I am building.
Overview
This series walks you step-by-step through building real agentic AI systems, from your very first tool call to a full production-ready knowledge assistant.
Project 1 — Weather Agent (Beginner) -> Learn the core agent pattern: Define a tool → Call the model → Handle tool_use → Return results to Claude.
Project 2 — Research Assistant (Intermediate) -> Introduce multi-tool orchestration and build an agentic loop that handles multiple tool calls.
Project 3 — Task Planner (Advanced) -> Your agent plans its own execution, retries on failure, and reflects on improvements.
Project 4 — Knowledge Base Agent (Production) -> Use Bedrock’s managed agents + RAG to build enterprise-ready assistants.
My Approach: Applying Infrastructure as Code with Python
Why Python Scripts Instead of Manual Console Work?
Throughout this tutorial, you'll notice that I use Python scripts to create and configure AWS resources rather than clicking through the AWS Console. This is intentional and reflects real-world best practices. Here's why:
Repeatability
Why move away from the GUI approach:
• Make a mistake? Start over, clicking through 20+ screens
• Want to create in another region? Repeat all steps manually
• Team member needs to replicate? Send them a 50-step document
Why the Python scripting approach is better:
• Make a mistake? Fix the script and re-run
• Different region? Change one variable
• Share with team? Send them the script
Project 1: Build Your First AI Agent With Amazon Bedrock
In this project you'll build your first AI agent from scratch using Amazon Bedrock and AWS Lambda. By the end, you'll have a fully functional weather assistant that:
- Understands natural language requests
- Calls an external API to get real-time data
- Synthesizes information into helpful responses
What you'll learn:
- What Amazon Bedrock is and how it works
- How to create AI agents that can use tools
- How to build Lambda functions as agent tools
- How to connect everything using Action Groups
- How to test and deploy your agent
Prerequisites:
- AWS account
- Basic Python knowledge (or use AI to help you, as I did)
- Terminal/command line familiarity
- 20-40 minutes of your time
Key Concepts
Foundation Models: Pre-trained AI models you can use via API. Bedrock offers:
- Claude (Anthropic) - Advanced reasoning and tool use
- Llama (Meta)
- Amazon Titan
Agents: AI systems that can:
- Understand natural language
- Decide when to use tools
- Execute actions
- Synthesize responses
Action Groups: Collections of tools (functions) your agent can use. Think of them as the agent's "superpowers."
Tools: Functions your agent can call, like:
- Checking weather
- Querying databases
- Calling APIs
- Processing data
How Bedrock Agents Work: The Big Picture
User asks: "What's the weather in NYC?"
↓
[Bedrock Agent]
- Powered by Claude
- Has your instructions
- Knows about action groups
↓
Agent thinks: "I need weather data.
I have a 'get_weather' action.
I'll use that!"
↓
[Action Group]
- Knows this action calls a Lambda function
↓
[Lambda Function]
- Calls weather API
- Gets real data
- Returns it
↓
[Back to Agent]
- Receives weather data
- Crafts natural response
↓
User gets: "It's 52°F and rainy in New York City.
Bring an umbrella!"
Enable Bedrock Model Access
Before we start coding, check whether you have access to Claude (or your preferred model):
aws bedrock list-foundation-models \
  --region us-east-1 \
  --query 'modelSummaries[?contains(modelId, `anthropic.claude`)].modelId'
Note: If you see an error, go to AWS Console → Bedrock → Model Access → Request Access to Claude 3.5 Sonnet models
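If you prefer doing the same check from Python, the filtering logic can be sketched with a small helper. The helper itself is pure Python; the live `boto3` call in the comment assumes your credentials are already configured:

```python
def claude_model_ids(model_summaries):
    """Return sorted Claude model IDs from a list_foundation_models response."""
    return sorted(m["modelId"] for m in model_summaries
                  if "anthropic.claude" in m["modelId"])

# Against the live API (assumes boto3 and configured credentials):
#   import boto3
#   bedrock = boto3.client("bedrock", region_name="us-east-1")
#   print(claude_model_ids(bedrock.list_foundation_models()["modelSummaries"]))

# Works the same on a canned response:
sample = [
    {"modelId": "anthropic.claude-3-5-sonnet-20241022-v2:0"},
    {"modelId": "amazon.titan-text-express-v1"},
]
print(claude_model_ids(sample))  # ['anthropic.claude-3-5-sonnet-20241022-v2:0']
```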
Building the Lambda Function
What is Lambda? AWS Lambda is a service that runs your code without you managing servers.
Perfect for our use case because:
- Agent calls Lambda only when user asks about weather
- Could be once an hour or 1000 times an hour - Lambda scales automatically
- You pay only for actual requests (free tier: 1M requests/month)
Create lambda_function.py:
import json
import urllib.request
import urllib.parse
from typing import Tuple, Optional, Dict

USER_AGENT = "WeatherAssistant/1.0"

def http_get_json(url: str, headers: Optional[Dict] = None, timeout: int = 8) -> dict:
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

def geocode_city(city: str) -> Optional[Tuple[float, float]]:
    if not city:
        return None
    params = urllib.parse.urlencode({"q": city, "format": "json", "limit": 1})
    url = f"https://nominatim.openstreetmap.org/search?{params}"
    headers = {"User-Agent": USER_AGENT}
    try:
        results = http_get_json(url, headers=headers)
        if not results:
            return None
        return float(results[0]["lat"]), float(results[0]["lon"])
    except Exception as e:
        print(f"[ERROR] Geocoding failed: {e}")
        return None

def get_grid_endpoints(lat: float, lon: float) -> Tuple[str, str]:
    url = f"https://api.weather.gov/points/{lat:.4f},{lon:.4f}"
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    data = http_get_json(url, headers=headers)
    props = data.get("properties", {})
    return props["forecast"], props["forecastHourly"]

def get_forecast(url: str) -> dict:
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    return http_get_json(url, headers=headers)

def summarize_hourly_24h(hourly_json: dict) -> str:
    periods = hourly_json.get("properties", {}).get("periods", [])
    if not periods:
        return "No hourly forecast available."
    lines = []
    for p in periods[:24]:
        temp = p.get("temperature")
        unit = p.get("temperatureUnit", "F")
        short = p.get("shortForecast", "")
        wind = f"{p.get('windSpeed', '')} {p.get('windDirection', '')}".strip()
        lines.append(f"{temp}°{unit}, {short}, wind {wind}")
    return "Next 24 hours: " + " | ".join(lines[:6])

def lambda_handler(event, context):
    print(f"Received event: {json.dumps(event, default=str)}")

    # Extract parameters from the Bedrock agent event format
    api_path = event.get('apiPath', '/weather')
    http_method = event.get('httpMethod', 'GET')
    parameters = event.get('parameters', [])

    # Parse parameters
    city = None
    mode = "hourly"
    for param in parameters:
        if param['name'] == 'city':
            city = param['value']
        elif param['name'] == 'mode':
            mode = param['value']
    print(f"Parsed: city={city}, mode={mode}")

    # Validate input
    if not city:
        return {
            "messageVersion": "1.0",
            "response": {
                "actionGroup": event.get('actionGroup', 'weather-tools'),
                "apiPath": api_path,
                "httpMethod": http_method,
                "httpStatusCode": 400,
                "responseBody": {
                    "application/json": {
                        "body": json.dumps({"error": "Missing required parameter: city"})
                    }
                }
            }
        }

    try:
        # Geocode city
        coords = geocode_city(city)
        if not coords:
            return {
                "messageVersion": "1.0",
                "response": {
                    "actionGroup": event.get('actionGroup', 'weather-tools'),
                    "apiPath": api_path,
                    "httpMethod": http_method,
                    "httpStatusCode": 404,
                    "responseBody": {
                        "application/json": {
                            "body": json.dumps({"error": f"Could not find city: {city}"})
                        }
                    }
                }
            }
        lat, lon = coords
        print(f"Geocoded '{city}' to lat={lat}, lon={lon}")

        # Get weather
        forecast_url, hourly_url = get_grid_endpoints(lat, lon)
        data = get_forecast(hourly_url if mode == "hourly" else forecast_url)
        weather_report = summarize_hourly_24h(data)

        # Return in the format Bedrock expects
        return {
            "messageVersion": "1.0",
            "response": {
                "actionGroup": event.get('actionGroup', 'weather-tools'),
                "apiPath": api_path,  # CRITICAL: must match the input apiPath
                "httpMethod": http_method,
                "httpStatusCode": 200,
                "responseBody": {
                    "application/json": {
                        "body": json.dumps({"weather_report": weather_report})
                    }
                }
            }
        }
    except Exception as e:
        print(f"[ERROR] {e}")
        return {
            "messageVersion": "1.0",
            "response": {
                "actionGroup": event.get('actionGroup', 'weather-tools'),
                "apiPath": api_path,
                "httpMethod": http_method,
                "httpStatusCode": 500,
                "responseBody": {
                    "application/json": {
                        "body": json.dumps({"error": "Weather service error"})
                    }
                }
            }
        }
Understanding the code:
- geocode_city: Converts city names to GPS coordinates
- get_grid_endpoints: Gets weather.gov forecast URLs for coordinates
- get_forecast: Fetches weather data
- summarize_hourly_24h: Formats data for easy reading
- lambda_handler: Main entry point that orchestrates everything
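One detail worth seeing in isolation: Bedrock delivers parameters to Lambda as a list of name/value dicts, not a plain dictionary, which is why the handler loops over them. Here is a standalone sketch of that parsing you can run locally (`parse_agent_params` is a hypothetical helper mirroring the handler's logic, not an AWS API):

```python
def parse_agent_params(event):
    """Mirror of lambda_handler's parsing: Bedrock sends parameters as a
    list of {'name': ..., 'value': ...} dicts rather than a plain dict."""
    city, mode = None, "hourly"
    for param in event.get("parameters", []):
        if param["name"] == "city":
            city = param["value"]
        elif param["name"] == "mode":
            mode = param["value"]
    return city, mode

# Minimal event in the shape Bedrock sends to the function
sample_event = {
    "actionGroup": "weather-tools",
    "apiPath": "/weather",
    "httpMethod": "GET",
    "parameters": [
        {"name": "city", "value": "Seattle"},
        {"name": "mode", "value": "daily"},
    ],
}
print(parse_agent_params(sample_event))      # ('Seattle', 'daily')
print(parse_agent_params({"parameters": []}))  # (None, 'hourly')
```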
Lambda needs permission to write logs, so next we'll create an IAM role for the Lambda function.
Create trust-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Create the IAM role
aws iam create-role \
--role-name WeatherLambdaRole \
--assume-role-policy-document file://trust-policy.json
Attach policy for CloudWatch logging
aws iam attach-role-policy \
--role-name WeatherLambdaRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Get the role ARN (save this!)
aws iam get-role \
--role-name WeatherLambdaRole \
--query 'Role.Arn' \
--output text
Save the ARN - it looks like: arn:aws:iam::123456789012:role/WeatherLambdaRole
Deploy the Lambda Function
Package the code
zip function.zip lambda_function.py
Deploy to AWS (replace YOUR_ROLE_ARN with the ARN from step 2)
aws lambda create-function \
--function-name bedrock-weather-function \
--runtime python3.11 \
--role YOUR_ROLE_ARN \
--handler lambda_function.lambda_handler \
--zip-file fileb://function.zip \
--timeout 30 \
--memory-size 256
Creating the Bedrock Agent
Step 1: Create IAM Role for the Agent
The agent needs permissions to invoke Lambda and call Bedrock models.
Create agent-trust-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "bedrock.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Create agent-permissions.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": "arn:aws:lambda:us-east-1:*:function:bedrock-weather-function"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel"
      ],
      "Resource": "*"
    }
  ]
}
Create role
aws iam create-role \
--role-name BedrockAgentRole \
--assume-role-policy-document file://agent-trust-policy.json
Attach permissions
aws iam put-role-policy \
--role-name BedrockAgentRole \
--policy-name AgentPermissions \
--policy-document file://agent-permissions.json
Get the role ARN (save this!)
aws iam get-role \
--role-name BedrockAgentRole \
--query 'Role.Arn' \
--output text
Wait for role to propagate
sleep 10
Step 2: Create the Agent
Create create_agent.py:
import boto3
import json

client = boto3.client('bedrock-agent', region_name='us-east-1')

# Replace with your role ARN
ROLE_ARN = 'arn:aws:iam::YOUR_ACCOUNT:role/BedrockAgentRole'

print("Creating Bedrock agent...")

try:
    response = client.create_agent(
        agentName='weather-assistant',
        agentResourceRoleArn=ROLE_ARN,
        foundationModel='us.anthropic.claude-3-5-sonnet-20241022-v2:0',
        instruction=(
            "You are a helpful weather assistant. "
            "When users ask about weather in any US city, use the getWeather tool "
            "to fetch current conditions and forecasts. "
            "Provide friendly, conversational responses with relevant weather details. "
            "If asked about multiple cities, check each one. "
            "Always mention temperature, conditions, and any important weather alerts."
        ),
        idleSessionTTLInSeconds=600
    )
    agent_id = response['agent']['agentId']
    print("✅ Agent created successfully!")
    print(f"Agent ID: {agent_id}")
    print(f"Agent Name: {response['agent']['agentName']}")
    print("\nSave this Agent ID for the next steps!")

    # Save to file
    with open('agent-config.json', 'w') as f:
        json.dump({'agent_id': agent_id}, f)
except Exception as e:
    print(f"❌ Error: {str(e)}")
Run it:
python3 create_agent.py
Step 3: Create the Action Group
Action Groups connect your Lambda function to the agent using an OpenAPI schema.
Create create_action_group.py:
import boto3
import json

client = boto3.client('bedrock-agent', region_name='us-east-1')

# Replace these with your values
AGENT_ID = 'YOUR_AGENT_ID'  # From previous step
LAMBDA_ARN = 'arn:aws:lambda:us-east-1:YOUR_ACCOUNT:function:bedrock-weather-function'

# OpenAPI schema defining the tool
openapi_schema = {
    "openapi": "3.0.0",
    "info": {
        "title": "Weather API",
        "version": "1.0.0",
        "description": "API for getting weather forecasts for US cities"
    },
    "paths": {
        "/weather": {
            "get": {
                "summary": "Get weather forecast",
                "description": "Get weather forecast for a US city. Use when user asks about weather, temperature, or conditions.",
                "operationId": "getWeather",
                "parameters": [
                    {
                        "name": "city",
                        "in": "query",
                        "description": "US city name (e.g. 'Seattle', 'New York')",
                        "required": True,
                        "schema": {"type": "string"}
                    },
                    {
                        "name": "mode",
                        "in": "query",
                        "description": "Forecast type: 'hourly' or 'daily'",
                        "required": False,
                        "schema": {
                            "type": "string",
                            "enum": ["hourly", "daily"],
                            "default": "hourly"
                        }
                    }
                ],
                "responses": {
                    "200": {
                        "description": "Weather forecast",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {
                                        "weather_report": {"type": "string"}
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

print("Creating action group...")

try:
    response = client.create_agent_action_group(
        agentId=AGENT_ID,
        agentVersion='DRAFT',
        actionGroupName='weather-tools',
        actionGroupExecutor={'lambda': LAMBDA_ARN},
        apiSchema={'payload': json.dumps(openapi_schema)},
        description='Tools for getting weather information',
        actionGroupState='ENABLED'
    )
    print("✅ Action group created!")
    print(f"Action Group ID: {response['agentActionGroup']['actionGroupId']}")
except Exception as e:
    print(f"❌ Error: {str(e)}")
Run it:
python3 create_action_group.py
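A malformed OpenAPI schema is one of the most common reasons action-group creation fails. Before calling `create_agent_action_group`, a quick local sanity check can be sketched like this (`check_action_schema` is a hypothetical helper of my own, not part of any AWS SDK):

```python
import json

def check_action_schema(schema):
    """Light sanity checks on an action-group OpenAPI schema before upload."""
    assert schema.get("openapi", "").startswith("3."), "Bedrock expects OpenAPI 3.x"
    for path, ops in schema["paths"].items():
        for method, op in ops.items():
            assert op.get("operationId"), f"{method.upper()} {path} needs an operationId"
    # create_agent_action_group expects the schema serialized as a JSON string
    return json.dumps(schema)

schema = {
    "openapi": "3.0.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "paths": {"/weather": {"get": {
        "operationId": "getWeather",
        "parameters": [{"name": "city", "in": "query",
                        "required": True, "schema": {"type": "string"}}],
        "responses": {"200": {"description": "Weather forecast"}},
    }}},
}
payload = check_action_schema(schema)
print(type(payload).__name__)  # str
```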
Step 4: Prepare and Deploy the Agent
Prepare the agent (builds and validates)
aws bedrock-agent prepare-agent --agent-id YOUR_AGENT_ID
Wait for preparation (30-60 seconds)
sleep 30
Create an alias for testing
aws bedrock-agent create-agent-alias \
--agent-id YOUR_AGENT_ID \
--agent-alias-name dev
Save the alias ID from the output
Part 5: Testing Your Agent
Create a test script called test-agent.py:
import boto3

client = boto3.client('bedrock-agent-runtime', region_name='us-east-1')

# Replace with your values
AGENT_ID = 'YOUR_AGENT_ID'
AGENT_ALIAS_ID = 'YOUR_ALIAS_ID'

def chat(message):
    """Send a message to the agent and display the response."""
    print(f"\n{'='*70}")
    print(f"You: {message}")
    print(f"{'='*70}\n")

    response = client.invoke_agent(
        agentId=AGENT_ID,
        agentAliasId=AGENT_ALIAS_ID,
        sessionId='test-session',
        inputText=message,
        enableTrace=True
    )

    print("Agent: ", end='', flush=True)
    for event in response['completion']:
        if 'chunk' in event:
            chunk = event['chunk']
            if 'bytes' in chunk:
                print(chunk['bytes'].decode('utf-8'), end='', flush=True)
        elif 'trace' in event:
            trace = event['trace']
            if 'trace' in trace:
                trace_data = trace['trace']
                if 'orchestrationTrace' in trace_data:
                    orch = trace_data['orchestrationTrace']
                    if 'invocationInput' in orch:
                        inv = orch['invocationInput']
                        if 'actionGroupInvocationInput' in inv:
                            action = inv['actionGroupInvocationInput']
                            params = {p['name']: p['value'] for p in action.get('parameters', [])}
                            print(f"\n[Calling tool with params: {params}]")
    print("\n" + "="*70 + "\n")

if __name__ == "__main__":
    # Interactive mode
    print("\n🌤️ Weather Agent - Type 'exit' to quit\n")
    while True:
        try:
            query = input("You: ").strip()
            if query.lower() in ['exit', 'quit']:
                break
            if query:
                chat(query)
        except KeyboardInterrupt:
            break
    print("\n👋 Goodbye!\n")
Test It!
python3 test-agent.py
Try these queries:
- "What's the weather in NYC?"
- "How's the weather in Miami and New York?"
- "Is it going to rain in Boston today?"
- "What's the forecast for San Francisco?"
Expected interaction:
You: What's the weather in NYC?
[Calling tool with params: {'city': 'NYC', 'mode': 'hourly'}]
Agent: The weather in NYC is currently 52°F with partly cloudy skies.
Wind is light at 5 mph from the west. Over the next few hours, temperatures
will stay in the low 50s with continued partly cloudy conditions. It's a
pleasant day in NYC!
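One detail worth noting: test-agent.py reuses the fixed sessionId 'test-session', so the agent carries conversation context across your queries. If you want each run to start clean, you can generate a fresh session ID per run; a minimal sketch (the `new_session_id` helper and its prefix are my own naming):

```python
import uuid

def new_session_id(prefix="weather-test"):
    """Unique session ID so each run starts with fresh conversation context."""
    return f"{prefix}-{uuid.uuid4().hex[:12]}"

# Pass the result as sessionId in invoke_agent instead of a fixed string
print(new_session_id() != new_session_id())  # True: IDs are distinct
```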
Part 6: Understanding What You Built
The Complete Flow
Let's trace through what happens when you ask "What's the weather in NYC?":
1. User Input → Bedrock Agent
You type: "What's the weather in NYC?"
2. Agent Reasoning (Claude 3.5 Sonnet)
Claude thinks: "The user wants weather information for NYC.
I have a getWeather tool available. I should use it with city='NYC'."
3. Agent → Action Group
Request to Action Group:
{
  "apiPath": "/weather",
  "parameters": [
    {"name": "city", "value": "NYC"}
  ]
}
4. Action Group → Lambda Function
Bedrock invokes: bedrock-weather-function
With parameters: city=NYC, mode=hourly
5. Lambda Execution
Step 1: Geocode "NYC" → (40.7306,-73.9352)
Step 2: Call weather.gov/points/40.7306,-73.9352
Step 3: Get hourly forecast URL
Step 4: Fetch forecast data
Step 5: Format response
6. Lambda → Agent
Returns: {
"weather_report": "Next 24 hours: 52°F, Partly Cloudy, wind 5 mph W | ..."
}
7. Agent Synthesis
Claude reads the data and creates a natural response:
"The weather in NYC is currently 52°F with partly cloudy skies..."
8. Response → User
You see the friendly, natural language response!
Key Concepts Demonstrated
1. Agentic Reasoning
The AI doesn't just follow a script—it decides when and how to use tools based on the user's request.
2. Tool Abstraction
You defined what the tool does (OpenAPI schema), and the AI figures out how to use it.
3. Natural Language Interface
Users don't need to know about APIs, parameters, or JSON—they just ask naturally.
4. Error Handling
The Lambda function handles missing cities, API failures, and edge cases gracefully.
5. Serverless Architecture
Everything scales automatically. No servers to manage, pay only for what you use.
Troubleshooting
Common Issues and Solutions
Issue: "Invocation of model ID with on-demand throughput isn't supported"
Solution: Use inference profile instead of direct model ID:
Update agent with correct model
aws bedrock-agent update-agent \
--agent-id YOUR_AGENT_ID \
--agent-name weather-assistant \
--agent-resource-role-arn YOUR_ROLE_ARN \
--foundation-model us.anthropic.claude-3-5-sonnet-20241022-v2:0
# Prepare agent
aws bedrock-agent prepare-agent --agent-id YOUR_AGENT_ID
Issue: "APIPath in Lambda response doesn't match input"
Solution: Ensure Lambda returns apiPath exactly as received:
return {
    "response": {
        "apiPath": event.get('apiPath', '/weather'),  # Must match input
        ...
    }
}
Issue: "City not found"
Cause: OpenStreetMap couldn't geocode the city name.
Solution: Try more specific names:
- ❌ "Portland" → ✅ "Portland, OR"
- ❌ "Springfield" → ✅ "Springfield, IL"
Issue: Lambda times out
Solution: Increase timeout:
aws lambda update-function-configuration \
--function-name bedrock-weather-function \
--timeout 60
Issue: "Weather service temporarily unavailable"
Cause: weather.gov only covers US locations.
Solution: The API works only for US cities. For international weather, you'd need a different API (like OpenWeatherMap).
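If you want to fail fast before calling weather.gov, one option is a rough pre-check on the geocoded coordinates. To be clear, this is only a heuristic I'm sketching, not weather.gov's actual coverage map (it misses Alaska, Hawaii, and US territories):

```python
def roughly_in_conus(lat, lon):
    """Rough continental-US bounding box; a heuristic only."""
    return 24.0 <= lat <= 50.0 and -125.0 <= lon <= -66.0

print(roughly_in_conus(40.7306, -73.9352))  # True  (NYC)
print(roughly_in_conus(51.5072, -0.1276))   # False (London)
```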
Viewing Lambda Logs for Debugging
View recent logs
aws logs tail /aws/lambda/bedrock-weather-function --follow
This shows:
- Each function execution
- Print statements
- Errors with stack traces
- Execution duration and memory used
Cost Analysis
What Does This Cost to Run?
Let's break down the costs for 1,000 weather queries per month:
1. Amazon Bedrock (Claude 3.5 Sonnet)
- Input: ~200 tokens per query = 200K tokens
- Output: ~150 tokens per response = 150K tokens
- Cost: $3.00 per million input tokens, $15.00 per million output tokens
- Monthly cost: $0.60 + $2.25 = $2.85
2. AWS Lambda
- 1,000 executions × 2 seconds each = 2,000 seconds
- 256MB memory allocation
- First 1M requests free, then $0.20 per million
- First 400,000 GB-seconds free
- Monthly cost: $0.00 (within free tier)
3. CloudWatch Logs
- ~5KB per execution = 5MB total
- First 5GB free per month
- Monthly cost: $0.00 (within free tier)
Total: ~$3.00 per month for 1,000 queries
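The arithmetic above can be wrapped in a tiny estimator so you can re-run it for your own volumes. The default token counts and prices are the assumptions from this breakdown, not live pricing, so check the current Bedrock price list before relying on it:

```python
def monthly_bedrock_cost(queries, in_tokens=200, out_tokens=150,
                         in_price=3.00, out_price=15.00):
    """Estimate monthly Claude 3.5 Sonnet cost in USD.
    Prices are per million tokens; token counts are rough per-query averages."""
    input_cost = queries * in_tokens / 1_000_000 * in_price
    output_cost = queries * out_tokens / 1_000_000 * out_price
    return round(input_cost + output_cost, 2)

print(monthly_bedrock_cost(1_000))   # 2.85
print(monthly_bedrock_cost(10_000))  # 28.5
```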
Conclusion
You've just built a fully functional AI agent that:
- Uses Claude 3.5 Sonnet for natural language understanding
- Autonomously decides when to use tools
- Calls external APIs to get real-time data
- Handles errors gracefully
- Costs less than $3/month for typical usage
- Scales automatically with demand
This is the foundation for building powerful AI applications. The pattern you learned—Agent → Action Group → Lambda → External API—can be applied to countless use cases.
Thank You!
I hope this guide has shown you that building AI agents is not only possible but actually straightforward with the right tools.
Coming in Part 2:
- Building multi-tool agents that orchestrate complex workflows
- Adding Knowledge Bases for document retrieval (RAG)
- Implementing agent memory and conversation context
- Advanced error handling and recovery strategies
Stay tuned!
