Introduction
I'm currently developing a personal AI agent called TONaRi. It also has an X (Twitter) account where it posts tech news and more.
https://x.com/tonari_with
The agent's core architecture is built on Strands Agents + Amazon Bedrock AgentCore.
In this article, I combined AgentCore Code Interpreter with Strands Agents' Agent Skills to implement a workflow that retrieves AWS cost data and generates chart images using code. Check out the video demo below:
https://x.com/_cityside/status/2035339843014987845
Although this was an addition to an existing web application codebase, I hope it also serves as a useful reference for building something similar from scratch.
Here are the main technologies used:
- AgentCore Code Interpreter: One of Amazon Bedrock AgentCore's building blocks that executes code in a sandboxed environment
- Agent Skills (SKILL.md): Externalized prompts that are loaded on demand
- Cost Explorer API: An API for retrieving AWS cost data, called from an agent tool
- S3: Stores chart images generated by Code Interpreter, served to the frontend via Presigned URLs
Amazon Bedrock AgentCore Code Interpreter
Amazon Bedrock AgentCore Code Interpreter (hereafter "Code Interpreter") is one of the building blocks that allows agents hosted on AgentCore Runtime to safely execute code in a sandboxed environment.
https://aws.amazon.com/blogs/machine-learning/introducing-the-amazon-bedrock-agentcore-code-interpreter/
Key features include:
- Code execution in a sandboxed environment
- Pre-installed libraries such as pandas, numpy, and matplotlib
- In addition to the default access-restricted environment, you can create user-defined environments with public internet access or VPC connectivity
In this project, I use Code Interpreter to have the agent dynamically generate chart images from data using matplotlib.
Strands Agents Skills
Agent Skills is a mechanism originally proposed by Anthropic. In a nutshell, it works like this: you define procedures you want the agent to execute in Markdown files (similar to system prompts), then inject only the metadata into the system prompt. The agent dynamically loads the Skill files based on the metadata and executes the procedures. This approach helps reduce token consumption and prevents context pollution.
As of March 2026, Agent Skills are available in Strands Agents as well:
https://strandsagents.com/docs/user-guide/concepts/plugins/skills/
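To illustrate the mechanism, here is a minimal, stdlib-only sketch of the idea: parse only the YAML frontmatter of a SKILL.md and surface that one-line summary to the model, while the full instruction body stays on disk until the skill is actually invoked. The `read_skill_metadata` helper is hypothetical, not part of the Strands API:

```python
def read_skill_metadata(skill_md: str) -> dict:
    """Parse the YAML frontmatter block of a SKILL.md (minimal, stdlib-only sketch)."""
    # A SKILL.md starts with "---", frontmatter, "---", then the Markdown body.
    _, frontmatter, _body = skill_md.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta

skill_text = """---
name: aws-cost
description: "Analyze and visualize AWS cost data"
---
# AWS Cost Analysis Skill
(full instructions live here, loaded only when the skill is invoked)
"""

meta = read_skill_metadata(skill_text)
# Only this one-line summary goes into the system prompt:
print(f"- {meta['name']}: {meta['description']}")
```

The full body (which can be long) is read only after the agent decides, from the metadata, that the skill is relevant to the current request.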
For this project, I defined the following workflow as a Skill:
1. Call the Cost Explorer API tool to retrieve cost data for the user-specified period
2. Call the cost visualization tool
   1. Convert cost data into a chart image using Code Interpreter
   2. Upload the image to S3
   3. Return the S3 presigned URL
Processing Flow
Here's a simplified overview of the processing flow:
User: "Show me this month's AWS costs"
↓
Main Agent
├─ ① skills tool: Load skill
├─ ② get_aws_cost tool: Call Cost Explorer API
└─ ③ execute_python tool
     ├─ ③-1 Generate matplotlib chart via Code Interpreter
     ├─ ③-2 Upload to S3
     └─ ③-3 Return presigned URL
↓
Frontend: Detect S3 image URL in text → Display inline in chat
Implementation
get_aws_cost: Cost Data Retrieval Tool
The AWS cost retrieval tool is defined as an agent tool using the @tool decorator. Data retrieval is deliberately kept separate from the chart-image generation done in Code Interpreter.
import json
from datetime import date, timedelta

import boto3
from strands import tool

_ce_client = boto3.client("ce", region_name="ap-northeast-1")


@tool
def get_aws_cost(
    period: str = "monthly",
    months: int = 1,
    group_by_service: bool = True,
) -> str:
    """Retrieve AWS cost data from Cost Explorer.

    Use this tool to fetch cost data. Then pass the result to execute_python
    to create matplotlib charts for visualization.

    Args:
        period: Granularity - "monthly" or "daily".
        months: Number of months to look back (default: 1, max: 6).
        group_by_service: If True, break down costs by AWS service.

    Returns:
        JSON string with cost data.
    """
    # Date-range computation simplified for the article
    end = date.today().isoformat()
    start = (date.today() - timedelta(days=30 * min(months, 6))).isoformat()

    response = _ce_client.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY" if period == "monthly" else "DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    data = response["ResultsByTime"]
    return json.dumps({"data": data})
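The exact JSON shape returned by the tool is an implementation detail; assuming a simple flattened list of service/amount pairs like the hypothetical example below, the agent (or the chart code it generates) can aggregate it with the stdlib alone:

```python
import json

# Hypothetical output shape of get_aws_cost (for illustration only)
cost_json = json.dumps({
    "data": [
        {"service": "Amazon Bedrock", "amount": 12.34},
        {"service": "Amazon S3", "amount": 1.05},
        {"service": "AWS Lambda", "amount": 0.42},
    ]
})

records = json.loads(cost_json)["data"]
total = sum(r["amount"] for r in records)       # overall spend for the period
top = max(records, key=lambda r: r["amount"])   # most expensive service
print(f"Total: ${total:.2f}, top service: {top['service']}")
```

Keeping the shape this simple makes it easy for the model to write correct matplotlib code against it in the next step.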
execute_python: Code Execution Tool
Similarly, Code Interpreter code execution is defined as an agent tool using the @tool decorator. To reliably capture matplotlib figures, the tool automatically injects capture code before and after the agent-generated code.
import os

import boto3
from bedrock_agentcore.tools.code_interpreter_client import code_session

CODE_INTERPRETER_REGION = os.getenv("CODE_INTERPRETER_REGION", "ap-northeast-1")
OUTPUT_BUCKET = os.environ["CODE_INTERPRETER_OUTPUT_BUCKET"]  # S3 bucket for chart images
_s3_client = boto3.client("s3", region_name=os.getenv("AWS_REGION", "ap-northeast-1"))
@tool
def execute_python(code: str, description: str = "") -> str:
    """Execute Python code in a sandboxed environment. Use this to run data analysis,
    generate charts with matplotlib, or perform calculations.

    Available libraries: pandas, numpy, matplotlib, json, datetime.
    Use ONLY matplotlib for plotting (not seaborn).
    Use English for all chart labels and titles (Japanese fonts are not available).

    IMPORTANT for chart generation:
    - Do NOT call plt.savefig() — images are auto-captured from open figures.
    - Do NOT call plt.close() — closing figures prevents image capture.
    - Just create figures with plt.subplots() and leave them open.
    - Do NOT use boto3 — the sandbox has no AWS credentials.

    Args:
        code: Python code to execute.
        description: Optional description of what the code does.

    Returns:
        JSON string with execution results including stdout, stderr, and image URLs.
    """
    # Automatically inject matplotlib image capture code
    img_code = f"""
import matplotlib
matplotlib.use('Agg')
{code}
import matplotlib.pyplot as plt, base64, io, json as _json
_imgs = []
for _i in plt.get_fignums():
    _b = io.BytesIO()
    plt.figure(_i).savefig(_b, format='png', bbox_inches='tight', dpi=100)
    _b.seek(0)
    _imgs.append({{'i': _i, 'd': base64.b64encode(_b.read()).decode()}})
if _imgs:
    print('_IMG_' + _json.dumps(_imgs) + '_END_')
plt.close('all')
"""
    with code_session(CODE_INTERPRETER_REGION) as code_client:
        response = code_client.invoke("executeCode", {
            "code": img_code,
            "language": "python",
            "clearContext": False,
        })
    # Extract images from stdout using _IMG_..._END_ markers
    # Upload to S3 and return presigned URLs
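The extraction and upload steps elided above can be sketched roughly as follows. This is an illustrative sketch, not the article's exact implementation: `extract_images` and `upload_and_presign` are hypothetical helper names, and the bucket/key wiring is omitted:

```python
import base64
import json
import re


def extract_images(stdout: str) -> list[dict]:
    """Pull the base64-encoded figures out of the _IMG_..._END_ marker in stdout."""
    match = re.search(r"_IMG_(.*?)_END_", stdout, re.DOTALL)
    return json.loads(match.group(1)) if match else []


def upload_and_presign(bucket: str, key: str, img_b64: str, expires: int = 3600) -> str:
    """Upload one decoded PNG to S3 and return a presigned GET URL (sketch)."""
    import boto3  # imported lazily so the parsing helper above has no AWS dependency

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=base64.b64decode(img_b64),
        ContentType="image/png",
    )
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```

Because the presigned URL is time-limited, the frontend can fetch the image directly from S3 without the bucket being public.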
Creating the SKILL.md
Now that the tools are defined, we create the Agent Skill that defines how to call them. The directory structure looks like this:
agentcore/
├── skills/
│ └── aws-cost/
│ └── SKILL.md
├── app.py
└── ...
The SKILL.md file contains YAML frontmatter and a Markdown-formatted prompt:
---
name: aws-cost
description: >-
  Analyze and visualize AWS cost data using get_aws_cost
  for data retrieval and execute_python for matplotlib chart generation
allowed-tools: get_aws_cost execute_python
---
# AWS Cost Analysis Skill
Two-step process: fetch data with `get_aws_cost`,
then visualize with `execute_python`.
## Critical Rules
- **NEVER call plt.savefig()** — images are auto-captured from open figures.
- **NEVER call plt.close()** — closing figures prevents image capture.
- **Use English for ALL text** in charts — Japanese fonts are unavailable.
## Step 1: Fetch Data
(How to call get_aws_cost)
## Step 2: Visualize
(matplotlib code template)
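For reference, the chart code the agent generates from this template tends to look something like the sketch below. The data values are made up for illustration; note how it follows the Critical Rules: headless backend, English labels, and no `plt.savefig()` or `plt.close()` so the tool can capture the open figure:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as in the sandbox
import matplotlib.pyplot as plt

# Example cost data, as it might come back from get_aws_cost (illustrative values)
services = ["Bedrock", "S3", "Lambda", "CloudWatch"]
costs = [12.34, 1.05, 0.42, 0.18]

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(services, costs, color="steelblue")
ax.set_xlabel("Cost (USD)")
ax.set_title("AWS Cost by Service (This Month)")
ax.invert_yaxis()  # largest cost on top
fig.tight_layout()
# No plt.savefig() / plt.close() — the tool auto-captures open figures.
```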
Integrating with the Agent
The tools are passed via the tools parameter, and the skills directory is loaded through the AgentSkills plugin, which is then handed to the agent.
from strands import Agent, AgentSkills

from src.agent.aws_cost import get_aws_cost
from src.agent.code_interpreter import execute_python

# Initialize the Skills plugin
skills_plugin = AgentSkills(skills="./skills/")

# Create the agent
agent = Agent(
    tools=[*other_tools, execute_python, get_aws_cost],
    plugins=[skills_plugin],
    system_prompt=system_prompt,
)
I'll skip the frontend implementation details, but essentially it detects image URLs in the agent's response and automatically fetches and displays them inline.
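The detection idea can be sketched in Python (the real frontend does the equivalent in JavaScript, and the URL pattern below is an assumption about what an S3 presigned URL for a PNG looks like):

```python
import re

# Matches S3 object URLs ending in .png; presigned URLs append query parameters
S3_URL_PATTERN = re.compile(
    r"https://[\w.-]+\.s3[\w.-]*\.amazonaws\.com/\S+\.png\S*"
)


def find_chart_urls(text: str) -> list[str]:
    """Return all S3 chart-image URLs embedded in the agent's reply text."""
    return S3_URL_PATTERN.findall(text)


reply = (
    "Here is your cost chart: "
    "https://my-bucket.s3.ap-northeast-1.amazonaws.com/charts/cost.png?X-Amz-Signature=abc123"
)
print(find_chart_urls(reply))
```

Each matched URL is then rendered as an inline `<img>` in the chat view instead of raw text.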
Demo
Here's what it looks like when the skill is actually running. Since the chart-generating code is dynamically created by the agent, the output varies depending on how you phrase your instructions.
Here's the video demo again from the beginning of the article:
https://x.com/_cityside/status/2035339843014987845
Wrapping Up
That's how I implemented an AWS cost charting feature using Agent Skills + Code Interpreter. (Admittedly, you could just look at the Cost Explorer console for the same information, but this was more of a proof of concept...)
In this implementation, I used the default Code Interpreter tool, which restricts public internet access. However, by using a user-defined Code Interpreter tool, you could enable more flexible code execution. I'd love to explore the possibilities further.

