Introduction
In May 2025, AWS released Strands Agents, an open-source SDK designed to make building autonomous AI agents easier and more intuitive. What makes Strands stand out is its “model-first” philosophy: instead of hard-coding control logic or predefined workflows, it lets powerful foundation models handle the heavy lifting. The LLM does the planning, reasoning, and tool invocation, so you can focus on defining the essentials: a model, a prompt, and a set of tools. Strands figures out the rest.
In this post, I’ll walk through why Strands Agents is worth paying attention to, explore its architecture and core features, discuss when to use it, and share a simple example to help you get started.
Why Strands Agents Matters
From Workflow-Driven to Model-Driven
Most existing agent frameworks like LangChain, Semantic Kernel, or Microsoft Agent Framework already let you integrate LLMs with tools. In those systems, the model can decide which tool to use and when to call it. However, you typically still need to define a workflow structure: for example, LangChain’s ReAct loop, a planner-executor setup, or a custom graph of nodes and edges. In other words, the developer still describes the how (the order of operations, the branching logic, and the retry or error-handling behavior) while the LLM fills in the reasoning within that structure.
Strands takes this one step further.
Instead of requiring you to design a chain or graph, Strands lets the model itself handle the orchestration end to end. You simply specify three things: the model, the prompt, and a set of tools. The LLM then decides autonomously when to use a tool, how to combine multiple tools, when to iterate, and when to stop, all without an explicit workflow definition.
This means developers spend less time wiring up control logic and more time defining what the agent should achieve. As models continue improving in reasoning and planning, Strands’ “model-first” approach naturally becomes more capable without needing to change your code.
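To make that concrete, here’s a minimal sketch of those three ingredients. The model ID is illustrative, and the calculator import assumes the optional strands-agents-tools package is installed:

```python
from strands import Agent
from strands_tools import calculator  # provided by the strands-agents-tools package

# Model, prompt, tools: Strands runs the reasoning/tool-use loop itself.
agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",  # illustrative Bedrock model ID
    system_prompt="You are a helpful assistant that can do math when needed.",
    tools=[calculator],
)

agent("What is 1234 * 5678?")
```

Notice there’s no chain, graph, or loop in this code; the model decides when (and whether) to call the calculator.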
Multi-Model and Provider Flexibility
Even though Strands was built by AWS, it’s not tied to AWS models alone. It supports Amazon Bedrock (including models like Claude on Bedrock) as a first-class option, but it also works with Anthropic, Llama APIs, Ollama, LiteLLM, and custom model providers. You can switch models or providers without overhauling your architecture, a big win for flexibility and portability.
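As a rough sketch of that portability, here’s the same Agent pointed at a locally hosted model instead of Bedrock. This assumes the optional Ollama extra (installed via pip install 'strands-agents[ollama]') and an Ollama server running locally; the model name is illustrative:

```python
from strands import Agent
from strands.models.ollama import OllamaModel

# Swap the provider by passing a different model object; the agent code is unchanged.
ollama_model = OllamaModel(
    host="http://localhost:11434",  # assumes a local Ollama server
    model_id="llama3",              # illustrative local model name
)
agent = Agent(model=ollama_model)
agent("Explain why provider portability matters, in one sentence.")
```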
Built-In Tools and MCP Integration
Strands ships with a rich set of built-in tools for making HTTP requests, running Python code, or managing files, and you can just as easily define your own with a simple Python @tool decorator. On top of that, it supports the Model Context Protocol (MCP), allowing agents to connect to external tool servers in a standardized way. This makes it easy to extend capabilities without writing custom integrations every time.
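For instance, here’s a minimal sketch of a custom tool; the word_count function is a hypothetical example, not one of the built-ins:

```python
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

# The function's docstring and type hints become the tool's description and schema.
agent = Agent(tools=[word_count])
agent("How many words are in 'the quick brown fox jumps'?")
```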
Observability and Production Readiness
Strands isn’t just a research toy; it’s built for production. It includes tracing, metrics, detailed logging, and robust error handling for things like rate limits or context overflows. Whether you’re running locally for prototyping or deploying at scale on Lambda, Fargate, EC2, or even on-prem, the same agent code works seamlessly across environments.
Multi-Agent Systems and Autonomous Behavior
Strands also supports multi-agent architectures: agents can call other agents as tools (a concept known as agent-as-tool) or coordinate as part of a larger system. You can even build autonomous looping agents that continuously act, learn, and adapt over time. This makes Strands a strong foundation for building complex, intelligent, and evolving AI systems.
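Here’s a rough sketch of the agent-as-tool pattern, where a sub-agent is wrapped as a callable tool for an orchestrator (the research_assistant tool and its prompt are hypothetical):

```python
from strands import Agent, tool

@tool
def research_assistant(query: str) -> str:
    """Answer research questions using a specialized sub-agent."""
    researcher = Agent(system_prompt="You are a focused research assistant.")
    return str(researcher(query))

# The orchestrator can delegate to the sub-agent just like any other tool.
orchestrator = Agent(tools=[research_assistant])
orchestrator("What is the Model Context Protocol?")
```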
When to Use Strands Agents
Strands Agents shines in scenarios where you want the model’s reasoning to take the lead, rather than spending time hardcoding orchestration logic. It’s especially useful if you’re looking to build intelligent, flexible, and production-ready agent systems without getting bogged down in complex workflow definitions.
You might consider using Strands when:
You’re already in the AWS ecosystem and want native integration with services like Lambda, Step Functions, Bedrock (AgentCore), or Fargate.
You prefer model-driven reasoning over manually constructing detailed control flows or brittle orchestrations.
You want flexibility in model providers, with the freedom to switch between Claude (via Bedrock), Anthropic, Llama, Ollama, or other APIs without rearchitecting your agent.
You need built-in observability and reliability, such as tracing, structured logs, retries, and error recovery — all essential for production workloads.
Your use cases involve tool use or API calls, and you want to easily extend agents with custom tools or integrate with external MCP servers.
You’re exploring multi-agent systems or autonomous loops, where agents can coordinate, reason, and act continuously without direct supervision.
On the other hand, Strands might not be the best fit if your workflows are highly deterministic (for example, fixed data pipelines or rule-based automations), or if your logic is simple enough that a traditional workflow engine or function orchestration tool would do the job just as well.
First, install the SDK:

```bash
pip install strands-agents
# or, with uv
uv add strands-agents
```
Setting Up Before Running Our Simple Chatbot
To run the “Simple Chatbot” Strands agent, you’ll first need to configure credentials for your model provider and make sure the chosen model is enabled for access.
By default, Strands uses Amazon Bedrock as the model provider and Claude 4 Sonnet as the inference model. The default model ID depends on the AWS region configured in your credentials. For example, if your region is us-east-1, the default model ID will be:

```
us.anthropic.claude-sonnet-4-20250514-v1:0
```
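If you’d rather not rely on the region-based default, you can pin the model and region explicitly. A minimal sketch, assuming the BedrockModel class from strands.models (the parameter values are illustrative):

```python
from strands import Agent
from strands.models import BedrockModel

# Pin the model ID and region instead of relying on the defaults.
model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-east-1",
)
agent = Agent(model=model)
```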
For the default Bedrock setup, refer to the Boto3 documentation for instructions on configuring AWS credentials. In most development environments, credentials are set either through AWS_-prefixed environment variables or by running:

```bash
aws configure
```
You’ll also need to enable access to the Claude 4 Sonnet model within Amazon Bedrock. Follow the AWS documentation to request and activate model access before running your Strands agent.
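Once access is granted, you can sanity-check that the model is visible in your region with the AWS CLI (the --query filter below is just one illustrative way to narrow the output):

```bash
aws bedrock list-foundation-models --region us-east-1 \
  --query "modelSummaries[?contains(modelId, 'claude-sonnet-4')].modelId"
```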
When building AI agents with the Strands framework, one of the first hurdles developers face is configuring AWS credentials properly. The steps below show how to set up your credentials once so you can run your Strands agent without repeatedly exporting environment variables.
Why Configure AWS Credentials?
By default, Strands agents use Amazon Bedrock as the model provider, which requires AWS authentication. Instead of exporting credentials every time you run your agent:
```bash
# Don't do this every time!
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
```
You can configure them once and forget about it.
Method 1: Using AWS CLI (Recommended)
Step 1: Install AWS CLI
```bash
# On macOS with Homebrew
brew install awscli

# On Ubuntu/Debian
sudo apt-get install awscli

# On Windows (using Chocolatey)
choco install awscli
```
Step 2: Configure Credentials
```bash
aws configure
```
You'll be prompted to enter:
```
AWS Access Key ID: AKIA... (your access key)
AWS Secret Access Key: +abc... (your secret key)
Default region name: us-east-1 (or your preferred region)
Default output format: json
```
Step 3: Verify Configuration
```bash
aws sts get-caller-identity
```
This should return your AWS account information.
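The output should look roughly like this (values are placeholders):

```json
{
    "UserId": "AIDA...",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}
```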
Simple Chatbot Using a Strands Agent
```python
from strands import Agent

# Create an agent with Claude Sonnet 4.5 (make sure you've enabled access to it in Bedrock)
agent = Agent(model="global.anthropic.claude-sonnet-4-5-20250929-v1:0")

# Interactive loop - take user input until they type "exit"
print("🤖 AWS Strands Agent - Type 'exit' to quit")
print("=" * 50)

while True:
    try:
        user_input = input("\nYou: ").strip()

        if user_input.lower() == "exit":
            print("👋 Goodbye!")
            break

        if not user_input:
            print("Please enter a question or type 'exit' to quit.")
            continue

        print("\n🤖 Agent: ", end="", flush=True)
        try:
            response = agent(user_input)
            print(response)
        except Exception as e:
            print(f"❌ Error: {e}")

    except EOFError:
        print("\n\n⚠️ Interactive input not available in this environment.")
        print("✅ The agent is working correctly! Run this in a terminal to use interactively.")
        break
    except KeyboardInterrupt:
        print("\n\n👋 Goodbye!")
        break
```
Running the Simple Chatbot
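Save the script above as chatbot.py (any filename works) and launch it from a terminal:

```bash
python chatbot.py
```

Type a question at the You: prompt and the agent will respond; type exit (or press Ctrl+C) to quit.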
Conclusion
Strands Agents marks an important shift in how we think about building AI agents — from workflow-driven to model-driven development. Instead of wiring up complex logic or control flows, you can now let the LLM orchestrate the reasoning, planning, and tool use. With strong support for AWS services, multi-model flexibility, and production-grade observability, Strands makes it easier to prototype locally and scale seamlessly across environments.
Whether you’re building autonomous research assistants, customer support bots, or intelligent enterprise agents that coordinate multiple tasks, Strands provides a solid foundation that evolves as models get smarter.
What’s Next
In the next post, we’ll explore how Strands Agents works with other model providers such as OpenAI and Gemini, dive into multi-agent orchestration, and learn how to extend your agents with MCP servers and Guardrails for safety and reliability.
Stay tuned; we’re just getting started with the next generation of model-driven agent frameworks.
Thanks
Sreeni Ramadorai