
Abhinav Kantamaneni

Getting Started with Strands Agents: A Simple Guide to Building AI Agents the Easy Way


Introduction

In May 2025, AWS released Strands Agents, an open-source SDK that makes it easier to build autonomous AI agents. Instead of wiring up complex flows, Strands uses a model-first approach: the language model plans, reasons, and calls tools. You focus on three things: a model, a prompt, and a set of tools.

This post explains what that means, when to use it, and how to spin up a simple chatbot.


Why Strands Agents Matter

From Workflow-Driven to Model-Driven

Traditional agent frameworks (e.g., LangChain, Semantic Kernel) let you connect LLMs to tools but often require you to design the workflow (chains/graphs, branching, retries).
With Strands, you provide the essentials and the model handles:

  • when to call a tool
  • how to combine tools
  • how many steps to take
  • when to stop
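To make the model-driven loop concrete, here is a toy, framework-free sketch. The "model" is a hard-coded stub that decides which tool to call and when to stop; in Strands, a real LLM makes those decisions. All names (fake_model, run_agent, the calculator tool) are illustrative, not part of the SDK.

```python
# Toy sketch of a model-driven agent loop (no Strands required).
# A stub "model" decides which tool to call, how many steps to take,
# and when to stop -- the decisions listed above.

def fake_model(task, history):
    """Stub planner: call the calculator once, then stop."""
    if not history:
        return ("call_tool", "calculator", task)
    return ("stop", history[-1])

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # demo only

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = fake_model(task, history)
        if action[0] == "stop":
            return action[1]
        _, tool_name, tool_input = action
        history.append(TOOLS[tool_name](tool_input))
    return history[-1]

print(run_agent("2 + 3"))  # the stub delegates arithmetic to the tool
```

The point of the sketch: your code supplies the tools and a step budget; the planning logic (here a stub, in Strands the model) owns the control flow.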

Multi-Model and Provider Flexibility

Although built by AWS, Strands can run with multiple providers (e.g., Bedrock/Claude, Anthropic, Llama, Ollama, LiteLLM, custom backends). You can switch models without re-architecting.

Built-In Tools and MCP

Define tools with a simple Python @tool decorator. Strands also supports Model Context Protocol (MCP) so you can plug in external tool servers without custom glue code.
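Based on the SDK's documented @tool decorator, a minimal sketch might look like this. The word_count tool is illustrative, and the try/except fallback (a no-op decorator) is only there so the snippet still runs if strands-agents isn't installed:

```python
try:
    from strands import Agent, tool
except ImportError:        # strands-agents not installed: fall back to a
    def tool(fn):          # no-op decorator so this sketch still runs
        return fn
    Agent = None

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if Agent is not None:
    # Hand the tool to an agent; the model decides when to invoke it.
    agent = Agent(tools=[word_count])
    # agent("How many words are in 'to be or not to be'?")
```

The docstring and type hints matter: the SDK uses them to describe the tool to the model.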

Production-Ready by Design

Strands includes tracing, metrics, structured logs, and robust error handling (rate limits, context overflows). The same code can run locally or on Lambda, Fargate, EC2, and on-prem.

Multi-Agent Patterns

Use agent-as-tool to let one agent call another. Build cooperative teams of specialist agents, or iterative loops that refine their own output over multiple passes.
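A toy, Strands-free sketch of the agent-as-tool pattern: a specialist "agent" is just a callable that the orchestrator reaches through its tool registry. Every name here (research_agent, orchestrator) is illustrative:

```python
# Agent-as-tool, stripped to the bone: the sub-agent is exposed as an
# ordinary tool that the top-level agent can delegate to.

def research_agent(query: str) -> str:
    """Specialist 'agent' (stubbed): would normally run its own model loop."""
    return f"summary of findings on {query}"

def orchestrator(task: str, tools: dict) -> str:
    """Top-level 'agent' (stubbed): delegates research, then composes."""
    notes = tools["research"](task)
    return f"report based on [{notes}]"

print(orchestrator("vector databases", {"research": research_agent}))
```

In Strands, the sub-agent's invocation would be wrapped in a @tool function, but the shape is the same: composition through the tool interface.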


When to Use (and When Not To)

Use Strands Agents if you:

  • prefer model-driven reasoning over hand-built orchestration
  • are in the AWS ecosystem (Bedrock, Lambda, Step Functions, Fargate)
  • want the freedom to switch model providers later
  • need observability (tracing/logs/retries) out of the box
  • plan to use tools/APIs or connect MCP servers
  • are exploring multi-agent systems or autonomous loops

Maybe skip Strands if you:

  • have deterministic, fixed workflows (simple ETL, rule engines)
  • don’t need planning/reasoning and a basic script would do

Quick Start

Install

pip install strands-agents

Or with UV:

uv add strands-agents

Configure AWS Credentials (for Bedrock by default)

aws configure

Provide your Access Key, Secret Key, default region (e.g., us-east-1), and output (json).

Verify:

aws sts get-caller-identity

If you’re using Bedrock models (e.g., Claude), ensure model access is enabled in your AWS account.


Your First Strands Chatbot

from strands import Agent

# Choose a model you have access to in your account/region
agent = Agent(model="global.anthropic.claude-sonnet-4-5-20250929-v1:0")

print("Strands Chatbot — type 'exit' to quit")
print("=" * 50)

while True:
    try:
        user_input = input("\nYou: ").strip()
        if user_input.lower() == "exit":
            print("Goodbye!")
            break

        if not user_input:
            print("Please enter a message.")
            continue

        print("\nAgent: ", end="", flush=True)
        response = agent(user_input)
        print(response)

    except KeyboardInterrupt:
        print("\nGoodbye!")
        break

How It Works (In Plain English)

  1. You provide the model, a prompt (system/user context), and optional tools.
  2. The model decides whether to call a tool, how to combine tools, and when to iterate.
  3. Strands captures traces/logs, handles retries, and passes outputs back to you.
  4. You deploy the same code to local dev, Lambda, Fargate, or EC2.

Tools, MCP, and Extensions

  • Define tools with a Python decorator (HTTP calls, DB queries, Python functions).
  • Connect external capabilities using MCP so agents can talk to standardized tool servers.
  • Add guardrails, validation, and safety checks around tool IO as needed.
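One simple way to add the guardrails mentioned above, sketched without any framework. The wrapper name and the size limits are arbitrary choices for illustration:

```python
def guarded(tool_fn, max_input=2000, max_output=2000):
    """Wrap a tool callable with basic input/output size checks."""
    def wrapper(text: str) -> str:
        if len(text) > max_input:
            raise ValueError("tool input exceeds limit")
        result = tool_fn(text)
        return result[:max_output]  # truncate oversized tool output
    return wrapper

# Example: an echo tool limited to 10 characters of output.
safe_echo = guarded(lambda s: s.upper(), max_output=10)
print(safe_echo("hello agents"))  # prints "HELLO AGEN"
```

The same decorator-style wrapping works for validation (schemas, allow-lists) or redaction before tool results go back to the model.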

Deployment Options

  • Local: fastest iteration for prototypes
  • AWS Lambda: event-driven, serverless
  • AWS Fargate: containerized long-running agents
  • Amazon EC2 / On-prem: full control over runtime and networking

FAQs

Do I need AWS?
No. Strands works with other model providers; Amazon Bedrock is just the default.

Can I switch models later?
Yes. Change the model string or provider settings; keep the rest of your agent code.

Is it good for strict, step-by-step workflows?
If your flow is fixed and simple, a traditional orchestrator or script might be better.


Conclusion

Strands Agents simplifies agent development by letting the model orchestrate the logic. You get multi-provider flexibility, production-grade observability, and powerful extensions (tools and MCP). If you want less boilerplate and more results, the model-first path is a strong default.

