At work, we build agents using LangChain and LangGraph. It works, but it takes a while to get anything off the ground. When AWS dropped Strands Agents earlier this year, I was curious — is this actually simpler, or just another framework with a different name?
Here’s what I found.
What Even Is Strands?
Strands Agents is an open source SDK from AWS, released in May 2025. The whole idea is a model-driven approach — instead of you wiring up every step of an agent’s logic, you hand the LLM a prompt and a set of tools, and let it figure out what to do.
Three things are all you need to build an agent:
- A model (Bedrock, Anthropic, OpenAI, Ollama — your pick)
- A system prompt
- A list of tools
That’s it. The LLM plans, reasons, calls tools, and loops until the task is done.
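Under the hood, that loop is the classic agentic pattern: the model decides, the runtime executes. A pure-Python sketch of the idea (illustrative only — the names and message shapes here are made up, not the actual Strands internals):

```python
def agentic_loop(model, tools, messages):
    """Model-driven loop: the model plans; the runtime just runs tools."""
    while True:
        reply = model(messages)            # model decides the next step
        messages.append(reply)
        if not reply.get("tool_call"):     # no tool requested -> task is done
            return reply["content"]
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])   # execute the chosen tool
        messages.append({"role": "tool", "content": str(result)})
```

Frameworks differ in how much of this loop you own: LangGraph asks you to write it as explicit nodes and edges; Strands keeps it internal.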
By the way — why ‘Strands’? Think of each tool as a single thread. The agent weaves them together dynamically to complete a task. Compare that to LangGraph where you manually braid the threads in a specific order. With Strands, you hand the model a bundle of threads and say ‘figure it out.’ Honestly, a pretty good name for what it does.
The Code Difference Is Stark
Let’s say you want an agent that can answer a question and use a calculator tool.
With LangGraph:
```python
import operator
from typing import Annotated, TypedDict

from langchain_aws import ChatBedrock
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import END, StateGraph

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression"""
    return str(eval(expression))  # fine for a demo; never eval untrusted input

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
llm_with_tools = llm.bind_tools([calculator])

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]

def call_model(state):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state):
    last_message = state["messages"][-1]
    tool_call = last_message.tool_calls[0]
    result = calculator.invoke(tool_call["args"])
    return {"messages": [ToolMessage(content=str(result), tool_call_id=tool_call["id"])]}

def should_continue(state):
    last = state["messages"][-1]
    return "tool" if getattr(last, "tool_calls", None) else END

graph = StateGraph(AgentState)
graph.add_node("agent", call_model)
graph.add_node("tool", call_tool)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue)
graph.add_edge("tool", "agent")
agent = graph.compile()

result = agent.invoke({"messages": [HumanMessage(content="What is the square root of 1764?")]})
print(result["messages"][-1].content)
```
With Strands:
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764?")
```
Yes, that’s really it — four lines, imports included.
The LangGraph version above is ~40 lines for the same task — and that's still the happy path, no error handling yet.
So Should You Just Switch to Strands?
Not necessarily. Here’s the honest breakdown:
Strands is better when:
- You’re prototyping fast and don’t need tight control over every step
- You trust the LLM to reason well for your use case
- You’re already on AWS and want native Bedrock + Lambda + ECS integration
- Your agent is relatively self-contained (one agent, many tools)
LangGraph is still better when:
- You need precise, predictable control over the flow (e.g. financial workflows, compliance-heavy systems)
- You have complex branching logic that you can’t leave up to the model
- You need parallel sub-agent execution with strict state isolation
- Your team already has a lot of LangGraph investment and tooling
The key difference is the philosophy. LangGraph says: you design the graph, the model executes it. Strands says: you give the model tools and let it figure out the graph itself.
In practice, Strands works great for tasks where the LLM is good at reasoning through what to do next. It starts to feel shaky when you have very specific business logic that must run in a precise order — that’s where explicit graph control still wins.
AWS Integration Is a Real Advantage
If you’re already in the AWS ecosystem, Strands has a genuine edge. It deploys natively to:
- Lambda (for serverless agent invocations)
- ECS/Fargate
- Bedrock AgentCore — a managed runtime that handles identity (IAM, Cognito), memory, long-running tasks (up to 8 hours), and observability out of the box
Adding observability is also almost zero effort. Strands has built-in OpenTelemetry support, so traces go straight to CloudWatch, X-Ray, or any OTLP-compatible tool.
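Because this is standard OpenTelemetry, the usual OTLP conventions apply. A sketch of pointing traces at your own collector (the endpoint and service name below are placeholders, and the exact Strands wiring may differ — check its docs):

```python
import os

# Standard OTLP environment variables (OpenTelemetry convention).
# Both values are placeholders -- point them at your own collector.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
os.environ["OTEL_SERVICE_NAME"] = "travel-agent"
```

Anything that speaks OTLP — CloudWatch via the ADOT collector, Jaeger, Grafana Tempo — can then pick up the traces.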
With LangChain, you’re usually reaching for LangSmith or wiring up your own tracing. It works, but it’s extra setup.
What About Error Handling?
This one surprised me. In LangGraph you explicitly wire up what happens when something fails — fallback edges, retry nodes, conditional logic. It’s more code but you’re in full control.
Strands takes a different approach: when something goes wrong, the model reasons about alternatives rather than following a predetermined error path. For most cases that actually works fine.
For production tools that call external APIs, you can still add resilience by stacking a retry decorator on the tool. Here's a sketch using the tenacity library (check the Strands docs for any built-in retry configuration before rolling your own):

```python
from strands import tool
from tenacity import retry, stop_after_attempt, wait_exponential

@tool
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=2))
def call_external_api(query: str) -> str:
    """Call an external API, retrying up to 3 times with exponential backoff."""
    # your API call here
    ...
```
What About Multi-Agent Setups?
Strands has built-in patterns for this — Graph, Swarm, and Workflow modes. You can also expose one Strands agent as a tool for another agent, which makes composing multi-agent systems pretty clean.
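The "agent as a tool" pattern boils down to wrapping one agent in an ordinary function and handing that function to another agent. A pure-Python sketch of the shape (names are illustrative, not the Strands API):

```python
def make_sub_agent(specialty: str):
    """Stand-in for a specialized agent; specialty plays the system-prompt role."""
    def sub_agent(prompt: str) -> str:
        return f"[{specialty}] answer to: {prompt}"
    return sub_agent

research = make_sub_agent("research")

def research_tool(query: str) -> str:
    """Expose the sub-agent as an ordinary tool for a parent agent."""
    return research(query)

# In Strands, this wrapper would be decorated with @tool and passed to the
# orchestrating agent via Agent(tools=[research_tool]).
```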
For very large multi-agent architectures (many specialized sub-agents with their own isolated contexts), AWS actually recommends Agent Squad over Strands. Agent Squad is a separate library focused purely on routing and orchestration across many agents. Strands + Agent Squad can be used together.
LangGraph is still competitive here for complex multi-agent graphs, especially when you need fine-grained state sharing between nodes.
My Take
I came in skeptical. Another SDK, another abstraction, does the world really need this?
But Strands is genuinely simpler for the 80% case. If your agent needs to answer questions, call APIs, retrieve docs, or do multi-step reasoning — just use Strands. You’ll be done in an hour instead of a day.
If you’re building something where the sequence of operations really matters and you can’t leave decisions to the model, keep LangGraph. It’s not going anywhere, and the explicit graph control is valuable.
For teams already on AWS, I’d start new agent projects with Strands and reach for LangGraph only when I hit a wall.
Quick Start
```shell
pip install strands-agents strands-agents-tools
```

```python
from strands import Agent, tool

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city"""
    # your actual API call here
    return f"It's sunny in {city}, 22°C"

agent = Agent(
    system_prompt="You are a helpful travel assistant.",
    tools=[get_weather],
)

agent("What's the weather like in Lisbon?")
```
Make sure you have AWS credentials configured and Bedrock model access enabled (Claude Sonnet in us-west-2 by default).
Using LangGraph at work and tried Strands? I’d love to hear how it compared for your use case — drop a comment below.
Top comments (1)
The 40-lines-to-3 comparison is compelling for the demo case, but it's worth being precise about what's actually being compared. LangGraph's verbosity comes from explicitly defining state, nodes, edges, and conditional routing -- which is overhead for a calculator query, but becomes the entire point when you need deterministic execution paths. Strands hides that complexity inside the model's reasoning, which is great until you need to explain to an auditor exactly which tools were called in which order and why.