DEV Community

Om Shree

Posted on • Originally published at glama.ai

Implementing a Basic Strands Agent with MCP Servers

In this hands-on guide, we'll walk through building a simple AI agent using the Strands Agents SDK [1], integrated with an MCP (Model Context Protocol) tool. This example uses a local MCP server to demonstrate how Strands connects to external tool endpoints.

1. Set up your environment

Begin by installing the SDK and related packages:

pip install strands-agents strands-agents-tools strands-agents-builder
pip install mcp

Make sure your Python version is 3.10 or higher; both the Strands SDK and the MCP package require it.

2. Create an MCP server

The server exposes simple tools through MCP over HTTP. Below is a minimalist example using FastMCP:

# mcp_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("simple-server", stateless_http=True, host="0.0.0.0", port=8002)

@mcp.tool()
def get_greeting(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")

Run this script with:

python mcp_server.py

This registers a single tool, get_greeting, exposed over HTTP through MCP's streamable transport.
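Under the hood, FastMCP derives the tool's metadata from the function signature and docstring. A hand-built sketch of roughly what the server advertises for get_greeting (the field names follow the shape of a tools/list response; the exact wire format is defined by the MCP spec):

```python
import json

# Illustrative sketch of the metadata an MCP server advertises for get_greeting.
# The description and schema are derived from the function's docstring and type hints.
tool_metadata = {
    "name": "get_greeting",
    "description": "Return a friendly greeting for the given name.",
    "inputSchema": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    },
}

print(json.dumps(tool_metadata, indent=2))
```

This metadata is what the agent's LLM later reads to decide whether (and how) to call the tool.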

3. Build a Strands agent that uses the MCP tool [2]

Create a Python script agent_with_mcp.py:

from strands import Agent
from strands.tools.mcp import MCPClient
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "http://localhost:8002/mcp/"
mcp_client = MCPClient(lambda: streamablehttp_client(MCP_URL))

# Tools must be listed and used inside the MCP client's context manager,
# which keeps the session to the server open.
with mcp_client:
    tools = mcp_client.list_tools_sync()
    agent = Agent(tools=tools)  # defaults to Amazon Bedrock as the model provider
    response = agent("Please greet Alice using the greeting tool.")
    print(response)

This agent [3]:

  • Connects to the MCP server to fetch tool metadata,
  • Sends the user prompt to the LLM,
  • Lets the LLM select and invoke get_greeting, and
  • Returns the result.

This pattern demonstrates how Strands can dynamically discover and use MCP tools for reasoning and task execution.
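The loop above can be sketched in plain Python. This is purely illustrative: in the real system the LLM performs tool selection and argument extraction, so a trivial keyword match and a hardcoded argument stand in for the model here, and all function names are hypothetical.

```python
# Illustrative agent loop: discovery -> selection -> invocation -> response.
# Every name here is a hypothetical stand-in for what Strands does internally.

def discover_tools():
    # Stands in for the MCP tools/list round trip.
    return {"get_greeting": lambda name: f"Hello, {name}!"}

def select_tool(prompt, tools):
    # Stands in for the LLM's tool choice; a naive keyword match.
    return "get_greeting" if "greet" in prompt.lower() else None

def run_agent(prompt):
    tools = discover_tools()
    tool_name = select_tool(prompt, tools)
    if tool_name is None:
        return "No suitable tool found."
    result = tools[tool_name]("Alice")  # argument extraction elided
    return f"The tool replied: {result}"

print(run_agent("Please greet Alice using the greeting tool."))
```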

4. Run the full setup

  1. Start the MCP server:
   python mcp_server.py
  2. Run the agent script:
   python agent_with_mcp.py

Expected output:

Hello, Alice!

This confirms the agent successfully called the remote tool and integrated the result into its response.

5. What’s happening behind the scenes

When the agent runs, it follows a defined internal workflow powered by MCP and Strands:

  1. Tool Discovery: At startup, the Strands agent queries the MCP server for available tools, fetching their metadata—names, parameter schemas, and usage descriptions [1].
  2. User Input Parsing: The user’s request is sent to the LLM, which interprets the intent and chooses the right tool (e.g., get_greeting) based on its metadata.
  3. MCP Tool Invocation: The agent uses the MCP client to send a structured tools/call request to the server. This is a JSON-RPC call carrying the function name and arguments [2].
  4. Tool Execution & Response: The MCP server runs the function (e.g., the Python get_greeting tool), and returns a typed, structured result.
  5. Agentic Reflection: The agent injects the tool's output into the LLM's context. The model then incorporates this result into its next reasoning step and generates the final response.

This flow illustrates how Strands combines runtime tool discovery, standardized MCP communication, and model-first reasoning to execute user requests without hardcoded logic—making the system flexible and maintainable.
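For reference, the tools/call request in step 3 is an ordinary JSON-RPC 2.0 message. A hand-built sketch of its shape (the id is arbitrary here, and the exact field layout is defined by the MCP specification):

```python
import json

# Sketch of the JSON-RPC 2.0 envelope an MCP client sends for a tool call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_greeting",
        "arguments": {"name": "Alice"},
    },
}

print(json.dumps(request, indent=2))
```

The server's reply travels back in a matching JSON-RPC response, which the Strands agent unpacks before handing the result to the model.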

6. Why this matters

  • Decoupled architecture: You can update the tool logic independently by modifying the MCP server.
  • Tool discovery: Agents automatically consume tools advertised by MCP—no manual registration needed.
  • Model-driven flow: The agentic loop takes care of interpreting user intent, invoking tools, and generating responses.

References

  1. AWS blog: "Open Protocols for Agent Interoperability Part 3: Strands Agents & MCP"
  2. Strands Agents documentation – MCP examples
  3. DEV tutorial: "Agent with Local, Remote MCP Tools using AWS Strands Agents"

Top comments (5)

Anna kowoski

Nice one, can u explain why we have 3 different agents mode in AWS (Strands, Bedrock and agentcore) ? why dont we have a single agent that does it all???

Om Shree

Good question! It’s mainly because each one is made for different needs:

  • Strands Agents are open-source and flexible if you want to build and run everything yourself.
  • Bedrock Agents are fully managed by AWS — easy to use but less customizable.
  • AgentCore is more like a toolkit to help developers build agents directly with Bedrock.

So instead of one tool that tries to do everything (and maybe does it poorly), AWS gives options depending on how much control or convenience you want.

Anna kowoski

Thanks Om, this helps a lot.

GARV PAUL 231030258

Well written sir

Om Shree

Thanks Garv!