DEV Community

Ömer Berat Sezer
Ömer Berat Sezer

Posted on • Edited on

Agent with Local, Remote MCP Tools using AWS Strands Agents 🧬 Bedrock, Nova, Fast API, Streamlit

Many major tech companies have declared this year as the Year of AI Agents. Additionally, several new agent development frameworks have been released in the past three months. In the past three months, TWO powerful AI agent development frameworks have been released:

  • AWS Strands Agents
  • Google Agent Development Kit (ADK)

Previous post was related agent sample code with Google ADK to implement remote & local MCP.

This time, we're diving to AWS Strands way.. In both frameworks, when implementing in the same scenario, even if some parts look the same, there are certain differences in approach (e.g. session, runners)

In this post, we’ll explore what the AWS Strands Agent is and how you can build an end-to-end app with remote and local MCP (Model Context Protocol)tools using Strands, Bedrock, Nova, FastAPI, and a Streamlit UI.

Table of Contents

What is AWS Strands Agents?

  • Strands agent is an open-source framework to develop AI agents to run anywhere:
    • VSCode, Terminal,
    • Docker Container
    • AWS Lambda
    • AWS ECS (Elastic Container Service)
    • AWS EKS (Elastic Kubernetes Service)
  • Well documentation to view/learn agents, tools, workflows, memory, multi agents, etc.

Motivation: Why Use an Agent Framework?

  • Structured workflows: Automates decision-making, tool use, and response generation in a clear loop.
  • Session & memory support: Keeps context across interactions for smarter, personalized behavior.
  • Multi-agent orchestration: Lets specialized agents collaborate on complex tasks.
  • Tool integration: Easily plug in MCP tools, APIs, functions that agents know when and how to use them.
  • Multi-model flexibility: Use and switch between different LLMs (e.g. GPT, Claude, Nova).
  • Production-ready: Built-in logging, monitoring, and error handling for real-world use.

Agent Loop

  • The agent loop is the mechanism through which a Strands agent interprets user input, makes decisions, uses tools, and produces responses. It enables advanced, multi-step reasoning and actions by tightly integrating tools with language models..

agent-loop

Fundamentally, the agent loop operates through the following steps:

  • Receives user input and contextual information,
  • Processes the input using a language model (LLM),
  • Decides whether to use tools to gather information or perform actions,
  • Executes tools and receives results,
  • Continues reasoning with the new information,
  • Produces a final response or iterates again through the loop.

This cycle can occur several times during a single user interaction, enabling the agent to carry out intricate, multi-step reasoning and act autonomously.

What is Model Context Protocol (MCP)?

MCP is an open standard designed to standardize how Large Language Models (LLMs) communicate with external applications, data sources, and tools.

Agent App with Local & Remote MCP using AWS Strands, Nova, FastAPI and Streamlit

TWO small sample projects with MCPs on GitHub using AWS-Strands:

Local MCP Tool - FileOps

FileOps MCP runs on Linux (WSL). Please install nodejs, npm, npx in your system to run npx-based MCP tool.

LocalMCP

Remote MCP Tool - Serper

Loging into the Serper App and Serper API key is required to run Serper MCP tool. Also, please install nodejs, npm, npx in your system to run npx-based MCP tool.

RemoteMCP

Installing Dependencies & Reaching Gemini Model

  • Please install AWS Strands Agents & Tools libraries:
pip install strands-agents
pip install strands-agents-tools strands-agents-builder
Enter fullscreen mode Exit fullscreen mode
  • Enable AWS Bedrock model access in your region or US-West-2

    • AWS Bedrock > Bedrock Configuration > Model Access > AWS Nova-Pro, or Claude 3.7 Sonnet, or Llama 4
  • In this code, we'll use AWS Nova-Pro, because it's served in different regions by AWS.

  • After model access, give permission in your IAM to access AWS Bedrock services: AmazonBedrockFullAccess

  • 2 Options to reach AWS Bedrock Model using your AWS account:

    • AWS Config: With aws configure, to create configand credentials files
    • Getting variables using .env file: Add .env file:
AWS_ACCESS_KEY_ID= PASTE_YOUR_ACCESS_KEY_ID_HERE
AWS_SECRET_ACCESS_KEY=PASTE_YOUR_SECRET_ACCESS_KEY_HERE
Enter fullscreen mode Exit fullscreen mode

Frontend - Streamlit UI

# app.py
import streamlit as st
import requests
import json 

st.set_page_config(page_title="AWS Strands Agent Chat", layout="centered")

st.title("AWS Strands Agent MCP")

# initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

user_query = st.chat_input("Ask for system file tool commands or anything...")

if user_query:
    st.session_state.messages.append({"role": "user", "content": user_query})

    try:
        response = requests.post("http://localhost:8000/ask", json={"query": user_query})
        response.raise_for_status()

        response_json = response.json()
        assistant_text = response_json["response"]["message"]["content"][0]["text"]

    except Exception as e:
        assistant_text = f"Error: {str(e)}"

    st.session_state.messages.append({"role": "assistant", "content": assistant_text})

# display all messages in order
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).markdown(msg["content"])
Enter fullscreen mode Exit fullscreen mode

Backend Local MCP

# agent.py
import asyncio
from contextlib import asynccontextmanager, AsyncExitStack
from fastapi import FastAPI, Request
from pydantic import BaseModel
from strands import Agent
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

MODEL = "us.amazon.nova-pro-v1:0"
MCP_TOOL_PATH = "/home/omer/mcp-test"

bedrock_model = BedrockModel(
    model_id=MODEL,
    temperature=0.3,
    top_p=0.8,
)

agent = None
mcp_client = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global agent, mcp_client

    async with AsyncExitStack() as stack:
        mcp_client = MCPClient(lambda: stdio_client(
            StdioServerParameters(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-filesystem", MCP_TOOL_PATH],
            )
        ))

        stack.enter_context(mcp_client)

        tools = mcp_client.list_tools_sync()
        agent = Agent(
            model=bedrock_model,
            system_prompt=(
                "Help user interact with the local filesystem using available tools. "
                "Fallback to LLM knowledge if necessary."
            ),
            tools=tools
        )

        yield

app = FastAPI(lifespan=lifespan)

class QueryRequest(BaseModel):
    query: str

@app.post("/ask")
async def ask(request: Request):
    data = await request.json()
    query = data.get("query", "")
    try:
        response = agent(query, stream=False)
        return {"response": response}
    except Exception as e:
        return {"error": str(e)}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("agent:app", host="0.0.0.0", port=8000)
Enter fullscreen mode Exit fullscreen mode

Local MCP Run & Demo

Run frontend (app.py):

streamlit run app.py
or
python -m streamlit run app.py
Enter fullscreen mode Exit fullscreen mode

Run backend (agent.py):

uvicorn agent:app --host 0.0.0.0 --port 8000
Enter fullscreen mode Exit fullscreen mode

Demo: GIF on GitHub

Backend Remote MCP

# agent.py
import asyncio
from contextlib import asynccontextmanager, AsyncExitStack
from fastapi import FastAPI, Request
from pydantic import BaseModel
from dotenv import load_dotenv
from strands import Agent
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters
import os

load_dotenv()

MODEL = "us.amazon.nova-pro-v1:0"

bedrock_model = BedrockModel(
    model_id=MODEL,
    temperature=0.3,
    top_p=0.8,
)

agent = None
mcp_client = None
serper_api_key = os.getenv("SERPER_API_KEY")

@asynccontextmanager
async def lifespan(app: FastAPI):
    global agent, mcp_client

    async with AsyncExitStack() as stack:

        mcp_client = MCPClient(lambda: stdio_client(
            StdioServerParameters(
                command='npx',
                args=["-y", "serper-search-scrape-mcp-server"],
                env={"SERPER_API_KEY": serper_api_key}
            )
        ))

        stack.enter_context(mcp_client)

        tools = mcp_client.list_tools_sync()
        agent = Agent(
            model=bedrock_model,
            system_prompt=(
                """You are a research assistant and search on SERPER, search and give results in link, title, content
                'You MUST:
                - Use **google_search** to perform web searches when the user is asking for general information, how-to guides, comparisons, news, or any content that could be sourced from the internet. This tool retrieves:
                    - Perform web searches using user queries to find organic results, FAQs, related searches, and knowledge graph entries.
                    - Handle a wide range of search intents: informational, comparative, educational, technical, current events, etc.
                    - Always return useful summaries along with links to the most relevant pages.
                **Tool Parameters**
                    - `q`: Required. The search query string (e.g., "how Kubernetes works", "latest AI trends 2025"). Retrieve from the prompt.
                    - `gl`: Required. Geographic region code in ISO 3166-1 alpha-2 format (e.g., "us", "de", "gb"). Use "en".
                    - `hl`: Required. Language code in ISO 639-1 format (e.g., "en", "fr", "es"). Use "en.
                    - `location`: Required. Location for search results (e.g., 'SoHo, New York, United States', 'California, United States'). Use "United States".
                Always summarize the top results clearly and include direct URLs for reference.
                - Use **scrape** to extract content from a specific webpage when:
                    - The user provides a URL and asks for content, summaries, or metadata
                    - A relevant link was previously found via **google_search** and needs to be explored further
                - Use the tools wisely to assist users. Based on the provided results, use Function tool call returns, retrieve only content.  
                - Parse the JSON response carefully and extract **relevant fields**. Give the search results with TITLE, LINK, CONTENT or SNIPPET.
                - For all other questions, respond using your own knowledge."""
            ),
            tools=tools
        )

        yield

app = FastAPI(lifespan=lifespan)

class QueryRequest(BaseModel):
    query: str

@app.post("/ask")
async def ask(request: Request):
    data = await request.json()
    query = data.get("query", "")
    try:
        response = agent(query, stream=False)
        return {"response": response}
    except Exception as e:
        return {"error": str(e)}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("agent:app", host="0.0.0.0", port=8000)
Enter fullscreen mode Exit fullscreen mode

Remote MCP Run & Demo

Run frontend (app.py):

streamlit run app.py
or
python -m streamlit run app.py
Enter fullscreen mode Exit fullscreen mode

Run backend (agent.py):

uvicorn agent:app --host 0.0.0.0 --port 8000
Enter fullscreen mode Exit fullscreen mode

Demo: GIF on GitHub

Conclusion

In this post, we mentioned:

  • how to access AWS Bedrock Foundation Models,
  • how to implement AWS Strands agent with remote & local MCP tools using AWS Bedrock (Serverless), AWS Nova-Pro LLM, Fast API, Streamlit UI.

If you found the tutorial interesting, I’d love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement 😉

For other posts 👉 https://dev.to/omerberatsezer 🧐

For Tips, Tutorials, Hands-On Labs:

References

Your comments 🤔

  • Which tools are you using to develop AI Agents (e.g. AWS Strands, Google ADK, CrewAI, Langchain, etc.)? Please mention in the comment your experience, your interest?
  • What are you thinking about AWS Strands Agent?
  • What are your thoughts and experiences with MCP tools?
  • What areas of LLM and agent development are you most interested in learning?

=> Welcome to any comments below related AI, agents for brainstorming 🤯🧐

Top comments (7)

Collapse
 
omerberatsezer profile image
Ömer Berat Sezer • Edited

I have evaluated several AI agent frameworks, including LangChain, CrewAI, AWS-Strands and Google ADK, applying them to various use cases, such as multi-agent collaboration, integration with MCP tools, support for different language models and workflow orchestration. I am currently continuing to evaluate them, doing examples with AWS-Strands and Google ADK. So far I like the way they both integrate open source methods themselves (LiteLLM, StdioServerParameters). I hope they will integrate more common, open source methods in their frameworks.

Collapse
 
nevodavid profile image
Nevo David

Pretty cool seeing it all wired up step by step like this - makes me wanna tinker more with these setups myself.

Collapse
 
omerberatsezer profile image
Ömer Berat Sezer

Thanks 😊 Same here, hands-on is the fastest way to learn.. Definitely worth tinkering with, you’ll have a blast.

Collapse
 
dotallio profile image
Dotallio

Super clear walkthrough - love the focus on hands-on MCP usage and detailed setup.
Have you tried mixing AWS Strands agents with any Langchain or CrewAI flows for more complex pipelines?

Collapse
 
omerberatsezer profile image
Ömer Berat Sezer • Edited

Thanks a lot 😊 Happy to hear the walkthrough was clear and helpful.
I haven’t combined them with LangChain or CrewAI yet. It is in my plan while implementing small projects. I think that while LangChain and LlamaIndex are great for managing RAG pipelines, like handling loaders, retrieval, embeddings, and vector stores, more complex agent workflows are often better handled with frameworks like AWS Strands or Google ADK.

Collapse
 
marry_jonas_71b02f2823a04 profile image
Mary Jonas

Pretty cool post! Thanks. :)

Collapse
 
omerberatsezer profile image
Ömer Berat Sezer

Thanks 😊

Some comments may only be visible to logged-in visitors. Sign in to view all comments.