DEV Community

Mindy Jen
Boost Your Agents with MCPs - MCP Fundamentals

We want to get every reader comfortable using MCP with LLMs to accomplish everyday productivity tasks, and to show how LLMs can use multiple tools in concert to tackle more advanced work.

We will first learn the core concepts of MCP by building our first intelligent tool: an analyst that understands natural language and provides contextual responses. This exercise will introduce MCP, the universal open standard for connecting AI systems with data sources, and demonstrate its capabilities.

An analyst MCP server that goes beyond simple arithmetic:

| Feature | Description | Example |
| --- | --- | --- |
| Natural Language Processing | Understands conversational math/statistics queries | "What's 15% of 250?" → calculation + explanation |
| Context Awareness | Remembers previous analyses | References earlier results in conversation |
| Error Handling | Graceful handling of invalid inputs | Clear messages for division by zero, invalid syntax |
| Rich Responses | Detailed explanations with breakdowns | Step-by-step calculation process |
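
To make "Rich Responses" concrete, here is a minimal, framework-free sketch of the kind of structured answer the analyst could return for the percentage example above. The function name and response shape are illustrative, not part of MCP itself:

```python
def percent_of(percent: float, value: float) -> dict:
    """Compute `percent`% of `value` and return a rich, explainable response."""
    result = value * percent / 100
    return {
        "result": result,
        "explanation": f"{percent}% of {value} = {value} x ({percent}/100) = {result}",
        "steps": [
            f"Convert {percent}% to a decimal: {percent / 100}",
            f"Multiply: {value} * {percent / 100} = {result}",
        ],
    }

response = percent_of(15, 250)
print(response["explanation"])
# 15% of 250 = 250 x (15/100) = 37.5
```

Returning the steps alongside the number lets the model narrate the calculation back to the user instead of just echoing a figure.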

What we need to do...

  • MCP Architecture: Understand how MCP connects AI models to tools
  • Server Implementation: Build a working MCP server from scratch
  • Tool Registration: Register functions that AI models can discover
  • Response Formatting: Structure responses for optimal AI interaction
  • Integration Testing: Test our server with the application

A. MCP Protocol Overview

The Model Context Protocol (MCP) is an open standard designed to create a universal interface between AI models and external tools, data, and services. By using Bedrock AgentCore Runtime, developers can transition these tools from local functions to secure, enterprise-grade managed microservices.

Core Concepts of MCP

The protocol replaces custom, one-off integrations with a standardized request-response pattern:

  • Standardization: Provides a single protocol for all tools (e.g., same interface for analysis or weather tools).
  • Interoperability: Works across different models (Claude, GPT, Nova) using the same toolset.
  • Security: Implements sandboxed execution, parameter validation via JSON Schema, and controlled access patterns.

Communication & Message Types

MCP follows a structured flow to ensure the AI model understands and executes tools correctly:

  • Tool Discovery (tools/list): The model identifies accessible tools and their required parameters (schemas).
  • Tool Execution (tools/call): The model sends structured arguments to a specific tool to perform tasks beyond its internal knowledge.
  • Transport Methods: Supports stdio (standard input/output) for local development, as well as HTTP and WebSockets for remote or real-time applications.
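
On the wire, the two message types above are plain JSON-RPC 2.0 requests. The shapes below are illustrative; the tool name `add` and its arguments match the analyzer we build later in this article:

```python
import json

# Tool discovery: ask the server which tools it exposes and their input schemas.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Tool execution: call a specific tool with structured, schema-validated arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"x": 15, "y": 27}},
}

print(json.dumps(call_request, indent=2))
```

Frameworks like FastMCP generate and parse these messages for you, which is why none of the server code below touches JSON-RPC directly.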

Production Deployment with AgentCore

While local tools are useful for development, Bedrock AgentCore Runtime provides the infrastructure for production scaling:

  • Managed Infrastructure: Moves tools to managed services, removing the burden of server maintenance and scaling.
  • Enhanced Security: Integrates with Amazon Cognito to enforce strict authentication, ensuring only authorized agents can trigger sensitive operations.
  • Simplified Integration: A Strands Agent can connect to a remote MCP server and handle authentication handshakes automatically, consuming cloud tools as if they were local functions.

Summary Table: Benefits of Managed MCP

| Feature | Local MCP (stdio) | Managed AgentCore Runtime |
| --- | --- | --- |
| Primary Use | Development & testing | Enterprise-scale production |
| Security | Local process isolation | Amazon Cognito authentication |
| Scaling | Manual/local | AWS-handled auto-scaling |
| Accessibility | Limited to one machine | Shareable across teams/agents |

Key Takeaway: This architecture allows developers to build a reusable library of secure MCP tools that any agent in an organization can invoke with minimal code.

B. Key MCP Concepts

Deploying custom tools at scale requires moving beyond local functions to a managed infrastructure that is secure, authenticated, and scalable. Bedrock AgentCore Runtime enables the deployment of Model Context Protocol (MCP) servers as managed services, transitioning tools from local Python decorators to enterprise-grade microservices.

1. Building and Deploying an MCP Server

The development process is simplified using the FastMCP framework, which automates protocol message formatting and JSON Schema generation.

a. Define the Local Server

Using FastMCP, you can quickly define tools with standard Python type hints:

```python
from mcp.server import FastMCP

# Create the MCP server instance
mcp = FastMCP("Analyzer")

# Register a tool; FastMCP handles the schema generation automatically
@mcp.tool(description="Add two numbers")
def add(x: float, y: float) -> float:
    return x + y

# Run using stdio for local testing
mcp.run(transport="stdio")
```

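Under the hood, FastMCP reads the function's signature to produce the JSON Schema the model sees during `tools/list`. Here is a simplified, stdlib-only sketch of that idea (not FastMCP's actual implementation):

```python
import inspect

# Map Python annotations to JSON Schema type names (simplified subset).
TYPE_MAP = {float: "number", int: "integer", str: "string", bool: "boolean"}

def tool_schema(func) -> dict:
    """Derive a JSON-Schema-like tool description from a function's type hints."""
    sig = inspect.signature(func)
    properties = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def add(x: float, y: float) -> float:
    """Add two numbers and return the result."""
    return x + y

schema = tool_schema(add)
print(schema["inputSchema"]["properties"])
# {'x': {'type': 'number'}, 'y': {'type': 'number'}}
```

This is why strict type hints matter so much: they are the raw material for the contract the AI model reads.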
b. Deploy to AgentCore Runtime

Once containerized, the server is registered with the Runtime Client to become a managed cloud service:

```python
from bedrock_agentcore.tools.runtime_client import RuntimeClient

runtime_client = RuntimeClient(region="us-east-1")

# Deploy with Cognito security
mcp_server = runtime_client.create_mcp_server(
    name="SearchService",
    image="your-docker-image-uri",
    auth_config={
        "type": "COGNITO",
        "user_pool_id": "us-east-1_xxxxxxxxx",
        "client_id": "xxxxxxxxxxxxxxxx"
    }
)
```

2. Connecting a Strands Agent to a Remote Tool

Strands Agents leverage the AgentCoreRuntime to connect to these remote services. The agent automatically manages the authentication handshake, treating the remote MCP server as a local capability.

```python
from strands import Agent
from strands.models import BedrockModel
from strands_tools.runtime import AgentCoreRuntime

# Connect to the deployed runtime
agentcore_runtime = AgentCoreRuntime(region="us-east-1")

# Create an agent that uses the remote managed tool
agent = Agent(
    model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
    tools=[agentcore_runtime.mcp_tool(server_name="SearchService")]
)

# Execution flow: Discovery -> Intent Recognition -> Parameter Extraction -> Execution
agent("What are the key highlights from the latest AWS re:Invent?")
```


Summary of MCP Implementation

| Component | Role | Technology |
| --- | --- | --- |
| Frontend | User interface and real-time chat | React |
| Backend | Agent orchestration | FastAPI with Strands Agents |
| MCP Servers | Tool implementations | Python with FastMCP |
| AI Models | Natural language processing | Amazon Nova Pro / Claude |

Key Takeaway: Bedrock AgentCore Runtime acts as the "production glue" for AI agents. It shifts the responsibility for security, scaling, and server management to AWS, allowing organizations to maintain a secure library of reusable tools accessible via a single line of code.

C. Building an Analyst MCP Server

Let's build our first MCP server: an analyzer, created with the FastMCP framework, that handles natural language math queries with context and error handling. It demonstrates how to turn standard Python functions into tools an AI agent can use.

A short excerpt of this code is shown below:

```python
from mcp.server import FastMCP
import math

# Create MCP server instance
mcp = FastMCP("Analyzer Server")

@mcp.tool(description="Add two numbers together")
def add(x: float, y: float) -> float:
    """Add two numbers and return the result."""
    return x + y

...

@mcp.tool(description="Divide first number by second number")
def divide(x: float, y: float) -> float:
    """Divide x by y and return the result."""
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y

...

if __name__ == "__main__":
    print("🔢 Starting Analyzer MCP Server...")
    mcp.run(transport="stdio")
```


To build a successful MCP server, focus on these four pillars:

| Pillar | Action | Why it matters |
| --- | --- | --- |
| Context | Use detailed docstrings and descriptions. | Helps the AI choose the right tool for a natural language query. |
| Validation | Always use Python type hints (`x: float`). | Prevents the AI from sending "garbage" data that crashes your code. |
| Resilience | Raise specific errors (e.g., `ValueError`). | Allows the AI to explain the error to the user rather than failing silently. |
| Chaining | Design tools to be composable. | Enables the agent to solve complex, multi-step problems (e.g., compound interest). |
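
What "composable" looks like in practice: a compound-interest question can be answered by chaining simple tools, with the agent (not the tools) doing the orchestration. The functions below are plain-Python stand-ins for registered MCP tools:

```python
def add(x: float, y: float) -> float:
    return x + y

def multiply(x: float, y: float) -> float:
    return x * y

def power(base: float, exponent: float) -> float:
    return base ** exponent

# "What is $1,000 at 5% annual interest after 3 years?"
# The agent chains three simple tools: add -> power -> multiply.
growth_factor = add(1, 0.05)          # 1 + rate
compounded = power(growth_factor, 3)  # (1 + rate) ** years
final_amount = multiply(1000, compounded)
print(final_amount)  # ~= 1157.63 dollars
```

Because each tool stays small and single-purpose, the same three primitives can serve budget math, percentage queries, and growth projections without any new code.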

1. Key Implementation Details

a. Server Creation (Core Implementation Flow)

The server is built by defining a FastMCP instance and decorating functions to expose them as tools.

  • Initialization: creates a new MCP server with a descriptive name; the name helps with debugging and tool discovery.

```python
mcp = FastMCP("Analyzer Server")
```

  • Transport: Uses stdio (Standard Input/Output) for local communication between the AI and the server.

  • Registration: The @mcp.tool decorator tells the AI what the tool does and what inputs it requires.

b. Tool Registration (Essential Tool Components)

```python
@mcp.tool(description="Add two numbers together")
def add(x: float, y: float) -> float:
```

To ensure the AI uses our tools correctly, every function should include:

| Component | Purpose |
| --- | --- |
| `@mcp.tool()` | Registers the function as an MCP tool |
| `description` | Helps the AI understand the tool's purpose |
| Type hints | Enable automatic parameter validation |
| Docstring | Provides additional context |

c. Intelligent Error Handling

```python
if y == 0:
    raise ValueError("Cannot divide by zero")
```

  • Natural Language Mapping: the AI automatically maps user phrases like "sum of 25 and 17" or "25 plus 17" to the add(x=25, y=17) function.
  • Complex Queries: the agent can "chain" tools; for a budget query, it might call subtract() multiple times to reach a final answer.
  • Graceful Failures: by raising ValueError("Cannot divide by zero"), the error message is passed directly back to the AI, allowing it to explain the mistake to the user rather than just crashing.
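
Here is that divide-by-zero behavior in isolation. The try/except below stands in for the MCP layer, which catches tool exceptions and returns the message as the tool's error result:

```python
def divide(x: float, y: float) -> float:
    """Divide x by y and return the result."""
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y

# The MCP layer catches tool exceptions and relays the message to the model.
try:
    divide(10, 0)
except ValueError as exc:
    error_for_model = str(exc)

print(error_for_model)  # Cannot divide by zero
```

Because the message is written for humans, the model can quote it verbatim or paraphrase it ("I can't divide by zero; did you mean a different denominator?").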
d. Best Practices for Developers
  • Be Specific: Use clear descriptions like "Calculate 15% of 200" instead of "Does percentage stuff."
  • Strict Typing: Always use Python type hints to prevent the AI from sending the wrong data formats.
  • Logging: Use import logging to track how the AI invokes our tools in the background.
  • Memory: We can add "Context Awareness" by storing results in a list (e.g., calculation_history) so the agent can reference previous answers.
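
A minimal sketch of the "Memory" idea: a module-level history list that tools append to, so a later query can reference an earlier result. Names like `calculation_history` and `last_result` are illustrative, not part of MCP:

```python
calculation_history: list[dict] = []

def add(x: float, y: float) -> float:
    """Add two numbers, recording the result for later reference."""
    result = x + y
    calculation_history.append({"operation": "add", "inputs": (x, y), "result": result})
    return result

def last_result() -> float:
    """Return the most recent result, e.g. for follow-ups like 'now double that'."""
    if not calculation_history:
        raise ValueError("No previous calculations to reference")
    return calculation_history[-1]["result"]

add(25, 17)               # 42 recorded in history
print(last_result() * 2)  # "now double that" -> 84
```

Exposing `last_result` as its own tool lets the agent resolve conversational references without re-asking the user for numbers it already produced.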

Key Takeaway: The Model Context Protocol (MCP) transforms isolated Python functions into intelligent, conversational tools that an AI agent can understand and execute. The transition from a local script to an enterprise-grade tool happens in three stages:

  1. Standardization (Local): Using FastMCP, we wrap standard Python functions with decorators. By providing strict type hints and descriptions, you create a "contract" that the AI model (like Claude or Nova) can read to understand exactly how to perform math/analysis or data tasks.

  2. Scalability (Managed): You move from running a script on your machine to deploying a containerized image via Bedrock AgentCore Runtime. This shifts the burden of server maintenance and scaling to AWS.

  3. Security (Enterprise): By integrating Amazon Cognito, you ensure that only authorized agents can trigger your tools, protecting sensitive operations like database searches or proprietary calculations.

MCP isn't just about math; it's the universal glue for the AI era. Whether we are building a simple analyzer or a complex anomaly detection system, MCP allows us to build our logic once and use it across any AI model or team in our organization.
