DEV Community

Fonyuy Gita

MCP Isn't Hard, Here's the Easiest Beginner-Friendly MCP MASTERCLASS EVER 🤗 (PART 2)

Understanding MCP: Architecture, Components, and Practical Implementation

Introduction

Welcome back! If you're joining fresh, this is Part 2 of our comprehensive MCP tutorial series. I strongly recommend starting with Part 1: The Foundation, where we explored AI's 70-year journey from Alan Turing's groundbreaking 1950 paper to today's ChatGPT moment, and why we desperately needed MCP to solve the integration crisis facing AI agents.

For those continuing from Part 1, you understand the historical context and the integration nightmare developers face. Now we're pulling back the curtain to see exactly what MCP is, how it works under the hood, and why this protocol might be as revolutionary as HTTP was for the web.

In this part, we'll move from history to architecture, from understanding the problem to mastering the solution. I'll break down complex concepts into digestible pieces that will have you saying "Oh, that actually makes sense!"

Let's dive in.


Table of Contents

Chapter 2: What is MCP?

Chapter 3: Components of an MCP System


Chapter 2: What is MCP? Breaking Down Model Context Protocol

After understanding the historical journey that brought us here, it's time to answer the central question: What exactly is the Model Context Protocol, and why should you care?

Let me start with a story that makes everything clear.

2.1 The Three Pillars: Model, Context, Protocol

Imagine you're building a house. You have the world's best architect—think of this as your AI model, your Claude or GPT-4. This architect is brilliant, can design anything you imagine, and has decades of knowledge about construction, aesthetics, and engineering. But here's the problem: this architect sits in an office with no windows, no phone, and no internet connection.


You want this architect to design a house for a specific plot of land. But how do you tell them about the soil conditions? How do they check local building codes? How do they access weather data to design proper drainage? Every piece of information must be manually carried to them, written on paper, slipped under the door.

This is exactly the problem we face with large language models today. They're brilliant, but isolated.

Anthropic released the Model Context Protocol in November 2024 as an open standard for connecting AI assistants to systems where data lives—content repositories, business tools, and development environments. Let's break down what MCP actually means by examining its three fundamental pillars.

Model: The AI That Needs Information

When we talk about the "Model" in Model Context Protocol, we're referring to large language models like Claude, GPT-4, or any AI system that can understand and generate human language. These models are the "architect" in our story above. They have tremendous knowledge baked in from their training, but that knowledge has a cutoff date and, more importantly, lacks your specific context.


Think about what a model can do on its own: write code, explain concepts, analyze text, and reason through problems. What it cannot do on its own: check your actual calendar, read files from your Google Drive, query your production database, or fetch the latest news from the web. The model is powerful, but it needs a way to interact with the real world.


Here's where things get interesting. Developers traditionally faced an N×M integration problem—they needed to build custom connectors for each combination of AI application and data source. Five different AI tools and ten different data sources meant fifty different integrations. That's unsustainable.

Context: The Information The Model Needs

The second pillar is context, and this is perhaps the most critical concept to understand. In the world of AI, context is everything.

Let me explain why. When you ask an AI model "What should I focus on this week?", the model has no idea what your actual priorities are unless you provide context. It doesn't know about the deadline your manager mentioned in Slack yesterday, the calendar events you have scheduled, or the tasks marked urgent in your project management tool. Without context, even the smartest model is just guessing.


Context in MCP refers to all the information, tools, and capabilities that an AI system needs to be genuinely useful. This includes your files, your data, your APIs, your databases, and your tools. The Model Context Protocol standardizes how this context gets provided to AI models.

Here's a practical example. Imagine you're working with an AI coding assistant and you say "Refactor the authentication code." Without context, the AI has to ask "What authentication code? Can you paste it here?" But with MCP providing proper context, the AI can access your codebase directly, understand your project structure, find the authentication module, and make intelligent suggestions based on the actual code in your repository.

The beauty of MCP is that it provides a standardized way to connect AI models to different data sources and tools, similar to how USB-C standardized device connections. You define the context once through an MCP server, and any MCP-compatible AI client can access it.

Protocol: The Standard Language They Speak

The third pillar is the protocol itself—the standardized rules and format that enable models and context sources to communicate reliably. A protocol is essentially a shared language and set of conventions that both sides agree to follow.


Think about HTTP, the protocol that powers the web. When your browser wants to fetch a webpage, it doesn't need custom code for every website. It just sends an HTTP request following standard rules, and any web server that speaks HTTP can respond. The protocol is what makes the web work at scale.

MCP aims to do the same thing for AI systems. The protocol deliberately reuses message-flow concepts from the Language Server Protocol and is transported over JSON-RPC 2.0—proven, well-understood technologies. This was a smart decision by the MCP architects because they didn't try to reinvent the wheel. Instead, they built on solid foundations that developers already understand.

Here's what the protocol actually specifies:

  • How an AI client discovers what capabilities a server offers
  • How to request resources or invoke tools
  • How to handle errors and edge cases
  • How to maintain security and permissions
  • How to format messages consistently

The protocol ensures that once you build an MCP server, any MCP client can talk to it, and vice versa.
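To make this concrete, here is a sketch of what a capability-discovery exchange looks like on the wire, written as Python dictionaries. The JSON-RPC 2.0 envelope and the `tools/list` method name come from the MCP specification; the example tool and its fields are simplified for illustration.

```python
import json

# A capability-discovery request as a client would send it.
# "tools/list" is the MCP method; the rest is the standard
# JSON-RPC 2.0 envelope.
discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A simplified sketch of the server's reply: each tool advertises
# a name, a description, and a JSON Schema for its parameters.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_current_time",
                "description": "Returns the current time in ISO format.",
                "inputSchema": {"type": "object", "properties": {}},
            }
        ]
    },
}

print(json.dumps(discovery_request))
```

Notice that the client never needs server-specific code: it sends one standard message and learns everything the server can do from the reply.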

Let me show you a concrete example using Python with FastMCP, the framework we'll be using throughout this tutorial. When you create an MCP server, you're essentially saying "I have some context to share" and defining it in a standard format.

In a simple example, we can create an MCP server that exposes a single tool. Any AI model that connects to this server through the MCP protocol can now call this tool to get the current time. The protocol handles all the messy details of how the request is formatted, how the response is returned, and how errors are managed.
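Here is a sketch of the logic such a time tool would contain. It is shown as plain Python so it runs on its own; in an actual FastMCP server (which we set up in Chapter 3) this function would be registered with the `@mcp.tool()` decorator.

```python
from datetime import datetime, timezone

# In a FastMCP server this function would be decorated with
# @mcp.tool() on a server created via mcp = FastMCP("Time Server").
# The tool logic itself is ordinary Python:
def get_current_time() -> str:
    """Return the current UTC time as an ISO 8601 string."""
    return datetime.now(timezone.utc).isoformat()

print(get_current_time())
```

The function body is all you write; the protocol plumbing around it is the server framework's job.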

The three pillars work together seamlessly: the Model needs information, the Context provides that information, and the Protocol ensures they can communicate reliably. It's that simple, and that powerful.

2.2 Why MCP Matters Right Now

You might be thinking "Okay, I understand what MCP is, but why should I care? Why is everyone suddenly talking about this?"

The answer lies in timing. We're at a unique moment in AI history where model capabilities have dramatically outpaced integration capabilities. Even sophisticated models are constrained by their isolation from data, trapped behind information silos and legacy systems.

Let me paint the picture of where we are today. Companies like Anthropic, OpenAI, and Google have created AI models that can code, reason, write, analyze, and solve complex problems. These models are incredibly capable. But in most organizations, these powerful models sit on an island, disconnected from the actual data and tools that could make them truly transformative.

Consider a typical enterprise scenario. A company wants to use AI to help their customer support team. They have an AI model that's great at understanding customer issues and crafting helpful responses. But that model needs access to:

  • The customer database to see purchase history
  • The ticketing system to check open issues
  • The knowledge base to pull relevant documentation
  • The inventory system to check product availability

Without MCP or something like it, integrating all of these systems requires building custom code for each one—different API clients for each service, different authentication flows, different error handling, and different ways of formatting the data the AI receives.

This is exhausting and expensive. Teams spend more time building and maintaining integrations than they do actually using AI to solve problems. This is exactly what MCP addresses by providing a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol.

But here's why right now is the perfect moment for MCP. The ecosystem is just beginning to form. When Anthropic released MCP in November 2024, the community rapidly built thousands of MCP servers, and major platforms like Zed, Replit, Codeium, and Sourcegraph integrated MCP support. This isn't theoretical future technology. This is happening now, and the developers who learn MCP today will be the ones building the next generation of AI applications.

Think about the early days of mobile apps. Developers who learned iOS and Android development in the early 2010s became incredibly valuable because they understood the platforms before they became ubiquitous. We're at that same moment with MCP.

2.3 Is MCP the New HTTP? A Bold Comparison

I've heard people call MCP "the HTTP of AI," and while that might sound like marketing hyperbole, there's actually a lot of truth to this comparison. Let me explain why this analogy makes sense, and also where it breaks down.


HTTP, the Hypertext Transfer Protocol, transformed the internet from a collection of isolated systems into the interconnected web we know today. Before HTTP became the standard, different systems used different protocols to share information. You had Gopher for document retrieval, FTP for file transfers, NNTP for newsgroups, and dozens of other competing protocols. Each one required different client software, different server implementations, and different knowledge to use.

HTTP won because it was simple, flexible, and good enough. It didn't try to be perfect for every use case. It just provided a simple request-response model that worked for most situations. Developers could build on it, extend it, and adapt it to their needs. The standardization unlocked exponential growth. Once browsers spoke HTTP and servers understood HTTP, anyone could publish a website that anyone else could view.

MCP is attempting to do the same thing for AI integrations. Before MCP, if you wanted your AI application to interact with external services, you had several options:

  • Use OpenAI's function-calling API or the ChatGPT plugin framework (vendor-specific)
  • Build your own custom integration for each service
  • Use an agent framework like LangChain and write custom tools

Each approach worked, but none were standardized across the industry.

The comparison to HTTP is apt because MCP provides that same kind of foundational standardization. Just as HTTP defines how web clients and servers communicate, MCP defines how AI applications provide context to large language models through a standardized protocol. A developer building an MCP server knows that any MCP-compatible client will be able to use it. A developer building an MCP client knows that any MCP-compatible server will work with it.

But let me be honest about where the comparison breaks down. HTTP had the advantage of solving a problem that everyone immediately understood: you want to fetch a document from a remote server. Simple. MCP is solving a more complex problem. It's not just about fetching data—it's about providing context, tools, and capabilities to AI systems that can then reason about and act on that information.

Also, HTTP emerged in a different technological context. In the early 1990s, we were connecting documents. Today with MCP, we're connecting intelligent agents to data and tools. The complexity is higher, the security considerations are more nuanced, and the use cases are more varied.

That said, MCP was designed to address the M×N problem by transforming it into an M+N problem, just as HTTP did for web content. Instead of building M times N integrations, you build M clients plus N servers. That's the same architectural breakthrough that made the web scale.
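The arithmetic behind that claim, using the numbers from the enterprise example earlier:

```python
ai_apps = 5        # M: AI applications (clients)
data_sources = 10  # N: data sources (servers)

point_to_point = ai_apps * data_sources  # bespoke connectors, one per pair
with_mcp = ai_apps + data_sources        # standardized clients plus servers

print(point_to_point, with_mcp)  # prints: 50 15
```

Every new data source adds one server instead of five new integrations, which is why the savings compound as the ecosystem grows.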

Here's my take: MCP might not become as universally dominant as HTTP, but it doesn't need to be. If MCP becomes the standard way that AI applications connect to external systems—even if just within certain ecosystems or for certain use cases—that's still a massive win for developers and for the AI industry as a whole.

2.4 Real-World Use Cases You Can Relate To

Let me bring all of this theory down to earth with real-world examples that show why MCP isn't just technically interesting, but genuinely useful in practice.

Use Case: The AI Coding Assistant That Actually Knows Your Codebase

Imagine you're working on a large software project. You're using an AI coding assistant to help you write code, but today's coding assistants have a frustrating limitation: they can only see the code you explicitly show them. If you paste a function and ask for help, the AI can help with that function. But it doesn't understand how that function fits into your larger application architecture.

With MCP, you can build or use a server that exposes your entire codebase as context. Here's how this works in practice: when you ask your AI assistant "How does the authentication system work?", the AI can use the MCP tools to search your codebase for authentication-related code, read the relevant files, understand the dependencies, and give you an informed answer based on your actual code, not generic advice.

MCP servers for Git, GitHub, and GitLab enable AI to search codebases, read files, or even commit changes. This transforms AI pair programming by giving the AI access to complete repository context when helping with development tasks.

We'll explore many more examples in our practical exercises, which will be the final part of this tutorial series.


Chapter 3: Components of an MCP System

Before we start building anything, I want you to understand exactly what you're building and why each piece matters. Think of this chapter as learning about the different parts of a car before you start driving. You could just get behind the wheel and press the gas pedal, but understanding what an engine does, how the transmission works, and why brakes matter will make you a much better driver. The same is true for MCP.

In this chapter, we'll methodically break down every component of an MCP system. We'll start with the big picture of how clients, servers, and hosts interact, then zoom into the specific building blocks that make MCP powerful: tools, resources, and prompts. By the end, you'll have a complete mental model of how everything fits together.

I'm not going to throw code at you before explaining what it does. I'm not going to use terminology before defining it. And I'm not going to assume you know anything beyond basic programming concepts. We're building this understanding from the ground up, step by step.

3.1 The Client-Server-Host Trinity: Understanding MCP's Architecture

Every MCP system has three distinct players, and understanding how they relate to each other is absolutely crucial. I'm going to use an analogy that will make this crystal clear, then we'll map it back to the technical reality.

The Restaurant Analogy: A Mental Model That Works

Imagine a restaurant. In this restaurant, you have three essential roles that make everything work:


The Customer (MCP Host) is you, sitting at a table, hungry and wanting food. You have preferences, you have needs, but you can't cook the food yourself. You need someone to take your order and someone to prepare it. In the MCP world, the host is the application that orchestrates everything. This is typically your AI application, like Claude Desktop, Cursor, or a custom application you build. The host initiates the conversation and coordinates between the customer's needs and the kitchen's capabilities.

The Waiter (MCP Client) is the person who takes your order, communicates it to the kitchen, and brings your food back to you. The waiter speaks your language and also speaks the kitchen's language. They translate between the two worlds. In MCP terms, the client is the component that knows how to talk to AI models and also knows how to talk to MCP servers. The client implements the MCP protocol and manages the connection between the host application and the servers.

The Kitchen (MCP Server) is where the actual food preparation happens. The kitchen has specific capabilities, like grilling steaks, making pasta, or baking bread. Each kitchen might specialize in different cuisines. The kitchen doesn't interact directly with customers—it works through the waiter. In MCP, the server is what provides the actual capabilities: access to data, tools that can be executed, or pre-built prompts. Each server exposes specific resources and tools that the AI can use.

Now here's the key insight that many people miss when first learning about MCP: the host and the client are often in the same application, but they serve different conceptual roles. The host is the user-facing part that you interact with, while the client is the protocol-speaking part that communicates with servers.

The Technical Reality: How It Actually Works

Let me now translate this analogy into technical terms with complete precision.


The MCP Host is the application environment where the AI model runs and where users interact with the system. When you open Claude Desktop on your computer, Claude Desktop is the host. When you're writing code in Cursor with AI assistance, Cursor is the host. The host is responsible for:

  • Managing the user interface
  • Handling user inputs
  • Running the AI model
  • Deciding when to use MCP servers for additional context or capabilities

Think of the host as the conductor of an orchestra. It sees the big picture, understands what the user wants, and coordinates all the different components to make that happen.

The MCP Client is the component within the host application that implements the MCP protocol. This is a crucial distinction that often confuses beginners. The client is not a separate application you download—it's a library or module that the host application uses to communicate with MCP servers. The client handles all the protocol details:

  • Discovering what capabilities a server offers
  • Sending properly formatted requests
  • Receiving and parsing responses
  • Managing the connection lifecycle

When you configure Claude Desktop to connect to an MCP server, you're configuring Claude Desktop's built-in MCP client to establish that connection.

The MCP Server is a separate process or service that provides specific capabilities to MCP clients. This is what you will be building throughout this tutorial. The server is where your custom logic lives, where your data sources are accessed, where your tools are implemented. Each server is focused and specialized—it might provide access to a specific database, offer tools for interacting with a particular API, or expose resources from a certain file system.

Here's a concrete example that ties it all together. Let's say you're using Claude Desktop (the host) and you ask it to fetch the latest commit messages from your GitHub repository. Here's what happens step by step:

  1. Your question goes to Claude Desktop, which contains the AI model
  2. Claude Desktop's MCP client recognizes that to answer this question, it needs to connect to a GitHub MCP server
  3. The client sends an MCP request to the GitHub server asking for repository information
  4. The GitHub server receives the request, uses the GitHub API to fetch the actual commit data, formats it according to MCP standards, and sends it back
  5. The client receives this response and passes it to the AI model as context
  6. The AI model now has the commit information and can answer your question intelligently
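Steps 3 and 4 can be sketched as protocol messages. The `tools/call` method and the JSON-RPC envelope follow the MCP specification; the tool name `list_commits`, its arguments, and the response contents are hypothetical examples for illustration.

```python
# Step 3: the client asks the GitHub server to run a tool.
# (Tool name and arguments are hypothetical.)
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_commits",
        "arguments": {"repo": "my-org/my-repo", "limit": 5},
    },
}

# Step 4: the server replies with content the client can hand to
# the model as context (shape simplified for illustration).
tool_call_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [
            {"type": "text",
             "text": "abc123 Fix login bug\ndef456 Add retry logic"}
        ]
    },
}
```

The host and the AI model never see GitHub's API; they only see this standardized request-and-response shape.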

The beauty of this architecture is separation of concerns. The host doesn't need to know anything about GitHub's API. The server doesn't need to know anything about how AI models work. The client provides the bridge, speaking both languages fluently. This is what makes MCP scalable and maintainable.

Why This Architecture Matters for You as a Developer

Understanding this trinity isn't just academic—it fundamentally shapes how you think about building MCP systems. When you build an MCP server, you're not building a complete application. You're building a specialized service that provides specific capabilities through a standardized interface. Your server doesn't need a user interface, it doesn't need to understand natural language, and it doesn't need to make decisions about when to be called. All of that is handled by the host and client.

This means you can focus your server on doing one thing really well: providing access to a particular data source or implementing specific tools. Your GitHub MCP server just needs to be great at GitHub operations. Your database MCP server just needs to be great at database queries. The host and client handle everything else.

This also means that once you build an MCP server, it can be used by any application that has an MCP client. You build it once, and it works with Claude Desktop, Cursor, or any other MCP-compatible host. That's the power of standardization.

3.2 Setting Up Your Development Environment

Before we can explore tools, resources, and prompts with actual code, we need to set up a proper development environment. I'm going to walk you through this step by step, explaining why each piece is necessary.

Understanding What We're Installing and Why

We're going to use Python with FastMCP, which is a modern, developer-friendly framework for building MCP servers. FastMCP was created by Jeremiah Lowin and it dramatically simplifies the process of creating MCP servers by handling all the protocol-level details for you. You focus on what your server does, and FastMCP handles how it communicates.

Python is an excellent choice for learning MCP because it's readable, widely understood, and has a rich ecosystem of libraries for accessing different data sources and APIs.

Step 1: Verify Your Python Installation

First, let's make sure you have Python installed and it's the right version. FastMCP requires Python 3.10 or higher. Open your terminal or command prompt and run:

python --version

You should see something like "Python 3.10.x" or higher. If you see "Python 2.x" or if the command isn't found, go to python.org, download the latest version for your operating system, and install it. On some systems, you might need to use python3 instead of python.
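If you prefer to check from code rather than eyeballing version strings, a small sketch of the same check:

```python
import sys

# FastMCP needs Python 3.10 or newer; this mirrors the manual
# `python --version` check in code:
ok = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
      f"-> {'OK for FastMCP' if ok else 'too old, upgrade to 3.10+'}")
```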

Step 2: Create a Dedicated Project Directory

Organization matters when you're learning. Let's create a clean space for all our MCP experiments:

mkdir mcp-tutorial
cd mcp-tutorial

Step 3: Set Up a Virtual Environment

This is important and often skipped by beginners, but it's a professional practice you should adopt now. A virtual environment is an isolated Python environment that keeps the packages for this project separate from your system Python and other projects.

Create a virtual environment:

python -m venv venv

Now activate it:

On macOS or Linux:

source venv/bin/activate

On Windows:

venv\Scripts\activate

When the virtual environment is activated, you'll see (venv) at the beginning of your terminal prompt.

Step 4: Install FastMCP and the MCP Inspector

With your virtual environment activated, run:

pip install fastmcp
pip install "mcp[cli]"

FastMCP is the framework we'll use to build MCP servers. The second package installs the official `mcp` command-line tool, whose `mcp dev` command launches the MCP Inspector—a tool that lets you connect to your server and interact with it directly, without needing to set up a full host application like Claude Desktop. This is invaluable for debugging.

Step 5: Verify Your Installation

Let's make sure everything installed correctly:

python -c "import fastmcp; print(f'FastMCP version: {fastmcp.__version__}')"

If you see a version number printed, congratulations! FastMCP is installed and working.

Step 6: Create Your First MCP Server File

Create a new file called hello_mcp.py:

touch hello_mcp.py

Open this file in your favorite code editor and type:

from fastmcp import FastMCP

# Create a new MCP server with a name and version
mcp = FastMCP("Hello MCP Server", version="1.0.0")

# This is the entry point that starts the server
if __name__ == "__main__":
    mcp.run()

Let me explain what each line does:

  • The first line imports FastMCP, the framework that handles all the MCP protocol details
  • mcp = FastMCP("Hello MCP Server", version="1.0.0") creates a new MCP server and gives it a name and version number
  • The if __name__ == "__main__" block ensures this code only runs if the file is executed directly
  • mcp.run() starts the server and makes it ready to accept connections

Save this file and run it:

python hello_mcp.py


You should see some output indicating the server is running. The server doesn't do anything useful yet—it doesn't have any tools, resources, or prompts defined—but it's a valid MCP server that can communicate using the protocol. Press Ctrl+C to stop the server.

Understanding What Just Happened

When you ran that server, something important happened behind the scenes. FastMCP set up a JSON-RPC server that can receive MCP protocol messages, registered the server's name and version, and prepared to handle standard MCP messages like capability discovery and resource requests. All of this happened with just those few lines of code, because FastMCP handles the complexity for you.

This is why we're using FastMCP for this tutorial. The underlying MCP protocol involves JSON-RPC 2.0 messages, capability negotiation, error handling, and various other details that would distract from learning the core concepts. FastMCP lets you focus on what your server provides, not how it communicates.
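One of those details FastMCP handles for you is the initialization handshake. A simplified sketch of the client's opening message, written as a Python dictionary—the method name and field names follow the MCP specification, while the client name, version, and protocol revision shown are illustrative values:

```python
# The first message an MCP client sends when it connects. The server
# replies with its own info and capabilities before any tools or
# resources are used. (Values here are illustrative.)
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
```

You will never write this message by hand; FastMCP and the host's client negotiate it automatically, which is exactly the point.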

3.3 Tools: Enabling AI to Take Actions

Resources let AI models read information. Tools let AI models do things. This is where MCP becomes truly powerful, because you're not just providing context—you're providing capabilities.


What Are Tools? The Complete Picture

A tool in MCP is a function that the AI model can ask your server to execute. When you define a tool, you're essentially saying to the AI: "Here's something you can do through me. Tell me when you want it done, give me the necessary parameters, and I'll execute it for you."

Let me use an analogy that makes this concrete. Imagine you're a manager with an assistant. You can ask your assistant to read reports (that's like accessing resources), but you can also ask your assistant to perform tasks: "Please send an email to the client," or "Schedule a meeting for next Tuesday," or "Generate a summary of this quarter's sales data." Those are tools.

The crucial difference between tools and resources is that tools can have side effects. They can change state, modify data, trigger external actions, or cause things to happen in the real world. Resources are read-only and safe; tools are powerful and need to be used carefully.

The Anatomy of an MCP Tool

Every tool has several key properties that define how it works:


The Name identifies the tool. This is what the AI model uses when it wants to call your tool. Names should be descriptive and follow a clear naming convention, like send_email, create_calendar_event, or search_documents.

The Description explains what the tool does and when it should be used. This is critical because the AI model reads tool descriptions to decide which tools to use for a given task. A good description is clear, specific, and includes relevant examples or constraints.

The Parameters define what information the tool needs to do its job. Each parameter has a name, a type, a description, and optionally whether it's required or has a default value.

The Return Value is what the tool sends back after execution. This might be a success message, the results of an operation, or error information if something went wrong.
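Here is how those properties fit together in the tool definition a client receives during discovery. The `send_email` tool below is hypothetical, and the structure (name, description, and a JSON Schema for the parameters) follows MCP's tool-listing format; note that the return value is not declared up front—it comes back in the tool's result after execution.

```python
# A hypothetical send_email tool as a client might see it after
# discovery. The name, description, and parameters map directly
# onto the properties described above.
send_email_tool = {
    "name": "send_email",
    "description": "Send an email to a single recipient.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "subject": {"type": "string", "description": "Subject line"},
            "body": {"type": "string", "description": "Message body"},
        },
        "required": ["to", "subject"],
    },
}
```

The AI model reads exactly this structure when deciding whether and how to call the tool, which is why good names and descriptions matter so much.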

Building Your First Tool: A Simple Calculator

Let's start with something simple and safe: a calculator tool. Create a new file called simple_tools.py:

from fastmcp import FastMCP

# Create the server
mcp = FastMCP("Simple Tools Server", version="1.0.0")

@mcp.tool()
def add_numbers(a: float, b: float) -> str:
    """
    Adds two numbers together and returns the result.

    This tool allows the AI to perform addition calculations.
    Use this when the user asks for the sum of two numbers.

    Args:
        a: The first number to add
        b: The second number to add

    Returns:
        A message containing the sum of the two numbers
    """
    result = a + b
    return f"The sum of {a} and {b} is {result}"

@mcp.tool()
def multiply_numbers(a: float, b: float) -> str:
    """
    Multiplies two numbers together and returns the result.

    This tool allows the AI to perform multiplication calculations.
    Use this when the user asks for the product of two numbers.

    Args:
        a: The first number to multiply
        b: The second number to multiply

    Returns:
        A message containing the product of the two numbers
    """
    result = a * b
    return f"The product of {a} and {b} is {result}"

if __name__ == "__main__":
    mcp.run()

Let me explain what makes this code work:

The @mcp.tool() decorator tells FastMCP that this function should be exposed as a tool that AI models can call. Unlike resources, tools use @mcp.tool() without any arguments because the function name becomes the tool name.

The function parameters a: float and b: float are automatically detected by FastMCP. The type hints tell FastMCP what kind of data these parameters expect, and FastMCP will validate that the AI provides the right types.

The comprehensive docstring is crucial here. The first line gives a brief overview. The "Args" section documents each parameter, which helps the AI understand what values to provide. The "Returns" section explains what the tool gives back.

Run this server:

python simple_tools.py

To explore the tools interactively, stop the server with Ctrl+C and relaunch it under the MCP Inspector:

mcp dev simple_tools.py

You'll see both tools listed. Try calling the add_numbers tool with parameters like a=5 and b=3. You'll see it return "The sum of 5.0 and 3.0 is 8.0".

Building Practical Tools: Real-World Examples

Calculators are nice for learning, but let's build something more useful. Create practical_tools.py:

from fastmcp import FastMCP
from datetime import datetime
import json

mcp = FastMCP("Practical Tools Server", version="1.0.0")

@mcp.tool()
def analyze_text(text: str) -> str:
    """
    Analyzes a piece of text and returns statistics about it.

    This tool counts words, sentences, and characters in the provided text.
    Use this when you need to understand the length or composition of text.

    Args:
        text: The text to analyze

    Returns:
        A JSON string containing text statistics
    """
    words = text.split()
    word_count = len(words)
    char_count = len(text)
    sentence_count = text.count('.') + text.count('!') + text.count('?')  # naive: every '.', '!' or '?' ends a sentence

    stats = {
        "word_count": word_count,
        "character_count": char_count,
        "sentence_count": sentence_count,
        # Average over the words themselves so spaces and newlines don't inflate the figure
        "average_word_length": round(sum(len(w) for w in words) / word_count, 2) if word_count > 0 else 0
    }

    return json.dumps(stats, indent=2)

@mcp.tool()
def create_timestamp_note(note: str) -> str:
    """
    Creates a timestamped note entry.

    This tool takes a note and adds a timestamp to it, useful for logging
    or creating time-stamped records of events or thoughts.

    Args:
        note: The content of the note to timestamp

    Returns:
        The note with an ISO formatted timestamp prepended
    """
    timestamp = datetime.now().isoformat()
    formatted_note = f"[{timestamp}] {note}"
    return formatted_note

@mcp.tool()
def format_as_markdown_list(items: str, ordered: bool = False) -> str:
    """
    Converts a comma-separated list of items into a Markdown formatted list.

    This tool helps format data as markdown lists, either bullet points
    or numbered lists depending on the ordered parameter.

    Args:
        items: Comma-separated items to format (e.g., "apple, banana, orange")
        ordered: If True, creates a numbered list. If False, creates a bullet list.

    Returns:
        A markdown formatted list
    """
    item_list = [item.strip() for item in items.split(',')]

    if ordered:
        formatted = '\n'.join(f"{i+1}. {item}" for i, item in enumerate(item_list))
    else:
        formatted = '\n'.join(f"- {item}" for item in item_list)

    return formatted

if __name__ == "__main__":
    mcp.run()

These tools demonstrate several important patterns:

  • analyze_text shows how to perform computations and return structured data as JSON
  • create_timestamp_note demonstrates working with system functions like datetime
  • format_as_markdown_list shows how to use optional parameters with default values

Run this server and explore these tools with the MCP Inspector.
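One caveat worth knowing: the punctuation count in analyze_text treats each mark as a sentence end, so "Wow!!! Really?" registers four sentences. If you want something sturdier, a regex split that treats a run of terminators as one boundary is a reasonable upgrade (a standalone sketch, not part of the server above):

```python
import re

def count_sentences(text: str) -> int:
    # Split on one-or-more sentence terminators followed by whitespace
    # or end-of-string, so "!!!" counts once and "3.14" doesn't split
    parts = re.split(r'[.!?]+(?:\s+|$)', text.strip())
    # A trailing empty string appears when the text ends with punctuation
    return len([p for p in parts if p])

print(count_sentences("Hello there! How are you? Fine."))  # 3
print(count_sentences("Wow!!! Really?"))                   # 2
print(count_sentences("It costs 3.14 dollars. Nice."))     # 2
```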

Understanding Tool Safety and Best Practices

Tools are powerful, which means they need to be designed carefully. Here are essential principles:

Validation is crucial. Always validate your inputs before using them. Check that strings aren't empty when they shouldn't be, that numbers are in acceptable ranges, and that data formats are correct.

Error handling matters. Wrap your tool logic in try-except blocks to catch and handle errors gracefully. Instead of letting your tool crash, return a clear error message explaining what went wrong.

Be explicit about side effects. If your tool modifies data, sends messages, or changes external state, make that crystal clear in the description. The AI needs to understand the consequences of calling your tool.

Keep tools focused. Each tool should do one thing well. Don't create a tool that "sends email and also updates the database and also logs to a file." Create separate tools for each action.

Here's an example of a well-designed tool with proper error handling:

@mcp.tool()
def divide_numbers(dividend: float, divisor: float) -> str:
    """
    Divides one number by another with error handling.

    This tool performs division while protecting against division by zero.
    Use this when you need to divide numbers safely.

    Args:
        dividend: The number to be divided
        divisor: The number to divide by (must not be zero)

    Returns:
        A message containing the division result or an error message
    """
    try:
        if divisor == 0:
            return "Error: Cannot divide by zero. Please provide a non-zero divisor."

        result = dividend / divisor
        return f"The result of {dividend} divided by {divisor} is {result}"

    except Exception as e:
        return f"Error performing division: {str(e)}"

This tool demonstrates defensive programming: it explicitly checks for division by zero, wraps the operation in error handling, and returns clear, helpful error messages rather than crashing.

3.4 Resources: Providing Data to AI Models

Now let's talk about resources—the first of the data-providing capabilities an MCP server can offer.


What Are Resources? A Clear Definition

Resources in MCP are pieces of data that an AI model can read and use as context. Think of resources as files in a file system or documents in a library. They exist, they contain information, and the AI can request access to them when needed.

Here's the key distinction: resources are passive—they don't do anything, they just exist and can be read. When an AI model wants information from a resource, it asks the MCP server "Can I please see this resource?" and the server responds with the content.

Imagine you're a researcher working on a paper, and you have access to a university library. The books in that library are resources. They sit on shelves, containing information. When you need information about a specific topic, you go to the library, find the relevant book, and read it. The book doesn't do anything active—it's just there providing information when you need it.

The Anatomy of an MCP Resource

Every resource in MCP has three essential properties:


The URI (Uniform Resource Identifier) is a unique identifier for the resource. Just like every web page has a URL, every MCP resource has a URI. The URI follows a pattern like protocol://path/to/resource. For example, file:///home/user/notes.txt or github://repo/main/README.md.

The Name is a human-readable label for the resource. While the URI is precise and machine-friendly, the name is what makes sense to humans. A resource with URI file:///home/user/notes.txt might have the name "Personal Notes".

The MIME Type describes what kind of data the resource contains. Is it plain text? JSON data? A PDF document? The MIME type is formatted like text/plain, application/json, or image/png. This tells the AI model how to interpret the data it receives.
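Put together, the three properties map naturally onto the entry a server advertises when a client lists its resources. Here's an illustrative sketch as a plain Python dict, using the examples from this section (field names follow the `uri`/`name`/`mimeType` convention; treat the exact shape as indicative rather than authoritative):

```python
import json

# A hypothetical entry a server might surface for a resource listing
resource_entry = {
    "uri": "file:///home/user/notes.txt",  # unique, machine-friendly identifier
    "name": "Personal Notes",              # human-readable label
    "mimeType": "text/plain",              # how the AI should interpret the content
}

print(json.dumps(resource_entry, indent=2))
```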

Building Your First Resource: A Simple Example

Let's create an MCP server that exposes a simple text resource. Create a new file called simple_resources.py:

from fastmcp import FastMCP

mcp = FastMCP("Simple Resources Server", version="1.0.0")

@mcp.resource("memo://company/welcome")
def get_welcome_message() -> str:
    """
    This resource provides a welcome message for new employees.
    The AI can read this message and use it to greet or inform new hires.
    """
    return """Welcome to Acme Corporation!

We're thrilled to have you join our team. Here at Acme, we value innovation,
collaboration, and continuous learning. Your first week will include orientation
sessions on Monday and Tuesday, followed by team introductions on Wednesday.

If you have any questions, please don't hesitate to reach out to HR at
hr@acmecorp.com or call extension 1234.

Looking forward to working with you!"""

if __name__ == "__main__":
    mcp.run()

Let me walk you through this code:

The @mcp.resource("memo://company/welcome") line is a decorator that tells FastMCP "this function provides a resource, and its URI is memo://company/welcome". The memo:// part is a custom URI scheme we've chosen to indicate this is a company memo.

The docstring serves as the resource's description. This is shown to AI models when they're deciding whether this resource might be useful for answering a question.

The function returns a simple string with the welcome message content. When an AI model requests this resource, this is exactly what it will receive.

Save this file and run it:

python simple_resources.py

In another terminal, test it with the MCP Inspector:

mcp dev simple_resources.py

The MCP Inspector will start and connect to your server. You should see an interface where you can explore your server's capabilities. Look for the resources section, and you'll see your memo://company/welcome resource listed.

Understanding What Makes Resources Powerful

What you just built might seem simple, but think about the implications. Any AI application that connects to your MCP server can now access this welcome message. The AI doesn't need custom code to fetch it, doesn't need to know where it's stored, and doesn't need special permissions beyond what your server grants.

Now imagine scaling this up. Instead of one static message, you could have resources that read from files, query databases, call APIs, or generate dynamic content.
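A file-backed resource, for instance, is just ordinary Python file I/O inside the function body. Here's a sketch (the file name is hypothetical, and in a real server the function would carry an `@mcp.resource(...)` decorator):

```python
from pathlib import Path

def get_meeting_notes() -> str:
    # In a real server this would sit under a decorator like
    # @mcp.resource("file://notes/meetings"); the body is plain Python.
    notes_path = Path("meeting_notes.txt")  # hypothetical file name
    if not notes_path.exists():
        return "No meeting notes found."
    return notes_path.read_text(encoding="utf-8")

print(get_meeting_notes())
```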

Building Dynamic Resources: Real-World Data

Create a new file called dynamic_resources.py:

from fastmcp import FastMCP
import json
from datetime import datetime

mcp = FastMCP("Dynamic Resources Server", version="1.0.0")

@mcp.resource("system://server/status")
def get_server_status() -> str:
    """
    Provides current server status information including time and uptime.
    This demonstrates how resources can return dynamic, real-time data.
    """
    status_data = {
        "status": "operational",
        "current_time": datetime.now().isoformat(),
        "server_name": "MCP Tutorial Server",
        "version": "1.0.0"
    }

    return json.dumps(status_data, indent=2)

@mcp.resource("data://employees/count")
def get_employee_count() -> str:
    """
    Returns the current number of employees in the company.
    In a real application, this would query a database.
    """
    employee_count = 47
    return f"The company currently has {employee_count} employees."

@mcp.resource("document://handbook/remote-work")
def get_remote_work_policy() -> str:
    """
    Provides the company's remote work policy document.
    AI can reference this when answering questions about remote work.
    """
    return """Remote Work Policy

Effective Date: January 1, 2025

Acme Corporation supports flexible work arrangements. Employees may work remotely
up to three days per week with manager approval. Remote work days must be
scheduled at least 48 hours in advance through the scheduling system.

Requirements for remote work:
- Reliable internet connection (minimum 25 Mbps)
- Dedicated workspace free from distractions
- Availability during core business hours (10 AM - 3 PM)
- Response to messages within 2 hours during work hours

For questions about remote work eligibility, contact your manager or HR."""

if __name__ == "__main__":
    mcp.run()

This server demonstrates several important concepts:

  • The first resource returns JSON data with the current timestamp, showing that resources can be dynamic
  • The second resource simulates querying a database for employee count
  • The third resource provides a multi-paragraph document

Run this server and explore it with the MCP Inspector:

python dynamic_resources.py

In another terminal:

mcp dev dynamic_resources.py

Try accessing each resource multiple times. Notice how the system://server/status resource shows different timestamps each time you access it, because it's generating fresh data with every request.
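Those changing timestamps are plain `datetime` behavior. A handy side effect: ISO 8601 strings sort chronologically, so two successive reads compare in time order even as plain strings:

```python
from datetime import datetime

stamp = datetime.now().isoformat()
later = datetime.now().isoformat()

print(stamp)  # e.g. 2025-06-01T14:32:07.123456
# ISO 8601 strings sort chronologically, which is why they
# also make good prefixes for log lines and timestamped notes
print(later >= stamp)  # True
```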

Resources vs. Tools: Understanding the Difference

This is a critical distinction: resources are for reading data, tools are for taking actions. Resources are like books in a library that you can read but not modify. Tools are like power tools that you can use to build or change things.

When should you use a resource?

  • Configuration files that define system settings
  • Documentation that explains how something works
  • Data summaries that provide current state information
  • Historical records that the AI might need to reference

When should you use a tool instead?

  • Sending emails
  • Creating calendar events
  • Modifying database records
  • Triggering external processes

The key question to ask yourself: "Am I providing information for the AI to read, or am I providing a capability for the AI to execute?" If it's information, use a resource. If it's an action, use a tool.

3.5 Prompts: Pre-Built Conversations (A Brief Overview)

Let's talk about the third capability that MCP servers can provide: prompts. I want to be completely transparent here. Prompts are the least commonly used of the three capabilities, and many production MCP servers never define any prompts at all. That's perfectly fine, because tools and resources cover most real-world needs.

However, prompts solve a specific problem elegantly when you need them.

What Are Prompts and When Do You Actually Need Them?

A prompt in MCP is essentially a reusable template for conversations with the AI. If you find yourself typing the same detailed instructions to your AI assistant over and over, that's a perfect candidate for an MCP prompt.

Here's a concrete scenario: imagine you're a manager who frequently asks the AI to analyze customer feedback. Every time, you type out the same detailed instructions: "Please analyze this feedback for sentiment, identify key pain points, categorize issues by department, and suggest action items." After the fifth time typing this, you realize you're wasting time and probably forgetting to include some criteria each time.

That's where an MCP prompt shines. You can define this analysis template once in your MCP server, and then simply invoke it with the feedback text as a parameter.

The key insight: prompts don't provide data like resources do, and they don't execute actions like tools do. Instead, prompts provide standardized ways to interact with the AI itself. They're templates that ensure consistent, thorough interactions for specific tasks.

A Simple Example

Here's one practical prompt that demonstrates the concept clearly. Create workflow_prompts.py:

from fastmcp import FastMCP

mcp = FastMCP("Workflow Prompts Server", version="1.0.0")

@mcp.prompt()
def analyze_customer_feedback(feedback_text: str, customer_tier: str) -> str:
    """
    Creates a structured prompt for analyzing customer feedback.

    This ensures consistent analysis across all customer feedback reviews,
    with special attention to high-value customers.

    Args:
        feedback_text: The customer's feedback to analyze
        customer_tier: Customer tier (bronze, silver, gold, platinum)

    Returns:
        A formatted prompt for feedback analysis
    """
    urgency_note = {
        "platinum": "URGENT: This is a platinum tier customer. Prioritize immediate action.",
        "gold": "HIGH PRIORITY: Gold tier customer feedback requires prompt attention.",
        "silver": "MODERATE PRIORITY: Silver tier customer - address within 48 hours.",
        "bronze": "STANDARD PRIORITY: Bronze tier customer - standard response timeline."
    }

    priority = urgency_note.get(customer_tier.lower(), "Please assess priority appropriately.")

    return f"""Analyze this customer feedback systematically:

**Customer Tier:** {customer_tier.upper()}
**Priority Level:** {priority}

**Feedback:**
{feedback_text}

Please provide a structured analysis covering:

1. **Sentiment Analysis**: Is this feedback positive, negative, or mixed? What's the emotional tone?

2. **Key Issues Identified**: What specific problems or pain points is the customer experiencing?

3. **Department Categorization**: Which teams should address these issues?

4. **Immediate Actions Required**: What needs to happen right now?

5. **Long-term Recommendations**: What systemic changes might prevent similar feedback?

6. **Response Draft**: Provide a draft response to send back to the customer."""

if __name__ == "__main__":
    mcp.run()

This prompt encodes organizational knowledge about how customer feedback should be analyzed. The tier-based prioritization ensures high-value customers get appropriate attention.
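The tier handling relies on two small Python idioms worth noticing: `str.lower()` normalizes whatever casing the caller uses, and `dict.get(key, default)` supplies a graceful fallback for unrecognized tiers. In isolation:

```python
# Abbreviated tier table, mirroring the pattern in the prompt above
urgency_note = {
    "platinum": "URGENT: prioritize immediate action.",
    "gold": "HIGH PRIORITY: requires prompt attention.",
}

def priority_for(tier: str) -> str:
    # Normalize case first, then fall back for unknown tiers
    return urgency_note.get(tier.lower(), "Please assess priority appropriately.")

print(priority_for("Platinum"))  # URGENT: prioritize immediate action.
print(priority_for("copper"))    # Please assess priority appropriately.
```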

Why Most Servers Skip Prompts Entirely

Let me give you the honest truth about prompts in MCP: most developers building MCP servers focus exclusively on tools and resources because those capabilities directly solve the core problem—giving AI access to data and enabling it to take actions. Prompts are more of a convenience feature for standardizing interactions.

If you're building an MCP server that connects to your company's database, you need tools to execute queries and resources to read data. You might never need a single prompt.

However, if you're building an MCP server specifically to encode organizational workflows or standardize how teams interact with AI, then prompts become valuable.

For most of your learning journey and initial MCP servers, you can safely focus on tools and resources. Come back to prompts later if you find yourself needing them.

3.6 Testing Your MCP Server with Claude Desktop

Now comes the exciting part. Everything we've built so far has been tested using the MCP Inspector. But what you really want is to use your MCP server with an actual AI application.

Installing Claude Desktop

Go to claude.ai and look for the desktop download option. Official builds are available for macOS and Windows; Linux users may need to rely on a community build. Download the version appropriate for your operating system and install it.

Once installed, open Claude Desktop and sign in with your Anthropic account. The free tier is sufficient for testing MCP servers.

Configuring Claude Desktop to Use Your MCP Server

Claude Desktop needs to know about your MCP server before it can connect. This requires editing a configuration file.

The configuration file location depends on your operating system:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

If this file doesn't exist yet, create it. Open the file in your text editor and add this configuration:

{
  "mcpServers": {
    "practical-tools": {
      "command": "python",
      "args": [
        "/full/path/to/your/practical_tools.py"
      ]
    }
  }
}

Replace /full/path/to/your/practical_tools.py with the actual absolute path to where you saved your server. On macOS or Linux, this might look like /Users/yourname/mcp-tutorial/practical_tools.py. On Windows, it might be C:\Users\YourName\mcp-tutorial\practical_tools.py.
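One Windows-specific gotcha: inside a JSON file, backslashes must be escaped (or replaced with forward slashes), otherwise Claude Desktop can't parse the configuration. The path below is illustrative:

```json
{
  "mcpServers": {
    "practical-tools": {
      "command": "python",
      "args": ["C:\\Users\\YourName\\mcp-tutorial\\practical_tools.py"]
    }
  }
}
```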

After editing the configuration file, save it and completely restart Claude Desktop—not just close the window, but actually quit the application and reopen it.

Using Your MCP Server Through Claude Desktop

With Claude Desktop restarted and configured, open it and start a new conversation. Claude won't automatically tell you which MCP servers are connected. You need to interact with it naturally, and Claude will use your MCP server when appropriate.

Let's test the practical tools server systematically. Try asking Claude:

"Can you analyze this text for me: 'The quick brown fox jumps over the lazy dog. This is a simple sentence. Testing word counts!'"

Claude should use the analyze_text tool from your MCP server to retrieve the text statistics and present them to you in a readable format.

Now try:

"Create a timestamp note saying 'Completed project review meeting'"

Claude should recognize that this requires using the create_timestamp_note tool, call your tool with the appropriate parameter, and relay the timestamped note back to you.

Finally, try:

"Format these items as a numbered markdown list: apple, banana, orange, grape, strawberry"

Claude will call the format_as_markdown_list tool with ordered=True and return a nicely formatted numbered list.

Understanding What Just Happened

Take a moment to appreciate what you've accomplished. You wrote Python code that defined tools and resources. That code runs as an MCP server on your local machine. Claude Desktop connects to that server using the MCP protocol. When you make requests in natural language, Claude decides which MCP capabilities to use, calls your server, receives the responses, and integrates them into coherent answers.

You've created a bridge between an AI model and custom functionality. Claude doesn't have built-in knowledge of your specific tools, but through MCP, it can interact with them as if they were native capabilities.

Troubleshooting Common Issues

If things didn't work as expected:

  1. Check that your server code has no errors - Run it manually in a terminal first: python practical_tools.py

  2. Verify the path in your configuration - Make sure it's an absolute path with no typos. Paths are case-sensitive on macOS and Linux.

  3. Fully restart Claude Desktop - Closing the window isn't enough; quit the application completely and reopen it.

  4. Be explicit in your requests - Instead of vague requests, try being more specific about what you want Claude to do.
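A malformed configuration file is one of the most common culprits, and you can catch it from a terminal before restarting Claude Desktop: `python -m json.tool` (part of Python's standard library) prints the parsed JSON on success or a syntax error with a line number on failure. The path below is the macOS location; substitute yours:

```shell
python -m json.tool ~/Library/"Application Support"/Claude/claude_desktop_config.json
```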


What You've Learned and Next Steps

You now have a complete understanding of MCP architecture. You know that:

  • Hosts like Claude Desktop orchestrate everything
  • Clients handle the MCP protocol communication
  • Servers provide tools, resources, and optionally prompts

You've built working examples of each capability type and tested them with both the MCP Inspector and a real AI application.

More importantly, you've experienced firsthand how MCP creates a standardized way for AI to interact with custom systems. You didn't need to modify Claude or teach it about your specific tools. You just exposed capabilities through MCP, and Claude figured out how to use them based on user requests and the descriptions you provided.

In Part 3 of this tutorial series, we'll build more sophisticated MCP servers that connect to real databases, integrate with actual APIs, work with file systems, and solve practical business problems. We'll also cover important production topics like security, error handling, testing, and deployment strategies.

For now, I encourage you to experiment with the servers we've built. Try modifying them, adding new tools and resources, and seeing how Claude responds to different requests. The best way to internalize these concepts is through hands-on practice.




Ready for Part 3? We'll build production-ready MCP servers that integrate with databases, REST APIs, and file systems. We'll explore advanced patterns, security best practices, and how to share your servers with the community.
