Developer Harsh for Composio

Posted on • Originally published at composio.dev

I finally figured out how to get the best out of MCP Servers as an AI Developer

A Brief Intro to MCP

MCP has been around for a while, but few developers embrace its full potential by using all of its core components. Most tutorials and blogs I see cover only tools; hardly any care about the rest.

So, for one of my projects, after a lot of research, I figured out how to use them the easy way, and in this blog I will share what I learned with you all!

Let's begin by looking at Workspace Setup

TL;DR

  • Setup: MCP projects are best handled in a separate workspace.
  • Tools: Let LLMs run code or perform actions.
  • Prompts: Define reusable, structured prompt templates for LLMs.
  • Resources: Expose local or remote data to feed info into the LLM's context.
  • Use Cases: Build smart agents (CLIs, bots, assistants, analyzers) by combining tools, prompts, and data.

Workspace Setup (recommended)

MCP servers are unmoderated and can damage your original file system. So, I prefer and recommend using them in a separate workspace. Let's begin.

Create a new directory mcp and cd into it:

mkdir mcp
cd mcp

Install uv - a modern and fast alternative to pip with better package management:

# windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# ubuntu / mac
curl -LsSf https://astral.sh/uv/install.sh | sh

Create a virtual environment

# windows
uv venv .venv
.venv\Scripts\activate

# ubuntu / mac
uv venv .venv
source .venv/bin/activate

Install the MCP package - it uses FastMCP under the hood:

# latest - ver 2.0
uv pip install fastmcp

# ver 1.0 - can cause issues
uv pip install "mcp[cli]"

Ensure the MCP library is available:

fastmcp version

Output 👇


Remember the root path; it will help a lot later 😁

With the workspace ready to go, let's look at the first MCP component: tools!

Tools - The Action Takers

Overview

Tools enable LLMs to perform tasks and actions through MCP servers. Think of them as an extra pair of hands for the LLM.

With tools + MCP, LLMs and AI agents can now:

  • interact with external systems,
  • execute code, and
  • access data beyond their training set.

Tools have 3 key components:

  • Discovery: Clients can fetch the list of available tools by sending a tools/list request to the MCP server.
  • Invocation: Any tool can be called by the client via a tools/call request. The server performs the requested operation and returns the result, which the AI model can then use.
  • Flexibility: A tool can wrap a plain function or an API interaction (opening up endless possibilities).

Now letโ€™s look at how to create tools!

Code

Tools in MCP are functions or methods wrapped with the @mcp.tool(**kwargs) decorator. Yes, that is all you need!

Create a new file called mcp_hello.py and paste the following code:

from fastmcp import FastMCP

mcp = FastMCP("Hello World")

# define the tool
@mcp.tool()
def hello_world(name: str) -> str:
    return f"Hello World. This is {name} 👋"

# entry point - not needed but suggested
if __name__ == '__main__':
    mcp.run(transport = 'stdio')


In the terminal, type:

fastmcp dev mcp_hello.py

Copy the provided session token, open the provided link, paste the token into Proxy Session Token, and connect.

This will open the MCP Inspector - a shortcut to test MCP servers. I used it in one of my earlier blogs.


Note: If you are running it for the first time, you will get a confirmation prompt; press y and let the installation happen.

Now head to Tools in the nav bar, select List Tools, and check the History pane: you will see a tools/list call to the server. Remember discovery?


And here is the response schema:

{
  "tools": [
    {
      "name": "hello_world", // name of the tool
      "inputSchema": { // JSON Schema for the tool's parameters
        "type": "object",
        "properties": { // Tool Specific Properties
          "name": { // input parameter
            "title": "Name", // name of the parameter
            "type": "string" // dtype
          }
        },
        "required": [  
          "name" 
        ] // required parameter
      }
    }
  ]
}

You can learn more about the tool schema here.

Next, select the hello_world tool, enter your name, and click Run Tool. I entered devloper_hs, and here is the output 👇


If you check History, a tools/call request was made as part of the tool invocation process. This is the request schema sent to the MCP server via the client:

{
  "method": "tools/call", // method type - here tools/call
  "params": { // request parameters
    "name": "hello_world", // name of the tool / fn
    "arguments": { 
      "name": "devloper_hs" // argument provided
    },
    "_meta": { // metadata - ignore
      "progressToken": 0
    }
  }
}

And after the tool is executed (by the server), the following response schema is generated:

{
  "content": [ // content resource
    {
      "type": "text", // content type
      "text": "Hello World. This is devloper_hs ๐Ÿ‘‹" //actual content
    }
  ],
  "isError": false // any error?
}

Want to know the best part?

Since MCP tools are just functions, a single MCP server can have any number of tools. Refer to this quick guide to learn more!

However, a question remains: what are the use cases?

Here are some examples to get you started!

Examples

I know there are a lot of MCP servers out there already, and the number keeps increasing day by day, but niche down and you will find plenty of custom use cases for MCP. Here are some to get you started!

Utility MCP Server

  • Use tool-wrapped functions to perform everyday tasks - create, read, move, update, and search files and directories, all through natural conversation.

Data Toolbox MCP Server

  • Expose all your data science utilities as tool-wrapped functions to the LLM, provide the data, and get your analytics done. Build once, use anywhere.

IRCTC MCP Server

  • Built by one of my mentors for the Hugging Face MCP hackathon, it helps with IRCTC-related tasks (except ticket booking) and uses multiple APIs as tools 😁

For more ideas, explore this blog.

But LLMs don't always call tools well, especially in complex or ambiguous scenarios. To mitigate this, MCP provides prompts.


Prompts - The Context Builder

Overview

Prompts enable LLMs to generate structured, purposeful responses by letting the server define reusable, parameterized messages, often used to start conversations.

Prompts can be used to:

  • take and pass dynamic input from the user,
  • include context from local data,
  • chain multiple interactions for memory building,
  • guide complex workflows, and
  • act as a frontend via / (slash) commands - a personal favorite,

and all the server needs is a prompts/get request from the client.

Let's understand this by building an MCP server with prompt support.

Code

  • Like tools, prompts are functions, containing prompt templates (as f-strings) or prompt messages, wrapped with @mcp.prompt(**kwargs).
  • Additionally, the function definition may include type annotations (Pydantic), optional parameters, and MCP metadata (overridden in the decorator). Learn more here.

Here is an MCP prompt template that generates a topic-explanation prompt. It can be tested using the MCP Inspector or the CLI.

# prompt_mcp.py

import asyncio 
from fastmcp import Client
from fastmcp import FastMCP

# create a mcp server named TopicExplainerPrompt
mcp = FastMCP(name='TopicExplainerPrompt')

# create prompt
@mcp.prompt
def explain_topic(topic: str) -> str:
    "Generates a query prompt for explanation of topic"
    return f"Can you explain {topic} in a beginner friendly manner with simple wordings and no technical jargon. Include Concept & Examples."

# create mcp client to test server directly (final prompt display)
async def test_prompt():

    # run the mcp client inside an async context
    async with Client(mcp) as client:

        # fetch all prompts
        prompts = await client.list_prompts()
        print("Available prompts:", [p.name for p in prompts])

        # Provide the topic to explain_topic for testing and check results
        result = await client.get_prompt("explain_topic", {"topic": "machine learning"}) # change topic
        # add more prompts here for testing multiple prompts

        print("Generated prompt:", result.messages[0].content.text)

if __name__ == "__main__":
    asyncio.run(test_prompt())

In the terminal, type:

uv run prompt_mcp.py

Entire prompt template generated for the topic: machine learning

or use the mcp inspector with

fastmcp dev prompt_mcp.py

And paste the session token under config → Proxy Session Token. Press Connect and wait for the 🟢.

  • Head to prompts → list all prompts → select explain_topic and pass the topic (the dropdown has a text input option).
  • Hit Get Prompt and check out the text field. It will contain the same output as shown in the terminal image. If curious: in the code, result.messages[0].content.text indexes into the same structure to get the prompt.


In the History tab, you will see the following flow:

Prompt flow

I highly encourage you to toggle each request. It will give you a good understanding of the schema followed internally. For reference on the schema, check out the documentation.

So, what can you build with prompts (and tools) in your arsenal, for starters?

Examples

Data Analyzer MCP:

  • Build an MCP server that uses a prompt template to define the analysis type and where to fetch the data from (a local file or a URL).
  • Then use a tool to fetch the data, pass it to the LLM, let the LLM run the analysis based on the analysis type, and share the results.

Prompt Enhancer MCP

  • Create a prompt enhancer that uses a prompt template to combine user input / context with predefined SYSTEM_INSTRUCTIONS and pass it to the LLM.
  • Then use the LLM as a tool (wrap the LLM call in an MCP tool) to create an optimized prompt and return it to the user.
  • Add optional enhancements like character limit, prompt size, conversational tone, and required information. Personally, I would love this in my daily workflow.

Code Converter MCP

  • An MCP server to handle code conversion. No tips here; let the LLM do the job 😁

Now it's time to shift focus to resources - a very important but (as per my research) often overlooked component of MCP.


Resources - The Data Providers

Overview

Resources allow MCP servers to expose private data to the client and provide context to LLMs. Think of it as MCP's version of partial RAG.

Resources can be of many types; the following are common ones:

  • File contents - plain text
  • Database / model records - schemas, relationships, methods, and so on
  • API responses - responses from a server, often an LLM or another MCP server; a game changer for LLMs
  • Screenshots / images - yup, MCP supports multimodality
  • Log files - generated logs like server_uptime.log
  • and many more.

All resources are identified by a URI like:

[protocol]://[host]/[path]

Some common examples:

File Resource

file:///Users/Harsh/Documents/mcp/mcp_prompt.py # /// - indicates empty host

DB Resource

postgres://database/customers/schema # database -> host , customers/schema -> path

Screen Capture Resource

screen://localhost/display1 # localhost -> host , display1 -> path

Best part: servers can define their own custom URI schemes as well. Refer to the resources docs to learn more!

Anyway, let's now look at how to build an MCP server with resource support!

Code

The simplest way to define a resource is to wrap it in a function decorated with @mcp.resource(**kwargs).

Copy and paste the following code into mcp_resources.py:

Note: The goal of this code is to teach the usage, not a use case. Depending on the problem at hand, resources can have more complex behaviors and return types.

from fastmcp import FastMCP
from pathlib import Path

# define mcp server name
mcp = FastMCP(name="Resource Server")

# basic resource
@mcp.resource("resource://greeting")
def greet() -> str:
    """Simple greet"""
    return "Hey This Is Harsh 👋"

# Image resource with URI - [protocol]://[host]/[path]
@mcp.resource("images://img.jpg", mime_type="image/jpeg") # defined uri -> returns in json output for resource calls
def fetch_image_bytes() -> bytes:
    """Returns Harsh's profile photo"""
    file_path = Path("img.jpg").resolve()  # file must be present at the script root

    if not file_path.exists():
        raise FileNotFoundError(f"Image file not found: {file_path}")

    return file_path.read_bytes()

if __name__ == "__main__":
    mcp.run(transport="stdio")

The above script defines two MCP resources: greet, which returns a greeting message, and fetch_image_bytes, which returns the profile photo as base64-encoded blob data.

In general, you will aim to expose these to an LLM, but here we will use the MCP Inspector. So, let's pull up the inspector with:

fastmcp dev mcp_resources.py

Once loaded, repeat the previous copy-token-and-connect step, then:

  • Head to the Resources tab and click List Resources.
  • For greet → click on it and check the greet tab → it contains a contents array with uri, mime_type, and text.
  • For fetch_image_bytes → click on it and check the fetch_image_bytes tab on the far right → it contains a contents array with uri, mime_type, and blob (the base64-encoded bytes).


Here is the server-client request flow for both:

Greet flow

Fetch image bytes flow

I highly encourage you to toggle each request to understand the underlying flow; alternatively, I will cover it later in another blog.

But what can you build with this new power-up ⚡?

Here are some examples to get you started building MCP servers with TOOLS + PROMPTS + RESOURCES!

Examples

Research Assistant MCP

  • Build an MCP server that uses tools, prompts, and resources to summarize and compare academic papers.
  • Define mcp tools like: search, extract_data, citation_formatter.
  • Define mcp prompts like: research_system_prompt_template, user_prompt_template.
  • Set default mcp resources links for: Google Scholar, arXiv, Papers with code.

Here, tools → access, extract, and format data; prompts → define assistant behavior; resources → provide raw search context.

Code Debugger MCP

  • Build an MCP server that uses prompt templates to detect, explain, and fix errors in code.
  • Define mcp tools like: python.run, syntax_checker, diff_tool.
  • Define mcp prompts like: debugger_system_prompt_template, user_code_input_template.
  • Set default mcp resources links like: Python Docs, Stack Overflow, Bento.io.

Here, tools → run, analyze, and correct code issues; prompts → guide step-by-step explanations and fixes; resources → provide syntax rules, common issues, and examples.

Business Strategy MCP

  • Build an MCP server that helps users create SWOT analyses, competitive research, and business model canvases.
  • Define mcp tools like: web.search, canvas_builder, chart_gen.
  • Define mcp prompts like: strategy_system_prompt_template, user_business_query_template.
  • Point LLM to mcp resources: CB Insights, Statista, Strategyzer.

Here, tools → gather, visualize, and structure business strategy data; prompts → simulate strategic consultant behavior; resources → deliver market insights, competitive intel, and visual frameworks.

Now it's time to combine everything learned and put it into action!


Final Thoughts

Building MCP servers and watching them do things autonomously is fun. I personally love experimenting with them.

This blog just scratches the surface of what's possible with MCP, so I highly recommend exploring the resources shared; the best way to learn is by exploring.

Happy Learning 👍
