Davide de Paolis

I Didn’t Get MCP Until I Built One

Eight months ago, I could barely tell you what an MCP server was.

I’d seen the term (Model Context Protocol) floating around on LinkedIn posts and in our AWS Community Builders Slack. Everyone seemed excited about it, but I had no idea what it was or why I should care. It felt like something for AI people - “Data Science and ML Engineers” - far removed from the daily grind of my team’s cloud infrastructure tasks and platform duties.

I had tried to use some of the AWS Labs MCP servers via Amazon Q, especially the documentation and pricing ones, but my experience wasn’t that exciting. They worked well, but I treated them like plugins - useful, but opaque.

I could use them; I couldn’t explain them.

Then, in November, Kiro became publicly available, and I started exploring its features more extensively (more on that in future posts).

Around the same time, I had the chance to participate in a Proof of Concept to build an MCP server. The initial idea was to expose our public API to AI tools and LLMs through an MCP server. Once proven, the goal was to move that into the product so that customers could use it too.

Of course, as I dove deeper into the topic, MCP servers gradually started to make sense.

MCP servers are similar to APIs, but designed for AI agents: they let agents communicate with, and make use of, services and datasets outside their training data. In this post, I’ll go through the basic concepts as I understood and learned them (by doing).

This post isn’t a guide or tutorial; you can find excellent resources out there, or just ask your AI tool of choice and you’ll get plenty of clarifications. It’s a reflection on moving from “I have no idea what this is” to “I can build and deploy one.”

The Problem MCP Solves: The N×M Nightmare

Let’s start with the problem, because that’s what made everything click for me.

Large Language Models are powerful, but they are frozen in time: they only know what was in their training data, up to the moment it was collected.

They don’t know:

  • the current state of your repositories
  • what’s in your internal documentation
  • the status of your Jira tickets
  • the shape of your APIs or databases

If we want AI tools to be genuinely useful in real engineering workflows, we need a way to safely and consistently connect them to live, external systems.

That’s where integrations come in, and that’s also where things get messy very quickly.

So, imagine you're building integrations for AI tools. You want your assistant to:

  • Access your GitHub repositories
  • Query your Jira tickets
  • Search your company’s documentation
  • Check your database

Now imagine you have multiple AI tools:

  • Claude Desktop
  • GitHub Copilot
  • Cursor
  • ChatGPT with plugins

Without a standard:

  • Claude needs a GitHub integration
  • Copilot needs a GitHub integration
  • Cursor needs a GitHub integration
  • ChatGPT needs a GitHub integration

Now multiply that by Jira, your docs, and your database, and the math is simple: 4 AI tools × 4 data sources = 16 custom integrations.

Each integration is:

  • Built differently
  • Maintained separately
  • Incompatible with other tools
  • Duplicated effort

That’s completely unsustainable. Every AI tool needs its own integration with every data source. That’s N×M integrations, where N = AI tools and M = data sources.

With a standard protocol like MCP, instead of writing endless client integrations for each vendor, you describe your capabilities once, and any compliant AI can consume them.

Now you have N + M components instead of N×M integrations (in our example, 8 instead of 16). The complexity shifts from O(N×M) to O(N+M).

(Diagram: MCP standardized protocol)

MCP is the missing universal adapter between our data sources and AI-native tools.

MCP is like USB-C for AI tools.
This is the metaphor you will find basically everywhere, and for a good reason (so I won't try to unnecessarily and awkwardly find another one).

Remember before USB? Every device had its own connector:

  • Keyboards had PS/2 ports
  • Mice had serial ports
  • Printers had parallel ports
  • Cameras had proprietary connectors

You needed different ports for everything, and devices only worked with specific computers.

Then USB came along: one standard connector that works everywhere.

MCP does the same thing for AI tools:

  • Before MCP: Every AI tool had custom integrations for every data source
  • After MCP: One standard protocol, any MCP server works with any MCP client

Build a GitHub MCP server once → it works with Claude, Copilot, Cursor, and any future MCP-compatible AI tool.

(Diagram: Before and after MCP)

What Actually Is an MCP Server?

So, MCP is a standard. But what’s an MCP server?

Simply put, an MCP server is a program that:

  1. Connects to a data source (GitHub, database, API, etc.)
  2. Exposes that data through standardized “tools”
  3. Speaks the MCP protocol so any AI can use it

Basically, an MCP server is a wrapper, very similar to an API, but instead of several REST endpoints, you have so-called tools.
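
To make that concrete, here’s a minimal sketch (not our actual POC code) of what a tool can look like with the official Python MCP SDK. The GitHub search endpoint is real, but the server name and the output formatting are just illustrative:

# Sketch: one MCP tool wrapping a REST endpoint (official Python SDK / FastMCP)
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-wrapper")

@mcp.tool()
async def search_repositories(query: str, limit: int = 10) -> str:
    """Search GitHub repositories."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            "https://api.github.com/search/repositories",
            params={"q": query, "per_page": limit},
        )
        resp.raise_for_status()
        repos = resp.json()["items"]
        return "\n".join(f"{r['full_name']}: {r.get('description') or ''}" for r in repos)

The function name, docstring, and type hints are what the SDK turns into the self-described tool definition the client sees, which is exactly what the next section is about.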


The Magic: Self-Describing Tools

When an MCP server connects to an AI, it announces its capabilities.

{
  "tools": [
    {
      "name": "search_repositories",
      "description": "Search GitHub repositories",
      "parameters": {
        "query": "string",
        "limit": "number"
      }
    },
    {
      "name": "get_file_contents",
      "description": "Get contents of a file from a repository",
      "parameters": {
        "owner": "string",
        "repo": "string",
        "path": "string"
      }
    }
  ]
}

The AI now knows:

  • What tools are available
  • What each tool does
  • What parameters they need
  • How to call them
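
Under the hood this is plain JSON-RPC. A call to the first tool above looks roughly like this (request plus a trimmed response; the repository text is made up):

// Request from the AI client to the MCP server
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "search_repositories",
    "arguments": { "query": "model context protocol", "limit": 5 }
  }
}

// Response from the server
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [
      { "type": "text", "text": "modelcontextprotocol/servers: reference MCP servers ..." }
    ]
  }
}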

The tools are self-documenting (docs here). But you still have to be careful.

The Tool Overload Problem

As soon as I started understanding and enjoying MCP servers, I began adding more and more to my setup.
But having too many MCP servers installed, each advertising multiple self-described tools at startup, creates an unexpected issue: too many tools can overwhelm the AI.

5 MCP servers × 20 tools each = 100 tools available
Every time you ask the AI a question, it needs to:

  • Load all 100 tool definitions
  • Understand what each does
  • Decide which to use
  • Execute the right one

This causes context bloat, because tool definitions consume valuable context-window space, and it often lowers accuracy, because the wrong tool (especially if poorly named) might get picked.

With 100 tools, every request includes:

  • 100 tool names
  • 100 descriptions
  • 100 parameter schemas

That’s thousands of tokens before you even ask a question! That’s why it’s important to be intentional, both as a user and as a developer.

From the User Perspective

*Don’t install every MCP server you find.*
Ask yourself:

  • Do I actually need this?
  • How often will I use these tools?
  • Can I enable/disable it when needed?

Example configuration:

{
  "mcpServers": {
    "awslabsaws-documentation-mcp-server": {
      "command": "uvx",
      // some args
      "disabled": false, // use it daily
      "disabledTools": [],
      "autoApprove": ["read_documentation"]
    },
    "terraform-mcp-server": {
      "command": "uvx",
      // some args
      "disabled": true,  // weekly use, enabled on demand
      "autoApprove": []
    }
  }
}

Different agents can have different MCP setups: my CloudOps agent uses AWS documentation and Terraform MCP servers, while a frontend agent might use React and GitHub MCP servers.

You can also autoApprove safe commands so the agent executes them automatically.

From the MCP Server Author’s Perspective

Use Clear Tool Naming: When building your own MCP servers, use namespaced names.

# Bad
- search()
- get()
- list()

# Good
- github_search_repositories()
- github_get_file_contents()
- github_list_pull_requests()

The AI can filter and understand domains faster.

Lazy Loading with Resources: Some MCP servers use resources instead of many tools. (docs here)

# Instead of 50 tools for different docs:
- get_lambda_docs()
- get_s3_docs()
- get_ec2_docs()

# One tool + resources:
- read_documentation(resource_uri)
  Resources:
  - docs://aws/lambda
  - docs://aws/s3
  - docs://aws/ec2

Resources are discovered on-demand, not loaded upfront.
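
Here is a hedged sketch of that pattern with the Python SDK; the docs:// URI scheme and the hard-coded snippets are made up for illustration, not a real AWS documentation index:

# Sketch: one resource template instead of one tool per service (Python MCP SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aws-docs")

# Stand-in for a real documentation index
DOCS = {
    "lambda": "AWS Lambda lets you run code without managing servers...",
    "s3": "Amazon S3 is an object storage service...",
}

@mcp.resource("docs://aws/{service}")
def aws_docs(service: str) -> str:
    """Documentation page for a given AWS service."""
    return DOCS.get(service, f"No docs indexed for {service}")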

We’ll look deeper into best practices when we actually build our first MCP server.

MCP Transport Protocols: stdio vs Streamable HTTP

As I dug deeper, I learned that MCP servers can communicate in two ways. (docs here)
Understanding this helped me choose the right setup for different use cases.

stdio (Standard Input/Output)

How it works:

  • Runs as a local process
  • Communication via stdin/stdout (like terminal piping)
  • The AI client spawns and manages the process

Configuration example:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    }
  }
}

When to use stdio:

✅ Local tools, file systems, git operations
✅ Great for privacy (no network)
✅ Simple and fast
✅ Works offline

Cons:
👎 Installed locally per machine
👎 Hard to share across teams

Streamable HTTP (Remote Server)

How it works:

  • Runs as a remote HTTP service
  • Clients connect via URL + auth

Configuration example:

{
  "mcpServers": {
    "company-api": {
      "type": "http",
      "url": "https://mcp.company.com",
      "headers": {
        "Authorization": "Bearer ${API_TOKEN}"
      }
    }
  }
}

When to use HTTP:

✅ Shared services or APIs
✅ Team-wide/multi-user access
✅ Centralized updates & scalability

Cons:
👎 Requires network and auth setup
👎 Slightly more complex to operate
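
In code, the difference is often just the transport you start the server with. A sketch with the Python SDK (the exact transport names may differ between SDK versions):

# Same server, two transports (Python MCP SDK sketch)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("company-api")

# ... tool and resource definitions ...

if __name__ == "__main__":
    # Local: spawned by the AI client, talking over stdin/stdout
    mcp.run(transport="stdio")
    # Remote: a long-running HTTP service, e.g. behind https://mcp.company.com
    # mcp.run(transport="streamable-http")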

The Ecosystem

Once I understood MCP, I realized there’s an entire ecosystem of ready-made servers growing fast.

There are plenty that can genuinely boost how you work with AI, but be careful and selective: within your first week of using MCP servers, you might already find yourself with a dozen installed!

Build your own

And of course, you can also build your own; the barrier to entry is very low:
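
For example, once you have a small script like the earlier sketch, registering it locally is just another stdio entry in your client config (the server name and path here are placeholders):

{
  "mcpServers": {
    "my-first-server": {
      "command": "python",
      "args": ["/path/to/my_server.py"],
      "disabled": false
    }
  }
}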

In the next posts, I’ll show how we built our internal POC to showcase the concept to colleagues and management:
wrapping parts of our public API into an MCP server so AI tools could retrieve account information by simply chatting.

I’ll admit I was initially intimidated (that imposter syndrome never really goes away), but it turns out it’s not about “adding AI”, it’s about making your system AI-accessible.

From here, we’ll dive into the actual implementation.
As we’ll see, it’s easy to start, but moving from prototype to production takes care and design.
We’re not fully there yet, but we already have it running internally via AWS Agent Core Runtime.

Before we go deeper into building MCP servers, I need to introduce the tool that made this exploration possible at all: Kiro.

Kiro

As the title of this series suggests, this journey happened Vibecoding in between meetings. Without the right AI setup, it simply wouldn’t have been possible to stay hands-on while juggling everything else.
In the next post, I’ll walk through how I set up Kiro (both the CLI and the IDE), cover its core capabilities (hooks, steering, and powers), and share the MCP servers, skills, and prompts that genuinely changed how I work with AI agents.
They didn’t just make the experience more enjoyable; they made my limited time far more effective.

Stay tuned.
