DEV Community

Esther

What MCP Actually Is (And Why It Exists)

What is MCP?

MCP is a way to give AI applications the external context and capabilities they need to complete their mission.

Kind of a blanket statement, I know. You're probably wondering: doesn’t RAG already do this? What about tools? And you'd be right to think that, somewhat.

These are all ways to give AI context.

With RAG, you typically embed your documents and store them somewhere (instead of blowing past your context window by sending an entire PDF to the LLM). The most relevant chunks are then retrieved each time the user makes a query to your app.
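That retrieve step can be sketched in a few lines. This is a toy illustration, not a real RAG stack: the "embeddings" below are made-up 3-dimensional vectors, whereas a real app would get them from an embedding model and store them in a vector database.

```python
import math

# Toy "embeddings": in a real app these come from an embedding model
# and live in a vector store, not a hardcoded dict.
DOCS = {
    "MCP is a protocol for connecting LLMs to tools.": [0.9, 0.1, 0.0],
    "Serper wraps the Google Search API.":             [0.1, 0.9, 0.1],
    "RAG retrieves context for each user query.":      [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    """Return the k docs whose embeddings are closest to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector that (in this toy space) "means" something MCP-ish:
print(retrieve([1.0, 0.0, 0.1]))
```

The retrieved chunks then get prepended to the prompt as context before the LLM sees the user's question.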

Although, realistically, you should have some form of reasoning that decides whether the user's query actually needs additional context because they could literally just be asking your app:

"How are you?" 😭

Anyways, back to MCPs.

MCPs are more closely related to tools (function calling), but they also support resources, which makes them relevant to RAG-like systems too.

And I’m going to explain why MCP exists with a short example:

The Problem MCP Solves

  1. You have a nice OpenAI agent that does research. This agent makes tool calls to Google Scholar using the Serper API + a bunch of other services.

    PS: Serper wraps the Google API because working directly with Google is a nightmare.

  2. Along the way, you realise:

    Ugh, I don't even like OpenAI. Why am I using this?

    Also, they jumped into that deal with the Pentagon, sayonara! ✌🏽 LangChain, here I come.

  3. You read the LangChain docs and realise:

    Dang. I have to rewrite my tools from scratch to fit LangChain’s syntax.

    So now you're rewriting:

    • agent logic
    • tool logic
  4. Months later, you realise:

    • you hate LangChain
    • you hate their syntax
    • you hate installing 100 packages just to use a new LLM
    • you especially hate RunnablePassthrough and those damn pipes | 😭

    So now, CrewAI it is 🫠

  5. You read CrewAI docs. And yes, you guessed it. You're rewriting everything again: agents, tools, the whole shebang. Sigh.

At this point, you're probably sick of it all.

Unfortunately, you cannot avoid rewriting your agents when switching platforms (for now).

BUUUUT you can avoid rewriting your tools.

🚀 Enter MCP

MCP stands for Model Context Protocol.

It’s a standard created by Anthropic to ensure that LLMs can connect to external data sources and tools seamlessly.

What this means is:

You can build your tools once (as an MCP server), and any MCP-compatible platform can use them.

No more major rewrites every time you switch frameworks 🎉
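The reason this works is that MCP defines a wire format every client speaks: JSON-RPC 2.0. A tool invocation looks roughly like the messages below (the tool name and payload are made up for illustration; the `tools/call` method and the content-block response shape follow the spec):

```python
import json

# A tools/call request, as an MCP client would send it to a server.
# "search_scholar" is a hypothetical tool name for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_scholar",
        "arguments": {"query": "model context protocol"},
    },
}

# The matching response: tool results come back as a list of content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "content": [{"type": "text", "text": "3 papers found"}],
    },
}

wire = json.dumps(request)          # what actually crosses stdio or HTTP
print(json.loads(wire)["method"])   # tools/call
```

Because every framework's MCP client emits the same messages, your server doesn't care whether the caller is Claude, CrewAI, or something you wrote yourself.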

MCPs were initially built to give LLMs like Claude (and potentially other models via MCP clients) access to more data sources. But now, they’ve evolved into something much bigger: a standard way for AI agents to access tools and context.

Benefits of MCP

1. Access to a growing ecosystem

With MCP, you get access to a whole ecosystem of tools. Many companies are building MCP servers for their platforms, exposing powerful capabilities.

You can explore them here: https://mcpservers.org/

2. Access to local + external data

MCPs can access both:

  • external systems (APIs, services)
  • local data (your machine, private files, internal knowledge bases)

So if you want Claude, ChatGPT, Cursor, or any external agent to access your private local knowledge, MCP is a great option.

3. Structured access to capabilities

MCP doesn’t just give access; it structures it.

Core MCP Concepts

Tools: Tools provide capabilities. They allow AI applications to perform actions on behalf of users.

external APIs that do things

Resources: Resources provide information. They allow AI systems to retrieve structured data and pass it as context to models.

knowledge, documents, data sources
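To make the tools-vs-resources split concrete, here's a toy registry in plain Python. This is NOT the real MCP SDK (the official SDKs use decorators in a similar spirit); it just shows the two roles a server exposes:

```python
# Toy illustration of a server's two kinds of capabilities:
TOOLS = {}      # tools: the model asks the server to *do* something
RESOURCES = {}  # resources: the model asks the server to *read* something

def tool(fn):
    """Register a function as an action the model can invoke."""
    TOOLS[fn.__name__] = fn
    return fn

def resource(uri):
    """Register a function as a data source, addressed by URI."""
    def register(fn):
        RESOURCES[uri] = fn
        return fn
    return register

@tool
def add(a: int, b: int) -> int:
    """A capability: performs an action and returns the result."""
    return a + b

@resource("notes://todo")
def todo_notes() -> str:
    """A piece of context: returns data for the model to read."""
    return "1. finish the MCP article"

print(TOOLS["add"](2, 3))            # 5
print(RESOURCES["notes://todo"]())
```

Tools get called with arguments and have side effects; resources are read-only context, addressed by URI, that the client can fetch and stuff into the model's prompt.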

We also have Prompts, but I got tired writing this article so read about it here: https://modelcontextprotocol.io/docs/learn/server-concepts#resources

Important Note

For an agent to use MCP tools, it must support the MCP protocol (i.e., be an MCP client). That said, platforms like CrewAI abstract this away, so you don’t always have to set it up manually. But some platforms might not support MCP at all.

MCP Deployment Types

MCP servers can be:

  • Local (running on your machine)
  • Remote (hosted elsewhere)

When MCP first came out, it was heavily tied to Claude and required you to build your own server. Now, there are tons of ready-to-use servers.

Check some here: https://platform.claude.com/docs/en/agents-and-tools/remote-mcp-servers

Most platforms now just ask for an MCP URL, and you’re good to go.
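For local servers, "plugging in" usually means pointing your client at a command to launch instead of a URL. For example, Claude Desktop reads a config shaped roughly like this (the server name and script path here are made up):

```json
{
  "mcpServers": {
    "scholar-search": {
      "command": "python",
      "args": ["scholar_server.py"]
    }
  }
}
```

The client spawns that process and talks to it over stdio; remote servers skip the `command` and are reached by URL instead.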

Buuuuut, you don’t have to rely on existing servers. If you need a very specific capability, you can build your own MCP server tailored to your use case.

When It Makes Sense to Use MCP

  • You’re a large company and want a standardized way to expose platform capabilities
  • You utilise a lot of tools (this is where MCP really shines)
  • You want third-party agents to use your tools
  • You’re building reusable capabilities
  • You have an ecosystem of agents
  • You don’t want your tooling layer locked into one platform (👀 OpenAI)

Take your pick 😁.

When It Does NOT Make Sense 🚫

  • Because it’s “trending.”
  • You have one agent calling one tool. Please just write your tool yourself. 😭

MCP is for robust systems, not overengineering.

Where MCP Does NOT Help ❌

1. It does NOT standardize agent logic

If you switch platforms, you will still rewrite your agents. That said, with modern AI coding tools, it’s like an hour of work max.

2. It does NOT reduce complexity

You now have:

  • an MCP server
  • a protocol layer
  • more moving parts

For small projects, this is overkill.

3. Latency tradeoff

Before:
Agent → Tool

Now:
Agent → MCP → Tool

Congratulations! You just introduced an extra hop. 👏

My Final Thoughts

MCP is not about making agents easier or doing fancy things with LLMs. It’s about making systems reusable, scalable, and interoperable. It's also about giving agents all the tools (pun intended) they need to succeed. You can do some serious analysis by plugging an LLM into internal data.

Now, we wait for Anthropic to release a protocol that standardizes agents too 💀
