Razvan

Bridging the Gap: A Deep Dive into the Model Context Protocol (MCP)

In the rapidly evolving world of AI, Large Language Models (LLMs) are only as powerful as the data they can access and the tools they can use. Historically, connecting an AI application to a local database or a specific API required custom, bespoke integration or a lot of manual copy-pasting.

That changed at the end of 2024, when Anthropic released the Model Context Protocol (MCP) specification. MCP is an open-source standard for connecting AI to external systems, whether it’s browsing local files, querying a database, or triggering complex workflows.

Let’s start with the basics.

What is MCP?

At its core, MCP is an open-source standard that enables AI applications like ChatGPT or Claude to access data sources, tools, and workflows. By providing a unified bridge between the model and the "outside world," MCP gives AI "superpowers": it moves beyond a simple chat interface and can act on the systems around it.

The Architecture: Host, Client, and Server

The architecture is elegantly simple, following a client-server model. Applications like VS Code or Claude Desktop act as the MCP Host, which connects to local or remote programs that expose data or functionality, the MCP Servers. For each server, the host maintains a dedicated MCP Client that manages that specific connection.

One of my first experiments was connecting Jira to GitHub Copilot in VS Code (the Host). When VS Code connects to the Jira MCP server, it instantiates an MCP client object to maintain that specific connection. If I then connect to my local filesystem, VS Code simply instantiates an additional client object for that connection.
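The one-client-per-server relationship can be sketched in a few lines of Python. Note that these class names are illustrative, not part of any MCP SDK:

```python
class MCPClient:
    """One dedicated connection between the host and a single server."""
    def __init__(self, server_name: str):
        self.server_name = server_name


class MCPHost:
    """The AI application (e.g. VS Code, Claude Desktop)."""
    def __init__(self):
        self.clients: dict[str, MCPClient] = {}

    def connect(self, server_name: str) -> MCPClient:
        # The host instantiates one client object per server it talks to.
        client = MCPClient(server_name)
        self.clients[server_name] = client
        return client


host = MCPHost()
host.connect("jira")        # first server → first client
host.connect("filesystem")  # second server → a separate client
```

Connecting a third server would simply add a third client object; no connection is ever shared between servers.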

MCP Architecture infographics

The Two-Layer System

The protocol is split into two distinct layers to handle communication:

The Data Layer: Defines what is being said. Based on JSON-RPC 2.0, it handles the connection lifecycle and core primitives (tools, resources, and prompts).
The Transport Layer: Defines how the message gets there.

  • Stdio Transport: Used for local processes on the same machine (e.g., VSCode talking to your local files). It’s lightning-fast with zero network overhead.
  • Streamable HTTP Transport: Used for remote servers (e.g., the official Jira MCP server running on the Atlassian platform). It uses HTTP POST and Server-Sent Events (SSE) and supports secure authentication like OAuth.
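To make the two layers concrete, here is a sketch of the first data-layer message a client sends, an initialize request, framed for the stdio transport. The field values (protocol version, client name) are illustrative:

```python
import json

# A JSON-RPC 2.0 "initialize" request: the data layer defines this structure,
# regardless of which transport carries it.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Stdio framing: the transport layer serializes each message to a single
# newline-terminated line written to the server process's stdin.
wire_message = json.dumps(initialize_request) + "\n"
```

Over Streamable HTTP, the exact same JSON body would instead travel in an HTTP POST; the data layer doesn't change, only the delivery mechanism does.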

The Building Blocks

Core Primitives

Once your host is connected to a server, the AI can interact with three main "primitives":

  • Tools (Executable Functions): The AI can actually do things.
    • Example: create_issue creates a Jira ticket; add_comment updates a task.
  • Resources (Data Sources): These provide the context the AI needs.
    • Example: Project Schemas help the AI understand that "Priority Highest" means "Blocker" while User Profiles help it map names to IDs.
  • Prompts (Templates): Reusable instructions that structure the interaction.
    • Example: A "Bug Reporter" Prompt can force the AI to ask for "Steps to Reproduce" before it is allowed to create a ticket.
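Tying this back to the data layer: invoking a tool like create_issue is just another JSON-RPC request, using the tools/call method. The tool arguments below are illustrative; a real Jira server defines its own input schema:

```python
# A sketch of the client invoking the create_issue tool from the example above.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {
            "project": "DEMO",  # illustrative project key
            "summary": "Login button unresponsive",
            "priority": "Blocker",
        },
    },
}
```

Resources and prompts are fetched the same way, through their own data-layer methods, so a host only needs to speak one wire format to use all three primitives.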

Server-to-Client Primitives: A Two-Way Street

MCP isn't just a one-way street. Servers can also request help from the Client:

  • Sampling: A server can ask the AI Host to complete a task. For instance, a Jira server could say: "Hey, use your language skills to summarise these 10 tickets into a release note paragraph for me."
  • Elicitation: This allows a server to ask the user for missing info. If the AI tries to create a ticket but doesn't know the project, the server triggers a request: "Which project should this ticket go into?"
  • Logging: Enables servers to send debug messages to the client for easy monitoring.
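Because the connection is bidirectional, a server can issue JSON-RPC requests of its own. Here is a sketch of the sampling example above, where the Jira server asks the host's model for a summary; the prompt text and token limit are illustrative:

```python
# A server-to-client sampling request: the server borrows the host's LLM.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Summarise these 10 tickets into a release note paragraph.",
                },
            }
        ],
        "maxTokens": 300,
    },
}
```

The host stays in control: it can review, modify, or refuse the request before anything reaches the model, which keeps the server from silently consuming the user's tokens.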

Staying in Sync

In a dynamic environment, tools and data sources change constantly. MCP handles this through a notification system: using JSON-RPC 2.0 notifications, a server can "push" updates to the client without waiting for a request.

📖 JSON-RPC (JavaScript Object Notation Remote Procedure Call) is a JSON-based wire protocol for remote procedure calls (RPC). It supports notifications (messages that do not require a response) and allows multiple calls to be sent at once and answered asynchronously, out of order.

For more information, see the official JSON-RPC specification.

The notification system is a key component of the MCP architecture. The client doesn't have to keep asking "Is there anything new?"; it just waits to be told. If a database goes offline or a new tool becomes available, the AI knows instantly.
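What distinguishes a notification on the wire is simply the absence of an "id" field, so the receiver knows no response is expected. The method name below is the one MCP uses when a server's tool list changes:

```python
# A JSON-RPC notification: no "id", so no response is expected.
tools_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}


def is_notification(message: dict) -> bool:
    # Requests carry an "id" so responses can be matched to them;
    # notifications deliberately omit it.
    return "id" not in message
```

On receiving this, a client would typically re-issue a tools/list request to refresh its view of what the server offers.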

Conclusion

As LLMs become more powerful, their utility depends on context. The MCP specification provides a standardised way to provide that context, making our lives easier. By standardising how models access information and execute tasks, MCP is paving the way for AI tools that are more integrated, more capable, and easier to build than ever before.

PS: More and more companies are releasing official MCP servers every day. You can explore the most popular ones on the official Anthropic MCP Gallery.
