Ariful Alam
Understanding Model Context Protocol (MCP): Bridging LLMs and the Real World

Large language models are increasingly embedded into developer tools, products, and workflows—but most of them still operate in isolation. They can reason, explain, and suggest, yet they remain disconnected from the systems where real work happens. Model Context Protocol (MCP) addresses this gap by defining a standard way for AI models to interact with external tools, data sources, and services in a secure and structured manner.

Instead of treating AI as a passive interface that only generates text, MCP enables it to participate directly in real-world workflows. It establishes a common contract between models and systems, allowing developers to expose capabilities once and reuse them across different AI clients.

The Problem: LLMs Are Brilliant But Limited

Large language models have revolutionized how we interact with technology, but they face two fundamental limitations that prevent them from being truly useful in real-world scenarios:

1. Frozen Knowledge

LLMs' knowledge is fixed at training time. They cannot access:

  • Real-time information: Current stock prices, weather conditions, or breaking news
  • Private data: Your company's databases, personal files, or internal documentation
  • Recent developments: New libraries, updated APIs, or latest best practices

2. No Action Capability

LLMs can think and reason, but they cannot act:

  • Cannot execute code or run commands on your behalf
  • Cannot modify data in files, databases, or external systems
  • Cannot interact with APIs to perform real actions

Before MCP, every AI tool needed custom integrations for each external system it wanted to access. This created a fragmented ecosystem where developers had to build bespoke APIs for every combination of AI platform and data source.

MCP solves this by introducing a single, secure, and open protocol. Instead of building N×M custom integrations (N AI platforms × M data sources), you build M MCP servers that work with any MCP-compatible AI platform.

What is Model Context Protocol?

Model Context Protocol is an open standard that enables LLMs to securely connect to external data sources and tools through a unified interface. Think of it as USB for AI—a universal connector that lets any AI model plug into any tool or data source.

Core Capabilities

Standardized Interface
A unified way to expose tools and resources that works across all AI platforms.

Secure by Design
Explicit permissions with human oversight for every action. The LLM cannot do anything without proper authorization.

Cross-Platform Compatible
Write an MCP server once, use it with Claude Desktop, Cursor, custom AI applications, or any other MCP-compatible client.

A Real Example: The Database Query Problem

Let's look at a concrete example that demonstrates the difference MCP makes.

Scenario: You ask your AI assistant, "How many tables do I have in my Postgres database?"

Before MCP

Without MCP, the AI cannot access your database directly. Instead, it provides instructions:

  • Suggests using psql command-line tool
  • Provides SQL query examples you need to run manually
  • Explains different approaches to check table counts

Result: You get helpful advice, but you still have to do the work yourself.

After MCP

With an MCP server connected to your Postgres database, the AI:

  1. Directly queries your database using the MCP tools
  2. Retrieves the actual table count and names
  3. Returns the real answer: "You have 34 tables in your Postgres database"
  4. Lists all table names: admin_calendar_notifications, approval_requests, articles, etc.

Result: You get the actual answer immediately, without lifting a finger.

How MCP Works: The Architecture

MCP operates through a five-step workflow:

1. Request and Tool Discovery

The LLM identifies that it needs external tools to answer your request. It queries available MCP servers to discover what tools are accessible.

2. Tool Invocation

The LLM sends structured requests to invoke specific tools (e.g., "query database" or "read file").

3. External Action and Data Return

The MCP server executes the action on the external system and returns the data to the LLM.

4. Second Action and Response Generation

If needed, the LLM can invoke additional tools or use the data to generate a comprehensive response.

5. Final Confirmation

The LLM presents the result to the user, often with a confirmation step for sensitive actions.
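Under the hood, the discovery and invocation steps are ordinary JSON-RPC 2.0 exchanges. A minimal sketch of the messages a client sends and receives (the `tools/list` and `tools/call` method names come from the MCP spec; the tool name and SQL are illustrative, borrowed from the database example above):

```typescript
// Step 1: the client discovers which tools a server exposes.
const discoverRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
  params: {}
};

// Step 2: the client invokes a specific tool with structured arguments.
const callRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/call",
  params: {
    name: "query_database",
    arguments: { query: "SELECT count(*) FROM information_schema.tables" }
  }
};

// Step 3: the server replies with a result keyed to the same request id.
const callResponse = {
  jsonrpc: "2.0" as const,
  id: 2,
  result: {
    content: [{ type: "text", text: "34" }]
  }
};
```

The `id` correlation is what lets a client match responses to in-flight requests when several tool calls overlap.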

MCP Architecture Components

MCP Host

The environment where the LLM operates:

  • Conversational assistants (Claude Desktop, ChatGPT)
  • IDEs (Cursor, VS Code)
  • Custom agent runtimes

MCP Client

Lives inside the host and acts as a bridge:

  • Discovers available MCP servers and their capabilities
  • Translates LLM requests into MCP protocol messages
  • Handles responses and passes data back to the LLM

MCP Server

Connects to external systems and exposes capabilities:

  • Database connections (Postgres, MySQL, MongoDB)
  • File system access
  • API integrations (GitHub, Slack, Linear)
  • Browser automation (Chrome DevTools)
  • Custom business logic

Transport Layer

Communication happens via JSON-RPC 2.0 over two transport mechanisms:

  • stdio transport: Local communication via standard input/output (for local tools)
  • HTTP with Server-Sent Events: Remote communication with streaming support (for cloud services)

MCP Primitives: The Building Blocks

MCP servers expose three types of primitives to LLMs:

Tools

Functions with side effects that the LLM can invoke to perform actions:

// ListToolsRequestSchema comes from "@modelcontextprotocol/sdk/types.js"
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "query_database",
    description: "Execute a SQL query on the Postgres database",
    inputSchema: {
      type: "object",
      properties: {
        query: { type: "string", description: "SQL query to execute" }
      },
      required: ["query"]
    }
  }]
}));

Examples: Query database, send email, create file, call API
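To complete the picture, here is a sketch of what the matching call handler for `query_database` might look like. The `QueryClient` type and `handleQueryDatabase` helper are illustrative, not part of the SDK; the database client is injected so the logic stays easy to test:

```typescript
// A stand-in for any database client with a query() method (e.g. node-postgres).
type QueryClient = { query: (sql: string) => Promise<{ rows: unknown[] }> };

// Execute the requested SQL and wrap the rows in MCP content blocks.
async function handleQueryDatabase(db: QueryClient, args: { query: string }) {
  const result = await db.query(args.query);
  // MCP tools return their output as an array of typed content blocks.
  return {
    content: [{ type: "text" as const, text: JSON.stringify(result.rows) }]
  };
}
```

In a real server you would register this inside the `CallToolRequestSchema` handler and validate the SQL before executing it.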

Resources

Read-only data sources that provide context without side effects:

server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: "postgres://tables",
    name: "Database Tables",
    description: "List of all tables in the database",
    mimeType: "application/json"
  }]
}));

Examples: Configuration files, documentation, database schemas, API responses
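A matching read handler for the `postgres://tables` resource might look like the sketch below. `readTablesResource` and `listTableNames` are hypothetical helpers; only the shape of the returned `contents` payload follows the protocol:

```typescript
// Resolve the "postgres://tables" resource into its contents.
// listTableNames stands in for whatever actually queries the schema.
async function readTablesResource(listTableNames: () => Promise<string[]>) {
  const tables = await listTableNames();
  // Resources return read-only contents keyed by URI, with no side effects.
  return {
    contents: [{
      uri: "postgres://tables",
      mimeType: "application/json",
      text: JSON.stringify(tables)
    }]
  };
}
```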

Prompts

Reusable templates that guide the LLM's behavior for specific tasks:

server.setRequestHandler(ListPromptsRequestSchema, async () => ({
  prompts: [{
    name: "analyze_database",
    description: "Analyze database structure and suggest optimizations",
    arguments: [{
      name: "table_name",
      description: "Name of the table to analyze",
      required: false
    }]
  }]
}));

Examples: "Review this PR", "Analyze performance", "Generate test cases"
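Resolving a prompt turns the template declaration into concrete chat messages for the LLM. A sketch of what the `analyze_database` prompt above might expand to (`getAnalyzeDatabasePrompt` is an illustrative helper; the `messages` shape follows the protocol):

```typescript
// Expand the "analyze_database" prompt into concrete chat messages,
// honoring the optional table_name argument declared above.
function getAnalyzeDatabasePrompt(args: { table_name?: string }) {
  const scope = args.table_name ? `the "${args.table_name}" table` : "all tables";
  return {
    messages: [{
      role: "user" as const,
      content: {
        type: "text" as const,
        text: `Analyze the structure of ${scope} and suggest optimizations.`
      }
    }]
  };
}
```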

Building Your First MCP Server

Here's a simple example of an MCP server that provides weather information:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server({
  name: "weather-server",
  version: "1.0.0"
}, {
  capabilities: {
    tools: {}
  }
});

// Advertise the tool so clients can discover it
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_weather",
    description: "Get current weather for a location",
    inputSchema: {
      type: "object",
      properties: {
        location: { type: "string", description: "City name" }
      },
      required: ["location"]
    }
  }]
}));

// Handle invocations of the tool
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_weather") {
    const location = request.params.arguments?.location;

    // Fetch actual weather data (simplified; implement against your weather API)
    const weather = await fetchWeatherData(location);

    return {
      content: [{
        type: "text",
        text: JSON.stringify(weather)
      }]
    };
  }

  throw new Error("Unknown tool");
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);

This server can now be used by any MCP client to fetch weather information!
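To try it from a client such as Claude Desktop, you register the server in the client's configuration. The exact file location varies by platform; the shape of `claude_desktop_config.json` looks like this (the `path/to/weather-server.js` path is a placeholder for your built server):

```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["path/to/weather-server.js"]
    }
  }
}
```

After restarting the client, the `get_weather` tool appears in its tool list automatically.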

MCP Apps: Beyond Text Interactions

While MCP servers enable LLMs to access tools and data, MCP Apps take this further by adding custom user interfaces. They provide rich, visual interactions beyond simple text chat.

Example: Wordly MCP App

Wordly MCP App demonstrates how MCP Apps enhance user experience. Instead of getting grammar corrections as plain text, you get:

  • Interactive corrections: Visual highlighting of grammar issues
  • Multiple formality levels: Choose between Standard, Formal, Very Formal, or Casual
  • Real-time suggestions: See corrections as you type
  • Rich formatting: Better readability with styled components

Key Benefits of MCP Apps:

  • Custom UI components tailored to specific use cases
  • Interactive data visualizations (charts, tables, graphs)
  • Better user experience for complex workflows
  • Seamless integration with MCP tools

MCP vs. Other Approaches

MCP vs. RAG (Retrieval-Augmented Generation)

RAG is designed for:

  • Grounding responses in specific knowledge bases
  • Semantic search across documents
  • Providing relevant context from text corpora

MCP is designed for:

  • Executing actions with side effects
  • Real-time data access and manipulation
  • Interacting with structured systems (databases, APIs)
  • Tool invocation and workflow orchestration

Think of it this way: RAG is like giving the AI a library card to read books. MCP is like giving the AI hands to build things.

MCP vs. Agent Skills

Many AI platforms have proprietary "skills" or "plugins" systems. MCP standardizes this:

Traditional Agent Skills:

  • Platform-specific implementations
  • Limited portability between systems
  • Proprietary APIs and formats

MCP:

  • Open standard that works everywhere
  • Write once, use with any MCP client
  • Community-driven ecosystem

The Future of MCP

Model Context Protocol represents a paradigm shift in how AI systems interact with the world. As the ecosystem matures, we can expect:

Standardization: MCP becoming the universal protocol for AI-tool interactions, similar to how HTTP standardized web communication.

Ecosystem Growth: Thousands of MCP servers providing access to every tool and data source imaginable.

Enterprise Adoption: Organizations building internal MCP servers to safely expose their systems to AI.

Enhanced Capabilities: Advanced features like streaming responses, multi-modal tools, and complex workflow orchestration.

Explore MCP in Action

Ready to dive deeper? The official Model Context Protocol documentation and the open-source example servers are good starting points.

The future of AI is not just about smarter models—it's about models that can actually get things done. MCP is making that future a reality.
