
Odewole Babatunde Samson


MCPs - A Case for a Different AI-Native Data Standard

AI has taken off in the last ten years. We’ve seen transformer architectures go from research papers to powering everyday tools. LLMs are rewriting how we interact with software. Yet, in the middle of this technological leap, one thing hasn’t developed nearly fast enough: the way our data systems connect to AI.

That’s where MCPs (Model Context Protocols) come in: an emerging standard aimed at bridging the increasingly obvious gap between traditional data systems and AI-native applications. They aren’t just another abstraction layer. MCPs propose a fundamental shift in how data is structured, exchanged, and interpreted by machine learning models, especially large language models (LLMs).

But to see why MCPs are such a big deal, let's take a quick look back.

The History of Data Standards and the Development of Interconnectivity

Remember when data mostly lived on single machines? Applications were built around their data. Then, databases arrived, and SQL became the common language for structured data. As the web grew, APIs showed up, letting different systems communicate. REST set the rules for data over HTTP, GraphQL gave us more control, and gRPC brought speed. Each of these steps promised the same thing: making it simpler for software to get and use data.

But here's the thing: none of these older standards were built for machine learning.

Not in a deep way, anyway. Tools like REST, GraphQL, and gRPC were made for software built by humans. They expect a developer to read the docs, understand the data layout, write some glue code, and ship. But LLMs don't read docs. They don't ask your backend engineer for help. They take in data and learn patterns. And they need that data arranged in a way that suits how they 'think' statistically, not in a way that's easy for a human to read.

That's exactly where MCPs become crucial.

What Are MCPs?

So, what exactly are MCPs? The Model Context Protocol (MCP) is an open standard developed by Anthropic and adopted by OpenAI, among others. MCPs are a new way to organize and send data, built specifically for AI models, especially LLMs.

Think of them like GraphQL, but instead of helping developers, they help models. MCPs set the rules for how data is asked for, shaped, shrunk, and given extra context so that models can truly understand and respond well, without human help.

Core Principles of MCPs:

  1. Model-first Design: Built for models first. MCPs make data easy for models to use, not just for humans to code.
  2. Context Awareness: Data isn't sent alone. It's packaged with extra info and signals about its purpose, helping models clear up confusion.
  3. Dynamic Compression: MCPs can shrink or summarize data on the fly, matching the model's limits and how much context it can handle.
  4. Semantic Anchoring: Rather than relying purely on syntactic correctness, MCPs lean into semantic clarity: “what this data means” rather than “what it looks like.”
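
To make that concrete, here's a rough sketch of what a context-aware payload shaped by these principles might look like. This is a hypothetical illustration, not a published wire format; the field names (payload, intent, schema_hints, token_budget) are invented for this example.

```python
from dataclasses import dataclass, field


@dataclass
class ContextEnvelope:
    """A hypothetical MCP-style message: the data itself, plus the
    signals a model needs to interpret it (context awareness and
    semantic anchoring), plus a budget for dynamic compression."""
    payload: str                       # data already rendered for the model
    intent: str                        # why this data is being sent
    schema_hints: dict = field(default_factory=dict)  # what the fields *mean*
    token_budget: int = 1024           # cap that compression must respect


envelope = ContextEnvelope(
    payload="In April, Jane purchased a black hoodie and two ceramic mugs.",
    intent="Answer a question about the user's recent orders.",
    schema_hints={"product_id": "refers to internal SKUs, not user-facing IDs"},
    token_budget=512,
)
```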

Why MCPs Exist: The Pain of AI-First Applications

Because building with today's LLMs often feels like a broken process. It's a complicated, roundabout effort, like a Rube Goldberg machine, where you're:

  • Fetching data via API
  • Cleaning and changing it
  • Stuffing it into prompt templates
  • Injecting it into the model's main prompt
  • And then, basically, just hoping for the best.

Ask any engineer building with GPT-4 or Claude, and you’ll hear the same complaints:

  • "The model ignored the product ID I passed."
  • "It hallucinated an answer because the database schema wasn’t in the prompt."
  • "I’m token-limited, but I need context from multiple sources."

These aren't UX problems. They are protocol problems.

We’re still feeding AI models with plumbing designed for humans. It’s like trying to connect an electric vehicle to a gas pump. Yes, it technically fits, but it doesn’t work well, and it definitely doesn’t scale.

How Do MCPs Work in the Real World?

Let’s walk through a simple example: an LLM needs to tell a user about their past orders.

Traditional API Flow:

  • Your app calls the /orders endpoint.
  • It gets back a raw JSON with order details, prices, and dates.
  • An engineer then has to parse that response and hand-build a prompt like: 'Here are Jane’s orders: {orders}. Now, answer this question: {question}' (see the sketch below)
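
In code, that glue work looks something like this. It's a minimal sketch assuming a hypothetical https://api.example.com/orders endpoint and the requests library; the URL, field names, and the llm_complete placeholder are all invented for this example.

```python
import requests

# 1. Fetch raw JSON from a (hypothetical) /orders endpoint.
resp = requests.get("https://api.example.com/orders", params={"user": "jane"})
resp.raise_for_status()
orders = resp.json()  # e.g. [{"product_id": "SKU-881", "price": 45.0, "date": "2024-04-02"}, ...]

# 2. Hand-roll a prompt: the engineer decides what the model gets to see.
orders_text = "\n".join(
    f"- {o['product_id']}: ${o['price']:.2f} on {o['date']}" for o in orders
)
prompt = (
    f"Here are Jane's orders:\n{orders_text}\n\n"
    "Now, answer this question: What did Jane buy last month?"
)

# 3. Inject it into the model and hope the raw fields make sense to it.
# answer = llm_complete(prompt)  # placeholder for whatever client you use
```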

MCP Flow:

  1. The client sends a request like order_summary(query="What did Jane buy last month?").
  2. The MCP layer dynamically:
  • Compresses relevant orders into natural language
  • Adds time context (“last month = April”)
  • Embeds schema hints (“‘product_id’ refers to internal SKUs”)
  3. It then outputs: 'In April, Jane purchased a black hoodie and two ceramic mugs. Her total spend was $67.' (A toy implementation of this layer follows below.)
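
Here's a toy version of such a layer in Python. To be clear, this is a sketch of the idea, not Anthropic's actual protocol implementation; the ORDERS data and the "last month" resolution logic are invented for illustration.

```python
from collections import Counter
from datetime import date, timedelta

# Invented order data; in practice this would come from your datastore.
ORDERS = [
    {"name": "black hoodie", "price": 45.0, "date": date(2024, 4, 2)},
    {"name": "ceramic mug", "price": 11.0, "date": date(2024, 4, 9)},
    {"name": "ceramic mug", "price": 11.0, "date": date(2024, 4, 9)},
]


def order_summary(query: str, today: date = date(2024, 5, 6)) -> str:
    """A toy MCP-style layer: resolve time context, filter the data,
    and compress it into model-ready natural language."""
    # `query` would drive retrieval in a real layer; this toy ignores it.
    # Time context: resolve "last month" relative to today.
    last_month_end = today.replace(day=1) - timedelta(days=1)

    relevant = [
        o for o in ORDERS
        if (o["date"].year, o["date"].month)
        == (last_month_end.year, last_month_end.month)
    ]
    if not relevant:
        return "No orders found for last month."

    # Compression: natural language instead of raw JSON.
    counts = Counter(o["name"] for o in relevant)
    items = ", ".join(f"{n}x {name}" if n > 1 else name for name, n in counts.items())
    total = sum(o["price"] for o in relevant)
    return f"In {last_month_end:%B}, Jane purchased {items}. Her total spend was ${total:.0f}."


print(order_summary("What did Jane buy last month?"))
# -> In April, Jane purchased black hoodie, 2x ceramic mug. Her total spend was $67.
```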

This isn't just making your JSON look nicer. This is AI-optimized information – distilled, organized, and perfectly ready for the model to use.

Where MCPs Fit: The New AI Stack

The basic parts of the AI-native application stack are coming together:

  • Storage: Vector databases, blob storage, relational stores
  • Computation: GPUs, TPUs, inference APIs
  • Orchestration: LangChain, Semantic Kernel, PromptLayer
  • Interface: Chat, voice, agents

But the layer that’s been missing is a model context layer — something between the raw data and the prompt. MCPs fit snugly here.

They can be:

  • A protocol spec (open or proprietary)
  • A runtime engine (e.g., an edge service that preprocesses data)
  • A library (wrapping internal APIs and prompt templates)

And they’re not just for output generation. MCPs also power Retrieval-Augmented Generation (RAG) systems, enabling smarter, cheaper, and more context-aware document injection.
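
For example, a context layer in a RAG pipeline might rank retrieved snippets and pack only what fits the model's budget. Here's a rough sketch; the relevance scores and the ~4-characters-per-token estimate are simplifying assumptions, not part of any spec.

```python
def pack_context(snippets: list[tuple[float, str]], token_budget: int) -> str:
    """Greedily pack the highest-scoring retrieved snippets into the
    prompt until the (roughly estimated) token budget is spent."""
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        est_tokens = len(text) // 4 + 1  # crude heuristic: ~4 chars per token
        if used + est_tokens > token_budget:
            continue  # skip anything that would blow the budget
        chosen.append(text)
        used += est_tokens
    return "\n---\n".join(chosen)


retrieved = [
    (0.92, "Jane's April orders: one black hoodie ($45), two ceramic mugs ($22)."),
    (0.31, "Shipping policy: orders over $50 ship free within the US."),
]
context = pack_context(retrieved, token_budget=20)
# Only the highest-value snippet fits; the model never sees the noise.
```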

Are MCPs Already Out There?

While most of these projects don't brand themselves as 'MCPs', several startups and open-source projects are building with the same idea in mind:

  • Sierra introduces “model-native APIs” for agents
  • LangChain Expression Language (LCEL) hints at composability in model context
  • Vellum.ai lets teams define structured prompt inputs, close to MCPs in spirit
  • OpenRouter / Function Calling 2.0 and Toolformer models are moving toward more formalized interaction standards

Few of these projects use the name, but the trend is undeniable: we’re starting to build for machines, not humans.

The Business Case: Why This Matters Now

Why is this important for dev teams, CTOs, and infra architects? Because AI-native products consistently hit similar roadblocks:

  • Prompt costs jump through the roof as context gets longer.
  • LLM answers become less dependable as the input gets more complicated.
  • Time spent on data copying, cleaning, and prompt tuning eats up more engineering hours than building actual features.

MCPs offer a clear path forward:

  • Lower costs: By smartly cutting down on token usage.
  • More reliable answers: By giving models structured, meaningful input.
  • Faster development: By keeping data work separate from model logic.

It might not fix everything, but it's a huge step in the right direction.

Final Thought

Think about it: REST took over from RPC, and GraphQL pushed past REST. Similarly, MCPs are set to grow from a niche idea into a widely accepted standard – maybe not under that exact name, but definitely in what they aim to do.

We're now building software for a new kind of 'user': a machine that thinks in probabilities. It’s high time our data standards evolved to match. If you’re creating AI-native products, ask yourself: Are you truly speaking your model's language? If not, perhaps it's time to learn MCP.
