DEV Community

Shrijith Venkatramana

MCP is APIs for Agents

REST APIs gave humans (and the code they wrote) a standardized way to access software over the network.

MCP is trying to do the same thing for agents.

That simple framing clears up a lot of confusion.

For nearly 20 years we built systems assuming the caller was a human developer writing code against APIs. Now the caller is increasingly an LLM-driven agent. MCP changes the interface layer accordingly.

The REST Era: APIs Designed for Humans

With REST + OpenAPI, the typical flow looked like this:

Human Developer
    ↓
SDK / HTTP Client
    ↓
REST API
    ↓
Service

A human would read the docs, inspect the OpenAPI spec, figure out auth, pick the right endpoints, map parameters, handle retries and errors, and manually compose workflows.

OpenAPI became the universal machine-readable description of the API. It captured endpoints, request/response schemas, authentication, parameters, types, and examples. This enabled Swagger UI, SDK generators, Postman collections, API gateways, client codegen, and testing tools.
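To make that concrete, here is a minimal sketch of what one OpenAPI operation captures, written as a Python dict (the `/tickets` endpoint and its fields are invented for illustration; the structure follows the OpenAPI 3.x field names):

```python
# A minimal, hypothetical OpenAPI operation, expressed as a Python dict.
# Field names follow the OpenAPI 3.x spec; the endpoint itself is invented.
openapi_operation = {
    "post": {
        "operationId": "createTicket",
        "summary": "Create a support ticket",
        "requestBody": {
            "required": True,
            "content": {
                "application/json": {
                    "schema": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string"},
                            "priority": {"type": "string", "enum": ["low", "high"]},
                        },
                        "required": ["title"],
                    }
                }
            },
        },
        "responses": {"201": {"description": "Ticket created"}},
    }
}

# Swagger UI, SDK generators, and Postman all consume exactly this structure.
print(openapi_operation["post"]["operationId"])
```

Everything the tooling ecosystem does flows from this one machine-readable description.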

In short:

OpenAPI standardized "how humans and programs understand APIs."

But agents aren't humans. And that changes everything.

Why REST APIs Are Awkward for Agents

An LLM can call REST APIs directly — technically there's nothing stopping it. But raw REST has some serious friction when the consumer is an agent.

1. REST assumes deterministic callers
REST expects the caller to already know which endpoint to hit, which parameters matter, the right sequencing, and how to handle failures. Agents don't work that way. They reason step-by-step and make decisions dynamically.

2. OpenAPI is optimized for developers, not reasoning systems
Humans are great at inferring intent from sparse or messy docs. Agents struggle with ambiguous operation names, missing descriptions, inconsistent schemas, and undocumented behavior.

Multiple OpenAPI→MCP articles have pointed out the same thing: the quality of the MCP experience depends heavily on the semantic quality of the underlying OpenAPI spec.

3. REST exposes transport details too directly
Agents don't care about HTTP verbs, query params vs body, pagination formats, or JSON quirks. They care about capabilities.

Instead of thinking:

POST /api/v3/issues
Content-Type: application/json

They want to think:

"Create a Jira ticket"

MCP pulls the interface up to the level of tools and capabilities.

MCP: APIs for Agents

The Model Context Protocol (MCP) is essentially:

A standardized protocol that lets agents discover and invoke tools dynamically.

Anthropic describes it as "a USB-C port for AI applications."

The new flow usually looks like:

User
  ↓
LLM Host (Claude, Cursor, VSCode, etc)
  ↓
MCP Client
  ↓
MCP Server
  ↓
REST APIs / Databases / Tools / Systems

The crucial mental model shift:

REST = interface for programmers
MCP = interface for agents

What MCP Actually Exposes

An MCP server exposes tools, resources, prompts, and capabilities. The star of the show is usually the tool.

Here's a simplified example:

{
  "name": "create_github_issue",
  "description": "Create a GitHub issue in a repository",
  "inputSchema": {
    ...
  }
}

Notice everything that disappeared: HTTP verbs, endpoint URLs, transport details. The agent now reasons at the capability level.
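For a fuller picture, here is a sketch of what a complete tool definition might contain once the `inputSchema` is filled in. The property names are illustrative; the key point is that MCP tool inputs are described with plain JSON Schema:

```python
# Sketch of a complete MCP-style tool definition (property names illustrative).
# inputSchema is ordinary JSON Schema, which is what MCP uses for tool inputs.
tool = {
    "name": "create_github_issue",
    "description": "Create a GitHub issue in a repository",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name of the repository"},
            "title": {"type": "string", "description": "Issue title"},
            "body": {"type": "string", "description": "Issue body in Markdown"},
        },
        "required": ["repo", "title"],
    },
}

# No URL, no HTTP verb, no auth header: the agent sees only the capability.
print(tool["name"])
```

The descriptions are not decoration here — they are the primary signal the model uses to decide when and how to call the tool.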

REST vs MCP

REST/OpenAPI             MCP
-----------------------  ----------------------
Designed for developers  Designed for agents
Endpoint-centric         Capability-centric
HTTP-first               Tool-first
Human docs               LLM-readable semantics
Explicit orchestration   Dynamic reasoning
SDKs                     Tool registries
Request/response focus   Intent/action focus

So Where Does OpenAPI Fit?

This is where things got exciting fast.

We already have massive amounts of structured API metadata sitting in OpenAPI specs. So instead of hand-writing MCP servers, the ecosystem started generating them automatically.

A growing set of converter tools is basically doing OpenAPI → MCP translation automatically.

The Core Conversion Idea

Take a REST endpoint:

POST /tickets

with OpenAPI metadata:

operationId: createTicket
summary: Create support ticket

An MCP generator turns it into a clean tool definition. Under the hood the MCP server still makes the HTTP call, but the agent sees a high-level capability.
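A toy converter makes the idea concrete. This is a sketch, not any particular generator's code; it assumes the spec provides `operationId` and `summary`, and keeps the HTTP details as private metadata:

```python
# Toy OpenAPI -> MCP tool converter (a sketch, not a real generator).
import re

def snake_case(name: str) -> str:
    """camelCase operationId -> snake_case tool name."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def openapi_to_tool(path: str, method: str, op: dict) -> dict:
    """Map one OpenAPI operation onto an MCP-style tool definition."""
    schema = (
        op.get("requestBody", {})
        .get("content", {})
        .get("application/json", {})
        .get("schema", {"type": "object"})
    )
    return {
        "name": snake_case(op["operationId"]),
        "description": op.get("summary", f"{method.upper()} {path}"),
        "inputSchema": schema,
        # HTTP details survive only as private metadata the MCP server
        # uses when it actually makes the call on the agent's behalf.
        "_http": {"method": method, "path": path},
    }

tool = openapi_to_tool("/tickets", "post", {
    "operationId": "createTicket",
    "summary": "Create support ticket",
})
print(tool["name"])  # create_ticket
```

Real generators do much more (auth wiring, pagination, error mapping), but the shape of the mapping is exactly this.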

Why This Works Surprisingly Well

OpenAPI already gives us schemas, parameters, descriptions, auth definitions, and operation names. A lot of REST APIs were already "halfway to MCP."

That's why OpenAPI-to-MCP tooling exploded so quickly.

But Conversion Is Not Enough

Here's where many early takes fall short.

A naive 1:1 mapping from REST endpoint to MCP tool is often... mediocre. MCP isn't just a protocol translation — it's an interface redesign for agents. Production teams figured this out quickly.

The Semantic Problem

Humans tolerate ugly APIs. Agents don't.

Bad naming (POST /v2/createTaskEx), weak descriptions (summary: Get task), or ambiguous parameters become painfully obvious when an agent tries to use them.

The Real Insight

OpenAPI→MCP isn't mere translation. It's transforming developer-oriented APIs into agent-oriented capabilities. That's a deeper change.

Good MCP Design Often Adds Abstractions

The best implementations go beyond CRUD. Instead of exposing createIssue, assignIssue, addLabel, they might offer manage_incident_ticket — a higher-level tool that orchestrates multiple calls behind the scenes.

Composite tools give agents semantically meaningful operations to reason over, which works much better than forcing them to orchestrate raw CRUD calls.
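A sketch of what such a composite tool might look like. The low-level calls (`create_issue`, `assign_issue`, `add_label`) are stubbed here, and all the names are hypothetical:

```python
# Hypothetical composite tool: one agent-facing capability that
# orchestrates several low-level CRUD calls behind the scenes.

def create_issue(title):            # stub for POST /issues
    return {"id": 101, "title": title}

def assign_issue(issue_id, user):   # stub for POST /issues/{id}/assignee
    return {"id": issue_id, "assignee": user}

def add_label(issue_id, label):     # stub for POST /issues/{id}/labels
    return {"id": issue_id, "label": label}

def manage_incident_ticket(title: str, on_call: str) -> dict:
    """One semantically meaningful operation the agent can reason about."""
    issue = create_issue(title)
    assign_issue(issue["id"], on_call)
    add_label(issue["id"], "incident")
    return {"issue_id": issue["id"], "assignee": on_call, "label": "incident"}

result = manage_incident_ticket("DB connections exhausted", "alice")
print(result["issue_id"])
```

The agent makes one decision ("manage this incident") instead of three, which cuts both reasoning errors and token cost.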

MCP Servers Are Becoming API Gateways for Agents

Historically API gateways served humans and services. Now MCP servers are emerging as the gateway for agents — acting as capability registry, semantic adapter, auth broker, orchestration layer, safety boundary, and context provider.

Local vs Remote MCP

  • Local MCP (stdio) — perfect for Cursor, filesystem tools, IDE automation, desktop workflows.
  • Remote MCP (HTTP/SSE) — ideal for SaaS platforms, cloud APIs, enterprise systems. A lot of momentum is heading here.

What Happens to SDKs?

SDKs aren't going away, but they're no longer the primary interface for AI-native systems.

The pattern is shifting from Human → SDK → API to Agent → MCP → API. The SDK often still lives inside the MCP server.
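In code, that shift might look like this sketch: the SDK call doesn't disappear, it just moves behind the tool boundary. The `GitHubClient` class here is a hypothetical stand-in for a real SDK:

```python
# Sketch: the SDK survives, but lives inside the MCP server, behind a tool.
class GitHubClient:
    """Hypothetical stand-in for a real SDK client."""
    def create_issue(self, repo: str, title: str) -> dict:
        return {"repo": repo, "title": title, "number": 1}

sdk = GitHubClient()

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Roughly what an MCP server's tool dispatcher does."""
    if name == "create_github_issue":
        # Agent -> MCP -> (SDK) -> API: the SDK call happens here.
        return sdk.create_issue(arguments["repo"], arguments["title"])
    raise ValueError(f"unknown tool: {name}")

issue = handle_tool_call("create_github_issue",
                         {"repo": "octo/demo", "title": "Fix login"})
print(issue["number"])
```

The agent never imports the SDK; it only sees the tool name and schema.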

The Bigger Shift

Era            Primary Consumer
-------------  -----------------
Web era        Humans
API era        Programs
MCP/Agent era  Reasoning systems

REST standardized service access. MCP standardizes agent access.

The Most Important Architectural Change

Software used to expose data. Now it's exposing capabilities.

Agents don't just retrieve information — they act. This demands semantic discoverability, richer intent descriptions, tool safety, permission boundaries, and composable workflows.
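As a tiny sketch of what a permission boundary can mean in practice (the allow-list and tool names are invented):

```python
# Hypothetical safety boundary: the set of tools an agent may invoke
# in a given session. Destructive capabilities are simply not exposed.
ALLOWED_TOOLS = {"read_ticket", "create_ticket"}  # note: no delete_*

def guard(tool_name: str) -> bool:
    """Reject capabilities outside the session's permission boundary."""
    return tool_name in ALLOWED_TOOLS

print(guard("create_ticket"))  # True
print(guard("delete_ticket"))  # False
```

Because agents act rather than merely read, narrowing the exposed capability set is the simplest safety lever an MCP server has.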

MCP is the protocol built for exactly that transition.

One Way To Think About It

OpenAPI was designed so humans could generate clients.
MCP is designed so models can generate behavior.

That's why it feels qualitatively different even when it's still calling REST APIs underneath.

Practical Architecture Today

Modern AI-native systems increasingly look like:

Frontend Agent
    ↓
MCP Client
    ↓
MCP Server
    ↓
REST/gRPC/DB/internal services

And many companies are realizing they already own thousands of APIs — MCP is simply the new interaction layer sitting on top of them.

The agent era is here, and the interface layer is evolving with it.


What do you think — is MCP going to be as big a shift as REST was? Drop your thoughts below.


Now, a quick introduction to git-lrc.

git-lrc is a free micro AI code review tool that runs on Git commits as you develop software with AI agents.

AI can generate large amounts of code, but your team still owns the outcome. You cannot delegate responsibility—only execution.

git-lrc provides lightweight code reviews at commit time. It improves stability, security, and performance while reducing bugs and costs.

You run Git as usual. When you commit, a review is triggered. You receive a summary of changes and categorized issues—warnings, critical issues, performance problems, and security concerns.

The tool includes a web UI for reviewing results.

It is open source and available at github.com/HexmosTech/git-lrc.

For teams, pricing starts at $32 per month. It supports unlimited users and includes integrations with GitHub, GitLab, and Bitbucket, along with AI credits.

You can learn more at hexmos.com/git-lrc
