Carl-W for remoet.dev

Originally published at remoet.dev

What Is MCP (Model Context Protocol)? A Practical Guide

MCP (Model Context Protocol) is an open standard that lets AI agents connect to external software, discover available tools, and take actions on your behalf. Instead of being trapped in a chat window, your AI can search databases, manage projects, update profiles, and interact with any service that runs an MCP server.

You've probably seen MCP mentioned everywhere lately. Twitter threads, blog posts, product announcements. Every AI company seems to be shipping an "MCP server" and every developer tool is adding "MCP support." But if you've tried to figure out what MCP actually is, you've probably run into a wall of jargon and protocol specs.

I'm going to cut through that.

MCP in Plain English

Anthropic introduced MCP in November 2024. In December 2025, they donated it to the Agentic AI Foundation under the Linux Foundation, with OpenAI, Block, and others as co-stewards. It's a genuinely open protocol now.

Here's the problem it solves. When you use Claude or ChatGPT, the AI can talk to you, but it can't actually do anything outside that conversation. It can't check your calendar, search a database, file a bug report, or look up your job applications. It's stuck inside a text box.

MCP changes that. It's a universal plug that lets an AI agent reach into other software and take actions on your behalf. Search a job board. Update a project. Star a company. Send a message. Whatever the connected service supports.

Yeah, I know, the USB analogy is the cliché everyone uses for MCP. But it's accurate, so I'll use it anyway: before USB, every device had its own proprietary connector. Printers, keyboards, cameras, all different cables. USB standardized the physical connection. MCP standardizes the AI-to-software connection.

How It Actually Works

Under the hood, MCP uses JSON-RPC 2.0 as its wire protocol. If you've worked with LSP (Language Server Protocol), the architecture will feel familiar. There are two sides to every MCP connection:

The client is your AI agent. Claude Desktop, Cursor, Windsurf, Claude Code, Cline. These are the apps where you type prompts and have conversations. They speak MCP natively.

The server is whatever tool or service you want the agent to access. A GitHub MCP server lets your agent manage repos and issues. A Notion MCP server lets it read and write documents. A Remoet MCP server lets it search remote job listings, manage your developer profile, and track applications.

When a client connects to a server, the server advertises its capabilities through three primitives:

  • Tools: Actions the agent can execute. Searching listings, creating a profile entry, submitting an application. These are the most commonly used primitive.
  • Resources: Data the agent can read, like files, database records, or configuration. Think of these as GET endpoints.
  • Prompts: Predefined templates that guide the agent through specific workflows. Less common but useful for complex multi-step tasks.

Each tool has a name, a description, and a JSON Schema defining its inputs. The AI reads those descriptions and figures out which tools to call based on what you ask.

So when you say "find me remote companies hiring React developers," the agent looks at the available tools, picks the search tool, fills in the right parameters, calls it, and returns the results. You never have to know the tool exists. You just describe what you want.
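That discovery-then-call flow can be sketched as plain JSON-RPC 2.0 messages. The tool name and parameters below (`search_jobs`, `stack`) are made up for illustration; real servers advertise their own:

```python
import json

# What a client sends to discover tools (JSON-RPC 2.0 request).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# An abridged server reply: each tool has a name, a description,
# and a JSON Schema describing its inputs. "search_jobs" is hypothetical.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_jobs",
                "description": "Search remote job listings by tech stack.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "stack": {"type": "array", "items": {"type": "string"}}
                    },
                    "required": ["stack"],
                },
            }
        ]
    },
}

# Once the model picks a tool and fills in parameters,
# the client invokes it with a tools/call request.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_jobs", "arguments": {"stack": ["React"]}},
}

wire = json.dumps(call_request)  # what actually travels over stdio or HTTP
```

The key point: the schemas come from the server at connection time, so the client never hard-codes them.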

MCP vs Function Calling

This is the confusion I see most often. Function calling is a model-level feature where the AI can output structured JSON to invoke predefined functions. It's been around since 2023. MCP is the transport and discovery layer that sits on top of function calling.

Think of it this way: function calling is the engine. MCP is the road network.

Without MCP, every developer has to manually define function schemas, wire them up to API clients, handle authentication, and build the plumbing for each integration from scratch. MCP standardizes all of that. The server describes its tools once, any MCP client can discover and use them, and the client's function calling capability handles the actual invocation.

So they're complementary, not competing. Function calling is what lets the model decide to call a tool. MCP is what lets the model discover tools dynamically from external servers and execute them over a standardized transport.
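One way to see the layering: a client can take the tool definitions it discovered over MCP and hand them to the model as ordinary function-calling schemas. A minimal sketch, where both the example tool and the target schema shape are illustrative rather than any specific vendor's API:

```python
def mcp_tools_to_function_schemas(tools):
    """Map MCP tool definitions onto the generic function-calling shape
    most chat-model APIs accept: a name, a description, and parameters."""
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                "parameters": t.get("inputSchema", {"type": "object"}),
            },
        }
        for t in tools
    ]

# A hypothetical tool, as an MCP server might advertise it.
discovered = [
    {
        "name": "star_company",
        "description": "Star a company on the job board.",
        "inputSchema": {
            "type": "object",
            "properties": {"company_id": {"type": "string"}},
            "required": ["company_id"],
        },
    }
]

schemas = mcp_tools_to_function_schemas(discovered)
```

MCP supplies `discovered` at runtime; function calling consumes `schemas`. That's the whole relationship.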

MCP vs A2A (Agent-to-Agent Protocol)

Google released their Agent-to-Agent (A2A) protocol in early 2025, and I keep seeing people ask whether it competes with MCP. Short answer: no. They solve different problems.

MCP connects an agent to tools and data. A2A connects an agent to another agent. MCP is about giving one agent hands to interact with software. A2A is about letting multiple agents collaborate with each other on a task. You'd likely use both in a mature agentic system, with MCP for tool access and A2A for multi-agent coordination.

The Three Primitives in Practice

I mentioned Tools, Resources, and Prompts above, but it's worth seeing how they play out in a real MCP server.

Remoet has 38 MCP tools. When your agent connects, it receives all 38 tool definitions with their descriptions and parameter schemas. The agent doesn't need documentation. It reads the tool descriptions and figures out the right calls.

Ask "find companies using Go and Kubernetes" and the agent picks the search tool. Ask "show my applications" and it picks the applications tool. Ask "update my summary" and it picks the profile update tool. All from the same conversational interface. No menus, no navigation, no context switching.

This is fundamentally different from a traditional API integration. With an API, a developer writes code to call specific endpoints with specific parameters. With MCP, you describe your intent in natural language and the AI handles the rest.

Security: OAuth 2.1, PKCE, and Prompt Injection

I won't sugarcoat this: connecting AI agents to live services introduces real security considerations.

On the authentication side, MCP supports OAuth 2.1 with PKCE (Proof Key for Code Exchange). This is the same security standard used by major web applications. Each connection requires explicit user authorization, and you can revoke access at any time. Remoet's MCP implementation enforces PKCE on every authorization, with session limits and LRU eviction to prevent resource exhaustion.

The trickier risk is prompt injection. If an MCP server returns malicious content in tool results, it could theoretically trick the agent into taking unintended actions. Good MCP implementations mitigate this by treating all tool results as untrusted data (Remoet's server description explicitly instructs agents to do this), but it's an active area of research. The MCP spec itself is evolving to add better guardrails here.

If you're evaluating MCP servers, look for ones that use OAuth 2.1 rather than just API keys, enforce PKCE, and document their approach to prompt injection defense.

Who Supports MCP Today

The ecosystem has grown fast. Thousands of MCP servers are now listed across various directories, and the number is climbing weekly.

On the client side:

  • Claude Desktop and Claude Web have native MCP support, including custom connectors via OAuth
  • Claude Code supports MCP servers out of the box via the CLI
  • Cursor has built-in MCP configuration
  • Windsurf supports MCP servers in its settings
  • VS Code now has native MCP support through GitHub Copilot
  • Cline and Continue also support MCP

On the server side, the ecosystem spans every category:

| Category | Examples |
| --- | --- |
| Development | GitHub, Linear, Sentry |
| Productivity | Notion, Slack, Google Drive |
| Data | Supabase, PostgreSQL, various database connectors |
| Search | Brave Search, Context7 (documentation search) |
| Career & Jobs | Remoet (remote job search, profile management, applications) |
| Automation | n8n, Zapier |

What's striking is that MCP servers aren't just developer tools anymore. They're showing up in every vertical: job platforms, finance tools, CRM systems, e-commerce. Any software with an API can become an MCP server, and increasingly, they are.

Why This Matters For You

If you're a developer, MCP means you can automate a huge chunk of your workflow through conversation. Instead of context-switching between 15 browser tabs, you tell your agent what you need and it handles the tool-hopping.

If you're job hunting, this is where it gets genuinely interesting. Traditional job boards make you do all the work: search, filter, scroll, click into each listing, apply one by one. An MCP-connected job platform flips that. You tell your agent "find companies using my tech stack that are actively hiring" and it does the searching, filtering, and shortlisting for you.

That's exactly what Remoet does. Connect your AI agent once, and it handles the rest: finding companies that match your stack, tracking applications, messaging hiring teams. Your agent becomes your career assistant.

But beyond any single platform, the bigger shift is this: software is becoming conversational. Instead of learning each app's UI, you describe what you want and your agent navigates the tools for you. MCP is what makes that possible at scale.

Getting Started

If you want to try MCP yourself, the setup is simpler than you'd expect. Most MCP-compatible AI apps let you add servers through a configuration file or settings page. You typically need an API key or OAuth authorization from the service you want to connect, and then you're up and running.
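For clients that use a JSON config file (Claude Desktop's `claude_desktop_config.json` is the common example), an entry looks roughly like this. The server name, package, and env var below are placeholders, so check each server's own install docs:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "your-key-here"
      }
    }
  }
}
```

The client launches the `command` as a subprocess and speaks MCP to it over stdio; restart the client after editing the file.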

We've got a detailed setup guide covering Claude Desktop, Claude Code, Cursor, and Windsurf in our post on how to set up MCP servers.

For a complete walkthrough of what AI-powered job search looks like in practice, check out our agent job search guide.

Frequently Asked Questions

Is MCP only for Claude?

No. MCP is an open protocol governed by the Linux Foundation. While Anthropic created it, it's been adopted by Claude, Cursor, Windsurf, VS Code, Cline, Continue, and others. Any AI client can implement MCP support. OpenAI added MCP support to their agents SDK in early 2025.

Do I need to be a developer to use MCP?

Not necessarily. Some setups require editing a JSON config file, which is a bit technical. But Claude Web supports custom MCP connectors through a point-and-click OAuth flow that requires zero coding. Desktop Extensions are also making one-click installs possible. The trend is clearly toward making MCP accessible to everyone.

Is MCP secure?

MCP supports OAuth 2.1 with PKCE for authentication, which is the same security standard used by major web applications. Each connection requires explicit authorization. You control which services your agent can access and can revoke access at any time. The main emerging concern is prompt injection through tool results, which is an active area of research in the MCP community.

What's the difference between MCP and a browser extension or plugin?

Browser extensions and plugins are specific to one application. A Chrome extension only works in Chrome. A ChatGPT plugin only worked in ChatGPT (and OpenAI deprecated them). MCP is standardized across all compatible AI clients. Build one MCP server and it works with Claude, Cursor, Windsurf, VS Code, and any future client that supports the protocol. Build once, work everywhere.

How many MCP servers can I connect at once?

There's no hard protocol limit. You can connect as many MCP servers as your AI client supports. Most people connect 3 to 10 servers depending on their workflow, covering things like code management, documentation, search, and whatever domain-specific tools they need.
