This article was originally published on do-nothing.ai. The canonical version is there.
Model Context Protocol (MCP): The Tool Ecosystem for AI Agents
This guide covers MCP from the ground up: what it is, why it exists, how to use the best available servers, and how to build your own. It is written for developers and builders integrating AI agents into real workflows.
What MCP Is and Why It Matters
Model Context Protocol (MCP) is an open standard that defines how AI applications give language models access to external tools, data, and context. Anthropic created the protocol and open-sourced it in November 2024.
Before MCP, every AI application had to build its own integration layer: custom code to let the model call APIs, read files, query databases, or interact with services. Each integration was proprietary. If you switched models or frameworks, you rebuilt everything.
MCP solves this with a single interface. An MCP server exposes capabilities. An MCP client (your AI application) connects to servers and makes their capabilities available to the model. One server, any compatible client. One client, any compatible server.
Think of it as a USB-C standard for AI tooling: you write the integration once, and it works anywhere that speaks the protocol.
As of 2026, MCP has become the de facto standard for tool use in serious AI agent deployments. Major clients include Claude Desktop, Claude Code, Cursor, and a growing number of custom agent frameworks.
How MCP Works
Every MCP server can expose three types of capabilities:
Tools — Functions the model can call. These are the action primitives of agent work. Examples: search_web, query_database, send_email, read_file, create_issue. The model decides when to call a tool and what arguments to pass; the server executes it and returns results.
Resources — Data the model can read. Resources are structured context the model can pull in: file contents, database records, API responses. Unlike tools (which take actions), resources are read-only references.
Prompts — Reusable prompt templates the model can invoke. Less commonly used, but useful for standardizing how certain tasks are approached across sessions.
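Under the hood, all three capability types travel as JSON-RPC 2.0 messages. As an illustrative sketch, here is roughly what a server's response to a `tools/list` request looks like on the wire — the field names (`name`, `description`, `inputSchema`) follow the MCP spec, but the `search_web` tool itself is an invented example:

```python
import json

# Illustrative JSON-RPC 2.0 response to a tools/list request.
# The "search_web" tool is a made-up example, not a real server's schema.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_web",
                "description": "Search the web and return the top results",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Search terms"}
                    },
                    "required": ["query"],
                },
            }
        ]
    },
}

# Serialize as it would appear on the wire: one JSON object per message.
wire = json.dumps(tools_list_response)
```

The client hands this tool list to the model; when the model decides to call a tool, the client sends a matching `tools/call` request back to the server.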
The Connection Architecture
An MCP server runs as a local process or remote service. Your AI client connects to it via one of two transports:
- stdio — The server runs as a local subprocess. The client communicates through standard input/output. Simple, reliable, zero network overhead. Best for local tools.
- HTTP (Streamable HTTP, formerly SSE) — The server runs as an HTTP service and the client connects over the network. Required for remote servers or multi-user deployments. Recent protocol revisions replaced the original SSE transport with Streamable HTTP; many clients still support both during the transition.
For Claude Desktop, servers are configured in claude_desktop_config.json. For Claude Code (the CLI), use the `claude mcp add` command or a project-level `.mcp.json` file. For custom agent frameworks, you initialize MCP clients programmatically.
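To make the stdio transport concrete, here is a minimal sketch of how a client frames messages to a local server: one UTF-8, newline-terminated JSON-RPC object per message, written to the subprocess's stdin. The `initialize` request is the first message a client sends; the protocol version string and client name below are assumptions, so check your SDK for the current revision:

```python
import io
import json

def frame_message(payload: dict) -> bytes:
    # stdio transport: one JSON-RPC message per line, UTF-8, newline-terminated.
    return (json.dumps(payload) + "\n").encode("utf-8")

# An illustrative initialize request (the first message a client sends).
init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # assumed spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# A real client writes to the subprocess's stdin; an in-memory buffer
# stands in here to show the framing.
stdin = io.BytesIO()
stdin.write(frame_message(init_request))
line = stdin.getvalue().decode("utf-8")
```

In practice the SDKs handle this framing for you; the point is that stdio is nothing more exotic than line-delimited JSON over pipes.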
Essential MCP Servers
Developer Workflow
| Server | What It Does | Best For |
|---|---|---|
| `@modelcontextprotocol/server-filesystem` | Read, write, and list files in specified directories | Code editing, file management, local data |
| `@modelcontextprotocol/server-github` | Create issues, manage PRs, read repos, search code | Engineering workflows, code review |
| `@modelcontextprotocol/server-git` | Git operations: log, diff, commit history | Local repo inspection |
Data and Search
| Server | What It Does | Best For |
|---|---|---|
| `@modelcontextprotocol/server-postgres` | Query Postgres databases with read access | Data analysis, debugging production data |
| `mcp-server-sqlite` | Read/write SQLite databases | Local data stores, prototyping |
| `@modelcontextprotocol/server-brave-search` | Web search via the Brave API | Research agents, current events |
| `@modelcontextprotocol/server-fetch` | Fetch any URL and return content | Web scraping, reading documentation |
Productivity and Communication
| Server | What It Does | Best For |
|---|---|---|
| `@modelcontextprotocol/server-slack` | Read channels, post messages, search workspace | Ops automation, notifications |
| `@modelcontextprotocol/server-google-maps` | Geocoding, directions, place search | Location-aware applications |
| `mcp-server-puppeteer` | Browser automation via Puppeteer | Web scraping, UI testing, form fills |
Memory and Knowledge
| Server | What It Does | Best For |
|---|---|---|
| `@modelcontextprotocol/server-memory` | Persistent key-value memory across sessions | Long-running agents, stateful workflows |
| `mcp-server-qdrant` | Vector search over a Qdrant database | Semantic search, RAG pipelines |
Quick Setup: Adding MCP Servers to Claude Desktop
Create or edit ~/Library/Application Support/Claude/claude_desktop_config.json on macOS:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```
Restart Claude Desktop after editing. Each configured server appears in the tools panel. Claude can now call these tools during a conversation.
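If you maintain this config across several machines, scripting the edit beats hand-editing JSON. A small sketch — `add_mcp_server` is a helper written for this example, and the demo writes to a temporary file rather than the real Claude Desktop config:

```python
import json
import tempfile
from pathlib import Path

def add_mcp_server(config_path: Path, name: str, command: str, args: list[str]) -> dict:
    """Insert or update one entry under "mcpServers", preserving the rest."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})[name] = {"command": command, "args": args}
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a temp file; point config_path at the real file in practice.
tmp = Path(tempfile.mkdtemp()) / "claude_desktop_config.json"
updated = add_mcp_server(
    tmp, "fetch", "npx", ["-y", "@modelcontextprotocol/server-fetch"]
)
```

Because the helper round-trips the existing JSON, it leaves any servers you configured by hand untouched.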
When to Use MCP vs. Native Tool Calling
MCP and native tool calling (defining tools inline in the API request) serve different use cases.
Use MCP when:
- You want to reuse tools across multiple agents or sessions without rewriting integration code
- The tool is complex enough that it deserves its own server (auth, state management, multiple functions)
- You're using Claude Desktop or Claude Code and want tools available in the UI
- You're building for a team and want a shared tool library
Use native tool calling when:
- You're building a focused application with 1-3 simple tools
- You need maximum control over the tool interface
- You're using an API client and MCP adds unnecessary overhead
- You're calling a one-off API that doesn't warrant a server
For production agent systems with multiple tools, MCP wins on maintainability. For simple scripts and one-off integrations, inline tool definitions are faster to ship.
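For contrast, a native (inline) tool definition is just a schema passed directly in the API request — no server, no protocol. A sketch of a toy tool in the shape the Anthropic Messages API expects (note the API spells it `input_schema`, where MCP uses `inputSchema`):

```python
# A single inline tool definition, as passed in the `tools` parameter of an
# Anthropic Messages API request. The weather tool is a toy example.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city. Returns temperature in Celsius.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"}
        },
        "required": ["city"],
    },
}

# In a real call: client.messages.create(..., tools=[get_weather_tool])
```

Everything lives in one request, which is exactly why this approach is faster to ship for one-off integrations and harder to maintain across many agents.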
Building Your Own MCP Server
Building a custom MCP server is the right move when you need to give agents access to internal systems: your company's database, your internal APIs, proprietary data sources, or custom business logic.
Python (Recommended)
```shell
pip install mcp
```

```python
import asyncio

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("my-internal-tools")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_customer",
            description="Retrieve customer details by ID from the internal CRM",
            inputSchema={
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string", "description": "The customer's unique ID"}
                },
                "required": ["customer_id"],
            },
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_customer":
        # Call your internal CRM API here
        customer = fetch_customer_from_crm(arguments["customer_id"])
        return [types.TextContent(type="text", text=str(customer))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # Serve over stdio so any MCP client can launch this as a subprocess
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
TypeScript / Node.js
```shell
npm install @modelcontextprotocol/sdk
```

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'my-internal-tools', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'get_customer',
    description: 'Retrieve customer details by ID',
    inputSchema: {
      type: 'object',
      properties: { customer_id: { type: 'string' } },
      required: ['customer_id']
    }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'get_customer') {
    // Call your internal CRM API here
    const customer = await fetchCustomerFromCrm(request.params.arguments?.customer_id);
    return { content: [{ type: 'text', text: JSON.stringify(customer) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
What Makes a Good MCP Tool Definition
The model relies on your tool descriptions to decide when and how to use each tool. Write descriptions as if explaining the tool to a capable developer who has never seen your system.
- Name: Use a verb-noun format that's unambiguous (`get_customer`, `create_ticket`, `search_orders`)
- Description: Explain what the tool does, what it returns, and when to use it vs. other available tools
- Parameter descriptions: Be explicit about expected format, valid values, and what happens with edge cases
- Error messages: Return clear, actionable error text when calls fail
Security Considerations
MCP servers run with the permissions you give them. Before deploying:
- Filesystem servers: Restrict allowed paths to only what agents need. Do not allow write access to paths containing credentials or config files.
- Database servers: Use read-only database users unless the agent explicitly needs to write. Log all queries.
- API servers: Scope API tokens to minimum required permissions. Rotate tokens regularly.
- Remote servers: Use TLS for network transports (SSE/Streamable HTTP). Authenticate client connections.
- Sensitive data: Do not build tools that return raw credentials, private keys, or PII unless the agent specifically needs them and the session is secure.
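For filesystem servers, the path restriction above comes down to a containment check on the resolved path. A minimal sketch, with an example sandbox directory:

```python
from pathlib import Path

def is_allowed(requested: str, allowed_root: Path) -> bool:
    """True only if the resolved path stays inside the allowed root.

    resolve() follows symlinks and collapses "..", so both symlink
    escapes and plain traversal attacks are rejected.
    """
    try:
        Path(requested).resolve().relative_to(allowed_root.resolve())
        return True
    except ValueError:
        return False

root = Path("/tmp/agent-sandbox")  # example allowed directory
```

Naive string-prefix checks fail here; resolving first is what closes the `../` and symlink holes.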
Related Guides
- Getting Started with AI Agents — Core concepts: what agents are and how they use tools
- How to Delegate Tasks to AI Agents — How to structure instructions for agents using tools like these
- Context Engineering for Coding Agents — How to manage what information agents work with
- Infrastructure Stack for a Solo AI Business — Where MCP fits in a production AI stack