Photo by Pavel Danilyuk: https://www.pexels.com/photo/a-robot-holding-a-wine-8439094/
The Model Context Protocol (MCP) has been described as “the USB-C for AI”. It’s a fitting analogy, but what does it really mean? What is MCP?
Large language models (LLMs) are incredibly capable, but they only know what they know. Once trained, an LLM can't access real-time information or specialized systems unless it's connected to external tools.
MCP provides a communication protocol that lets models like GPT or Claude interact with any compatible tool or service. Instead of relying on proprietary APIs or one-off integrations, MCP introduces a shared language for interaction between AIs (as clients) and software (as servers).
How the MCP Works
At its core, MCP is a simple client–server model. The large language model acts as the client, while a server provides one or more tools the AI can use. Communication between the two happens through JSON-RPC.
During initialization, the client and server negotiate a protocol version and exchange capabilities.
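The handshake starts with an initialize request from the client. The shape below follows the MCP specification; the protocol version string, capabilities, and client name are illustrative values:
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {
      "name": "example-client",
      "version": "1.0.0"
    }
  }
}
The server replies with its own capabilities, and the client acknowledges with a notifications/initialized notification. The client can then ask what the server offers by sending a tools/list request: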
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {
    "cursor": "optional-cursor-value"
  }
}
And the server responds with a manifest of available tools:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather information for a location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name or zip code"
            }
          },
          "required": ["location"]
        }
      }
    ],
    "nextCursor": "next-page-cursor"
  }
}
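The nextCursor field is how MCP paginates long tool lists: if it's present, the client can issue another tools/list request, passing it back as the cursor to fetch the next page. A sketch, with placeholder id and cursor values:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/list",
  "params": {
    "cursor": "next-page-cursor"
  }
}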
The AI now knows which tools are available and can pick the right one for a user request. In our example, it would call the get_weather tool:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "location": "Lisbon"
    }
  }
}
And the MCP server responds with structured output, in this case the current weather in Lisbon:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Current weather in Lisbon:\nTemperature: 32°C\nConditions: Partly cloudy"
      }
    ],
    "isError": false
  }
}
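Tool failures travel the same way. Per the MCP specification, errors raised while executing a tool are reported inside the result with isError set to true, rather than as protocol-level JSON-RPC errors, so the model can read the failure and decide what to do next. A hypothetical example for an unknown location:
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Failed to fetch weather: location not found"
      }
    ],
    "isError": true
  }
}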
Local vs Remote MCP Servers
The easiest way to run an MCP server is on the same host as the client. For example, if I'm using OpenAI Codex or Claude Desktop, the client can spawn an MCP server locally as a subprocess and communicate with it over standard input and output (stdio).
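As a concrete example, registering a local stdio server with Claude Desktop means adding an entry like this to its claude_desktop_config.json; the client launches the command and pipes JSON-RPC through stdin and stdout. The weather_server.py script here is a placeholder for your own server:
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    }
  }
}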
For more complex setups, MCP also supports communication over HTTP and provides mechanisms for authentication and authorization. These servers can require credentials, API keys, or tokens, depending on how sensitive their capabilities are.
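Over HTTP, each JSON-RPC message simply becomes the body of a POST to the server's MCP endpoint, with credentials carried in standard headers. A rough sketch, with a placeholder host and token:
POST /mcp HTTP/1.1
Host: weather.example.com
Accept: application/json, text/event-stream
Authorization: Bearer <api-token>
Content-Type: application/json

{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}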
The State of the Standard
MCP is still an emerging standard. It was introduced by Anthropic in late 2024 as an open specification and is now developed collaboratively by several players in the AI ecosystem.
The specification is published at modelcontextprotocol.io, and the work happens in the open with input from AI companies, open-source developers, and infrastructure providers.
Conclusion
MCP represents a quiet but fundamental shift in how AI systems interact with the world.
It offers a shared, open standard: a common language that any model and any tool can use to talk to each other.
For developers, this means fewer one-off connectors and more reusable, interoperable systems. For users, it means AI assistants that can reach beyond their training data and tap into live information, files, or applications with precision and context.
Thanks for reading, and happy building!

