From Buzzword to Builder: Why AI Agents Need a Common Language
Another week, another flood of "AI" articles. We've seen the hype cycle: from ChatGPT demos to "AI will replace developers" hot takes. But amidst the noise, a fascinating pattern emerges in the trending topics—practical implementations. The top article, "I Created An Enterprise MCP Gateway," points to a crucial shift: we're moving from talking about AI to building interconnected, capable systems. The key enabler? Protocols like the Model Context Protocol (MCP).
Think of it this way: a single LLM is a brilliant but isolated consultant. It knows a lot but can't act on its own. An AI agent powered by MCP is like giving that consultant a direct phone line to every tool, database, and API in your company. Suddenly, it can do things.
In this guide, we'll move past abstract concepts. You'll learn what MCP is at a technical level and, more importantly, build a functional, multi-tool AI agent from scratch using Python. This isn't just theory; it's a blueprint for automation.
Demystifying the Model Context Protocol (MCP)
At its core, the Model Context Protocol (MCP) is a standardized communication layer between a Large Language Model (LLM) and external resources (tools, data sources, etc.). It's an open protocol, spearheaded by Anthropic, designed to solve the "tool integration" problem in a vendor-agnostic way.
Before MCP, if you wanted your LLM to fetch data from PostgreSQL, search the web, and send a Slack message, you'd have to:
- Write custom functions for each action.
- Painstakingly describe these functions to the LLM using unstructured text or specific frameworks (like OpenAI's function calling).
- Handle authentication, error parsing, and state management ad-hoc.
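To see why this hurt, here is roughly what the middle step looks like with OpenAI-style function calling: every schema is hand-written and lives inside the application. This is a sketch, and `query_database` is a placeholder name, not a real library function:

```python
# Ad-hoc tool wiring: every schema is hand-written and app-specific.
# "query_database" is a placeholder for illustration only.
def describe_tools_for_llm() -> list[dict]:
    return [
        {
            "type": "function",
            "function": {
                "name": "query_database",
                "description": "Run a read-only SQL query against the app database.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "sql": {"type": "string", "description": "SQL SELECT statement."}
                    },
                    "required": ["sql"],
                },
            },
        }
    ]

print(describe_tools_for_llm()[0]["function"]["name"])  # query_database
```

Multiply this by every tool, every app, and every LLM vendor, and the maintenance cost becomes obvious.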
MCP introduces a clean, server-client architecture:
- MCP Server: Hosts one or more Tools or Resources. A Tool is a callable function (e.g., query_database). A Resource is readable data (e.g., company_metrics.json). The server advertises these capabilities via a standardized schema.
- MCP Client: The LLM application (like Claude Desktop or our custom agent). It discovers the server's capabilities and invokes them via JSON-RPC over a STDIO or SSE transport.
This separation is powerful. You can write a single PostgreSQL MCP server, and any MCP-compliant client can instantly use it, without needing to know the intricacies of psycopg2.
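Under the hood, the client and server exchange plain JSON-RPC 2.0 messages. A tool invocation looks roughly like this (the field values are illustrative; consult the MCP specification for the exact schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_current_weather",
    "arguments": { "city": "Tokyo", "country_code": "JP" }
  }
}
```

The server replies with a result containing content blocks (text, images, etc.), which the client feeds back to the LLM.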
Building Our Agent: The Weather & News Analyst
Let's build a practical agent. Our goal: an AI that can fetch the current weather for a city and retrieve the latest news headlines, then provide a combined analysis.
We'll need two MCP Servers and one Client.
Prerequisites
# Create a virtual environment
python -m venv mcp-agent-env
source mcp-agent-env/bin/activate # On Windows: .\mcp-agent-env\Scripts\activate
# Install core libraries
pip install mcp requests python-dotenv openai
Step 1: Building a Weather MCP Server
We'll use the Open-Meteo API (free, no key required).
Create server_weather.py:
# server_weather.py
import asyncio

import requests
from pydantic import BaseModel

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

# Define the structured input for our Tool
class WeatherParams(BaseModel):
    city: str
    country_code: str = "US"

# Create the server instance
app = Server("weather-server")

# Advertise the server's Tools
@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_current_weather",
            description="Get the current temperature and conditions for a city.",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city name."},
                    "country_code": {"type": "string", "description": "The ISO country code."}
                },
                "required": ["city"]
            }
        )
    ]

# Implement the Tool's logic
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_current_weather":
        params = WeatherParams(**arguments)
        # Simple geocoding (for demo). In production, use a proper service.
        geo_url = f"https://geocoding-api.open-meteo.com/v1/search?name={params.city}&count=1"
        geo_resp = requests.get(geo_url, timeout=10).json()
        if not geo_resp.get("results"):
            return [types.TextContent(type="text", text=f"City '{params.city}' not found.")]
        location = geo_resp["results"][0]
        lat, lon = location["latitude"], location["longitude"]
        # Fetch weather
        weather_url = (
            "https://api.open-meteo.com/v1/forecast"
            f"?latitude={lat}&longitude={lon}&current_weather=true"
        )
        weather_resp = requests.get(weather_url, timeout=10).json()
        current = weather_resp["current_weather"]
        result_text = f"""Weather in {location['name']}, {location['country_code']}:
Temperature: {current['temperature']}°C
Wind Speed: {current['windspeed']} km/h
Condition Code: {current['weathercode']}
Time: {current['time']}
"""
        return [types.TextContent(type="text", text=result_text)]
    raise ValueError(f"Unknown tool: {name}")

if __name__ == "__main__":
    # Run the server over the standard MCP stdio transport
    async def main():
        async with stdio_server() as (read_stream, write_stream):
            await app.run(read_stream, write_stream, app.create_initialization_options())
    asyncio.run(main())
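Before wiring the server into an agent, you can sanity-check the Pydantic input model on its own; it validates the arguments dict a client will eventually send. A standalone snippet, no MCP required:

```python
from pydantic import BaseModel, ValidationError

class WeatherParams(BaseModel):
    city: str
    country_code: str = "US"  # default kicks in when the LLM omits it

# Valid arguments, as the client would pass them
params = WeatherParams(**{"city": "Tokyo"})
print(params.city, params.country_code)  # Tokyo US

# A missing required field fails loudly instead of silently
try:
    WeatherParams(**{"country_code": "JP"})
except ValidationError:
    print("rejected: 'city' is required")
```

This is exactly the failure mode you want at the tool boundary: malformed LLM output is caught before any API call is made.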
Step 2: Building a News MCP Server
Create server_news.py. We'll use NewsAPI (get a free key at newsapi.org).
# server_news.py
import asyncio
import os

import requests
from dotenv import load_dotenv

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

load_dotenv()

app = Server("news-server")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_top_headlines",
            description="Fetch top news headlines for a country/category.",
            inputSchema={
                "type": "object",
                "properties": {
                    "country": {"type": "string", "description": "ISO 2-letter country code (e.g., 'us')."},
                    "category": {"type": "string", "description": "Category: business, technology, etc."}
                },
                "required": []
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_top_headlines":
        api_key = os.getenv("NEWS_API_KEY")
        if not api_key:
            return [types.TextContent(type="text", text="ERROR: NEWS_API_KEY not set.")]
        url = "https://newsapi.org/v2/top-headlines"
        params = {
            "apiKey": api_key,
            "pageSize": 5
        }
        # Add optional filters only when the caller supplied them
        if arguments.get("country"):
            params["country"] = arguments["country"]
        if arguments.get("category"):
            params["category"] = arguments["category"]
        resp = requests.get(url, params=params, timeout=10).json()
        if resp["status"] != "ok":
            return [types.TextContent(type="text", text=f"News API Error: {resp.get('message')}")]
        articles = resp["articles"]
        if not articles:
            return [types.TextContent(type="text", text="No headlines found.")]
        headlines = [f"- {art['title']} ({art['source']['name']})" for art in articles]
        result = "Top Headlines:\n" + "\n".join(headlines)
        return [types.TextContent(type="text", text=result)]
    raise ValueError(f"Unknown tool: {name}")

if __name__ == "__main__":
    async def main():
        async with stdio_server() as (read_stream, write_stream):
            await app.run(read_stream, write_stream, app.create_initialization_options())
    asyncio.run(main())
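The optional-filter logic above is a pattern worth isolating: start from the required defaults and add only the keys the caller actually supplied. A standalone sketch (`build_params` is our helper name for this post, not part of any NewsAPI client):

```python
def build_params(api_key: str, arguments: dict) -> dict:
    """Merge required defaults with whichever optional filters were provided."""
    params = {"apiKey": api_key, "pageSize": 5}
    for key in ("country", "category"):
        if arguments.get(key):  # skips missing keys AND empty strings
            params[key] = arguments[key]
    return params

print(build_params("secret", {"country": "us", "category": ""}))
# {'apiKey': 'secret', 'pageSize': 5, 'country': 'us'}
```

Truthiness filtering matters here: an LLM will sometimes emit `""` for a parameter it means to omit, and passing that through would make the upstream API reject the request.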
Step 3: Building the MCP Client Agent
This is where the magic happens. Our client will connect to both servers, let the LLM decide which tools to use, and execute the plan.
Create agent_client.py:
# agent_client.py
import asyncio
import json
import os
import sys
from contextlib import AsyncExitStack

from dotenv import load_dotenv
from openai import OpenAI

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()

class MCPAgent:
    def __init__(self):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.sessions: list[ClientSession] = []
        self.exit_stack = AsyncExitStack()

    async def start_server(self, script_path: str, server_name: str) -> ClientSession:
        """Spawn an MCP server as a subprocess and connect to it over stdio."""
        print(f"[*] Starting {server_name}...")
        server_params = StdioServerParameters(command=sys.executable, args=[script_path])
        read_stream, write_stream = await self.exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        session = await self.exit_stack.enter_async_context(
            ClientSession(read_stream, write_stream)
        )
        # Initialize the connection (the MCP handshake)
        await session.initialize()
        self.sessions.append(session)
        print(f"[+] {server_name} connected.")
        return session

    async def get_all_tools(self):
        """Fetch tools from all connected servers."""
        all_tools = []
        for session in self.sessions:
            result = await session.list_tools()
            all_tools.extend(result.tools)
        return all_tools

    async def run_agent(self, user_query: str):
        print(f"\n[USER] {user_query}")
        # 1. Get available tools and describe them for the LLM
        tools = await self.get_all_tools()
        if not tools:
            print("[-] No tools available.")
            return
        # Format tools for OpenAI's function calling
        openai_tools = [
            {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema,
                },
            }
            for tool in tools
        ]
        # 2. Let the LLM decide the action
        messages = [
            {"role": "system", "content": "You are a helpful assistant with access to tools. Use them to answer the user's query. If you need multiple pieces of information, call tools sequentially."},
            {"role": "user", "content": user_query}
        ]
        response = self.client.chat.completions.create(
            model="gpt-4-turbo-preview",
            messages=messages,
            tools=openai_tools,
            tool_choice="auto"
        )
        message = response.choices[0].message
        messages.append(message)
        # 3. Execute tool calls, if any
        if message.tool_calls:
            print("[AGENT] Action Plan:")
            for tool_call in message.tool_calls:
                func_name = tool_call.function.name
                # The arguments arrive as a JSON string -- parse, never eval()
                func_args = json.loads(tool_call.function.arguments)
                print(f"  -> Calling {func_name} with {func_args}")
                # Find the server that hosts this tool and call it
                for session in self.sessions:
                    listed = await session.list_tools()
                    if func_name not in [t.name for t in listed.tools]:
                        continue
                    result = await session.call_tool(func_name, func_args)
                    text = result.content[0].text
                    # Add the result back into the conversation
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": text
                    })
                    print(f"[TOOL RESULT] {text[:150]}...")
                    break
            # 4. Get the LLM's final answer with the tool results as context
            second_response = self.client.chat.completions.create(
                model="gpt-4-turbo-preview",
                messages=messages
            )
            print(f"\n[FINAL ANSWER]\n{second_response.choices[0].message.content}")
        else:
            print(f"\n[ANSWER]\n{message.content}")

async def main():
    agent = MCPAgent()
    try:
        # Start our servers
        await agent.start_server("server_weather.py", "Weather Server")
        await agent.start_server("server_news.py", "News Server")
        print("\n" + "=" * 50)
        print("MCP Agent Ready. Example queries:")
        print('  - "What\'s the weather in Tokyo and any big tech news?"')
        print('  - "Get headlines in the UK and check London weather."')
        print("=" * 50)
        # Run a demo query
        await agent.run_agent("What's the weather in San Francisco right now, and what are the top technology headlines in the US?")
    finally:
        await agent.exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())
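One refinement worth noting: rather than asking every server whether it hosts a given tool on each call, you can build a routing table once at startup that maps tool names to servers. A minimal sketch, with a plain dataclass standing in for the real MCP Tool objects:

```python
from dataclasses import dataclass

@dataclass
class ToolStub:  # stand-in for mcp.types.Tool in this sketch
    name: str

def build_routing_table(server_tools: dict[str, list[ToolStub]]) -> dict[str, str]:
    """Map each tool name to the server that hosts it; first registration wins."""
    table: dict[str, str] = {}
    for server_name, tools in server_tools.items():
        for tool in tools:
            table.setdefault(tool.name, server_name)
    return table

table = build_routing_table({
    "weather-server": [ToolStub("get_current_weather")],
    "news-server": [ToolStub("get_top_headlines")],
})
print(table["get_top_headlines"])  # news-server
```

With `setdefault`, a name collision between two servers resolves deterministically to the first server registered, instead of depending on iteration order at call time.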
Running Your Agent Ecosystem
1. Set up your environment variables in a .env file:

OPENAI_API_KEY=sk-your-key-here
NEWS_API_KEY=your-newsapi-key-here

2. Run the client (it will spawn the servers):

python agent_client.py
You should see the agent start both servers, then process the query. The LLM (GPT-4) will decide to call get_current_weather and get_top_headlines, the client will execute those calls over the MCP protocol, and the model will then synthesize a final answer.
The Bigger Picture: Why This Matters
What you've built is more than a script. It's a decoupled, scalable AI agent architecture.
- Server Independence: The weather and news teams can update their servers without touching the agent logic.
- Tool Discovery: New servers can be added dynamically. Imagine plugging in a server_calendar.py or server_jira.py.
- Standardization: Any MCP-compliant client (Claude Desktop, your custom app, Cursor) can now use these tools.
This is the future of AI integration—not monolithic prompts, but a universe of specialized tools that LLMs can safely and reliably orchestrate.
Your Next Steps
The MCP ecosystem is growing. You can:
- Explore existing servers: Check out the MCP GitHub repository for pre-built servers (PostgreSQL, Filesystem, Brave Search).
- Build your own server: Wrap an internal API or database. The protocol is simple and powerful.
- Integrate with other clients: Test your servers with Claude Desktop to see them in a production environment.
Stop just reading about AI agents. Start building them. The protocol is here, the tools are emerging, and the barrier to entry is now your curiosity and a few lines of Python.
What will you connect first?