Modern data science projects often involve juggling multiple tools—databases, models, APIs—each speaking a different language. The Model Context Protocol (MCP) simplifies this by exposing these tools as structured, callable endpoints. With MCP, AI agents can interact with SQL databases, graph tools like Neo4j, and ML models through a consistent interface, removing the need for custom integration code and making workflows more modular, scalable, and agent-ready.
Exposing Data Tools and ML Models via MCP
Below is a comprehensive example using Python's fastmcp, Neo4j tools, and a simple ML model via Semantic Kernel:
```python
from mcp.server.fastmcp import FastMCP
import sqlite3
from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function

# Create MCP server
mcp = FastMCP("DataModelServer")

@mcp.tool()
def query_data(sql: str) -> str:
    """Run a SQL query against the local SQLite database."""
    conn = sqlite3.connect("data.db")
    rows = conn.execute(sql).fetchall()
    conn.close()
    return "\n".join(str(r) for r in rows)

# Semantic Kernel plugin standing in for a simple ML model
class ModelPlugin:
    @kernel_function(name="predict_category")
    def predict_category(self, text: str) -> str:
        return "electronics"

kernel = Kernel()
kernel.add_plugin(ModelPlugin(), plugin_name="model")

# Expose the kernel's functions as an MCP server too
sk_server = kernel.as_mcp_server(server_name="model_server")
```
You can also integrate Neo4j and graph tools using Google's MCP Toolbox (the Python API below is illustrative—check the Toolbox documentation for the exact class and method names):
```python
from mcp_toolbox import MCPToolbox  # illustrative import

# Connect to a local Neo4j instance
toolbox = MCPToolbox(database="neo4j://localhost:7687", auth=("user", "pass"))

# Expose read-only and write-capable Cypher tools separately
toolbox.expose_cypher("read-neo4j-cypher", read_only=True)
toolbox.expose_cypher("write-neo4j-cypher", read_only=False)

# Serve the tools over HTTP on port 9000
toolbox.run_on_cloud("localhost", port=9000)
```
Neo4j tools include safe read queries (read-neo4j-cypher) and write queries that prompt for user approval before execution [1].
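The read-only distinction can be enforced with a simple guard before a query ever reaches the database. Here is a minimal sketch—the keyword list and function name are illustrative, not part of the Toolbox API:

```python
import re

# Cypher clauses that mutate the graph; queries containing any of
# these are rejected by the read-only tool (illustrative list).
WRITE_CLAUSES = ("CREATE", "MERGE", "DELETE", "SET", "REMOVE", "DROP")

def is_read_only(cypher: str) -> bool:
    """Return True if the query contains no write clauses."""
    tokens = re.findall(r"[A-Za-z]+", cypher.upper())
    return not any(token in WRITE_CLAUSES for token in tokens)
```

With this in place, `is_read_only("MATCH (m:Movie) RETURN m.title")` passes, while anything containing a `CREATE` or `DELETE` clause is rejected before execution.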
Calling Tools from Agents or Python Clients
Agents can call tools using an MCP client. Here is an example using Semantic Kernel's MCPSsePlugin in Python:
```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.mcp import MCPSsePlugin

async def main():
    # Connect to the MCP server exposed over SSE on port 9000
    async with MCPSsePlugin(name="neo4j", url="http://localhost:9000") as plugin:
        kernel = Kernel()
        kernel.add_service(OpenAIChatCompletion(service_id="openai", ai_model_id="gpt-4"))
        kernel.add_plugin(plugin)
        agent = ChatCompletionAgent(kernel=kernel, name="GraphAgent",
                                    instructions="Answer questions using the Neo4j tools.")
        response = await agent.get_response(messages="Find top movies by revenue")
        print(response.content)

asyncio.run(main())
```
This agent will automatically call read-neo4j-cypher from the MCP server, embedding the generated Cypher query to retrieve data when needed [2].
Leveraging Semantic Kernel with All Tools
You can load existing MCP servers into Semantic Kernel agent workflows:
```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.connectors.mcp import MCPStdioPlugin

async def main():
    # Launch the MCP server from earlier as a subprocess over stdio
    async with MCPStdioPlugin(name="data_server",
                              description="SQL and ML tools",
                              command="python",
                              args=["data_model_server.py"]) as plugin:
        kernel = Kernel()
        kernel.add_service(OpenAIChatCompletion(service_id="openai", ai_model_id="gpt-4"))
        kernel.add_plugin(plugin)
        agent = ChatCompletionAgent(kernel=kernel, name="DataAgent",
                                    instructions="Predict category from text")
        response = await agent.get_response(messages="What category fits 'smart speaker'?")
        print(response.content)

asyncio.run(main())
```
The agent will automatically choose and call the predict_category tool when needed [3].
Behind the Scenes
MCP uses JSON-RPC 2.0 to communicate. Each tool comes with metadata, including its name, description, and input schema [4]. Clients can query list_tools to discover available capabilities. Secure read-only tools (like Cypher or SQL) are scoped with validation and limited access automatically [1].
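Concretely, a tool invocation travels as a JSON-RPC 2.0 request. A sketch of what the wire format looks like for the query_data tool from earlier (the id and SQL text are illustrative):

```python
import json

# A JSON-RPC 2.0 request invoking the query_data tool, as an MCP
# client would send it over stdio or SSE.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_data",
        "arguments": {"sql": "SELECT * FROM products LIMIT 5"},
    },
}

# Serialize for the transport; the server replies with a matching
# "id" and a "result" (or "error") member.
wire = json.dumps(request)
```

The same envelope carries discovery calls: swap `"method": "tools/call"` for `"tools/list"` (no params) and the server returns each tool's name, description, and input schema.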
The ScaleMCP framework allows agents to dynamically sync tools during runtime, improving scalability [5]. Tools and resources should be published with secure oversight using IAM, logging, and policy controls like MCP Guardian or ETDI to prevent misuse [6][7].
My Thoughts
Exposing both data and ML model tools via a unified MCP server greatly simplifies agent development. Agents can issue calls like query_data or predict_category without additional integration code. This modular architecture supports cleaner workflows, better governance, and faster iteration.
Start with limited, read-only tools and secure write-capable ones. Monitor agent usage, validate inputs, and ensure credentials are stored securely. When done carefully, MCP offers a powerful foundation for agentic AI workflows that combine data retrieval, inference, and action.
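For the query_data tool above, "validate inputs" can be as simple as allowing only single SELECT statements before handing the string to SQLite. A minimal sketch—this helper is hypothetical, not part of MCP or fastmcp:

```python
def validate_sql(sql: str) -> str:
    """Allow only a single SELECT statement; raise on anything else."""
    cleaned = sql.strip().rstrip(";")
    if ";" in cleaned:
        raise ValueError("multiple statements are not allowed")
    if not cleaned.upper().startswith("SELECT"):
        raise ValueError("only SELECT statements are allowed")
    return cleaned
```

Calling `validate_sql(sql)` at the top of query_data turns the tool into a read-only endpoint: a `DROP TABLE` or stacked-statement payload fails fast with a clear error instead of reaching the database.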