# 30 Minutes That Changed Everything
Last month, I spent 30 minutes adding an MCP (Model Context Protocol) server to our code review tool, Open Code Review. The result? Cursor, Claude Desktop, and Cline could suddenly call our scanner natively — no copy-pasting code into chat, no "run this command and paste the output back." The AI agent just... uses it.
That 30-minute investment transformed our tool from a "CLI that humans use" into a "tool that AI agents use." And honestly, I think every developer tool should make this leap.
## Before MCP: The Copy-Paste Problem
Our code review CLI works great for humans:
```bash
npx @opencodereview/cli scan ./src --sla L1
```
It outputs a structured report with dependency issues, security findings, and quality metrics. Clean, fast, 3 seconds.
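The report itself is structured JSON; a hypothetical excerpt (the field names and values here are illustrative, not the tool's actual output schema):

```json
{
  "summary": { "filesScanned": 42, "findings": 3 },
  "findings": [
    {
      "rule": "hallucinated-dependency",
      "file": "src/utils.ts",
      "line": 3,
      "message": "Package 'lodash-debounce' does not exist on npm"
    }
  ]
}
```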
But here's the thing: AI coding assistants like Cursor and Claude can't natively call CLI tools. They can suggest you run a command, but then you're in this awkward loop:
- AI: "You should run a code review on your changes"
- You: Copy the suggestion, switch to terminal, run it
- Terminal: Outputs a 200-line report
- You: Copy the output, switch back to AI chat, paste it
- AI: "I see 3 issues. Let me fix the first one..."
- Repeat
This is the "tool use gap" — AI agents are smart enough to use tools, but they can't reach them without human middleware.
## After MCP: AI Agents Call Tools Directly
MCP fixes this by giving AI agents a standardized way to call your tool's functions. After adding MCP support, our tool exposes four tools:
- `scan_directory` — Scan a directory for code quality issues
- `scan_diff` — Scan only changed files (perfect for PR reviews)
- `explain_issue` — Get a detailed explanation of a specific finding
- `heal_code` — Auto-fix common issues (hallucinated imports, deprecated APIs)
Now the flow looks like:
- You edit code in Cursor
- Claude (via MCP) automatically runs `scan_diff` on your changes
- It sees a hallucinated dependency: `import { debounce } from 'lodash-debounce'`
- It calls `heal_code` to fix it → changes to `import { debounce } from 'lodash'`
- Done. No human in the loop.
This isn't hypothetical — it's what happens every day now. The AI agent treats our scanner like any other built-in capability.
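The healing step in the flow above amounts to a source rewrite; a minimal sketch, assuming a hardcoded map of known-bad package names (the function name is hypothetical, and the real tool presumably checks the project's installed dependencies instead):

```typescript
// Minimal sketch of healing a hallucinated import: rewrite a known-bad
// package name to the real one. A real implementation would verify the
// replacement package actually exports the imported symbols.
const KNOWN_HALLUCINATIONS: Record<string, string> = {
  "lodash-debounce": "lodash", // nonexistent package → real package
};

function healHallucinatedImport(source: string): string {
  return source.replace(
    /from\s+(['"])([^'"]+)\1/g,
    (match, quote: string, pkg: string) =>
      pkg in KNOWN_HALLUCINATIONS
        ? `from ${quote}${KNOWN_HALLUCINATIONS[pkg]}${quote}`
        : match
  );
}
```

Called on `import { debounce } from 'lodash-debounce'`, this rewrites the specifier to `'lodash'` and leaves already-correct imports untouched.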
## Why MCP Matters (Not Just for Us)
Before MCP, every AI coding tool had its own plugin system:
- Cursor has `.cursorrules` and custom commands
- Claude Desktop has a different MCP config
- Cline has its own tool registration
- GitHub Copilot has yet another extension API
If you wanted your tool to work with all of them, you'd build four separate integrations. That's not scalable for small teams.
MCP provides a single protocol that all these tools support. Write one MCP server, and your tool works everywhere:
| AI Tool | MCP Support |
|---|---|
| Claude Desktop | ✅ Native |
| Cursor | ✅ Native |
| Cline | ✅ Native |
| Windsurf | ✅ Native |
| GitHub Copilot | ✅ (via github-mcp-server) |
| Zed | ✅ Native |
One integration, six platforms. That's the power of a standard protocol.
## How We Added MCP in 30 Minutes

Here's the actual implementation. We used the official `@modelcontextprotocol/sdk` package:
### 1. Install the SDK
```bash
npm install @modelcontextprotocol/sdk zod
```
### 2. Create the Server
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "open-code-review",
  version: "1.0.0",
});

// Tool 1: Scan a directory
server.tool(
  "scan_directory",
  "Scan a directory for code quality issues including hallucinated dependencies, deprecated APIs, and security problems",
  {
    path: z.string().describe("Absolute path to scan"),
    sla: z.enum(["L1", "L2", "L3"]).optional()
      .describe("Service level: L1=fast, L3=thorough"),
  },
  async ({ path, sla }) => {
    // Call your existing scanning logic
    const results = await scanDirectory(path, { sla: sla || "L1" });
    return {
      content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
    };
  }
);

// Tool 2: Scan git diff (for PR reviews)
server.tool(
  "scan_diff",
  "Scan only changed files in the current git repository",
  { sla: z.enum(["L1", "L2", "L3"]).optional() },
  async ({ sla }) => {
    const results = await scanDiff({ sla: sla || "L1" });
    return {
      content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
    };
  }
);
```
### 3. Start the Server
```typescript
const transport = new StdioServerTransport();
await server.connect(transport);
```
### 4. Configure in AI Tools

For Claude Desktop, add to `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "open-code-review": {
      "command": "npx",
      "args": ["@opencodereview/mcp-server"]
    }
  }
}
```
For Cursor, add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "open-code-review": {
      "command": "npx",
      "args": ["@opencodereview/mcp-server"]
    }
  }
}
```
That's it. The entire MCP server for our tool is ~80 lines of TypeScript. Most of the time was spent writing good tool descriptions and parameter schemas — the actual plumbing is trivial.
## What MCP Gets Right
After building both CLI and MCP interfaces for the same tool, here's what I've learned:
Good descriptions matter more than you think. The AI agent decides which tool to call based on the description text. Write clear, specific descriptions that match how developers actually talk about the problem.
Return structured data, not prose. AI agents parse JSON much better than human-readable text. Return arrays of objects with clear field names.
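Concretely, that means returning an array of flat objects rather than a paragraph; a sketch with illustrative field names (not the tool's actual schema):

```typescript
// Illustrative finding shape: flat fields an agent can filter and act on,
// instead of a prose report it would have to re-parse.
interface Finding {
  rule: string;    // e.g. "hallucinated-dependency"
  file: string;
  line: number;
  message: string;
}

// Serialize findings for the text slot of an MCP tool result.
function findingsToText(findings: Finding[]): string {
  return JSON.stringify(findings, null, 2);
}
```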
Keep tools focused. One tool = one job. Don't create a "do everything" tool with 15 parameters. Create 5 focused tools instead. Agents are better at composing multiple tool calls than picking the right options from a mega-tool.
Handle errors gracefully. If the scan path doesn't exist, return a clear error message — don't throw. The agent needs to understand what went wrong so it can try a different approach.
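Graceful failure can be as simple as a try/catch wrapper that turns a thrown error into a result the agent can read. A sketch: the MCP tool-result shape allows an `isError` flag alongside `content`; the wrapper itself is an assumption, not part of the SDK:

```typescript
// Sketch: run a tool handler, converting thrown errors into a structured
// result so the agent sees *why* the call failed instead of a raw crash.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

async function runSafely(handler: () => Promise<object>): Promise<ToolResult> {
  try {
    const results = await handler();
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: "text", text: `Scan failed: ${message}` }],
      isError: true,
    };
  }
}
```

An agent that receives `Scan failed: path does not exist` can retry with a corrected path; an unhandled exception gives it nothing to work with.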
## The Bigger Picture
MCP is doing for AI tools what REST did for web APIs in the 2000s. Before REST, every service had its own interface. After REST, you could build a client that talked to any service using the same pattern.
We're in the early days of the "AI tool ecosystem." Right now, most developer tools only work through CLIs that humans operate. But the trend is clear: developers are spending more time in AI-assisted environments, and tools that can't be called by AI agents will gradually become invisible.
Adding MCP support isn't just a nice-to-have — it's how your tool stays relevant in an AI-native development workflow.
## Try It Yourself
If you want to see MCP-powered code review in action:
```bash
npx @opencodereview/mcp-server
```
Add it to your Claude Desktop or Cursor config, and watch your AI agent start catching hallucinated dependencies and deprecated APIs automatically.
And if you're building a developer tool, I strongly recommend spending a Friday afternoon adding MCP support. It's 30 minutes of work that opens your tool to every AI coding assistant on the market.
What's your experience with MCP? I'd love to hear how other teams are integrating it. Drop a comment or find me on GitHub at opencodereview.