DEV Community

Ekrem MUTLU
Building MCP Servers: Extend AI with Custom Tools


Artificial intelligence is rapidly transforming how we interact with technology. Large Language Models (LLMs) like Claude and GPT are incredibly powerful, but they often lack access to specific, proprietary data and tools within your organization. This limitation hinders their ability to perform complex, context-aware tasks. Imagine Claude being able to directly query your CRM, update inventory, or access internal documentation – the possibilities are immense!

That's where the Model Context Protocol (MCP) comes in. MCP provides a standardized way for LLMs to interact with external tools and data sources, effectively extending their knowledge and capabilities. In this article, we'll dive deep into MCP, explain how it works, and guide you through building a custom MCP server using TypeScript that allows you to connect Claude or GPT to your internal systems.

What is the Model Context Protocol (MCP)?

Think of MCP as a universal adapter that allows LLMs to communicate with different services. Instead of requiring LLMs to be pre-trained on every possible data source or tool, MCP enables them to dynamically query and interact with these resources at runtime. This dynamic interaction is crucial for providing LLMs with the real-time context they need to make informed decisions and take meaningful actions.

Here's a simplified breakdown of the MCP process:

  1. The LLM (e.g., Claude, GPT) identifies a need for external information or action. It formulates a request in a standardized MCP format.
  2. The LLM sends the MCP request to an MCP server. This server acts as an intermediary.
  3. The MCP server receives the request and interprets it. It then translates the request into a format understood by the target tool or data source (e.g., a database query, an API call).
  4. The MCP server executes the request and retrieves the response.
  5. The MCP server formats the response according to the MCP standard and sends it back to the LLM.
  6. The LLM receives the response and uses it to inform its decision-making process.

The key benefit of MCP is its standardization. By adhering to a well-defined protocol, different LLMs and tools can seamlessly integrate, regardless of their underlying technologies.
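The request/response flow described above can be sketched as a pair of TypeScript interfaces. Note that these shapes mirror this article's simplified example server, not the official MCP specification (which is based on JSON-RPC 2.0); the field names are illustrative:

```typescript
// Illustrative request/response shapes for this article's simplified
// protocol. These mirror the example server built below, not the
// official MCP specification (which is JSON-RPC 2.0 based).
interface McpRequest {
  query: string; // natural-language request formulated by the LLM
}

interface McpResponse {
  result?: string; // payload returned to the LLM on success
  error?: string;  // error description when the request cannot be served
}

// One round trip the server in this article will handle:
const request: McpRequest = { query: 'Tell me about employee Alice.' };
const response: McpResponse = {
  result: 'Alice Smith works in the Engineering department.',
};

console.log(JSON.stringify(request));
console.log(JSON.stringify(response));
```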

Why Build Your Own MCP Server?

You might be wondering, "Why not just use an existing MCP server?" While pre-built solutions exist, building your own offers several advantages:

  • Customization: You have complete control over how your MCP server interacts with your internal systems. You can tailor the logic to perfectly match your specific needs and security requirements.
  • Security: You can implement robust security measures to protect sensitive data and prevent unauthorized access.
  • Cost-effectiveness: Building your own server can be more cost-effective in the long run, especially if you have complex integration requirements.
  • Learning: Building your own MCP server gives you a deep understanding of how the protocol works, allowing you to troubleshoot issues and optimize performance.

Building a Custom MCP Server with TypeScript

Let's walk through building a simple MCP server using TypeScript. This example will demonstrate how to connect Claude (or GPT) to a hypothetical internal tool that retrieves employee information.

Prerequisites:

  • Node.js and npm (or yarn) installed
  • TypeScript installed globally (npm install -g typescript)

1. Project Setup:

Create a new directory for your project and initialize a TypeScript project:

mkdir mcp-server
cd mcp-server
npm init -y
tsc --init

2. Install Dependencies:

We'll use Express.js for our server and body-parser to parse JSON request bodies (since Express 4.16 the built-in express.json() middleware does the same job, but we'll follow the body-parser convention here):

npm install express body-parser @types/express @types/body-parser

3. Create index.ts:

Create a file named index.ts and add the following code:

import express, { Request, Response } from 'express';
import bodyParser from 'body-parser';

const app = express();
const port = 3000;

app.use(bodyParser.json());

// Mock employee data
const employees = [
  { id: 1, name: 'Alice Smith', department: 'Engineering' },
  { id: 2, name: 'Bob Johnson', department: 'Sales' },
];

// MCP Endpoint
app.post('/mcp', (req: Request, res: Response) => {
  const { query } = req.body;

  console.log('Received MCP request:', query);

  // Simulate processing the query and retrieving employee information
  if (query && query.includes('employee') && query.includes('Alice')) {
    const alice = employees.find(e => e.name === 'Alice Smith');
    if (alice) {
      const response = {
        result: `Alice Smith works in the ${alice.department} department.`
      };
      res.json(response);
    } else {
      res.status(404).json({ error: 'Employee not found' });
    }
  } else {
    res.status(400).json({ error: 'Invalid query' });
  }
});

app.listen(port, () => {
  console.log(`MCP server listening at http://localhost:${port}`);
});

4. Compile and Run:

Compile the TypeScript code:

tsc

Run the server:

node index.js

5. Testing the MCP Server:

You can test the server using curl or Postman. Here's an example using curl:

curl -X POST -H "Content-Type: application/json" -d '{"query": "Tell me about employee Alice."}' http://localhost:3000/mcp

This should return a JSON response similar to:

{"result":"Alice Smith works in the Engineering department."}

Explanation:

  • The code sets up a simple Express.js server with a single endpoint /mcp. This is where the LLM will send its requests.
  • The server receives a JSON payload containing a query field. This field represents the LLM's request.
  • The code then parses the query and simulates retrieving employee information. In a real-world scenario, you would replace this with code that interacts with your actual internal systems.
  • Finally, the server formats the response as a JSON object and sends it back to the LLM.

Important Considerations:

  • Security: This is a very basic example and lacks proper authentication and authorization. In a production environment, you must implement robust security measures to protect your data.
  • Error Handling: The error handling in this example is minimal. You should add more comprehensive error handling to gracefully handle unexpected situations.
  • Scalability: For high-volume applications, you may need to consider using a more scalable architecture, such as a message queue or a distributed database.
  • Input Validation: Validate the input query to prevent injection attacks and ensure data integrity.
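As a starting point for the input-validation item above, here is a minimal sketch of a guard for the /mcp endpoint. validateQuery is a hypothetical helper (not part of any MCP library), and the 500-character cap is an arbitrary illustrative limit:

```typescript
// Hypothetical validation helper for the /mcp endpoint. The length cap
// is an illustrative choice; adjust it to your own requirements.
function validateQuery(query: unknown): string {
  if (typeof query !== 'string') {
    throw new Error('query must be a string');
  }
  const trimmed = query.trim();
  if (trimmed.length === 0 || trimmed.length > 500) {
    throw new Error('query must be between 1 and 500 characters');
  }
  return trimmed;
}

// Inside the /mcp handler you would call it before any processing:
// const query = validateQuery(req.body.query); // respond 400 on throw
```

Rejecting malformed input at the boundary keeps the downstream lookup logic simple and reduces the surface for injection-style attacks.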

Integrating with Claude/GPT

While the specifics of integration depend on the LLM and its API, the general process involves:

  1. Prompt Engineering: Craft your prompts to instruct the LLM to use the MCP server when it needs information from your internal tools. For example, you might include instructions like, "If you need information about an employee, use the MCP server at http://localhost:3000/mcp."
  2. API Calls: Use the LLM's API to send the prompt. When the model decides it needs external data, it emits a tool-use request; your application code then forwards that request to your MCP server, since the model itself cannot make HTTP calls.
  3. Response Handling: When the LLM receives the response from the MCP server, it will use that information to answer the user's question or perform the requested action.
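To make the forwarding step concrete, here is a hedged sketch of the client-side call that sends a query to our server, equivalent to the curl command above. buildMcpRequest and callMcpServer are hypothetical helpers, and the global fetch API assumes Node 18+:

```typescript
// Hypothetical helpers for forwarding an LLM-generated query to the MCP
// server from this article. Assumes Node 18+ for the global fetch API.
function buildMcpRequest(query: string) {
  return {
    url: 'http://localhost:3000/mcp',
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  };
}

async function callMcpServer(query: string): Promise<string> {
  const { url, init } = buildMcpRequest(query);
  const res = await fetch(url, init);
  if (!res.ok) {
    throw new Error(`MCP server returned ${res.status}`);
  }
  const data = (await res.json()) as { result: string };
  return data.result; // handed back to the LLM as tool context
}
```

Separating request construction from the network call makes the payload easy to unit-test without a running server.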

Conclusion

Building a custom MCP server is a powerful way to extend the capabilities of LLMs like Claude and GPT, allowing them to access and interact with your internal tools and data sources. This opens up a world of possibilities for automating tasks, improving decision-making, and creating more intelligent applications. While the example provided is a basic starting point, it demonstrates the core concepts and provides a foundation for building more sophisticated MCP servers tailored to your specific needs.

Ready to take your AI integrations to the next level? Check out our pre-built MCP Server solution for a faster and more robust implementation: https://bilgestore.com/product/mcp-server
