Hey developers! Nicolas Dabène here.
Remember that feeling when a complex theory clicks into place and your code just works? That's the moment we're chasing today. After setting up our TypeScript environment in previous discussions, it's time to build something truly tangible: your very first Model Context Protocol (MCP) tool. We're going to empower an AI to interact directly with your machine's file system, starting with a simple yet powerful readFile function. This isn't just theory; it's hands-on code that actually runs.
Imagine telling your AI, "Read me the project_report.md file," and it retrieves the content. That interaction becomes possible thanks to the MCP server we're building. Mastering this first tool will open the door to creating a whole suite of custom functionalities for your AI.
Demystifying MCP Tools: A Quick Overview
Before we dive into the code, let's quickly recap what an MCP tool entails. At its core, an MCP tool is essentially a function you expose to an AI. This exposure requires three critical pieces of metadata that help the AI understand and utilize your tool:
- The tool's name: A unique identifier the AI uses to invoke your tool (e.g., "readFile").
- A clear description: Explains the tool's purpose, guiding the AI on when to use it effectively.
- The parameters: Defines the input data the tool expects to receive to perform its operation.
Think of it like providing your function with a comprehensive instruction manual that the AI can read and understand. Simple, right?
The Building Blocks of an MCP Tool
Every MCP tool we create will adhere to a consistent structure. This skeleton ensures maintainability and clarity, making it easier to scale your toolset. Here’s a typical layout we'll follow:
// 1. Interface for input parameters
interface ToolParams {
// Data the AI sends us
}
// 2. Interface for the tool's response
interface ToolResponse {
success: boolean;
content?: string;
error?: string;
}
// 3. The asynchronous function that contains the tool's core logic
async function myTool(params: ToolParams): Promise<ToolResponse> {
// Your business logic goes here
}
// 4. The tool's formal definition, recognizable by the AI
export const myToolDefinition = {
name: "myTool",
description: "A brief explanation of what my tool achieves",
parameters: {
// Detailed description of expected input parameters
}
};
This four-part schema will serve as our blueprint for constructing robust and AI-friendly tools.
Setting Up Your Project's Foundation
Let's organize our mcp-server project for a clean and scalable architecture. Run these commands to create our essential directories:
mkdir -p src/tools
mkdir -p src/types
The src/tools folder will house our individual MCP tools, while src/types will store our shared TypeScript interface definitions, ensuring type safety and consistency across the project.
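After running these commands, your project layout should look roughly like this (assuming the src/index.ts entry point from our earlier environment setup):

mcp-server/
├── src/
│   ├── index.ts    // Express entry point
│   ├── tools/      // One file per MCP tool
│   └── types/      // Shared TypeScript interfaces
├── package.json
└── tsconfig.json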
Defining Essential TypeScript Interfaces
Our next step is to create the foundational TypeScript interfaces. In src/types/mcp.ts, add the following code:
// src/types/mcp.ts
// Generic type for tool parameters, allowing for flexible inputs
export interface ToolParams {
[key: string]: any;
}
// Standardized structure for a tool's response
export interface ToolResponse {
success: boolean;
content?: string; // Optional: for textual output
error?: string; // Optional: for error messages
metadata?: { // Optional: for additional structured data
[key: string]: any;
};
}
// Interface for the formal definition of a tool, as presented to the AI
export interface ToolDefinition {
name: string;
description: string;
parameters: {
[paramName: string]: {
type: string; // e.g., "string", "number", "boolean"
description: string; // Explains the parameter's role
required: boolean; // Indicates if the parameter is mandatory
};
};
}
// Specific type for the parameters required by our readFile tool
export interface ReadFileParams extends ToolParams {
file_path: string;
encoding?: string; // Optional character encoding; the tool defaults to 'utf-8'
}
These interfaces are invaluable. They provide strong typing, enabling auto-completion and catching potential errors during development, making TypeScript an indispensable ally in this project.
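To make that concrete, here's a tiny illustrative snippet (the variable names are purely hypothetical) showing the kind of mistake the compiler now catches before your code ever runs:

import { ReadFileParams, ToolResponse } from './types/mcp';

// OK: 'file_path' is present and correctly typed
const goodParams: ReadFileParams = { file_path: './notes.md' };

// Compile-time error: property 'file_path' is missing
// const badParams: ReadFileParams = { path: './notes.md' };

// The response shape is enforced as well
const response: ToolResponse = { success: true, content: 'Hello' };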
Building the readFile Tool
Now, for the main event! Let's implement our readFile tool. Create the file src/tools/readFile.ts and populate it with this code:
// src/tools/readFile.ts
import fs from 'fs/promises';
import path from 'path';
import { ReadFileParams, ToolResponse, ToolDefinition } from '../types/mcp';
/**
* Reads the content of a text file from the local file system.
* Includes robust validation and security checks.
* @param params - Parameters containing the file path and optional encoding.
* @returns A promise resolving to a ToolResponse with the file content or an error.
*/
export async function readFile(params: ReadFileParams): Promise<ToolResponse> {
try {
// Step 1: Input Validation
if (!params.file_path) {
return {
success: false,
error: "The 'file_path' parameter is required."
};
}
// Step 2: Security - Resolve Absolute Path
// This critical step prevents directory traversal attacks (e.g., '../../etc/passwd').
const absolutePath = path.resolve(params.file_path);
// Step 3: Verify File Existence
try {
await fs.access(absolutePath);
} catch {
return {
success: false,
error: `File not found at path: '${params.file_path}'`
};
}
// Step 4: Retrieve File Information
const stats = await fs.stat(absolutePath);
// Step 5: Confirm it's a file, not a directory
if (!stats.isFile()) {
return {
success: false,
error: "The specified path points to a directory, not a file."
};
}
// Step 6: Enforce Size Limit (Security & Performance)
// Prevents accidental loading of excessively large files into memory.
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10 MB limit
if (stats.size > MAX_FILE_SIZE) {
return {
success: false,
error: `File size exceeds the maximum allowed (${MAX_FILE_SIZE / (1024 * 1024)} MB).`
};
}
// Step 7: Read File Content with specified encoding (defaulting to UTF-8)
const encoding: BufferEncoding = (params.encoding || 'utf-8') as BufferEncoding;
const content = await fs.readFile(absolutePath, encoding);
// Step 8: Return Success with Content and Useful Metadata
return {
success: true,
content: content.toString(), // Ensure content is a string
metadata: {
path: absolutePath,
size: stats.size,
encoding: encoding,
lastModified: stats.mtime.toISOString()
}
};
} catch (error: any) {
// Step 9: Handle Unexpected Errors Gracefully
return {
success: false,
error: `An unexpected error occurred while reading the file: ${error.message}`
};
}
}
/**
* The formal definition of the 'readFile' tool for the MCP protocol.
* This is what the AI will "see" when it inspects available tools.
*/
export const readFileToolDefinition: ToolDefinition = {
name: "readFile",
description: "Reads the content of a text file from the local file system.",
parameters: {
file_path: {
type: "string",
description: "The absolute or relative path to the file to be read.",
required: true
},
encoding: {
type: "string",
description: "The character encoding to use (e.g., 'utf-8', 'ascii', 'base64'). Defaults to 'utf-8'.",
required: false
}
}
};
Take a moment to appreciate the thought behind each step:
- Validation: We always verify that critical parameters are provided.
- Security: Path resolution protects against malicious attempts to access restricted areas.
- Existence & Type Checks: We ensure the target exists and is a file, not a directory, to prevent unexpected errors.
- Size Limits: A practical defense against inadvertently loading massive files.
- Robust Reading: Handles various encodings for flexibility.
- Enriched Response: Provides not just content, but valuable metadata.
- Error Handling: Catches and reports issues cleanly.
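If you want to sanity-check the tool before wiring it into a server, you can call it directly from a small throwaway script. Here's a hypothetical scratch.ts at the project root (run it with ts-node or tsx; the file name and path are just examples):

// scratch.ts - quick manual check of the readFile tool
import { readFile } from './src/tools/readFile';

async function main() {
  const result = await readFile({ file_path: 'package.json' });
  console.log(result.success ? result.content : result.error);
}

main();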
Centralizing Tools with a Manager
To manage our growing collection of tools, let's create a central manager. Add the following to src/tools/index.ts:
// src/tools/index.ts
import { ToolDefinition, ToolResponse, ToolParams } from '../types/mcp';
import { readFile, readFileToolDefinition } from './readFile'; // Import our first tool
// A registry mapping tool names to their execution functions
export const tools = {
readFile: readFile,
// Add other tools here as you create them
};
// An array containing the formal definitions of all available tools
export const toolDefinitions: ToolDefinition[] = [
readFileToolDefinition,
// Add other tool definitions here
];
/**
* A helper function to dynamically execute a tool by its name.
* @param toolName - The name of the tool to execute.
* @param params - The parameters to pass to the tool.
* @returns A promise resolving to the tool's response.
*/
export async function executeTool(toolName: string, params: ToolParams): Promise<ToolResponse> {
const tool = tools[toolName as keyof typeof tools]; // Type assertion for dynamic access
if (!tool) {
return {
success: false,
error: `Error: Tool '${toolName}' not found.`
};
}
// Execute the tool function (the cast is needed because each tool narrows its own parameter type)
return await tool(params as any);
}
This index.ts file acts as our central hub. As you develop more MCP tools, you'll simply register them here, making them discoverable and executable.
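To see how the manager is meant to be used, here's a hypothetical direct call to executeTool, outside of any HTTP layer:

import { executeTool } from './tools';

(async () => {
  // Dispatches to the registered readFile function
  const result = await executeTool('readFile', { file_path: 'test.txt' });
  console.log(result.success, result.metadata);

  // Unknown tool names fail gracefully instead of throwing
  const missing = await executeTool('nonexistentTool', {});
  console.log(missing.error); // "Error: Tool 'nonexistentTool' not found."
})();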
Integrating with an Express Server
Now, let's modify src/index.ts to expose our MCP tools via HTTP endpoints using Express:
// src/index.ts
import express, { Request, Response } from 'express';
import { toolDefinitions, executeTool } from './tools'; // Import our tool manager
const app = express();
const PORT = 3000;
// Middleware to parse JSON request bodies
app.use(express.json());
// Basic health check route
app.get('/', (req: Request, res: Response) => {
res.json({
message: 'MCP Server is up and running!',
version: '1.0.0'
});
});
// Endpoint for AI to discover available tools (the "tool menu")
app.get('/tools', (req: Request, res: Response) => {
res.json({
success: true,
tools: toolDefinitions
});
});
// Endpoint for AI to execute a specific tool
app.post('/tools/:toolName', async (req: Request, res: Response) => {
const { toolName } = req.params;
const params = req.body; // Parameters sent by the AI
try {
const result = await executeTool(toolName, params);
res.json(result); // Send the tool's response back
} catch (error: any) {
// Catch any unexpected server-side errors during tool execution
res.status(500).json({
success: false,
error: `Server-side error during tool execution: ${error.message}`
});
}
});
// Start the server
app.listen(PORT, () => {
console.log(`✅ MCP Server launched on http://localhost:${PORT}`);
console.log(`📋 Discover tools: http://localhost:${PORT}/tools`);
});
Our Express server now exposes two critical endpoints:
- GET /tools: Provides a list of all available MCP tools and their definitions. This is how an AI learns what it can do.
- POST /tools/:toolName: Allows an AI to invoke a specific tool, passing the necessary parameters in the request body.
Time for the Moment of Truth: Testing Our Tool!
Let's put our readFile tool to the test. First, create a simple test file in your project's root:
echo "This is a test file for the MCP server. Hello, AI!" > test.txt
Now, launch your MCP server:
npm run dev
You should see output similar to:
✅ MCP Server launched on http://localhost:3000
📋 Discover tools: http://localhost:3000/tools
Test 1: Discover Available Tools
Open a new terminal and query your server's /tools endpoint:
curl http://localhost:3000/tools
Expected response:
{
"success": true,
"tools": [
{
"name": "readFile",
"description": "Reads the content of a text file from the local file system.",
"parameters": {
"file_path": {
"type": "string",
"description": "The absolute or relative path to the file to be read.",
"required": true
},
"encoding": {
"type": "string",
"description": "The character encoding to use (e.g., 'utf-8', 'ascii', 'base64'). Defaults to 'utf-8'.",
"required": false
}
}
}
]
}
Fantastic! Your AI can now discover the readFile tool and understand its capabilities.
Test 2: Execute the readFile Tool
Let's use our readFile tool to retrieve the content of test.txt:
curl -X POST http://localhost:3000/tools/readFile \
-H "Content-Type: application/json" \
-d '{"file_path": "test.txt"}'
Expected response (paths and dates will vary):
{
"success": true,
"content": "This is a test file for the MCP server. Hello, AI!\n",
"metadata": {
"path": "/absolute/path/to/your/project/test.txt",
"size": 47,
"encoding": "utf-8",
"lastModified": "2023-10-27T14:30:00.000Z"
}
}
It's alive! Your MCP server successfully read the file.
Test 3: Observing Error Handling
Now, let's test with a file that doesn't exist:
curl -X POST http://localhost:3000/tools/readFile \
-H "Content-Type: application/json" \
-d '{"file_path": "nonexistent_file.txt"}'
Response:
{
"success": false,
"error": "File not found at path: 'nonexistent_file.txt'"
}
Excellent! Our error handling is working as expected.
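You can push the error handling a bit further. Pointing the tool at a directory instead of a file, for example, should trigger the type check from Step 5:

curl -X POST http://localhost:3000/tools/readFile \
  -H "Content-Type: application/json" \
  -d '{"file_path": "src"}'

Response:

{
  "success": false,
  "error": "The specified path points to a directory, not a file."
}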
Expanding Your Toolset: The listFiles Tool
Now that you're comfortable creating an MCP tool, let's quickly build another one: listFiles. This tool will allow the AI to inspect directory contents.
Create src/tools/listFiles.ts:
// src/tools/listFiles.ts
import fs from 'fs/promises';
import path from 'path';
import { ToolParams, ToolResponse, ToolDefinition } from '../types/mcp';
// Specific type for listFiles parameters
export interface ListFilesParams extends ToolParams {
directory_path: string;
}
/**
* Lists files and directories within a specified path.
* @param params - Parameters containing the directory path.
* @returns A promise resolving to a ToolResponse with directory contents or an error.
*/
export async function listFiles(params: ListFilesParams): Promise<ToolResponse> {
try {
if (!params.directory_path) {
return {
success: false,
error: "The 'directory_path' parameter is required."
};
}
const absolutePath = path.resolve(params.directory_path);
// Verify it's a directory
let stats;
try {
stats = await fs.stat(absolutePath);
} catch (e: any) {
if (e.code === 'ENOENT') {
return { success: false, error: `Directory not found at path: '${params.directory_path}'` };
}
throw e; // Re-throw other errors
}
if (!stats.isDirectory()) {
return {
success: false,
error: "The specified path is not a directory."
};
}
// Read directory content
const files = await fs.readdir(absolutePath);
// Get details for each item
const filesWithDetails = await Promise.all(
files.map(async (file) => {
const itemPath = path.join(absolutePath, file);
const itemStats = await fs.stat(itemPath);
return {
name: file,
type: itemStats.isDirectory() ? 'directory' : 'file',
size: itemStats.size,
lastModified: itemStats.mtime.toISOString()
};
})
);
return {
success: true,
content: JSON.stringify(filesWithDetails, null, 2), // Pretty-print JSON
metadata: {
path: absolutePath,
count: filesWithDetails.length
}
};
} catch (error: any) {
return {
success: false,
error: `Error listing directory contents: ${error.message}`
};
}
}
/**
* The formal definition of the 'listFiles' tool for the MCP protocol.
*/
export const listFilesToolDefinition: ToolDefinition = {
name: "listFiles",
description: "Lists files and subdirectories within a specified directory, providing their type, size, and last modification date.",
parameters: {
directory_path: {
type: "string",
description: "The absolute or relative path to the directory whose contents are to be listed.",
required: true
}
}
};
Now, integrate this new tool into our src/tools/index.ts manager:
// src/tools/index.ts
import { ToolDefinition, ToolResponse, ToolParams } from '../types/mcp';
import { readFile, readFileToolDefinition } from './readFile';
import { listFiles, listFilesToolDefinition } from './listFiles'; // Import the new tool
export const tools = {
readFile: readFile,
listFiles: listFiles // Add listFiles to the registry
};
export const toolDefinitions: ToolDefinition[] = [
readFileToolDefinition,
listFilesToolDefinition // Add listFiles's definition
];
export async function executeTool(toolName: string, params: ToolParams): Promise<ToolResponse> {
const tool = tools[toolName as keyof typeof tools];
if (!tool) {
return {
success: false,
error: `Error: Tool '${toolName}' not found.`
};
}
// The cast is needed because each tool narrows its own parameter type
return await tool(params as any);
}
Restart your server (npm run dev) and test tool discovery again:
curl http://localhost:3000/tools
You'll now see both readFile and listFiles proudly listed!
Essential Best Practices and Security Considerations
As you expand your MCP tool capabilities, security becomes paramount. Here are critical best practices:
Always Validate Inputs
Never assume inputs are benign. Always validate data types, formats, lengths, and acceptable values. This is your first line of defense against malformed or malicious requests.
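As a sketch of what that can look like in practice, here's a small hypothetical helper (say, src/tools/validate.ts, not part of the code above) that checks incoming parameters against a tool's definition before execution:

// src/tools/validate.ts (hypothetical helper)
import { ToolDefinition, ToolParams } from '../types/mcp';

/**
 * Checks that required parameters are present and that values match the declared type.
 * Returns a list of problems; an empty list means the input looks valid.
 */
export function validateParams(definition: ToolDefinition, params: ToolParams): string[] {
  const problems: string[] = [];
  for (const [name, spec] of Object.entries(definition.parameters)) {
    const value = params[name];
    if (value === undefined || value === null) {
      if (spec.required) {
        problems.push(`Missing required parameter '${name}'.`);
      }
      continue;
    }
    if (typeof value !== spec.type) {
      problems.push(`Parameter '${name}' should be of type '${spec.type}'.`);
    }
  }
  return problems;
}

You could call this in executeTool before dispatching to the tool itself and return an error response whenever the list isn't empty.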
Implement Strict File Access Policies
By default, Node.js can access your entire file system. For AI-driven tools, you must restrict this. Implement whitelisting for allowed directories:
const ALLOWED_DIRECTORIES = [
path.resolve('/home/user/my-project-data'), // Example user data
path.resolve(process.cwd()), // Current working directory
];
function isPathAllowed(filePath: string): boolean {
const absolute = path.resolve(filePath);
// Ensure the resolved path starts with one of the allowed directories
return ALLOWED_DIRECTORIES.some(dir => absolute.startsWith(dir + path.sep) || absolute === dir);
}
// Integrate this check into your readFile and listFiles functions
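For example, inside readFile this check could slot in right after the absolute path is resolved in Step 2 (a sketch; the exact error message is up to you):

// Inside readFile, just after: const absolutePath = path.resolve(params.file_path);
if (!isPathAllowed(absolutePath)) {
  return {
    success: false,
    error: "Access to this path is not allowed by the server's security policy."
  };
}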
Enforce Size and Depth Limits
Prevent resource exhaustion by limiting:
- File sizes: As shown in readFile, avoid loading huge files into memory.
- Number of results: Cap directory listings or search results.
- Recursion depth: If you implement recursive tools, prevent infinite loops (see the sketch below).
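For the recursion point in particular, here's a hypothetical sketch of a depth- and count-limited directory walk you could build a future tool around:

import fs from 'fs/promises';
import path from 'path';

const MAX_DEPTH = 3;     // Hard ceiling on recursion depth
const MAX_RESULTS = 500; // Cap on the number of entries returned

async function walkDirectory(dir: string, depth = 0, results: string[] = []): Promise<string[]> {
  if (depth >= MAX_DEPTH || results.length >= MAX_RESULTS) {
    return results; // Stop descending once a limit is reached
  }
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    if (results.length >= MAX_RESULTS) break;
    const fullPath = path.join(dir, entry.name);
    results.push(fullPath);
    if (entry.isDirectory()) {
      await walkDirectory(fullPath, depth + 1, results);
    }
  }
  return results;
}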
Log All Access and Operations
Keep detailed logs of which tools are executed, by whom (if authenticated), with what parameters, and the outcome. This is crucial for auditing, debugging, and identifying suspicious activity.
console.log(`[${new Date().toISOString()}] Tool Executed: ${toolName}, Params: ${JSON.stringify(params)}`);
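Slotted into the Express handler we wrote earlier, that could look like this (a sketch; adapt the log format and destination to your own audit needs):

// src/index.ts - the same POST /tools/:toolName handler, now with basic logging
app.post('/tools/:toolName', async (req: Request, res: Response) => {
  const { toolName } = req.params;
  const params = req.body;
  console.log(`[${new Date().toISOString()}] Tool requested: ${toolName}, Params: ${JSON.stringify(params)}`);
  try {
    const result = await executeTool(toolName, params);
    console.log(`[${new Date().toISOString()}] Tool '${toolName}' finished, success: ${result.success}`);
    res.json(result);
  } catch (error: any) {
    console.error(`[${new Date().toISOString()}] Tool '${toolName}' crashed: ${error.message}`);
    res.status(500).json({
      success: false,
      error: `Server-side error during tool execution: ${error.message}`
    });
  }
});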
Conclusion
Congratulations, developer! You've just created and integrated your first functional MCP tools. You've gone beyond theory to:
- Structure a robust MCP tool using TypeScript.
- Manage parameters and craft meaningful responses.
- Implement crucial input validation and error handling.
- Expose your tools via a clean REST API.
- Effectively test your tools using curl.
- Establish a pattern for creating and registering multiple tools.
This is a significant step towards building truly intelligent agents that can interact with your digital environment. What kind of tools are you excited to build next? Perhaps one to search file contents, or analyze structured data, or even automate deployment tasks? The possibilities for empowering your AI are now limitless.
Looking forward to hearing about your creations!
Nicolas Dabène