Building an MCP server: from concept to secure AI interactions
Six months ago, MCP (Model Context Protocol) wasn’t even in most developers’ vocabulary. There was no standard approach to exposing tools to AI agents. But as AI evolved from simple chatbots to powerful assistants capable of real-world tasks, secure and structured interactions quickly became essential.
At Scalekit, we built our own MCP server, and this is how we went about it.
Why streaming mattered
Before writing any authentication logic, we had to decide how to stream responses to AI agents. This choice shaped how responses were structured and how agents parsed them.
- Server-Sent Events (SSE): Reliable and simple, but limited.
- HTTP Streamable: Flexible, supports structured JSON, and better for complex interactions.
We chose HTTP Streamable for its ability to handle richer, multi-step outputs.
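For reference, here is a minimal sketch of how a Streamable HTTP endpoint can be wired up with Express and the official TypeScript SDK; the endpoint path, server name, and port are illustrative, and registerTools is the helper described in Step 1 below.

import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { registerTools } from "./tools"; // helper shown in Step 1

const app = express();
app.use(express.json());

// Stateless pattern: build a fresh server + transport per request and let
// the transport decide whether to reply with plain JSON or an HTTP stream.
app.post("/mcp", async (req, res) => {
  const server = new McpServer({ name: "scalekit-mcp", version: "1.0.0" });
  registerTools(server);

  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // no session tracking in this sketch
  });
  res.on("close", () => transport.close());

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);

Session handling and authentication are deliberately omitted here; they come in later steps.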
Step 1: Build simple tools first
We started with a few straightforward tools, each with:
- A clear name and description
- Defined access scopes
- A simple run() function
Example:
server.tool("list_environments", {
  description: "List all environments accessible by the current user",
  run: async ({ userId }) => {
    // userId comes from the authenticated request context
    const envs = await getEnvsForUser(userId);
    return {
      content: envs.map(env => ({
        type: "text",
        text: `${env.id} (${env.name})`
      }))
    };
  }
});
We validated inputs with Zod schemas:
import { z } from "zod";

const schema = z.object({
  org_id: z.string().startsWith('org_'),
});
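To make the failure path concrete, here is a hypothetical handler that reuses the schema above; safeParse returns a structured result instead of throwing, which maps neatly onto an MCP-style error response (getOrgById is an illustrative helper, not part of our API).

// Hypothetical handler for an org-scoped tool, reusing `schema` from above
async function runGetOrganization(input: unknown) {
  const parsed = schema.safeParse(input);
  if (!parsed.success) {
    // e.g. { org_id: "12345" } is rejected because it does not start with "org_"
    return {
      isError: true,
      content: [{ type: "text", text: `Invalid input: ${parsed.error.message}` }]
    };
  }
  const org = await getOrgById(parsed.data.org_id); // illustrative data access
  return { content: [{ type: "text", text: `${org.id} (${org.name})` }] };
}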
Tools were registered centrally for clarity:
export function registerTools(server: McpServer) {
  server.tool("create_organization", createOrgTool);
  server.tool("list_environments", listEnvsTool);
}
Step 2: Security and access control from day one
Every tool had explicit scopes to define access clearly. Using OAuth 2.1 and protected resource metadata allowed agents to discover required scopes programmatically. Here’s what we ensured:
- Every tool should be explicitly scoped
- Every request should be traceable to who (or what) triggered it
- And no tool should be callable without being deliberately authorized
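A pattern that helps enforce this is to make the scope requirement part of the tool definition itself, so a tool literally cannot be registered without declaring what it needs. The sketch below is illustrative rather than our exact code; ToolDefinition, requiredScopes, and createOrganization are hypothetical names.

// Illustrative: a tool cannot be defined without declaring its scopes
interface ToolDefinition {
  description: string;
  requiredScopes: string[];            // e.g. ["org:write"]
  run: (input: unknown, ctx: ToolContext) => Promise<unknown>;
}

interface ToolContext {
  userId: string;   // who (or what) triggered the call
  scopes: string[]; // scopes granted on the presented token
}

const createOrgTool: ToolDefinition = {
  description: "Create a new organization",
  requiredScopes: ["org:write"],
  run: async (input, ctx) => {
    // ctx.userId keeps every request traceable to its principal
    return createOrganization(input, ctx.userId); // createOrganization is hypothetical
  },
};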
Step 3: Secure your server with OAuth in four steps
- Issue OAuth tokens: Use client credentials for machine-to-machine access.
- Validate the JWT on every request (a fuller verification sketch follows this list):
const token = req.headers.authorization?.split(' ')[1]; // "Bearer <token>"
const claims = await verifyToken(token);
- Expose protected resource metadata so agents can discover available scopes:
GET /.well-known/oauth-protected-resource
This endpoint returns a JSON document that tells the client which authorization server to use, how to present the bearer token, and which scopes are supported:
const metadata = {
  "resource": "https://mcp.scalekit.com",
  "authorization_servers": [
    "https://mcp.scalekit.com/.well-known/oauth-authorization-server"
  ],
  "bearer_methods_supported": [
    "header"
  ],
  "resource_documentation": "https://docs.scalekit.com",
  "scopes_supported": [
    "wks:read",
    "wks:write",
    "env:read",
    "env:write",
    "org:read",
    "org:write"
  ]
};
- Enforce scope-based access control for each tool:
// OAuth scopes are typically a space-delimited string, so split before checking
const scopes = (claims.scope ?? "").split(" ");
if (!scopes.includes("org:write")) {
  throw new Error("Missing required scope: org:write");
}
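Putting the pieces together, here is one way verifyToken and a reusable scope check can be implemented with the jose library; the issuer and JWKS URL are placeholders, and the exact claim layout depends on your authorization server.

import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder endpoints; substitute your authorization server's real values
const jwks = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json")
);

export async function verifyToken(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://auth.example.com",
    audience: "https://mcp.scalekit.com", // must match the protected resource
  });
  return payload; // includes `scope` among other claims
}

export function requireScope(claims: { scope?: string | string[] }, scope: string) {
  const granted = Array.isArray(claims.scope)
    ? claims.scope
    : (claims.scope ?? "").split(" ");
  if (!granted.includes(scope)) {
    throw new Error(`Missing required scope: ${scope}`);
  }
}

Keeping the scope check in one helper means each tool only has to state which scope it needs.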
Step 4: Validate with real AI agents
We tested with agents like Claude Desktop, Windsurf, and ChatGPT (via MCP SuperAssistant), using mcp-remote to bridge connections:
npx mcp-remote https://mcp.example.com/sse
This surfaced issues like scope mismatches and unclear parameters early.
Developer tip: use a debugging sidekick
Tools like MCP Inspector display registered tools, input schemas, and scopes, allowing quick manual tests without a full agent session.
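If you haven't tried it, the Inspector is typically launched straight from npx (command as documented at the time of writing) and then pointed at your server's URL:

npx @modelcontextprotocol/inspector

From there you can invoke individual tools with hand-crafted arguments and confirm that scope errors come back the way you expect.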
If you’d like a deep dive into the process, read the full guide on building MCP servers.
Your turn
Have you built an MCP server? What are the challenges you faced? Let me know in the comments 👇