If you've been exploring Claude Code or building AI tools, you've probably heard about MCP (Model Context Protocol). But when it comes to setting up your own MCP server, one question often comes up: which transport should I use?
In this article, I'll break down the three main MCP transports—STDIO, SSE, and HTTP Streamable—explain when each was introduced, what problems they solve, and help you choose the right one for your project.
What is MCP?
MCP (Model Context Protocol) is an open protocol that lets AI assistants like Claude connect to external tools and data sources. Think of it as a standardized way for AI to "talk" to your code.
An MCP server exposes tools (functions the AI can call) and resources (data the AI can access). The transport is simply how the client (like Claude Code) communicates with your server.
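Whatever the transport, the messages themselves are JSON-RPC 2.0. A rough sketch of what a tool call looks like on the wire (the `greet` tool and its arguments are illustrative):

```javascript
// A JSON-RPC 2.0 request a client might send to invoke a tool.
// The "greet" tool and its arguments are illustrative.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "greet",
    arguments: { name: "Ada" }
  }
};

// A matching success response from the server.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Hello, Ada!" }]
  }
};

// The transport's only job is moving these messages back and forth.
console.log(request.method);                  // tools/call
console.log(response.result.content[0].text); // Hello, Ada!
```

The three transports below differ only in *how* these messages travel, not in what they contain.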
The Three Transports at a Glance
| Transport | Use Case | Network | Complexity |
|---|---|---|---|
| STDIO | Local tools, scripts | None (subprocess) | Simple |
| SSE | Web environments, legacy | HTTP | Medium |
| HTTP Streamable | Production, remote servers | HTTP | Medium |
Let's dive into each one.
Transport 1: STDIO (Standard Input/Output)
What is it?
STDIO is the simplest transport. Your MCP server runs as a subprocess, and communication happens through stdin (input) and stdout (output)—the same streams you use when piping commands in a terminal.
```
Claude Code --stdin-->  Your Server
Claude Code <--stdout-- Your Server
```
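Over STDIO, each JSON-RPC message travels as a single line of JSON terminated by a newline. A simplified sketch of that framing (the SDK handles this for you):

```javascript
// STDIO framing: one JSON-RPC message per newline-terminated line.
// Simplified encode/decode helpers; the SDK does this internally.
function encodeMessage(msg) {
  return JSON.stringify(msg) + "\n";
}

function decodeMessages(buffer) {
  // Split on newlines; the final element may be a partial message
  // still arriving on the stream, so hand it back as the remainder.
  const lines = buffer.split("\n");
  const remainder = lines.pop();
  const messages = lines.filter(Boolean).map((line) => JSON.parse(line));
  return { messages, remainder };
}

const wire = encodeMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });
const { messages, remainder } = decodeMessages(wire + '{"partial":');
console.log(messages[0].method); // tools/list
console.log(remainder);          // {"partial":
```

Because the framing is so simple, debugging a STDIO server is often just a matter of reading the lines flowing through the pipes.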
When was it introduced?
STDIO has been part of MCP since the protocol's initial release in November 2024. It's the original transport for local tools and remains the most common choice for Claude Code integrations.
What problems does it solve?
- No network setup: No ports, no HTTP, no CORS issues
- Simple deployment: Just run a script
- Security: Runs locally with your permissions
When to use STDIO
- Building local tools for Claude Code
- Quick prototypes and experiments
- Tools that access local files or system resources
- When you don't need network access
Code Example
Here's a minimal STDIO MCP server:
```javascript
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-mcp-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Define tools
const tools = [
  {
    name: "greet",
    description: "Greet someone by name",
    inputSchema: {
      type: "object",
      properties: {
        name: { type: "string", description: "Name to greet" }
      },
      required: ["name"]
    }
  }
];

// Handle tool listing
server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools }));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === "greet") {
    return {
      content: [{ type: "text", text: `Hello, ${args.name}!` }]
    };
  }
  return { content: [{ type: "text", text: "Unknown tool" }], isError: true };
});

// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
```
Configuration for Claude Code
```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/server.js"]
    }
  }
}
```
Pros and Cons
Pros:
- Zero network configuration
- Simple to implement and debug
- Secure by default (local only)
Cons:
- Local only—can't share with others
- One client per server instance
- Requires the server script on the local machine
Transport 2: SSE (Server-Sent Events)
What is it?
SSE transport uses HTTP with Server-Sent Events for real-time communication:
- Client → Server: HTTP POST requests
- Server → Client: SSE event stream (long-lived connection)
```
Client --POST /messages--> Server
Client <----SSE stream---- Server
```
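The stream side is plain text: events are separated by blank lines, with payloads on `data:` lines. A simplified parser sketch (real SSE also handles `event:`, `id:`, retry fields, and multi-line data):

```javascript
// SSE frames are blocks of text separated by a blank line,
// with payloads on "data:" lines. Simplified parser sketch.
function parseSseFrames(chunk) {
  return chunk
    .split("\n\n")
    .filter(Boolean)
    .map((frame) => {
      const dataLine = frame
        .split("\n")
        .find((line) => line.startsWith("data: "));
      return dataLine ? dataLine.slice("data: ".length) : null;
    })
    .filter(Boolean);
}

const stream = 'data: {"jsonrpc":"2.0","id":1,"result":{}}\n\n';
const payloads = parseSseFrames(stream);
console.log(JSON.parse(payloads[0]).id); // 1
```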
When was it introduced?
SSE transport was introduced in the 2024-11-05 version of the MCP specification as the first HTTP-based transport, enabling web-based integrations and remote access. However, it has since been deprecated in favor of HTTP Streamable as of specification version 2025-03-26.
What problems does it solve?
- Real-time updates: Server can push messages without polling
- HTTP-friendly: Works through firewalls and proxies
- Web compatibility: Works in browsers
When to use SSE
- Legacy systems that already use SSE
- When you specifically need the SSE pattern
- Web applications with existing SSE infrastructure
Note: For new projects, consider HTTP Streamable instead—it's the modern recommended approach.
Code Example
```javascript
#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";
import { z } from "zod";

const app = express();
app.use(express.json());

const transports = new Map();

function createServer() {
  const server = new McpServer({
    name: "my-mcp-server-sse",
    version: "1.0.0"
  });
  // McpServer.tool takes a zod schema shape for its parameters
  server.tool("greet", "Greet someone", {
    name: z.string().describe("Name to greet")
  }, async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }]
  }));
  return server;
}

// SSE endpoint - client connects here
app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  const server = createServer();
  transports.set(transport.sessionId, { transport, server });
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

// Messages endpoint - client sends requests here
app.post("/messages", async (req, res) => {
  const sessionId = req.query.sessionId;
  const session = transports.get(sessionId);
  if (session) {
    // Pass the parsed body explicitly, since express.json()
    // has already consumed the request stream
    await session.transport.handlePostMessage(req, res, req.body);
  } else {
    res.status(400).json({ error: "No session" });
  }
});

app.listen(3002, () => console.log("SSE server on port 3002"));
```
Pros and Cons
Pros:
- Real-time server push
- Works over HTTP (firewall-friendly)
- Good browser support
Cons:
- More complex than STDIO
- Requires maintaining long-lived connections
- Deprecated in favor of HTTP Streamable
Transport 3: HTTP Streamable (Recommended)
What is it?
HTTP Streamable is the modern, recommended transport for remote MCP servers. It uses standard HTTP requests with optional streaming:
- Client → Server: HTTP POST requests
- Server → Client: HTTP responses (can be streamed)
```
Client --POST /mcp--> Server
Client <--Response--- Server
```
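Sessions are negotiated over plain headers: the first `initialize` POST gets back an `Mcp-Session-Id` header, and the client echoes it on every later request. A client-side sketch (the header names follow the spec; the session id value is illustrative):

```javascript
// Client-side sketch of HTTP Streamable session headers.
// The session id value is illustrative; the real one comes
// back from the server on the initialize response.
function buildHeaders(sessionId) {
  const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream"
  };
  if (sessionId) headers["Mcp-Session-Id"] = sessionId;
  return headers;
}

// First request (initialize): no session id yet.
const first = buildHeaders(null);
console.log("Mcp-Session-Id" in first); // false

// After the server assigns a session id, echo it on every request.
const later = buildHeaders("example-session-id");
console.log(later["Mcp-Session-Id"]); // example-session-id
```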
When was it introduced?
HTTP Streamable was introduced in the MCP specification version 2025-03-26 as the recommended approach for remote servers. It replaces SSE as the primary HTTP transport and represents the evolution of MCP's HTTP transport capabilities.
What problems does it solve?
Scalability: Stateless architecture for horizontal scaling
Unlike SSE, which requires maintaining persistent connections, HTTP Streamable can operate in a fully stateless manner. This means you can deploy multiple instances of your MCP server behind a load balancer, and any instance can handle any request. When traffic spikes, simply spin up more server instances; there is no connection affinity or sticky-session setup to worry about.
In practice, this looks like:
```
                      ┌─ MCP Server Instance 1
Client ─► Load ───────┼─ MCP Server Instance 2
          Balancer    └─ MCP Server Instance 3
```
Each request is independent, making auto-scaling straightforward with Kubernetes, AWS ECS, or any container orchestration platform.
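One way to picture this: if session state lives in a shared store rather than in any one process's memory, any instance can pick up any request. A sketch with a plain `Map` standing in for an external store such as Redis (the store choice and instance names are assumptions for illustration):

```javascript
// Sketch: a shared session store lets any instance serve any request.
// A plain Map stands in for an external store like Redis (assumption).
const sharedSessionStore = new Map();

function handleOnInstance(instanceId, sessionId) {
  // Every instance reads and writes the same shared state,
  // so no sticky sessions or connection affinity are needed.
  if (!sharedSessionStore.has(sessionId)) {
    sharedSessionStore.set(sessionId, { createdBy: instanceId, calls: 0 });
  }
  const session = sharedSessionStore.get(sessionId);
  session.calls += 1;
  return session;
}

handleOnInstance("instance-1", "abc");           // first request hits instance 1
const s = handleOnInstance("instance-2", "abc"); // next one hits instance 2
console.log(s.calls); // 2 — instance 2 sees state written by instance 1
```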
Authentication: Native HTTP auth support
HTTP Streamable uses standard HTTP, which means you get the entire HTTP authentication ecosystem for free:
- Bearer tokens: Pass JWT tokens in the `Authorization: Bearer <token>` header
- API keys: Use custom headers like `X-API-Key` for simple integrations
- OAuth 2.0: Implement full OAuth flows for enterprise applications
- mTLS: Use client certificates for machine-to-machine authentication
This is a significant advantage over STDIO (which has no authentication) and SSE (where auth can be awkward to implement). Your existing auth middleware—whether it's Passport.js, Auth0, or a custom solution—works out of the box.
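For example, a bearer-token check can sit in front of the MCP endpoint as ordinary Express middleware. A sketch under stated assumptions (the token set and middleware name are illustrative; a real deployment would verify a JWT or delegate to an identity provider):

```javascript
// Sketch of bearer-token middleware for an Express-based MCP endpoint.
// The hard-coded token set is illustrative only.
const VALID_TOKENS = new Set(["secret-token-123"]);

function requireBearerToken(req, res, next) {
  const header = req.headers["authorization"] || "";
  const [scheme, token] = header.split(" ");
  if (scheme === "Bearer" && VALID_TOKENS.has(token)) {
    return next();
  }
  res.status(401).json({ error: "Unauthorized" });
}

// Usage with an MCP endpoint would look like:
//   app.post("/mcp", requireBearerToken, handleMcp);

// Quick check with stub req/res objects, no server needed:
let result = null;
const req = { headers: { authorization: "Bearer secret-token-123" } };
const res = { status: (code) => ({ json: () => { result = code; } }) };
requireBearerToken(req, res, () => { result = "ok"; });
console.log(result); // ok
```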
Multi-client: One server, many users
With HTTP Streamable, a single server deployment can handle requests from hundreds or thousands of different clients simultaneously. Each request carries its own session context, so there's no confusion about which client is which.
Consider a team scenario:
- Developer A uses the MCP server from their laptop
- Developer B uses it from their desktop
- A CI/CD pipeline calls it for automated tasks
- A web dashboard makes calls for analytics
All these clients hit the same server endpoint, each with their own authentication credentials and session state. The server doesn't need to maintain separate long-lived connections for each client.
Infrastructure: Works with your existing HTTP stack
HTTP Streamable doesn't require special infrastructure—it's just HTTP. This means you can leverage all the battle-tested tools you already know:
- Load balancers: AWS ALB, nginx, HAProxy all work without special configuration
- API gateways: Kong, AWS API Gateway, or Cloudflare can add rate limiting, caching, and monitoring
- Reverse proxies: Standard nginx or Traefik configs apply
- CDNs: Edge caching for read-heavy operations
- Monitoring: Prometheus, Datadog, New Relic—any HTTP metrics tool works
- Logging: Standard access logs capture every request
You don't need to learn new tools or convince your ops team to adopt unfamiliar technology. HTTP Streamable fits into existing infrastructure patterns.
When to use HTTP Streamable
Production deployments
When your MCP server moves from your laptop to serving real users, HTTP Streamable is the right choice. It provides the reliability, observability, and operational patterns that production systems require. You can deploy with confidence knowing that standard deployment practices—blue-green deployments, canary releases, health checks—all work as expected.
Remote servers (not on the same machine as the client)
If your MCP server needs to run somewhere other than the user's local machine, HTTP Streamable is designed for this. Whether your server lives in:
- A cloud VM (AWS EC2, Google Compute, DigitalOcean)
- A container platform (Kubernetes, ECS, Cloud Run)
- A serverless function (with some adaptations)
- A different machine on your local network
HTTP provides the network transport you need. Unlike STDIO, which requires the server to run as a local subprocess, HTTP Streamable works across any network boundary.
When you need authentication
If you need to answer "who is making this request?", use HTTP Streamable. Common scenarios include:
- Paid APIs: Verify users have valid subscriptions before processing requests
- Enterprise deployments: Integrate with corporate SSO (SAML, OIDC)
- Rate limiting per user: Track and limit usage by authenticated identity
- Audit logging: Record who did what for compliance requirements
- Feature flags: Enable different capabilities for different user tiers
Multi-user applications
Building an MCP server that multiple people will use? HTTP Streamable handles this naturally. Each user authenticates independently, and the server can:
- Maintain per-user quotas and limits
- Store user-specific preferences or context
- Isolate data between users for security
- Scale to handle concurrent users without connection management headaches
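A per-user quota, for instance, reduces to a counter keyed by authenticated identity. A minimal sketch (the limit and user ids are illustrative):

```javascript
// Sketch: per-user request counting for quota enforcement.
// The limit and user ids are illustrative.
const usage = new Map();
const REQUEST_LIMIT = 100;

function checkQuota(userId) {
  const count = (usage.get(userId) || 0) + 1;
  usage.set(userId, count);
  return count <= REQUEST_LIMIT;
}

console.log(checkQuota("dev-a")); // true (first call, well under the limit)
```

In a real deployment the counter would live in a shared store and reset on a schedule, but the per-identity keying is the core idea.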
Cloud-hosted MCP servers
If you're deploying to any cloud platform, HTTP Streamable is the natural fit. Cloud providers are built around HTTP:
- Serverless: AWS Lambda, Google Cloud Functions, and Azure Functions all trigger on HTTP requests
- Container services: Cloud Run, App Runner, and Fargate expect HTTP workloads
- PaaS platforms: Heroku, Railway, and Render deploy HTTP apps seamlessly
You'll also benefit from cloud-native features like automatic HTTPS, DDoS protection, and global distribution—all without extra configuration.
Code Example
```javascript
#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";
import crypto from "crypto";
import { z } from "zod";

const app = express();
app.use(express.json());

const sessions = new Map();

function createServer() {
  const server = new McpServer({
    name: "my-mcp-server-http",
    version: "1.0.0"
  });
  // McpServer.tool takes a zod schema shape for its parameters
  server.tool("greet", "Greet someone", {
    name: z.string().describe("Name to greet")
  }, async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }]
  }));
  return server;
}

async function handleMcp(req, res) {
  const sessionId = req.headers["mcp-session-id"];
  let session = sessions.get(sessionId);
  if (!session && req.method === "POST") {
    // Create the server first so the callback below can close over it
    const server = createServer();
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => crypto.randomUUID(),
      onsessioninitialized: (id) => {
        sessions.set(id, { transport, server });
      }
    });
    session = { transport, server };
    await server.connect(transport);
  }
  if (session) {
    await session.transport.handleRequest(req, res, req.body);
  } else {
    res.status(400).json({ error: "No session" });
  }
}

app.post("/mcp", handleMcp);
app.get("/mcp", handleMcp);
app.delete("/mcp", (req, res) => {
  sessions.delete(req.headers["mcp-session-id"]);
  res.status(200).end();
});

app.listen(3001, () => console.log("HTTP server on port 3001"));
```
Configuration for Claude Code
```json
{
  "mcpServers": {
    "my-server-http": {
      "url": "http://localhost:3001/mcp"
    }
  }
}
```
Pros and Cons
Pros:
- Production-ready
- Supports authentication
- Scalable (can be stateless)
- Works with standard HTTP infrastructure
Cons:
- More setup than STDIO
- Requires network access
- Need to handle session management
Want to understand the technical differences between SSE and HTTP Streamable? Read the companion article: Deep Dive: SSE vs HTTP Streamable — What's the Difference?
Comparison: Which Transport Should You Choose?
| Question | STDIO | SSE | HTTP Streamable |
|---|---|---|---|
| Building a local tool? | Yes | No | No |
| Need remote access? | No | Yes | Yes |
| Need authentication? | No | Possible | Yes |
| Multiple clients? | No | Yes | Yes |
| Production deployment? | No | Possible | Yes |
| Simplest setup? | Yes | No | No |
Decision Flowchart
1. Is this for local use only?
   - Yes → Use STDIO
   - No → Continue...
2. Do you need production features (auth, scaling)?
   - Yes → Use HTTP Streamable
   - No → Continue...
3. Do you have existing SSE infrastructure you must integrate with?
   - Yes → Use SSE (but plan to migrate)
   - No → Use HTTP Streamable (it's the modern default)
Getting Started with Templates
I've created a template repository with all three transports: my-mcp-server
my-mcp-server/
├── stdio/ # STDIO transport (local tools)
├── sse/ # SSE transport (Server-Sent Events)
└── http/ # HTTP Streamable transport (recommended for remote)
Quick Start
```bash
# Clone the repo
git clone https://github.com/fzoricic/my-mcp-server.git
cd my-mcp-server

# Try STDIO (local)
cd stdio && npm install && npm start

# Or try HTTP Streamable (remote)
cd ../http && npm install && npm start
```
Each template includes:
- Working server with a `greet` tool
- Clear comments explaining each part
- README with setup instructions
- Instructions for adding your own tools
Conclusion
MCP transports might seem confusing at first, but the choice is usually straightforward:
- Local tools? → STDIO
- Remote/production? → HTTP Streamable
- Legacy SSE system? → SSE (but plan migration)
Start with STDIO for experimentation—it's the simplest. When you're ready to deploy remotely or need authentication, upgrade to HTTP Streamable.
Happy building!
Resources
- Deep Dive: SSE vs HTTP Streamable — Technical comparison of the two HTTP transports
- MCP Official Documentation
- MCP Architecture Guide
- MCP Specification Changelog — Track transport changes across versions
- Building MCP Servers
- MCP SDK on npm