# Building AI Agents with MCP and TypeScript in 2026
MCP (Model Context Protocol) just crossed 97 million monthly SDK downloads.
In December 2025, Anthropic donated MCP to the Linux Foundation's new Agentic AI Foundation — with OpenAI, Google, Microsoft, AWS, Cloudflare, and Bloomberg as founding members. Every major AI lab now supports it.
If 2025 was the year MCP was adopted, 2026 is the year it goes into production. And going into production means dealing with the messy reality of MCP: servers that go down, rate limits that get hit, auth flows that break.
This article covers how to use NeuroLink — a TypeScript-first AI SDK — to connect your agents to MCP servers the right way. Not just the happy path, but the production path: circuit breakers, retry logic, rate limiting, OAuth 2.1, and more.
## What Is MCP and Why Does It Matter?
MCP is a protocol that standardizes how AI models talk to external tools and data sources. Think of it as REST for AI tool use. Instead of each AI framework inventing its own way to call a GitHub API or query a database, MCP gives everyone a common interface.
Before MCP, integrating tools with AI agents meant:
- Writing custom adapter code per tool per framework
- No standardized auth mechanism
- No reusable tool servers
- Reinventing the wheel every time
Today, there are 5,800+ MCP servers available and growing. Connect once; any MCP-compatible model or agent can use the tool.
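Under the hood, MCP messages are JSON-RPC 2.0. The tool name and arguments below are hypothetical, but the envelope is what any MCP client and server exchange when a tool is invoked:

```typescript
// A hypothetical MCP tool call as it appears on the wire. The envelope is
// standard JSON-RPC 2.0; the tool name and arguments are made up for illustration.
const toolCallRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "get_pull_requests", // a tool the server advertised via "tools/list"
    arguments: { repo: "main", state: "open" },
  },
};

console.log(JSON.stringify(toolCallRequest, null, 2));
```

Because every server speaks this same envelope, a client that can send `tools/list` and `tools/call` can drive any of those 5,800+ servers without bespoke adapter code.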
NeuroLink treats MCP as a first-class citizen. It ships with 58+ built-in MCP tools and lets you dynamically add external MCP servers at runtime — with production-grade reliability built in.
## Getting Started
Install NeuroLink:
```bash
npm install @juspay/neurolink
```
The package requires Node >= 20.18.1.
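If you want your own project to enforce that floor, you can pin it in `package.json` (a standard npm field, shown here as a minimal fragment):

```json
{
  "engines": {
    "node": ">=20.18.1"
  }
}
```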
## Your First MCP-Powered Agent
Before diving into production patterns, here's the simplest possible example — an agent that uses built-in MCP tools automatically:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

const result = await neurolink.generate({
  input: { text: "Search the web for the latest MCP adoption stats and summarize them" },
  provider: "anthropic",
  model: "claude-3-5-sonnet-20241022",
  maxSteps: 10, // allow up to 10 tool-use turns
});

console.log(result.content);
console.log(`Tools used: ${result.toolsUsed?.join(", ")}`);
```
NeuroLink's 58+ built-in tools are available immediately — no configuration required for the defaults.
## Adding External MCP Servers
The real power comes from connecting your own MCP servers. NeuroLink's addExternalMCPServer() method handles this with three transport types.
### Stdio Transport — Local CLI Tools
Stdio is the simplest transport. NeuroLink spawns a child process and communicates over stdin/stdout. This is ideal for CLI-based tools like the Bitbucket MCP server, filesystem tools, or any local utility:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

await neurolink.addExternalMCPServer("bitbucket", {
  id: "bitbucket",
  name: "Bitbucket MCP Server",
  transport: "stdio",
  command: "npx",
  args: ["-y", "@nexus2520/bitbucket-mcp-server"],
  env: {
    BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
    BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
  },
});

// The agent can now use Bitbucket tools automatically
const result = await neurolink.generate({
  input: { text: "List all open pull requests in the main repository" },
  provider: "openai",
  model: "gpt-4o",
});
```
When the agent needs to list PRs, it calls the Bitbucket MCP server directly. No adapter code, no custom function wrappers.
### HTTP Transport — Remote APIs
For remote MCP servers, use the HTTP transport. This is the path for production microservices, third-party APIs, and cloud-hosted tools:
```typescript
await neurolink.addExternalMCPServer("github-copilot", {
  id: "github-copilot",
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  },
  httpOptions: {
    connectionTimeout: 30000,
    requestTimeout: 60000,
    idleTimeout: 120000,
  },
  retryConfig: {
    maxAttempts: 3,
    initialDelay: 1000,
    maxDelay: 30000,
    backoffMultiplier: 2,
  },
  rateLimiting: {
    requestsPerMinute: 60,
    maxBurst: 10,
  },
});
```
Notice the retryConfig and rateLimiting options. These aren't optional extras — they're what separates a demo from a production deployment.
Retry config handles transient failures: network blips, temporary server unavailability, overloaded upstream APIs. Exponential backoff (backoffMultiplier: 2) prevents retry storms.
Rate limiting protects you from exceeding API quotas. The token bucket algorithm (maxBurst) allows occasional bursts while enforcing the average rate.
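To make both mechanisms concrete, here is a minimal sketch of how exponential backoff delays and a token bucket behave. This is illustrative only, not NeuroLink's internals; the function and class names are mine:

```typescript
// Compute the wait before each retry: delay doubles (backoffMultiplier: 2)
// and is capped at maxDelay, so retries never stampede a recovering server.
function backoffDelays(cfg: {
  maxAttempts: number;
  initialDelay: number;
  maxDelay: number;
  backoffMultiplier: number;
}): number[] {
  const delays: number[] = [];
  let delay = cfg.initialDelay;
  for (let attempt = 1; attempt < cfg.maxAttempts; attempt++) {
    delays.push(Math.min(delay, cfg.maxDelay));
    delay *= cfg.backoffMultiplier;
  }
  return delays;
}

// With the config above: wait 1s before retry 1, 2s before retry 2.
console.log(backoffDelays({ maxAttempts: 3, initialDelay: 1000, maxDelay: 30000, backoffMultiplier: 2 }));

// Token bucket: refills continuously at the average rate, but holds at most
// maxBurst tokens — so short bursts pass while the long-run rate is enforced.
class TokenBucket {
  private tokens: number;
  private last = Date.now();

  constructor(private ratePerMinute: number, private maxBurst: number) {
    this.tokens = maxBurst; // start full: an initial burst is allowed
  }

  tryAcquire(): boolean {
    const now = Date.now();
    const refill = ((now - this.last) / 60000) * this.ratePerMinute;
    this.tokens = Math.min(this.maxBurst, this.tokens + refill);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // over budget: the caller should queue or drop the request
  }
}
```

With `requestsPerMinute: 60, maxBurst: 10`, the bucket admits up to 10 back-to-back requests, then throttles to one per second on average.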
### OAuth 2.1 with PKCE — Enterprise APIs
Enterprise MCP servers commonly require OAuth. NeuroLink supports OAuth 2.1 with PKCE natively:
```typescript
await neurolink.addExternalMCPServer("enterprise-api", {
  id: "enterprise-api",
  transport: "http",
  url: "https://api.enterprise.example.com/mcp/v1",
  auth: {
    type: "oauth2",
    oauth: {
      clientId: process.env.OAUTH_CLIENT_ID,
      authorizationUrl: "https://auth.example.com/authorize",
      tokenUrl: "https://auth.example.com/token",
      scope: "mcp:read mcp:write",
      usePKCE: true,
    },
  },
});
```
NeuroLink handles the OAuth flow: token acquisition, refresh, and header injection. Your application code doesn't need to manage tokens.
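For a sense of what `usePKCE: true` adds, here is the verifier/challenge pair a PKCE client generates per RFC 7636. NeuroLink does this internally; the sketch just shows why no client secret needs to be stored:

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636): a one-time random code_verifier and its S256 challenge.
// The challenge goes in the authorization request; the verifier is sent later
// to the token endpoint, proving both requests came from the same client.
function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url"); // 43-char URL-safe string
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}

const { verifier, challenge } = makePkcePair();
console.log(`code_challenge=${challenge}&code_challenge_method=S256`);
```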
## Transport Types at a Glance
NeuroLink supports six transport types for MCP (`ws` is accepted as an alias for `websocket`):

| Transport | Best For |
|---|---|
| `stdio` | Local CLI tools, child processes |
| `http` | Remote APIs, cloud services |
| `sse` | Server-Sent Events, streaming servers |
| `websocket` / `ws` | Bidirectional, real-time servers |
| `tcp` | Raw TCP connections |
| `unix` | Unix domain sockets, same-machine IPC |
## The Circuit Breaker: Production MCP
MCP servers fail. That's a fact of distributed systems. The question is whether your agent fails with them.
NeuroLink includes a built-in circuit breaker (mcpCircuitBreaker.ts) that automatically handles unreliable MCP servers. Here's how it works:
Closed state (normal): requests pass through. Failures are counted.
Open state (triggered): after enough consecutive failures, the circuit opens. Requests fail fast — no waiting for timeouts. The agent degrades gracefully, using remaining available tools.
Half-open state (recovery): after a cooldown period, one request is allowed through. If it succeeds, the circuit closes. If it fails, it opens again.
This is automatic. You don't configure it per server; NeuroLink applies it across all MCP connections.
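The three-state machine is simple enough to sketch in a few lines. This is an illustration of the pattern, not NeuroLink's actual `mcpCircuitBreaker.ts` implementation:

```typescript
type State = "closed" | "open" | "half-open";

// Minimal circuit breaker: counts failures, fails fast while open,
// and probes with a single request after the cooldown.
class CircuitBreaker {
  private state: State = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(private failureThreshold = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === "open") {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: failing fast"); // no waiting on timeouts
      }
      this.state = "half-open"; // cooldown elapsed: allow one probe request
    }
    try {
      const result = await fn();
      this.state = "closed"; // any success closes the circuit
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === "half-open" || this.failures >= this.failureThreshold) {
        this.state = "open"; // trip: subsequent calls fail fast
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

The key property is the fast failure in the open state: a downed MCP server costs the agent milliseconds instead of a full request timeout per tool call.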
## Monitoring Your MCP Connections
After adding servers, you can inspect their status:
```typescript
const status = await neurolink.getMCPStatus();

console.log(`Total servers: ${status.totalServers}`);
console.log(`Available servers: ${status.availableServers}`);
console.log(`Total tools loaded: ${status.totalTools}`);
```
This is useful for health checks, dashboards, and debugging when a tool call fails.
## Full Production Example: Multi-Tool Agent
Here's a complete example that combines stdio (local filesystem), HTTP (remote API with retry), and OAuth (enterprise system), then runs an agent task across all of them:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  observability: {
    langfuse: {
      enabled: true,
      publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
      secretKey: process.env.LANGFUSE_SECRET_KEY!,
      environment: "production",
      traceNameFormat: "userId:operationName",
    },
  },
});

// Local filesystem tool
await neurolink.addExternalMCPServer("filesystem", {
  id: "filesystem",
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
});

// Remote API with retry
await neurolink.addExternalMCPServer("internal-api", {
  id: "internal-api",
  transport: "http",
  url: "https://api.internal.example.com/mcp",
  headers: { "X-API-Key": process.env.INTERNAL_API_KEY! },
  retryConfig: {
    maxAttempts: 5,
    initialDelay: 500,
    maxDelay: 15000,
    backoffMultiplier: 2,
  },
  rateLimiting: {
    requestsPerMinute: 100,
    maxBurst: 20,
  },
});

// Enterprise system with OAuth
await neurolink.addExternalMCPServer("crm", {
  id: "crm",
  transport: "http",
  url: "https://crm.enterprise.example.com/mcp/v2",
  auth: {
    type: "oauth2",
    oauth: {
      clientId: process.env.CRM_CLIENT_ID!,
      authorizationUrl: "https://auth.enterprise.example.com/authorize",
      tokenUrl: "https://auth.enterprise.example.com/token",
      scope: "crm:read crm:write",
      usePKCE: true,
    },
  },
});

// Check that all servers loaded
const status = await neurolink.getMCPStatus();
console.log(`${status.totalTools} tools available across ${status.availableServers} servers`);

// Run the agent
const result = await neurolink.generate({
  input: {
    text: `
      1. Read the customer list from /workspace/customers.json
      2. For each customer, check their account status in the CRM
      3. Generate a summary report and save it to /workspace/report.md
    `,
  },
  provider: "anthropic",
  model: "claude-3-5-sonnet-20241022",
  maxSteps: 50,
  requestId: "agent-task-001",
});

console.log(result.content);
console.log(`Tools invoked: ${result.toolsUsed?.join(", ")}`);
console.log(`Response time: ${result.responseTime}ms`);
```
Every generate() call here is traced to Langfuse automatically, with tool calls, token usage, and timing visible in the dashboard.
## Why MCP-Native Architecture Matters in 2026
MCP joining the Linux Foundation changes the calculus for enterprise adoption. It's no longer "Anthropic's protocol" — it's an open standard with governance from Google, Microsoft, AWS, and OpenAI.
That means:
- MCP servers built today will be compatible with future model providers
- Security tooling, compliance frameworks, and monitoring systems will converge around MCP
- The 5,800+ servers available now will grow to cover virtually every enterprise system
NeuroLink's MCP-native approach means you're not bolting on protocol support after the fact. The circuit breakers, rate limiting, retry logic, and OAuth flows are built for production from the start.
## Summary
In 2026, MCP is how AI agents connect to the world. NeuroLink gives you:
- Six transport types: stdio for local tools, HTTP for remote APIs, plus SSE, WebSocket, TCP, and Unix sockets for specialized needs
- OAuth 2.1 with PKCE: enterprise auth without token management code
- Automatic circuit breaking: agents degrade gracefully when servers fail
- Rate limiting and retry: configurable per-server, with exponential backoff
- 58+ built-in tools: zero configuration for common use cases
- Full observability: every tool call traced to Langfuse/OpenTelemetry
The protocol is standard. The infrastructure shouldn't be your problem.
Try NeuroLink:
- GitHub: github.com/juspay/neurolink — give it a star if this helped
- Discord: Join the community for questions and examples
- Install: `npm install @juspay/neurolink`
Questions? Drop them in the comments. What MCP servers are you connecting to?