Getting an AI agent to work is easy. Getting it to production is a different story.
You wire up your tools, the agent responds, the demo looks great. Then reality hits:
- How do you handle user auth?
- How do you isolate permissions across multiple tenants?
- How do you inject secrets securely?
- When something breaks at 2am, how do you even know what went wrong?
The AI agent ecosystem has frameworks, model providers, and tool integrations covered. What it's been missing is a runtime: the layer that handles governance, security, and deployment so your agent can actually run in production, at scale, without being locked into any single vendor.
That's what we built. And today we're open-sourcing it.
## The Gap Between Demo and Production
Building an AI agent is the easy part. The hard part is everything that comes after.
Authentication. Multi-tenancy. Secret injection. Observability. Protocol compatibility across clients. These aren't glamorous problems, but they're the ones that decide whether your agent ships or stays a demo forever.
The MCP protocol gives you a standard way to connect tools to agents — but it doesn't tell you how to handle any of this. You're left to build it yourself, from scratch, for every project. And if you switch LLM providers or deployment environments down the road, you might rebuild it all again.
We've seen this pattern too many times. So we built the runtime that handles it.
## What LeanMCP Handles
LeanMCP is an open-source agent runtime with a production-grade governance layer built in. Think of it as the operating system for your AI agents — the foundation that lets them run anywhere, work with any client, and stay secure at scale.
| The problem you hit | What LeanMCP handles |
|---|---|
| User authentication | Auth0, Supabase, Cognito, Firebase — one decorator, done |
| Multi-tenancy | Per-user API keys and permission scoping |
| Secret injection | Per-request env vars, no global pollution |
| Production observability | Logging, monitoring, audit trails |
| Client compatibility | Claude Desktop, Cursor, Windsurf — HTTP/SSE/WebSocket |
| Governance | Security policies, access control, compliance-ready audit logs |
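To make the multi-tenancy and secret-injection rows concrete, here is a minimal sketch of the idea: resolve secrets per request from a scoped store instead of mutating global `process.env`. The `scopedEnv` helper and the store shape are hypothetical illustrations, not LeanMCP's actual API.

```typescript
// Hypothetical sketch of per-request secret scoping (not the real LeanMCP API).
// Each tenant's secrets live in an isolated record; handlers get a copy,
// so nothing leaks into process.env or across tenants.
type SecretStore = Record<string, Record<string, string>>;

const store: SecretStore = {
  "tenant-a": { OPENAI_API_KEY: "sk-a" },
  "tenant-b": { OPENAI_API_KEY: "sk-b" },
};

// Resolve a scoped env for one request; globals stay untouched.
function scopedEnv(tenantId: string): Record<string, string> {
  const secrets = store[tenantId];
  if (!secrets) throw new Error(`unknown tenant: ${tenantId}`);
  return { ...secrets }; // shallow copy: handlers can't mutate the store
}

const env = scopedEnv("tenant-a");
console.log(env.OPENAI_API_KEY); // "sk-a"
```

The point of the copy is isolation: a buggy or malicious tool handler can scribble on its own env object without poisoning the next tenant's request.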
The core SDK is open source, MIT licensed. Fork it, modify it, self-host it. The production platform — managed deployment, security governance, elastic auto-scaling across 30+ edge regions — is where we run a business. The foundation is free. The enterprise layer is where serious teams invest.
## From Zero to Running in 10 Minutes
```shell
npm install -g @leanmcp/cli
npx @leanmcp/cli create my-agent
cd my-agent && npm start
```
Your agent runtime is live at `http://localhost:8080/mcp`.
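To smoke-test the endpoint, you can send it the first message any MCP client sends: a JSON-RPC 2.0 `initialize` request. The field values below (protocol version, client name) are illustrative, and the `fetch` call is shown as a comment since it assumes the server from the quickstart is running.

```typescript
// Minimal JSON-RPC 2.0 "initialize" request — the opening handshake of the
// MCP protocol. POST it to the runtime's endpoint to get back the server's
// declared capabilities. Values here are illustrative.
const initialize = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "smoke-test", version: "0.0.1" },
  },
};

// With the quickstart server running:
// await fetch("http://localhost:8080/mcp", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(initialize),
// });
console.log(JSON.stringify(initialize));
```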
Adding authentication takes one decorator:
```typescript
// Tool is assumed to be exported from the same package for brevity.
import { AuthProvider, Authenticated, Tool } from "@leanmcp/auth";

const auth = new AuthProvider("cognito", {
  region: process.env.AWS_REGION,
  userPoolId: process.env.COGNITO_USER_POOL_ID,
  clientId: process.env.COGNITO_CLIENT_ID,
});
await auth.init();

@Authenticated(auth)
export class MyService {
  @Tool({ description: "Authenticated users only" })
  async doSomething(args: { input: string }) {
    return { result: "done" };
  }
}
```
No custom middleware. No manual parsing of auth headers. No deep dives into protocol internals. You write the logic that matters. We handle the runtime.
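From the client side, calling the authenticated tool is an ordinary MCP `tools/call` request with a bearer token attached. The helper below is a hypothetical sketch: how you obtain the token depends on your provider (Cognito issues a JWT after sign-in), and only the request shape is standard.

```typescript
// Hypothetical client-side sketch (not a LeanMCP API): build an authenticated
// MCP "tools/call" request for the doSomething tool defined above.
function toolCallRequest(token: string, input: string) {
  return {
    url: "http://localhost:8080/mcp",
    headers: {
      "Content-Type": "application/json",
      // An invalid or missing token is what @Authenticated rejects server-side.
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 2,
      method: "tools/call",
      params: { name: "doSomething", arguments: { input } },
    }),
  };
}

// Token shown is a placeholder, not a real JWT.
const req = toolCallRequest("example-jwt", "hello");
console.log(req.body);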
## Zero Vendor Lock-in Is a Design Principle, Not a Feature
We support every major agent client — Claude Desktop, Cursor, Windsurf — and every transport protocol: HTTP, SSE, WebSocket. Your agents run on our edge network or your own infrastructure. You're never forced to choose our deployment just to get our governance layer.
We don't want to be the thing that locks you in. We've watched developers get burned by betting early on a single vendor's SDK, then facing deprecations or price hikes they didn't see coming. The AI space moves too fast to assume today's best option stays that way.
If we build this well enough, you won't want to leave. But you always can.
## Where We Are Today
- Open-source SDK: TypeScript (50k+ downloads) + Python (200k+ downloads), MIT licensed
- Community: 6,000+ developers across 6 global hackathons
- Production platform: Managed deployment, observability, elastic scaling on 30+ edge nodes
## The Bet We're Making
We don't know which LLM wins in three years. We don't know which cloud becomes the default home for agents.
But we know this: whoever wins, agents will need a runtime that's stable, secure, and not owned by any single player.
That's what we're building — the ground AI agents stand on.
Code: github.com/LeanMCP/leanmcp-sdk
Website: leanmcp.com
Questions: founders@leanmcp.com
— Xian Lu, @__luxian__