The Problem
The MCP ecosystem is expanding at an extraordinary pace. Anthropic, Microsoft, Google, AWS, and Cloudflare are all publishing official MCP servers. Hundreds of open source servers exist for every conceivable integration. Developers are connecting AI tools — Claude, Cursor, Windsurf — to internal databases, codebases, APIs, and filesystems.
The infrastructure for doing this exists. The governance layer does not.
Today, the reality at most engineering teams looks like this:
- Every developer runs MCP servers on their own machine
- There is no central record of what servers are active
- There is no audit trail of what tools were called, by whom, or when
- Credentials — GitHub personal access tokens, database connection strings, API keys — are stored in JSON files on developer laptops
- There is no approval process for which servers developers can use
- There is no isolation — MCP servers run with the full permissions of the local user
This is the gap MCPNest fills.
The Platform
MCPNest is the governance and infrastructure platform for MCP servers. It operates across three layers: a marketplace for discovery, a gateway for control, and a hosted infrastructure layer for isolation and centralisation.
Layer 1 — Marketplace
What it is
A searchable catalogue of 7,500+ MCP servers indexed from the official Anthropic registry and GitHub. Every server has a quality score, compatibility matrix, publisher profile, and install configuration.
What problem it solves
Without a central catalogue, developers find MCP servers through GitHub searches, Reddit posts, and blog articles. There is no quality signal, no verification, no compatibility information. Teams end up with inconsistent tooling and no visibility into what is actually being used.
Features
- 7,500+ servers indexed with quality scores and compatibility filters
- One-click install for Cursor and VS Code
- Publisher profiles with verified badges
- Server of the Week editorial picks
- Collections and curated starter packs
- Config Validator — validates syntax, endpoints, and arguments before installation
- MCP Composer — build multi-server configurations and share via link
- Trending servers with weekly install data
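The Config Validator's checks can be pictured with a short sketch. The config shape follows the common `mcpServers` client format; the field names and rules below are illustrative assumptions, not MCPNest's actual schema.

```python
# Illustrative sketch of the kind of checks a config validator might run.
# Field names and rules are assumptions, not MCPNest's actual schema.

def validate_server_config(name: str, cfg: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    if "command" not in cfg and "url" not in cfg:
        problems.append(f"{name}: needs either a 'command' (stdio) or a 'url' (HTTP)")
    if "command" in cfg and not isinstance(cfg.get("args", []), list):
        problems.append(f"{name}: 'args' must be a list of strings")
    url = cfg.get("url")
    if url is not None and not url.startswith(("http://", "https://")):
        problems.append(f"{name}: 'url' must be an HTTP(S) endpoint")
    if not isinstance(cfg.get("env", {}), dict):
        problems.append(f"{name}: 'env' must map variable names to values")
    return problems

assert validate_server_config(
    "github", {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"]}
) == []
assert validate_server_config("broken", {"url": "not-a-url"}) != []
```

Catching a malformed entry at validation time is cheaper than debugging a silent connection failure inside an AI client.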
Layer 2 — Gateway
What it is
A single authenticated HTTPS endpoint per workspace. Every developer on the team points their AI client at the same Gateway URL. The Gateway authenticates the request, checks the tool allowlist, proxies to the correct upstream server, and logs the call.
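For a developer, adopting the Gateway is a config change: point the client at the workspace URL with a Bearer token instead of listing local servers. The fragment below follows the common `mcpServers` client format; the URL, path, and token are placeholders, not real MCPNest values.

```json
{
  "mcpServers": {
    "mcpnest": {
      "url": "https://gateway.example.com/v1/<workspace-id>/mcp",
      "headers": { "Authorization": "Bearer <member-token>" }
    }
  }
}
```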
What problem it solves
Without a gateway, every developer maintains their own local configuration. When a server changes, everyone updates manually. There is no central authentication, no audit, and no way to enforce which tools developers can use.
Features
Single endpoint per workspace
One URL replaces individual configurations across every developer machine. When the admin adds or removes a server, the change is immediate for the entire team.
Bearer token authentication (SHA-256)
Every request is authenticated. Tokens are stored as SHA-256 hashes with timing-safe comparison. No plain text secrets.
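The hash-at-rest pattern described here can be sketched with Python's standard library; variable names are illustrative, but the technique (store only SHA-256 digests, compare with a constant-time function) is the one named above.

```python
# Sketch of hash-at-rest token handling: the plaintext token goes to the
# member once, only the digest is stored, and verification is timing-safe.
import hashlib
import hmac
import secrets

def issue_token() -> tuple[str, str]:
    """Return (plaintext token for the member, hex digest for storage)."""
    token = secrets.token_urlsafe(32)
    return token, hashlib.sha256(token.encode()).hexdigest()

def verify(presented: str, stored_digest: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    # compare_digest runs in constant time, so response latency leaks
    # nothing about how many leading characters of the digest matched
    return hmac.compare_digest(digest, stored_digest)

token, stored = issue_token()
assert verify(token, stored)
assert not verify("forged-token", stored)
```

Because only digests are stored, a database leak does not expose usable tokens.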
Per-member tokens
Each developer has their own Bearer token. This means every tool call is attributable to a specific individual, not just to the workspace. Tokens can be revoked instantly for any member without affecting the rest of the team.
Tool allowlists per member
Admins define which tools each developer can call, enforced at the protocol level. A developer with access to the GitHub MCP server can be restricted to specific tools, such as read-only operations only. A workspace-wide toggle switches enforcement on or off.
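A minimal sketch of this check, assuming the toggle means "enforcement off lets everything through, enforcement on permits only listed tools"; member and tool names are made up for the example.

```python
# Illustrative per-member allowlist with a workspace-wide enforcement toggle.
ALLOWLISTS = {
    "alice": {"github.search_repositories", "github.get_file_contents"},
    "bob": set(),  # bob has no tools approved yet
}

def is_allowed(member: str, tool: str, enforce: bool) -> bool:
    if not enforce:
        return True  # toggle off: workspace runs open
    return tool in ALLOWLISTS.get(member, set())

assert is_allowed("alice", "github.search_repositories", enforce=True)
assert not is_allowed("alice", "github.create_issue", enforce=True)
assert is_allowed("bob", "github.create_issue", enforce=False)
```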
Full audit log per tool call
Every call is logged with: workspace ID, member ID, server, tool name, HTTP status, latency, timestamp, and an error code where applicable. No inputs or outputs are stored, only metadata. GDPR-safe by design. Logs are exportable.
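A sketch of what a metadata-only record looks like; the exact field names are assumptions. The point is structural: there is no place for the call's arguments or result to appear.

```python
# Metadata-only audit record: identifiers, status, latency, timestamp.
# The tool call's inputs and outputs are deliberately never captured.
import time

def audit_record(workspace_id, member_id, server, tool,
                 status, latency_ms, error=None):
    record = {
        "workspace_id": workspace_id,
        "member_id": member_id,
        "server": server,
        "tool": tool,
        "status": status,
        "latency_ms": latency_ms,
        "ts": time.time(),
    }
    if error is not None:
        record["error"] = error
    return record

rec = audit_record("ws_1", "alice", "github", "search_repositories", 200, 134)
assert "inputs" not in rec and "outputs" not in rec  # metadata only
```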
Tool namespacing
Automatic conflict resolution when multiple servers expose tools with the same name. No manual configuration required.
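One plausible way to resolve such collisions is to qualify a tool name with its server name only when two servers expose the same tool. The separator and strategy below are assumptions for the sketch, not MCPNest's documented scheme.

```python
# Qualify tool names with their server only where a collision exists;
# unique names stay short. Strategy is illustrative.
from collections import Counter

def namespace_tools(servers: dict[str, list[str]]) -> dict[str, tuple[str, str]]:
    """Map each exposed name to (server, original tool name)."""
    counts = Counter(tool for tools in servers.values() for tool in tools)
    exposed = {}
    for server, tools in servers.items():
        for tool in tools:
            name = f"{server}.{tool}" if counts[tool] > 1 else tool
            exposed[name] = (server, tool)
    return exposed

tools = namespace_tools({
    "github": ["search", "get_file"],
    "brave": ["search"],
})
assert "github.search" in tools and "brave.search" in tools
assert "get_file" in tools  # unique names stay unqualified
```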
Workspace RBAC
Three roles: Owner, Admin, Member. Only Admins can approve servers for the workspace. Separation of duties between team management and tool usage.
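The separation of duties can be expressed as a small permission matrix. The specific actions below are assumptions for illustration; the source only defines the three roles and that server approval is Admin-only.

```python
# Illustrative role/permission matrix for the three workspace roles.
# Action names are assumed; the Admin-only server approval is from the text.
PERMISSIONS = {
    "Owner":  {"manage_members", "approve_servers", "call_tools"},
    "Admin":  {"manage_members", "approve_servers", "call_tools"},
    "Member": {"call_tools"},
}

def can(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

assert can("Admin", "approve_servers")
assert not can("Member", "approve_servers")
assert can("Member", "call_tools")
```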
Layer 3 — Hosted Infrastructure
What it is
MCP servers running in isolated Docker containers on central infrastructure, managed by the MCPNest Orchestrator. Developers deploy servers from a catalogue via the workspace dashboard. The AI client connects to the Gateway, which proxies to the hosted container.
What problem it solves
Running MCP servers locally means no isolation, no central credential management, no shared infrastructure, and no visibility into what is actually running. When a developer's laptop is lost or stolen, every credential in every local config file is exposed. When a developer leaves the company, there is no clean offboarding process.
Features
12 verified servers available for one-click deploy
Filesystem, GitHub, PostgreSQL, Notion, Context7, Slack, SQLite, Brave Search, Puppeteer, Memory, Sequential Thinking, Everything. All pre-validated and running on the MCPNest Bridge image.
Container isolation
Every container runs with: cap_drop ALL (zero Linux capabilities), no-new-privileges flag, CPU and memory resource limits enforced, dedicated Docker network per workspace.
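The flags involved are standard Docker CLI options; the helper below just assembles them the way the isolation settings above describe. The image name, limits, and network naming are placeholder values, not MCPNest's real configuration.

```python
# Assemble real Docker CLI flags matching the isolation settings above.
# Values (limits, image, network name) are placeholders.
def container_args(image: str, workspace: str,
                   cpus: str = "0.5", memory: str = "256m") -> list[str]:
    return [
        "docker", "run", "--rm", "-d",
        "--cap-drop", "ALL",                    # zero Linux capabilities
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--cpus", cpus, "--memory", memory,     # enforced resource limits
        "--network", f"mcpnest-{workspace}",    # dedicated per-workspace network
        image,
    ]

args = container_args("mcpnest/bridge:latest", "ws_1")
assert "--cap-drop" in args and "ALL" in args
```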
Credential management
Servers that require credentials — GitHub personal access tokens, database connection strings, Slack bot tokens, API keys — prompt for them via a modal before deploy. Credentials are encrypted and never logged. Developers never see or store them locally.
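The "never logged" property implies something like secret redaction on the logging path. The sketch below shows the idea with made-up variable names and a naive substitution; a real implementation would also handle encodings and partial matches.

```python
# Sketch: substitute known secret values before any line reaches a log.
# Key names and the substitution approach are illustrative assumptions.
SECRET_KEYS = {"GITHUB_TOKEN", "DATABASE_URL", "SLACK_BOT_TOKEN", "API_KEY"}

def redact(line: str, secrets: dict[str, str]) -> str:
    for key, value in secrets.items():
        if key in SECRET_KEYS and value:
            line = line.replace(value, "[REDACTED]")
    return line

creds = {"GITHUB_TOKEN": "ghp_abc123"}
assert redact("auth with ghp_abc123 ok", creds) == "auth with [REDACTED] ok"
```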
Real-time deploy console
A terminal modal streams container logs during startup, detects the RUNNING + HEALTHY state, and closes automatically. No manual refresh required.
Per-instance log viewer
Every running instance has a Logs button that shows the last 50 lines of container output. Debugging without SSH access.
Terminate with cleanup
Stopping a server removes the container and cleans the database record. No orphaned containers.
MCP Bridge
A stdio-to-HTTP adapter that wraps any npx-based MCP server into an HTTP endpoint compatible with the Gateway. Enables hosting of any MCP server without modifying its source.
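The adapter's core translation can be sketched under the assumption that the stdio transport uses newline-delimited JSON-RPC framing (the standard MCP stdio convention): one HTTP request body becomes one line on the child process's stdin, and one stdout line becomes one HTTP response body. Process management, streaming, and error paths are omitted.

```python
# Sketch of the stdio-to-HTTP translation, assuming newline-delimited
# JSON-RPC framing on the stdio side. Not the Bridge's real implementation.
import json

def http_body_to_stdio(body: bytes) -> bytes:
    """Validate the JSON-RPC payload and frame it for the child's stdin."""
    message = json.loads(body)  # raises on malformed input
    assert message.get("jsonrpc") == "2.0"
    return json.dumps(message).encode() + b"\n"

def stdio_to_http_body(line: bytes) -> bytes:
    """One stdout line from the wrapped server is one HTTP response body."""
    return json.dumps(json.loads(line)).encode()

request = b'{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
framed = http_body_to_stdio(request)
assert framed.endswith(b"\n")
```

Because the translation is purely at the transport layer, the wrapped server's source never needs to change.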
Security and Compliance
Infrastructure
All infrastructure is EU-based. Supabase (Frankfurt) for the database. Hetzner (Nuremberg) for the orchestrator and hosted containers. Vercel edge network for the application layer.
Token security
Bearer tokens are stored as SHA-256 hashes. Timing-safe comparison on every request. Per-member tokens enable individual audit trails and instant revocation.
Data handling
No inputs or outputs from tool calls are stored. Only metadata is logged (who, when, which tool, what status). GDPR-safe by design.
Container security
cap_drop ALL removes all Linux capabilities from containers. no-new-privileges prevents privilege escalation. Resource limits prevent noisy-neighbour effects and runaway processes. Dedicated Docker network per workspace.
Self-host
The full MCPNest stack is available for self-hosted deployment via Docker for teams that require on-premise infrastructure.
The Result
One month. 14 versions shipped. 7,500+ MCP servers indexed. Enterprise Gateway live. 12 hosted servers operational. Partnerships with Grafana, RailPush, and Context7 confirmed.
The MCP ecosystem needed a governance layer. MCPNest is it.
