DEV Community

mcp catalogs are becoming the new internal developer portal

Docker published a post on Friday about Custom MCP Catalogs and Profiles being generally available, and my first thought was not about Docker at all.

My first thought was: this is Backstage, but for agents.

Not literally, obviously. No software catalog, no service scorecards, no plugin marketplace with the same energy as an abandoned open source project. But structurally? The same architectural pattern is appearing.

Internal developer portals gave humans a curated view of tools, services, permissions, and infrastructure. MCP catalogs give agents the same thing. A curated, governed, versioned collection of capabilities that an entity — whether a human or an agent — can discover and use.

And like the internal developer portal wave, nobody is going to plan for this until they desperately need it.

this is fine

what docker actually announced

Let me summarize the feature quickly, because the angle is more interesting than the feature itself.

Docker now lets you create Custom MCP Catalogs — curated collections of MCP servers that your organization approves. You bundle internal tools alongside public ones, push the catalog as an OCI artifact to a container registry, and share it with your team.

Then there are Profiles — named groupings of MCP servers that developers can switch between. A "coding" profile with Playwright and GitHub servers. A "planning" profile with Notion, Atlassian, and Markitdown. Profiles can be shared as OCI artifacts too, so teams can standardize on setups that work.
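To make the relationship concrete, here is a minimal sketch of the catalog/profile data model in Python. The class names and structure are my own illustration, not Docker's actual catalog format; the point is just that a profile is a named subset of an org-approved catalog:

```python
# Illustrative sketch only: a minimal data model for catalogs and profiles.
# Names and structure are assumptions, not Docker's real catalog schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MCPServer:
    name: str
    image: str  # e.g. the OCI reference the server is distributed as

@dataclass
class Catalog:
    """An org-approved collection of MCP servers."""
    servers: dict[str, MCPServer] = field(default_factory=dict)

    def add(self, server: MCPServer) -> None:
        self.servers[server.name] = server

@dataclass
class Profile:
    """A named subset of the catalog for one workflow."""
    name: str
    server_names: list[str]

    def resolve(self, catalog: Catalog) -> list[MCPServer]:
        # A profile can only reference servers the catalog approves.
        missing = [n for n in self.server_names if n not in catalog.servers]
        if missing:
            raise KeyError(f"not in approved catalog: {missing}")
        return [catalog.servers[n] for n in self.server_names]

catalog = Catalog()
catalog.add(MCPServer("github", "mcp/github:latest"))
catalog.add(MCPServer("playwright", "mcp/playwright:latest"))
catalog.add(MCPServer("notion", "mcp/notion:latest"))

coding = Profile("coding", ["playwright", "github"])
print([s.name for s in coding.resolve(catalog)])  # ['playwright', 'github']
```

Note the constraint baked into `resolve`: a profile cannot reference a server the catalog never approved. That one check is the whole governance story in miniature.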

From Docker's perspective, these features solve a real problem: MCP adoption is growing fast, and teams need to standardize what's trusted without constraining individual workflows.

That is true. But the interesting part is where this pattern is going.

the internal developer portal parallel

If you have been around platform engineering for a few years, this should feel familiar.

Backstage, Port, Cortex, and the rest of the internal developer portal space all solve the same fundamental problem: organizations have too many tools, services, and infrastructure surfaces for humans to discover and navigate on their own. Someone needs to curate the catalog, define the golden paths, set the permissions, and make sure the developer does not need to know everything to ship.

MCP catalogs solve the same problem for the same reason, but the consumer is different.

Instead of a human browsing a service catalog to find which team owns the payments API, an agent browses an MCP catalog to discover which tools it can use to investigate a payment incident. Instead of a human checking a scorecard to see if a service meets deployment standards, an agent checks a profile to see which tools are appropriate for production operations versus development experimentation.

The consumer changes. The architecture does not.

why this matters more than it sounds

The boring detail is what makes this important.

Docker is packaging MCP catalogs as OCI artifacts. Push them to a registry, pull them into your agent runtime, version them, sign them, control access the same way you control container images.

This is exactly how infrastructure tooling should work.

Instead of every developer configuring MCP connections in JSON files, platform teams ship a catalog. Instead of every agent independently discovering tools on the open internet, the catalog defines what is available. Instead of security teams trying to audit hundreds of individual MCP server configurations, they review one catalog artifact.

The same pattern that made container image registries the center of deployment infrastructure is now making them the center of agent tooling infrastructure.

No new infrastructure to build. Same distribution mechanism, different content.

profiles are permission boundaries in disguise

The Profile concept is the part that keeps pulling me back in.

Docker presents Profiles as a way to organize workflows — coding versus planning versus research. That is a perfectly fine starting point. But Profiles are also permission boundaries.

If you define a profile for SRE work, it should include incident investigation tools. If you define a profile for application development, it should include code and test tools. If you define a profile for CI automation, it should include deployment and monitoring tools. Each profile implicitly defines what an agent operating in that context is allowed to do.
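A sketch of what that looks like when you enforce it at call time. The profile-to-tool mapping below is invented for illustration; the mechanism is the point:

```python
# Hedged sketch: a profile acting as a permission boundary at call time.
# The profile contents are invented examples, not an official mapping.
PROFILES: dict[str, set[str]] = {
    "sre":    {"get_incidents", "query_logs", "get_service_owner"},
    "appdev": {"run_tests", "open_pull_request", "read_docs"},
    "ci":     {"deploy", "get_deploy_status", "query_metrics"},
}

def authorize_tool_call(profile: str, tool: str) -> bool:
    """An agent operating under a profile may only call that profile's tools."""
    return tool in PROFILES.get(profile, set())

assert authorize_tool_call("sre", "query_logs")
assert not authorize_tool_call("appdev", "deploy")  # dev profile can't deploy
```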

The tool catalog becomes the access control surface.

This is not a stretch. Docker's own roadmap mentions governance and policy controls for restricting MCP usage to approved catalogs, and Docker AI Governance, announced last week, adds centralized control over network access, credentials, and tool permissions.

The direction is clear: the catalog is where governance happens.

the platform team job is changing again

I keep coming back to the same observation across the last several posts here.

When agents use tools, the platform team's job is no longer only "define how humans deploy to production." It is also "define what agents can discover, use, and automate."

MCP catalogs give platform teams a concrete mechanism for that second job.

  • Which MCP servers are trusted?
  • Which tools in each server are safe to expose?
  • Which profiles should exist for different roles?
  • Who can publish a catalog?
  • How are catalogs versioned and updated?
  • What happens when an internal API changes and the MCP server breaks?
  • How do we audit which tools agents actually used?

These are platform engineering questions with an AI accent.

If you already run an internal developer portal, you should be thinking about whether it should serve agents too. Maybe agents authenticate to the same catalog API. Maybe they read service definitions, deployment metadata, runbook links, and ownership information through MCP instead of a human UI.

If you do not run an internal developer portal, MCP catalogs might be the first agent-facing platform your company builds. It will feel familiar to anyone who has managed a package registry or a container registry. The questions are the same and the distribution is familiar; only the consumer has changed.

the catalog becomes the governance surface

The critical shift is this: once agents discover tools through a catalog, the catalog is no longer a convenience feature. It is the access control system.

If a malicious MCP server gets added to a catalog, every agent using that catalog gains a new capability it was never designed to have. If a catalog contains a misconfigured server with broad permissions, the agent inherits those permissions. If a catalog is not updated when an internal API changes, agents start failing silently.

This is the same set of concerns that made package registries require signing, scanning, access control, and audit. MCP catalogs will go through the same maturation, but faster, because the blast radius is larger.

An agent with a bad npm package can fail a build. An agent with a bad MCP server can call production APIs.
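One mitigation carries over directly from container images: pin the catalog artifact by digest and refuse to load anything that no longer matches. A minimal Python sketch (the catalog file format here is invented; in practice the digest would come from the registry or a signature):

```python
# Minimal sketch: pin a catalog artifact by digest, refuse to load on mismatch.
# Mirrors how container images are pinned; the catalog format is invented.
import hashlib
import json

def digest(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def load_catalog(blob: bytes, pinned: str) -> dict:
    """Verify the catalog bytes against the pinned digest before trusting them."""
    if digest(blob) != pinned:
        raise ValueError("catalog digest mismatch: refusing to load")
    return json.loads(blob)

blob = json.dumps({"servers": ["github", "notion"]}).encode()
pinned = digest(blob)              # recorded at publish time
catalog = load_catalog(blob, pinned)

tampered = blob + b" "             # any modification changes the digest
try:
    load_catalog(tampered, pinned)
except ValueError:
    print("tampered catalog rejected")
```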

what i would do tomorrow

If I were running a platform team today, I would not wait for the ecosystem to mature before engaging with this.

I would start by defining what agents should be allowed to do in my organization, then work backward to the catalog.

What MCP tools support those actions? Which tools are safe to expose broadly? Which need role-based or profile-based restrictions? How do I audit usage? How do I respond when a tool is deprecated or compromised?

I would publish a small custom catalog with the most boring, most useful servers first. GitHub, Notion, a read-only internal docs server, a deployment status check. Let the team use it, observe what happens, and iterate.

The teams that win here will be the ones that treat agent tool catalogs the same way they treated container image registries, package managers, and internal developer portals: as curated infrastructure that requires maintenance, governance, and iteration — not a one-time publish-and-forget.

the punchline

MCP catalogs are not a Docker feature. They are an architectural pattern.

The same forces that created internal developer portals for humans are creating agent-facing catalogs for AI. We have the same problems — discovery, curation, governance, permissions, audit — with the same solutions. Just different consumers, faster feedback loops, and a higher blast radius.

Docker's Custom Catalogs and Profiles are an early concrete example. But every MCP ecosystem player is heading in the same direction. The CNCF ecosystem is pushing AI gateway custom transformations. GitHub is shipping enterprise-managed MCP plugins. GKE has inference gateways with policy surfaces.

The catalog is becoming the governance plane for agent actions.

If you are building an internal developer portal today, you should ask whether it should serve agents, or whether agents will build their own catalogs instead.

I have a guess about which answer ages better.
