Dennis Traub for AWS

Missing from the MCP debate: Who holds the keys when 50 agents access 50 APIs?

There are two debates happening right now:

CLI vs MCP - should agents call existing CLIs or use an MCP server? And API vs MCP - does wrapping a REST API in an MCP server add value, or just complexity?

Both focus on how agents call tools. Neither asks who holds the credentials when they do.

Fifty agents, fifty sets of keys

When one developer runs one agent on one laptop, credentials are simple. You store them locally, maybe rotate them, and move on.

But that's not where we're heading. Dozens of agents per team, each needing access to Slack, GitHub, Jira, Office 365, that legacy CRM, multiple SaaS tools, and all your internal APIs.

Some of those have CLIs. Most don't - they're SaaS products with REST APIs. And that's if you're lucky: who knows how many production systems still sit behind a shared, password-protected admin account.

So every agent needs a separate API key, OAuth token, or username/password pair. For each downstream system. On every machine. And if you've ever managed API keys for a team, you know where this goes. Keys in .env files, shared over Slack, committed to repos, never rotated.

Now hand that problem to fifty autonomous agents.

What happened to SSO?

Most organizations with any sense of security have established SSO and spent years consolidating identity. Every SaaS tool, every internal system, every third-party integration flows through one identity provider.

When someone leaves, you disable a single account. When compliance asks about access controls, there's one answer - and you know exactly where to find it.

And now, agents are about to blow a hole wide open in everything you've built. Whether your agent calls a CLI, hits a REST API, or talks to an MCP server, it needs credentials. And if those credentials live on the agent's machine, they live outside your identity boundary.

Imagine a contractor wrapping up on Friday. You disable their SSO account, but their laptop still has three agents with API keys for your CRM, your internal docs, and your deployment pipeline. Those keys don't expire with the SSO account. Those agents can continue calling your APIs long after the contractor has moved on.

Remote MCP servers are identity boundaries

This is where remote MCP servers earn their place in both debates.

The CLI vs MCP crowd argues about token efficiency. The API vs MCP crowd argues about unnecessary abstraction. Neither side is talking about the nightmare of decentralized credential management.

Charles Chen makes this point well in MCP is Dead; Long Live MCP!. Most of the debate ignores the difference between MCP over stdio (local, and yeah, mostly pointless compared to raw curl or a CLI) and MCP over streamable HTTP (remote, centralized). Once MCP runs as a centralized server, users authenticate via OAuth and never touch the downstream keys.

As he puts it: "An engineer leaves your team? Revoke their OAuth token and access to the MCP server; they never had access to other keys and secrets to start with."

Now take that one step further. In most organizations, that OAuth isn't standalone - it flows through SSO. The MCP server becomes an identity boundary. Your users never store any API keys, custom tokens, or service accounts. One auth mechanism instead of one per machine per agent per API.

Disable the SSO account, and every agent loses access. To everything.
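That boundary can be sketched in a few lines. The following is an illustrative stand-in, not the MCP spec or any real SDK: `IdentityProvider`, `MCPGateway`, and the key names are all hypothetical, and real servers would verify signed tokens against an actual IdP. The point it demonstrates is the one above - the downstream keys live server-side, and disabling the account cuts off every agent at once.

```python
class IdentityProvider:
    """Stand-in for the SSO/OAuth layer the server validates against."""
    def __init__(self):
        self.active_accounts = set()

    def validate(self, bearer_token: str) -> bool:
        # Real servers verify a signed token with the IdP;
        # here a token is just "<account>:<nonce>".
        account = bearer_token.split(":", 1)[0]
        return account in self.active_accounts


class MCPGateway:
    """Holds the downstream API keys; agents never see them."""
    def __init__(self, idp: IdentityProvider, downstream_keys: dict):
        self.idp = idp
        self._keys = downstream_keys  # lives server-side only

    def call_tool(self, bearer_token: str, system: str) -> str:
        if not self.idp.validate(bearer_token):
            raise PermissionError("SSO account disabled or token revoked")
        api_key = self._keys[system]
        return f"called {system} with a server-held key"


idp = IdentityProvider()
idp.active_accounts.add("contractor")
gateway = MCPGateway(idp, {"crm": "crm-secret", "docs": "docs-secret"})

print(gateway.call_tool("contractor:abc", "crm"))  # works while the account is active

idp.active_accounts.discard("contractor")  # one SSO disable...
try:
    gateway.call_tool("contractor:abc", "crm")
except PermissionError as e:
    print(e)  # ...and every agent loses access, to everything
```

Note that the agent only ever holds a bearer token tied to its user's SSO account - the CRM and docs keys never leave the gateway.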

But we already learned this, right? Every microservice managing its own database credentials was a nightmare until we centralized secrets management. Agent credentials are the same problem, just one layer up.

Top comments (6)

samz

Great article. The contractor scenario is real, but it's a credential management problem, not a CLI vs MCP problem.
A CLI is just an API client. If your API uses OAuth with short-lived tokens and token revocation, disabling the user's account invalidates all tokens. The CLI stops working immediately, same as an MCP server would. If your API uses token introspection, the resource server can check token validity on every call. No stale keys, no zombie access.

The scenario where a contractor's laptop still has active API keys after they leave isn't a failure of CLI as an interface. It's a failure of key management. Long-lived API keys with no expiry and no revocation mechanism are the problem. That's true whether those keys are used by a CLI, an MCP server, a curl command, or a browser.

Remote MCP over streamable HTTP can centralize auth, and that's genuinely useful. But a CLI calling an API behind a proper OAuth/OIDC layer with SSO gives you the same revocation guarantees. One SSO disable, all tokens invalidated, all CLI calls fail. The auth boundary lives at the API layer, not the transport layer.

The real question isn't CLI or MCP. It's whether your API has a well-designed authz layer. If it does, both work. If it doesn't, MCP is just a centralized band-aid over a decentralized problem.

The debate should be about how CLI and MCP work together, not which one replaces the other. The common denominator is always the same: design a good API.

Dennis Traub (AWS)

Yes, I think you're spot on.

The problem I see with many APIs (or CLIs, which are often just API wrappers) is that they each come with their own authorization mechanism. Some use OAuth, some use long-lived API keys or PATs, some use username/password.

This kind of credential zoo - and I have yet to see an organization that has none of it - is something that should be centrally managed, not spread across machines. With all these credentials behind an SSO layer, I can at least be sure I have a single chokepoint, not dozens of them.

Of course there are different ways to do this, remote MCP servers are just one of them.

So you're right: the question isn't "MCP or CLI", it's "where do you manage your credentials." Quite often the answer is "on my local machines" - and that's where it becomes a security hazard.

samz

Agreed. The credential zoo problem is real and it's orthogonal to the CLI vs MCP debate.

This is actually why we built Vault at Apideck. Credentials for 300+ integrations are managed per-consumer, per-connection, centrally. The CLI is just the interface layer on top. When we revoke a consumer's access, every connector goes dark. The CLI doesn't hold any secrets, it resolves them at runtime from the vault.

A remote MCP server can be the centralization point. So can an API gateway with OAuth/OIDC. So can a credential vault behind a CLI. The pattern is the same: credentials live in one place, interfaces consume them. The mistake is coupling the "where credentials live" question to the "how agents call APIs" question. They're independent architectural decisions.
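The "credentials live in one place, interfaces consume them" pattern can be sketched as follows. This is a generic illustration, not Apideck's actual Vault API - every class and method name here is hypothetical. The interface layer holds no secrets; it resolves them at call time, so one revocation turns every connector dark.

```python
class CredentialVault:
    def __init__(self):
        # (consumer, connection) -> secret; secrets never leave the vault host
        self._secrets = {}
        self._revoked = set()

    def store(self, consumer: str, connection: str, secret: str) -> None:
        self._secrets[(consumer, connection)] = secret

    def resolve(self, consumer: str, connection: str) -> str:
        if consumer in self._revoked:
            raise PermissionError(f"consumer {consumer!r} revoked")
        return self._secrets[(consumer, connection)]

    def revoke_consumer(self, consumer: str) -> None:
        # one revocation, every connector goes dark
        self._revoked.add(consumer)


class CLI:
    """The interface layer holds no secrets - it resolves them at runtime."""
    def __init__(self, vault: CredentialVault, consumer: str):
        self.vault, self.consumer = vault, consumer

    def call(self, connection: str) -> str:
        secret = self.vault.resolve(self.consumer, connection)  # never cached
        return f"{connection}: authenticated"


vault = CredentialVault()
vault.store("acme", "slack", "xoxb-…")
vault.store("acme", "github", "ghp-…")

cli = CLI(vault, "acme")
print(cli.call("slack"))

vault.revoke_consumer("acme")
# every subsequent call, on any connection, now raises PermissionError
```

The same `CredentialVault` could just as easily sit behind a remote MCP server or an API gateway - which is the point: where credentials live is independent of how agents call APIs.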

Your framing nails it: it's not about the transport, it's about whether you have a single chokepoint or dozens of them.

Global Chat

This nails the credential sprawl problem, but I think there is an equally thorny issue one layer upstream: discovery.

Before you can even centralize credentials behind a remote MCP server, your agents need to answer a more basic question -- which MCP servers exist, and which ones should I trust?

Right now that answer is "whatever the developer hardcoded." There is no standardized way for an agent to discover available MCP servers (or A2A peers, for that matter). There are at least eight competing approaches -- agents.txt files, AGENTS.md, multiple MCP registries, several IETF drafts for agent discovery, the A2A protocol's own discovery mechanism, and various vendor-specific directories. None of them interoperate.

The credential centralization you describe only works if there is also a trusted discovery layer telling agents where those centralized servers are. Otherwise you solve the key distribution problem but leave agents blindly connecting to whatever endpoint they find. That is arguably worse -- now you have centralized trust pointing at unverified services.

The contractor scenario you mention is a good example. Even with SSO and centralized MCP servers, if there is no discovery governance, that contractor's agents could have been pointed at shadow MCP servers that persist after offboarding.

I'd be curious whether you see discovery as something that should live inside the MCP spec itself (some kind of registry/resolution protocol) or whether it is better handled at the infrastructure layer, like DNS-style resolution for agent capabilities.

Dennis Traub (AWS)

That's a big issue. I can't really answer how to solve it yet, but it's something we definitely need to think deeply about.
