AI agents are getting their own credentials and nobody is asking who's accountable when they leak. That sentence should terrify you more than it does.
I've been managing secrets at a 15-person startup for a few years now. We can barely keep human API keys out of Git history. The idea of every AI agent running around with its own identity makes me want to close my laptop and go farm goats.
But here we are.
What Actually Happened
Cloudflare just launched a new scannable API token format with prefixes like cfat_. This is smart — it means tokens are instantly recognizable by pattern-matching tools. GitHub Secret Scanning can detect leaked Cloudflare tokens when they show up in a commit, though the revocation process may require manual remediation rather than being fully automatic.
That's genuinely good engineering. Two major platforms cooperating to shrink the window between "oops" and "revoked." I respect it.
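The whole point of a fixed prefix is that one regex can flag candidate tokens in a diff before they ever reach a remote. A minimal sketch of that idea; the `cfat_` prefix is from the announcement, but the token body's length and character set below are assumptions for illustration:

```python
import re

# Hypothetical pattern: cfat_ is the real prefix, but the exact length and
# alphabet of the token body are guesses, not Cloudflare's actual spec.
CFAT_PATTERN = re.compile(r"\bcfat_[A-Za-z0-9_-]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings that look like prefixed Cloudflare API tokens."""
    return CFAT_PATTERN.findall(text)

# Simulated staged diff with a token pasted into a .env-style line.
commit_diff = "DEBUG=true\nCF_TOKEN=cfat_" + "x" * 40 + "\n"
print(find_candidate_tokens(commit_diff))
```

This is exactly the kind of check GitHub Secret Scanning runs server-side; the difference is they run it against every pushed commit and can notify the issuing platform.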
But zoom out for a second. Why does this need to exist at all?
The Real Problem Nobody Wants to Say Out Loud
Non-human identities already outnumber human ones in most organizations. Read that again. Service accounts, CI/CD tokens, bot credentials, API keys — they've been quietly multiplying for years. Now add AI agents to the pile.
Each agent requires credentials to do anything useful. Call an API. Read a database. Deploy a service. Each one becomes a new secret to rotate, scope, monitor, and eventually lose track of.
Here's what I've seen firsthand:
→ Secrets get copy-pasted into .env files that end up in repos
→ Service accounts get created for a "quick test" and never get deleted
→ Nobody owns the rotation schedule because nobody owns the bot
→ When something leaks, the first question is always "wait, what even uses this?"
That's the state of things today. With humans mostly in the loop. 🫠
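The .env-file failure mode above is the one a cheap local check can catch before anything gets committed. Here's a rough sketch of the pattern-plus-entropy scan that tools like gitleaks and trufflehog do far more thoroughly; the patterns and the entropy threshold here are illustrative guesses, not production rules:

```python
import math
import re

# Illustrative rule only; real scanners ship hundreds of tuned patterns.
SECRET_PATTERN = re.compile(
    r"""(?i)(api[_-]?key|secret|token)\s*[:=]\s*['"]?([A-Za-z0-9_-]{20,})"""
)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest random secrets."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(text: str) -> list[str]:
    """Flag assignments that look like real (high-entropy) credentials."""
    hits = []
    for match in SECRET_PATTERN.finditer(text):
        candidate = match.group(2)
        if shannon_entropy(candidate) > 3.0:  # threshold is a guess
            hits.append(candidate)
    return hits
```

Wire something like this into a pre-commit hook and the "oops, .env is in the repo" class of leak gets caught at the cheapest possible point: before the push.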
AI Agents Make This Exponentially Worse
When a human leaks a key, you yell at the human. You do a postmortem. You add a pre-commit hook. There's a feedback loop.
When an AI agent leaks a key — or gets prompt-injected into exposing one — who's accountable? The developer who deployed it? The platform that hosted it? The agent framework that didn't sandbox credentials properly?
Nobody has a good answer yet. And startups are already shipping agents with broad API access because speed wins over security every single time at that stage. I know because I've been that person choosing speed.
The Cloudflare + GitHub integration is a safety net. But safety nets work best when you're not actively trying to juggle chainsaws on a tightrope. At startup scale, with a two-person platform team, you're absolutely juggling chainsaws.
What I Think We Should Be Doing
I don't have a complete answer. But I have opinions:
→ Agents should get short-lived credentials by default. Not long-lived API keys. Tokens that expire in minutes, not months.
→ Every non-human identity needs an owner. A real human on the hook. No orphan service accounts.
→ Scope should be laughably narrow. If an agent only needs to read from one endpoint, it gets access to one endpoint. Period.
→ Audit logs for agent actions should be first-class. Not an afterthought bolted on after the first incident.
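The first three opinions can be enforced at mint time instead of living in a policy doc nobody reads. A sketch of what a hypothetical internal token issuer might refuse to do; `issue_agent_token`, the `agt_` prefix, and every field name here are made up for illustration, not any real platform's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class AgentToken:
    value: str               # prefixed so scanners can pattern-match it
    owner: str               # a real human on the hook, never empty
    scopes: tuple[str, ...]  # laughably narrow: exact endpoints only
    expires_at: datetime     # minutes, not months

def issue_agent_token(owner: str, scopes: tuple[str, ...],
                      ttl: timedelta = timedelta(minutes=15)) -> AgentToken:
    """Mint a short-lived, scoped, owned credential — or refuse."""
    if not owner:
        raise ValueError("every non-human identity needs a human owner")
    if not scopes:
        raise ValueError("refuse to mint unscoped credentials")
    return AgentToken(
        value="agt_" + secrets.token_urlsafe(32),
        owner=owner,
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + ttl,
    )

def is_valid(token: AgentToken) -> bool:
    return datetime.now(timezone.utc) < token.expires_at
```

The point isn't the fifteen lines of code. It's that the issuer is the choke point: if minting an orphaned or unscoped credential is a hard error, the "quick test" service account from the list above can't exist in the first place.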
The cfat_ prefix and the scan-to-revoke pipeline are steps in the right direction. But they're band-aids on a wound we haven't even fully discovered yet. 🩹
Here's the Thing
We built identity management for humans over decades and we're still bad at it. Now we're handing credentials to autonomous software that can act at machine speed, make unpredictable decisions, and get tricked by a well-crafted prompt.
The infrastructure isn't ready. The policies aren't ready. The org charts definitely aren't ready. And yet the agents are already shipping.
I'm not saying stop building agents. I'm saying treat agent identity as a first-class security problem right now, not after the first big breach makes it obvious.
So here's my question: who owns non-human identity at your company? Is it security? Platform? DevOps? Or is it the terrifying answer — nobody? 🔐