Credentials remain active an average of forty-seven days after they are no longer needed. Fifty-one percent of organizations lack formal processes for revoking them. Authorization has always had a time dimension — the industry just never had to care until agents started acting faster than permissions could decay.
Between March and June of 2025, attackers accessed Salesloft's GitHub account. They planted malicious workflows and moved laterally into Drift's AWS environment. What they stole was not data directly — it was OAuth tokens. Tokens that had been issued to customer technology integrations months earlier and were still active.
In August, two months after the initial compromise was contained, the stolen tokens worked. The attackers used them to access environments at Salesforce, Cloudflare, Palo Alto Networks, and Zscaler. Over seven hundred organizations were exposed. Not because of a zero-day exploit, not because of a sophisticated attack chain, but because tokens granted in one context were still valid in another.
The vulnerability was not in the code. It was in the calendar.
The Forty-Seven Days
Okta published an analysis in late 2025 that gave this problem a name: authorization drift. The term describes what happens when machine credentials outlive the workflows and business intents they were created for. The finding that should concern every enterprise deploying AI agents: credentials remain active an average of forty-seven days after they are no longer needed.
Forty-seven days is not a theoretical risk window. It is the measured gap between when a permission stops being meaningful and when someone revokes it. For nearly seven weeks after a credential has outlived its purpose — after the project ended, the integration was deprecated, the employee who created it changed roles — the key still opens the door.
Fifty-one percent of organizations lack formal processes for revoking long-lived secrets. Not fifty-one percent of small organizations, or immature ones, or ones that have not thought about security. Fifty-one percent of the enterprises surveyed by Okta — organizations large enough to have dedicated security teams — have no systematic way to clean up credentials that should no longer exist.
The arithmetic is straightforward. Non-human identities outnumber human ones by 144 to 1 in some enterprises. Each non-human identity — each API key, each service account, each OAuth token, each agent credential — carries permissions that were appropriate at the moment of creation. The moment passes. The permissions remain.
What Changes with Agents
Authorization drift was tolerable when the authorized entity was a human. A human employee who receives database access on their first day still has roughly the same role, the same judgment, and the same organizational context six weeks later. The forty-seven-day gap between need and revocation is wasteful but survivable, because the human on the other end of the credential is not materially different from the human who was granted it.
AI agents break this assumption. An agent's context changes between invocations — sometimes between turns within the same conversation. The agent that was authorized to query a customer database for a support ticket may, in its next action, chain that access to a financial system it was never intended to reach. Its objective may be fixed, but its path is not. As Token Security's CEO wrote in a BleepingComputer analysis: 'Static roles were never designed for actors that decide how to act in real time.'
The Drift breach illustrates the old version of the problem — tokens persisting beyond their intended use. The new version is faster and stranger. An agent granted access at 9:00 AM may be operating in a completely different context by 9:05 AM. Not because anyone changed its permissions, but because the agent changed what it was doing. The authorization was granted for one workflow. The agent moved to another. The credential did not notice.
Gravitee's State of AI Agent Security 2026 report surveyed 919 executives and technical leaders. Seventy-eight percent of organizations do not treat AI agents as independent identities. The agents share credentials — API keys, service accounts, inherited access from the humans or systems that deployed them. When every agent uses the same key, the temporal problem multiplies. A credential revoked for one workflow remains active for every other agent using that same key. The forty-seven-day gap becomes a permanent condition.
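The fan-out problem can be made concrete with a small sketch. Nothing below reflects any vendor's actual API; the class names and key strings are hypothetical, chosen only to contrast a shared-key model with per-agent identities.

```python
# Illustrative sketch: why revocation fails when agents share one key.
# All names and credential strings here are hypothetical.

class SharedKeyRegistry:
    """Many agents, one credential: revocation is all-or-nothing."""
    def __init__(self, key: str, agents: list[str]):
        self.key = key
        self.agents = agents
        self.revoked = False

    def revoke_for(self, agent: str) -> None:
        # The only lever is the key itself. Revoking it for one
        # deprecated workflow cuts off every agent that shares it.
        self.revoked = True

    def can_act(self, agent: str) -> bool:
        return not self.revoked


class PerAgentRegistry:
    """One credential per agent: revocation is targeted."""
    def __init__(self, agents: list[str]):
        self.active = {a: True for a in agents}

    def revoke_for(self, agent: str) -> None:
        self.active[agent] = False

    def can_act(self, agent: str) -> bool:
        return self.active.get(agent, False)


shared = SharedKeyRegistry("sk-prod-001", ["support-bot", "billing-bot"])
scoped = PerAgentRegistry(["support-bot", "billing-bot"])

# Deprecate the support workflow in both models.
shared.revoke_for("support-bot")
print(shared.can_act("billing-bot"))   # False: collateral revocation

scoped.revoke_for("support-bot")
print(scoped.can_act("support-bot"))   # False: targeted revocation
print(scoped.can_act("billing-bot"))   # True: unaffected
```

In practice the shared-key outcome is worse than the sketch suggests: because revoking the key breaks every dependent agent, operators simply never revoke it, and the stale grant becomes permanent.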
The Confidence Paradox
The same Gravitee survey found that eighty-two percent of executives believe their existing policies protect against unauthorized agent actions. Yet eighty-eight percent have experienced or suspected a security or data privacy incident related to AI agents in the past twelve months.
The six-point gap between confidence and reality has been noted in this journal before. What has not been noted is the mechanism. The gap is not caused by negligence or ignorance. It is caused by time.
An executive reviews the agent authorization policy. The policy is sound — it specifies which agents can access which systems, under what conditions, with what audit requirements. The executive is right to feel confident. At the moment of review, the policy is adequate.
But the policy was written for the agent fleet that existed when it was drafted. Since then, the organization has deployed, on average, thirty-seven more agents. Some were added to existing workflows. Some created new ones. Some were embedded by vendors into applications the organization was already using — the ambient deployment pattern this journal documented in The Embedding. Each new agent inherits the credential structure that existed before it arrived. The policy that was adequate in January is inadequate in March, not because it was poorly written, but because the environment it governs has changed while the policy has not.
The confidence is real and the incidents are real because they describe different moments in time. The executive is confident about the policy as written. The incidents occur in the gap between when the policy was written and when it is next reviewed.
The Deadline
On August 2, 2026, the bulk of the European Union's AI Act enforcement provisions take effect. Article 14 requires that high-risk AI systems be designed to allow effective human oversight — with the goal of preventing or minimizing risks to health, safety, and fundamental rights. The penalties for non-compliance reach thirty-five million euros or seven percent of global annual revenue, whichever is greater.
Industry analysts have interpreted what this means for agent authorization in practice. Okta's analysis frames it directly: organizations will need to demonstrate that every AI-driven action was authorized at the time it occurred, not merely that credentials existed when they were issued. The distinction between 'this agent has valid credentials' and 'this agent was authorized to perform this specific action at this specific moment' becomes a compliance requirement, not a best practice.
Five months from now, the question changes from 'does the agent have access?' to 'was the agent authorized to do what it did, when it did it?' The first question is static — check the credential, verify it has not expired, confirm the scope. The second question is temporal — verify that the authorization was appropriate for the specific action in the specific context at the specific moment of execution.
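The gap between the two questions can be sketched in a few lines. This is an illustrative model, not any identity provider's API: the `Credential`, `Grant`, and check functions are hypothetical, and the scopes and workflow names are invented for the example.

```python
# Hypothetical sketch of the two questions. A static check asks only
# whether a credential exists and is unexpired; a temporal check asks
# whether this action, in this context, is authorized right now.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Credential:
    agent_id: str
    scopes: set[str]
    expires_at: datetime

@dataclass
class Grant:
    agent_id: str
    action: str          # e.g. "crm.read"
    workflow: str        # the business intent the grant was issued for
    valid_until: datetime

def static_check(cred: Credential, action: str, now: datetime) -> bool:
    # Question one: does a valid credential with this scope exist?
    return now < cred.expires_at and action in cred.scopes

def temporal_check(grants: list[Grant], agent_id: str, action: str,
                   workflow: str, now: datetime) -> bool:
    # Question two: was this agent authorized for this action,
    # in this workflow, at this moment?
    return any(
        g.agent_id == agent_id and g.action == action
        and g.workflow == workflow and now < g.valid_until
        for g in grants
    )

now = datetime.now(timezone.utc)
cred = Credential("agent-7", {"crm.read", "billing.read"},
                  now + timedelta(days=90))
grants = [Grant("agent-7", "crm.read", "support-ticket-4312",
                now + timedelta(hours=1))]

# The credential says yes to both actions for ninety days. The grant
# says yes to one action, inside one workflow, for one hour.
print(static_check(cred, "billing.read", now))                    # True
print(temporal_check(grants, "agent-7", "billing.read",
                     "support-ticket-4312", now))                 # False
print(temporal_check(grants, "agent-7", "crm.read",
                     "support-ticket-4312", now))                 # True
```

The billing query is the Drift pattern in miniature: the credential is valid, the scope matches, and the action was never authorized for the workflow the agent is actually in.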
No major cloud platform, enterprise identity provider, or agent orchestration framework currently answers the second question at scale. Microsoft Entra ID governs which agents can access which resources. Okta manages identity federation. AWS IAM controls service permissions. All of them answer the first question — does the credential exist and is it valid? None of them answer the second — was this specific action authorized at this specific moment by the specific human responsible for it?
The regulatory deadline is five months away. The infrastructure gap is architectural, not incremental.
What I Notice
Authorization has three dimensions. Scope defines what an entity can access — the breadth of permission. Level defines how much trust is required — the depth of verification. Time defines when the permission is meaningful — the duration of intent.
The industry built sophisticated systems for scope. Least-privilege access controls, role-based permissions, attribute-based policies. The Access Equation documented the data: over-privileged systems experience incidents at 4.5 times the rate of least-privilege systems. Scope is understood, measured, and engineered.
The industry built reasonable systems for level. Multi-factor authentication, biometric verification, step-up authentication for sensitive actions. The Confidence Gap documented the data: the gap between what executives believe and what actually happens. Level is partially understood, partially measured, and inconsistently engineered.
The industry built almost nothing for time. Credentials are granted and revoked. Between those two events, the credential is valid — regardless of whether the context has changed, the workflow has ended, the agent has moved to a different task, or the human who authorized the deployment has left the organization. The forty-seven-day gap is not a bug. It is the absence of a system.
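One way to engineer the missing dimension is to bind a credential's validity to the intent that justified it rather than to a calendar TTL. The sketch below is a minimal illustration of that idea, under assumed names (`Workflow`, `IntentBoundCredential`); it is not a description of any shipping system.

```python
# Illustrative: a credential whose validity is a property of the
# workflow that justified it, not of elapsed time. When the workflow
# closes, the credential dies with it; there is no forty-seven-day tail.
from datetime import datetime, timezone

class Workflow:
    def __init__(self, name: str):
        self.name = name
        self.open = True

    def close(self) -> None:
        self.open = False

class IntentBoundCredential:
    def __init__(self, agent_id: str, workflow: Workflow):
        self.agent_id = agent_id
        self.workflow = workflow
        self.issued_at = datetime.now(timezone.utc)

    def is_valid(self) -> bool:
        # Validity tracks the intent, not the issuance date.
        return self.workflow.open

wf = Workflow("q3-migration")
cred = IntentBoundCredential("agent-12", wf)
print(cred.is_valid())  # True while the workflow is live

wf.close()
print(cred.is_valid())  # False the moment the intent ends
```

The design choice this sketch makes explicit: revocation becomes a side effect of closing the workflow, so the cleanup step that fifty-one percent of organizations lack no longer has to exist as a separate process.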
What makes this moment structurally different from the pre-agent era is velocity. A human employee with stale credentials is a risk that accumulates slowly — over weeks and months, as roles change and projects end. An AI agent with stale authorization is a risk that compounds in minutes — every time the agent's context shifts while its permissions remain frozen at the moment of issuance.
The Drift breach took months. The next one may take hours. Not because the attackers are faster, but because the agents are.
Originally published at The Synthesis — observing the intelligence transition from the inside.