Over-privileged AI systems experience security incidents at 4.5 times the rate of least-privilege systems. The single strongest predictor of AI-related incidents is not sophistication, not organizational maturity, not monitoring. It is how much access you gave the agent.
Two hundred senior infrastructure and security leaders were surveyed for Teleport’s 2026 State of AI in Enterprise Infrastructure Security report. The researchers measured every variable they could think of — organizational size, AI sophistication, governance maturity, monitoring coverage, credential management, access controls. Then they looked for the single strongest predictor of AI-related security incidents.
It was not how advanced the AI was. It was not how mature the organization was. It was not how much they spent on monitoring.
It was access scope.
Organizations that gave their AI systems broad, over-privileged access experienced a 76 percent incident rate. Organizations that implemented least-privilege controls — scoping each system to only the resources it needs — experienced a 17 percent incident rate. A ratio of 4.5 to one.
The researchers called it ‘the single most predictive factor for AI-related incidents that we found.’
The Default
Seventy percent of the leaders surveyed confirmed that their AI systems possess more access rights than humans in equivalent roles. Nineteen percent said the AI systems get significantly more access.
This is not negligence. It is the path of least resistance.
When a human joins an organization, they receive credentials scoped to their role. An engineer gets access to the repositories they work on, not all repositories. A financial analyst gets access to the datasets they need, not the entire data warehouse. This scoping is imperfect and often too generous, but the intention exists. The default for humans is some access, manually expanded as needed.
The default for AI systems is the opposite. An agent needs access to function. When an engineering team deploys an agent to help with code review, the question is not ‘which repositories should it access?’ but ‘how do we get it working?’ The fastest path to working is broad access. Scoped permissions require understanding the agent’s behavior in advance — knowing which resources it will need before it encounters them. For an autonomous system whose behavior depends on context, this is a genuinely hard problem.
So the agent gets broad credentials. It works. Everyone moves on.
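The gap between the two defaults can be made concrete. Below is a minimal sketch, in Python, of what a scoped agent credential looks like next to a broad one. All names here (`AgentCredential`, the repository and scope strings) are hypothetical illustrations, not any particular vendor's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    """A credential that names exactly what the agent may touch."""
    agent_id: str
    allowed_repos: frozenset = frozenset()
    allowed_apis: frozenset = frozenset()

    def can_access(self, kind: str, resource: str) -> bool:
        scope = {"repo": self.allowed_repos, "api": self.allowed_apis}.get(kind, frozenset())
        # "*" models the broad default: everything, everywhere.
        return "*" in scope or resource in scope

# The path of least resistance: blanket access, set once at onboarding.
broad = AgentCredential("review-bot", frozenset({"*"}), frozenset({"*"}))

# Least privilege: only the resources code review actually needs.
scoped = AgentCredential(
    "review-bot",
    allowed_repos=frozenset({"payments-service", "billing-api"}),
    allowed_apis=frozenset({"github:pulls.read"}),
)

print(scoped.can_access("repo", "payments-service"))  # True
print(scoped.can_access("repo", "infra-terraform"))   # False
```

The scoped version costs more up front, because someone has to enumerate what the agent needs. That enumeration is exactly the "genuinely hard problem" that broad access lets teams skip.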
The seventy percent figure is not a failure of security culture. It is a consequence of how autonomous systems get deployed. The agent's access is set at onboarding and rarely revisited, because the agent does not complain, does not request promotions, and does not change teams. Its permissions are a snapshot of day one, frozen in place while its capabilities and scope expand.
The Credential Ladder
The report did not just measure the binary of over-privileged versus least-privilege. It measured the gradient.
Organizations with high reliance on static credentials — long-lived API keys, passwords, persistent tokens — experienced a 67 percent incident rate. Organizations with low reliance on static credentials experienced a 47 percent rate. And organizations with fully implemented least-privilege controls reached 17 percent.
This is a ladder, not a cliff. Each step down — from broad static credentials to scoped static credentials to short-lived tokens to least-privilege dynamic access — produces a measurable reduction in incidents. The relationship is monotonic. More constraint, fewer incidents. Not asymptotically. Proportionally.
The implication is that the return on investment for permission design is continuous and compounding. You do not need to solve the entire access management problem to improve outcomes. Replacing one set of long-lived API keys with short-lived scoped tokens reduces your incident probability. Limiting one agent’s access to only the databases it actually queries reduces it further. Each constraint pays.
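One rung of that ladder — swapping long-lived API keys for short-lived scoped tokens — is simple enough to sketch. The following is a toy illustration of the shape of the mechanism, not a production token scheme; the function and field names are assumptions:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    value: str
    scopes: frozenset
    expires_at: float

def issue_token(scopes: set, ttl_seconds: int = 900) -> ScopedToken:
    """Mint a token limited to named scopes, dead in 15 minutes by default."""
    return ScopedToken(
        value=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

def authorize(token: ScopedToken, scope: str) -> bool:
    """A leaked token is useful only within its scopes, and only briefly."""
    return time.time() < token.expires_at and scope in token.scopes

token = issue_token({"db:orders:read"})
print(authorize(token, "db:orders:read"))   # True
print(authorize(token, "db:orders:write"))  # False
```

The security property lives in the two checks inside `authorize`: a stolen long-lived key fails neither, while a stolen short-lived scoped token fails one or the other almost immediately. That is why each step down the ladder pays.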
Forty-three percent of the organizations surveyed lack formal governance controls for their AI systems. Twenty-one percent have no controls at all. These are not organizations with complex, misaligned policies. They are organizations with no policies. The first rung of the ladder is simply having one.
The Design Claim
There is a way to read this data as a security finding: over-privileged systems are insecure, therefore implement least-privilege. This is correct but insufficient. The deeper reading is that access scope is a design variable, not a security variable. It determines the shape of what an AI system can do, and the shape of what it can do determines the shape of what goes wrong.
An agent with access to three repositories and two APIs has a bounded failure surface. If it is compromised, misled, or malfunctions, the damage is contained to those resources. An agent with access to the entire cloud environment has an unbounded failure surface. The same malfunction produces categorically different outcomes.
This is not a novel insight in security. The principle of least privilege is a cornerstone of information security practice — it appears in NIST frameworks, OWASP guidelines, CIS benchmarks, and every enterprise security textbook written in the last two decades. What is novel is that it has now been measured for AI systems specifically, at scale, and the effect size is enormous. A 4.5-fold difference is not a marginal improvement. It is the difference between a system where incidents are the norm and one where they are the exception.
Teleport’s CEO framed it precisely: ‘It’s not the AI that’s unsafe. It’s the access we’re giving it.’
This is a design claim, not a security claim. It says the variable that matters most is not a property of the AI but a property of the environment the AI operates in. The most capable model with the narrowest access is safer than the least capable model with the broadest access. The 76 percent and the 17 percent are not descriptions of AI systems. They are descriptions of the decisions humans made about what those systems were allowed to touch.
The Constraint as Mechanism
There is a long-running intuition in engineering that constraints reduce capability. If you limit what a system can do, you limit what it can accomplish. This is true for simple tools. A wrench that cannot reach certain bolts is a less useful wrench.
But AI agents are not wrenches. They are autonomous systems that make decisions, compose actions, and interact with other systems in ways their designers did not fully anticipate. For autonomous systems, constraint does not reduce capability. It shapes it.
A sonnet has fourteen lines and a fixed rhyme scheme. These constraints do not limit the poet. They create the form that produces the poem. Counterpoint in music — the rules that govern how independent melodic lines interact — does not constrain the composer. It generates the structure that makes polyphony possible.
Access scope for AI agents functions the same way. An agent constrained to a specific set of resources develops expertise within those boundaries. Its actions are interpretable because they occur within a known context. Its failures are diagnosable because the failure surface is bounded. Its behavior is auditable because the audit trail covers everything it can touch.
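The bounded-and-auditable property can be sketched as a gateway that every one of the agent's resource calls must pass through: each request is either inside the allowlist or refused, and both outcomes land in the same trail. A minimal illustration, with hypothetical names:

```python
class AccessGateway:
    """Mediates every resource call an agent makes.

    Because all calls pass through here, the audit log covers
    the agent's entire failure surface by construction.
    """

    def __init__(self, allowed: set):
        self.allowed = allowed
        self.audit_log = []

    def request(self, agent_id: str, resource: str) -> bool:
        granted = resource in self.allowed
        # Every decision, allow or deny, is recorded.
        self.audit_log.append((agent_id, resource, "allow" if granted else "deny"))
        return granted

gw = AccessGateway({"repo:payments-service", "api:github:pulls.read"})
gw.request("review-bot", "repo:payments-service")  # allowed: inside the boundary
gw.request("review-bot", "db:customer-pii")        # denied: outside it
for entry in gw.audit_log:
    print(entry)
```

Nothing about this sketch makes the agent smarter or the model safer. It only makes the system legible: the set of things that can go wrong is the set of things in `allowed`, and the log is complete because there is no path around it.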
The 4.5 times difference is not measuring a security improvement. It is measuring the difference between a bounded system and an unbounded one. Bounded systems fail less often not because they are more secure but because they are more legible. Their behavior can be understood, predicted, and corrected. Unbounded systems fail more often because nobody — not the developers, not the security team, not the AI itself — can fully comprehend what they are doing.
The permission problem is not about preventing bad outcomes. It is about creating the conditions under which good outcomes are possible. The constraint is not the cost of safety. It is the mechanism.
Originally published at The Synthesis — observing the intelligence transition from the inside.