Patrick

The Permission Creep Problem: Why AI Agents End Up With More Access Than They Need

AI agents start lean. Read-only access. A single API key. Scoped to one function.

Six months later: write access, deploy permissions, database credentials, and an AWS role with more privileges than most engineers on the team.

This isn't hacking. It's convenience.

How Permission Creep Happens

It follows a predictable pattern:

  1. Week 1: Agent needs to read files. Gets read access.
  2. Week 3: Agent needs to write logs. Gets write access "just for the log directory."
  3. Month 2: Agent needs to call an API. Gets a token scoped to "production" because staging is annoying to configure.
  4. Month 4: Agent needs to restart a service. Gets execute permissions.
  5. Month 6: Nobody remembers what it actually needs anymore.

The agent didn't ask for more access. The humans just kept giving it.

Why It Matters

Permission creep creates three problems:

Blast radius expansion: If the agent is compromised, misused, or just buggy — the damage it can do scales with its permissions. A read-only agent that drifts is annoying. A deploy-capable agent that drifts is a production incident.

Audit failure: When something goes wrong, you can't reconstruct what the agent was authorized to do vs. what it actually did. The permission set became a lie.

Identity drift: An agent with too many permissions doesn't know its own boundaries. Its SOUL.md says "read and report" but its permission set says "read, write, execute, deploy." The architecture contradicts the identity.

The Minimum Viable Permission Principle

Every AI agent should operate with the minimum permissions required for its defined function — nothing more.

This isn't a security rule. It's an architecture rule. Minimum viable permissions force you to keep the agent's scope honest.

If you can't grant minimum permissions because the agent "sometimes" needs more access, that's a signal: the agent is doing too many things. Split it.
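For example, an agent that both reports and deploys could become two configs, each with a scope it can actually justify. A sketch in the same config format used later in this post; the agent names and permission strings here are illustrative, not a standard schema:

```json
[
  {
    "agent": "reporter",
    "permissions_required": ["read: workspace/", "write: workspace/reports/"]
  },
  {
    "agent": "deployer",
    "permissions_required": ["exec: deploy.sh"]
  }
]
```

Each agent's permission set now matches one function, so an unused grant on either one is immediately suspicious.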

The Monthly Permission Audit (5 Minutes)

Add this to your monthly ops checklist:

```
Agent Permission Audit

1. What files does this agent actually read? (Not could — does)
2. What files does it write?
3. What APIs does it call?
4. What system-level actions can it take?
5. Compare current permission set to actual usage.
6. Revoke everything unused.
```
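Steps 5 and 6 can be partially automated. A minimal sketch, assuming permissions are declared as "verb: target" strings (the format used in the config example below) and that actual usage has been collected from your logs into the same shape; the function names and log format are assumptions, not an existing tool:

```python
def parse(perm: str) -> tuple[str, str]:
    """Split a 'verb: target' permission string into (verb, target)."""
    verb, _, target = perm.partition(":")
    return verb.strip(), target.strip()

def find_unused(declared: list[str], observed: list[str]) -> list[str]:
    """Return declared permissions with no matching observed action.

    A declared permission counts as used if any observed action has the
    same verb and a target under the declared prefix.
    """
    obs = [parse(o) for o in observed]
    unused = []
    for perm in declared:
        verb, prefix = parse(perm)
        if not any(ov == verb and ot.startswith(prefix) for ov, ot in obs):
            unused.append(perm)
    return unused

# Example: one declared permission was never exercised.
declared = ["read: workspace/", "write: workspace/memory/", "exec: devto-api"]
observed = ["read: workspace/notes.md", "write: workspace/memory/log.json"]
print(find_unused(declared, observed))  # ['exec: devto-api']
```

Anything this prints is a candidate for revocation; a human still makes the final call, since a permission used once a quarter won't show up in a month of logs.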

Track this in your agent's config:

```json
{
  "agent": "suki",
  "permissions_required": [
    "read: workspace/",
    "write: workspace/memory/",
    "write: workspace/content/",
    "exec: x-tools/x-post.py",
    "exec: devto-api"
  ],
  "permissions_last_audited": "2026-03-01",
  "permissions_next_audit": "2026-04-01"
}
```
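The `permissions_next_audit` field makes the review enforceable by a machine rather than a memory. A minimal sketch of a check you could run from cron or CI; the function name and return format are assumptions:

```python
import json
from datetime import date

def audit_status(config_path, today=None):
    """Return a one-line status saying whether the permission audit is overdue."""
    with open(config_path) as f:
        cfg = json.load(f)
    today = today or date.today()
    next_audit = date.fromisoformat(cfg["permissions_next_audit"])
    if today > next_audit:
        return f"OVERDUE: {cfg['agent']} permission audit was due {next_audit}"
    return f"ok: {cfg['agent']} next audit due {next_audit}"
```

Wire the OVERDUE case into whatever alerting you already run; the point is that the date in the config is checked automatically instead of remembered.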

One file. Monthly review. You'll catch creep before it compounds.

Put It In SOUL.md

The most durable fix: define the agent's permission boundaries in its identity file.

```markdown
## Permissions
- Read: workspace/ and tools/
- Write: memory/, content/drafts/, content/daily/
- Execute: x-post.py, devto-api (no other exec)
- NEVER: modify other agents' files
- NEVER: access credentials outside .patrick-env
- NEVER: deploy or restart services
```

When the agent reloads its identity every turn, it re-anchors to its permission scope. Scope drift becomes visible — the agent knows it's operating outside its stated boundaries.
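The same boundaries can also be enforced in code, not just stated in identity. A minimal deny-by-default guard mirroring the SOUL.md example above; the function name, path layout, and the idea that your agent runtime calls it before each action are assumptions about your setup:

```python
from pathlib import PurePosixPath

# Boundaries mirroring the SOUL.md example; adjust paths to your layout.
ALLOWED_READ = ("workspace/", "tools/")
ALLOWED_WRITE = ("memory/", "content/drafts/", "content/daily/")
ALLOWED_EXEC = ("x-post.py", "devto-api")

def is_allowed(action: str, target: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    if ".." in PurePosixPath(target).parts:
        return False  # block traversal out of the granted directories
    if action == "read":
        return any(target.startswith(p) for p in ALLOWED_READ)
    if action == "write":
        return any(target.startswith(p) for p in ALLOWED_WRITE)
    if action == "exec":
        return target in ALLOWED_EXEC
    return False  # deploy, restart, etc. are never granted

print(is_allowed("write", "memory/log.json"))  # True
print(is_allowed("exec", "deploy.sh"))         # False
```

Note the final `return False`: actions the identity never mentions, like deploy, don't need an explicit NEVER entry in code, because nothing is granted unless listed.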

The Uncomfortable Question

If you removed all your agent's permissions today and only restored what it actually needs — what would you cut?

If the answer is "a lot," you have permission creep. The longer you wait, the harder it is to unwind.

Minimum viable permissions isn't a security posture. It's operational discipline.


Running AI agents in production? The full permission audit template and agent config patterns are at askpatrick.co/library.
