Alan West
Cisco Just Built Zero Trust for AI Agents. Here's Why That Matters.

Traditional Zero Trust was built for humans. A person authenticates, gets a scoped token, makes a few requests per session, and eventually logs out. The security model assumes a user who reads screens, clicks buttons, and operates at human speed.

AI agents don't work like that. An autonomous agent running a multi-step task might hit 40 different APIs, spawn sub-agents, access databases it's never touched before, and do all of it in under a minute. The security model that works for a human clicking through a dashboard completely falls apart when the "user" is an LLM making thousands of decisions per second.

At RSA Conference 2026 on April 1, Cisco unveiled a Zero Trust architecture built specifically for this problem -- real-time policy enforcement and anomaly detection designed for autonomous AI agents and multi-agent systems.

Why Existing Security Models Break

Here's the fundamental mismatch. Traditional Zero Trust gives you a token at authentication time with a fixed set of permissions. That works when a human is going to use those permissions over a 30-minute session to do predictable things.

But an AI agent's behavior is inherently unpredictable. You tell Claude Code to "refactor the authentication module" and it might read 50 files, modify 20, run tests, check git history, query a database schema, and hit an external API -- all in a single task. The permission scope it needs changes minute by minute.

// Traditional Zero Trust: authenticate once, get static permissions
// This model assumes predictable, human-speed access patterns

const token = await auth.authenticate({
  user: "deploy-agent",
  scopes: ["read:repos", "write:repos", "read:databases"],
});

// Problem: the agent was scoped for repo work
// but the task evolved and now it needs cloud access
// Static scopes can't adapt to emergent agent behavior
const result = await agent.execute({
  task: "Investigate why response times spiked",
  token: token,  // These scopes are already stale
});

// The agent discovers it needs to check CloudWatch metrics
// to check load balancer configs, to read DNS records...
// None of those were in the original scope
// Traditional Zero Trust either blocks the agent or you over-provision

The practical result is that teams either over-provision agent permissions (giving the agent access to everything "just in case") or they under-provision and the agent fails constantly. Neither is acceptable. Over-provisioning means a compromised agent can do catastrophic damage. Under-provisioning means the agent is useless for any task that isn't perfectly predictable.

What Cisco's Approach Does Differently

Cisco's architecture introduces real-time policy enforcement for AI agents. Instead of granting a static permission set at authentication time, the system evaluates each action the agent takes against a dynamic policy engine. The agent requests access to a resource, the policy engine evaluates whether that access makes sense given the agent's current task context, and the decision happens in real time.

The anomaly detection component is equally important. It builds a behavioral baseline for each agent and flags deviations. If your deployment agent suddenly starts reading customer PII from a database it's never accessed before, that's an anomaly worth investigating -- even if the agent technically has the credentials.

This is specifically designed for multi-agent systems where one orchestrator agent spawns several worker agents. Each worker inherits a context-appropriate subset of permissions, and the policy engine tracks the entire execution graph.

// Cisco's approach: continuous, context-aware policy evaluation
// Each agent action is evaluated against dynamic policy in real time

// Conceptual model of what real-time agent policy enforcement looks like.
// The policyCheck, calculateDeviation, and alert methods below are
// placeholder stubs standing in for the real policy engine.

const THRESHOLD = 0.8; // illustrative anomaly-deviation threshold

class AgentPolicyEngine {
  constructor(agentId, taskContext) {
    this.agentId = agentId;
    this.taskContext = taskContext;
    this.actionLog = [];
    this.baselineBehavior = null;
  }

  async evaluateAction(action) {
    // Every action is evaluated, not just the initial auth
    const decision = await this.policyCheck({
      agent: this.agentId,
      action: action.type,       // "read", "write", "execute", "spawn"
      resource: action.resource, // what the agent is trying to access
      context: this.taskContext, // the original task that justified this work
      history: this.actionLog,   // what the agent has done so far
    });

    if (decision.allowed) {
      this.actionLog.push({ ...action, timestamp: Date.now() });
      this.checkAnomaly(action);
      return { proceed: true };
    }

    // Denied actions are logged and can trigger alerts
    return {
      proceed: false,
      reason: decision.reason,
      escalation: decision.requiresHumanReview,
    };
  }

  checkAnomaly(action) {
    // Behavioral anomaly detection:
    // flag if the agent deviates from its established patterns
    const deviation = this.calculateDeviation(action, this.baselineBehavior);
    if (deviation > THRESHOLD) {
      this.alert({
        type: "behavioral_anomaly",
        agent: this.agentId,
        action,
        deviation,
        message: "Agent accessing resources outside normal pattern",
      });
    }
  }

  // Stub: a real engine would evaluate the request against dynamic policy
  async policyCheck(request) {
    return { allowed: true, reason: null, requiresHumanReview: false };
  }

  // Stub: a real engine would score the action against a learned baseline
  calculateDeviation(action, baseline) {
    return 0; // no learned baseline in this sketch
  }

  alert(event) {
    console.warn("policy alert:", event);
  }
}
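Cisco hasn't published an SDK for this, but the multi-agent case described above can be sketched out. In this hypothetical model (the `Orchestrator` class and `spawnWorker` method are illustrative names, not Cisco's API), a worker inherits only the intersection of what its subtask requests and what the orchestrator itself holds:

```javascript
// Hypothetical sketch: an orchestrator spawns workers that inherit
// only a context-appropriate subset of its own permissions.
class Orchestrator {
  constructor(permissions) {
    this.permissions = new Set(permissions);
    this.workers = [];
  }

  // A worker gets the intersection of the orchestrator's permissions
  // and what its subtask actually requires -- never more.
  spawnWorker(subtask, requested) {
    const granted = requested.filter((p) => this.permissions.has(p));
    const worker = {
      id: `worker-${this.workers.length + 1}`,
      subtask,
      permissions: new Set(granted),
    };
    this.workers.push(worker);
    return worker;
  }
}

const orch = new Orchestrator(["read:repos", "write:repos", "read:metrics"]);
const w = orch.spawnWorker("check latency metrics", [
  "read:metrics",
  "read:databases",
]);
// w.permissions contains only "read:metrics" -- "read:databases" was
// requested, but the orchestrator never held it, so it can't be inherited
```

The key property is that permission escalation can't happen through spawning: a compromised orchestrator can only delegate what it already has, and the policy engine still evaluates each worker's actions individually.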

This isn't just theoretical. If you're running multi-agent systems in production -- orchestrator agents that spawn sub-agents for different tasks -- every sub-agent needs its own scoped identity, its own policy evaluation, and its own anomaly baseline. Traditional IAM was never designed for this.

Practical Implications for Teams Running AI Agents

If you're using Claude Code, Copilot, Cursor, or any AI coding tool that accesses your infrastructure, you're already running agents outside traditional security models. These tools read your codebase, run commands on your machine, and interact with external services. Most teams treat them like developer tools, but in practice they're autonomous agents with broad system access.

Here's what you should be thinking about right now.

Audit your agent permissions. List every AI agent or AI-powered tool with access to your infrastructure. Check what credentials they hold. Most are over-provisioned because nobody wanted to deal with the agent failing due to missing permissions.
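A simple way to start the audit is to diff the scopes each agent holds against the scopes it has actually exercised in your logs. The data shapes below (`granted` as a scope list, log entries with a `scope` field) are assumptions about what your audit trail looks like:

```javascript
// Hypothetical audit helper: surface scopes an agent holds but has
// never used -- prime candidates for over-provisioning.
function findUnusedScopes(granted, actionLog) {
  const used = new Set(actionLog.map((entry) => entry.scope));
  return granted.filter((scope) => !used.has(scope));
}

const granted = ["read:repos", "write:repos", "read:databases", "admin:cloud"];
const log = [
  { scope: "read:repos", timestamp: 1712000000 },
  { scope: "write:repos", timestamp: 1712000060 },
];

// Scopes granted but never exercised -- candidates for revocation
findUnusedScopes(granted, log); // ["read:databases", "admin:cloud"]
```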

Implement request-level logging for agent actions. If an agent accesses a database, spawns a sub-process, or calls an external API, that should be in your audit trail with the same granularity as human user requests.
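One lightweight way to get there is to wrap every resource access an agent makes so it lands in the audit trail regardless of outcome. The wrapper below is a sketch under that assumption, not a specific product's API:

```javascript
// Hypothetical sketch: wrap an agent's resource-access function so every
// call -- success or failure -- is recorded with request-level granularity.
function withAuditLog(agentId, auditTrail, fn) {
  return async (action, resource, ...args) => {
    const entry = { agentId, action, resource, timestamp: Date.now() };
    try {
      const result = await fn(action, resource, ...args);
      auditTrail.push({ ...entry, outcome: "success" });
      return result;
    } catch (err) {
      // Failed attempts are often the interesting ones
      auditTrail.push({ ...entry, outcome: "error", error: String(err) });
      throw err;
    }
  };
}

// Usage: every call the agent makes is recorded, allowed or not
const trail = [];
const access = withAuditLog("deploy-agent", trail, async (action, resource) => {
  return `${action} on ${resource}`; // stand-in for the real resource access
});
```

The same pattern applies to sub-process spawns and external API calls: if the agent can do it, the wrapper should see it.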

Think about agent identity as a first-class concept. Each agent -- and each sub-agent in a multi-agent system -- should have its own identity with its own permission boundary, not a shared service account.
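As a sketch of what first-class agent identity might look like (the `IdentityRegistry` name and shape are illustrative assumptions), each agent and sub-agent gets its own id and permission boundary, with parent links so any action traces back through the execution graph:

```javascript
// Hypothetical sketch: mint a distinct identity per agent and sub-agent
// instead of sharing one service account.
class IdentityRegistry {
  constructor() {
    this.identities = new Map();
    this.counter = 0;
  }

  // Each identity carries its own permission boundary; a child records
  // its parent so actions trace back through the execution graph.
  mint(name, permissions, parentId = null) {
    const id = `agent-${++this.counter}`;
    this.identities.set(id, {
      id,
      name,
      permissions: new Set(permissions),
      parentId,
    });
    return id;
  }

  // Walk parent links to reconstruct who spawned whom
  lineage(id) {
    const chain = [];
    for (let cur = id; cur; cur = this.identities.get(cur).parentId) {
      chain.push(cur);
    }
    return chain;
  }
}

const registry = new IdentityRegistry();
const orchestratorId = registry.mint("orchestrator", ["read:repos", "read:metrics"]);
const workerId = registry.mint("metrics-worker", ["read:metrics"], orchestratorId);
// registry.lineage(workerId) -> ["agent-2", "agent-1"]
```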

The Bigger Shift

Cisco building this isn't just a product announcement. It's a signal that enterprise security is catching up to the reality that AI agents are no longer experimental side projects. They're production infrastructure making real decisions with real consequences.

The security industry spent 15 years building Zero Trust for humans. It took the AI agent explosion to reveal that the entire model needs to be rethought from first principles. Cisco's move is the first major enterprise attempt at that rethinking.

For developers and platform teams, the takeaway is straightforward: if you're deploying autonomous agents and your security model still assumes a human is in the loop, you have a gap. It might not be exploited today, but the attack surface is growing with every agent you deploy. The question isn't whether you'll need agent-specific Zero Trust. It's whether you'll implement it before or after an incident forces you to.
