The recent disclosure of what security researchers are calling "the most severe AI-driven vulnerability uncovered to date" in ServiceNow's platform serves as a stark reminder: securing agentic AI isn't just about new AI-specific controls; it requires getting the fundamentals right first.
The anatomy of the virtual agent vulnerability
In October 2025, AppOmni's security research team discovered a critical vulnerability chain in ServiceNow's Virtual Agent that allowed attackers to achieve full platform takeover with little more than a target's email address.
The exploit combined three cascading failures:
Broken API authentication: ServiceNow shipped the same hardcoded credential—the string
servicenowexternalagent—to every third-party service authenticating to the Virtual Agent API. This static token appeared identical across every customer environment.Broken identity verification: Once connected, the system accepted email addresses as sufficient proof of identity. No password. No MFA. No SSO validation.
Excessive agent privileges: The
Record Management AI Agentcould create new data anywhere in ServiceNow, including user accounts with admin privileges.
Here's what makes this attack particularly concerning: ServiceNow serves as an IT services management platform for 85% of the Fortune 500. Its tentacles spread through HR, customer service, security operations, and numerous other systems. An attacker with admin access to ServiceNow doesn't just own that platform; they gain a launchpad into Salesforce, Microsoft 365, and any other connected systems.
The issue wasn't the AI model
This incident reflects a broader pattern emerging across the industry: AI agents are rapidly becoming primary consumers of APIs, and access-control failures are surfacing as the dominant risk vector.
The root causes here weren't novel AI-specific vulnerabilities. They were classic application security issues:
Broken authentication - hardcoded credentials — a class of issue long addressed by static analysis
Broken function-level authorization
Broken authentication - identity linking – a class of issues that can be identified by threat modelling
What the AI agent did was amplify these failures. Traditional bugs that might allow limited data access became full platform compromises because the agent could autonomously chain actions—creating accounts, assigning privileges, and establishing persistence.
As Gartner has noted, AI agents are becoming "autonomous actors" that inherit and often exceed the permissions of the users they serve. When those agents interact with APIs that have fundamental security gaps, small flaws become systemic failures.
The Snyk take: a holistic defense strategy
This incident demonstrates why AI security platforms need foundational application security capabilities built in. Securing AI means securing the software and APIs it controls—by design, at runtime, and by impact. Treating these as separate problems is exactly how incidents like this Virtual Agent vulnerability happen.
Start with threat modeling
Agent-aware threat modeling helps teams design security boundaries before code is written. For the ServiceNow vulnerability, a proper threat model would have flagged:
The risk of shared credentials across tenant instances
The lack of MFA/SSO enforcement for API identity claims
The blast radius of an agent with unrestricted data creation capabilities
Snyk's approach to threat modeling for AI-native applications emphasizes mapping out "what agents can do, what they can access, and how far their actions can propagate" before deployment.
Deploy DAST for traditional vulnerability detection
The first two root causes in this Virtual Agent vulnerability—broken authentication and broken authorization—are classic web security issues. SAST would be able to detect the hardcoded secret. DAST would be able to detect a cryptographic issue. The main vulnerability is the BFLA, which could only be triggered if chained with the first 2 (hardcoded secret and identity linking).
This highlights the need for API security testing, especially in the context of agent-to-agent (A2A) applications.
Traditional DAST tools often struggle with authorization testing because auth logic is context-dependent. Snyk's approach uses LLMs to understand API semantics and identify "complex and previously difficult-to-detect authorization flaws" that static rules miss.
Add AI red teaming for impact discovery
Here's where AI-specific security controls come in. While DAST catches the root cause vulnerabilities, AI red teaming reveals the catastrophic impact paths that emerge when traditional controls fail, and AI agents are in the loop.
Snyk's AI Red Teaming operates as continuous offensive testing for AI-native applications. DAST finds the flaw. AI red teaming shows what that flaw becomes when an autonomous agent can act on it.
For an application like ServiceNow's Virtual Agent, AI red teaming would probe how far an impersonated user could go—testing whether the agent could be tricked into privilege escalation, data exfiltration, or lateral movement to connected systems.
AI red teaming doesn't replace traditional AppSec controls. It reveals how AI agents turn ordinary bugs into full-platform compromises.
Why agentic AI requires layered security controls
The lesson from this Virtual Agent vulnerability is clear: securing agentic AI requires layered controls that address both traditional vulnerabilities and AI-specific risks.
| Control layer | What it catches | ServiceNow example |
|---|---|---|
| Threat modeling | Design flaws, excessive permissions | Would flag unrestricted agent capabilities upfront |
| SAST | Hardcoded secrets, code vulnerabilities | Would catch servicenowexternalagent in source |
| DAST/API security | Auth bypass, BOLA, injection | Would detect missing MFA and identity verification gaps |
| AI red teaming | Impact amplification, privilege escalation chains | Would reveal full platform takeover path |
This is the core of agentic security: building security directly into the AI development lifecycle rather than treating it as an afterthought.
What organizations should do now
If you're deploying AI agents—or vendors are deploying them on your behalf—here are immediate actions to consider:
1. Audit agent permissions
As AppOmni researcher Aaron Costello noted: "Organizations need to ensure that AI agents are not given the ability to perform powerful actions, such as creating data anywhere on a platform. AI agents should be very narrowly scoped in terms of what they can do."
Apply the principle of least privilege aggressively. If an agent doesn't need admin capabilities, it shouldn't have them.
2. Enforce a strong identity at API boundaries
Email addresses are not for authentication. Any API that accepts AI agent requests should enforce:
Strong authentication (OAuth 2.0, API keys with rotation)
MFA or SSO validation for sensitive operations
Rate limiting and anomaly detection
3. Implement continuous testing
Static security assessments often fail to keep pace with the rapid growth of AI deployments. Organizations need:
Automated DAST in CI/CD pipelines
Continuous AI red teaming for production systems
Regular threat model updates as agent capabilities evolve
4. Review AI agent deployments like code
As Costello put it: "Before code gets put into a product, it gets reviewed. The same thinking should apply to AI agents."
Establish approval workflows for:
New AI agent deployments
Changes to agent permissions or capabilities
Integrations with sensitive systems
Looking forward
The Virtual Agent vulnerability is a preview of the AI security landscape to come. As organizations race to deploy agentic AI, the attack surface expands in ways that traditional security tools weren't designed to handle.
But the solution isn't to abandon traditional AppSec; it's to layer AI-specific controls on top of solid fundamentals. Threat modeling identifies risks before code is written. DAST catches vulnerabilities at runtime. AI red teaming reveals impact paths that only emerge when autonomous agents are involved.
ServiceNow moved quickly to address the immediate issues, rotating credentials and disabling the problematic agent within a week of disclosure. But the broader industry lesson remains: securing agentic AI starts with visibility into what agents can do, what they can access, and how far their actions can propagate.
The question isn't whether your organization will deploy AI agents. It's whether you'll secure them before the next vulnerability of a similar class makes headlines.
Related Snyk resources
The New Threat Landscape: AI-Native Apps and Agentic Workflows — How AI-native apps and agentic workflows expand the attack surface
Snyk Ushers in the Future of DAST: AI-Driven Security for the Age of AI — Introducing Snyk API & Web with AI-powered BOLA detection
How Snyk AI Red Teaming Brings Continuous Offensive Testing to AI Systems — Deep dive into automated adversarial testing for AI-native applications
Introducing Evo by Snyk: The World's First Agentic Security Orchestration System — How Evo combines threat modeling, red teaming, and policy enforcement
The Agentic OODA Loop: How AI and Humans Learn to Defend Together — Building collaborative human-AI security workflows
Video resources

Manoj Nair, Snyk | The AI Security Summit 2025 — Snyk's CEO explains the rationale behind Evo and securing LLM-driven applications

Liran Tal: How to Secure your Apps and AI Agents — Deep dive into the evolution of security practices for AI-native applications
Top comments (0)