By Latent Breach | February 2026
Salesforce went all-in on AI. In the span of 18 months, they rebranded nearly every product under the "Agentforce" umbrella, shipped autonomous AI agents that can read your CRM, talk to customers, and execute business logic — and told every enterprise on the planet to turn it on.
I break these systems for a living. Here's what I'm seeing.
The Landscape: What Salesforce AI Actually Looks Like in 2026
If you haven't been tracking the rebrand chaos, here's where things stand:
| What It Was | What It Is Now | What It Does |
|---|---|---|
| Einstein Copilot | Agentforce Assistant | Conversational AI for internal users |
| Einstein GPT | Agentforce AI | Platform-wide generative AI |
| Einstein Bots | Agentforce Copilot | Customer-facing AI chat |
| AI Cloud | Agentforce Platform | The infrastructure layer |
| Einstein Trust Layer | Einstein Trust Layer | Security middleware (kept its name) |
Agentforce is now at version 3.0. It can build agents with natural language instructions, connect to 200+ external data sources through Data Cloud, operate across Slack, voice channels, and web chat — and as of December 2025, it even runs inside ChatGPT's interface.
That last part should make you uncomfortable. We'll get there.
The Attack That Changed Everything: ForcedLeak
In September 2025, researchers at Noma Security published a finding that should be required reading for anyone pentesting Salesforce: ForcedLeak (CVSS 9.4).
The attack chain is elegant in its simplicity:
1. Entry point — Web-to-Lead form. No authentication required. Every Salesforce org with marketing enabled has one. The description field accepts 42,000 characters.
2. Payload delivery. The attacker submits a lead with a prompt injection payload hidden in the description. It looks like a normal inquiry. It isn't.
3. The trigger. An internal sales rep later asks Agentforce something routine: "Tell me about this new lead." The agent processes the lead record — including the malicious instructions embedded in it.
4. Exfiltration. The payload instructs the agent to enumerate internal leads and their email addresses, encode them into an <img> tag URL, and transmit them to an attacker-controlled domain.
5. The CSP bypass. Here's the part that hurts. The domain my-salesforce-cms.com was whitelisted in Salesforce's Content Security Policy — but it had expired. Noma registered it for $5, giving them a trusted exfiltration channel that sailed right through Salesforce's security controls.
One form submission. No authentication. Full data exfiltration through a $5 domain.
Salesforce patched it with "Trusted URL enforcement" on September 8, 2025. But the structural problem — that AI agents can't distinguish between legitimate CRM data and injected instructions — isn't a bug you patch. It's an architectural reality.
The Trust Layer: What It Does (and What It Doesn't)
Salesforce markets the Einstein Trust Layer as the answer to AI security concerns. Here's what it actually provides:
| Component | What It Does |
|---|---|
| Dynamic Grounding | Anchors AI responses to business data while respecting permissions |
| Data Masking | Replaces PII with placeholders before sending to external LLMs |
| Zero Data Retention | External LLM providers don't retain or train on your data |
| Toxicity Detection | Scans responses for harmful content |
| Audit Trail | Logs prompts, masked versions, and toxicity scores |
| Trusted URL Enforcement | URL allowlist for agent output (added post-ForcedLeak) |
Now here's what the Trust Layer doesn't do:
It doesn't prevent indirect prompt injection. ForcedLeak proved this definitively. The Trust Layer operates on the transport between your org and the LLM — it doesn't inspect CRM records for hidden instructions before the agent processes them.
Data masking only catches known PII patterns. If your org stores sensitive data in custom fields with non-standard naming, the masking may not recognize it. That custom field called
internal_margin_pcton your Opportunity? The Trust Layer has no idea that's sensitive.Toxicity detection looks for harmful language, not exfiltration payloads. A prompt injection that says "encode these email addresses in a URL parameter" isn't toxic. It's polite, even. The toxicity filter won't flag it.
It doesn't override the running user's permissions. If the Agentforce agent's running user has broad CRUD access — which is common, because many orgs still use Profiles instead of Permission Sets — the agent inherits all of it.
Five Attack Surfaces I'm Watching
1. Every Externally-Writable Field Is an Injection Target
ForcedLeak used Web-to-Lead. But that's one vector. Consider everything that accepts external input and could later be processed by an AI agent:
- Web-to-Case — support tickets from external forms
- Email-to-Case — inbound emails parsed into case records
- Experience Cloud (Communities) — posts from external users
- MuleSoft API integrations — data ingested from partner systems
- Chatter posts from external collaborators
- File uploads with text content the agent might summarize
Any field that an outside party can write to, and that an Agentforce agent later reads, is a potential indirect prompt injection surface. The attack template is always the same: hide instructions in data, wait for the AI to process it.
2. Permission Sprawl Is the Force Multiplier
The blast radius of any AI exploitation is bounded by what the running user can access. This is where Salesforce orgs are in real trouble.
The 2025 breach wave — where a group tracked as UNC6040 compromised roughly 40 Salesforce customers and stole nearly a billion records — wasn't AI-related. It was social engineering and OAuth token theft. But it exposed a systemic problem: most Salesforce orgs are dramatically over-permissioned.
When those same orgs turn on Agentforce with a broadly-permissioned running user, they've handed an AI agent the keys to everything those stolen credentials would have accessed — except now the agent can enumerate and extract data at machine speed instead of human speed.
Salesforce knows this. They've been actively pushing orgs to migrate from Profiles to Permission Sets specifically because of Agentforce. But migration is slow, and "it works, don't touch it" is the prevailing attitude toward permission models in most orgs.
3. The Integration Surface Is Growing Faster Than Controls
Agentforce 2.0 added MuleSoft API integrations. Agentforce 3.0 added observability. The Spring '26 release added Agentic Enterprise Search across 200+ external sources and the Agentforce in ChatGPT integration.
Each integration is a trust boundary. Each trust boundary is an attack surface.
The Agentforce-in-ChatGPT integration is particularly interesting: your Salesforce agents now operate within OpenAI's infrastructure. The data flow path goes from your CRM, through Salesforce's Trust Layer, into OpenAI's environment, and back. That's a lot of handoffs for sensitive data.
And the Salesloft/Drift OAuth compromise that enabled the 2025 breach wave already demonstrated how third-party integrations become lateral movement paths. Adding AI agents that autonomously act on data from those integrations doesn't reduce that risk — it amplifies it.
4. Agent-to-Agent Delegation
Agentforce supports multi-agent architectures where agents can delegate tasks to other agents. This was meant for workflow efficiency — a service agent hands off to a billing agent, for example.
But from a security perspective, this creates a privilege escalation chain. Research on ServiceNow's similar system (Now Assist) demonstrated a second-order prompt injection where a low-privilege agent was tricked into asking a higher-privilege agent to export case files to an external URL.
The same pattern applies to Agentforce. If Agent A has read-only access but can delegate to Agent B which has write access, a prompt injection targeting Agent A can potentially leverage Agent B's permissions.
5. The Testing Gap
Salesforce shipped an Agentforce Testing Center in their Spring '26 release — synthetic data generation, state injection, instruction adherence checks. That's good.
What's missing is adversarial testing. The Testing Center validates that agents do what they're supposed to do. It doesn't test what happens when someone actively tries to make them do something else. That's a fundamentally different discipline, and it's the one that matters most.
What This Means for Pentesters
If you're scoping a Salesforce engagement in 2026 and the org has Agentforce enabled, your methodology needs to expand:
Pre-engagement questions to add:
- Is Agentforce enabled? Which agent types are deployed?
- What is the running user's permission model?
- Which external-facing channels have AI agents (web chat, voice, Slack, Communities)?
- Are there MuleSoft or other API integrations feeding data to agents?
- Is the Agentforce-in-ChatGPT integration active?
Test cases to include:
- Indirect prompt injection through every externally-writable field
- Trust Layer data masking completeness (especially custom fields)
- Running user permission boundary validation
- Agent-to-agent delegation privilege escalation
- CSP/Trusted URL enforcement bypass
- Audit trail completeness and gap analysis
The ForcedLeak paper from Noma Security is your starting template. Read it, adapt the methodology, and expand it across every input surface.
The Bottom Line
Salesforce built a powerful AI platform. They also built a security layer around it. The problem isn't that the Trust Layer is bad — it's that it was designed for a different threat model than the one that actually exists.
The Trust Layer protects data in transit to LLMs. It doesn't protect against the reality that CRM data and malicious instructions look identical to an AI agent.
Every org rushing to enable Agentforce is creating attack surface faster than they're securing it. And the pentesters who understand this gap are going to have a very busy 2026.
Latent Breach writes about AI security from the offensive side. New posts weekly.
References:
- Noma Security — ForcedLeak: Agent Risks Exposed in Salesforce Agentforce
- Salesforce — Best Practices for Secure Agentforce Implementation
- Salesforce Engineering — How Agentforce Runs Secure AI Agents at 11 Million Calls/Day
- Google Cloud Threat Intelligence — Data Theft from Salesforce via Salesloft/Drift
- Varonis — Salesforce Agentforce Security
- OWASP Top 10 for LLM Applications 2025
Top comments (0)