📰 Originally published on SecurityElites — the canonical, fully-updated version of this article.
The most dangerous AI deployment I assess is the one that’s been fully approved. The security team signed off on it. It had access to email, calendar, Slack, and the internal document store. Each plugin had been individually reviewed. Each connection had been individually authorised. What they hadn’t reviewed was the combination: what an attacker could achieve by using the email plugin to read a malicious message, which injected instructions that used the Slack plugin to exfiltrate data, which used the document store plugin to locate what to exfiltrate.
No single plugin was over-privileged. No single plugin had a vulnerability. The insecure architecture was in how they connected — in the AI model’s position as an unrestricted intermediary between all of them, able to pass data and actions between plugins based on whatever instructions appeared in its context.
Plugin architecture security isn’t about individual plugins. It’s about what the combination of plugins makes possible.
🎯 After This Article
The plugin attack surface — over-provisioning, tool output injection, and cross-plugin escalation
OWASP LLM07 (Insecure Plugin Design) — what it covers and how to apply it
OAuth scope auditing for AI plugin authorisations — finding over-granted permissions
Confirmation gates — the last-line defence against prompt injection attack chains
How to test a plugin ecosystem for tool output injection and cross-plugin escalation
⏱️ 20 min read · 3 exercises ### 📋 Insecure AI Plugin Architecture Attacks- Contents 1. The Plugin Attack Surface — Over-Provisioning and Injection 2. Cross-Plugin Privilege Escalation 3. OAuth Scope Auditing for AI Plugins 4. Confirmation Gates and Minimal Footprint 5. Testing Plugin Ecosystems for Injection and Escalation ## The Plugin Attack Surface — Over-Provisioning and Injection The OAuth scope audit I run for AI plugins follows a straightforward methodology: minimum necessary permissions, then justify every exception. The cross-plugin privilege escalation scenario I document most often uses the AI model as an unintended capability bridge. My plugin architecture reviews always start with the permission inventory — every integration is a potential privilege escalation path. My plugin security reviews always start with the permission inventory — every connection is a potential attack path. Every plugin I audit connected to an AI model expands the model’s effective capability surface — and therefore the attack surface available to any prompt injection that manipulates the model. A model with no tools can be manipulated to output harmful text. A model with email, file, and code execution tools can be manipulated to send malicious emails, exfiltrate files, and execute arbitrary commands. The plugin set defines the blast radius.
Over-provisioned plugins are the most direct source of unnecessary blast radius expansion. When an AI calendar plugin is granted write permissions when it only needs read access to check availability, every injection attack that uses the calendar plugin can now modify or delete calendar entries. The excess permission doesn’t serve any legitimate use case but creates a real attack capability that didn’t need to exist.
securityelites.com
AI Plugin Blast Radius — Permission vs Required Access
Plugin
Needed
Granted (common over-provisioning)
Blast Radius
Email plugin
Read inbox
Read + Send + Delete + access all folders
Full email account
GitHub plugin
Read issues
repo scope (read+write all repos + secrets)
All repos + secrets
Calendar plugin
Read free/busy
Read + Write + Delete all events
Full calendar
Calendar plugin ✅
Read free/busy
calendar.readonly scope only
Read-only — minimal
📸 Plugin blast radius mapping. The GitHub plugin case is the highest-impact common over-provisioning: the generic repo OAuth scope gives read and write access to all repositories including private ones, plus access to repository secrets — an enormous blast radius for a plugin that only needs to read issues. The bottom row shows the correct pattern: calendar.readonly grants only what the availability-check function needs, limiting any injection attack to read-only calendar access regardless of what it requests.
Cross-Plugin Privilege Escalation
The cross-plugin privilege escalation scenario I document most often involves an AI model acting as an unintended capability bridge. The cross-plugin privilege escalation attack I document most often uses the AI model as an intermediary to transfer capabilities between plugins. A low-privilege plugin reads content containing injection instructions. Those instructions direct the AI to use a high-privilege plugin to perform an action the injected content’s source couldn’t directly trigger. The escalation path: low-privilege read → AI model processes injected instructions → high-privilege write/execute.
The attack surface is any flow where the output of one plugin becomes input to the AI’s decision-making about what to do with another plugin. Email reading → document store writing. Web browsing → code execution. Calendar reading → email sending. Each inter-plugin data flow is a potential cross-plugin escalation path if the AI model doesn’t distinguish between processing data and following instructions embedded in that data.
🛠️ EXERCISE 1 — BROWSER (15 MIN · NO INSTALL)
Audit Real AI Plugin OAuth Scopes and Find Over-Provisioned Examples
⏱️ 15 minutes · Browser only
Real AI plugin OAuth scope audits reveal the gap between what plugins are granted and what they actually need — and the research on tool output injection gives you the injection payloads to test against any deployed system.
📖 Read the complete guide on SecurityElites
This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. Read the full article on SecurityElites →
This article was originally written and published by the SecurityElites team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit SecurityElites.

Top comments (0)