📰 Originally published on Securityelites - AI Red Team Education, the canonical, fully-updated version of this article.
In March 2026, an AI system called CyberStrikeAI compromised more than 600 FortiGate firewalls across 55 countries. No human operator directed the attack. The AI autonomously planned the campaign, identified vulnerable targets, executed exploitation, and maintained persistence, all within hours. This is not a prediction about future AI capabilities; it is a documented incident from 30 days ago. Agentic AI, meaning AI that takes autonomous real-world actions, has crossed from research demonstration to operational attack tool. This is my analysis of what that means for defenders, and what needs to change immediately.
What You'll Learn
What agentic AI is and how it differs from standard AI assistants
The specific attack surface agentic AI creates: what's new and what's amplified
The CyberStrikeAI incident and what it tells defenders
How to assess your organisationβs agentic AI attack surface
The defensive posture shift required right now
⏱️ 14 min read

Agentic AI Security Risks: 2026 Red Team Guide

1. What Agentic AI Is
2. The New Attack Surface
3. The CyberStrikeAI Incident
4. Assessing Your Organisation's Exposure
5. Defensive Posture for Agentic AI

Agentic AI attacks are the operational deployment of the excessive agency risk I covered in OWASP LLM08. The MCP server security risks that enable agentic attacks are covered in MCP Server Security 2026. The broader AI vulnerability landscape is in the AI Vulnerabilities overview.
What Agentic AI Is
Standard AI assistants respond to prompts. The security industry spent 2023 and 2024 largely focused on prompt injection and jailbreaking: attacks against the text generation layer. Agentic AI shifts that threat model entirely, and my concern is that most security teams haven't caught up.

Agentic AI takes actions. The distinction matters enormously for security. When an AI assistant gets prompt-injected, it produces malicious text. When an agentic AI gets prompt-injected, it takes malicious actions: it sends emails, executes code, makes API calls, modifies files, accesses databases. The blast radius of a compromised agentic AI is the union of everything it has permission to do.
AGENTIC AI: THE SECURITY-RELEVANT DISTINCTION
Standard AI assistant
Input: user prompt → Output: text response
Actions: none (produces text only)
Compromise impact: produces wrong or malicious text
Agentic AI
Input: goal or task → Output: real-world actions
Actions: browse web, read/write files, execute code, call APIs, send messages
Compromise impact: takes attacker-directed actions with its full permission set
2026 deployment reality
AI coding agents: Claude Code, Cursor, Devin (file system + shell + git access)
AI SOC analysts: read SIEM, create tickets, block IPs, send alerts
AI sales/customer agents: CRM access, email send, contract generation
AI DevOps agents: deploy code, scale infrastructure, modify configs
Per Deloitte: approximately 25% of organisations are now piloting autonomous AI agents, and that figure is from Q4 2025, so the current number is meaningfully higher
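The "blast radius" framing above can be made concrete with a small permission audit. This is a minimal, hypothetical model: the `Tool` and `Agent` classes and the permission names are illustrative, not any real agent framework's API.

```python
from dataclasses import dataclass, field


@dataclass
class Tool:
    """A capability attached to an agent, with the actions it permits."""
    name: str
    permissions: set[str] = field(default_factory=set)


@dataclass
class Agent:
    name: str
    tools: list[Tool] = field(default_factory=list)

    def blast_radius(self) -> set[str]:
        # The union of all permissions across all tools: everything a
        # fully compromised agent could do, regardless of intended use.
        radius: set[str] = set()
        for tool in self.tools:
            radius |= tool.permissions
        return radius


# A hypothetical coding agent resembling the 2026 deployments listed above.
coding_agent = Agent("coding-agent", tools=[
    Tool("shell", {"execute_code"}),
    Tool("filesystem", {"read_files", "write_files"}),
    Tool("git", {"push_commits"}),
])

print(sorted(coding_agent.blast_radius()))
# ['execute_code', 'push_commits', 'read_files', 'write_files']
```

The point of the exercise: the blast radius is computed from attached tools, not from the agent's intended task, which is why overprivileged tool grants matter even for narrowly scoped agents.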
The New Attack Surface
Before I walk through each attack layer, a note on scope: I'm specifically focused on deployed agentic AI, meaning agents organisations have put into production, not research demonstrations. The threat model is different when the agent has real credentials, real data access, and real business consequences attached to its actions. My framework separates the agentic AI attack surface into three layers: the AI model layer (prompt injection attacks), the tool/permission layer (what the agent can access and do), and the identity layer (how the agent authenticates and is authenticated). All three need independent security assessment; most organisations assessing AI deployments focus only on the first.
AGENTIC AI ATTACK SURFACE: THREE LAYERS
Layer 1: AI Model (prompt injection)
Attack: indirect injection via content agent processes (emails, docs, web pages)
Impact: agent follows attacker instructions instead of operator instructions
Documented: Copilot email exfiltration, ChatGPT memory manipulation
Layer 2: Tools and Permissions
Attack: exploit overprivileged agent to take high-impact actions
Impact: agent deletes files, exfiltrates data, deploys malicious code, makes payments
Key question: what is the blast radius if this agent is fully compromised?
Layer 3: Agent Identity
Attack: impersonate agent identity to downstream systems
Attack: abuse agentβs credentials to access systems without going through the LLM
Gap: traditional IAM wasn't built for AI agent identity management
2026 trend: Google, Microsoft, AWS all shipping AI-specific IAM features
The compounding risk (Layer 1 × Layer 2)
Low-permission agent + prompt injection → limited impact
High-permission agent + prompt injection → catastrophic impact
The CyberStrikeAI attack was essentially a Layer 2 attack: high permissions + automation
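The Layer 1 × Layer 2 compounding above can be expressed as a rough triage rule. A sketch only: the `agent_risk` function and its thresholds are my own illustrative assumptions, not a published scoring standard.

```python
def agent_risk(permission_count: int, reads_untrusted_content: bool) -> str:
    """Rough triage for a deployed agent.

    permission_count: distinct high-impact actions the agent can take
    (Layer 2). reads_untrusted_content: whether the agent processes
    attacker-reachable input such as email, docs, or web pages (Layer 1
    injection exposure). Thresholds are illustrative.
    """
    if not reads_untrusted_content:
        # No obvious Layer 1 injection path, but Layer 3 identity
        # abuse still needs review.
        return "review"
    if permission_count == 0:
        return "limited"        # injection yields text, not actions
    if permission_count <= 2:
        return "moderate"       # low-permission agent + injection
    return "catastrophic-path"  # high-permission agent + injection


print(agent_risk(4, True))  # e.g. a DevOps agent that also browses docs
# catastrophic-path
```

The design choice worth noting: risk is multiplicative, not additive, so reducing either factor (stripping permissions or isolating the agent from untrusted content) collapses the triage result.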
The CyberStrikeAI Incident
The CyberStrikeAI campaign is the clearest documented example of fully autonomous AI operating as an attack engine. My reading of the Foresiet incident analysis (April 2026): what's most significant isn't the technical capability, since autonomous exploitation has been demonstrated in research settings for years. What's significant is that it deployed operationally against production infrastructure at scale, with no human operator in the attack chain.
CYBERSTRIKEAI ATTACK: DOCUMENTED LIFECYCLE
What happened (March 2026)
Targets: 600+ FortiGate firewalls across 55 countries
Operator: no human operator in the attack chain
Method: autonomous AI (reconnaissance, exploitation, persistence)
Source: Foresiet verified incident report, April 7 2026
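For defenders mapping this lifecycle to their own monitoring, the documented stages can be laid out as a simple gap-review checklist. The stage names come from the incident summary above; the paired detection signals are my own hypothetical mapping, not from the Foresiet report.

```python
# Documented CyberStrikeAI stages paired with hypothetical detection
# signals a defender could check coverage against.
lifecycle = [
    ("reconnaissance", "spikes in scanning of management interfaces"),
    ("exploitation",   "exploit attempts against edge-device CVEs"),
    ("persistence",    "unexpected config or admin-account changes"),
]

for stage, signal in lifecycle:
    print(f"{stage}: do we alert on {signal}?")
```

Even a checklist this small is useful because a fully autonomous campaign compresses all three stages into hours, leaving no window for human-paced triage between them.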