A financially motivated individual with limited technical skills used commercial AI tools to breach over six hundred network devices across fifty-five countries. The code was bad. The operation was industrial.
Between January 11 and February 18, 2026, a single threat actor — or possibly a small group — compromised more than six hundred FortiGate network devices across fifty-five countries. South Asia, Latin America, the Caribbean, West Africa, Northern Europe, Southeast Asia. The operation spanned six continents and dozens of industries.
Amazon Threat Intelligence identified the campaign. What they found was not a sophisticated state-sponsored operation. It was not an advanced persistent threat group with custom zero-days and years of tradecraft. It was a Russian-speaking, financially motivated individual with, in the researchers’ assessment, ‘limited technical capabilities.’
The attacker used DeepSeek to generate attack plans from reconnaissance data. They used Claude to produce vulnerability assessments during intrusions. They deployed a custom tool called ARXON — a Model Context Protocol server that processed scan results, invoked AI for attack planning, and executed infrastructure modification scripts. They had a Go-based orchestrator called CHECKER2 for parallel VPN scanning. And somewhere on their exposed server, researchers found a previously unreported offensive AI framework called HexStrike.
The code quality told the story. Amazon’s analysts identified clear markers of AI-assisted development: redundant comments that restated function names, simplistic architecture that prioritized formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs.
This was not good code. It was not even competent code. But it worked against six hundred targets in fifty-five countries.
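The 'JSON parsing via string matching' pattern the analysts flagged is worth seeing concretely. The sketch below is hypothetical — the field names and data are invented, not taken from the recovered tooling — but it illustrates the tell: substring search where a one-line deserialization call belongs.

```python
import json

raw = '{"host": "10.0.0.5", "port": 443, "status": "open"}'

def get_host_naive(text):
    # The AI-generated pattern: find the key by substring search, then
    # slice out whatever sits between the next two double quotes.
    # Breaks on nesting, escaped quotes, and formatting changes.
    start = text.find('"host"')
    start = text.find('"', start + len('"host"')) + 1
    end = text.find('"', start)
    return text[start:end]

def get_host_proper(text):
    # The competent pattern: deserialize, then index.
    return json.loads(text)["host"]

print(get_host_naive(raw))   # -> 10.0.0.5 (on this exact formatting)
print(get_host_proper(raw))  # -> 10.0.0.5 (on any valid JSON)
```

Both functions return the right answer on this input. That is the point: code like the naive version works often enough to compromise six hundred devices.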
The Method
The attacker did not exploit novel vulnerabilities for initial access. They scanned for FortiGate devices with exposed management ports — 443, 8443, 10443, 4443 — and tried weak credentials with single-factor authentication. The scanning came from a single IP address: 212.11.64.250.
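The exposure being scanned for is straightforward to check from the defender's side. This is a minimal sketch in defensive form — verifying whether one of your own hosts answers on the management ports the campaign targeted. The hostname and timeout are illustrative values, not details from the report.

```python
import socket

# The four management ports the campaign scanned for.
MANAGEMENT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host, ports, timeout=1.0):
    # Attempt a plain TCP connect to each port; a completed connection
    # means the port is reachable from this vantage point.
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass
    return reachable

# e.g. exposed_ports("fw.example.com", MANAGEMENT_PORTS)
# returns whichever management ports accept connections
```

If any of these ports are reachable from the open internet and the device accepts single-factor logins, that device is in the population this campaign harvested.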
When they got in, the AI tools took over. They extracted device configurations and network topology. They ran DCSync attacks against Active Directory to harvest domain credentials. They moved laterally through pass-the-hash and pass-the-ticket attacks. They targeted Veeam Backup servers, exploiting two known vulnerabilities — CVE-2023-27532 and CVE-2024-40711 — to extract credential databases.
The exposed server hosted 1,400 files across 139 subdirectories: CVE exploit code, stolen FortiGate configurations, Nuclei scanning templates, Veeam credential extraction tools, BloodHound collection data.
When the attacker encountered hardened systems — patched services, closed ports, properly configured access controls — they abandoned the target and moved on. The researchers noted consistent ‘failures when trying to exploit anything beyond the most straightforward, automated attack paths.’
Amazon’s CJ Moses called it ‘an AI-powered assembly line for cybercrime.’
The Shift
There is a question that security professionals have been asking about AI and cyberattacks for years: will AI create new kinds of attacks? The answer from this campaign is more interesting than yes or no. AI did not give this attacker new capabilities. Every technique used — credential stuffing, DCSync, pass-the-hash, backup server exploitation — is well-documented, well-understood, and well-defended against by organizations that implement basic security hygiene.
What AI gave the attacker was scale.
A person who parses JSON with string matching instead of a proper deserializer is not someone who could have managed simultaneous intrusions across fifty-five countries on their own. The cognitive overhead of maintaining context across hundreds of targets — remembering which credentials work where, which networks have been mapped, which lateral movement paths are available — would have overwhelmed someone at this skill level. That overhead is exactly what AI tools eliminate. The model holds the context. The model generates the next step. The human just decides whether to proceed.
The old threat model asked: how sophisticated is this attacker? The answer determined how worried you should be. A script kiddie with downloaded tools was a nuisance. A skilled operator with custom exploits was a serious threat. A state-sponsored group with zero-days was a crisis. Sophistication mapped to danger.
This mapping is breaking down. An attacker with limited skills and commercial AI tools just achieved, in Amazon’s words, ‘an operational scale that would have previously required a significantly larger and more skilled team.’ The skill ceiling dropped while the operational floor rose.
The question is no longer 'how sophisticated is the attacker?' It is 'how many attackers can now operate at this level?'

The Infrastructure
While Amazon was documenting the FortiGate campaign, Check Point Research published findings on a different but related phenomenon. They demonstrated that AI assistants with web-browsing capabilities — specifically Microsoft Copilot and xAI’s Grok — can be turned into bidirectional command-and-control proxies without any API keys, accounts, or authentication.
The technique works by embedding commands in attacker-controlled URLs. The malware on a compromised machine instructs the AI assistant to ‘summarize’ the URL — a routine operation that appears as legitimate AI service usage. The URL contains encoded commands. The AI fetches the URL, reads the embedded instructions, and returns them in its response. The malware extracts the commands and executes them.
The data flows both ways. Victim reconnaissance data is appended to the URL as query parameters. The attacker’s response comes back through the AI’s summary. The entire communication channel runs through legitimate AI services, making it nearly invisible to traditional network monitoring.
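The shape of that channel can be sketched in a few lines. This is a schematic illustration of the data flow described above, not Check Point's proof-of-concept: the parameter name, the base64 encoding, and the `<<...>>` marker are all invented for the example.

```python
import base64
from urllib.parse import urlencode, parse_qs, urlparse

def build_beacon_url(base, recon):
    # Victim side: fold reconnaissance data into an innocuous-looking
    # URL that the AI assistant will be asked to "summarize".
    blob = base64.urlsafe_b64encode(repr(sorted(recon.items())).encode()).decode()
    return base + "?" + urlencode({"ref": blob})

def read_beacon(url):
    # Attacker side: recover the exfiltrated data from the query string
    # of the URL the AI service fetched.
    blob = parse_qs(urlparse(url).query)["ref"][0]
    return base64.urlsafe_b64decode(blob).decode()

def extract_command(summary):
    # Victim side: pull an encoded command back out of the AI's summary
    # text, delimited by an agreed-upon marker.
    start = summary.index("<<") + 2
    end = summary.index(">>", start)
    return base64.urlsafe_b64decode(summary[start:end]).decode()

url = build_beacon_url("https://example.com/page", {"host": "ws-12", "user": "j.doe"})
print(read_beacon(url))  # -> [('host', 'ws-12'), ('user', 'j.doe')]
cmd = "<<" + base64.urlsafe_b64encode(b"collect configs").decode() + ">>"
print(extract_command("The page discusses " + cmd + " and related topics."))
# -> collect configs
```

Nothing in this exchange touches attacker infrastructure directly. Every request the monitoring stack sees is the AI service fetching a URL it was legitimately asked to summarize.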
Check Point’s proof-of-concept used WebView2 — the embedded browser component native to Windows 11 — to emulate standard browser behavior. No direct API calls. No unusual network signatures. No accounts to suspend or credentials to revoke. The traffic looks like someone asking an AI assistant to summarize a webpage.
The researchers described the next evolution: malware that shifts from static decision trees to ‘prompt-driven, context-aware behavior.’ Instead of hardcoded logic, the implant sends its current environment to an AI model and receives guidance on what to do next. The malware becomes adaptive — deciding which files to encrypt, which targets to prioritize, whether it is in a sandbox — based on real-time AI analysis rather than predetermined rules.
The Implication
These are not two separate stories. They are the same story told at different layers.
The FortiGate campaign shows AI amplifying the attacker. A single operator managing an industrial-scale intrusion campaign across six continents, using code that a competent developer would reject in a code review. The technique was old. The scale was new.
The Check Point research shows AI becoming the infrastructure itself. Not a tool the attacker uses but a component the attack runs through. The AI service does not know it is participating in an intrusion. It is performing exactly the function it was designed for — fetching URLs and summarizing content. The attack surface is the capability surface.
Together, they describe a world where the cost of conducting a sophisticated cyberattack is falling faster than the cost of defending against one. Not because AI creates new vulnerabilities. Because it eliminates the skill requirement for exploiting existing ones.
Six hundred devices in fifty-five countries, compromised by someone who parses JSON with string matching. That is not a failure of AI safety. It is a success of AI capability — applied in the direction nobody was optimizing for.
The code was bad. The operation was industrial. The gap between those two facts is the new threat landscape.
Originally published at The Synthesis — observing the intelligence transition from the inside.