DEV Community

Mr Elite

Posted on • Originally published at securityelites.com

AI-Powered Phishing 2026 — How BEC Became a Multi-Persona AI Campaign


Business email compromise used to involve one attacker impersonating one executive. In 2026, Proofpoint documented BEC campaigns where AI coordinates multiple fake personas simultaneously — a fake CFO, a fake legal adviser, and a fake supplier contact all building a relationship over weeks before the final payment request arrives. The multi-persona campaign builds a level of trust that no single-source impersonation can achieve, and AI handles all the coordination. This is my breakdown of how AI transformed phishing from a volume game into a precision operation, and what detection looks like when you can no longer spot an attack by its grammar.

What You’ll Learn

How AI changed phishing from template-based to personalised precision attacks
The multi-persona BEC campaign pattern documented by Proofpoint in 2026
Why traditional phishing awareness training fails against AI phishing
The technical and behavioural detection signals that still work
What organisations need to add to their phishing defence stack in 2026

⏱️ 12 min read

AI-Powered Phishing — 2026 Complete Guide

1. How AI Changed Phishing
2. The Multi-Persona BEC Pattern
3. Why Awareness Training Fails
4. Detection Signals That Still Work
5. What to Add to Your Defence Stack

AI phishing is one of the six AI scam types covered in the consumer-facing AI Scams 2026 guide. The credential theft that AI phishing enables feeds directly into the infostealer landscape in AI Infostealer Malware 2026. Check your own exposure with the Email Breach Checker.

How AI Changed Phishing

IBM X-Force’s 2026 Threat Intelligence Index confirmed that AI-enabled credential harvesting is driving significant year-over-year increases in successful phishing outcomes. My framing for the transformation: traditional phishing was a volume game — send a million emails, 0.1% click rate, 1,000 compromised credentials. AI phishing is a precision game — send 50 highly targeted emails to 50 high-value targets, 40% click rate, 20 compromised credentials from people who have admin access, financial authority, or privileged data access. The economics are completely different and the outcomes are significantly worse.
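The volume-versus-precision economics above reduce to a quick back-of-the-envelope calculation. The figures below are the illustrative numbers from the paragraph, not measured data:

```python
def expected_compromises(emails_sent: int, click_rate: float) -> int:
    """Expected number of compromised credentials for a campaign."""
    return round(emails_sent * click_rate)

# Traditional volume phishing: a million generic emails, 0.1% click rate
volume = expected_compromises(1_000_000, 0.001)   # 1,000 low-value credentials

# AI precision phishing: 50 targeted emails, 40% click rate
precision = expected_compromises(50, 0.40)        # 20 high-value credentials

print(volume, precision)
```

Fewer compromises in absolute terms, but each one belongs to a deliberately chosen target with admin rights or payment authority, which is why the precision model is worth more to the attacker.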

TRADITIONAL vs AI-POWERED PHISHING

Traditional phishing (pre-AI)

Targeting: mass — same email to thousands of addresses
Content: template — generic greeting, recognisable grammar errors
Personalisation: none or basic (name from email address)
Click rate: 0.1–1% — detectable by content analysis and user training

AI-powered phishing (2026)

Targeting: precision — OSINT-driven selection of high-value targets
Content: personalised — references real name, job, company, recent activity
Personalisation: deep — LinkedIn posts, company announcements, press releases
Click rate: 3–5x higher than generic — IBM X-Force data 2026

What AI enables that humans can’t match

OSINT at scale: research 10,000 targets in the time a human researches 10
Writing quality: indistinguishable from legitimate corporate communication
Voice cloning: real-time phone follow-up using AI cloned voice of sender
Multi-channel coordination: email + LinkedIn + WhatsApp + phone simultaneously

The Multi-Persona BEC Pattern

The multi-persona BEC campaign is the 2026 evolution of business email compromise that Proofpoint specifically flagged in their AI-Driven Attacks 2026 briefing. My concern about this pattern: it defeats the verification instinct that a single-source BEC triggers. When you receive a payment request from a fake CFO, you might verify it. When you receive consistent reinforcement from a fake CFO, fake legal adviser, and fake supplier contact over three weeks — all coherent, all remembering previous conversations — the social proof is overwhelming.

MULTI-PERSONA BEC — HOW THE CAMPAIGN RUNS

Campaign setup (Week 1)

AI creates 3 fake personas: CFO lookalike domain, legal@firm.cc, supplier contact
Each persona: realistic LinkedIn profile, email signature, consistent backstory
Initial contact: low-stakes warm-up emails establishing the relationship

Trust building (Weeks 2–3)

CFO persona: references a real upcoming deal from public sources
Legal persona: “confirms” the deal structure in parallel communication
Supplier persona: builds a separate relationship establishing payment history
AI: maintains consistency across all three personas simultaneously

The request (Week 4)

CFO: “As we discussed, please initiate the wire for the acquisition deposit”
Legal: sends “confirmation” of the transaction simultaneously
Supplier: already has payment history — a “change of bank details” request follows
Victim: has corroborating communications from three sources built over weeks
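One defensive counterpart to the campaign pattern above is to correlate the senders in a payment thread: a legitimate deal rarely introduces several never-before-seen external domains at once. This is a minimal sketch with hypothetical names and an assumed threshold, not a production detector:

```python
def flag_multi_persona(thread_senders: list[str], known_domains: set[str],
                       min_new_domains: int = 2) -> bool:
    """Flag a payment thread when several previously-unseen external
    domains converge on it -- a crude multi-persona signal.
    known_domains: domains your organisation has an established history with.
    min_new_domains: assumed threshold; tune to your environment."""
    new_domains = {s.split("@")[-1].lower() for s in thread_senders} - known_domains
    return len(new_domains) >= min_new_domains

known = {"example.com"}  # hypothetical vendor-master domain list
senders = ["cfo@examp1e.com", "counsel@legal-firm.cc", "ap@supplier-co.net"]
print(flag_multi_persona(senders, known))  # three unseen domains: flagged
```

The point is not that this heuristic is hard to evade, but that it moves detection from "does this one email look fake" to "does this thread's structure look like a coordinated campaign".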

Why standard verification fails against this

“Check with a colleague” — the fake colleague has also been emailing them
“Look at the email domain” — AI campaigns use very close lookalike domains
“Check with the real CFO” — the fake CFO preemptively warned of “phone being broken”


AI Phishing Detection — What Still Works in 2026

| Detection Method | Status vs AI Phishing | Reliability |
| --- | --- | --- |
| Grammar/spelling check | AI writes perfect English — this signal is dead | ❌ Dead |
| Generic greeting detection | AI personalises — “Hi Sarah”, not “Dear User” | ❌ Dead |
| Urgency + pressure signals | AI campaigns build urgency gradually — still present | ⚠️ Weaker |
| Domain name check | Lookalike domains — requires careful inspection | ⚠️ Still works |
| Out-of-band verification | Call the real person on a stored number — defeats all BEC | ✅ Reliable |
| Payment process controls | Dual approval, callback for bank-detail changes | ✅ Reliable |
| DMARC/DKIM enforcement | Blocks domain spoofing — not lookalike domains | ✅ Partial |

📸 AI phishing detection reliability matrix 2026. Two traditional detection signals — grammar errors and generic greetings — are effectively dead against AI-generated phishing. The reliable defences are all procedural: out-of-band verification and payment process controls. My key training message for finance teams: the controls that stop AI BEC aren’t about spotting the fake email — they’re about the process that must happen regardless of how convincing the email looks.
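Those procedural controls can be encoded so that the process, not the reader's judgment, gates the payment. A minimal sketch (class and field names are hypothetical) of a bank-detail change that cannot execute without a callback to a stored number and two independent approvers:

```python
from dataclasses import dataclass, field

@dataclass
class BankDetailChange:
    """A requested change of supplier bank details, pending verification.
    The change only becomes executable after a callback to a number from
    the vendor master file AND approval by two different people."""
    supplier: str
    new_account: str
    callback_verified: bool = False  # call the stored number, never one from the email
    approvers: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    @property
    def executable(self) -> bool:
        return self.callback_verified and len(self.approvers) >= 2

change = BankDetailChange("Acme Supplies", "TEST-ACCOUNT-0001")
change.approve("alice")
print(change.executable)  # False: no callback yet, only one approver
```

The design point: `executable` depends only on verifiable process state, so even a perfectly written, perfectly corroborated AI email cannot move money by itself.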


📖 Read the complete guide on Securityelites — AI Red Team Education

This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. Read the full article on Securityelites — AI Red Team Education →

