
Mr Elite

Posted on • Originally published at securityelites.com

AI Scams 2026 — How Criminals Use AI to Steal Money (Real Cases)

📰 Originally published on Securityelites — AI Red Team Education — the canonical, fully-updated version of this article.


A finance worker in Hong Kong wired $25 million after a video call with people who turned out to be entirely AI-generated deepfakes. A British energy company wired €220,000 to a fraudster after a phone call from what sounded exactly like their CEO — a voice cloned from public recordings. A grandmother in California lost $18,000 to a caller she believed was her grandson in trouble; it was an AI voice clone reading from a script. These aren’t future warnings. They happened. AI has made scams faster, more convincing, and harder to detect. Here’s exactly how each type works, the real financial losses they’ve caused, the specific warning signs for each, and the one verification step that defeats most of them regardless of how convincing the technology gets.

What You’ll Learn

The 6 main AI scam types with real documented cases and losses
How each scam works technically — in plain English
The specific warning signs for each type
How to verify when you’re not sure something is real

⏱️ 12 min read

### AI Scams 2026 — All Types Explained

1. Voice Clone Scams — Fake Relatives and Fake CEOs
2. Deepfake Video Fraud — Fake Video Calls
3. AI-Powered Phishing — Personalised at Scale
4. AI Romance Scams — Fake Relationships
5. AI Investment Scams — Fake Endorsements
6. Fake AI Customer Service — Impersonating Brands

AI scams exploit the same AI capabilities documented in the social engineering methodology. The Phishing URL Scanner helps check suspicious links before clicking, and the Email Breach Checker shows whether your contact details appear in the databases criminals use for targeting.
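Before we get to the scam types: a few structural checks catch a surprising share of phishing URLs even without a scanner. Here is a minimal Python sketch of that idea — the heuristics, TLD list, and brand list are illustrative assumptions, not the actual logic of the Phishing URL Scanner mentioned above.

```python
from urllib.parse import urlparse

# Illustrative values only — not a vetted blocklist.
SUSPICIOUS_TLDS = {"zip", "top", "xyz", "gq", "tk"}
WATCHED_BRANDS = ("paypal", "microsoft", "apple")

def url_red_flags(url: str) -> list[str]:
    """Return a list of heuristic warning signs for a URL."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain")
    if host.count(".") >= 3:
        flags.append("many subdomains (often used to bury the real domain)")
    if "@" in url:
        flags.append("'@' in URL (browsers ignore everything before it)")
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        flags.append("TLD frequently abused in phishing campaigns")
    if any(brand in host and not host.endswith(f"{brand}.com")
           for brand in WATCHED_BRANDS):
        flags.append("brand name inside a non-brand domain")
    return flags

print(url_red_flags("http://paypal.secure-login.example.tk/verify"))
```

A lookalike link like the one above trips four separate checks at once; a legitimate `https://www.paypal.com/` trips none. Real scanners layer reputation data and content analysis on top, but even this level of checking beats clicking blind.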

1. Voice Clone Scams — Fake Relatives and Fake CEOs

Voice cloning technology can recreate a person’s voice from as little as three seconds of audio. Criminals use this to impersonate family members, bosses, and colleagues in phone calls. The voice sounds authentic because it is — built from real recordings of the actual person. My concern about this category is how rapidly the technology has become accessible: tools that required expensive equipment two years ago now run on a laptop and produce convincing results in minutes.

VOICE CLONE SCAMS — REAL CASES AND HOW THEY WORK

The Grandparent Scam (AI upgrade)

Traditional: scammer calls pretending to be grandchild in trouble
AI version: uses cloned voice of actual grandchild (from social media videos)
Real case: Jennifer DeStefano (Arizona, 2023) — heard her daughter’s voice
asking for $1M ransom. The daughter was home safe. Voice was cloned.

CEO Voice Fraud (Business Email Compromise upgrade)

Real case: UK energy company (2019, early voice-synthesis tools, pre-LLM): €220,000 wired
after call from “CEO” asking for urgent supplier payment
2024 update: same attack now uses real-time voice conversion — live, adaptive calls

Warning signs

Unusual urgency — “I need you to act NOW, don’t tell anyone”
Request for money, gift cards, or wire transfer
Caller won’t let you call them back on their usual number
Slight unnatural cadence or robotic quality at the edges of sentences

The only reliable defence

Hang up and call back on a number you already have for the person
Pre-agree a family “safe word” — ask for it if you’re unsure if the call is real
Never transfer money, buy gift cards, or give personal information based on a call alone
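The warning signs above are mechanical enough to check programmatically. Here is a toy Python sketch that scores a call transcript against them — the phrase lists and scoring are invented for illustration, not a vetted detection corpus, and no real scam filter is this simple.

```python
# Illustrative phrase lists mapping to the three warning-sign categories
# described above. Real systems would use far larger, curated corpora.
URGENCY = ["act now", "don't tell anyone", "right away", "immediately"]
PAYMENT = ["gift card", "wire transfer", "western union", "bitcoin"]
CALLBACK_BLOCK = ["don't call back", "can't call you back", "new number"]

def scam_call_score(transcript: str) -> int:
    """Count how many distinct red-flag categories appear (0-3)."""
    t = transcript.lower()
    score = 0
    for phrases in (URGENCY, PAYMENT, CALLBACK_BLOCK):
        if any(p in t for p in phrases):
            score += 1
    return score

example = ("Grandma it's me, I'm in trouble, I need you to act now. "
           "Buy a gift card and read me the code. Don't call back, "
           "my phone is broken.")
print(scam_call_score(example))  # hits all three categories: 3
```

Note what the score can and cannot tell you: it flags the script, not the voice. A cloned voice reading an innocent-sounding message scores zero — which is exactly why the callback-on-a-known-number rule, not content analysis, is the reliable defence.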

2. Deepfake Video Fraud — Fake Video Calls

The $25 million Hong Kong fraud is the clearest demonstration of how dangerous deepfake video has become. Employees sat on a multi-person video call with what appeared to be their CFO and several colleagues — all of whom were AI-generated in real time. Every face, every voice, every expression was fake. They wired $25 million based on what they saw. This is no longer the technology’s ceiling — it’s a documented case.

DEEPFAKE VIDEO SCAMS — WHAT’S POSSIBLE

Documented incident

Hong Kong, February 2024: finance worker wired HK$200M ($25M USD)
Method: multi-person Teams/Zoom call with deepfaked CFO and colleagues
Source: Hong Kong police confirmed; widely reported globally

How real-time deepfakes work

Attacker uses webcam software that replaces their face with target’s face in real time
Voice conversion applies target’s voice to attacker’s speech simultaneously
Result: video call participant appears to be the impersonated person

Current limitations (how to detect)

Artifacts visible on rapid head movement or unusual lighting
Blurring around hairline edges, especially with complex backgrounds
Eye contact sometimes slightly off — eyes don’t track naturally
Background may look too clean or static compared to normal video calls

Verification technique for video calls involving money

Ask the person on-screen to wave with both hands and turn sideways
Current deepfake tools struggle with rapid profile-view requests
Or: end the call and initiate a fresh call yourself on a known number
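For organisations, the callback rule works better as written policy than as in-the-moment judgment. A minimal sketch of that policy in Python — the `PaymentRequest` shape and channel names are invented for illustration, not taken from any real payment system:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_usd: float
    requested_via: str           # e.g. "video_call", "voice_call", "email", "in_person"
    verified_out_of_band: bool   # fresh callback on a number already on file?

# Any channel a deepfake or voice clone can reach is treated as remote.
REMOTE_CHANNELS = {"video_call", "voice_call", "email"}

def approve(req: PaymentRequest) -> bool:
    """Block any remote-channel request that lacks out-of-band verification."""
    if req.requested_via in REMOTE_CHANNELS:
        return req.verified_out_of_band
    return True

# The Hong Kong wire would have failed this check: the request arrived
# over a video call and was never re-verified on a known number.
print(approve(PaymentRequest(25_000_000, "video_call", False)))  # False
```

The design point: the policy never asks "did the call look real?" — a question deepfakes are built to win — only "was it re-verified on a channel the attacker doesn't control?"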

3. AI-Powered Phishing — Personalised at Scale

Traditional phishing emails were easy to spot — generic greetings, odd grammar, implausible pretexts. AI-generated phishing removes every one of those tells. The email references your real name, your actual job title, your current projects, and recent public statements — all pulled from LinkedIn, company websites, and social media in seconds. My experience testing phishing awareness across organisations: click rates on AI-generated personalised emails are consistently 3–5x higher than on generic templates, regardless of how much training employees have received.
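Even when the body text is flawless, message headers still leak signals. A sketch of two such checks using Python's standard `email` module — the sample message is invented, and real mail filters combine many more signals than these two:

```python
from email import message_from_string
from email.utils import parseaddr

def header_red_flags(raw: str) -> list[str]:
    """Two header checks that hold up even against flawless AI-written body text."""
    msg = message_from_string(raw)
    flags = []
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    # 1. Reply-To quietly diverting responses to a different domain
    if reply_to and reply_to.split("@")[-1] != from_addr.split("@")[-1]:
        flags.append("Reply-To domain differs from From domain")
    # 2. The receiving server reported failed sender authentication
    auth = msg.get("Authentication-Results", "")
    if "spf=fail" in auth or "dkim=fail" in auth:
        flags.append("SPF/DKIM authentication failed")
    return flags

sample = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo-office@attacker.example.net\n"
    "Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.net\n"
    "Subject: Urgent supplier payment\n"
    "\n"
    "Hi, please wire the attached invoice today and confirm. Thanks.\n"
)
print(header_red_flags(sample))
```

The point of checking headers rather than prose: an LLM can polish wording indefinitely, but it cannot make a spoofed message pass SPF/DKIM on a domain the attacker doesn't control.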


📖 Read the complete guide on Securityelites — AI Red Team Education

This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. Read the full article on Securityelites — AI Red Team Education →


This article was originally written and published by the Securityelites — AI Red Team Education team, where you’ll find more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs.
