DEV Community

Srinivasan Ragothaman

Global AI-Driven Scam Landscape and Practical Defence Playbook

TL;DR

  • Cybercrime at Scale: Losses hit $16.6B (US) and ₹22,845 crore (India) in 2024, with AI acting as a powerful force multiplier for traditional fraud.
  • The AI Arsenal: Scammers now deploy hyper-realistic voice cloning (harvested from social media), deepfake video, and LLM-generated scripts to power global “Digital Arrest,” romance, and job scams.
  • Data Integrity: The research clearly separates verified government statistics (FBI, MHA, UK Finance, ACCC) from labelled industry projections (Deloitte, TRM Labs, vendor reports).
  • The Playbook: Concludes with a practical Scam Action Plan, including family codewords, verification rules for “Digital Arrest,” and a country-specific “If Money Already Moved” emergency guide.

1. Global Scale and AI as a Force Multiplier

AI is not a separate crime type — it is a force multiplier for traditional fraud: phishing, impersonation, romance fraud, investment scams, sextortion, and extortion. It makes attacks more realistic, more scalable, and more precisely targeted.

Key headline figures (2024):

| Country / Region | Reported Losses | Year | Primary Source |
|---|---|---|---|
| United States | $16.6 billion | 2024 | FBI IC3 Annual Report (Apr 2025) |
| India | ₹22,845.73 crore (~$2.7B) | 2024 | MHA Parliamentary Reply |
| United Kingdom | £1.17 billion | 2024 | UK Finance Annual Fraud Report (May 2025) |
| Australia | AUD $2.03 billion | 2024 | ACCC / NASC Targeting Scams Report (Mar 2025) |

Projection: Deloitte's Center for Financial Services projects that generative-AI-enabled fraud losses in the US alone could reach $40 billion by 2027, up from $12.3 billion in 2023, a 32% compound annual growth rate.

Source: Deloitte, "Generative AI is expected to magnify the risk of deepfakes and other fraud in banking," May 2024 — confirmed via Deloitte.com.
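As a quick sanity check on the Deloitte projection, the growth rate can be re-derived from the two endpoints. Note that the endpoints as published imply roughly 34%, slightly above Deloitte's stated 32%; the gap presumably reflects rounding in the published start and end figures.

```python
# Re-derive the growth rate implied by $12.3B (2023) -> $40B (2027).
# CAGR = (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

rate = cagr(12.3, 40.0, 2027 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # ~34%, vs Deloitte's stated 32%
```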


2. United States — 2024 & 2025 Data

2.1 FBI IC3 2024 Annual Report — Key Statistics

The FBI's Internet Crime Complaint Center (IC3) released its 2024 Annual Report on 24 April 2025 — marking IC3's 25th anniversary. Key findings:

  • Total reported losses: $16.6 billion — a 33% increase from 2023.
  • Total complaints: 859,532 (roughly 2,000 per day; down slightly from 880,418 in 2023, but average per-victim loss rose sharply).
  • Fraud accounted for ~83% of total losses — $13.7 billion across 333,981 complaints.
  • Investment fraud (particularly cryptocurrency): $6.57 billion — the single largest loss category.
  • Business Email Compromise (BEC): $2.77 billion in losses.
  • Tech support scams: $1.46 billion — up ~87% since 2022.
  • Cryptocurrency-related losses overall: $9.3 billion — a 66% increase from $5.6 billion in 2023.
  • Elder fraud (60+): $4.9 billion — a 43% year-on-year increase. People over 60 suffered the most losses and filed the most complaints.
  • IC3 Recovery Asset Team (RAT) froze $561 million in fraudulently obtained funds with a 66% success rate via the Financial Fraud Kill Chain.
  • FBI–India joint operations: 215 arrests through 11 joint operations with the CBI in 2024 — described by the FBI in its own IC3 report as "a 700% increase in arrests from 2023, the first full year of the collaboration."

Source: FBI IC3, 2024 Internet Crime Report, ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf (released April 24, 2025). Also confirmed by FBI press release, fbi.gov; CyberScoop; TRM Labs.
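The "average per-victim loss rose sharply" claim follows directly from the figures above. A minimal sketch, with the caveat that the 2023 loss total below is back-derived from the stated 33% increase rather than taken from the 2023 IC3 report, so the 2023 average is an approximation:

```python
# Average reported loss per complaint, 2023 vs 2024 (FBI IC3 figures above).
losses_2024 = 16.6e9
complaints_2024 = 859_532

losses_2023 = losses_2024 / 1.33   # ~$12.5B, implied by "+33% from 2023"
complaints_2023 = 880_418

avg_2024 = losses_2024 / complaints_2024   # ~$19,300 per complaint
avg_2023 = losses_2023 / complaints_2023   # ~$14,200 per complaint
print(f"2023 avg: ${avg_2023:,.0f}   2024 avg: ${avg_2024:,.0f}")
```

Fewer complaints carrying a third more total loss means each complaint cost roughly a third more on average.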

2.2 Operation Level Up — FBI's Proactive Crypto Fraud Intervention

Launched in January 2024, Operation Level Up is the FBI's initiative to identify victims of cryptocurrency investment fraud ("pig butchering") while they are still being victimised and notify them before they lose more.

  • 4,323 victims notified across all 50 states.
  • 76% of those victims were unaware they were being scammed at the time of FBI contact.
  • ~$285.64 million in estimated savings prevented.
  • 42 victims referred to FBI victim specialists for suicide intervention — illustrating the severe psychological toll of these scams.

Source: FBI.gov, "Operation Level Up," February 2025; FBI Miami field office press release, March 2025; FBI IC3 2024 Annual Report.

2.3 AI Voice Cloning — Family Emergency Scams

  • AI tools can now clone a voice from just a few seconds of audio harvested from social media, YouTube, or public videos — sufficient to create a convincing imitation of a family member or known figure, according to industry research and documented fraud cases.
  • Scammers call relatives, simulate distress (crying, panic), and demand urgent bail or medical money.
  • According to McAfee's global survey of 7,000+ people, "Artificial Imposters — Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam", 1 in 10 respondents said they had received an AI voice-clone scam call, and 77% of those victims reported losing money as a result — making it one of the most financially effective scam formats once initiated. (Note: this figure comes from self-reported survey responses, not from law-enforcement loss data, and reflects respondent perceptions rather than audited complaint statistics.)
  • The FTC has explicitly warned that "scammers use AI to enhance their family emergency schemes."

Sources: FTC Consumer Alert, "Scammers use AI to enhance their family emergency schemes," March 2023; McAfee, "Artificial Imposters — Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam," mcafee.com/ai/news/ai-voice-scam (global survey of 7,000+ respondents, 2023); Hiya, Q4 2024 Global Call Threat Report, BusinessWire, February 2025.

2.4 AI-Powered Sextortion and Deepfake Abuse

  • Scammers take ordinary social-media photos or short video clips and use deepfake tools to generate fake explicit images or videos, then extort victims: pay, or the content goes to employers, family, or schools.
  • Deloitte reports that industry data indicates deepfake fraud attacks in fintech increased 700% in 2023 — this is an industry intelligence figure cited in Deloitte's banking fraud analysis, not a direct regulatory statistic. (Deloitte, May 2024.)
  • IC3's 2024 data highlights sextortion as one of the highest-volume complaint categories in crypto-related extortion — though the report does not rank it explicitly as the single largest crypto-complaint category.
  • Industry analysts broadly report that deepfake-enabled fraud losses are rising sharply, though no single official global figure for a specific quarter has been published by a primary law-enforcement body as of February 2026.

Sources: FBI IC3 2024 Report; Deloitte Center for Financial Services, May 2024.

2.5 AI-Augmented Romance and Investment Scams ("Pig Butchering")

  • "Pig butchering" scams — where fraudsters build long-term fake relationships before pushing fake investments — accounted for $5.81 billion in cryptocurrency investment scheme losses in 2024, a 47% rise from 2023.
  • Scammers use AI-generated profile photos, fluent LLM-written text, and sometimes voice-cloned or deepfake video calls to sustain months-long relationships.
  • TRM Labs — a blockchain intelligence firm — traced at least $10.7 billion in crypto funds flowing into fraudulent schemes in 2024 (via on-chain analysis), with thousands of new phishing and investment scam websites appearing monthly. This figure represents blockchain tracing estimates; it is a private-firm methodology-based number, not a government complaint-based total, and may overlap with IC3's $9.3 billion in crypto-related losses.
  • Senior citizens (60+) filed crypto fraud complaints at a 96% higher rate in 2024 than in 2023.

Sources: FBI IC3 2024 Report; TRM Labs, "2025 Crypto Crime Report" (blockchain intelligence firm, private-sector estimate); bitcoinist.com, April 2025.

2.6 Fake Recruiter and Job-Onboarding Scams

  • Scammers pose as recruiters from major firms using AI-generated headshots and polished LinkedIn profiles, conduct fake interviews (sometimes via AI avatars), and then either:
    • Request upfront fees for "equipment" or "onboarding," or
    • Harvest full banking and identity details under the guise of onboarding.
  • Work-from-home / task scams cost Americans $197 million in 2024 (IC3 data).
  • US regulators are clear: legitimate employers do not charge fees for hiring or equipment.

Source: FBI IC3 2024 Annual Report; FTC guidance.


3. India — 2024 & 2025 Data

3.1 Official 2024 Numbers

  • ₹22,845.73 crore lost to cyber fraud in 2024 — a 206% increase from ₹7,465.18 crore in 2023 (Ministry of Home Affairs, Rajya Sabha reply, November 2024).
  • 22.68 lakh (2.268 million) cybercrime incidents registered via NCRP in 2024, up from 15.96 lakh in 2023 and 10.29 lakh in 2022 — a 42% year-on-year rise.
  • 36.37 lakh (3.637 million) total financial fraud complaints logged across NCRP and CFCFRMS in 2024.
  • ₹17,400 crore of the 2024 losses were from investment-related scams alone, according to Lisianthus Technologies' Critical Infrastructure Review 2025 — a private consultancy report, not a primary MHA figure; treat as industry analysis.
  • I4C's CFCFRMS system saved ₹5,489 crore across 17.8 lakh complaints in 2024 alone.

Sources: Ministry of Home Affairs, Rajya Sabha Q&A, November 2024; mha.gov.in/MHA1/Par2017/pdfs/par2024-pdfs/RS27112024/228.pdf; Times of India; The Hans India; BW Disrupt.

3.2 New 2025 Data (Latest Available, as of February 2026)

  • ₹19,812.96 crore lost to cyber fraud in 2025 with 21,77,524 complaints (I4C / NCRP data compiled by The420.in and cross-referenced with government sources, January 2026).
  • A separate parliamentary response (December 2, 2025, Lok Sabha) cited I4C/NCRP data showing Indians had lost over ₹32,600 crore to financial fraud cumulatively in recent years.
  • ₹22,495 crore in 2025 losses reported by the Ministry of Home Affairs in a separate parliamentary response (Unstarred Question No. 1341), with 24,02,579 financial fraud complaints registered in 2025.
  • ₹8,189 crore saved through the CFCFRMS rapid-response system in 2025, across 23.61 lakh complaints (MHA Lok Sabha reply, December 2025).
  • Investment scams accounted for 77% of financial losses in 2025.
  • Media reports citing I4C data suggest that approximately 45% of cyber fraud activities in 2025 showed digital links to Southeast Asian countries — particularly Cambodia, Myanmar, and Laos — though this figure has not yet appeared in a primary MHA parliamentary document and should be treated as a reported estimate pending official confirmation (Future Crime Research Foundation / I4C data, as reported by Indian media, 2025).
  • India is projected to cross 25 lakh cybercrime cases in 2025, according to Lisianthus Technologies (private consultancy analysis, not official NCRP data).
  • One media report (The420.in, December 2025, citing I4C data) cited a forward-looking projection that India could face cyber fraud exposure of over ₹1.2 lakh crore in 2025, averaging ~₹1,000 crore per month if the trend continued without intervention. This is a projection, not a reported/realised loss figure, and it is substantially higher than all verified official 2025 loss totals cited above (₹19,812–22,495 crore). Readers should treat it as a worst-case extrapolation, distinct from the official MHA parliamentary data.

Note on figure discrepancies: Different sources cite figures ranging from ₹19,812 crore to ₹22,495 crore for 2025. This reflects different counting periods (NCRP-only vs. combined CFCFRMS+NCRP), and the fact that full-year 2025 consolidated data was not yet officially published as of February 2026. All figures are from official government parliamentary responses or I4C data.

Sources: The420.in, January 3, 2026 (I4C data); Dynamite News, February 2026 (Parliamentary response, Unstarred Q 1341); MHA Lok Sabha Q&A, December 2, 2025, mha.gov.in/MHA1/Par2017/pdfs/par2025-pdfs/LS02122025/452.pdf; The Quint, January 2026; IndiaSpend, December 2025.

3.3 AI Voice Cloning — "Distress Call" Scams

  • A McAfee-sponsored survey found 69% of Indian adults cannot or are unsure whether they can distinguish an AI-generated voice from a real one — higher than the global average.
  • 47% of Indian respondents had personally experienced or knew someone who had experienced an AI voice scam, versus ~25% globally.
  • Scammers harvest short voice clips from Instagram Reels, YouTube Shorts, or other public posts and clone them to impersonate family members in distress.
  • Victims are pressured to send money via UPI, bank transfer, or wallets — often to mule accounts — before they can verify.

Sources: McAfee AI Voice Scam Report, 2023/2024 (cited in ABP Live, Express Computer, BOOM Live); Indian cybercrime unit advisories.

3.4 Sextortion and Deepfake Blackmail

  • National and state cyber cells report a strong rise in sextortion, often initiated on dating apps, with victims lured into intimate video calls and then blackmailed.
  • Normal photos are converted into deepfake explicit images; victims are threatened unless they pay.
  • Indian users can register intimate images on StopNCII.org (Meta-backed), which uses secure hashing to detect and block matching uploads across major platforms.

Source: Indian cybercrime advisories; StopNCII.org service documentation.

3.5 "Digital Arrest" Scams — India's Most Feared AI-Assisted Fraud ⚠️

This pattern is predominantly and most severely seen in India.

  • Scam callers claim to be from CBI, NIA, Police, ED, Customs, or Income Tax — often via WhatsApp or Skype, with fake backgrounds mimicking police stations or offices.
  • The script alleges a parcel, SIM, bank account, or passport in the victim's name is linked to drugs, money laundering, or terrorism. The victim is told they are "digitally arrested" and must remain on video indefinitely.
  • Digital arrest scam incidents grew from 39,925 in 2022 to 123,672 in 2024 (NCRP data, IndiaSpend, December 2025).
  • Reported losses from digital arrest scams grew from ~₹91 crore in 2022 to ₹1,935 crore in 2024 (NCRP data).
  • Mumbai alone lost approximately ₹155 crore to digital arrest scams in 2025, a 33% increase from the prior year. Individual victims include multiple senior citizens who each lost ₹15–16 lakh (Mid-Day / Times of India).

Official government clarification (CERT-In and PIB):

"There is no concept of 'digital arrest' in Indian law. No genuine law-enforcement or government body will close a case or collect money via WhatsApp / Skype video calls or demand 'security deposits' over video."

Sources: NCRP data as cited by IndiaSpend, December 2025; Mumbai Police / Times of India, 2025; Mid-Day Mumbai; CERT-In advisory, October 2024 (apacnewsnetwork.com); PIB advisory, pib.gov.in/PressReleasePage.aspx?PRID=2082761; Hindustan Times, October 2024.

3.6 Fake Recruiters and Job-Onboarding Scams

  • Scammers impersonate recruiters from TCS, Infosys, Optum, Google, and other brands with AI-generated photos and forged offer letters.
  • Victims pay "registration fees," "laptop fees," or "internal file processing charges" via UPI, only to discover the job was entirely fabricated.
  • Legitimate Indian employers do not charge candidates to issue offers or equipment — official recruiter communication comes from verified company domains, not free-mail addresses.

Source: Indian corporate HR policies and government advisories.

3.7 India's Defensive Infrastructure

| Tool / System | Function | 2025 Update |
|---|---|---|
| Helpline 1930 | Rapid financial fraud reporting | Operational; linked to CFCFRMS for fund freeze |
| cybercrime.gov.in (NCRP) | Central complaint portal | 24+ lakh complaints logged in 2025 |
| CFCFRMS | Freeze fraudulent fund transfers | ₹8,189 crore saved in 2025 |
| Suspect Registry (I4C) | Flags mule accounts & identifiers | 24 lakh mule accounts flagged; 11 lakh suspicious identifiers |
| Pratibimb | Maps criminal geography | Active across jurisdictions |
| SIM/IMEI blocking | Prevents fraud via compromised numbers | 9.42 lakh SIMs + 2.63 lakh IMEIs blocked |
| Budget 2025–26 | Cybersecurity investment | ₹782 crore allocated for cybersecurity projects |

Sources: PIB, mha.gov.in; quickheal.co.in (citing MHA Minister's statement, 2025); IndiaSpend, December 2025.


4. United Kingdom — 2024 & 2025 Data

4.1 UK Finance Annual Fraud Report 2025 (Covering 2024 Data)

Published May 2025, covering UK banking fraud in 2024:

  • Total fraud losses: £1.17 billion in 2024 — "broadly unchanged from 2023."
  • Authorised Push Payment (APP) fraud: £450.7 million — a 2% decrease, with cases falling 20% to under 186,000 (lowest since 2020).
  • Unauthorised fraud (cards, remote banking, cheques): £722 million — up 2%.
  • Investment fraud (within APP): £144.4 million — up 34% from 2023, despite a 24% drop in cases, indicating larger average losses per incident.
  • 70% of APP fraud cases started online; 16% via telecommunications.
  • Banks prevented £1.45 billion in unauthorised fraud through security systems.
  • A record 3.3 million fraud incidents were reported — underlining the volume of attacks even as per-incident losses in APP declined.

Source: UK Finance, "Annual Fraud Report 2025" (published May 2025), ukfinance.org.uk.
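The investment-fraud figures above (losses up 34% on cases down 24%) compound into a much larger jump in the average loss per case. A small sketch of that arithmetic:

```python
# How "+34% losses on -24% cases" translates into per-case losses.
# per-case factor = (1 + loss_change) / (1 + case_change)

loss_change = 0.34    # investment-fraud APP losses, 2024 vs 2023
case_change = -0.24   # investment-fraud case count, same period

per_case_factor = (1 + loss_change) / (1 + case_change)
print(f"Average loss per case rose ~{per_case_factor - 1:.0%}")  # ~76%
```

So per-incident, the average investment-fraud loss grew by roughly three-quarters, not by the headline 34%.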

4.2 UK in First Half 2025

  • Criminals stole £629.3 million in H1 2025 — a 3% increase from the same period in 2024.
  • APP fraud in H1 2025: £257.5 million — up 12% year-on-year.
  • Investment scam losses in H1 2025: £97.7 million — up 55% from H1 2024.
  • Romance scam losses in H1 2025 increased 35%.

Source: UK Finance, "Half Year Fraud Report 2025," October 2025, ukfinance.org.uk.

4.3 UK AI and Deepfake-Specific Statistics

  • More than one-third of UK consumers encountered deepfake voice fraud attempts in 2024, with average reported losses of £13,342 per victim (Hiya Q4 2024 Global Call Threat Report).
  • In the City of London financial sector, fraud attempts using AI-generated videos and voices of prominent figures rose over 2,100% in three years (American Bar Association / Voice of Experience, September 2025).
  • According to an industry survey referenced by Keepnet Labs, 72% of EU companies, including UK firms, expect more sophisticated AI-driven deepfake attacks in 2025.

⚠️ Vendor estimate — not directly traceable to a UK government primary document: Industry and vendor reports (including SQ Magazine, October 2025 and Keepnet 2026) cite a projection that deepfake content will grow from approximately 500,000 files globally in 2023 to a projected 8 million in 2025. This figure has sometimes been attributed to UK government forecasting, but no direct UK Home Office or DSIT primary document confirming this specific projection was identified. It is retained here as a vendor/industry estimate only.

Sources: Hiya Q4 2024 Global Call Threat Report (BusinessWire, February 2025); Keepnet Labs, "Deepfake Statistics & Trends 2026"; American Bar Association, "The Rise of the AI-Cloned Voice Scam," September 2025.

4.4 Landmark Corporate Cases

Arup deepfake video conference case (Hong Kong, 2024):
A finance employee at the UK-based engineering firm Arup was tricked via a multi-participant deepfake video meeting — in which multiple participants, including a person portrayed as the CFO, appeared to be AI-generated deepfakes — into authorising transfers totalling HK$200 million (~$25.6 million USD / ~£20 million).

Source: The Guardian, May 2024; multiple verified reports citing Arup's Global CIO Rob Greig.

European energy firm (2019, still widely cited as foundational case):
A €220,000 loss after an employee wired money following a deepfake phone call impersonating the CEO's voice — cited as one of the earliest documented AI voice fraud cases.

Source: Avast Blog; Hogan Lovells analysis; American Bar Association article, 2025.


5. Australia & APAC — 2024 & 2025 Data

5.1 Australia — National Scam Losses (ACCC / NASC)

| Year | Combined Scam Losses (AUD) | Reports | Change |
|---|---|---|---|
| 2022 | $3.15 billion | ~500,000 | |
| 2023 | $2.74 billion | 601,803 | −13% |
| 2024 | $2.03 billion | 494,732 | −25.9% |

Source: ACCC / National Anti-Scam Centre, "Targeting Scams Report 2024," March 10, 2025 — scamwatch.gov.au/system/files/targeting-scams-report-2024.pdf; nasc.gov.au.
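The year-on-year changes in the ACCC figures above can be recomputed directly from the loss totals, which is a useful habit when secondary sources quote percentage changes without the underlying numbers:

```python
# Recompute the "Change" column of the ACCC table from the loss totals.
losses = {2022: 3.15, 2023: 2.74, 2024: 2.03}  # AUD billions

years = sorted(losses)
for prev, curr in zip(years, years[1:]):
    change = losses[curr] / losses[prev] - 1
    print(f"{curr}: {change:+.1%}")  # -13.0% for 2023, -25.9% for 2024
```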

Key 2024 breakdown:

  • Investment scam losses: AUD $945 million (down 27.3% from $1.3 billion in 2023, due to the investment scam fusion cell and coordinated takedowns).
  • Top contact method for financial loss: social media ($69.5 million reported).
  • Phone scams: highest overall losses ($107.2 million, across fewer reports).
  • People aged 65+: highest losses of any age group — AUD $99.6 million.
  • NASC referred 8,000+ URLs for takedown in 2024, including 6,000 via the NASC takedown service.

Source: ACCC/Scamwatch press release, "Australians better protected as reported scam losses fell by almost 26 per cent," March 10, 2025.

5.2 APAC Deepfake Growth

  • Deepfake incidents in the Asia-Pacific region rose by approximately 1,530% between comparable periods in 2022 and 2023. This is a verified figure from the Sumsub Identity Fraud Report 2023.

Source for 1,530% figure: Sumsub Identity Fraud Report 2023; confirmed via multiple secondary sources including sqmagazine.co.uk, October 2025.

5.3 Australia — AI and Deepfake-Specific Scam Patterns

  • Australia's National Anti-Scam Centre and state authorities have actively warned about celebrity deepfake investment scams — where AI-generated video endorsements from real celebrities (Elon Musk, Australian public figures) are used to drive victims to fake trading platforms. "Celebrities are not getting rich from these schemes."
  • 26 victims in Western Australia lost approximately AUD 2.9 million to romance scams in one year, with AI-generated images and videos increasingly used to conceal scammer identities (ABC Australia, August 2024).
  • Scamwatch data identifies investment scams, romance scams, payment redirection, remote access, and phishing as the top five loss-generating scam types in 2024.

Sources: ACCC media release; NSW Government advisory, nsw.gov.au; ABC News Australia, August 2024; Scamwatch.

5.4 South Korea and Japan (Additional APAC Data)

  • South Korea: The National Police Agency confirmed 297 deepfake sex crime cases in the first seven months of 2024, up from 180 in all of 2023 and nearly double the 156 cases recorded in 2021 (South Korean police data, as reported by Reuters, August 30, 2024; NPR, September 6, 2024; Human Rights Watch, August 29, 2024). 74% of the 178 suspects booked in that period were aged 10–19.
  • Over November 2024–October 2025, according to data released by South Korea's National Office of Investigation and reported by The Korea Herald (November 2025), police apprehended 3,557 individuals for cybersexual violence, with deepfake-related crimes now the largest single category at 1,553 cases.
  • Industry projections suggest South Korean voice phishing losses could reach approximately ₩1 trillion (~$718 million) annually — this is a vendor/industry projection, not a confirmed official police total (cited in Deepstrike vendor report, October 2025).
  • Japan's telecom fraud losses rose 19% to ¥44.1 billion (~$295 million) in 2023, the latest confirmed figure from Japan's National Police Agency.

Sources for South Korea deepfake sex crimes: South Korea National Police Agency data, as reported by Reuters ("Explainer: Why South Korea is on high alert over deepfake sex crimes," August 30, 2024); NPR (September 6, 2024); Human Rights Watch (August 29, 2024); Korea Herald (November 2025) for 2024–2025 enforcement data. For Japan NPA figures: Japan National Police Agency annual crime statistics 2023. South Korea voice phishing projection: Deepstrike vendor report, October 2025 — treated as industry estimate only.


6. Common AI Scam Patterns Across Countries

Despite different local "brands" (IRS vs NIA vs HMRC vs ATO), AI-enabled scams share a small repeating set of technical patterns:

6.1 AI Voice Cloning

  • A few seconds of captured audio is sufficient to clone a voice convincingly; source audio is routinely scraped from social media Reels, YouTube, podcasts, or corporate webinars.
  • Used in: family emergency calls, bank security calls, executive impersonation, "digital arrest" calls.

⚠️ Non-governmental industry estimate (treat as directional only): Deepstrike's Vishing Statistics 2025 vendor report (October 2025) estimates that deepfake voice fraud attacks in 2024 occurred at a rate of approximately one every five minutes globally. This is not an officially audited or law-enforcement-sourced figure.

6.2 Deepfake Images and Video

  • Used for: fake nudes in sextortion, fake celebrity endorsement in investment scams, and multi-person fake video meetings to authorise corporate wire transfers (as in the Arup case).
  • Deepfake fraud attacks in fintech increased 700% in 2023 (Deloitte, citing industry data, 2024).
  • The Sumsub Identity Fraud Report 2023 — a widely cited identity-verification industry report — found that deepfake incidents in APAC rose ~1,530% between 2022 and 2023.

⚠️ Non-governmental industry estimates (treat as directional only): Keepnet Labs' Deepfake Statistics & Trends 2026 vendor report estimates that approximately 68% of video deepfakes cannot be distinguished from real footage by an untrained viewer; that deepfakes account for roughly 40% of all biometric fraud attempts; and that AI-based digital document forgery rose 244% from 2023 and 1,600% since 2021. These are vendor figures, not audited global statistics.

6.3 AI-Generated Text, Profiles, and Personas

  • Large language models write fluent, contextually accurate scripts in English and local languages, eliminating the grammatical errors that previously flagged scam emails.
  • Romance scammers, fake recruiters, and investment shills use LLM-written conversations to sustain months-long deceptions.
  • Synthetic identity fraud — AI-built personas combining real and fabricated data — is considered the fastest-growing type of financial crime; Deloitte's Center for Financial Services projects US losses from this category could reach $23 billion by 2030 (Deloitte, 2024).

6.4 Scale and Targeting via AI Automation

  • AI lets attackers simultaneously test thousands of script variants, email subject lines, and ad images — automatically learning which combinations "convert" best.
  • "Fraud-as-a-Service" ecosystems in dark-web markets share GenAI models, cloned voices, and scripted playbooks freely.

⚠️ Non-governmental industry estimate (treat as directional only): Keepnet Labs' 2026 vendor report estimates that CEO fraud attempts using deepfake audio or video now target approximately 400 companies per day globally. This is not a primary law-enforcement statistic.

6.5 The Stable Emotional Levers

Regardless of country, AI-scam scripts exploit the same psychological triggers:

  • Urgency — "Act now or lose everything / be arrested."
  • Fear — accident, arrest, legal trouble, account breach.
  • Authority — impersonation of police, banks, courts, government agencies.
  • Secrecy — "Don't tell anyone or you'll jeopardise the case / investigation."
  • Greed / Opportunity — "Huge guaranteed returns," "exclusive job offer."

7. Mental Model for Recognising AI-Driven Scams

7.1 The Four Red Flags

Treat any contact as high-risk when it combines three or more of:

| # | Red Flag | Examples |
|---|---|---|
| 1 | Emergency or Fear | Accident, arrest, "digital arrest," hacked account, legal threat |
| 2 | Money or Sensitive Data | Transfer request, OTP, PIN, card details, Aadhaar/PAN, passwords |
| 3 | Unusual Payment Method | UPI to unknown, crypto, gift cards, wire to new accounts, "fees" or "deposits" |
| 4 | Secrecy and Pressure | "Don't tell anyone," "Act now," "Hanging up will get you arrested" |

If three or four are present: STOP. Verify via an independent channel before doing anything.
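The "three or more red flags" rule above is simple enough to express as code. This is an illustrative sketch only — the flag names are invented for the example, and real triage would need human judgment or NLP, not keyword matching:

```python
# Minimal sketch of the "three or more of four red flags" rule.
RED_FLAGS = ("emergency_or_fear", "money_or_sensitive_data",
             "unusual_payment_method", "secrecy_and_pressure")

def is_high_risk(observed: set) -> bool:
    """True when three or more of the four red flags are present."""
    return len(observed & set(RED_FLAGS)) >= 3

# Example: a "digital arrest" call hits fear, money, and secrecy at once.
print(is_high_risk({"emergency_or_fear", "money_or_sensitive_data",
                    "secrecy_and_pressure"}))  # True -> stop and verify
```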

7.2 Do Not Trust Surface Cues

  • Voice can be faked — including crying, accents, and background noise.
  • Faces and live video can be faked — even in multi-person meetings (Arup case).
  • Profile pictures, credentials, and endorsements can be AI-generated and filled with scraped data.
  • Caller ID can be spoofed to show a real bank number, a family member's number, or a government number.

The only reliable defences are independent verification and strict personal rules about money and data.


8. New Legislative Responses (2025)

8.1 United States — TAKE IT DOWN Act (Signed May 19, 2025)

Full name: Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act.

What it does:

  • Makes it a federal crime to knowingly publish, or threaten to publish, non-consensual intimate images (NCII), including AI-generated deepfakes.
  • Requires covered platforms (social media, user-generated content platforms) to implement a notice-and-removal process to remove such content within 48 hours of a valid victim request.
  • Platforms that fail to comply face FTC enforcement.
  • Platforms have until May 19, 2026 to implement the notice-and-removal system.
  • Establishes federal criminal penalties, including imprisonment, for knowingly publishing or threatening to publish non-consensual intimate images — with enhanced penalties where minors are involved. (For exact sentencing ranges, refer to the statute text or CRS Legal Sidebar LSB11314.)

Legislative history: Introduced by Senator Ted Cruz (R-TX) and co-sponsored by Senator Amy Klobuchar (D-MN). Passed the House 409–2. Signed by President Trump on May 19, 2025, at a White House ceremony where First Lady Melania Trump was also present. Described as the first major federal law to address harm caused by AI.

Sources: Congress.gov, "TAKE IT DOWN Act Legislative History"; Congress CRS sidebar LSB11314, May 2025; RAINN; Wikipedia; Skadden law firm analysis, June 2025.

8.2 India — Ongoing Digital Scam Prevention Measures (2025)

  • Promotion and Regulation of Online Gaming Bill, 2025 — passed August 21, 2025. Bans online money gaming, including promotion and financial transactions.
  • Union Budget 2025–26: ₹782 crore allocated for cybersecurity projects.
  • I4C's caller-tune campaign (in collaboration with DoT) launched to warn citizens about digital arrest, investment scams, and related modus operandi.
  • National Digital Investigation Support Centre operational in New Delhi and Assam; assisted in 13,299 cybercrime cases by end of December 2025.
  • Cyber forensic labs now functional in 27 State/UT FSLs; cyber forensic-cum-training labs in 33 States/UTs.

Sources: PIB, pib.gov.in/PressNoteDetails.aspx?NoteId=155384; MHA Lok Sabha reply, December 2025; IndiaSpend, December 2025.

8.3 European Union

  • The EU AI Act (entered into force in 2024, with phased implementation) includes provisions relevant to deepfakes, requiring watermarking of AI-generated content and transparency disclosures — though scam enforcement remains primarily a national-level criminal matter.
  • According to an industry survey cited by Keepnet Labs, 72% of EU companies report expecting more sophisticated AI-driven deepfake and AI-generated identity attacks in 2025.

9. Scam Action Plan

9.1 The "Pause and Check" Rule

Whenever an unexpected call, SMS, email, WhatsApp, Telegram, or social-media message involves:

  • Emergency (accident, arrest, hospital, "digital arrest," account hacked)
  • Money or sensitive data (OTP, PIN, card numbers, Aadhaar, PAN, passwords)
  • Urgent pressure to act now or keep it secret
  • Unusual payment methods (UPI to unknown, crypto, gift cards, wire, "fees" or "deposits")

Stop. Do not pay, click, or share anything while on that contact.

Check. Verify using a phone number, website, or app you already know is official — not the link or number they gave you.


9.2 Family-Voice / Emergency Scam Plan

If someone claims a friend or family member is in trouble (accident, jail, hospital, stuck abroad):

  1. Do not trust the voice or caller ID. AI can clone both.
  2. Hang up and call the person back on a known number, or check via family groups, colleagues, or neighbours.
  3. Use a shared code question that only real family members can answer (e.g., "What was our first pet's name?" / "What did we eat last Sunday?").
  4. Do not send money or share OTPs while still on that call, regardless of how emotional or authoritative the caller sounds.

9.3 "Digital Arrest" / Government Impersonation Plan

If you receive a call or video from someone claiming to be CBI, NIA, Police, ED, Customs, Income Tax, or a court (India), or IRS/Social Security (US), HMRC/Police (UK), ATO/Police (Australia):

  1. End the call or video immediately. Real law enforcement does not conduct arrests via WhatsApp / Skype video calls.
  2. Do not transfer any money as "security deposit," "bail," "verification fee," or fine.
  3. Look up official numbers yourself on government websites and call back on those.

India-specific:

  • Call 1930 immediately to report.
  • File at cybercrime.gov.in.
  • Call your bank / UPI app to freeze and recall any funds already sent.

9.4 Sextortion / Deepfake Blackmail Plan

If someone threatens to leak intimate images or videos (real or fake) unless you pay:

  1. Do not pay or negotiate. Payment typically leads to escalating demands.
  2. Preserve evidence: screenshots of chats, usernames, payment requests, and any media.
  3. Report to official channels:
    • India: cybercrime.gov.in + local cyber police
    • US: ReportFraud.ftc.gov + ic3.gov
    • UK: Action Fraud + local police
    • Australia: Scamwatch + ReportCyber
  4. Report the profile and content to the platform (Instagram, WhatsApp, dating app).
  5. Register with StopNCII.org — a Meta-backed hash-matching service that allows major platforms to detect and block matching content uploads using secure hashes (no actual images are uploaded or stored).
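The hash-matching idea behind services like StopNCII can be sketched in a few lines. This is a simplification for illustration only: the real service uses perceptual hashing (such as PDQ), which also matches slightly altered copies, whereas the exact-match SHA-256 digest below only matches identical bytes. The key property is the same — only a one-way fingerprint leaves the device, never the image:

```python
# Illustrative sketch of hash-matching: only a digest is shared, never the image.
# Real services use perceptual hashes (e.g. PDQ); SHA-256 here is a simplification.
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Compute a one-way digest of the image; the raw bytes are never uploaded."""
    return hashlib.sha256(image_bytes).hexdigest()

# The service stores only fingerprints reported by victims...
blocklist = {image_fingerprint(b"victim-reported image bytes")}

# ...and participating platforms check uploads against that set.
def should_block(upload: bytes) -> bool:
    return image_fingerprint(upload) in blocklist

print(should_block(b"victim-reported image bytes"))  # True
print(should_block(b"unrelated upload"))             # False
```

Because the digest is one-way, the platform can recognise a reported image without ever possessing it — which is why registering with such a service does not mean handing over your photos.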

9.5 Job, Recruiter, and Investment Scam Plan

For jobs:
Assume it is a scam if:

  • Contact is via WhatsApp, Telegram, or a generic email only.
  • You are asked for any "registration fee," "laptop fee," "security deposit," or "file processing charge."

Verify by:

  • Checking the job on the employer's official careers site.
  • Confirming the recruiter's email domain exactly matches the company (e.g., @tcs.com, @infosys.com, @google.com).
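The exact-domain rule can be sketched as a simple check. The allowlist below is an example — always take the real domains from the employer's official website, and note that an exact match is essential: look-alike domains like `tcs-careers.com` must not pass for `tcs.com`:

```python
# Illustrative sketch: recruiter email must be on the company's exact domain.
# The allowlist is an example; source real domains from the official site.
def is_official_domain(email: str, official_domains: set[str]) -> bool:
    """Exact domain match only — look-alikes and subdomain tricks must fail."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in official_domains

official = {"tcs.com", "infosys.com", "google.com"}
print(is_official_domain("hr@tcs.com", official))                    # True
print(is_official_domain("hr@tcs-careers.com", official))            # False
print(is_official_domain("hr@google.com.jobs-apply.net", official))  # False
```

The third case shows why substring checks are dangerous: the address contains "google.com", but the actual domain is an attacker-controlled look-alike.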

For investments:
Walk away if:

  • The offer came via unsolicited DM, WhatsApp, Telegram, or pop-up ad.
  • You see celebrity deepfake videos or claims of "AI/quantum bots" guaranteeing returns.
  • You are pushed to move money quickly into crypto, new trading apps, or unknown foreign accounts.

Verify licences:

  • India: SEBI / RBI registries
  • US: FINRA BrokerCheck + SEC IAPD
  • UK: FCA register
  • Australia: ASIC register + Scamwatch/ACCC alerts

9.6 If Money Has Already Moved

Step 1 — Contact your bank / payment app FIRST:

  • Call using the official number from the app or the back of your card.
  • Request to block cards, freeze accounts, and attempt recall/reversal of recent transactions.

Step 2 — Report to official fraud channels:

| Country | Primary Reporting | Secondary |
| --- | --- | --- |
| India | Call 1930 immediately | File at cybercrime.gov.in |
| US | ReportFraud.ftc.gov | ic3.gov (FBI IC3) |
| UK | Action Fraud | Bank fraud team |
| Australia | Scamwatch | ReportCyber |

Step 3 — Tell someone in your Trust Circle so you are not handling the situation alone.


9.7 Trust Circle (Fill In Before You Need It)

| Name | Relationship | Phone Number |
| --- | --- | --- |
| __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ |

10. Verified References

United States

India

United Kingdom

Australia & APAC


Non-Governmental / Vendor Industry Estimates

⚠️ The following sources are vendor reports or industry aggregators. They are cited in this document only in clearly labelled "non-governmental estimate" callout blocks. They should not be used as primary evidence in policy or legal contexts without independent corroboration.

Victim Resources

| Country | Reporting | Platform Reports |
| --- | --- | --- |
| India | 1930 helpline + cybercrime.gov.in | Platform + StopNCII.org |
| US | ReportFraud.ftc.gov + ic3.gov | Platform + StopNCII.org |
| UK | Action Fraud (actionfraud.police.uk) | Platform + StopNCII.org |
| Australia | Scamwatch + ReportCyber | Platform |

Document compiled: February 2026. All statistics are sourced to identifiable primary government reports or clearly labelled industry research. Primary government sources (FBI IC3, MHA India, UK Finance, ACCC/NASC) are used for all headline figures. Private-firm, vendor, and industry estimates are explicitly identified as such in labelled callout blocks throughout the document. No statistic is presented as a primary official figure unless it originates from a government report, parliamentary response, or law-enforcement publication.
