Seth Keddy

AI Scams Are Getting Personal: How to Recognize Them Before They Fool You

Artificial intelligence is transforming everything, including cybercrime. Over the past year, AI-powered scams have evolved from crude tricks into sophisticated, targeted attacks. They do not just guess your name or company anymore. They use your voice, your writing style, and even your face.

This shift is not theoretical. It is already happening. Executives are being impersonated in real-time video calls. HR teams are receiving perfectly worded fake resumes generated by bots. Employees are falling for phishing emails that read like internal memos because they are built using publicly available company lingo.

These attacks are not just smarter. They are personal.

This article explains how modern AI scams work, why they are so effective, and how to protect yourself before one of them succeeds by using your name or your face.


1. Deepfake Voices That Sound Like Your Boss

One of the most viral scams on LinkedIn this year involved an employee who received a call that appeared to come from their CFO. The voice sounded normal. The message was simple: wire funds to a new vendor account.

Everything matched: caller ID, tone, urgency. But the call was fake. The voice had been cloned using samples pulled from company videos and internal meetings that had been posted online. The request carried urgency and authority, and the money was gone before IT had time to investigate.

AI tools today can replicate a person’s voice using less than 30 seconds of audio. Combined with spoofed numbers and casual context, this creates the illusion of legitimacy. The result is a social engineering attack that is almost impossible to detect by ear alone.

How to protect against deepfake voice scams

  • Verify high-stakes requests through multiple channels. A second call, a text message, or a Slack ping can reveal inconsistencies; a sketch of one such out-of-band check follows this list.
  • Educate teams not to trust voice alone, especially under pressure or urgency.
  • Limit the public posting of video or audio where executives speak clearly for long periods.
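
To make the multi-channel idea concrete, here is a minimal sketch of an out-of-band confirmation step using Slack's Python SDK. The role-to-ID mapping, token variable, and function name are illustrative assumptions, not a prescribed workflow. The point is that the confirmation travels over a channel the caller does not control.

```python
import os

from slack_sdk import WebClient  # pip install slack-sdk

# Assumed mapping from a requester's role to their *known* Slack user ID,
# maintained out of band. Never take this from the suspicious call itself.
KNOWN_SLACK_IDS = {"cfo": "U0123456789"}


def request_out_of_band_confirmation(requester: str, summary: str) -> None:
    """DM the real person on Slack before acting on a voice request.

    The high-stakes action stays blocked until the real person replies here.
    """
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    user_id = KNOWN_SLACK_IDS[requester]  # raises KeyError for unknown requesters
    client.chat_postMessage(
        channel=user_id,
        text=(
            f"Confirming a request attributed to you: {summary}. "
            "Reply here to approve. No reply means the request is denied."
        ),
    )


# Example: a phone call "from the CFO" asks for an urgent vendor payment.
# request_out_of_band_confirmation("cfo", "wire funds to a new vendor account")
```

Even a manual version of this, such as texting the person's known number, defeats most voice-clone scams, because the attacker controls only the original channel.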

2. Phishing Emails Generated by AI

Many organizations train employees to look for misspellings and awkward grammar in phishing emails. That advice is outdated.

AI tools can now craft emails that perfectly match a company’s tone and vocabulary. If your company has press releases, blog posts, or job listings online, those can be scraped and used to construct fake emails that appear completely authentic.

Recent LinkedIn reports show phishing messages that cite internal project names, reference team structures, and even include correct titles of real employees. These messages are often written using ChatGPT-like tools that can generate content with uncanny precision.

What makes AI phishing dangerous

  • The language appears clean, professional, and familiar.
  • The timing of emails often aligns with known project cycles.
  • The structure of the email includes real signatures, logos, and footers.

How to spot AI-generated phishing

  • Look for unusual requests that deviate from standard procedures, even if the language feels natural.
  • Watch for links that redirect through obscure domains before landing on a legitimate-looking page (see the redirect-tracing sketch after this list).
  • Use email tools that flag unusual patterns or behavior, not just keywords.
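
The link check can be partly automated. The sketch below, a rough illustration using the requests library, follows a URL's redirect chain and flags any hop whose host falls outside a set of expected domains; EXPECTED_DOMAINS and the example URL are placeholders. Run anything like this from a sandbox, since fetching a suspicious link is itself a risk.

```python
from urllib.parse import urlparse

import requests  # pip install requests

# Domains you actually expect company links to pass through (placeholder values).
EXPECTED_DOMAINS = {"example.com", "mail.example.com"}


def trace_redirects(url: str) -> None:
    """Follow a link and print every hop so obscure intermediaries stand out."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    hops = [r.url for r in resp.history] + [resp.url]
    for hop in hops:
        host = urlparse(hop).hostname or ""
        expected = any(host == d or host.endswith("." + d) for d in EXPECTED_DOMAINS)
        print(f"{hop}{'' if expected else '   <-- unexpected domain'}")


# trace_redirects("https://example.com/password-reset")
```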

3. Fake Job Offers and Resume Bots

LinkedIn is full of stories from professionals who received job offers, only to find out later that the hiring manager was fake. Some had phone interviews with deepfake voices. Others received onboarding documents from fake domains that looked almost identical to the real company’s website.

On the flip side, HR departments have been flooded with resumes that appear legitimate but are generated entirely by AI. These applicants can come with plausible LinkedIn profiles (built on scraped photos) and resumes tailored to the exact job description.

What is new in fake hiring scams

  • Use of cloned recruiter profiles to make contact
  • Simulated interviews with pre-recorded or deepfake videos
  • Fake job portals that collect personal data or attempt payment fraud for background checks or training kits

How to avoid them

  • Confirm all recruiter outreach through the company’s verified site or known HR contact.
  • Watch for domains with minor spelling errors or different top-level domains (.co instead of .com); the domain-check sketch after this list shows one way to automate this.
  • For recruiters: implement video verification and cross-reference candidates against known databases.
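
The lookalike-domain check is easy to automate with plain edit distance. This is a minimal, dependency-free sketch; the threshold of 2 and the sample domains are arbitrary choices, and a distance check is only one signal (it will not catch a completely different but plausible-sounding domain).

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


def looks_like_spoof(sender_domain: str, real_domain: str) -> bool:
    """Flag domains that are close to, but not exactly, the real one."""
    if sender_domain == real_domain:
        return False
    return edit_distance(sender_domain, real_domain) <= 2


print(looks_like_spoof("acme.co", "acme.com"))   # True: one character away
print(looks_like_spoof("acme.com", "acme.com"))  # False: exact match
```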

4. AI-Powered Impersonation on Video Calls

This one is relatively new but increasingly common. A scammer joins a video meeting pretending to be a known stakeholder. Instead of live video, they use a deepfake animation of the person’s face and a voice clone.

These impersonations usually rely on audio plus limited facial movement, so they blend in with poor lighting or a weak connection. Most of the time, people do not question the presence of someone they recognize.

How this works

  • Video footage from YouTube or company webinars is used to train a face model.
  • Voice cloning apps simulate the target’s speaking style and tone.
  • The scammer joins under a legitimate-sounding pretext and makes a small request: access to a folder, a password reset, or approval for a wire transfer.

How to detect video-based impersonation

  • Ask the person to verify their identity via internal chat or a shared reference; the challenge-phrase sketch after this list is one way to do this.
  • Pay attention to latency, unnatural blinking, or strange lighting artifacts.
  • If something feels off, end the call and escalate.
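
One lightweight way to run the internal-chat check is a one-time challenge phrase. This sketch only generates the phrase, and the word list is illustrative. Send it over a channel you trust, then ask the person on the call to read it back: a deepfake operator who cannot read that channel cannot know the phrase.

```python
import secrets

# Illustrative word list; any shared source of randomness works.
WORDS = ["ember", "quartz", "lantern", "mosaic", "drift", "cobalt"]


def make_challenge(n_words: int = 3) -> str:
    """Generate a one-time phrase to send over a trusted internal channel."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))


challenge = make_challenge()
print(f"Send via internal chat, then ask on the call: '{challenge}'")
```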

5. Your Public Data Is Fuel for AI Scams

Every time you post your title, your team’s roadmap, or a company success story, you are feeding potential attackers. AI thrives on publicly available context. When it knows your role, your boss’s name, and your current project, it can craft highly convincing messages and impersonations.

You do not need to stop posting completely. But you do need to post with awareness.

Real example

A VP of Marketing posted about a recent event their team executed. A week later, someone impersonated that VP and sent emails to multiple vendors requesting invoices and contracts for a follow-up event. Everything checked out except one thing: the event did not exist.

The emails were polite, relevant, and detailed. Without close inspection, they would have passed.


Final Recommendations

AI-driven scams are effective because they blend into everyday business routines. They do not break systems. They exploit trust, urgency, and familiarity.

Here is what you can do:

  • Assume every inbound communication can be faked, especially voice and video.
  • Keep executive media exposure limited when possible.
  • Use MFA and access controls to limit the damage of one compromised account; a minimal TOTP sketch follows this list.
  • Educate teams to verify requests through trusted secondary channels.
  • Audit your own public data regularly. Search for yourself and your company. See what an attacker would see.
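
For the MFA point, here is a minimal sketch of time-based one-time passwords using the pyotp library; the account name and issuer are placeholders. The idea is that a stolen password alone no longer completes a login.

```python
import pyotp  # pip install pyotp

# Hypothetical enrollment: the user stores this base32 secret at signup.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# URI the user loads into an authenticator app (placeholder identities).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, the 6-digit code from the user's device must match.
code = totp.now()  # stands in for the code the user would type
print("Code accepted:", totp.verify(code))  # True within the time window
```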

Cybersecurity in 2025 is no longer about technical defenses alone. It is about situational awareness and behavioral discipline. AI scams are getting personal. The only way to defend yourself is to make your awareness personal too.
