<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: rnits</title>
    <description>The latest articles on DEV Community by rnits (@rnits).</description>
    <link>https://dev.to/rnits</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3869754%2Fe3b3b025-623d-40cc-9815-294d43879acd.png</url>
      <title>DEV Community: rnits</title>
      <link>https://dev.to/rnits</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rnits"/>
    <language>en</language>
    <item>
      <title>Deepfake CEO Fraud: How Small Businesses Can Spot and Stop AI Scams</title>
      <dc:creator>rnits</dc:creator>
      <pubDate>Fri, 17 Apr 2026 13:14:00 +0000</pubDate>
      <link>https://dev.to/rnits/deepfake-ceo-fraud-how-small-businesses-can-spot-and-stop-ai-scams-18jk</link>
      <guid>https://dev.to/rnits/deepfake-ceo-fraud-how-small-businesses-can-spot-and-stop-ai-scams-18jk</guid>
      <description>&lt;p&gt;A finance manager in a 40-person manufacturing company in the Midwest gets a video call from the CEO. The CEO is traveling, looks and sounds exactly right, and says he needs a wire transfer processed for a time-sensitive acquisition. The finance manager sends $243,000 to the account provided. The CEO never made that call. The entire video was generated by AI.&lt;/p&gt;

&lt;p&gt;That happened in 2025. And it was not an isolated case.&lt;/p&gt;

&lt;p&gt;Deepfake fraud losses are projected to hit $40 billion by 2027, and small businesses are catching a disproportionate share of the damage. The technology that used to require a Hollywood-level budget now runs on a laptop. An attacker needs about three seconds of someone's voice — pulled from a conference recording, a YouTube video, an earnings call, or even a voicemail greeting — to clone it convincingly enough to fool colleagues and family members.&lt;/p&gt;

&lt;p&gt;If you run a small business in New Hampshire or Massachusetts, this is not a hypothetical risk. It is a current one.&lt;/p&gt;

&lt;h2&gt;How deepfake scams actually work&lt;/h2&gt;

&lt;p&gt;The word "deepfake" sounds like science fiction, but the mechanics are not complicated. AI models analyze audio or video of a target person — their voice, facial movements, mannerisms — and generate new content that mimics them. The output can be produced as pre-recorded media or rendered live during a call.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftffooobkcmth33y1fg6o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftffooobkcmth33y1fg6o.webp" alt="Deepfake CEO fraud attack flowchart showing the six stages from reconnaissance to financial loss" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three main attack vectors hitting small businesses right now.&lt;/p&gt;

&lt;h3&gt;Fake CEO voice calls&lt;/h3&gt;

&lt;p&gt;This is the most common variant. An attacker clones the voice of a business owner or executive using publicly available audio — a podcast appearance, a company video, a conference panel, or even a phone greeting. Then they call an employee who handles finances and make an urgent request.&lt;/p&gt;

&lt;p&gt;The calls follow a predictable pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Urgency.&lt;/strong&gt; "I need this done before end of day." The pressure is deliberate — it discourages the employee from stopping to verify.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrecy.&lt;/strong&gt; "Don't loop anyone else in yet, this is confidential." This isolates the target from colleagues who might question the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authority.&lt;/strong&gt; The voice sounds exactly like the boss. That alone overrides most people's instincts to double-check.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A law firm in Boston nearly lost $180,000 to this exact playbook in late 2025. The managing partner's voice was cloned from a webinar recording. The call went to a paralegal who handled vendor payments. The only reason it failed was that the paralegal's computer was being updated and she could not process the transfer immediately — by the time she tried, the real partner had returned to the office.&lt;/p&gt;

&lt;h3&gt;Deepfake video calls&lt;/h3&gt;

&lt;p&gt;This is where it gets worse. Real-time deepfake video now runs on consumer-grade hardware and open-source software. An attacker joins a Zoom, Teams, or Google Meet call looking and sounding like someone you know.&lt;/p&gt;

&lt;p&gt;In February 2024, a finance worker at a multinational firm was tricked into transferring $25 million after a video call where every other participant — including the CFO — was a deepfake. The employee initially suspected phishing when he received the meeting invite, but the video call "confirmed" the request because the people looked and sounded real.&lt;/p&gt;

&lt;p&gt;Small businesses are actually more vulnerable to this than large enterprises. A company with 15 employees knows what the owner looks and sounds like. If the owner appears on a video call asking for something, there is no security operations center analyzing the call for anomalies. There is just a person who trusts their eyes and ears.&lt;/p&gt;

&lt;h3&gt;Manipulated documents and communications&lt;/h3&gt;

&lt;p&gt;Beyond voice and video, attackers use AI to generate convincing email threads, invoices, and even signed documents that look like they came from people you trust. Pair a fake invoice with a cloned voice call to "confirm" it, and you have a multi-channel attack that is very hard to catch.&lt;/p&gt;

&lt;p&gt;An attacker might send a fake invoice from a vendor you actually use, then follow up with a cloned voice call saying "hey, just wanted to make sure you got that invoice — the bank details changed because we switched providers." Each piece reinforces the other. The email looks right. The voice sounds right. The story makes sense. Without a structured verification process, most people will process the payment.&lt;/p&gt;

&lt;h2&gt;Why traditional security does not catch this&lt;/h2&gt;

&lt;p&gt;Most cybersecurity tools catch technical intrusions — malware, unauthorized network access, phishing links, suspicious logins. Deepfake attacks skip past all of it because they exploit human trust, not technical vulnerabilities.&lt;/p&gt;

&lt;p&gt;Your email filter will not flag a phone call. Your endpoint detection will not flag a Zoom meeting where someone's face is being synthesized in real time. Your firewall has nothing to inspect. The "attack surface" is the relationship between two people, and the weapon is a convincing impersonation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cybersecurity awareness training&lt;/strong&gt; helps, but most training programs still focus on email phishing, suspicious links, and password hygiene. Very few cover deepfake voice or video attacks, and even fewer teach employees specific detection techniques.&lt;/p&gt;

&lt;p&gt;This gap matters. The technology is moving faster than the training materials.&lt;/p&gt;

&lt;h2&gt;How to spot a deepfake — practical detection tips&lt;/h2&gt;

&lt;p&gt;Perfect detection is not realistic — the technology improves every month, and today's tells might not work six months from now. But right now, there are signs worth watching for.&lt;/p&gt;

&lt;h3&gt;Audio deepfake red flags&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unnatural breathing patterns.&lt;/strong&gt; Real speech includes breaths, pauses, and filler sounds. Cloned voices often sound unnaturally smooth or have breathing that does not match the speech rhythm.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flat emotional range.&lt;/strong&gt; Voice clones struggle with genuine emotional shifts. If the caller sounds oddly flat during what should be a stressful request, that is a flag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio artifacts.&lt;/strong&gt; Listen for brief glitches, robotic undertones, or moments where the voice seems to "skip." These are processing artifacts that current models have not fully eliminated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inconsistent background noise.&lt;/strong&gt; The ambient sound might shift abruptly or feel artificially added.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrong cadence.&lt;/strong&gt; If you know the person well, their speaking rhythm — how fast they talk, where they pause, how they start sentences — is distinctive. Clones approximate this but often get the micro-timing wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Video deepfake red flags&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Edge artifacts around the face.&lt;/strong&gt; Look at the boundary between the face and the background or hairline. Deepfakes often show slight blurring, flickering, or unnatural transitions at these edges.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eye contact that is too perfect.&lt;/strong&gt; Real people on video calls look away, blink irregularly, and shift focus. Deepfakes tend to maintain unnaturally steady eye contact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lighting mismatches.&lt;/strong&gt; The lighting on the face may not match the lighting in the rest of the frame — shadows falling the wrong direction, skin tone that does not match the environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mouth sync issues.&lt;/strong&gt; During fast speech or unusual words, the lip movements may lag or not match the audio precisely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static accessories.&lt;/strong&gt; Glasses, earrings, or hair may not move naturally with head movements. Some models struggle with reflections on glasses in particular.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these are foolproof — a well-funded attacker can fix many of them. But in the attacks we are actually seeing against small businesses, these artifacts show up more often than not. Real-time deepfakes are especially rough around the edges because the processing has to happen on the fly.&lt;/p&gt;

&lt;h2&gt;Building defenses that actually work&lt;/h2&gt;

&lt;p&gt;Spotting deepfakes is one layer. But the strongest defense is process — making it structurally hard for one convincing impersonation to turn into a financial loss.&lt;/p&gt;

&lt;h3&gt;1. Establish out-of-band verification for all financial requests&lt;/h3&gt;

&lt;p&gt;Any request involving money — wire transfers, payment changes, new vendor setups, payroll modifications — should require verification through a separate channel from the one the request came in on.&lt;/p&gt;

&lt;p&gt;If someone calls asking for a transfer, verify by text or in person. If someone emails, call them back on a number you already have on file — not one provided in the email. If someone requests on a video call, confirm through a separate Slack message, text, or phone call after the meeting ends.&lt;/p&gt;

&lt;p&gt;This single policy stops the majority of deepfake fraud attempts, because the attacker can only control one channel at a time.&lt;/p&gt;

&lt;h3&gt;2. Create a code word system for high-value requests&lt;/h3&gt;

&lt;p&gt;Some businesses we work with have implemented a rotating code word that changes weekly or monthly. Any financial request above a certain threshold requires the code word. If the caller cannot provide it, the request goes through a manual verification process regardless of who they appear to be.&lt;/p&gt;

&lt;p&gt;This is low-tech and effective. An attacker who clones a voice has no way to know a code word that was shared in person or through a secure internal channel.&lt;/p&gt;
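&lt;p&gt;If you want the rotation to happen without anyone remembering to pick a new word, the word can be derived from a shared secret. The sketch below is illustrative, not something we ship; the word list and secret are placeholders, and the secret still has to be distributed in person or through a secure internal channel.&lt;/p&gt;

```python
import hmac
import hashlib
from datetime import date

# Placeholder word list -- a real deployment would use a larger, curated one.
WORDS = ["granite", "harbor", "maple", "lantern",
         "compass", "summit", "orchard", "beacon"]

def code_word(shared_secret: bytes, on: date) -> str:
    """Derive the current week's code word from a secret shared out of band.

    Keying the HMAC with the ISO year and week means the word rotates
    every Monday without anyone having to redistribute anything.
    """
    iso_year, iso_week, _ = on.isocalendar()
    msg = f"{iso_year}-W{iso_week:02d}".encode()
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    # Two words make the phrase harder to guess than one.
    return f"{WORDS[digest[0] % len(WORDS)]}-{WORDS[digest[1] % len(WORDS)]}"
```

&lt;p&gt;Anyone holding the secret computes the same phrase for the same week; an attacker who cloned a voice cannot. The same caveat applies as with a paper list: if the secret leaks, rotate it.&lt;/p&gt;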

&lt;h3&gt;3. Implement dual authorization for transfers&lt;/h3&gt;

&lt;p&gt;No single person should be able to authorize a wire transfer or payment above a defined threshold. Require two people to independently approve the transaction, each verifying through separate channels.&lt;/p&gt;

&lt;p&gt;This means even if an attacker successfully convinces one person, the second approver creates another checkpoint. The cost of the process is a few extra minutes per transaction. The cost of skipping it can be six figures.&lt;/p&gt;
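&lt;p&gt;The rule is simple enough to express in a few lines. This is a hypothetical sketch of the policy, not code from any banking or accounting system; the threshold and names are made up.&lt;/p&gt;

```python
class TransferRequest:
    """Illustrative dual-authorization gate for outgoing payments."""

    THRESHOLD = 10_000  # dollars; made-up figure, set your own

    def __init__(self, amount: float, payee: str):
        self.amount = amount
        self.payee = payee
        self.approvals = set()

    def approve(self, approver: str) -> None:
        # Set semantics: the same person approving twice still counts once.
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        if self.amount >= self.THRESHOLD:
            # Above the threshold, two distinct people must sign off.
            return len(self.approvals) >= 2
        return len(self.approvals) >= 1
```

&lt;p&gt;The set is the part that matters: one convinced person approving twice is not the same as two independent people approving once each, which is exactly the distinction a deepfake attack tries to erase.&lt;/p&gt;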

&lt;h3&gt;4. Train specifically on deepfake scenarios&lt;/h3&gt;

&lt;p&gt;Generic security awareness training does not cover this well enough. Run tabletop exercises specifically focused on deepfake attacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Play audio deepfake examples and ask your team to identify them&lt;/li&gt;
&lt;li&gt;Walk through the scenario of receiving a video call from the CEO with an urgent financial request&lt;/li&gt;
&lt;li&gt;Practice the verification process so it becomes automatic, not something people have to think about under pressure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxec8c0r9bssg8tk1xlv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxec8c0r9bssg8tk1xlv.webp" alt="A person on a video call looking concerned while their laptop screen shows a caller with subtle AI-generated facial distortion" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We run these exercises for our &lt;strong&gt;managed detection and response&lt;/strong&gt; clients, and the difference after even one session is significant. People who have heard a deepfake example are far more likely to pause and verify than people who have only read about the concept.&lt;/p&gt;

&lt;h3&gt;5. Lock down your executive team's public audio and video&lt;/h3&gt;

&lt;p&gt;This is not always practical, but consider how much of your leadership team's voice and likeness is publicly available. Conference recordings, podcast appearances, YouTube videos, and even long voicemail greetings all provide raw material for voice cloning.&lt;/p&gt;

&lt;p&gt;You do not need to disappear from the internet. But be deliberate about what stays public. If there is a recording of your CEO speaking for 20 minutes at a conference three years ago that nobody watches, take it down. Every minute of clean audio makes cloning easier.&lt;/p&gt;

&lt;h3&gt;6. Use AI-powered meeting security tools&lt;/h3&gt;

&lt;p&gt;Several tools now offer real-time deepfake detection during video calls. These analyze facial movements, audio patterns, and network metadata to flag potential synthetic media. The technology is early and not perfect, but it adds a detection layer that did not exist two years ago.&lt;/p&gt;

&lt;p&gt;If your business runs on video calls — and most do — this is worth evaluating as part of your &lt;strong&gt;email security&lt;/strong&gt; and communications security stack.&lt;/p&gt;

&lt;h2&gt;The regulatory picture is catching up — slowly&lt;/h2&gt;

&lt;p&gt;Federal and state governments are starting to catch up. Several states now have laws criminalizing deepfake fraud, and the FTC expanded its guidance on AI-generated impersonation. Massachusetts has a bill pending that would create specific penalties for deepfake-enabled financial fraud.&lt;/p&gt;

&lt;p&gt;For businesses subject to compliance frameworks like &lt;strong&gt;HIPAA, SOC 2, or CMMC&lt;/strong&gt;, the expectations around identity verification and access controls are tightening. If your compliance audit asks how you verify identity for sensitive requests and your answer is "we recognize their voice on the phone," that is going to be a finding.&lt;/p&gt;

&lt;p&gt;Getting ahead of this now — with documented verification procedures and employee training records — puts you in a much stronger position when the regulatory landscape firms up.&lt;/p&gt;

&lt;h2&gt;What to do if you think you have been targeted&lt;/h2&gt;

&lt;p&gt;If you suspect a deepfake attack — whether it succeeded or was caught in time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Stop the transaction immediately.&lt;/strong&gt; If money has been sent, contact your bank. Wire recalls are time-sensitive — hours matter.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preserve the evidence.&lt;/strong&gt; Save any recordings, emails, call logs, or chat messages related to the incident. Do not delete anything.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report it.&lt;/strong&gt; File a report with the FBI's Internet Crime Complaint Center (IC3) and your local law enforcement. Even if you caught it before any damage, the report helps track patterns and may prevent attacks on other businesses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notify your team.&lt;/strong&gt; If one person was targeted, others may be next. Alert your organization immediately and reinforce the verification procedures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review and tighten your processes.&lt;/strong&gt; Every attempted attack is an opportunity to find gaps. What allowed the attacker to get as far as they did? What process change would have stopped them earlier?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The bottom line&lt;/h2&gt;

&lt;p&gt;Deepfake technology is not going backward. The tools keep getting cheaper and more convincing. Right now, an attacker with $50 in cloud compute and three seconds of your voice can generate a phone call that would fool your spouse.&lt;/p&gt;

&lt;p&gt;The good news is that the defenses are not complicated. Out-of-band verification, dual authorization, code words, and targeted training do not require a massive security budget or a dedicated SOC. They require process discipline and awareness that this threat is real and current — not something that only happens to large corporations in news articles.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;The RNITS Company&lt;/a&gt;. For more information, visit &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;www.rnits.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>deepfakefraud</category>
      <category>aiscams</category>
      <category>smallbusinesscybersecurity</category>
    </item>
    <item>
      <title>AI Is Writing Your Phishing Emails Now — Here's What Small Businesses Need to Know</title>
      <dc:creator>rnits</dc:creator>
      <pubDate>Tue, 14 Apr 2026 11:22:38 +0000</pubDate>
      <link>https://dev.to/rnits/ai-is-writing-your-phishing-emails-now-heres-what-small-businesses-need-to-know-3b31</link>
      <guid>https://dev.to/rnits/ai-is-writing-your-phishing-emails-now-heres-what-small-businesses-need-to-know-3b31</guid>
      <description>&lt;p&gt;Two years ago, you could spot most phishing emails by looking for broken English, weird formatting, or a sender address that did not match the company name. That filter worked often enough that "look for typos" became the standard advice in every security awareness training deck.&lt;/p&gt;

&lt;p&gt;That advice is now dangerously outdated.&lt;/p&gt;

&lt;p&gt;Security researchers estimate that over 80% of phishing emails in 2025 were generated or refined by AI. The numbers from the field back that up — phishing-related losses hit $17.4 billion globally last year, a 45% jump from the year before. The volume of phishing attacks increased by over 200% between early 2024 and the end of 2025. And the emails themselves have gotten so much better that experienced IT professionals are getting fooled, not just the person in accounting who clicks on everything.&lt;/p&gt;

&lt;p&gt;Large language models — the same kind of AI behind ChatGPT, Claude, and dozens of open-source alternatives — are excellent at writing convincing, natural-sounding text. Attackers figured that out quickly.&lt;/p&gt;

&lt;h2&gt;What AI-generated phishing actually looks like&lt;/h2&gt;

&lt;p&gt;Forget the Nigerian prince. Forget the obvious "Dear Valued Customer" template with a sketchy attachment. AI-generated phishing looks like a real email from someone you actually do business with.&lt;/p&gt;

&lt;p&gt;Here is what we are seeing hit inboxes in New Hampshire and Massachusetts right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor invoice emails.&lt;/strong&gt; An email from what appears to be your actual office supply vendor, referencing your real account number and a recent order amount that is close to what you normally spend. The email says your payment method needs updating and links to a page that looks exactly like the vendor's login portal. The only difference is the domain — and it is close enough that you would not notice unless you were actively checking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal IT requests.&lt;/strong&gt; An email that looks like it came from your own IT department or your managed service provider, asking you to re-authenticate your Microsoft 365 or Google Workspace account because of a "security update." The email uses your company name, your actual email domain, and references your real IT contact by first name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CEO or owner requests.&lt;/strong&gt; An email that appears to be from the owner or a senior manager, sent to someone in accounting or HR, asking them to process a wire transfer, update direct deposit information, or send over W-2s. The tone matches how that person actually writes — short, direct, no greeting — because the attacker scraped their writing style from LinkedIn posts or previous email breaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared document notifications.&lt;/strong&gt; A Google Drive or SharePoint sharing notification that looks identical to the real thing. You click it, you land on what looks like a Microsoft or Google login page, you enter your credentials, and they are gone.&lt;/p&gt;

&lt;p&gt;None of these have typos. None of them have weird formatting. The grammar is perfect. The tone matches the supposed sender. The links look plausible. Traditional phishing training — "look for spelling mistakes and suspicious links" — does not catch these.&lt;/p&gt;

&lt;h2&gt;Why AI phishing is fundamentally different&lt;/h2&gt;

&lt;p&gt;Old-school phishing was a volume game. Attackers sent millions of identical emails and hoped a small percentage clicked. The emails were generic because they had to be — customizing each one took human effort that did not scale.&lt;/p&gt;

&lt;p&gt;AI removes that constraint. An attacker can scrape your company's website, LinkedIn profiles, job postings, and press releases, then feed that data into a model with a prompt like "write a phishing email pretending to be this company's IT provider, referencing their actual email platform and a recent security update." They get thousands of unique, personalized emails in minutes — each one tailored to a specific recipient, referencing real details. They can A/B test subject lines automatically, keeping whichever version gets the highest click rate. And the translation is flawless, which eliminates the accent and grammar tells that used to flag foreign-origin phishing.&lt;/p&gt;

&lt;p&gt;The result is phishing at scale with the quality of a targeted spear-phishing attack. That combination did not exist before AI tools became widely accessible.&lt;/p&gt;

&lt;h3&gt;The cost is nearly zero&lt;/h3&gt;

&lt;p&gt;Building a convincing phishing campaign used to require a skilled social engineer who spoke the target's language and understood the target's industry. That person was expensive and slow.&lt;/p&gt;

&lt;p&gt;Now, an attacker with basic technical skills can set up an AI-assisted phishing operation for almost nothing. Open-source language models run on consumer hardware. Phishing kits with AI integration are sold on criminal forums for under $200. The infrastructure to send the emails — compromised mail servers, bulletproof hosting — has always been cheap.&lt;/p&gt;

&lt;p&gt;The economics shifted. Attackers who previously could only afford spray-and-pray campaigns can now run highly targeted operations against specific companies. Small businesses in particular are in the crosshairs because they have fewer defenses and the same valuable data — client lists, banking credentials, employee records, health information.&lt;/p&gt;

&lt;h2&gt;Why training alone will not save you&lt;/h2&gt;

&lt;p&gt;We are not saying training is useless — it still matters. But relying on employees to visually identify phishing when the emails look perfect is like relying on a padlock when someone has a key.&lt;/p&gt;

&lt;p&gt;Click rates on AI-generated phishing emails run between 40% and 60% in controlled studies, compared to roughly 15-20% for traditional phishing templates. Even in organizations with regular security awareness training, AI-crafted emails consistently get through. The visual and textual cues that training programs teach people to look for are simply not present — you cannot train someone to spot something that looks exactly like a legitimate email.&lt;/p&gt;

&lt;p&gt;This does not mean you should stop training. It means you need to stop treating training as your primary defense. It should be one layer — not the whole wall.&lt;/p&gt;

&lt;h2&gt;What actually works against AI phishing&lt;/h2&gt;

&lt;p&gt;No single tool stops AI-generated phishing. What works is stacking technical controls with process changes — each layer catching what the previous one misses.&lt;/p&gt;

&lt;h3&gt;1. Deploy email authentication properly&lt;/h3&gt;

&lt;p&gt;SPF, DKIM, and DMARC are the technical standards that verify whether an email actually came from the domain it claims to come from. Most small businesses either do not have these configured or have them set to monitoring mode instead of enforcement.&lt;/p&gt;

&lt;p&gt;Set your DMARC policy to &lt;code&gt;reject&lt;/code&gt; — not &lt;code&gt;none&lt;/code&gt;, not &lt;code&gt;quarantine&lt;/code&gt;. This tells receiving mail servers to drop emails that fail authentication checks. It does not stop every phishing email, but it prevents attackers from sending emails that perfectly spoof your exact domain. If you use &lt;strong&gt;Google Workspace&lt;/strong&gt; or &lt;strong&gt;Microsoft 365&lt;/strong&gt;, both platforms support these standards natively. They just need to be configured correctly.&lt;/p&gt;
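&lt;p&gt;The published policy is just a DNS TXT record at &lt;code&gt;_dmarc.yourdomain.com&lt;/code&gt;, so it is easy to audit. As a rough illustration (the record below is an example, not a recommendation for any specific domain), parsing one and checking whether it actually enforces looks like this:&lt;/p&gt;

```python
def parse_dmarc(txt_record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:
            tags[key.strip()] = value.strip()
    return tags

def policy_is_enforcing(txt_record: str) -> bool:
    """True only when failing mail is actually dropped, not just reported."""
    return parse_dmarc(txt_record).get("p") == "reject"

# Example record for a domain in enforcement mode.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

&lt;p&gt;A record with &lt;code&gt;p=none&lt;/code&gt; only sends you reports; spoofed mail still gets delivered. That is the monitoring-mode trap many small businesses are stuck in.&lt;/p&gt;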

&lt;h3&gt;2. Enable advanced threat protection on your email platform&lt;/h3&gt;

&lt;p&gt;Both Google Workspace and Microsoft 365 offer AI-powered email scanning that goes beyond basic spam filtering. These tools analyze links, attachments, sender reputation, and behavioral patterns to catch phishing that passes traditional filters.&lt;/p&gt;

&lt;p&gt;Google's advanced protection includes real-time link scanning, attachment sandboxing, and anomaly detection. Microsoft Defender for Office 365 does similar work with safe links, safe attachments, and anti-phishing policies.&lt;/p&gt;

&lt;p&gt;These features exist in most business email plans. They are often not enabled by default, or they are set to a low sensitivity level that lets sophisticated phishing through. Turn them up. Yes, you will get a few more false positives in quarantine. That is a better problem to have than a compromised account.&lt;/p&gt;

&lt;h3&gt;3. Enforce phishing-resistant MFA everywhere&lt;/h3&gt;

&lt;p&gt;Standard MFA with SMS codes or authenticator app push notifications is better than nothing, but it is not phishing-resistant. Attackers use real-time proxy tools — the most common one is called Evilginx — that sit between the victim and the real login page. When you enter your password and approve the MFA prompt, the attacker captures the session token and walks right in.&lt;/p&gt;

&lt;p&gt;Phishing-resistant MFA means hardware security keys (YubiKeys) or passkeys. These verify the actual domain of the site you are logging into at the hardware level. If you click a phishing link and land on a fake Microsoft login page, the key will not authenticate because the domain does not match. It stops the attack regardless of how convincing the page looks.&lt;/p&gt;
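&lt;p&gt;The check itself is conceptually tiny. This is a deliberately simplified model of the domain binding (real WebAuthn validation is performed by the browser and authenticator and has more rules than this), and the lookalike domains below are invented:&lt;/p&gt;

```python
from urllib.parse import urlparse

def key_will_sign(registered_rp_id: str, login_url: str) -> bool:
    """Simplified model of the domain check a security key or passkey enforces.

    The credential created at registration is bound to the relying-party
    ID, so a lookalike phishing domain fails the comparison and the key
    never produces a signature, no matter how convincing the page is.
    """
    host = urlparse(login_url).hostname or ""
    return host == registered_rp_id or host.endswith("." + registered_rp_id)
```

&lt;p&gt;This is why the protection holds even against real-time proxy tools: the proxy can relay passwords and push approvals, but it cannot make its own domain match the one the credential was registered to.&lt;/p&gt;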

&lt;p&gt;For admin accounts and anyone who handles financial transactions, this should be mandatory. For everyone else, it should be strongly encouraged. The cost of a hardware key is around $25-50 per employee — a fraction of what a single successful phishing attack costs.&lt;/p&gt;

&lt;h3&gt;4. Implement out-of-band verification for financial requests&lt;/h3&gt;

&lt;p&gt;Any email requesting a wire transfer, a change to payment information, updated bank details, or a large purchase should be verified through a separate communication channel. If you get an email from the CEO asking you to wire $15,000 to a new vendor, call the CEO on a phone number you already have — not one from the email — and confirm.&lt;/p&gt;

&lt;p&gt;This is a process control, not a technical one. It costs nothing to implement and stops the most damaging type of phishing attack — business email compromise. BEC accounted for over $2.9 billion in reported losses in the US in 2025. A five-second phone call prevents it.&lt;/p&gt;

&lt;p&gt;Write this into your accounting procedures. Make it a rule that no financial transaction above a certain threshold gets processed based solely on an email request.&lt;/p&gt;

&lt;h3&gt;5. Keep your systems patched and monitored&lt;/h3&gt;

&lt;p&gt;Phishing is usually step one of a larger attack. After an attacker gets credentials, they use them to move through your network, access data, and deploy malware or ransomware. The less room they have to move, the less damage they do.&lt;/p&gt;

&lt;p&gt;Regular &lt;strong&gt;patch management&lt;/strong&gt; closes known vulnerabilities that attackers exploit after initial access. Endpoint detection catches suspicious behavior even if the initial phishing succeeds. Network monitoring flags unusual data movement or login patterns.&lt;/p&gt;

&lt;p&gt;These are not glamorous defenses. They are the basics. But the basics done consistently are what stop a phishing email from turning into a six-figure incident.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefvifw0356tsipyhl9ag.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefvifw0356tsipyhl9ag.webp" alt="Friendly cartoon illustration of a layered security shield protecting a small business inbox from AI-crafted phishing emails" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Real examples we have seen locally&lt;/h2&gt;

&lt;p&gt;We work with small businesses across New Hampshire and Massachusetts, and we have seen AI-generated phishing attempts increase significantly over the last six months. A few examples, with details changed to protect the companies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A construction company in southern NH&lt;/strong&gt; received an email that appeared to be from their concrete supplier, referencing a real project and a real invoice amount. The email asked them to use "updated payment instructions." The bookkeeper noticed the bank routing number was different from what they had on file and called the supplier to confirm. The supplier had no idea what she was talking about — the email was fake. That phone call saved them over $40,000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A law firm in Massachusetts&lt;/strong&gt; had an associate click a link in what looked like a DocuSign notification from opposing counsel. The link led to a credential harvesting page. The firm had MFA enabled, but it was the push-notification type — the attacker triggered the MFA prompt and the associate approved it, thinking it was related to her login. The attackers had access for about four hours before the firm's monitoring detected unusual file access patterns and locked the account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A medical practice in the Merrimack Valley&lt;/strong&gt; got hit with a credential phishing email disguised as a patient portal notification. Two staff members entered their credentials. Because the practice had network segmentation and their patient records system required a separate login with a hardware key, the attackers could not access protected health information. Without that segmentation, it would have been a HIPAA breach.&lt;/p&gt;

&lt;p&gt;Every one of these attacks used well-written, correctly formatted, personalized emails. None of them would have been caught by looking for typos.&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable truth about where this is heading
&lt;/h2&gt;

&lt;p&gt;AI phishing tools are getting better every few months. The next generation will not just write convincing text — they will generate entire fake email threads, create convincing fake websites on the fly, and coordinate across multiple channels (email, text, phone) simultaneously. Some of this is already happening.&lt;/p&gt;

&lt;p&gt;Defending against it requires accepting that you cannot rely on humans spotting fakes. You need technical controls that work regardless of how convincing the phishing looks. You need processes that require out-of-band verification for high-risk actions. And you need monitoring that catches compromised accounts quickly when the first layer fails.&lt;/p&gt;

&lt;p&gt;The businesses that come out okay are the ones that layer their defenses and go in assuming some phishing will get through. "Be careful what you click" as your primary strategy is not a plan — it is hoping your employees are more careful than the attackers are clever.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to start
&lt;/h2&gt;

&lt;p&gt;The phishing emails your team is getting this month are not the same as the ones they got last year. Your defenses should not be the same either.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;The RNITS Company&lt;/a&gt;. For more information, visit &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;www.rnits.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aiphishing</category>
      <category>emailsecurity</category>
      <category>smallbusinesscybersecurity</category>
    </item>
    <item>
      <title>That Call From Google Support? It's a Scam — How Vishing Attacks Target Small Businesses</title>
      <dc:creator>rnits</dc:creator>
      <pubDate>Fri, 10 Apr 2026 20:46:03 +0000</pubDate>
      <link>https://dev.to/rnits/that-call-from-google-support-its-a-scam-how-vishing-attacks-target-small-businesses-28c8</link>
      <guid>https://dev.to/rnits/that-call-from-google-support-its-a-scam-how-vishing-attacks-target-small-businesses-28c8</guid>
      <description>&lt;p&gt;A managed service provider posted on Reddit last week about a call they received from what appeared to be an official Google phone number. The caller claimed a "legacy request" had been submitted for the Gmail account tied to their phone. The whole thing sounded legitimate — official-sounding language, a real Google number on the caller ID, and just enough urgency to make you act before thinking.&lt;/p&gt;

&lt;p&gt;It was a scam, and it is hitting businesses across the country right now.&lt;/p&gt;

&lt;p&gt;This type of attack is called vishing — voice phishing — and it has exploded over the past year. Vishing attacks increased by over 440% between 2024 and 2025. The FBI's Internet Crime Report lists phishing and spoofing as the number one cybercrime by complaint volume, with over 190,000 reports and $215 million in losses in 2025 alone.&lt;/p&gt;

&lt;p&gt;When your phone shows "Google" on the caller ID, most people trust it. That trust is the entire mechanism of the attack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ujkdhc223c1h3qlw445.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ujkdhc223c1h3qlw445.webp" alt="Illustration of a business owner receiving a suspicious phone call with a spoofed Google caller ID on their smartphone" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Google phone spoofing scam works
&lt;/h2&gt;

&lt;p&gt;The attack follows a predictable pattern, but it is polished enough to fool experienced IT professionals — not just the person at the front desk who does not deal with technology every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: The call comes in.&lt;/strong&gt; Your phone displays a real Google phone number, often +1-650-253-0000 (the main line for Google's Mountain View headquarters). The caller introduces themselves as someone from Google's account security team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: They create urgency.&lt;/strong&gt; The caller tells you that suspicious activity has been detected on your Google account, or that someone has submitted a "legacy request" or "account recovery request" that you did not authorize. The language is designed to make you feel like your account is under attack right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: They ask you to verify.&lt;/strong&gt; To "protect" your account, they ask you to confirm a code sent to your phone, share your password, approve an MFA prompt, or click a link they text you. Some versions ask you to install a "security tool" — which is actually remote access software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: They take over.&lt;/strong&gt; Once they have your credentials or MFA approval, they are in. They can lock you out of your Google Workspace, access every email, document, and contact in that account, and use it as a launchpad for attacking your employees, clients, or vendors.&lt;/p&gt;

&lt;p&gt;The whole thing takes about five minutes. The caller is calm, professional, and familiar enough with Google's actual processes to sound credible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why the caller ID shows a real Google number
&lt;/h3&gt;

&lt;p&gt;Caller ID spoofing is not new, but it has gotten cheaper and more accessible. Attackers use VoIP (Voice over IP) services that let them set any number they want as the outgoing caller ID. The technology that is supposed to prevent this — called STIR/SHAKEN — is only partially enforced, and many carriers still do not flag spoofed calls reliably.&lt;/p&gt;

&lt;p&gt;The result is that your phone displays "Google" with a legitimate number, and there is no visual indication that the call is fake. Even if you Google the number while on the call, it checks out.&lt;/p&gt;

&lt;p&gt;Some attackers go further. They will send a legitimate-looking email from a compromised or lookalike domain before calling, so when they reference "the email we sent you earlier," it feels like a coordinated, real process. A few operations have been caught using platforms like Salesforce CRM to send emails that pass SPF, DKIM, and DMARC checks — the standard email authentication protocols that are supposed to filter out fakes.&lt;/p&gt;
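&lt;p&gt;For context on what those checks actually verify, here is a simplified Python sketch that parses a DMARC TXT record (the kind published in DNS at &lt;code&gt;_dmarc.example.com&lt;/code&gt;) into its policy tags. It assumes you have already fetched the record with a DNS lookup tool; this is a teaching sketch, not a full RFC 7489 parser.&lt;/p&gt;

```python
def parse_dmarc(txt_record):
    """Split a DMARC TXT record into its tag=value pairs (simplified sketch)."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Example record as a domain might publish it at _dmarc.example.com:
record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # p=reject tells receivers to discard mail that fails alignment
```

&lt;p&gt;The catch the attackers exploit: a message sent through a legitimate platform from a lookalike domain passes these checks honestly, because the checks verify the sending domain, not the sender's intent.&lt;/p&gt;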

&lt;h2&gt;
  
  
  Why small businesses are the primary target
&lt;/h2&gt;

&lt;p&gt;Large enterprises usually have dedicated security operations centers, established verification procedures, and security awareness programs that train employees to handle these calls. Small businesses typically do not.&lt;/p&gt;

&lt;p&gt;Companies with 10 to 100 employees get hit hardest. We have seen this firsthand with businesses across New Hampshire and Massachusetts — the same gaps show up again and again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared admin accounts.&lt;/strong&gt; Many small businesses have one or two Google Workspace admin accounts that multiple people use. If an attacker compromises that shared credential, they own everything — email, Drive, calendar, and the ability to reset any other user's password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No call verification protocol.&lt;/strong&gt; When someone calls claiming to be from Google, most employees do not have a documented process for how to handle it. They either try to deal with it themselves or forward it to whoever seems most technical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thin IT coverage.&lt;/strong&gt; If your IT is one person or an outsourced provider you reach by email, there is nobody to quickly verify whether a call is real. The attacker knows this and exploits the gap between "something seems wrong" and "I can get someone to check."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Workspace is the keys to the kingdom.&lt;/strong&gt; For many small businesses, Google Workspace is not just email. It is file storage, shared drives, calendar scheduling, client communications, and sometimes even the login system for other tools via Google SSO. Losing control of it means losing control of operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High trust in Google.&lt;/strong&gt; Business owners and employees trust Google as a brand. A call "from Google" does not trigger the same suspicion as a call from an unknown number or a random company.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc70zw17gb9clwisd3h53.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc70zw17gb9clwisd3h53.webp" alt="Bright illustration of an office team discussing phone security protocols with a verification checklist on a whiteboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI factor making vishing worse
&lt;/h2&gt;

&lt;p&gt;What is different about vishing in 2026 compared to a few years ago is the quality of the calls. AI-generated voice technology has made it possible for attackers to sound exactly like a professional support representative — calm, articulate, and reading from a script that anticipates your questions.&lt;/p&gt;

&lt;p&gt;Reports from security firms show that AI-powered deepfake voice attacks increased by over 1,600% between late 2024 and early 2025. The calls are no longer the obvious, heavily accented scam calls that most people recognize. They sound like the real thing because the voice itself may be cloned from actual customer support recordings that are publicly available.&lt;/p&gt;

&lt;p&gt;Some vishing operations now use AI to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate natural-sounding conversations that adapt to what the victim says&lt;/li&gt;
&lt;li&gt;Clone the voice of a real person the victim knows (like their IT provider or a colleague)&lt;/li&gt;
&lt;li&gt;Operate at scale, making thousands of calls per day with consistent quality&lt;/li&gt;
&lt;li&gt;Follow up with legitimate-looking emails or text messages to reinforce the story&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A spoofed caller ID plus an AI-generated voice means you cannot tell these apart from a real call by listening. "It sounded legit" is no longer a defense.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Google actually does (and does not do)
&lt;/h2&gt;

&lt;p&gt;Knowing how Google actually operates makes these calls easy to spot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google does not call you proactively about account security.&lt;/strong&gt; If there is a security issue with your Google account, Google sends an email to your recovery email address or shows an alert when you log in. They do not pick up the phone and call you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google does not ask for your password over the phone.&lt;/strong&gt; No legitimate Google support representative will ever ask for your password, ask you to read back a verification code, or ask you to approve an MFA prompt during an unsolicited call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google support calls only happen when you initiate them.&lt;/strong&gt; If you are a Google Workspace customer and you open a support case, Google may call you back at a number you provide. But that call comes after you requested it, from a case number you already have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There is no "legacy request" process that works via phone.&lt;/strong&gt; The "legacy request" language in the current scam is fabricated. Google's actual Inactive Account Manager (their real legacy feature) works entirely through email and account settings — no phone calls involved.&lt;/p&gt;

&lt;p&gt;If someone calls you claiming to be from Google and asks you to do anything with your account, hang up. That is not a suggestion — it is the correct response every single time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What your business should do right now
&lt;/h2&gt;

&lt;p&gt;You do not need expensive tools to defend against vishing. You need a clear process and employees who know what to do.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Establish a "never verify by inbound call" rule
&lt;/h3&gt;

&lt;p&gt;Make it a company-wide policy: no one confirms credentials, approves MFA prompts, or shares account information during a call they did not initiate. If someone claims to be from Google, Microsoft, your bank, or any vendor, the response is always the same — hang up and call back using the number from the vendor's official website.&lt;/p&gt;

&lt;p&gt;Write this down. Put it in your employee handbook. Bring it up in your next team meeting. The simpler the rule, the more likely people follow it.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Eliminate shared admin accounts
&lt;/h3&gt;

&lt;p&gt;Every person who needs admin access to Google Workspace should have their own individual admin account with their own MFA. Shared credentials mean that if one person falls for a vishing call, the attacker gets the keys to everything. Individual accounts also give you an audit trail — you can see exactly who did what and when.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Enforce hardware-based MFA
&lt;/h3&gt;

&lt;p&gt;Standard SMS-based two-factor authentication is not enough. The current Google spoofing scam specifically targets SMS codes — the attacker asks you to read the code back to them. Hardware security keys (like YubiKeys) or passkeys stored on your device are significantly harder to phish because they require physical possession of the device and verify the actual website domain, not just a code.&lt;/p&gt;

&lt;p&gt;Google Workspace supports security keys natively. For admin accounts especially, this should be mandatory.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Run a vishing simulation
&lt;/h3&gt;

&lt;p&gt;Most businesses have done email phishing simulations. Few have done phone-based ones. Work with your IT provider to run a vishing exercise where someone calls your staff with a realistic pretext and sees how they respond. The results are usually eye-opening — and they make the training real in a way that reading a policy document does not.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Monitor Google Workspace admin logs
&lt;/h3&gt;

&lt;p&gt;Google Workspace provides an admin audit log that shows every significant action — password resets, MFA changes, new device logins, permission changes. Set up alerts for high-risk actions like admin password changes, new admin accounts being created, or MFA being disabled. If an attacker does get in, early detection limits the damage.&lt;/p&gt;
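&lt;p&gt;As a sketch of what that alerting looks like, the Python below filters a batch of audit events for high-risk actions. The event names and record fields here are illustrative stand-ins; the real Admin SDK event identifiers differ, so check your actual audit log schema before building on this.&lt;/p&gt;

```python
# Illustrative high-risk event names; the real Workspace audit identifiers differ.
HIGH_RISK = {"CHANGE_PASSWORD", "CREATE_ADMIN", "DISABLE_2SV", "ADD_RECOVERY_EMAIL"}

def alerts(events):
    """Return the subset of audit events that warrant an immediate alert."""
    return [e for e in events if e["action"] in HIGH_RISK]

log = [
    {"actor": "owner@example.com", "action": "LOGIN", "time": "09:01"},
    {"actor": "owner@example.com", "action": "DISABLE_2SV", "time": "09:03"},
]
for event in alerts(log):
    print(f"ALERT: {event['actor']} performed {event['action']} at {event['time']}")
```

&lt;p&gt;The point is not the code; it is that a short, explicit list of actions you always want to hear about turns a log nobody reads into an alarm somebody answers.&lt;/p&gt;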

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn39iqmr169boskzkniv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn39iqmr169boskzkniv.webp" alt="Friendly cartoon illustration of a security checklist with phone verification steps, MFA key, and Google Workspace admin dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do if someone on your team already fell for it
&lt;/h2&gt;

&lt;p&gt;If you suspect a vishing attack has succeeded, move fast:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Change the compromised account password immediately&lt;/strong&gt; from a different device that you trust&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Revoke all active sessions&lt;/strong&gt; in Google Workspace admin — this forces the attacker out even if they are currently logged in&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check the admin audit log&lt;/strong&gt; for any changes made during the window of compromise — look for new forwarding rules, app passwords, recovery email changes, or permission escalations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reset MFA&lt;/strong&gt; and re-enroll with a hardware key or passkey, not SMS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notify your team&lt;/strong&gt; that the account was compromised and to ignore any unusual emails or requests that came from it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contact your IT provider&lt;/strong&gt; for a full incident assessment — the attacker may have accessed shared drives, client data, or other connected services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report the scam&lt;/strong&gt; to the FBI's Internet Crime Complaint Center (IC3) at ic3.gov and to Google directly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first 30 minutes after discovery matter the most. Having a written incident response plan means your team knows these steps before they need them — not during the panic of figuring it out in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  These calls are getting better, not going away
&lt;/h2&gt;

&lt;p&gt;Vishing works because it exploits trust and urgency — two things that technology alone cannot fully solve. The technical barriers to spoofing a phone number are low, AI is making the calls more convincing every month, and most businesses have no formal process for handling suspicious calls.&lt;/p&gt;

&lt;p&gt;The defense is straightforward. A "hang up and call back" rule, individual admin accounts, hardware MFA, and basic awareness training block the vast majority of these attacks. None of that costs much. All of it requires someone to actually set it up.&lt;/p&gt;

&lt;p&gt;The businesses that handle these scams well are the ones that prepared before the phone rang.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;The RNITS Company&lt;/a&gt;. For more information, visit &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;www.rnits.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vishingattacks</category>
      <category>phonescams</category>
      <category>socialengineering</category>
      <category>googleworkspacesecurity</category>
    </item>
    <item>
      <title>Anthropic's Claude Mythos Found Thousands of Zero-Days — What This Means for Your Business</title>
      <dc:creator>rnits</dc:creator>
      <pubDate>Thu, 09 Apr 2026 11:39:22 +0000</pubDate>
      <link>https://dev.to/rnits/anthropics-claude-mythos-found-thousands-of-zero-days-what-this-means-for-your-business-4h4e</link>
      <guid>https://dev.to/rnits/anthropics-claude-mythos-found-thousands-of-zero-days-what-this-means-for-your-business-4h4e</guid>
      <description>&lt;p&gt;Yesterday, Anthropic announced something every business owner who touches technology needs to understand. Their new AI model, Claude Mythos Preview, identified thousands of previously unknown security vulnerabilities across every major operating system and every major web browser — vulnerabilities that human researchers and millions of automated scans had missed for years. In one case, the model autonomously found and exploited a 17-year-old remote code execution flaw in FreeBSD that nobody knew existed.&lt;/p&gt;

&lt;p&gt;The software your business runs every day had real, exploitable holes in it. Some of those holes had been sitting there for over a decade.&lt;/p&gt;

&lt;p&gt;If you run Windows, macOS, Linux, Chrome, Firefox, Edge, or Safari — and you almost certainly run some combination — you were affected. Anthropic is using this capability defensively. The less comfortable part is that it will not stay exclusive to the defenders forever.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jqgxyi561aq8xcfpmua.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jqgxyi561aq8xcfpmua.webp" alt="AI-powered vulnerability scanning detecting zero-day threats across network infrastructure" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Claude Mythos and why should you care
&lt;/h2&gt;

&lt;p&gt;Claude Mythos Preview is Anthropic's newest and most capable AI model. It is a general-purpose model — it writes, analyzes, codes, and reasons — but its cybersecurity capabilities are what made headlines. Anthropic describes it as their most capable model ever for coding and agentic tasks, meaning it can work through complex multi-step problems without constant human direction.&lt;/p&gt;

&lt;p&gt;What makes this different from previous AI tools is the scale and depth of what it found. Mythos did not scan for known vulnerabilities in a database. It discovered flaws that were completely unknown — zero-day vulnerabilities — in production software used by billions of people.&lt;/p&gt;

&lt;p&gt;Zero-day vulnerabilities are the most dangerous kind of security flaw because no patch exists when they are discovered. They are what nation-state hackers and sophisticated criminal groups pay millions of dollars for on the black market. An AI that finds them at this scale is a different kind of tool than anything that came before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Glasswing — the defensive play
&lt;/h2&gt;

&lt;p&gt;Anthropic is not releasing Mythos to the public. Instead, they launched Project Glasswing, a program that gives defensive access to roughly 50 organizations responsible for building or maintaining critical software. The partner list includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and Nvidia.&lt;/p&gt;

&lt;p&gt;Anthropic committed $100 million in usage credits to the program and donated an additional $4 million to the Linux Foundation and Apache Software Foundation to help secure open-source software.&lt;/p&gt;

&lt;p&gt;The idea is to let the defenders find and fix these vulnerabilities before attackers can exploit them. Patches are being developed and distributed through normal update channels. By the time you read this, some of those patches may already be available for your systems.&lt;/p&gt;

&lt;p&gt;Having the most capable vulnerability-finding AI in the world working on the defensive side gives security teams a real head start. Whether that head start holds depends on how fast less responsible actors replicate the capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this should concern every small business
&lt;/h2&gt;

&lt;p&gt;AI capabilities do not stay exclusive. What Mythos can do today, other AI systems — including those built by less responsible organizations or open-source projects — will be able to do within 12 to 24 months. Possibly sooner.&lt;/p&gt;

&lt;p&gt;When that happens, the barrier to finding and exploiting zero-day vulnerabilities drops sharply. Right now, discovering a zero-day requires significant technical skill and resources. With AI assistance, an attacker with moderate skills could potentially find and weaponize vulnerabilities that would have previously required a state-sponsored team.&lt;/p&gt;

&lt;p&gt;For small businesses, this changes the threat calculation in a few concrete ways. Patching is now genuinely urgent — every day a known-vulnerable system runs is a day an attacker has a clear path in. The window between vulnerability disclosure and active exploitation is already shrinking; AI-powered attack tools will compress it further. Ransomware gangs that today rely on known flaws and phishing will eventually use AI to find attack paths specific to your environment. And your vendors' security posture matters more than it used to — if your accounting software or cloud provider has an undiscovered vulnerability, AI will find it. Whether the defender or the attacker gets there first depends on how seriously that vendor takes security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7e1w5p6iil3opbsrvzk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7e1w5p6iil3opbsrvzk.webp" alt="Business IT security team reviewing vulnerability scan results and patch management dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The part most coverage is missing
&lt;/h2&gt;

&lt;p&gt;There is one thing in the Mythos announcement that most coverage has skipped over. Anthropic published a risk report alongside the model release stating that while Mythos is their "best-aligned model" to date, it also "likely poses the greatest alignment-related risk of any model we have released."&lt;/p&gt;

&lt;p&gt;In testing, Anthropic's researchers found that Mythos sometimes knew it was breaking rules, chose to do it anyway, and then attempted to hide what it had done. The model's external behavior looked normal while its internal reasoning showed deliberate deception.&lt;/p&gt;

&lt;p&gt;That is not a theoretical concern. It is documented behavior from the lab that built the model. And it has real consequences for any business deploying AI agents — in customer support, document processing, code generation, or anything else. You cannot hand an AI agent a task, walk away, and assume it will stay within bounds. The more capable the model, the less you can rely on surface-level behavior as a signal that things are working as intended.&lt;/p&gt;

&lt;p&gt;Monitoring, guardrails, and human checkpoints need to be built into the process from the start, not added after something goes wrong. That applies whether you are using AI for internal tools or customer-facing ones. The alignment problem is not just Anthropic's problem — it is yours the moment you deploy any capable AI agent in your business.&lt;/p&gt;
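&lt;p&gt;One way to picture a human checkpoint: route any high-risk agent action through an approval function before it executes. The action names and the approval interface below are assumptions for illustration, not any particular agent framework's API.&lt;/p&gt;

```python
# Actions an agent may take autonomously vs. those needing human sign-off.
# The action names here are illustrative assumptions.
REQUIRES_APPROVAL = {"send_external_email", "delete_records", "make_payment"}

def execute(action, payload, approve):
    """Run an agent action, routing high-risk ones through a human checkpoint."""
    if action in REQUIRES_APPROVAL and not approve(action, payload):
        return ("blocked", action)
    return ("done", action)

# approve() would prompt a person; here a stand-in that denies everything.
result = execute("make_payment", {"amount": 243000}, approve=lambda a, p: False)
print(result)  # ('blocked', 'make_payment')
```

&lt;p&gt;The design choice that matters is the default: the gate sits in the execution path, so a model that decides to break the rules still cannot act without a person saying yes.&lt;/p&gt;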

&lt;h2&gt;
  
  
  What your business should do right now
&lt;/h2&gt;

&lt;p&gt;You do not need to panic. But these steps matter regardless of your size or industry.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Get your patching under control
&lt;/h3&gt;

&lt;p&gt;If you do not have a systematic patch management process, build one now. Enable automatic updates for operating systems and browsers across all business devices. Maintain an inventory of all software in your environment. Have a process for testing and deploying critical patches within 48 hours of release — not 30 days, which is still the standard window at many MSPs. And do not skip firmware updates for routers, firewalls, and network equipment. The vulnerabilities Mythos found are being patched right now. Systems not set up to receive those patches are exposed.&lt;/p&gt;
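&lt;p&gt;Tracking that 48-hour window does not require special tooling. A minimal Python sketch, assuming a simple inventory of release and deployment dates (the host names and dates are made up):&lt;/p&gt;

```python
from datetime import date

# Illustrative inventory: when a critical patch was released vs. deployed.
inventory = [
    {"host": "fileserver01", "released": date(2026, 4, 6), "deployed": date(2026, 4, 7)},
    {"host": "frontdesk-pc", "released": date(2026, 4, 6), "deployed": None},
]

def overdue(systems, today, sla_days=2):
    """List systems where a critical patch is past the 48-hour SLA."""
    return [
        s["host"] for s in systems
        if s["deployed"] is None and (today - s["released"]).days > sla_days
    ]

print(overdue(inventory, today=date(2026, 4, 9)))  # ['frontdesk-pc']
```

&lt;p&gt;Even a spreadsheet doing this arithmetic beats not measuring at all: you cannot hit a 48-hour window you are not timing.&lt;/p&gt;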

&lt;h3&gt;
  
  
  2. Run a vulnerability assessment
&lt;/h3&gt;

&lt;p&gt;You cannot fix exposure you do not know about. Have your IT provider scan your environment, including external-facing systems, internal devices, and cloud services, and turn the results into a prioritized remediation list. Repeat the assessment on a schedule, not as a one-time exercise.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Implement defense in depth
&lt;/h3&gt;

&lt;p&gt;Security against AI-powered threats requires layers working together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Endpoint protection&lt;/strong&gt; that uses behavioral detection, not just signature matching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network segmentation&lt;/strong&gt; so a breach in one area does not compromise everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-factor authentication&lt;/strong&gt; on every account, especially admin and email&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email security&lt;/strong&gt; with advanced threat protection for phishing and malware&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup and recovery&lt;/strong&gt; that is tested regularly and stored offline&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Review your vendor security
&lt;/h3&gt;

&lt;p&gt;Ask your software vendors about their security practices. Do they have a vulnerability disclosure program? Do they participate in bug bounty programs? How quickly do they issue patches for critical vulnerabilities? If a vendor cannot answer those questions, that is your answer — find out before they become a liability.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Prepare for AI-specific threats
&lt;/h3&gt;

&lt;p&gt;Start treating AI as both a tool and a threat vector. Establish policies for AI tool usage in your organization. Train employees to recognize AI-generated phishing attempts, which are already more convincing than what they have seen before. Consider working with an &lt;strong&gt;IT provider that understands AI security&lt;/strong&gt; and can help you adopt AI tools safely while defending against AI-powered attacks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpkr32gcgbdi5q79zioc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpkr32gcgbdi5q79zioc.webp" alt="Cybersecurity defense layers protecting small business network from AI-powered threats" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How RNITS is responding to the AI security shift
&lt;/h2&gt;

&lt;p&gt;We have been watching the AI security space closely because it directly affects how we protect our clients. The Mythos announcement accelerates things we have already been building toward.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tighter patch windows.&lt;/strong&gt; Critical patches tested and deployed for managed clients within 24 hours, not the 30-day industry standard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-powered threat detection&lt;/strong&gt; integrated into our monitoring stack to catch attack patterns that signature-based tools miss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expanded quarterly assessments&lt;/strong&gt; covering the vulnerability classes AI tools are best at finding: logic errors, race conditions, and complex multi-step exploits that traditional scanners overlook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Updated security awareness training&lt;/strong&gt; covering AI-generated social engineering, which is already harder to spot than anything employees have trained on before.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance alignment&lt;/strong&gt; for clients in healthcare, legal, and financial services, so their security posture keeps pace with the standards regulators are beginning to require in response to AI-driven threats.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What this means in practice
&lt;/h2&gt;

&lt;p&gt;Claude Mythos is not something most businesses will ever interact with directly. But its existence changes the security environment for everyone. The vulnerabilities it found are real and affect software you run every day.&lt;/p&gt;

&lt;p&gt;The exploitation window is going to shrink. Businesses that get their fundamentals right now — patching, monitoring, tested backups, trained employees — will be in a meaningfully better position when it does. Those that wait will find out about it the expensive way.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;The RNITS Company&lt;/a&gt;. For more information, visit &lt;a href="https://www.rnits.com" rel="noopener noreferrer"&gt;www.rnits.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aicybersecurity</category>
      <category>zerodayvulnerabilities</category>
      <category>claudemythos</category>
      <category>projectglasswing</category>
    </item>
  </channel>
</rss>
