<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dylan Fitzgerald</title>
    <description>The latest articles on DEV Community by Dylan Fitzgerald (@arubis).</description>
    <link>https://dev.to/arubis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2806679%2F7eb99311-8137-4cd5-8abf-5a9975734827.jpeg</url>
      <title>DEV Community: Dylan Fitzgerald</title>
      <link>https://dev.to/arubis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arubis"/>
    <language>en</language>
    <item>
      <title>AI is Hallucinating Package Names - And Hackers Are Ready</title>
      <dc:creator>Dylan Fitzgerald</dc:creator>
      <pubDate>Thu, 26 Jun 2025 00:56:22 +0000</pubDate>
      <link>https://dev.to/arubis/ai-is-hallucinating-package-names-and-hackers-are-ready-dli</link>
      <guid>https://dev.to/arubis/ai-is-hallucinating-package-names-and-hackers-are-ready-dli</guid>
      <description>&lt;h2&gt;The 19.6% Problem Nobody's Talking About&lt;/h2&gt;

&lt;p&gt;If you're using AI to help write code (and let's be honest, who isn't?), there's roughly a 1-in-5 chance it's telling you to install packages that don't exist.&lt;/p&gt;

&lt;p&gt;Worse? Hackers know this. They're already registering these phantom packages with malware.&lt;/p&gt;

&lt;h2&gt;What is Slopsquatting?&lt;/h2&gt;

&lt;p&gt;Seth Larson from the Python Software Foundation coined the term "slopsquatting" to describe this emerging attack vector. It's like typosquatting's evil AI-powered cousin.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. AI hallucinates a plausible package name (e.g., 'express-validator-extended')
2. Attackers analyze AI outputs to predict these hallucinations
3. They register the fake packages with malicious code
4. Developers copy-paste AI suggestions (or autofill them with agentic tools) and unknowingly install malware
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;The Numbers Are Staggering&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.usenix.org/conference/usenixsecurity25/presentation/spracklen" rel="noopener noreferrer"&gt;USENIX 2025 study&lt;/a&gt; tested 16 coding models and found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Average hallucination rate&lt;/strong&gt;: 19.6%&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commercial models&lt;/strong&gt; (GPT-4, Claude): 5.2%&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open-source models&lt;/strong&gt;: 21.7%&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Total unique hallucinated packages found&lt;/strong&gt;: 205,474&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Real-World Example: &lt;code&gt;huggingface-cli&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Researchers created a dummy package called &lt;code&gt;huggingface-cli&lt;/code&gt; - a name frequently hallucinated by AI models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Results after 3 months&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;30,000+ downloads&lt;/li&gt;
&lt;li&gt;Major companies had it in their requirements&lt;/li&gt;
&lt;li&gt;Zero actual functionality (thankfully just empty, not malicious)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why Traditional Security Tools Miss This&lt;/h2&gt;

&lt;p&gt;Your current security stack probably includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dependency scanners&lt;/strong&gt;: Check known vulnerabilities in real packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SAST tools&lt;/strong&gt;: Analyze your code for security issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License compliance&lt;/strong&gt;: Ensure you're using approved packages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But none of these ask: &lt;em&gt;"Should this package even exist?"&lt;/em&gt;&lt;/p&gt;
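&lt;p&gt;That question can be asked mechanically. As a minimal sketch (not any particular vendor's implementation): PyPI's JSON API returns HTTP 404 for names that were never registered, so an existence check is one request per package. The &lt;code&gt;fetch&lt;/code&gt; parameter here is an illustrative seam so the logic can be exercised without network access.&lt;/p&gt;

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def package_exists(name, fetch=None):
    """Return True if `name` is registered on PyPI.

    `fetch` maps a URL to an HTTP status code. It defaults to a real
    request against PyPI's JSON API (which 404s for unregistered
    names); pass a stub to avoid network access.
    """
    url = "https://pypi.org/pypi/%s/json" % name
    if fetch is None:
        def fetch(u):
            try:
                urlopen(u)
                return 200
            except HTTPError as err:
                return err.code
    return fetch(url) == 200

def flag_suspect_packages(names, fetch=None):
    """Fail closed: return every name the registry has never heard of."""
    return [n for n in names if not package_exists(n, fetch)]
```

&lt;p&gt;The same shape works for npm (&lt;code&gt;https://registry.npmjs.org/&lt;/code&gt;) or any registry with a per-package endpoint.&lt;/p&gt;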
&lt;h2&gt;The Detection vs. Remediation Gap&lt;/h2&gt;

&lt;p&gt;Even if tools could detect slopsquatting, there's a bigger problem:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="c1"&gt;// What current tools do:&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;⚠️ Warning: 'express-auth-validator' may not be a legitimate package&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

  &lt;span class="c1"&gt;// What fast-moving teams need:&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;✅ Fixed: Replaced with 'express-validator' and updated imports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most security tools stop at detection. But with AI generating code 10x faster, we need automated fixes that match that speed.&lt;/p&gt;

&lt;h2&gt;Building AI-Aware Security&lt;/h2&gt;

&lt;p&gt;At RSOLV, we're tackling this with a three-pronged approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI-Era Detection&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def detect_hallucinated_package(package_name, language):
  # Check package registry existence
  # Analyze naming patterns common in hallucinations
  # Compare against known AI suggestion patterns
  # Check registration date vs AI training cutoffs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
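&lt;p&gt;To make one of the heuristics above concrete: hallucinated names often graft a plausible suffix onto a real package's name (&lt;code&gt;huggingface-cli&lt;/code&gt;, &lt;code&gt;express-validator-extended&lt;/code&gt;). The suffix list below is an illustrative assumption, not our actual rule set.&lt;/p&gt;

```python
# Hypothetical naming-pattern heuristic; the suffix list is an
# assumption for illustration, not a production rule set.
SUSPECT_SUFFIXES = ("-extended", "-cli", "-helper", "-utils", "-wrapper", "-plus")

def looks_hallucinated(package_name, known_packages):
    """Flag names that are an unknown 'real package + suffix' combination."""
    if package_name in known_packages:
        return False
    for suffix in SUSPECT_SUFFIXES:
        if package_name.endswith(suffix):
            base = package_name[: -len(suffix)]
            # A known base with an unknown suffixed variant is the
            # classic slopsquatting shape.
            if base in known_packages:
                return True
    return False
```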



&lt;ol start="2"&gt;
&lt;li&gt;Automated Remediation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead of just flagging issues, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify the likely intended package&lt;/li&gt;
&lt;li&gt;Generate a working fix&lt;/li&gt;
&lt;li&gt;Create a PR with the corrected dependency&lt;/li&gt;
&lt;li&gt;Include security impact analysis&lt;/li&gt;
&lt;/ul&gt;
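&lt;p&gt;The "identify the likely intended package" step can be approximated with fuzzy matching against the real registry's name list. The standard library's &lt;code&gt;difflib&lt;/code&gt; is enough for a sketch; our actual matcher is more involved, and the 0.8 cutoff here is an illustrative assumption.&lt;/p&gt;

```python
import difflib

def suggest_real_package(hallucinated, registry_names):
    """Return the closest legitimate name, or None if nothing is close.

    cutoff=0.8 is a conservative illustrative threshold: better to
    suggest nothing than to swap in the wrong dependency.
    """
    matches = difflib.get_close_matches(hallucinated, registry_names, n=1, cutoff=0.8)
    return matches[0] if matches else None
```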

&lt;p&gt;Automated remediation is our primary value offering, and it's what lets us take on slopsquatting as a lightweight side project rather than a whole new product.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Success-Based Alignment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We only get paid when you merge our fixes. No false positives eating your budget. No seat licenses for tools that just create more backlog.&lt;/p&gt;

&lt;h2&gt;What You Can Do Today&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Immediate Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Audit recent AI-assisted code for suspicious package names&lt;/li&gt;
&lt;li&gt;Check package creation dates - be wary of very new packages&lt;/li&gt;
&lt;li&gt;Verify package legitimacy before installing:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check npm
npm view [package-name]

# Check PyPI (note: 'pip show' only inspects already-installed packages)
pip index versions [package-name]

# Check with your favorite package manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Long-term Protection:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Implement registry validation in your CI/CD pipeline&lt;/li&gt;
&lt;li&gt;Use AI coding tools with caution - always verify package suggestions&lt;/li&gt;
&lt;li&gt;Consider automated remediation for when issues are found&lt;/li&gt;
&lt;/ol&gt;
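&lt;p&gt;Registry validation in CI/CD can be a small gate script: parse the dependency names out of &lt;code&gt;requirements.txt&lt;/code&gt; and fail the build if any name is unknown. The parser below is a hedged sketch that only handles plain &lt;code&gt;name==version&lt;/code&gt; pins, comments, and blank lines; extras, markers, and URL requirements are out of scope.&lt;/p&gt;

```python
import re

def parse_requirement_names(requirements_text):
    """Extract package names from simple 'name==version' pins.

    Deliberately minimal: skips comments and blank lines; anything
    fancier than a plain pin is out of scope for this sketch.
    """
    names = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        match = re.match(r"^([A-Za-z0-9._-]+)", line)
        if match:
            names.append(match.group(1).lower())
    return names

def ci_gate(requirements_text, registry_names):
    """Return pinned names missing from the registry (empty list = pass)."""
    registry = {n.lower() for n in registry_names}
    return [n for n in parse_requirement_names(requirements_text) if n not in registry]
```

&lt;p&gt;Wire the non-empty result to a non-zero exit code and the build fails before a phantom package is ever installed.&lt;/p&gt;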

&lt;h2&gt;The Future of AI Security&lt;/h2&gt;

&lt;p&gt;As AI adoption accelerates, we're seeing entirely new vulnerability classes emerge. Slopsquatting is just the beginning.&lt;/p&gt;

&lt;p&gt;The security industry needs to evolve from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reactive → Proactive: Anticipating AI-specific threats&lt;/li&gt;
&lt;li&gt;Detection → Remediation: Fixing faster than AI can create problems&lt;/li&gt;
&lt;li&gt;Generic → Contextual: Understanding AI behavior patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Join the Conversation&lt;/h2&gt;

&lt;p&gt;We're building in public and sharing our discoveries along the way. Check out our &lt;a href="https://www.indiehackers.com/post/built-the-wrong-thing-at-the-wrong-time-but-discovered-something-worse-or-better-0bc8629cc0" rel="noopener noreferrer"&gt;first IndieHackers post&lt;/a&gt; where we dive deeper into how we discovered this while building automated security remediation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to see if your codebase has AI-hallucinated dependencies?&lt;/strong&gt; Pay us a visit at &lt;a href="https://rsolv.dev?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=slopsquatting" rel="noopener noreferrer"&gt;RSOLV.dev&lt;/a&gt; - we detect and fix security issues automatically.&lt;/p&gt;

&lt;p&gt;What's your experience with AI code generation? Have you noticed any suspicious package suggestions? Let's discuss in the comments!&lt;/p&gt;

</description>
      <category>security</category>
      <category>aisecurity</category>
      <category>ai</category>
      <category>supplychain</category>
    </item>
  </channel>
</rss>
