The head of the UK's National Cyber Security Centre (NCSC) stood up at RSA Conference last week and called the security risks from AI-generated code "intolerable."
The same week, Cursor's CEO warned that vibe coding builds "shaky foundations" that eventually "crumble."
The same week, someone compromised LiteLLM's PyPI package and got 47,000 poisoned downloads in 46 minutes.
These aren't separate stories. They're the same story.
What the NCSC actually said
The NCSC CEO called for international cooperation on vibe coding security. Not guidelines. Not best practices. International cooperation. That's the language governments use when they think a problem is bigger than any one country can solve.
Why? Because vibe-coded apps are shipping to production at a rate that outpaces any security review process. The code compiles. The tests pass. The app works. The security is broken.
What "broken security" actually looks like
We've been scanning vibe-coded apps for months. The pattern is the same every time:
Supabase RLS disabled or misconfigured. Default Lovable setup ships with Row Level Security that checks "is this user authenticated?" instead of "does this user own this data?" Any logged-in user can read any other user's records. For a todo app, whatever. For a health app with 8,700 users storing body metrics? That's a data breach waiting to happen.
API keys in client-side JavaScript bundles. Vite and Next.js handle environment variables differently (Vite exposes anything prefixed `VITE_` to the client; Next.js exposes `NEXT_PUBLIC_`). Vibe coders rarely know the difference. We regularly find Supabase anon keys, Stripe publishable keys, and occasionally secret keys sitting in the JS bundle anyone can read with browser DevTools.
Zero rate limiting on auth endpoints. No brute-force protection on login. No cooldown on password reset. No limit on API calls. The AI never adds these because nobody prompts for them.
No input validation on database-touching forms. The AI builds the form. The form submits to Supabase. Nothing in between checks whether the input is valid, safe, or even the expected type.
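The RLS failure mode is easiest to see side by side. A sketch in Supabase-flavored SQL, using a hypothetical `health_metrics` table with a `user_id` column (table and policy names are illustrative):

```sql
-- Too permissive: any logged-in user can read every row in the table
create policy "read_metrics" on health_metrics
  for select using (auth.role() = 'authenticated');

-- Ownership check: a user only sees rows they own
create policy "read_own_metrics" on health_metrics
  for select using (auth.uid() = user_id);
```

Both policies compile, both "work" in a demo where you test with one account. Only the second one stops user A from reading user B's body metrics.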
The numbers
- Escape.tech scanned 5,600 AI-built apps. 60% failed basic security checks.
- Wikipedia tracked 1,645 Lovable-created web apps. 170 had personal data access issues.
- ReversingLabs documented a cascading supply chain attack (TeamPCP) that hit LiteLLM and Telnyx in the same week, compromising 47,000 and 742,000 downloads respectively.
What Cursor's CEO got wrong
Michael Truell compared vibe coding to "erecting four walls and a roof while being oblivious to the details lurking beneath the floorboards or within the wiring."
The metaphor is wrong. The floorboards are fine. The wiring works. The problem is that the house has no locks on the doors, no fence around the yard, and the windows don't close. Everything functions. Nothing is secure.
The fix isn't "learn to code properly" as Truell implies. The fix is to review your security defaults before you deploy: 20 minutes checking RLS, env vars, auth flows, and rate limiting. That's it.
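Part of that 20-minute review can be scripted. A minimal sketch in TypeScript (Node) of the leaked-key check; `bundle-sample.js` is a stand-in here, and a real scan would read your built output instead:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Illustrative bundle contents; a real scan reads your built output,
// e.g. dist/assets/*.js (Vite) or .next/static/chunks/*.js (Next.js).
writeFileSync(
  "bundle-sample.js",
  'const supabaseKey = "eyJhbGciOiJIUzI1NiJ9.eyJyb2xlIjoiYW5vbiJ9.sig";'
);

const bundle = readFileSync("bundle-sample.js", "utf8");

// JWTs (including Supabase keys) are three base64url segments;
// the encoded header always starts with "eyJ"
const jwts = bundle.match(/eyJ[\w-]*\.[\w-]*\.[\w-]*/g) ?? [];

// Stripe secret keys start with sk_live_ or sk_test_; any hit in a
// client bundle is a leak (publishable pk_ keys are expected)
const stripeSecrets = bundle.match(/sk_(?:live|test)_\w+/g) ?? [];

console.log(`JWT-shaped strings: ${jwts.length}, Stripe secrets: ${stripeSecrets.length}`);
```

A JWT-shaped string that turns out to be your Supabase anon key is fine; anything else needs to be rotated and moved server-side.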
What to do about it
If you've shipped a vibe-coded app to production:
1. Check your Supabase RLS policies. Open the SQL editor and verify every table has policies that check `auth.uid() = user_id`, not just `auth.role() = 'authenticated'`.
2. Search your deployed JS bundle for JWTs and API keys. Open DevTools, go to Sources, and search for `eyJ` (the base64 prefix of every JWT). If you find your Supabase anon key, that's expected. If you find anything else, you have a problem.
3. Add rate limiting to your auth endpoints. Even a simple 5-requests-per-minute limit on login and password reset prevents brute force.
4. Run a security scanner. We built free tools at notelon.ai specifically for vibe-coded apps. No signup required. If you want a deeper review, our $99 audit covers 50+ checks with a PDF report.
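The rate-limiting step above can be as small as a few lines. A minimal in-memory fixed-window sketch in TypeScript (names are illustrative; an app running on more than one instance would need a shared store like Redis instead of a process-local `Map`):

```typescript
// Fixed-window rate limiter: at most MAX_ATTEMPTS per key per WINDOW_MS.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_ATTEMPTS = 5;   // e.g. 5 login attempts per IP per minute

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = hits.get(key);
  // New key, or the previous window has expired: start a fresh window
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}

// Example: six attempts one second apart from the same IP
const t0 = Date.now();
const results = Array.from({ length: 6 }, (_, i) =>
  allowRequest("ip:203.0.113.9", t0 + i * 1000)
);
console.log(results); // first five true, sixth false
```

Call `allowRequest` at the top of your login and password-reset handlers, keyed by IP or account, and return a 429 when it comes back `false`.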
The NCSC, Cursor's CEO, and the supply chain attacks are all pointing at the same thing: the code works, the security doesn't. The good news is it's fixable. The bad news is most people won't fix it until something breaks.
Full security report with 63+ sources: notelon.ai/report