
Tomer Goldstein
I scanned 100 AI-generated apps for security vulnerabilities. Here's what I found.

I've been building a security scanner for the past few months, specifically designed for apps built with AI coding tools like Cursor, Lovable, Bolt.new, and v0.

To validate whether the tool was actually useful, I scanned 100 real GitHub repos - all built primarily with AI assistance. The results were worse than I expected.

The numbers

  • 67 out of 100 repos had at least one critical vulnerability
  • 45% had hardcoded secrets (API keys, JWT secrets, database URLs in source code)
  • 38% had missing authentication on sensitive API routes
  • 31% had SQL injection or XSS vulnerabilities
  • 89% of Lovable apps were missing Supabase Row Level Security policies

This isn't a theoretical exercise. These are real apps, some already deployed with real users.

The most common vulnerabilities by AI tool

Cursor

The biggest issue with Cursor-generated code is IDOR (Insecure Direct Object References). Cursor loves to use sequential IDs and often skips ownership checks:

```javascript
// Cursor generates this — anyone can access any user's data
app.get('/api/users/:id', async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json(user);
});
```

It should verify the requesting user owns that resource. 43% of Cursor repos had this pattern.
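A minimal fix is to compare the authenticated user's ID against the record before responding. Here's a hedged sketch: `req.user` is assumed to be populated by your auth middleware, and `ownsResource` is a helper name I'm introducing for illustration, not part of any framework.

```javascript
// Ownership check: only the owner of a record may read it.
// `ownerId` is an assumed field name; adapt to your schema.
function ownsResource(sessionUserId, resource) {
  return resource != null && resource.ownerId === sessionUserId;
}

// Express-style handler that refuses to serve other users' records.
// Returning 404 instead of 403 avoids confirming that the ID exists.
function makeGetUserHandler(db) {
  return async (req, res) => {
    const user = await db.users.findById(req.params.id);
    if (!ownsResource(req.user.id, user)) {
      return res.status(404).json({ error: 'Not found' });
    }
    return res.json(user);
  };
}
```

The point is that the check lives in one named helper, so a reviewer (or a scanner) can grep for routes that skip it.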

Lovable
Lovable builds beautiful Supabase apps, but it almost never enables Row Level Security. This means any authenticated user can read/write any row in any table:

```sql
-- This is what Lovable usually generates: nothing
-- What it should generate:
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can only see their own todos"
ON todos FOR SELECT
USING (auth.uid() = user_id);
```

89% of Lovable repos were missing RLS on at least one table with user data.
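If you're not sure which of your tables are exposed, Postgres tracks RLS status in the `pg_tables` system view. A quick audit query (runnable in the Supabase SQL editor) looks like this:

```sql
-- List public tables that still have RLS disabled
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false;
```

Any table with user data that shows up here needs a policy before you ship.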

Bolt.new
Bolt.new's biggest weakness is unauthenticated API routes. It generates Express/Next.js API handlers that accept POST, PUT, and DELETE requests with zero auth checks:

```javascript
// No auth check — anyone on the internet can delete data
export async function DELETE(req) {
  const { id } = await req.json();
  await db.delete(items).where(eq(items.id, id));
  return Response.json({ success: true });
}
```

52% of Bolt.new repos had at least one unprotected mutating endpoint.
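The fix is a wrapper that every mutating handler passes through, so no single route can forget the check. This is a framework-agnostic sketch: `verifyToken` is a hypothetical stand-in for your real JWT or session lookup, and the plain-object request/response shapes are for illustration only.

```javascript
// Wrap a handler so it rejects unauthenticated requests.
// verifyToken(token) should return a user object, or null if invalid.
function requireAuth(handler, verifyToken) {
  return async (req) => {
    const header = (req.headers && req.headers['authorization']) || '';
    const user = verifyToken(header.replace(/^Bearer\s+/i, ''));
    if (!user) {
      return { status: 401, body: { error: 'Unauthorized' } };
    }
    return handler(req, user);
  };
}

// Example: a delete handler that only runs for authenticated users.
// The token check here is a toy; swap in your real verifier.
const deleteItem = requireAuth(
  async (req, user) => ({ status: 200, body: { deletedBy: user.id } }),
  (token) => (token === 'valid-token' ? { id: 'u1' } : null)
);
```

In a real Next.js route you'd return `Response.json(...)` with the same status codes; the shape above just keeps the sketch self-contained.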

Why AI tools create these vulnerabilities
It's not that the AI models are bad at coding. The problem is more specific:

No threat model. AI generates code that satisfies the functional requirement ("build a todo app") but doesn't consider adversarial use ("what if someone guesses another user's ID?")

Training data bias. Most code on GitHub — which these models trained on — is tutorial code, demos, and prototypes. Production security patterns are underrepresented.

Context window limits. Security often requires understanding the full system (auth flow + database policies + API routes). AI generates one file at a time and loses cross-file context.

No feedback loop. When AI writes a working but insecure endpoint, nobody tells the model it was wrong. The code works, tests pass, users can log in — the vulnerability is invisible until exploited.

Stanford and UIUC researchers found that developers using AI assistants produced significantly less secure code than those coding manually, yet were more confident their code was secure. That confidence gap is dangerous.

What you can do about it
1. Never trust AI with auth logic
Always manually review authentication and authorization code. This is the highest-risk area.

2. Check for hardcoded secrets
Run a quick grep:

```bash
grep -rn "sk_live\|sk_test\|api_key\|password\|secret" \
  --include="*.ts" --include="*.js" --exclude-dir=node_modules .
```

3. Enable RLS if using Supabase
Every table with user data needs Row Level Security. No exceptions.

4. Add auth middleware to all mutating routes
Every POST, PUT, DELETE, and PATCH endpoint needs authentication. Create a middleware function and apply it consistently.
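As a sketch, one guard registered app-wide beats per-route checks that are easy to forget. `isAuthenticated` here is a hypothetical callback standing in for your real session lookup:

```javascript
// Methods that change state and therefore always need auth
const MUTATING = new Set(['POST', 'PUT', 'PATCH', 'DELETE']);

// Express-style middleware: reject unauthenticated mutating requests,
// let reads and authenticated requests through to the route handler.
function authGuard(isAuthenticated) {
  return (req, res, next) => {
    if (MUTATING.has(req.method) && !isAuthenticated(req)) {
      return res.status(401).json({ error: 'Unauthorized' });
    }
    return next();
  };
}
```

In Express you'd register it once with `app.use(authGuard(yourSessionCheck))`, so every route you add later is protected by default instead of opting in.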

5. Run an automated scanner
This is why I built ShipSafe. Paste a GitHub URL, get a report in 2 minutes with plain-English explanations and specific fixes. Free scan covers 30+ checks, paid plans add AI-powered deep analysis.

The bottom line
AI coding tools are incredible for speed. I use Cursor every day. But they have a blind spot for security, and that blind spot is predictable and fixable.

The vulnerabilities AI creates aren't exotic zero-days. They're the same OWASP Top 10 issues that have existed for 20 years — just generated faster and at scale.

Scan your code. Fix what matters. Ship with confidence.

If you're building with AI tools, I'd love to hear what security issues you've encountered. Drop a comment — I'm curious whether your experience matches what we found.
