LazyDev_OH

Posted on • Originally published at gocodelab.com

I Catalogued the Security Patterns That Keep Showing Up in AI Code

Across the Apsity App Store dashboard, the FeedMission SaaS, and a dozen side projects, more than half the code I touch is AI-generated. Ever since I shipped a SaaS in 7 days, vibe coding has been my default workflow.

Run it long enough and the patterns show up. AI-generated code keeps producing the same classes of security holes. One FeedMission review surfaced seven criticals at the same time — a Slack webhook URL bundled into the frontend, an unsubscribe endpoint that any email address could trigger, an admin reply leaking through a public API, routes missing team-member auth checks. None of that was bad luck. Industry research lists these as the highest-frequency patterns, and they had effectively reproduced themselves in our codebase.

So now I run the same seven checks before every deploy, the same way each time. This post is the pattern catalogue plus the routine.

The numbers, first

This isn't a vibe check. Multiple groups in 2026 (Georgia Tech, Cloud Security Alliance, Checkmarx) analyzed AI-generated code and found:

  • 40–62% of samples contain security issues
  • 2.74× more vulnerable than human-written code on equivalent tasks
  • 86% failed XSS defenses
  • 88% vulnerable to log injection
  • 35 new CVEs tied to AI-generated code in March 2026 alone
  • One AI app leaked 1.5M API keys post-launch — shipped without security review

Nobody's quitting vibe coding because of these numbers. I'm not. But the 10 minutes you spend before deploy is what decides production's fate.

How AI skips security

Beginners get this wrong. The AI didn't make a mistake — it built what you asked for. "Make a user profile API" → it makes one. Auth wasn't requested, so it's not there. It leaves // TODO: add auth here and moves on.

The fix: put security in the prompt from the start. "Include JWT auth middleware, read secrets only from env, no raw SQL, no TODO comments, ship complete code." One line changes the output quality.
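One way to see why that single prompt line matters: auth as a wrapper is tiny. A framework-agnostic sketch (not FeedMission's actual code); `verifyJwt` stands in for a real JWT library like jose, and the request/response shapes are simplified:

```typescript
// Hypothetical minimal auth wrapper. verifyJwt is a stand-in for a real
// JWT library; Handler is a simplified route signature.
type Req = { headers: Record<string, string>; userId?: string };
type Res = { status: number; body: unknown };
type Handler = (req: Req) => Res;

function withAuth(verifyJwt: (token: string) => string, handler: Handler): Handler {
  return (req) => {
    const auth = req.headers["authorization"] ?? "";
    if (!auth.startsWith("Bearer ")) {
      return { status: 401, body: { error: "missing token" } }; // no auth header
    }
    try {
      const userId = verifyJwt(auth.slice("Bearer ".length)); // throws if invalid
      return handler({ ...req, userId });
    } catch {
      return { status: 401, body: { error: "invalid token" } };
    }
  };
}

// Fake verifier just to show the flow: accepts one hardcoded token.
const profile = withAuth(
  (t) => { if (t !== "good-token") throw new Error("bad token"); return "user_1"; },
  (req) => ({ status: 200, body: { userId: req.userId } }),
);

console.log(profile({ headers: {} }).status);                                     // 401
console.log(profile({ headers: { authorization: "Bearer good-token" } }).status); // 200
```

The point is structural: the handler never runs without a verified user, so a forgotten per-route check can't slip through.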

Top 7 mistakes — in the order I hit them

| # | Mistake | What happens | Red flag |
|---|---------|--------------|----------|
| 1 | Hardcoded API keys | Scraped by bots within seconds | `sk_`, `api_key=` |
| 2 | Auth-less API routes | URL-only access to your DB | no session/auth/token references |
| 3 | `NEXT_PUBLIC_` misuse | Service-role key in browser bundle | `NEXT_PUBLIC_*_SECRET/KEY` |
| 4 | Raw SQL interpolation | SQL injection → full DB exfil | `` `SELECT ... ${}` `` |
| 5 | CORS wildcards | Any domain hits your API | `Allow-Origin: *` |
| 6 | Missing XSS / log-injection defense | User input straight into HTML/logs | `dangerouslySetInnerHTML`, raw-string logs |
| 7 | Phantom packages (slopsquatting) | Malicious package under hallucinated name | unfamiliar packages, low downloads |
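For mistake #4 the fix is mechanical: the red-flag template literal versus the parameterized form. A sketch; the `{ text, values }` object mirrors the shape drivers like pg accept, and the email value is a classic injection probe:

```typescript
// User-controlled input containing a classic injection probe
const email = "x' OR '1'='1";

// ✗ Red flag: input interpolated straight into the SQL string
const risky = `SELECT * FROM users WHERE email = '${email}'`;

// ✓ Parameterized: placeholder in the text, input in a separate values
// array that the driver escapes (pg-style query(text, values) shape)
const safe = { text: "SELECT * FROM users WHERE email = $1", values: [email] };

console.log(risky.includes("OR '1'='1")); // true: the probe became SQL
console.log(safe.text.includes(email));   // false: input never touches the SQL text
```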

Mistakes #1 and #3 hit fastest. The moment you push to GitHub, scraper bots scoop the key and burn your API quota. If you've never been hit, you've only been lucky.

Slopsquatting warning — when AI says npm install some-plausible-package, check npmjs.com first. About 20% of AI-generated code references nonexistent packages. Attackers register those names with malicious payloads, and you install them instantly.

What could have happened at FeedMission

From the 7 above, FeedMission had #2, #3, #6, plus a few app-specific issues:

  • Slack webhook URL rode on ProjectContext into the frontend bundle.
  • Unsubscribe API took just an email address. Anyone's email → instant unsubscribe. Switched to an unsubscribeToken flow.
  • /api/feedback/mine returned the full admin reply text. Now hasReply: boolean only.
  • Team member auth checks missing across several APIs.
  • .env wasn't in .vercelignore — almost shipped via symlink in a Vercel build.

All fixed in one commit (52efb89). None of these are "too edge-case to happen to me."
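The unsubscribe fix generalizes: gate the action on an unguessable per-subscriber token instead of a guessable email. A sketch with hypothetical names, not the FeedMission code; the Map stands in for a database table:

```typescript
import { randomBytes } from "crypto";

// In-memory stand-in for a subscribers table keyed by unsubscribeToken
const subscribers = new Map<string, { email: string; active: boolean }>();

function subscribe(email: string): string {
  const token = randomBytes(32).toString("hex"); // 256-bit, unguessable
  subscribers.set(token, { email, active: true });
  return token; // embedded in the email link: /unsubscribe?token=...
}

function unsubscribe(token: string): boolean {
  const sub = subscribers.get(token);
  if (!sub) return false; // unknown token: knowing an email gets you nothing
  sub.active = false;
  return true;
}

const token = subscribe("a@example.com");
console.log(unsubscribe("not-a-real-token")); // false
console.log(unsubscribe(token));              // true
```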

My 10-minute pre-deploy routine

```bash
# 1. Three grep lines — 5 seconds

# Unfinished security code
grep -r "TODO\|FIXME\|implement.*later\|add.*auth" ./src

# Hardcoded secrets
grep -r "sk_\|api_key\|password\s*=" ./src

# Client-exposed env vars
grep -rE "NEXT_PUBLIC_.*(SECRET|KEY|TOKEN)" ./src

# 2. SQL interpolation and CORS wildcards
grep -rn "\`SELECT\|\`INSERT\|\`UPDATE\|\`DELETE" ./src
grep -rn "Allow-Origin.*\*" ./src
```

If all pass, paste the generated code back to the AI and ask: "Review this code against OWASP Top 10 for vulnerabilities." Imperfect but a fine first-pass filter.

GitHub side, turn on three things: Secret Scanning, Push Protection, CodeQL Code Scanning. Plus Dependabot/npm audit in CI for package vulns.

My prompt tail (every code-generation request): "Include auth middleware; read secrets only from process.env and use NEXT_PUBLIC only for public values; always validate user input; no raw SQL; ship complete code without TODO/FIXME."
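The env half of that prompt tail can also be enforced at boot. A hedged sketch: the required-secret names are examples, and the regex deliberately matches the same pattern as the grep above, so it shares the same false positives (e.g. a genuinely public anon key):

```typescript
// Fail-fast startup guard for mistakes #1/#3. REQUIRED names are examples;
// swap in whatever server-only secrets your app actually needs.
const REQUIRED = ["STRIPE_SECRET_KEY", "SUPABASE_SERVICE_ROLE_KEY"];

function checkEnv(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  for (const name of REQUIRED) {
    if (!env[name]) problems.push(`missing server secret: ${name}`);
  }
  for (const name of Object.keys(env)) {
    if (/^NEXT_PUBLIC_.*(SECRET|KEY|TOKEN)/.test(name)) {
      problems.push(`client-exposed secret: ${name}`); // NEXT_PUBLIC_ ships to the browser
    }
  }
  return problems;
}

// Flags both the missing service key and the exposed token:
console.log(checkEnv({ NEXT_PUBLIC_API_TOKEN: "x", STRIPE_SECRET_KEY: "sk_test" }));
```

Call it once at server startup and throw if the array is non-empty, so a bad config never reaches production traffic.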

Bonus — Using Supabase? RLS is its own chapter

Next.js + Supabase is the default vibe-coder stack, so RLS gets a dedicated section. RLS (Row Level Security) is PostgreSQL's row-level access control. "This row is readable only by the user whose user_id matches" — enforced at the database layer.

Why this matters: when you create a table in Supabase Studio, RLS is OFF by default. Ship NEXT_PUBLIC_SUPABASE_ANON_KEY to the client in that state and anyone with that key can read or write every row in every table. The anon key effectively becomes a service-role key. Whatever assurance "client-side anon key is safe" gave you, it's gone.
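The fix is one statement per table plus explicit grants back; `posts` here is a stand-in for your own table name:

```sql
-- Turn RLS on (the default for a freshly created table is off)
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- Then re-grant only what you mean to expose, e.g. owners read their own rows
CREATE POLICY "read own rows"
ON posts FOR SELECT
USING (auth.uid() = user_id);
```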

Turning RLS on isn't enough either. Without policies, every access is denied. You write separate policies per action: SELECT, INSERT, UPDATE, DELETE. The most frequent mistake is writing USING (the read/delete-time filter) but forgetting WITH CHECK (the post-write validation):

```sql
-- ✗ Implicit: USING only
CREATE POLICY "own rows"
ON posts FOR UPDATE
USING (auth.uid() = user_id);
-- WITH CHECK left to the fallback

-- ✓ Explicit: both spelled out
CREATE POLICY "own rows"
ON posts FOR UPDATE
USING (auth.uid() = user_id)
WITH CHECK (auth.uid() = user_id);
```

On UPDATE, PostgreSQL reuses the USING expression as the check when WITH CHECK is omitted, so the implicit form happens to behave the same here. But that fallback disappears the moment USING and WITH CHECK need to differ, and an INSERT policy has no USING to fall back on. Without a correct WITH CHECK, user_a can INSERT or UPDATE rows claiming user_b's user_id: planting rows or hijacking existing ones. Spell both out so the intent is explicit.

Three review queries to save in your Supabase SQL Editor:

```sql
-- 1. Tables with RLS still off
SELECT tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public' AND rowsecurity = false;

-- 2. RLS on but no policies: everything is rejected
SELECT t.tablename
FROM pg_tables t
LEFT JOIN pg_policies p
  ON t.schemaname = p.schemaname AND t.tablename = p.tablename
WHERE t.schemaname = 'public' AND t.rowsecurity = true AND p.policyname IS NULL;

-- 3. Write policies with no explicit WITH CHECK
SELECT tablename, policyname, cmd, qual, with_check
FROM pg_policies
WHERE schemaname = 'public' AND cmd IN ('INSERT', 'UPDATE') AND with_check IS NULL;
```

Run these after every migration. Empty results on all three = you're clear.

Top-4 BaaS-specific mistakes:

  1. RLS off — anon key becomes a master key.
  2. Missing WITH CHECK — attackers plant rows under someone else's user_id.
  3. service_role key shipped to client: SUPABASE_SERVICE_ROLE_KEY must never be NEXT_PUBLIC. Server routes / Edge Functions only.
  4. Permissive anon-role policies: a missing auth.uid() = user_id means unauthenticated callers reach every row.

Same principle applies to Firebase Security Rules, Appwrite Permissions, PocketBase Collection rules: if the client talks to the database directly, the database is the last line of defense. Leave that line empty and no upstream security matters.

Wrap-up

Vibe coding didn't make security worse. The habit of deploying without review did. AI raised the speed. Raise your review speed with it. Three grep lines, one AI review, three GitHub settings, the RLS check if you're on Supabase. Ten minutes.

Skip those ten minutes and "1.5M API keys leaked" stops being someone else's story.

Originally published at GoCodeLab. Lazy Developer EP.18.
