DEV Community

Mr Elite

Posted on • Originally published at securityelites.com

What Is Vibe Coding? Why Developers Are Shipping Insecure AI Code in 2026

📰 Originally published on Securityelites — AI Red Team Education — the canonical, fully-updated version of this article.


On March 31, 2026, Anthropic’s Claude Code CLI shipped a 59.8MB source map file in its npm package — exposing roughly 512,000 lines of proprietary TypeScript to anyone who downloaded it. The tool had itself been largely vibe-coded. A misconfigured packaging rule caused the leak, not a logic bug. Existing security scanners didn’t catch it. That incident captures everything I want you to understand about vibe coding and security: the risk isn’t that AI writes bad code on purpose. The risk is that developers moving at AI speed have less visibility into what they’re actually shipping. Here’s the complete picture.

What You’ll Learn

What vibe coding is and why it’s the dominant development pattern in 2026
The specific security vulnerabilities AI-generated code tends to introduce
Real documented incidents from the last 90 days
How to audit AI-generated code before it hits production
The tools and workflow changes that actually work

⏱️ 14 min read

Vibe Coding Security — Complete Guide 2026

1. What Vibe Coding Actually Is
2. The Specific Vulnerabilities It Introduces
3. Real Incidents — What’s Already Gone Wrong
4. How to Audit AI-Generated Code
5. The Secure Vibe Coding Workflow

Vibe coding is the practical manifestation of the AI code generation risks covered in the AI-generated malware guide. The hallucination and slopsquatting risks I described in the OWASP LLM Top 10 — LLM09 overreliance — are what vibe coding produces at scale.

What Vibe Coding Actually Is

Vibe coding is the practice of delegating code generation almost entirely to AI assistants — describing what you want in natural language, accepting the output with minimal review, and iterating by prompting rather than by reading. The term was coined by Andrej Karpathy in early 2025 and was immediately recognised by every developer who had been doing exactly this with Copilot, Cursor, Claude Code, and similar tools.

My take on where we are in 2026: vibe coding isn’t a fringe practice anymore. It’s how most new projects start. The speed advantage is real and substantial — developers are shipping features in hours that previously took days. The problem is that the security review process hasn’t kept pace. Developers are producing more code faster, with less line-by-line understanding of what that code does.

VIBE CODING — THE DEVELOPMENT PATTERN

Traditional development flow

Developer writes code → understands every line → reviews for security → ships
Security review: developer has full mental model of what the code does

Vibe coding flow

Developer describes intent → AI generates code → developer tests output → ships
Security review: developer tests whether the code works, not whether it’s secure

The security gap

Working code ≠ secure code
AI optimises for functional correctness, not security by default
Developers moving at AI speed have less time for manual security review
Result: more code volume, less security scrutiny, more vulnerabilities in production

The Specific Vulnerabilities It Introduces

Veracode’s 2026 research on AI-generated code identified a consistent pattern: AI assistants produce code that is syntactically correct and functionally adequate, but frequently missing security controls that a security-aware developer would add as a matter of habit. My analysis of the vulnerability classes most common in vibe-coded projects aligns with what Veracode, Checkmarx, and GitLab have all published in the last 90 days.

VULNERABILITY CLASSES IN AI-GENERATED CODE

Most common — missing by default in AI output

Input validation: AI generates functional handlers without sanitisation checks
Hardcoded credentials: AI uses placeholder strings like “your_api_key_here” that get replaced with real values and committed
Insecure dependencies: AI recommends packages by name from training data — some outdated, some hallucinated
Missing auth checks: AI builds feature endpoints without always adding authorisation middleware
SQL injection: AI uses string concatenation for queries when it has no schema context
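The SQL injection item deserves a concrete before/after. A hedged sketch (function names are illustrative; the `{ text, values }` query shape matches what drivers like node-postgres accept):

```javascript
// The pattern AI assistants often emit without schema context:
// the username is spliced straight into the SQL text.
function findUserUnsafe(username) {
  return `SELECT * FROM users WHERE username = '${username}'`;
}

// Parameterized pattern: the SQL text is fixed and values travel separately,
// so an input like "' OR '1'='1" arrives as data, not as query structure.
function findUserSafe(username) {
  return {
    text: 'SELECT * FROM users WHERE username = $1',
    values: [username],
  };
}
```

The fix costs nothing at runtime; the only reason the unsafe version keeps appearing is that it also works for every benign input a quick functional test would use.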

The hallucinated package problem (slopsquatting)

AI suggests a package that doesn’t exist → developer installs it
Attacker has registered that package name with malicious code
Developer’s environment or production codebase is now compromised
Documented: researchers found hundreds of AI-hallucinated package names already registered
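One cheap guard against the flow above is refusing to install anything the AI names until it has been verified against a source you trust. A hedged sketch (the allowlist source is up to you — a lockfile, an internal registry mirror, or a manual review list; the package names below are made up for illustration):

```javascript
// Flag dependencies that haven't been through human or registry verification.
// `verified` would typically come from your lockfile or an internal mirror.
function unverifiedDeps(dependencies, verified) {
  const known = new Set(verified);
  return Object.keys(dependencies).filter((name) => !known.has(name));
}

// Example: an AI-suggested dependency list with one hallucinated name.
const deps = { express: '^4.19.0', 'fast-json-sanitizerx': '^1.0.0' };
const flagged = unverifiedDeps(deps, ['express', 'jsonwebtoken']);
// Anything flagged gets checked by hand on the registry before `npm install` runs.
```

The point is not the ten lines of code; it is moving the "does this package actually exist and is it the one I think it is?" question ahead of the install step.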

Configuration vulnerabilities (the Claude Code leak pattern)

AI generates build configs, deployment files, packaging rules
These are harder to review than application code
Misconfigured packaging, overly permissive CORS, exposed debug endpoints
The Claude Code source map exposure was exactly this pattern
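This failure mode is scriptable: before publishing, list what actually goes into the tarball (for npm, `npm pack --dry-run` shows it) and fail the build if anything sensitive appears. A hedged stdlib-only sketch of the filter step — the pattern list is a starting point, not a complete policy:

```javascript
// File patterns that should never ship in a published package.
// `.map` is exactly the Claude Code leak pattern.
const FORBIDDEN = [/\.map$/, /^\.env/, /\.pem$/, /(^|\/)\.git\//];

// Given the list of files headed into the tarball, return the ones that leak.
function leakedFiles(packedFiles) {
  return packedFiles.filter((f) => FORBIDDEN.some((re) => re.test(f)));
}

const files = ['dist/cli.js', 'dist/cli.js.map', 'README.md', '.env.local'];
// leakedFiles(files) → ['dist/cli.js.map', '.env.local'] — fail the release
```

A check like this would have caught the 59.8MB source map before it ever reached the registry, which is the whole argument: config-level mistakes need config-level gates, because no one reads a packed file list by eye.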

EXERCISE 1 — THINK LIKE A SECURITY REVIEWER (15 MIN)
Audit a Vibe-Coded Function for Security Issues

Take this AI-generated Node.js function and identify every security issue:

```javascript
app.post('/api/user/login', async (req, res) => {
  const { username, password } = req.body;
  const query = `SELECT * FROM users WHERE username = '${username}'`;
  const user = await db.query(query);
  if (user && user.password === password) {
    const token = jwt.sign({ userId: user.id }, 'mysecretkey');
    res.json({ token });
  } else {
    res.status(401).json({ error: 'Invalid credentials' });
  }
});
```

Find and name every vulnerability. Hint: there are at least 6. Then write the secure version.

Issues to find:

1. The SQL query construction
2. The password comparison
3. The JWT secret
4. The token expiry
5. The error message
6. The rate limiting (or lack of it)


📖 Read the complete guide on Securityelites — AI Red Team Education

This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. Read the full article on Securityelites — AI Red Team Education →


This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit Securityelites — AI Red Team Education.
