Georgia Tech researchers just dropped a stat that should scare every vibe coder: 35 new CVEs in March 2026 were traced directly to AI-generated code. That's up from 6 in January and 15 in February.
The trend line is vertical.
## The Vibe Security Radar
The Vibe Security Radar is a research project from Georgia Tech's Systems Software & Security Lab. They track vulnerabilities specifically introduced by AI coding tools that made it into public advisories (CVE.org, NVD, GitHub Advisory Database, OSV, RustSec).
Their method:
- Pull from public vulnerability databases
- Find the commit that fixed each vulnerability
- Trace backwards to find who introduced the bug
- If the commit has an AI tool's signature (co-author tag, bot email), flag it
- AI agents investigate the root cause using actual Git history
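The signature-matching step above can be sketched as a small filter over commit metadata. This is an illustrative reconstruction, not the Radar's actual pipeline; the trailer pattern and bot emails shown are hypothetical examples of the kind of signatures such a check would match:

```python
import re

# Illustrative subset of AI-tool signatures; the Radar tracks ~50 tools.
AI_TRAILER_RE = re.compile(
    r"Co-Authored-By:\s*(Claude|Devin|Copilot|Cursor|aider)",
    re.IGNORECASE,
)
# Hypothetical bot addresses, for illustration only.
AI_BOT_EMAILS = {"noreply@anthropic.com", "bot@devin.ai"}

def looks_ai_generated(commit_message: str, author_email: str) -> bool:
    """Flag a commit whose message trailer or author email carries an AI-tool signature."""
    if author_email.lower() in AI_BOT_EMAILS:
        return True
    return bool(AI_TRAILER_RE.search(commit_message))
```

Running this over `git log` output for the fix-introducing commit is the cheap part; as the article notes, tools that add no trailer (like Copilot's inline suggestions) slip straight past it.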
74 confirmed cases so far. The real number is estimated at 5-10x higher (400-700 across open source) because tools like Copilot leave no metadata traces.
## Which Tools Introduce the Most Vulnerabilities?
Claude Code shows up most often in the data, but lead researcher Hanqing Zhao says that's largely a measurement artifact: Claude "always leaves a signature," while Copilot's inline suggestions leave no trace.
They track approximately 50 AI-assisted coding tools: Claude Code, GitHub Copilot, Cursor, Devin, Windsurf, Aider, Amazon Q, Google Jules, and more.
## Why This Matters for Vibe Coders
Here's the context that makes this urgent:
- NCSC Warning: The UK's National Cyber Security Centre CEO called for vibe coding safeguards at RSA Conference this week
- escape.tech Data: 5,600 live vibe-coded apps scanned, hundreds of vulnerabilities and exposed secrets found
- Supply Chain Attacks: LiteLLM's PyPI package was backdoored (47K downloads in 46 minutes). The same attacker poisoned trivy-action, a security scanner itself
- Review Capacity: Only 18% of organizations can fix security vulnerabilities at the pace AI generates them (InformationWeek)
The gap between code generation speed and security review capacity is widening every month.
## The 4 Most Common AI Code Vulnerabilities
Based on our analysis of hundreds of vibe-coded apps at VibeCheck:
- Hardcoded secrets in source code (API keys, database credentials in plaintext)
- No input validation before database queries (SQL injection, NoSQL injection)
- Missing authentication on API endpoints (anyone can call them)
- No rate limiting on auth endpoints (brute force attacks trivial)
AI coding tools generate functional code. They rarely generate secure code. The difference kills you in production.
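The injection item above is the clearest illustration of that gap. Here's a minimal sketch (Python with sqlite3; table and function names are hypothetical) of the string-interpolated query pattern AI tools commonly emit, next to the parameterized version that fixes it:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: interpolating user input means "' OR '1'='1" rewrites the query.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver binds the value, so input can't change the SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions return identical results for honest input, which is exactly why "it works" is no evidence of safety: the unsafe one only fails when someone attacks it.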
## What You Can Do Right Now
For your dependencies:
```ini
# .npmrc - block new packages for 7 days
min-release-age=7
```

```toml
# uv.toml - same for Python
exclude-newer = "7 days"
```
Most malicious packages get caught within 24-72 hours. A 7-day buffer kills the majority of supply chain attacks.
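You can apply the same age check by hand before adding a dependency. A minimal sketch, assuming you've already fetched the release timestamps (PyPI's JSON API at `https://pypi.org/pypi/<pkg>/json` exposes them as `upload_time_iso_8601` per file):

```python
from datetime import datetime, timedelta, timezone

def old_enough(upload_times: list[str], min_age_days: int = 7) -> bool:
    """True if the newest release timestamp is at least min_age_days old."""
    newest = max(
        datetime.fromisoformat(t.replace("Z", "+00:00")) for t in upload_times
    )
    return datetime.now(timezone.utc) - newest >= timedelta(days=min_age_days)
```

A package whose latest release is fresher than your buffer isn't necessarily malicious, but it hasn't yet survived the 24-72 hour window in which most poisoned releases get caught and yanked.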
For your code:
- Pin exact versions with lockfiles and hashes
- Never hardcode secrets (use environment variables)
- Add input validation on every endpoint that touches a database
- Rate limit authentication endpoints
- Run a security scan before deploying
Free scanner: notelon.ai checks for the common vibe coding mistakes. Paste your repo or URL, get results in seconds. No signup required.
## The Trend Is Clear
| Month | CVEs from AI Code |
|---|---|
| Jan 2026 | 6 |
| Feb 2026 | 15 |
| Mar 2026 | 35 |
That's a 150% month-over-month increase in February, then 133% in March. If this trajectory holds, April could see 70+.
The tools that generate code are getting faster. The tools that secure it aren't keeping up. That gap is the vulnerability.
Sources: Infosecurity Magazine, Georgia Tech Vibe Security Radar, NCSC, InformationWeek