AI code generation has fundamentally changed how we build software. Tools like Claude Code, GitHub Copilot, and Cursor have made developers incredibly productive. But there's a hidden cost to this "vibe coding" workflow: security vulnerabilities slip through faster than ever before.
According to a 2025 study by GitClear, AI-generated code is accepted into production codebases at rates 40% higher than human-written code, yet contains security vulnerabilities at nearly twice the rate. The problem? Developers trust AI suggestions too quickly, skipping the critical security review steps that would catch these issues.
This is the vibe coding security problem: when you're moving fast and the code "feels right," it's easy to miss subtle security flaws. This checklist will help you catch those bugs before they ship.
What is Vibe Coding Security?
Vibe coding refers to the workflow where developers use AI assistants to generate code quickly, relying on intuition and rapid iteration rather than careful, methodical development. The code "vibes" — it looks good, runs well in tests, and ships fast.
Vibe coding security is the practice of implementing security checkpoints in this high-velocity workflow without slowing down development. It's about building guardrails that catch vulnerabilities automatically, so you can maintain speed while staying secure.
The Vibe Coding Security Gap
The gap between AI code generation speed and security review speed is the #1 risk factor in modern development. AI can generate 100 lines of code in seconds. How long does it take you to security-review 100 lines? Most developers skip the review entirely.
The 10-Point Vibe Coding Security Checklist
Here's your pre-ship checklist. Run these checks on every AI-generated code change before merging to main. These checks are ordered by priority — start with #1 and work your way down.
1. Check for Hardcoded Secrets and API Keys
Why it matters: AI models are trained on public code repositories, many of which accidentally contain hardcoded secrets. When generating code, AI assistants often replicate this pattern, inserting placeholder API keys that look realistic but are actually leaked credentials from training data.
What to check:
- Search for patterns like `API_KEY`, `SECRET`, `PASSWORD`, `TOKEN`
- Look for base64-encoded strings (often used to "hide" credentials)
- Check for AWS access keys (pattern: `AKIA[0-9A-Z]{16}`)
- Scan for private keys (look for `-----BEGIN PRIVATE KEY-----`)
- Review environment variable usage — are secrets properly externalized?
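As a first pass, a few of these patterns can be caught with plain regexes — a minimal sketch, not a substitute for a dedicated scanner like gitleaks, which covers far more secret formats:

```javascript
// Minimal sketch: regex checks for a few common secret patterns.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                        // AWS access key ID
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,  // PEM private key header
  /(API_KEY|SECRET|PASSWORD|TOKEN)\s*[:=]\s*['"][^'"]{8,}['"]/i, // hardcoded assignment
];

// Returns the source of every pattern that matched, empty array if clean.
function findSecrets(source) {
  return SECRET_PATTERNS
    .filter((re) => re.test(source))
    .map((re) => re.source);
}
```

Run `findSecrets` over staged files in a pre-commit hook and fail the commit if it returns anything.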
How to automate it:
```shell
# Using git-secrets
git secrets --scan

# Using gitleaks
gitleaks detect --source . --verbose

# Using LucidShark's comprehensive scanning
lucidshark scan --all
```
Pro Tip: AI Models Remember Secrets
If you've been chatting with an AI assistant and accidentally pasted a real API key earlier in the conversation, the model may insert that key into generated code later. Always start a fresh session when working with sensitive credentials.
2. Validate Input Sanitization and Injection Vulnerabilities
Why it matters: AI-generated code frequently omits input validation. The AI assumes inputs are well-formed and safe because it's optimizing for the "happy path." This creates SQL injection, command injection, and XSS vulnerabilities.
What to check:
- SQL queries: Are user inputs parameterized? Never use string concatenation for SQL.
- Shell commands: Does the code call `exec()`, `system()`, or `eval()` with user input?
- File paths: Can users control file paths? Check for path traversal (`../`).
- HTML output: Is user-generated content properly escaped to prevent XSS?
- JSON parsing: Does the code validate JSON structure before parsing?
Common AI-generated vulnerability:
```javascript
// AI-generated code (VULNERABLE)
const userId = req.query.userId;
const query = `SELECT * FROM users WHERE id = ${userId}`;
db.query(query);
```

```javascript
// Secure version
const userId = req.query.userId;
const query = 'SELECT * FROM users WHERE id = ?';
db.query(query, [userId]);
```
How to automate it:
```shell
# Using LucidShark with comprehensive scanning
lucidshark scan --all

# Using Snyk for dependency vulnerability detection
snyk test
```
3. Review Authentication and Authorization Logic
Why it matters: AI assistants often generate code that implements the requested feature but skips access control checks. The result? Features that work perfectly but are accessible to unauthorized users.
What to check:
- Is authentication required for this endpoint/function?
- Does the code verify the user has permission to access the resource?
- Are there direct object references (e.g., `/api/user/123`) without ownership checks?
- Does the code properly handle authentication failures?
- Are admin-only functions protected from regular users?
Real-World Example
A developer asked Claude to "add an endpoint to delete user accounts." The AI generated a working DELETE endpoint — but forgot to check if the requesting user actually owns the account being deleted. The bug went unnoticed until a security researcher reported they could delete any user account by ID.
How to check:
```javascript
// Check every new route/endpoint
app.delete('/api/account/:id', async (req, res) => {
  // ⚠️ MISSING: Is user authenticated?
  // ⚠️ MISSING: Does this user own account :id?
  await deleteAccount(req.params.id);
  res.json({ success: true });
});
```
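One way to make the missing checks hard to forget is to extract the ownership policy into a plain function you can test on its own. A sketch under stated assumptions: `requireAuth`, `getAccount`, and the `ownerId`/`isAdmin` fields are illustrative names, not a specific framework's API:

```javascript
// The two missing checks as testable policy: authenticated, and
// either the owner of the account or an admin.
function canDeleteAccount(user, account) {
  if (!user) return false;    // not authenticated
  if (!account) return false; // nothing to delete
  return user.id === account.ownerId || user.isAdmin === true;
}

// In the route, deny before doing any work:
// app.delete('/api/account/:id', requireAuth, async (req, res) => {
//   const account = await getAccount(req.params.id);
//   if (!canDeleteAccount(req.user, account)) return res.status(403).end();
//   await deleteAccount(account.id);
//   res.json({ success: true });
// });
```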
4. Scan Dependencies for Known Vulnerabilities
Why it matters: AI assistants frequently suggest outdated packages or packages with known vulnerabilities. The AI's training data lags behind current security advisories, so it might recommend a package version from 2023 that has since been flagged as vulnerable.
What to check:
- Run `npm audit` or `pip-audit` after adding new dependencies
- Check if the suggested package version is the latest stable release
- Review the package's security history (check GitHub Security tab)
- Verify the package is actively maintained (recent commits, issues responded to)
- Check package download stats — is this widely used or a potential typosquatting attack?
How to automate it:
```shell
# Node.js projects
npm audit --audit-level=moderate

# Python projects
pip-audit

# Using Snyk for comprehensive dependency scanning
snyk test --all-projects

# Using LucidShark's comprehensive scanning
lucidshark scan --all
```
5. Verify Error Handling Doesn't Leak Information
Why it matters: AI-generated code often includes detailed error messages that help with debugging — but also help attackers. Stack traces, database error messages, and file paths can reveal system architecture and create attack vectors.
What to check:
- Do error messages expose stack traces to end users?
- Are database errors returned directly to the client?
- Do error messages reveal file paths or system information?
- Is there a difference between dev/production error handling?
- Are errors logged properly without exposing sensitive data?
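One way to enforce the dev/production split from the bullets above is to centralize the policy in a single helper, so individual routes can't forget it. A minimal sketch (`sanitizeError` is an illustrative name):

```javascript
// Decide once, in one place, what error detail leaves the server.
function sanitizeError(error, isProduction) {
  if (isProduction) {
    // Generic message only; details belong in internal logs, never the response
    return { status: 500, body: { error: 'Internal server error' } };
  }
  // In development, surfacing the message speeds up debugging
  return { status: 500, body: { error: error.message } };
}
```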
Common AI-generated vulnerability:
```javascript
// AI-generated error handling (VULNERABLE)
try {
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
} catch (error) {
  return res.status(500).json({ error: error.message });
  // ⚠️ Leaks database structure and SQL details to client
}
```

```javascript
// Secure version
try {
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
} catch (error) {
  logger.error('Database query failed:', error); // Log internally
  return res.status(500).json({ error: 'Internal server error' }); // Generic message to client
}
```
6. Check for Race Conditions and TOCTOU Bugs
Why it matters: AI assistants generate code for single-threaded, sequential execution. They rarely consider concurrent access, leading to Time-of-Check-Time-of-Use (TOCTOU) vulnerabilities, race conditions, and data corruption.
What to check:
- Are there file system checks followed by file operations? (classic TOCTOU)
- Does the code check a balance before deducting funds? (race condition)
- Are database reads and writes properly transactional?
- Does the code assume sequential execution in async contexts?
- Are shared resources protected with locks or atomic operations?
Common pattern to watch for:
```javascript
// TOCTOU vulnerability
if (fs.existsSync(filePath)) {               // Check
  const content = fs.readFileSync(filePath); // Use
  // ⚠️ File could be deleted or modified between check and use
}
```

```javascript
// Better approach
try {
  const content = fs.readFileSync(filePath); // Just try, handle the error
} catch (error) {
  if (error.code === 'ENOENT') {
    // Handle missing file
  }
}
```
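The balance example from the bullets follows the same principle: fold the check into the operation itself. With a database that means a single conditional `UPDATE` inside a transaction; here is the same idea as an in-memory sketch (`atomicDeduct` and the `accounts` map are illustrative):

```javascript
// Check and mutation happen in one synchronous step, so nothing can
// interleave between "is there enough?" and "deduct it".
// With a real database, express this as one conditional UPDATE, e.g.:
//   UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?
function atomicDeduct(accounts, id, amount) {
  const acct = accounts.get(id);
  if (!acct || acct.balance < amount) return false; // check...
  acct.balance -= amount;                           // ...and use, together
  return true;
}
```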
7. Review Cryptography Implementation
Why it matters: AI assistants are particularly dangerous when generating cryptographic code. They often use deprecated algorithms, weak key sizes, or implement custom crypto (which is always wrong). Never trust AI-generated cryptography without expert review.
What to check:
- Is the code using strong algorithms? (AES-256, SHA-256, not MD5 or SHA-1)
- Are random values cryptographically secure? (use `crypto.randomBytes()`, not `Math.random()`)
- Is the IV (Initialization Vector) properly generated and unique per encryption?
- Are passwords hashed with bcrypt/argon2, not plain SHA-256?
- Is there any custom cryptographic implementation? (RED FLAG — always use established libraries)
Never Trust AI With Crypto
In 2025, researchers found that Claude, GPT-4, and Cursor all generated insecure cryptographic code when asked to "implement encryption." Common mistakes: ECB mode instead of CBC/GCM, predictable IVs, weak key derivation. Always use established crypto libraries and have a security expert review crypto code.
Red flags in AI-generated crypto:
```javascript
// ⚠️ INSECURE - Using Math.random() for crypto
const token = Math.random().toString(36).substring(2);

// ✅ SECURE - Using crypto.randomBytes()
const token = crypto.randomBytes(32).toString('hex');
```

```javascript
// ⚠️ INSECURE - SHA-256 for password hashing
const hash = crypto.createHash('sha256').update(password).digest('hex');

// ✅ SECURE - bcrypt for password hashing
const hash = await bcrypt.hash(password, 12);
```
8. Test for Business Logic Vulnerabilities
Why it matters: AI understands syntax and common patterns but struggles with domain-specific business logic. It will implement what you ask for, but won't catch logical flaws like "users can apply discount codes multiple times" or "refunds can be issued before payment clears."
What to check:
- Can operations be performed in an unintended order?
- Can users manipulate quantities, prices, or balances?
- Are there state transitions that bypass validation steps?
- Can workflows be repeated when they should be one-time?
- Does the code enforce business rules (e.g., "refunds only for completed orders")?
Example business logic flaw:
```javascript
// AI-generated checkout code
async function applyDiscount(cart, discountCode) {
  const discount = await getDiscountByCode(discountCode);
  if (discount) {
    cart.total = cart.total * (1 - discount.percentage);
  }
  return cart;
}

// ⚠️ BUG: Can call applyDiscount() multiple times on same cart
// ⚠️ BUG: No check if discount code is expired or already used
// ⚠️ BUG: No minimum purchase amount check
```
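A sketch of the same function with the three missing guards in place. The field names (`expiresAt`, `minTotal`, the `discountApplied` flag) are illustrative; map them onto your own schema:

```javascript
function applyDiscount(cart, discount, now = Date.now()) {
  if (cart.discountApplied) throw new Error('Discount already applied');     // one-shot
  if (!discount) throw new Error('Unknown discount code');
  if (discount.expiresAt && discount.expiresAt < now) throw new Error('Discount expired');
  if (discount.minTotal && cart.total < discount.minTotal) {
    throw new Error('Minimum purchase not met');
  }
  // Round to cents so repeated float math can't drift the total
  cart.total = Math.round(cart.total * (1 - discount.percentage) * 100) / 100;
  cart.discountApplied = true;
  return cart;
}
```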
9. Validate API Rate Limiting and Resource Protection
Why it matters: AI-generated API endpoints rarely include rate limiting, request throttling, or resource protection. This creates DoS vulnerabilities and allows attackers to abuse your API for free.
What to check:
- Are expensive operations (file uploads, AI API calls) rate-limited?
- Is there request throttling per user/IP?
- Are there file size limits on uploads?
- Do bulk operations have maximum item counts?
- Are there timeouts on long-running operations?
How to implement:
```javascript
// Add rate limiting to AI-generated endpoints
const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later.'
});

app.use('/api/', apiLimiter);
```
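Under the hood this is just counting requests per key per time window. A dependency-free sketch of the same idea, useful for understanding what the middleware does (in-memory only, so it resets on restart and doesn't share state across processes):

```javascript
// Fixed-window counter: allow up to `max` requests per `windowMs` per key.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}
```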
10. Run Static Analysis and Security Linters
Why it matters: Manual review catches some issues, but automated tools catch patterns humans miss. Static analysis tools are specifically designed to find security vulnerabilities in code — use them on every AI-generated change.
Recommended tools by language:
| Language | Tools | What They Catch |
|---|---|---|
| JavaScript/TypeScript | LucidShark, ESLint | XSS, injection, insecure dependencies |
| Python | LucidShark, Bandit | SQL injection, insecure deserialization |
| Java | LucidShark, PMD | Input validation, crypto misuse |
| Go | LucidShark, gosec | Race conditions, crypto issues |
| Ruby | Brakeman, LucidShark | SQL injection, XSS, CSRF |
How to automate with LucidShark:
```shell
# Run comprehensive quality and security scanning
lucidshark scan --all

# Integrate into your CI pipeline
lucidshark scan --all --fail-on high --format sarif > results.sarif

# Scan uncommitted changes (default behavior)
lucidshark scan --all
```
Pro Tip: Use LucidShark's Comprehensive Scanning
LucidShark provides comprehensive code quality and security scanning across 10 domains: linting, formatting, type-checking, dependency vulnerabilities (SCA), security patterns (SAST), infrastructure-as-code, container scanning, testing, coverage, and duplication detection. Run `lucidshark scan --all` to catch all issues in a single scan.
Implementing This Checklist in Your Workflow
A checklist is only useful if you actually use it. Here's how to integrate these checks into your vibe coding workflow without killing velocity:
Option 1: Pre-Commit Hooks (Recommended)
Run automated checks before every commit. This catches issues immediately without requiring manual review.
```shell
# Install the hooks defined for this repo
pre-commit install
```

Then configure LucidShark as a local hook with the pre-commit framework:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: lucidshark
        name: LucidShark Quality Scan
        entry: lucidshark scan --all
        language: system
        pass_filenames: false
```
Option 2: CI/CD Integration
Run the full checklist on every pull request. Block merges if critical vulnerabilities are found.
```yaml
# GitHub Actions example
name: Security Scan
on: [pull_request]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run LucidShark
        run: |
          curl -fsSL https://raw.githubusercontent.com/toniantunovi/lucidshark/main/install.sh | bash
          lucidshark scan --all --fail-on high
```
Option 3: IDE Integration
Get real-time feedback as you code using LucidShark's Claude Code integration. Issues are highlighted before you even save the file.
```yaml
# Enable LucidShark in Claude Code
# Add to your project's lucidshark.yml
pipeline:
  linting:
    enabled: true
  security:
    enabled: true
fail_on:
  linting: error
  security: high
```
Common Mistakes When Implementing Vibe Coding Security
Mistake #1: Only checking "important" code
Vulnerabilities don't care about importance. A bug in a "minor utility function" can be just as exploitable as a bug in core authentication logic. Check everything.
Mistake #2: Trusting AI because it explained its code
AI assistants are excellent at generating plausible-sounding explanations for insecure code. Just because the AI justified its approach doesn't mean it's secure. Verify independently.
Mistake #3: Skipping checks when "just prototyping"
Prototype code becomes production code faster than you think. Security debt compounds. If you skip security checks during prototyping, you'll forget to add them later.
Mistake #4: Relying solely on automated tools
Automated tools are essential but not sufficient. Business logic vulnerabilities and context-specific issues require human review. Use tools to catch the obvious, then think critically about edge cases.
Real-World Impact: The Cost of Skipping These Checks
In early 2026, a fintech startup suffered a data breach that exposed 50,000 customer records. The root cause? An AI-generated API endpoint that lacked authorization checks (Checklist item #3). The developer accepted the AI's code because it worked in testing. The bug went unnoticed for three months.
Cost of the breach:
- $2.3M in regulatory fines (GDPR violations)
- $800K in incident response and customer notification
- 32% customer churn rate in the following quarter
- $5M Series B round delayed by 8 months
Running these 10 checks would have caught the vulnerability before deployment. The total time cost? Approximately 45 seconds of automated scanning.
Conclusion: Make Security Checks Non-Negotiable
Vibe coding is here to stay. AI assistants make developers incredibly productive, and that productivity advantage is too significant to give up. But speed without security is reckless.
The solution isn't to slow down — it's to automate security checks so thoroughly that they become invisible. This 10-point checklist should run automatically on every code change. Make it part of your pre-commit hooks, your CI pipeline, and your IDE workflow.
The rule is simple: If you didn't run the checklist, the code doesn't ship.
Start implementing these checks today. Your future self (and your security team) will thank you.
Get Started with LucidShark
LucidShark automates most of this checklist out of the box. Install it once, integrate with Claude Code, and catch vulnerabilities before they reach production.
```shell
curl -fsSL https://raw.githubusercontent.com/toniantunovi/lucidshark/main/install.sh | bash
lucidshark scan --all
```
Originally published on the LucidShark Blog on March 10, 2026