Chandan Karn

I Scanned 300 Vibe-Coded Repos. The #1 Finding Will Annoy You.

TL;DR

  • Hardcoded secrets (CWE-798) show up in roughly 2 out of 3 AI-generated repos
  • It happens because AI models were trained on years of tutorial code, not production code
  • A pre-commit hook with gitleaks catches this in under 5 seconds

I've been scanning repos for a few months now. Mostly side projects, a handful of production apps that founders shared with me directly. The pattern I keep seeing is secrets hardcoded directly into source files.

Not occasionally. Not in old projects. In code that was written last week, sometimes yesterday, by developers who absolutely know better.

Here's the thing: they didn't write it. Their AI did.

The Vulnerable Pattern

Here's the snippet, in one variation or another, that I've found in roughly 200 of the ~300 repos I've scanned:

// Generated by Cursor, March 2026
const jwt = require('jsonwebtoken');

const SECRET = 'my-super-secret-key-123';

function generateToken(userId) {
  return jwt.sign({ id: userId }, SECRET, { expiresIn: '7d' });
}

CWE-798, use of hardcoded credentials. The JWT secret is sitting in plaintext in a file that will get committed to version control. If this is a public repo, it's already exposed. If it's private, it's one accidental visibility change away from being exposed.
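One nuance worth stressing: deleting the secret in a follow-up commit doesn't un-expose it, because git history keeps the old value. A quick demo in a throwaway repo (file names and values here are hypothetical, just for illustration):

```shell
# Throwaway demo repo: commit a secret, then "remove" it,
# and show it's still recoverable from history.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "const SECRET = 'my-super-secret-key-123';" > auth.js
git add auth.js
git -c user.email=demo@example.com -c user.name=demo commit -qm 'add auth'

# The "fix" commit deletes the literal from HEAD...
echo "const SECRET = process.env.JWT_SECRET;" > auth.js
git add auth.js
git -c user.email=demo@example.com -c user.name=demo commit -qm 'move secret to env'

# ...but the old value is still sitting in the log
git log -p | grep -q 'my-super-secret-key-123' && echo 'still in history: rotate the key'
```

If the secret ever reached a remote, treat it as compromised and rotate it; rewriting history (e.g. with git filter-repo) only helps before anyone has pulled.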

I see variations with database connection strings containing passwords, Stripe secret keys, OpenAI API keys, and AWS access credentials. All in the source. All committed. Gitleaks flags them in about 2 seconds.
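To make the detection side less magic: scanners like gitleaks are, at their core, running regex rules over source text. A toy sketch of the idea (the rule ids and patterns below are made up for illustration, not gitleaks' actual ruleset):

```javascript
// Toy secret scanner: a few regex rules of the kind a real tool applies.
// Rule ids and patterns are illustrative only, not gitleaks' real rules.
const rules = [
  { id: 'stripe-secret', re: /sk_live_[0-9a-zA-Z]{10,}/ },
  { id: 'aws-access-key', re: /AKIA[0-9A-Z]{16}/ },
  { id: 'generic-secret', re: /SECRET\s*=\s*['"][^'"]+['"]/ },
];

// Return the ids of every rule that matches the given source text
function scan(source) {
  return rules.filter((rule) => rule.re.test(source)).map((rule) => rule.id);
}

console.log(scan("const SECRET = 'my-super-secret-key-123';"));
// [ 'generic-secret' ]
```

Real scanners add entropy checks and allowlists on top, but the core loop is this simple, which is why they run in seconds.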

Why AI Keeps Doing This

AI code generators were trained on an enormous corpus of public code. A huge portion of that corpus is tutorial content, blog posts, README examples, Stack Overflow answers. That code almost universally hardcodes values. It's easier to read. It makes sense for a demonstration.

The model learned from that. When it generates a JWT auth example, it reaches for the pattern it has seen thousands of times: const SECRET = 'some-string'. It's not being lazy. It genuinely learned that this is how people write this code.

The problem compounds because AI tools are optimized to produce working code fast. A hardcoded secret works. The token generates. The tests pass. Nothing breaks until it breaks badly.

Cursor, Claude Code, Copilot: they all do this. Some do it more than others. None of them reliably stop themselves.

The Fix

Straightforward:

// Pull from environment variables
const SECRET = process.env.JWT_SECRET;

if (!SECRET) {
  throw new Error('JWT_SECRET environment variable is not set');
}

function generateToken(userId) {
  return jwt.sign({ id: userId }, SECRET, { expiresIn: '7d' });
}

Set JWT_SECRET in your .env file. Add .env to .gitignore. Done.
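Once you have more than one required variable, it helps to validate them all at startup so a missing one fails fast instead of silently signing tokens with undefined. A minimal sketch (the requireEnv helper is my own, not from any library):

```javascript
// Minimal fail-fast env validation. requireEnv is a hypothetical helper,
// not part of any library; adapt names to your project.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} environment variable is not set`);
  }
  return value;
}

// Simulate a configured environment for this demo
process.env.JWT_SECRET = 'loaded-from-env-not-source';

const SECRET = requireEnv('JWT_SECRET');
console.log(SECRET === 'loaded-from-env-not-source'); // true

// A missing variable throws loudly at startup
try {
  requireEnv('DATABASE_URL_MISSING_DEMO');
} catch (e) {
  console.log(e.message);
}
```

Calling this for every required variable at module load time means a misconfigured deploy dies immediately with a clear message, rather than failing mid-request.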

But catching it before commit is better than fixing it after. Here's a gitleaks pre-commit hook:

# Install gitleaks
brew install gitleaks  # macOS
# or: go install github.com/gitleaks/gitleaks/v8@latest

# Add to .git/hooks/pre-commit
# (then make it executable: chmod +x .git/hooks/pre-commit)
#!/bin/sh
gitleaks protect --staged --no-banner

That hook runs in about 2 seconds and catches hardcoded secrets before they ever hit your repo. For the whole project, not just staged files:

gitleaks detect --no-banner

Scan Your Own Codebase

I've been running SafeWeave for this. It hooks into Cursor and Claude Code as an MCP server and flags these patterns before I move on. That said, even a basic pre-commit hook with gitleaks will catch most of what's in this post. The important thing is catching it early, whatever tool you use.
