There's a specific moment when vibe coding goes from fun to terrifying. For me it was a Saturday afternoon, three espressos deep, shipping my fourth side project in three months. I'd been on a roll - prompting Claude, watching code appear, deploying to Vercel, feeling like a wizard.
Then I ran a basic security scan on one of my apps and the report was... not great.
The setup
Quick context: I'm a director at a gaming company by day, indie builder by night. I'd been cranking out AI-powered apps on weekends - a mood tracker, a feedback tool, a recipe thing, a growth analytics dashboard. All built mostly through prompting, minimal manual coding. Ship fast, learn fast.
The problem is "learn fast" didn't include "learn about the security holes your AI just introduced."
What I actually found
I won't pretend I did some sophisticated penetration test. I literally just ran npm audit and checked my environment variable handling. Here's what showed up across my projects:
Hardcoded API keys in client-side code. I asked the AI to add an API call and it put the key right in the React component. Not in an env file, not server-side - just sitting there in the bundle. I caught two of these. Who knows if I missed more.
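The fix here is boring but worth spelling out: the key lives in an environment variable and only ever gets read server-side. A minimal sketch, assuming Node-style `process.env` (the variable name `OPENAI_API_KEY` and the helper names are my examples, not anything the AI generated):

```typescript
// Hypothetical helper: resolve a secret from the environment at runtime,
// failing loudly if it's missing instead of shipping `undefined` to prod.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Build the Authorization header server-side (API route, server action, etc.).
// The key never appears in the client bundle, so it can't leak from the browser.
function authHeaders(name: string): Record<string, string> {
  return { Authorization: `Bearer ${requireEnv(name)}` };
}
```

The React component then calls your own backend endpoint, and that endpoint attaches the key. Grepping your built bundle for the first few characters of your keys is a cheap sanity check.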
No input validation anywhere. Every form in every app just trusted whatever the user typed. The AI generated clean-looking form handlers that did zero sanitization. SQL injection? XSS? Wide open. The code looked professional but it was essentially a welcome mat for anyone who wanted to mess with my database.
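What "not trusting the user" actually looks like is two separate habits: validate the shape of the input at the boundary, and keep user values out of query strings entirely. A hand-rolled sketch (field names are hypothetical; in practice a schema library like zod is nicer, and the query comment assumes node-postgres-style placeholders):

```typescript
// Minimal boundary validation: reject anything that isn't the shape we expect
// before it touches the database. Field names here are made up for illustration.
function validateFeedback(input: { email?: unknown; message?: unknown }): { email: string; message: string } {
  if (typeof input.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    throw new Error("Invalid email");
  }
  if (typeof input.message !== "string" || input.message.length === 0 || input.message.length > 2000) {
    throw new Error("Invalid message");
  }
  return { email: input.email, message: input.message };
}

// For SQL injection specifically, the cure is parameterized queries: the values
// travel separately from the SQL text, so user input can never rewrite the query.
// e.g. (node-postgres style):
// await pool.query("INSERT INTO feedback (email, message) VALUES ($1, $2)", [email, message]);
```

Validation handles "is this even plausible data"; parameterization handles "can this data change what the query means." You want both.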
Dependencies with known vulnerabilities. When you prompt "add authentication" the AI picks packages. It doesn't always pick the latest or most secure ones. I had three packages with high-severity CVEs that were fixed in newer versions.
Overly permissive CORS. Every backend had Access-Control-Allow-Origin: * because that's what makes things work fast during development and the AI never suggested tightening it for production.
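Tightening this is mostly a matter of swapping the wildcard for an explicit allowlist. A sketch of the core decision (origins are hypothetical; with Express you'd typically pass an array like this to the `cors` middleware's `origin` option instead of hand-rolling it):

```typescript
// Explicit allowlist instead of Access-Control-Allow-Origin: *.
// These origins are examples - swap in your real production domain.
const ALLOWED_ORIGINS = new Set<string>([
  "https://myapp.example.com",
  "http://localhost:3000", // dev only
]);

// Returns the value to echo back in Access-Control-Allow-Origin,
// or null when the request origin isn't on the list (header omitted).
function corsOriginFor(requestOrigin: string | undefined): string | null {
  return requestOrigin !== undefined && ALLOWED_ORIGINS.has(requestOrigin)
    ? requestOrigin
    : null;
}
```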
Why this keeps happening
I think about this a lot now. The AI isn't trying to write insecure code. It's optimizing for "working code that does what you asked." Security is almost never what you asked for.
When you prompt "build me a login page" you get a login page. You don't get rate limiting on login attempts. You don't get account lockout after failed tries. You don't get CSRF protection. You get the thing you asked for and nothing else.
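Rate limiting is a good example of how little code the missing piece actually is. Here's a minimal fixed-window limiter sketch for login attempts - in-memory, so it only makes sense for a single long-lived process, not serverless or multi-instance deployments (the window and limit are arbitrary numbers I picked; a package like express-rate-limit or a Redis-backed store is the more realistic route):

```typescript
// Fixed-window login rate limiter sketch. Keyed by whatever identifies the
// caller (IP, email, etc.). In-memory only - resets on restart by design here.
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window (example value)
const MAX_ATTEMPTS = 5;           // attempts allowed per window (example value)
const attempts = new Map<string, { count: number; windowStart: number }>();

function allowLoginAttempt(key: string, now: number = Date.now()): boolean {
  const entry = attempts.get(key);
  // No entry yet, or the previous window expired: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    attempts.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}
```

The login handler calls `allowLoginAttempt(email)` before checking the password and returns a 429 when it comes back false. Maybe fifteen lines - the AI just never writes them unless you ask.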
And honestly? When you're vibe coding and in flow, you don't want to stop and ask "now add rate limiting, now add CSRF tokens, now add input validation to every field." That kills the vibe. That's the whole tension.
What I changed
I'm not going to pretend I figured out some perfect system. But here's what I do now:
Security prompt at the end. After the app works, I do one dedicated pass: "Review this entire codebase for security vulnerabilities. Check for hardcoded secrets, missing input validation, CORS configuration, dependency vulnerabilities, and authentication weaknesses." It's not perfect but it catches the obvious stuff.
Automated scanning in CI. I added npm audit and a basic SAST tool to my deploy pipeline. Takes five minutes to set up. Catches things I'd never think to check manually.
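For anyone who wants the concrete version: this is roughly what that pipeline step looks like as a GitHub Actions workflow. A minimal sketch, assuming GitHub Actions (the file name and trigger choices are mine); `--audit-level=high` makes the step fail only on high- and critical-severity advisories so low-noise findings don't block every deploy:

```yaml
# .github/workflows/security.yml - example workflow, adapt to your setup
name: security
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm audit --audit-level=high
```

A SAST tool slots in as one more `run:` step the same way.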
Env file template. I keep a .env.example in every project and I tell the AI upfront "all API keys go in environment variables, never hardcode them." Setting context early helps.
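For reference, the template itself is trivial - the point is that it's committed while the real `.env` is gitignored, so every new session (and every AI prompt) starts from "these values come from the environment." Variable names below are examples, not from my actual projects:

```
# .env.example - committed to the repo; real values live in .env (gitignored)
OPENAI_API_KEY=
DATABASE_URL=
```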
Dependency review. Before I accept whatever packages the AI suggests, I check when they were last updated and if there are known issues. This takes like 2 minutes and has saved me multiple times.
The bigger picture
I think vibe coding is genuinely amazing for shipping fast and I'm not going to stop doing it. But there's this gap between "it works" and "it's production-ready" that AI doesn't bridge on its own. The AI writes code that works. Making it secure is still on you.
The irony is that the same AI that introduced these vulnerabilities can also help fix them - you just have to ask. It's weirdly good at security reviews when you explicitly prompt for them. The problem is remembering to ask in the first place when you're riding that shipping high.
If you're vibe coding side projects and deploying them publicly, just do the security pass. Twenty minutes of review can save you from being the person who leaked their OpenAI key to GitHub or got their user database dumped because of a missing parameterized query.
Not that I'd know anything about that first one. Definitely not from personal experience. Nope.
Top comments (2)
This is the reason why I still feel the human plays a key role in the engineering. Even if you don't write the code or aren't aware of all the syntax details, you still do software architecture, QA, and, like you mentioned here, ask for minimal security standards. In the end, the final result is still limited to the human's ability to steer the agent in the right direction. You effectively become the tech lead of a very talented dev: you need to set goals, help prioritize tasks, and make sure it pays attention to the right things, like security.
yeah 100% - the moment I stopped thinking of myself as "the developer" and started thinking "the architect who directs AI" things clicked. the security stuff especially. AI will happily generate auth code that technically works but misses all the edge cases. you still need someone who knows what questions to ask. honestly I think that's the new skill gap - not writing code but knowing what bad code looks like