Junn Xavier Adalid

Originally published at xaviworks.hashnode.dev

The Uncomfortable Reality: Vibe Coding

But there is another side of AI coding that we need to talk about.

In reality, many modern developers are no longer using AI only as a tool or assistant. Some are using it as the main developer.

This is what people now call vibe coding.

The idea sounds exciting: you describe what you want, the AI generates the system, you run it, you fix errors by pasting them back into the AI, and after a few minutes you have something that looks like a working app.

And honestly, that is impressive.

But it is also concerning.

Because building software is not only about making something run.

A system can run and still be badly designed.

A feature can work and still be insecure.

A login page can look correct and still have broken authentication.

An API can return the right response and still expose user data.

A payment system can pass a simple test and still fail in real-world edge cases.

That is the part many people miss.

AI can generate code very fast, but fast code is not automatically good code.

According to the 2025 Stack Overflow Developer Survey, AI tool usage is already very common: 84% of respondents said they use or plan to use AI tools in their development process, and 51% of professional developers use them daily. But the same survey also shows a trust problem: more developers distrust the accuracy of AI tool output than trust it, and only a very small percentage highly trust it.

That says a lot.

Developers are using AI more, but they do not fully trust it.

And they should not blindly trust it.


The Problem Is Not AI Coding. The Problem Is Unreviewed AI Coding.

I do not think AI coding itself is bad.

Actually, AI can be very useful.

It can help with boilerplate, debugging, refactoring, tests, documentation, and learning. It can make a good developer much faster.

The real problem starts when a developer accepts AI-generated code without understanding it.

This becomes risky when the AI is deciding things like:

  • database structure
  • authentication flow
  • authorization rules
  • API design
  • file structure
  • error handling
  • validation
  • security logic
  • deployment configuration
  • system architecture

At that point, the developer is no longer just getting help.

The developer is giving up control.

That is a big difference.

Using AI to write a function is one thing.

Using AI to design your whole system without reviewing the architecture is another thing.
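To make that difference concrete, here is a minimal sketch of the kind of gap that slips through when nobody reviews the design. The Flask route and the in-memory INVOICES table are my own illustration, not output from any specific tool; the point is the ownership check that generated code often forgets.

```python
# Sketch of an endpoint that "works" and looks clean, but ships a
# classic broken-authorization bug if the marked check is missing.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder; real apps load this from config

# Stand-in for a database table of invoices.
INVOICES = {1: {"owner_id": 42, "total": 99.0}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # Generated code often stops here and returns the invoice.
    # Without the next two lines, any logged-in user can read
    # anyone's invoice just by guessing IDs.
    if invoice["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify(invoice)
```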


Can AI-Generated Software Be Secure?

Yes, it can be secure.

But only if the developer or team treats AI-generated code like untrusted code that needs review.

The code should still go through:

  • human code review
  • security review
  • testing
  • threat modeling
  • dependency checks
  • static analysis
  • access control checks
  • proper architecture review
  • production monitoring

Without those steps, AI-generated software can easily become risky.
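As one small example of what the testing and access control items mean in practice, review can include tests that deliberately attack the boundary. The client fixture and login_as helper below are hypothetical stand-ins in the style of pytest; the pattern is what matters: assert that the wrong user is refused, not just that the right user succeeds.

```python
# Sketch of access-control tests (pytest style). The client fixture
# and login_as() helper are hypothetical; the assertions are the point.
def test_normal_user_cannot_read_admin_report(client):
    login_as(client, role="user")             # hypothetical session helper
    response = client.get("/admin/reports")
    assert response.status_code == 403        # forbidden, not 200

def test_anonymous_user_cannot_read_admin_report(client):
    response = client.get("/admin/reports")   # no login at all
    assert response.status_code in (401, 403)
```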

A study on AI-generated backend applications found that even the best model tested reached only 62% code correctness, and around half of the functionally correct programs it generated could still be exploited. That is important because backend systems are usually where authentication, authorization, user data, and business logic live.

Another large-scale analysis of AI-generated code from public GitHub repositories found thousands of instances of CWE security weaknesses across many vulnerability types. The study also noted that vulnerability rates differed by language, with Python showing higher rates than JavaScript and TypeScript in their dataset.

This does not mean every AI-generated codebase is insecure.

But it does mean we should not assume AI-generated code is safe just because it works.


Why Security Is Hard for AI

Security is not just about adding a few lines of code.

Security depends on context.

AI might know how to create a login system, but does it know your real business rules?

Does it know which users should access which data?

Does it know your company’s security standards?

Does it know your threat model?

Does it know what should happen when a user changes roles?

Does it know how your payment flow should behave when something fails halfway?

Usually, no.

That is why AI can generate code that looks correct but misses important security controls.
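Take the role-change question as one concrete case. The sketch below uses the real PyJWT library, but the secret, the role claim, and the load_user lookup are illustrative. The first function looks correct, yet it quietly freezes a user's role at login time.

```python
# A subtle context bug: the role is read from the token, which was
# issued at login. Demote an admin, and every token they already hold
# keeps granting admin access until it expires.
import jwt

SECRET = "dev-only-secret"  # placeholder; never hardcode real secrets

def is_admin_stale(token: str) -> bool:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims.get("role") == "admin"  # trusts the login-time role

def is_admin_current(token: str, load_user) -> bool:
    # Reviewed version: the token proves identity only; the *current*
    # role comes from server-side state (load_user is a hypothetical
    # database lookup keyed by the subject claim).
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return load_user(claims["sub"]).role == "admin"
```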

The Cloud Security Alliance explains that AI coding assistants can introduce risks because they do not inherently understand an application’s risk model, internal standards, or threat landscape. They can repeat insecure patterns, take shortcuts, omit necessary security controls, or introduce subtle logic errors that are hard to notice.

This is especially dangerous because AI-generated code often looks clean.

And clean-looking code can make developers feel safe.

But readable code is not the same as secure code.
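A short illustration of that last point, using only Python's standard sqlite3 module. Both functions below are equally readable; only one is safe.

```python
# Clean-looking is not the same as secure: two readable lookups,
# one of which is vulnerable to SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user_unsafe(name: str):
    # Looks tidy, but the input is spliced into the SQL string.
    # Passing name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Same readability, but the driver treats the value as data,
    # not SQL, because it is bound through a placeholder.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```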


The Most Dangerous Mindset

The most dangerous mindset is:

“The app works, so the code must be fine.”

That mindset was already risky before AI.

With AI, it becomes even more dangerous because developers can now create bigger systems faster than they can understand them.

A developer might generate a dashboard, authentication system, backend API, database schema, admin panel, and deployment config in one afternoon.

That sounds powerful.

But if they did not review the code, then they do not really know what they built.

They only know that it appears to work.

That is not engineering.

That is gambling.


AI Can Write Security Code, But It Cannot Own Security Responsibility

Some people may say:

“But AI can also implement security.”

Yes, it can.

AI can generate password hashing code.

AI can suggest input validation.

AI can create middleware.

AI can add authentication.

AI can write tests.

AI can explain vulnerabilities.

But security is not just implementation.

Security is verification.

Security is asking:

  • Is this the right control?
  • Is it applied everywhere?
  • What happens in edge cases?
  • Can a normal user access admin data?
  • Are secrets exposed?
  • Are tokens handled safely?
  • Are permissions checked on the server?
  • Is the database query safe?
  • What happens if the request is modified?
  • What happens if the user is malicious?

AI can help answer those questions, but the developer still needs to ask them.
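Two of those questions, whether permissions are checked on the server and what happens if the request is modified, share one failure mode: the server believing whatever the client sends. Here is a minimal sketch, with a made-up Request type and an in-memory role table standing in for real session and database state.

```python
# The "trust the request" failure mode, side by side with the fix.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: int  # identity the server authenticated
    body: dict    # payload the client fully controls

# Stand-in for server-side role storage.
ROLES = {42: "user", 7: "admin"}

def can_delete_unsafe(req: Request) -> bool:
    # BAD: the client controls the body, so any client can simply
    # send {"is_admin": true} and grant itself admin rights.
    return bool(req.body.get("is_admin"))

def can_delete_safe(req: Request) -> bool:
    # Reviewed version: the decision comes from server-side state
    # keyed by the authenticated identity, never from the payload.
    return ROLES.get(req.user_id) == "admin"
```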

The UK National Cyber Security Centre recently warned that the benefits of AI-generated code must not come at the expense of security. The NCSC also said that AI tools used to develop code need to be designed and trained so they do not introduce or propagate unintended vulnerabilities.

That is the key point.

AI can help with security, but it should not be the only security reviewer.


Manual Review Still Matters

This is why manual review is still important.

Not because humans are perfect.

Humans also write insecure code.

But humans understand context in a way AI often does not.

A developer can look at the system and ask:

“Does this design actually make sense for our users, our data, and our risks?”

AI may generate the code, but the developer must still understand the architecture.

OWASP describes secure code review as a manual process for finding vulnerabilities that automated tools often miss, especially issues involving application logic, data flow, implementation details, and context-specific security problems.

That matters even more in the age of AI.

Because when code is generated faster, review becomes more important, not less.


So, Is Software Still Secure Nowadays?

The honest answer is:

Some software is secure. Some software only looks secure.

AI does not automatically make software insecure.

But careless AI dependence can absolutely make software more dangerous.

A team that uses AI properly can still build secure systems if they have strong engineering practices.

But a developer who vibe codes an entire production system without reviewing the code, architecture, permissions, and data flow is creating serious risk.

The scary part is not that AI can write code.

The scary part is that AI can make inexperienced developers feel like they understand a system they have not actually studied.

That is where the danger starts.


My Take

I think AI coding is here to stay.

Developers will keep using tools like Claude Code, Codex, GitHub Copilot, Cursor, and other AI coding assistants because they are useful and fast.

But speed should not replace understanding.

The future developer should not be someone who avoids AI.

But the future developer also should not be someone who blindly accepts everything AI writes.

The best developer is the one who can use AI, question AI, review AI, and still understand the system deeply.

Because at the end of the day, users do not care whether the code was written by a human or generated by AI.

They care if the software works.

They care if their data is safe.

They care if the system is reliable.

And if something breaks, the AI will not be responsible.

The developer will be.

AI can generate code in minutes, but it cannot guarantee that the code is correct, secure, scalable, or maintainable. That responsibility still belongs to the developer.
