
Daniel Balcarek


How Much AI-Generated Code Are We Actually Shipping to Production? My Reality So Far

Every day I see posts where people claim they barely code anymore or that in a few months most production code will be vibe-coded.

Not only on dev.to, but also on Reddit, X and LinkedIn.

My experience looks very different.

In hobby projects, maybe 75% of my code is AI-generated.
At work, I'm still writing most of the code myself; I'd estimate only about 25% is AI-generated.

Don’t get me wrong, AI has definitely changed how I work. Whether it’s a hobby project or a sprint at the office, my workflow isn't the same as it was a year ago. However, many of the current predictions still feel heavily 'AI-hyped' compared to the boots-on-the-ground reality of software engineering.

The difference becomes especially clear when comparing legacy systems with modern codebases.

Older Codebases

We maintain several older solutions built on .NET Framework. Honestly, some parts are a mess. I’ve cursed the original authors many times.

In these systems, I rarely use AI for new features because it simply isn’t very helpful.

The problems are familiar to anyone working with legacy software:

  • inconsistent architecture
  • missing context
  • hidden dependencies
  • business rules scattered across the codebase

Even developers who have worked on these systems for 10+ years are sometimes afraid to touch certain areas. In this environment, AI struggles because understanding the system matters more than generating syntax.

Newer Codebases

In newer projects that follow good standards and clearer architecture, AI becomes much more useful.

Here is where I actually use it regularly:

  • generating SQL (PostgreSQL functions, tables, indexes)
  • creating unit test drafts for backend and frontend
  • generating boilerplate code from prompts we keep inside the repository
  • discussing performance ideas or refactoring options

However, for complex business features, I still write most of the code myself. Many tasks are too domain-specific to describe well in a prompt.
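To make the test-draft workflow above concrete, here's a sketch of what that usually looks like in practice: I hand the assistant a small, well-specified helper plus the edge cases I care about, and it drafts the assertions. The `slugify` function below is hypothetical (not from our codebase), and real projects would use a test framework rather than `console.assert`:

```typescript
// Hypothetical helper — the kind of small, well-specified function
// that is easy to describe in a prompt.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into "-"
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// The kind of draft an assistant produces: the happy path plus the
// edge cases listed in the prompt. Still needs human review.
console.assert(slugify("Hello World") === "hello-world");
console.assert(slugify("  Already--slugged  ") === "already-slugged");
console.assert(slugify("C# & .NET!") === "c-net");
```

The assistant handles the repetitive assertion-writing; the human still decides which edge cases matter and whether the expectations are actually correct.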

My Question to the Community

I’m interested in how AI is used in production environments: not demos or hobby projects, but daily engineering work.

  • Which AI coding assistant do you use most often? Do you combine multiple tools?
  • Roughly how much of your production code is AI-generated?
  • Does AI help equally in legacy and modern codebases for you?

Maybe I’m behind the trend, or maybe real-world usage simply looks different from the online predictions.

Top comments (6)

Web Developer Hyper

Right now, I'm migrating an old Nuxt app to Next.js at work using Codex. It helps a lot because the task is simple: just migrating it as is.
However, when it comes to fixing a large legacy codebase, AI seems to have a hard time understanding the context. But that was half a year ago, so maybe AI has improved since then.🧠

Daniel Balcarek

So you mainly supervise Codex while it generates most of the code?

We’re using Copilot at work, and with older codebases we’re very careful about using it; a lot of the time we avoid it completely. In our experience, it often doesn’t identify the real root cause of a bug and instead “fixes” something else.

Thanks for the comment!❤️

Web Developer Hyper

Yes, my job is staring at Codex with my arms crossed to make sure it doesn’t slack off. It’s quite hard work. 🤨

Daniel Balcarek

CEO vibes! 😆

But my question was honest: I’m really curious about it, because it makes me feel like I’m far behind the trend.

Kai Alder

Your 75/25 split mirrors my experience almost exactly. Side projects? AI writes most of it. Production work with real constraints? I'm still doing the heavy lifting.

The biggest gap I've noticed is that AI is great at generating code that looks correct but doesn't account for the weird edge cases you only learn from production incidents. Like, it'll generate a perfectly clean API endpoint but miss the fact that your legacy auth middleware passes user data in a non-standard header.

Where I've found AI genuinely useful at work: writing test cases. I describe the function behavior and edge cases, and it cranks out 80% of the test code. Still need to review it but it saves a ton of time on the boring parts.

One thing that changed my workflow recently — using AI to explain unfamiliar legacy code rather than write new code. Just paste in a gnarly function and ask "what does this do and why." Way more useful than trying to get it to fix bugs it doesn't understand.

Daniel Balcarek

Nice! I knew I wasn’t alone 😄

I’ve also noticed this when fixing bugs in older codebases. Even when I give it a clear stack trace together with the code, it often points in a totally wrong direction and can’t find the real root cause. Recently we had a classic thread-safety issue; in one of many context windows it actually hinted that concurrency might be the problem, but the proposed fix was still far from the real solution.

Same here with test generation. Writing unit tests was always a bit painful (especially with strict coverage rules), and AI helps a lot with the repetitive parts.

I like your idea about using AI to explain legacy code instead of writing new code.

Thanks for your comment! Curious if others see the same.