Let's frame the debate: on one side, those for whom AI writes bad code. On the other, those for whom AI has revolutionized their workflow: they move ten times faster, and the question is settled.
Both sides are potentially asking the wrong question.
The real question isn't "does AI code well or badly?" It's "what does it actually produce, and under what conditions does that become a problem?"
What AI is trained on
To understand what AI produces, you have to understand what it learned from. Code models are trained on massive amounts of public code — millions of repositories, years of contributions, dozens of languages.
Do we know exactly what that training data contains? Not precisely. But we can reasonably assume it mixes very good code with much more average code: clean, well-architected code written by experienced developers, alongside code written fast, under pressure, for an MVP that never got refactored.
If AI was trained on good patterns, it can reproduce them. But it doesn't choose a pattern by "intelligence" — it picks the one that is statistically most present in its data. The problem? The most common pattern isn't always the most performant or the most suited to your specific architecture.
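To make this concrete, here is a hypothetical sketch (the function names are mine, not from any model's actual output). The first version uses a pattern that is extremely common in public code and therefore statistically likely to be reproduced; the second, rarer pattern is better suited when inputs grow large:

```python
# A pattern AI often reproduces because it is everywhere in training data:
# membership tests against a list. It works, but each lookup is O(n).
def has_duplicates_list(items):
    seen = []                      # common pattern: plain list
    for x in items:
        if x in seen:              # linear scan on every iteration
            return True
        seen.append(x)
    return False


# The less frequent but better-suited pattern: a set gives O(1) average lookups.
def has_duplicates_set(items):
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False


print(has_duplicates_set([1, 2, 3, 2]))  # True
```

Both versions are "correct", and nothing in the code itself tells a model which one your workload needs. That is exactly the kind of context only you can supply.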
AI isn't incompetent. It can lack context. And that's a fundamental difference.
What it doesn't see
AI sees the code, but not its life after deployment. It can spot a classic security flaw because it learned to recognize those patterns. But it doesn't know whether a given architecture held up under a thousand users or collapsed. For AI, code that works on screen has the same value as code that survives in production. It reproduces forms, not robustness.
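A small, hypothetical illustration of that last point (both functions and their names are invented for this sketch): the two parsers below behave identically on the clean "happy path" a demo exercises, so from the code alone they look equally good. Only one survives the messy input production actually delivers.

```python
# Works in every on-screen test with clean input...
def parse_age_demo(raw):
    return int(raw)


# ...while production input arrives messy: whitespace, empty strings, None.
def parse_age_robust(raw, default=None):
    try:
        return int(raw.strip())
    except (ValueError, AttributeError):
        return default


print(parse_age_robust("  42 "))   # 42
print(parse_age_robust("oops"))    # None
```

Nothing in the syntax of `parse_age_demo` marks it as fragile; its weakness only shows up against data the model never sees.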
So what do we do?
We don't throw AI away. What changes is the developer's role — they become the one who guides what AI produces. The one who brings the context AI can't have: real constraints, security requirements, expected load, acceptable technical debt.
AI can be excellent, but it requires a rich and relevant context to truly perform. And knowing what to ask, how to frame it, which constraints to specify — that's precisely what AI can't do on its own.
Conclusion
The developer who truly understands how AI works uses it better — and combines it with code analysis tools to add an extra layer of control.
In the next article, we'll come back to our initial promise: exploring how an AI capable of self-correction and doubt could become a 24/7 researcher, helping discover new drugs in medicine, or analyzing and finding new alternatives in energy.