DEV Community

Cesar Aguirre

Posted on • Edited on • Originally published at canro91.github.io

The Numbers That Deflate the AI Hype About Replacing Coders—and Ease FOMO

Only 25% of developers regularly use AI agents, according to Sonar's survey.

After surveying more than 1,100 developers across the globe, they found:

  • 90% of respondents use AI for assisting development.
  • Only 55% of them rate AI as "extremely or very effective."
  • 96% don't fully trust that AI-generated code is functionally correct.
  • 48% always check their AI-assisted code before committing.

Why this matters:

We're flooded with headlines predicting the end of coders.

AI generating more than X% of code at a FAANG company. One CEO suggesting nobody should learn to code anymore, only to walk it back later, saying that replacing senior coders with AI is crazy. And companies using "AI innovation" as an excuse for more layoffs.

Those numbers suggest CEOs spread panic to bump stock prices and fuel the euphoria. Tech adoption takes time. Only 55% rating AI as very effective?! Hmm, there's something CEOs aren't saying, right?

The real motivator isn't productivity, but financial interest.

What to do:

If you think you're missing the whole AI movement, let the dust settle.

Double down on the fundamentals, not shiny objects. Maybe it's time to pick up Structure and Interpretation of Computer Programs or another classic textbook.

With 96% not trusting AI, clean code and security remain essential.

Don't throw away your copy of Clean Code. You will need it to review what AI spits out. You'll be ahead of the 52% who don't always review their AI-assisted code.
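As a hedged illustration of why review still matters (this function is hypothetical, not from the survey): AI-generated code often looks right while hiding a quiet edge-case bug, which is exactly what a careful reviewer catches.

```python
# Plausible-looking generated code with a quiet bug:
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # crashes with ZeroDivisionError on an empty list


# A reviewed version handles the edge case explicitly:
def average_safe(numbers):
    if not numbers:
        raise ValueError("average of an empty list is undefined")
    return sum(numbers) / len(numbers)
```

The first version passes every happy-path test; only a review (or an unlucky user) finds the empty-list case.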

And even if AI takes over, there's work for human coders to do.

It's easy to fall into the AI hype. AI wins on speed. But humans win on communication, collaboration, and problem-solving skills.

Street-Smart Coding covers some of those skills. Follow this roadmap to build hype-proof skills and become the kind of coder AI can't replace.

Top comments (17)

david duymelinck

1,000 and change developers isn't much globally. So I wouldn't rely too much on the numbers of the survey.

I do agree with the sentiment of the post: don't read too much into hype articles like "Spotify didn't touch code for months," "Rewrote Next in 2 weeks," and so on.
The truth is buried in the content. And that gives a more nuanced story.

I didn't jump on the train from the start, but I hopped on recently. Running local LLMs is possible thanks to tools like Ollama, LM Studio, and llama.cpp, and to models that use mixture-of-experts and quantization.
Tools like Claude Code and Opencode are better than Cursor and Windsurf because we understand harnesses and context windows better.
AI is better used for one-off and small tasks than repeated tasks.

The main thing is to not humanize AI. It is nothing more than a software tool.
And you can use it or not, like any other tool.

Cesar Aguirre

1000 and change developers isn't much globally.

Absolutely! Just enough to have a sense of real AI usage and the overall "trend"

AI is better used for one off and small tasks than repeated tasks.

I use it to fill in the body of methods. I sketch out the solution with comments and ask AI to do the dirty work.
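A minimal sketch of that workflow (the function and comments are made up for illustration): you outline the solution as comments, then let the assistant fill in the body, and you still review every line.

```python
# Step 1: I sketch the solution as comments before asking AI to fill it in.
#   - strip the currency symbol and thousands separators
#   - convert the remainder to a float
#   - raise ValueError on anything that isn't a price

# Step 2: what an assistant might generate from that sketch, still reviewed by hand:
def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        raise ValueError(f"not a valid price: {text!r}")
```

The comments carry the design; the generated body is just the "dirty work."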

The main thing is to not humanize AI. It is nothing more than a software tool. And you can use it or not, like any other tool.

Love this chain of thought.

Mike Talbot ⭐

Ok, but I bet if you surveyed turkeys, they might not trust Thanksgiving... I think we need to be looking at what is working: not sentiment, but practical, actual evidence of progress. Also, should we be assessing how many subtle bugs are introduced by human programmers? Quite a high percentage, I'd imagine. The key for me is to stop saying "don't worry, it's rubbish" and to start finding ways to make things work: for us, that's AI writing code, AI code reviewing, our own AI bots confirming architecture, AI highlighting the most important changes and choices, and humans reviewing those. It'll be different in 6 months, but that's our current position.

Cesar Aguirre

AI highlighting the most important changes and choices and humans reviewing those

I just dream about the end of writing CRUDs and SCRUM and their daily meetings :P

Matthew Hou

The 96% not fully trusting AI-generated code stat is the most telling one here.

Here's what I think the numbers are actually showing: AI didn't reduce work — it redistributed it. You spend less time writing code, more time verifying code you didn't write. And verifying someone else's code (human or AI) is cognitively harder than writing your own.

The METR study adds another layer: developers perceived 24% speedup but measured 19% slowdown. That perception gap is the real story. We feel faster because generation is instant, but we're spending more total time on the verify-debug-fix cycle.

So I'd frame it differently than "AI won't replace you." It's more like: AI shifts your job from writing to verification. And verification — defining what "correct" means, catching the subtle bugs that look right — is the hard part. Always was, actually. We just didn't notice because writing code was slow enough to force us to think.

The question isn't whether AI replaces coders. It's whether we invest in the right skills for what comes after generation gets cheap.

Cesar Aguirre

Agree, Matthew, more code, more verification, slower code reviews, more builds, etc...

CrisisCore-Systems

Thank you for bringing numbers into a conversation that is usually just vibes and fear. I keep seeing people talk like replacement is already a done deal, but when you look at what actually ships in real teams, the work is messy. It is systems, tradeoffs, unclear requirements, weird bugs, humans changing their minds, and consequences you have to own.

AI is clearly changing the job, but I think the bigger shift is that it compresses some tasks and expands others. Less time writing the first draft, more time reviewing, testing, threat modeling, and making sure you did not quietly ship a lie.

I am curious what you think happens to entry level growth in this world. If the easy tasks get automated, where do juniors get their reps, and what do good teams need to do differently to teach people without throwing them into the deep end.

Cesar Aguirre

AI is clearly changing the job, but I think the bigger shift is that it compresses some tasks and expands others.

^ This. The more code, the more problems. And that's been true for ages :)

CrisisCore-Systems

Yeah, exactly. The part people miss is that AI does not remove complexity, it moves it. You get less “typing” and more “are we sure this is correct, safe, maintainable, and actually what the user needed.” The failure modes just shift from syntax errors to quiet semantic bugs and bad decisions that look plausible.

I also think that entry level question gets sharper here. If the easy reps disappear, juniors either get trapped doing glue work forever or teams have to get intentional about apprenticeship. Smaller scoped tickets with clear guardrails, more pairing, and more review that explains why, not just what. Otherwise you end up with seniors acting as full time air traffic control and nobody growing.

Cesar Aguirre

The part people miss is that AI does not remove complexity, it moves it.

...while we lose all the mental models and context we used to build when coding by "hand"

If the easy reps disappear, juniors either get trapped doing glue work forever or teams have to get intentional about apprenticeship...Otherwise you end up with seniors acting as full time air traffic control and nobody growing.

What a coincidence! This week I found a post saying the coding world will be divided into expert beginners and lone wolves. It doesn't sound that crazy.

Baltasar García Perez-Schofield

Thanks for researching the real numbers. Well, one could suspect on their own that AI writing X% of the code of the big products in the industry was very wrong. At best.

Probably, the exception is Microsoft. It actually feels like the code shipped in Windows 11 has been written by an AI, with all its flaws. And the problem now, obviously, is how to debug it.

AI is a great tool. Do you want to know about something? Now you can ask AI to summarize it for you. Don't know some API and need a solution? You can ask an AI, but you must evaluate the solutions it gives. There's nothing automatic about it.

I know I'm faster without AI, though it's a tool I know is there for me if I need it.

Cesar Aguirre

I like to think of AI as a very fast junior coder with memory and attention issues :/

klement Gunndu

The 96% not fully trusting AI code matches what I see — we review every AI-generated line before merge, and the rejection rate on logic errors is still around 15-20%. Speed gains are real but only if you actually catch the subtle bugs.

Cesar Aguirre

To avoid those pesky bugs, I decided to use AI as a sparring coder: if I code, AI reviews. And the other way around.

Viktor Avdulov

I think the devil is in the details. This doesn't tell us the full story. You need to stratify these devs by the AI tools they've used, by education in using AI for dev, by organizational culture, etc. I think you would see a lot more success with AI among teams at organizations that encourage AI for dev, spend the resources to educate their devs, and have SOPs for agentic coding flows developed and tested. The Claude Code ecosystem would yield drastically different results than ChatGPT generation. It's not even close. And you should check any code before you commit it, and you have tests and PRs for that.

Cesar Aguirre

And you should check any code before you commit it, and you have tests and PRs for that

I second this. We're responsible for every LOC. When things break, we can't simply say "AI did that."

Стас Фирсов

Дружище, Вы просто верите в волшебного Джинна из сказки про Алладина, А реальность заключается в другой плоскости- без труда не выловишь и рыбку из пруда))). Вы ждете халявных прорывов от ИИ загружая в него массивы информации, которую он и без вас знает)). Ваш творческий потенциал заканчивается на кантрл в)).Ты думаешь, что Нейронку надо обучать и подгонять, загружая в неё гигабайты кода, но Ты её не прокачаешь) ибо Ты забыл)))- нейронка выдает Тебе среднее значение))). Ты творец, нейронка инструмент, если Творец говно, то инструмент в кривых руках, а если Творец тщеславный кусок)), то он обвинит во всем бездушного и безответного... Задумайся и начни работать)), а не лайки собирать.