
Fedor Iakushkov
AI Will Replace Programmers. Are You Sure About That?

In case you missed it, back in January 2026, at the World Economic Forum in Davos, Anthropic CEO Dario Amodei stated:

"I think we might be 6 to 12 months away from when the model is doing most, maybe all of what SWEs do end-to-end."

He then added: "I have engineers within Anthropic who say I don't write any code anymore. I just let the model write the code, I edit it..."

It's April now, and spoiler alert: we are still here, and we are still coding. Especially if you're a Senior Developer with 12 years at Google. The catch is that this is marketing on the level of "AI will replace everyone in 12 months" - a phrase we've been hearing for four years straight. As a reminder, Anthropic's CEO said literally the same thing back in October 2025. So let's break down in maximum detail (maybe even too much detail) why neural networks, even the most advanced ones, are physically and mathematically incapable of replacing a programmer - and why Moore's Law, quantum tunneling, and the very nature of probabilistic machines put a massive nail in that coffin.

If you’re interested in the process and want to keep up with my future rants, I’d appreciate a follow on X (Twitter).


1. A Neural Network is Not a Brain, It's a Massive Probabilistic Machine

Modern Large Language Models (LLMs) are neural networks that fundamentally do one thing: predict the next piece of text based on everything they've seen before. They don't understand code in the true sense. They have no internal world model; they don't know what a race condition in concurrent code actually is, or how to ensure data reliability in a distributed system. They simply output the most probable sequence of characters based on statistics gathered from terabytes of code, forums, and repositories.

It's simple: if the model has seen in its training data that the word "fluffy" is followed by "cat" 95% of the time, it picks "cat" as the next token.
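That whole mechanism can be sketched in a few lines. This is a toy bigram "model" - the vocabulary and probabilities are made up for illustration, nothing like a real LLM's learned weights - but the two decoding strategies it shows (greedy vs. sampling) are exactly the knobs real systems expose:

```python
import random

# Toy bigram "language model": P(next word | previous word).
# The words and probabilities here are invented for illustration only.
bigram_probs = {
    "fluffy": {"cat": 0.95, "dog": 0.04, "rug": 0.01},
    "the": {"fluffy": 0.30, "cat": 0.25, "code": 0.45},
}

def predict_next(word: str) -> str:
    """Greedy decoding: always pick the highest-probability next token."""
    candidates = bigram_probs[word]
    return max(candidates, key=candidates.get)

def sample_next(word: str, rng: random.Random) -> str:
    """Sampling: pick the next token proportionally to its probability.
    Those rare low-probability picks are where 'detours' come from."""
    candidates = bigram_probs[word]
    words = list(candidates)
    return rng.choices(words, weights=[candidates[w] for w in words])[0]

print(predict_next("fluffy"))  # cat
```

Scale the lookup table up to billions of learned parameters and a context of thousands of tokens, and you have the core of an LLM - but the operation is still "pick the next token from a probability distribution."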

Hence the famous hallucinations: when the model takes a low-probability detour into a parallel universe where running DROP TABLE users CASCADE; seems like a perfectly reasonable solution. Chain-of-thought reasoning, web search, autonomous agents - these are all useful add-ons that drastically increase the chances of getting the right answer, but they don't change the core mechanism. It's still good old autocomplete on steroids. By the way, giving a neural network web access massively improves answer quality and user experience. It's just a pity that the services providing these search APIs are mostly garbage - offering overpriced and incomplete data. But I digress.

In real-world development, 80% of code is boilerplate, basic data operations, and gluing different parts together. Here, the model genuinely speeds up the workflow dramatically (in today's world, not using AI is just plain stupid, given the speed gains and boilerplate reduction). But that remaining 20% is system architecture, trade-offs between performance and reliability, deep business domain knowledge, edge cases, operational resilience, legal compliance, and questions like, "What if the business decides tomorrow that we need semantic compression for cucumber logs?" This is where the probabilistic machine fails, because the "correct" answer simply doesn't exist in the training data - it requires a creative solution under conditions of incomplete information.


2. Moore's Law is Dead, and Quantum Tunneling Won

Now for the fun part: physics.

Since 1965, we've lived by Moore's Law: the number of transistors on a microchip doubles every two years. This provided the exponential growth in computing power that fueled all the recent AI breakthroughs. More parameters = more "intelligence." Simple.
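To see why "doubling every two years" mattered so much, it helps to write it out as arithmetic: count(t) = count₀ · 2^(t/2). A quick sketch (the starting figure of 1 billion transistors is just a round number for illustration):

```python
# Moore's Law as arithmetic: transistor count doubles every `doubling_period` years,
# i.e. count(t) = initial * 2 ** (t / doubling_period).
def transistors(initial: int, years: float, doubling_period: float = 2.0) -> int:
    return int(initial * 2 ** (years / doubling_period))

# Starting from a hypothetical chip with 1 billion transistors:
print(transistors(1_000_000_000, 10))  # 10 years -> 2**5 = 32x -> 32 billion
```

Ten years of that compounding buys you 32x the transistors; twenty years buys you 1024x. That is the exponential curve the AI boom has been riding - and the curve that physics is now flattening.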

But in 2025-2026, we hit a brick wall.

The current leading-edge manufacturing process is branded as 2 nanometers. The gate insulator is already down to 0.7-1.2 nanometers thick - literally a few silicon atoms. At this scale, quantum tunneling kicks in: electrons behave like waves and, with a non-zero probability, simply "leak" through a barrier that classical physics says they cannot cross. As a result, computing efficiency drops due to leakage current, while heat generation and power consumption skyrocket.
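The brutal part is that tunneling leakage grows exponentially as the barrier thins. A back-of-the-envelope sketch using the textbook WKB estimate for a rectangular barrier, T ≈ exp(-2d·√(2mΔE)/ħ) - the ~3 eV barrier height (roughly a SiO2 gate oxide) and the free-electron mass are simplifying assumptions, so treat the numbers as order-of-magnitude only:

```python
import math

# WKB estimate of electron tunneling through a rectangular barrier:
#   T ~ exp(-2 * d * sqrt(2 * m * dE) / hbar)
# Assumptions: ~3 eV barrier (ballpark for a SiO2 gate oxide), free-electron mass.
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.602176634e-19   # joules per electronvolt

def tunneling_probability(thickness_nm: float, barrier_ev: float = 3.0) -> float:
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Thinning the insulator from 1.2 nm to 0.7 nm raises leakage by orders of magnitude:
for d in (1.2, 1.0, 0.7):
    print(f"{d} nm -> T = {tunneling_probability(d):.2e}")
```

With these assumed numbers, shaving half a nanometer off the insulator multiplies the per-electron leak probability by a factor in the thousands. That exponential sensitivity is why you can't just keep shrinking your way to more compute.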

Quantum Tunneling

Voltage scaling alongside size reduction died a long time ago, too. Now we've moved on to modular chiplets, 3D stacking, and software optimization. This gives us linear growth at best, not exponential.

The problem is, training the most powerful models requires exactly that explosive, exponential growth in compute. To achieve the next major leap, you need 10 to 100 times more parameters and operations. But data centers are already consuming as much energy as small countries. The cost of training a single model is in the hundreds of millions of dollars. We've practically run out of high-quality human data on the open web, and training on "AI slop" doesn't help much - models start feeding on their own outputs and severely degrade in quality. We won't dive into the theories of what happens when neural networks train on their own inherently flawed generated content, but you can guess the outcome.

So, humanity has hit a physics bottleneck, as usual. To build something truly groundbreaking, we either need to invent a vastly superior neural network architecture, or we need breakthroughs like stable quantum computers - which aren't exactly on the horizon for the next few years.

And to create a better architecture - a true Artificial General Intelligence (AGI) that can do everything a human does, including creativity and problem-solving in entirely novel situations - we first need to understand how the human brain works, or at least grasp its fundamental principles well enough to replicate and improve upon them. But as of today, we don't even understand how consciousness and intellect emerge from billions of neurons and synapses. Until we figure that out, neural networks will remain nothing more than very smart probability predictors.

And if we ever do build a system that approaches the complexity of the brain, an ethical question immediately arises: is it self-aware? Does it have rights? But that's a completely different story - and we are a long way from it.

3. What This Means for AI, Programmers, and the Future


Neural networks won't get "smarter" in the sense of replacing a human engineer, but they have already become an incredibly useful tool - just like Git, Docker, or Kubernetes did in their time. A Senior Developer's productivity has multiplied, and Juniors are growing into Mids and Seniors much faster. But AI won't kill the profession itself.

Even if it did, a whole new set of questions immediately pops up:

  • Someone has to define the problem at the level of business logic and real-world physical constraints;
  • Someone has to verify that the code doesn't just look right, but actually works in production under heavy load, handles failures, and is secure;
  • Someone has to make architectural decisions where the cost of a mistake is billions of dollars or human lives;
  • And finally, someone has to maintain the system for decades (oh yeah, everyone loves those legacy Java monsters with server-side rendered frontends and 6,000 lines of code stuffed into a single file).

Enough with the Marketing BS

While AI company CEOs drop quotes like "there will be no more programmers" to pump their stock prices and secure investments, actual engineering teams at NVIDIA, Google, and even Anthropic itself are still actively hiring developers. Why? Because you can't build reliable, scalable, and secure systems purely on probabilities.

AI won't replace programmers, but it is already turning good programmers into great ones. And bad programmers into unemployed ones. The only difference is who understands how this probabilistic machine works under the hood, and who knows how to ask it the right questions.

Physics never lies, but marketing and people lie all the time.

And that is probably the most reliable statement in the tech industry for 2026. So learn to code, and stop whining about being replaced by AI.

Good luck!
