Ferdynand Odhiambo
The Coding Paradox: Why You Need to Learn to Code Precisely Because Everyone Says You Don't

There is a statement making the rounds in tech circles right now, repeated confidently at conferences, in newsletters, and across social media timelines: coding is dead. The argument goes that AI has democratized software development so thoroughly that writing code by hand is a relic — a skill as obsolete as typesetting or darkroom photography. Anyone can build anything now. Just describe what you want.

I think this is partially true. And that partial truth is what makes it dangerous.

What the "Coding is Dead" Crowd Gets Right

Let me be fair. The people making this argument are not wrong about the surface-level observation.

AI coding tools have genuinely compressed what used to take weeks into hours. A non-technical founder can now scaffold a functional prototype without hiring an engineer. A designer can bring their mockup to life without learning React from scratch. A solo developer can punch well above their weight, handling backend logic, frontend polish, and deployment pipelines in a single afternoon.

I have experienced this firsthand. Tasks that used to consume entire afternoons (boilerplate setup, repetitive CRUD operations, writing unit tests for obvious cases) now take minutes. The productivity gain is real. It is not hype.

And yes, for a large category of software (static websites, simple CRUD apps, internal tools, landing pages), a sufficiently clear prompt and some iteration can get you to a working product without deep technical knowledge. The barrier to entry has dropped substantially, and that is genuinely good for the world.

So the claim is not fabricated. AI has changed what it means to build software. The question is whether it has changed it enough to make foundational knowledge irrelevant. I do not think it has. Not even close.

What They Are Getting Wrong: The Supervision Problem

Here is where my own experience diverges sharply from the narrative.

When I use AI to build anything beyond the trivially simple, I am not a passive recipient of generated code. I am a supervisor, a reviewer, and often a debugger. And the quality of that supervision depends entirely on what I already know.

A few months ago, I was building a feature that involved file uploads processed asynchronously. The AI produced clean, readable code. It worked perfectly in every test I ran. I almost shipped it. Then something felt off: a subtle detail in how the job queue was being handled, one I recognized from a painful lesson two years prior. The AI had introduced a race condition that would only surface under concurrent load. The code was not wrong in any obvious way. It just would have failed in production, quietly, at the worst possible moment.

I caught it because I had broken something similar before, understood why it broke, and rebuilt it from scratch at the time. That experience lived in my hands and my memory. The AI had no way to surface it. And a developer who had never written that kind of code would have had no reason to check for it, and no basis for recognizing such a subtle error.

This is the supervision problem. AI generates plausible code. Not always correct code. Not always safe code. Not always code that scales, or handles edge cases, or survives contact with real users. The gap between plausible and correct is exactly where engineering judgment lives, and that judgment is built through experience, not prompting.
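To make that failure mode concrete, here is a minimal sketch (not the code from my feature; the job names are illustrative, and the barrier exists only to force the bad interleaving deterministically) of the check-then-act shape such races often take. Each worker's logic looks correct in isolation, and any single-threaded test passes:

```python
import threading

processed = set()   # shared state: which uploads have already been handled
results = []        # records every time a job is actually processed
barrier = threading.Barrier(2)  # stands in for unlucky scheduling under concurrent load

def handle_upload(worker, job_id):
    # Check-then-act without a lock: the race lives in the gap between
    # the membership test and the add.
    if job_id not in processed:
        barrier.wait()  # both workers pass the check before either records the job
        results.append((worker, job_id))
        processed.add(job_id)

# Two workers pick up the same queued job concurrently
workers = [threading.Thread(target=handle_upload, args=(w, "job-1"))
           for w in ("w1", "w2")]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(len(results))  # 2: the upload was processed twice, though each check looked fine
```

The fix is to make the check and the record a single atomic step, for example by holding a `threading.Lock` across both, or by letting the queue itself enforce exactly-once delivery. The point is that nothing in the happy-path tests would ever reveal the gap.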

The Amplification Effect

I want to push this further, because I think there is a deeper principle at work.

AI does not replace expertise. It amplifies it. This means it makes skilled developers dramatically more productive, and it gives unskilled developers a convincing illusion of productivity, right up until the moment it does not.

Think about what actually happens when you build something non-trivial with AI assistance. You make hundreds of small decisions: which parts of the generated output to keep, which to question, which to throw away entirely. You decide when the architecture feels wrong even if you cannot immediately articulate why. You know which edge cases to probe because you have watched them break things before. You recognize when a database schema will become a nightmare to query at scale, even if it looks clean today.

None of that is in the prompt. All of it comes from having written code, broken things, and learned why.

The cruel irony is that the people confidently declaring coding dead are, almost without exception, people who know how to code deeply. They have internalized the thinking so thoroughly that they can now safely outsource the execution. They are not replacing their knowledge. They are leveraging it more efficiently. When they say "anyone can build anything now," they mean anyone like them: someone who has already paid the foundational cost.

The Hidden Debt

There is a generation of new builders entering tech right now who are being told, sincerely and not maliciously, that they can skip the fundamentals. Learn to prompt. Learn to ship. The tools will handle the rest.

I worry about this.

Not because I am attached to the ritual of writing code by hand. I have no nostalgia for spending three hours tracking down a missing semicolon. But the fundamentals are not about syntax. They are about mental models: how data moves through a system, how state is managed, what happens at the boundaries between components, why some abstractions hold under pressure and others collapse. You do not develop those models by reading documentation. You develop them by building things, watching them break, and understanding why.

A developer who skips this phase is not just missing skills. They are missing the diagnostic vocabulary to know when something is wrong before it becomes catastrophic. They are accumulating a debt, not in their codebase, but in their own understanding. And unlike technical debt, you cannot refactor your way out of it at a later date. You have to go back and pay it.

A Fair Counterargument

I want to be honest about where the other side has a point I cannot fully dismiss.

AI tools are improving rapidly. The edge cases they miss today, they may handle confidently next year. The architectural mistakes that require experienced eyes to catch might become detectable by better static analysis, smarter linting, or more capable models that reason about system behaviour rather than just generating syntax. If the tools get good enough, the required baseline of human knowledge genuinely shrinks.

I think this is probably true at the margins, and will become more true over time. Routine engineering work, the kind that follows established patterns in well-documented domains, will require progressively less human oversight.

But architecture, product judgment, novel problem spaces, and systems that need to survive real-world conditions: these will demand taste and experience for a long time. And taste is not something you can fake. It accumulates slowly, through work and failure and iteration (indeed a very slow process, lol).

What I Actually Think You Should Do

If you are early in your career, or considering a career in tech, here is my honest read of the moment:

Learn to code. Write things by hand. Build projects that are too small to be impressive and too broken to ship. Debug things you do not understand until you understand them. Let AI assist you while you are doing this; it is a spectacular learning accelerator when used that way. But do not let it replace the process of understanding.

If you are already an experienced developer, use AI aggressively. It is a genuine superpower at your level. But stay sharp about what you are delegating. There is a version of AI-assisted work that keeps your judgment engaged, and a version that slowly atrophies it. Know which one you are doing.

And if you are a leader or educator — be careful about what you tell the next generation is safe to skip. The people who will be most harmed by "coding is dead" are not the experienced engineers who said it. They are the newcomers who believed them.

The Paradox, Plainly Stated

The more AI lowers the barrier to building software, the more valuable the foundational knowledge becomes — because someone has to supervise the output, catch the failures, and make the judgment calls that no prompt can encode.

You need to learn to code precisely because everyone is saying you do not.

The floor has not disappeared. It has just become invisible to anyone who never stood on it.
