
Ali Aldahmani

Is Learning to Code Still Worth It in the Age of AI?

A conversation that changed the way I think about programming.


I'll be honest: I had a moment of doubt recently. I'm in my last year and a half of university, majoring in AI, and the more I look at what's happening in the tech world, the more one quiet question nags at me: Is any of this still worth it?

Every semester, I sit through classes on C++, Java, and Python — OOP concepts, data structures, and design patterns. Meanwhile, I watch people on social media generate entire working applications just by typing a sentence into ChatGPT. "Vibe coding," they call it. And it actually works. So naturally, I started wondering: if AI can write the code, why am I spending hundreds of hours learning to write it myself?

I needed an answer from someone who actually knew, not an AI, and not a random post online. I needed a real person with real experience. That's when I thought of an old professor of mine, a computer science department chair who has watched this field evolve for decades.

So I sent him an email.


What I Asked

I shared with him what ChatGPT had told me — that programming isn't going anywhere, that AI will just assist developers and make them more efficient, that human creativity and problem-solving will always be needed. It sounded reasonable. But I wanted to know what he thought. Does programming still matter? Will it still matter when I graduate?

His reply was longer than I expected. And it completely reframed how I was thinking about the whole thing.


The History Lesson I Didn't Know I Needed

Instead of giving me a straight yes or no, my professor walked me through the entire history of programming, told through one simple task: adding a series of numbers. Each era, the same problem, a totally different world.

Stage 1 — Machine Language

It started at the very bottom. Pure binary. Instructions written as raw ones and zeros that the hardware understood directly:
0001 0001 0010
No abstraction. No human-readable anything. Just bits.

Stage 2 — Assembly Language

Then came Assembly, which gave human-readable names to those hardware instructions:
ADD R1, R2 ; R1 = R1 + R2
A small step in readability, but a massive mental leap for programmers of that era.

Stage 3 — Fortran (First High-Level Language)

Then the first high-level language appeared — Fortran — and suddenly code started to look almost like math:
DO 10 I = 1, 10
SUM = SUM + I
10 CONTINUE

Stage 4 — Ada (Second-Generation High-Level)

Languages kept evolving. Ada brought cleaner structure and readability:
for I in 1 .. 10 loop
    Sum := Sum + I;
end loop;

Stage 5 — Java (Object-Oriented)

Then object-oriented programming arrived, with Java letting developers model entire systems around real-world concepts:
for (int i = 1; i <= 10; i++) {
    sum = sum + i;
}

Stage 6 — Python (Modern High-Level)

Then Python took things even further — doing in a few lines what used to take many, with expressive, readable syntax:
import numpy as np

# Create two arrays
A = np.array([1, 2, 3, 4])
B = np.array([5, 6, 7, 8])

# Add the arrays element-wise
C = A + B
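For comparison, the running example from the earlier stages (summing the numbers 1 through 10) collapses to a single expression in plain Python, no loop counter or labels required:

```python
# The same task the earlier stages solved with loops: sum 1 through 10.
total = sum(range(1, 11))
print(total)  # prints 55
```

What took three lines of Fortran or Ada is one built-in call here, which is exactly the "few lines instead of many" point.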

Stage 7 — Vibe Programming (AI-Generated Code)

And now? You just describe what you want in plain English, and the AI writes it:

User: Create a Python program using NumPy that multiplies two matrices and prints the result.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = np.dot(A, B)

print("Matrix A:")
print(A)
print("Matrix B:")
print(B)
print("Result:")
print(C)

Here's the thing: every single one of those transitions (machine code to Assembly, Assembly to Fortran, procedural to object-oriented) felt radical at the time. People probably asked the same question at each step: "If the new tool does the heavy lifting, do I still need to understand the old way?"


The Part That Actually Hit Me

After walking through the history, my professor made a point that I keep coming back to.

He said that vibe programming allows the programmer to think at the level of ideas and design, rather than focusing on the mechanics of writing code. That sounds like pure freedom. And in some ways, it is.

But then he added the part I wasn't expecting: it is essential that the person writing the prompt actually understands the code that gets produced.

Why? Because software doesn't just get written once and live forever. It has a lifecycle, and every phase of that lifecycle requires real understanding:

- Requirements and design
- Implementation
- Testing and debugging
- Maintenance

That last one — maintenance — is the most important. Real software gets updated, patched, extended, and fixed continuously across many versions. And if you don't understand what the AI generated, you cannot maintain it, debug it, or evolve it with confidence.
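To make that concrete, here's a hypothetical example of the kind of subtle slip an AI assistant (or a hurried copy-paste) can produce, and that you only catch if you understand the code: in NumPy, `*` means element-wise multiplication, not matrix multiplication. Both versions run without errors; only one is the matrix product you asked for.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Looks plausible, but * multiplies element-by-element...
elementwise = A * B       # [[ 5, 12], [21, 32]]

# ...while the actual matrix product uses @ (or np.dot):
matrix_product = A @ B    # [[19, 22], [43, 50]]

print(elementwise)
print(matrix_product)
```

A maintainer who doesn't know the difference would ship the first version and get silently wrong numbers. That's the professor's point about understanding what gets generated.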

He put it simply: every leap in abstraction eliminated jobs that already existed (machine coders, assembly programmers) but also created new ones. AI is doing the same thing: prompt engineers, vibe programmers. The field didn't shrink; it shifted.


The Calculator Analogy

This is the part that really settled the question for me. Think about what happened when calculators arrived.

Nobody said "math is dead." Nobody stopped teaching arithmetic in schools. What happened instead was that the floor of what you could accomplish rose dramatically — but the ceiling only moved for the people who actually understood what was happening underneath. A calculator in the hands of someone who doesn't understand math is just a machine that produces numbers. In the hands of someone who does, it's a tool that amplifies everything they're capable of.

AI and code generation are the same. The tools get more powerful. But the person operating them still needs to understand what they're doing — otherwise they're just producing output they can't explain, verify, or fix.


What I'm Taking Away From This

I came into that email thread feeling like my curriculum might be obsolete before I even graduated. I came out of it feeling as if I finally understood why my curriculum exists.

Learning C++, Java, and Python isn't about memorizing syntax that an AI can generate in seconds. It's about building a mental model of how software actually works, how memory is managed, how objects interact, and how algorithms perform at scale. That mental model is what lets me read AI-generated code critically, catch mistakes, ask better questions, and ultimately build better things.

The programmers who will struggle in an AI-driven world aren't the ones who learned to code. They're the ones who learned to copy-paste without understanding. AI doesn't change that equation — it just raises the stakes.

So yes, it's still worth it. Not despite AI, but especially because of it.
