DEV Community

From Programmer to AI Code Supervisor

Miguel Teheran on October 10, 2025

A couple of years ago, mentioning Generative AI in software companies led to skepticism and strict security measures. IT departments viewed tools l...
 
leob • Edited

I think we should still learn to code, because how else are we to validate the code that's written by AI? So we still need to understand the basics of coding, the web (HTML/CSS/JS), the HTTP protocol, and so on ...

You could say "but, AI will also do the code reviews" - but then we'd be totally at the mercy of what the AI tools are doing ...

Final conclusion (at least for now): mastering the basics will still be important - and even writing some code "by hand" will remain important.

 
Miguel Teheran

Thanks for your comment. For my current project, we require an AI code review from Copilot and two code reviews from developers: one endorsement from the AI followed by two approvals from humans. We also need a senior review to ensure quality, and I believe this will continue for a long time.
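A policy like this (two human approvals required before merge) can be enforced at the repository level. As a minimal sketch, assuming the project is hosted on GitHub, this is roughly the payload shape for the branch-protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`); the Copilot review itself is requested per pull request and is not configured here:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 2,
    "dismiss_stale_reviews": true
  },
  "required_status_checks": null,
  "enforce_admins": false,
  "restrictions": null
}
```

Setting `dismiss_stale_reviews` to true means new pushes invalidate earlier approvals, so the two human sign-offs always apply to the latest code.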

I fixed the article's format and would appreciate it if you could share it.

 
leob • Edited

Share it? :-)

However, my point, basically (but I'm sure you understood that), was that devs, even when they mainly act as "managers" or "directors" of AI tools, will still need at least basic coding skills and a grasp of the fundamentals (HTML, CSS, JS, HTTP, etc.), because how else are they going to assess whether what the AI tools produce makes any sense at all?

So, at least that part of the education (training) of devs will (should) remain, for the foreseeable future ...

But, I completely agree that the focus, both of education/training, and of the day to day work, is going to shift, no doubt about it ... a lot will change, even when some things will remain the same.

 
Miguel Teheran

Yes, I agree! Leveraging AI as a supplementary tool will be seamless for experienced developers who possess the knowledge to comprehend AI's operations and identify potential issues in the process. However, I am concerned about how novice developers, who rely solely on AI for their development tasks, will address any problems that may arise from these tools.

 
leob • Edited

"I am concerned about how novice developers, who rely solely on AI for their development tasks, will address any problems that may arise from these tools"

That's exactly why I'm saying that mastering the basics/fundamentals (also by, or especially by, junior/novice devs) will remain important (necessary), even when a large part of the code will be 'written' by AI tools ...

 
Nenad Mitrovic

@leob I would go even one step further and say that we need not just a basic but a deep understanding of our craft to review and ideate on complex problems and solutions.

That said, a lot of what @mteheran stated here stands.

I think we should definitely adopt AI, but in a strategic manner: augmenting our abilities and potential by combining human strengths (critical thinking, judgment, creativity) and AI's (pattern recognition, scale, performance) in synergy.

 
Michael Amachree

I read this and thought, nah, AI is just an overpriced autocomplete.

 
leob

Brilliant, that's a dose of sorely-needed antidote to the AI hype :-)

 
Benjamin Nguyen

I believe AI still has limitations that require human oversight. A human companion is essential to catch errors, guide improvements, and ensure AI systems behave responsibly and effectively.

 
ak0047

Thank you for sharing your experience.
Your post made me think about how I want to work with AI in the future.

 
Miguel Teheran • Edited

Thanks for your comment.

 
Alex Towell

Great post.

The “AI code supervisor” vision probably captures the short-term trajectory well. Humans will prompt, verify, and apply judgment. But that framing assumes that software will continue to be structured around what fits in human working memory—our tendency to bundle complexity into a few clean abstractions we can reason about consciously.

That constraint has actually served us. It nudges us toward simple, compressible models, and by Solomonoff induction, simpler explanations have a stronger prior. In that sense, our limited working memory acts as a kind of inductive filter, making us surprisingly effective and sample-efficient in problems that can be expressed through clean conceptual structure.

We do operate in high-dimensional spaces too—recognizing a cat in an image involves thousands of interacting features—but that process happens outside conscious reasoning. We can perform the recognition without being able to articulate it.

AI systems don’t inherit that same cognitive bottleneck. They can blend symbolic structure with sub-symbolic pattern recognition without needing everything to collapse into a human-legible abstraction. As these systems take on more of the “thinking,” it's possible that software development drifts toward latent-space manipulation, where the core units of organization are no longer concepts we can fully grasp or name.

If that happens, our role won’t be to “understand all the code” in the traditional SICP sense. It will be closer to probing and steering behavior, accepting that parts of the system exceed our capacity for explicit comprehension. Our taste for simplicity will remain useful—but maybe only within the subset of problems that are amenable to being simplified at all.

 
Mayank Goyal

We’re moving from writing code to guiding AI, but knowing the fundamentals is still what keeps us in control.

 
Oppy_pro

Good idea, interesting to try.