Quy Hoang

Posted on • Originally published at blog.quyhoang.me

AI as Next-Gen Compilers: Right Comparison, Wrong Assumption

Amid the ongoing discussion surrounding whether technical knowledge is still needed in the age of AI, I noticed one particularly popular argument:

"With AI, no one will have to look at code anymore, just like we no longer have to look at the machine code or bytecode produced by compilers."

The comparison seems sound at first glance. We began programming with electric circuits and punched cards, moved to assembly, then to low-level languages like C and C++, and eventually arrived at high-level languages like Java, Python, and JavaScript. With each evolution, we moved one step closer to describing our intent in natural language. Now, with AI, that journey seems to have finally reached its destination.

Compilers and interpreters translate high-level programming languages into machine code or bytecode that we rarely need to read. Similarly, many assume that because AI translates natural language into a programming language, we can finally stop caring about the underlying code altogether.

I used to find this argument persuasive, but after thinking it through more deeply, I realized this school of thought misses two critical points.

1. Compilers are deterministic, while AI is probabilistic.

When using a compiler, you can be confident that the low-level output is 100% faithful to the high-level source code. Any bugs in your program stem from how you translated your thoughts into the high-level language, not from the compilation process itself. This is why you do not need to inspect the output.
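That guarantee can be made concrete with a small Python sketch (using Python's own bytecode compiler as the stand-in): compiling the same source twice produces byte-for-byte identical output, which is exactly why nobody needs to read it.

```python
import marshal

# The same high-level source, compiled twice.
source = "def add(a, b):\n    return a + b\n"

code1 = compile(source, "<demo>", "exec")
code2 = compile(source, "<demo>", "exec")

# Serializing the compiled code objects shows the translation step
# added no uncertainty of its own: the bytecode is byte-identical.
identical = marshal.dumps(code1) == marshal.dumps(code2)
print("bytecode identical:", identical)
```

Any difference in behavior must therefore come from the source, never from the translation, so inspecting the output buys you nothing.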

In contrast, AI is a sophisticated statistical machine that cannot guarantee a consistent, identical output for every prompt. Because it is probabilistic, you can never be entirely confident in AI-generated code without verification.
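A toy sketch of the difference (the candidate "completions" and their weights below are invented for illustration, not taken from any real model): even with the randomness pinned to a fixed seed, repeatedly sampling the same "prompt" yields more than one answer.

```python
import random

# Hypothetical next-completion distribution for one and the same prompt.
candidates = ["return a + b", "return sum([a, b])", "return b + a"]
weights = [0.7, 0.2, 0.1]

rng = random.Random(0)  # fixed seed so the demo is reproducible

# "Ask" the same question 200 times and collect the distinct answers.
outputs = {rng.choices(candidates, weights=weights)[0] for _ in range(200)}

# A compiler gives one answer; a sampler gives a distribution.
print(outputs)
```

All three completions happen to be behaviorally equivalent here; in real generated code, the variants need not be, which is why verification remains necessary.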

One might argue, "That is just the current state of AI. You don't know how it might evolve in a few years." Even if AI technology advances to the point of shedding its probabilistic nature and becoming deterministic, would that remove the need to verify the code? I believe the answer is still no. The issue is not just the technology; it is also the way natural languages are constructed.

2. Programming languages are unambiguous, while natural languages are full of holes.

You may hate learning programming-language syntax or feel frustrated when you encounter syntax errors, but in programming, these are features, not bugs.

Programming languages force us to carefully think, break down requirements, and describe them with a level of precision that cannot be mistaken for anything else. There is no room for guessing, misunderstanding, or vague assumptions. You must be 100% clear about your intent. Once you write valid instructions, you can trust the machine to follow them perfectly.

Human language, however, is flexible and forgiving. A single sentence can be interpreted in several different ways. When we communicate, far more often than we realize, we unconsciously make assumptions and fill in gaps based on our own subjective knowledge and experience.
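A concrete illustration: the informal instruction "sort the names alphabetically" already admits two faithful readings. (The names below are hypothetical, chosen only to expose the ambiguity.)

```python
names = ["alice", "Bob", "carol"]

# Reading 1: sort by raw character codes (uppercase sorts before lowercase).
case_sensitive = sorted(names)

# Reading 2: sort the way a dictionary would, ignoring case.
case_insensitive = sorted(names, key=str.lower)

print(case_sensitive)    # ['Bob', 'alice', 'carol']
print(case_insensitive)  # ['alice', 'Bob', 'carol']
```

Neither implementation is wrong, yet they disagree; whoever translates the sentence into code must silently pick one.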

By describing your intent in natural language, you leave significant room for interpretation and give up full control over the outcome. AI must take on the job of filling in those holes, guessing the missing pieces, and making decisions for you: a task with no definitively correct answer. Whether the translation process is probabilistic or deterministic, ignoring the output code carries a real risk that the software will not behave as intended.

"I can just write tests to ensure my AI-generated code behaves as intended"

This is a naive perspective. While this might work for a simple proof-of-concept, tests are not a sufficient replacement for understanding a system in production-grade software.

Tests are meant to provide a layer of assurance on top of a well-designed, well-understood codebase, not to replace it. Your test suite should be a safety net that protects critical paths and catches edge cases, not an exhaustive description of every possible behavior. If you attempt to use tests as a cover-up for not understanding the code, the cost of producing and maintaining that test suite will quickly exceed the cost of simply writing the code from scratch.
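A small hypothetical example of the trap (the function and its tests below are invented for illustration): the suite is green, yet a case it never exercises misbehaves, and only reading the code reveals the missing guard.

```python
def discounted_price(price, percent):
    # Bug: nothing rejects percent values above 100, so an invalid
    # discount silently produces a negative price. A reader who
    # understands the code spots this; the tests below never will.
    return price * (1 - percent / 100)

# The kind of checks someone might accept from AI-generated code
# without reading it: both pass, so everything looks fine.
assert discounted_price(200, 0) == 200
assert discounted_price(200, 50) == 100

# Outside the safety net, the behavior is nonsense.
print(discounted_price(200, 150))  # a negative price
```

The tests describe a few points on the behavior surface; understanding the code is what tells you the surface has a hole.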

"Same same, but different"

AI is certainly acting as a next-generation compiler by translating requirements from natural language into code. It allows us to focus more on high-level ideas and less on minor implementation details.

However, unlike traditional compilers, AI is not completely trustworthy, and describing what you want in natural language can never guarantee the outcome is exactly what you desire. Therefore, it would be a serious mistake to think you can treat your AI-generated code the same way you treat a compiler's output.

I believe care should only end where responsibility ends. Only you, not AI, bear responsibility for the final product, and caring about the code is one of the best ways you can practice that responsibility.
