TL;DR: The Core Arguments
The Hallucination Tax: AI is architecturally optimized for confidence, not correctness. Your deep technical literacy enables you to audit the logic that ultimately runs your business.
The Prompting Ceiling: Your ability to guide an AI is capped by your technical vocabulary. Better engineering knowledge equals higher-leverage prompts.
The Sustainability Gap: AI agents act as mercenary developers - stateless and shortsighted. You are the architectural anchor that ensures AI's speed multiplies your progress, rather than multiplying your technical debt.
The 1% Rule: AI can handle 99% of the flight, but you are the pilot required for the 1% where the engines fail. If you can’t intervene, the 99% doesn't matter.
Send this article to anyone who tells you "We don't need developers anymore." The barrier to entry has collapsed, but the gap between a prototype and a production-grade product has never been wider.
Before starting, I must admit: the percentage of code I manually write has been steadily declining over the past year, to the point that it's probably close to zero now. So I'm in no way against using AI in coding - in fact, developing software feels more comfortable to me now than ever. The friction of turning my ideas into working code is at an all-time low. I can juggle multiple projects at once or handle personal chores while waiting for AI agents to finish their work. AI also saves me from memorizing obscure syntax, peculiar CSS tricks, and specific API parameters, which I have never enjoyed. It is a fantastic shift.
Meanwhile, there is a growing skepticism about the long-term demand for traditional software developers. High-profile figures such as Elon Musk, Jensen Huang, and Dario Amodei have famously promoted the "coding is dead" narrative, advising the public against learning to code.
That sounds really reasonable. If most people (including me) no longer need to write code manually, shouldn't software developers be obsolete, with the value of knowing how to code dropping to zero?
No, that's wrong.
After using AI coding agents more and more, I see even more clearly how my technical depth acts as a force multiplier. It allows me to produce better results more effectively and reliably, even when I am no longer the one who directly writes those lines of code.
1. Good developers can detect when AI is lying
Let’s be real: generative AI is never going to stop hallucinating, making mistakes, or blatantly lying to you. It is not a bug, it is literally how the technology is built.
In my personal experience, AI has claimed to build features that it never actually wrote; spent multiple turns writing and "verifying" logic for data that never existed; and built a complex "memory-aware" burst controller to prevent Out Of Memory crashes, only to ignore the limit on the very last line. How could you catch these errors if you have no idea what the AI agents are giving you? A manager who does not understand the job is easily fooled and manipulated by their subordinates - the same principle applies here.
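To make that last pattern concrete, here is a minimal hypothetical reconstruction in Python of the kind of bug described above. The function names and numbers are invented for illustration, not the actual code from that session:

```python
# Hypothetical reconstruction of the failure pattern: the controller
# dutifully computes a memory-aware batch size, then ignores it on
# the very last line.

def plan_bursts(pending_items: list, memory_budget_mb: int, item_cost_mb: int) -> list:
    """Split pending work into bursts that fit within the memory budget."""
    # The "memory-aware" part: compute how many items fit in the budget.
    max_batch = max(1, memory_budget_mb // item_cost_mb)

    bursts = []
    for start in range(0, len(pending_items), max_batch):
        bursts.append(pending_items[start:start + max_batch])

    # The bug: this discards all of the work above and returns everything
    # as a single burst, defeating the OOM protection entirely.
    return [pending_items]
```

Every individual line looks plausible, which is exactly why a reviewer who cannot read the code would never catch it: the function claims to be memory-aware, and only the final line gives the game away.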
Since unpredictability is baked into the architecture, it is wild that anyone would consider delegating all the work to AI without a second look. Treating AI as an autonomous pilot instead of a co-pilot is not just lazy, it is asking for trouble. Without a human in the loop to fact-check the output, you are essentially playing a game of Russian roulette with your project.
One might argue, "I can test the output. As long as the result is correct, I do not care if the steps are wrong." That holds true only when what you build with AI is simple enough to have a small test surface you can cover, and it is completed after just a few prompts.
In reality, if you want to build anything more than a quick proof of concept or a disposable tool, you will need many more iterations. At that scale, the number of potential failure points increases exponentially.
Software is not static like an image where everything you need to verify is visible at once. You must account for different edge cases, various devices, and diverse input data. Your application could crash or become sluggish. You could receive a surprisingly high cloud bill, or you could even be hacked and lose all your data. Your app may behave correctly for now, but with a poor underlying design, it will be fragile. Any future update could trigger a cascade of hidden bugs to surface. A lot can go wrong if you do not understand the code.
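As a toy illustration of how small a "passing" test surface can be, consider a hypothetical `average_order_value` helper (invented for this example): the happy-path check succeeds, while an obvious edge case crashes.

```python
def average_order_value(orders: list[float]) -> float:
    # Works for typical input, but crashes on an empty list.
    return sum(orders) / len(orders)

# The happy-path check an impatient reviewer might run:
assert average_order_value([10.0, 20.0]) == 15.0  # looks fine

# But a new customer with zero orders triggers a crash:
try:
    average_order_value([])
except ZeroDivisionError:
    print("edge case missed")
```

One untested input, one production crash. Now multiply that by every function an AI generated across dozens of iterations, and "I tested it and it works" stops being a guarantee of anything.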
2. Good developers can write better prompts
"I don't need to understand code, I just need to prompt." - This seems to be a popular sentiment among many people. But it misses a critical truth: If you do not understand the underlying technology, how do you know what to prompt for the best results?
Obviously, prompts should describe what you want, but "what you want" exists at varying levels of specificity. For instance, you might want to add a "New" label to an item so that users notice it, which increases the likelihood of a purchase, which grows revenue, which ultimately makes you happy. All those levels are "what you want." But you cannot simply provide a vague "make me happy" prompt and expect the AI to function like a genie in a bottle, figuring out all the rest for you. There are realistic technical constraints and trade-offs that determine what's feasible, desirable, or optimal. Needless to say, your ability to identify and evaluate these factors is directly proportional to your understanding of the code.
While not all levels of specificity require technical depth, the more specific you want to be, the more your technical knowledge matters. As a rule of thumb, strive for the most specific prompt you can write without overthinking it and losing productivity. That said, sometimes spending 5 more minutes reviewing the existing code and evaluating options for the next change can save you hours of damage control that a vague, lazy prompt would have caused.
For me, when working on a major change or new feature, I don't simply describe the end-user goal. Instead, I usually guide the AI through the necessary steps: I specify where to place the code, which libraries to use, and which components should be abstracted into utility functions for future reuse. With these technical guardrails in place, the results are consistently more accurate and more desirable than if I had let the AI make those architectural decisions in a vacuum.
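As an illustration, a prompt with these kinds of guardrails might look like the following. It builds on the "New" label example from earlier; the file names, function names, and component names are hypothetical, not from a real project:

```text
Add a "New" badge to product cards for items created in the last 7 days.
- Put the date comparison in src/utils/dates.ts as isRecent(date, days),
  so it can be reused later by a planned "Updated" badge.
- Render the badge inside ProductCard.tsx; reuse the existing <Badge>
  component instead of pulling in a new library.
- Compare dates in UTC to avoid timezone-dependent behavior.
```

Each bullet closes off a class of decisions the AI would otherwise make on its own: where logic lives, what gets reused, and which edge case (timezones) must be handled.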
3. Good developers can keep project development sustainable
Many people feel a surge of hype when they send a prompt to an AI and receive a strikingly good result on the first attempt. They often extrapolate that success toward the horizon, convinced that at this pace they will achieve their goals in no time. However, as they dig deeper, make modifications, or add features, the project often begins to go off the rails. Random bugs appear out of nowhere; fixing one issue introduces five others; a simple change takes 20 minutes yet still leads nowhere. This story is likely familiar to many.
The misunderstanding lies in the gap between producing something simple that "works" for now and engineering production-grade software that is usable, maintainable, and evolvable for the long term.
Code is a set of rules defining how an application behaves. For every high-level behavior you put into a prompt, there are countless sets of rules that can technically satisfy it. Yet, each option carries a different set of trade-offs and side effects that remain invisible, at least temporarily. These often involve edge cases or implicit non-functional requirements that are easy to overlook until they suddenly become a problem.
Meanwhile, AI agents act like mercenary developers. They are stateless, meaning every new chat session starts with a "new" AI. They are also made to be short-sighted, laser-focused on completing the immediate task at hand, sometimes at the expense of long-term goals. While context engineering (such as documentation, skills, and markdown files) can provide agents with useful context, it is not enough on its own. In the human world, the amount of relevant context is far larger than we realize, including things we consider obvious but that are not so obvious to an AI. Some of these nuances are implicit, vague, and difficult to express in words. Without being closely involved in the development process, it's hard to anticipate these hidden complexities before they manifest as breaking bugs.
No amount of context engineering can guarantee that an LLM has everything it needs to work with total autonomy. Therefore, you must act as the glue connecting AI working sessions, holding the long-term vision and the mental model of the project. You provide the AI with appropriate pieces of information when needed and nudge it back on track when it veers off course.
Another effective approach, which I used while developing pixart.world, is to write the architecture, core algorithms, and core UI components yourself first. By the time AI was brought in to accelerate development, I had already laid the "train tracks" for it to follow.
AI is a massive multiplier of output, but multiplying a flawed, shortsighted architecture just gives you technical debt faster. You are the structural anchor that ensures this speed remains sustainable.
4. Good developers can intervene when necessary
While the current generation of AI we use for coding, Large Language Models (LLMs), already possesses much broader intelligence than previous generations, it is still not entirely well-rounded. The details deserve an article of their own, but for now, we can agree that AI still has significant cognitive gaps compared to even an ordinary human in several specific areas, and those gaps are unlikely to close anytime soon due to the very nature of the technology.
In my experience, LLMs struggle most with complex abstraction, spatial logic, intricate user flows, and nuanced user experience. For example, I once prompted an AI (Opus 4.6) to help me implement an algorithm to process a custom 16-ary tree data structure for pixart.world, which requires some abstract spatial imagination to understand. Even though the AI had access to all the detailed documentation, it failed miserably after many attempts. I eventually had to intervene and design an elegant algorithm myself. Only when I passed the AI the pseudo-code did it understand and generate a function that worked flawlessly.
Sometimes, the need for intervention goes beyond overcoming technical limitations. Serious bugs, security vulnerabilities, and core business logic can make or break your software. These are not tasks you can "prompt and forget." They require careful inspection and rigorous scrutiny. A commercial flight may operate on autopilot for 90% to 99% of its time in the air, but the human pilot remains irreplaceable for the rest of the journey. If the pilot fails during those critical moments, the successful 99% becomes meaningless.
What "Knowing Code" Actually Means
The true value of a developer has never been about memorizing syntax, it is about the systems-level thinking you apply to every problem. These are the leverage points you bring to the table:
You understand the "physics" of the digital world and possess the intuition to know what is computationally expensive, what is trivial, and where the boundaries of reality lie.
You know how to dissect chaos, to take a vague human "want" and break it down into the precise, modular logic required to make it work.
You have structured skepticism, hunting for the edge cases, race conditions, and "what-ifs" that an AI would confidently bypass.
You master the art of the trade-off, understanding that engineering isn't about finding a "perfect" solution, but about choosing the right set of compromises for the long-term mission.
The Promotion of the Developer
The "coding is dead" narrative makes for great headlines, but it fundamentally misrepresents what a developer actually does. If coding were merely the act of typing syntax into a text editor, then yes, we would be obsolete. But coding has always been about problem-solving, systems thinking, and architectural integrity. AI hasn't killed coding; it has simply promoted us.
We are transitioning from being the bricklayers of the digital world to being its architects and site managers. The tools have changed - our "shovels" are now power excavators - but you still need to know where to dig, how deep the foundation must be, and how to read the blueprint when the machinery hits a pipe it didn't expect.
The Bottom Line
Technical literacy in the age of AI isn't obsolete, it's a massive competitive advantage. While the barrier to entry for building a simple app has vanished, the gap between a "vibe-coded" mess and a scalable, secure, and sustainable product that solves real problems remains vast.
The developers who thrive in this new era won't be the ones who resist AI, nor will they be the ones who outsource their brains to it. They will be the ones who use their deep technical understanding to steer these powerful models toward excellence. So, keep learning, keep building, strengthen your expertise, and don't put down the keyboard just yet - you’re going to need it to keep the robots in check.