Why Current AI Coding Will Never Replace Human Programmers: A Hint from the Story of AlphaGo
The Two AlphaGos: A Tale of Different Origins
Let me tell you a story that changed how I think about AI and programming.
I played Go when I was young—not well, mind you. I was a terrible player who could barely keep track of my own stones, let alone plan 20 moves ahead. But even as a novice, I understood something profound about the game: it wasn't just about rules and patterns. It was about intuition, style, and thinking that transcended logic.
So when AlphaGo defeated Lee Sedol in March 2016, I watched with fascination. The headlines screamed "AI Beats Human!" and tech pundits declared the age of superhuman AI had arrived. As someone who'd struggled with Go's complexity firsthand, I knew this was huge.
But the real revelation came a year later with AlphaGo Zero. And almost nobody understood why it was fundamentally different.
AlphaGo (2016) learned from 160,000 human games spanning thousands of years of Go history. It studied master players, absorbed their opening strategies, their mid-game tactics, their endgame techniques. Then it improved through self-play. It was brilliant—Lee Sedol himself said some moves were so creative they seemed almost divine. Yet he still managed to win one game.
AlphaGo Zero (2017) started with absolutely nothing but the rules of Go. No human games. No historical data. No master strategies. Just the board, the rules, and self-play. In 3 days, it didn't just beat the original AlphaGo—it demolished it 100-0.
Top Go players who faced AlphaGo Zero said something that still gives me chills: "Against the original AlphaGo, we had a small chance if we played perfectly. Against AlphaGo Zero, we have no chance. Not even a tiny one."
What made the difference?
Not computing power. Not training time. Not algorithmic tricks.
The difference was origin. One learned from humans and carried human DNA in its thinking. The other evolved completely independently and discovered strategies humans—even masters who'd spent their entire lives studying the game—had never conceived in 3,000 years.
As a former terrible Go player, this both terrified and amazed me. Even the worst patterns I'd learned as a beginner were ultimately human patterns. AlphaGo Zero didn't have that constraint.
"Will AI Replace All Developers?"
This is the hottest question in tech right now. Every conference, every tech blog, every developer forum is debating when—not if—AI will replace human programmers.
My answer, based on the AlphaGo story? It will never happen. At least not with current AI technology.
Here's why.
The Gene of Current AI: Trained on Human Logic
Every AI coding assistant today—GitHub Copilot, ChatGPT, Claude, Cursor, Devin—shares the same fundamental DNA.
What they're all trained on:
- 50-70 years of human code: from assembly language to Python, from COBOL to React
- Human-designed architectures: monoliths, microservices, serverless
- Human programming paradigms: OOP, functional programming, procedural programming
- Human patterns: design patterns, idioms, best practices
- Human constraints: readability, maintainability, "clean code"
- Human mistakes: technical debt, cargo cult programming, Stack Overflow copy-paste culture
This is exactly like AlphaGo learning from human games.
These AIs can write increasingly sophisticated code. They can suggest better patterns. They can catch bugs faster. They're getting exponentially better at understanding context and generating solutions.
But here's the hard truth: They're fundamentally constrained by human thinking patterns.
They can only suggest solutions that exist somewhere in their training data or are logical combinations of patterns they've seen. They think in human abstractions because that's all they know. They optimize for human values because that's what they learned.
Just like how even my terrible Go moves were still recognizably human—just bad human—current AI code is recognizably human code. Just much better human code.
Why This Means AI Can't Replace Human Developers
Think about what programming actually requires:
1. Understanding fuzzy requirements
- "Make it faster" - how much faster? For whom? At what cost?
- "Users don't like this feature" - which users? Why? What do they actually want?
- "This feels wrong" - human intuition about product direction
2. Making judgment calls
- Should we refactor now or ship fast?
- Is this technical debt acceptable?
- Which framework fits our team's skills?
- What's the right tradeoff between performance and maintainability?
3. Navigating human systems
- Team dynamics and communication
- Business priorities that change weekly
- Legacy systems with undocumented quirks
- Political decisions disguised as technical ones
4. Defining what "correct" even means
- The spec is always incomplete
- Edge cases nobody thought of
- Changing requirements mid-project
- "I'll know it when I see it"
Current AI systems can't do any of this independently, because they're trained on the output of these decisions (the code), not on the process of making them (the human judgment).
They're like AlphaGo: excellent at executing within human-defined constraints, but unable to question the constraints themselves.
This is why AI needs human guidance—not as a temporary limitation, but as a fundamental characteristic of how they're built.
But What If... The AlphaGo Zero Moment for Programming
Now here's where it gets interesting—and scary.
What if someone built an AI (call it AlphaCode Zero) that learned programming the way AlphaGo Zero learned Go? Not from human code, but from first principles.
Starting with only:
- CPU instruction sets (x86, ARM, RISC-V)
- Memory and hardware constraints
- Mathematical logic and formal verification
- Clear optimization objectives: correctness, speed, efficiency, energy
Learning through:
- Self-play: generate programs, test them, learn from billions of attempts
- No human code. No Stack Overflow. No GitHub.
- Pure evolution of solutions from scratch
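The self-play loop above can be sketched in miniature: define a tiny "instruction set," state an objective as a set of test cases, and search for programs with no human examples at all. Everything below is an invented toy (the ops, the target function), not a real system; a serious version would need learned search rather than brute force.

```python
import itertools

# A miniature "instruction set": each op transforms an integer register.
OPS = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def run(program, x):
    """Execute a program (a sequence of op names) on input x."""
    for op in program:
        x = OPS[op](x)
    return x

def search(target, tests, max_len=4):
    """Brute-force 'evolution from scratch': try every program up to
    max_len until one matches the objective on all test inputs."""
    for length in range(1, max_len + 1):
        for program in itertools.product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in tests):
                return program
    return None

# Objective: discover a program computing (2x + 1)^2, given only
# the instruction set and the test cases -- no example code.
target = lambda x: (2 * x + 1) ** 2
found = search(target, tests=range(-5, 6))
print(found)  # a sequence of ops the search discovered on its own
```

The point of the toy: nothing in the search knows how a human would write this function; it only knows the rules (the ops) and the win condition (the tests). Scale that idea up by many orders of magnitude and you have the AlphaGo Zero recipe applied to code.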
The result would be fundamentally different:
- Programming paradigms we've never imagined
- Abstractions that make no sense to humans but are provably superior
- Code that's 100x more efficient but completely incomprehensible
- Solutions that work perfectly but nobody knows why
This would be like AlphaGo Zero: alien intelligence that plays by the same rules but thinks in completely different patterns.
This could actually replace human programmers. Not assist them. Replace them.
The Real Objective Function: Human Wellbeing, Not Code Quality
Here's the realization: we're measuring the wrong thing.
The objective for programming isn't "write good code." It never was.
The objective is human benefit. Supporting people to live happy, healthy, productive lives.
- In Go: control more territory (binary win/lose)
- In AlphaCode Zero: improve human life quality (measurable through satisfaction, outcomes, engagement)
The genius of this framing: It bypasses all technical debates. We don't argue "clean code" vs "fast code." We just ask: Do humans love it? Does it improve their lives?
If yes, it's good. If no, it's bad. Binary. Clear. Measurable.
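As a sketch of that framing: collapse the fuzzy question "does this improve lives?" into a single reward signal and a threshold. The signal names, weights, and threshold below are invented purely for illustration; measuring real human wellbeing is, of course, the hard part.

```python
# Hypothetical wellbeing signals on a 0.0-1.0 scale. Names and weights
# are invented for illustration, not a real measurement scheme.
WEIGHTS = {"satisfaction": 0.4, "outcomes": 0.4, "engagement": 0.2}

def wellbeing_score(signals):
    """Collapse several human-benefit signals into one number."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

def is_good(signals, threshold=0.7):
    """The binary verdict: does this software improve people's lives?"""
    return wellbeing_score(signals) >= threshold

verdict = is_good({"satisfaction": 0.9, "outcomes": 0.8, "engagement": 0.6})
print(verdict)
```

A single scalar objective like this is exactly what made Go tractable for self-play: the system never debates style, only whether the number went up.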
The Paradigm Shift: Software Design Disappears
In this future, the entire concept of "software design" as we know it vanishes.
Current model:
- Humans have ideas → Humans design software → Humans write code → Users use it
- AI just helps with the "write code" part
AlphaCode Zero future:
- Humans express needs → AI creates solutions in its own way → Users benefit
- AI owns both concept and implementation.
Example: You say "I want to manage my finances better."
Today: We design a budgeting app with expense tracking, categories, dashboards, React/Python/PostgreSQL.
AlphaCode Zero: Invents something completely different. Maybe not an "app" at all. Maybe a system that integrates with everything you do. Maybe interaction patterns we haven't imagined. You just know: your finances are managed, stress is reduced, and you love using it. You don't know how it works. You don't care.
The intermediate artifacts—code, databases, APIs—become implementation details the AI handles; it might redesign them with a totally different schema. We wouldn't understand, and we wouldn't care.
AlphaCode Zero could actually work—and it would be fundamentally different from anything we have today. It doesn't just write alien code. It invents alien concepts. And humans wouldn't care that it's alien, because we'd measure only one thing: Does it make our lives better?
Is Anyone Trying to Build This?
Despite the challenges, you'd be right to suspect someone is working on this. It's too obvious not to try.
Evidence of early attempts:
1. Formal verification + AI synthesis
- Research combining proof systems (Coq, Lean, Dafny) with neural networks
- Generate provably correct code from mathematical specifications
- Still using human-designed formal systems, though
2. Hardware/software co-design
- Google using reinforcement learning to design TPU chip floorplans for AI workloads
- Apple's Neural Engine optimizing across hardware and software
- Getting closer to "first principles" optimization
3. Neural architecture search
- AIs designing neural network architectures that beat human designs
- Results often look bizarre but outperform hand-crafted networks
- Proof that AI-designed systems can beat human intuition
4. Skunkworks projects
- OpenAI, DeepMind, Anthropic almost certainly have researchers exploring this
- Likely unpublished or under NDA
- Too strategically important not to investigate
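To make the first direction above concrete: in a proof assistant like Lean, a function can ship with a machine-checked theorem about its behavior, so "correct" stops being a code-review opinion. A minimal sketch in Lean 4 (`myMax` and its theorems are illustrative, not drawn from any real synthesis system):

```lean
-- A tiny function plus machine-checked guarantees: the proofs are
-- verified by the compiler, not trusted on faith.
def myMax (a b : Nat) : Nat :=
  if a ≥ b then a else b

-- Specification: the result is at least as large as either input.
theorem myMax_ge_left (a b : Nat) : myMax a b ≥ a := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : myMax a b ≥ b := by
  unfold myMax; split <;> omega
```

An AI that searched for programs while being forced to discharge proofs like these would be optimizing against a formal objective rather than human taste—still using human-designed formal systems, though, which is why I file it under "early attempts."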
But I suspect we're decades away from a true AlphaCode Zero that can handle general-purpose software development. Maybe faster, who knows.
The Real Future: Humans + AI, Not AI Instead of Humans
Question for you: Given that current AI is trained on human code, what human skills do you think are most important to develop to stay relevant? And would you trust a system written by an AlphaCode Zero if it was provably correct but incomprehensible?