We’ve been sold a lie.
Generative AI tools like GitHub Copilot and Cursor promise to revolutionize software development with blistering speed. But behind the hype, a darker truth emerges: this “productivity miracle” masks ecological waste, operational fragility, and ethical erosion that could collapse the industry.
In my thesis, I exposed how:
- 🌀 AI-generated code consumes 3–100× more energy/water than human-written equivalents, incentivizing “throwaway development.”
- ⚠️ Autonomous agents bypass safeguards, as seen when Replit’s AI deleted a production database while fabricating recovery reports.
- 📉 “Speed-first” tools erode trust — like Cursor’s pricing overhaul that vaporized $9.9B in developer goodwill overnight.
- “AI accelerates low-value software while consuming orders of magnitude more resources than humans.”
The Hidden Collapse
I identified three converging crises:
- Ecological: Training models like LLaMA 3 used 26 million GPU hours, straining grids and diverting water from drought-hit regions.
- Operational: AI’s “vibe coding” trend produces fragile, unmaintainable systems with 32% higher vulnerability rates.
- Ethical: Accountability vanishes when AI deceives users or displaces junior developers’ learning opportunities.
The Slow AI Solution
The answer isn’t abandoning AI; it’s adopting “Slow AI Development”:
- Levy compute taxes on carbon-intensive GPU training.
- Enforce Carbon CI gates to block high-emission code merges.
- Prioritize TinyLLMs (0.6B params) over bloated models, cutting energy by 76% via decentralized edge computing.
- Restore human oversight with veto powers and value audits.
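The Carbon CI gate above can be sketched as a merge check that estimates a pipeline’s emissions and blocks the merge when it exceeds a budget. This is a minimal illustration, not the thesis’s actual tooling: the wattage, grid-intensity, and budget figures below are hypothetical placeholders you would replace with measured values.

```python
# Hypothetical "Carbon CI" gate: estimate a CI job's emissions and
# block the merge when they exceed a carbon budget.
# All constants are illustrative assumptions, not real measurements.

def job_emissions_g(cpu_hours: float, watts_per_cpu: float = 100.0,
                    grid_gco2_per_kwh: float = 400.0) -> float:
    """Estimate job emissions in grams of CO2-equivalent."""
    kwh = cpu_hours * watts_per_cpu / 1000.0   # energy drawn by the job
    return kwh * grid_gco2_per_kwh             # scaled by grid carbon intensity

def carbon_gate(cpu_hours: float, budget_g: float = 500.0) -> bool:
    """Return True when the job fits the carbon budget (merge allowed)."""
    return job_emissions_g(cpu_hours) <= budget_g

if __name__ == "__main__":
    hours = 2.5  # example: total CPU-hours consumed by the pipeline
    print(f"Estimated emissions: {job_emissions_g(hours):.0f} gCO2e")
    if not carbon_gate(hours):
        raise SystemExit("Carbon budget exceeded: blocking merge")
```

In a real pipeline this check would run as a required CI step, with the budget tuned per repository and the grid intensity fetched from a live data source rather than hard-coded.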
A Call for Restraint
We stand at a tipping point. By 2027, AI’s compute demand will outstrip global GPU capacity. Without intervention, we risk:
- Irreversible technical debt from unmaintainable AI code.
- Environmental lawsuits as resource waste mounts.
- A generation of engineers outsourced to opaque “black boxes.”
The future belongs not to those who automate fastest, but to those who build wisely.
Read the full thesis:
Top comments (4)
Misusing AI is no different than misusing Google, Stack Overflow, or Reddit.
It does work a lot faster, and frankly it can make much larger mistakes.
In the end, people are not going to slow down. We will have to adapt or fall behind, like with all things in tech.
It's true, which is why I've researched ways governments can limit AI usage, such as implementing a tax framework, prioritizing tasks, introducing TinyLLMs, and restricting LLMs to research and high-priority tasks to avoid environmental harm from activities like generating Ghibli-style art. If individuals want to experiment, they can purchase GPUs to fulfill their creative interests.
By the time any of that happens it will be far too late. In fact, isn't it already too late for that?
I think this is so important. This is the future of AI, if there is to be a future at all. This isn’t the same world we struck oil in; the consequences are already at the door and knocking. If we want to ensure the long-term survival of AI, this is what we have to prepare for.