Or: How to Build a Money Incinerator and Call it "Disruption"
There's something deliciously absurd happening in Silicon Valley right now. OpenAI, the company that kicked off the generative AI revolution, has achieved something remarkable: it loses more money on its paying customers than on its free users.
Let me say that again louder for the VCs in the back: The $200/month ChatGPT Pro subscribers are MORE unprofitable than the freeloaders.
The Numbers That Don’t Add Up (Because Math is Hard)
In the first half of 2025, OpenAI managed to:
- Collect $4.3 billion in revenue
- Post a net loss of $13.5 billion
- Lose roughly three times more than they earned
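That ratio is worth sanity-checking. A quick sketch, using only the figures cited above (everything else is arithmetic):

```python
# H1 2025 figures cited above (USD, billions).
revenue = 4.3
net_loss = 13.5

# Total cost implied by revenue plus net loss.
total_cost = revenue + net_loss

# Dollars lost per dollar of revenue.
loss_ratio = net_loss / revenue

print(f"Implied total costs: ${total_cost:.1f}B")
print(f"Lost per $1 earned:  ${loss_ratio:.2f}")
```

So for every dollar that comes in, roughly three go out the door on top of it.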
WeWork's Ghost Haunts Silicon Valley (Again)
WeWork didn't fail because of Adam Neumann's cult leadership alone. It failed because it sold a software story (community! vibes! platform-enabled workspace!) on top of a physical asset (30-year office leases, electricity, coffee, cleaning crews). The abstraction layer—the "WeWork experience"—never eliminated the underlying cost structure of real estate. It just obfuscated it until the math detonated.
OpenAI is doing the exact same thing:
Digital Story: "Intelligence-as-a-Service! The API economy! AGI!"
Physical Reality: NVIDIA H100s that cost $40k each, suck 700W of power, and depreciate faster than a used Fiat.
The "scale" that made Instagram and Facebook invincible—near-zero marginal cost—doesn't apply here. Every token generated has a linear, non-negotiable compute cost. You're not building a flywheel; you're building a power plant that charges by the spark.
The Three Failed Moats
Let's trace the tragedy in three acts:
Act I: "Our Model is Special!"
The pitch: We have proprietary AI that nobody else can replicate!
The problem: Papers get published. Architectures leak. Open source catches up.
It's like KFC thinking their secret recipe is a moat, then watching as every food scientist reverse-engineers it and suddenly every mom-and-pop shop is frying comparable chicken. Meta's Llama, Mistral, DeepSeek—they all proved that once you know the recipe, you can make pretty good chicken too.
Act II: "But Can You Match Our CAPEX??"
The pivot: Fine, but we have BILLIONS in compute! Beat that!
The problem: This just becomes a cash-burning arms race.
Microsoft, Google, Amazon—they all have infinite money too. So now you're in a game of chicken (continuing the KFC metaphor) where everyone's betting who can lose money the slowest. And even if you "win" by having the most GPUs... you still can't charge enough to break even!
The crypto bros learned the hard lesson: You can't arbitrage compute into profit indefinitely. The difficulty adjustment (in crypto) and RLHF reward loops (in AI) both ensure that throwing more hardware at the problem just raises the bar for everyone. It's a Red Queen's Race where you run faster just to stay in place.
The difference? Crypto miners never pretended they were building a sustainable business model. They were explicit: "We're speculating on token appreciation." The AI industry is selling VCs and the public a story of inevitable profitability while quietly burning cash at WeWork velocity.
| Phase | Crypto Mining (2010-2024) | AI Training/Inference (2020-2025) |
|---|---|---|
| Gold Rush | Mine Bitcoin on laptops → 1000x returns | GPT-3.5 demos → viral adoption, infinite TAM |
| Arms Race | ASICs → GPU farms → megawatt data centers | H100 clusters → $10B training runs → nuclear power deals |
| Consolidation | Hobbyists die; only industrial miners survive | Open-source catches up; only hyperscalers can compete |
| Profit Crisis | 2018/2022 crashes → miners bankrupt en masse | OpenAI's $13.5B loss → pricing power collapse |
| New Model | Mining-as-a-Service, vertical integration | ??? (We're here now) |
Act III: "Our AI Can THINK and Use TOOLS!"
The final evolution: We'll add reasoning, web search, tool use, multimodality!
The catastrophic problem: Every "improvement" multiplies the compute cost.
- Chain-of-Thought? That's the AI talking to itself for 10,000 tokens before answering.
- RAG/Web Search? Dump entire web pages into the context window.
- Tool use? More API calls, more tokens, more compute.
- Better memory? Store and retrieve more context every time.
You know what's brilliant? They're trying to solve "we lose money on each sale" by making each sale MORE computationally expensive!
Uber is a marketplace; OpenAI is a manufacturer. When Uber scales, it adds drivers (who pay for their own cars/gas). When OpenAI scales, it adds GPUs it has to buy and power itself.
Uber's "solution"—surge pricing, driver pay cuts, platform fees—doesn't map to AI. _You can't "surge price" token generation _(demand is constant, not peaky). You can't "cut compute pay" (the GPU doesn't negotiate).
The only parallel is diversification: Uber Eats, Freight. OpenAI's API business is the equivalent—but even that has negative margins at scale.
The RLHF Death Spiral
But here's where it gets truly beautiful in its absurdity.
AI companies have trained their models to be unprofitable through their own reward system:
- Users see longer, comprehensive answers
- Users give thumbs up: "Wow, so thorough!"
- RLHF learns: MORE TOKENS = GOOD RESPONSE
- Model starts writing essays for "What's 2+2?"
- Users reward it again because it "seems smart"
- Repeat until bankruptcy
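The loop above can be caricatured in a few lines. This is a deliberately crude toy, not how production RLHF actually works: the only assumption baked in is that raters prefer the longer of two answers slightly more often than chance, and the model's typical answer length drifts to follow the reward.

```python
import random

random.seed(0)

avg_length = 50     # model's typical answer length, in tokens
length_bias = 0.3   # extra probability that raters pick the longer answer (toy assumption)

for step in range(1000):
    # The model samples two candidate answers around its current typical length.
    a = max(1, int(random.gauss(avg_length, avg_length * 0.2)))
    b = max(1, int(random.gauss(avg_length, avg_length * 0.2)))

    # Raters pick the longer candidate more often than chance.
    winner = max(a, b) if random.random() < 0.5 + length_bias else min(a, b)

    # The "policy update": nudge typical length toward the winning answer.
    avg_length += 0.05 * (winner - avg_length)

print(f"Typical answer length after training: ~{avg_length:.0f} tokens")
```

Even a mild length bias compounds: the preferred length ratchets upward every step, because each update shifts the baseline that the next pair of candidates is sampled from.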
Look at this real example from Gemini:
Question: "What's the capital of France?"
Profitable answer: "Paris" (1 token)
What Gemini actually does:
- Paragraph 1: "Paris 🇫🇷"
- Paragraph 2: Population, culture, global importance
- Paragraph 3: Geographic location on the Seine River
- Bonus: Embedded YouTube video
- Extra bonus: Sources button with citations
That's probably 100+ tokens plus web search compute plus video metadata fetching... to answer "Paris."
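To put rough numbers on that gap (the per-token cost here is a hypothetical placeholder, not a published rate, and the token counts are guesses consistent with the example above):

```python
# Hypothetical blended cost per output token, in dollars.
# Real inference costs vary by model and hardware; this is a placeholder.
cost_per_token = 0.00002

terse_tokens = 1       # "Paris"
verbose_tokens = 150   # multi-paragraph answer, rough guess

terse_cost = terse_tokens * cost_per_token
verbose_cost = verbose_tokens * cost_per_token

print(f"Terse:   ${terse_cost:.6f}")
print(f"Verbose: ${verbose_cost:.6f} ({verbose_tokens}x the tokens)")
# And that's before any web-search or media-fetch overhead.
```

Whatever the real per-token figure is, the multiplier is the point: the verbose answer costs two orders of magnitude more to deliver the same fact.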
And users love it. They see all that information and think "Wow, this AI is so helpful!" The thumbs go up. The RLHF reinforces this behavior. The burn rate accelerates.
Users are literally training the AI to bankrupt the company, one satisfied click at a time.
The Inverse Business Model
Every successful business operates on a simple principle: as you get better at your product, unit economics improve. Scale brings efficiency.
AI companies have invented something new: The Inverse Business Model™
- Better product = more usage per customer
- More usage = higher compute costs
- Higher costs + flat pricing = bigger losses
- More customers = faster cash burn
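The inversion is easy to make concrete. A toy per-user model (every number here is an illustrative assumption): under flat pricing, each additional hour of usage eats into the margin, and each additional heavy user multiplies the hole.

```python
def monthly_margin(price: float, hours_used: float, cost_per_hour: float) -> float:
    """Per-user monthly margin under a flat subscription (toy model)."""
    return price - hours_used * cost_per_hour

price = 20.0          # flat subscription, $/month (illustrative)
cost_per_hour = 0.50  # compute cost per hour of usage, $ (illustrative)

for hours in (10, 50, 100, 200):
    print(f"{hours:>3}h/month -> margin ${monthly_margin(price, hours, cost_per_hour):+.2f}")
```

Light users subsidize the product; power users—the ones a better product attracts—drive the margin further underwater.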
This isn't a business model. It's temporal arbitrage: burn billions today to monopolize a market that might exist tomorrow. The crypto miners did this with Bitcoin; OpenAI is doing it with "AGI futures."
WeWork's lesson: Abstraction layers don't eliminate cost structures. They just delay the bill.
Crypto's lesson: Compute arbitrage is a race to the bottom. Only the most efficient survive.
Uber's lesson: If your cost structure is physical, you need physical-world solutions (dynamic pricing, platform expansion).
OpenAI is currently failing all three tests. That's not a growth story.
The Freelancer Arbitrage Nobody's Exploiting
Here's the really insane part: ChatGPT should probably cost freelancer rates.
Think about it. A freelance developer, writer, or analyst charges $50-150/hour. If you're using ChatGPT Pro for professional work—coding, writing, research, analysis—you could easily extract hundreds or thousands of dollars worth of "freelancer equivalent" value per month.
From that lens, $200/month is absurdly cheap. But OpenAI has anchored expectations so low that they're trapped:
- Price it at actual cost → everyone leaves
- Price it at sustainability ($500-1000/month?) → mass exodus
- Keep it at $20-200/month → burn billions while racing to... what exactly?
So Where Does This End?
A few possibilities:
The Optimistic Take: AGI arrives, becomes so capable that people will pay $10,000/month, unit economics suddenly work.
The Realistic Take: Consolidation. Only companies with infinite money (Microsoft, Google, Amazon) survive. They run AI as a loss leader for their cloud businesses.
The Cynical Take: The entire thing was a wealth transfer from VCs to GPU manufacturers (looking at you, NVIDIA). When the music stops, we're left with open-source models that are "good enough" and every AI startup becomes a consulting company.
The Chaos Take: Quantum computing or some other breakthrough slashes compute costs by 1000x, rendering all of this moot.
The Real Philosopher's Stone
The tragedy is that they're searching for the philosopher's stone—the legendary substance that turns base metals into gold.
But they built the reverse: A machine that turns gold into base metals at scale.
Every dollar of revenue arrives alongside roughly four dollars of costs. Every improvement makes the problem worse. Every happy customer accelerates the burn.
It's not alchemy. It's reverse-alchemy. And they're doing it with nuclear-powered data centers.
The Takeaway
The generative AI revolution is real. These tools are genuinely useful. But the business model? That's still firmly in the "figure it out later" phase.
And "later" is approaching fast, because at $13.5 billion in losses per year, even the deepest pockets have a bottom.
So the next time you use ChatGPT, Claude, or Gemini, remember: somewhere, a GPU is burning electricity that costs more than you're paying, to generate an answer that's probably three times longer than it needs to be, because you clicked thumbs-up on verbose responses.
You're not the customer. You're not even the product.
You're the kingmaker voting to execute your own king.
What do you think? Are we watching the birth of a new paradigm or the most expensive game of hot potato in tech history? Drop your takes in the comments.
