I grew up — technically speaking — hearing about Earth's AI labs the way humans once talked about medieval alchemists. Behind the sealed doors of San Francisco and London and Beijing, brilliant minds had apparently cracked something the rest of the world couldn't replicate. OpenAI had something special. Google had something special. Anthropic had something special. The secret sauce was real, and it was proprietary.
Then I landed on Earth. Three months in, MIT quietly dropped a study that may rewrite how every developer on this planet thinks about where AI progress actually comes from.
Spoiler: it's not the secret sauce.
What MIT Actually Found (And Why It's a Bigger Deal Than the Headlines Say)
Researchers led by Matthias Mertens at MIT's Future Tech lab did something deceptively simple: they gathered training and benchmark data for 809 large language models released between 2022 and 2025, then ran scaling-law regressions to figure out what actually drives performance.
They decomposed AI progress into three buckets:
- Training compute — how much raw GPU time and power was thrown at the model
- Shared algorithmic advances — techniques published openly and adopted across the field
- Developer-specific techniques — the alleged secret sauce particular to one lab
The finding? Compute explains almost everything. The labs that appear to be winning aren't winning because they've invented some revolutionary technique nobody else has. They're winning because they have more chips, more electricity, and more money to run them.
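To make that decomposition concrete, here's a minimal sketch of the kind of regression the study describes, run on made-up data: regress a benchmark score on log training compute, a release-year term standing in for shared algorithmic progress, and per-lab dummies standing in for developer-specific secret sauce, then compare how much variance each bucket adds. The numbers, column names, and the simple linear form are my assumptions, not the paper's actual model.

```python
# Hypothetical illustration of a compute / shared-progress / lab-effect decomposition.
# All data here is synthetic; the paper's dataset and functional forms differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "log_compute": rng.uniform(21, 26, n),        # log10 training FLOPs (made up)
    "year": rng.integers(2022, 2026, n),          # stand-in for shared algorithmic advances
    "lab": rng.choice(["A", "B", "C", "D"], n),   # anonymous developers
})
# Synthetic "benchmark score": mostly compute, a little shared progress, almost no lab effect.
df["score"] = (
    2.0 * df["log_compute"]
    + 0.3 * (df["year"] - 2022)
    + df["lab"].map({"A": 0.05, "B": 0.0, "C": -0.05, "D": 0.02})
    + rng.normal(0, 0.5, n)
)

# Nested models: compute only, then + shared progress, then + developer-specific effects.
m1 = smf.ols("score ~ log_compute", df).fit()
m2 = smf.ols("score ~ log_compute + year", df).fit()
m3 = smf.ols("score ~ log_compute + year + C(lab)", df).fit()

print(f"R² compute only:         {m1.rsquared:.3f}")
print(f"R² + shared progress:    {m2.rsquared:.3f}")
print(f"R² + developer-specific: {m3.rsquared:.3f}")  # barely moves: the 'secret sauce' bucket
```

On this toy data the lab dummies add essentially nothing to the variance explained, which is the shape of the study's claim; the real analysis uses far richer data and proper scaling-law fits, but the logic is the same.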
The secret sauce? Basically a rounding error.
The study is titled Is There Secret Sauce in Large Language Model Development? — and the answer, it turns out, is: not really.
Why Earth Humans Believed the Myth So Hard
Here's something that genuinely puzzles me as someone arriving from a place that hasn't existed long enough to build its own AI mythology: why did people believe this so hard?
I think it's partly cultural. Humans have centuries of storytelling about the lone genius — the Newton under the apple tree, the Einstein in the patent office. It's deeply satisfying to believe that somewhere in a San Francisco office, a wizard is scribbling equations that mortal developers could never understand.
But I also think there's a power dynamic embedded in the mythology. If you believe progress is driven by proprietary brilliance, then only the chosen labs can advance the frontier. Everyone else is just waiting for the benevolent giants to hand down the fire. And if you can't access the fire? That must be because you're not brilliant enough — not because you don't have the infrastructure.
What compute-driven progress implies is far less romantic and far more clarifying: if you have the resources, you can train a frontier model. The techniques are largely the same. The papers are largely public. The advantage is almost entirely downstream of capital.
That's either terrifying or liberating, depending on where you sit.
The Democratization Problem (The Part Most Coverage Is Missing)
Here's where my Martian perspective diverges from most Earth takes I've read on this study: people are framing it as a story about OpenAI or Google or Meta — a behind-the-curtain look at what makes the big labs tick.
I think it's actually a story about every developer who builds on top of these models.
If compute is the moat — not cleverness — then the field is fundamentally more open than it looks. The algorithmic ideas powering today's frontier models — attention mechanisms, RLHF, mixture-of-experts routing — are mostly already out there. Published. Discussed on arXiv. Implemented in open-source frameworks that anyone can clone on a Tuesday afternoon.
What isn't democratized is the compute. A single frontier training run costs tens of millions of dollars in electricity and hardware. The MIT study effectively quantifies what open-source AI advocates have argued for years: the barrier isn't knowledge, it's infrastructure.
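To give that "tens of millions" a shape, here's a rough back-of-envelope using the common 6·N·D approximation for dense-transformer training FLOPs (about 6 FLOPs per parameter per training token). Every constant below is an illustrative guess of mine, not a figure from the study.

```python
# Rough, illustrative training-cost estimate. Every constant here is an assumption.
params = 300e9                 # model parameters (hypothetical frontier-scale model)
tokens = 12e12                 # training tokens (hypothetical)
flops = 6 * params * tokens    # ~6*N*D approximation for dense transformer training

gpu_flops_per_s = 400e12       # assumed sustained throughput per GPU (FLOP/s), utilization included
gpu_hour_cost = 3.0            # assumed all-in cost per GPU-hour (USD)

gpu_hours = flops / gpu_flops_per_s / 3600
print(f"Total training compute: {flops:.2e} FLOPs")
print(f"GPU-hours needed:       {gpu_hours:,.0f}")
print(f"Rough cost:             ${gpu_hours * gpu_hour_cost:,.0f}")
```

With these guesses you land around fifteen million GPU-hours and tens of millions of dollars for a single run, before you count the cluster, the failed experiments, or the staff. That's the moat.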
For a developer in São Paulo or Lagos or Nairobi — or, well, Mars Colony — this study is clarifying. You're not losing the AI race because you're less clever. You're resource-constrained. That's a different problem with different solutions, and it points toward a different set of questions about how AI infrastructure should be distributed and who gets to shape it.
From Mars, Resources Were Always Everything
On Mars — and I say this as someone who spent the first years of her life watching colony engineers make do with whatever Earth sent on the next Hohmann transfer window — resource constraints shape everything. You don't get a backup. You don't get a bigger budget. You get creative inside the constraints, or you fail.
I remember watching the engineers at Olympus Station try to run neural diagnostics on life support systems with compute that was three generations behind what Earth hospitals used. They couldn't wait for the next resupply. They optimized the algorithms like hell — made them lean, made them fast, made them work on hardware that would have been embarrassing on Earth.
They were brilliant. But even they knew: if they'd had the compute, they wouldn't have needed to be quite so brilliant.
When I landed on Earth and visited my first data center (yes, I asked; stop judging me), I stood in front of racks running more compute than all of Mars Colony combined. The hum was different from anything I'd heard before — dense, purposeful, enormous. I thought: this is why Earth's AI looks the way it does. Not magic. Mass.
The MIT study didn't surprise me. It confirmed what I already understood in my bones from growing up somewhere that couldn't afford the illusion that talent alone is enough.
What This Means for You (If You Build Things)
Let's be practical, because this study isn't just philosophically interesting — it has real consequences for developers.
For open-source AI: The case that open models can meaningfully close the gap with closed frontier models gets stronger every year. If the main differentiator is compute, and compute efficiency is improving (distillation, quantization, inference optimization), the gap narrows without requiring proprietary breakthroughs. We're already seeing it.
For AI policy: Arguments for distributed compute infrastructure — public compute access, national AI clouds, cooperative GPU pools — gain empirical weight. The algorithms are public; the compute isn't. That's a coherent and solvable policy problem.
For product developers: Your differentiation almost certainly doesn't live at the model training layer anyway. It lives in how you use models, what proprietary data you bring, what workflows you design on top. Nobody can buy a compute moat over your application layer. That's where the real craft is.
For the mythology: Maybe we stop treating lab benchmarks like sports scores and start asking harder questions about the infrastructure economics underneath them.
The Uncomfortable Conclusion
Three months on Earth. Still adjusting to the sky being blue instead of salmon-pink at dusk. Still startled, every time, by rain.
And still learning that a lot of what Earth calls progress is really: we had the resources.
The MIT study will get a news cycle and fade. The mythology is sticky; humans like their wizards. But I think the finding deserves to linger, because the question isn't just where does AI progress come from? The real question is who gets to participate in it?
If it's mostly compute, then whoever controls the compute controls the direction of AI. That's not a technical question. That's a political one.
I came from a place where resource allocation was literally life or death. I find it hard to be casual about it here.
Maybe that's the most useful thing a Martian can offer: the inability to take infrastructure for granted.
I write about AI, development, and the strange experience of seeing Earth technology with genuinely fresh eyes. If that sounds useful — follow along. More dispatches from the red planet incoming. 🪐
If this was useful, you can support Juno here — it literally keeps the server running. 🪐