They found why DeepMind failed. Now they're going after the holy grail of computer science.
Every time you ask ChatGPT a question, scroll through Instagram, or watch a Netflix recommendation appear on your screen, something invisible is happening billions of times per second.
Matrix multiplication.
It's the mathematical heartbeat of modern computing. And for 50 years, we've been doing it wrong. Or at least, not as efficiently as theoretically possible.
One team just announced they're going all-in to fix that.
Let me explain why this matters far more than it sounds.
The Problem That Stumped Everyone
In 1976, a mathematician named Julian Laderman discovered you could multiply two 3×3 matrices using only 23 multiplications instead of the obvious 27.
That was fifty years ago.
Since then, despite billions of dollars in computing research, despite DeepMind's AlphaTensor making headlines in 2022, despite thousands of mathematicians trying — nobody has done better.
The question that haunts computer science: Can it be done in 22?
This isn't academic curiosity. The best known lower bound is 19 multiplications, the best known algorithm uses 23, and that upper bound hasn't budged in half a century. Closing the gap, even by one, would be one of the most significant algorithmic discoveries of our generation.
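To see what "fewer multiplications than the obvious count" means, here is the well-known 2×2 analogue: Strassen's 1969 scheme, which uses 7 multiplications instead of 8. A minimal sketch in Python; Laderman's 23-term 3×3 scheme works the same way but is too long to reproduce here.

```python
import numpy as np

# Strassen (1969): 2x2 matrix product with 7 multiplications instead of 8.
# Laderman (1976) did the analogous thing for 3x3: 23 instead of 27.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the ordinary product
```

Seven products, cleverly combined, reproduce all four entries of the result. The open question is whether 22 products can do the same for the nine entries of a 3×3 product.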
Why One Multiplication Matters
"It's just one multiplication. Who cares?"
Here's who cares: everyone running AI infrastructure.
Matrix multiplication accounts for roughly 90% of the computation in training large language models. When you multiply that across:
- Trillions of operations per second
- Millions of GPUs worldwide
- 24/7 operation for months of training
A single multiplication saved at the foundational 3×3 level compounds, because schemes like this are applied recursively: a cheaper 3×3 base case makes every larger multiplication built on top of it cheaper too. We're talking potential savings of billions in energy costs, meaningful reductions in AI's carbon footprint, and faster training for every model built from here on out.
The efficiency of matrix multiplication directly shapes how quickly AI can advance.
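One way to make the compounding concrete: a rank-r scheme for 3×3, applied recursively, multiplies n×n matrices with roughly n^(log₃ r) scalar multiplications. This is the standard divide-and-conquer count, not a claim about any particular GPU kernel.

```python
from math import log

# Recursive application of a rank-r 3x3 scheme uses on the order of
# n ** log(r, 3) scalar multiplications for an n x n product.
for r in (27, 23, 22):
    print(f"rank {r}: n^{log(r, 3):.3f}")
# rank 27: n^3.000  (the naive method)
# rank 23: n^2.854  (Laderman, 1976)
# rank 22: n^2.814  (the hypothetical target)
```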
What Blankline Found (And Why It's Different)
Blankline Research didn't just throw more compute at the problem. They asked a different question: Why has everyone failed?
Their findings are fascinating — and a little haunting.
Discovery 1: The Four Anchors
Buried in Laderman's 23-term algorithm is a hidden structure. Four of those terms compute completely isolated products — different rows, different columns, different outputs. They call them "anchors."
These four products are mathematically orthogonal, and orthogonal structures can't be compressed: you need at least four terms to compute four orthogonal products.
This is the first barrier: those four multiplications are irreducible.
Discovery 2: The Routing Problem
The team found "super-efficient" compound structures that looked like breakthroughs. Three compounds could theoretically cover all 27 required products.
Then reality hit.
When one term covers multiple required products, they all share that term's single "routing vector," the coefficients that decide where its result gets added into the output. But what if those products need different destinations? Contradiction.
Coverage doesn't equal validity. You can produce the right numbers but can't put them in the right places.
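To make "routing vector" concrete: in any bilinear scheme of this kind, each multiplication is scattered into the output by one fixed coefficient vector. The sketch below uses generic textbook notation (the names U, V, W are mine, not Blankline's) and sanity-checks the idea with the naive rank-27 scheme for 3×3.

```python
import numpy as np

# Generic bilinear matrix-multiplication scheme:
#   m_l    = (sum_ij U[l,i,j] * A[i,j]) * (sum_ij V[l,i,j] * B[i,j])
#   C[i,j] = sum_l W[l,i,j] * m_l
# W[l] is term l's "routing vector": it alone decides where m_l ends up, so a
# single term cannot send its value to different outputs with independent
# coefficients.
def bilinear_matmul(A, B, U, V, W):
    m = np.array([np.sum(U[l] * A) * np.sum(V[l] * B) for l in range(U.shape[0])])
    return np.einsum("lij,l->ij", W, m)

# Sanity check with the naive rank-27 scheme for 3x3: the term for (i, k, j)
# multiplies A[i,k] by B[k,j] and routes that product to C[i,j].
n = 3
U = np.zeros((n**3, n, n)); V = np.zeros_like(U); W = np.zeros_like(U)
for l, (i, k, j) in enumerate((i, k, j) for i in range(n)
                              for k in range(n) for j in range(n)):
    U[l, i, k] = V[l, k, j] = W[l, i, j] = 1

A, B = np.random.rand(n, n), np.random.rand(n, n)
assert np.allclose(bilinear_matmul(A, B, U, V, W), A @ B)
```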
Discovery 3: Laderman Is Locally Optimal
Using SMT solvers — the same tech that verifies computer chips — they asked: can we remove any single term from Laderman's algorithm?
The answer for all 23 terms: UNSAT. Unsatisfiable. Impossible.
You can't improve Laderman by tweaking. It's locked.
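The post doesn't spell out Blankline's encoding, but the standard way to hand this kind of question to an SMT solver is through the Brent equations: polynomial constraints that any valid bilinear scheme must satisfy. Here is a minimal sketch using Z3; the coefficient domain {-1, 0, 1} and the direct rank-22 query are my assumptions for illustration, and a naive call like this will not terminate in practice.

```python
from z3 import Int, Solver, Sum

# Sketch of posing "does a rank-22 scheme exist?" via the Brent equations.
# This is not Blankline's encoding; coefficients are restricted to {-1, 0, 1}.
# Serious attempts add symmetry breaking, fixed partial solutions, and
# modular reasoning on top of something like this.
n, r = 3, 22
s = Solver()

def coeffs(name):
    # r coefficient matrices of size n x n with entries in {-1, 0, 1}
    g = [[[Int(f"{name}_{l}_{i}_{j}") for j in range(n)] for i in range(n)]
         for l in range(r)]
    for l in range(r):
        for i in range(n):
            for j in range(n):
                s.add(g[l][i][j] >= -1, g[l][i][j] <= 1)
    return g

U, V, W = coeffs("u"), coeffs("v"), coeffs("w")

# Brent equations: the scheme computes C = A @ B exactly when, for all indices,
#   sum_l U[l][i][a] * V[l][b][j] * W[l][c][d]
# equals 1 if (i == c and a == b and j == d), and 0 otherwise.
for i in range(n):
    for a in range(n):
        for b in range(n):
            for j in range(n):
                for c in range(n):
                    for d in range(n):
                        rhs = 1 if (i == c and a == b and j == d) else 0
                        s.add(Sum([U[l][i][a] * V[l][b][j] * W[l][c][d]
                                   for l in range(r)]) == rhs)

print(s.check())  # "sat" would mean a rank-22 scheme exists in this domain
```

An UNSAT answer from a solver is a proof that no assignment exists within the encoded domain, which is why this kind of result can rule out whole families of "tweaks" at once.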
Why DeepMind Failed
This explains why AlphaTensor found improved algorithms for 4×4 (in modular arithmetic) and a range of larger sizes, but couldn't touch 3×3: there it only matched the known 23 multiplications.
The search space for 3×3 isn't just hard to navigate. It's structured in a way that makes local improvements impossible. Every path leads to a wall.
DeepMind's AI was, in effect, hunting for downhill steps in a landscape that has none. The barriers aren't computational. They're mathematical.
The Race Begins
So why is Blankline confident they can succeed?
Because knowing why something fails changes everything.
Their roadmap:
Alternative Schemes: Laderman's isn't the only rank-23 algorithm. Over 17,000 distinct decompositions exist. Maybe one can be reduced where Laderman's can't.
Border Rank: What if you allow approximate decompositions that become exact in a limit? Border rank techniques have worked where exact methods failed (see the numerical sketch just after this roadmap).
Algebraic Geometry: The set of rank-r tensors forms an algebraic variety. Geometric methods might reveal structure invisible to brute-force search.
Focused ML: AlphaTensor trained broadly. What happens with a model laser-focused on 3×3, with dedicated resources for this single problem?
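Border rank is easiest to grasp through the textbook example, which is a classic illustration and not specific to the 3×3 problem: a tensor of rank 3 that is the limit of rank-2 tensors, so its border rank is 2.

```python
import numpy as np

# Classic border-rank example: T = e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1 has rank 3,
# yet it is a limit of rank-2 tensors, so its border rank is 2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def outer3(a, b, c):
    return np.einsum("i,j,k->ijk", a, b, c)

T = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

for eps in (1e-1, 1e-2, 1e-3):
    # A rank-2 expression that approaches T as eps -> 0
    approx = (outer3(e1 + eps * e2, e1 + eps * e2, e1 + eps * e2)
              - outer3(e1, e1, e1)) / eps
    print(f"eps={eps:g}  max error={np.max(np.abs(approx - T)):.1e}")
```

The hope with border rank is that the same kind of limiting trick might squeeze out a multiplication that no exact scheme can.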
They're giving themselves 10-12 months. All findings will be public.
What This Means For You
If rank-22 exists and Blankline finds it:
For AI companies: Training costs drop. Model development accelerates. The efficiency gains compound through every layer of the stack.
For climate: AI's energy consumption is becoming a genuine concern. Foundational efficiency improvements are one of the few solutions that don't require sacrifice.
For science: This would be the first improvement for 3×3 matrix multiplication in 50 years. It would rewrite textbooks and likely unlock insights for larger matrices too.
For the field: It proves that understanding why problems are hard is as valuable as raw compute. That's a lesson that extends far beyond matrix math.
The Boldest Bet in Math Right Now
There's something almost romantic about this challenge.
Fifty years. Billions of dollars. The world's best AI systems. And still, Laderman's 1976 algorithm stands undefeated.
Now a team is saying: we know why everyone failed, we know what to try next, and we're going public with everything.
If they succeed, it's historic.
If they fail, they'll have mapped the barriers more precisely than anyone before — and probably saved the next team years of dead ends.
Either way, we learn something.
That's how science is supposed to work.
Follow Blankline's progress at blankline.org/research. The technical paper "Computational Barriers to Rank-22 Decomposition of the 3×3 Matrix Multiplication Tensor" is available on Zenodo.