jidonglab

Posted on • Originally published at spoonai.me

From $7.5B to $18B in 4 Months: The AI Infrastructure Gold Rush Nobody Saw Coming

AI models aren't the bottleneck anymore. Data centers are.

Fluidstack — an Oxford spinout that builds specialized AI data centers — just jumped from $7.5B to $18B in four months. That's a 2.4x valuation leap while most startups struggle to maintain flat rounds.

Here's why this matters more than any new model release.

A quant trading firm is leading — not a VC

Jane Street (one of the world's largest quant trading houses) and Situational Awareness (the fund run by Leopold Aschenbrenner, the former OpenAI researcher known for his essay on AGI timelines) are co-leading a $1B round. Morgan Stanley is advising.

When a trading firm, rather than a traditional VC, puts $1B into AI infra, it tells you they see predictable returns, not a speculative bet.

The $50B contract behind the valuation

The catalyst: a $50 billion custom data center construction deal with Anthropic.

Let that number sink in. $50B for a single infrastructure contract. Anthropic closed a $30B Series G in February and is approaching $30B ARR. A massive chunk of that goes to compute — and Fluidstack is the build partner.

Fluidstack moved its HQ from London to New York, pulled out of a large French DC project, and went all-in on US compute demand. That bet is paying off.

The numbers tell the story

| Metric | Value |
| --- | --- |
| Current valuation (in talks) | $18B |
| Previous round (Dec 2025) | $7.5B |
| Round size | $1B |
| Anthropic DC contract | $50B |
| Global AI DC power draw | 29.6 GW |

That 29.6 GW figure comes from the Stanford AI Index 2026. It equals New York State's entire peak power demand. And it's still not enough.

The infrastructure war scoreboard

  • CoreWeave: $35B+ in AI cloud contracts (Meta, Anthropic)
  • Lambda Labs: $10B+ valuation
  • Crusoe Energy: $5B+, green AI infrastructure
  • Fluidstack: $18B (negotiating)

Q1 2026 saw $300B in venture investment globally. 80% ($242B) went to AI — infrastructure captured the largest share.

What this means for you

Two practical takeaways:

GPU cloud prices aren't dropping soon. Demand overwhelms supply at every level. If you're budgeting for inference costs, plan conservatively.

Local inference keeps getting more attractive. Intel's Arc Pro B70 ($949, 32GB) can run Qwen 3.5 27B at 4-bit quantization locally. As cloud costs stay high, the economics of local inference keep improving.
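To see why a $949 card can compete with cloud rental, here is a back-of-the-envelope sketch. The 20% memory overhead factor and the $0.50/hr cloud GPU rate are assumptions for illustration, not quoted figures from any provider:

```python
# Back-of-the-envelope: does a 27B model fit in 32 GB at 4-bit,
# and when does a $949 local GPU break even against cloud rental?
# The cloud rate and overhead factor below are assumptions, not quotes.

def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Rough VRAM need: weights at the given quantization, plus an
    assumed ~20% for KV cache, activations, and runtime overhead."""
    weight_gb = params_billion * bits_per_weight / 8  # GB for weights alone
    return weight_gb * overhead

def breakeven_hours(gpu_price_usd: float, cloud_rate_usd_per_hr: float) -> float:
    """Hours of cloud rental that equal the local card's sticker price
    (ignores electricity and resale value)."""
    return gpu_price_usd / cloud_rate_usd_per_hr

vram = model_vram_gb(27, 4)          # 27B params at 4-bit quantization
hours = breakeven_hours(949, 0.50)   # $0.50/hr is a hypothetical rate

print(f"Estimated VRAM for 27B @ 4-bit: {vram:.1f} GB (fits in 32 GB)")
print(f"Break-even vs cloud at $0.50/hr: {hours:.0f} hours")
```

At roughly 16 GB, the quantized model fits comfortably in 32 GB of VRAM, and under these assumed rates the card pays for itself after a couple thousand hours of use.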

The AI gold rush isn't about who builds the smartest model. It's about who can run it.


Full analysis on spoonai.me | Daily AI briefing — subscribe at spoonai.me
