DEV Community

Varun Pratap Bhardwaj

Originally published at qualixar.com

Three Months Ago Elon Musk Called Anthropic Evil. Last Tuesday He Became Their Landlord.

In February 2026, Elon Musk publicly called Anthropic "doomed to become the opposite of its name." A few weeks later, he asked his 200 million followers if there was "a more hypocritical company than Anthropic." The receipts are still online. You can scroll back and read them.

On May 6, 2026, that same Anthropic signed a deal to use the entire compute capacity of SpaceX's Colossus 1 data center in Memphis. 300 megawatts. 220,000 NVIDIA GPUs unlocked within the month. Three to four billion dollars per year flowing into SpaceX's books just before its IPO roadshow opens in June. Two and a half billion dollars of that lands as cash profit.

Asked about Anthropic this week, Musk had a slightly different read. "I spent time with senior members of the team and was impressed. Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector."

Apparently four billion dollars a year has excellent vision correction.

This piece is not about Musk. It is about what the deal actually proves, which most of the coverage missed.

The headline everybody saw

The story landed in the news cycle as drama. The hot-take economy picked it up the way it picks up everything: hypocrisy plus money plus a famous name. Anthropic's safety people taking SpaceX money. Musk swallowing his own words for a quarterly revenue print. The IPO whisper number jumping somewhere north of one and three-quarter trillion dollars on the strength of a marquee AI tenant.

All of that is real. None of it is the actual story.

The actual story is that the second-most-respected AI safety lab in the world signed a multi-billion-dollar dependency agreement with the company run by the person who has spent the last year publicly demanding their dissolution. They did this not because they suddenly trust him. They did it because they had no other option that kept Claude responsive on a Tuesday afternoon.

That is a sentence worth re-reading.

Compute is the moat. Everything else is theater.

For three years the AI conversation has been organized around model performance. Whose benchmark is higher. Whose context window is longer. Whose RLHF is cleaner. The companies in the conversation acted as if the differentiator was the work happening inside their buildings.

The Colossus deal is the public confession that the differentiator is the buildings.

Look at Anthropic's compute portfolio in 2026, all signed in the last twelve months:

  • Up to 5 GW with Amazon
  • 5 GW with Google plus Broadcom
  • $30 billion of Azure capacity through Microsoft and NVIDIA
  • $50 billion in American AI infrastructure with Fluidstack
  • And now 300 MW through SpaceX

That is not a customer base. That is a survival pattern. Every one of those deals exists because Claude usage outran its substrate. The doubled rate limits announced alongside the SpaceX news are the user-facing tell — there was a ceiling, and it was hit, and it had to be punched through with whatever GPU pipe could be turned on fastest.

In that environment, the question of whether your supplier publicly hates you is a luxury concern. Anthropic is not signing with Musk because his evil detector recalibrated. They are signing with him because he has 220,000 GPUs that can be online inside four weeks, and nobody else has them on offer at that latency.

This is the actual lesson of the deal: when compute is the constraint, every enemy is a vendor and every principle has a price expressed in megawatts.

The IPO is the punchline most people missed

SpaceX filed its confidential S-1 on April 1, 2026. The roadshow starts in June. The target valuation lands somewhere between $1.75 trillion and $2 trillion. Musk has just dissolved xAI into SpaceX, creating "SpaceXAI": a space company that is now also an AI cloud business.

A space company without a major AI customer in 2026 is selling a story. A space company with Anthropic as a tenant on launch day is selling cash flow.

The "evil" tweets aged poorly because they were never about Anthropic. They were posture during a period when Musk had no AI infrastructure revenue to defend. The moment SpaceX needed an AI cloud comp for its prospectus, the posture became inconvenient. So it ended.

Founders do this. CEOs do this. The mistake is treating their public positions as fixed beliefs instead of as moves in the game they are currently playing. The evil detector has always been business-cycle dependent.

What this means for anyone building on top of these companies

If you are running a startup that depends on Claude, GPT, or any frontier model, the SpaceX deal should change exactly one thing in your architecture review.

Your model provider is not in control of their substrate.

Your model provider is one capacity crunch away from signing with whoever has GPUs to lend, including parties who were calling them evil last quarter. That is not a moral failure on the model provider's part. It is the structural reality of running an AI business at scale in 2026. Compute is rationed; rate limits are the visible edge of that rationing; and your roadmap is downstream of someone else's data center decisions.

The reliability question that matters is not "does Claude pass our evals." It is "what happens to our product when the substrate underneath Claude shifts under conditions our vendor cannot control?"

Concrete things this affects:

  • Latency floors are not fixed. They move with whichever data center is currently active.
  • Rate limits are not policy. They are physics. They will tighten without notice when capacity reshuffles.
  • Provider availability is correlated, not independent. Three vendors sharing one substrate go down as one.
  • Pricing is not market-driven in the short term. It is rationing-driven.
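The correlation point in that list is the one that bites silently. A minimal sketch of what substrate-aware fallback could look like, with entirely hypothetical provider and data-center names (nothing here reflects real routing or real topology):

```python
# Hypothetical provider/substrate mapping -- illustrative only.
PROVIDERS = [
    {"name": "primary-model", "substrate": "colossus-memphis"},
    {"name": "backup-model-a", "substrate": "colossus-memphis"},  # same data center
    {"name": "backup-model-b", "substrate": "azure-east"},        # independent substrate
]

ATTEMPTS = []  # records which providers we actually tried


def call_provider(provider, prompt):
    """Stand-in for a real API call; here every call fails so the
    fallback path is visible end to end."""
    ATTEMPTS.append(provider["name"])
    raise RuntimeError(f"{provider['name']} unavailable")


def call_with_substrate_fallback(prompt):
    """Try providers in order, but never retry into a substrate that
    just failed: correlated capacity means correlated outages."""
    failed_substrates = set()
    for p in PROVIDERS:
        if p["substrate"] in failed_substrates:
            continue  # a sibling of a dead data center is not a fallback
        try:
            return call_provider(p, prompt)
        except RuntimeError:
            failed_substrates.add(p["substrate"])
    raise RuntimeError("all independent substrates exhausted")
```

Notice what the sketch skips: when the first provider fails, it does not bother with the second one at all, because they share a building. A fallback list that ignores substrate topology is two retries against the same outage.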

This is what we mean at Qualixar when we say AI Reliability Engineering. Not testing the model. Testing your dependence shape on the model. Most teams I talk to have not separated those two questions yet.

The honest version of the deal

If you stripped the politics off and wrote a one-line description of what happened on May 6, it would read:

The fastest-growing AI lab in the world signed a four-billion-dollar-a-year contract with the only available counterparty that could close the gap between user demand and GPU supply, regardless of prior public position.

That is a reasonable business decision. It is also a clearer description of where the AI industry actually is in 2026 than ninety percent of the coverage produced this week.

We are not in a model race. We are in a substrate race. The model labs are tenants. The substrate owners are landlords. And as of last Tuesday, one of those landlords is a man who, ninety days ago, was publicly arguing that the tenant should not exist.

He gets paid either way.

What changes this week

Nothing changes for end users. Claude Code rate limits doubled. Opus API ceilings raised. Pro and Max accounts stop getting throttled at peak. From the outside it looks like a quiet upgrade.

What changed is the part you cannot see from the outside: the dependency graph of the company you are trusting with your reliability-critical AI workloads now includes a vendor whose CEO was, in writing, publicly calling them an existential threat to humanity's interests last quarter. That vendor is taking $4 billion a year from them. That vendor is also about to be a public company whose stock price you will be able to watch reflect this revenue.

If that does not change how you architect your fallback strategy, your fallback strategy was theater.

The path to AI Reliability Engineering does not start with eval suites. It starts with honest accounting of what your AI stack is actually built on, and what happens when the people three layers down from your prompt change their minds. As they will. As they just did.


Varun Pratap Bhardwaj builds Qualixar — the AI Reliability Engineering category, anchored by SuperLocalMemory, AgentAssert, AgentAssay, SkillFortify, and Qualixar OS. 7 published papers. 15 years enterprise IT. Independent of Accenture.

Find him on X: @varunPbhardwaj · YouTube: @myhonestdiary · varunpratap.com

#AIReliabilityEngineering
