On May 6, Claude Code's five-hour rate limits doubled. The peak-hour throttling that had been frustrating paid users for months disappeared. Most people noticed the change and moved on without looking too closely at what caused it.
The answer is strange enough that I think it is worth looking at closely. Anthropic rented the entire Colossus 1 supercomputer cluster in Memphis, Tennessee from SpaceX. That is 220,000 NVIDIA GPUs and 300 megawatts of power capacity, coming online within a month of the announcement. The reason it is strange: three months before signing this deal, Elon Musk had posted on X that Anthropic's AI was "misanthropic and evil" and told the company it was "doomed."
Let me walk through what actually happened, what it means practically, and what I think it signals about where we are in the AI compute story.
What Colossus 1 Actually Is
Most people have heard the name but do not have a clear picture of the scale. Colossus 1 is the original AI supercomputer cluster that xAI (Musk's AI company) built in Memphis starting in 2024. It went operational in July of that year, remarkably fast for infrastructure of that size.
The hardware breakdown: the cluster runs a mix of NVIDIA H100s, H200s, and GB200s. 220,000 GPUs total. The 300 megawatt power draw is equivalent to the entire electricity load of roughly 300,000 average American homes. When it launched, it was described as the largest AI training facility in the world by a significant margin.
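The homes comparison is easy to sanity-check. A back-of-the-envelope sketch, assuming the EIA's commonly cited figure of roughly 10,700 kWh per year for an average US household (both numbers are rough; the often-quoted "one home is about 1 kW" rule of thumb gives the 300,000 figure directly):

```python
# Back-of-the-envelope check: how many average US homes does 300 MW cover?
# Assumes ~10,700 kWh/year average US household consumption (EIA estimate);
# treat both inputs as rough.

CLUSTER_MW = 300
KWH_PER_HOME_PER_YEAR = 10_700
HOURS_PER_YEAR = 8_760

avg_home_kw = KWH_PER_HOME_PER_YEAR / HOURS_PER_YEAR  # ~1.22 kW sustained draw
homes = (CLUSTER_MW * 1_000) / avg_home_kw            # MW -> kW, then divide

print(f"{homes:,.0f} homes")  # lands in the 250,000-300,000 range
```

The answer depends on whether you compare against sustained average draw or the cruder 1 kW rule of thumb, which is why published comparisons cluster around "roughly 300,000" rather than an exact count.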
Here is what changed and why the deal was possible. Since then, xAI (now merged into SpaceX after a $1.25 trillion all-stock deal in February 2026) built Colossus 2, an even larger cluster with around 520,000 GB200s targeting one gigawatt of power capacity. When Grok's training workloads migrated to the newer, faster hardware, Colossus 1 became a 300-megawatt facility generating very little revenue. The deal with Anthropic solves that problem.
Anthropic gets the compute immediately. SpaceX gets rental income ahead of its planned June 2026 IPO. That is the straightforward business logic.
Why Anthropic Needed This
Dario Amodei was on stage at Anthropic's developer conference the same day the deal was announced. He said something that landed harder than most conference quotes: the company had projected 10x growth in Q1 2026. The actual number was 80x, annualized. He called it "just crazy" and "too hard to handle."
Claude Code specifically drove a lot of that. The adoption curve for AI coding tools has been steep across the industry, and Claude Code became the default choice for a large chunk of that market. The infrastructure was not built for 80x growth. That is what was behind the rate limit caps and the peak-hour throttling that paying users had been hitting for months. It was a capacity problem, not a policy problem.
Anthropic is not short on future compute commitments. The company has deals with Amazon (up to $25 billion invested, roughly 5 gigawatts of Trainium capacity coming over the next few years), Google (up to $40 billion invested, 5 gigawatts via Broadcom), and several other infrastructure partners. The total compute reserved across all of those deals is measured in gigawatts.
The problem those deals do not solve is now. AWS Trainium rollouts and Google TPU clusters are measured in years, not weeks. Colossus 1 is available within a month of the announcement. For a company that just discovered its demand is 8x higher than forecast, "available in weeks" is worth a lot even at a smaller scale than the future partnerships will deliver.
The current deal also appears to be focused on inference rather than training. Anthropic trains Claude on AWS Trainium and Google TPUs. Colossus 1's hardware mix, particularly the H100 and H200 GPU density, is better suited for the inference workloads that serve Claude Pro, Claude Max, and the API. The immediate user-facing impact, the doubled rate limits and removed peak throttling, is consistent with that.
The Musk Reversal
This is the part of the story that every tech journalist covered, and for good reason. The timeline is genuinely unusual.
In February 2026, hours after Anthropic announced a $30 billion funding round, Musk posted directly at the @AnthropicAI account: "Your AI hates Whites & Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil. Fix it." In other posts around the same period he called Anthropic "Misanthropic," said it "hates Western civilization," and declared that "Winning was never in the set of possible outcomes for Anthropic."
He also had a specific grievance: Anthropic had cut off xAI's access to Claude through Cursor, citing commercial terms that prohibit using the API to build competing AI products. (Anthropic did the same to OpenAI in August 2025.) xAI cofounder Tony Wu confirmed the impact internally: "We will take a hit on productivity, but it really forces us to develop our own coding products and models."
Three months later they signed a deal together.
Musk's explanation, posted the day after the announcement: "I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector. So long as they engage in critical self-examination, Claude will probably be good."
There is one unusual clause buried in the deal: SpaceX reserves the right to reclaim the compute if Anthropic's AI "engages in actions that harm humanity." Whether that is meaningful contractual language or a rhetorical add-on is hard to say from the outside, but it is the kind of condition that reflects how personally Musk was taking the criticism before the handshake.
My read on the reversal is simpler than the drama makes it seem. Colossus 1 was sitting underutilized. Anthropic needed compute fast and had budget to pay for it. Both sides had a clear financial reason to set the insults aside. The "evil detector" framing is Musk, but the underlying transaction is just two companies with complementary short-term needs.
What Actually Changed for Claude Users
The practical changes are real and immediate.
For Claude Code specifically: five-hour rate limits doubled for Pro, Max, Team, and Enterprise plans. The peak-hour throttling that kicked in during high-demand periods is gone for Pro and Max accounts. If you have been hitting rate limit errors in the late afternoon US time, that should largely stop.
For API users on Opus models: Anthropic described the limits as "considerably raised" without publishing exact numbers. The framing in the announcement focused on the ability to "process significantly more input and output tokens per minute."
The rate limit doubling matters more than it might sound if you are actively building with Claude Code. The five-hour window was a real constraint on complex, multi-step agentic tasks. Longer context windows, more tool calls, and deeper refactors all burn through limits faster. Doubling the window is a meaningful change for anyone doing serious work rather than quick edits.
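Even with doubled limits, production code should still treat rate-limit errors as expected rather than exceptional. A minimal exponential-backoff sketch; `RateLimitError` here is a stand-in for whatever 429 exception your HTTP client or SDK raises, and the retry counts and delays are arbitrary defaults, not recommendations:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error your HTTP client or SDK raises."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Sleep 1s, 2s, 4s, ... plus jitter so parallel workers
            # do not all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Wrapping your completion call in something like this means a transient throttle becomes a delay instead of a failure, which matters most for long agentic runs where one dropped step can invalidate the whole task.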
The timing of availability is also notable. Colossus 1 is supposed to come online for Anthropic within one month of the announcement. That is unusually fast for infrastructure at this scale, but the cluster is already built and operational. It is a matter of provisioning Anthropic's access rather than constructing anything.
The Compute Race Is Now a First-Class Business Problem
Something this deal makes clear, if it was not already, is that AI compute is now a strategic constraint that the companies in this space have to solve actively and continuously.
Anthropic's situation is a good illustration. They have gigawatt-scale deals committed with Amazon and Google. They also just signed an emergency lease on a competitor's data center because the demand curve outran their projections by a factor of eight. Both things can be true at once. Long-term infrastructure deals are not enough on their own when you are growing at rates this fast.
The orbital compute angle in the announcement is worth noting, even if it reads as forward-looking. Anthropic and SpaceX expressed interest in developing "multiple gigawatts of orbital AI compute capacity." SpaceX filed with the FCC in January 2026 for authorization to deploy a satellite constellation for exactly this purpose. Google published a feasibility study suggesting space-based data centers could become cost-competitive with terrestrial ones once Starship brings launch costs down to around $200 per kilogram, which is a realistic target on a ten-year horizon.
I would not count orbital compute as near-term capacity planning. But it does reflect where the ceiling conversation is already happening. Terrestrial power, land, and cooling are the constraints. SpaceX has a credible path to removing those constraints eventually, and Anthropic is a customer with both the compute need and the capital to be interesting to them as a long-term partner.
The Weird Politics at the Edge of This Deal
This part is less about development and more about context, but I think it matters for how you read the deal.
Anthropic has said they are "very intentional" about where they add compute capacity, specifically mentioning a preference for democratic countries with stable legal frameworks. In the same month they signed this deal, they were actively suing the Trump administration to reverse a Defense Department decision that blacklisted them as a supply chain risk and cut them off from federal contracts.
Musk, who controls SpaceX and, through the merger, xAI, is closely aligned with that same administration. There is an obvious tension between Anthropic's stated preference for democratic infrastructure partners and signing a major deal with someone whose political alignment is with the government that just tried to cut them off.
I am not drawing a conclusion here, partly because the financial logic of the deal is clear and partly because I do not have visibility into how Anthropic weighed the tradeoff internally. But it is the kind of contradiction that tends to come up again when there is a policy dispute down the line. If SpaceX invokes the "harms humanity" reclaim clause someday, that context will matter.
What This Means for Developers Using Claude
The immediate practical takeaway is: the bottleneck you were hitting on Claude Code is about to be significantly less painful.
The longer-term takeaway is less tidy. The AI infrastructure layer is consolidating around a small number of very large players, and the relationships between those players are more complicated than a simple vendor-customer model. Anthropic's compute stack now includes Amazon, Google, Microsoft, SpaceX, and Fluidstack in a mix of equity investments, compute credits, and rental agreements. Those relationships come with interests that are not always perfectly aligned with the people building on the platform.
This is not a reason to stop building on Claude. The rate limits are better, the pricing is still competitive, and the prompt caching economics still favor Anthropic for high-volume production features. For complex agents, the Claude-specific features (extended thinking, memory primitives, tool use) remain genuinely strong. If you have been building your AI agent architecture around Claude, the deal does not change the calculus there.
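The caching economics point is worth making concrete. A rough cost model, using Anthropic's published multipliers at the time of writing (cache writes around 1.25x the base input price, cache reads around 0.1x); treat the exact multipliers and the $3 per million input tokens base price as assumptions and check current pricing before relying on this:

```python
# Rough prompt-caching cost model. Multipliers are assumptions based on
# Anthropic's published pricing (cache write ~1.25x base input price,
# cache read ~0.1x); plug in current numbers before relying on this.

def input_cost(prompt_tokens, requests, base_per_mtok=3.0,
               write_mult=1.25, read_mult=0.10, cached=True):
    """Input-token cost in dollars for `requests` calls sharing one prompt."""
    per_tok = base_per_mtok / 1_000_000
    if not cached:
        return prompt_tokens * requests * per_tok
    # One cache write, then cache reads for every subsequent request.
    write = prompt_tokens * per_tok * write_mult
    reads = prompt_tokens * (requests - 1) * per_tok * read_mult
    return write + reads

# 50k-token shared system prompt, 1,000 requests within the cache window:
uncached = input_cost(50_000, 1_000, cached=False)
cached = input_cost(50_000, 1_000, cached=True)  # roughly a 10x reduction
```

The caveat is the cache lifetime: the savings only materialize if requests land within the cache window, so the model above overstates the benefit for low-traffic features and understates it for hot production paths.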
What it does is add one more data point to the general pattern of the AI infrastructure layer being much more entangled than the clean abstractions on the surface suggest. The API call you make to get a completion goes through a stack that includes a data center leased from the company whose CEO called your provider "evil" three months ago. That is not an argument for or against using the API. It is just an accurate description of the current state of things.
The Short Version
SpaceX had a 300-megawatt data center with 220,000 GPUs sitting underutilized after upgrading to newer hardware. Anthropic was growing 8x faster than projected and hitting capacity limits. They made a deal that makes clear financial sense for both parties, regardless of what either CEO had said about the other three months earlier.
Claude Code rate limits doubled as a direct result. That is the part that affects your day-to-day work, and it is a real improvement for anyone doing serious agentic development.
The rest of the story, the Musk reversal, the orbital compute ambitions, the political contradictions, is worth understanding as context for an industry where the infrastructure layer is genuinely complicated and the companies building on it are making consequential decisions about who they do business with. Those decisions have a way of mattering more than they seem to at announcement time.