OpenAI made two very different moves over the last 24 hours, but together they tell one coherent story.
First, CNBC reported that OpenAI used an investor memo to take a direct shot at Anthropic, arguing that it has a stronger compute and infrastructure position and that the gap is widening. Second, OpenAI published a new Academy guide explaining how to use ChatGPT Projects, framing them as persistent workspaces for chats, files, instructions, and shared context.
On the surface, these look like unrelated updates. One is investor messaging. The other is a product education piece. But both point in the same direction: OpenAI is trying to win the AI race on both the supply side and the user side.
According to CNBC, OpenAI told investors it expects to reach 30 gigawatts of compute by 2030, while estimating that Anthropic will land somewhere around 7 to 8 gigawatts by the end of 2027. That is not casual positioning. It is OpenAI making the case that infrastructure scale will matter as much as model quality in the next phase of this market.
That argument is hard to dismiss. AI competition in 2026 is no longer just about whose model looks smartest in a benchmark screenshot. It is about who can train faster, serve more users, support enterprise demand, and lower the cost of inference without hitting a wall. Compute is no longer just an engineering input. It is a strategic moat.
The timing matters because Anthropic has real momentum. It has been gaining ground in enterprise accounts and continues to build a strong reputation around reliability, safety, and developer trust. OpenAI’s memo reads like a reminder to investors that momentum is not the same thing as control, and that control over infrastructure may decide who actually captures the biggest share of the market.
At the same time, the ChatGPT Projects guide shows OpenAI pushing just as hard on the product layer. Projects are being positioned as dedicated spaces where users can keep files, conversations, instructions, and context together over time. That might sound like a small UX improvement, but it is actually one of the most important shifts in AI product design.
The more AI gets embedded into real work, the less useful stateless chat feels. People do not want to keep re-explaining the same brief, re-uploading the same files, or digging through old chats to reconstruct context. Persistent workspaces solve that problem. They make AI more usable for writing, research, planning, collaboration, and any workflow that stretches over days or weeks.
This is why the Projects update matters more than it first appears. It is not just a feature guide. It is a signal that OpenAI wants ChatGPT to become an operating environment for ongoing work, not just a place for one-off prompts.
Taken together, these two updates show what the next phase of the AI race looks like. The winners will not just have better models. They will have deeper compute reserves, tighter infrastructure control, lower serving costs, and products that hold user context in a way that makes them genuinely sticky.
Anthropic is still very much in this fight. In some areas, especially enterprise trust and model behaviour, it may still have the stronger hand. But OpenAI is making a broader platform play. It wants to own the hardware narrative and the workflow narrative at the same time.
My take is simple. The investor memo is the headline move, but Projects may be the more important one in the long run. Infrastructure shapes margins and valuation. Workflow shapes habit. And habit is what turns a useful AI product into the default place where work happens.
If OpenAI can pair frontier-scale infrastructure with persistent, context-rich collaboration, it strengthens its position not just as a model lab, but as the platform layer for AI-native work.