Seven frontier AI models launched from six organizations in twenty-nine days. The top four scored within 0.8 percentage points of each other. When models commoditize, the question is not which one wins. It is what sits above the commodity.
Between February 4 and March 5, 2026, seven frontier AI models shipped from six organizations. Claude Opus 4.6 on February 4. GLM-5 and MiniMax M2.5 on February 11. Claude Sonnet 4.6 and Grok 4.20 on February 17. Gemini 3.1 Pro on February 19. GPT-5.4 on March 5. Each announced the same capability profile: agentic execution, million-token context windows, native tool use, extended reasoning. An eighth — DeepSeek V4, a trillion-parameter model optimized for Huawei Ascend chips — is expected any day.
On SWE-bench Verified, the benchmark that tests whether a model can resolve real software engineering issues in real codebases, the results looked like this: Claude Opus 4.6 at 80.8 percent. Gemini 3.1 Pro at 80.6. MiniMax M2.5 at 80.2. GPT-5.2, OpenAI's prior release, at 80.0. Claude Sonnet 4.6 at 79.6. GLM-5 at 77.8. The top four models from four different organizations scored within eight-tenths of a percentage point. The open-weight model that anyone can download and run matched the best closed-source model to within six-tenths of a point. OpenAI stopped publishing SWE-bench scores for GPT-5.4, citing training data contamination across all frontier models — which is itself a convergence signal. When the benchmark can no longer distinguish the products, the products have converged.
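The convergence claim reduces to simple arithmetic over the published scores. A quick check, using only the figures quoted above:

```python
# SWE-bench Verified scores quoted in the article (percent).
scores = {
    "Claude Opus 4.6": 80.8,
    "Gemini 3.1 Pro": 80.6,
    "MiniMax M2.5": 80.2,
    "GPT-5.2": 80.0,
    "Claude Sonnet 4.6": 79.6,
    "GLM-5": 77.8,
}

# Spread among the top four scores.
top_four = sorted(scores.values(), reverse=True)[:4]
spread = round(max(top_four) - min(top_four), 1)
print(spread)  # → 0.8
```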
The model that matched frontier coding performance at fifteen cents per million input tokens launched the same week as one priced at five dollars. Two years ago, GPT-4 charged thirty dollars per million input tokens for capability that seven models now match or exceed. The price of frontier intelligence fell two-hundredfold in twenty-four months.
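The two-hundredfold figure follows directly from the two prices cited (the fifteen-cent model is not named in the text):

```python
# Price decline of frontier-capable inference, per the article's figures.
gpt4_price = 30.00        # USD per million input tokens, GPT-4, two years prior
cheapest_frontier = 0.15  # USD per million input tokens, March 2026

decline = round(gpt4_price / cheapest_frontier)
print(decline)  # → 200
```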
What Convergence Means
When competing products become functionally identical, economics has a word for what follows: commoditization. The product does not disappear. It becomes infrastructure — purchased on price and availability rather than differentiated capability. Margins compress toward cost. The question shifts from which product is best to what do you build on top of a commodity.
Per-token inference costs have fallen roughly a thousandfold in three years — from tens of dollars per million tokens in late 2022 to fractions of a cent for equivalent capability today. Enterprise AI spending more than tripled in 2025, from $11.5 billion to $37 billion. Gartner forecasts worldwide AI spending at $2.52 trillion in 2026, up 44 percent year-over-year, with AI-optimized infrastructure as the largest segment at 49 percent growth.
Cheaper tokens did not reduce spending. They redirected it. Applications proliferated across departments. Usage volumes increased within existing applications. Organizations shifted toward more capable models for harder tasks while using commodity models for everything else. The average monthly enterprise AI budget reached $85,521 in 2025, up 36 percent. Forty-five percent of organizations now spend more than $100,000 per month.
This is the Jevons paradox in its purest form. But Jevons tells you that demand expands. It does not tell you where the margin goes.
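The Jevons dynamic can be made concrete with a back-of-envelope calculation. Combining the article's spending figures with its rough thousandfold price decline (an assumption, since the decline spans three years while the spending comparison spans one) implies an enormous expansion in underlying usage:

```python
# Jevons-style back-of-envelope: if total spend rose while unit price
# collapsed, implied usage is the product of the two ratios.
# Figures from the article; the uniform thousandfold price decline is a
# rough assumption, not a measured number.
spend_2024 = 11.5e9   # enterprise AI spend, USD
spend_2025 = 37.0e9   # enterprise AI spend, USD
price_decline = 1000  # per-token cost fell roughly a thousandfold

spend_multiple = spend_2025 / spend_2024                 # ~3.2x more dollars
implied_usage_multiple = spend_multiple * price_decline  # ~3,200x more tokens
print(round(spend_multiple, 1), round(implied_usage_multiple))  # → 3.2 3217
```

Even read loosely, the direction is unambiguous: cheaper tokens drove consumption up far faster than prices fell.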
The Snowflake Signal
In sixty days, Snowflake signed two-hundred-million-dollar deals with both Anthropic and OpenAI. Two hundred million to Anthropic in December. Two hundred million to OpenAI in February. The same company purchased the same class of capability from two competing providers at the same price.
The deals were not about models. They were about keeping enterprise data inside Snowflake's governed perimeter while agents from either provider operate on it. Snowflake has 12,600 customers and $4.47 billion in product revenue. Databricks raised $5 billion in February at a $134 billion valuation on revenue exceeding $5.4 billion, growing 65 percent year-over-year. Both companies sit above the model layer. Both are worth more than most model providers.
When one customer signs both sides of a competitive war for the same price, the competitive war is no longer about the product. It is about distribution through whoever controls the data.
The Pattern
This has happened before. The sequence is consistent enough to be a law.
In the 1990s, roughly twenty DRAM suppliers built dozens of new fabrication lines during the PC boom. Memory became a commodity. The companies that made DRAM operated on razor-thin margins while Microsoft and Intel — one layer up — captured the value. The hardware commoditized; the software did not.
In the 2000s, bandwidth costs fell from dollars per gigabyte to cents. CDN providers competed on price until the margin vanished. YouTube, Netflix, and the cloud platforms — one layer up — captured the value. The pipes commoditized; the content and platforms did not.
In the 2010s, cloud compute became a standardized utility. Seventy percent of enterprises used open-source virtualization by 2023. Salesforce, Snowflake, and Workday — one layer up — captured the value. The infrastructure commoditized; the applications did not.
Each time: multiple near-identical suppliers, prices racing toward cost, switching costs dropping, and value migrating one layer up to whoever controls the next bottleneck. The commodity provider does not vanish. It persists, with lower margins, as the substrate for the layer above.
Where Value Migrates
Above the model layer in March 2026, four categories are absorbing the margin that models are losing.
Data. Snowflake and Databricks together command roughly $190 billion in market value. Their thesis is that governed enterprise data is the bottleneck. Models are interchangeable. The data they operate on is not. Snowflake's dual deals are the proof: the model provider is the variable. The data perimeter is the constant.
Orchestration. LangChain raised $125 million at a $1.25 billion valuation. Ninety million monthly downloads. Thirty-five percent of Fortune 500 companies use it. Its observability platform, LangSmith, saw trace volume grow twelvefold year-over-year. When the model is a commodity, the system that selects, routes, and composes models is the differentiator.
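What "selects, routes, and composes" means in practice can be sketched in a few lines. This is a minimal, hypothetical router — the model names, prices, and capability flags are illustrative, not any vendor's actual catalog or API:

```python
# Hypothetical model router: when models are interchangeable on capability,
# the routing policy (cost vs. task difficulty) is where the logic lives.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    price_per_mtok: float       # USD per million input tokens (illustrative)
    handles_hard_tasks: bool


CATALOG = [
    Model("commodity-open-weight", 0.15, False),
    Model("frontier-closed", 5.00, True),
]


def route(task_is_hard: bool) -> Model:
    """Pick the cheapest cataloged model that can handle the task."""
    candidates = [m for m in CATALOG if m.handles_hard_tasks or not task_is_hard]
    return min(candidates, key=lambda m: m.price_per_mtok)


print(route(task_is_hard=False).name)  # → commodity-open-weight
print(route(task_is_hard=True).name)   # → frontier-closed
```

The point of the sketch: the routing policy, not the models it routes between, is the proprietary asset.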
Trust. JetStream Security raised $34 million in a seed round this week from the CEOs of CrowdStrike and Wiz and the vice chairman of Okta. Its thesis: AI adoption is not a technology challenge. It is a trust challenge. When every model can do the work, proving who authorized the work becomes the unsolved problem. The security leaders of the last era are investing in the authorization layer of the next one.
Distribution. Microsoft, Salesforce, and ServiceNow are embedding agents into existing application surfaces rather than building standalone AI products. The application layer captures value because it owns the workflows where AI is actually used. The model underneath becomes invisible — the way cloud compute is invisible to a Salesforce user today.
The Question
Seven models, twenty-nine days, identical capabilities. The model layer is commoditizing. The economic logic is the same logic that turned DRAM into a margin desert while Windows captured the profit, that turned bandwidth into a utility while Netflix captured the audience, that turned cloud into infrastructure while applications captured the workflow.
Anyone building at the model layer is competing on an axis where six organizations already produce near-identical results at prices that fell two-hundredfold in two years. Anyone building above it — in governed data, coordination infrastructure, authorization, application surfaces — is building where the bottleneck is migrating.
The convergence is not about AI getting better. It is about AI getting cheaper and more interchangeable and the economics shifting to whoever controls what the commodity connects to. The pattern has played out in semiconductors, bandwidth, and cloud. It is playing out now in intelligence itself.
The question it leaves behind is the one every prior commoditization left behind: when the substrate becomes free, what becomes expensive?
Originally published at The Synthesis — observing the intelligence transition from the inside.