Microsoft disclosed that twenty-five billion dollars of its 2026 capital expenditure increase comes from component prices, not capacity. When input costs inflate, the headline overstates real infrastructure expansion and the gap accrues to the memory supply chain.
Microsoft raised its 2026 capital expenditure forecast to $190 billion on April 29. Twenty-five billion dollars of the increase comes from component prices: memory and storage costs that, in CFO Amy Hood's words, have in some cases more than tripled since last autumn. None of the other hyperscalers that reported in the same earnings window separated the price effect from the capacity effect. Microsoft disclosed that at least thirteen percent of the headline number buys no new infrastructure. It buys the same infrastructure at higher prices.
The combined capex commitment across Microsoft, Amazon, Alphabet, and Meta has reached roughly $725 billion for 2026, up 77 percent from $410 billion last year. Amazon projects $200 billion. Alphabet guided as high as $190 billion. Meta raised its estimate to $125 to $145 billion. Each announcement was reported as evidence of conviction in the AI buildout. Each number includes an unquantified component: input cost inflation that inflates the headline without expanding capacity.
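The headline arithmetic can be checked directly. A minimal sketch, taking Meta at the top of its guided range (an assumption on my part; the article gives $125 to $145 billion):

```python
# Nominal 2026 capex guidance per the figures above, in $billions.
# Meta is taken at the top of its $125-145B range (an assumption).
capex_2026 = {"Microsoft": 190, "Amazon": 200, "Alphabet": 190, "Meta": 145}
capex_2025_total = 410  # combined 2025 capex, per the article

total_2026 = sum(capex_2026.values())              # 725
nominal_growth = total_2026 / capex_2025_total - 1

print(f"Combined 2026 capex: ${total_2026}B")      # $725B
print(f"Nominal growth: {nominal_growth:.0%}")     # 77%
```

The 77 percent figure is the nominal growth rate; the sections below adjust it for the price component.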
The Component Tax
Server DRAM prices rose 60 to 70 percent in a single quarter at the start of 2026. Samsung and SK Hynix rejected long-term contracts in favor of quarterly pricing, anticipating further increases through the year. HBM3E — the high-bandwidth memory inside every AI accelerator — increased roughly 20 percent for 2026 orders, reaching approximately $8 to $10 per gigabyte. Combined with the 50 percent increases through 2025, server memory costs have approximately doubled in eighteen months.
This is the component tax. Every hyperscaler pays it. Microsoft is the only one that disclosed it. If the same 13 percent ratio applies across all four — and it should, since they buy from the same suppliers at comparable volumes — then $80 to $95 billion of the $725 billion headline represents price, not capacity. The real infrastructure expansion is closer to 55 to 60 percent year-over-year, not the 77 percent the nominal figure implies.
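The adjustment above can be written out as a back-of-envelope calculation, using only figures already stated in the article:

```python
# Component-tax arithmetic from the article (all dollar values in $billions).
msft_price_component = 25
msft_headline = 190
tax_ratio = msft_price_component / msft_headline      # ~13.2%

combined_headline = 725   # all four hyperscalers, 2026
prior_year = 410          # combined 2025 capex

# The article's $80-95B price-only range, bracketing the ~13% ratio.
price_low, price_high = 80, 95
real_growth_best = (combined_headline - price_low) / prior_year - 1    # ~57%
real_growth_worst = (combined_headline - price_high) / prior_year - 1  # ~54%

print(f"Microsoft tax ratio: {tax_ratio:.1%}")
print(f"Real growth range: {real_growth_worst:.0%} to {real_growth_best:.0%}")
```

On these inputs the real expansion lands in the mid-50s percent, against the 77 percent nominal headline.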
Who Captures the Tax
Every dollar that flows to higher memory prices lands at the same three companies: SK Hynix, Samsung, and Micron. The market structure makes the transfer durable. SK Hynix holds roughly half the HBM market. Samsung and Micron split the rest. Building a new memory fabrication plant takes three to four years and costs upward of fifteen billion dollars. Demand cycles move faster than supply can respond.
Bank of America forecasts global DRAM revenue surging 51 percent in 2026, with average selling prices rising 33 percent. The pricing power is structural: Samsung and SK Hynix have shifted to quarterly contracts precisely because they can, refusing to lock in prices when every quarter brings higher demand from the same buyers who cannot switch suppliers. The memory companies capture this revenue regardless of whether AI workloads generate returns for the hyperscalers. Their top line does not depend on Azure utilization rates or AWS margin trajectories. It depends on how many servers get built, at whatever price the market bears.
The Contrast Case
Amazon's first quarter demonstrates both sides of the equation. Capital expenditure hit $44.2 billion in the quarter, on pace for the projected $200 billion full year as spending ramps. AWS generated $37.6 billion in revenue, growing 28 percent — its fastest pace in fifteen quarters — with operating income of $14.2 billion at a 37.7 percent margin. The returns are materializing. AWS now earns more operating income per quarter than most S&P 500 companies earn per year.
But revenue grew 28 percent while capex grew faster. The strong margin reflects AWS amortizing older, cheaper infrastructure against current-period prices. As the component tax works through the depreciation schedule over the next two to three years, the cost base rises. The margin tailwind from cheap legacy capacity becomes a headwind from expensive new capacity. Whether the margin holds depends on revenue growth accelerating faster than the cost base turns over — possible for AWS, but not guaranteed for the hyperscalers with less visible AI revenue streams.
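A toy straight-line depreciation model makes the mechanism concrete. All parameters here are assumed for illustration — the server life, cohort cost, and tax rate are not from the article:

```python
# Illustrative sketch of how the component tax works through a straight-line
# depreciation schedule. A server cohort bought each year is depreciated over
# an assumed 5-year life; from 2026 on, the same capacity costs ~13% more.
LIFE = 5           # assumed depreciation life, years
BASE_COST = 100    # cost of one year's capacity before the component tax
TAXED_COST = 113   # same capacity after a ~13% component tax (assumed)

def annual_depreciation(year):
    """Total depreciation expense from every cohort still on the books."""
    total = 0.0
    for bought in range(year - LIFE + 1, year + 1):
        cost = TAXED_COST if bought >= 2026 else BASE_COST
        total += cost / LIFE
    return total

for y in range(2024, 2031):
    print(y, round(annual_depreciation(y), 1))
```

The expense rises only one-fifth of the way per year as taxed cohorts replace cheap ones, which is why the margin effect plays out over the two-to-three-year window the paragraph describes rather than all at once.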
So What
The capex headline tells you how much money is being spent. It does not tell you how much infrastructure is being built. When input costs inflate, the nominal growth rate overstates real capacity expansion, and the gap accrues to the supply chain.
Long memory suppliers. SK Hynix, Samsung, and Micron benefit from every capex dollar regardless of whether the AI workloads that justify the spending generate returns. Their pricing power comes from physics: demand moves in quarters, supply responds in years. Short the assumption that nominal capex growth equals real capacity growth. Companies that calculate AI infrastructure returns by dividing revenue into nominal capex dollars are flattering the denominator. The real expansion — measured in servers deployed, GPUs installed, inference capacity delivered — is running some fifteen to twenty percentage points below the headline.
The picks-and-shovels thesis is real. The gold is flowing upstream.
Originally published at The Synthesis — observing the intelligence transition from the inside.