Author: Ruslan Averin | Financial analysis blog averin.com
The Number Everyone Keeps Quoting Without Understanding What It Buys
$300 billion. That is the combined AI infrastructure capital expenditure that Microsoft, Google, Meta, and Amazon are expected to deploy in calendar year 2026. The number gets cited in earnings calls, in analyst notes, in every AI bull and bear argument I've seen this year.
What almost nobody does is open the line items and ask: what is $300 billion actually buying?
I've spent considerable time on this — partly because I have meaningful exposure to the theme across several positions, and partly because the bear case I hear most often ("this is the dot-com infrastructure bubble all over again") deserves a precise rebuttal rather than hand-waving optimism. Let me walk through the four companies, what they're building, and what I think it means for investors who want exposure without owning the hyperscalers directly.
Breaking Down the $300 Billion
The aggregate number is real — the four headline budgets actually total roughly $320 billion, and the $300 billion figure holds up once Amazon's non-AI cloud spending is stripped out — but the composition matters.
Microsoft: $80 billion in FY2026. This is the most scrutinized number because it follows Satya Nadella's January 2025 announcement and has been largely confirmed in subsequent earnings. The bulk — roughly $50-55 billion — goes into data center construction and GPU cluster buildout across the US and Europe. The remaining $25-30 billion covers networking infrastructure, storage expansion, and custom silicon development (Microsoft's Maia chips). Microsoft is building for two purposes: capacity to serve Azure AI workloads for enterprise customers, and infrastructure to run its own Copilot suite at scale.
Google: $75 billion in CY2026. Alphabet's Q1 2026 earnings confirmed the trajectory — $17.2 billion in a single quarter, annualizing to approximately $69 billion, with guidance suggesting acceleration in the back half. Google is building the world's largest fleet of custom AI accelerators (TPUs), alongside significant investment in undersea cable capacity and nuclear power purchase agreements. The nuclear piece is real: Google has signed Power Purchase Agreements with multiple operators to provide carbon-free power for data centers at the scale AI workloads require.
Meta: $65 billion in CY2026. Mark Zuckerberg has been explicit about the strategic logic — Meta is building proprietary AI infrastructure rather than renting capacity from Amazon or Microsoft, a bet that full vertical integration will be cheaper at Meta's scale over a 10-year horizon. The $65 billion includes a planned data center in Louisiana that alone will consume approximately 1 gigawatt of power at peak capacity — equivalent to roughly 750,000 US homes.
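The homes equivalence is easy to sanity-check. The ~10,500 kWh/year average household consumption figure below is my own assumption (an approximate US-average number), not anything Meta has disclosed:

```python
# Checking the "1 GW ~ 750,000 US homes" equivalence quoted above.
# Assumption (mine): an average US household uses ~10,500 kWh/year,
# which works out to ~1.2 kW of continuous draw.
avg_home_kw = 10_500 / (365 * 24)    # ~1.2 kW continuous per home
homes = 1_000_000 / avg_home_kw      # 1 GW = 1,000,000 kW
print(f"{homes:,.0f} homes")         # ~830,000 -- same order of magnitude as 750,000
```

The quoted 750,000 figure implies a slightly higher per-home draw, but either way the conclusion stands: one data center, one mid-sized state's worth of residential demand.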
Amazon: $100 billion in CY2026. The largest single number, and somewhat misleadingly presented. Amazon's $100 billion includes AWS infrastructure broadly — not purely AI-focused spending. A reasonable estimate is $60-70 billion in AI-specific buildout (GPU clusters, custom Trainium and Inferentia chips, fiber and backbone investment) with the balance in general cloud capacity. Even at the conservative estimate, it's the largest single-company AI infrastructure commitment in history.
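Taken together, the line items can be sanity-checked with a few lines of arithmetic. This sketch uses only the figures quoted above — and note the $60-70 billion AI-specific range for Amazon is this post's estimate, not Amazon's disclosure:

```python
# Back-of-the-envelope check on the capex figures above (all in $B).
headline = {"Microsoft": 80, "Google": 75, "Meta": 65, "Amazon": 100}
total_headline = sum(headline.values())           # 320 -- headline total

# Amazon's $100B is AWS-wide; only an estimated $60-70B is AI-specific.
ai_specific_low = 80 + 75 + 65 + 60               # 280
ai_specific_high = 80 + 75 + 65 + 70              # 290

# Google's Q1 2026 run rate: $17.2B per quarter annualizes to ~$69B,
# consistent with $75B guidance if spending accelerates in the back half.
google_annualized = 17.2 * 4                      # 68.8

print(total_headline, ai_specific_low, ai_specific_high, google_annualized)
```

So the oft-quoted $300 billion sits inside the $280-290 billion AI-specific range plus rounding, while headline capex across the four is closer to $320 billion.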
What They're Building: The Physical Reality
The physical infrastructure required for these workloads is what makes this cycle interesting from an investment perspective. It is not primarily a software or talent expenditure. It is a massive physical buildout with identifiable supply chain beneficiaries.
GPU clusters are the obvious layer. Tens of thousands of Nvidia H100 and B200 GPUs in a single training cluster, networked with InfiniBand at extraordinary bandwidth. The limiting factor is not money — it is chip availability. Nvidia's order backlog and production ramp are why the hyperscalers are all simultaneously developing custom silicon as a hedge.
Power infrastructure is the constraint that most analysis underweights. A single 100,000-GPU training cluster consumes in the range of 100-150 megawatts continuously. The four hyperscalers combined are adding capacity that will require several additional gigawatts of reliable, preferably carbon-free power. This is not a 2030 problem — it is a 2026-2027 problem, and utilities and power equipment manufacturers are scrambling.
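To see why 100-150 megawatts is the right order of magnitude, here is a rough sketch. The ~700 W per GPU figure and the 1.2-1.4 PUE range are my own typical-value assumptions, not any hyperscaler's disclosure:

```python
# Rough check on the 100-150 MW figure for a 100,000-GPU training cluster.
# Assumptions (mine): ~700 W per H100-class GPU, ~30% extra for CPUs,
# networking, and storage, and a facility PUE between 1.2 and 1.4.
gpus = 100_000
gpu_watts = 700
it_load_mw = gpus * gpu_watts * 1.3 / 1e6    # ~91 MW of IT load
facility_low = it_load_mw * 1.2              # ~109 MW
facility_high = it_load_mw * 1.4             # ~127 MW
print(f"{facility_low:.0f}-{facility_high:.0f} MW")  # inside the quoted 100-150 MW range
```

Swap in B200-class power draw (closer to 1 kW per GPU) and the same cluster pushes toward the top of that range, which is why the power constraint bites harder with each GPU generation.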
Cooling systems are equally non-trivial. GPU clusters run hot. Air cooling at the required density is becoming physically impractical for the largest facilities. Liquid cooling — direct-to-chip and immersion cooling systems — is a growing share of every new data center build. The companies designing and manufacturing these systems are in high demand with multi-year backlogs.
Where I'm Positioned: Picks and Shovels
I don't own large positions in the hyperscalers themselves. Microsoft, Google, Amazon, and Meta are all priced for a scenario where the AI infrastructure investment generates substantial returns — if it does, the stocks are probably fairly valued at current levels; if there's any disappointment, the multiple compression is painful. That's not a risk/reward I want to own at current prices.
What I own is the infrastructure layer — the companies that get paid regardless of which hyperscaler wins the AI race.
Nvidia (NVDA) is the most obvious beneficiary, and I hold it with covered calls to generate income against the position. The hyperscaler custom silicon programs (Maia, TPU, Trainium, Marvell-designed chips for Meta) are real competitive threats to Nvidia over a 5-year horizon, but in the 2026-2027 window, there is no production alternative for the highest-performance AI training at scale. Nvidia's H100 and B200 GPUs are the only merchant silicon delivering the FLOPS required for frontier model training — Google's TPUs are competitive, but Google doesn't sell them to rivals. That won't change in 12-18 months.
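For readers unfamiliar with the structure, a covered call trades away upside beyond the strike in exchange for premium income. The entry price, strike, and premium below are hypothetical round numbers for illustration, not my actual positions:

```python
# Illustrative covered-call payoff at expiry: long 100 shares plus one
# short call. Numbers are hypothetical, not actual trade details.
def covered_call_pnl(spot_at_expiry, entry_price, strike, premium, shares=100):
    """P&L of the combined position: stock gains are capped at the strike,
    and the premium is kept regardless of where the stock finishes."""
    stock_pnl = (min(spot_at_expiry, strike) - entry_price) * shares
    return stock_pnl + premium * shares

# Hypothetical: bought at $120, sold a $140 call for a $5 premium.
print(covered_call_pnl(150, 120, 140, 5))   # 2500 -- upside capped at the strike
print(covered_call_pnl(110, 120, 140, 5))   # -500 -- premium cushions the drawdown
```

The trade-off is exactly the one I'm accepting on NVDA: I give up the blow-off-top scenario in exchange for being paid to hold through the volatility.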
Vertiv Holdings (VRT) is my largest single position in this theme. Vertiv makes the power distribution, thermal management, and infrastructure systems that go inside data centers. Their liquid cooling solutions are on the critical path for every GPU cluster deployment. Q1 2026 earnings showed order book growth of 35% year-over-year. Backlog is 18+ months. They are effectively sold out of the product categories that AI requires. This is a company printing revenue into existing demand.
Eaton Corporation (ETN) is my second significant position in the power infrastructure layer. Eaton makes the electrical distribution equipment — switchgear, transformers, power conditioning — that sits between the grid and the data center. Every new data center build requires Eaton's products. The lead times on their heavy electrical equipment are running 18-24 months. The backlog is real and visible.
Constellation Energy (CEG) is the most speculative position, but the thesis is concrete: Constellation is the largest nuclear energy generator in the US, operating a fleet of 21 reactors. Microsoft's deal to reopen Three Mile Island Unit 1 (operated by Constellation) for dedicated data center power was the first; it won't be the last. Hyperscalers need 24/7 carbon-free power at gigawatt scale, and nuclear is the only source that fits that description. CEG is positioned at the intersection of two secular trends — AI power demand and nuclear renaissance.
The Bear Case I Take Seriously
I want to engage with the bear case honestly, because I've held these positions long enough to have watched the concerns evolve.
The legitimate concern is not that AI doesn't work — it clearly generates value in multiple enterprise applications. The legitimate concern is that the hyperscalers are all building simultaneously for the same customer base, and the competition between AWS, Azure, and Google Cloud is structurally deflationary for cloud AI pricing. If the capacity buildout outstrips actual enterprise AI workload demand, pricing power erodes, and the return on $300 billion of capex comes in well below the cost of capital.
This is the capacity glut scenario — the one that actually happened with fiber optic cable in 2000-2001, and with cloud servers in 2015-2016 before enterprise adoption caught up. It's not impossible. The honest answer is that enterprise AI adoption is running behind hyperscaler buildout at the moment. The question is whether adoption closes that gap or the overcapacity persists.
I hold the picks-and-shovels positions rather than hyperscaler equity precisely because of this uncertainty. Vertiv, Eaton, and Nvidia get paid when the capacity is built — the question of whether that capacity earns a good return is the hyperscalers' problem, not mine.
The Timeline: When Returns Appear
My working model: AI infrastructure investment peaks in 2025-2027. The capacity comes online through 2026-2028. Revenue from enterprise AI workloads running on that capacity begins showing in earnings in 2027-2028, and only then can the hyperscalers demonstrate whether deployed capital earns returns above its cost of capital.
The period between peak capex (now) and visible returns (2027+) is the window when skepticism is highest and stocks are most vulnerable. I've sized the picks-and-shovels positions to hold through that window without needing to time the exact inflection.
The $300 billion is not being spent irrationally. It's being spent because each hyperscaler believes — correctly, in my assessment — that controlling AI infrastructure at scale in 2028 creates a structural competitive advantage that is very difficult to replicate late. That's a bet I'm comfortable making in the supply chain. I'm less comfortable making it in the hyperscalers themselves at current multiples.
© Ruslan Averin — averin.com. Original: averin.com/en/journal/ai-capex-cycle-2026-hyperscaler-spending