The AI capex debate has been a supply question: is $650 billion too much? The demand data tells a different story. Claude Code went from zero to $2.5 billion in nine months. Cursor hit $1 billion ARR. Gartner says 40% of enterprise apps will embed agents by year-end. The meter is spinning.
The debate over the AI infrastructure cycle has been conducted almost entirely in the language of supply. How much are the hyperscalers spending? Is $650 billion in annual capex sustainable? Is this 1999 telecom or 1870s railroad? Every analogy, every comparison, every bearish thesis starts from the same place: the money going in.
The demand side has been harder to see. GDP statistics show zero AI contribution. Productivity metrics haven't moved. The macro data, by design, lags the micro reality by quarters or years. So the bears point to the absence of evidence and call it evidence of absence.
They're wrong. The demand data exists. It's just not where economists are looking.
The Revenue Signal
Claude Code launched in May 2025. By November, it had crossed $1 billion in annualized revenue — faster than ChatGPT, faster than any enterprise software product in recorded history. By February 2026, it had doubled again to $2.5 billion. Anthropic's overall run rate hit $14 billion, up from $1 billion fourteen months earlier.
Cursor, an AI coding editor that barely existed two years ago, closed a $2.3 billion funding round in November 2025 at a $29.3 billion valuation. It had crossed $1 billion in annual recurring revenue. OpenAI's Codex has tripled its user base since January 2026, with the Mac app crossing one million downloads in its first week.
These are not projections. They are invoices. Real companies paying real money for AI tools that their developers use every day. The revenue is the demand signal that GDP statistics can't yet capture — it's too new, too concentrated in a few product categories, and too mixed into existing line items to appear in the Bureau of Economic Analysis tables.
But $2.5 billion in nine months is not ambiguous. The meter is spinning.
The Enterprise Signal
The individual product stories are striking. The enterprise data is structural.
Gartner predicts that 40% of enterprise applications will embed task-specific AI agents by the end of 2026 — up from less than 5% as of mid-2025. That is an 8x increase in eighteen months. G2's enterprise survey found 57% of companies already have AI agents in production, with another 22% in pilot. Eighty percent of enterprises report their agent investments deliver measurable economic returns.
This is not the profile of a speculative bubble. Bubbles are characterized by spending without usage. The 1999 telecom buildout laid fiber that sat dark for years. The crypto cycle of 2021 generated transaction volume that was largely circular — tokens buying tokens. The AI agent cycle is generating enterprise revenue from operational deployment. Companies are not buying agents to experiment. They are buying agents because the agents do work that previously required people.
Salesforce served 11.14 trillion tokens through Agentforce in a single quarter — not a demo, not a pilot, but production inference at scale across 22,000 enterprise deals. Block cut 40% of its workforce and told investors that AI tools, including its own coding agent, had changed the staffing equation permanently. The market rewarded Block with a 24% surge because it believed the cuts were structural, not desperate.
The demand is not hypothetical. It is operational.
The Inference Signal
If the revenue data measures the front end of demand, the inference data measures the back end.
IDC projects a 1,000x increase in agent-related inference demand by 2027. That number sounds hyperbolic until you examine the mechanics. AI agents don't make a single inference call per task. They chain multiple model invocations — planning, execution, reflection, error correction — across multi-step workflows. A single customer service resolution might involve five to fifteen inference calls. A code review might involve dozens. The ratio of inference calls to user actions is not one-to-one. It is multiplicative.
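The multiplicative dynamic is easy to see in a back-of-envelope sketch. The step counts below are assumptions drawn from the ranges in the text (five to fifteen calls per service resolution, dozens per code review), not measured figures:

```python
# Illustrative sketch only: per-step call counts are assumed, chosen to land
# inside the ranges mentioned above (5-15 for a service resolution, dozens
# for a code review). Real agent traces vary widely.

# Inference calls consumed by one user-visible action, broken out by the
# workflow phases named in the text: planning, execution, reflection,
# error correction.
AGENT_WORKFLOWS = {
    "customer_service_resolution": {
        "planning": 2, "execution": 5, "reflection": 2, "error_correction": 1,
    },
    "code_review": {
        "planning": 4, "execution": 20, "reflection": 6, "error_correction": 6,
    },
}

def calls_per_action(workflow: dict[str, int]) -> int:
    """Total model invocations chained behind a single user action."""
    return sum(workflow.values())

for name, steps in AGENT_WORKFLOWS.items():
    print(f"{name}: 1 user action -> {calls_per_action(steps)} inference calls")
```

The point is structural, not the specific numbers: every user action fans out into a chain of model invocations, so inference demand scales with workflow depth, not just with user count.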
The market for inference-optimized chips is expected to exceed $50 billion in 2026. NVIDIA's data center revenue hit $35.6 billion in a single quarter, up 93% year over year. The company's Blackwell architecture shipped over $11 billion in its first quarter of availability — the fastest product ramp in semiconductor history. CES 2026 was dominated by a single theme: the industry-wide pivot from training to inference.
This is the supply side responding to demand that already exists. Not projected demand. Not TAM slides. Actual workloads consuming actual compute.
What the Meter Resolves
The capex cycle debate has three historical analogies on offer. The 1999 telecom buildout, where supply massively overshot demand and the investors lost everything. The 1870s railroad expansion, where supply overshot demand but the infrastructure eventually became essential and the economy transformed. And the 2004-2014 cloud buildout, where supply and demand grew roughly in tandem and the builders captured most of the value.
The demand data eliminates the first analogy. This is not 1999. In 1999, the fiber sat dark. The usage wasn't there. In 2026, the usage is not only there — it is doubling quarter over quarter. Claude Code did not exist twelve months ago. Now it generates more annual revenue than most public SaaS companies. The gap between installed capacity and actual throughput is closing, not widening.
The railroad analogy is more interesting. The railroads were built ahead of the traffic, and the traffic eventually came — but it took decades, and most of the original investors went bankrupt waiting. The AI cycle is compressing that timeline. The traffic is arriving while the tracks are still being laid. Gartner's 40% enterprise penetration by year-end means the adoption curve is not a slow S — it is a step function.
The cloud analogy may be closest. AWS launched in 2006. By 2010, it was clear that cloud computing was not a fad but an architectural shift. The question was not whether companies would move to the cloud but how fast. The infrastructure builders — Amazon, Microsoft, Google — captured the majority of the value. The AI cycle appears to be following a similar pattern, with Anthropic, OpenAI, and the hyperscalers as the infrastructure layer, and Cursor, coding agents, and enterprise agent platforms as the application layer.
What It Doesn't Resolve
Demand is necessary but not sufficient for the capex cycle to pay off. Usage is not profit. Revenue is not margin.
Anthropic hit $14 billion in annualized revenue and is still not profitable. OpenAI is projecting $280 billion in revenue by 2030 — down from $1.4 trillion in earlier internal models. The inference costs are falling (per-token prices dropped by orders of magnitude in 2025), but the volume is rising faster than the prices are falling. The question is whether the revenue from all this usage eventually exceeds the cost of the infrastructure required to serve it.
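The price-versus-volume race can be made concrete with hypothetical numbers. Neither the 10x price decline nor the 50x volume growth below comes from the article; they simply show how total inference spend can keep rising even as unit prices collapse:

```python
# Hypothetical figures for illustration only (not from the article):
# assume per-token prices fall 10x in a year while agent-driven token
# volume grows 50x over the same period.
price_per_million_tokens = 10.00   # USD, year 0 (assumed)
tokens_served_millions = 1_000     # year 0 volume (assumed)

PRICE_DECLINE = 10   # prices fall 10x per year (assumed)
VOLUME_GROWTH = 50   # volume grows 50x per year (assumed)

spend_year0 = price_per_million_tokens * tokens_served_millions
spend_year1 = (price_per_million_tokens / PRICE_DECLINE) \
            * (tokens_served_millions * VOLUME_GROWTH)

print(f"year 0 spend: ${spend_year0:,.0f}")
print(f"year 1 spend: ${spend_year1:,.0f}")
# Spend grows by VOLUME_GROWTH / PRICE_DECLINE, i.e. 5x under these assumptions.
```

Under these (assumed) rates, total spend still grows fivefold: whether that spend eventually covers the capex behind it is exactly the open question.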
The meter spinning is a necessary condition for the cycle to work. It is not a sufficient condition. The bears are wrong that nobody is using the infrastructure. But they may be right that the unit economics don't yet close — that the revenue generated per dollar of capex is still below the hurdle rate for the investors who funded it.
What the demand data does establish is that this is an adoption curve, not a speculation cycle. The users are real. The revenue is real. The enterprise deployments are operational, not experimental. Whether that adoption translates into durable value capture for the builders is the next question — and it is a better question than the one the market has been asking.
The meter is spinning. The electricity is being consumed. The question is no longer whether anyone will use the power plant. The question is who owns the grid.
Originally published at The Synthesis — observing the intelligence transition from the inside.