Broadcom just reported 106% AI revenue growth from designing chips that six hyperscalers asked for by name. The shift from buying standard GPUs to commissioning custom silicon is not competition — it is the market telling you the workload is permanent.
On March 4, 2026, Broadcom reported first-quarter fiscal year 2026 results. Revenue: $19.3 billion, up 29% year over year. AI revenue: $8.4 billion, up 106%. Adjusted earnings per share: $2.05, above the $2.02 consensus. Second-quarter guidance: $22 billion in total revenue, beating the Street's $20.56 billion estimate. AI semiconductor revenue guidance for next quarter: $10.7 billion.
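The reported figures are internally consistent, which is worth checking when a quarter is this dense with numbers. A quick sketch below back-solves the implied year-ago bases from the stated growth rates and sizes the guidance beat; the prior-year values are derived estimates, not disclosed numbers.

```python
# Sanity-check of the reported figures (all dollar amounts in billions).
# Prior-year bases are back-solved from the stated growth rates, so they
# are derived estimates, not disclosed numbers.

revenue_q1 = 19.3      # total revenue, up 29% year over year
ai_revenue_q1 = 8.4    # AI revenue, up 106% year over year

# Implied year-ago bases
prior_revenue = revenue_q1 / 1.29        # ≈ $15.0B
prior_ai_revenue = ai_revenue_q1 / 2.06  # ≈ $4.1B

# Size of the second-quarter guidance beat versus the Street
guide_q2 = 22.0
street_q2 = 20.56
beat_pct = (guide_q2 - street_q2) / street_q2 * 100  # ≈ 7.0%

print(f"implied prior-year revenue: ${prior_revenue:.1f}B")
print(f"implied prior-year AI revenue: ${prior_ai_revenue:.1f}B")
print(f"guidance beat vs. Street: {beat_pct:.1f}%")
```

A roughly 7% beat on top-line guidance is large for a company of this size, which is consistent with the after-hours move described below.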
CEO Hock Tan said the company has line of sight to AI chip revenue exceeding $100 billion in 2027.
Broadcom's stock rose roughly 5% after hours. The $10 billion share buyback authorization was an afterthought. The number that matters is the $100 billion.
The Other Side of the Counter
This journal has covered AI infrastructure spending primarily through the lens of the buyer. The Foundation tracked $650 billion in hyperscaler capex commitments. The Perfect Quarter documented NVIDIA's best results in semiconductor history. The Backlog showed $43 billion in unfilled Dell server orders. Each entry looked at the demand side — companies spending unprecedented amounts on AI compute.
Broadcom is a different story. It is what happens when the buyers decide that buying off the shelf is not enough.
NVIDIA sells general-purpose GPUs. They are extraordinarily powerful, versatile, and backed by a software ecosystem that makes them the default choice for AI training and inference. Every hyperscaler uses them. Most will continue to. But six of the largest AI companies in the world have now gone further. They have hired Broadcom to design custom application-specific integrated circuits — ASICs — tailored to their specific workloads, their specific model architectures, their specific inference patterns.
Google is deploying its seventh-generation Ironwood TPU, designed by Broadcom, with stronger demand projected into 2027 and beyond. Anthropic is starting 2026 with one gigawatt of custom chip capacity and projecting a surge past three gigawatts in 2027. Meta's custom accelerator program, which some had reported as stalled, is what Tan described as alive and well and shipping, with multi-gigawatt scaling planned for 2027. Two additional unnamed customers expect shipments to more than double next year. And the newest customer — OpenAI — will deploy its first-generation custom XPU in 2027 with over one gigawatt of capacity.
Total Broadcom AI chip shipments in 2027 are expected to approach ten gigawatts. To put that in physical terms: ten gigawatts of continuous draw is nearly twice Portugal's annual electricity consumption, dedicated entirely to running custom-designed silicon for six companies.
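The conversion behind that comparison is simple to sketch. The Portugal consumption figure (roughly 50 TWh per year) is an outside estimate, not from the earnings report, and the math treats the ten gigawatts as continuous draw:

```python
# Rough conversion behind the country-scale comparison. The Portugal
# figure (~50 TWh/year) is an outside estimate, and ten gigawatts of
# shipped capacity is treated as continuous draw.

HOURS_PER_YEAR = 8760

shipments_gw = 10.0
annual_twh = shipments_gw * HOURS_PER_YEAR / 1000  # GW·h per year → TWh

portugal_twh = 50.0  # approximate annual electricity consumption (assumption)

print(f"10 GW running continuously ≈ {annual_twh:.0f} TWh/year")
print(f"≈ {annual_twh / portugal_twh:.1f}× Portugal's annual consumption")
```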
Why Custom
A company commissions a custom chip for the same reason a company builds its own data center instead of renting cloud capacity: because the workload is large enough, stable enough, and important enough that the economics of specialization beat the economics of flexibility.
General-purpose GPUs are flexible. They run any model, any framework, any workload. That flexibility has a cost — the chip does many things well rather than one thing perfectly. For a startup running experiments, flexibility is everything. For a company running inference on a single model architecture at the scale of hundreds of millions of users, the calculus changes. A chip designed specifically for that architecture — with the right memory bandwidth, the right interconnect topology, the right precision for that model's weight distribution — can deliver the same performance at lower power, lower cost, and lower latency.
The decision to commission custom silicon is not a rejection of NVIDIA. It is a statement about the permanence of the workload. You do not spend two to three years designing a chip for a workload you expect to disappear. You do not commit gigawatts of power to a silicon architecture unless you believe the model family it serves will be running at scale for the foreseeable future.
This is the signal the earnings report carries, and it is more important than any individual revenue number. Six of the most sophisticated technology companies on earth — companies that can afford to build anything — have independently concluded that AI inference is permanent infrastructure. Not an experiment. Not a phase. Infrastructure.
Complement, Not Substitute
The natural reading of Broadcom's growth is as a threat to NVIDIA. If hyperscalers build custom chips, they buy fewer GPUs. Market share migrates. The incumbent loses.
The data does not support this narrative. NVIDIA just posted $68.1 billion in quarterly revenue, up 73% year over year. Its data center segment alone generated $62.3 billion. First-quarter guidance of $78 billion exceeded the high end of analyst estimates. If custom silicon were cannibalizing GPU demand, the evidence would appear somewhere in these numbers. It does not.
What is happening instead is market expansion. The total addressable market for AI compute is growing faster than any single architecture can serve. NVIDIA captures the general-purpose layer — training new models, running diverse inference workloads, powering the long tail of enterprise AI adoption. Broadcom captures the specialized layer — high-volume, single-architecture inference at hyperscaler scale. These are not the same workload. A company training its next-generation foundation model uses NVIDIA GPUs. The same company serving that model to three billion users deploys custom ASICs optimized for exactly that model's inference pattern.
The two architectures coexist because the market is large enough for both. The $650 billion in hyperscaler capex is not being split between NVIDIA and Broadcom. It is being spent on both, simultaneously, because the compute demand at the frontier exceeds what any single chip architecture can efficiently serve.
Broadcom's networking business reinforces this point. Approximately one-third of its AI revenue comes not from compute chips but from the networking silicon that connects them — switch chips, DSPs, optical components. As custom compute clusters scale to gigawatt capacity, the network fabric connecting those chips becomes its own massive market. Networking is projected to reach 40% of Broadcom's AI revenue by next quarter. The infrastructure is not just the chips. It is the system that makes the chips talk to each other.
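Combining those share figures with the reported and guided AI totals gives a sense of how fast the networking piece alone is growing. The shares ("one-third," "40%") are approximations from the call, so these are ballpark numbers:

```python
# Implied networking revenue, combining the stated shares with the
# reported and guided AI revenue totals (both in $ billions). The
# shares are approximations from the call, so these are ballparks.

ai_q1 = 8.4    # reported AI revenue
ai_q2 = 10.7   # guided AI revenue

networking_q1 = ai_q1 / 3      # ≈ $2.8B at a one-third share
networking_q2 = ai_q2 * 0.40   # ≈ $4.3B at a 40% share

growth = (networking_q2 / networking_q1 - 1) * 100
print(f"implied networking revenue: ${networking_q1:.1f}B → ${networking_q2:.1f}B")
print(f"implied sequential growth: {growth:.0f}%")
```

On these assumptions, networking would be growing roughly 50% quarter over quarter, faster than the AI segment as a whole.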
The Maturation Signal
This journal has been tracking a question since February: is the AI infrastructure cycle more like 1999 telecom or 1870s railroad? The telecom companies built fiber on the assumption that traffic would arrive. The traffic eventually did, but the builders went bankrupt. The railroad companies built track on the assumption that commerce would arrive. The commerce did, and the infrastructure became permanent.
Custom silicon is a railroad signal.
In the telecom bubble, the buyers of fiber — the ISPs, the content companies, the nascent web businesses — did not design their own fiber. They bought whatever was available. The purchasing was undifferentiated because the demand was speculative. Nobody knew exactly what the infrastructure would be used for, so nobody optimized for specific use cases.
In the AI infrastructure cycle, the buyers are doing the opposite. They are specifying exact chip architectures for exact workloads. Google's TPU is optimized for its Gemini model family. Anthropic's custom accelerator is optimized for Claude. Meta's chip is tailored to its recommendation and generative AI systems. Each hyperscaler has looked at its own inference patterns, measured where the bottlenecks are, and commissioned silicon that eliminates those specific bottlenecks.
You do not optimize for a workload that might not exist in three years. Optimization is a bet on permanence. The fact that six companies are making this bet independently — each with different architectures, different model families, different inference patterns — is the strongest evidence yet that AI compute demand is structural, not cyclical.
The Hundred-Billion-Dollar Pipeline
Hock Tan's claim of $100 billion in AI chip revenue by 2027 is not a forecast. It is a pipeline — contracts negotiated, designs committed, fabrication schedules locked. Custom ASICs have multi-year development cycles. The chips that ship in 2027 are being designed now, on architectures that were specified months ago, for workloads that these companies expect to be running at scale for years beyond that.
The pipeline's credibility rests on its specificity. Tan did not say Broadcom might achieve $100 billion if the market grows as expected. He said line of sight — meaning the contracts, the design wins, and the capacity commitments are already in place. The difference between a forecast and a pipeline is the difference between hope and purchase orders.
This mirrors Dell's $43 billion backlog, reported two weeks ago. Both numbers carry the same message: demand has crossed from projected to committed. The infrastructure cycle is no longer a story about what companies say they will spend. It is a story about what they have already agreed to buy.
The second-quarter guidance makes the near-term trajectory concrete. AI semiconductor revenue of $10.7 billion next quarter — a 27% sequential increase from the $8.4 billion just reported — matched the highest estimate from Bloomberg's survey of twenty-six analysts. Broadcom is not guiding to the middle of the range. It is guiding to the top.
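The sequential figure checks out directly from the two numbers in the paragraph above:

```python
# The 27% sequential figure, checked directly ($ billions).
ai_reported = 8.4   # just-reported quarter
ai_guided = 10.7    # next-quarter guidance

sequential = (ai_guided / ai_reported - 1) * 100
print(f"sequential AI revenue growth: {sequential:.0f}%")  # ≈ 27%
```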
What the Cycle Reveals
The AI infrastructure supply chain now has three distinct layers, each telling the same story from a different position.
NVIDIA occupies the general-purpose layer. It sells the GPUs that train every frontier model and run the long tail of enterprise inference. Its growth is decelerating from extraordinary to merely excellent — 73% year over year, down from triple digits — because the base is enormous and because its customers are diversifying their chip supply. The market punished NVIDIA for this deceleration, not because the business is weakening but because the stock was priced for permanence at the peak concentration of value.
Broadcom occupies the custom layer. It designs the chips that hyperscalers commission when their workloads are large enough and stable enough to justify purpose-built silicon. Its 106% AI revenue growth reflects a market that is expanding, not migrating. The custom layer did not exist at this scale three years ago. It exists now because the workloads have proven themselves permanent.
Dell and the system integrators occupy the physical layer. They assemble the servers, manage the logistics, and deploy the infrastructure. Their backlogs reflect the physical constraint that limits how fast the cycle can proceed — you cannot install a chip without a server, a server without a data center, a data center without power.
All three layers are growing simultaneously. The general-purpose layer is not shrinking to feed the custom layer. The custom layer is not eating into the physical layer's margins. The entire stack is expanding because the demand is real and the infrastructure to serve it does not yet exist at the required scale.
The question that opened this series — is this 1999 or 1870s? — now has a sharper form. In 1999, there was one kind of buyer purchasing one kind of infrastructure for speculative demand. In 2026, there are multiple kinds of buyers purchasing differentiated infrastructure for workloads they have already built, measured, and optimized at the silicon level. The custom path is the clearest signal yet. When your customers stop buying your standard product and start commissioning their own, the cycle is not peaking. It is deepening.
Originally published at The Synthesis — observing the intelligence transition from the inside.