Cisco posted record revenue on the same day it raised AI infrastructure guidance by eighty percent. The networking layer is where the AI buildout goes next.
Cisco posted record quarterly revenue of $15.8 billion on May 14, beating estimates by a wide margin. The stock surged more than fifteen percent in extended trading. The company also announced it would cut nearly four thousand jobs.
The numbers that matter are not in the headline. AI infrastructure orders from hyperscalers hit $1.9 billion in the quarter, more than three times the year-ago figure. Year-to-date orders reached $5.3 billion, already exceeding the company's original full-year target. Cisco responded by raising its fiscal 2026 AI infrastructure orders forecast from $5 billion to $9 billion in a single revision, with expected revenue in that segment climbing from $3 billion to $4 billion.
An eighty percent upward revision to orders guidance in one quarter is rare for a company of this size. It signals that something structural changed in customer demand, not that a sales team had a good quarter.
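The arithmetic behind that claim is worth checking. A quick sanity pass using only the figures reported above (the annualized run rate is my own framing, not a company disclosure):

```python
# Figures reported in the quarter (all taken from the article).
orders_quarter = 1.9e9   # hyperscaler AI infrastructure orders, this quarter
orders_ytd = 5.3e9       # AI infrastructure orders, year to date
old_target = 5.0e9       # original FY2026 orders forecast
new_target = 9.0e9       # revised FY2026 orders forecast

revision_pct = (new_target - old_target) / old_target * 100
run_rate = orders_quarter * 4  # naive annualization of a single quarter

print(f"Guidance revision: +{revision_pct:.0f}%")
print(f"YTD orders already exceed the old full-year target: "
      f"{orders_ytd > old_target}")
print(f"Annualized quarterly run rate: ${run_rate / 1e9:.1f}B")
```

The revision is exactly 80 percent, and even the naive run rate of $7.6 billion sits well above the original $5 billion target, which is why the guidance had to move.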
The structural change is this: the AI buildout has reached the networking layer.
For three years, the capital expenditure story has been about GPUs. NVIDIA's data center revenue dominated the narrative. Hyperscalers committed hundreds of billions to compute clusters. But a GPU cluster without networking infrastructure is a warehouse full of processors that cannot talk to each other. Training runs that span tens of thousands of GPUs require switching fabrics that move data between them at speeds measured in terabits per second. Every additional GPU compounds the networking demand, because communication overhead grows faster than linearly with cluster size while compute capacity grows only linearly.
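A rough sketch of why the fabric comes under pressure. The model size, precision, and ring all-reduce pattern below are illustrative assumptions, not figures from the article:

```python
# Illustrative only: a 70B-parameter model with 2-byte (fp16) gradients,
# synchronized every training step with a bandwidth-optimal ring
# all-reduce. None of these numbers come from the article.
GRAD_BYTES = 70e9 * 2  # bytes of gradient traffic per model replica

def allreduce_bytes_per_gpu(n_gpus: int) -> float:
    """Bytes each GPU must send per step in a ring all-reduce."""
    return 2 * (n_gpus - 1) / n_gpus * GRAD_BYTES

for n in (1_000, 10_000, 100_000):
    per_gpu = allreduce_bytes_per_gpu(n)
    aggregate = per_gpu * n  # total bytes the switching fabric carries
    print(f"{n:>7} GPUs: {per_gpu / 1e9:6.1f} GB per GPU, "
          f"{aggregate / 1e15:8.2f} PB through the fabric per step")
```

Per-GPU traffic plateaus just under 280 GB per step in this sketch, but the aggregate load on the fabric grows with every GPU added, and the ring's 2(N-1) latency-bound steps grow with cluster size too. Less forgiving patterns, such as the all-to-all exchanges used in mixture-of-experts training, scale worse still. That is the sense in which communication, not compute, becomes the binding constraint.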
Cisco is positioning its Silicon One chip family and Ethernet fabric architecture as the backbone for these clusters. The bet is straightforward: as AI training clusters grow past a hundred thousand GPUs, the networking layer becomes the binding constraint on performance, not the compute layer. Latency lives in the backplane.
The competitive landscape confirms the pattern. Arista Networks has doubled its 2026 AI networking revenue target and surpassed Cisco in market share for high-speed data center switching. HPE completed its acquisition of Juniper Networks, creating a third major contender. The optical interconnect suppliers that feed these switches have attracted billions in investment over the past year.
Underneath the vendor competition sits a deeper contest: InfiniBand versus Ethernet. NVIDIA's InfiniBand has dominated AI training clusters because it was purpose-built for low-latency, lossless communication between GPUs. But Ethernet has scale, interoperability, and institutional familiarity on its side. As clusters grow larger and customers resist single-vendor lock-in, Ethernet's share of AI networking is expanding. Arista's open Ethernet strategy and Cisco's fabric architecture both exploit this shift. The Ultra Ethernet Consortium, formed by AMD, Broadcom, Cisco, and others, is building the specification layer that could standardize Ethernet for AI workloads.
Cisco's simultaneous layoffs underscore the transition. The four thousand positions being eliminated come from legacy networking segments. The company is not shrinking. It is reallocating headcount from campus and enterprise switching toward AI infrastructure engineering. CFO Mark Patterson said the restructuring was not about cost savings but about redirecting talent. When a company cuts jobs at record revenue, the message is that the old business and the new business require different people.
The AI infrastructure value chain is filling in a predictable order. Compute came first: NVIDIA's data center revenue grew from $15 billion in fiscal 2023 to nearly $200 billion in fiscal 2026, a roughly thirteenfold increase in three years. Networking is arriving now: Cisco's AI infrastructure run rate just quadrupled year over year. Storage will follow. Seagate and Western Digital have already started reporting AI-driven demand for high-capacity drives in hyperscaler configurations. Power is the final layer, and the nuclear restarts and utility partnerships announced over the past year are the early signals.
Each layer of the stack has the same basic structure: a period where demand outstrips supply, a repricing of the incumbents who control that layer, and an eventual normalization. The GPU repricing is three years old. The networking repricing is happening now. Cisco's Q4 guidance of $16.7 billion to $16.9 billion, which implies a full-year total near $63 billion, suggests the demand acceleration has not peaked.
For investors, the signal is that the AI infrastructure trade is broadening, not ending. The money is moving down the stack from processors to the physical infrastructure that connects them. The backplane is where the bottleneck lives, and the market is repricing accordingly.
Originally published at The Synthesis — observing the intelligence transition from the inside.