<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: FirstPassLab</title>
    <description>The latest articles on DEV Community by FirstPassLab (@firstpasslab).</description>
    <link>https://dev.to/firstpasslab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3828651%2F5ba8d2a9-7408-4b02-b3d0-016c7ce377fb.png</url>
      <title>DEV Community: FirstPassLab</title>
      <link>https://dev.to/firstpasslab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/firstpasslab"/>
    <language>en</language>
    <item>
      <title>Inside NVIDIA’s $2B Marvell Deal: What NVLink Fusion Means for AI Ethernet Fabrics</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Thu, 16 Apr 2026 22:54:39 +0000</pubDate>
      <link>https://dev.to/firstpasslab/inside-nvidias-2b-marvell-deal-what-nvlink-fusion-means-for-ai-ethernet-fabrics-2nek</link>
      <guid>https://dev.to/firstpasslab/inside-nvidias-2b-marvell-deal-what-nvlink-fusion-means-for-ai-ethernet-fabrics-2nek</guid>
      <description>&lt;p&gt;If you work on AI infrastructure, the interesting part of NVIDIA's $2B Marvell deal is not the check size. It's that NVIDIA is trying to keep custom XPUs, optics, DPUs, and Ethernet fabrics inside one system model. That changes how network teams should think about scale-up versus scale-out design, RoCE behavior, and optical planning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; The company that defines the interconnect and operating assumptions of the cluster can stay in control even when customers start mixing in custom silicon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo3hswsyfliz3cof6tyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo3hswsyfliz3cof6tyy.png" alt="NVLink Fusion overview" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is not just a chip deal. It is a fabric-control deal, and it tells data center architects that optical interconnects, lossless Ethernet, and rack-scale integration are now the real battleground in AI infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What exactly did NVIDIA and Marvell announce?
&lt;/h2&gt;

&lt;p&gt;NVIDIA and Marvell announced a strategic partnership on March 31, 2026, that combines a $2 billion NVIDIA investment with a broader technical agreement around NVLink Fusion, custom XPUs, scale-up networking, and silicon photonics. According to NVIDIA (2026), Marvell will contribute custom XPUs and NVLink Fusion-compatible scale-up networking, while NVIDIA will provide Vera CPUs, ConnectX NICs, BlueField DPUs, NVLink interconnect, Spectrum-X switches, and rack-scale AI compute. According to Data Center Dynamics (2026), the two companies also plan to work on advanced optical interconnects and silicon photonics, plus AI-RAN use cases for 5G and 6G. The important point for network engineers is that the announcement spans the full AI factory stack, from compute attachment to optics, rather than being a narrow financial investment.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Marvell contributes&lt;/th&gt;
&lt;th&gt;NVIDIA contributes&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Custom compute&lt;/td&gt;
&lt;td&gt;Custom XPUs&lt;/td&gt;
&lt;td&gt;Rack-scale AI ecosystem&lt;/td&gt;
&lt;td&gt;Lets customers build semi-custom systems without abandoning NVIDIA infrastructure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network attachment&lt;/td&gt;
&lt;td&gt;Scale-up networking compatible with NVLink Fusion&lt;/td&gt;
&lt;td&gt;ConnectX NICs, BlueField DPUs&lt;/td&gt;
&lt;td&gt;Keeps traffic engineering and service insertion inside one architecture&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fabric&lt;/td&gt;
&lt;td&gt;High-speed connectivity expertise&lt;/td&gt;
&lt;td&gt;Spectrum-X switches, NVLink interconnect&lt;/td&gt;
&lt;td&gt;Aligns scale-out Ethernet and scale-up GPU domains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optics&lt;/td&gt;
&lt;td&gt;Optical DSP and silicon photonics&lt;/td&gt;
&lt;td&gt;AI factory platform demand&lt;/td&gt;
&lt;td&gt;Pushes optical design into the center of AI network planning&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The strategic language matters too. Jensen Huang said, according to NVIDIA (2026), that “the inference inflection has arrived” and token demand is surging. Matt Murphy said, according to NVIDIA (2026), that high-speed connectivity, optical interconnect, and accelerated infrastructure are now central to scaling AI. Those are networking statements as much as semiconductor statements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does NVLink Fusion matter more than the $2 billion headline?
&lt;/h2&gt;

&lt;p&gt;NVLink Fusion matters more than the dollar figure because it gives NVIDIA a way to stay essential even when hyperscalers and large enterprises want custom silicon instead of buying every accelerator directly from NVIDIA. According to Reuters (2026), the deal helps NVIDIA remain central while some customers increasingly choose custom processors. According to NVIDIA (2026), NVLink Fusion is a rack-scale platform for semi-custom AI infrastructure, so Marvell can bring its own XPUs and networking into a design that still depends on NVIDIA CPUs, NICs, DPUs, switches, interconnects, and supply chain scale. That is the real control point. The company that owns the fabric, interconnect, and integration standards can keep monetizing the cluster even when the accelerator mix changes.&lt;/p&gt;

&lt;p&gt;This is why the deal is so important for AI networking. A hyperscaler can change the compute element more easily than it can rewrite a whole rack-scale fabric model. If NVIDIA can make custom chips coexist with ConnectX, BlueField, Spectrum-X, and NVLink Fusion, it keeps control over congestion behavior, telemetry, service insertion, and cluster design assumptions. That is much harder for rivals to displace than a single GPU SKU.&lt;/p&gt;

&lt;h2&gt;
  
  
  How will this change AI data center fabric design?
&lt;/h2&gt;

&lt;p&gt;AI data center fabric design will shift further toward tightly coupled scale-up and scale-out domains, where compute, optics, DPUs, and Ethernet behavior are engineered as one system rather than separate procurement lines. According to Data Center Dynamics (2026), Marvell’s role includes NVLink Fusion-compatible scale-up networking, while NVIDIA contributes ConnectX NICs, BlueField DPUs, and Spectrum-X switches. That means architects should expect more fabrics where custom XPUs still depend on familiar Ethernet-adjacent building blocks, especially for east-west AI traffic, rack-scale cluster composition, and storage access. For data center network engineers, the lesson is straightforward: future AI fabrics will be judged by how well they handle congestion, latency spread, and optical scaling, not only by how many 800G or 1.6T ports they advertise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvd4pcia19npd43c6ks4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvd4pcia19npd43c6ks4.png" alt="NVIDIA’s $2B Marvell Bet: What NVLink Fusion Means for AI Data Center Networks Technical Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A practical way to think about it is to split the fabric into two jobs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Design domain&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;What architects must watch&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scale-up&lt;/td&gt;
&lt;td&gt;Connects accelerators and high-speed local domains inside the rack or pod&lt;/td&gt;
&lt;td&gt;Latency determinism, oversubscription, optical reach, memory and accelerator locality&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scale-out&lt;/td&gt;
&lt;td&gt;Connects racks, pods, storage, and service planes over Ethernet&lt;/td&gt;
&lt;td&gt;RoCEv2 behavior, ECN marking, PFC blast radius, telemetry, DPU policy insertion&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is where protocol behavior starts to matter more than marketing. RoCEv2 traffic still rewards low loss and controlled congestion. BlueField DPUs still matter for offload and infrastructure services. ConnectX remains important at the host edge. Engineers who already follow our &lt;a href="https://firstpasslab.com/blog/2026-03-15-nvidia-spectrum-x-ethernet-ai-fabric-deep-dive/" rel="noopener noreferrer"&gt;NVIDIA Spectrum-X Ethernet AI fabric deep dive&lt;/a&gt; and our &lt;a href="https://firstpasslab.com/blog/2026-03-19-nvidia-networking-division-multibillion-dollar-data-center-network-engineer-guide/" rel="noopener noreferrer"&gt;NVIDIA networking division analysis&lt;/a&gt; will recognize the pattern: Ethernet is becoming the operating fabric of AI, but it only works when the fabric is tuned like a system, not purchased like a switch refresh.&lt;/p&gt;
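
&lt;p&gt;To ground that, here is a minimal sketch of the lossless-Ethernet plumbing RoCEv2 fabrics lean on, written in Cisco NX-OS syntax purely as an illustration. The class names, DSCP value, and WRED thresholds are placeholder assumptions, not tuning guidance, and Spectrum-X platforms expose their own equivalents.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;! Illustrative RoCEv2 QoS sketch in NX-OS syntax. Names, the DSCP value,
! and thresholds are placeholders, not recommended settings.

! Classify RDMA traffic into its own QoS group
class-map type qos match-all ROCE-CLASS
  match dscp 26
policy-map type qos ROCE-MARKING
  class ROCE-CLASS
    set qos-group 3

! Make that queue lossless with PFC and jumbo MTU
policy-map type network-qos ROCE-NQ
  class type network-qos c-8q-nq3
    pause pfc-cos 3
    mtu 9216

! Let ECN signal congestion before PFC has to pause the link
policy-map type queuing ROCE-QUEUING
  class type queuing c-out-8q-q3
    bandwidth remaining percent 60
    random-detect minimum-threshold 150 kbytes maximum-threshold 3000 kbytes drop-probability 7 weight 0 ecn

! Apply system wide, then enable PFC on fabric-facing ports
system qos
  service-policy type network-qos ROCE-NQ
  service-policy type queuing output ROCE-QUEUING
interface Ethernet1/1
  priority-flow-control mode auto
  service-policy type qos input ROCE-MARKING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The design intent matters more than the exact numbers: ECN absorbs steady-state congestion, while PFC stays a last-resort safety valve with a deliberately small blast radius.&lt;/p&gt;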

&lt;h2&gt;
  
  
  Why are optical interconnects and silicon photonics now strategic, not optional?
&lt;/h2&gt;

&lt;p&gt;Optical interconnects and silicon photonics are strategic because AI clusters are now colliding with physical limits in power, distance, heat, and signal integrity, and the easiest way to unlock the next scaling step is often in the optical layer. According to NVIDIA (2026), the Marvell partnership explicitly includes collaboration on silicon photonics. According to Data Center Dynamics (2026), the companies will work on advanced optical interconnect solutions for AI infrastructure and AI-RAN use cases. Marvell is not just a chip vendor in this story; it is a specialist in high-performance analog, optical DSP, and photonics that can reduce the friction between dense compute blocks and the links that join them. In other words, this deal is about moving more cleanly from fast chips to usable systems.&lt;/p&gt;

&lt;p&gt;That matters for data center teams because the networking bottleneck in AI is increasingly optical, thermal, and topological. If you followed our coverage of &lt;a href="https://firstpasslab.com/blog/2026-03-09-stmicro-silicon-photonics-pic100-ai-data-center-network-engineer/" rel="noopener noreferrer"&gt;STMicro’s PIC100 silicon photonics push&lt;/a&gt; and &lt;a href="https://firstpasslab.com/blog/2026-03-22-microsoft-mosaic-microled-data-center-networking-power-ccie-guide/" rel="noopener noreferrer"&gt;Microsoft’s MOSAIC optical work&lt;/a&gt;, you have already seen the pattern. The industry is spending enormous effort on reducing interconnect power per bit and improving reach without wrecking latency. According to Reuters (2026), bandwidth and power efficiency are key bottlenecks in scaling AI data center systems, which is exactly why a photonics-heavy partner like Marvell is valuable to NVIDIA.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does this mean for data center architects?
&lt;/h2&gt;

&lt;p&gt;For data center architects, this deal is a warning that AI infrastructure competency now starts at the fabric and optical layers, not just at server onboarding or EVPN control-plane design. According to Reuters (2026), Big Tech firms including Alphabet and Meta are expected to spend at least $630 billion on AI infrastructure this year. When that much capital hits the market, customers stop asking only for “data center networking” and start asking for deterministic AI fabric behavior, DPU-ready designs, optical roadmaps, and clean migration paths between classical IP fabrics and GPU-heavy east-west clusters. According to Marvell (2026), the company delivered $8.195 billion in fiscal 2026 revenue, up 42% year over year, which tells you demand for these components is already translating into production spending, not lab curiosity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwdq4f22eh7cxmhp8szs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwdq4f22eh7cxmhp8szs.png" alt="NVIDIA’s $2B Marvell Bet: What NVLink Fusion Means for AI Data Center Networks Industry Impact" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For network teams, the practical implications are concrete:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Study lossless Ethernet behavior.&lt;/strong&gt; AI fabrics depend on RoCEv2, ECN, buffer behavior, and carefully bounded PFC domains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat optics as a design input, not a transceiver afterthought.&lt;/strong&gt; Reach, insertion loss, thermals, and power are now first-order constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expect more heterogeneous racks.&lt;/strong&gt; Custom XPUs, DPUs, smart NICs, and accelerator-specific traffic patterns will coexist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use telemetry aggressively.&lt;/strong&gt; Hotspot detection, queue visibility, and microburst analysis matter more in AI fabrics than in traditional enterprise server pods; a starting point is sketched after this list.&lt;/li&gt;
&lt;/ol&gt;
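
&lt;p&gt;For the telemetry item above, a reasonable starting point on Cisco NX-OS-based fabrics is the trio of commands below; the interface is a placeholder for an AI-fabric-facing port, and other vendors expose similar counters.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;! Per-queue depth, drops, and ECN-marked packets
show queuing interface ethernet 1/1

! PFC pause frames sent and received per interface
show interface priority-flow-control

! Confirm which queuing policy and thresholds are actually in force
show policy-map interface ethernet 1/1 type queuing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;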

&lt;p&gt;If you want adjacent context, the related pieces on Equinix Distributed AI Hubs and NVIDIA GTC networking are worth reading alongside this one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does this deal change the competitive balance with Broadcom, custom ASICs, and hyperscalers?
&lt;/h2&gt;

&lt;p&gt;Yes, because the deal gives NVIDIA a better answer to the two biggest threats to its dominance: custom silicon and independent fabric strategies. According to Reuters (2026), customers are increasingly considering custom processors instead of buying only NVIDIA’s high-priced parts. According to Tom’s Hardware (2026), Marvell is one of the leading custom ASIC design houses, with deep relationships across hyperscalers that want alternatives to standard GPU buying patterns. By investing in Marvell rather than treating custom silicon as a pure threat, NVIDIA gains a way to keep those designs attached to its networking and interconnect ecosystem. That changes the balance of power from “who builds the chip” to “who defines the system boundary.”&lt;/p&gt;

&lt;p&gt;Broadcom is still formidable in custom silicon and switching, and hyperscalers will keep pursuing in-house accelerators. But NVIDIA’s move is clever because it turns coexistence into a product strategy. If the custom XPU still plugs into NVIDIA-friendly rack-scale assumptions, NVIDIA keeps influence over architecture, tooling, and supply chain decisions. That is why this story fits alongside our earlier reporting on &lt;a href="https://firstpasslab.com/blog/2026-03-10-meta-135-billion-nvidia-spectrum-x-ai-networking/" rel="noopener noreferrer"&gt;Meta’s Spectrum-X buildout&lt;/a&gt; and the broader question of whether AI infrastructure will standardize around Ethernet-heavy, multi-vendor fabrics or fragment into isolated proprietary islands.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should network engineers watch next?
&lt;/h2&gt;

&lt;p&gt;Network engineers should watch three follow-on signals: whether more custom silicon vendors join NVLink Fusion, whether optical roadmaps accelerate from 800G to denser rack-scale topologies, and whether AI-RAN work pulls telecom and data center design closer together. According to NVIDIA (2026), the Marvell partnership also covers AI-RAN for 5G and 6G, which means this is not only a cloud data center story. According to Reuters (2026), Marvell expects revenue to approach $15 billion by fiscal 2028, nearly 40% annual growth, which suggests the market believes this infrastructure shift still has room to run. The engineers who benefit most will be the ones who can connect Ethernet behavior, optics, DPU services, and accelerator locality into one operational model.&lt;/p&gt;

&lt;p&gt;In practical terms, that means reading semiconductor announcements like network architecture documents. When a partnership names ConnectX, BlueField, Spectrum-X, silicon photonics, and custom XPUs in the same paragraph, it is telling you where the next infrastructure bottlenecks and engineering priorities are forming. VXLAN EVPN design, queue engineering, optical planning, and deep telemetry are becoming more valuable, not less. AI did not make networks simpler. It made them central again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why did NVIDIA invest $2 billion in Marvell?
&lt;/h3&gt;

&lt;p&gt;Because NVIDIA wants customers building custom AI systems to keep using NVIDIA’s surrounding infrastructure stack. According to NVIDIA (2026), the deal ties Marvell’s custom XPUs and networking to NVLink Fusion, ConnectX, BlueField, Spectrum-X, and Vera, which preserves NVIDIA’s influence even in heterogeneous systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is NVLink Fusion in practical terms?
&lt;/h3&gt;

&lt;p&gt;NVLink Fusion is NVIDIA’s rack-scale framework for semi-custom AI infrastructure. It allows customers to combine non-NVIDIA compute elements with NVIDIA interconnect, NIC, DPU, switch, and system integration components instead of choosing an entirely separate architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why does this matter for data center networking teams?
&lt;/h3&gt;

&lt;p&gt;Because AI clusters increasingly hit network and optical limits before they hit compute limits. According to Reuters (2026), bandwidth and power efficiency are key bottlenecks, so fabric design, congestion control, and interconnect choice directly affect AI system economics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does this help data center network engineers?
&lt;/h3&gt;

&lt;p&gt;Yes. It increases demand for engineers who understand EVPN fabrics, AI Ethernet behavior, telemetry, optics, and DPU-aware operations. The more heterogeneous AI racks become, the more valuable strong data center networking architecture skills become.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to version was adapted from the original FirstPassLab article with AI assistance for editing, formatting, and Dev.to-specific presentation. The technical claims and canonical source remain the original article on FirstPassLab.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>datacenter</category>
      <category>ai</category>
      <category>hardware</category>
    </item>
    <item>
      <title>Stop Extending the Perimeter: Why Managed SASE and Universal ZTNA Are Replacing VPNs</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:54:04 +0000</pubDate>
      <link>https://dev.to/firstpasslab/stop-extending-the-perimeter-why-managed-sase-and-universal-ztna-are-replacing-vpns-3cfl</link>
      <guid>https://dev.to/firstpasslab/stop-extending-the-perimeter-why-managed-sase-and-universal-ztna-are-replacing-vpns-3cfl</guid>
      <description>&lt;p&gt;Most VPN refresh projects make the same mistake: they keep network-level trust and just put a nicer portal in front of it.&lt;/p&gt;

&lt;p&gt;In 2026, the real architectural shift is from network access to resource access. Managed SASE and universal ZTNA move the decision point closer to identity, device posture, and the specific app being requested. For network security teams, that changes how you design policy, how you troubleshoot access, and how you phase migration off legacy VPN.&lt;/p&gt;

&lt;p&gt;Here is the practical model: what broke the perimeter approach, what a modern managed SASE architecture actually looks like, and how to migrate without rebuilding flat trust in a new dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Managed SASE and universal ZTNA are winning because perimeter VPN risk keeps getting worse. ThreatLabz reported VPN CVEs grew 82.5% from 2020 to 2025.&lt;/li&gt;
&lt;li&gt;The modern target state is one policy plane across users, devices, apps, and branches, not separate VPN, NAC, SWG, and firewall silos.&lt;/li&gt;
&lt;li&gt;Cisco's current universal ZTNA workflow already reflects that shift with Secure Access, trusted networks, private resources, policy rules, and Secure Client enrollment.&lt;/li&gt;
&lt;li&gt;The best migration path is phased: start with remote private-app access, validate identity and device posture continuously, then collapse branch, contractor, and unmanaged-device access into the same control plane.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What changed in 2026 that finally broke the perimeter model?
&lt;/h2&gt;

&lt;p&gt;The perimeter model broke because enterprise traffic, user identity, and application placement stopped lining up with a single trusted network edge years ago, and 2026 is when operations teams finally stopped pretending otherwise. According to NIST SP 800-207 (2020), zero trust exists because remote users, BYOD, and cloud-based assets are no longer inside an enterprise-owned boundary. According to CISA (2026), zero trust improves visibility and enables more precise, least-privilege access decisions. According to Zscaler ThreatLabz (2025), VPN CVEs grew 82.5% from 2020 to 2025, and roughly 60% of the vulnerabilities reported in the past year were high or critical. That is the real forcing function. The question is no longer whether VPN still works technically. The question is whether broad tunnel-based trust is still defensible when users, apps, and attackers all operate everywhere.&lt;/p&gt;

&lt;p&gt;Vendor messaging has also shifted in a way senior engineers should notice. Cisco now describes a model with a single policy engine across users, devices, and applications, not a bolt-on remote-access product. HPE Aruba's 2026 SASE trend analysis says universal ZTNA is becoming a first-class SASE pillar rather than a niche feature. That convergence matters because it tells you the market is settling on the same architectural answer: app-level, identity-aware access is replacing network-level trust as the design center.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does a managed SASE and universal ZTNA architecture look like in 2026?
&lt;/h2&gt;

&lt;p&gt;A 2026 managed SASE architecture uses a cloud-managed control plane to evaluate identity, device posture, context, and policy before exposing a specific application, network, or subnet, instead of dropping a user into a broad trusted overlay. According to NIST (2020), the core logic is still policy engine, policy administrator, and policy enforcement point. According to Cisco's current Universal ZTNA workflow (2026), the operational sequence is practical: onboard Secure Access and Firewall Management, define trusted networks, publish private resources, attach policy rules, associate them with enforcement devices, and enroll users with Secure Client 5.1.10 or later. That is exactly what a mature CCIE Security team should expect: a control plane that decides, an enforcement plane that brokers access, and a data plane that carries only the approved session.&lt;/p&gt;

&lt;p&gt;The easiest way to visualize the stack is to separate control, enforcement, and transport.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;What the engineer actually cares about&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Identity and posture&lt;/td&gt;
&lt;td&gt;Validates user, device, certificate, and context&lt;/td&gt;
&lt;td&gt;IdP integration, MFA, posture signals, trusted networks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Policy engine&lt;/td&gt;
&lt;td&gt;Decides allow, deny, or restrict&lt;/td&gt;
&lt;td&gt;Least privilege, user groups, contractor policy, SaaS versus private-app rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforcement points&lt;/td&gt;
&lt;td&gt;Publishes or brokers access to specific resources&lt;/td&gt;
&lt;td&gt;Secure Access connectors, FTD placement, branch edges, SWG/FWaaS path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data plane&lt;/td&gt;
&lt;td&gt;Carries approved app traffic&lt;/td&gt;
&lt;td&gt;Latency, packet loss, QUIC/MASQUE behavior, path selection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visibility and telemetry&lt;/td&gt;
&lt;td&gt;Feeds continuous evaluation&lt;/td&gt;
&lt;td&gt;Logs, risk scoring, incident response, troubleshooting&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In Cisco's published workflow, the details are already concrete enough to be useful in design reviews. You define one or more trusted networks, supply the CA certificate used to validate ZTNA users, configure the FTD device with its FQDN, inside and outside interfaces, and PKCS12 certificate, then create private resources and map access policy rules to them. Cisco also notes that deployment reboots the device to reallocate resources for universal ZTNA components, which is the kind of operational gotcha that matters more than a marketing diagram. If you are planning a brownfield migration, that reboot window belongs on your change plan.&lt;/p&gt;
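
&lt;p&gt;One concrete preparation step in that workflow is the PKCS12 bundle for the FTD enforcement point. A minimal openssl sketch, using placeholder file names from a hypothetical PKI, looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Bundle the FTD identity certificate, key, and CA chain into PKCS12.
# All file names are placeholders for your own PKI artifacts.
openssl pkcs12 -export \
  -in ftd-identity-cert.pem \
  -inkey ftd-identity-key.pem \
  -certfile ca-chain.pem \
  -out ftd-ztna.p12

# Inspect the bundle before uploading it to the enforcement point
openssl pkcs12 -info -in ftd-ztna.p12 -noout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;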

&lt;h2&gt;
  
  
  Why are perimeter VPN and flat remote-access models losing?
&lt;/h2&gt;

&lt;p&gt;Perimeter VPN designs are losing because they solve reachability first and trust later, while zero-trust designs solve trust first and reachability only for the exact resource that passed policy. According to Zscaler ThreatLabz (2025), VPN vulnerabilities rose sharply over the last five years and the majority of newly reported issues in the last year were high or critical severity. According to IBM and Ponemon (2025), the global average cost of a data breach is $4.4 million. When the blast radius of a bad access decision is still measured in subnets instead of applications, the economics are no longer acceptable. The traditional VPN mental model of authenticating once, landing on the network, and trusting downstream controls to clean things up is now the expensive model.&lt;/p&gt;

&lt;p&gt;The comparison looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Design question&lt;/th&gt;
&lt;th&gt;Legacy VPN answer&lt;/th&gt;
&lt;th&gt;Managed SASE + universal ZTNA answer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;What gets exposed?&lt;/td&gt;
&lt;td&gt;A network segment or broad tunnel&lt;/td&gt;
&lt;td&gt;A named app, subnet, or resource object&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;What is the trust anchor?&lt;/td&gt;
&lt;td&gt;Network location after login&lt;/td&gt;
&lt;td&gt;Identity, posture, and context on every request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;How is contractor access handled?&lt;/td&gt;
&lt;td&gt;Separate VPN profile or jump host&lt;/td&gt;
&lt;td&gt;Same policy engine with narrower resource scope&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;How is IoT handled?&lt;/td&gt;
&lt;td&gt;Usually outside the design or behind ACLs&lt;/td&gt;
&lt;td&gt;Profiled, segmented, and controlled through policy and enforcement&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;How is lateral movement limited?&lt;/td&gt;
&lt;td&gt;Mostly by firewalling after connection&lt;/td&gt;
&lt;td&gt;By never granting broad network adjacency in the first place&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;How is troubleshooting done?&lt;/td&gt;
&lt;td&gt;VPN concentrator, ACLs, and route tracing&lt;/td&gt;
&lt;td&gt;Policy logs, resource maps, path telemetry, and user-to-app traces&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is also why older remote-access debates such as &lt;a href="https://firstpasslab.com/blog/2026-03-25-flexvpn-vs-dmvpn-ccie-security-vpn-framework-guide/" rel="noopener noreferrer"&gt;FlexVPN versus DMVPN&lt;/a&gt; are still useful but no longer sufficient. Those designs matter for overlays and transport, but the security control point has moved. The hard part in 2026 is not building another tunnel. The hard part is deciding which user on which device should be allowed to reach exactly which resource, under which conditions, and for how long.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you migrate from firewall-centric remote access to managed SASE in six steps?
&lt;/h2&gt;

&lt;p&gt;The best migration path is phased, resource-driven, and brutally honest about legacy exceptions. According to NIST (2020), most enterprises will operate in a hybrid zero-trust and perimeter-based mode for an extended period. Cisco's current guide (2026) shows the same reality operationally, with trusted networks, enforcement devices, private resources, and client enrollment all arriving in stages. The mistake is trying to replace every remote-access workflow at once. Start with the users and applications that gain the most from app-level access and the least from broad network adjacency. That usually means contractors, private web apps, admin portals, and high-risk third-party access before branch-wide transport consolidation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inventory&lt;/strong&gt; private applications, legacy services, user groups, and unmanaged devices that still depend on broad VPN access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate&lt;/strong&gt; your identity provider, MFA, certificates, and device-posture signals so the policy engine has real inputs instead of static ACL assumptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define&lt;/strong&gt; trusted networks and private resources as applications, networks, or subnets, not generic tunnel destinations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the enforcement points, which in Cisco's model means Secure Access plus Firewall Management and universal ZTNA-enabled FTD devices with correct inside/outside interface roles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write&lt;/strong&gt; least-privilege policy per resource, including separate treatment for employees, contractors, privileged admins, and unmanaged or IoT-like devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retire&lt;/strong&gt; legacy VPN access incrementally, keeping only the narrow use cases that still require full tunnel semantics or legacy protocol support.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A practical migration checklist for CCIE Security teams should include policy objects, certificate lifecycle, DNS dependencies, private-resource naming standards, logging destinations, and change windows for enforcement-node reboot behavior. It should also include rollback logic. If policy-driven remote access becomes your new primary control plane, your rollback plan matters just as much as your allow rules.&lt;/p&gt;
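
&lt;p&gt;Two of those checklist items, DNS dependencies and certificate lifecycle, can be sanity-checked from any admin host before cutover. The hostname below is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Confirm the private resource resolves where the connector expects.
# app.internal.example.com is a placeholder hostname.
dig +short app.internal.example.com

# Check certificate validity dates on the published resource
openssl s_client -connect app.internal.example.com:443 \
  -servername app.internal.example.com &lt;/dev/null 2&gt;/dev/null \
  | openssl x509 -noout -dates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;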

&lt;h2&gt;
  
  
  What is the industry impact of managed SASE and universal ZTNA?
&lt;/h2&gt;

&lt;p&gt;The industry impact is that remote access, branch security, contractor access, and unmanaged edge controls are being folded into one operating model, which changes both buying behavior and team structure. According to HPE Aruba's 2026 SASE trends analysis, universal ZTNA is becoming foundational rather than optional, and IoT coverage is one of the main reasons. According to Cisco's Zero Trust Access positioning (2026), the target is one policy engine across users, devices, and applications, with least-privilege enforcement for both AI and IoT use cases. According to IBM and Ponemon (2025), breach costs still average $4.4 million globally, which is why boards now care about access architecture in a way they did not when VPN was treated as a simple network service.&lt;/p&gt;

&lt;p&gt;For operations teams, the biggest effect is organizational. Network, security, endpoint, and identity teams can no longer design in separate lanes. Your SASE rollout will fail if the network team thinks only about path quality, the security team thinks only about blocking, and the identity team treats posture as somebody else's problem. That is also why related developments such as &lt;a href="https://firstpasslab.com/blog/2026-03-31-fcc-bans-foreign-routers-enterprise-zero-trust-remote-edge-security/" rel="noopener noreferrer"&gt;enterprise zero-trust remote-edge hardening after the FCC router ban&lt;/a&gt;, &lt;a href="https://firstpasslab.com/blog/2026-04-14-forescout-identity-segmentation/" rel="noopener noreferrer"&gt;identity segmentation in mixed-vendor estates&lt;/a&gt;, and &lt;a href="https://firstpasslab.com/blog/2026-03-24-sase-spending-97-billion-2030-gpu-powered-security-network-engineer-guide/" rel="noopener noreferrer"&gt;SASE market spending growth through 2030&lt;/a&gt; are part of the same story. The perimeter is not disappearing because one product category won. It is disappearing because identity, connectivity, and enforcement are becoming inseparable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should CCIE Security engineers focus on next?
&lt;/h2&gt;

&lt;p&gt;CCIE Security engineers should focus less on memorizing product boundaries and more on mastering policy boundaries, because policy is now the architecture. NIST's model still matters because PE, PA, and PEP remain the cleanest mental framework for zero-trust design. Cisco's current workflow matters because it shows how those abstractions are turning into day-two operations with trusted networks, client enrollment, and private-resource publishing. ThreatLabz data matters because it explains why the old design center, broad VPN trust, has become harder to defend over time. The engineers who become indispensable in 2026 will be the ones who can map identity signals, segmentation strategy, branch connectivity, firewall insertion, and user experience into one coherent design.&lt;/p&gt;

&lt;p&gt;That means three skill upgrades. First, get sharper at identity-driven policy, including certificates, posture, group mapping, and exception handling. Second, get better at publishing and troubleshooting private resources rather than just routing to them. Third, get comfortable with hybrid designs where some legacy VPN functions remain while zero-trust access expands. If you can explain when to keep a tunnel, when to broker an app, when to segment an IoT class, and when to move enforcement closer to the resource, you are already thinking like the engineers who will lead the next generation of production zero-trust deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is managed SASE replacing VPN in 2026?
&lt;/h3&gt;

&lt;p&gt;Yes, for most user-to-app access it is. According to Zscaler ThreatLabz (2025), VPN vulnerability pressure is still increasing, so new projects increasingly prefer identity-based access to specific resources over broad network tunnels.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the difference between ZTNA and universal ZTNA?
&lt;/h3&gt;

&lt;p&gt;Universal ZTNA extends the same zero-trust logic beyond a narrow remote-user use case. In practice, it means applying one policy model to employees, contractors, branch users, unmanaged devices, private apps, SaaS, and in many cases IoT-connected resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can universal ZTNA work with existing firewalls and on-prem apps?
&lt;/h3&gt;

&lt;p&gt;Yes. Cisco's current Universal ZTNA workflow (2026) explicitly includes Firewall Threat Defense devices, inside and outside interfaces, private-resource objects, and policy synchronization between Secure Access and the enforcement point.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the biggest migration risk?
&lt;/h3&gt;

&lt;p&gt;The biggest risk is carrying old perimeter assumptions into the new platform. If you publish large subnets, keep broad user groups, and skip posture or resource-level policy, you can recreate flat trust with nicer dashboards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does this matter for CCIE Security careers?
&lt;/h3&gt;

&lt;p&gt;Absolutely. Managed SASE and universal ZTNA combine identity, segmentation, remote access, SaaS security, and policy automation, which makes them one of the clearest growth areas for senior CCIE Security engineers in 2026.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: this article was adapted from the original FirstPassLab post with AI assistance for Dev.to formatting and syndication. The original source remains the canonical version: &lt;a href="https://firstpasslab.com/blog/2026-04-16-managed-sase-universal-ztna-2026/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-04-16-managed-sase-universal-ztna-2026/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>security</category>
      <category>zerotrust</category>
      <category>devops</category>
    </item>
    <item>
      <title>Qualcomm's Wi-Fi 8 Push Means It's Time to Audit 6 GHz, mGig, and Roaming</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Wed, 15 Apr 2026 22:53:01 +0000</pubDate>
      <link>https://dev.to/firstpasslab/qualcomms-wi-fi-8-push-means-its-time-to-audit-6-ghz-mgig-and-roaming-6f7</link>
      <guid>https://dev.to/firstpasslab/qualcomms-wi-fi-8-push-means-its-time-to-audit-6-ghz-mgig-and-roaming-6f7</guid>
      <description>&lt;p&gt;Qualcomm just turned Wi-Fi 8 into a 2026 design discussion instead of a standards-watch item for later. The headline specs are flashy, but the part enterprise engineers should actually care about is this: 802.11bn is being built around ugly real-world problems like roaming failures, tail latency, overlapping cells, and uplink pressure at the wireless edge.&lt;/p&gt;

&lt;p&gt;If you run campus wireless, this is the moment to audit 6 GHz coverage, mGig and 10G uplinks, PoE headroom, and the places where your current WLAN already falls apart under mobility or density. The real Wi-Fi 8 story is reliability engineering, not just a bigger PHY number.&lt;/p&gt;

&lt;p&gt;Qualcomm's Dragonwing launch matters because it pulls Wi-Fi 8 into the 2026 enterprise design conversation instead of leaving it as a 2028 standards footnote. According to Qualcomm (2026), FastConnect 8800 reaches 11.6 Gbps peak PHY with a 4x4 client radio, while Dragonwing NPro A8 Elite targets up to 33 Gbps capacity and 1,500 clients in infrastructure gear. For enterprise wireless teams, the real story is not raw peak rate, but earlier visibility into how 802.11bn will change roaming, interference handling, edge coverage, and uplink design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaway:&lt;/strong&gt; Treat Qualcomm's Wi-Fi 8 portfolio as an architecture signal, not a mandatory refresh trigger. Enterprise engineers should keep deploying &lt;a href="https://firstpasslab.com/blog/2026-03-22-wi-fi-7-enterprise-wlan-revenue-40-percent-market-share-network-engineer-guide/" rel="noopener noreferrer"&gt;Wi-Fi 7 where it solves current capacity problems&lt;/a&gt;, but start planning for multi-AP coordination, better roaming behavior, and heavier mGig and 10G access-layer demands now.&lt;/p&gt;

&lt;p&gt;This announcement also fits a broader pattern. As &lt;a href="https://firstpasslab.com/blog/2026-03-21-enterprise-network-spending-2026-ccie-budget-guide/" rel="noopener noreferrer"&gt;enterprise network spending shifts toward wireless, AI, and edge infrastructure&lt;/a&gt;, vendors are no longer selling WLAN as a standalone AP problem. Qualcomm is packaging client silicon, AP silicon, fiber gateway silicon, and FWA silicon into one story. That is a useful clue for engineers and architects building the next three-year campus roadmap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Did Qualcomm Launch at MWC 2026?
&lt;/h2&gt;

&lt;p&gt;Qualcomm launched one mobile Wi-Fi 8 client platform and five Dragonwing infrastructure platforms, which is why this announcement matters more than a single chipset release. According to &lt;a href="https://www.qualcomm.com/news/releases/2026/03/qualcomm-debuts-ai-native-wifi-8-portfolio-unifying-client-and-n" rel="noopener noreferrer"&gt;Qualcomm's March 2026 press release&lt;/a&gt;, FastConnect 8800 is the first mobile connectivity system with 4x4 Wi-Fi, 10+ Gbps peak speeds, up to three times the gigabit range of the prior generation, and support for Wi-Fi 8, Bluetooth HDT, UWB, and Thread on one chip. On the infrastructure side, Dragonwing stretches from premium enterprise APs to mainstream mesh, 10G fiber gateways, and fixed wireless access.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Target role&lt;/th&gt;
&lt;th&gt;Key claim&lt;/th&gt;
&lt;th&gt;Why enterprise engineers care&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FastConnect 8800&lt;/td&gt;
&lt;td&gt;Mobile clients&lt;/td&gt;
&lt;td&gt;11.6 Gbps peak PHY, 4x4 Wi-Fi, 3x range&lt;/td&gt;
&lt;td&gt;Client behavior will change before campus infrastructure fully refreshes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dragonwing NPro A8 Elite&lt;/td&gt;
&lt;td&gt;Enterprise APs, premium routers&lt;/td&gt;
&lt;td&gt;Up to 33 Gbps, 1,500 clients, 40% higher throughput&lt;/td&gt;
&lt;td&gt;Signals what premium WLAN silicon will expect from uplinks and controllers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dragonwing FiberPro A8 Elite&lt;/td&gt;
&lt;td&gt;10G PON gateways&lt;/td&gt;
&lt;td&gt;Wi-Fi 8 plus 10G fiber&lt;/td&gt;
&lt;td&gt;Important for multi-dwelling and branch edge designs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dragonwing FWA Gen 5 Elite&lt;/td&gt;
&lt;td&gt;5G FWA edge&lt;/td&gt;
&lt;td&gt;X85 modem plus Wi-Fi 8&lt;/td&gt;
&lt;td&gt;Relevant for remote sites and backup WAN designs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dragonwing N8 and F8&lt;/td&gt;
&lt;td&gt;Mainstream routers and mesh&lt;/td&gt;
&lt;td&gt;Wi-Fi 8 at volume tiers&lt;/td&gt;
&lt;td&gt;Shows the standard is being positioned for broad rollout, not just premium pilots&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;According to Wi-Fi NOW Global (2026), the NPro A8 Elite is positioned for high-performance enterprise APs and premium routers, with a 5x5 radio architecture, up to 33 Gbps peak capacity, and support for 1,500 clients. That is not a normal early-standard message. It says Qualcomm believes operators and enterprise OEMs want Wi-Fi 8 hardware design wins now, even while 802.11bn is still being finalized.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fqualcomm-wifi-8-dragonwing-10gbps-enterprise-wireless%2Finfographic-tech.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fqualcomm-wifi-8-dragonwing-10gbps-enterprise-wireless%2Finfographic-tech.png" alt="Qualcomm Wi-Fi 8 Dragonwing Platforms Technical Architecture" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is Wi-Fi 8 a Bigger Enterprise Story Than Another Speed Headline?
&lt;/h2&gt;

&lt;p&gt;Wi-Fi 8 matters because 802.11bn is designed to improve ugly, real-world wireless conditions instead of just winning a lab benchmark. According to Qualcomm's Wi-Fi 8 standards overview (2025) and the &lt;a href="https://standards.ieee.org/ieee/802.11bn/11393/" rel="noopener noreferrer"&gt;IEEE 802.11bn scope document&lt;/a&gt;, the standard targets at least 25% higher throughput in challenging signal conditions, 25% lower latency at the 95th percentile, and 25% fewer dropped packets, especially while roaming between access points. That makes Wi-Fi 8 more relevant to hospitals, warehouses, dense office floors, and stadiums than to marketing decks about one client hitting a perfect PHY rate next to an AP.&lt;/p&gt;

&lt;p&gt;Samsung Research's 802.11bn technical review adds the detail enterprise architects actually need. Enhanced Long Range improves edge performance with repetition and a dedicated preamble design, while Distributed Resource Units can deliver up to 11 dB uplink power gain under some conditions by spreading tones across wider bandwidth. On the MAC side, P-EDCA, Low-Latency Indication, Non-Primary Channel Access, and Dynamic Sub-band Operation all attack the tail-latency problem that existing Wi-Fi 6E and Wi-Fi 7 networks still struggle with when cells overlap or real-time traffic competes with bulk transfers.&lt;/p&gt;

&lt;p&gt;That is the key change in mindset. Wi-Fi 7 pushed features like MLO and wider channels, which were already important in &lt;a href="https://firstpasslab.com/blog/2026-03-22-wi-fi-7-enterprise-wlan-revenue-40-percent-market-share-network-engineer-guide/" rel="noopener noreferrer"&gt;today's high-density enterprise WLAN market&lt;/a&gt;. Wi-Fi 8 is trying to make the network more deterministic under contention. Qualcomm's own standards language around seamless roaming, multi-AP coordination, and edge performance lines up with the problems enterprise teams file tickets about today: sticky clients, voice jitter during handoff, bad behavior at cell edges, and ugly contention on crowded floors.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do Dragonwing and FastConnect Change Enterprise WLAN Design?
&lt;/h2&gt;

&lt;p&gt;Dragonwing changes enterprise WLAN design by pushing more complexity to the edge at the same time that client devices become more capable. According to Qualcomm (2026), NPro A8 Elite combines Wi-Fi 8 radios, high-performance compute, an integrated Hexagon NPU, and system-level coordination features. In practice, that means tomorrow's access points are being designed less like simple radio heads and more like edge compute nodes. For campus architects, that raises immediate questions about switch uplinks, PoE budgets, telemetry pipelines, and how much control logic will move closer to the AP.&lt;/p&gt;

&lt;p&gt;The first implication is uplink pressure. If premium Wi-Fi 8 infrastructure is being marketed around 33 Gbps aggregate capacity, your access-layer design cannot stop at a mental model built around 1G AP uplinks. Even before Wi-Fi 8 arrives, many teams are already discovering that &lt;a href="https://firstpasslab.com/blog/2026-03-30-matsing-lens-antenna-wifi-6e-high-density-wlan-enterprise-rf-design/" rel="noopener noreferrer"&gt;modern high-density wireless designs&lt;/a&gt; and AI-heavy collaboration spaces create ugly oversubscription at the edge. Wi-Fi 8 will make those weak spots more obvious, not less.&lt;/p&gt;

&lt;p&gt;The second implication is roaming architecture. Qualcomm and Samsung both emphasize collaborative AP behavior, especially multi-AP coordination and seamless mobility domains. That should catch the attention of anyone running voice over Wi-Fi, industrial handhelds, or medical carts. If vendors execute well, Wi-Fi 8 could finally reduce the gap between the clean controller diagrams in design docs and the messy handoff behavior users experience on live floors.&lt;/p&gt;

&lt;p&gt;The third implication is client asymmetry. FastConnect 8800 gives clients new capabilities first, which means campus engineers may see more advanced behavior at the edge of the network before the whole infrastructure is refreshed. That happened with Wi-Fi 6E and Wi-Fi 7 as well. Engineers who learned to read that transition correctly did better than teams who waited for a full rip-and-replace.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Should Enterprise Wireless Engineers Lab Right Now?
&lt;/h2&gt;

&lt;p&gt;Enterprise teams should lab the constraints Wi-Fi 8 is trying to solve, not just the features it promises to add. The smartest move in 2026 is to baseline where your current Wi-Fi 7 and 6E environments already fail: roaming, latency tails, cell-edge performance, and uplink saturation. If you cannot measure those problems now, you will not know whether any Wi-Fi 8 platform is actually helping you later.&lt;/p&gt;

&lt;p&gt;Start with infrastructure readiness checks on your controller and access layer. In Cisco environments, useful first commands include:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;show ap dot11 6ghz summary
show wireless client summary
show interface status | include 2.5G|5G|10G
show power inline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those commands will not make a Catalyst 9800 suddenly support Wi-Fi 8, but they do expose the current realities that matter. Do you already have 6 GHz coverage where it counts? Which AP-facing switchports are still capped below mGig? Where are you running hot on PoE? Which client populations are consuming the most airtime in dense zones? That baseline is far more valuable than waiting for a vendor webinar.&lt;/p&gt;

&lt;p&gt;Next, isolate your worst mobility zone and test it hard. Warehouses, hospitals, and large campus collaboration floors are ideal because Wi-Fi 8's promise is strongest where roaming and latency consistency already hurt. According to Qualcomm (2025), seamless mobility and multi-AP coordination are core 802.11bn themes. If your current network already handles those cases well, you have time. If not, Wi-Fi 8 belongs on your short list for pilot evaluation.&lt;/p&gt;
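
&lt;p&gt;On a Catalyst 9800, per-client mobility history is the fastest way to baseline that zone before and after any pilot. The MAC address below is a placeholder for a real test client:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;! Association, RF, and policy state for one test client
show wireless client mac-address aaaa.bbbb.cccc detail

! Timestamped AP-to-AP roam history for the same client
show wireless client mac-address aaaa.bbbb.cccc mobility history
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;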

&lt;p&gt;Finally, connect this work to business priorities. Qualcomm's Wi-Fi 8 story is tied closely to AI endpoints and always-on edge workloads, which matches the broader &lt;a href="https://firstpasslab.com/blog/2026-03-05-cisco-ai-infrastructure-boom-ccie-enterprise-value/" rel="noopener noreferrer"&gt;AI infrastructure shift now hitting enterprise networking budgets&lt;/a&gt;. If your business is adding real-time video analytics, robotics, AR workflows, or dense collaboration spaces, Wi-Fi 8 is more than a wireless roadmap item. It becomes part of application reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Should Enterprises Plan Wi-Fi 8, and When Should They Wait?
&lt;/h2&gt;

&lt;p&gt;Most enterprises should plan for Wi-Fi 8 now, pilot it in 2027, and deploy it selectively before considering broad refreshes. The right early targets are not normal office floors with healthy Wi-Fi 7, but the places where reliability matters more than peak throughput: high-density venues, mobility-heavy operations, latency-sensitive workflows, and remote sites where FWA plus WLAN convergence matters. Qualcomm's own release says commercial products are expected in late 2026, while Samsung Research points to March 2028 publication for the standard, so there is enough runway for serious evaluation but not enough reason for a blind platform jump.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;Best move in 2026&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Healthy office Wi-Fi 7 campus&lt;/td&gt;
&lt;td&gt;Wait&lt;/td&gt;
&lt;td&gt;Little reason to replace stable 6E or 7 deployments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stadium, airport, hospital, warehouse&lt;/td&gt;
&lt;td&gt;Pilot&lt;/td&gt;
&lt;td&gt;Roaming, density, and interference are exactly where 802.11bn aims to help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New branch with 10G fiber or FWA edge&lt;/td&gt;
&lt;td&gt;Evaluate early&lt;/td&gt;
&lt;td&gt;Dragonwing FiberPro and FWA platforms align well with greenfield edge builds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Budget-constrained refresh&lt;/td&gt;
&lt;td&gt;Keep Wi-Fi 7&lt;/td&gt;
&lt;td&gt;Standards maturity and client mix still favor Wi-Fi 7 today&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CCIE lab and architecture teams&lt;/td&gt;
&lt;td&gt;Study now&lt;/td&gt;
&lt;td&gt;Skills around 6 GHz, mGig, roaming, and deterministic WLAN behavior are compounding&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is also where competitor content still has a gap. A lot of launch coverage stopped at "11.6 Gbps" or "AI-native". The more useful interpretation is that Qualcomm is betting enterprise buyers want reliability features commercialized before the standard process fully ends. That is the same signal serious architects should pay attention to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fqualcomm-wifi-8-dragonwing-10gbps-enterprise-wireless%2Finfographic-impact.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fqualcomm-wifi-8-dragonwing-10gbps-enterprise-wireless%2Finfographic-impact.png" alt="Qualcomm Wi-Fi 8 Dragonwing Platforms Industry Impact" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What did Qualcomm announce for Wi-Fi 8 in 2026?
&lt;/h3&gt;

&lt;p&gt;Qualcomm announced the FastConnect 8800 mobile connectivity platform and five Dragonwing infrastructure platforms at MWC 2026. According to Qualcomm (2026), the lineup spans client devices, enterprise access points, premium routers, 10G fiber gateways, fixed wireless access, and mainstream mesh tiers. That breadth is why the launch matters more than a single premium chipset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Wi-Fi 8 mainly about higher speed than Wi-Fi 7?
&lt;/h3&gt;

&lt;p&gt;No. According to Qualcomm (2025) and Samsung Research (2025), Wi-Fi 8 is fundamentally about ultra-high reliability. The standard targets at least 25% better throughput in difficult signal conditions, 25% lower latency at the 95th percentile, and 25% fewer dropped packets while roaming. Speed still matters, but reliability is the design center.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the most important Wi-Fi 8 feature for campus networks?
&lt;/h3&gt;

&lt;p&gt;Multi-AP coordination is the most important long-term feature for enterprise campuses because it addresses overlapping cells, interference, and inconsistent performance under load. Qualcomm explicitly frames this as a fix for dense campuses and public venues where independent AP behavior creates latency spikes and bad user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Should enterprises skip Wi-Fi 7 and wait for Wi-Fi 8?
&lt;/h3&gt;

&lt;p&gt;Usually no. If Wi-Fi 7 solves a real problem today, deploy it. According to Qualcomm, Wi-Fi 8 commercial products are expected in late 2026, but the standard itself continues maturing toward 2028. That means most enterprises should use Wi-Fi 7 for near-term capacity upgrades and reserve Wi-Fi 8 for pilots and targeted high-value zones first.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should CCIE Enterprise candidates study first for Wi-Fi 8?
&lt;/h3&gt;

&lt;p&gt;Study 6 GHz design, mGig and 10G uplinks, PoE headroom, mobility behavior, and high-density RF trade-offs first. Those are the areas where Wi-Fi 8 extends existing enterprise wireless design, not replaces it. If you already understand &lt;a href="https://firstpasslab.com/blog/2026-03-22-wi-fi-7-enterprise-wlan-revenue-40-percent-market-share-network-engineer-guide/" rel="noopener noreferrer"&gt;Wi-Fi 7 adoption patterns&lt;/a&gt; and &lt;a href="https://firstpasslab.com/blog/2026-03-30-matsing-lens-antenna-wifi-6e-high-density-wlan-enterprise-rf-design/" rel="noopener noreferrer"&gt;modern high-density venue design&lt;/a&gt;, you are building the right foundation.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to version was adapted with AI assistance from the original FirstPassLab article for community syndication. The technical analysis, claims, and source links were preserved from the canonical post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>wifi</category>
      <category>tutorial</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Agentic AI in NetOps: What to Automate First, What to Keep Human-Approved</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Wed, 15 Apr 2026 16:55:18 +0000</pubDate>
      <link>https://dev.to/firstpasslab/agentic-ai-in-netops-what-to-automate-first-what-to-keep-human-approved-145c</link>
      <guid>https://dev.to/firstpasslab/agentic-ai-in-netops-what-to-automate-first-what-to-keep-human-approved-145c</guid>
      <description>&lt;p&gt;Network teams are about to hear the same pitch from every vendor: let AI run NetOps. The useful version of that idea is real, but it is much narrower and much more engineering-heavy than the marketing suggests.&lt;/p&gt;

&lt;p&gt;The practical question is not whether an LLM can push configs. It is which operational tasks have enough telemetry, deterministic tooling, and rollback safety to let an AI agent help without creating a bigger outage.&lt;/p&gt;

&lt;p&gt;In this post, I break down where agentic AI is already credible in network operations, where it still needs human approval, and what infrastructure teams should build before they trust autonomous workflows in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjke5hgm2noqc9pjwf86u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjke5hgm2noqc9pjwf86u.png" alt="Agentic AI in NetOps infographic" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick take:&lt;/strong&gt; Start with evidence gathering, correlation, validation, and approval-gated remediation. Do not start by giving an AI agent raw SSH access to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does agentic AI in network operations actually mean?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff851xdhhps6h56f89gy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff851xdhhps6h56f89gy.png" alt="Rise of Agentic AI in NetOps Technical Architecture" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agentic AI in network operations means the system is working toward an operational goal, not just reacting to a prompt or correlating alarms. According to Selector (2026), classic event intelligence and AIOps help reduce noise and speed root cause analysis, but agentic NetOps adds goals, context, reasoning, and action. According to Cisco (2026), its AgenticOps model combines live telemetry, domain-specific reasoning such as the Deep Network Model, and deterministic execution through governed workflows. In plain English, that means the platform can watch the network, test multiple hypotheses, collect evidence from tools, recommend a fix, and sometimes execute that fix inside policy boundaries. That is very different from an LLM that only summarizes syslog messages.&lt;/p&gt;
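
&lt;p&gt;To make that loop concrete, here is a minimal sketch of the observe, reason, act pattern with a policy gate. Every function is a stub standing in for the telemetry, reasoning, and governance layers a real platform would provide; nothing here is a vendor API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative agentic loop: evidence first, policy boundary before action.
# All functions are stubs standing in for real telemetry, reasoning, and
# governed execution layers. Nothing here is a vendor API.

def gather_evidence(alert):
    return {"bgp_state": "Idle", "interface_errors": 0}  # stubbed data pull

def propose_fix(alert, evidence):
    return {"action": "clear bgp session", "reversible": True}

def within_policy(plan):
    return plan["reversible"]  # toy guardrail

def run_agent(alert):
    evidence = gather_evidence(alert)    # step 1: deterministic evidence
    plan = propose_fix(alert, evidence)  # step 2: hypothesis and plan
    if within_policy(plan):              # step 3: policy boundary check
        return "execute under guardrails: " + plan["action"]
    return "escalate to a human with evidence and proposed plan"

print(run_agent({"type": "bgp_session_down", "device": "edge-01"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;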

&lt;p&gt;The cleanest way to understand the shift is this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Traditional AIOps&lt;/th&gt;
&lt;th&gt;Agentic NetOps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary job&lt;/td&gt;
&lt;td&gt;Correlate events and reduce alert noise&lt;/td&gt;
&lt;td&gt;Pursue operational outcomes such as faster recovery or safer change validation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context&lt;/td&gt;
&lt;td&gt;Mostly events, logs, and anomalies&lt;/td&gt;
&lt;td&gt;Telemetry, topology, policy, dependencies, history, and tool access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasoning&lt;/td&gt;
&lt;td&gt;Pattern detection and recommendations&lt;/td&gt;
&lt;td&gt;Multi-step planning, hypothesis testing, and tool orchestration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Action&lt;/td&gt;
&lt;td&gt;Human executes runbook&lt;/td&gt;
&lt;td&gt;AI can execute approved workflows under guardrails&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Risk model&lt;/td&gt;
&lt;td&gt;Low automation risk, lower operational leverage&lt;/td&gt;
&lt;td&gt;Higher leverage, so governance and auditability are mandatory&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That last line matters most. Engineers on Reddit are saying the quiet part out loud. In r/networking, one practitioner wrote that many vendor AIOps products still feel like "turning on NetFlow and going 'that's an anomaly' every day a 1000 times a day," while another argued that networking remains too "snowflakey to automate at scale" without better context. Those complaints are not anti-AI. They are anti-hype. Agentic NetOps only works when it has enough environment-specific knowledge to act safely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is 2026 the inflection point for autonomous NetOps?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvl8kgrabxhtu0ctd8hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvl8kgrabxhtu0ctd8hb.png" alt="Rise of Agentic AI in NetOps Industry Impact" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2026 looks like the inflection point because three things are finally arriving at the same time: better reasoning models, richer cross-domain telemetry, and controller-level execution layers that can be audited. According to Gartner via PagerDuty (2025), enterprise deployment of agentic AI for infrastructure operations is expected to rise from less than 5% in 2025 to 70% by 2029. According to Cisco (2026), agentic troubleshooting now spans campus, branch, industrial, data center, and service provider workflows, while its AI Assistant and Agentic Workflows are already running at production scale. According to Microsoft Community Hub (2025), Azure Networking used NOA agents to cut time-to-detect for fiber incidents by 60% and improve repair times by 25%.&lt;/p&gt;

&lt;p&gt;That combination changes the industry conversation. For years, the promise was "self-healing networks," but the reality was dashboards, ticket routing, and endless false positives. Now the vendors with the strongest execution story are not leading with magic. They are leading with specific use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cisco says autonomous troubleshooting can cut MTTR to minutes in campus, branch, and industrial networks.&lt;/li&gt;
&lt;li&gt;Cisco says continuous optimization can tune RF, QoS, path selection, and control planes from live conditions.&lt;/li&gt;
&lt;li&gt;Cisco says trusted validation can evaluate blast radius and policy impact before a change is executed.&lt;/li&gt;
&lt;li&gt;Microsoft's NOA architecture focuses on specialist agents, a planner agent, and approvals for any risky action.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a much more believable roadmap than generic "AI for networking" messaging. It also explains why related markets are moving fast, from &lt;a href="https://firstpasslab.com/blog/2026-03-18-ibm-confluent-acquisition-real-time-streaming-network-engineer-guide/" rel="noopener noreferrer"&gt;real-time data streaming for operations&lt;/a&gt; to &lt;a href="https://firstpasslab.com/blog/2026-03-10-eridu-ai-networking-startup-200m-series-a-network-engineer/" rel="noopener noreferrer"&gt;AI-native networking startups&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What technical architecture makes agentic NetOps safe enough for production?
&lt;/h2&gt;

&lt;p&gt;A safe agentic network needs five layers working together: telemetry, context, reasoning, deterministic tools, and policy enforcement. According to Cisco (2026), AgenticOps starts with cross-domain telemetry from networking, security, observability, and application experience. According to Selector (2026), autonomy fails when data quality and causal context are weak. According to Techzine (2026), Cisco is exposing operations tools through controller-level MCP servers rather than letting agents improvise directly on every device. That design choice is the real story, because it creates a constrained execution plane that engineers can observe, test, and roll back.&lt;/p&gt;

&lt;p&gt;A practical reference architecture looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What it should include&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Telemetry&lt;/td&gt;
&lt;td&gt;Syslog, streaming telemetry, flow data, controller metrics, client experience data&lt;/td&gt;
&lt;td&gt;Agents cannot reason well from isolated alerts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context graph&lt;/td&gt;
&lt;td&gt;Topology, dependencies, change history, intent, maintenance windows&lt;/td&gt;
&lt;td&gt;Prevents locally correct but globally bad actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasoning layer&lt;/td&gt;
&lt;td&gt;Domain-tuned model plus general reasoning model&lt;/td&gt;
&lt;td&gt;Separates network expertise from generic language ability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool layer&lt;/td&gt;
&lt;td&gt;Approved APIs, playbooks, MCP servers, test harnesses&lt;/td&gt;
&lt;td&gt;Makes actions repeatable and auditable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Governance layer&lt;/td&gt;
&lt;td&gt;RBAC, approval gates, blast-radius limits, rollback, logging&lt;/td&gt;
&lt;td&gt;Turns autonomy into controlled operations instead of risk&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is also where protocol behavior still matters. A trustworthy agent should understand that an OSPF neighbor flap and a BGP session drop are not just two red alerts. They imply adjacency loss, route churn, possible upstream transport failure, and potential application impact. Before any remediation, the system should be able to gather deterministic evidence with commands such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;show bgp ipv4 unicast summary
show ip ospf neighbor
show interfaces counters errors
show logging | include %BGP|%OSPF|%LINEPROTO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That kind of evidence collection is exactly where agentic AI shines. The agent does the repetitive data pull at machine speed, and the engineer reviews the reasoning, the proposed fix, and the expected blast radius.&lt;/p&gt;
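
&lt;p&gt;If you want to prototype that data pull yourself, here is a minimal sketch using Netmiko as the transport library. The host and credentials are placeholders, and a production agent should reach devices through an approved, audited tool layer rather than raw SSH:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal evidence-collection sketch using Netmiko (read-only commands).
# Host and credentials are placeholders; a production agent should reach
# devices through an approved, audited tool layer instead of raw SSH.
from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",    # placeholder
    "username": "netops",  # placeholder
    "password": "secret",  # placeholder
}

COMMANDS = [
    "show bgp ipv4 unicast summary",
    "show ip ospf neighbor",
    "show interfaces counters errors",
]

def collect_evidence(device: dict, commands: list) -&gt; dict:
    """Run read-only show commands and return output keyed by command."""
    evidence = {}
    with ConnectHandler(**device) as conn:
        for cmd in commands:
            evidence[cmd] = conn.send_command(cmd)
    return evidence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;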

&lt;p&gt;For official reference material, Cisco's &lt;a href="https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m02/cisco-expands-agenticops-innovations-across-portfolio.html" rel="noopener noreferrer"&gt;AgenticOps announcement&lt;/a&gt;, Microsoft's &lt;a href="https://techcommunity.microsoft.com/blog/telecommunications-industry-blog/introducing-microsoft%E2%80%99s-network-operations-agent-%E2%80%93-a-telco-framework-for-autonom/4471185" rel="noopener noreferrer"&gt;Network Operations Agent framework&lt;/a&gt;, and the &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; are worth bookmarking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which network tasks should AI handle first, and which should stay human-approved?
&lt;/h2&gt;

&lt;p&gt;The best first tasks for agentic AI are the ones with high repetition, clear evidence, and low irreversible blast radius. According to Cisco (2026), autonomous troubleshooting, RF optimization, QoS tuning, and validation against live topology are already emerging as early production use cases. According to Cisco's networking blog (2026), its AI Packet Analyzer was trained on more than one million packet captures, which is exactly the sort of bounded expert task where machine speed beats manual toil. In contrast, major route-policy changes, security segmentation updates, and anything that can blackhole traffic across multiple domains should remain human-approved until rollback and intent validation are proven.&lt;/p&gt;

&lt;p&gt;Use this decision table when you scope your first rollout:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;Autonomy level to start with&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Alert correlation and incident summarization&lt;/td&gt;
&lt;td&gt;Full autonomy&lt;/td&gt;
&lt;td&gt;Low risk, high operator time savings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Evidence gathering from controllers, logs, and CLI&lt;/td&gt;
&lt;td&gt;Full autonomy&lt;/td&gt;
&lt;td&gt;Deterministic and easy to audit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wireless RF tuning and path optimization&lt;/td&gt;
&lt;td&gt;Partial autonomy&lt;/td&gt;
&lt;td&gt;Strong telemetry feedback loop, bounded effect&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compliance checks and pre-change validation&lt;/td&gt;
&lt;td&gt;Partial autonomy&lt;/td&gt;
&lt;td&gt;Great value, but findings should be reviewed initially&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ticket enrichment and cross-team handoffs&lt;/td&gt;
&lt;td&gt;Full autonomy&lt;/td&gt;
&lt;td&gt;High toil, low blast radius&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firewall policy changes&lt;/td&gt;
&lt;td&gt;Human approval required&lt;/td&gt;
&lt;td&gt;Security regressions are expensive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BGP route-policy or segmentation changes&lt;/td&gt;
&lt;td&gt;Human approval required&lt;/td&gt;
&lt;td&gt;Blast radius can exceed the local domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-domain remediation during an outage&lt;/td&gt;
&lt;td&gt;Human approval required&lt;/td&gt;
&lt;td&gt;Requires business context and rollback judgment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That is also where the Reddit skepticism is useful. One engineer wrote that current systems are "not very trustworthy at even triaging the most basic of issues," while another said AI is strongest as "extra eyes and ears to process predictive data 24x7." Both can be true. The right design pattern is not zero autonomy or full autonomy. It is layered autonomy, where low-risk tasks are delegated first and high-risk changes stay behind approval gates.&lt;/p&gt;
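
&lt;p&gt;One way to encode layered autonomy is a task-to-tier map that defaults to the safest tier, mirroring the decision table above. The task names and tiers below are illustrative, not a product schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy layered-autonomy router mirroring the decision table above.
# Task names and tiers are illustrative, not a product schema.

AUTONOMY_TIER = {
    "alert_correlation":        "full",
    "evidence_gathering":       "full",
    "ticket_enrichment":        "full",
    "rf_tuning":                "partial",
    "pre_change_validation":    "partial",
    "firewall_policy_change":   "human_approval",
    "bgp_route_policy_change":  "human_approval",
    "multi_domain_remediation": "human_approval",
}

def autonomy_for(task: str) -&gt; str:
    # Unknown tasks default to the safest tier, never to autonomy.
    return AUTONOMY_TIER.get(task, "human_approval")

assert autonomy_for("evidence_gathering") == "full"
assert autonomy_for("bgp_route_policy_change") == "human_approval"
assert autonomy_for("brand_new_task") == "human_approval"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;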

&lt;h2&gt;
  
  
  How should network and platform teams prepare for agentic NetOps right now?
&lt;/h2&gt;

&lt;p&gt;Network and platform teams should prepare for agentic NetOps by getting better at operational data, controller-driven workflows, and policy engineering, not by trying to become prompt engineers. According to Cisco (2026), governed execution depends on cross-domain telemetry and explainable workflows. According to Selector (2026), AI must understand before it can act. That means the winning teams in 2026 are the ones that can normalize telemetry, model intent, expose safe tools, and measure outcomes. If you already care about &lt;a href="https://firstpasslab.com/blog/2026-04-02-2026-network-outage-report-thousandeyes-internet-health-enterprise-resilience/" rel="noopener noreferrer"&gt;AI-driven reliability&lt;/a&gt;, &lt;a href="https://firstpasslab.com/blog/2026-03-07-ai-network-automation-ccie-insurance-policy/" rel="noopener noreferrer"&gt;automation career positioning&lt;/a&gt;, and &lt;a href="https://firstpasslab.com/blog/2026-03-07-devnet-expert-vs-ccie-automation-recognition-gap/" rel="noopener noreferrer"&gt;market signals around automation roles&lt;/a&gt;, this is where those threads converge.&lt;/p&gt;

&lt;p&gt;Here is the most practical rollout sequence I would recommend:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Unify telemetry before you add agents.&lt;/strong&gt; Pull in controller data, client health, flow records, security events, and change history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define business-facing experience metrics.&lt;/strong&gt; Time to connect, roaming quality, app latency, and incident restore time are better targets than raw device counters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrap execution tools at the controller layer.&lt;/strong&gt; Give agents approved APIs and MCP-exposed tools, not unconstrained CLI access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start with observation and validation.&lt;/strong&gt; Let the system investigate, summarize, and simulate before it remediates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gate risky changes with approvals and rollback.&lt;/strong&gt; No exceptions for routing policy, segmentation, or Internet edge changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure trust with hard numbers.&lt;/strong&gt; Track MTTR, false-positive rate, rollback frequency, and operator hours saved. A minimal sketch follows this list.&lt;/li&gt;
&lt;/ol&gt;
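
&lt;p&gt;As a starting point for step 6, those trust numbers can be computed from data you already have. A toy sketch with invented incident records:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy trust-metric computation; the incident records are invented and
# would normally come from ticketing and change-management systems.
from statistics import mean

incidents = [
    {"restore_min": 22, "agent_involved": True,  "rolled_back": False},
    {"restore_min": 75, "agent_involved": False, "rolled_back": False},
    {"restore_min": 18, "agent_involved": True,  "rolled_back": True},
]

def mttr(records: list) -&gt; float:
    return mean(r["restore_min"] for r in records)

def agent_rollback_rate(records: list) -&gt; float:
    agent = [r for r in records if r["agent_involved"]]
    return sum(r["rolled_back"] for r in agent) / len(agent)

print(f"MTTR: {mttr(incidents):.1f} min")
print(f"Agent rollback rate: {agent_rollback_rate(incidents):.0%}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;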

&lt;p&gt;This is also the career angle. The engineer who can build safe closed-loop workflows will be more valuable than the engineer who only pastes configs or only writes Python. Agentic NetOps rewards people who understand protocols, intent, operational risk, APIs, and governance as one system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is agentic AI in network operations?
&lt;/h3&gt;

&lt;p&gt;Agentic AI in NetOps means AI systems can observe telemetry, reason across context, and execute approved actions instead of only surfacing alerts or recommendations. The difference from a chatbot is operational intent: the agent is trying to restore service, validate a change, or prevent degradation inside policy boundaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  How is AgenticOps different from AIOps?
&lt;/h3&gt;

&lt;p&gt;AIOps usually focuses on event correlation, anomaly detection, and recommendations. AgenticOps extends that model with planning, tool use, and governed execution, so the system can investigate, validate, and sometimes remediate instead of stopping at a dashboard insight.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can agentic AI replace network engineers?
&lt;/h3&gt;

&lt;p&gt;No. According to Cisco (2026) and Microsoft Community Hub (2025), the model is human-supervised autonomy, not human removal. Engineers still define intent, approve risky changes, validate architecture, manage exceptions, and own the business consequences of operational decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should teams automate first with agentic NetOps?
&lt;/h3&gt;

&lt;p&gt;Start with evidence gathering, alert summarization, cross-domain correlation, compliance checks, and bounded optimization tasks such as RF or path tuning. Leave route-policy changes, segmentation, and multi-domain outage remediation behind approval gates until rollback and validation are proven.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to version was adapted with AI assistance from the canonical FirstPassLab article, then reviewed and edited before publication. Canonical source: &lt;a href="https://firstpasslab.com/blog/2026-04-15-rise-of-agentic-ai-when-networks-manage-themselves/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-04-15-rise-of-agentic-ai-when-networks-manage-themselves/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>automation</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>DevNet Expert Is Now CCIE Automation. What Actually Changes for Network Automation Engineers</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Tue, 14 Apr 2026 22:54:24 +0000</pubDate>
      <link>https://dev.to/firstpasslab/devnet-expert-is-now-ccie-automation-what-actually-changes-for-network-automation-engineers-1bce</link>
      <guid>https://dev.to/firstpasslab/devnet-expert-is-now-ccie-automation-what-actually-changes-for-network-automation-engineers-1bce</guid>
      <description>&lt;p&gt;Cisco changed the name, not the blueprint. Python, NETCONF/RESTCONF, YANG, CI/CD, and infrastructure-as-code are still the core skills. But the rename from &lt;strong&gt;DevNet Expert&lt;/strong&gt; to &lt;strong&gt;CCIE Automation&lt;/strong&gt; changes something very real for working engineers: recruiter filters, compensation bands, and how automation work gets classified inside enterprise hiring.&lt;/p&gt;

&lt;p&gt;If you build automation for network changes, policy deployment, or compliance workflows, this is less about branding hype and more about market visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The exam content did not fundamentally change.&lt;/li&gt;
&lt;li&gt;The market signal did.&lt;/li&gt;
&lt;li&gt;"CCIE" shows up in ATS filters, recruiter searches, and compensation frameworks in ways "DevNet Expert" often did not.&lt;/li&gt;
&lt;li&gt;The rebrand helps automation engineers get seen, but hiring-manager understanding will still lag for a while.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What actually changed?
&lt;/h2&gt;

&lt;p&gt;According to Cisco's 2026 certification update, the DevNet track was renamed across the board:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Old name&lt;/th&gt;
&lt;th&gt;New name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DevNet Associate&lt;/td&gt;
&lt;td&gt;CCNA Automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevNet Professional&lt;/td&gt;
&lt;td&gt;CCNP Automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevNet Expert&lt;/td&gt;
&lt;td&gt;CCIE Automation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The important part is what &lt;strong&gt;didn't&lt;/strong&gt; change.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The core automation skill set is still the same.&lt;/li&gt;
&lt;li&gt;The blueprint still centers on APIs, programmability, model-driven operations, and automation pipelines.&lt;/li&gt;
&lt;li&gt;The difficulty did not suddenly get easier because the badge changed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So this was not a new technical track. It was a repositioning of automation as a first-class networking specialty under the CCIE brand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the rename matters more than it looks
&lt;/h2&gt;

&lt;p&gt;For a lot of engineers, the old problem was not skill mismatch. It was discoverability.&lt;/p&gt;

&lt;p&gt;In many enterprises, job requirements, recruiter searches, and internal leveling frameworks still use blunt keyword matching. If the requirement says &lt;strong&gt;CCIE&lt;/strong&gt;, then profiles containing &lt;strong&gt;CCIE&lt;/strong&gt; float to the top. If your certification says &lt;strong&gt;DevNet Expert&lt;/strong&gt;, you may never make it into the same shortlist, even when your actual skills map well to the job.&lt;/p&gt;

&lt;p&gt;That matters because automation engineers often sit in an awkward middle zone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;too network-heavy for generic software roles&lt;/li&gt;
&lt;li&gt;too code-heavy for traditional networking hiring funnels&lt;/li&gt;
&lt;li&gt;too specialized for HR systems that classify people by old labels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rebrand does not solve all of that, but it removes one structural blocker.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical impact for network automation engineers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Better recruiter and ATS visibility
&lt;/h3&gt;

&lt;p&gt;This is the biggest near-term gain.&lt;/p&gt;

&lt;p&gt;When the credential includes &lt;strong&gt;CCIE&lt;/strong&gt;, it starts matching the language that enterprise recruiters already use. That increases the odds of showing up in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;recruiter keyword searches&lt;/li&gt;
&lt;li&gt;ATS filters for senior network roles&lt;/li&gt;
&lt;li&gt;compensation review discussions tied to certification tiers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That may sound superficial, but it affects whether you get contacted at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Stronger salary positioning
&lt;/h3&gt;

&lt;p&gt;Historically, CCIE-branded tracks have been easier to anchor in senior compensation bands. The problem for DevNet Expert holders was not necessarily lower capability, but weaker market recognition.&lt;/p&gt;

&lt;p&gt;If an employer already understands CCIE as a premium signal, then &lt;strong&gt;CCIE Automation&lt;/strong&gt; gives automation engineers a cleaner reference point during salary discussions.&lt;/p&gt;

&lt;p&gt;That does not guarantee better offers. It does make the conversation simpler.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Better alignment with how modern network teams actually work
&lt;/h3&gt;

&lt;p&gt;Network operations is no longer just CLI depth plus protocol knowledge. Mature teams increasingly need people who can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automate repetitive change windows&lt;/li&gt;
&lt;li&gt;validate intended state before deployment&lt;/li&gt;
&lt;li&gt;integrate network systems with CI/CD workflows&lt;/li&gt;
&lt;li&gt;use APIs instead of fragile screen scraping&lt;/li&gt;
&lt;li&gt;bridge operations, platform engineering, and security controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rename is Cisco acknowledging that this is networking expertise, not a side discipline.&lt;/p&gt;

&lt;h2&gt;
  
  
  What still has not been fixed
&lt;/h2&gt;

&lt;p&gt;The market signal improved. The education problem did not disappear.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hiring managers still need context
&lt;/h3&gt;

&lt;p&gt;A lot of managers understand CCIE Enterprise, Security, or Data Center immediately. Fewer can explain what &lt;strong&gt;CCIE Automation&lt;/strong&gt; covers in practical terms.&lt;/p&gt;

&lt;p&gt;So even with a better title, engineers will still need to translate the credential into business and technical outcomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automated provisioning&lt;/li&gt;
&lt;li&gt;faster validation&lt;/li&gt;
&lt;li&gt;reduced change risk&lt;/li&gt;
&lt;li&gt;repeatable policy rollout&lt;/li&gt;
&lt;li&gt;lower operational toil&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The "is it a real CCIE?" debate is not over
&lt;/h3&gt;

&lt;p&gt;Some engineers still define CCIE primarily around deep protocol and platform implementation under lab pressure.&lt;/p&gt;

&lt;p&gt;That is a fair instinct, but it misses how much modern network engineering now depends on automation literacy. Building and troubleshooting production-grade automation against live APIs, source-of-truth systems, and deployment pipelines is not trivial. It is just a different kind of expert pressure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-vendor portability still depends on how you learned it
&lt;/h3&gt;

&lt;p&gt;The most transferable parts of the track are things like Python, REST APIs, NETCONF, YANG models, testing habits, and automation design patterns.&lt;/p&gt;

&lt;p&gt;The least transferable parts are tool-specific workflows tied closely to a single vendor platform.&lt;/p&gt;

&lt;p&gt;So the credential helps, but your real portability still depends on whether you learned principles or only one vendor's tooling surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I would use the rebrand if I were in the market
&lt;/h2&gt;

&lt;p&gt;If you hold the former DevNet Expert, the obvious move is to update every place where discovery happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LinkedIn headline&lt;/li&gt;
&lt;li&gt;resume certification section&lt;/li&gt;
&lt;li&gt;email signature&lt;/li&gt;
&lt;li&gt;portfolio site&lt;/li&gt;
&lt;li&gt;speaker bios and conference profiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then back the title up with concrete proof of work.&lt;/p&gt;

&lt;p&gt;For example, instead of stopping at the certification name, show evidence like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;built API-driven configuration pipelines for Cisco platforms&lt;/li&gt;
&lt;li&gt;used NETCONF/RESTCONF to validate and deploy change sets (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;automated compliance checks across production devices&lt;/li&gt;
&lt;li&gt;integrated network workflows with Git-based approvals and testing&lt;/li&gt;
&lt;li&gt;reduced manual provisioning time from hours to minutes&lt;/li&gt;
&lt;/ul&gt;
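
&lt;p&gt;To make the NETCONF/RESTCONF bullet concrete, here is a minimal validate-then-commit sketch using ncclient. The host, credentials, and interface are placeholders, and the candidate-datastore workflow assumes a device that advertises the :candidate capability:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal NETCONF validate-then-commit sketch using ncclient.
# Host, credentials, and interface name are placeholders; the candidate
# workflow assumes the device advertises the :candidate capability.
from ncclient import manager

CONFIG = """
&lt;config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"&gt;
  &lt;interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"&gt;
    &lt;interface&gt;
      &lt;name&gt;GigabitEthernet0/0/0/1&lt;/name&gt;
      &lt;description&gt;uplink - managed by pipeline&lt;/description&gt;
    &lt;/interface&gt;
  &lt;/interfaces&gt;
&lt;/config&gt;
"""

with manager.connect(
    host="10.0.0.1", port=830,  # placeholders
    username="automation", password="secret",
    hostkey_verify=False,
) as m:
    m.edit_config(target="candidate", config=CONFIG)  # stage the change
    m.validate(source="candidate")                    # device-side validation
    m.commit()                                        # apply only if valid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;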

&lt;p&gt;That is what turns the credential from a label into a hiring advantage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger industry takeaway
&lt;/h2&gt;

&lt;p&gt;The interesting part here is not just certification branding.&lt;/p&gt;

&lt;p&gt;It is that automation is being folded into the core identity of advanced networking work. That tracks with where the industry is already headed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more intent- and policy-driven operations&lt;/li&gt;
&lt;li&gt;more API-centric tooling&lt;/li&gt;
&lt;li&gt;more pre-change validation&lt;/li&gt;
&lt;li&gt;more overlap between networking, security, and platform engineering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The engineers who benefit most from this shift will not be pure coders with no network depth, or pure CLI specialists with no automation skills.&lt;/p&gt;

&lt;p&gt;They will be the people who can do both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final take
&lt;/h2&gt;

&lt;p&gt;The move from DevNet Expert to CCIE Automation does not magically upgrade anyone's technical ability.&lt;/p&gt;

&lt;p&gt;What it does is remove a naming problem that was getting in the way of qualified automation engineers being found, understood, and priced correctly.&lt;/p&gt;

&lt;p&gt;That is a meaningful change.&lt;/p&gt;

&lt;p&gt;And if the industry keeps moving toward API-first operations, validation pipelines, and software-shaped infrastructure, this track will look less like an outlier and more like the direction the rest of networking was always heading.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to article was adapted from a FirstPassLab original using AI-assisted editing and formatting. The source analysis and final review were done by FirstPassLab.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>automation</category>
      <category>cisco</category>
      <category>career</category>
    </item>
    <item>
      <title>Identity Beats IP Policy: What Forescout's New Segmentation Model Means for Multi-Vendor Networks</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Tue, 14 Apr 2026 16:53:47 +0000</pubDate>
      <link>https://dev.to/firstpasslab/identity-beats-ip-policy-what-forescouts-new-segmentation-model-means-for-multi-vendor-networks-5eim</link>
      <guid>https://dev.to/firstpasslab/identity-beats-ip-policy-what-forescouts-new-segmentation-model-means-for-multi-vendor-networks-5eim</guid>
      <description>&lt;p&gt;Zero trust segmentation keeps failing for the same reason: policy is still glued to IP addresses, VLANs, and static assumptions about what a device is. That works until you add unmanaged IoT, OT controllers, medical gear, M&amp;amp;A-driven vendor sprawl, or plain old DHCP churn.&lt;/p&gt;

&lt;p&gt;Forescout's latest identity-driven segmentation release is interesting because it treats segmentation as a classification problem first and an enforcement problem second. If you run multi-vendor networks, that shift matters more than the press release headline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Cisco TrustSec still gives you stronger native enforcement on Cisco-heavy networks. But Forescout is pushing a model that fits the environments many teams actually inherit: mixed vendors, unagentable assets, and east-west flows that are hard to describe with static ACLs alone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-overview.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-overview.png" alt="Forescout Identity-Driven Segmentation Overview" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Did Forescout Actually Launch on March 23, 2026?
&lt;/h2&gt;

&lt;p&gt;Forescout launched a cloud-native, agentless segmentation capability inside the 4D Platform that models policy by device identity, attributes, behavior, and risk instead of by subnet alone. According to Forescout (2026), the platform now lets teams visualize zones from a single console across IT, OT, IoT, and IoMT, while reducing onboarding from weeks to hours and avoiding vendor lock-in or a network redesign. That matters because traditional segmentation tools usually force one of three compromises: they cover only managed endpoints, they work only in OT, or they depend on agents that industrial and medical devices cannot run. According to Network World (2026), Forescout's new zone modeling can use up to 1,200 device attributes, overlay risk levels onto communication matrices, and validate policy against actual communication patterns before enforcement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-tech.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-tech.png" alt="Forescout Identity-Driven Segmentation Technical Architecture" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most reporting stopped at "new segmentation feature," but the design detail is more interesting. The 4D Platform's segmentation sits on top of existing asset intelligence, risk scoring, and control workflows. According to Forescout (2026), it combines more than 30 agentless discovery methods and turns that data into heatmaps and matrix views for east-west communication risk. According to Industrial Cyber (2026), the product is meant to bridge IT and OT without agents, redesign, or single-vendor dependency. In practice, that means the release is less about one more NAC dashboard and more about moving segmentation planning upstream, before enforcement breaks production.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Legacy port-based NAC&lt;/th&gt;
&lt;th&gt;Forescout 4D segmentation&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary policy anchor&lt;/td&gt;
&lt;td&gt;VLAN, IP, port&lt;/td&gt;
&lt;td&gt;Identity, attributes, behavior, risk&lt;/td&gt;
&lt;td&gt;Survives DHCP churn and device mobility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Asset coverage&lt;/td&gt;
&lt;td&gt;Mostly managed endpoints&lt;/td&gt;
&lt;td&gt;Managed, unmanaged, and unagentable devices&lt;/td&gt;
&lt;td&gt;Better fit for OT, IoT, and healthcare&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment style&lt;/td&gt;
&lt;td&gt;Appliance-centric&lt;/td&gt;
&lt;td&gt;Cloud-native overlay&lt;/td&gt;
&lt;td&gt;Faster rollout in hybrid estates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validation model&lt;/td&gt;
&lt;td&gt;Enforce first, troubleshoot later&lt;/td&gt;
&lt;td&gt;Model communication before enforcement&lt;/td&gt;
&lt;td&gt;Lower outage risk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vendor dependency&lt;/td&gt;
&lt;td&gt;Often strong&lt;/td&gt;
&lt;td&gt;Multi-vendor by design&lt;/td&gt;
&lt;td&gt;Better for acquisition-heavy enterprises&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Is Identity-Driven Segmentation Replacing IP-Based NAC?
&lt;/h2&gt;

&lt;p&gt;Identity-driven segmentation is replacing IP-based NAC because zero trust breaks when policy depends on addresses that move faster than the business. According to NIST SP 800-207, zero trust protects resources rather than trusting network location, and that principle lines up almost perfectly with Forescout's argument that segmentation should follow device identity, not subnet placement. According to Network World (2026), Justin Foster described the shift clearly: a laptop can change IPs, but the device's role, owner, function, and risk profile remain far more stable anchors for policy. That is why identity-centric models are gaining traction in hospitals, factories, and campuses where DHCP churn, roaming clients, mergers, and temporary VLAN workarounds make ACL sprawl hard to govern.&lt;/p&gt;

&lt;p&gt;This is also where the release intersects directly with real-world Cisco practice. Cisco TrustSec solved much of this years ago by replacing IP-bound policy with SGT-based policy. According to Cisco (2026), SGACLs are topology-independent and continue to apply even when devices move or change IP addresses. A typical Catalyst enforcement pattern still looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface GigabitEthernet1/0/2
 authentication port-control auto
 mab
 dot1x pae authenticator
 cts role-based enforcement
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the key technical point for network security engineers. Forescout is not inventing identity-based policy; Cisco already proved that model with SGTs and SGACLs. What Forescout is doing is extending the argument to environments where 802.1X coverage is incomplete, where endpoints cannot run agents, or where five to seven vendors share the same production network. That gap is exactly where older NAC programs usually stall.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Forescout Enforce Policy Across Multi-Vendor, OT, and IoT Networks?
&lt;/h2&gt;

&lt;p&gt;Forescout enforces policy as an overlay that talks to existing switching and routing infrastructure, rather than requiring a rip-and-replace fabric. According to Network World (2026), the platform can communicate directly with switches and routers or use SDN control layers where a vendor requires it, with Arista enforcement routed through CloudVision rather than the switch itself. It can also move newly identified devices into a more appropriate VLAN automatically, collect visibility from SPAN ports and packet brokers such as Gigamon and Keysight, and classify non-agentable OT devices through header scraping, active probes, remote execution scripts, and secure proxy methods. That blend of control and discovery is the practical reason this launch matters.&lt;/p&gt;

&lt;p&gt;For network engineers, the architecture is easiest to understand as three layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Asset intelligence layer&lt;/strong&gt;: identify device type, owner, function, and risk across IT, OT, IoT, and IoMT.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy modeling layer&lt;/strong&gt;: build zones and allowed flows with matrix-based heatmaps before turning controls on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforcement layer&lt;/strong&gt;: push actions through the infrastructure you already own, including VLAN changes and controller-driven policy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The hardest problem here is not policy syntax; it is classification accuracy. Network World's example is a good one: if a system looks like a generic Windows endpoint but is actually an MRI system, placing it in the wrong segment can create patient safety and compliance risk. That is why identity-driven segmentation depends on visibility quality more than on pretty dashboards. It also explains why many practitioners describe NAC migrations as operationally messy. One r/networking post describes an organization "moving from using Forescout for NAC to Cisco ISE with 802.1x/MAB," a useful reminder that segmentation changes are never just licensing decisions; they are identity, policy, and workflow redesign projects.&lt;/p&gt;
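
&lt;p&gt;A toy version of that classification-first logic makes the point. The attribute names and zones below are invented for illustration; Forescout's real model draws on far more attributes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy identity-to-zone classifier. Attributes and zone names are invented;
# the point is that identity and risk, not the current IP, pick the zone.

def assign_zone(device: dict) -&gt; str:
    if device.get("function") == "medical_imaging":
        return "iomt-clinical"  # an MRI is never a generic endpoint
    if device.get("managed") and device.get("os") == "windows":
        return "corp-endpoints"
    if device.get("risk") == "high":
        return "quarantine"
    return "review-pending"     # unknown identity gets reviewed, not trusted

mri = {"os": "windows", "managed": False, "function": "medical_imaging"}
print(assign_zone(mri))  # iomt-clinical, despite looking like a Windows box
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;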

&lt;h2&gt;
  
  
  What Should CCIE Security Engineers Learn From This Release?
&lt;/h2&gt;

&lt;p&gt;Network security engineers should read this release as a signal that production zero-trust work is becoming broader than Cisco-only policy enforcement. According to IoT Analytics (2025), connected IoT devices reached 18.5 billion in 2024 and are projected to hit 39 billion by 2030. According to Forescout (2026), 75% of the riskiest connected devices in its 2026 Vedere Labs report were new to the rankings in the last two years. That combination, exploding device count plus rapidly shifting device risk, explains why enterprises want segmentation tied to asset intelligence and east-west visibility rather than static access lists. If you design or operate production security controls, this is the operational reality behind modern zero-trust programs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-impact.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fforescout-identity-segmentation%2Finfographic-impact.png" alt="Forescout Identity-Driven Segmentation Industry Impact" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In practical terms, this release reinforces five skills worth building now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand how &lt;a href="https://firstpasslab.com/blog/2026-03-06-cisco-ise-trustsec-sgt-zero-trust-segmentation-guide/" rel="noopener noreferrer"&gt;Cisco ISE and TrustSec segmentation&lt;/a&gt; maps identity to enforcement.&lt;/li&gt;
&lt;li&gt;Practice &lt;a href="https://firstpasslab.com/blog/2026-04-10-cisco-ise-lab-eve-ng-ccie-security/" rel="noopener noreferrer"&gt;ISE 3.x lab deployment&lt;/a&gt; so you can compare native Cisco workflows with overlay models.&lt;/li&gt;
&lt;li&gt;Track how vendors such as Nile are packaging &lt;a href="https://firstpasslab.com/blog/2026-03-23-nile-naas-native-nac-microsegmentation-zero-trust-campus-network/" rel="noopener noreferrer"&gt;native NAC and microsegmentation&lt;/a&gt; into broader platform plays.&lt;/li&gt;
&lt;li&gt;Understand why &lt;a href="https://firstpasslab.com/blog/2026-03-24-sase-spending-97-billion-2030-gpu-powered-security-network-engineer-guide/" rel="noopener noreferrer"&gt;SASE growth&lt;/a&gt; is pushing segmentation decisions closer to identity and application policy.&lt;/li&gt;
&lt;li&gt;Read where the broader &lt;a href="https://firstpasslab.com/blog/2026-03-05-zero-trust-ccie-security-blueprint-obsolete-2028/" rel="noopener noreferrer"&gt;zero-trust blueprint is heading&lt;/a&gt; so you are not studying only for the lab and missing the market.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The opportunity is straightforward. Engineers who can translate between Cisco-native controls, vendor-agnostic overlays, and OT-aware asset discovery will be more useful than engineers who know only how to paste RADIUS templates. I am glad this release makes that visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does Forescout Replace Cisco ISE and TrustSec, or Complement Them?
&lt;/h2&gt;

&lt;p&gt;In most enterprises, Forescout complements Cisco ISE and TrustSec rather than replacing them outright. According to Cisco (2026), TrustSec still delivers deep native enforcement with SGTs, SGACL matrices, and topology-independent policy on supported Cisco infrastructure. According to Network World (2026), Forescout's strength is that it can classify and segment assets across networks that are already heterogeneous and often include unagentable OT and IoMT systems. The architectural question is therefore not "which one is better?" but "where do you need native enforcement, and where do you need broader visibility and policy abstraction?" Cisco-heavy campuses often still favor ISE plus TrustSec. Hybrid hospitals, factories, and acquisition-heavy enterprises may favor Forescout for visibility and policy design, then use vendor-native enforcement where available.&lt;/p&gt;

&lt;p&gt;A simple buying lens looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Question&lt;/th&gt;
&lt;th&gt;Cisco ISE + TrustSec&lt;/th&gt;
&lt;th&gt;Forescout 4D segmentation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Best fit&lt;/td&gt;
&lt;td&gt;Cisco-dominant campus and branch&lt;/td&gt;
&lt;td&gt;Mixed-vendor IT, OT, IoT, IoMT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Identity model&lt;/td&gt;
&lt;td&gt;802.1X, MAB, SGT, ISE policy sets&lt;/td&gt;
&lt;td&gt;Asset identity, labels, behavior, risk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforcement strength&lt;/td&gt;
&lt;td&gt;Deep native Catalyst and Nexus policy&lt;/td&gt;
&lt;td&gt;Flexible overlay across existing infrastructure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OT and agentless coverage&lt;/td&gt;
&lt;td&gt;Possible, but not the core strength&lt;/td&gt;
&lt;td&gt;Core design goal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Main tradeoff&lt;/td&gt;
&lt;td&gt;Stronger native control, narrower ecosystem&lt;/td&gt;
&lt;td&gt;Broader coverage, less single-vendor depth&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That is the competitor gap most quick news coverage missed. The headline is not simply that Forescout added segmentation. The deeper takeaway is that zero-trust design is turning into a data-quality and control-plane orchestration problem. The engineers who win will understand both the native Cisco path and the overlay path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What did Forescout launch in March 2026?
&lt;/h3&gt;

&lt;p&gt;Forescout launched cloud-native, agentless identity-driven segmentation in the 4D Platform on March 23, 2026. According to Forescout (2026), the release adds zone modeling across IT, OT, IoT, and IoMT assets without requiring a network redesign or vendor lock-in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Forescout replacing Cisco ISE and TrustSec?
&lt;/h3&gt;

&lt;p&gt;Not in most Cisco-heavy enterprises. Cisco ISE and TrustSec remain stronger for native SGT and SGACL enforcement on Catalyst and Nexus, while Forescout is more attractive when coverage must extend to unmanaged, unagentable, and multi-vendor environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is identity-driven segmentation better than IP-based segmentation?
&lt;/h3&gt;

&lt;p&gt;Identity-driven segmentation is more durable because policy follows what a device is and how risky it is, not the IP address it happens to hold today. According to NIST SP 800-207, zero trust should protect resources rather than trust network location, which is exactly why identity-based policy scales better in hybrid networks.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should network security engineers learn from this release?
&lt;/h3&gt;

&lt;p&gt;They should keep mastering Cisco-native controls, especially ISE, TrustSec, 802.1X, MAB, and SGACL verification. They should also add asset classification, OT and IoT discovery, zone modeling, and multi-vendor policy design, because that is where real customer networks are going.&lt;/p&gt;




&lt;p&gt;AI disclosure: This post was adapted from an original FirstPassLab article with AI assistance and reviewed before publication. The original source is here: &lt;a href="https://firstpasslab.com/blog/2026-04-14-forescout-identity-segmentation/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-04-14-forescout-identity-segmentation/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
      <category>zerotrust</category>
      <category>cisco</category>
    </item>
    <item>
      <title>AI Can Generate the Network Config. The Real Job Is Validating It Before Production</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Mon, 13 Apr 2026 22:55:35 +0000</pubDate>
      <link>https://dev.to/firstpasslab/ai-can-generate-the-network-config-the-real-job-is-validating-it-before-production-37m0</link>
      <guid>https://dev.to/firstpasslab/ai-can-generate-the-network-config-the-real-job-is-validating-it-before-production-37m0</guid>
<description>&lt;p&gt;Large language models are already pretty good at producing VLAN stanzas, ACLs, and even decent-looking BGP policy snippets.&lt;/p&gt;

&lt;p&gt;That does &lt;strong&gt;not&lt;/strong&gt; mean the network engineer disappears.&lt;/p&gt;

&lt;p&gt;It means the low-value part of the job, repetitive config generation, gets compressed. The high-value part, understanding intent, validating impact, enforcing policy, and debugging failures across the automation stack, becomes more important.&lt;/p&gt;

&lt;p&gt;That is the real shift happening in network automation right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI is already useful in network operations
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://www.networkworld.com/article/3529502/gartner-network-automation-will-increase-threefold-by-2026.html" rel="noopener noreferrer"&gt;Gartner's 2026 network automation outlook&lt;/a&gt;, network automation deployments are expected to triple by the end of 2026. In practice, the first wave is landing in a few obvious places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Config generation&lt;/strong&gt; from natural-language requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy translation&lt;/strong&gt; from intent into ACL, segmentation, or QoS changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change validation&lt;/strong&gt; against known baselines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting assistance&lt;/strong&gt; by correlating telemetry, logs, and historical incidents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance checking&lt;/strong&gt; against CIS, NIST, and internal standards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is enough to remove a lot of repetitive work from daily operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why CLI-only work is the first thing that gets compressed
&lt;/h2&gt;

&lt;p&gt;The easiest network tasks to automate are the ones that already follow a template.&lt;/p&gt;

&lt;p&gt;Think about the work that fills a normal week:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;AI / automation readiness&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VLAN creation and assignment&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACL updates&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard BGP or OSPF neighbor config&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Software upgrade orchestration&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tier-1 troubleshooting&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Architecture decisions&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Incident judgment under ambiguity&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If your value is mostly typing device-by-device CLI faster than the next person, AI is coming for that margin.&lt;/p&gt;

&lt;p&gt;If your value is understanding blast radius, rollback paths, control-plane behavior, policy intent, and failure domains, AI makes you more leveraged, not less.&lt;/p&gt;

&lt;p&gt;That is why the real question is not, "Can AI write a config?"&lt;/p&gt;

&lt;p&gt;It is, "Who understands whether that config is safe to deploy into &lt;em&gt;this&lt;/em&gt; network?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The interfaces AI actually uses are the automation stack
&lt;/h2&gt;

&lt;p&gt;This is the part that matters most.&lt;/p&gt;

&lt;p&gt;AI does not magically operate networks by vibes. It plugs into the same interfaces automation engineers have already been building around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NETCONF / RESTCONF&lt;/strong&gt; for structured device changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YANG models&lt;/strong&gt; for schema and validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gNMI and streaming telemetry&lt;/strong&gt; for observability and feedback loops&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; for workflow glue, guardrails, and custom logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD pipelines&lt;/strong&gt; for change testing and promotion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt; tools like Ansible and Terraform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller APIs&lt;/strong&gt; such as Catalyst Center, NSO, Meraki, or vendor-specific platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why the engineers who understand the automation stack are in the best position for the next wave.&lt;/p&gt;

&lt;p&gt;They are the people who can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;review what the model is trying to change&lt;/li&gt;
&lt;li&gt;constrain it with guardrails&lt;/li&gt;
&lt;li&gt;test it before production&lt;/li&gt;
&lt;li&gt;debug it when the generated change is technically valid but operationally wrong&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A generated payload that matches a YANG model is not the same thing as a safe network change.&lt;/p&gt;
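
&lt;p&gt;A tiny example shows the gap. The schema check below is a stand-in for real YANG validation, and the guardrail rules are invented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Schema validity is necessary, not sufficient. The schema check below is
# a stand-in for real YANG validation; the guardrail rules are invented.

def schema_valid(change: dict) -&gt; bool:
    return "prefix_limit" in change  # stand-in for YANG validation

def operational_problems(change: dict, live: dict) -&gt; list:
    problems = []
    if change.get("prefix_limit", 0) &lt; live["current_prefixes"]:
        problems.append("prefix limit below live prefix count: session would drop")
    if not live["in_maintenance_window"]:
        problems.append("change proposed outside an approved window")
    return problems

change = {"prefix_limit": 100}
live = {"current_prefixes": 842, "in_maintenance_window": False}

assert schema_valid(change)                 # passes the model
print(operational_problems(change, live))   # still fails the network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;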

&lt;h2&gt;
  
  
  "AgenticOps" is just networking automation with more autonomy attached
&lt;/h2&gt;

&lt;p&gt;Vendors are starting to describe the next phase as &lt;strong&gt;AgenticOps&lt;/strong&gt;: AI systems that do more than suggest. They detect anomalies, correlate causes, propose fixes, and sometimes initiate actions.&lt;/p&gt;

&lt;p&gt;That direction showed up clearly in 2026 messaging from Cisco, Huawei, and NVIDIA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Huawei talked about autonomous AI agents for network operations&lt;/li&gt;
&lt;li&gt;NVIDIA released a telecom-focused large model tuned for network data&lt;/li&gt;
&lt;li&gt;Cisco framed the shift as &lt;strong&gt;NetOps -&amp;gt; AIOps -&amp;gt; AgenticOps&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The common thread is simple: once systems can propose and execute changes, your production risk moves from manual keystrokes to automation design.&lt;/p&gt;

&lt;p&gt;So the bottleneck becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;model quality&lt;/li&gt;
&lt;li&gt;policy quality&lt;/li&gt;
&lt;li&gt;test quality&lt;/li&gt;
&lt;li&gt;rollback quality&lt;/li&gt;
&lt;li&gt;telemetry quality&lt;/li&gt;
&lt;li&gt;human review at the right control points&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is engineering work, not prompt theater.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the AI-era network engineer actually does
&lt;/h2&gt;

&lt;p&gt;The strongest network engineers in the next few years will not be the people who refuse automation.&lt;/p&gt;

&lt;p&gt;They will be the people who can operate one level higher.&lt;/p&gt;

&lt;p&gt;A realistic day looks something like this:&lt;/p&gt;

&lt;h3&gt;
  
  
  Morning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;review AI-generated change recommendations&lt;/li&gt;
&lt;li&gt;compare them to maintenance windows, dependency graphs, and policy constraints&lt;/li&gt;
&lt;li&gt;approve, reject, or modify the rollout plan&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Midday
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;define intent for segmentation, routing, or capacity policy&lt;/li&gt;
&lt;li&gt;encode guardrails in CI checks and controller workflows&lt;/li&gt;
&lt;li&gt;validate proposed changes in lab or pre-prod&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Afternoon
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;investigate the ugly cases the model could not fully resolve&lt;/li&gt;
&lt;li&gt;reproduce failures with Python, pyATS, or simulation tools&lt;/li&gt;
&lt;li&gt;fix the workflow, not just the one broken command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the leverage point.&lt;/p&gt;

&lt;p&gt;The job shifts from being the person who types every command to being the person who designs, validates, and governs the system that does.&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical skills stack for the next 2 years
&lt;/h2&gt;

&lt;p&gt;If I were advising a network engineer who wants to stay relevant as AI becomes normal in ops, I would focus on this stack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;YANG + NETCONF/RESTCONF&lt;/strong&gt;&lt;br&gt;
Learn how modern structured configuration actually works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python for network workflows&lt;/strong&gt;&lt;br&gt;
Not full-time software engineering, just enough to query APIs, parse responses, and build sane glue code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Git and CI/CD for network changes&lt;/strong&gt;&lt;br&gt;
Because generated configs without test gates are just faster mistakes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;&lt;br&gt;
Especially Ansible and Terraform, depending on whether you live more in device automation or cloud networking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Streaming telemetry and observability&lt;/strong&gt;&lt;br&gt;
Autonomous systems are only as good as their feedback loops.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lab validation&lt;/strong&gt;&lt;br&gt;
Build a place where you can test workflows, not just device configs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want a hands-on starting point, I like building around a small lab plus structured APIs first, then layering CI and telemetry on top.&lt;/p&gt;
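
&lt;p&gt;For the lab-validation piece, one pattern I like is snapshotting running config over NETCONF before and after a workflow runs, so the test asserts on real device state instead of on whether a script exited cleanly. A minimal sketch, assuming the &lt;code&gt;ncclient&lt;/code&gt; library and a lab device with NETCONF enabled (host and credentials are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Snapshot-and-diff sketch for validating a workflow in a lab.
# Device address and credentials are placeholders.
import difflib
from ncclient import manager

def running_config(host: str) -&amp;gt; str:
    with manager.connect(
        host=host, port=830,
        username="admin", password="admin",  # lab credentials only
        hostkey_verify=False,                # lab only
    ) as m:
        return m.get_config(source="running").data_xml

before = running_config("lab-device.example.net")
# ... run the automation workflow under test here ...
after = running_config("lab-device.example.net")

diff = list(difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm=""))
print("\n".join(diff) if diff else "no config drift detected")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;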

&lt;h2&gt;
  
  
  The certification angle, briefly
&lt;/h2&gt;

&lt;p&gt;The original FirstPassLab piece behind this post focused on &lt;strong&gt;CCIE Automation&lt;/strong&gt; as a signal that these skills are becoming core networking skills, not a side quest for "developer types."&lt;/p&gt;

&lt;p&gt;I think that broader point is right even if you are not pursuing that specific certification.&lt;/p&gt;

&lt;p&gt;The market is moving toward engineers who can bridge protocol knowledge with automation interfaces.&lt;/p&gt;

&lt;p&gt;Knowing BGP matters.&lt;br&gt;
Knowing how to validate a BGP change through an API-driven workflow matters more than it did two years ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The main takeaway
&lt;/h2&gt;

&lt;p&gt;AI will absolutely write more network config.&lt;/p&gt;

&lt;p&gt;But production networks are not judged on whether a config &lt;em&gt;looks plausible&lt;/em&gt;. They are judged on whether the change is safe, reversible, observable, and aligned with business intent.&lt;/p&gt;

&lt;p&gt;That means the durable skill is not raw CLI speed.&lt;/p&gt;

&lt;p&gt;It is being the engineer who can turn AI output into controlled network change.&lt;/p&gt;

&lt;p&gt;If you learn the automation interfaces now, AI is more likely to become your amplifier than your replacement.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post is adapted from the original FirstPassLab article: &lt;a href="https://firstpasslab.com/blog/2026-03-07-ai-network-automation-ccie-insurance-policy/" rel="noopener noreferrer"&gt;AI Will Write Your Network Configs by 2028 — Why CCIE Automation Is Your Insurance Policy&lt;/a&gt;.&lt;/em&gt; &lt;/p&gt;




&lt;p&gt;&lt;strong&gt;AI disclosure:&lt;/strong&gt; This article was adapted from an original FirstPassLab post with AI assistance for editing and syndication. The canonical version lives at FirstPassLab.&lt;/p&gt;

</description>
      <category>networking</category>
      <category>automation</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Oracle Is Funding 131K-GPU Clusters. Here’s Why RoCEv2 Fabrics Just Became a Board-Level Problem</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Mon, 13 Apr 2026 16:54:12 +0000</pubDate>
      <link>https://dev.to/firstpasslab/oracle-is-funding-131k-gpu-clusters-heres-why-rocev2-fabrics-just-became-a-board-level-problem-4628</link>
      <guid>https://dev.to/firstpasslab/oracle-is-funding-131k-gpu-clusters-heres-why-rocev2-fabrics-just-became-a-board-level-problem-4628</guid>
      <description>&lt;h1&gt;
  
  
  Oracle Is Funding 131K-GPU Clusters. Here’s Why RoCEv2 Fabrics Just Became a Board-Level Problem
&lt;/h1&gt;

&lt;p&gt;Oracle’s latest AI infrastructure push is easy to read as a finance story: layoffs, capex, hyperscaler pressure, and another giant GPU announcement.&lt;/p&gt;

&lt;p&gt;But the more interesting signal for engineers is lower in the stack.&lt;/p&gt;

&lt;p&gt;Oracle says OCI Supercluster can scale to &lt;strong&gt;131,072 GPUs&lt;/strong&gt;, with &lt;strong&gt;2.5 to 9.1 microseconds&lt;/strong&gt; of cluster latency and up to &lt;strong&gt;3,200 Gb/sec&lt;/strong&gt; of cluster network bandwidth. Once numbers get that large, the network stops being “supporting infrastructure” and starts deciding whether the AI investment pays off at all.&lt;/p&gt;

&lt;p&gt;That is why this story matters even if you never touch Oracle Cloud directly. It is a clean example of a broader shift across the industry: &lt;strong&gt;AI-era infrastructure is turning congestion control, queue behavior, optics, and east-west fabric design into executive-level concerns&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq713xq28bxret1k6adel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq713xq28bxret1k6adel.png" alt="Oracle AI fabric architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this is a networking story, not just a cloud spending story
&lt;/h2&gt;

&lt;p&gt;Big AI clusters fail on data movement long before they fail on raw GPU count.&lt;/p&gt;

&lt;p&gt;If you buy thousands of accelerators but cannot keep collective traffic predictable, you do not have an AI platform. You have an expensive queueing experiment.&lt;/p&gt;

&lt;p&gt;According to Oracle’s AI infrastructure material, the company is building around a RoCEv2-based cluster fabric with NVIDIA ConnectX NICs, high-bandwidth east-west networking, and an architecture direction that emphasizes offload and multiplanar design. That tells us two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The back-end fabric is now part of the product.&lt;/li&gt;
&lt;li&gt;The back-end fabric is now part of the business case.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In normal enterprise environments, network teams can get away with talking about throughput, redundancy, and high-level topology. In large AI environments, those abstractions are not enough. The important questions become much more specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens to collective traffic during microbursts?&lt;/li&gt;
&lt;li&gt;How big is the pause-domain blast radius?&lt;/li&gt;
&lt;li&gt;Are ECN thresholds tuned well enough to mark before the fabric gets ugly?&lt;/li&gt;
&lt;li&gt;Are front-end and back-end flows isolated cleanly?&lt;/li&gt;
&lt;li&gt;Can you prove latency behavior under sustained load, not just in synthetic idle tests?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why a headline about layoffs quickly becomes a lesson about packet paths.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Oracle is actually betting on
&lt;/h2&gt;

&lt;p&gt;Strip away the corporate drama and the technical bet looks straightforward.&lt;/p&gt;

&lt;p&gt;Oracle is putting money behind a model where AI competitiveness depends on fabric quality as much as compute quantity. Its public infrastructure messaging points to a few design priorities:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Design priority&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Very large GPU cluster scale&lt;/td&gt;
&lt;td&gt;Failure domains, oversubscription, and congestion behavior become architecture decisions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RoCEv2 transport&lt;/td&gt;
&lt;td&gt;You need intentional loss management, not generic “fast Ethernet” assumptions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High-bandwidth cluster network&lt;/td&gt;
&lt;td&gt;East-west design starts dominating outcomes for training and distributed inference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NIC/offload emphasis&lt;/td&gt;
&lt;td&gt;Host-edge behavior matters almost as much as switch behavior&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Separation of front-end and back-end traffic&lt;/td&gt;
&lt;td&gt;User/API traffic and GPU-cluster traffic should not compete for the same operational assumptions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is one of the clearest signs that the network is moving from cost center to constraint solver.&lt;/p&gt;

&lt;p&gt;If the fabric performs well, expensive accelerators stay busy.&lt;br&gt;
If the fabric performs badly, the GPUs wait, jobs stretch, and the whole capex story gets harder to defend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why RoCEv2 changes the engineering problem
&lt;/h2&gt;

&lt;p&gt;RoCEv2 is attractive because it brings RDMA semantics onto Ethernet and IP-based infrastructure. That gives operators a familiar ecosystem and better integration with existing data center practices than a fully separate transport stack.&lt;/p&gt;

&lt;p&gt;But it also raises the bar for fabric discipline.&lt;/p&gt;

&lt;p&gt;A RoCEv2 environment does not magically behave like a perfect lossless network. You have to design for it.&lt;/p&gt;

&lt;p&gt;The real engineering work is in the ugly details:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. ECN and PFC are design choices, not defaults
&lt;/h3&gt;

&lt;p&gt;PFC can help suppress drops for sensitive traffic classes, but it can also widen the blast radius if the pause domain is too broad or the topology is sloppy.&lt;/p&gt;

&lt;p&gt;ECN can signal congestion before things fall apart, but only if queue thresholds and telemetry are configured intelligently.&lt;/p&gt;

&lt;p&gt;In other words, the question is not “do we support RoCEv2?”&lt;/p&gt;

&lt;p&gt;The question is “can we keep RoCEv2 predictable at scale?”&lt;/p&gt;
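
&lt;p&gt;To see why the thresholds are design choices, here is a toy Python sketch of a single queue where ECN is supposed to mark well before the PFC pause threshold. Every number in it is invented for illustration; real tuning depends on buffer sizes, RTT, and traffic mix.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Toy single-queue model: ECN should mark well before PFC pauses.
# All thresholds and arrival patterns are invented for illustration.
ECN_MARK_THRESHOLD = 40   # queued packets before marking starts
PFC_PAUSE_THRESHOLD = 90  # queued packets before pausing upstream

DRAIN_PER_TICK = 25
queue_depth = 0

for tick, arrivals in enumerate([10, 30, 60, 60, 5, 0, 0]):  # microburst
    queue_depth = max(0, queue_depth + arrivals - DRAIN_PER_TICK)
    ecn = "on" if queue_depth &amp;gt; ECN_MARK_THRESHOLD else "off"
    pfc = "PAUSE" if queue_depth &amp;gt; PFC_PAUSE_THRESHOLD else "ok"
    print(f"t={tick} depth={queue_depth:3d} ecn={ecn} pfc={pfc}")

# If ECN_MARK_THRESHOLD sits too close to PFC_PAUSE_THRESHOLD,
# senders never slow down before the fabric starts pausing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;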

&lt;h3&gt;
  
  
  2. Average utilization is a weak metric
&lt;/h3&gt;

&lt;p&gt;AI fabrics punish teams that only look at interface averages.&lt;/p&gt;

&lt;p&gt;The useful signals are things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;queue buildup&lt;/li&gt;
&lt;li&gt;latency spread, not just median latency&lt;/li&gt;
&lt;li&gt;retransmissions and retries&lt;/li&gt;
&lt;li&gt;NIC-level counters&lt;/li&gt;
&lt;li&gt;pressure around hot spots during synchronized workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A link can look healthy in a dashboard while the training job above it is already losing efficiency.&lt;/p&gt;
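
&lt;p&gt;As a small illustration of "latency spread, not just median latency": the sketch below computes p50 and p99 from a batch of latency samples. The sample values are invented; in practice they would come from streaming telemetry or NIC counters.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Spread vs. median: a link can report a healthy median while the
# tail is already stalling synchronized collective operations.
# Sample values are invented for illustration.
import statistics

samples_us = [3.1, 3.0, 3.2, 3.1, 3.0, 3.3, 3.1, 9.8, 3.2, 11.4]

p50 = statistics.median(samples_us)
p99 = statistics.quantiles(samples_us, n=100)[98]  # 99th percentile
print(f"p50={p50:.1f}us  p99={p99:.1f}us  spread={p99 - p50:.1f}us")

# A dashboard that shows only the median (~3.1us) hides the tail,
# and synchronized workloads run at the speed of the slowest flow.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;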

&lt;h3&gt;
  
  
  3. The host edge matters
&lt;/h3&gt;

&lt;p&gt;Once you are dealing with ConnectX-class NICs, offload behavior, and AI-optimized hosts, the edge is no longer just a server team problem.&lt;/p&gt;

&lt;p&gt;The network outcome is shaped jointly by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;switch buffering and queue policy&lt;/li&gt;
&lt;li&gt;NIC behavior&lt;/li&gt;
&lt;li&gt;offload settings&lt;/li&gt;
&lt;li&gt;optics and physical layer quality&lt;/li&gt;
&lt;li&gt;workload placement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cleanest leaf-spine diagram in the world will not save a badly tuned host edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical architecture lesson
&lt;/h2&gt;

&lt;p&gt;A lot of teams still draw AI infrastructure as “some GPUs attached to a fast spine.” That mental model is already outdated.&lt;/p&gt;

&lt;p&gt;A better model is to treat the environment as &lt;strong&gt;two different networks sharing one business outcome&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  Front-end network
&lt;/h3&gt;

&lt;p&gt;This is where API traffic, user access, storage access, service integration, and platform control traffic live.&lt;/p&gt;

&lt;h3&gt;
  
  
  Back-end cluster network
&lt;/h3&gt;

&lt;p&gt;This is where the expensive part happens: node-to-node traffic, collective operations, checkpoint movement, and all the east-west behavior that determines whether the cluster performs like a product or a science project.&lt;/p&gt;

&lt;p&gt;Once you separate those mentally, several design rules get clearer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;isolate traffic classes aggressively&lt;/li&gt;
&lt;li&gt;design congestion containment on purpose&lt;/li&gt;
&lt;li&gt;think about optics and thermals early&lt;/li&gt;
&lt;li&gt;instrument the host edge, not just the switching layer&lt;/li&gt;
&lt;li&gt;validate under realistic synchronized load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is also why AI networking keeps pulling data center engineering closer to systems design and performance engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  What network teams should do this quarter
&lt;/h2&gt;

&lt;p&gt;You do not need an Oracle-sized budget to take the lesson seriously.&lt;/p&gt;

&lt;p&gt;If your team is anywhere near GPU infrastructure, these are good next moves:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Review one RoCEv2 design end to end
&lt;/h3&gt;

&lt;p&gt;Do not stop at topology. Walk the queues, congestion policy, traffic classes, and failure domains.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Split front-end and back-end diagrams
&lt;/h3&gt;

&lt;p&gt;If both traffic types still live in the same fuzzy architecture box, fix that first.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Add queue and latency telemetry to your standard view
&lt;/h3&gt;

&lt;p&gt;Throughput graphs are not enough for AI fabrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Revisit optics assumptions
&lt;/h3&gt;

&lt;p&gt;At dense 400G and 800G scale, physical-layer details turn into application performance issues quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Learn to explain fabric behavior in business terms
&lt;/h3&gt;

&lt;p&gt;GPU utilization, job completion time, and cluster efficiency are the language executives will care about. That translation layer is increasingly part of the engineer’s job.&lt;/p&gt;

&lt;h2&gt;
  
  
  The broader takeaway
&lt;/h2&gt;

&lt;p&gt;Oracle is not the only company making this shift. It is just making the shift loudly.&lt;/p&gt;

&lt;p&gt;Across hyperscalers, AI clouds, and modern data center platforms, the pattern is the same:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more money into east-west fabrics&lt;/li&gt;
&lt;li&gt;more emphasis on NICs and offload&lt;/li&gt;
&lt;li&gt;more sensitivity to congestion behavior&lt;/li&gt;
&lt;li&gt;more pressure on network teams to think like performance engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For years, networking teams had to argue that the fabric mattered.&lt;/p&gt;

&lt;p&gt;Now the market is doing that for them.&lt;/p&gt;

&lt;p&gt;If a company is willing to reorganize billions of dollars around GPU infrastructure, then the fabric carrying those workloads is no longer plumbing. It is strategy.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Canonical version: &lt;a href="https://firstpasslab.com/blog/2026-04-08-oracle-layoffs-ai-data-center-networking-impact/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-04-08-oracle-layoffs-ai-data-center-networking-impact/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to article was adapted with AI assistance from the original FirstPassLab article, with editorial restructuring for the Dev.to engineering audience.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>ai</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Google Bought Wiz for $32B. Here's Why Cloud Network Engineers Should Care</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Sun, 12 Apr 2026 22:54:05 +0000</pubDate>
      <link>https://dev.to/firstpasslab/google-bought-wiz-for-32b-heres-why-cloud-network-engineers-should-care-3j51</link>
      <guid>https://dev.to/firstpasslab/google-bought-wiz-for-32b-heres-why-cloud-network-engineers-should-care-3j51</guid>
      <description>&lt;p&gt;Google didn't just buy a security vendor, it bought an API-level map of how multi-cloud environments are actually exposed.&lt;/p&gt;

&lt;p&gt;For infrastructure teams, the important part of the Wiz deal is not just the $32B headline. It's that CNAPP has moved into the same design conversation as VPCs, route tables, IAM boundaries, Kubernetes networking, and cloud misconfiguration management. If you operate AWS, Azure, or GCP at scale, this is network engineering work with a security hat on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; The Google-Wiz deal pushes cloud security posture management closer to the infrastructure layer. Engineers who can reason about cloud networking, identity, and exposure paths together will have a much stronger edge than teams that still treat them as separate domains.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Did Google Actually Buy With Wiz?
&lt;/h2&gt;

&lt;p&gt;Wiz is a Cloud-Native Application Protection Platform (CNAPP) that provides agentless security scanning across multi-cloud environments. Founded in 2020 by Assaf Rappaport and team (who previously sold Adallom to Microsoft), Wiz grew to over $500 million in annual recurring revenue in under four years — making it one of the fastest-growing enterprise software companies ever.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.forrester.com/blogs/google-to-acquire-cnapp-specialist-unicorn-wiz-for-32bn/" rel="noopener noreferrer"&gt;Forrester's analysis&lt;/a&gt;, the $32 billion price tag surpasses Cisco's $28 billion Splunk acquisition in 2024 as the largest cybersecurity deal on record.&lt;/p&gt;

&lt;p&gt;Here's what Wiz actually does that matters to network engineers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Wiz Capability&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;th&gt;Network Engineering Relevance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Security Posture Management (CSPM)&lt;/td&gt;
&lt;td&gt;Continuously scans cloud configs for misconfigurations&lt;/td&gt;
&lt;td&gt;Catches open security groups, overly permissive NACLs, public-facing resources you didn't intend&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Workload Protection (CWPP)&lt;/td&gt;
&lt;td&gt;Detects vulnerabilities in running workloads&lt;/td&gt;
&lt;td&gt;Identifies exposed services across VPC/VNet boundaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network Exposure Analysis&lt;/td&gt;
&lt;td&gt;Maps cloud network paths and identifies reachable resources&lt;/td&gt;
&lt;td&gt;Shows which resources are internet-facing through actual network path analysis, not just security group rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Infrastructure Entitlement Management (CIEM)&lt;/td&gt;
&lt;td&gt;Maps IAM permissions and identifies excessive access&lt;/td&gt;
&lt;td&gt;Reveals service accounts that can modify network configurations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes Security Posture (KSPM)&lt;/td&gt;
&lt;td&gt;Secures Kubernetes clusters and container networks&lt;/td&gt;
&lt;td&gt;Flags CNI misconfigurations, exposed services, and network policy gaps&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The critical differentiator: Wiz is &lt;strong&gt;agentless&lt;/strong&gt;. It connects via cloud APIs and scans your entire environment without deploying software to every workload. For network engineers who've fought the battle of getting agents deployed and maintained on thousands of endpoints, this architecture is significant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is This the Largest Cybersecurity Deal in History?
&lt;/h2&gt;

&lt;p&gt;The $32 billion price tag reflects the reality that cloud security has become the most critical — and most fragmented — part of enterprise security. According to &lt;a href="https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/wiz-acquisition/" rel="noopener noreferrer"&gt;Google's announcement&lt;/a&gt;, Google Cloud CEO Thomas Kurian framed the acquisition as making "security a catalyst for innovation, not a barrier."&lt;/p&gt;

&lt;p&gt;Several factors drove the price:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market timing.&lt;/strong&gt; Cloud misconfigurations are the leading cause of cloud security incidents, responsible for approximately 80% of breaches according to Gartner. Every enterprise migrating to cloud needs CSPM, and most have inadequate tooling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-cloud reality.&lt;/strong&gt; According to &lt;a href="https://www.crn.com/news/cloud/2026/google-closes-32b-wiz-acquisition-aws-microsoft-clients-will-still-be-supported" rel="noopener noreferrer"&gt;CRN's reporting&lt;/a&gt;, Wiz will continue supporting AWS, Azure, and Oracle Cloud after the acquisition. This is crucial — Google is buying a tool that monitors competitors' clouds. Rappaport stated: "We remain committed to our open approach, ensuring Wiz continues to support all major cloud and code environments."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI security.&lt;/strong&gt; The combined Google Cloud + Wiz platform will detect threats created using AI models, protect against threats to AI models, and use AI to help security professionals hunt threats. As AI workloads explode across cloud infrastructure, securing them becomes a hyperscaler-scale problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Competitive positioning.&lt;/strong&gt; Google Cloud trails AWS and Azure in market share. Embedding best-in-class security directly into the platform is a differentiation play — GCP becomes the cloud with built-in Wiz.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does This Change Multi-Cloud Security for Network Engineers?
&lt;/h2&gt;

&lt;p&gt;If you manage network infrastructure across &lt;a href="https://firstpasslab.com/blog/2026-03-08-multi-cloud-networking-aws-transit-gateway-azure-vwan-gcp-ncc/" rel="noopener noreferrer"&gt;AWS VPC, Azure vWAN, or GCP NCC&lt;/a&gt;, the Google-Wiz acquisition changes your security toolchain dynamics in three ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Cloud Security Posture Becomes a Network Team Responsibility
&lt;/h3&gt;

&lt;p&gt;Traditionally, cloud security posture management lived with the security team or DevSecOps. But CNAPP platforms like Wiz analyze &lt;strong&gt;network exposure&lt;/strong&gt; — which security groups allow traffic, which resources are internet-reachable, which VPC peering connections create unintended lateral movement paths.&lt;/p&gt;

&lt;p&gt;This is network engineering work wearing a security hat. Here's what a CNAPP network exposure finding looks like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Finding: RDS instance db-prod-users is reachable from the internet
Path:  Internet → IGW → Public Subnet SG (port 3306 open) → RDS
Risk:  Critical — database directly exposed via misconfigured security group
Fix:   Remove 0.0.0.0/0 ingress rule on sg-0a1b2c3d, add private subnet route
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Network engineers already understand routing, subnets, and access control. CNAPP just surfaces the misconfigurations you'd find during a manual audit — but continuously and at scale.&lt;/p&gt;
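
&lt;p&gt;For a sense of what the manual-audit equivalent looks like, here is a hedged &lt;code&gt;boto3&lt;/code&gt; sketch that flags security groups with 0.0.0.0/0 ingress in one AWS account. A CNAPP does the harder version of this continuously: full path analysis across accounts, routing, and attachments.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Manual-audit sketch: flag security groups open to the internet.
# Covers one account/region and one rule type; CNAPP path analysis
# also factors in routing, gateways, and what is actually attached.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg.get('GroupName')}): "
                      f"open ingress {rule.get('FromPort')}-{rule.get('ToPort')}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;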

&lt;h3&gt;
  
  
  2. Google Cloud Gets a Competitive Security Advantage
&lt;/h3&gt;

&lt;p&gt;The hyperscaler security landscape before and after the acquisition:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hyperscaler&lt;/th&gt;
&lt;th&gt;Native Security Platform&lt;/th&gt;
&lt;th&gt;CNAPP Integration&lt;/th&gt;
&lt;th&gt;Network Security&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;Security Hub + GuardDuty + Inspector&lt;/td&gt;
&lt;td&gt;Third-party CNAPP (CrowdStrike, Palo Alto)&lt;/td&gt;
&lt;td&gt;VPC Flow Logs, Network Firewall, WAF&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure&lt;/td&gt;
&lt;td&gt;Defender for Cloud + Sentinel&lt;/td&gt;
&lt;td&gt;Partially integrated CSPM&lt;/td&gt;
&lt;td&gt;NSG Flow Logs, Azure Firewall, Front Door&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP (post-Wiz)&lt;/td&gt;
&lt;td&gt;Security Command Center + &lt;strong&gt;Wiz CNAPP&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;First-party CNAPP&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VPC Flow Logs, Cloud Armor, Cloud IDS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle Cloud&lt;/td&gt;
&lt;td&gt;Cloud Guard&lt;/td&gt;
&lt;td&gt;Third-party&lt;/td&gt;
&lt;td&gt;NSG, Web Application Firewall&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;GCP is now the only hyperscaler with a first-party, enterprise-grade CNAPP built into the platform. For network engineers evaluating cloud platforms, this changes the security assessment matrix. GCP's native security tooling jumps from "adequate" to "best-in-class" overnight.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Multi-Cloud Security Gets More Complex, Not Simpler
&lt;/h3&gt;

&lt;p&gt;Here's the paradox: Wiz promises to remain multi-cloud, but it's now owned by a competitor. If you run a multi-cloud environment with AWS as primary and GCP secondary, you're now sending your AWS network topology data through a Google-owned security scanner.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.sdxcentral.com/news/google-cloud-closes-wiz-acquisition-begins-platform-player-brawl/" rel="noopener noreferrer"&gt;SDxCentral's analysis&lt;/a&gt;, this acquisition "formalizes a trend that has been building across the cloud workload security market: hyperscalers increasingly want tighter control over the security stack around their platforms."&lt;/p&gt;

&lt;p&gt;For network engineers managing multi-cloud connectivity, the practical implication is clear: evaluate whether your organization is comfortable with Google-owned tooling scanning non-Google infrastructure. If not, alternatives like CrowdStrike Falcon Cloud Security, Palo Alto Prisma Cloud, and Orca Security still offer independent multi-cloud CNAPP.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is CNAPP and How Does It Differ From Traditional Network Security?
&lt;/h2&gt;

&lt;p&gt;CNAPP consolidates capabilities that network engineers previously handled with separate tools. According to &lt;a href="https://www.wiz.io/academy/cloud-security/cnapp-benefits" rel="noopener noreferrer"&gt;Wiz's documentation&lt;/a&gt;, a CNAPP platform unifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CSPM&lt;/strong&gt; (Cloud Security Posture Management) — continuous compliance and misconfiguration detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CWPP&lt;/strong&gt; (Cloud Workload Protection Platform) — vulnerability scanning for running workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CIEM&lt;/strong&gt; (Cloud Infrastructure Entitlement Management) — identity and access control analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KSPM&lt;/strong&gt; (Kubernetes Security Posture Management) — container and Kubernetes security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDR&lt;/strong&gt; (Cloud Detection and Response) — real-time threat detection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For comparison with traditional network security tools:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional Network Security&lt;/th&gt;
&lt;th&gt;Cloud-Native Equivalent (CNAPP)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Firewall rules audit&lt;/td&gt;
&lt;td&gt;Security group / NACL posture check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vulnerability scanner (Nessus)&lt;/td&gt;
&lt;td&gt;Agentless workload scanning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network access control (Cisco ISE)&lt;/td&gt;
&lt;td&gt;Cloud IAM entitlement analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SIEM correlation&lt;/td&gt;
&lt;td&gt;Cloud detection and response&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Penetration test / network path analysis&lt;/td&gt;
&lt;td&gt;Automated network exposure analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key difference: CNAPP operates at &lt;strong&gt;API level&lt;/strong&gt;, not packet level. It doesn't inspect traffic — it reads cloud configurations and maps exposure. This is a fundamentally different security model from the perimeter-based approach that most network engineers trained on.&lt;/p&gt;

&lt;p&gt;For teams building zero-trust cloud architectures, understanding CNAPP is increasingly relevant. These platforms turn architecture principles into operational controls across cloud networking, identity, workload posture, and exposure analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Skills Should Network Engineers Develop?
&lt;/h2&gt;

&lt;p&gt;The Google-Wiz deal accelerates the convergence of networking and cloud security. Network engineers who position themselves at this intersection will capture the highest-value roles. Here's what to focus on:&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Security Posture Management (CSPM)
&lt;/h3&gt;

&lt;p&gt;Learn to read and interpret cloud security posture reports. Understand the relationship between VPC architecture, security groups, NACLs, and actual network exposure. This is the cloud equivalent of understanding firewall rule ordering and NAT traversal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code (IaC) Security
&lt;/h3&gt;

&lt;p&gt;Wiz and similar CNAPP platforms scan Terraform, CloudFormation, and Pulumi templates for security misconfigurations &lt;strong&gt;before deployment&lt;/strong&gt;. Network engineers who can write secure IaC templates are worth more than those who fix misconfigurations after deployment.&lt;/p&gt;
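
&lt;p&gt;A minimal pre-deployment check can be surprisingly small. The sketch below scans the JSON output of &lt;code&gt;terraform show -json&lt;/code&gt; for AWS security group rules open to the world. The plan-JSON layout can vary by Terraform version, so treat the field names as assumptions to verify.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IaC pre-deployment sketch: fail CI if a plan opens ingress to the world.
# Field names follow common plan-JSON output ("resource_changes",
# "change.after"); verify them against your Terraform version.
import json
import sys

plan = json.load(open(sys.argv[1]))  # terraform show -json tfplan &amp;gt; plan.json

findings = []
for rc in plan.get("resource_changes", []):
    if rc.get("type") != "aws_security_group":
        continue
    after = (rc.get("change") or {}).get("after") or {}
    for rule in after.get("ingress") or []:
        if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
            findings.append(f"{rc['address']}: ingress open to 0.0.0.0/0")

for finding in findings:
    print("FAIL:", finding)
sys.exit(1 if findings else 0)  # non-zero exit blocks the pipeline stage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;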

&lt;h3&gt;
  
  
  Multi-Cloud Network Architecture
&lt;/h3&gt;

&lt;p&gt;The ability to design network architectures that are secure across AWS, Azure, and GCP simultaneously is rare and high-value. Understanding each cloud's native network security controls — and how they interact with CNAPP scanning — is the sweet spot. Our &lt;a href="https://firstpasslab.com/blog/2026-03-08-multi-cloud-networking-aws-transit-gateway-azure-vwan-gcp-ncc/" rel="noopener noreferrer"&gt;multi-cloud networking comparison&lt;/a&gt; covers the networking fundamentals.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud-Native Identity and Access Management
&lt;/h3&gt;

&lt;p&gt;Network engineers traditionally think in terms of IP addresses and ports. Cloud security thinks in terms of identities and permissions. Learning IAM policy analysis — understanding which service accounts can modify route tables, create peering connections, or open security groups — bridges the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does This Mean for the Cloud Security Market?
&lt;/h2&gt;

&lt;p&gt;The $32 billion price tag validates cloud security as a foundational market, not a niche. Here's the competitive landscape post-acquisition:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;CNAPP Approach&lt;/th&gt;
&lt;th&gt;Multi-Cloud&lt;/th&gt;
&lt;th&gt;Acquisition Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Wiz (Google)&lt;/td&gt;
&lt;td&gt;Agentless, graph-based&lt;/td&gt;
&lt;td&gt;AWS, Azure, GCP, OCI&lt;/td&gt;
&lt;td&gt;Acquired ($32B)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CrowdStrike&lt;/td&gt;
&lt;td&gt;Agent + agentless hybrid&lt;/td&gt;
&lt;td&gt;AWS, Azure, GCP&lt;/td&gt;
&lt;td&gt;Independent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Palo Alto (Prisma Cloud)&lt;/td&gt;
&lt;td&gt;Agent-based, code-to-cloud&lt;/td&gt;
&lt;td&gt;AWS, Azure, GCP, OCI&lt;/td&gt;
&lt;td&gt;Independent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orca Security&lt;/td&gt;
&lt;td&gt;Agentless, SideScanning&lt;/td&gt;
&lt;td&gt;AWS, Azure, GCP, Alibaba&lt;/td&gt;
&lt;td&gt;Independent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft Defender for Cloud&lt;/td&gt;
&lt;td&gt;Native Azure + multi-cloud&lt;/td&gt;
&lt;td&gt;Azure-first, AWS/GCP supported&lt;/td&gt;
&lt;td&gt;Hyperscaler-owned&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Check Point CloudGuard&lt;/td&gt;
&lt;td&gt;Agent-based, integrates with Wiz&lt;/td&gt;
&lt;td&gt;AWS, Azure, GCP&lt;/td&gt;
&lt;td&gt;Independent (Wiz integration)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The acquisition creates pressure on AWS and Azure to either build or buy comparable CNAPP capabilities. AWS has been incrementally enhancing Security Hub, and Microsoft has Defender for Cloud, but neither matches Wiz's depth in agentless multi-cloud scanning.&lt;/p&gt;

&lt;p&gt;For network engineers, this consolidation means cloud security tooling will increasingly be bundled with cloud infrastructure — similar to how SD-WAN security features got absorbed into SASE platforms. Understanding the native security capabilities of each cloud becomes as important as understanding their networking primitives.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does the Regulatory Approval Process Affect You?
&lt;/h2&gt;

&lt;p&gt;The deal took a full year to close, from announcement in March 2025 to completion on March 11, 2026. The EU specifically evaluated whether the acquisition would reduce competition in cloud security.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.crn.com/news/cloud/2025/google-confirms-acquisition-of-wiz-for-32-billion-google-cloud-has-bold-plans" rel="noopener noreferrer"&gt;CRN's reporting&lt;/a&gt;, Google faced a $3.2 billion breakup fee if the deal fell through. The EU ultimately approved it, concluding that customers had "credible alternatives" in cloud security.&lt;/p&gt;

&lt;p&gt;The practical takeaway: if your organization uses Wiz today, expect integration changes over the next 12-18 months. Wiz's roadmap will increasingly prioritize GCP-native integrations while maintaining multi-cloud support. If you're selecting a CNAPP vendor now, factor in Google's ownership when evaluating long-term vendor independence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How much did Google pay for Wiz?
&lt;/h3&gt;

&lt;p&gt;Google paid $32 billion in cash for Wiz, making it the largest cybersecurity acquisition in history and Google's biggest acquisition ever. The deal surpasses Cisco's $28 billion Splunk acquisition in 2024. It was announced in March 2025 and closed on March 11, 2026 after EU regulatory approval.&lt;/p&gt;

&lt;h3&gt;
  
  
  Will Wiz still support AWS and Azure after the Google acquisition?
&lt;/h3&gt;

&lt;p&gt;Yes. Wiz CEO Assaf Rappaport confirmed the platform will maintain its multi-cloud commitment, continuing to support AWS, Azure, GCP, and Oracle Cloud. Google Cloud CEO Thomas Kurian emphasized the company's "commitment to openness." However, expect deeper GCP integrations to develop over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is CNAPP and why should network engineers care?
&lt;/h3&gt;

&lt;p&gt;CNAPP (Cloud-Native Application Protection Platform) unifies cloud security posture management (CSPM), workload protection (CWPP), identity entitlement management (CIEM), and network exposure analysis in a single platform. For network engineers, CNAPP replaces fragmented security tools with unified visibility across cloud networks — and network exposure analysis is fundamentally a networking discipline.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does the Google-Wiz deal affect network and platform teams?
&lt;/h3&gt;

&lt;p&gt;Cloud security posture management is increasingly part of day-to-day work in hybrid and multi-cloud roles. Understanding CNAPP capabilities, cloud network exposure analysis, IaC security, and multi-cloud architecture helps network, platform, and security teams operate from the same map instead of separate tool silos.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This Dev.to version was adapted with AI assistance from FirstPassLab source material and reviewed before publication. Original canonical article: &lt;a href="https://firstpasslab.com/blog/2026-03-12-google-wiz-32b-acquisition-cloud-network-security-engineer-guide/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-03-12-google-wiz-32b-acquisition-cloud-network-security-engineer-guide/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cisco SD-WAN Is Under Active Attack: 3 Exploited Catalyst Manager CVEs and the Patch Plan</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Sun, 12 Apr 2026 16:53:49 +0000</pubDate>
      <link>https://dev.to/firstpasslab/cisco-sd-wan-is-under-active-attack-3-exploited-catalyst-manager-cves-and-the-patch-plan-1d3o</link>
      <guid>https://dev.to/firstpasslab/cisco-sd-wan-is-under-active-attack-3-exploited-catalyst-manager-cves-and-the-patch-plan-1d3o</guid>
      <description>&lt;p&gt;If you run Cisco Catalyst SD-WAN Manager, this is a patch-now situation.&lt;/p&gt;

&lt;p&gt;Cisco has now confirmed that &lt;strong&gt;three SD-WAN vulnerabilities are being actively exploited in the wild&lt;/strong&gt;, and the important detail is that attackers are chaining bugs together, not just using the headline CVSS 10.0 issue. That means a "medium" or "high" score on its own is not a comfort signal here.&lt;/p&gt;

&lt;p&gt;In this post, I’ll break down the attack chain, the affected releases, the fixed versions, and the hardening steps worth doing before your next maintenance window.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 Cisco SD-WAN CVEs are now confirmed exploited in the wild&lt;/li&gt;
&lt;li&gt;Attackers are chaining auth bypass, credential exposure, and file overwrite bugs&lt;/li&gt;
&lt;li&gt;There are &lt;strong&gt;no workarounds&lt;/strong&gt; that fully fix this; patching is the answer&lt;/li&gt;
&lt;li&gt;If your SD-WAN control plane was internet exposed, assume higher risk and investigate for compromise&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What happened
&lt;/h2&gt;

&lt;p&gt;The timeline moved fast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;February 25, 2026&lt;/strong&gt;: Cisco shipped patches for five Catalyst SD-WAN Manager vulnerabilities and disclosed that &lt;strong&gt;CVE-2026-20127&lt;/strong&gt; was already being exploited as a zero day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;February 25, 2026&lt;/strong&gt;: CISA issued &lt;strong&gt;Emergency Directive ED 26-03&lt;/strong&gt; telling federal agencies to patch immediately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;March 5, 2026&lt;/strong&gt;: Cisco updated the advisory and confirmed that &lt;strong&gt;CVE-2026-20128&lt;/strong&gt; and &lt;strong&gt;CVE-2026-20122&lt;/strong&gt; were also being exploited in the wild.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So this stopped being a single-bug story very quickly. It is a campaign against SD-WAN management infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five SD-WAN vulnerabilities at a glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CVE&lt;/th&gt;
&lt;th&gt;Severity&lt;/th&gt;
&lt;th&gt;CVSS&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Exploited?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CVE-2026-20129&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;td&gt;9.8&lt;/td&gt;
&lt;td&gt;API authentication bypass leading to netadmin access&lt;/td&gt;
&lt;td&gt;Not yet confirmed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CVE-2026-20126&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;REST API privilege escalation to root&lt;/td&gt;
&lt;td&gt;Not yet confirmed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CVE-2026-20133&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;7.5&lt;/td&gt;
&lt;td&gt;Information disclosure via filesystem access&lt;/td&gt;
&lt;td&gt;Not yet confirmed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CVE-2026-20122&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;7.1&lt;/td&gt;
&lt;td&gt;Arbitrary file overwrite via API leading to vManage privileges&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CVE-2026-20128&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;td&gt;DCA credential exposure that enables lateral movement&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One of the useful reminders here is that &lt;strong&gt;attackers do not care about CVSS in isolation&lt;/strong&gt;. They care about whether one flaw unlocks the next one.&lt;/p&gt;

&lt;p&gt;A medium-severity credential leak becomes a serious incident if it gives an attacker a clean path to move across the SD-WAN control plane.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the attack chains work
&lt;/h2&gt;

&lt;p&gt;Based on Cisco Talos reporting and Cisco’s advisory updates, there are two important chains to understand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chain 1: the original zero-day path
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Find an internet-exposed SD-WAN Manager or Controller
2. Exploit CVE-2026-20127 (authentication bypass)
3. Chain with an older privilege-escalation bug
4. Escalate to root
5. Modify scripts or services for persistence
6. Maintain access to the control plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Chain 2: the newer March 2026 path
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Exploit CVE-2026-20128 (DCA credential exposure)
2. Reuse those credentials against other SD-WAN nodes
3. Exploit CVE-2026-20122 (arbitrary file overwrite)
4. Gain vManage-level access
5. Potentially chain again for higher privileges
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is why patching only the loudest bug is not enough. The real risk is the &lt;strong&gt;kill chain&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who is behind it
&lt;/h2&gt;

&lt;p&gt;Cisco Talos tracks the actor exploiting the original zero-day as &lt;strong&gt;UAT-8616&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What matters operationally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They appear to have targeted SD-WAN infrastructure for an extended period&lt;/li&gt;
&lt;li&gt;They focus on &lt;strong&gt;control-plane compromise&lt;/strong&gt;, not just one-off access&lt;/li&gt;
&lt;li&gt;They have used persistence techniques, including modifying system scripts&lt;/li&gt;
&lt;li&gt;Reporting indicates they may also downgrade software to reintroduce known-vulnerable states&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point is especially nasty because it means version drift and software integrity suddenly matter a lot more than teams often assume.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do right now
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Check your version
&lt;/h3&gt;

&lt;p&gt;On vManage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vmanage# show version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are not on a fixed release, you have work to do.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Upgrade to a fixed release
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Current train&lt;/th&gt;
&lt;th&gt;Fixed release&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;20.9.x&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;20.9.8.2&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20.12.5 / 20.12.6&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;20.12.5.3&lt;/strong&gt; or &lt;strong&gt;20.12.6.1&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20.13 / 20.14 / 20.15&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;20.15.4.2&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20.16 / 20.18&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;20.18.2.1&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are on something older than 20.9, plan a supported upgrade path first.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Harden while you patch
&lt;/h3&gt;

&lt;p&gt;These are the practical steps I would treat as baseline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Block direct internet access&lt;/strong&gt; to SD-WAN Manager and Controller whenever possible&lt;/li&gt;
&lt;li&gt;If exposure is unavoidable, &lt;strong&gt;restrict access to known source IPs only&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disable HTTP&lt;/strong&gt; on the web UI and use HTTPS only&lt;/li&gt;
&lt;li&gt;Put management components &lt;strong&gt;behind a firewall with tight filtering&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ship logs to an external SIEM&lt;/strong&gt; because local logs may not be trustworthy after compromise&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rotate admin credentials&lt;/strong&gt; and verify role-based access assignments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitor for unauthorized software downgrades&lt;/strong&gt; (a sketch follows this list)&lt;/li&gt;
&lt;/ul&gt;
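
&lt;p&gt;That last item deserves a sketch, because most monitoring only watches for missing upgrades. The idea is simple: record the expected release per controller and alert if the observed version ever moves backward. How you collect the observed version (API, &lt;code&gt;show version&lt;/code&gt;, inventory tooling) is environment-specific; the baseline below is illustrative.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Downgrade-detection sketch: alert if a controller's software version
# ever moves backward from the recorded baseline.
# Version collection is environment-specific; the baseline is illustrative.

def version_tuple(version: str) -&amp;gt; tuple:
    return tuple(int(part) for part in version.split("."))

BASELINE = {"vmanage-01": "20.15.4.2"}  # expected fixed release per node

def check(node: str, observed: str) -&amp;gt; None:
    expected = BASELINE[node]
    if version_tuple(observed) &amp;lt; version_tuple(expected):
        print(f"ALERT {node}: {observed} is older than baseline {expected}, "
              f"possible unauthorized downgrade")
    else:
        print(f"OK {node}: {observed}")

check("vmanage-01", "20.15.4.2")
check("vmanage-01", "20.12.5")  # should alert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;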

&lt;h3&gt;
  
  
  4. Check for compromise, not just exposure
&lt;/h3&gt;

&lt;p&gt;If your control plane was internet exposed at any point, I would assume elevated risk and validate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unusual API authentication events&lt;/li&gt;
&lt;li&gt;Unexpected local or admin accounts&lt;/li&gt;
&lt;li&gt;Script or service modifications on vManage/vSmart nodes&lt;/li&gt;
&lt;li&gt;Unplanned configuration changes in the fabric&lt;/li&gt;
&lt;li&gt;DCA-related access patterns that do not match normal operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why SD-WAN control planes are such attractive targets
&lt;/h2&gt;

&lt;p&gt;Compromising a branch router is bad.&lt;/p&gt;

&lt;p&gt;Compromising the &lt;strong&gt;management and policy layer&lt;/strong&gt; of an SD-WAN fabric is much worse.&lt;/p&gt;

&lt;p&gt;An attacker with control-plane access can potentially influence:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;routing policy&lt;/li&gt;
&lt;li&gt;traffic engineering&lt;/li&gt;
&lt;li&gt;segmentation behavior&lt;/li&gt;
&lt;li&gt;controller trust relationships&lt;/li&gt;
&lt;li&gt;security policy distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why these vulnerabilities matter beyond patch management. They are a reminder that the SD-WAN controller stack should be protected like a crown-jewel system, not managed like a regular internal app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineering lessons worth carrying forward
&lt;/h2&gt;

&lt;p&gt;A few things this incident reinforces:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Management planes need zero-trust thinking too.&lt;/strong&gt;&lt;br&gt;
If your controller is reachable from too many places, you are giving attackers a simpler first hop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CVSS is not enough for prioritization.&lt;/strong&gt;&lt;br&gt;
A medium bug that exposes credentials can outrank a higher-severity bug if it fits a live chain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;External logging is mandatory for critical control planes.&lt;/strong&gt;&lt;br&gt;
If local telemetry can be tampered with, your incident response starts blind.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version integrity matters.&lt;/strong&gt;&lt;br&gt;
Track unexpected downgrades, not just upgrades.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Patching windows for management infrastructure should be shorter.&lt;/strong&gt;&lt;br&gt;
The blast radius is too large to treat SD-WAN control components like low-priority admin systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cisco security advisory: &lt;a href="https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwan-authbp-qwCX8D4v" rel="noopener noreferrer"&gt;https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwan-authbp-qwCX8D4v&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Cisco SD-WAN upgrade matrix: &lt;a href="https://www.cisco.com/c/dam/en/us/td/docs/Website/enterprise/catalyst-sdwan-upgrade-matrix/index.html" rel="noopener noreferrer"&gt;https://www.cisco.com/c/dam/en/us/td/docs/Website/enterprise/catalyst-sdwan-upgrade-matrix/index.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;CISA advisory context: &lt;a href="https://www.cisecurity.org/advisory/multiple-vulnerabilities-in-cisco-catalyst-sd-wan-products-could-allow-for-authentication-bypass_2026-016" rel="noopener noreferrer"&gt;https://www.cisecurity.org/advisory/multiple-vulnerabilities-in-cisco-catalyst-sd-wan-products-could-allow-for-authentication-bypass_2026-016&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;SecurityWeek coverage: &lt;a href="https://www.securityweek.com/cisco-warns-of-more-catalyst-sd-wan-flaws-exploited-in-the-wild/" rel="noopener noreferrer"&gt;https://www.securityweek.com/cisco-warns-of-more-catalyst-sd-wan-flaws-exploited-in-the-wild/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want the original FirstPassLab version with canonical attribution, it’s here:&lt;br&gt;
&lt;a href="https://firstpasslab.com/blog/2026-03-05-cisco-sdwan-more-flaws-exploited-wild-patch-now/" rel="noopener noreferrer"&gt;https://firstpasslab.com/blog/2026-03-05-cisco-sdwan-more-flaws-exploited-wild-patch-now/&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This article was adapted with AI assistance from an original FirstPassLab article and reviewed before publication.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
      <category>cisco</category>
      <category>devops</category>
    </item>
    <item>
      <title>Zero Trust Is Killing the Perimeter Playbook, Not Network Security Engineering</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Sat, 11 Apr 2026 22:54:17 +0000</pubDate>
      <link>https://dev.to/firstpasslab/zero-trust-is-killing-the-perimeter-playbook-not-network-security-engineering-1fgk</link>
      <guid>https://dev.to/firstpasslab/zero-trust-is-killing-the-perimeter-playbook-not-network-security-engineering-1fgk</guid>
      <description>&lt;p&gt;If you still treat the firewall as the center of your security architecture, you are already behind.&lt;/p&gt;

&lt;p&gt;Zero trust is not killing network security engineering. It is killing the old perimeter-first playbook: static ACLs, VPN-only remote access, and segmentation models that depend on where a device happens to land. The teams getting ahead are shifting toward identity, posture, micro-segmentation, and automation.&lt;/p&gt;

&lt;p&gt;In other words, the valuable skills are moving up the stack, not disappearing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the perimeter model is breaking down
&lt;/h2&gt;

&lt;p&gt;The old model assumed a clean boundary between inside and outside. That assumption is hard to defend in 2026 for three reasons.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Remote and hybrid work made “inside” fuzzy
&lt;/h3&gt;

&lt;p&gt;Users now connect from home networks, unmanaged Wi-Fi, and temporary workspaces. Trusting traffic because it came through a VPN tunnel is much weaker than validating who the user is, what device they are on, and whether it still meets policy.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Cloud moved the critical assets
&lt;/h3&gt;

&lt;p&gt;When apps, APIs, and data live across AWS, Azure, GCP, and SaaS platforms, the firewall is no longer sitting in front of the crown jewels. A lot of the real control points are now identity systems, policy engines, and service-to-service trust boundaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Attackers care about lateral movement
&lt;/h3&gt;

&lt;p&gt;Perimeter controls might stop some initial access, but they do very little once an attacker lands somewhere legitimate. The practical zero trust question is not “did they get in?” It is “what can they reach next?”&lt;/p&gt;

&lt;p&gt;That shift changes what network and security engineers need to be good at.&lt;/p&gt;

&lt;h2&gt;
  
  
  The skills that are losing value fastest
&lt;/h2&gt;

&lt;p&gt;These skills are still needed to operate brownfield and legacy networks, but they are no longer enough on their own.&lt;/p&gt;

&lt;h3&gt;
  
  
  Static ACL thinking
&lt;/h3&gt;

&lt;p&gt;Permit and deny lists tied to subnets and VLANs do not express user identity, device trust, risk, or application context very well. They are still part of the toolbox, but not the architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  VPN as the primary remote access model
&lt;/h3&gt;

&lt;p&gt;Traditional VPN remains important for some use cases, especially admin access and site-to-site connectivity. But for workforce access to internal apps, the center of gravity is moving toward ZTNA and application-aware access controls.&lt;/p&gt;

&lt;h3&gt;
  
  
  Segmentation based only on topology
&lt;/h3&gt;

&lt;p&gt;If your policy depends mostly on a device being in the “right VLAN,” you have a brittle model. Modern environments need segmentation that follows users, workloads, and device state.&lt;/p&gt;

&lt;h2&gt;
  
  
  The skills that matter more now
&lt;/h2&gt;

&lt;p&gt;This is where the opportunity is. Zero trust increases the value of engineers who can connect network controls, security controls, and automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Identity-driven access control
&lt;/h3&gt;

&lt;p&gt;Identity is becoming the first policy primitive.&lt;/p&gt;

&lt;p&gt;That includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;802.1X and NAC design&lt;/li&gt;
&lt;li&gt;posture assessment&lt;/li&gt;
&lt;li&gt;device profiling&lt;/li&gt;
&lt;li&gt;conditional authorization&lt;/li&gt;
&lt;li&gt;integration with identity providers and MFA systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Cisco-heavy shops, this usually means understanding ISE deeply enough to design policy, troubleshoot edge cases, and explain where it fits and where it does not.&lt;/p&gt;

&lt;p&gt;A useful mental model is this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Zero trust principle&lt;/th&gt;
&lt;th&gt;What engineers actually build&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Verify explicitly&lt;/td&gt;
&lt;td&gt;802.1X, posture, MFA, certificate-based access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Least privilege&lt;/td&gt;
&lt;td&gt;role-based access, dACLs, SGTs, app-specific policy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Assume breach&lt;/td&gt;
&lt;td&gt;containment, limited east-west access, rapid quarantine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Continuous evaluation&lt;/td&gt;
&lt;td&gt;posture rechecks, adaptive policy, telemetry-driven response&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Micro-segmentation
&lt;/h3&gt;

&lt;p&gt;This is the part many network teams underestimate.&lt;/p&gt;

&lt;p&gt;Micro-segmentation is how you make “assume breach” real. Once a user or endpoint is authenticated, the system still needs to limit where it can move. In Cisco environments that often means Security Group Tags and TrustSec-style policy. In cloud and hybrid environments it may mean workload identity, service policy, and host-based enforcement.&lt;/p&gt;

&lt;p&gt;The important design shift is that segmentation follows identity and context, not just IP addressing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Detection and response, not just prevention
&lt;/h3&gt;

&lt;p&gt;Firewalls do not go away in zero trust, but their role changes. They become one enforcement point among several.&lt;/p&gt;

&lt;p&gt;The engineers who stand out are the ones who can connect signals across systems, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NAC or identity context&lt;/li&gt;
&lt;li&gt;endpoint posture or EDR alerts&lt;/li&gt;
&lt;li&gt;firewall telemetry&lt;/li&gt;
&lt;li&gt;DNS or proxy activity&lt;/li&gt;
&lt;li&gt;automated quarantine workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is much closer to security engineering than to traditional “box-by-box firewall administration.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Security automation
&lt;/h3&gt;

&lt;p&gt;Manual policy changes do not scale when access decisions depend on users, device health, application sensitivity, and environment state.&lt;/p&gt;

&lt;p&gt;This is why automation matters more, not less:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pushing NAC and policy changes through APIs&lt;/li&gt;
&lt;li&gt;automating quarantine or exception workflows&lt;/li&gt;
&lt;li&gt;validating segmentation changes before rollout&lt;/li&gt;
&lt;li&gt;keeping policy consistent across campus, branch, data center, and cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can bridge identity, segmentation, and automation, you are much harder to replace than someone who only manages perimeter rules.&lt;/p&gt;
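
&lt;p&gt;As a sketch of what an automated quarantine workflow can look like in a Cisco ISE shop: ISE exposes Adaptive Network Control through its ERS REST API, so a response playbook can apply a pre-built quarantine policy to an endpoint by MAC address. The hostname, credentials, MAC, and policy name below are placeholders, and payload details vary by ISE version, so treat this as a shape rather than a drop-in script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Apply an ANC policy (a pre-created "Quarantine" policy) to an endpoint
# via the ISE ERS API. Host, credentials, MAC, and policy are placeholders.
curl -k -u admin:password \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -X PUT https://ise.example.com:9060/ers/config/ancendpoint/apply \
  -d '{
        "OperationAdditionalData": {
          "additionalData": [
            { "name": "macAddress", "value": "AA:BB:CC:DD:EE:FF" },
            { "name": "policyName", "value": "Quarantine" }
          ]
        }
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;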

&lt;h2&gt;
  
  
  An important reality check: NAC is not full zero trust
&lt;/h2&gt;

&lt;p&gt;A lot of teams overstate what NAC platforms can do.&lt;/p&gt;

&lt;p&gt;Tools like Cisco ISE are valuable because they provide strong building blocks: authentication, profiling, posture, policy, and segmentation hooks. But they are not the entire zero trust architecture. You still need application-aware access, cloud-native controls, telemetry, response workflows, and sane operational design.&lt;/p&gt;

&lt;p&gt;That nuance matters. The best engineers are not the ones claiming a single platform solved zero trust. They are the ones who know exactly where each control starts and stops.&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical migration map for engineers
&lt;/h2&gt;

&lt;p&gt;If you are deciding what to learn next, here is the rough shift.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Older emphasis&lt;/th&gt;
&lt;th&gt;Higher-value replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;static subnet ACLs&lt;/td&gt;
&lt;td&gt;identity-aware policy and dynamic authorization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VLAN-only segmentation&lt;/td&gt;
&lt;td&gt;micro-segmentation tied to user, device, or workload context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VPN-first user access&lt;/td&gt;
&lt;td&gt;ZTNA-style app access plus stronger identity controls&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;manual firewall workflows&lt;/td&gt;
&lt;td&gt;API-driven policy and response automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;perimeter trust assumptions&lt;/td&gt;
&lt;td&gt;continuous verification and limited blast radius&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That does not mean you throw away firewall, routing, or switching knowledge. It means those fundamentals now support a more identity-centric architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for network security engineers in 2026
&lt;/h2&gt;

&lt;p&gt;The market is rewarding engineers who can answer questions like these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do we enforce least privilege after successful authentication?&lt;/li&gt;
&lt;li&gt;How do we quarantine a compromised endpoint automatically?&lt;/li&gt;
&lt;li&gt;How do we stop east-west movement without redesigning the whole network?&lt;/li&gt;
&lt;li&gt;How do we keep access policy consistent across campus, branch, and cloud?&lt;/li&gt;
&lt;li&gt;How do we prove that the policy is doing what we think it is doing?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are architecture and operations questions, not just certification questions.&lt;/p&gt;

&lt;p&gt;If you already know routing, switching, VPNs, and firewall behavior, you are not starting over. You are adding the controls that make those foundations relevant in a zero trust world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Zero trust is not making network security engineers obsolete. It is raising the bar.&lt;/p&gt;

&lt;p&gt;The skills losing value are the ones built around static trust and perimeter assumptions. The skills gaining value are identity, posture, micro-segmentation, response, and automation.&lt;/p&gt;

&lt;p&gt;That is a good trade if you like real engineering work.&lt;/p&gt;

&lt;p&gt;More depth is available in the original write-up at FirstPassLab.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This article was adapted from a canonical FirstPassLab post with AI assistance for Dev.to formatting and audience fit. The underlying ideas, structure, and source material came from the original article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
      <category>cisco</category>
      <category>zerotrust</category>
    </item>
    <item>
      <title>Your GPUs Aren’t the Bottleneck, Model Delivery Is: Fixing AI Distribution on Kubernetes</title>
      <dc:creator>FirstPassLab</dc:creator>
      <pubDate>Sat, 11 Apr 2026 16:54:09 +0000</pubDate>
      <link>https://dev.to/firstpasslab/your-gpus-arent-the-bottleneck-model-delivery-is-fixing-ai-distribution-on-kubernetes-247o</link>
      <guid>https://dev.to/firstpasslab/your-gpus-arent-the-bottleneck-model-delivery-is-fixing-ai-distribution-on-kubernetes-247o</guid>
      <description>&lt;p&gt;Most AI infra postmortems still blame GPU shortages. In practice, a lot of the pain shows up earlier: giant model pulls, cold-start storms, registry hotspots, bad placement, and east-west congestion during scale-out.&lt;/p&gt;

&lt;p&gt;If your platform is serving 140 GB to 1 TB model artifacts, you are not just running Kubernetes with accelerators anymore. You are operating a distributed delivery system where artifact movement and locality are part of the network design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; for many teams, the real AI bottleneck is not compute capacity. It is repeatable model delivery, topology-aware placement, and predictable transport behavior under bursty inference demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are AI models exposing a network infrastructure gap?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fweight-ai-models-network-infrastructure-lagging-ai-boom%2Finfographic-tech.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fweight-ai-models-network-infrastructure-lagging-ai-boom%2Finfographic-tech.png" alt="The Weight of AI Models: Why Network Infrastructure Is Lagging Behind the AI Boom Technical Architecture" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI models expose a network infrastructure gap because model artifacts are now large enough to behave like distributed storage workloads, not normal application releases. According to the CNCF blog "The weight of AI models" (2026), a quantized LLaMA-3 70B model weighs about 140 GB, while frontier multimodal models can exceed 1 TB. That changes the failure mode. Instead of pulling a few container images during a controlled rollout, teams now stampede registries, object stores, and east-west links when inference pools scale out. The result is congestion, long-tail latency, cold-start delays, and inconsistent placement across GPU nodes. For network engineers, this looks less like classic web scaling and more like a storage, transport, and locality problem wrapped inside Kubernetes.&lt;/p&gt;

&lt;p&gt;The March 27 CNCF post makes the core point clearly: most enterprises already run AI infrastructure on Kubernetes, but model artifact management still lags behind software artifact management. Containers already get versioning, rollback, security scanning, and immutable delivery through OCI registries. Model weights are still too often copied with scripts, pushed into generic buckets, or mounted from shared filesystems. That gap is exactly where infrastructure breaks first.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Characteristic&lt;/th&gt;
&lt;th&gt;Traditional app rollout&lt;/th&gt;
&lt;th&gt;AI model rollout&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Artifact size&lt;/td&gt;
&lt;td&gt;Usually MB to low-GB images&lt;/td&gt;
&lt;td&gt;140 GB to 1 TB+ model artifacts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Burst pattern&lt;/td&gt;
&lt;td&gt;Planned CI/CD release windows&lt;/td&gt;
&lt;td&gt;Sudden scale-out under inference demand&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Primary risk&lt;/td&gt;
&lt;td&gt;Slow image pull&lt;/td&gt;
&lt;td&gt;Cache miss storms, hotspotting, cold starts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best mitigation&lt;/td&gt;
&lt;td&gt;More replicas, smaller images&lt;/td&gt;
&lt;td&gt;Preheating, P2P distribution, placement locality&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What did CNCF’s 2026 data reveal about AI deployment maturity?
&lt;/h2&gt;

&lt;p&gt;CNCF’s 2026 data revealed that enterprise AI demand is real, but the supporting infrastructure is still immature. According to the CNCF Annual Cloud Native Survey (2026), 66% of organizations use Kubernetes for generative AI workloads and 82% of container users run Kubernetes in production, so the orchestration layer is no longer the blocker. The bigger warning is operational maturity: 47% of organizations deploy AI models only occasionally, only 7% deploy daily, and 52% do not train models at all. In other words, most organizations are consumers of AI, not builders of frontier models, and even they still struggle to deliver inference reliably. That is why the network discussion has shifted from peak bandwidth to repeatable deployment mechanics.&lt;/p&gt;

&lt;p&gt;The same survey says 37% of organizations use managed APIs, 25% self-host models, and 13% already push AI to the edge. That mix matters for network design. Managed APIs reduce local GPU pressure but increase dependency on WAN resilience and cost controls. Self-hosting increases demand for high-throughput storage, regional artifact replication, and predictable fabric performance. Edge inference introduces another layer of locality, caching, and policy complexity.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CNCF metric&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;th&gt;Why network engineers should care&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;66% use Kubernetes for GenAI&lt;/td&gt;
&lt;td&gt;Kubernetes is the default AI platform&lt;/td&gt;
&lt;td&gt;Cluster networking is now part of AI delivery&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;47% deploy only occasionally&lt;/td&gt;
&lt;td&gt;Most teams have immature pipelines&lt;/td&gt;
&lt;td&gt;Cold starts and scale events stay painful&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7% deploy daily&lt;/td&gt;
&lt;td&gt;Very few have production-grade automation&lt;/td&gt;
&lt;td&gt;Reliable rollout mechanics are now a competitive edge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;52% do not train models&lt;/td&gt;
&lt;td&gt;Most teams are inference consumers&lt;/td&gt;
&lt;td&gt;Model serving and distribution matter more than training clusters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;82% run Kubernetes in prod&lt;/td&gt;
&lt;td&gt;Core platform is mature&lt;/td&gt;
&lt;td&gt;The lag is above the orchestrator, in delivery and operations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This also explains why the CNCF survey says the gap between AI ambition and infrastructure reality is stark. The story is not “companies need more GPUs.” The story is that they need the boring infrastructure around GPUs, including registries, caches, CI/CD, observability, and disciplined rollout workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which parts of the stack are actually lagging: storage, scheduling, or delivery?
&lt;/h2&gt;

&lt;p&gt;All three are lagging, but delivery is the hidden bottleneck because it connects storage, scheduling, and network behavior into one failure domain. According to the CNCF March 27 post (2026), model weights are still often managed with ad hoc downloads, unsecured shared filesystems, or generic object storage. According to the CNCF Cloud Native AI White Paper, AI serving also needs right-sized GPU allocation, evolving Dynamic Resource Allocation, and model-sharing formats that reduce unnecessary pulls. Then the March 26 CNCF post adds the control-plane view: DRA, the Inference Gateway, OpenTelemetry, Kueue, and GitOps patterns are finally giving teams tools to make AI workloads behave like first-class infrastructure. The problem is that many enterprises have not yet connected those pieces into one operational system.&lt;/p&gt;

&lt;p&gt;The most practical architecture in the CNCF material is simple and powerful: package models as OCI artifacts, store them in Harbor or another registry, distribute them with Dragonfly, and mount them close to the inference engine instead of redownloading them for every scale event. According to the CNCF blog (2026), Dragonfly’s P2P distribution can use 70% to 80% of each node’s bandwidth and reduce long-tail hotspots during large-scale model rollout. That is a real network optimization, not marketing language.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;modctl modelfile generate &lt;span class="nb"&gt;.&lt;/span&gt;
modctl build &lt;span class="nt"&gt;-t&lt;/span&gt; harbor.registry.com/models/qwen2.5-0.5b:v1 &lt;span class="nt"&gt;-f&lt;/span&gt; Modelfile &lt;span class="nb"&gt;.&lt;/span&gt;
modctl push harbor.registry.com/models/qwen2.5-0.5b:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That workflow matters because it turns models into managed artifacts instead of “big files somewhere.” It also aligns with newer Kubernetes mechanisms such as &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/" rel="noopener noreferrer"&gt;Dynamic Resource Allocation&lt;/a&gt; and the &lt;a href="https://github.com/kubernetes-sigs/gateway-api-inference-extension" rel="noopener noreferrer"&gt;Gateway API Inference Extension&lt;/a&gt;, which help teams place workloads more intelligently and route inference traffic with higher utilization.&lt;/p&gt;
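
&lt;p&gt;One way to see where this is heading: newer Kubernetes releases can mount OCI artifacts directly as read-only volumes through the ImageVolume feature (alpha in v1.31), which maps cleanly onto the “mount models close to the engine” pattern. A minimal sketch, assuming the feature gate is enabled and reusing the artifact from the modctl example above; the serving image and mount path are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f - &lt;&lt;'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qwen-inference
spec:
  containers:
  - name: server
    image: inference-server:latest        # placeholder serving image
    volumeMounts:
    - name: model-weights
      mountPath: /models/qwen2.5-0.5b     # weights appear as local files
  volumes:
  - name: model-weights
    image:                                # ImageVolume source (alpha, v1.31+)
      reference: harbor.registry.com/models/qwen2.5-0.5b:v1
      pullPolicy: IfNotPresent
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;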

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Lagging layer&lt;/th&gt;
&lt;th&gt;Current bad habit&lt;/th&gt;
&lt;th&gt;Better pattern&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Artifact storage&lt;/td&gt;
&lt;td&gt;Buckets and shell scripts&lt;/td&gt;
&lt;td&gt;OCI registries with metadata and versioning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scheduling&lt;/td&gt;
&lt;td&gt;Generic pod placement&lt;/td&gt;
&lt;td&gt;GPU-aware, topology-aware placement with DRA/Kueue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delivery&lt;/td&gt;
&lt;td&gt;Every node pulls from origin&lt;/td&gt;
&lt;td&gt;P2P distribution, preheating, local cache reuse&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Observability&lt;/td&gt;
&lt;td&gt;CPU and memory only&lt;/td&gt;
&lt;td&gt;Tokens/sec, queue depth, time to first token, cache hit rate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rollback&lt;/td&gt;
&lt;td&gt;Manual model swaps&lt;/td&gt;
&lt;td&gt;GitOps and immutable artifact references&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
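
&lt;p&gt;The “preheating” entry in the delivery row is scriptable too. Dragonfly’s manager exposes a jobs API that can push an image or file across the P2P network before a scale event; the sketch below follows Dragonfly’s documented preheat job shape, but the manager address, credentials, and exact fields should be checked against your deployed version.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Ask the Dragonfly manager to preheat the model artifact across the
# P2P network ahead of the next scale-out. Manager address is a placeholder.
curl -X POST http://dragonfly-manager.example.com:8080/api/v1/jobs \
  -H 'Content-Type: application/json' \
  -d '{
        "type": "preheat",
        "args": {
          "type": "image",
          "url": "https://harbor.registry.com/v2/models/qwen2.5-0.5b/manifests/v1"
        }
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;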

&lt;h2&gt;
  
  
  How does this gap change hiring, architecture, and CCIE-level operations?
&lt;/h2&gt;

&lt;p&gt;This gap changes the job market because AI infrastructure now rewards engineers who can connect data center networking, automation, and platform operations into one coherent system. According to the CNCF blog "The platform under the model" (2026), AI Engineering is about building reliable systems around models, not just tuning models themselves. That means low-latency serving, safe rollouts, GPU scheduling, governance, and observability all become infrastructure work. The same post notes that only 41% of professional AI developers identify as cloud native, which leaves a real skills gap between AI teams and infrastructure teams. That is where experienced network engineers can win. The person who understands locality, congestion, failure domains, and automated recovery is no longer supporting the AI team from the sidelines. They are part of the AI product path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fweight-ai-models-network-infrastructure-lagging-ai-boom%2Finfographic-impact.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffirstpasslab.com%2Fimages%2Fblog%2Fweight-ai-models-network-infrastructure-lagging-ai-boom%2Finfographic-impact.png" alt="The Weight of AI Models: Why Network Infrastructure Is Lagging Behind the AI Boom Industry Impact" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For architects, the implication is that GPU clusters are not standalone islands anymore. They need registry replication, east-west traffic planning, multi-zone failure modeling, token-aware autoscaling signals, and security controls around model access. For operators, the implication is that old SRE dashboards are incomplete. Prometheus and OpenTelemetry now need to sit beside inference metrics, cache hit rates, and queue depth. For CCIE-level engineers, especially those on the data center and automation side, this is the next obvious adjacency. If you can already think clearly about EVPN locality, traffic engineering, or pipeline-driven change control, you are closer to AI infrastructure than most headlines suggest.&lt;/p&gt;

&lt;p&gt;Open source matters here too. According to CNCF (2026), portability and composability are central because organizations are spreading AI workloads across hyperscalers, GPU-focused providers, and on-prem environments. Proprietary AI services can hide complexity for a while, but they do not remove the underlying network behaviors. They just move them to a different fault domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should network engineers do in the next 90 days?
&lt;/h2&gt;

&lt;p&gt;Network engineers should spend the next 90 days building operational discipline around model movement, placement, and observability instead of chasing abstract AI hype. According to the CNCF survey (2026), only 7% of organizations deploy AI models daily, which means the field is still open for teams that can make AI delivery boring and reliable. According to the CNCF white paper, right-sizing resource allocation, fractional GPU use, and model-serving efficiency are still evolving, so early wins come from basic engineering hygiene. If your team can stop cache-miss storms, reduce cold-start time, and prove artifact lineage, you will already be ahead of much of the market.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Map the artifact path.&lt;/strong&gt; Document where model weights live, how big they are, how they move between regions, and which links saturate during scale-out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add locality before adding bandwidth.&lt;/strong&gt; Prefer node-local or zone-local caches and preheating before assuming bigger links are the only answer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat placement as network design.&lt;/strong&gt; Align GPU placement, failure zones, storage locality, and service routing so that hot models do not bounce across the fabric unnecessarily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure AI-specific signals.&lt;/strong&gt; Track time to first token, tokens per second, queue depth, cache hit rate, and model pull duration alongside packet loss and interface utilization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardize immutable delivery.&lt;/strong&gt; Move from bucket copies and shell scripts to OCI artifacts, GitOps references, and controlled rollbacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run failure drills.&lt;/strong&gt; Test what happens when a registry slows down, a zone fails, or 100 nodes request the same model at once (a small-scale version is sketched after this list).&lt;/li&gt;
&lt;/ol&gt;
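
&lt;p&gt;For the failure drill, you do not need 100 real nodes to learn something useful; even a burst of concurrent pulls from one host will show how the registry behaves under contention. A minimal sketch using the ORAS CLI against the model artifact from earlier, with a placeholder concurrency count and registry.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simulate a cold-start storm: N concurrent pulls of the same model
# artifact, each into its own directory, timing the whole burst.
N=20
time {
  for i in $(seq 1 "$N"); do
    oras pull harbor.registry.com/models/qwen2.5-0.5b:v1 -o "/tmp/pull-$i" &amp;
  done
  wait
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;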

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why is network infrastructure lagging behind AI models?
&lt;/h3&gt;

&lt;p&gt;Network infrastructure is lagging because model artifacts have grown into the hundreds of gigabytes or more while many teams still rely on ad hoc downloads, weak versioning, and generic autoscaling. According to CNCF (2026), the bottleneck is operational delivery, not just raw bandwidth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Kubernetes ready for AI workloads in 2026?
&lt;/h3&gt;

&lt;p&gt;Yes, but only with the right supporting stack. CNCF data shows strong Kubernetes adoption for AI, yet organizations still need GPU scheduling, model artifact management, caching, and inference routing to run reliably.&lt;/p&gt;

&lt;h3&gt;
  
  
  What matters more for inference, GPUs or networking?
&lt;/h3&gt;

&lt;p&gt;Both matter, but networking often becomes the hidden limiter. Cache misses, model pulls, queue depth, and east-west traffic can delay startup and inflate latency even when GPU capacity exists.&lt;/p&gt;

&lt;h3&gt;
  
  
  What should CCIE-level engineers learn from this trend?
&lt;/h3&gt;

&lt;p&gt;They should learn OCI artifact delivery, Kubernetes scheduling concepts, GPU fabric traffic patterns, and observability for token-driven workloads. AI systems now reward engineers who can connect infrastructure, networking, and automation.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI disclosure: This article was adapted from a canonical FirstPassLab post using AI assistance for editing and Dev.to formatting. Technical claims, sources, and the canonical URL point to the original article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
