VXLAN EVPN is the dominant data center fabric technology in 2026, and you can build a fully functional leaf-spine fabric using free Nexus 9000v images on EVE-NG — no physical Nexus switches required.
This guide walks through the complete stack: OSPF underlay → BGP EVPN overlay → L2VNI bridging → L3VNI inter-subnet routing, with every NX-OS command and verification step.
With Cisco ACI converging toward NDFC-managed NX-OS fabrics, CLI-based VXLAN EVPN skills are more relevant than ever — whether you're building production fabrics or prepping for certification labs.
## Hardware Requirements
Each Nexus 9000v needs 8 GB RAM and 2 vCPUs. A 2-spine, 4-leaf topology with host nodes needs ~48-64 GB on the EVE-NG host.
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 48 GB | 64 GB |
| CPU | 8 cores (VT-x/AMD-V) | 12+ cores |
| Storage | 100 GB SSD | 200 GB NVMe |
| EVE-NG | Community (free) | Pro (optional) |
The same qcow2 images also work in GNS3 and Cisco CML.
## Importing Nexus 9000v Images
Download the Nexus 9000v qcow2 from Cisco's software download page (requires a Cisco account).
```bash
# Create the image directory
mkdir -p /opt/unetlab/addons/qemu/nxosv9k-10.4.3/

# Move and rename the image
mv nxosv9k-10.4.3.qcow2 /opt/unetlab/addons/qemu/nxosv9k-10.4.3/virtioa.qcow2

# Fix permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
```
**Important:** the image must be named `virtioa.qcow2`. Use NX-OS 10.3.x or 10.4.x for full VXLAN EVPN feature support.
## Lab Topology
Standard Clos architecture: 2 spines (BGP route reflectors) + 4 leaves (VTEPs).
```
     ┌──────────┐              ┌──────────┐
     │ Spine-1  │              │ Spine-2  │
     │ Lo0: .1  │              │ Lo0: .2  │
     │ AS 65000 │              │ AS 65000 │
     └──────────┘              └──────────┘
        │ │ │ │   (full mesh:    │ │ │ │
        │ │ │ │  every leaf to   │ │ │ │
        │ │ │ │   both spines)   │ │ │ │
   ┌─────┐  ┌─────┐  ┌─────┐  ┌─────┐
   │Leaf1│  │Leaf2│  │Leaf3│  │Leaf4│
   └──┬──┘  └──┬──┘  └──┬──┘  └──┬──┘
      │        │        │        │
   [Host1]  [Host3]  [Host2]  [Host4]
   VLAN 10  VLAN 10  VLAN 20  VLAN 20
```
## IP Addressing
| Device | Loopback0 (Router-ID) | Loopback1 (VTEP) |
|---|---|---|
| Spine-1 | 10.0.0.1/32 | — |
| Spine-2 | 10.0.0.2/32 | — |
| Leaf-1 | 10.0.0.3/32 | 10.0.1.3/32 |
| Leaf-2 | 10.0.0.4/32 | 10.0.1.4/32 |
| Leaf-3 | 10.0.0.5/32 | 10.0.1.5/32 |
| Leaf-4 | 10.0.0.6/32 | 10.0.1.6/32 |
Point-to-point links use /30 subnets — one 10.10.N.0/30 per fabric link, with the spine side taking .1 and the leaf side taking .2.
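The per-link numbering scheme used in this lab's configs (link N → 10.10.N.0/30, spine side .1, leaf side .2) can be generated programmatically. A quick sketch using Python's standard `ipaddress` module — the link ordering (Spine-1 uses links 1–4, Spine-2 uses links 5–8) is an assumption matching the sample configs in this guide:

```python
from ipaddress import ip_network

def p2p_link(n: int) -> tuple[str, str]:
    """Return (spine_ip, leaf_ip) for fabric link n, carved as 10.10.n.0/30."""
    net = ip_network(f"10.10.{n}.0/30")
    spine, leaf = list(net.hosts())    # usable hosts in a /30: .1 and .2
    return f"{spine}/30", f"{leaf}/30"

# Assumed ordering: links 1-4 hang off Spine-1, links 5-8 off Spine-2
for n in range(1, 9):
    spine = "Spine-1" if n <= 4 else "Spine-2"
    leaf = f"Leaf-{(n - 1) % 4 + 1}"
    s_ip, l_ip = p2p_link(n)
    print(f"Link {n}: {spine} {s_ip} <-> {leaf} {l_ip}")
```

Link 5, for example, comes out as Spine-2 10.10.5.1/30 facing Leaf-1 10.10.5.2/30, matching the Leaf-1 Ethernet1/2 config below.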
## Step 1: OSPF Underlay
The underlay provides IP reachability between all loopbacks. Use point-to-point network type on all fabric links to skip DR/BDR elections.
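One addition worth considering on every fabric link (not part of the configs in this lab, and optional for small lab pings): VXLAN encapsulation adds roughly 50 bytes, so production underlays run jumbo MTU on all point-to-point interfaces to avoid dropping full-size tenant frames — for example:

```
interface Ethernet1/1
  mtu 9216
```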
### Spine-1

```
feature ospf

router ospf UNDERLAY
  router-id 10.0.0.1

interface loopback0
  ip address 10.0.0.1/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  description to Leaf-1
  no switchport
  ip address 10.10.1.1/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

interface Ethernet1/2
  description to Leaf-2
  no switchport
  ip address 10.10.2.1/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

interface Ethernet1/3
  description to Leaf-3
  no switchport
  ip address 10.10.3.1/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

interface Ethernet1/4
  description to Leaf-4
  no switchport
  ip address 10.10.4.1/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown
```
### Leaf-1

```
feature ospf

router ospf UNDERLAY
  router-id 10.0.0.3

interface loopback0
  ip address 10.0.0.3/32
  ip router ospf UNDERLAY area 0.0.0.0

interface loopback1
  description VTEP Source
  ip address 10.0.1.3/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  description to Spine-1
  no switchport
  ip address 10.10.1.2/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

interface Ethernet1/2
  description to Spine-2
  no switchport
  ip address 10.10.5.2/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown
```
Repeat for Leaf-2 through Leaf-4 with appropriate IPs.
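Before the ping test, it's worth confirming the adjacencies themselves came up. On a point-to-point network type there is no DR, so NX-OS shows the state as `FULL/ -` (the output below is illustrative; uptime and addresses will match your lab):

```
Leaf-1# show ip ospf neighbors
 Neighbor ID     Pri State        Up Time  Address         Interface
 10.0.0.1          1 FULL/ -      00:05:12 10.10.1.1       Eth1/1
 10.0.0.2          1 FULL/ -      00:05:08 10.10.5.1       Eth1/2
```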
**Verify:** every leaf must ping every other leaf's Loopback1. If this fails, VXLAN tunnels won't form.

```
Leaf-1# ping 10.0.1.4 source 10.0.1.3
64 bytes from 10.0.1.4: icmp_seq=0 ttl=253 time=3.2 ms
```
## Step 2: BGP EVPN Overlay
iBGP with spines as route reflectors. All devices share ASN 65000. Spines reflect EVPN routes (Type-2 MAC/IP, Type-5 IP Prefix) between leaves.
### Enable features on all devices

```
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn
```
### Spine-1 (Route Reflector)

```
router bgp 65000
  router-id 10.0.0.1
  address-family l2vpn evpn
    retain route-target all
  neighbor 10.0.0.3
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
      route-reflector-client
  neighbor 10.0.0.4
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
      route-reflector-client
  neighbor 10.0.0.5
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
      route-reflector-client
  neighbor 10.0.0.6
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
      route-reflector-client
```
**Key detail:** `retain route-target all` on the spines ensures the route reflectors keep all EVPN routes regardless of local import policy. Without it, spines drop routes for VNIs they don't participate in — which, on a spine with no VNIs configured, is all of them.
### Leaf-1

```
router bgp 65000
  router-id 10.0.0.3
  neighbor 10.0.0.1
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
  neighbor 10.0.0.2
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community both
```
## Step 3: L2VNI — Layer 2 Extension
L2VNI maps VLANs to VXLAN Network Identifiers for Layer 2 stretching across the fabric. EVPN distributes MAC addresses via Type-2 routes.
### Leaf-1 L2VNI Config

```
vlan 10
  vn-segment 100010
vlan 20
  vn-segment 100020

evpn
  vni 100010 l2
    rd auto
    route-target import auto
    route-target export auto
  vni 100020 l2
    rd auto
    route-target import auto
    route-target export auto

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 100010
    ingress-replication protocol bgp
  member vni 100020
    ingress-replication protocol bgp

interface Ethernet1/5
  switchport
  switchport access vlan 10
  no shutdown
```
Apply on all leaves, adjusting the access VLAN on the host-facing port: Leaf-1/2 attach hosts to VLAN 10, Leaf-3/4 to VLAN 20.
**Verify NVE peers:**

```
Leaf-1# show nve peers
Interface Peer-IP          State LearnType Uptime   Router-Mac
nve1      10.0.1.4         Up    CP        00:02:15 5004.0000.1b08
nve1      10.0.1.5         Up    CP        00:02:10 5005.0000.1b08
nve1      10.0.1.6         Up    CP        00:02:08 5006.0000.1b08
```
Peers in Up state with CP (control plane) learning = EVPN overlay is working. ✅
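To see the Type-2 machinery in action, generate some host traffic and then check the EVPN MAC table from both angles — the L2RIB view and the raw BGP table (outputs omitted here since the exact format varies by NX-OS release):

```
Leaf-1# show l2route evpn mac all
Leaf-1# show bgp l2vpn evpn route-type 2
```

A host MAC learned on a remote leaf should appear with the remote VTEP's Loopback1 as the next hop.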
## Step 4: L3VNI — Inter-VXLAN Routing
L3VNI enables routing between different VNIs using a tenant VRF and symmetric IRB. Each leaf performs distributed routing — no hairpinning through a centralized router.
### Leaf-1 L3VNI Config

Note that the global anycast-gateway MAC is configured first — NX-OS expects it to exist before `fabric forwarding mode anycast-gateway` is enabled on an SVI.

```
fabric forwarding anycast-gateway-mac 0001.0001.0001

vrf context TENANT-1
  vni 50000
  rd auto
  address-family ipv4 unicast
    route-target import auto
    route-target import auto evpn
    route-target export auto
    route-target export auto evpn

vlan 500
  vn-segment 50000

interface Vlan500
  no shutdown
  vrf member TENANT-1
  ip forward
  no ip redirects

interface Vlan10
  no shutdown
  vrf member TENANT-1
  ip address 192.168.10.1/24
  fabric forwarding mode anycast-gateway
  no ip redirects

interface Vlan20
  no shutdown
  vrf member TENANT-1
  ip address 192.168.20.1/24
  fabric forwarding mode anycast-gateway
  no ip redirects

interface nve1
  member vni 50000 associate-vrf

router bgp 65000
  vrf TENANT-1
    address-family ipv4 unicast
      advertise l2vpn evpn
```
⚠️ **Critical:** `fabric forwarding anycast-gateway-mac` must be identical on every leaf. This is what makes the distributed gateway work — every leaf responds to ARP for the gateway IP with the same MAC.
**End-to-end test** — Host-1 (VLAN 10, 192.168.10.10) on Leaf-1 → Host-2 (VLAN 20, 192.168.20.10) on Leaf-3:

```
Host-1$ ping 192.168.20.10
64 bytes from 192.168.20.10: seq=0 ttl=62 time=8.5 ms
```

TTL=62 (64 minus two routed hops) confirms symmetric IRB — routed at the ingress leaf, forwarded via VXLAN, and routed again at the egress leaf. 🎯
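If the ping fails, two quick checks narrow down whether the L3VNI is actually in play — the tenant routing table (remote host entries appear as /32s learned via BGP in VRF TENANT-1) and the NVE VNI list (VNI 50000 should show type L3 and be tied to the VRF):

```
Leaf-1# show ip route vrf TENANT-1
Leaf-1# show nve vni
```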
## Troubleshooting Cheatsheet
| Symptom | Check | Fix |
|---|---|---|
| NVE peers not forming | `show nve peers` | Verify Loopback1 reachability via ping |
| BGP EVPN session idle | `show bgp l2vpn evpn summary` | Check `nv overlay evpn` and `feature nv overlay` |
| No Type-2 routes | `show bgp l2vpn evpn route-type 2` | Verify `evpn` block under VNI + `send-community both` |
| L3VNI routing fails | `show vrf TENANT-1` | Check `vni 50000` in VRF + `member vni 50000 associate-vrf` on NVE |
| Anycast GW not responding | `show ip arp vrf TENANT-1` | Verify anycast-gateway-mac is identical on all leaves |
The single most common gotcha: a missing global `nv overlay evpn` command. Without it, zero EVPN routes are exchanged even if the BGP sessions show Up.
## What This Covers in Practice
This lab maps directly to real-world data center fabric deployments:
- Underlay design: OSPF for loopback reachability
- iBGP EVPN overlay: Route reflector model
- L2VNI + L3VNI: Layer 2 extension + inter-tenant routing
- Distributed anycast gateway: Local routing at each leaf
All of this runs on free Nexus 9000v images — the same NX-OS you'd configure on physical hardware.
Originally published at FirstPassLab. For more data center networking deep dives, check out firstpasslab.com.
🤖 AI Disclosure: This article was adapted from the original with AI assistance. Technical content has been reviewed for accuracy.