Cisco NDFC (Nexus Dashboard Fabric Controller) is the platform that provisions, manages, and monitors VXLAN BGP EVPN data center fabrics. If you've been using DCNM, heads up — it reaches end-of-support in April 2026. NDFC replaces it entirely, running as a microservices-based application on the Nexus Dashboard platform.
This guide walks through the complete Easy Fabric workflow: from fabric creation to overlay deployment, with the actual NX-OS configs NDFC generates under the hood.
What Changed from DCNM to NDFC?
DCNM was a standalone Java monolith. NDFC is containerized microservices on Nexus Dashboard, sitting alongside Nexus Dashboard Insights (NDI) and Nexus Dashboard Orchestrator (NDO).
| Feature | DCNM | NDFC |
|---|---|---|
| Deployment | Standalone VM/OVA | Service on Nexus Dashboard |
| Architecture | Monolithic Java | Microservices, containerized |
| Fabric types | Easy Fabric, VXLAN, classic LAN | Same + Campus VXLAN, External |
| Multi-site | Via DCNM | Via NDO (separate service) |
| Assurance | Basic monitoring | Integrated with NDI |
| API | REST API (limited) | Full REST API + Terraform provider |
| Support status | EOL April 2026 | Active development |
What Catches DCNM Users Off-Guard
- Navigation changes — NDFC's left-nav structure differs from DCNM's tabbed interface
- Fabric creation wizard — more parameters exposed upfront, different field ordering
- Deploy workflow — "Recalculate and Deploy" replaces DCNM's "Deploy" button with a preview + diff step
- Integrated topology view — real-time fabric visualization is now built-in
The Easy Fabric Workflow: Step by Step
Easy Fabric is NDFC's flagship feature — it provisions a complete VXLAN BGP EVPN fabric from a single template.
Step 1: Create the Fabric
Navigate to Fabric Controller → LAN → Fabrics → Create Fabric and select "Data Center VXLAN EVPN."
Core parameters:
| Parameter | Description | Typical Value |
|---|---|---|
| Fabric Name | Unique identifier | DC1-VXLAN |
| BGP ASN | BGP AS number | 65001 |
| Underlay Protocol | IS-IS (recommended) or OSPF | IS-IS |
| Replication Mode | Multicast or Ingress Replication | Multicast |
| Multicast Group Subnet | PIM ASM group range | 239.1.1.0/25 |
| Anycast RP | Enable on spines | Enabled |
| Loopback0 IP Range | Router IDs | 10.2.0.0/22 |
| Loopback1 IP Range | VTEP (NVE) source | 10.3.0.0/22 |
| Subnet Range | P2P inter-switch links | 10.4.0.0/22 |
The template contains hundreds of parameters, but these core settings define the underlay design. NDFC auto-calculates the rest.
Step 2: Discover and Assign Switch Roles
Add switches via seed IP discovery (CDP/LLDP neighbor walk), POAP, or manual add. Then assign roles:
| Role | Function | Typical Platform |
|---|---|---|
| Spine | Route reflector, underlay/overlay hub | Nexus 9500, 9300 |
| Leaf | Server-facing, VTEP, gateway | Nexus 9300, 9200 |
| Border Leaf | External L3 connectivity | Nexus 9300 |
| Border Spine | Combined spine + external | Nexus 9500 |
| Border Gateway | Multi-site EVPN gateway | Nexus 9300, 9500 |
NDFC validates role assignments topologically — it won't let you assign a spine role to a switch that only connects to hosts.
Step 3: Deploy the Underlay
Click Recalculate and Deploy. NDFC generates the complete underlay:
- IS-IS (or OSPF) adjacencies on all spine-leaf links
- PIM sparse-mode with anycast RP on spines
- Loopback0 (router ID) and Loopback1 (NVE source)
- Point-to-point links with /30 or /31 addressing
- iBGP EVPN with spines as route reflectors
Here's what NDFC generates on a leaf switch:
```
feature isis
feature pim
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

router isis UNDERLAY
  net 49.0001.0100.0200.0003.00
  is-type level-2

interface loopback0
  ip address 10.2.0.3/32
  ip router isis UNDERLAY
  ip pim sparse-mode

interface loopback1
  ip address 10.3.0.3/32
  ip router isis UNDERLAY
  ip pim sparse-mode

interface Ethernet1/49
  description to-spine1
  no switchport
  mtu 9216
  ip address 10.4.0.5/30
  ip router isis UNDERLAY
  ip pim sparse-mode
  no shutdown

router bgp 65001
  router-id 10.2.0.3
  neighbor 10.2.0.1
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community both
```

Note that `route-reflector-client` does not appear here: the leaf is the client, so that command is configured on the spine side of the session.
Pro tip: Before deploying, NDFC shows a configuration preview — the actual NX-OS commands. Always review this diff.
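The spine side carries the mirror image of this BGP session, marking each leaf as a route-reflector client. A representative sketch (router ID and neighbor address follow the loopback0 pool above; your generated config will differ in the details):

```
router bgp 65001
  router-id 10.2.0.1
  neighbor 10.2.0.3
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community both
      route-reflector-client
```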
Step 4: Create VRFs (L3 VNIs)
Create VRFs for tenant isolation via Fabric → VRFs → Create VRF:
| Field | Description | Example |
|---|---|---|
| VRF Name | Logical name | TENANT-A |
| VRF ID / VNI | L3 VNI | 50001 |
| VLAN ID | SVI VLAN for L3 VNI | 3001 |
| Route Target | Auto or manual | 65001:50001 |
| Maximum Routes | VRF-level route limit | 10000 |
Generated NX-OS:
```
vlan 3001
  vn-segment 50001

interface Vlan3001
  vrf member TENANT-A
  ip forward
  no shutdown

vrf context TENANT-A
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
```
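Alongside the VRF context, NDFC also adds a per-VRF BGP stanza so routes in the VRF are exported into EVPN as Type-5 prefixes. A hedged sketch (the route-map name is illustrative of the auto-generated filter, not guaranteed to match your version):

```
router bgp 65001
  vrf TENANT-A
    address-family ipv4 unicast
      advertise l2vpn evpn
      ! export connected subnets into EVPN (route-map name illustrative)
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
```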
Step 5: Create Networks (L2 VNIs)
Networks map to VLAN + VNI + anycast gateway via Fabric → Networks → Create Network:
| Field | Description | Example |
|---|---|---|
| Network Name | Logical name | WEB-SERVERS |
| VLAN ID | Local VLAN | 100 |
| VNI | L2 VNI | 30100 |
| Gateway IP | Anycast gateway | 10.10.100.1/24 |
| VRF | Parent VRF | TENANT-A |
Generated NX-OS:
```
vlan 100
  vn-segment 30100

interface Vlan100
  vrf member TENANT-A
  ip address 10.10.100.1/24
  fabric forwarding mode anycast-gateway
  no shutdown

interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30100
    mcast-group 239.1.1.1
  member vni 50001 associate-vrf
```
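On the host-facing side, attaching a server is just classic access/trunk port configuration; the anycast gateway handles first-hop routing on every leaf. A minimal sketch (interface and description are illustrative):

```
interface Ethernet1/10
  description web-server-01
  switchport mode access
  switchport access vlan 100
  spanning-tree port type edge
  no shutdown
```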
Step 6: Verify with CLI
NDFC provides topology views, but CLI verification is essential:
```
# Verify NVE peers (VXLAN tunnel endpoints)
show nve peers

# Verify BGP EVPN neighbor state
show bgp l2vpn evpn summary

# Verify VXLAN VNI mapping
show nve vni

# Verify MAC learning via EVPN
show l2route evpn mac all

# Verify anycast gateway
show interface vlan 100

# Verify underlay reachability
show isis adjacency
show ip pim neighbor
```
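As a sanity check, a healthy fabric lists each remote VTEP's loopback1 address as an NVE peer in the Up state, learned via the control plane (CP). Illustrative output only (peer address assumes the loopback1 pool above; uptime and MAC are placeholders):

```
leaf1# show nve peers
Interface Peer-IP          State LearnType Uptime   Router-Mac
--------- ---------------  ----- --------- -------- -----------------
nve1      10.3.0.4         Up    CP        01:23:45 n/a
```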
Critical Concepts Under the Hood
BGP EVPN Route Types
| Route Type | Purpose | CLI Verification |
|---|---|---|
| Type 2 | MAC/IP advertisement | show bgp l2vpn evpn route-type 2 |
| Type 3 | Inclusive multicast (BUM) | show bgp l2vpn evpn route-type 3 |
| Type 5 | IP prefix route (inter-subnet) | show bgp l2vpn evpn route-type 5 |
Multicast vs. Ingress Replication
- Multicast (PIM ASM) — BUM traffic flooded via multicast tree. Efficient for large fabrics but requires PIM underlay.
- Ingress Replication — BUM traffic replicated unicast to each remote VTEP. Simpler but higher bandwidth consumption.
Know the difference between the mcast-group and ingress-replication protocol bgp commands under interface nve1 — the fabric's Replication Mode setting determines which one NDFC generates per VNI.
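The difference shows up directly in the per-VNI configuration. Two sketches, reusing the VNI and group from the examples above:

```
! Replication Mode = Multicast: BUM traffic rides a PIM ASM tree
interface nve1
  member vni 30100
    mcast-group 239.1.1.1

! Replication Mode = Ingress Replication: BUM traffic is head-end
! replicated as unicast to each remote VTEP learned via BGP EVPN
interface nve1
  member vni 30100
    ingress-replication protocol bgp
```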
vPC and Host-Facing Configuration
NDFC configures vPC (virtual PortChannel) between leaf pairs for dual-homed servers, including:
- vPC domain, peer-link, peer-keepalive
- vPC-specific NVE settings (shared anycast VTEP secondary IP, advertise-pip for correct Type-5 route origination)
- Orphan port handling
vPC interaction with VXLAN EVPN is one of the most complex topics in DC networking — dual-homed servers, orphan ports, and Type-5 route handling all require deep understanding.
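A minimal sketch of the vPC pieces NDFC pushes to one member of a leaf pair (domain ID, keepalive addressing, port-channel number, and the secondary anycast VTEP address are all illustrative):

```
vpc domain 1
  peer-switch
  peer-gateway
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
  ip arp synchronize

interface port-channel500
  switchport mode trunk
  vpc peer-link

! Both vPC peers share the same secondary IP on loopback1,
! forming a single anycast VTEP for dual-homed hosts
interface loopback1
  ip address 10.3.0.3/32
  ip address 10.3.0.100/32 secondary
```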
How to Practice
- Cisco CML + NDFC VM — Deploy NDFC alongside Nexus 9000v in CML. Requires 32GB+ RAM for NDFC alone.
- CLI First, NDFC Second — Build VXLAN EVPN via CLI first to understand what the GUI generates, then layer NDFC on top.
- Cisco Practice Labs — Pre-staged lab pods with NDFC are available and closest to the real exam environment.
FAQ
What underlay protocol should I choose?
IS-IS is the default and recommended — it scales better, avoids recursive routing issues, and aligns with SDA underlay design.
Can I still use CLI instead of NDFC?
Yes. NDFC generates standard NX-OS config. Understanding both the GUI workflow and CLI is essential.
What is Easy Fabric?
NDFC's automated provisioning workflow that configures the complete VXLAN BGP EVPN underlay and overlay from a single fabric template.
NDFC is the present and future of Cisco data center fabric management. Mastering both the Easy Fabric GUI workflow and the NX-OS CLI underneath is what separates strong DC engineers from everyone just clicking buttons.
Originally published at firstpasslab.com.
Disclosure: This article was adapted from the original with AI assistance for formatting and editing. Technical content has been expert-reviewed.