Introduction
Setting up a network lab traditionally requires expensive hardware and a lot of manual effort. With netclab-chart, you can spin up a virtual lab directly on Kubernetes using containerized network OS images like SR Linux or FRRouting.
This post walks through Day 1: installing the lab, running containerized routers, and exploring basic connectivity.
Why Use netclab-chart?
- Quickly deploy virtual network topologies in Kubernetes
- Test automation scripts and network protocols safely
- Reproducible lab environments for experimentation
- Supports multiple vendor NOS containers
Prerequisites
Before installing netclab-chart, you need a Kubernetes cluster (a Kind cluster works well), the bridge and host-device CNI plugins on the cluster node, and the Multus CNI meta-plugin. The Installation steps below set up all of these.
Installation
- Kind cluster:
kind create cluster --name netclab
- CNI bridge and host-device plugins:
docker exec netclab-control-plane bash -c \
'curl -L https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-amd64-v1.8.0.tgz \
| tar -xz -C /opt/cni/bin ./bridge ./host-device'
- Multus CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml
kubectl -n kube-system wait --for=jsonpath='{.status.numberReady}'=1 --timeout=5m daemonset.apps/kube-multus-ds
- Add helm repo for netclab chart:
helm repo add netclab https://mbakalarski.github.io/netclab-chart
helm repo update
Usage
After installation, you describe your topology in a YAML values file and pass it to helm install. Pods and networks will be created according to the topology definition.
Note:
Node and network names must be valid Kubernetes resource names and also acceptable as Linux interface names.
Avoid uppercase letters, underscores, or special characters.
For SR Linux nodes, interface names in the YAML configuration must follow the format e1-x (for example, e1-1 or e1-2).
Configuration options are documented in the table below.
You can override these values in your own file.
| Parameter | Description | Defaults |
|---|---|---|
| topology.networks.type | Type of connection between nodes. Can be bridge or veth. | veth |
| topology.nodes.type | Type of node. Can be: srlinux, frrouting, linux. | |
| topology.nodes.image | Container images used for topology nodes. | srlinux: ghcr.io/nokia/srlinux:latest, frr: quay.io/frrouting/frr:8.4.7, linux: bash:latest |
| topology.nodes.memory | Memory allocation per node type. | srlinux: 4Gi, frr: 512Mi, linux: 200Mi |
| topology.nodes.cpu | CPU allocation per node type. | srlinux: 2000m, frr: 500m, linux: 200m |
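To change any of these defaults, you can keep a small values file of your own and pass it to helm install with an extra --values flag. A minimal sketch of such an override file, using only the parameter names from the table above (the exact nesting under each node type is an assumption — check the chart's default values with helm show values netclab/netclab for the real schema):

```yaml
# my-values.yaml -- illustrative override sketch, shape assumed from the table above
topology:
  nodes:
    image:
      frrouting: quay.io/frrouting/frr:8.4.7   # pin a specific FRR release
    memory:
      linux: 256Mi                             # give plain Linux hosts a bit more RAM
```

Values you do not mention keep the chart defaults listed in the table.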
Example topology
+--------+
| h01 |
| |
| e1 |
+--------+
|
b2
|
+-----------+ +-----------+
| e1-2 | | srl02 or |
| | | frr02 |
| | | |
| e1-1| -- b1 -- | e1-1 |
| | | |
| | | |
| srl01 or | | |
| frr01 | | e1-2 |
+-----------+ +-----------+
|
b3
|
+--------+
| e1 |
| |
| h02 |
+--------+
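The diagram above maps onto a topology values file: two routers linked back-to-back over b1, with each host hanging off its router via b2 or b3. As a rough sketch of how that might be expressed (the field layout beyond topology.networks and topology.nodes from the parameter table is an assumption — the real files are examples/topology-srlinux.yaml and examples/topology-frrouting.yaml in the repository):

```yaml
# Illustrative sketch only; see examples/topology-srlinux.yaml for the actual schema.
topology:
  networks:
    b1: { type: veth }     # srl01:e1-1 <-> srl02:e1-1
    b2: { type: bridge }   # h01:e1 <-> srl01:e1-2
    b3: { type: bridge }   # h02:e1 <-> srl02:e1-2
  nodes:
    srl01: { type: srlinux }
    srl02: { type: srlinux }
    h01:   { type: linux }
    h02:   { type: linux }
```

Note how every node and network name sticks to lowercase letters and digits, per the naming constraints above.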
You can follow the instructions for SR Linux and/or FRRouting.
The topologies are independent and can run in separate Kubernetes namespaces: SR Linux pods are placed in dc1-ns, FRRouting pods in the default namespace.
To get the topology YAML files and router configurations, clone the repository:
git clone https://github.com/mbakalarski/netclab-chart.git && cd netclab-chart
SRLinux
- Start nodes:
helm install dc1 netclab/netclab --values ./examples/topology-srlinux.yaml --namespace dc1-ns --create-namespace
kubectl config set-context --current --namespace dc1-ns
kubectl get pod
NAME READY STATUS RESTARTS AGE
h01 1/1 Running 0 12s
h02 1/1 Running 0 12s
srl01 1/1 Running 0 12s
srl02 1/1 Running 0 12s
- Configure nodes (repeat the commands if the pods are not ready yet):
kubectl exec h01 -- ip address replace 172.20.0.2/24 dev e1
kubectl exec h01 -- ip route replace 172.30.0.0/24 via 172.20.0.1
kubectl exec h02 -- ip address replace 172.30.0.2/24 dev e1
kubectl exec h02 -- ip route replace 172.20.0.0/24 via 172.30.0.1
kubectl cp ./examples/srl01.cfg srl01:/srl01.cfg
kubectl exec srl01 -- bash -c 'sr_cli --candidate-mode --commit-at-end < /srl01.cfg'
kubectl cp ./examples/srl02.cfg srl02:/srl02.cfg
kubectl exec srl02 -- bash -c 'sr_cli --candidate-mode --commit-at-end < /srl02.cfg'
All changes have been committed. Leaving candidate mode.
All changes have been committed. Leaving candidate mode.
- Test (convergence may take time):
kubectl exec h01 -- ping 172.30.0.2 -I 172.20.0.2
- LLDP neighbor information:
kubectl exec srl01 -- sr_cli show system lldp neighbor
+--------------+-------------------+----------------------+---------------------+------------------------+----------------------+---------------+
| Name | Neighbor | Neighbor System Name | Neighbor Chassis ID | Neighbor First Message | Neighbor Last Update | Neighbor Port |
+==============+===================+======================+=====================+========================+======================+===============+
| ethernet-1/1 | 00:01:03:FF:00:00 | srl02 | 00:01:03:FF:00:00 | 47 seconds ago | 24 seconds ago | ethernet-1/1 |
+--------------+-------------------+----------------------+---------------------+------------------------+----------------------+---------------+
FRRouting
- Start nodes:
kubectl config set-context --current --namespace default
helm install dc2 netclab/netclab --values ./examples/topology-frrouting.yaml
kubectl get pod
NAME READY STATUS RESTARTS AGE
frr01 1/1 Running 0 6s
frr02 1/1 Running 0 6s
h01 1/1 Running 0 6s
h02 1/1 Running 0 6s
- Configure nodes (repeat the commands if the pods are not ready yet):
kubectl exec h01 -- ip address replace 172.20.0.2/24 dev e1
kubectl exec h01 -- ip route replace 172.30.0.0/24 via 172.20.0.1
kubectl exec h02 -- ip address replace 172.30.0.2/24 dev e1
kubectl exec h02 -- ip route replace 172.20.0.0/24 via 172.30.0.1
kubectl exec frr01 -- ip address add 10.0.0.1/32 dev lo
kubectl exec frr01 -- ip address replace 10.0.1.1/24 dev e1-1
kubectl exec frr01 -- ip address replace 172.20.0.1/24 dev e1-2
kubectl exec frr01 -- touch /etc/frr/vtysh.conf
kubectl exec frr01 -- sed -i -e 's/bgpd=no/bgpd=yes/g' /etc/frr/daemons
kubectl exec frr01 -- /usr/lib/frr/frrinit.sh start
kubectl cp ./examples/frr01.cfg frr01:/frr01.cfg
kubectl exec frr01 -- vtysh -f /frr01.cfg
kubectl exec frr02 -- ip address add 10.0.0.2/32 dev lo
kubectl exec frr02 -- ip address replace 10.0.1.2/24 dev e1-1
kubectl exec frr02 -- ip address replace 172.30.0.1/24 dev e1-2
kubectl exec frr02 -- touch /etc/frr/vtysh.conf
kubectl exec frr02 -- sed -i -e 's/bgpd=no/bgpd=yes/g' /etc/frr/daemons
kubectl exec frr02 -- /usr/lib/frr/frrinit.sh start
kubectl cp ./examples/frr02.cfg frr02:/frr02.cfg
kubectl exec frr02 -- vtysh -f /frr02.cfg
Starting watchfrr with command: ' /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd'
Started watchfrr
Starting watchfrr with command: ' /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd'
Started watchfrr
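The routing logic itself lives in the frr01.cfg and frr02.cfg files from the repository, loaded via vtysh -f above. As an illustration only (this is a hedged sketch, not the repo's actual file — the AS numbers are assumptions), a minimal eBGP peering on frr01 toward frr02 over the 10.0.1.0/24 link, advertising the h01 subnet, could look like:

```
! illustrative sketch in the style of frr01.cfg (not the actual repo file)
router bgp 65001
 bgp router-id 10.0.0.1
 neighbor 10.0.1.2 remote-as 65002
 !
 address-family ipv4 unicast
  network 172.20.0.0/24
  neighbor 10.0.1.2 activate
 exit-address-family
```

With a mirror-image config on frr02 advertising 172.30.0.0/24, each router learns the other host's subnet, which is what makes the ping test below succeed.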
- Test (convergence may take time):
kubectl exec h01 -- ping 172.30.0.2 -I 172.20.0.2
Uninstall topologies
- dc1:
helm uninstall dc1 --namespace dc1-ns
kubectl delete ns dc1-ns
- dc2:
helm uninstall dc2 --namespace default
- reset default context:
kubectl config set-context --current --namespace default
Conclusion
netclab-chart lets you quickly deploy a containerized network lab on Kubernetes, providing a safe and flexible environment for testing, automation, and learning.
If you have questions or want to explore more ideas, feel free to drop them in the comments — I’m happy to answer and discuss.