kind, wsl2 and multus

All we wanted to do was install KubeVirt, create 3 Windows 2022 servers, and have them talk to one another over static IP addresses. Simple, we thought: just tell the virtual machine to use a host-only IP address and we are in the money.

It turns out that, firstly, you cannot assign a second network interface unless it is provided by another CNI implementation, and secondly, you need to install some extra black magic to make it work.

That black magic is Multus CNI: https://github.com/k8snetworkplumbingwg/multus-cni

Step 0 - Install WSL2 and AlmaLinux 9 with Docker

We will write a separate guide on how to do this and update this post with a link to it.
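
Until then, here is a rough sketch of the commands involved. Treat the distro name and the Docker repository below as assumptions and check them against your environment:

# From an elevated PowerShell prompt on Windows
# (assumes AlmaLinux-9 appears in the output of `wsl --list --online`)
wsl --install -d AlmaLinux-9

# Inside the AlmaLinux 9 instance: install Docker CE from the
# upstream RHEL-family repository and start the daemon
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker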

Step 1 - Install kind

We are using kind as it is a supported platform for KubeVirt, so we start by creating a config file and using it in the deployment

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane

Then create the cluster using

kind create cluster --name kubevirt --config=kind_config.yaml
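
Before moving on, it is worth confirming the cluster is reachable. kind prefixes the kubeconfig context with kind-, so for this cluster:

kubectl cluster-info --context kind-kubevirt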

Step 2 - Install Multus Daemonset

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
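
The thick plugin runs as a pod on each node. Before continuing, check that the daemonset has rolled out; the name and namespace below match the manifest at the time of writing, but may change upstream:

kubectl -n kube-system rollout status daemonset/kube-multus-ds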

Step 3 - Install CNI plugins

First, find the name of the control-plane node

kubectl get nodes 

which shows the node name is kubevirt-control-plane

NAME                     STATUS   ROLES           AGE   VERSION
kubevirt-control-plane   Ready    control-plane   51m   v1.32.2

Then connect to the control-plane node's container

docker exec -it kubevirt-control-plane sh

so that we can install the missing CNI plugins

cd /tmp
curl -L -o cni-plugins.tgz https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
tar -C /opt/cni/bin -xzf cni-plugins.tgz
rm -rf cni-plugins.tgz
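
While still inside the control-plane container, we can confirm the plugins landed where Multus expects them; ipvlan and host-local are the two this guide relies on:

# ipvlan and host-local should appear among the binaries
ls /opt/cni/bin
exit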

Step 4 - Create a NetworkAttachmentDefinition

This is done with the following

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-def
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth0",
      "mode": "l2",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.200.0/24",
        "rangeStart": "192.168.200.201",
        "rangeEnd": "192.168.200.205",
        "gateway": "192.168.200.1"
      }
    }'
EOF
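
To confirm the definition registered, query it back (net-attach-def is the short name the Multus CRD provides):

kubectl get net-attach-def ipvlan-def -o yaml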

Step 5 - Create some Pods with additional network interfaces

The following creates 3 pods, each with an additional network interface listening on:

  • 192.168.200.201 - multus-alpine
  • 192.168.200.202 - multus-alpine-2
  • 192.168.200.203 - net-pod

cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: multus-alpine
  name: multus-alpine
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
         "name" : "ipvlan-def",
         "interface": "eth1",
         "ips": ["192.168.200.201"]
      }
    ]'
spec:
  containers:
    - name: multus-alpine
      image: jmalloc/echo-server
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: multus-alpine-2
  name: multus-alpine-2
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
         "name" : "ipvlan-def",
         "interface": "eth1",
        "ips": ["192.168.200.202"]
      }
    ]'
spec:
  containers:
    - name: multus-alpine
      image: jmalloc/echo-server
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  name: net-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
         "name" : "ipvlan-def",
         "interface": "eth1",
        "ips": ["192.168.200.203"]
      }
    ]'
spec:
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
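
Once the pods are Running, each should have an eth1 carrying its requested address. Multus also records attachments in the k8s.v1.cni.cncf.io/network-status annotation, so a quick check (the jsonpath escaping is our own sketch) looks like:

kubectl exec net-pod -- ip -4 addr show eth1
kubectl get pod multus-alpine -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'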

Step 6 - Test connectivity

Check that net-pod gets responses from both multus-alpine and multus-alpine-2

multus-alpine

kubectl exec -it net-pod -- curl http://192.168.200.201:8080/

returns

Request served by multus-alpine

GET / HTTP/1.1

Host: 192.168.200.201:8080
Accept: */*
User-Agent: curl/8.7.1

multus-alpine-2

kubectl exec -it net-pod -- curl http://192.168.200.202:8080/

returns

Request served by multus-alpine-2

GET / HTTP/1.1

Host: 192.168.200.202:8080
Accept: */*
User-Agent: curl/8.7.1
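
As a final layer 3 sanity check, netshoot also ships ping, so we can hit the ipvlan addresses directly:

kubectl exec net-pod -- ping -c 3 192.168.200.201
kubectl exec net-pod -- ping -c 3 192.168.200.202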
