Karim

Originally published at deep75.Medium

Virtink: a lightweight virtualization add-on for Kubernetes…

SmartX Inc. (“SmartX”), a Chinese start-up based in Beijing, develops a hyperconverged infrastructure platform designed to help enterprises adopt cloud systems.

The company's infrastructure platform is a software-centric IT architecture that virtualizes and integrates all the elements of conventional hardware-defined systems, including compute, storage and networking, letting enterprises move quickly and efficiently from legacy systems to the cloud.

On GitHub, SmartX has open-sourced Virtink, a Kubernetes add-on for running virtual machines with Cloud Hypervisor. Cloud Hypervisor is an open source virtual machine monitor (VMM) that runs on top of the KVM hypervisor and the Microsoft Hypervisor (MSHV).

The project focuses on running modern cloud workloads on specific, common hardware architectures. Cloud workloads here are the ones customers run inside a cloud service provider: modern operating systems with most I/O handled by paravirtualized devices (e.g. virtio), no requirement for legacy devices, and 64-bit CPUs.

Cloud Hypervisor is implemented in Rust and is based on the Rust VMM crates.

By using Cloud Hypervisor as the underlying hypervisor, Virtink provides a lightweight and secure way to run fully virtualized workloads in a Kubernetes cluster.

Compared to KubeVirt, Virtink:

  • Does not use libvirt or QEMU. By building on Cloud Hypervisor, VMs get a smaller memory footprint (≈30MB), higher performance and a smaller attack surface.
  • Does not require a long-running per-pod launcher process, which further reduces the runtime memory footprint (≈80MB).

Virtink: A Lightweight Virtualization Add-on for Kubernetes

Virtink, short for “Virtualization in Kubernetes”, provides a simple way to build VM images as easily and quickly as docker build, and runs on both x86 and ARM machines.
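Because those images are ordinary OCI images, customizing one should in principle be a regular docker build. A purely hypothetical sketch, assuming the Ubuntu rootfs image used later in this post can serve as a base layer (this is my illustration, not a workflow documented by the project):

FROM smartxworks/virtink-container-rootfs-ubuntu
# Bake extra files into the VM root filesystem; the path below is illustrative
COPY my-app /opt/my-app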

Before testing, as the project describes on GitHub, a few requirements must be met before you can start using Virtink:

GitHub - smartxworks/virtink: Lightweight Virtualization Add-on for Kubernetes

  • A Kubernetes cluster, version v1.16 to v1.24
  • The Kubernetes apiserver must have --allow-privileged=true in order to run Virtink's privileged DaemonSet. It is usually enabled by default.
  • cert-manager v1.0 to v1.8 must be installed in the Kubernetes cluster. You can install it with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml
  • Virtink currently supports the following container runtimes: Docker and containerd. Other container runtimes that do not use virtualization features should also work, but they are not officially tested.
  • Hardware with virtualization support is required. You should check that /dev/kvm exists on every Kubernetes node (see the quick check below).
  • Minimum Linux host kernel version: v4.11; recommended: v5.6 or higher
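A quick way to verify the last two prerequisites on a node, as plain shell (nothing Virtink-specific is assumed here):

ls -l /dev/kvm   # must exist, otherwise enable VT-x/AMD-V or nested virtualization
uname -r         # kernel must be >= 4.11 (>= 5.6 recommended)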

Let's launch a 3-node K3s cluster on DigitalOcean, on Ubuntu Server droplets that allow nested virtualization:
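Any provisioning method works for the droplets themselves; as a hedged sketch with DigitalOcean's doctl CLI (the region, image slug and size below are assumptions, and the size chosen must expose /dev/kvm inside the droplet):

doctl compute droplet create k3s1 k3s2 k3s3 \
  --region ams3 \
  --image ubuntu-22-10-x64 \
  --size s-4vcpu-8gb \
  --ssh-keys <your-ssh-key-id>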

then provision it with k3s Ansible:

GitHub - k3s-io/k3s-ansible

git clone https://github.com/k3s-io/k3s-ansible
Cloning into 'k3s-ansible'...
remote: Enumerating objects: 922, done.
remote: Total 922 (delta 0), reused 0 (delta 0), pack-reused 922
Receiving objects: 100% (922/922), 116.25 KiB | 1.94 MiB/s, done.
Resolving deltas: 100% (351/351), done.

cp -R inventory/sample inventory/my-cluster

File: inventory/my-cluster/hosts.ini

[master]
188.166.109.175

[node]
174.138.1.202
188.166.88.194

[k3s_cluster:children]
master
node

File: group_vars/all.yml

---
k3s_version: v1.24.7+k3s1
ansible_user: root
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}"
extra_server_args: ""
extra_agent_args: ""

The k3s cluster is then provisioned in no time:

ansible-playbook site.yml -i inventory/my-cluster/hosts.ini

PLAY [node] ***********************************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************************************
Thursday 10 November 2022 23:35:15 +0100 (0:00:00.432) 0:00:35.215 ***** 
ok: [174.138.1.202]
ok: [188.166.88.194]

TASK [k3s/node : Copy K3s service file] *******************************************************************************************************************************************
Thursday 10 November 2022 23:35:17 +0100 (0:00:01.188) 0:00:36.403 ***** 
changed: [174.138.1.202]
changed: [188.166.88.194]

TASK [k3s/node : Enable and check K3s service] ************************************************************************************************************************************
Thursday 10 November 2022 23:35:18 +0100 (0:00:01.268) 0:00:37.672 ***** 
changed: [188.166.88.194]
changed: [174.138.1.202]

PLAY RECAP ************************************************************************************************************************************************************************
174.138.1.202 : ok=9 changed=5 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0   
188.166.109.175 : ok=20 changed=12 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0   
188.166.88.194 : ok=9 changed=5 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0   

Thursday 10 November 2022 23:35:23 +0100 (0:00:05.542) 0:00:43.214 ***** 
=============================================================================== 
download : Download k3s binary x64 ----------------------------------------------------------------------------------------------------------------------------------------- 9.01s
k3s/master : Enable and check K3s service ---------------------------------------------------------------------------------------------------------------------------------- 8.69s
k3s/node : Enable and check K3s service ------------------------------------------------------------------------------------------------------------------------------------ 5.54s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.26s
k3s/master : Copy K3s service file ----------------------------------------------------------------------------------------------------------------------------------------- 1.82s
raspberrypi : Test for raspberry pi /proc/cpuinfo -------------------------------------------------------------------------------------------------------------------------- 1.62s
k3s/master : Wait for node-token ------------------------------------------------------------------------------------------------------------------------------------------- 1.50s
k3s/node : Copy K3s service file ------------------------------------------------------------------------------------------------------------------------------------------- 1.27s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.19s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.18s
prereq : Enable IPv4 forwarding -------------------------------------------------------------------------------------------------------------------------------------------- 0.67s
k3s/master : Change file access node-token --------------------------------------------------------------------------------------------------------------------------------- 0.63s
k3s/master : Read node-token from master ----------------------------------------------------------------------------------------------------------------------------------- 0.60s
k3s/master : Register node-token file access mode -------------------------------------------------------------------------------------------------------------------------- 0.53s
k3s/master : Replace https://localhost:6443 by https://master-ip:6443 ------------------------------------------------------------------------------------------------------ 0.53s
k3s/master : Copy config file to user home directory ----------------------------------------------------------------------------------------------------------------------- 0.52s
k3s/master : Create crictl symlink ----------------------------------------------------------------------------------------------------------------------------------------- 0.43s
k3s/master : Restore node-token file access -------------------------------------------------------------------------------------------------------------------------------- 0.43s
raspberrypi : Test for raspberry pi /proc/device-tree/model ---------------------------------------------------------------------------------------------------------------- 0.42s
raspberrypi : execute OS related tasks on the Raspberry Pi ----------------------------------------------------------------------------------------------------------------- 0.40s

scp root@188.166.109.175:~/.kube/config ~/.kube/config

$ kubectl cluster-info                                                                                                                         
Kubernetes control plane is running at https://188.166.109.175:6443
CoreDNS is running at https://188.166.109.175:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://188.166.109.175:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes -o wide                                                                                                                    

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s1 Ready control-plane,master 4m46s v1.24.7+k3s1 188.166.109.175 <none> Ubuntu 22.10 5.19.0-23-generic containerd://1.6.8-k3s1
k3s3 Ready <none> 4m33s v1.24.7+k3s1 174.138.1.202 <none> Ubuntu 22.10 5.19.0-23-generic containerd://1.6.8-k3s1
k3s2 Ready <none> 4m33s v1.24.7+k3s1 188.166.88.194 <none> Ubuntu 22.10 5.19.0-23-generic containerd://1.6.8-k3s1

$ kubectl get po,svc -A                                                                                                                        

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7b7dc8d6f5-8cwd8 1/1 Running 0 4m39s
kube-system pod/coredns-b96499967-cc7fw 1/1 Running 0 4m39s
kube-system pod/svclb-traefik-baea9d36-vjxzg 2/2 Running 0 4m27s
kube-system pod/helm-install-traefik-crd-ghbn2 0/1 Completed 0 4m39s
kube-system pod/svclb-traefik-baea9d36-2vw6x 2/2 Running 0 4m27s
kube-system pod/helm-install-traefik-fc9xw 0/1 Completed 1 4m39s
kube-system pod/svclb-traefik-baea9d36-g42x2 2/2 Running 0 4m27s
kube-system pod/traefik-56cfcbb99f-5qgvd 1/1 Running 0 4m27s
kube-system pod/metrics-server-84f8d4c4fc-wx8bl 1/1 Running 0 4m39s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 4m54s
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m50s
kube-system service/metrics-server ClusterIP 10.43.21.66 <none> 443/TCP 4m49s
kube-system service/traefik LoadBalancer 10.43.170.138 174.138.1.202,188.166.109.175,188.166.88.194 80:32630/TCP,443:30841/TCP 4m27s

I therefore add cert-manager, as a prerequisite, to this cluster:

cert-manager

$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml

namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
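Before going further, it is worth waiting until the cert-manager deployments created above are actually available, since Virtink's webhook certificates depend on them:

kubectl -n cert-manager wait deployment --all --for=condition=Available --timeout=300s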

$ kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7b7dc8d6f5-8cwd8 1/1 Running 0 8m46s
kube-system pod/coredns-b96499967-cc7fw 1/1 Running 0 8m46s
kube-system pod/svclb-traefik-baea9d36-vjxzg 2/2 Running 0 8m34s
kube-system pod/helm-install-traefik-crd-ghbn2 0/1 Completed 0 8m46s
kube-system pod/svclb-traefik-baea9d36-2vw6x 2/2 Running 0 8m34s
kube-system pod/helm-install-traefik-fc9xw 0/1 Completed 1 8m46s
kube-system pod/svclb-traefik-baea9d36-g42x2 2/2 Running 0 8m34s
kube-system pod/traefik-56cfcbb99f-5qgvd 1/1 Running 0 8m34s
kube-system pod/metrics-server-84f8d4c4fc-wx8bl 1/1 Running 0 8m46s
cert-manager pod/cert-manager-cainjector-5987875fc7-g94ft 1/1 Running 0 55s
cert-manager pod/cert-manager-6dd9658548-slxrp 1/1 Running 0 55s
cert-manager pod/cert-manager-webhook-7b4c5f579b-76x57 1/1 Running 0 55s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 9m2s
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 8m58s
kube-system service/metrics-server ClusterIP 10.43.21.66 <none> 443/TCP 8m57s
kube-system service/traefik LoadBalancer 10.43.170.138 174.138.1.202,188.166.109.175,188.166.88.194 80:32630/TCP,443:30841/TCP 8m35s
cert-manager service/cert-manager ClusterIP 10.43.182.103 <none> 9402/TCP 57s
cert-manager service/cert-manager-webhook ClusterIP 10.43.136.111 <none> 443/TCP 56s

All that is left is to install Virtink itself, with a single command:

$ kubectl apply -f https://github.com/smartxworks/virtink/releases/download/v0.11.0/virtink.yaml

namespace/virtink-system created
customresourcedefinition.apiextensions.k8s.io/virtualmachinemigrations.virt.virtink.smartx.com created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.virt.virtink.smartx.com created
serviceaccount/virt-controller created
serviceaccount/virt-daemon created
clusterrole.rbac.authorization.k8s.io/virt-controller created
clusterrole.rbac.authorization.k8s.io/virt-daemon created
clusterrolebinding.rbac.authorization.k8s.io/virt-controller created
clusterrolebinding.rbac.authorization.k8s.io/virt-daemon created
service/virt-controller created
deployment.apps/virt-controller created
daemonset.apps/virt-daemon created
certificate.cert-manager.io/virt-controller-cert created
certificate.cert-manager.io/virt-daemon-cert created
issuer.cert-manager.io/virt-controller-cert-issuer created
issuer.cert-manager.io/virt-daemon-cert-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/virtink-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/virtink-validating-webhook-configuration created
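Here again, a quick readiness check on the two components Virtink just deployed (names taken from the output above):

kubectl -n virtink-system rollout status deployment/virt-controller
kubectl -n virtink-system rollout status daemonset/virt-daemon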

$ kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7b7dc8d6f5-8cwd8 1/1 Running 0 10m
kube-system pod/coredns-b96499967-cc7fw 1/1 Running 0 10m
kube-system pod/svclb-traefik-baea9d36-vjxzg 2/2 Running 0 10m
kube-system pod/helm-install-traefik-crd-ghbn2 0/1 Completed 0 10m
kube-system pod/svclb-traefik-baea9d36-2vw6x 2/2 Running 0 10m
kube-system pod/helm-install-traefik-fc9xw 0/1 Completed 1 10m
kube-system pod/svclb-traefik-baea9d36-g42x2 2/2 Running 0 10m
kube-system pod/traefik-56cfcbb99f-5qgvd 1/1 Running 0 10m
kube-system pod/metrics-server-84f8d4c4fc-wx8bl 1/1 Running 0 10m
cert-manager pod/cert-manager-cainjector-5987875fc7-g94ft 1/1 Running 0 2m58s
cert-manager pod/cert-manager-6dd9658548-slxrp 1/1 Running 0 2m58s
cert-manager pod/cert-manager-webhook-7b4c5f579b-76x57 1/1 Running 0 2m58s
virtink-system pod/virt-daemon-dv9m9 1/1 Running 0 41s
virtink-system pod/virt-daemon-mv994 1/1 Running 0 41s
virtink-system pod/virt-controller-768b979d4-gzlzt 1/1 Running 0 41s
virtink-system pod/virt-daemon-jpz87 1/1 Running 0 41s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 11m
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 11m
kube-system service/metrics-server ClusterIP 10.43.21.66 <none> 443/TCP 10m
kube-system service/traefik LoadBalancer 10.43.170.138 174.138.1.202,188.166.109.175,188.166.88.194 80:32630/TCP,443:30841/TCP 10m
cert-manager service/cert-manager ClusterIP 10.43.182.103 <none> 9402/TCP 2m59s
cert-manager service/cert-manager-webhook ClusterIP 10.43.136.111 <none> 443/TCP 2m58s
virtink-system service/virt-controller ClusterIP 10.43.196.10 <none> 443/TCP 42s

I can very quickly create three Ubuntu 22.04 LTS virtual machines with this YAML manifest:

cat <<EOF | kubectl apply -f -
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: k3s-01
spec:
  instance:
    memory:
      size: 2Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 15Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: p@ssw0rd!
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}
---
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: k3s-02
spec:
  instance:
    memory:
      size: 2Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 15Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: p@ssw0rd!
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}
---
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: k3s-03
spec:
  instance:
    memory:
      size: 2Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 15Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: p@ssw0rd!
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}
EOF

virtualmachine.virt.virtink.smartx.com/k3s-01 created
virtualmachine.virt.virtink.smartx.com/k3s-02 created
virtualmachine.virt.virtink.smartx.com/k3s-03 created
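Since Virtink's CRD answers to the vm resource name (as the kubectl get vm call near the end of this post shows), the machines' phase can be followed directly:

kubectl get vm -w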

$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vm-k3s-01-bgg45 1/1 Running 0 17m 10.42.2.6 k3s2 <none> <none>
vm-k3s-02-qh7c8 1/1 Running 0 17m 10.42.0.7 k3s1 <none> <none>
vm-k3s-03-22px2 1/1 Running 0 17m 10.42.1.9 k3s3 <none> <none>

They should be reachable over SSH from the cluster nodes. To do that, I retrieve their respective IP addresses:

$ kubectl get pod vm-k3s-01-bgg45 -o jsonpath='{.status.podIP}'
10.42.2.6
$ kubectl get pod vm-k3s-02-qh7c8 -o jsonpath='{.status.podIP}'
10.42.0.7
$ kubectl get pod vm-k3s-03-22px2 -o jsonpath='{.status.podIP}'
10.42.1.9
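With more VMs, a single jsonpath loop over all pods avoids querying them one by one (plain kubectl, no Virtink-specific labels assumed):

kubectl get po -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'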

The easiest way to access a VM is to use an SSH client inside the cluster; from the master node this gives:

root@k3s1:~# ping -c 4 10.42.2.6
PING 10.42.2.6 (10.42.2.6) 56(84) bytes of data.
64 bytes from 10.42.2.6: icmp_seq=1 ttl=63 time=1.45 ms
64 bytes from 10.42.2.6: icmp_seq=2 ttl=63 time=0.713 ms
64 bytes from 10.42.2.6: icmp_seq=3 ttl=63 time=0.590 ms
64 bytes from 10.42.2.6: icmp_seq=4 ttl=63 time=0.611 ms

--- 10.42.2.6 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3055ms
rtt min/avg/max/mdev = 0.590/0.840/1.449/0.354 ms

root@k3s1:~# ping -c 4 10.42.0.7
PING 10.42.0.7 (10.42.0.7) 56(84) bytes of data.
64 bytes from 10.42.0.7: icmp_seq=1 ttl=64 time=0.270 ms
64 bytes from 10.42.0.7: icmp_seq=2 ttl=64 time=0.228 ms
64 bytes from 10.42.0.7: icmp_seq=3 ttl=64 time=0.344 ms
64 bytes from 10.42.0.7: icmp_seq=4 ttl=64 time=0.282 ms

--- 10.42.0.7 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3052ms
rtt min/avg/max/mdev = 0.228/0.281/0.344/0.041 ms

root@k3s1:~# ping -c 4 10.42.1.9
PING 10.42.1.9 (10.42.1.9) 56(84) bytes of data.
64 bytes from 10.42.1.9: icmp_seq=1 ttl=63 time=1.54 ms
64 bytes from 10.42.1.9: icmp_seq=2 ttl=63 time=0.790 ms
64 bytes from 10.42.1.9: icmp_seq=3 ttl=63 time=0.615 ms
64 bytes from 10.42.1.9: icmp_seq=4 ttl=63 time=0.629 ms

--- 10.42.1.9 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3057ms
rtt min/avg/max/mdev = 0.615/0.894/1.542/0.380 ms

root@k3s1:~# ssh ubuntu@10.42.2.6 'lsb_release -a'
ubuntu@10.42.2.6's password: 
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy

root@k3s1:~# ssh ubuntu@10.42.0.7 'lsb_release -a'
ubuntu@10.42.0.7's password: 
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy

root@k3s1:~# ssh ubuntu@10.42.1.9 'lsb_release -a'
ubuntu@10.42.1.9's password: 
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy

I can reuse k3s Ansible to provision another k3s cluster inside k3s…

root@k3s1:~# git clone https://github.com/k3s-io/k3s-ansible
Cloning into 'k3s-ansible'...
remote: Enumerating objects: 922, done.
remote: Total 922 (delta 0), reused 0 (delta 0), pack-reused 922
Receiving objects: 100% (922/922), 116.25 KiB | 3.32 MiB/s, done.
Resolving deltas: 100% (351/351), done.
root@k3s1:~# cd k3s-ansible/

root@k3s1:~/k3s-ansible# cp -R inventory/sample inventory/my-cluster

root@k3s1:~/k3s-ansible# cat inventory/my-cluster/group_vars/all.yml 
---
k3s_version: v1.24.7+k3s1
ansible_user: ubuntu
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}"
extra_server_args: ""
extra_agent_args: ""

root@k3s1:~/k3s-ansible# cat inventory/my-cluster/hosts.ini
[master]
10.42.2.6

[node]
10.42.0.7
10.42.1.9

[k3s_cluster:children]
master
node

root@k3s1:~/k3s-ansible# ssh-copy-id ubuntu@10.42.0.7
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.42.0.7's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'ubuntu@10.42.0.7'"
and check to make sure that only the key(s) you wanted were added.

root@k3s1:~/k3s-ansible# ssh-copy-id ubuntu@10.42.1.9
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.42.1.9's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'ubuntu@10.42.1.9'"
and check to make sure that only the key(s) you wanted were added.

root@k3s1:~/k3s-ansible# ssh-copy-id ubuntu@10.42.2.6
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.42.2.6's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'ubuntu@10.42.2.6'"
and check to make sure that only the key(s) you wanted were added.

root@k3s1:~/k3s-ansible# apt install python3-pip -y && pip install ansible 

root@k3s1:~/k3s-ansible# ansible-playbook site.yml -i inventory/my-cluster/hosts.ini

PLAY [k3s_cluster] ****************************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************************************
Thursday 10 November 2022 23:20:03 +0000 (0:00:00.013) 0:00:00.013 ***** 
ok: [10.42.2.6]
ok: [10.42.1.9]
ok: [10.42.0.7]

TASK [prereq : Set SELinux to disabled state] *************************************************************************************************************************************
Thursday 10 November 2022 23:20:04 +0000 (0:00:01.535) 0:00:01.548 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [prereq : Enable IPv4 forwarding] ********************************************************************************************************************************************
Thursday 10 November 2022 23:20:04 +0000 (0:00:00.058) 0:00:01.607 ***** 
changed: [10.42.2.6]
changed: [10.42.1.9]
changed: [10.42.0.7]

TASK [prereq : Enable IPv6 forwarding] ********************************************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.318) 0:00:01.925 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [prereq : Add br_netfilter to /etc/modules-load.d/] **************************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.053) 0:00:01.978 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [prereq : Load br_netfilter] *************************************************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.051) 0:00:02.030 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [prereq : Set bridge-nf-call-iptables (just to be sure)] *********************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.052) 0:00:02.082 ***** 
skipping: [10.42.2.6] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [10.42.2.6] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [10.42.0.7] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [10.42.0.7] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [10.42.1.9] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [10.42.1.9] => (item=net.bridge.bridge-nf-call-ip6tables) 

TASK [prereq : Add /usr/local/bin to sudo secure_path] ****************************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.072) 0:00:02.154 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [download : Download k3s binary x64] *****************************************************************************************************************************************
Thursday 10 November 2022 23:20:05 +0000 (0:00:00.103) 0:00:02.258 ***** 
[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the
remote_tmp dir with the correct permissions manually
changed: [10.42.2.6]
changed: [10.42.0.7]
changed: [10.42.1.9]

TASK [download : Download k3s binary arm64] ***************************************************************************************************************************************
Thursday 10 November 2022 23:20:15 +0000 (0:00:10.036) 0:00:12.295 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [download : Download k3s binary armhf] ***************************************************************************************************************************************
Thursday 10 November 2022 23:20:15 +0000 (0:00:00.067) 0:00:12.362 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [raspberrypi : Test for raspberry pi /proc/cpuinfo] **************************************************************************************************************************
Thursday 10 November 2022 23:20:15 +0000 (0:00:00.084) 0:00:12.447 ***** 
ok: [10.42.2.6]
ok: [10.42.0.7]
ok: [10.42.1.9]

TASK [raspberrypi : Test for raspberry pi /proc/device-tree/model] ****************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.359) 0:00:12.806 ***** 
ok: [10.42.2.6]
ok: [10.42.0.7]
ok: [10.42.1.9]

TASK [raspberrypi : Set raspberry_pi fact to true] ********************************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.255) 0:00:13.062 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [raspberrypi : Set detected_distribution to Raspbian] ************************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.060) 0:00:13.123 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [raspberrypi : Set detected_distribution to Raspbian (ARM64 on Debian Buster)] ***********************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.112) 0:00:13.235 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [raspberrypi : Set detected_distribution_major_version] **********************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.073) 0:00:13.309 ***** 
skipping: [10.42.2.6]
skipping: [10.42.0.7]
skipping: [10.42.1.9]

TASK [raspberrypi : execute OS related tasks on the Raspberry Pi] *****************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.080) 0:00:13.389 ***** 
skipping: [10.42.2.6] => (item=/root/k3s-ansible/roles/raspberrypi/tasks/prereq/Ubuntu.yml) 
skipping: [10.42.0.7] => (item=/root/k3s-ansible/roles/raspberrypi/tasks/prereq/Ubuntu.yml) 
skipping: [10.42.1.9] => (item=/root/k3s-ansible/roles/raspberrypi/tasks/prereq/Ubuntu.yml) 

PLAY [master] *********************************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************************************
Thursday 10 November 2022 23:20:16 +0000 (0:00:00.125) 0:00:13.515 ***** 
ok: [10.42.2.6]

TASK [k3s/master : Copy K3s service file] *****************************************************************************************************************************************
Thursday 10 November 2022 23:20:17 +0000 (0:00:00.660) 0:00:14.175 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Enable and check K3s service] **********************************************************************************************************************************
Thursday 10 November 2022 23:20:18 +0000 (0:00:00.688) 0:00:14.864 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Wait for node-token] *******************************************************************************************************************************************
Thursday 10 November 2022 23:20:29 +0000 (0:00:11.160) 0:00:26.025 ***** 
ok: [10.42.2.6]

TASK [k3s/master : Register node-token file access mode] **************************************************************************************************************************
Thursday 10 November 2022 23:20:29 +0000 (0:00:00.458) 0:00:26.483 ***** 
ok: [10.42.2.6]

TASK [k3s/master : Change file access node-token] *********************************************************************************************************************************
Thursday 10 November 2022 23:20:30 +0000 (0:00:00.329) 0:00:26.812 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Read node-token from master] ***********************************************************************************************************************************
Thursday 10 November 2022 23:20:30 +0000 (0:00:00.422) 0:00:27.234 ***** 
ok: [10.42.2.6]

TASK [k3s/master : Store Master node-token] ***************************************************************************************************************************************
Thursday 10 November 2022 23:20:30 +0000 (0:00:00.303) 0:00:27.537 ***** 
ok: [10.42.2.6]

TASK [k3s/master : Restore node-token file access] ********************************************************************************************************************************
Thursday 10 November 2022 23:20:30 +0000 (0:00:00.041) 0:00:27.579 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Create directory .kube] ****************************************************************************************************************************************
Thursday 10 November 2022 23:20:31 +0000 (0:00:00.232) 0:00:27.812 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Copy config file to user home directory] ***********************************************************************************************************************
Thursday 10 November 2022 23:20:31 +0000 (0:00:00.310) 0:00:28.122 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Replace https://localhost:6443 by https://master-ip:6443] ******************************************************************************************************
Thursday 10 November 2022 23:20:31 +0000 (0:00:00.211) 0:00:28.334 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Create kubectl symlink] ****************************************************************************************************************************************
Thursday 10 November 2022 23:20:32 +0000 (0:00:00.693) 0:00:29.027 ***** 
changed: [10.42.2.6]

TASK [k3s/master : Create crictl symlink] *****************************************************************************************************************************************
Thursday 10 November 2022 23:20:32 +0000 (0:00:00.187) 0:00:29.215 ***** 
changed: [10.42.2.6]

PLAY [node] ***********************************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************************************
Thursday 10 November 2022 23:20:32 +0000 (0:00:00.209) 0:00:29.424 ***** 
ok: [10.42.1.9]
ok: [10.42.0.7]

TASK [k3s/node : Copy K3s service file] *******************************************************************************************************************************************
Thursday 10 November 2022 23:20:33 +0000 (0:00:00.724) 0:00:30.149 ***** 
changed: [10.42.1.9]
changed: [10.42.0.7]

TASK [k3s/node : Enable and check K3s service] ************************************************************************************************************************************
Thursday 10 November 2022 23:20:34 +0000 (0:00:00.538) 0:00:30.687 ***** 
changed: [10.42.1.9]
changed: [10.42.0.7]

PLAY RECAP ************************************************************************************************************************************************************************
10.42.0.7 : ok=8 changed=4 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0   
10.42.1.9 : ok=8 changed=4 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0   
10.42.2.6 : ok=19 changed=11 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0   

Thursday 10 November 2022 23:20:44 +0000 (0:00:10.395) 0:00:41.082 ***** 
=============================================================================== 
k3s/master : Enable and check K3s service --------------------------------------------------------------------------------------------------------------------------------- 11.16s
k3s/node : Enable and check K3s service ----------------------------------------------------------------------------------------------------------------------------------- 10.40s
download : Download k3s binary x64 ---------------------------------------------------------------------------------------------------------------------------------------- 10.04s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.54s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.72s
k3s/master : Replace https://localhost:6443 by https://master-ip:6443 ------------------------------------------------------------------------------------------------------ 0.69s
k3s/master : Copy K3s service file ----------------------------------------------------------------------------------------------------------------------------------------- 0.69s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.66s
k3s/node : Copy K3s service file ------------------------------------------------------------------------------------------------------------------------------------------- 0.54s
k3s/master : Wait for node-token ------------------------------------------------------------------------------------------------------------------------------------------- 0.46s
k3s/master : Change file access node-token --------------------------------------------------------------------------------------------------------------------------------- 0.42s
raspberrypi : Test for raspberry pi /proc/cpuinfo -------------------------------------------------------------------------------------------------------------------------- 0.36s
k3s/master : Register node-token file access mode -------------------------------------------------------------------------------------------------------------------------- 0.33s
prereq : Enable IPv4 forwarding -------------------------------------------------------------------------------------------------------------------------------------------- 0.32s
k3s/master : Create directory .kube ---------------------------------------------------------------------------------------------------------------------------------------- 0.31s
k3s/master : Read node-token from master ----------------------------------------------------------------------------------------------------------------------------------- 0.30s
raspberrypi : Test for raspberry pi /proc/device-tree/model ---------------------------------------------------------------------------------------------------------------- 0.26s
k3s/master : Restore node-token file access -------------------------------------------------------------------------------------------------------------------------------- 0.23s
k3s/master : Copy config file to user home directory ----------------------------------------------------------------------------------------------------------------------- 0.21s
k3s/master : Create crictl symlink ----------------------------------------------------------------------------------------------------------------------------------------- 0.21s

This k3s cluster, nested inside these three virtual machines within the k3s cluster on DigitalOcean, is reachable and operational…

Virtink is particularly well suited to running fully isolated Kubernetes clusters inside an existing Kubernetes cluster, in particular through its knest tool:

GitHub - smartxworks/knest: Kubernetes-in-Kubernetes Made Simple

You need the kubectl and clusterctl clients installed to try it:
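If clusterctl is missing, the Cluster API project ships it as a single binary; a minimal install sketch (the version below is only an example, pick a current release):

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.4/clusterctl-linux-amd64 -o clusterctl
chmod +x clusterctl && sudo mv clusterctl /usr/local/bin/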

$ knest create quickstart

kubeadmcontrolplane.controlplane.cluster.x-k8s.io/quickstart-cp created         
virtinkmachinetemplate.infrastructure.cluster.x-k8s.io/quickstart-cp created    
machinehealthcheck.cluster.x-k8s.io/quickstart-cp-unhealthy-5m created          
machinedeployment.cluster.x-k8s.io/quickstart-md-0 created                      
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quickstart-md-0 created        
virtinkmachinetemplate.infrastructure.cluster.x-k8s.io/quickstart-md-0 created  
machinehealthcheck.cluster.x-k8s.io/quickstart-md-0-unhealthy-5m created        
Waiting for control plane to be initialized...                                  
cluster.cluster.x-k8s.io/quickstart condition met                               
Cluster "quickstart" set.                                                       
Your cluster "quickstart" is now accessible with the kubeconfig file "~/.kube/knest.quickstart.kubeconfig"                                         

$ kubectl --kubeconfig ~/.kube/knest.quickstart.kubeconfig get pods -n kube-system                                                  

NAME READY STATUS RESTARTS AGE  
coredns-6d4b75cb6d-d6jgw 0/1 Pending 0 18s  
coredns-6d4b75cb6d-np8m5 0/1 Pending 0 18s  
etcd-quickstart-cp-94r25 1/1 Running 0 19s  
kube-apiserver-quickstart-cp-94r25 1/1 Running 0 19s  
kube-controller-manager-quickstart-cp-94r25 1/1 Running 0 19s  
kube-proxy-tl9ds 1/1 Running 0 18s  
kube-scheduler-quickstart-cp-94r25 1/1 Running 0 19s  

$ kubectl get vm                                    
NAME STATUS NODE                                            
quickstart-cp-94r25 Running kube-2  

$ knest delete quickstart 
cluster.cluster.x-k8s.io "quickstart" deleted 

And to scale the nested Kubernetes cluster:

$ knest scale quickstart 3:3

The 3:3 argument follows the CONTROL_PLANE_MACHINE_COUNT:WORKER_MACHINE_COUNT format.

knest demo

As the project notes on GitHub, Calico and Cilium are currently the only two recommended CNI plugins for nested clusters, because of the limited set of kernel modules included in the image. The underlying network and any firewalls must allow encapsulated packets.

It is also possible to connect VMs to secondary networks using Multus CNI. This assumes that Multus CNI is installed in your cluster and that a corresponding NetworkAttachmentDefinition CRD has been created.

The following example defines a network that uses the Open vSwitch CNI plugin, connecting the VM to the Open vSwitch bridge br1 on the host (see the sketch after the link below). Other CNI plugins such as bridge or macvlan can also be used.

virtink/interfaces_and_networks.md at main · smartxworks/virtink
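As a minimal sketch of such a NetworkAttachmentDefinition, assuming the ovs-cni plugin is deployed and a bridge named br1 already exists on the host (adapted from the generic Multus format rather than copied from the Virtink docs):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1"
    }

The VM then references this network by name as a secondary interface, alongside the default pod network.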

In conclusion, built on the Cloud Hypervisor project, Virtink is in effect an open source virtualization engine optimized for orchestrating lightweight virtual machines in Kubernetes. A knest command-line tool is also provided, letting users create and manage nested clusters with a single command, thereby reducing the overall cost of running multiple clusters, notably in multi-tenant scenarios…

And with Virtink's roadmap almost entirely delivered:

  • VM lifecycle management
  • Container disks
  • Direct kernel boot with container rootfs
  • Pod network
  • Multus CNI networks
  • Persistent volumes
  • CDI data volumes
  • ARM64 support
  • VM live migration
  • SR-IOV NIC passthrough
  • GPU passthrough
  • Dedicated CPU placement
  • VM device hot-plug

To be continued!
