
Performing basic operations on virtual machines with kubevirt-manager…

As covered in several previous articles, KubeVirt provides a unified development platform where developers can build, modify, and deploy applications residing in both application containers and virtual machines, within a common, shared environment.


https://kubevirt.io/2020/KubeVirt_deep_dive-virtualized_gpu_workloads.html

Even though alternative solutions for creating and running virtual machines on Kubernetes exist:

A solution already existed for managing virtual machines graphically in Kubernetes and KubeVirt, via the OpenShift Web Console:

The focus here is on KubeVirt Manager, a recent project that provides a simple Angular frontend for operating KubeVirt.

This tool lets you perform basic operations on virtual machines, virtual machine instances, and disks:

GitHub - kubevirt-manager/kubevirt-manager: Kubevirt Web UI / Web Interface Manager

Here is a quick hands-on, starting with the deployment of an AKS cluster in Azure using a Scale Set backed by Spot instances and the following parameters:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceName": {
            "value": "AKS"
        },
        "location": {
            "value": "francecentral"
        },
        "dnsPrefix": {
            "value": "AKS-dns"
        },
        "kubernetesVersion": {
            "value": "1.26.0"
        },
        "networkPlugin": {
            "value": "kubenet"
        },
        "enableRBAC": {
            "value": true
        },
        "nodeResourceGroup": {
            "value": "MC_RG-AKS_AKS_francecentral"
        },
        "upgradeChannel": {
            "value": "none"
        },
        "adminGroupObjectIDs": {
            "value": []
        },
        "disableLocalAccounts": {
            "value": false
        },
        "azureRbac": {
            "value": false
        },
        "enablePrivateCluster": {
            "value": false
        },
        "enableHttpApplicationRouting": {
            "value": false
        },
        "enableAzurePolicy": {
            "value": false
        },
        "enableSecretStoreCSIDriver": {
            "value": false
        },
        "vmssNodePool": {
            "value": true
        }
    }
}
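A parameters file like this is typically paired with an ARM template and deployed through the Azure CLI. A minimal sketch, assuming the template and parameters have been saved locally as aks-template.json and aks-parameters.json (both file names are illustrative):

$ az group create --name RG-AKS --location francecentral
$ az deployment group create \
    --resource-group RG-AKS \
    --template-file aks-template.json \
    --parameters @aks-parameters.json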

Setting up a Kubernetes 1.14 cluster with Windows nodes via AKS-Engine…

The AKS cluster is deployed and its credentials retrieved:

$ az aks get-credentials --resource-group RG-AKS --name AKS
Merged "AKS" as current context in /home/cert/.kube/config

$ kubectl cluster-info
Kubernetes control plane is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443
CoreDNS is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/azure-ip-masq-agent-6g72l 1/1 Running 0 6m18s
kube-system pod/azure-ip-masq-agent-nzbf5 1/1 Running 0 6m20s
kube-system pod/azure-ip-masq-agent-rj6mj 1/1 Running 0 6m6s
kube-system pod/cloud-node-manager-rfpcf 1/1 Running 0 6m18s
kube-system pod/cloud-node-manager-rg5jm 1/1 Running 0 6m6s
kube-system pod/cloud-node-manager-zpsgs 1/1 Running 0 6m20s
kube-system pod/coredns-78669d9946-txcm8 1/1 Running 0 4m26s
kube-system pod/coredns-78669d9946-zt9b5 1/1 Running 0 6m30s
kube-system pod/coredns-autoscaler-5b5c4f5b4f-zl2hq 1/1 Running 0 6m30s
kube-system pod/csi-azuredisk-node-44ffr 3/3 Running 0 6m20s
kube-system pod/csi-azuredisk-node-m2s5r 3/3 Running 0 6m18s
kube-system pod/csi-azuredisk-node-v4dc8 3/3 Running 0 6m6s
kube-system pod/csi-azurefile-node-5pzl4 3/3 Running 0 6m18s
kube-system pod/csi-azurefile-node-mqxw4 3/3 Running 0 6m6s
kube-system pod/csi-azurefile-node-nfvct 3/3 Running 0 6m20s
kube-system pod/konnectivity-agent-f94c65c6f-bjcwq 1/1 Running 0 6m29s
kube-system pod/konnectivity-agent-f94c65c6f-rdbfl 1/1 Running 0 6m29s
kube-system pod/kube-proxy-52svq 1/1 Running 0 6m18s
kube-system pod/kube-proxy-cnplk 1/1 Running 0 6m6s
kube-system pod/kube-proxy-dnjjp 1/1 Running 0 6m20s
kube-system pod/metrics-server-5f6654d4df-nqj2w 2/2 Running 0 4m20s
kube-system pod/metrics-server-5f6654d4df-qk22x 2/2 Running 0 4m20s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 7m4s
kube-system service/kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 6m31s
kube-system service/metrics-server ClusterIP 10.0.139.110 <none> 443/TCP 6m30s

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-agentpool-16963728-vmss000000 Ready agent 6m15s v1.26.0 10.224.0.5 <none> Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1
aks-userpool-16963728-vmss000000 Ready agent 6m27s v1.26.0 10.224.0.4 20.111.27.134 Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1
aks-userpool-16963728-vmss000001 Ready agent 6m29s v1.26.0 10.224.0.6 20.111.27.202 Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1

For this experiment I am using a Spot node pool with Esv5 instances, which allow nested virtualization:

Ev5 and Esv5-series - Azure Virtual Machines
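A Spot user pool of this kind can be added to an existing AKS cluster with the Azure CLI; here is a sketch under assumed values (pool name, VM size and node count are illustrative, not taken from this deployment):

$ az aks nodepool add \
    --resource-group RG-AKS \
    --cluster-name AKS \
    --name userpool \
    --mode User \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --node-vm-size Standard_E4s_v5 \
    --node-count 2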

This makes it possible to deploy KubeVirt via HCO (Hyperconverged Cluster Operator):

GitHub - kubevirt/hyperconverged-cluster-operator: Operator pattern for managing multi-operator products

using this unified script:

curl https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/deploy.sh | bash

$ curl https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/deploy.sh | bash

+ hco_namespace=kubevirt-hyperconverged
+ IS_OPENSHIFT=false
+ kubectl api-resources
+ grep clusterversions
+ grep config.openshift.io
+ kubectl create ns kubevirt-hyperconverged --dry-run=true -o yaml
+ kubectl apply -f -
W0309 13:38:03.250568 3638 helpers.go:677] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
namespace/kubevirt-hyperconverged created
+ namespaces=("openshift")
+ for namespace in ${namespaces[@]}
++ kubectl get ns openshift
Error from server (NotFound): namespaces "openshift" not found
+ [['' == '']]
+ kubectl create ns openshift --dry-run=true -o yaml
+ kubectl apply -f -
W0309 13:38:04.072929 3667 helpers.go:677] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
namespace/openshift created
+ LABEL_SELECTOR_ARG=
+ '[' false '!=' true ']'
+ LABEL_SELECTOR_ARG='-l name!=ssp-operator,name!=hyperconverged-cluster-cli-download'
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/cluster-network-addons00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/containerized-data-importer00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/hco00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/hyperconvergeds.hco.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/kubevirt00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/hostpath-provisioner00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/hostpathprovisioners.hostpathprovisioner.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/ssps.ssp.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/tekton-tasks-operator00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/tektontasks.tektontasks.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cert-manager.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
+ kubectl -n cert-manager wait deployment/cert-manager --for=condition=Available --timeout=300s
deployment.apps/cert-manager condition met
+ kubectl -n cert-manager wait deployment/cert-manager-webhook --for=condition=Available --timeout=300s
deployment.apps/cert-manager-webhook condition met
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role.yaml
role.rbac.authorization.k8s.io/cluster-network-addons-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
role.rbac.authorization.k8s.io/tekton-tasks-operator created
role.rbac.authorization.k8s.io/cdi-operator created
role.rbac.authorization.k8s.io/hostpath-provisioner-operator created
clusterrole.rbac.authorization.k8s.io/hyperconverged-cluster-operator created
clusterrole.rbac.authorization.k8s.io/cluster-network-addons-operator created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrole.rbac.authorization.k8s.io/tekton-tasks-operator created
clusterrole.rbac.authorization.k8s.io/cdi-operator created
clusterrole.rbac.authorization.k8s.io/hostpath-provisioner-operator created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/service_account.yaml
serviceaccount/cdi-operator created
serviceaccount/cluster-network-addons-operator created
serviceaccount/hostpath-provisioner-operator created
serviceaccount/hyperconverged-cluster-operator created
serviceaccount/kubevirt-operator created
serviceaccount/tekton-tasks-operator created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role_binding.yaml
rolebinding.rbac.authorization.k8s.io/cluster-network-addons-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/tekton-tasks-operator created
rolebinding.rbac.authorization.k8s.io/cdi-operator created
rolebinding.rbac.authorization.k8s.io/hostpath-provisioner-operator created
clusterrolebinding.rbac.authorization.k8s.io/hyperconverged-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/cluster-network-addons-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/tekton-tasks-operator created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
clusterrolebinding.rbac.authorization.k8s.io/hostpath-provisioner-operator created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/webhooks.yaml
issuer.cert-manager.io/selfsigned created
certificate.cert-manager.io/hyperconverged-cluster-webhook-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/validate-hco.kubevirt.io created
certificate.cert-manager.io/node-maintenance-operator-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/nodemaintenance-validation.kubevirt.io created
certificate.cert-manager.io/hostpath-provisioner-operator-webhook-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/hostpathprovisioner.kubevirt.io created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutate-hco.kubevirt.io created
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/operator.yaml
deployment.apps/hyperconverged-cluster-operator created
deployment.apps/hyperconverged-cluster-webhook created
deployment.apps/cluster-network-addons-operator created
deployment.apps/virt-operator created
deployment.apps/tekton-tasks-operator created
deployment.apps/cdi-operator created
deployment.apps/hostpath-provisioner-operator created
service/hyperconverged-cluster-webhook-service created
service/hostpath-provisioner-operator-service created
+ kubectl -n kubevirt-hyperconverged wait deployment/hyperconverged-cluster-webhook --for=condition=Available --timeout=300s
deployment.apps/hyperconverged-cluster-webhook condition met
+ kubectl apply -l 'name!=ssp-operator,name!=hyperconverged-cluster-cli-download' -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml

hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged created

KubeVirt is then deployed in the cluster:

$ kubectl get po,svc -n kubevirt-hyperconverged

NAME READY STATUS RESTARTS AGE
pod/bridge-marker-n7dgz 1/1 Running 0 3m14s
pod/bridge-marker-qld66 1/1 Running 0 3m14s
pod/bridge-marker-w9rjv 1/1 Running 0 3m14s
pod/cdi-apiserver-5b79dbb75c-2wjgl 1/1 Running 0 2m41s
pod/cdi-deployment-68787cb469-l5cpl 1/1 Running 0 2m37s
pod/cdi-operator-b5fc58cd7-b9f2s 1/1 Running 0 3m24s
pod/cdi-uploadproxy-56d645d8d5-trt54 1/1 Running 0 2m46s
pod/cluster-network-addons-operator-6f6765958d-rn2fn 2/2 Running 0 3m35s
pod/hostpath-provisioner-operator-7bb46fd5b7-cg2hb 1/1 Running 0 3m24s
pod/hyperconverged-cluster-operator-7df6b469b4-942tx 1/1 Running 0 3m35s
pod/hyperconverged-cluster-webhook-55cfd6cb69-46r8v 1/1 Running 0 3m35s
pod/kube-cni-linux-bridge-plugin-62jrn 1/1 Running 0 3m14s
pod/kube-cni-linux-bridge-plugin-hqpsg 1/1 Running 0 3m14s
pod/kube-cni-linux-bridge-plugin-nzkpp 1/1 Running 0 3m14s
pod/kubemacpool-cert-manager-6dfd4bb587-b29pl 1/1 Running 0 3m14s
pod/kubemacpool-mac-controller-manager-646dd6f884-gxttc 2/2 Running 0 3m14s
pod/multus-ggncz 1/1 Running 0 3m15s
pod/multus-m2xjm 1/1 Running 0 3m15s
pod/multus-zjm8n 1/1 Running 0 3m15s
pod/tekton-tasks-operator-7d98668446-prr4b 1/1 Running 0 3m35s
pod/virt-api-94f856488-k7fcb 1/1 Running 0 2m20s
pod/virt-controller-7bd9bdbccc-rn762 1/1 Running 0 107s
pod/virt-exportproxy-5c8d7c5668-mwfnz 1/1 Running 0 107s
pod/virt-handler-mzjv5 1/1 Running 0 107s
pod/virt-operator-6797b9bb9c-5jqj8 1/1 Running 0 3m24s
pod/virt-operator-6797b9bb9c-rrm4n 1/1 Running 0 3m24s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cdi-api ClusterIP 10.0.59.53 <none> 443/TCP 2m41s
service/cdi-prometheus-metrics ClusterIP 10.0.220.94 <none> 8080/TCP 2m37s
service/cdi-uploadproxy ClusterIP 10.0.37.12 <none> 443/TCP 2m46s
service/hostpath-provisioner-operator-service ClusterIP 10.0.118.100 <none> 9443/TCP 3m35s
service/hyperconverged-cluster-webhook-service ClusterIP 10.0.122.49 <none> 4343/TCP 3m35s
service/kubemacpool-service ClusterIP 10.0.96.143 <none> 443/TCP 3m14s
service/kubevirt-operator-webhook ClusterIP 10.0.145.178 <none> 443/TCP 2m22s
service/kubevirt-prometheus-metrics ClusterIP None <none> 443/TCP 2m22s
service/virt-api ClusterIP 10.0.230.208 <none> 443/TCP 2m22s
service/virt-exportproxy ClusterIP 10.0.218.176 <none> 443/TCP 2m22s
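Before moving on, it can be worth confirming that the HyperConverged custom resource reports itself as available; a possible check (assuming the HCO exposes an Available condition, as its status conventions suggest):

$ kubectl wait hyperconverged/kubevirt-hyperconverged \
    -n kubevirt-hyperconverged \
    --for=condition=Available --timeout=600s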

KubeVirt Manager is set up using the YAML manifests provided in its GitHub repository:

$ git clone https://github.com/kubevirt-manager/kubevirt-manager
Cloning into 'kubevirt-manager'...
remote: Enumerating objects: 4464, done.
remote: Counting objects: 100% (1287/1287), done.
remote: Compressing objects: 100% (446/446), done.
remote: Total 4464 (delta 838), reused 1191 (delta 815), pack-reused 3177
Receiving objects: 100% (4464/4464), 67.53 MiB | 3.41 MiB/s, done.
Resolving deltas: 100% (1461/1461), done.

~/kubevirt-manager$ kubectl apply -f kubernetes/ns.yaml
namespace/kubevirt-manager created

~/kubevirt-manager$ cat kubernetes/ns.yaml 

apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt-manager
  labels:
    app: kubevirt-manager
    kubevirt-manager.io/version: 1.2.0

~/kubevirt-manager$ kubectl apply -f kubernetes/rbac.yaml

serviceaccount/kubevirt-manager created
clusterrole.rbac.authorization.k8s.io/kubevirt-manager created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-manager created

~/kubevirt-manager$ kubectl apply -f kubernetes/deployment.yaml
deployment.apps/kubevirt-manager created

~/kubevirt-manager$ kubectl apply -f kubernetes/pc.yaml
priorityclass.scheduling.k8s.io/vm-standard created
priorityclass.scheduling.k8s.io/vm-preemptible created

~/kubevirt-manager$ kubectl apply -f kubernetes/service.yaml
service/kubevirt-manager created

KubeVirt Manager is now deployed, and I edit its Service to change the type from ClusterIP to LoadBalancer in order to expose it over HTTP:

Services, Load Balancing, and Networking

$ kubectl get po,svc -n kubevirt-manager
NAME READY STATUS RESTARTS AGE
pod/kubevirt-manager-5858499887-wfrgn 1/1 Running 0 95s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubevirt-manager ClusterIP 10.0.173.82 <none> 8080/TCP 63s

$ kubectl edit -n kubevirt-manager service/kubevirt-manager
service/kubevirt-manager edited

$ kubectl get po,svc -n kubevirt-manager
NAME READY STATUS RESTARTS AGE
pod/kubevirt-manager-5858499887-wfrgn 1/1 Running 0 4m56s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubevirt-manager LoadBalancer 10.0.173.82 20.74.99.190 8080:30143/TCP 4m24s
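As an alternative to the interactive kubectl edit, the same change could be applied non-interactively with a one-line patch; a sketch:

$ kubectl patch service/kubevirt-manager -n kubevirt-manager \
    -p '{"spec":{"type":"LoadBalancer"}}'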

I can then access it via the public IP address provided by AKS:

Krew is installed locally for kubectl, along with the corresponding plugin for KubeVirt:

Quickstart
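Once Krew itself is installed following the quickstart above, adding the virt plugin is a single command, which can then be sanity-checked with --help:

$ kubectl krew install virt
$ kubectl virt --help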

$ kubectl krew update
Updated the local copy of plugin index.

$ kubectl krew search virt
NAME DESCRIPTION INSTALLED
pv-migrate Migrate data across persistent volumes no
pvmigrate Migrates PVs between StorageClasses no
view-cert View certificate information stored in secrets no
view-secret Decode Kubernetes secrets no
view-serviceaccount-kubeconfig Show a kubeconfig setting to access the apiserv... no
virt Control KubeVirt virtual machines using virtctl yes

This makes it possible to launch a small Cirros-based virtual machine directly:

GitHub - cirros-dev/cirros
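The demo manifest applied below defines, roughly, the following VirtualMachine; this is a sketch reconstructed from the describe output further down, so field ordering and image tags may differ from the actual file:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
        kubevirt.io/size: small
    spec:
      domain:
        devices:
          disks:
            - name: rootfs
              disk:
                bus: virtio
            - name: cloudinit
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootfs
          containerDisk:
            image: kubevirt/cirros-registry-disk-demo
        - name: cloudinit
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=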

$ kubectl apply -f https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml
virtualmachine.kubevirt.io/testvm created

$ kubectl describe vm testvm
Name: testvm
Namespace: default
Labels: <none>
Annotations: kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachine
Metadata:
  Creation Timestamp: 2023-03-09T13:23:32Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
  Generation: 1
  Managed Fields:
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
        f:finalizers:
          .:
          v:"kubevirt.io/virtualMachineControllerFinalize":
    Manager: Go-http-client
    Operation: Update
    Time: 2023-03-09T13:23:32Z
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:printableStatus:
        f:volumeSnapshotStatuses:
    Manager: Go-http-client
    Operation: Update
    Subresource: status
    Time: 2023-03-09T13:23:32Z
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:running:
        f:template:
          .:
          f:metadata:
            .:
            f:labels:
              .:
              f:kubevirt.io/domain:
              f:kubevirt.io/size:
          f:spec:
            .:
            f:domain:
              .:
              f:devices:
                .:
                f:disks:
                f:interfaces:
              f:resources:
                .:
                f:requests:
                  .:
                  f:memory:
            f:networks:
            f:volumes:
    Manager: kubectl-client-side-apply
    Operation: Update
    Time: 2023-03-09T13:23:32Z
  Resource Version: 39937
  UID: 01de5614-922a-4ae8-9bec-40eeb9a7c500
Spec:
  Running: false
  Template:
    Metadata:
      Creation Timestamp: <nil>
      Labels:
        kubevirt.io/domain: testvm
        kubevirt.io/size: small
    Spec:
      Domain:
        Devices:
          Disks:
            Disk:
              Bus: virtio
            Name: rootfs
            Disk:
              Bus: virtio
            Name: cloudinit
          Interfaces:
            Mac Address: 02:9d:e4:00:00:00
            Masquerade:
            Name: default
        Machine:
          Type: q35
        Resources:
          Requests:
            Memory: 64M
      Networks:
        Name: default
        Pod:
      Volumes:
        Container Disk:
          Image: kubevirt/cirros-registry-disk-demo
        Name: rootfs
        Cloud Init No Cloud:
          userDataBase64: SGkuXG4=
        Name: cloudinit
Status:
  Conditions:
    Last Probe Time: 2023-03-09T13:23:32Z
    Last Transition Time: 2023-03-09T13:23:32Z
    Message: VMI does not exist
    Reason: VMINotExists
    Status: False
    Type: Ready
  Printable Status: Stopped
  Volume Snapshot Statuses:
    Enabled: false
    Name: rootfs
    Reason: Snapshot is not supported for this volumeSource type [rootfs]
    Enabled: false
    Name: cloudinit
    Reason: Snapshot is not supported for this volumeSource type [cloudinit]
Events: <none>

$ kubectl virt start testvm
VM testvm was scheduled to start

$ kubectl describe vmi testvm
Name: testvm
Namespace: default
Labels: kubevirt.io/domain=testvm
              kubevirt.io/size=small
Annotations: kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachineInstance
Metadata:
  Creation Timestamp: 2023-03-09T13:26:14Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
    foregroundDeleteVirtualMachine
  Generation: 4
  Managed Fields:
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
        f:finalizers:
          .:
          v:"kubevirt.io/virtualMachineControllerFinalize":
        f:labels:
          .:
          f:kubevirt.io/domain:
          f:kubevirt.io/size:
        f:ownerReferences:
          .:
          k:{"uid":"01de5614-922a-4ae8-9bec-40eeb9a7c500"}:
      f:spec:
        .:
        f:domain:
          .:
          f:devices:
            .:
            f:disks:
            f:interfaces:
          f:firmware:
            .:
            f:uuid:
          f:machine:
            .:
            f:type:
          f:resources:
            .:
            f:requests:
              .:
              f:memory:
        f:networks:
        f:volumes:
      f:status:
        .:
        f:activePods:
          .:
          f:f2afda42-6f8f-4bad-b35a-7591c3233cc1:
        f:conditions:
        f:guestOSInfo:
        f:phase:
        f:phaseTransitionTimestamps:
        f:qosClass:
        f:runtimeUser:
        f:virtualMachineRevisionName:
    Manager: Go-http-client
    Operation: Update
    Time: 2023-03-09T13:26:14Z
  Owner References:
    API Version: kubevirt.io/v1
    Block Owner Deletion: true
    Controller: true
    Kind: VirtualMachine
    Name: testvm
    UID: 01de5614-922a-4ae8-9bec-40eeb9a7c500
  Resource Version: 41728
  UID: 745caf89-6e75-47d1-b094-91e6119f3cc4
Spec:
  Domain:
    Cpu:
      Cores: 1
      Model: host-model
      Sockets: 1
      Threads: 1
    Devices:
      Disks:
        Disk:
          Bus: virtio
        Name: rootfs
        Disk:
          Bus: virtio
        Name: cloudinit
      Interfaces:
        Mac Address: 02:9d:e4:00:00:00
        Masquerade:
        Name: default
    Features:
      Acpi:
        Enabled: true
    Firmware:
      Uuid: 5a9fc181-957e-5c32-9e5a-2de5e9673531
    Machine:
      Type: q35
    Resources:
      Requests:
        Memory: 64M
  Networks:
    Name: default
    Pod:
  Volumes:
    Container Disk:
      Image: kubevirt/cirros-registry-disk-demo
      Image Pull Policy: Always
    Name: rootfs
    Cloud Init No Cloud:
      userDataBase64: SGkuXG4=
    Name: cloudinit
Status:
  Active Pods:
    f2afda42-6f8f-4bad-b35a-7591c3233cc1: aks-agentpool-16963728-vmss000000
  Conditions:
    Last Probe Time: 2023-03-09T13:26:14Z
    Last Transition Time: 2023-03-09T13:26:14Z
    Message: Guest VM is not reported as running
    Reason: GuestNotRunning
    Status: False
    Type: Ready
  Guest OS Info:
  Phase: Scheduling
  Phase Transition Timestamps:
    Phase: Pending
    Phase Transition Timestamp: 2023-03-09T13:26:14Z
    Phase: Scheduling
    Phase Transition Timestamp: 2023-03-09T13:26:14Z
  Qos Class: Burstable
  Runtime User: 107
  Virtual Machine Revision Name: revision-start-vm-01de5614-922a-4ae8-9bec-40eeb9a7c500-2
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal SuccessfulCreate 8s virtualmachine-controller Created virtual machine pod virt-launcher-testvm-b77hf

It is now running in KubeVirt:

$ kubectl get vmis
NAME AGE PHASE IP NODENAME READY
testvm 61s Running 10.244.2.38 aks-agentpool-16963728-vmss000000 True

$ kubectl get vms
NAME AGE STATUS READY
testvm 3m47s Running True

with its VNC console available in KubeVirt Manager:

Of course, I can also access the virtual machine's console locally:

$ kubectl virt console testvm

testvm login: cirros
Password: gocubsgo

$ ps aux
PID USER COMMAND
    1 root init
    2 root [kthreadd]
    3 root [ksoftirqd/0]
    4 root [kworker/0:0]
    5 root [kworker/0:0H]
    7 root [rcu_sched]
    8 root [rcu_bh]
    9 root [migration/0]
   10 root [watchdog/0]
   11 root [kdevtmpfs]
   12 root [netns]
   13 root [perf]
   14 root [khungtaskd]
   15 root [writeback]
   16 root [ksmd]
   17 root [crypto]
   18 root [kintegrityd]
   19 root [bioset]
   20 root [kblockd]
   21 root [ata_sff]
   22 root [md]
   23 root [devfreq_wq]
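The serial console is left with the usual Ctrl+] escape sequence. Assuming a VNC viewer is installed on the workstation, the graphical console can also be opened locally through the same plugin:

$ kubectl virt vnc testvm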

Stopping the VM, and then deleting it, can be done graphically or via the usual command line:

$ kubectl virt stop testvm
VM testvm was scheduled to stop

$ kubectl delete vm testvm
virtualmachine.kubevirt.io "testvm" deleted

With the graphical options, it would also be possible to reproduce the virtual machine creation workflow via the Containerized Data Importer (CDI)…

Experiment with CDI | KubeVirt.io
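As a sketch of what a CDI-based creation could look like, here is a DataVolume importing a disk image over HTTP (the URL, size and other values are placeholders, not taken from this article):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    http:
      url: http://example.com/images/disk.qcow2
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi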

KubeVirt Manager also offers integration with Prometheus (and therefore, potentially, with Grafana).

GitHub - kubevirt-manager/kubevirt-manager: Kubevirt Web UI / Web Interface Manager

This still-young project is under active development, and more advanced features for controlling virtual machines can be expected…

Stay tuned!
