DEV Community

Karim

Posted on • Originally published at deep75.Medium on

NeonVM: a lightweight QEMU-based virtualization API for Kubernetes …

A look at the startup Neon, made up of contributors to the famous open-source database PostgreSQL, whose mission is to build a cloud-native database service for every developer: fully managed, multi-cloud PostgreSQL. Notably, Neon's CEO is Nikita Shamgunov, co-founder of MemSQL / SingleStore …

This is yet another open-source project, like the ones I described in previous articles, such as KubeFire:

KubeFire: Create and manage Kubernetes clusters using microVMs with Firecracker …

or Virtink, the closest to it, based on Cloud Hypervisor:

Virtink: a lightweight virtualization add-on for Kubernetes …

On their GitHub repository they offer a lightweight virtualization project for Kubernetes, “NeonVM”, a QEMU-based virtualization API and controller for Kubernetes.

The project aims to follow the Kubernetes operator pattern. It uses controllers that provide a reconciliation function responsible for synchronizing resources until the desired state is reached on the Kubernetes cluster.
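Once a VirtualMachine resource is applied, the controller therefore keeps reconciling it until its status converges; this can be observed with plain kubectl (a small illustration — `neonvm` is the resource name used throughout this article):

```shell
# Watch the controller drive VirtualMachine resources toward the desired state
kubectl get neonvm --all-namespaces -o wide -w
```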

For this test I use an Ubuntu 22.04 LTS instance on DigitalOcean, which, as you may know, supports nested virtualization:
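Before installing anything, it is worth confirming that the instance really exposes hardware virtualization to its guests (a quick sanity check; `vmx` is the Intel CPU flag, `svm` its AMD counterpart):

```shell
# Count CPU virtualization flags and check for the KVM device node
grep -Ecw 'vmx|svm' /proc/cpuinfo   # non-zero means nested virtualization is usable
ls -l /dev/kvm                      # QEMU opens this device for KVM acceleration
```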

Installing k0s to form a single-node local Kubernetes cluster for this test:

Quick Start Guide - Documentation

root@neonvm:~# curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.25.4+k0s.0/k0s-v1.25.4+k0s.0-amd64
k0s is now executable in /usr/local/bin

root@neonvm:~# k0s install controller --single

root@neonvm:~# k0s start

root@neonvm:~# k0s status
Version: v1.25.4+k0s.0
Process ID: 2219
Role: controller
Workloads: true
SingleNode: true
Kube-api probing successful: true
Kube-api probing last error:  

root@neonvm:~# snap install kubectl --classic
kubectl 1.25.4 from Canonical✓ installed

root@neonvm:~# mkdir .kube && k0s kubeconfig admin > ~/.kube/config

root@neonvm:~# kubectl cluster-info
Kubernetes control plane is running at https://134.122.54.87:6443
CoreDNS is running at https://134.122.54.87:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@neonvm:~# kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-proxy-rqz5z 1/1 Running 0 2m20s
kube-system pod/kube-router-9wkc9 1/1 Running 0 2m20s
kube-system pod/coredns-5d5b5b96f9-n8mrv 1/1 Running 0 2m25s
kube-system pod/metrics-server-69d9d66ff8-7wb4b 1/1 Running 0 2m25s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m45s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2m33s
kube-system service/metrics-server ClusterIP 10.102.167.196 <none> 443/TCP 2m28s

I start by quickly installing cert-manager, which NeonVM requires:

root@neonvm:~# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

root@neonvm:~# kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-proxy-rqz5z 1/1 Running 0 5m5s
kube-system pod/kube-router-9wkc9 1/1 Running 0 5m5s
kube-system pod/coredns-5d5b5b96f9-n8mrv 1/1 Running 0 5m10s
kube-system pod/metrics-server-69d9d66ff8-7wb4b 1/1 Running 0 5m10s
cert-manager pod/cert-manager-74d949c895-zfmcf 1/1 Running 0 92s
cert-manager pod/cert-manager-cainjector-d9bc5979d-xsbq9 1/1 Running 0 92s
cert-manager pod/cert-manager-webhook-84b7ddd796-ltc69 1/1 Running 0 92s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m30s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5m18s
kube-system service/metrics-server ClusterIP 10.102.167.196 <none> 443/TCP 5m13s
cert-manager service/cert-manager ClusterIP 10.101.64.172 <none> 9402/TCP 92s
cert-manager service/cert-manager-webhook ClusterIP 10.107.99.137 <none> 443/TCP 92s

followed by NeonVM itself with this YAML manifest:

root@neonvm:~# kubectl apply -f https://github.com/neondatabase/neonvm/releases/latest/download/neonvm.yaml

namespace/neonvm-system created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.vm.neon.tech created
serviceaccount/neonvm-controller created
role.rbac.authorization.k8s.io/neonvm-leader-election-role created
clusterrole.rbac.authorization.k8s.io/neonvm-manager-role created
clusterrole.rbac.authorization.k8s.io/neonvm-metrics-reader created
clusterrole.rbac.authorization.k8s.io/neonvm-proxy-role created
rolebinding.rbac.authorization.k8s.io/neonvm-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/neonvm-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/neonvm-proxy-rolebinding created
service/neonvm-controller-metrics-service created
service/neonvm-webhook-service created
deployment.apps/neonvm-controller created
certificate.cert-manager.io/neonvm-serving-cert created
issuer.cert-manager.io/neonvm-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/neonvm-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/neonvm-validating-webhook-configuration created

And the NeonVM controller is now active locally:

root@neonvm:~# kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-proxy-rqz5z 1/1 Running 0 17m
kube-system pod/kube-router-9wkc9 1/1 Running 0 17m
kube-system pod/coredns-5d5b5b96f9-n8mrv 1/1 Running 0 17m
kube-system pod/metrics-server-69d9d66ff8-7wb4b 1/1 Running 0 17m
cert-manager pod/cert-manager-74d949c895-zfmcf 1/1 Running 0 13m
cert-manager pod/cert-manager-cainjector-d9bc5979d-xsbq9 1/1 Running 0 13m
cert-manager pod/cert-manager-webhook-84b7ddd796-ltc69 1/1 Running 0 13m
neonvm-system pod/neonvm-controller-764d8c5f6c-5hxnt 2/2 Running 0 11m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17m
kube-system service/metrics-server ClusterIP 10.102.167.196 <none> 443/TCP 17m
cert-manager service/cert-manager ClusterIP 10.101.64.172 <none> 9402/TCP 13m
cert-manager service/cert-manager-webhook ClusterIP 10.107.99.137 <none> 443/TCP 13m
neonvm-system service/neonvm-controller-metrics-service ClusterIP 10.99.181.63 <none> 8443/TCP 11m
neonvm-system service/neonvm-webhook-service ClusterIP 10.96.127.186 <none> 443/TCP 11m

Virtual machine images are built with a component named vm-builder:

vm-builder -src alpine:3.16 -dst neondatabase/vm-alpine:3.16
docker push -q neondatabase/vm-alpine:3.16

vm-builder -src ubuntu:22.04 -dst neondatabase/vm-ubuntu:22.04
docker push -q neondatabase/vm-ubuntu:22.04

It can be fetched directly as a binary, or you can compile it yourself in Go:

neonvm/main.go at main · neondatabase/neonvm

git clone https://github.com/neondatabase/neonvm && cd neonvm
go build -o bin/vm-builder tools/vm-builder/main.go


Releases · neondatabase/neonvm

A test with an Ubuntu 22.04-based image:

root@neonvm:~# cat vm.yaml

apiVersion: v1
kind: Service
metadata:
  name: julia
spec:
  ports:
  - name: pluto
    port: 1234
    protocol: TCP
    targetPort: 1234
  type: NodePort
  selector:
    vm.neon.tech/name: julia

---
apiVersion: vm.neon.tech/v1
kind: VirtualMachine
metadata:
  name: julia
spec:
  guest:
    cpus:
      min: 1
      max: 4
      use: 2
    memorySlotSize: 1Gi
    memorySlots:
      min: 1
      max: 6
      use: 4
    rootDisk:
      image: neondatabase/vm-ubuntu:22.04
      size: 20Gi
    ports:
      - port: 1234

Launching the virtual machine:

root@neonvm:~# kubectl apply -f vm.yaml 

service/julia created
virtualmachine.vm.neon.tech/julia created

root@neonvm:~# kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-proxy-rqz5z 1/1 Running 0 72m
kube-system pod/kube-router-9wkc9 1/1 Running 0 72m
kube-system pod/coredns-5d5b5b96f9-n8mrv 1/1 Running 0 72m
kube-system pod/metrics-server-69d9d66ff8-7wb4b 1/1 Running 0 72m
cert-manager pod/cert-manager-74d949c895-zfmcf 1/1 Running 0 68m
cert-manager pod/cert-manager-cainjector-d9bc5979d-xsbq9 1/1 Running 0 68m
cert-manager pod/cert-manager-webhook-84b7ddd796-ltc69 1/1 Running 0 68m
neonvm-system pod/neonvm-controller-764d8c5f6c-5hxnt 2/2 Running 0 66m
default pod/julia-v6vfp 1/1 Running 0 68s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 72m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m
kube-system service/metrics-server ClusterIP 10.102.167.196 <none> 443/TCP 72m
cert-manager service/cert-manager ClusterIP 10.101.64.172 <none> 9402/TCP 68m
cert-manager service/cert-manager-webhook ClusterIP 10.107.99.137 <none> 443/TCP 68m
neonvm-system service/neonvm-controller-metrics-service ClusterIP 10.99.181.63 <none> 8443/TCP 66m
neonvm-system service/neonvm-webhook-service ClusterIP 10.96.127.186 <none> 443/TCP 66m
default service/julia NodePort 10.106.153.99 <none> 1234:32002/TCP 68s

It is up and running:

root@neonvm:~# kubectl get neonvm -o wide
NAME CPUS MEMORY POD STATUS AGE NODE IMAGE
julia 2 4Gi julia-v6vfp Running 2m15s neonvm neondatabase/vm-ubuntu:22.04

root@neonvm:~# kubectl describe neonvm julia
Name: julia
Namespace: default
Labels: <none>
Annotations: <none>
API Version: vm.neon.tech/v1
Kind: VirtualMachine
Metadata:
  Creation Timestamp: 2022-12-07T22:31:19Z
  Finalizers:
    vm.neon.tech/finalizer
  Generation: 1
  Managed Fields:
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:guest:
          .:
          f:cpus:
            .:
            f:max:
            f:min:
            f:use:
          f:memorySlotSize:
          f:memorySlots:
            .:
            f:max:
            f:min:
            f:use:
          f:ports:
          f:rootDisk:
            .:
            f:image:
            f:imagePullPolicy:
            f:size:
        f:qmp:
        f:restartPolicy:
        f:terminationGracePeriodSeconds:
    Manager: kubectl-client-side-apply
    Operation: Update
    Time: 2022-12-07T22:31:19Z
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"vm.neon.tech/finalizer":
    Manager: manager
    Operation: Update
    Time: 2022-12-07T22:31:19Z
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:cpus:
        f:memorySize:
        f:node:
        f:phase:
        f:podIP:
        f:podName:
    Manager: manager
    Operation: Update
    Subresource: status
    Time: 2022-12-07T22:31:29Z
  Resource Version: 5181
  UID: 7ab14f43-6b28-4bd3-86c0-89cff61b216d
Spec:
  Guest:
    Cpus:
      Max: 4
      Min: 1
      Use: 2
    Memory Slot Size: 1Gi
    Memory Slots:
      Max: 6
      Min: 1
      Use: 4
    Ports:
      Port: 1234
      Protocol: TCP
    Root Disk:
      Image: neondatabase/vm-ubuntu:22.04
      Image Pull Policy: IfNotPresent
      Size: 20Gi
  Pod Resources:
  Qmp: 20183
  Restart Policy: Never
  Termination Grace Period Seconds: 5
Status:
  Conditions:
    Last Transition Time: 2022-12-07T22:31:28Z
    Message: Pod (julia-v6vfp) for VirtualMachine (julia) created successfully
    Reason: Reconciling
    Status: True
    Type: Available
  Cpus: 2
  Memory Size: 4Gi
  Node: neonvm
  Phase: Running
  Pod IP: 10.244.0.10
  Pod Name: julia-v6vfp
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Created 2m28s virtualmachine-controller Created VirtualMachine julia
  Normal CpuInfo 2m19s virtualmachine-controller VirtualMachine julia uses 2 cpu cores
  Normal MemoryInfo 2m18s virtualmachine-controller VirtualMachine julia uses 2Gi memory
  Normal MemoryInfo 2m18s virtualmachine-controller VirtualMachine julia uses 3Gi memory
  Normal MemoryInfo 2m18s virtualmachine-controller VirtualMachine julia uses 4Gi memory

and I access its console via screen:

root@neonvm:~# kubectl exec -it $(kubectl get neonvm julia -ojsonpath='{.status.podName}') -- screen /dev/pts/0

ENTER

neonvm login: root (automatic login)

root@neonvm:~# apt update
Get:1 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
Get:2 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
Get:5 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [4732 B]
Get:6 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [577 kB]
Get:7 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [657 kB]
Get:8 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [780 kB]
Get:9 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
Get:10 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
Get:11 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB]
Get:12 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
Get:13 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [963 kB]
Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [956 kB]
Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [8150 B]
Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [624 kB]
Get:17 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [3520 B]
Get:18 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [7278 B]
Fetched 24.9 MB in 3s (7474 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.

Installing Julia and Pluto.jl, which I had used in previous articles:

via these GitHub repositories:

root@neonvm:~# curl -fsSL https://install.julialang.org | sh

This path will then be added to your PATH environment variable by
modifying the profile files located at:

  /root/.bashrc
  /root/.profile

Julia will look for a new version of Juliaup itself every 1440 minutes when you start julia.

You can uninstall at any time with juliaup self uninstall and these
changes will be reverted.

✔ Do you want to install with these default configuration choices? · Proceed with installation

Now installing Juliaup
Installing Julia 1.8.3+0.x64.linux.gnu
Julia was successfully installed on your system.

Depending on which shell you are using, run one of the following
commands to reload the PATH environment variable:

  . /root/.bashrc
  . /root/.profile

root@neonvm:~# source .bashrc
root@neonvm:~# julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.8.3 (2022-11-14)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia>
(@v1.8) pkg> add Pluto,PlutoUI
  [2f01184e] + SparseArrays
  [10745b16] + Statistics
  [fa267f1f] + TOML v1.0.0
  [a4e569a6] + Tar v1.10.1
  [8dfed614] + Test
  [cf7118a7] + UUIDs
  [4ec0a83e] + Unicode
  [e66e0078] + CompilerSupportLibraries_jll v0.5.2+0
  [deac9b47] + LibCURL_jll v7.84.0+0
  [29816b5a] + LibSSH2_jll v1.10.2+0
  [c8ffd9c3] + MbedTLS_jll v2.28.0+0
  [14a3606d] + MozillaCACerts_jll v2022.2.1
  [4536629a] + OpenBLAS_jll v0.3.20+0
  [83775a58] + Zlib_jll v1.2.12+3
  [8e850b90] + libblastrampoline_jll v5.1.1+0
  [8e850ede] + nghttp2_jll v1.48.0+0
  [3f19e933] + p7zip_jll v17.4.0+0
Precompiling project...
  48 dependencies successfully precompiled in 176 seconds

julia> using Pluto, PlutoUI;
       Pluto.run(
          launch_browser=false,
          host="0.0.0.0",
          port=1234,
          require_secret_for_open_links=false,
          require_secret_for_access=false,
          workspace_use_distributed=false,
)
[ Info: Loading...
[ Info: Listening on: 0.0.0.0:1234
┌ Info: 
└ Go to http://0.0.0.0:1234/ in your browser to start writing ~ have fun!
┌ Info: 
│ Press Ctrl+C in this terminal to stop Pluto
└ 

I detach from the screen console (CTRL-a then d) …

How To Use Linux Screen

The Pluto notebook is exposed via a NodePort on TCP/32002 here:

Tutorial - Kubernetes Services

root@neonvm:~# curl http://10.106.153.99:1234

<!DOCTYPE html><html lang="en"><head><meta name="viewport" content="width=device-width"><title>Pluto.jl</title><meta charset="utf-8"><script type="module" src="editor.a387c700.js"></script><link rel="stylesheet" href="editor.788c1e24.css"><link rel="stylesheet" href="vollkorn.089565a8.css"><link rel="stylesheet" href="juliamono.a2a5b30d.css"><script>console.log("Pluto.jl, by Fons van der Plas (https://github.com/fonsp) and Mikołaj Bochenski (https://github.com/malyvsen) 🌈");</script><meta name="author" content="Fons van der Plas; Mikołaj Bochenski"><meta name="theme-color" media="(prefers-color-scheme: light)" content="white"><meta name="theme-color" media="(prefers-color-scheme: dark)" content="#2a2928"><meta name="color-scheme" content="light dark"><link rel="icon" type="image/png" sizes="16x16" href="favicon-16x16.347d2855.png"><link rel="icon" type="image/png" sizes="32x32" href="favicon-32x32.8789add4.png"><link rel="icon" type="image/png" sizes="96x96" href="favicon-96x96.48689391.png"><meta name="description" content="Pluto.jl notebooks"><script src="editor.4b96dd74.js" defer></script><script src="editor.9f9dc874.js" defer></script><link rel="pluto-logo-big" href="logo.004c1d7c.svg"><link rel="stylesheet" href="index.a096377a.css"><link rel="stylesheet" href="index.420fa7f8.css"><script src="index.478ed99f.js" type="module" defer></script><script src="editor.b9f0ac7b.js"></script><link rel="prefetch" as="stylesheet" type="text/html" href="editor.075ee715.css"><link rel="prefetch" as="script" type="text/javascript" href="editor.9eeb148f.js"><link rel="prerender" href="editor.html"></head><body class="nosessions"> <loading-bar></loading-bar> <div id="app"> <section id="title"><h1>welcome to <img src="logo.004c1d7c.svg"></h1></section> <section id="mywork"> <div> <h2>My work</h2> <ul id="recent"> <li class="new"><a href="new"><button><span class="ionicon"></span></button>Create a <strong>new notebook</strong></a></li> <li><em>Loading...</em></li> </ul> </div> 
</section> <section id="open"> <div> <h2>Open a notebook</h2> <p><em>Loading...</em></p> </div> </section> <section id="featured"> <div> <div class="featured-source"> <h1>Featured Notebooks</h1> <p>These notebooks from the Julia community show off what you can do with Pluto. Give it a try, you might learn something new!</p> <div class="collection"> <h2>Loading...</h2> </div> </div> </div> </section> </div> </body></html>

root@neonvm:~# kubectl get svc 

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 94m
julia NodePort 10.106.153.99 <none> 1234:32002/TCP 22m
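Since the Service is of type NodePort, the notebook should presumably also be reachable from outside the cluster through the node's public IP (134.122.54.87 in the `kubectl cluster-info` output earlier), provided the firewall allows it:

```shell
# Same Pluto landing page, this time through the NodePort mapping 1234 -> 32002
curl -s http://134.122.54.87:32002/ | head -c 120
```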

and run a quick plot in Julia:

Tutorial


begin
using Pkg
Pkg.add(["Plots", "LaTeXStrings"])

using Plots,LaTeXStrings
x = 10 .^ range(0, 4, length=100)
y = @. 3/(4+100x)

plot(x, y, label="3/(4+100x)")
plot!(xscale=:log10, yscale=:log10, minorgrid=true)
xlims!(1e+0, 1e+4)
ylims!(1e-5, 1e+0)
title!("Log-log plot")
xlabel!("x")
ylabel!("y") 
end

I can edit the virtual machine:

root@neonvm:~# kubectl edit neonvm julia

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: vm.neon.tech/v1
kind: VirtualMachine
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"vm.neon.tech/v1","kind":"VirtualMachine","metadata":{"annotations":{},"name":"julia","namespace":"default"},"spec":{"guest":{"cpus":{"max":4,"min":1,"use":2},"memorySlotSize":"1Gi","memorySlots":{"max":6,"min":1,"use":4},"ports":[{"port":1234}],"rootDisk":{"image":"neondatabase/vm-ubuntu:22.04","size":"20Gi"}}}}
  creationTimestamp: "2022-12-07T22:31:19Z"
  finalizers:
  - vm.neon.tech/finalizer
  generation: 1
  name: julia
  namespace: default
  resourceVersion: "5181"
  uid: 7ab14f43-6b28-4bd3-86c0-89cff61b216d
spec:
  guest:
    cpus:
      max: 4
      min: 1
      use: 2
    memorySlotSize: 1Gi
    memorySlots:
      max: 6
      min: 1
      use: 4
    ports:
    - port: 1234
      protocol: TCP
    rootDisk:
      image: neondatabase/vm-ubuntu:22.04
      imagePullPolicy: IfNotPresent
      size: 20Gi
  podResources: {}
  qmp: 20183
  restartPolicy: Never
  terminationGracePeriodSeconds: 5
status:
  conditions:
  - lastTransitionTime: "2022-12-07T22:31:28Z"
    message: Pod (julia-v6vfp) for VirtualMachine (julia) created successfully
    reason: Reconciling
    status: "True"
    type: Available
  cpus: 2
  memorySize: 4Gi
  node: neonvm

I can also patch it, here giving the virtual machine more cores:

root@neonvm:~# kubectl patch neonvm julia --type='json' -p='[{"op": "replace", "path": "/spec/guest/cpus/use", "value":4}]'
virtualmachine.vm.neon.tech/julia patched

root@neonvm:~# kubectl describe neonvm julia
Name: julia
Namespace: default
Labels: <none>
Annotations: <none>
API Version: vm.neon.tech/v1
Kind: VirtualMachine
Metadata:
  Creation Timestamp: 2022-12-07T22:31:19Z
  Finalizers:
    vm.neon.tech/finalizer
  Generation: 2
  Managed Fields:
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:guest:
          .:
          f:cpus:
            .:
            f:max:
            f:min:
          f:memorySlotSize:
          f:memorySlots:
            .:
            f:max:
            f:min:
            f:use:
          f:ports:
          f:rootDisk:
            .:
            f:image:
            f:imagePullPolicy:
            f:size:
        f:qmp:
        f:restartPolicy:
        f:terminationGracePeriodSeconds:
    Manager: kubectl-client-side-apply
    Operation: Update
    Time: 2022-12-07T22:31:19Z
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"vm.neon.tech/finalizer":
    Manager: manager
    Operation: Update
    Time: 2022-12-07T22:31:19Z
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:spec:
        f:guest:
          f:cpus:
            f:use:
    Manager: kubectl-patch
    Operation: Update
    Time: 2022-12-07T23:16:21Z
    API Version: vm.neon.tech/v1
    Fields Type: FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:cpus:
        f:memorySize:
        f:node:
        f:phase:
        f:podIP:
        f:podName:
    Manager: manager
    Operation: Update
    Subresource: status
    Time: 2022-12-07T23:16:22Z
  Resource Version: 7999
  UID: 7ab14f43-6b28-4bd3-86c0-89cff61b216d
Spec:
  Guest:
    Cpus:
      Max: 4
      Min: 1
      Use: 4
    Memory Slot Size: 1Gi
    Memory Slots:
      Max: 6
      Min: 1
      Use: 4
    Ports:
      Port: 1234
      Protocol: TCP
    Root Disk:
      Image: neondatabase/vm-ubuntu:22.04
      Image Pull Policy: IfNotPresent
      Size: 20Gi
  Pod Resources:
  Qmp: 20183
  Restart Policy: Never
  Termination Grace Period Seconds: 5
Status:
  Conditions:
    Last Transition Time: 2022-12-07T22:31:28Z
    Message: Pod (julia-v6vfp) for VirtualMachine (julia) created successfully
    Reason: Reconciling
    Status: True
    Type: Available
  Cpus: 4
  Memory Size: 4Gi
  Node: neonvm
  Phase: Running
  Pod IP: 10.244.0.10
  Pod Name: julia-v6vfp
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Created 45m virtualmachine-controller Created VirtualMachine julia
  Normal CpuInfo 45m virtualmachine-controller VirtualMachine julia uses 2 cpu cores
  Normal MemoryInfo 45m virtualmachine-controller VirtualMachine julia uses 2Gi memory
  Normal MemoryInfo 45m virtualmachine-controller VirtualMachine julia uses 3Gi memory
  Normal MemoryInfo 45m virtualmachine-controller VirtualMachine julia uses 4Gi memory
  Normal CpuInfo 11s virtualmachine-controller VirtualMachine julia uses 3 cpu cores
  Normal CpuInfo 11s virtualmachine-controller VirtualMachine julia uses 4 cpu cores
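Memory should presumably be resizable the same way, since hot-plugging of CPUs and memory via resource patch is listed as done on the roadmap below; a sketch, assuming the analogous field path from the spec above:

```shell
# Grow guest memory from 4 to 6 slots of 1Gi (still within the declared max of 6)
kubectl patch neonvm julia --type='json' \
  -p='[{"op": "replace", "path": "/spec/guest/memorySlots/use", "value":6}]'
```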

The virtual machine is indeed running inside QEMU …

root@neonvm:~# kubectl exec -it $(kubectl get neonvm julia -ojsonpath='{.status.podName}') -- /bin/sh

Defaulted container "runner" out of: runner, init-rootdisk (init)

/ # ps aux
PID USER TIME COMMAND
    1 root 0:00 runner -vmdump eyJxbXAiOjIwMTgzLCJ0ZXJtaW5hdGlvbkdyYWNlUGVyaW9kU2Vjb25kcyI6NSwicG9kUmVzb3VyY2VzIjp7fSwicmVzdGFydFBvbGljeSI6Ik5ldmVyIiwiZ3Vlc3QiOnsiY3B1cyI6ey
   23 root 0:00 [dnsmasq]
   24 dnsmasq 0:00 dnsmasq --port=0 --bind-interfaces --dhcp-authoritative --interface=br-def --dhcp-range=10.255.255.254,static,255.255.255.252 --dhcp-host=be:56:c3:55:e9:de,1
   25 root 12:54 qemu-system-x86_64 -machine q35 -nographic -no-reboot -nodefaults -only-migratable -audiodev none,id=noaudio -serial pty -serial stdio -msg timestamp=on -qmp
   65 root 0:13 {screen} SCREEN /dev/pts/0
  417 root 0:05 {screen} SCREEN /dev/pts/0
  525 root 0:00 [screen]
  532 root 0:00 {screen} SCREEN /dev/pts/0
  533 root 0:00 /bin/sh
  539 root 0:00 ps aux

I can then simply destroy it:

root@neonvm:~# kubectl delete neonvm julia
virtualmachine.vm.neon.tech "julia" deleted
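Deleting the custom resource also tears down its runner pod, which can be checked afterwards:

```shell
# Both the VirtualMachine resource and the julia-* pod should be gone
kubectl get neonvm,pods -n default
```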

NeonVM is a young, evolving open-source project; as its GitHub repository makes clear, these features are already implemented or still on the roadmap …

  • Mutating and validating webhooks (done)
  • Multus CNI support (to do)
  • Hot-plugging of CPUs and memory (via resource patch) (done)
  • Live migration of CRDs (to do)
  • Simplified creation of VM disk images from any Docker image (done, with vm-builder)
  • ARM64 architecture support (to do)

To be continued!
