KubeFire: Creating and managing Kubernetes clusters using Firecracker microVMs …

David Ko, Senior Engineering Manager at SUSE, offers in his GitHub repository an interesting project named KubeFire, which lets you create and manage Kubernetes clusters running on Firecracker microVMs via weaveworks/ignite:

GitHub - innobead/kubefire: KubeFire 🔥, creates and manages Kubernetes Clusters using Firecracker microVMs

I already talked about Weave Ignite a few years ago in this article,

Weave Ignite et Weave Footloose dans Scaleway : quand les machines virtuelles se prennent pour des…

as well as about its use in the context of nested Kubernetes clusters:

Des clusters Kubernetes imbriqués avec Ignite, Firecracker, Containerd, Kind et Rancher …

KubeFire uses independent rootfs and kernel OCI images instead of traditional virtual machine image formats such as QCOW2, VHD, VMDK, etc.

KubeFire also uses containerd to manage the Firecracker processes, and comes with several cluster bootstrappers to provision Kubernetes clusters: kubeadm, k3s, RKE2 and k0s.

Finally, KubeFire supports deploying clusters on different architectures such as x86_64/AMD64 and ARM64/AARCH64.
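
Since the rootfs and the kernel are shipped as plain OCI images, they can be pulled and inspected with any OCI-aware tooling. A minimal sketch with containerd's ctr (installed by kubefire install further below), using the default image references that appear later in this article:

# pull the default rootfs and kernel images used by KubeFire
ctr images pull ghcr.io/innobead/kubefire-opensuse-leap:15.2
ctr images pull ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64
# list the pulled images
ctr images ls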

I start by installing Huber, another open source project by David Ko, designed to simplify package management for GitHub projects thanks to an impressive built-in (live-updated) list of popular projects. It also supports a repository feature to manage package installation from your own GitHub project:

GitHub - innobead/huber: Huber 📦, Package Manager for GitHub repos

Once again I start from a DigitalOcean droplet that allows nested virtualization (the API call below expects a valid API token in $TOKEN):

curl -X POST -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $TOKEN" \
    -d '{"name":"kubefire",
        "size":"s-8vcpu-16gb",
        "region":"nyc3",
        "image":"ubuntu-22-04-x64",
        "vpc_uuid":"94ad9a63-98c1-4c58-a5f0-d9a9b413e0a7"}' \
    "https://api.digitalocean.com/v2/droplets"

Huber's prebuilt binary still needs libssl1.1, which Ubuntu 22.04 no longer ships, so I install it manually from the Ubuntu archive (along with a couple of development libraries):

root@kubefire:~# apt install libssl-dev libarchive-dev -y
root@kubefire:~# wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
--2022-11-11 11:36:57-- http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
Resolving nz2.archive.ubuntu.com (nz2.archive.ubuntu.com)... 91.189.91.39, 185.125.190.36, 91.189.91.38, ...
Connecting to nz2.archive.ubuntu.com (nz2.archive.ubuntu.com)|91.189.91.39|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1320576 (1.3M) [application/x-debian-package]
Saving to: ‘libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb’

libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb 100%[=============================================================================================>] 1.26M --.-KB/s in 0.06s   

2022-11-11 11:36:57 (21.7 MB/s) - ‘libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb’ saved [1320576/1320576]

root@kubefire:~# dpkg -i libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
Selecting previously unselected package libssl1.1:amd64.
(Reading database ... 64650 files and directories currently installed.)
Preparing to unpack libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb ...
Unpacking libssl1.1:amd64 (1.1.1f-1ubuntu2.16) ...
Setting up libssl1.1:amd64 (1.1.1f-1ubuntu2.16) ...
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...

root@kubefire:~# curl -sfSL https://raw.githubusercontent.com/innobead/huber/master/hack/install.sh | bash
++ uname
+ os=Linux
++ uname -m
+ arch=x86_64
+ filename=huber-linux-amd64
+ case $os in
+ case $arch in
+ filename=huber-linux-amd64
++ get_latest_release
++ curl -sfSL https://api.github.com/repos/innobead/huber/releases/latest
++ grep '"tag_name":'
++ sed -E 's/.*"([^"]+)".*/\1/'
+ curl -sfSLO https://github.com/innobead/huber/releases/download/v0.3.8/huber-linux-amd64
+ chmod +x huber-linux-amd64
+ mkdir -p /root/.huber/bin
+ mv huber-linux-amd64 /root/.huber/bin/huber
+ export_statement='export PATH=$HOME/.huber/bin:$PATH'
+ grep -Fxq 'export PATH=$HOME/.huber/bin:$PATH' /root/.bashrc
+ echo 'export PATH=$HOME/.huber/bin:$PATH'
+ cat
The installation script has updated the $PATH environment variable in /root/.bashrc.
Please restart the shell or source again to make it take effect.
root@kubefire:~# source .bashrc
root@kubefire:~# huber
huber v0.3.8 Commit: d642e4b-20220708065617
Huber, simplify github package management

USAGE:
    huber [OPTIONS] [SUBCOMMAND]

OPTIONS:
    -h, --help Print help information
    -k, --github-key <string> Github SSH private key path for authenticating public/private
                                   github repository access. This is required if you connect github
                                   w/ SSH instead of https [env: GITHUB_KEY=]
    -l, --log-level <string> Log level [default: error] [possible values: off, error, warn,
                                   info, debug, trace]
    -o, --output <string> Output format [default: console] [possible values: console, json,
                                   yaml]
    -t, --github-token <string> Github token, used for authorized access instead of limited
                                   public access [env: GITHUB_TOKEN=]
    -V, --version Print version information

SUBCOMMANDS:
    config Manages the configuration
    current Updates the current package version [aliases: c]
    flush Flushes inactive artifacts [aliases: f]
    help Print this message or the help of the given subcommand(s)
    info Shows the package info [aliases: i]
    install Installs the package [aliases: in]
    repo Manages repositories
    reset Resets huber [aliases: r]
    search Searches package [aliases: se]
    self-update Updates huber [aliases: su]
    show Shows installed packages [aliases: s]
    uninstall Uninstalls package [aliases: un, rm]
    update Updates the installed package(s) [aliases: up]
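Huber's built-in package list can also be queried before installing anything, for instance with the search and info subcommands listed above (output omitted here):

root@kubefire:~# huber search kubefire
root@kubefire:~# huber info kubefire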

I can then proceed with installing KubeFire:

root@kubefire:~# huber install kubefire
Installed executables:
 - /root/.huber/bin/kubefire
kubefire (version: v0.3.8, source: github) installed

root@kubefire:~# kubefire
KubeFire, creates and manages Kubernetes clusters using FireCracker microVMs

Usage:
  kubefire [flags]
  kubefire [command]

Available Commands:
  cache Manages caches
  cluster Manages clusters
  completion Generate the autocompletion script for the specified shell
  help Help about any command
  image Shows supported RootFS and Kernel images
  info Shows info of prerequisites, supported K8s/K3s versions
  install Installs or updates prerequisites
  kubeconfig Manages kubeconfig of clusters
  node Manages nodes
  uninstall Uninstalls prerequisites
  version Shows version

Flags:
  -t, --github-token string GIthub Personal Access Token used to query repo release info
  -h, --help help for kubefire
  -l, --log-level string log level, options: [panic, fatal, error, warning, info, debug, trace] (default "info")

Use "kubefire [command] --help" for more information about a command.

root@kubefire:~# kubefire cluster create -h
Creates cluster

Usage:
  kubefire cluster create [name] [flags]

Flags:
  -b, --bootstrapper string Bootstrapper type, options: [kubeadm, k3s, rke2, k0s] (default "kubeadm")
  -c, --config string Cluster configuration file (ex: use 'config-template' command to generate the default cluster config)
  -o, --extra-options string Extra options (ex: key=value,...) for bootstrapper
  -f, --force Force to recreate if the cluster exists
  -h, --help help for create
  -i, --image string Rootfs container image (default "ghcr.io/innobead/kubefire-opensuse-leap:15.2")
      --kernel-args string Kernel arguments (default "console=ttyS0 reboot=k panic=1 pci=off ip=dhcp security=apparmor apparmor=1")
      --kernel-image string Kernel container image (default "ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64")
      --master-count int Count of master node (default 1)
      --master-cpu int CPUs of master node (default 2)
      --master-memory string Memory of master node (default "2GB")
      --master-size string Disk size of master node (default "10GB")
      --no-cache Forget caches
      --no-start Don't start nodes
  -k, --pubkey string Public key
  -v, --version string Version of Kubernetes supported by bootstrapper (ex: v1.18, v1.18.8, empty)
      --worker-count int Count of worker node
      --worker-cpu int CPUs of worker node (default 2)
      --worker-memory string Memory of worker node (default "2GB")
      --worker-size string Disk size of worker node (default "10GB")

Global Flags:
  -t, --github-token string GIthub Personal Access Token used to query repo release info
  -l, --log-level string log level, options: [panic, fatal, error, warning, info, debug, trace] (default "info")
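For reference, these flags combine naturally. A hypothetical, slightly customized cluster could look like this (arbitrary values, but only flags documented above; the prerequisites below still need to be installed first):

kubefire cluster create demo \
  --bootstrapper=k3s \
  --master-count 1 --master-cpu 2 --master-memory 2GB \
  --worker-count 2 --worker-cpu 2 --worker-memory 2GB \
  --version v1.25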

Next, I install the prerequisites on the Ubuntu instance; this single command automatically sets up Weave Ignite (plus runc, containerd and the CNI plugins) locally:

root@kubefire:~# kubefire install

INFO[2022-11-11T11:46:13Z] downloading https://raw.githubusercontent.com/innobead/kubefire/v0.3.8/scripts/install-prerequisites.sh to save /root/.kubefire/bin/v0.3.8/install-prerequisites.sh force=false version=v0.3.8
INFO[2022-11-11T11:46:14Z] running script (install-prerequisites.sh) version=v0.3.8
INFO[2022-11-11T11:46:14Z] running /root/.kubefire/bin/v0.3.8/install-prerequisites.sh version=v0.3.8
INFO[2022-11-11T11:46:14Z] + TMP_DIR=/tmp/kubefire                      
INFO[2022-11-11T11:46:14Z] ++ go env GOARCH                             
INFO[2022-11-11T11:46:14Z] ++ echo amd64                                
INFO[2022-11-11T11:46:14Z] + GOARCH=amd64                               
INFO[2022-11-11T11:46:14Z] + KUBEFIRE_VERSION=v0.3.8                    
INFO[2022-11-11T11:46:14Z] + CONTAINERD_VERSION=v1.6.6
+ IGNITE_VERION=v0.10.0 
INFO[2022-11-11T11:46:14Z] + CNI_VERSION=v1.1.1
+ RUNC_VERSION=v1.1.3   
INFO[2022-11-11T11:46:14Z] + '[' -z v0.3.8 ']'
+ '[' -z v1.6.6 ']'
+ '[' -z v0.10.0 ']'
+ '[' -z v1.1.1 ']'
+ '[' -z v1.1.3 ']' 
INFO[2022-11-11T11:46:14Z] ++ sed -E 's/(v[0-9]+\.[0-9]+\.[0-9]+)[a-zA-Z0-9\-]*/\1/g' 
INFO[2022-11-11T11:46:14Z] +++ echo v0.3.8                              
INFO[2022-11-11T11:46:14Z] + STABLE_KUBEFIRE_VERSION=v0.3.8             
INFO[2022-11-11T11:46:14Z] + rm -rf /tmp/kubefire                       
INFO[2022-11-11T11:46:14Z] + mkdir -p /tmp/kubefire                     
INFO[2022-11-11T11:46:14Z] + pushd /tmp/kubefire
/tmp/kubefire /root    
INFO[2022-11-11T11:46:14Z] + trap cleanup EXIT ERR INT TERM             
INFO[2022-11-11T11:46:14Z] + check_virtualization
+ _is_arm_arch        
INFO[2022-11-11T11:46:14Z] + uname -m                                   
INFO[2022-11-11T11:46:14Z] + grep aarch64                               
INFO[2022-11-11T11:46:14Z] + return 1                                   
INFO[2022-11-11T11:46:14Z] + lscpu                                      
INFO[2022-11-11T11:46:14Z] + grep 'Virtuali[s|z]ation'                  
INFO[2022-11-11T11:46:14Z] Virtualization: VT-x
Virtualization type: full 
INFO[2022-11-11T11:46:14Z] + lsmod                                      
INFO[2022-11-11T11:46:14Z] + grep kvm                                   
INFO[2022-11-11T11:46:14Z] kvm_intel 372736 0
kvm 1028096 1 kvm_intel 
INFO[2022-11-11T11:46:14Z] + install_runc
+ _check_version /usr/local/bin/runc -version v1.1.3 
INFO[2022-11-11T11:46:14Z] + set +o pipefail
+ local exec_name=/usr/local/bin/runc
+ local exec_version_cmd=-version
+ local version=v1.1.3
+ command -v /usr/local/bin/runc
+ return 1
+ _is_arm_arch 
INFO[2022-11-11T11:46:14Z] + uname -m                                   
INFO[2022-11-11T11:46:14Z] + grep aarch64                               
INFO[2022-11-11T11:46:14Z] + return 1                                   
INFO[2022-11-11T11:46:14Z] + curl -sfSL https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64 -o runc 
INFO[2022-11-11T11:46:14Z] + chmod +x runc                              
INFO[2022-11-11T11:46:14Z] + sudo mv runc /usr/local/bin/               
INFO[2022-11-11T11:46:14Z] + install_containerd
+ _check_version /usr/local/bin/containerd --version v1.6.6 
INFO[2022-11-11T11:46:14Z] + set +o pipefail
+ local exec_name=/usr/local/bin/containerd
+ local exec_version_cmd=--version
+ local version=v1.6.6
+ command -v /usr/local/bin/containerd
+ return 1
+ local version=1.6.6
+ local dir=containerd-1.6.6
+ _is_arm_arch 
INFO[2022-11-11T11:46:14Z] + uname -m                                   
INFO[2022-11-11T11:46:14Z] + grep aarch64                               
INFO[2022-11-11T11:46:14Z] + return 1                                   
INFO[2022-11-11T11:46:14Z] + curl -sfSLO https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz 
INFO[2022-11-11T11:46:15Z] + mkdir -p containerd-1.6.6                  
INFO[2022-11-11T11:46:15Z] + tar -zxvf containerd-1.6.6-linux-amd64.tar.gz -C containerd-1.6.6 
INFO[2022-11-11T11:46:15Z] bin/
bin/containerd-shim                     
INFO[2022-11-11T11:46:15Z] bin/containerd                               
INFO[2022-11-11T11:46:16Z] bin/containerd-shim-runc-v1                  
INFO[2022-11-11T11:46:16Z] bin/containerd-stress                        
INFO[2022-11-11T11:46:16Z] bin/containerd-shim-runc-v2                  
INFO[2022-11-11T11:46:16Z] bin/ctr                                      
INFO[2022-11-11T11:46:17Z] + chmod +x containerd-1.6.6/bin/containerd containerd-1.6.6/bin/containerd-shim containerd-1.6.6/bin/containerd-shim-runc-v1 containerd-1.6.6/bin/containerd-shim-runc-v2 containerd-1.6.6/bin/containerd-stress containerd-1.6.6/bin/ctr 
INFO[2022-11-11T11:46:17Z] + sudo mv containerd-1.6.6/bin/containerd containerd-1.6.6/bin/containerd-shim containerd-1.6.6/bin/containerd-shim-runc-v1 containerd-1.6.6/bin/containerd-shim-runc-v2 containerd-1.6.6/bin/containerd-stress containerd-1.6.6/bin/ctr /usr/local/bin/ 
INFO[2022-11-11T11:46:17Z] + curl -sfSLO https://raw.githubusercontent.com/containerd/containerd/v1.6.6/containerd.service 
INFO[2022-11-11T11:46:17Z] + sudo groupadd containerd                   
INFO[2022-11-11T11:46:17Z] + sudo mv containerd.service /etc/systemd/system/containerd.service 
INFO[2022-11-11T11:46:17Z] ++ command -v chgrp                          
INFO[2022-11-11T11:46:17Z] ++ tr -d '\n'                                
INFO[2022-11-11T11:46:17Z] + chgrp_path=/usr/bin/chgrp                  
INFO[2022-11-11T11:46:17Z] + sudo sed -i -E 's#(ExecStart=/usr/local/bin/containerd)#\1\nExecStartPost=/usr/bin/chgrp containerd /run/containerd/containerd.sock#g' /etc/systemd/system/containerd.service 
INFO[2022-11-11T11:46:17Z] + sudo mkdir -p /etc/containerd              
INFO[2022-11-11T11:46:17Z] + containerd config default                  
INFO[2022-11-11T11:46:17Z] + sudo tee /etc/containerd/config.toml       
INFO[2022-11-11T11:46:17Z] + sudo systemctl enable --now containerd     
INFO[2022-11-11T11:46:17Z] Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service. 
INFO[2022-11-11T11:46:17Z] + install_cni
+ _check_version /opt/cni/bin/bridge --version v1.1.1
+ set +o pipefail 
INFO[2022-11-11T11:46:17Z] + local exec_name=/opt/cni/bin/bridge
+ local exec_version_cmd=--version
+ local version=v1.1.1
+ command -v /opt/cni/bin/bridge 
INFO[2022-11-11T11:46:17Z] + return 1                                   
INFO[2022-11-11T11:46:17Z] + mkdir -p /opt/cni/bin                      
INFO[2022-11-11T11:46:17Z] + local f=https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
+ _is_arm_arch 
INFO[2022-11-11T11:46:17Z] + uname -m                                   
INFO[2022-11-11T11:46:17Z] + grep aarch64                               
INFO[2022-11-11T11:46:17Z] + return 1                                   
INFO[2022-11-11T11:46:17Z] + curl -sfSL https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz 
INFO[2022-11-11T11:46:17Z] + tar -C /opt/cni/bin -xz                    
INFO[2022-11-11T11:46:19Z] + install_cni_patches
+ _is_arm_arch         
INFO[2022-11-11T11:46:19Z] + uname -m                                   
INFO[2022-11-11T11:46:19Z] + grep aarch64                               
INFO[2022-11-11T11:46:19Z] + return 1
+ curl -o host-local-rev -sfSL https://github.com/innobead/kubefire/releases/download/v0.3.8/host-local-rev-linux-amd64 
INFO[2022-11-11T11:46:19Z] + chmod +x host-local-rev                    
INFO[2022-11-11T11:46:19Z] + sudo mv host-local-rev /opt/cni/bin/       
INFO[2022-11-11T11:46:19Z] + install_ignite
+ _check_version /usr/local/bin/ignite version v0.10.0
+ set +o pipefail 
INFO[2022-11-11T11:46:19Z] + local exec_name=/usr/local/bin/ignite
+ local exec_version_cmd=version
+ local version=v0.10.0
+ command -v /usr/local/bin/ignite
+ return 1 
INFO[2022-11-11T11:46:19Z] + for binary in ignite ignited
+ echo 'Installing ignite...' 
INFO[2022-11-11T11:46:19Z] Installing ignite...                         
INFO[2022-11-11T11:46:19Z] + local f=https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignite-amd64
+ _is_arm_arch 
INFO[2022-11-11T11:46:19Z] + uname -m                                   
INFO[2022-11-11T11:46:19Z] + grep aarch64                               
INFO[2022-11-11T11:46:19Z] + return 1
+ curl -sfSLo ignite https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignite-amd64 
INFO[2022-11-11T11:46:20Z] + chmod +x ignite                            
INFO[2022-11-11T11:46:20Z] + sudo mv ignite /usr/local/bin              
INFO[2022-11-11T11:46:20Z] + for binary in ignite ignited
+ echo 'Installing ignited...'
Installing ignited...
+ local f=https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignited-amd64 
INFO[2022-11-11T11:46:20Z] + _is_arm_arch                               
INFO[2022-11-11T11:46:20Z] + grep aarch64
+ uname -m                    
INFO[2022-11-11T11:46:20Z] + return 1
+ curl -sfSLo ignited https://github.com/weaveworks/ignite/releases/download/v0.10.0/ignited-amd64 
INFO[2022-11-11T11:46:21Z] + chmod +x ignited                           
INFO[2022-11-11T11:46:21Z] + sudo mv ignited /usr/local/bin             
INFO[2022-11-11T11:46:21Z] + check_ignite
+ ignite version              
INFO[2022-11-11T11:46:21Z] Ignite version: version.Info{Major:"0", Minor:"10", GitVersion:"v0.10.0", GitCommit:"4540abeb9ba6daba32a72ef2b799095c71ebacb0", GitTreeState:"clean", BuildDate:"2021-07-19T20:52:59Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"linux/amd64", SandboxImage:version.Image{Name:"weaveworks/ignite", Tag:"v0.10.0", Delimeter:":"}, KernelImage:version.Image{Name:"weaveworks/ignite-kernel", Tag:"5.10.51", Delimeter:":"}} 
INFO[2022-11-11T11:46:21Z] Firecracker version: v0.22.4                 
INFO[2022-11-11T11:46:21Z] + create_cni_default_config                  
INFO[2022-11-11T11:46:21Z] + mkdir -p /etc/cni/net.d/                   
INFO[2022-11-11T11:46:21Z] + sudo cat                                   
INFO[2022-11-11T11:46:21Z] + popd
/root
+ cleanup                       
INFO[2022-11-11T11:46:21Z] + rm -rf /tmp/kubefire   
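At this point runc, containerd, the CNI plugins and Ignite have all been installed on the host, which can be quickly verified with the binaries the script just put in place:

root@kubefire:~# runc --version
root@kubefire:~# containerd --version
root@kubefire:~# ignite version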

Time to launch a first Kubernetes cluster, with RKE2:

RKE2 - Rancher's Next Generation Kubernetes Distribution

root@kubefire:~# kubefire cluster create rke2-cluster --bootstrapper=rke2 --worker-count 3

I grab the kubeconfig generated for the cluster and install kubectl:

root@kubefire:~# mkdir .kube
root@kubefire:~# cp /root/.kubefire/clusters/rke2-cluster/admin.conf .kube/config
root@kubefire:~# snap install kubectl --classic
kubectl 1.25.3 from Canonical✓ installed
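Equivalently, instead of copying the file, KUBECONFIG can simply point at the kubeconfig KubeFire generated (same path as above):

root@kubefire:~# export KUBECONFIG=/root/.kubefire/clusters/rke2-cluster/admin.conf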

I can now reach the RKE2 cluster running in these local Ignite microVMs:

root@kubefire:~# kubectl cluster-info

Kubernetes control plane is running at https://10.62.0.2:6443
CoreDNS is running at https://10.62.0.2:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@kubefire:~# kubectl get nodes -o wide

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rke2-cluster-master-1 NotReady control-plane,etcd,master 3m11s v1.25.3+rke2r1 10.62.0.2 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
rke2-cluster-worker-1 Ready <none> 2m21s v1.25.3+rke2r1 10.62.0.3 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
rke2-cluster-worker-2 Ready <none> 116s v1.25.3+rke2r1 10.62.0.4 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
rke2-cluster-worker-3 Ready <none> 2m44s v1.25.3+rke2r1 10.62.0.5 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1

root@kubefire:~# ignite ps

VM ID IMAGE KERNEL SIZE CPUS MEMORY CREATED STATUS IPS PORTS NAME
b353bc11f1a05796 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 7m51s ago Up 7m51s 10.62.0.5 rke2-cluster-worker-3
e953c33ff09f08c2 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 7m52s ago Up 7m52s 10.62.0.3 rke2-cluster-worker-1
ee08a646eae9136a ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 7m55s ago Up 7m55s 10.62.0.2 rke2-cluster-master-1
f009f46da5e6e98d ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 7m51s ago Up 7m51s 10.62.0.4 rke2-cluster-worker-2
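Ignite also provides a built-in SSH helper to get a shell inside one of these microVMs; since KubeFire provisions its nodes over SSH, this should work out of the box (VM name taken from the listing above):

root@kubefire:~# ignite ssh rke2-cluster-master-1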

And it is indeed an openSUSE Leap 15.2 image that was used here:

root@kubefire:~# ignite images

IMAGE ID NAME CREATED SIZE
4adfa4fff68ca7a8 ghcr.io/innobead/kubefire-opensuse-leap:15.2 9m25s 315.9 MB

root@kubefire:~# ignite kernels

KERNEL ID NAME CREATED SIZE VERSION
a58f33b249d35c11 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 9m29s 51.0 MB 4.19.125

Deleting the RKE2 cluster created earlier is just as simple:

root@kubefire:~# kubefire cluster list
NAME BOOTSTRAPPER IMAGE KERNELIMAGE                                            
rke2-cluster rke2 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64

root@kubefire:~# kubefire cluster delete rke2-cluster

INFO[2022-11-11T12:01:07Z] deleting cluster cluster=rke2-cluster force=false
INFO[2022-11-11T12:01:07Z] deleting master nodes cluster=rke2-cluster
INFO[2022-11-11T12:01:07Z] deleting node node=rke2-cluster-master-1
INFO[2022-11-11T12:01:07Z] time="2022-11-11T12:01:07Z" level=info msg="Removing the container with ID \"ignite-ee08a646eae9136a\" from the \"cni\" network" 
INFO[2022-11-11T12:01:08Z] time="2022-11-11T12:01:08Z" level=info msg="Removed VM with name \"rke2-cluster-master-1\" and ID \"ee08a646eae9136a\"" 
INFO[2022-11-11T12:01:08Z] deleting worker nodes cluster=rke2-cluster
INFO[2022-11-11T12:01:08Z] deleting node node=rke2-cluster-worker-1
INFO[2022-11-11T12:01:09Z] time="2022-11-11T12:01:09Z" level=info msg="Removing the container with ID \"ignite-e953c33ff09f08c2\" from the \"cni\" network" 
INFO[2022-11-11T12:01:10Z] time="2022-11-11T12:01:10Z" level=info msg="Removed VM with name \"rke2-cluster-worker-1\" and ID \"e953c33ff09f08c2\"" 
INFO[2022-11-11T12:01:10Z] deleting node node=rke2-cluster-worker-2
INFO[2022-11-11T12:01:10Z] time="2022-11-11T12:01:10Z" level=info msg="Removing the container with ID \"ignite-f009f46da5e6e98d\" from the \"cni\" network" 
INFO[2022-11-11T12:01:11Z] time="2022-11-11T12:01:11Z" level=info msg="Removed VM with name \"rke2-cluster-worker-2\" and ID \"f009f46da5e6e98d\"" 
INFO[2022-11-11T12:01:11Z] deleting node node=rke2-cluster-worker-3
INFO[2022-11-11T12:01:12Z] time="2022-11-11T12:01:12Z" level=info msg="Removing the container with ID \"ignite-b353bc11f1a05796\" from the \"cni\" network" 
INFO[2022-11-11T12:01:13Z] time="2022-11-11T12:01:13Z" level=info msg="Removed VM with name \"rke2-cluster-worker-3\" and ID \"b353bc11f1a05796\"" 
INFO[2022-11-11T12:01:13Z] deleting cluster configurations cluster=rke2-cluster

Launching another cluster, this time with k0s:

k0s | Kubernetes distribution for bare-metal, on-prem, edge, IoT

root@kubefire:~# kubefire cluster create k0s-cluster --bootstrapper=k0s --worker-count 3

The k0s cluster is deployed and reachable:

root@kubefire:~# ignite ps
VM ID IMAGE KERNEL SIZE CPUS MEMORY CREATED STATUS IPS PORTS NAME
3496c4dd65782f93 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 2m16s ago Up 2m16s 10.62.0.12 k0s-cluster-worker-2
4d09c8924a1fafaa ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 2m16s ago Up 2m16s 10.62.0.13 k0s-cluster-worker-3
b5fc5345aebea1f3 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 2m20s ago Up 2m19s 10.62.0.10 k0s-cluster-master-1
eec1af3d9971e57c ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 2m17s ago Up 2m17s 10.62.0.11 k0s-cluster-worker-1
root@kubefire:~# kubefire cluster list
NAME BOOTSTRAPPER IMAGE KERNELIMAGE                                            
k0s-cluster k0s ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64

root@kubefire:~# ln -sf /root/.kubefire/clusters/k0s-cluster/admin.conf .kube/config

root@kubefire:~# kubectl cluster-info
Kubernetes control plane is running at https://10.62.0.10:6443
CoreDNS is running at https://10.62.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@kubefire:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
3496c4dd65782f93 Ready <none> 115s v1.25.3+k0s 10.62.0.12 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.9
4d09c8924a1fafaa Ready <none> 114s v1.25.3+k0s 10.62.0.13 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.9
b5fc5345aebea1f3 Ready control-plane 2m12s v1.25.3+k0s 10.62.0.10 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.9
eec1af3d9971e57c Ready <none> 112s v1.25.3+k0s 10.62.0.11 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.9

root@kubefire:~# kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5d5b5b96f9-msqkk 1/1 Running 0 118s
kube-system pod/coredns-5d5b5b96f9-qrkph 1/1 Running 0 2m21s
kube-system pod/konnectivity-agent-57wvf 1/1 Running 0 118s
kube-system pod/konnectivity-agent-8cxj7 1/1 Running 0 2m1s
kube-system pod/konnectivity-agent-jzmr5 1/1 Running 0 2m18s
kube-system pod/konnectivity-agent-nkzwg 1/1 Running 0 119s
kube-system pod/kube-proxy-8nxhk 1/1 Running 0 2m18s
kube-system pod/kube-proxy-kxdbg 1/1 Running 0 2m1s
kube-system pod/kube-proxy-l975d 1/1 Running 0 118s
kube-system pod/kube-proxy-qc4rx 1/1 Running 0 119s
kube-system pod/kube-router-9xvz9 1/1 Running 0 117s
kube-system pod/kube-router-ffm92 1/1 Running 0 2m1s
kube-system pod/kube-router-prjkj 1/1 Running 0 2m18s
kube-system pod/kube-router-vsx4w 1/1 Running 0 119s
kube-system pod/metrics-server-7c555fc57f-2ml86 1/1 Running 0 2m19s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m46s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2m31s
kube-system service/metrics-server ClusterIP 10.109.139.30 <none> 443/TCP 2m20s

Test deployment of the perennial FC (FranceConnect) demo application:

root@kubefire:~# cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fcdemo3
  labels:
    app: fcdemo3
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fcdemo3
  template:
    metadata:
      labels:
        app: fcdemo3
    spec:
      containers:
      - name: fcdemo3
        image: mcas/franceconnect-demo2:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fcdemo-service
spec:
  type: NodePort
  selector:
    app: fcdemo3
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
EOF
deployment.apps/fcdemo3 created
service/fcdemo-service created

root@kubefire:~# kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/fcdemo3-6bff77544b-52nsm 1/1 Running 0 31s
default pod/fcdemo3-6bff77544b-bbxrp 1/1 Running 0 31s
default pod/fcdemo3-6bff77544b-kctfg 1/1 Running 0 31s
default pod/fcdemo3-6bff77544b-pcgjc 1/1 Running 0 31s
kube-system pod/coredns-5d5b5b96f9-msqkk 1/1 Running 0 49m
kube-system pod/coredns-5d5b5b96f9-qrkph 1/1 Running 0 50m
kube-system pod/konnectivity-agent-57wvf 1/1 Running 0 49m
kube-system pod/konnectivity-agent-8cxj7 1/1 Running 0 49m
kube-system pod/konnectivity-agent-jzmr5 1/1 Running 0 49m
kube-system pod/konnectivity-agent-nkzwg 1/1 Running 0 49m
kube-system pod/kube-proxy-8nxhk 1/1 Running 0 49m
kube-system pod/kube-proxy-kxdbg 1/1 Running 0 49m
kube-system pod/kube-proxy-l975d 1/1 Running 0 49m
kube-system pod/kube-proxy-qc4rx 1/1 Running 0 49m
kube-system pod/kube-router-9xvz9 1/1 Running 0 49m
kube-system pod/kube-router-ffm92 1/1 Running 0 49m
kube-system pod/kube-router-prjkj 1/1 Running 0 49m
kube-system pod/kube-router-vsx4w 1/1 Running 0 49m
kube-system pod/metrics-server-7c555fc57f-2ml86 1/1 Running 0 49m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/fcdemo-service NodePort 10.104.7.85 <none> 80:32724/TCP 31s
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 50m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 50m
kube-system service/metrics-server ClusterIP 10.109.139.30 <none> 443/TCP 49m

root@kubefire:~# curl http://10.62.0.10:32724
<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css" integrity="sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q=" crossorigin="anonymous" />
    <title>Démonstrateur Fournisseur de Service</title>
</head>

<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
    <div class="navbar-start">
        <div class="navbar-brand">
            <a class="navbar-item" href="/">
                <img src="/img/fc_logo_v2.png" alt="Démonstrateur Fournisseur de Service" height="28">
            </a>
        </div>
        <a href="/" class="navbar-item">
            Home
        </a>
    </div>
    <div class="navbar-end">
        <div class="navbar-item">

                <div class="buttons">
                    <a class="button is-light" href="/login">Se connecter</a>
                </div>

        </div>
    </div>
</nav>

<section class="hero is-info is-medium">
    <div class="hero-body">
        <div class="container">
            <h1 class="title">
                Bienvenue sur le démonstrateur de fournisseur de service
            </h1>
            <h2 class="subtitle">
                Cliquez sur "se connecter" pour vous connecter via <strong>FranceConnect</strong>
            </h2>
        </div>
    </div>
</section>

<section class="section is-small">
    <div class="container">
        <h1 class="title">Récupérer vos données via FranceConnect</h1>

        <p>Pour récupérer vos données via <strong>FranceConnect</strong> cliquez sur le bouton ci-dessous</p>
    </div>
</section>
<section class="section is-small">
    <div class="container has-text-centered">
        <!-- FC btn -->
        <a href="/data" class="button is-link">Récupérer mes données via FranceConnect</a>
    </div>
</section>
<footer class="footer custom-content">
    <div class="content has-text-centered">
        <p>
            <a href="https://partenaires.franceconnect.gouv.fr/fcp/fournisseur-service"
               target="_blank"
               alt="lien vers la documentation France Connect">
                <strong>Documentation FranceConnect Partenaires</strong>
            </a>
        </p>
    </div>
</footer>
<!-- This script brings the FranceConnect tools modal which enable "disconnect", "see connection history" and "see FC FAQ" features -->
<script src="https://fcp.integ01.dev-franceconnect.fr/js/franceconnect.js"></script>

</body>
</html>

I expose it directly with ngrok through a plain SSH remote forward (-R), with no local ngrok installation:

Overview | ngrok documentation

root@kubefire:~# ssh -R 80:10.62.0.10:32724 tunnel.us.ngrok.com http

ngrok (via SSH) (Ctrl+C to quit)

Account (Plan: Free)
Region us
Forwarding http://8846-143-198-10-54.ngrok.io

GET http://8846-143-198-10-54.ngrok.io/
GET http://8846-143-198-10-54.ngrok.io/img/fc_logo_v2.png
GET http://8846-143-198-10-54.ngrok.io/
GET http://8846-143-198-10-54.ngrok.io/img/fc_logo_v2.png

One last test with the well-known k3s:

K3s

root@kubefire:~# kubefire cluster create k3s-cluster --bootstrapper=k3s --worker-count 3

The k3s cluster is up and running:

root@kubefire:~# ln -sf /root/.kubefire/clusters/k3s-cluster/admin.conf .kube/config
root@kubefire:~# kubectl cluster-info
Kubernetes control plane is running at https://10.62.0.14:6443
CoreDNS is running at https://10.62.0.14:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://10.62.0.14:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@kubefire:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s-cluster-master-1 Ready control-plane,etcd,master 112s v1.25.3+k3s1 10.62.0.14 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
k3s-cluster-worker-1 Ready <none> 100s v1.25.3+k3s1 10.62.0.15 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
k3s-cluster-worker-2 Ready <none> 86s v1.25.3+k3s1 10.62.0.16 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
k3s-cluster-worker-3 Ready <none> 71s v1.25.3+k3s1 10.62.0.17 <none> openSUSE Leap 15.2 4.19.125 containerd://1.6.8-k3s1
root@kubefire:~# kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-75fc8f8fff-l9kd2 1/1 Running 0 104s
kube-system pod/helm-install-traefik-crd-8m4m9 0/1 Completed 0 105s
kube-system pod/helm-install-traefik-tlz8m 0/1 Completed 2 105s
kube-system pod/local-path-provisioner-5b5579c644-g5xdg 1/1 Running 0 104s
kube-system pod/metrics-server-5c8978b444-v8hhn 1/1 Running 0 104s
kube-system pod/svclb-traefik-8c1e12b3-5wcbq 2/2 Running 0 69s
kube-system pod/svclb-traefik-8c1e12b3-8j4k5 2/2 Running 0 68s
kube-system pod/svclb-traefik-8c1e12b3-qk9w6 2/2 Running 0 69s
kube-system pod/svclb-traefik-8c1e12b3-xw97j 2/2 Running 0 69s
kube-system pod/traefik-9c6dc6686-jgkkt 1/1 Running 0 69s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 117s
kube-system service/metrics-server ClusterIP 10.43.184.146 <none> 443/TCP 116s
kube-system service/traefik LoadBalancer 10.43.128.113 10.62.0.14,10.62.0.15,10.62.0.16,10.62.0.17 80:30115/TCP,443:32705/TCP 69s

root@kubefire:~# ignite ps
VM ID IMAGE KERNEL SIZE CPUS MEMORY CREATED STATUS IPS PORTS NAME
4cd852fea4599df0 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 3m39s ago Up 3m39s 10.62.0.15 k3s-cluster-worker-1
500fc05240b97ba7 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 3m38s ago Up 3m38s 10.62.0.16 k3s-cluster-worker-2
b0d53184fcc81421 ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 3m42s ago Up 3m42s 10.62.0.14 k3s-cluster-master-1
e5dfb68b740f0fab ghcr.io/innobead/kubefire-opensuse-leap:15.2 ghcr.io/innobead/kubefire-ignite-kernel:4.19.125-amd64 10.0 GB 2 2.0 GB 3m38s ago Up 3m38s 10.62.0.17 k3s-cluster-worker-3

Same deployment as before, but this time going through the Ingress controller (Traefik) shipped with the cluster:

root@kubefire:~# cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fcdemo3
  labels:
    app: fcdemo3
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fcdemo3
  template:
    metadata:
      labels:
        app: fcdemo3
    spec:
      containers:
      - name: fcdemo3
        image: mcas/franceconnect-demo2:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fcdemo-service
spec:
  type: ClusterIP
  selector:
    app: fcdemo3
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fcdemo-ingress
spec:
  defaultBackend:
    service:
      name: fcdemo-service
      port:
        number: 80
EOF
deployment.apps/fcdemo3 created
service/fcdemo-service created
ingress.networking.k8s.io/fcdemo-ingress created

root@kubefire:~# kubectl get po,svc,ing -A

NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/fcdemo3-6bff77544b-69b96 1/1 Running 0 12s
default pod/fcdemo3-6bff77544b-9bxtm 1/1 Running 0 12s
default pod/fcdemo3-6bff77544b-mr4mc 1/1 Running 0 12s
default pod/fcdemo3-6bff77544b-ndd5b 1/1 Running 0 12s
kube-system pod/coredns-75fc8f8fff-l9kd2 1/1 Running 0 5m42s
kube-system pod/helm-install-traefik-crd-8m4m9 0/1 Completed 0 5m43s
kube-system pod/helm-install-traefik-tlz8m 0/1 Completed 2 5m43s
kube-system pod/local-path-provisioner-5b5579c644-g5xdg 1/1 Running 0 5m42s
kube-system pod/metrics-server-5c8978b444-v8hhn 1/1 Running 0 5m42s
kube-system pod/svclb-traefik-8c1e12b3-5wcbq 2/2 Running 0 5m7s
kube-system pod/svclb-traefik-8c1e12b3-8j4k5 2/2 Running 0 5m6s
kube-system pod/svclb-traefik-8c1e12b3-qk9w6 2/2 Running 0 5m7s
kube-system pod/svclb-traefik-8c1e12b3-xw97j 2/2 Running 0 5m7s
kube-system pod/traefik-9c6dc6686-jgkkt 1/1 Running 0 5m7s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/fcdemo-service ClusterIP 10.43.122.124 <none> 80/TCP 12s
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5m58s
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5m55s
kube-system service/metrics-server ClusterIP 10.43.184.146 <none> 443/TCP 5m54s
kube-system service/traefik LoadBalancer 10.43.128.113 10.62.0.14,10.62.0.15,10.62.0.16,10.62.0.17 80:30115/TCP,443:32705/TCP 5m7s

NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress.networking.k8s.io/fcdemo-ingress <none> * 80 12s
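A quick local check against any of the Traefik endpoints listed above should return the same demo page as the NodePort test earlier (here the master's IP, plain HTTP on port 80):

root@kubefire:~# curl -s http://10.62.0.14/ | grep '<title>'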

To expose the FC demo publicly, I use Gobetween, a very simple L4 load balancer, running locally:

gobetween - modern & minimalistic L4 load balancer

root@kubefire:~# snap install gobetween --edge

gobetween (edge) 0.8.0+snapshot from Yaroslav Pogrebnyak (yyyar) installed

root@kubefire:~# cat gobetween.toml 
[servers.sample]
bind = "143.198.10.54:80"
protocol = "tcp" 
balance = "roundrobin"

max_connections = 10000
client_idle_timeout = "10m"
backend_idle_timeout = "10m"
backend_connection_timeout = "2s"

    [servers.sample.discovery]
    kind = "static"
    static_list = [
      "10.62.0.14:80 weight=1",
      "10.62.0.16:80 weight=1",
      "10.62.0.15:80 weight=1",
      "10.62.0.17:80 weight=1"
    ]

    [servers.sample.healthcheck]
    fails = 1                      
    passes = 1
    interval = "2s"   
    timeout="1s"             
    kind = "ping"
    ping_timeout_duration = "500ms"

root@kubefire:~# /snap/gobetween/57/bin/gobetween from-file gobetween.toml

gobetween v0.8.0+snapshot
2022-11-11 13:51:20 [INFO] (manager): Initializing...
2022-11-11 13:51:20 [INFO] (server): Creating 'sample': 143.198.10.54:80 roundrobin static ping
2022-11-11 13:51:20 [INFO] (scheduler): Starting scheduler sample
2022-11-11 13:51:20 [INFO] (manager): Initialized
2022-11-11 13:51:20 [INFO] (metrics): Metrics disabled
2022-11-11 13:51:20 [INFO] (api): API disabled

The FC demo is then served publicly over plain HTTP on port 80.
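
A quick check from outside, against the droplet's public IP used in the bind address above (assuming port 80 is open on the droplet):

$ curl -s http://143.198.10.54/ | grep '<title>'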

In these tests, KubeFire did not cause any significant resource consumption, although that obviously depends on the configuration used when deploying a Kubernetes cluster and on the associated distribution.

An interesting open source project: it greatly simplifies the deployment of Weave Ignite (which can be appealing in a GitOps scenario) as well as of the various Kubernetes distributions, all through a few simple command lines …

To be continued!
