
Karim

Originally published at deep75.Medium

Kubero: a Heroku alternative for Kubernetes…

Kubero brings the developer convenience of Heroku to your Kubernetes cluster. Your developers no longer have to worry about the underlying infrastructure or about deployment: applications can be deployed in just a few clicks.

It also provides a dashboard and a CLI to manage your applications…

I start by creating an Ubuntu 22.04 LTS instance on Linode:

with Docker and kind installed locally:

root@localhost:~# curl -fsSL https://get.docker.com | sh -
root@localhost:~# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
root@localhost:~# chmod +x ./kind
root@localhost:~# mv ./kind /usr/local/bin/kind
root@localhost:~# kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output
      --version           version for kind

Use "kind [command] --help" for more information about a command.

Fetching the Kubero repository from GitHub, along with the configuration file for kind:

root@localhost:~# git clone https://github.com/kubero-dev/kubero
root@localhost:~# cd kubero && cat kind.yaml 

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubero
networking:
  ipFamily: ipv4
  apiServerAddress: 127.0.0.1
nodes:
- role: control-plane
  #image: kindest/node:v1.21.1
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

Launching a local cluster with this YAML file:

root@localhost:~/kubero# kind create cluster --config kind.yaml

Creating cluster "kubero" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kubero"
You can now use your cluster with:

kubectl cluster-info --context kind-kubero

Have a nice day! 👋

root@localhost:~/kubero# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl && chmod +x ./kubectl && mv kubectl /usr/local/bin/
root@localhost:~/kubero# kubectl cluster-info --context kind-kubero

Kubernetes control plane is running at https://127.0.0.1:33737
CoreDNS is running at https://127.0.0.1:33737/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@localhost:~/kubero# kubectl get po,svc -A
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE
kube-system          pod/coredns-565d847f94-97rms                       1/1     Running   0          61s
kube-system          pod/coredns-565d847f94-nkcfw                       1/1     Running   0          61s
kube-system          pod/etcd-kubero-control-plane                      1/1     Running   0          75s
kube-system          pod/kindnet-j9mh9                                  1/1     Running   0          61s
kube-system          pod/kube-apiserver-kubero-control-plane            1/1     Running   0          75s
kube-system          pod/kube-controller-manager-kubero-control-plane   1/1     Running   0          75s
kube-system          pod/kube-proxy-mf4kh                               1/1     Running   0          61s
kube-system          pod/kube-scheduler-kubero-control-plane            1/1     Running   0          75s
local-path-storage   pod/local-path-provisioner-684f458cdd-n96qr        1/1     Running   0          61s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  77s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   75s
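As a quick sanity check that the extraPortMappings from kind.yaml took effect, the kind node container (named after the cluster) should publish host ports 80 and 443:

```shell
# kind runs the control plane as a Docker container named <cluster>-control-plane;
# `docker port` lists the host ports it publishes.
docker port kubero-control-plane
```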

followed by the installation of the NGINX Ingress Controller:

root@localhost:~/kubero# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
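Before deploying anything that relies on ingress, the controller can be waited on until it reports Ready (this is the wait command from the upstream kind deployment notes):

```shell
# Block until the ingress-nginx controller pod is Ready, up to 90 seconds.
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```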

then OLM (Operator Lifecycle Manager):

root@localhost:~/kubero# kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml

customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created

root@localhost:~/kubero# kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml

namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
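Before installing an operator, it is worth waiting for OLM itself to be ready: its two deployments should finish rolling out, and the packageserver ClusterServiceVersion should reach the Succeeded phase:

```shell
# Wait for the OLM control-plane deployments to roll out,
# then check the packageserver CSV phase (short name: csv).
kubectl rollout status -n olm deployment/olm-operator
kubectl rollout status -n olm deployment/catalog-operator
kubectl get csv -n olm
```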

and the Kubero operator itself:

root@localhost:~/kubero# kubectl create -f https://operatorhub.io/install/kubero-operator.yaml

subscription.operators.coreos.com/my-kubero-operator created

root@localhost:~/kubero# kubectl get po,svc -A

NAMESPACE            NAME                                                                READY   STATUS      RESTARTS   AGE
ingress-nginx        pod/ingress-nginx-admission-create-ddsv6                            0/1     Completed   0          4m43s
ingress-nginx        pod/ingress-nginx-admission-patch-2plx2                             0/1     Completed   1          4m43s
ingress-nginx        pod/ingress-nginx-controller-6bccc5966-2j4sz                        1/1     Running     0          4m43s
kube-system          pod/coredns-565d847f94-97rms                                        1/1     Running     0          7m47s
kube-system          pod/coredns-565d847f94-nkcfw                                        1/1     Running     0          7m47s
kube-system          pod/etcd-kubero-control-plane                                       1/1     Running     0          8m1s
kube-system          pod/kindnet-j9mh9                                                   1/1     Running     0          7m47s
kube-system          pod/kube-apiserver-kubero-control-plane                             1/1     Running     0          8m1s
kube-system          pod/kube-controller-manager-kubero-control-plane                    1/1     Running     0          8m1s
kube-system          pod/kube-proxy-mf4kh                                                1/1     Running     0          7m47s
kube-system          pod/kube-scheduler-kubero-control-plane                             1/1     Running     0          8m1s
local-path-storage   pod/local-path-provisioner-684f458cdd-n96qr                         1/1     Running     0          7m47s
olm                  pod/18f72e0a5269b33ed112ac0b68bd1dd3f01dd1a0df1234d6e4e40a056chtqtd 0/1     Completed   0          67s
olm                  pod/catalog-operator-6b8c45596c-4j8sw                               1/1     Running     0          2m37s
olm                  pod/olm-operator-56cf65dbf9-qvvg2                                   1/1     Running     0          2m37s
olm                  pod/operatorhubio-catalog-vgq5r                                     1/1     Running     0          2m28s
olm                  pod/packageserver-6dcd7999ff-wdhjp                                  1/1     Running     0          2m29s
olm                  pod/packageserver-6dcd7999ff-xjfnc                                  1/1     Running     0          2m29s
operators            pod/kubero-operator-controller-manager-8684c79fc4-jzwlf             2/2     Running     0          50s

NAMESPACE       NAME                                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         service/kubernetes                                           ClusterIP   10.96.0.1       <none>        443/TCP                      8m3s
ingress-nginx   service/ingress-nginx-controller                             NodePort    10.96.67.115    <none>        80:30991/TCP,443:31267/TCP   4m43s
ingress-nginx   service/ingress-nginx-controller-admission                   ClusterIP   10.96.17.59     <none>        443/TCP                      4m43s
kube-system     service/kube-dns                                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       8m1s
olm             service/operatorhubio-catalog                                ClusterIP   10.96.192.97    <none>        50051/TCP                    2m28s
olm             service/packageserver-service                                ClusterIP   10.96.140.220   <none>        5443/TCP                     2m29s
operators       service/kubero-operator-controller-manager-metrics-service   ClusterIP   10.96.177.228   <none>        8443/TCP                     51s

Kubero's web UI can then be deployed:

root@localhost:~/kubero# kubectl create namespace kubero

namespace/kubero created

root@localhost:~/kubero# kubectl create secret generic kubero-secrets \
    --from-literal=KUBERO_WEBHOOK_SECRET=$(openssl rand -hex 20) \
    --from-literal=KUBERO_SESSION_KEY=$(openssl rand -hex 20) \
    --from-literal=GITHUB_PERSONAL_ACCESS_TOKEN=$GITHUB_TOKEN \
    -n kubero

secret/kubero-secrets created
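Note that `$GITHUB_TOKEN` must be exported beforehand (a GitHub personal access token, used by Kubero to interact with your repositories); the two other values are random. As a quick sanity check on the generation step, `openssl rand -hex 20` yields 20 random bytes encoded as 40 hex characters:

```shell
# Generate the two random secrets exactly as in the command above.
WEBHOOK_SECRET=$(openssl rand -hex 20)
SESSION_KEY=$(openssl rand -hex 20)

# 20 bytes, hex-encoded -> 40-character strings.
echo "lengths: ${#WEBHOOK_SECRET} ${#SESSION_KEY}"
```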

root@localhost:~/kubero# kubectl apply -f https://raw.githubusercontent.com/kubero-dev/kubero-operator/main/config/samples/application_v1alpha1_kubero.yaml -n kubero

kubero.application.kubero.dev/kubero created

root@localhost:~# kubectl get po,svc,ing -A

NAMESPACE            NAME                                                                READY   STATUS      RESTARTS   AGE
ingress-nginx        pod/ingress-nginx-admission-create-ddsv6                            0/1     Completed   0          21m
ingress-nginx        pod/ingress-nginx-admission-patch-2plx2                             0/1     Completed   1          21m
ingress-nginx        pod/ingress-nginx-controller-6bccc5966-2j4sz                        1/1     Running     0          21m
kube-system          pod/coredns-565d847f94-97rms                                        1/1     Running     0          24m
kube-system          pod/coredns-565d847f94-nkcfw                                        1/1     Running     0          24m
kube-system          pod/etcd-kubero-control-plane                                       1/1     Running     0          24m
kube-system          pod/kindnet-j9mh9                                                   1/1     Running     0          24m
kube-system          pod/kube-apiserver-kubero-control-plane                             1/1     Running     0          24m
kube-system          pod/kube-controller-manager-kubero-control-plane                    1/1     Running     0          24m
kube-system          pod/kube-proxy-mf4kh                                                1/1     Running     0          24m
kube-system          pod/kube-scheduler-kubero-control-plane                             1/1     Running     0          24m
kubero               pod/kubero-5fcf56f7b7-2lxrb                                         1/1     Running     0          12m
local-path-storage   pod/local-path-provisioner-684f458cdd-n96qr                         1/1     Running     0          24m
olm                  pod/18f72e0a5269b33ed112ac0b68bd1dd3f01dd1a0df1234d6e4e40a056chtqtd 0/1     Completed   0          17m
olm                  pod/catalog-operator-6b8c45596c-4j8sw                               1/1     Running     0          19m
olm                  pod/olm-operator-56cf65dbf9-qvvg2                                   1/1     Running     0          19m
olm                  pod/operatorhubio-catalog-vgq5r                                     1/1     Running     0          19m
olm                  pod/packageserver-6dcd7999ff-wdhjp                                  1/1     Running     0          19m
olm                  pod/packageserver-6dcd7999ff-xjfnc                                  1/1     Running     0          19m
operators            pod/kubero-operator-controller-manager-8684c79fc4-jzwlf             2/2     Running     0          17m

NAMESPACE       NAME                                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         service/kubernetes                                           ClusterIP   10.96.0.1       <none>        443/TCP                      24m
ingress-nginx   service/ingress-nginx-controller                             NodePort    10.96.67.115    <none>        80:30991/TCP,443:31267/TCP   21m
ingress-nginx   service/ingress-nginx-controller-admission                   ClusterIP   10.96.17.59     <none>        443/TCP                      21m
kube-system     service/kube-dns                                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       24m
kubero          service/kubero                                               ClusterIP   10.96.64.102    <none>        2000/TCP                     12m
olm             service/operatorhubio-catalog                                ClusterIP   10.96.192.97    <none>        50051/TCP                    19m
olm             service/packageserver-service                                ClusterIP   10.96.140.220   <none>        5443/TCP                     19m
operators       service/kubero-operator-controller-manager-metrics-service   ClusterIP   10.96.177.228   <none>        8443/TCP                     17m

NAMESPACE   NAME                               CLASS    HOSTS                  ADDRESS     PORTS   AGE
kubero      ingress.networking.k8s.io/kubero   <none>   kubero.lacolhost.com   localhost   80      12m

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or configuration files in a volume. Here is Kubero's default one, which can be edited:

root@localhost:~# kubectl edit configmap kubero-config -n kubero
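To review the current values without opening an editor, the same object can be dumped read-only:

```shell
# Read-only dump of the data that `kubectl edit` opens in $EDITOR.
kubectl get configmap kubero-config -n kubero -o yaml
```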

Kubero uses the buildpack images that can be found here:

buildpacks/packs at main · kubero-dev/buildpacks

These buildpacks have nothing in common with buildpacks.io (Cloud Native Buildpacks).

Your code will not be built into the running image; instead, it is mounted read-only into the running image…

Edit the hosts file on your workstation to reach the Kubero dashboard:

$ cat /etc/hosts

139.144.180.212 kubero.lacolhost.com
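A minimal idempotent sketch of that edit (the IP is this walkthrough's Linode address; since appending to /etc/hosts needs root, the sketch only prints the line to add when it is missing):

```shell
# Entry mapping the dashboard hostname to the Linode instance's public IP.
HOSTS_LINE="139.144.180.212 kubero.lacolhost.com"

# Only suggest the append when the host is not already present in /etc/hosts.
if ! grep -qF "kubero.lacolhost.com" /etc/hosts 2>/dev/null; then
  echo "add to /etc/hosts: $HOSTS_LINE"
fi
```

Once the entry is in place, `curl -I http://kubero.lacolhost.com` should reach the dashboard: the ingress created above routes that Host header to the kubero service on port 2000.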

Creating a pipeline via the web UI:

followed by the deployment of the corresponding application:

This can also be done from the command line, using YAML manifests.
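Under the hood, the UI and CLI drive the custom resources registered by the operator; the API group below is taken from the sample manifest applied earlier:

```shell
# List the custom resource kinds provided by the Kubero operator.
kubectl api-resources --api-group=application.kubero.dev
```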

Kubero is a PaaS for Kubernetes. No Helm or Kubernetes knowledge is required to get a new application up and running.

It is a Kubernetes-native solution, still under development, with no external dependencies. Kubero is essentially a Kubernetes operator with a web UI and a CLI. It has a very small footprint and is easy to install.

To be continued!
