Robert Waffen for betadots

Puppet and Kubernetes

In this article we explain how to use Puppet to deploy Kubernetes.

Puppet Module selection

There are several Puppet modules on the Puppet Forge that allow you to manage Kubernetes with Puppet.
Among them you will find the Puppetlabs Kubernetes module and the Voxpupuli k8s module.
All other modules have not received an update in several years, so we consider them unmaintained.

Sidenote: at CfgMgmtCamp 2023 the Puppet community asked the Puppet staff to please deprecate the puppetlabs-kubernetes module. That module requires an already running Kubernetes instance and uses a container to read configuration, which you can then use to configure Kubernetes. Sounds like a chicken-and-egg problem.

We highly recommend using the new, modern Voxpupuli module.

Puppet-K8S Module

The Puppet K8S module is able to install controller nodes and worker nodes. Both need the class k8s in their node classification.
All settings can be configured via Hiera. Most parameters are spread over three classes: the k8s main class and the k8s::server and k8s::node subclasses.
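
A minimal node classification could look like this in site.pp (just a sketch; any other classification mechanism, such as an ENC, works as well):

# site.pp -- both roles simply include the k8s class,
# the role and all further settings come from Hiera
node 'controller-0.example.com' {
  include k8s
}

node /^worker-\d+\.example\.com$/ {
  include k8s
}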

A simple setup requires the following parameters:

Kubernetes controller (apiserver, controller-manager and scheduler)

k8s::role: 'server'
k8s::master: 'https://controller-0.example.com:6443'

The setup of the etcd server instances can be controlled in two different ways:

  1. provide a static list of etcd server FQDNs
  2. use PuppetDB for etcd server discovery

k8s::puppetdb_discovery: true
# or
k8s::server::etcd_servers:
  - 'https://node1:2379'
  - 'https://node2:2379'
  - 'https://node3:2379'

If you don't have pre-existing TLS certificates, you can use the generate_ca features, which auto-generate all certificates needed for a cluster. On a single controller this is all you need. If you have a clustered control plane, you have to transfer the generated certificates from /etc/kubernetes/certs and /var/lib/etcd/certs on the first controller to the other controllers yourself; certificate distribution in a clustered setup is not part of the module yet (a possible manual approach is sketched below).

k8s::server::etcd::generate_ca: true
k8s::server::generate_ca: true
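
For a clustered control plane, one possible manual approach is to copy the generated certificates from the first controller to the additional controllers, for example with rsync over SSH (controller-1.example.com is a placeholder here):

# run on the first controller; assumes root SSH access to the other controller
rsync -a /etc/kubernetes/certs/ controller-1.example.com:/etc/kubernetes/certs/
rsync -a /var/lib/etcd/certs/   controller-1.example.com:/var/lib/etcd/certs/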

Kubernetes worker (kubelet)

k8s::role: 'node'
k8s::master: 'https://controller-0.example.com:6443'

Example for containerd as CRI and bridge networking as CNI

Controller

k8s::role: 'server'
k8s::master: 'https://controller-0.example.com:6443' # default
k8s::container_manager: 'containerd'

k8s::manage_firewall: true    # default: false
k8s::puppetdb_discovery: true # default: false

k8s::server::node_on_server: false # don't use controller as worker
k8s::server::etcd::generate_ca: true
k8s::server::generate_ca: true

# bind the apiserver to an interface the worker and controller can communicate over
k8s::server::apiserver::advertise_address: "%{facts.networking.interfaces.enp0s8.ip}"

# flannel networking is default in the module
# but we want to showcase bridged networking here
k8s::server::resources::manage_flannel: false

k8s::service_cluster_cidr: '10.20.0.0/20' # overlay network for cluster services
k8s::cluster_cidr: '10.20.16.0/20'        # overlay network for the pods in the cluster

Worker

k8s::role: 'node'
k8s::master: 'https://controller-0.example.com:6443' # default
k8s::container_manager: 'containerd'

k8s::manage_firewall: true
k8s::puppetdb_discovery: true

# the same as in k8s::server::resources::bootstrap::secret but prefixed with "puppet."
k8s::node::node_token: "puppet.%{lookup('k8s::server::resources::bootstrap::secret')}"

# for debugging (see the crictl example below)
k8s::node::manage_crictl: true
k8s::install::crictl::config:
  'runtime-endpoint': 'unix:///run/containerd/containerd.sock'
  'image-endpoint': 'unix:///run/containerd/containerd.sock'

k8s::service_cluster_cidr: '10.20.0.0/20' # overlay network for cluster services
k8s::cluster_cidr: '10.20.16.0/20'        # overlay network for the pods in the cluster
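
With crictl configured like this, you can inspect the container runtime directly on the worker, for example:

# list pod sandboxes and all containers known to containerd
crictl pods
crictl ps -a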

Shared data

lookup_options:
  k8s::server::resources::bootstrap::secret:
    convert_to: Sensitive

# Sensitive[Pattern[/^[a-z0-9]{16}$/]]
k8s::server::resources::bootstrap::secret: 'a23456789bcdefgh'
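
The secret has to match the pattern above. One way to generate a suitable random token is, for example:

# generate a random 16-character token consisting of a-z and 0-9
tr -dc 'a-z0-9' < /dev/urandom | head -c 16; echo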

Example for containerd as CRI and Cilium as CNI

To get an initial setup working we first need kube-proxy.
After that we install Cilium, which completely replaces kube-proxy.
Once Cilium is installed we can remove kube-proxy.

Controller

k8s::role: 'server'
k8s::master: 'https://controller-0.example.com:6443' # default
k8s::container_manager: 'containerd' # default: crio

k8s::manage_firewall: true    # default: false
k8s::puppetdb_discovery: true # default: false

k8s::server::node_on_server: false # don't use controller as worker
k8s::server::etcd::generate_ca: true
k8s::server::generate_ca: true


# bind the apiserver to an interface the worker and controller can communicate over
k8s::server::apiserver::advertise_address: "%{facts.networking.interfaces.enp0s8.ip}"

# we want to showcase cilium here
k8s::server::resources::manage_flannel: false

Worker

k8s::role: 'node'
k8s::master: 'https://controller-0.example.com:6443'
k8s::container_manager: 'containerd'


k8s::manage_firewall: true
k8s::puppetdb_discovery: true

# the same as in k8s::server::resources::bootstrap::secret but prefixed with "puppet."
k8s::node::node_token: "puppet.%{lookup('k8s::server::resources::bootstrap::secret')}"

# for debugging
k8s::node::manage_crictl: true
k8s::install::crictl::config:
  'runtime-endpoint': 'unix:///run/containerd/containerd.sock'
  'image-endpoint': 'unix:///run/containerd/containerd.sock'

k8s::service_cluster_cidr: '10.20.0.0/20' # overlay network for cluster services
k8s::cluster_cidr: '10.20.16.0/20'        # overlay network for the pods in the cluster

Shared data

lookup_options:
  k8s::server::resources::bootstrap::secret:
    convert_to: Sensitive

# Sensitive[Pattern[/^[a-z0-9]{16}$/]]
k8s::server::resources::bootstrap::secret: 'a23456789bcdefgh'

Once this setup is deployed, we can install Cilium.

Initialize Cilium

⚠ī¸ All steps here are done on one of the controllers.

Download the cilium CLI binary. This is not covered by the module yet. Most likely you will also need some configuration for Cilium, so create a cilium-values.yaml.

Taken from the Cilium quick installation guide:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
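
A quick sanity check confirms that the CLI is installed:

# prints the installed cilium-cli version
cilium version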

The cilium-values.yaml

---
k8sServiceHost: controller-0.example.com
k8sServicePort: 6443
autoDirectNodeRoutes: true
rollOutCiliumPods: true
kubeProxyReplacement: strict
tunnel: disabled
ipv4NativeRoutingCIDR: 10.20.16.0/20 # overlay network for the pods in the cluster
priorityClassName: system-cluster-critical
ipam:
  mode: kubernetes
nodePort:
  enabled: true
  directRoutingDevice: ens192
bpf:
  clockProbe: true
  masquerade: true
  tproxy: true
loadBalancer:
  mode: hybrid
  algorithm: maglev
  hashSeed: uWul3Twb7mKCmNSN
hubble:
  relay:
    enabled: true
    rollOutPods: true
  ui:
    enabled: true
    rollOutPods: true
operator:
  rollOutPods: true
  prometheus:
    enabled: true
hostPort:
  enabled: true
ipv4:
  enabled: true
ipv6:
  enabled: true
socketLB:
  enabled: true
prometheus:
  enabled: true

Before running the cilium install, check that all worker nodes are connected to the cluster. They may be in a NotReady state, but that is okay for now.

# kubectl get nodes

NAME                   STATUS     ROLES    AGE   VERSION
worker-1.example.com   NotReady   <none>   83s   v1.26.4
worker-2.example.com   NotReady   <none>   83s   v1.26.4

Installing Cilium with the values:

cilium install --version v1.13.2 --helm-values /path/to/cilium-values.yaml
ℹī¸  Using Cilium version 1.13.2
🔮 Auto-detected cluster name: default
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has not been installed
ℹī¸  Cilium will fully replace all functionalities of kube-proxy
ℹī¸  helm template --namespace kube-system cilium cilium/cilium --version 1.13.2 --set autoDirectNodeRoutes=true,bpf.clockProbe=true,bpf.masquerade=true,bpf.tproxy=true,cluster.id=0,cluster.name=default,encryption.nodeEncryption=false,hostPort.enabled=true,hubble.relay.enabled=true,hubble.relay.rollOutPods=true,hubble.ui.enabled=true,hubble.ui.rollOutPods=true,ipam.mode=kubernetes,ipv4.enabled=true,ipv4NativeRoutingCIDR=10.20.16.0/20,ipv6.enabled=true,k8sServiceHost=localhost,k8sServicePort=6443,kubeProxyReplacement=strict,loadBalancer.algorithm=maglev,loadBalancer.hashSeed=uWul3Twb7mKCmNSN,loadBalancer.mode=hybrid,nodePort.directRoutingDevice=enp0s8,nodePort.enabled=true,operator.prometheus.enabled=true,operator.replicas=1,operator.rollOutPods=true,priorityClassName=system-cluster-critical,prometheus.enabled=true,rollOutCiliumPods=true,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,socketLB.enabled=true,tunnel=disabled
ℹī¸  Storing helm values file in kube-system/cilium-cli-helm-values Secret
🔑 Created CA in secret cilium-ca
🔑 Generating certificates for Hubble...
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap for Cilium version 1.13.2...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed and ready...
✅ Cilium was successfully installed! Run 'cilium status' to view installation health

Now all worker nodes should be in a Ready state.

kubectl get nodes

NAME                   STATUS   ROLES    AGE   VERSION
worker-1.example.com   Ready    <none>   5m    v1.26.4
worker-2.example.com   Ready    <none>   5m    v1.26.4
cilium status

    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble Relay:    disabled
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     1/1 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.13.2@sha256:a1982c0a22297aaac3563e428c330e17668305a41865a842dec53d241c5490ab: 1
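
Optionally, the Cilium CLI ships a connectivity test that deploys test workloads into a dedicated namespace and verifies pod-to-pod and service traffic; it can take a few minutes to complete:

# run Cilium's built-in end-to-end connectivity checks
cilium connectivity test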

After successfully installing Cilium we no longer need kube-proxy and can disable it. To do so, set the following key and value on the controller:

k8s::server::resources::kube_proxy::ensure: absent
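
On the next Puppet run this removes kube-proxy from the cluster. Assuming the DaemonSet is named kube-proxy in the kube-system namespace, you can verify the removal with:

# should report "NotFound" once kube-proxy has been removed
kubectl -n kube-system get daemonset kube-proxy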

Further reading
