Detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm

Recently, while deploying a 3-node Kubernetes cluster (v1.28.2-1.1), I came across this issue during the kubeadm init stage:
W1001 10:56:31.090889 28295 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.

After digging in further, I found that the sandbox image used by the containerd runtime was older than what kubeadm expects. The current runtime configuration can be inspected with:

containerd config dump
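To check just the two settings in question without reading through the whole dump, a quick grep over the output should be enough (assuming the CRI plugin is enabled, which is the default):

# show the sandbox image and cgroup driver currently in effect
containerd config dump | grep -E 'sandbox_image|SystemdCgroup'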

Apart from this, I also saw a cgroup driver inconsistency (it should be systemd). Here is what I have in /etc/containerd/config.toml after fixing both settings:

version = 2

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
imports = ["/etc/containerd/runtime_*.toml", "./debug.toml"]

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0

[debug]
  address = "/run/containerd/debug.sock"
  uid = 0
  gid = 0
  level = "info"

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
  [plugins."io.containerd.gc.v1.scheduler"]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = 0
    startup_delay = "100ms"
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = true
  [plugins."io.containerd.service.v1.tasks-service"]
    blockio_config_file = ""
    rdt_config_file = ""
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

With the sandbox image and cgroup driver corrected, the warning during kubeadm init should be resolved (more problems were encountered; more on that below).
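Note that containerd only reads /etc/containerd/config.toml at startup, so it needs to be restarted after editing the file. On a systemd-managed host:

# restart containerd so the new sandbox image and cgroup driver take effect
sudo systemctl restart containerd
sudo systemctl status containerd --no-pager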

In Linux, a namespace is a feature that provides isolation and separation of processes, particularly in the context of containers and virtualization. Namespaces allow different processes to have their own view of system resources, such as process IDs, network interfaces, file systems, and more. This isolation is essential for creating containers, virtual machines, and other forms of process separation. (Source: redhat.com/)

The pause container image is used to initialize and maintain the container namespaces for the other containers within a pod. This "pause container" is a minimalistic, lightweight container that does nothing but sleep indefinitely. The user containers are created into the namespaces it holds, so those namespaces survive even when the other containers crash or die. You can inspect these sandbox containers directly on a node, as shown after the list below.

Pod Initialization: When a pod is created, Kubernetes initializes the pod's namespaces (e.g., PID, network, IPC) by creating the sandbox pause container within those namespaces. The pause container is typically a minimalistic container that sleeps indefinitely and does not perform any active work.

Pod Containers: The other containers defined in the pod specification (e.g., application containers) are then created within the same namespaces as the pause container. These containers share the same namespaces, which allows them to interact with each other as if they were running on the same host.

Init and Signal Handling: The pause container acts as an "init" process for the pod's namespaces. It handles signals sent to the pod, such as SIGTERM. When a signal is sent to the pod (e.g., when the pod is terminated), the pause container receives the signal and ensures that it is propagated to the other containers within the pod. This allows the other containers to perform graceful shutdown and cleanup.

Cleanup and Termination: Once the other containers have completed their tasks or terminated, the pause container remains running. If the pod is deleted or terminated, the pause container is responsible for shutting down any remaining containers within the pod's namespaces.
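On a running node you can see these sandbox (pause) containers yourself. A couple of quick checks, assuming crictl is installed and pointed at the containerd socket:

# list the pod sandboxes managed by containerd
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods

# confirm which pause image(s) are present in containerd's k8s.io namespace
sudo ctr -n k8s.io images ls | grep pause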

The flannel pods were still in CrashLoopBackOff state. Checking the logs further, I saw:

W1003 17:32:40.526877       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1003 17:32:40.544578       1 kube.go:145] Waiting 10m0s for node controller to sync
I1003 17:32:40.544610       1 kube.go:490] Starting kube subnet manager
I1003 17:32:41.544855       1 kube.go:152] Node controller sync successful
I1003 17:32:41.544918       1 main.go:232] Created subnet manager: Kubernetes Subnet Manager - ubuntu1
I1003 17:32:41.544929       1 main.go:235] Installing signal handlers
I1003 17:32:41.545237       1 main.go:543] Found network config - Backend type: vxlan
I1003 17:32:41.545279       1 match.go:206] Determining IP address of default interface
I1003 17:32:41.545967       1 match.go:259] Using interface with name enp3s0 and address 192.168.1.111
I1003 17:32:41.545997       1 match.go:281] Defaulting external address to interface address (192.168.1.111)
I1003 17:32:41.546148       1 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E1003 17:32:41.546522       1 main.go:335] Error registering network: failed to acquire lease: node "ubuntu1" pod cidr not assigned
W1003 17:32:41.546809       1 reflector.go:347] github.com/flannel-io/flannel/pkg/subnet/kube/kube.go:491: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
I1003 17:32:41.546831       1 main.go:523] Stopping shutdownHandler...

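For reference, logs like the above can be pulled with kubectl (the namespace is kube-flannel with recent manifests; older ones used kube-system):

# tail the logs of the flannel DaemonSet pods
kubectl -n kube-flannel logs -l app=flannel --tail=50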

This issue was resolved by creating the file /run/flannel/subnet.env on all nodes:

cat <<EOF | tee /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.0/16
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
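Since the file has to exist on every node, a quick check over SSH helps (using the same node names as the patch commands below). Keep in mind that /run is typically a tmpfs, so the file will not survive a reboot:

# confirm the subnet file exists on each node
for n in masternode1 workernode1 workernode2; do
  ssh "$n" 'cat /run/flannel/subnet.env'
done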

The "pod cidr not assigned" error also means the nodes have no podCIDR set (typically because kubeadm init was run without --pod-network-cidr). Run the patches from the master node; they apply to the worker nodes as well:

kubectl patch node masternode1 -p '{"spec":{"podCIDR":"10.244.0.0/16"}}'

kubectl patch node workernode1 -p '{"spec":{"podCIDR":"10.244.0.0/16"}}'

kubectl patch node workernode2 -p '{"spec":{"podCIDR":"10.244.0.0/16"}}'

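To confirm the CIDRs were actually set, you can print each node's podCIDR:

# show node name and assigned pod CIDR side by side
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'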

On the master node:
Delete the existing flannel resources, if any (kubectl delete -f kube-flannel.yml), and then reapply:

kubectl apply -f kube-flannel.yml --kubeconfig /etc/kubernetes/admin.conf 
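After reapplying, the flannel DaemonSet pods should eventually go to Running on every node (again, the namespace is kube-flannel with recent manifests):

# watch the flannel pods come up across all nodes
kubectl -n kube-flannel get pods -o wide
kubectl get nodes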

If the nodes are not patched, the error below is observed on the problematic flannel pod, which stays in CrashLoopBackOff:

E1003 18:11:28.374694       1 main.go:335] Error registering network: failed to acquire lease: node "nodea" pod cidr not assigned
I1003 18:11:28.374936       1 main.go:523] Stopping shutdownHandler...
W1003 18:11:28.375084       1 reflector.go:347] github.com/flannel-io/flannel/pkg/subnet/kube/kube.go:491: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding

Reference:
https://stackoverflow.com/questions/50833616/kube-flannel-cant-get-cidr-although-podcidr-available-on-node/58618952#58618952
