A worker node provides a running environment for client applications. These applications are microservices running as application containers. In Kubernetes the application containers are encapsulated in Pods, controlled by the cluster control plane agents running on the control plane node. Pods are scheduled on worker nodes, where they find the required compute, memory, and storage resources to run, and the networking needed to talk to each other and to the outside world. A Pod is the smallest scheduling work unit in Kubernetes. It is a logical collection of one or more containers scheduled together, and the collection can be started, stopped, or rescheduled as a single unit of work.
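To make that idea concrete, here is a minimal sketch of a Pod object built with the official Kubernetes Go API types (k8s.io/api). The names and images are hypothetical, and the snippet only constructs the object in memory rather than submitting it to a cluster.

```go
// Minimal sketch of a Pod as a logical collection of containers,
// assuming the k8s.io/api and k8s.io/apimachinery Go modules.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One Pod encapsulating two containers that are scheduled,
	// started, and stopped together as a single unit of work.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "web", // hypothetical name
			Namespace: "default",
			Labels:    map[string]string{"app": "web"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx:1.25"},          // main application container
				{Name: "log-shipper", Image: "busybox:1.36"}, // sidecar sharing the Pod's lifecycle
			},
		},
	}
	fmt.Println(pod.Name, "has", len(pod.Spec.Containers), "containers")
}
```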
Also, in a multi-worker Kubernetes cluster, the network traffic between client users and the containerized applications deployed in Pods is handled directly by the worker nodes, and is not routed through the control plane node.
A worker node has the following components:
- Container Runtime
- Node Agent - kubelet
- Proxy - kube-proxy
- Add-ons for DNS, Dashboard user interface, cluster-level monitoring and logging.
Container Runtime
Although Kubernetes is described as a "container orchestration engine", it lacks the capability to directly handle and run containers. In order to manage a container's lifecycle, Kubernetes requires a container runtime on the node where a Pod and its containers are to be scheduled. Runtimes are required on all nodes of a Kubernetes cluster, both control plane and worker. Kubernetes supports several container runtimes:
- CRI-O: A lightweight container runtime for Kubernetes, supporting the quay.io and Docker Hub image registries.
- containerd: A simple, robust, and portable container runtime.
- Docker Engine: A popular and complex container platform which uses containerd as a container runtime.
- Mirantis Container Runtime: Formerly known as the Docker Enterprise Edition.
Node Agent - kubelet
The kubelet is an agent running on each node, control plane and workers alike, and communicates with the control plane. It receives Pod definitions, primarily from the API Server, and interacts with the container runtime on the node to run the containers associated with the Pod. It also monitors the health and resource usage of the Pods running those containers.
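To illustrate that relationship with the API Server, here is a hedged sketch using client-go that lists the Pods bound to a single worker node, which is roughly the set of Pods that node's kubelet is asked to run. The kubeconfig path and node name are assumptions, and the real kubelet uses a watch rather than a one-off list.

```go
// Sketch: list the Pods scheduled onto one node via the API Server,
// assuming the k8s.io/client-go and k8s.io/apimachinery Go modules.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The field selector restricts the result to Pods bound to one worker node.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=worker-1", // hypothetical node name
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s -> %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```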
The kubelet connects to container runtimes through a plugin-based interface - the Container Runtime Interface (CRI). The CRI consists of protocol buffers, gRPC API, libraries, and additional specifications and tools. In order to connect to interchangeable container runtimes, the kubelet uses a CRI shim, an application which provides a clear abstraction layer between the kubelet and the container runtime.
In this model, the kubelet, acting as a gRPC client, connects to the CRI shim, which acts as a gRPC server, to perform container and image operations. The CRI implements two services: ImageService and RuntimeService. The ImageService is responsible for all the image-related operations, while the RuntimeService is responsible for all the Pod and container-related operations.
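The following sketch shows what that client side can look like in Go, using the published CRI bindings (k8s.io/cri-api): it dials a runtime's CRI socket over gRPC and calls the RuntimeService Version method. The socket path is an assumption (containerd's common default); CRI-O typically listens on a different socket.

```go
// Sketch of a CRI client: dial the runtime's Unix socket and query RuntimeService.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI traffic flows over a local Unix domain socket, not over the network.
	// Socket path below is an assumption (containerd's usual default on Linux).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// RuntimeService handles Pod sandbox and container operations;
	// ImageService (runtimeapi.NewImageServiceClient) handles image operations.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion)
}
```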
kubelet - CRI shims
Now let's explore the kubelet and CRI shims a little bit more.
Originally the kubelet agent supported only a couple of container runtimes, first the Docker Engine followed by rkt, through a unique interface model integrated directly in the kubelet source code. However, this approach was not intended to last forever even though it was especially beneficial for Docker. In time, Kubernetes started migrating towards a standardized approach to container runtime integration by introducing the CRI. Kubernetes adopted a decoupled and flexible method to integrate with various container runtimes without the need to recompile its source code. Any container runtime that implements the CRI could be used by Kubernetes to manage containers.
Shims are Container Runtime Interface (CRI) implementations, interfaces or adapters, specific to each container runtime supported by Kubernetes. Here are some examples of CRI shims:
cri-containerd
cri-containerd allows containers to be directly created and managed with containerd at the kubelet's request.
CRI-O
CRI-O enables the use of any Open Container Initiative (OCI)-compatible runtime with Kubernetes, such as runC.
dockershim and cri-dockerd
Before Kubernetes release v1.24, the dockershim allowed containers to be created and managed by invoking the Docker Engine and its internal runtime, containerd. Due to Docker Engine's popularity, this shim had been the default interface used by the kubelet. However, starting with Kubernetes release v1.24, the dockershim is no longer maintained by the Kubernetes project; its code has been removed from the kubelet source code, and it is therefore no longer supported by the kubelet node agent. As a result, Docker, Inc. and Mirantis have agreed to introduce and maintain a replacement adapter, cri-dockerd, which ensures that the Docker Engine continues to be a container runtime option for Kubernetes, in addition to the Mirantis Container Runtime (MCR). The introduction of cri-dockerd also ensures that both Docker Engine and MCR follow the same standardized integration method as the CRI-compatible runtimes.
Additional details about the deprecation process of the dockershim can be found on the Updated: Dockershim Removal FAQ page.
Proxy - kube-proxy
The kube-proxy is the network agent which runs on each node, control plane and workers alike, and is responsible for the dynamic updates and maintenance of all networking rules on the node. It abstracts the details of Pod networking and forwards connection requests to the containers in the Pods.
The kube-proxy is responsible for TCP, UDP, and SCTP stream forwarding or random forwarding across a set of Pod backends of an application, and it implements forwarding rules defined by users through Service API objects.
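As an illustration, here is a minimal sketch of such a Service object expressed with the Go API types; kube-proxy is what turns this declaration into forwarding rules on every node. The name, labels, and ports are hypothetical.

```go
// Sketch of a Service whose forwarding rules kube-proxy materializes on each node:
// traffic to the Service port is forwarded to the Pods matched by the selector.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "web"}, // the backend Pods
			Ports: []corev1.ServicePort{
				{
					Protocol:   corev1.ProtocolTCP,
					Port:       80,                   // port exposed by the Service
					TargetPort: intstr.FromInt(8080), // port the Pod containers listen on
				},
			},
		},
	}
	fmt.Printf("forward %s/%s:%d -> pods with %v on port %s\n",
		svc.Namespace, svc.Name, svc.Spec.Ports[0].Port,
		svc.Spec.Selector, svc.Spec.Ports[0].TargetPort.String())
}
```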
Add-ons
Add-ons are cluster features and functionality not yet available in Kubernetes itself, and are therefore implemented through third-party Pods and services.
- DNS: Cluster DNS is a DNS server required to assign DNS records to Kubernetes objects and resources.
- Dashboard: A general purpose web-based user interface for cluster management.
- Monitoring: Collects cluster-level container metrics and saves them to a central data store.
- Logging: Collects cluster-level container logs and saves them to a central log store for analysis.
Conclusion
The worker node components, including the kubelet, kube-proxy, and container runtime, work together like a well-oiled machine to keep our applications running smoothly. Whether you’re new to Kubernetes or an experienced user, understanding these components is key to unlocking its full potential. And don’t worry if it all seems a bit overwhelming at first - we’re all learning together!
Stay tuned for my next part of the blog where I’ll dive into the exciting world of Kubernetes networking. See you there! 😊