Naveen Jayachandran
Kubernetes – Node

Kubernetes Nodes are the actual worker or master machines where the real execution of workloads takes place. Each node runs essential services to operate Pods and is managed by the Kubernetes Control Plane. A single Kubernetes node can host multiple Pods, and each Pod can run one or more containers.

There are three key processes running on every node to run and manage Pods:

Container Runtime: The engine that runs containers inside Pods. Example: Docker, containerd, CRI-O.

Kubelet: The agent responsible for communicating with the Control Plane and the container runtime. It ensures Pods are running as defined in the PodSpec.

Kube-Proxy: Handles networking for Pods by forwarding requests from Kubernetes Services to the correct Pods on the node.

What is Kubernetes?
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes is written in Go (Golang) and supports deployment on public, private, and hybrid cloud environments.

Kubernetes enables organizations to manage a large number of containers as a single logical unit, simplifying scaling and fault tolerance across infrastructure.

What is a Kubernetes Node?
A Kubernetes Node is a physical or virtual machine that runs containerized workloads. Each node contains:

Kubelet – communicates with the Control Plane and ensures containers are running correctly.

Container Runtime – executes the containers (e.g., Docker, containerd).

Kube-Proxy – manages network rules and service discovery to route traffic efficiently.

Nodes collectively form the computing capacity of a Kubernetes cluster, where actual application workloads run.

How Does a Kubernetes Pod Work?
A Pod is the smallest deployable unit in Kubernetes — often compared to a process in a traditional OS.

Each Pod encapsulates one or more tightly coupled containers that share:

Storage (volumes)

Networking (same IP and port space)

Configuration data

Pods are ephemeral by design. If a Pod fails, Kubernetes automatically replaces it with a new replica to maintain desired application state and availability.
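As a minimal sketch of this sharing (the Pod, container, and volume names here are illustrative), two containers in one Pod can exchange data through an emptyDir volume and reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod            # hypothetical name
spec:
  volumes:
  - name: shared-data         # emptyDir lives as long as the Pod does
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Because both containers share the Pod's network namespace and IP, they must also avoid port conflicts with each other.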

How Does a Kubernetes Node Work?
Nodes provide the execution environment for Pods. Kubernetes distinguishes between two types of nodes:

Master Node (Control Plane): Manages cluster operations such as scheduling, scaling, and maintaining cluster state.

Worker Node: Executes application Pods and reports status to the Control Plane.

A cluster can have any number of worker nodes, and for high availability it’s best practice to run at least three control plane nodes, so that etcd can maintain quorum if one of them fails.

Kubernetes Node Name Uniqueness
Each node in a Kubernetes cluster must have a unique name. Duplicate node names lead to inconsistencies and conflicts in object references, metadata, and state tracking.

If two nodes share the same name, the Kubernetes scheduler cannot reliably distinguish between them, which may result in misdirected workloads or volume attachments.

Kubernetes Nodes Not Ready
To view nodes and their status:

```shell
kubectl get nodes
```
Node statuses include:

Ready: Node is healthy and available for scheduling Pods.

NotReady: Node is currently unhealthy (common causes include network failure, a crashed kubelet, container runtime problems, or resource pressure).

Unknown: Node is not responding to the Control Plane (communication timeout or network issue).

Self-registration of Kubernetes Nodes
Nodes must be registered with the Kubernetes API server before the Control Plane can schedule Pods onto them.

By default, nodes self-register using the kubelet process. Kubelet communicates with the API server and automatically creates a Node object in the cluster.

Options for Self-registration
Kubeconfig Access: Provide kubelet with a kubeconfig file path for API server authentication.

Register Node Flag: The kubelet flag --register-node=true (default) allows automatic registration with the API server, which then creates the Node object used by the scheduler.

Manual Kubernetes Node Administration
If self-registration is disabled, nodes can be manually registered or removed:

```shell
kubectl create -f node.yaml      # register a Node object from a manifest
kubectl delete node <node-name>  # remove a node from the cluster
```
When creating a Node object manually, include name, labels, and taints in the YAML file.
Labels and taints help control Pod scheduling to ensure workloads are deployed only on compatible nodes.
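A manually created Node object might look like the following sketch (the node name, label values, and taint key are placeholders to adapt to your environment):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                            # must be unique in the cluster
  labels:
    topology.kubernetes.io/zone: us-east-1a # used by the scheduler for placement
spec:
  taints:
  - key: dedicated          # hypothetical taint key
    value: gpu
    effect: NoSchedule      # only Pods with a matching toleration land here
```

The NoSchedule taint keeps general workloads off this node, while the zone label lets Pods target it via nodeSelector or node affinity.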

Kubernetes Node Status
To describe a node in detail:

```shell
kubectl describe node <node-name>
```
A healthy node includes conditions similar to:

"conditions": [
{
"type": "Ready",
"status": "True",
"reason": "KubeletReady",
"message": "kubelet is posting ready status"
}
]
Kubernetes Node Controller
The Node Controller monitors node health and manages node lifecycle.
If --register-node=true, nodes are automatically registered.
For manual setups, use --register-node=false to disable auto-registration.

Resource Capacity Tracking
When a node registers, it reports its resource capacity to the API server, including:

CPU cores

Memory

Ephemeral storage

Maximum number of Pods

The scheduler uses this data to ensure that Pods are placed only on nodes with sufficient available resources.
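For the scheduler to make those placement decisions, Pods should declare what they need. A minimal example (name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: capped-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:           # what the scheduler reserves on the node
        cpu: "250m"
        memory: "128Mi"
      limits:             # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

The scheduler only considers nodes whose remaining allocatable capacity covers the requests; the limits are enforced by the kubelet and container runtime after placement.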

Kubernetes Node Topology
Node topology allows controlling how Pods are distributed across nodes and zones for performance, resilience, and affinity requirements.

Example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  containers:
  - name: my-container
    image: nginx
```

Here, the Pod is scheduled only to nodes in the us-east-1a availability zone.

Graceful Node Shutdown
A graceful shutdown allows running Pods to complete their tasks and save their state before termination.

Benefits:

Prevents data loss or corruption

Allows stateful applications to persist important data

Maintains high availability and minimizes downtime

Ensures predictable, stable system behavior

If a Pod does not terminate within the grace period, it is forcefully stopped.
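At the Pod level, the grace period and pre-shutdown behavior are configurable. A sketch (the sleep duration is a placeholder for real connection-draining logic):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]  # runs before the container receives SIGTERM
```

The preStop hook plus the SIGTERM handling must both finish within the grace period, or the container is killed with SIGKILL.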

Non-Graceful Node Shutdown Handling
A non-graceful shutdown (e.g., power failure) terminates Pods abruptly, without kubelet notification.

Consequences include:

Loss of in-memory data or unsaved state

Pods of stateful applications stuck in a “Terminating” state indefinitely

Replacement Pods that cannot be scheduled until the node recovers or is removed from the cluster

This is why ensuring graceful shutdowns is critical in production clusters.

Kubernetes Nodes vs Kubernetes Pods
| Aspect | Kubernetes Nodes | Kubernetes Pods |
| --- | --- | --- |
| Definition | Physical or virtual machines that run workloads | Smallest deployable unit that runs containers |
| Function | Host one or more Pods | Host one or more containers |
| Resources | Provide CPU, memory, and storage | Consume resources from the node |
| Responsibility | Managed by the Control Plane | Managed by the Scheduler and Controller Manager |
| Role | Run Kubernetes workloads | Execute application containers |
Managing Kubernetes Nodes
Node management includes:

Provisioning & Deployment – Adding new nodes to increase capacity

Maintenance & Upgrades – Applying patches, updates, or reconfigurations

Scaling – Automatically or manually adding/removing nodes for optimal performance and availability

Optimizing Kubernetes Node Performance
Performance optimization ensures efficient use of compute resources and maintains cluster stability.

Key focus areas:

Scheduling strategy

Resource allocation

Container runtime tuning

Resource Utilization Optimization
Container Packing: Maximize utilization by co-locating compatible workloads.

Resource Requests & Limits: Define fair resource allocations to prevent contention.

Eviction Policies: Gracefully evict Pods when nodes run low on resources.

Resource Monitoring: Continuously track usage to detect inefficiencies and bottlenecks.
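Eviction thresholds are set in the kubelet's configuration file. A sketch with illustrative threshold values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"   # evict immediately when free memory drops below this
  nodefs.available: "10%"     # ...or node filesystem free space falls below 10%
evictionSoft:
  memory.available: "500Mi"   # soft threshold: evict only after the grace period
evictionSoftGracePeriod:
  memory.available: "1m30s"
```

Soft thresholds give Pods a window to recover before eviction, while hard thresholds act immediately to protect node stability.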

Scheduling Strategies
Node Affinity / Anti-Affinity: Schedule Pods on or away from specific nodes based on labels.

Workload-Aware Scheduling: Place Pods based on performance characteristics and resource needs.

Dynamic Scheduling: Continuously rebalance workloads as cluster conditions change.
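Node affinity is more expressive than a plain nodeSelector. A sketch of a required affinity rule (the zone values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In          # also supports NotIn for anti-affinity
            values: ["us-east-1a", "us-east-1b"]
  containers:
  - name: app
    image: nginx
```

Unlike nodeSelector, affinity supports set-based operators and a "preferred" variant that treats the rule as a soft preference rather than a hard requirement.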

Container Runtime Tuning
Configuration Optimization: Adjust runtime settings (e.g., Docker, containerd) for better efficiency.

Image Optimization: Use minimal, lightweight images to reduce start times.

Regular Updates: Keep runtime versions current for performance and security improvements.

Memory Management: Fine-tune container memory allocation for consistent performance.

Securing Kubernetes Nodes
Node security is essential to protect workloads from unauthorized access and vulnerabilities. Key practices include:

Node Hardening & Patch Management

Network Policies and Access Controls

Container Runtime Security (sandboxing, scanning, non-root execution)
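Non-root execution can be enforced per Pod via a securityContext. A sketch, assuming the image supports running as the given UID (stock nginx, for example, needs an unprivileged variant for this to start):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    runAsNonRoot: true            # kubelet refuses to start root containers
    runAsUser: 1000
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # may require writable emptyDir mounts for temp files
```

These settings reduce the blast radius of a compromised container on the node.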
