Prerequisites for Understanding Kubernetes
Please review my previous article to gain a clear understanding of Docker images, Docker Compose, and Docker Swarm. This background will help you see the limitations of these tools and appreciate the need for a more sophisticated container orchestration solution.
- Introduction to Linux
- Linux namespaces & cgroups
- Networking Fundamentals
- Docker & Containers
- Docker Compose
- Docker Swarm (optional)
- Basic Cloud & Virtualization Concepts
- Please check out the Kubernetes Roadmap so it’s crystal clear which skills to learn first and in what order.
Types of Infrastructure Deployment
Bare Metal Machines
Before virtualization came along, there was no practical way to isolate applications running on the same physical machine or server.
Bare metal machines are physical servers with no virtualization layer between the hardware and the operating system. Instead of running inside a virtual machine (VM) managed by something like VMware, Hyper-V, or a cloud hypervisor, the OS is installed directly on the server’s hardware.
Why Bare Metal Is Used
- High performance needs – Databases, AI/ML workloads, or big data processing that need maximum CPU/IO performance.
- Specialized hardware
- Compliance and security – Certain industries (finance, healthcare, defense) require full control over physical machines.
- Kubernetes on bare metal – Avoids virtualization overhead and gives pods direct access to network and storage hardware.
Virtual Machines
A Virtual Machine (VM) is a software-based emulation of a physical computer, created and managed by a hypervisor.
The code running inside a VM behaves as if it were running on its own separate computer, hence the term “virtual machine.” Just like physical machines, VMs require an OS. The OS running inside a VM is called the guest OS, while the OS on the real machine is called the host OS.
Why Use Virtual Machines
- Server consolidation – Run multiple workloads on fewer physical machines.
- Testing & development – Spin up different OS versions or environments quickly.
- Disaster recovery – Restore VMs from snapshots in minutes.
- Legacy application support – Run older OS versions for applications that don’t work on modern hardware.
In the context of Kubernetes, Bare Metal Machines and Virtual Machines are types of cluster node infrastructure — basically, the environment on which your Kubernetes nodes (control plane + worker nodes) run.
In short:
Bare Metal and VMs = where Kubernetes runs
Containers = what Kubernetes runs
Introduction to Kubernetes
Kubernetes (K8s) is the open source platform for modern container orchestration. It helps teams deploy, scale, and manage containerized applications efficiently. It takes care of things like starting your apps, keeping them running if something breaks, scaling them up when traffic increases, and rolling out updates without downtime.
What Kubernetes solves
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily manage a canary deployment for your system.
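For example, a basic canary rollout is often modeled as two Deployments (a stable one and a small canary one) behind a single Service that selects both. The sketch below is illustrative – all names, labels, and images are placeholders, and Deployments and Services are explained later in this article:

```yaml
# Canary sketch: one Service spreads traffic across a stable and a canary Deployment
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                   # matches pods from BOTH Deployments below
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9                  # ~90% of pods run the current version
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
        - name: web
          image: myapp:1.0     # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                  # ~10% of pods run the new version
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
        - name: web
          image: myapp:2.0     # placeholder image
```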
Here’s the short version of what Kubernetes solves:
- Container orchestration – Runs and manages containers across many machines.
- Service discovery & load balancing – Finds services and balances traffic automatically.
- Horizontal scaling – Adds/removes containers based on demand.
- Self-healing – Restarts, replaces, or reschedules failed containers.
- Storage orchestration – Connects containers to persistent storage.
- Secrets & config management – Securely stores and injects sensitive data/configs.
- Automated rollouts & rollbacks – Deploys updates without downtime and reverts if needed.
- Resource scheduling – Efficiently allocates CPU/memory to workloads.
Kubernetes architecture
Kubernetes follows a master–worker (control plane–node) model:
- Control Plane → The “brain” that makes decisions about the cluster.
- Worker Nodes → The “muscles” that actually run your applications in containers.
Control Plane (Manages the Cluster)
- API Server – Entry point for all commands and API calls; talks to every component.
- etcd – Key–value database that stores the cluster’s state and configuration.
- Scheduler – Decides which node runs each pod based on resources and rules.
- Controller Manager – Watches the cluster and makes changes to match the desired state (e.g., starts new pods if one dies).
- Cloud Controller Manager – Connects your cluster to a cloud provider’s APIs.
Worker Node (Runs Your Applications)
- Kubelet – Node agent that talks to the API Server and ensures containers are running.
- Kube-proxy – Maintains network rules on each node so traffic to Services is routed and load-balanced to the right pods.
- Container Runtime – Software that actually runs containers (containerd, CRI-O, etc.).
Pod (Smallest Unit)
Pod – One or more containers grouped together, sharing network and storage, scheduled onto a node.
Kubernetes Core Concepts
1️⃣Pod – Smallest Deployable Unit
- A pod is one or more containers that share the same network (IP address) and storage.
- Containers in a pod usually work together (e.g., app + sidecar).
- Pods are ephemeral — if a pod dies, it’s replaced, not “fixed.”
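A minimal Pod manifest looks like the sketch below (name, labels, and image are illustrative):

```yaml
# pod.yaml – a single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25        # any container image works here
      ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f pod.yaml` and inspect it with `kubectl get pods`.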
2️⃣ ReplicaSet – Keeps Pods Running
- Ensures a set number of identical pods are always running.
- If one pod fails or is deleted, it’s replaced automatically.
- Can be created directly, but usually managed via Deployments.
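For illustration, a standalone ReplicaSet manifest might look like this (names are placeholders; in practice you would normally define a Deployment and let it manage the ReplicaSet):

```yaml
# replicaset.yaml – keeps three identical pods running
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 3                  # desired number of pods
  selector:
    matchLabels:
      app: hello               # manages any pod carrying this label
  template:                    # pod template used to create replacements
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
```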
3️⃣ Deployment – Manages Updates & Rollbacks
- Higher-level abstraction over ReplicaSets.
- Handles:
- Rolling updates (gradually updating pods to new versions).
- Rollbacks (returning to an earlier stable version).
- Scaling up/down.
- Most common way to deploy apps in Kubernetes.
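A typical Deployment manifest, sketched with illustrative names and a rolling-update strategy:

```yaml
# deployment.yaml – manages a ReplicaSet and rolls out updates gradually
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one pod down during an update
      maxSurge: 1              # at most one extra pod created during an update
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25    # change this tag to trigger a rolling update
```

Changing the image (or running `kubectl set image`) triggers a rolling update, `kubectl rollout undo deployment/hello-deploy` rolls back, and `kubectl scale deployment/hello-deploy --replicas=5` scales it.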
4️⃣ Service – Stable Networking for Pods
Pods have dynamic IPs, so a Service provides a fixed endpoint for accessing them.
Types:
- ClusterIP – Internal access only (default).
- NodePort – Exposes the service on each node’s IP at a fixed port.
- LoadBalancer – Uses cloud provider’s LB to expose externally.
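A ClusterIP Service that fronts the pods labeled `app=hello` might look like this (illustrative names):

```yaml
# service.yaml – stable virtual IP and DNS name in front of matching pods
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: ClusterIP              # default; NodePort or LoadBalancer expose it externally
  selector:
    app: hello                 # traffic is load-balanced across pods with this label
  ports:
    - port: 80                 # port the Service listens on
      targetPort: 80           # port on the container
```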
5️⃣ Ingress – HTTP Routing to Services
- Routes external HTTP(S) traffic to the correct services inside the cluster.
- Requires an Ingress Controller (like NGINX, Traefik).
- Supports domain-based and path-based routing.
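A sketch of an Ingress that routes a hostname to the Service above; it assumes the NGINX Ingress Controller is installed, and the hostname is a placeholder:

```yaml
# ingress.yaml – host/path based HTTP routing to Services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  ingressClassName: nginx           # assumes an NGINX Ingress Controller
  rules:
    - host: hello.example.com       # example domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svc     # Service from the previous example
                port:
                  number: 80
```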
6️⃣ Namespace – Logical Separation
- Virtual partitions inside a cluster.
- Helps separate resources for different environments (dev/test/prod) or teams.
- Supports access control via RBAC.
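Namespaces are plain objects too; this sketch creates a dev namespace and places a pod inside it (names are illustrative):

```yaml
# namespace.yaml – a logical partition plus a pod created inside it
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  namespace: dev               # lives in the dev namespace
spec:
  containers:
    - name: web
      image: nginx:1.25
```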
7️⃣ ConfigMap – Non-Sensitive Config Storage
- Stores plain-text configuration values.
- Injects configs into pods via environment variables or mounted files.
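A minimal sketch of a ConfigMap and a pod that consumes it as environment variables (keys and values are examples):

```yaml
# configmap.yaml – non-sensitive configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: web
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: app-config   # exposes LOG_LEVEL and APP_MODE as env vars
```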
8️⃣ Secret – Sensitive Data Storage
- Stores passwords, tokens, and certificates in Base64-encoded form (encoded, not encrypted, by default).
- Injected into pods securely as environment variables or mounted files.
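A minimal Secret sketch (the values are Base64-encoded placeholders, not real credentials):

```yaml
# secret.yaml – sensitive values, Base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=           # "admin" in Base64
  password: czNjcjN0           # "s3cr3t" in Base64
```

Pods consume it much like a ConfigMap, for example via `secretKeyRef` environment entries or a mounted volume.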
9️⃣ Volume – Storage for Pods
Pods and their containers are ephemeral, so volumes provide storage that survives container restarts – and persistent volumes can outlive the pod itself.
Types:
- emptyDir – Temporary storage tied to the pod’s lifecycle.
- hostPath – Maps to a directory on the node.
- Persistent Volumes (PV) & Persistent Volume Claims (PVC) – For long-term storage (often backed by cloud storage).
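A sketch of a PersistentVolumeClaim and a pod that mounts it; it assumes the cluster has a default StorageClass that can provision the volume dynamically (names and sizes are illustrative):

```yaml
# pvc.yaml – request 1Gi of persistent storage and mount it into a pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce            # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc    # binds the pod to the claim above
```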
🔟 Labels & Selectors – Tagging and Grouping
- Labels → Key-value tags added to resources.
- Selectors → Match resources with specific labels (used by Services, Deployments, etc.).
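The earlier Deployment and Service sketches already rely on this mechanism; selectors can also use expressions, as in this illustrative Deployment:

```yaml
# selector-demo.yaml – matchLabels plus matchExpressions in one selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selector-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo                # pods must carry app=demo
    matchExpressions:
      - key: environment
        operator: In           # also: NotIn, Exists, DoesNotExist
        values: ["dev"]
  template:
    metadata:
      labels:
        app: demo
        environment: dev       # satisfies the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25
```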
References & Credits
- Official Kubernetes documentation
- AI tools were used to assist in research and writing, but final content was reviewed and verified by the author.