Prerequisites
- Docker or any container runtime
- Basic Linux knowledge and familiarity with common commands
- Networking basics: IP addresses, ports, protocols, DNS
- Version control basics (e.g., Git)
Introduction
Modern applications don’t live on a single server anymore. They run across many environments that could be your laptop, cloud servers, or even multiple clouds. Containers make this possible by packaging apps with everything they need to run.
Problem: when you have hundreds or thousands of containers, how do you start them, keep them running, scale them, and make sure they can talk to each other?
That’s where Kubernetes (K8s) comes in. Think of it as an air traffic controller for containers: it decides where containers should run, monitors their health, and makes sure your applications are always available.
Why Kubernetes?
Kubernetes solves deployment problems that used to be real headaches:
- Portability → Run anywhere: local, cloud, or hybrid.
- Self-healing → If a container crashes, Kubernetes restarts it automatically.
- Scalability → Automatically adds more containers when load increases.
- Observability → Built-in monitoring and logging tools.
Before Kubernetes, deploying an app meant manually setting up servers, load balancers, and networking. Today, Kubernetes handles all that automatically.
Before vs After Kubernetes
Before:
Imagine you had to deploy an app on three servers. You’d SSH into each, start the app, set up load balancing, and hope nothing crashed. If a server went down, you’d have to fix it manually. So much to do.
After:
You tell Kubernetes how many copies of your app you want, and it does the rest from scheduling pods to handling traffic, and even replacing failed instances automatically.
Kubernetes Architecture
Kubernetes Cluster
- In the last part we saw that, before Kubernetes, you had to set everything up manually on each machine. With Kubernetes, those machines are grouped together into what is called a Kubernetes cluster.
- These machines, whether physical or virtual, are called nodes.
- Kubernetes manages them as one unified system, so you don’t have to think about each server individually.
It has two main parts:
- Control Plane: The brain of Kubernetes.
- Worker Nodes: The hands that do the actual work of running your applications.
Control Plane (The Brain)
- The control plane makes global decisions about the cluster like scheduling, scaling, and responding to failures. Below are the key components of the control plane:
- API Server: Acts as the interface for managing the cluster; all other components communicate through it.
- Scheduler: Decides which node should run each workload.
- Controller Manager: Handles background tasks such as maintaining node health, scaling and other cluster-wide operations.
- etcd: A key-value store that stores all cluster data, including configuration and state.
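If you already have a cluster running, you can see these control plane components for yourself. On most local setups they run as pods in the kube-system namespace (exact pod names vary by distribution):

```shell
# List the control plane components running as pods
# (names vary by distribution, e.g. kube-apiserver-docker-desktop)
kubectl get pods -n kube-system

# Show the API server endpoint the cluster is reachable at
kubectl cluster-info
```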
Worker Nodes (The Hands)
- Worker nodes are where your applications actually run.
- Each worker node has:
- kubelet: Communicates with the control plane and ensures that the containers for a pod are running on that node.
- Container Runtime: The engine that runs containers (e.g., containerd, CRI-O). Note that Docker itself is not on this list: Kubernetes removed its Docker-specific integration (dockershim) in v1.24, though images built with Docker still run fine under containerd.
- kube-proxy: Manages networking rules for communication between pods and services.
- If the control plane is the brain, worker nodes are the muscle that executes its instructions.
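If you want to check which container runtime your own nodes use, the wide output of kubectl includes a CONTAINER-RUNTIME column:

```shell
# Shows per-node details, including the container runtime
# (e.g. containerd://1.7.x) in the CONTAINER-RUNTIME column
kubectl get nodes -o wide
```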
Pods: The Smallest Deployable Unit
- Kubernetes does not run containers directly; it runs pods.
A pod is:
- A wrapper around one or more containers.
- Shared environment: network namespace, storage volumes.
- Temporary: If a pod fails, Kubernetes can replace it automatically.
Example:
- A pod might run your main application container alongside a second, helper container (a sidecar, used in specific cases) that handles logging.
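As a sketch, a two-container pod manifest might look like this. The pod name, the log-shipper container, and its command are illustrative placeholders, not from a real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger      # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx           # the main application container
    - name: log-shipper      # illustrative sidecar for logging
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]  # placeholder command
```

Both containers share the pod’s network namespace (they can reach each other on localhost) and can mount the same volumes, which is what makes the sidecar pattern work.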
kubectl: Kubernetes Command-Line Tool
- kubectl is the primary way to interact with a Kubernetes cluster.
- It talks to the API Server to create, inspect, and manage resources.
Example:
```shell
# See all nodes in your cluster
kubectl get nodes

# Create a pod running Nginx
kubectl run nginx --image=nginx

# See all pods
kubectl get pods
```
- So what happens when you run kubectl run nginx --image=nginx?
- kubectl sends the request to the API Server.
- The Scheduler chooses a worker node to run the pod.
- The kubelet on that node pulls the image (nginx) and starts the container.
- The pod runs until it’s stopped, deleted, or replaced.
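You can watch this whole flow happen with a few follow-up commands:

```shell
# Watch the pod move from Pending to ContainerCreating to Running
kubectl get pods -w

# Shows which node the Scheduler picked and the image-pull events
kubectl describe pod nginx

# Clean up when you are done
kubectl delete pod nginx
```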
Installing Kubernetes
- For now I am personally using Docker Desktop, which can spin up a single-node Kubernetes cluster.
- Open Docker Desktop, go to Settings, and you will see a Kubernetes option; click it.
- Check the "Enable Kubernetes" option and you are good to go.
- This is a great way to start practicing Kubernetes.
- You can also use Minikube, which runs a single-node Kubernetes cluster and is great for learning.
- Refer to the official Minikube documentation for installing it on Windows, Linux, and macOS.
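Once Minikube is installed, starting a local cluster is typically just:

```shell
# Creates and starts a single-node local cluster
minikube start

# Verify the cluster is up; by default the node is named "minikube"
kubectl get nodes
```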