
Naveen Jayachandran


Introduction to Kubernetes (K8s)

Before Kubernetes, Docker revolutionized how developers packaged applications. By bundling an application and its dependencies into a single portable unit called a container, developers could run apps consistently across environments. Early orchestrators such as Docker Swarm helped coordinate those containers, but as applications scaled to hundreds or thousands of containers, new challenges emerged:

Scalability Issues – Managing and distributing containers manually became complex.

Multi-Cloud Deployments – Orchestrating containers across multiple environments was difficult.

Security & Resource Management – Ensuring proper access, isolation, and efficient resource usage was challenging.

Rolling Updates & Zero Downtime Deployments – Updating applications without causing outages was tricky.

Kubernetes was created to solve these challenges, acting as the "brain" or orchestrator for containerized applications and automating their management at scale.

What is Kubernetes?
Kubernetes, often abbreviated as K8s (K + 8 letters + s), is an open-source platform that automates deployment, scaling, and management of containerized applications.

Key Facts:

Origin: Developed by Google, inspired by internal systems Borg and Omega.

Official Release: 2014

CNCF Donation: Donated to the Cloud Native Computing Foundation (CNCF) in 2015, which now maintains it.

Adoption: Supported by all major cloud providers.

Name Meaning: "Kubernetes" comes from Greek, meaning “helmsman” or “pilot,” symbolizing its role in steering applications.

Think of Kubernetes as an orchestra conductor: each container is a musician. While a few musicians can perform without guidance, an entire orchestra requires a conductor to coordinate complex symphonies. You provide the conductor with the sheet music (your desired configuration), and Kubernetes ensures every container performs its role, replacing failed containers and scaling as needed.

Key Features of Kubernetes
Automated Scheduling: Efficiently assigns containers to nodes for optimal resource usage.

Self-Healing: Automatically restarts, replaces, and reschedules failed containers.

Rollouts & Rollbacks: Manages application updates with minimal downtime and supports easy rollbacks (see the manifest sketch after this list).

Scaling & Load Balancing: Supports horizontal scaling and distributes traffic evenly.

Resource Optimization: Monitors resource usage and ensures efficiency.
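
To make these features concrete, here is a minimal sketch of a Deployment manifest (the Deployment object itself is introduced in the terminology section below). The names `web` and the `nginx:1.27` image are placeholders: the replicas field drives scaling and self-healing, while the strategy block controls how updates roll out.

```yaml
# Sketch: desired state for a placeholder web app, with zero-downtime updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # horizontal scaling: keep 3 identical Pods running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most 1 Pod down during an update
      maxSurge: 1              # at most 1 extra Pod created during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27    # placeholder image
          ports:
            - containerPort: 80
```

You hand a file like this to the cluster with kubectl apply -f, and Kubernetes continuously works to make reality match the declared state.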

Monolithic vs Microservices
Traditionally, applications were built using a monolithic architecture, where all components were interconnected in a single codebase. Updating even a small feature, like the payment module in an e-commerce app, required redeploying the entire application—introducing risk and potential downtime.

To overcome these limitations, the industry shifted to microservices architecture, where each feature (payments, search, notifications) is developed and deployed independently. Microservices offered flexibility and scalability but introduced a new challenge: managing hundreds or thousands of small, containerized services.

Containers solved packaging, but orchestrating microservices at scale became essential. Kubernetes emerged as a powerful solution to automate deployment, scaling, and coordination of microservices.

Essential Kubernetes Terminologies
Think of Kubernetes as a well-organized company, where different components and teams collaborate to run applications efficiently.

Pod: The smallest deployable unit in Kubernetes. A Pod can contain one or more containers that share the same network and storage, enabling seamless communication.
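
As a sketch, a minimal Pod manifest with a single placeholder nginx container looks like this:

```yaml
# Sketch: a minimal Pod running one container.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.27   # placeholder image
      ports:
        - containerPort: 80
```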

Node: A machine (physical or virtual) in a Kubernetes cluster that runs applications. Each node runs a container runtime (such as containerd or CRI-O; Docker was common historically), the Kubelet (agent), and Kube-proxy (networking).

Cluster: A group of nodes working together to run containerized applications. Clusters have two types of nodes:

Master Node (Control Plane): The brain of the cluster, responsible for scheduling, decision-making, and maintaining cluster state.

Worker Nodes: Machines that run the actual application Pods.

Deployment: A Kubernetes object used to manage Pods. It provides declarative updates, meaning you define the desired state and Kubernetes handles the rest (the manifest sketched under Key Features above is a Deployment).

ReplicaSet: Ensures that a specified number of identical Pods are running at all times.

Service: Provides a stable way to expose and connect Pods within the cluster, even if the Pods themselves change over time.
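
A minimal sketch of a ClusterIP Service that gives the hypothetical app: web Pods a stable in-cluster name and port:

```yaml
# Sketch: stable virtual address for whichever Pods currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is routed to any Pod carrying this label
  ports:
    - port: 80        # port clients connect to on the Service
      targetPort: 80  # port the container listens on
```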

Ingress: Manages external access to services, providing HTTP/HTTPS routing and acting as a reverse proxy.
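
A minimal Ingress sketch, assuming an ingress controller is installed in the cluster and using a placeholder hostname:

```yaml
# Sketch: route external HTTP traffic for example.local to the "web" Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.local       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```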

ConfigMap: Stores non-sensitive configuration data separately from the application code, enabling updates without redeployment.

Secret: Securely stores sensitive information like passwords, tokens, or API keys.
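
A sketch of both objects for a hypothetical app; all names and values are placeholders:

```yaml
# Non-sensitive settings live in a ConfigMap...
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  CACHE_ENABLED: "true"
---
# ...sensitive values live in a Secret. stringData accepts plain text and
# Kubernetes stores it base64-encoded; the value below is a placeholder.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: change-me
```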

Persistent Volume (PV): Provides durable storage in the cluster that persists even when Pods are deleted or restarted.
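
Applications usually request durable storage through a PersistentVolumeClaim (PVC), which Kubernetes binds to a matching PV. A minimal sketch:

```yaml
# Sketch: claim 1 GiB of durable storage that a Pod can then mount.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```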

Kubelet: An agent running on each worker node that ensures Pods are running as expected.

Kube-proxy: Maintains network rules on each node so that traffic sent to a Service reaches the right Pods, enabling communication within the cluster.
