Daniel Favour for SigNoz


Kubernetes 101: A Beginner's Guide

Kubernetes, also known as K8s, is an open-source container orchestration tool originally developed by Google. It allows you to run and manage container-based workloads across a distributed cluster of nodes.


Container orchestration is a set of tools and processes that help host and manage containers in a production environment; that way, if one container fails, the application remains accessible through the others. The core purpose of Kubernetes is to host your applications in the form of containers.

NB: The abbreviation K8s (pronounced "Kates") comes from the eight letters between the "K" and the "s" in Kubernetes. Kubernetes uses YAML to define the resources sent to the API server.
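For illustration only, here is a minimal sketch of what such a YAML resource definition can look like: a single-container Pod (the names and image below are placeholders, not part of the original article).

```yaml
# Minimal, illustrative Pod manifest; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25      # any container image would work here
      ports:
        - containerPort: 80
```

Sending a definition like this to the API server (for example with `kubectl apply -f pod.yaml`) asks Kubernetes to create and manage that Pod.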

Deployment mechanisms before Kubernetes


Traditional Deployment

Applications were previously run directly on physical servers, and this method gave you no way to define resource boundaries for individual applications.

When multiple applications ran on the same server and one of them consumed more than its share of resources, the others were starved. The workaround was to run each application on its own physical server, but that approach is costly and hard to maintain, scales poorly for individual applications, and leads to long downtime.

Virtualized Deployment

Virtualization was introduced to solve the problems of traditional deployment. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU, isolating applications from one another inside separate VMs and making better use of the server's resources. It also makes adding and updating applications easier, which addresses the scalability issue of traditional deployment.

Virtualized deployment still has its challenges: each VM carries a full, heavyweight operating system image (often several gigabytes in size), applications are not very portable, and VMs are slow to boot.

Container Deployment

Containers are similar to VMs, but they have relaxed isolation properties that allow the operating system to be shared among applications. Like a VM, a container has its own filesystem and its own share of CPU, memory, and process space. Because containers are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions. Containers are lightweight: they take up less space than VMs and boot quickly. They are also highly scalable with the help of orchestration platforms such as Kubernetes or Docker Swarm, which create and manage containers across many hosts.

Why we need Kubernetes

Containers are a good way to package and run applications, but in production they have to be managed to avoid downtime: if one container fails, another should be started immediately. Kubernetes provides a framework to run distributed systems resiliently; it takes care of scaling and failover for your application and provides deployment patterns.
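As a rough sketch of what this looks like in practice (the names and image below are assumptions for the example), a Deployment asks Kubernetes to keep a fixed number of replicas running; if a Pod or a node fails, the control plane starts a replacement automatically.

```yaml
# Illustrative Deployment: Kubernetes keeps 3 replicas of this Pod running
# and replaces any replica that fails; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                  # desired number of Pods; Kubernetes maintains this count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```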

Kubernetes Architecture


1. Kubernetes Cluster

A Kubernetes cluster is composed of nodes: collectively, all the machines that run containers and are managed by the control plane. Clusters make it easy to develop, migrate, and manage applications. A cluster includes at least one machine designated as a "master" node, which schedules workloads onto the rest of the machines, the "worker" nodes.

2. Kubernetes Nodes

Nodes are the physical or virtual machines in a cluster. They are categorized as worker nodes and master nodes. A worker node is simply a container host; each worker node runs the software needed to host containers managed by the Kubernetes control plane. The master node watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes. The control plane runs on the master nodes.

Each node runs a kubelet, a small agent that communicates back with the control plane.

3. Control plane

The control plane is the set of APIs and software that Kubernetes users interact with; these are collectively referred to as the master components. The control plane has two main responsibilities: it exposes the Kubernetes API through the API server, and it manages the nodes that make up the cluster. It makes decisions about cluster management and detects and responds to cluster events.

4. Kubernetes Pods

The pod is the smallest unit of deployment in Kubernetes and runs on a node. Containers are grouped into pods, and a pod may include one or more containers; all containers in a pod run on the same node.
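As a sketch, the hypothetical Pod below groups two containers, an application and a log-shipping sidecar; both are scheduled onto the same node and share the Pod's network namespace (the names, images, and command are made up for the example).

```yaml
# Illustrative multi-container Pod; both containers run on the same node.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:1.25
    - name: log-shipper                            # sidecar container in the same Pod
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder command for the example
```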

Components

Installing Kubernetes on a system gives you the following components:

API Server:

It acts as the frontend of the cluster: user CLIs (such as kubectl), dashboards, and other tools all talk to the API server to interact with a Kubernetes cluster.

ETCD:

etcd is a distributed key-value store that holds all the data used to manage the cluster. When there are multiple masters, etcd keeps their view of the cluster consistent so there are no conflicts between them.

Scheduler:

It is responsible for distributing work, i.e. containers, across multiple nodes. The scheduler tracks resource usage on each compute node, decides which nodes have capacity for new containers, and assigns our application's Pods to the worker node that best meets their requirements.
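One way to see the scheduler's job is through resource requests and node selectors in a Pod spec. In the hedged example below (the label and resource values are assumptions), the scheduler will only place the Pod on a node that carries the matching label and has the requested CPU and memory available.

```yaml
# Illustrative Pod spec: the scheduler uses the requests and nodeSelector
# below when choosing a worker node; label and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  nodeSelector:
    disktype: ssd              # only nodes labelled disktype=ssd are candidates
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"          # the chosen node must have this much CPU unreserved
          memory: "128Mi"
```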

Controller:

Responsible for noticing and responding when nodes or containers go down, and for bringing the cluster back to its desired state.

Container runtime:

The underlying software used to run containers (for example, containerd or Docker).

Kubelet:

It is the agent that runs on every worker node in the cluster. The kubelet makes sure that the containers assigned to its node are running and that the applications inside them are healthy.
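Health checking is usually expressed as probes in the Pod spec, which the kubelet runs on its own node. The snippet below is a hedged sketch with made-up paths and timings: a failing liveness probe gets the container restarted, and a failing readiness probe keeps the Pod out of service traffic until it recovers.

```yaml
# Illustrative Pod with probes executed by the kubelet; paths and timings
# are placeholders for the example.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:           # restart the container if this check keeps failing
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:          # stop sending traffic to the Pod while this fails
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5
```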

Docker vs Kubernetes

Both Docker and Kubernetes are open-source, cloud-native technologies.
Docker is a containerization platform, while Kubernetes is a container orchestrator for container platforms like Docker. Put differently, Docker focuses on packaging and running containerized applications on a single node, whereas Kubernetes runs those containerized applications across a cluster. Docker is the most widely used container technology on these hosts.

Although they can be used independently, they complement each other really well. Kubernetes orchestrates Docker containers, scheduling them and automatically deploying them across IT environments to ensure high availability in cases of high demand.

Advantages of using Kubernetes

Portability:

Containers are portable across a range of environments, from virtual environments to bare metal. Kubernetes is supported in all major public clouds; as a result, you can run containerized applications on K8s across many different environments.

Self-healing:

Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

Scalability:

Cloud-native applications scale horizontally. Kubernetes supports auto-scaling, spinning up additional container instances and scaling out automatically in response to demand.
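As a hedged example, the HorizontalPodAutoscaler below (assuming a Deployment named web-deployment and a metrics server running in the cluster) lets Kubernetes scale the replica count between 2 and 10 based on average CPU utilization.

```yaml
# Illustrative HorizontalPodAutoscaler for a hypothetical "web-deployment":
# scales between 2 and 10 replicas based on average CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU rises above 70%
```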

Service discovery and load balancing:

Kubernetes can expose containers using their own IP addresses or a single DNS name for an entire group, and it can load balance and distribute network traffic across them to keep the deployment stable.
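For instance, a Service like the sketch below (the name and label are placeholders that match the earlier Deployment example) gives a group of Pods one stable name and spreads incoming traffic across them.

```yaml
# Illustrative Service: Pods labelled app=web get a single stable name
# (web-service) and traffic is load balanced across them.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web                   # matches the Pods from the Deployment example above
  ports:
    - port: 80                 # port the Service exposes inside the cluster
      targetPort: 80           # port the containers listen on
```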

Cost efficiency:

Kubernetes' inherent resource optimization, automated scaling, and flexibility to run workloads where they provide the most value mean that your IT spend stays in your control.


If you enjoyed this article, do follow us on Twitter to learn more about DevOps.

