
An Introduction To Kubernetes

By Elegberun Olugbenga · 8 min read

Have you ever wondered how companies like Facebook are able to serve applications to billions of users while managing all the different application servers, updates, and numerous application components? The answer lies in distributed computing. A distributed system is a system with multiple components that communicate and coordinate actions with one another so that they appear as a single application to the end user. In this article, I will be discussing one tool that helps achieve distributed computing: Kubernetes. In my previous article, I explained Docker and how it helps deliver software in packages called containers, thereby creating robust web applications that are lightweight, quick to start up, and able to scale with speed. If you have not read it, you can check it out here, as it provides a foundation for the Kubernetes concepts I will be discussing.


To understand how Kubernetes works, it is best to understand the problem it was created to solve. Let's assume you have built an awesome dating web app. This web app has been packaged into several containers, each handling a different part of the application: one container for the authentication logic, another for user recommendations, and so on. Now imagine you have several thousand of these containers and need to deploy them across multiple servers. As the dating app gains users, you will also need to think about scaling, i.e., adding new servers to handle the additional load. Once you start managing millions of requests per second, you will agree with me that it is only a matter of time before the whole system becomes complex and difficult to manage. This is the problem Kubernetes was made to solve. It was born out of Google's experience running production workloads at scale.

Kubernetes is a system for managing containerized applications across a variety of infrastructure. In this blog post, I will discuss some of the basic components of Kubernetes, piecing together the parts that make up this goliath.

Kubernetes can be visualized as a stack of systems, each providing a layer of abstraction over the one beneath it.
For a better understanding, we will divide the Kubernetes system into two parts: hardware and software.



A node is the basic building block of Kubernetes. It provides computing power to the Kubernetes system. Don't let the name put you off: a node is simply any device with CPU and RAM, so your phone or laptop could very well be a Kubernetes node. In production systems, a node is typically a physical machine in a data center or a virtual machine hosted on a cloud provider.



A cluster is a collection of nodes: a pool of machines working together and sharing resources with one another. The advantages of a cluster are better resource management for applications, improved fault tolerance, and improved scalability. If one machine fails, Kubernetes can run the application on another machine, and if there is too much load on a single machine, the load can be distributed across several machines. When you deploy programs into a cluster, Kubernetes intelligently distributes the workload across the various nodes in that cluster to give you the best performance. When we run our application on a cluster, we shouldn't care about which node it lands on; the distribution of work is handled automatically by Kubernetes. Each machine within the cluster uses a shared network to communicate with the others.
A Cluster

Photo by Ivan Fioravanti on HackerNoon


Data saved on a node is volatile, because the programs running application code and dependencies can be moved to other nodes in the cluster, for example when a node runs out of memory or becomes unresponsive. If a program saves data to a local file and is then relocated to a new node, the file will no longer be where the program expects it to be. For this reason, important data is typically not stored on nodes but in a Persistent Volume: storage attached to the cluster that retains data regardless of the state of the nodes or pods. You can think of a persistent volume as an external hard drive plugged into your cluster, providing a storage system that can then be used by any node.
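In practice, a pod asks for durable storage through a PersistentVolumeClaim. Here is a minimal sketch of such a claim; the name `dating-app-data` and the 1Gi size are hypothetical values for our example app:

```yaml
# Hypothetical claim requesting 1Gi of durable storage.
# A pod that references this claim keeps its data even if
# it is rescheduled onto a different node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dating-app-data
spec:
  accessModes:
    - ReadWriteOnce     # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 1Gi
```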



Containers allow developers to package up an application together with components like libraries and other dependencies into a single unit. Any program and all its dependencies can be bundled into a single image and shared on the internet. Anyone can download the container and deploy it on their infrastructure, and have the exact version of that software running on their system with very little setup required.


Kubernetes does not run containers directly but instead packages multiple containers into a higher-level structure called a pod. Containers in the same pod share network and resources and can easily communicate with one another.
The unit of a Kubernetes deployment is a pod. Let's assume you have multiple containers for the dating site you just created: one container holds the web app with your application logic, another holds your user recommendation web API. You could package these two containers into a pod, and they would be deployed as a single unit. If your dating app gains traction and you want to scale it out to more users, you can simply replicate that pod across your cluster as required. Pods can hold multiple containers, but limit yourself to a reasonable number, because the containers in a pod scale together, and this can lead to wasted resources when scaling containers that do not need to be scaled.
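A pod like the one described above can be declared in a short YAML manifest. This is a minimal sketch; the container names and the `example/...` image names are placeholders for our hypothetical dating app, not real images:

```yaml
# Sketch of a pod grouping the web app and the recommendation API
apiVersion: v1
kind: Pod
metadata:
  name: dating-app
  labels:
    app: dating-app
spec:
  containers:
    - name: web                          # application logic
      image: example/dating-web:1.0      # placeholder image
      ports:
        - containerPort: 80
    - name: recommendations              # user recommendation web API
      image: example/recommendations:1.0 # placeholder image
      ports:
        - containerPort: 8080
```

Because both containers live in the same pod, they share a network namespace and can reach each other on `localhost`.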



Pods are usually managed by a layer of abstraction called a deployment. A deployment runs multiple replicas of your application (pods) and automatically replaces any instances that fail or become unresponsive. The deployment acts as a manager for the pods: it spins up the requested number of pods, monitors them, and re-creates them in case of failure. In this way, deployments help ensure that one or more instances of your application are always available to serve user requests.
A Deployment
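A deployment is declared in YAML as well. The sketch below asks Kubernetes to keep three replicas of a hypothetical `dating-app` pod running; the image name is a placeholder:

```yaml
# Sketch of a deployment maintaining three replicas of the app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dating-app
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: dating-app          # manage pods carrying this label
  template:                    # pod template to replicate
    metadata:
      labels:
        app: dating-app
    spec:
      containers:
        - name: web
          image: example/dating-web:1.0   # placeholder image
          ports:
            - containerPort: 80
```

If any of the three pods crashes, the deployment notices the gap between the desired and actual state and starts a replacement.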


A service can be defined as a logical set of pods. Because pods are dynamic and change frequently, it is important to provide a stable interface to a set of pods that does not change; this interface is what other applications see when they want to reach a pod. A service is an abstraction on top of the pods that provides a single IP address and DNS name by which the pods can be accessed. It lets pods scale easily without anyone worrying about the changing IP address of each pod: you can create a cluster of nodes, launch deployments of pods onto the cluster, and then add a service to provide the external face by which users communicate with your application. A service's IP address remains stable regardless of changes to the pods it routes to. By deploying a service, you gain discoverability and can simplify your container designs.
A Service
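A minimal service manifest might look like the sketch below. It gives every pod labeled `app: dating-app` (a hypothetical label matching our earlier examples) a single stable address on port 80:

```yaml
# Sketch of a service fronting the dating-app pods
apiVersion: v1
kind: Service
metadata:
  name: dating-app
spec:
  selector:
    app: dating-app   # route to any pod carrying this label
  ports:
    - port: 80        # stable port exposed by the service
      targetPort: 80  # port the containers listen on
```

Other applications in the cluster can now reach the pods through the DNS name `dating-app`, no matter which pods come and go behind it.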

External Traffic

So you have all these services running in your cluster, but now the question is: how do you get external traffic into the cluster? There are several ways to do this, but I will highlight one, the Ingress controller.

Ingress Controller

Ingress is not a service but an API object that manages external access to the services in a cluster. It acts as a single entry point to your cluster, routing requests to the different services. The Ingress controller is the component responsible for fulfilling those requests. To expose the Ingress itself, the most common production-ready approach is to use a load balancer.
Ingress Controller
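As a sketch, an Ingress object that routes all traffic for a hypothetical domain to the `dating-app` service from the earlier example could look like this (the fields assume the `networking.k8s.io/v1` API):

```yaml
# Sketch of an Ingress routing external traffic to a service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dating-app
spec:
  rules:
    - host: dating.example.com        # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dating-app      # service from the previous example
                port:
                  number: 80
```

An Ingress controller such as NGINX must be running in the cluster for this object to have any effect; the object itself only describes the routing rules.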

Now let's dive deeper into the Kubernetes architecture. How does Kubernetes automatically manage all the nodes, clusters, services, and pods intelligently?


Kubernetes Architecture

Photo by Jef Spaleta on the Sensu blog


One node in the cluster acts as the master server, responsible for most of the centralized logic Kubernetes provides. This server acts as the gateway and brain of the cluster: it exposes an API for users and clients, health-checks the other servers, orchestrates communication between components, and determines the best way to schedule workloads onto the different nodes.


  1. etcd: An open-source key-value data store developed by the CoreOS team, accessible to all nodes in the cluster. Kubernetes uses etcd to store the configuration data of the cluster, representing its overall state at any point in time: both the current state of the system and the desired state. If Kubernetes finds a difference between the current and desired states in etcd, it performs the necessary adjustments.

  2. Kube API server: The central management entity, which receives REST requests for modifications to services and serves as the front end of the control plane. This is the component the Kubernetes CLI, kubectl, talks to when it interacts with the cluster. All other components of the cluster make requests to the API server, and it updates the information in etcd.

  3. Scheduler: The process that actually assigns workloads to specific nodes in the cluster. It schedules pods onto nodes based on resource utilization, deciding where each one should run. The scheduler knows the total resources available on each node as well as the resources already allocated to existing workloads, and places pods accordingly.

  4. Controller manager: A background process that watches the shared state of the cluster through the API server and makes changes to move the current state towards the desired state. When there is any change to a service, the controller spots the change and starts working towards the new desired state.
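One concrete input the scheduler uses is the resource requests declared on each container. The fragment below (the values are illustrative) tells the scheduler to place the pod only on a node with at least 250m of CPU and 256Mi of memory unreserved:

```yaml
# Container spec fragment: "requests" guide scheduling,
# "limits" cap what the container may consume at runtime
resources:
  requests:
    cpu: "250m"        # a quarter of one CPU core
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```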


  1. Kubelet: The main contact point for each node with the cluster is a small service called the kubelet. The kubelet makes sure all containers on its node are running and healthy, and it is responsible for relaying information to and from the control plane services. Work is received in the form of a manifest that defines the workload and its operating parameters.

  2. Kube-proxy: The proxy allows network communication inside the cluster. It makes sure services are reachable by other components, forwards requests to the correct containers, and performs primitive load balancing.

  3. Container runtime: The component responsible for actually running the containers; the most common choice is Docker.

Wrapping Up.....

The developer codes up an application and packages it into a container using containerization software such as Docker. The container is placed into a pod, and a deployment keeps several replicas of that pod running. The replicas are exposed through a stable interface called a service. All of this runs on a collection of computers known as a cluster, and a single point of entry into the application is opened up using a tool such as an Ingress controller.

The Kubernetes control plane manages the pods, assigning them to different nodes within the cluster and automatically ensuring that the desired state of the deployment is maintained. If the application gains more users, the pod deployments can easily be replicated and scaled, with more nodes being assigned to the cluster as needed. The developer manages all of these processes by communicating with the control plane through the Kube API server, using kubectl commands.
Welcome to the world of Kubernetes!

Kubernetes isn’t just a new syntax - it’s a whole new way of thinking about distributed computation.

For more content like this, make sure to follow me here on dev and Twitter (@ElegberunDaniel).



