Kubernetes Architecture Overview

We start with a basic overview of the Kubernetes cluster architecture. We first look at the architecture at a high level, then drill down into each of its components to see what their roles and responsibilities are and how they are configured. Finally, you can go through a practice test where you examine an existing cluster and identify various details about these components.

We're going to use an analogy of ships to understand the architecture of Kubernetes.


The purpose of Kubernetes is to host your applications in the form of containers in an automated fashion so that you can easily deploy as many instances of your application as required and easily enable communication between different services within your application.

So there are many things involved that work together to make this possible.

So let's take a 10,000-foot look at the Kubernetes architecture. We have two kinds of ships in this example: cargo ships that do the actual work of carrying containers across the sea, and control ships that are responsible for monitoring and managing the cargo ships.


A Kubernetes cluster consists of a set of nodes, which may be physical or virtual machines, on-premises or in the cloud, that host applications in the form of containers.

These relate to the cargo ships in this analogy: the worker nodes in the cluster are the ships that carry the containers. But somebody needs to load the containers onto the ships, and not just load them: plan how to load them, identify the right ships, store information about the ships, monitor and track the location of containers on the ships, manage the whole loading process, etc. This is done by the control ships, which host different offices and departments, monitoring equipment, communication equipment, cranes for moving containers between ships, and so on.


The control ships relate to the master node in the Kubernetes cluster. The master node is responsible for managing the cluster: storing information about the different nodes, planning which containers go where, monitoring the nodes and the containers on them, and so on. The master node does all of this using a set of components known together as the control plane components.

We will look at each of these components now. There are many containers being loaded and unloaded from the ships on a daily basis, so you need to maintain information about the different ships: which container is on which ship, what time it was loaded, etc. In Kubernetes, all of this is stored in a highly available key-value store known as etcd. etcd is a database that stores information in a key-value format; we will look more into what an etcd cluster actually is, what data is stored in it, and how it stores that data in one of the upcoming lectures. When ships arrive, you load containers onto them using cranes.
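
To make this a little more concrete, here is a minimal sketch of peeking at the keys Kubernetes keeps in etcd. It assumes the third-party python-etcd3 package, direct access to the etcd client port (2379), and the client certificates generated for your cluster; the certificate paths are placeholders, not fixed values.

```python
import etcd3  # third-party package: python-etcd3

# Connect to an etcd member directly (the certificate paths below are
# examples; use whatever your cluster was set up with).
etcd = etcd3.client(
    host="127.0.0.1",
    port=2379,
    ca_cert="/etc/kubernetes/pki/etcd/ca.crt",
    cert_cert="/etc/kubernetes/pki/etcd/server.crt",
    cert_key="/etc/kubernetes/pki/etcd/server.key",
)

# Kubernetes stores cluster state under the /registry prefix,
# e.g. /registry/pods/<namespace>/<pod-name>. The values are binary
# (protobuf), so we only print the keys here.
for value, metadata in etcd.get_prefix("/registry/pods/"):
    print(metadata.key.decode())
```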


The cranes identify the containers that need to be placed on ships. A crane identifies the right ship based on its size, its capacity, the number of containers already on the ship, and any other conditions such as the destination of the ship, the type of containers it is allowed to carry, etc. The crane is the scheduler in a Kubernetes cluster: the Kube scheduler identifies the right node to place a container on, based on the container's resource requirements, the worker nodes' capacity, and any other policies or constraints, such as taints and tolerations or node affinity rules. We will look at these in much more detail, with examples and practice tests, later in this course; we have a whole section on scheduling alone.
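
As a rough illustration, here is a minimal sketch, using the official Kubernetes Python client, of a pod definition carrying the kinds of hints the scheduler looks at: resource requests and a toleration. The names used (web, nginx, the gpu taint key) are made up for the example.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. ~/.kube/config

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx",
                # The scheduler uses these requests to find a node with
                # enough free CPU and memory.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "128Mi"}
                ),
            )
        ],
        # A toleration allows this pod onto nodes tainted with key=gpu,
        # which the scheduler keeps other pods away from.
        tolerations=[
            client.V1Toleration(key="gpu", operator="Exists", effect="NoSchedule")
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```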


There are different offices in the dock that are assigned to special tasks or departments.

For example, the operations team takes care of ship handling, traffic control, etc. It deals with issues like damage to ships or the routes different ships take. The cargo team takes care of containers: when containers are damaged or destroyed, it makes sure new containers are made available. You have the services office that takes care of IT and communications between the different ships. Similarly, in Kubernetes, we have controllers that take care of different areas. The node controller takes care of nodes: it is responsible for onboarding new nodes to the cluster and handling situations where nodes become unavailable or get destroyed. And the replication controller ensures that the desired number of containers is running at all times in your replication group.
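
As a small illustration of that "desired number" idea, here is a sketch, using the Kubernetes Python client, that compares the desired replica count of each deployment in a namespace with how many replicas are actually ready; the controllers work to close any gap between the two. The default namespace is just an example.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Compare what we asked for (spec.replicas) with what the controllers
# have actually achieved so far (status.ready_replicas).
for dep in apps.list_namespaced_deployment(namespace="default").items:
    desired = dep.spec.replicas
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")
```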


So we have seen different components: the different offices, the different ships, the data store, the cranes, etc. But how do they communicate with each other? How does one office reach another, and who manages them all at a high level? The Kube API server is the primary management component of Kubernetes.


The Kube API server is responsible for orchestrating all operations within the cluster. It exposes the Kubernetes API, which is used by external users to perform management operations on the cluster, by the various controllers to monitor the state of the cluster and make changes as required, and by the worker nodes to communicate with the server.
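
Because everything goes through the Kubernetes API, you can talk to the Kube API server directly over HTTP. A minimal sketch, assuming the third-party requests package, a working kubectl setup, and `kubectl proxy` running in another terminal (which exposes the API on localhost:8001 by default and handles authentication for you):

```python
import requests

# `kubectl proxy` forwards local requests to the Kube API server,
# so we can hit the REST API without handling credentials ourselves.
base = "http://127.0.0.1:8001"

namespaces = requests.get(f"{base}/api/v1/namespaces").json()
for ns in namespaces["items"]:
    print(ns["metadata"]["name"])
```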

Now, we're working with containers here; containers are everywhere, so we need everything to be container compatible. Our applications are in the form of containers. The different components that form the management system on the master node can be hosted in the form of containers. The DNS service and the networking solution can all be deployed in the form of containers. So we need software that can run containers, and that's the container runtime engine, a popular one being Docker. We need Docker or a supported equivalent installed on all the nodes in the cluster, including the master nodes if you wish to host the control plane components as containers. It doesn't always have to be Docker: Kubernetes supports other runtime engines as well, like containerd or rkt.
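
If you are curious which runtime your nodes are actually using, the node status reports it. A minimal sketch with the Kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()

# Each node reports its runtime (e.g. containerd://1.7.x or docker://...)
# in its status, as populated by the Kubelet on that node.
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.container_runtime_version)
```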


Let us now turn our focus to the cargo ships. Every ship has a captain, and the captain is responsible for managing all activities on the ship. The captain liaises with the master ships: letting the master ship know that the ship is interested in joining the group, receiving information about the containers to be loaded onto the ship, loading the appropriate containers as required, and sending reports back to the master about the status of the ship and the containers on it. The captain of the ship is the Kubelet in Kubernetes. The Kubelet is an agent that runs on each node in the cluster. It listens for instructions from the Kube API server and deploys or destroys containers on the node as required. The Kube API server periodically fetches status reports from the Kubelet to monitor the status of the nodes and the containers on them.
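
Those status reports end up in the Kube API server, so you can read back what each Kubelet has reported. A minimal sketch with the Python client, listing every pod, the node whose Kubelet runs it, and its reported phase:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# pod.status is what the Kubelet on pod.spec.node_name has reported
# back to the Kube API server.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
          f"on {pod.spec.node_name}: {pod.status.phase}")
```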


The Kubelet is like the captain on the ship that manages the containers on the ship. But the applications running on the worker nodes need to be able to communicate with each other. For example, you might have a web server running in a container on one node and a database server running in another container on another node. How would the web server reach the database server on the other node? Communication between worker nodes is enabled by another component that runs on the worker nodes, known as the Kube proxy service. The Kube proxy service ensures that the necessary rules are in place on the worker nodes to allow the containers running on them to reach each other.
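
In practice, the web server reaches the database through a Service, and the Kube proxy on each node programs the forwarding rules for that Service. Here is a minimal sketch with the Python client; the db name, labels, and port 5432 are example values:

```python
from kubernetes import client, config

config.load_kube_config()

# A Service gives the database pods a stable name and virtual IP;
# the Kube proxy on every node installs the forwarding rules for it.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1ServiceSpec(
        selector={"app": "db"},
        ports=[client.V1ServicePort(port=5432, target_port=5432)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)

# The web server can now reach the database at db:5432 (or the full
# DNS name db.default.svc.cluster.local) from any node in the cluster.
```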


So, to summarise, we have the master and worker nodes. On the master, we have the etcd cluster, which stores information about the cluster. We have the Kube scheduler, which is responsible for scheduling applications, or containers, on nodes. We have different controllers that take care of different functions, like the node controller, replication controller, etc. And we have the Kube API server, which is responsible for orchestrating all operations within the cluster.

On the worker nodes, we have the Kubelet, which listens for instructions from the Kube API server and manages containers, and the Kube proxy, which helps enable communication between services within the cluster. So that's a high-level overview of the various components.

That's it for now, and I will see you in our upcoming articles, which cover Kubernetes and other related DevOps tools in more depth.

Take the courses below to master Kubernetes:

Kubernetes for absolute beginners

Kubernetes- Absolute beginner to expert

Certified Kubernetes Application Developer Certification Course (CKAD)

Certified Kubernetes Administrator with Practice Tests (CKA)
