DEV Community

Prashant Ghildiyal


Understanding Kubernetes Architecture

Kubernetes is becoming the standard for deploying and managing software in the cloud because of the wide range of features it provides, such as automated rollouts and rollbacks of deployments, storage orchestration, automatic bin packing, self-healing, management of Secrets and configuration, and many more. This blog post gives you a high-level view of the Kubernetes architecture and an in-depth explanation of the node architecture and its components.

Overview of Kubernetes Architecture

A Kubernetes cluster consists of a set of worker machines known as nodes that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the pods that are the components of the application workload.

A Kubernetes cluster consists of at least one master (control plane) node and one or more worker nodes.


The master server is responsible for exposing the application programming interface (API), scheduling deployments, and managing the overall Kubernetes cluster. It runs the following components:

Kubernetes Master and Node Components

API Server: It is the front end of the control plane; all administrative tasks within the master node are performed through the API server, and the other components communicate with the cluster state through it.

scheduler: It assigns newly created pods to worker nodes and keeps track of the resource usage of each worker node to make those placement decisions.

controller-manager: It runs control loops that watch the desired state of the objects they manage through the API server. If the current state of an object does not match its desired state, the control loop takes corrective steps to bring the current state in line with the desired state.
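
The reconcile pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not the real controller-manager code: the function names and the string "actions" are made up for the example.

```python
# Minimal sketch of a control loop: compare desired state with current
# state and return the corrective actions needed to converge them.

def reconcile(desired_replicas, current_pods):
    """Return the actions needed to bring current state to desired state."""
    actions = []
    diff = desired_replicas - len(current_pods)
    if diff > 0:
        actions += ["create-pod"] * diff        # too few pods: scale up
    elif diff < 0:
        actions += ["delete-pod"] * (-diff)     # too many pods: scale down
    return actions                              # empty list: states already match

# Example: a ReplicaSet wants 3 replicas but only 1 pod is running.
print(reconcile(3, ["pod-a"]))           # ['create-pod', 'create-pod']
print(reconcile(1, ["pod-a", "pod-b"]))  # ['delete-pod']
print(reconcile(2, ["pod-a", "pod-b"]))  # []
```

The key property is that the loop is level-triggered: it compares states on every pass, so a missed event does not leave the cluster permanently out of sync.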

etcd: etcd is a distributed key-value store that stores the cluster state and can also be used to store configuration details such as subnets, ConfigMaps, Secrets, etc. It can be part of the Kubernetes Master, or it can be configured externally.
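
To make the key-value model concrete, here is a toy dict-backed store illustrating how cluster state can live under hierarchical keys and be read back with prefix queries, much like an etcd range request. The `/registry/...` key layout and JSON values below are illustrative, not etcd's exact schema.

```python
# Toy key-value store: hierarchical keys mapped to serialized objects.
store = {}

def put(key, value):
    store[key] = value

def get_prefix(prefix):
    """Return all entries under a key prefix, like an etcd range query."""
    return {k: v for k, v in store.items() if k.startswith(prefix)}

put("/registry/pods/default/web-1", '{"phase": "Running"}')
put("/registry/pods/default/web-2", '{"phase": "Pending"}')
put("/registry/configmaps/default/app-config", '{"LOG_LEVEL": "debug"}')

# Listing all pods in the cluster is a single prefix query.
print(sorted(get_prefix("/registry/pods/")))
# ['/registry/pods/default/web-1', '/registry/pods/default/web-2']
```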

Basic Kubernetes Architecture Representation


A worker node is a physical or virtual server that hosts the pods responsible for running the applications, and it is managed by the master. The necessary components of a node are:

Kubelet: It is an agent that runs on every worker node and communicates with the master node. It receives the Pod specifications through the API server, runs the containers associated with each Pod, and ensures that the containers running within the pods are in a healthy state.

Kubelet also periodically monitors the state of the pods and, in case of a problem, launches a new instance in its place. Kubelet also runs an internal HTTP server exposing a read-only view on port 10255. For example: /healthz is a health-check endpoint, /pods lists the running pods, and /spec returns the specifications of the machine the kubelet is running on.
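
The "monitor and relaunch" behavior can be sketched as a small sync step. This is a simplification: the real kubelet consults the container runtime and applies the pod's restart policy, whereas here pod and runtime state are just dicts with made-up field names.

```python
# Sketch of the kubelet's per-pod sync: restart anything not running.

def sync_pod(pod, runtime_state):
    """Restart any container of the pod that is not currently running."""
    restarted = []
    for container in pod["containers"]:
        if runtime_state.get(container) != "running":
            runtime_state[container] = "running"   # relaunch the container
            restarted.append(container)
    return restarted

pod = {"name": "web", "containers": ["app", "sidecar"]}
state = {"app": "running", "sidecar": "exited"}
print(sync_pod(pod, state))   # ['sidecar'] — only the failed container restarts
```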

Kube-Proxy: Kube-proxy runs on each node to handle the node's host subnetting and to make Services reachable by external parties. For each Service endpoint, kube-proxy sets up routes so that traffic can reach it.

It serves as a network proxy and a load balancer for the pods running on that particular node. It is an important networking component of Kubernetes and ensures efficient communication across all elements of the cluster.

cAdvisor: Originally created by Google, cAdvisor is now integrated into the kubelet. It collects metrics such as CPU, memory, file, and network usage for all running containers. This data is exposed through the kubelet and used for tasks like scheduling, horizontal pod autoscaling, and managing container resource limits.
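
To show why per-container metrics matter for node-level decisions, here is a small aggregation sketch. The field names (`cpu_millicores`, `memory_mb`) are simplified for the example and are not cAdvisor's actual schema.

```python
# Per-container samples, the kind of data cAdvisor collects.
container_metrics = [
    {"name": "app",     "cpu_millicores": 250, "memory_mb": 512},
    {"name": "sidecar", "cpu_millicores": 50,  "memory_mb": 64},
]

def node_usage(metrics):
    """Roll container metrics up into node-level totals, as a scheduler
    or autoscaler would consume them."""
    return {
        "cpu_millicores": sum(m["cpu_millicores"] for m in metrics),
        "memory_mb": sum(m["memory_mb"] for m in metrics),
    }

print(node_usage(container_metrics))
# {'cpu_millicores': 300, 'memory_mb': 576}
```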

Container Runtime: The container runtime is the software responsible for pulling images from public or private registries and running containers based on those images. The kubelet interacts directly with the container runtime to start, stop, or delete containers. Kubernetes supports several runtimes, such as Docker, containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
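
The point of the CRI is that the kubelet only depends on an abstract interface, so any compliant runtime can sit behind it. The class and method names below are illustrative stand-ins, not the real CRI gRPC API.

```python
# Sketch of the kubelet/runtime split: the kubelet drives an abstract
# runtime interface without caring which implementation is behind it.

class ContainerRuntime:
    def __init__(self):
        self.containers = {}

    def pull_image(self, image):
        return f"pulled {image}"          # fetch from a registry

    def start_container(self, name, image):
        self.pull_image(image)
        self.containers[name] = "running"

    def stop_container(self, name):
        self.containers[name] = "exited"

    def remove_container(self, name):
        del self.containers[name]

# The "kubelet" side: start, inspect, and stop a container.
runtime = ContainerRuntime()
runtime.start_container("web", "nginx:1.25")
print(runtime.containers)     # {'web': 'running'}
runtime.stop_container("web")
print(runtime.containers)     # {'web': 'exited'}
```

Swapping Docker for containerd or CRI-O changes the implementation behind this interface, not the kubelet's side of the conversation.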

About Me:

I am Prashant, and I have been thoroughly enjoying working in the DevOps space for years. My main areas of work have been DevOps, Kubernetes orchestration, and CI/CD. Over time I came to understand that there is no consolidated delivery workflow for Kubernetes: there are tools that help us achieve a specific task, but no tool that encompasses the whole workflow efficiently. So, with some of my passionate friends in tech, I co-founded Devtron Labs to solve this problem. Devtron is a completely community-led, open-source software delivery workflow for Kubernetes that efficiently manages security, cost, and stability.

devtron-labs / devtron

Software Delivery Workflow For Kubernetes

Devtron is an open source software delivery workflow for kubernetes written in go
Explore documentation »

Website · Blog · Join Discord · Twitter



💡 Why Devtron?

It is designed as a self-serve platform for operationalizing and maintaining applications (AppOps) on Kubernetes in a developer-friendly way

🎉 Features

Zero code software delivery workflow
  • Workflow which understands the domain of Kubernetes, testing, CD, and SecOps so that you don't have to write scripts
  • Reusable and composable components so that workflows are easy to construct and reason through
Multi cloud deployment
  • Deploy to multiple Kubernetes clusters
Easy dev-sec-ops integration
  • Multi-level security policies at the global, cluster, environment, and application levels for efficient hierarchical policy management
  • Behavior-driven security policy
  • Define policies and exceptions for Kubernetes resources
  • Define policies for events for faster resolution
Application debugging dashboard
  • One place…
