Sarmad Saleem

Originally published at scoutapm.com

Beginner's guide to Kubernetes ☸️

It's easy to get lost in today's continuously changing landscape of cloud native technologies. The learning curve from a beginner's perspective is quite steep, and without proper context it becomes increasingly difficult to sift through all the buzzwords.

If you have been developing software, chances are you may have heard of Kubernetes by now. Before we jump into what Kubernetes is, it's essential to familiarize ourselves with containerization and how it came about.

In this guide, we are going to paint a contextual picture of how deployments have evolved, what containerization promises, where Kubernetes fits into the picture, and the common misconceptions around it. We'll also cover the basic architecture of Kubernetes, its core concepts, and some examples. Our goal is to lower the barrier to entry and equip you with a mind map for navigating this landscape more confidently.

Use the links to skip ahead in the guide:

👉 Evolution of the deployment model 🥚
👉 Demystifying container orchestration 🔓
👉 What exactly is Kubernetes? ☸️
👉 Show me by example ⚽️
👉 Basic features 🐎
👉 Ecosystem 🔄
👉 Common questions ❓

🥚 Evolution of the deployment model

This evolution can be divided into three rough categories: traditional, virtualized, and containerized deployments. Let's briefly touch upon each to better understand how it unfolded.

0️⃣ Bare metal

Some of us are old enough to remember the archaic days when the most common way of deploying applications was on in-house physical servers. The cloud wasn't a thing yet, so organizations had to plan server capacity in advance to be able to budget for it. Ordering new servers was a slow, tedious task that took weeks of vendor management and negotiations. Once the shiny new servers arrived, they came with the overhead of setup, deployment, uptime, maintenance, security, and disaster recovery.

From a deployment perspective, there was no good way to define resource boundaries in this model. Multiple applications deployed on the same server would interfere with each other because of the lack of isolation. That forced applications onto dedicated servers, an expensive endeavour that made poor resource utilization the biggest inefficiency.

In those days, companies had to maintain their own in-house server rooms (many still do) and deal with prerequisites like air conditioning, uninterrupted power, and internet connectivity. Even with such high capital and operating expenses, they were limited in their ability to scale as demand increased; adding capacity to handle increased load meant installing new physical servers. "Write once, run everywhere" was a utopian dream; these were the days of "works on my machine".

1️⃣ Virtual machines

Enter virtualization. It adds a layer of abstraction on top of physical servers that allows multiple virtual machines to run on any given server. It enforces a level of isolation and security, allowing for better resource utilization, scalability, and reduced costs.

It allows us to run multiple apps, each in a dedicated virtual machine with complete isolation. If one goes down, it doesn't interfere with the others. Additionally, we can specify a resource budget for each; for example, allocate 40% of the physical server's resources to VM1 and 60% to VM2.

Okay, so this addresses isolation and resource utilization, but what about scaling with increased load? Spinning up a VM is way faster than adding a physical server. However, scaling VMs is still bound by the available hardware capacity.

This is where public cloud providers come into the picture. They streamline the logistics of buying, maintaining, running, and scaling servers in exchange for a rental fee, which means organizations don't have to plan for capacity beforehand. This brings down both the capital expense of buying servers and the operating expense of maintaining them significantly.

2️⃣ Containers

If virtual machines have already addressed isolation, resource utilization, and scaling, then why are we even talking about containers? Containers take it up a notch. You can think of them as mini virtual machines that, instead of packaging a full-fledged operating system, leverage the underlying host OS for most things. Container-based virtualization allows for higher application density and better utilization of server resources.

An important distinction between virtual machines and containers is that a VM virtualizes the underlying hardware, whereas a container virtualizes the underlying operating system. Both have their use cases; in fact, many container deployments use a VM as their host rather than running directly on bare metal.

The emergence of the Docker Engine accelerated the adoption of this technology. It has now become the de facto standard for building and sharing containerized apps, from the desktop to the cloud. The shift towards microservices as a superior approach to application development is another important factor that has fueled the rise of containerization.

🔓 Demystifying container orchestration

While containers by themselves are extremely useful, they can become quite challenging to deploy, manage, and scale across multiple hosts in different environments. Container orchestration is just a fancy term for streamlining this process.

As of today, there are several open-source and proprietary solutions out there for managing containers.

Open-source landscape

If we look at the open-source landscape, some notable options include:

  • Kubernetes
  • Docker Swarm
  • Apache Marathon on Mesos
  • Hashicorp Nomad
  • Titus by Netflix

Proprietary landscape

On the other hand, the proprietary landscape is mostly dominated by the major public cloud providers, each of which came up with its own home-grown solution for managing containers. Some notable mentions include:

  • Amazon Web Services (AWS)

    • Elastic Beanstalk
    • Elastic Container Service (ECS)
    • Fargate
  • Google Cloud Platform (GCP)

    • Cloud Run
    • Compute Engine
  • Microsoft Azure

    • Container Instances
    • Web Apps for Containers

Gold standard

Similar to how Docker became the de facto standard for containerization, the industry has crowned Kubernetes the ruler of the container orchestration landscape. That's why most major cloud providers have started to offer managed Kubernetes services as well. We'll learn more about them later, in the ecosystem section.

☸️ What exactly is Kubernetes?

Kubernetes is open-source software that has become the de facto standard for orchestrating containerized workloads in private, public, and hybrid cloud environments.

It was initially developed by engineers at Google, who distilled years of experience in running production workloads at scale into its design. It was open-sourced in 2014 and has since been maintained by the CNCF (Cloud Native Computing Foundation). It's often abbreviated as k8s, a numeronym: the "k" and "s" of "Kubernetes" with its 8 other letters in between.

Managing containers at scale is commonly described as quite challenging. Why is that? Running a single Docker container on your laptop may seem trivial (we'll see this in the example below), but doing the same for a large number of containers, across multiple hosts, in an automated fashion that ensures zero downtime isn't nearly as trivial.

Let's take the example of a Netflix-like video-on-demand platform consisting of 100+ microservices, resulting in 5000+ containers running atop 100+ VMs of varying sizes. Different teams are responsible for different microservices. They follow a continuous integration and continuous delivery (CI/CD) driven workflow and push to production multiple times a day. The expectation is for production workloads to always be available, to scale up and down automatically as demand changes, and to recover from failures when they occur.

In situations like these, the utility of container orchestration tools really shines. Tools like Kubernetes allow you to abstract away the underlying cluster of virtual or physical machines into one unified pool of resources. Typically, they expose an API through which you can specify how many containers you'd like to deploy for a given app and how they should behave under increased load. The API-first nature of these tools lets you automate deployment processes inside your CI pipeline, giving teams the ability to iterate quickly. Being able to manage this kind of complexity in a streamlined manner is one of the major reasons why tools like Kubernetes have gained such popularity.

Kubernetes Architecture

To understand Kubernetes' view of the world, we first need to familiarize ourselves with cluster architecture. A Kubernetes cluster is a group of physical or virtual machines divided into two high-level components: the control plane and the worker nodes. It's okay if some of the terminology mentioned below doesn't make much sense yet.

  • Control plane - Acts as the brain of the entire cluster: it is responsible for accepting instructions from users, health-checking all servers, deciding how to best schedule workloads, and orchestrating communication between components. The constituents of the control plane include:
    • kube-apiserver - Responsible for exposing the Kubernetes API. In other words, this is the gateway into Kubernetes.
    • etcd - A distributed, reliable key-value store that is used as the backing store for all cluster data.
    • kube-scheduler - Responsible for selecting a worker node for newly created pods (also known as scheduling).
    • kube-controller-manager - Responsible for running controller processes like the Node, Replication, and Endpoints controllers. These will start to make more sense after we discuss k8s objects.
    • cloud-controller-manager - Holds cloud-specific control logic.
  • Worker nodes - The machines responsible for accepting instructions from the control plane and running containerized workloads. Each node has the following sub-components:
    • kubelet - An agent that makes sure all containers are running in any given pod. We'll get to what that means in a bit.
    • kube-proxy - A network proxy that is used to implement the concept of a service. We'll get to what that means in a bit.
    • Container runtime - The software responsible for running containers. Kubernetes supports Docker, containerd, and rkt, to name a few.

The key takeaway here is that the control plane is the brain that accepts user instructions and figures out the best way to execute them, whereas the worker nodes are the machines that carry out those instructions and run the containerized workloads.
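
To make this split concrete, once you have a cluster to talk to (we'll set one up in the example section), kubectl can show both sides of it. A quick sketch, assuming a local Docker Desktop-style cluster where the control plane components run as pods in the kube-system namespace:

# List the machines in the cluster (on Docker Desktop, a single node)
kubectl get nodes -o wide

# On many clusters, control plane components like kube-apiserver,
# kube-scheduler, and etcd show up as pods in the kube-system namespace
kubectl get pods -n kube-system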

Kubernetes Objects

Now that we have some know-how of Kubernetes architecture, the next milestone in our journey is understanding the Kubernetes object model. Kubernetes has a few abstractions that make up the building blocks of any containerized workload.

We'll go over a few different types of objects available in Kubernetes that you are more likely to interact with:

  • Pod - The smallest deployable unit of computing in the Kubernetes hierarchy. It can contain one or more tightly coupled containers sharing an environment, volumes, and IP space. Users are generally discouraged from managing pods directly; instead, Kubernetes offers higher-level objects (Deployment, StatefulSet, and DaemonSet) to encapsulate that management. See the minimal manifest after this list.
  • Deployment - A high-level object designed to ease the life cycle management of replicated pods. Users describe a desired state in the Deployment object, and the deployment controller changes the actual state to match it. Generally, this is the object users interact with the most. It is best suited for stateless applications.
  • StatefulSet - You can think of it as a specialized deployment best suited for stateful applications like a relational database. It offers ordering and uniqueness guarantees.
  • DaemonSet - You can think of it as a specialized deployment for when you want your pods to run on every node (or a subset of nodes). It is best suited for cluster support services like log aggregation, security agents, etc.
  • Secret & ConfigMap - These objects allow users to store sensitive information and configuration, respectively. Either can then be exposed to certain apps, allowing for more streamlined configuration and secrets management.
  • Service - This object groups a set of pods together and makes them accessible through DNS within the cluster. Different types of services include NodePort, ClusterIP, and LoadBalancer.
  • Ingress - The Ingress object allows external access to a service in the cluster via an IP address or URL. Additionally, it can provide SSL termination and load balancing as well.
  • Namespace - This object is used to logically group resources inside a cluster.

Note: There are other objects like ReplicationController, ReplicaSet, Job, CronJob, etc. that we have deliberately skipped for simplicity's sake.
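
To make the hierarchy concrete, here is what the smallest building block looks like on its own. This is a minimal, hypothetical Pod manifest for illustration only; as noted above, you'd normally let a Deployment create and manage pods like this for you:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # hypothetical name, for illustration
spec:
  containers:
    - name: hello
      image: nginx:alpine    # any container image would do here
      ports:
        - containerPort: 80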

⚽️ Show me by example

Now that we have touched upon some of the most commonly used Kubernetes objects that act as the building blocks of containerized workloads, let's put them to work. For this example, we'll do the following:

  • Set up prerequisites like Docker, Kubernetes, and kubectl
  • Create a very simple hello world Node.js app
  • Containerize it using Docker and push it to a public Docker registry
  • Use some of the Kubernetes objects explained above to declare our containerized workload specification

0️⃣ Setup

This guide assumes that you are on macOS and have Docker Desktop installed and running. Docker Desktop ships with a standalone single-node Kubernetes cluster, which is an excellent choice for running Kubernetes locally these days. Additionally, you must have kubectl installed; it is a command-line tool that lets us run commands against Kubernetes clusters.

The easiest way to get up and running on a Mac is to use the Homebrew package manager, like so:

# Install Docker Desktop (on older Homebrew versions: brew cask install docker)
brew install --cask docker

# Install kubectl
brew install kubectl

Setup instructions and source code for this exercise can be found here.
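
Before moving on, it's worth confirming that kubectl can actually reach the local cluster. Assuming Kubernetes has been enabled in Docker Desktop's preferences, something like this should work:

# List available contexts; docker-desktop should be among them
kubectl config get-contexts

# Confirm the control plane is up and reachable
kubectl cluster-info

# Print client and server versions
kubectl version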

1️⃣ Sample hello world app

Here we have a very simple hello world app written in Node.js. It creates an HTTP server, listens on port 3000, and responds with "Hello World".

const http = require("http");

const hostname = "0.0.0.0";
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader("Content-Type", "text/plain");
  res.end("Hello World");
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});

2️⃣ Containerize sample app

To dockerize our app, we'll need to create a Dockerfile. It describes how to assemble the image. Here's what our Dockerfile looks like:

# Use node alpine as base image for small image size
FROM node:12-alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./
RUN npm install

# Bundle app source
COPY . .

# Expose port
EXPOSE 3000

# Launch container with this command
CMD [ "node", "app.js" ]
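
One detail worth calling out: the Dockerfile copies package*.json and runs npm install, so the project needs at least a minimal package.json even though this app has no external dependencies. A sketch of what it could look like (field values here are illustrative):

{
  "name": "hello-world-node",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}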

Let's use Docker CLI to build and test the image using the Dockerfile above:

# Build docker image
docker build -t sarmadsaleem/scout-apm:node-app .

# Run container based on docker image
docker run -p 3000:3000 -it --rm sarmadsaleem/scout-apm:node-app

# Verify functionality
curl http://localhost:3000

Now that we have verified that our container works fine locally, let's push the image to a public registry. For this example, we'll use a public repository on Docker Hub.

# Docker hub login
docker login --username sarmadsaleem

# Push image to Docker Hub repository
docker push sarmadsaleem/scout-apm:node-app

At this point, we have packaged our Node.js app as a Docker image and made it public. Anyone can pull it from our public repository and run it anywhere.

3️⃣ Define workload specification using Kubernetes objects

With our dockerized sample app ready, the only thing remaining is to declare the desired state of our workload using Kubernetes objects. We'll be dealing with Namespace, Deployment, and Service in this example.

Typically, Kubernetes manifests are declared in YAML files that describe the desired state, which is then passed to the Kubernetes API using kubectl.

apiVersion: v1
kind: Namespace
metadata:
  name: scout-apm
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
  namespace: scout-apm
spec:
  selector:
    matchLabels:
      app: scout-apm-node-app
  replicas: 3
  template:
    metadata:
      labels:
        app: scout-apm-node-app
    spec:
      containers:
        - name: node-app
          image: docker.io/sarmadsaleem/scout-apm:node-app
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: node-app-service
  namespace: scout-apm
  labels:
    app: scout-apm-node-app
spec:
  selector:
    app: scout-apm-node-app
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000

It's okay if some of the things in this manifest don't make sense yet. The key takeaway is that we have declared a specification for our containerized workload using Kubernetes objects. The Namespace object is just a wrapper to group things together. The Deployment object does the heavy lifting of creating pods (which hold the containers), maintaining the specified number of replicas, and managing their lifecycle. The Service object streamlines network access to all the pods.

Now that we have the manifest ready, let's select the correct context for kubectl and apply the manifest:

# Switch kube context
kubectl config use-context docker-desktop

# Apply kube manifests
kubectl apply -f path/to/app.yaml

That was fast! What just happened? Kubernetes accepted our declared manifest and set about executing it. In doing so, it created a namespace, a bunch of pods, a replica set, a deployment, and a service. We can verify all of them like so:

# Get state of kube objects
kubectl get pod,replicaset,deployment,service -n scout-apm

Does this mean our sample app is running? Yes! We should be able to reach it like so:

# Get service port
kubectl get service -n scout-apm

# Verify functionality
curl http://localhost:<service-port>

# Let's clean up after ourselves
kubectl delete -f path/to/app.yaml

In a more production-ready setup, you'd also have to deal with considerations like optimizing your Docker images, automating deployments, setting up health checks, securing the cluster, managing incoming traffic over SSL, and instrumenting your apps for observability. The goal of this simple exercise was to jump from theory into a practical playground where we can see Docker in action, learn how to interact with Kubernetes, and deploy a sample workload.
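
As a taste of what that looks like, health checks are typically declared as liveness and readiness probes on the container spec inside the deployment. A minimal sketch, assuming our node-app answers on / over port 3000 (the probe thresholds here are illustrative):

# Added under spec.template.spec.containers[0] in the manifest above
livenessProbe:            # restart the container if this starts failing
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:           # only route traffic to the pod once this passes
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 3
  periodSeconds: 5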

🐎 Basic features

Let's touch upon some of the major Kubernetes features:

  • Self-healing - One of the biggest selling points of Kubernetes is its self-healing capabilities, which make your applications more robust and resilient.
  • Horizontal scaling & load balancing - Kubernetes advertises itself as planet scale, and it provides multiple ways to scale applications up and down. These horizontal scaling capabilities stretch from the containers all the way to the underlying nodes (see the sketch after this list).
  • Automated rollouts and rollbacks - It provides the ability to progressively roll out changes while monitoring application health. If anything goes wrong, changes are rolled back automatically to the last working version.
  • Service discovery - Streamlines service discovery by giving pods their own IPs and consolidating them behind a service, so that a single DNS name can be used to reach all the pods and load can be balanced across them.
  • Secret and configuration management - In line with the Twelve-Factor App methodology, Kubernetes provides tools to externalize secrets and configuration. This means sensitive information and configuration can be updated without rebuilding your image, and the risk of embedding secrets in images is neutralized.
  • Storage orchestration - Provides ways to mount storage systems, whether local storage or something from a public cloud provider. This mounted storage is then made available to workloads via a unified API.
  • Batch execution - In addition to long-lived workloads, it also provides ways to manage short-lived batch workloads.
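
To give a taste of the scaling and rollback features, here is how they look against the deployment from our earlier example. A sketch: the autoscale command additionally assumes the cluster has a metrics server running to supply CPU usage data.

# Scale the example deployment to 5 replicas manually
kubectl scale deployment node-app-deployment --replicas=5 -n scout-apm

# Or let Kubernetes scale between 3 and 10 replicas based on CPU usage
kubectl autoscale deployment node-app-deployment --min=3 --max=10 --cpu-percent=80 -n scout-apm

# Watch a rollout progress, and roll back if something goes wrong
kubectl rollout status deployment/node-app-deployment -n scout-apm
kubectl rollout undo deployment/node-app-deployment -n scout-apm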

🔄 Kubernetes ecosystem

In the past few years, the Kubernetes ecosystem has grown exponentially. The popularity of Kubernetes has led to greater adoption, inspiring innovation in different verticals such as (but not limited to) the following:

  • Cluster provisioning - Managed Kubernetes services like EKS, GKE, and AKS are rather recent additions to the landscape. Most of these platforms provide API, CLI, and GUI interfaces to provision clusters. Before these, provisioning a new cluster and maintaining your own control plane involved non-trivial effort. Some of the popular tools to provision and bootstrap clusters include Kops, Kubeadm, Kubespray, and Rancher.
  • Managed control plane - As we have learned already, most cloud providers these days have a managed Kubernetes offering, meaning the provider is responsible for managing and maintaining the cluster, which reduces the maintenance and management overhead. However, this is very use-case specific; in some cases you'd rather self-manage for greater flexibility. Some of the managed Kubernetes offerings include EKS, GKE, and AKS.
  • Cluster management - The most common way to interact with and manage clusters is still kubectl, but this vertical has been evolving at a rapid pace. Helm has emerged as the go-to Kubernetes package manager; think of it as Homebrew for your cluster. To streamline the management of YAML templates, Kustomize is leading the charge and has already been integrated into kubectl.
  • Secrets management - Built-in Kubernetes secrets are base64-encoded, not encrypted (see the sketch after this list). Companies find themselves outgrowing the built-in functionality quite quickly and turn to more sophisticated solutions like Sealed Secrets, Vault, and cloud-managed secret stores (AWS Secrets Manager, Cloud KMS, Azure Key Vault, etc.).
  • Monitoring, logging & tracing - Kubernetes provides some basic features around monitoring, logging, and tracing, but as soon as you get into running multiple microservices, these features can fall short. A common monitoring stack in the industry today is Prometheus, Alertmanager, and Grafana. When it comes to log aggregation, the ELK (Elasticsearch, Logstash, Kibana) and EFK (Elasticsearch, Fluentd, Kibana) stacks have gained popularity. Lastly, for distributed tracing, Jaeger and Zipkin are the two open-source tools that seem to have the highest adoption.
  • Development toolkits - Okteto, Tilt, and Garden are some of the well-known development toolkits that help streamline the development workflow when working with multiple microservices in a cloud-native context.
  • Infrastructure as code - Infrastructure as code (IaC) is the management of infrastructure in a descriptive model. Tools like Terraform and Pulumi have become popular choices and support provisioning Kubernetes-related resources using code. The benefits of this strategy include version-controlled infrastructure, auditability, static analysis, and automation, to name a few.
  • Traffic management - Typically, Kubernetes uses ingress controllers to expose services to the outside world; this is also where SSL termination can happen. Some of the popular ingress controllers include NGINX, the AWS ALB Ingress Controller, Istio, Kong, and Traefik. Additionally, new constructs like the API gateway and service mesh have entered the ecosystem lately, and the boundary between managing ingress traffic (north to south) and internal traffic (east to west) has become increasingly blurry.
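
To see the base64 point from the secrets management item above for yourself, you can render a Secret manifest locally without touching the cluster. A small sketch (demo-secret is a hypothetical name):

# Generate a Secret manifest locally and inspect it; --dry-run=client
# means nothing is sent to the cluster
kubectl create secret generic demo-secret --from-literal=password=s3cr3t --dry-run=client -o yaml

# The value in the rendered manifest is just base64, not encryption
echo -n 's3cr3t' | base64   # prints czNjcjN0, exactly what appears in the manifest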

❓ Common questions

Let's answer some common questions and clear up some misconceptions around Kubernetes in this section.

Is Kubernetes free?

The open-source version of Kubernetes itself is free to download, build, extend, and use, so there are no costs associated with the software itself. Organizations typically run their Kubernetes clusters in public, private, hybrid, or multi-cloud environments, and in those cases they have to pay for the underlying resources.

Who created Kubernetes?

Kubernetes originated at Google and distills years of experience in running production workloads at scale. It was founded by Joe Beda, Brendan Burns, and Craig McLuckie, who were quickly joined by other Google engineers in their endeavor. It was later donated to the Cloud Native Computing Foundation (CNCF) and is now maintained by the foundation along with the open-source community under the Apache 2.0 license.

What is the difference between Docker and Kubernetes?

Docker is a container runtime meant to run on a single node, whereas Kubernetes is a container orchestration tool meant to run across a cluster of nodes. They are not opposing technologies; they complement one another. We have covered this topic in detail in one of our earlier blog posts here.

How do you upgrade Kubernetes?

Upgrading the Kubernetes version is common practice to keep up with the latest security patches, new features, and bug fixes. The process is typically dictated by the tool you used to provision the cluster. If it's a managed control plane, the cloud provider exposes an API to trigger the upgrade. If it's a self-managed control plane, bootstrapping tools like kops and kubeadm simplify this workflow.
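
As a rough sketch of what the self-managed path looks like with kubeadm (the exact steps vary by version, so treat this as an outline rather than a recipe; the version number and node name below are placeholders):

# Check which versions the control plane can be upgraded to
kubeadm upgrade plan

# Upgrade the control plane components to the chosen version
kubeadm upgrade apply v1.19.0

# Worker nodes are then drained, upgraded, and brought back one at a time
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>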

Is Kubernetes the best way to run containers in production today?

This can be a controversial one. Kubernetes is arguably the most feature-complete container orchestration tool, with a vibrant community and a buzzing ecosystem. Is it the best way to run containers in production today? That depends on your use case. In some cases, you can get by with a PaaS solution like Heroku. Alternatively, you may be able to leverage new-age serverless container services like Google Cloud Run or AWS Fargate. In cases where you want complete control and flexibility over your workloads, however, Kubernetes may well be the front runner among container orchestration tools.
