Rolf Streefkerk

Originally published at rolfstreefkerk.com

Kubernetes Foundations and Cluster Setup with K3s

Preface

TL;DR: to get right into the article, skip ahead to the Introduction.

In my own personal Cloud journey, Kubernetes never really played much of a role up until very recently.

My interest was in public cloud providers such as AWS, for their accessibility and for the managed service offerings that significantly increase productivity.

But when I actually started to run production workloads on AWS, I noticed the significant cost involved in fairly small deployments.

As an example, we had an Amazon merchandise app called Trademerch.io that cost us around 130 USD per month to run, both when we started and when we finished our rework about 5 years ago (an ElasticSearch cluster with 1 node, 2 RDS databases, and 2 EC2 servers, one of which ran a recurring cron job). Fast forward to today: with a virtually unchanged architecture, we were still paying 130 USD per month for the same AWS services, sizing, and usage patterns.

The cost aspect of the public cloud is nothing new, but it became painfully obvious that AWS is not there to return the favor; it primarily seeks to optimize its own revenue (although they claim otherwise!). This experience has left me jaded about public cloud providers in general.

How can this be done differently?

Enter Kubernetes and the "deploy where you want" paradigm. Once I finally started looking into Kubernetes about a month ago, I noticed immediately that it's essentially a Platform as a Service (PaaS) that you fully control. However, most deployment options are NOT cheap (again AWS, but also Azure and Google itself).

Most public clouds will charge around 75 USD per month just to run the control plane (the "controller" of the Kubernetes cluster), and then you still need to add Nodes to run your workloads (virtual machines such as EC2). Luckily there are alternatives out there, such as DigitalOcean and Oracle, that let you run Kubernetes much more cost effectively.

Next to this, you have the option to self-host anywhere you like, which negates the vendor lock-in issues of traditional public cloud managed services. This applies particularly to serverless offerings, since they are custom-built for each provider's own cloud.

Now I'll admit, the cost of entry with serverless can be 0 USD; when your app has no users or very few users, it's possible to run at or near zero cost. But as soon as you outgrow the "Free Tier", costs start to accumulate, and migrating out is a more time-consuming affair.

Kubernetes offers a way to host your applications and infrastructure in a way that is fully in your control, and it enables you to scale to virtually millions of containers (Google, where Kubernetes originated, runs at this scale; regular people will of course never come close!).

My reasoning is: if I can control the infrastructure stack, I can learn to scale it to a sufficient size and never have to migrate out of the stack. That gives me transparency and clarity on the future roadmap. Granted, I will still use serverless where it makes sense.

Currently, for my insightprose.com app, I use Cloudflare to host my static website and some of their serverless function capability to run forms. But much of the backend will run on DigitalOcean, using a 4-6 node Kubernetes cluster that will run everything needed, from monitoring to databases and APIs.

I see Kubernetes as an investment for that reason, and because it has proven to be extremely successful at what it aims to provide. The downside is the up-front investment in time and money; for each person, that cost-benefit analysis may turn out differently.

For me it’s a resounding yes to invest in this technology, and to learn how to best use it for my benefit.

I hope this series of articles leaves you with enthusiasm to learn and use Kubernetes for your own projects and products, or perhaps even aids you in your current work!

A humble note: since I'm new to Kubernetes, I expect to make mistakes.

If you spot any, or have advice to improve the content, I'd be very appreciative if you would contact me via Twitter or via my contact form.

Thanks for your support, and enjoy your learning journey through Kubernetes!

Introduction

Kubernetes was first introduced in 2014 by Google and has since become the de facto standard for large-scale container orchestration, powering everything from small startups to conglomerates like Google itself.

Kubernetes has risen to prominence because of its ability to deploy, scale, and manage complex applications across multiple containers and hosts. Since 2014, applications have grown considerably more complex, and with that growth, new paradigms such as the microservice architecture have gained in popularity.

In the most recent CNCF Annual Survey, from 2023 [1], Kubernetes was still seeing year-over-year growth, with a dramatic increase in production use from 58% of total respondents in 2022 to 66% in 2023.

Kubernetes offers the following main features:

  1. Automated deployment and scaling: Easily scale your application up or down based on demand.
  2. Self-healing capabilities: Automatically restarts failed containers and replaces or reschedules containers when nodes die.
  3. Service discovery and load balancing: Kubernetes can expose a container using the DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic.
  4. Storage orchestration: Automatically mount the storage system of your choice, whether from local storage, public cloud providers, or network storage systems.
  5. Secret and configuration management: Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

Real-world use case: Airbnb's Kubernetes Journey

Airbnb, the popular online marketplace for lodging and tourism experiences, provides an excellent example of Kubernetes in action. As their platform grew, they faced challenges with their monolithic architecture. They needed a system that could handle their rapidly increasing scale while allowing for more frequent and reliable deployments.

Airbnb adopted Kubernetes [2] to containerize their applications and move towards a microservices architecture. This transition allowed them to:

  1. Increase deployment frequency: deployments grew exponentially, all the way up to 125,000 production deployments per year.
  2. Increase resource efficiency: scheduling improvements reduced wasted compute resources.
  3. Enhance reliability and fault tolerance.
  4. Improve configuration management: versioning, refactoring, and automatically updating configuration across hundreds of services dramatically improved efficiency.
  5. Increase developer productivity: standardization and automation reduced repetitive tasks.

While adopting Kubernetes required investment in tooling and processes for Airbnb, it provided key improvements across the board from development experience to standardization and increased deployment efficiency.

In this article, we'll be using k3s 1.30, a lightweight Kubernetes distribution that's perfect for learning, development, and edge computing. K3s provides a fully compliant Kubernetes distribution with a smaller footprint, making it easier to get started and experiment with Kubernetes concepts.

Understanding Containers: A Quick Recap

Before we dive deep into Kubernetes, let's revisit the concept of containers, the building blocks of any Kubernetes system.

Containers are lightweight, standalone, executable packages that include everything needed to run software. This includes the code, runtime, system tools, libraries, and settings. Containers are isolated from one another and the underlying infrastructure, ensuring consistency across environments.

The key benefits of containers include:

  1. Consistency across environments: Containers run the same regardless of where they're deployed, eliminating the "it works on my machine" problem.
  2. Isolation and resource efficiency: Containers share the host OS kernel but are isolated from each other. This allows for efficient resource utilization while maintaining security.
  3. Quick startup and scalability: Containers can start up in seconds and are easy to replicate, making them ideal for scalable applications.
  4. Version control and component reuse: Container images can be versioned, shared, and reused, promoting better development practices and code reuse.

While Docker popularized containers, Kubernetes works with various container runtimes. In k3s, containerd is used as the default container runtime.

📌 Deep Dive for Intermediate Users:
If you're familiar with containers, you might be interested in the Open Container Initiative (OCI) standards. These standards ensure interoperability between different container technologies. K3s uses containerd, which is OCI-compliant, allowing it to work with containers created by Docker and other OCI-compliant tools [3].

Kubernetes Architecture: The Blueprint of Orchestration

Let’s start with an overview of the Kubernetes architecture to better understand how the major components interact:

[Diagram: Kubernetes architecture overview]

The developer or user accesses Kubernetes, and everything running on it, via the command-line tool kubectl, which talks directly to the cluster's API Server.

Control Plane Components

First off, we have the major components of the control plane, which is essentially the backbone of the cluster, making global decisions around scheduling and responding to cluster events.

  1. API Server: exposes the API of the control plane. It can be scaled horizontally (by adding more instances) with traffic balanced between the instances.

  2. etcd: highly available key-value storage for all cluster data. On K3s, the default datastore is a SQLite database instead.

  3. Scheduler: schedules newly created Pods that have no assigned Node. It takes into account individual and collective resource requirements; hardware, software, and policy constraints; affinity and anti-affinity specifications; data locality; and more.

  4. Controller Manager: runs the controller processes, of which there are various kinds. A few examples:

    • Node controller: monitors and responds to nodes going down.
    • Job controller: watches for Job objects (one-off tasks) and creates Pods to run those tasks to completion.
  5. Cloud Controller Manager (not depicted, and optional): runs controllers that are specific to the cloud provider, bundling many different controller types into a single binary, just like the "regular" Controller Manager.

Node Components

Next we have the Nodes, the workers of the cluster, which most of the time will be virtual servers, for instance EC2 instances on AWS or Droplets on DigitalOcean.

  1. Kubelet: the kubelet is the "node agent" that runs on each Node; it registers the node with the API Server and ensures the containers described in PodSpecs are running and healthy. It is essentially the interface between the control plane and the node, providing two key services:

    1. Node administration: the kubelet monitors the node, procures resources, and sets up networking to keep the node operational.
    2. Pod execution: the kubelet coordinates pod operations on its node, such as starting containers, reporting status, and keeping pod performance in line with the desired state of the overall Kubernetes cluster.
  2. Container Runtime: in k3s the default runtime is containerd, an open-source daemon that manages the container life cycle via API requests. The container runtime deals with:

    • verifying and loading container images
    • monitoring system resources
    • isolating and allocating resources
    • and the container life cycle.
  3. Kube Proxy: kube-proxy is a network proxy running on each node in the cluster that implements part of the Service concept on the networking side. Specifically, it maintains networking rules that allow or disallow network communication to your pods, from inside or outside the cluster. Typically, it uses iptables for traffic routing.

📌 Deep Dive for Intermediate Users:
K3s simplifies the control plane by combining the API Server, Controller Manager, and Scheduler into a single binary. It also replaces etcd with SQLite as the default data store, though etcd is still supported. This design allows k3s to be lighter and faster to set up, making it ideal for edge computing and IoT scenarios [4].

Core Kubernetes Concepts: The Building Blocks

We've already used the terms Pods and Nodes; let's explore these concepts a bit further and understand how they interact.

[Diagram: Kubernetes building blocks]

Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

Key characteristics of Pods:

  • A Pod can contain one or more containers.
  • Containers in a Pod share the same network namespace, meaning they can communicate with each other using localhost.
  • Containers in a Pod can also share storage volumes.
  • Pods are ephemeral by design. They can be created, destroyed, and recreated as needed.
  • Each Pod gets its own IP address in the cluster.

Example use cases for multi-container Pods:

  • Sidecar container for logging (see the sketch after this list)
  • Initialization container that sets up the environment before the main container starts
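
As an illustrative sketch of the first use case (the names, images, and paths here are hypothetical choices, not from this article's setup), a two-container Pod with a logging sidecar might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
  - name: shared-logs          # emptyDir lives exactly as long as the Pod
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.26.1
    volumeMounts:
    - name: shared-logs        # nginx writes its logs here
      mountPath: /var/log/nginx
  - name: log-tailer           # the sidecar: reads what the main container writes
    image: busybox:1.36
    command: ["sh", "-c", "touch /logs/access.log; tail -f /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs

Because both containers live in the same Pod, they share the volume (and could reach each other over localhost), so the sidecar can tail the log file without any network hop.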

📌 Deep Dive for Intermediate Users:
Pods are usually created through workload management resources rather than directly. That means Pods are defined as part of a Deployment, StatefulSet, or Job specification. More on this in the coming articles of this series.

Nodes

Nodes are where a cluster's workloads run: workloads are placed inside Pods, which run on Nodes. Nodes can be virtual or physical machines.

Key points about Nodes:

  • Each Node runs a Kubelet, a container runtime, and a Kube Proxy.
  • Nodes can be added or removed from the cluster to scale your infrastructure.
  • Kubernetes supports heterogeneous clusters, meaning you can mix and match different types of nodes.

Clusters

A Kubernetes cluster is the complete set of Nodes and the Control Plane managing them. It's the entire ecosystem where your applications live.

Important aspects of Clusters:

  • Clusters can span multiple physical or virtual machines.
  • They provide redundancy and high availability for your applications.
  • You can run multiple clusters for different environments (dev, staging, production) or to separate concerns.

Namespaces

Namespaces provide a way to divide cluster resources between multiple users or projects. They're like virtual clusters within a physical cluster.

Benefits of using Namespaces:

  • They provide a scope for names, allowing you to use the same resource names in different namespaces.
  • They're a great way to divide resources between different teams or projects.
  • You can set resource quotas for each namespace, ensuring fair resource allocation (see the sketch below).
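
As a minimal sketch of that last point (the namespace and quota names here are hypothetical), a namespace plus a ResourceQuota can be declared together:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # the quota applies to this namespace only
spec:
  hard:
    pods: "10"             # at most 10 Pods in the namespace
    requests.cpu: "2"      # total CPU requests capped at 2 cores
    requests.memory: 4Gi   # total memory requests capped at 4 GiB

Apply it with kubectl apply -f namespace-quota.yaml; afterwards, kubectl describe quota -n team-a shows current usage against each limit.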

📌 Deep Dive for Intermediate Users:
Kubernetes starts with 4 initial namespaces [5]. Two of the more notable ones are:
  • default: lets you start using your new cluster without first creating a namespace.
  • kube-system: the namespace for objects created by the Kubernetes system.
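
You can list them on any fresh cluster. On k3s, the output should resemble the following (the AGE values will of course differ):

k3s kubectl get namespaces

NAME              STATUS   AGE
default           Active   17h
kube-system       Active   17h
kube-public       Active   17h
kube-node-lease   Active   17h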

Hands-on: Setting Up k3s

We should have a pretty good understanding of what Kubernetes is and what components it utilizes to create an environment that can run applications and manage them effectively.

Let's go ahead and set up our own cluster using k3s 1.30.

Installing k3s

The installation is very simple.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

This command downloads the latest k3s binary and installs it as a service.

By default, Traefik is included in the installation of k3s; the --disable traefik flag above disables it. We will install Traefik later in this series, so you can ignore it for now.

Copy the kubeconfig file to your home directory; this avoids having to run sudo for every kubectl command:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config    # copy the k3s-generated kubeconfig
sudo chown $USER:$USER ~/.kube/config               # take ownership so sudo isn't needed
chmod 600 ~/.kube/config                            # keep cluster credentials private

Ensure the configuration we just copied is loaded in every new shell:

echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc

Starting Your Kubernetes Cluster

One of the advantages of k3s is that it starts automatically after installation. You can verify that it's running with:

k3s kubectl get nodes

You should see output similar to:

NAME                STATUS   ROLES                  AGE   VERSION
insight-prose-dev   Ready    control-plane,master   17h   v1.30.3+k3s1

This output shows that you have a single-node cluster up and running, with your machine acting as both the control plane and a worker node.

To show all the Pods currently in the cluster:

k3s kubectl get pods -A
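
On a fresh k3s install, the output should list the bundled system Pods in the kube-system namespace, typically coredns, local-path-provisioner, and metrics-server. The hash suffixes below are placeholders; yours will differ:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-xxxxxxxxxx-xxxxx                  1/1     Running   0          17h
kube-system   local-path-provisioner-xxxxxxxxxx-xxxxx   1/1     Running   0          17h
kube-system   metrics-server-xxxxxxxxxx-xxxxx           1/1     Running   0          17h

Traefik is absent because we disabled it during installation.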

📌 Deep Dive for Intermediate Users:
K3s uses containerd as its container runtime, which is lighter than Docker. If you're used to Docker commands, you might need to use crictl for debugging container issues in k3s. The crictl command-line tool is included with k3s [6].
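
The basic crictl subcommands mirror familiar Docker ones; a quick sketch (run on the k3s host, container IDs will differ):

sudo k3s crictl ps          # list running containers (like `docker ps`)
sudo k3s crictl images      # list pulled images (like `docker images`)
sudo k3s crictl logs <id>   # show a container's logs, using an ID from `ps`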

Your First Kubernetes Deployment

With our cluster up and running, let's deploy a simple web application. We'll use Nginx as our example. The following diagram illustrates the deployment process:

[Diagram: Kubernetes deployment process]

The above diagram shows which control plane services (yellow) are involved in creating a new Deployment and how the process is sequenced.

Creating a Deployment YAML

Create a file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.26.1
        ports:
        - containerPort: 80

We’re deploying the following:

  • apiVersion, kind, and metadata are required fields that describe the resource and its API version.
  • spec describes the desired state for the Deployment.
  • replicas: 3 tells Kubernetes to run three copies of this application.
  • selector defines how the Deployment finds which Pods to manage; it must match the labels in the Pod template. Avoid overlapping selectors between controllers in the same namespace, otherwise they may fight over the same Pods.
  • template describes the Pods that will be created. In this case, each Pod will run one container using the nginx:1.26.1 image.
  • containerPort: 80 declares the port the container listens on. This should look very familiar to those coming from a Docker background.
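
As a side note, if you'd rather not write manifests from scratch, kubectl can scaffold a very similar Deployment for you (the generated YAML will differ slightly in field ordering and defaults):

kubectl create deployment nginx-deployment \
  --image=nginx:1.26.1 --replicas=3 --port=80 \
  --dry-run=client -o yaml > nginx-deployment-generated.yaml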

Deploying the Application

Apply the deployment to your cluster:

kubectl apply -f nginx-deployment.yaml

This command tells Kubernetes to create the resources described in the YAML file.

🚨 Troubleshooting Tip:
If you see an error like "The connection to the server localhost:8080 was refused", make sure you've set up your kubeconfig correctly as described in the previous section. You might need to restart your terminal or run export KUBECONFIG=~/.kube/config.

Exposing the Application

To access the Nginx server from outside the cluster, we need to expose it as a Service:

kubectl expose deployment nginx-deployment --type=LoadBalancer --port=80

This creates a Service that exposes our Nginx deployment. The --type=LoadBalancer flag tells Kubernetes to provide an external IP to access the Service.

To see the details of the Service:

kubectl get services nginx-deployment

In a cloud environment, this would typically provide an external IP. In k3s running locally, you might see <pending> for the external IP. You can still access the service using the cluster IP or node port.
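
Either way, a quick sanity check that the Service responds is to port-forward it to your machine (the local port 8080 is an arbitrary choice):

kubectl port-forward service/nginx-deployment 8080:80 &
curl http://localhost:8080    # should return the Nginx welcome page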

📌 Deep Dive for Intermediate Users:
K3s ships with the ServiceLB load balancer out of the box, which allows you to use LoadBalancer Services without additional configuration. Many other Kubernetes installations require a load balancer controller to be installed first; public cloud vendors typically offer their own instead [7].

Practical Exercise: Scaling and Updating

Now that we have our application running, let's try scaling it and updating to a new version.

Scaling the Deployment

To scale our deployment to 5 replicas:

kubectl scale deployment nginx-deployment --replicas=5

Verify the scaling:

kubectl get pods

You should now see 5 nginx pods running. This demonstrates how easy it is to scale applications in Kubernetes.
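
Note that this imperative command only changes the live object; the replicas: 3 in your YAML file is now out of date. The declarative equivalent is to edit the manifest and re-apply it:

# after setting replicas: 5 in nginx-deployment.yaml:
kubectl apply -f nginx-deployment.yaml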

🚨 Troubleshooting Tip:
If some pods are stuck in "Pending" status, it might be due to resource constraints. Check the events for more information:
kubectl describe pods
Look for events related to scheduling or resource allocation.
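
Since k3s bundles metrics-server by default, you can also check overall node utilization:

kubectl top nodes                                         # CPU/memory usage per node
kubectl get events --sort-by=.metadata.creationTimestamp  # recent cluster events, oldest first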

Updating the Application

Let's update Nginx to a newer version. Edit the nginx-deployment.yaml file and change the image version in the Pod template:

spec:
  containers:
  - name: nginx
    image: nginx:1.27

Apply the changes:

kubectl apply -f nginx-deployment.yaml

Watch the rollout status:

kubectl rollout status deployment/nginx-deployment

Kubernetes will gradually update the pods to the new version, ensuring zero downtime. This process, known as a rolling update (RollingUpdate is the default), replaces old Pods with new ones incrementally.
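
Kubernetes also keeps a revision history per Deployment, which makes rolling back a one-liner (this comes in handy for exercise 2 at the end of this article):

kubectl rollout history deployment/nginx-deployment   # list recorded revisions
kubectl rollout undo deployment/nginx-deployment      # revert to the previous revision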

📌 Deep Dive for Intermediate Users:
Kubernetes offers fine-grained control over the update process through fields like .spec.strategy.rollingUpdate.maxSurge and .spec.strategy.rollingUpdate.maxUnavailable in the deployment spec. These allow you to control how many pods can be created above the desired number, and how many pods can be unavailable during the update process respectively (both default to 25%) [8].
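
For illustration, these fields live under the Deployment spec; the values shown here are the documented defaults:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # extra Pods allowed above the desired count
      maxUnavailable: 25%    # Pods allowed to be down during the rollout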

Conclusion: Your Kubernetes Journey Begins

We've covered a lot of ground in this first foundational Kubernetes article; from understanding the basics of Kubernetes architecture to deploying and managing your first application using k3s.

Here's a quick recap:

  1. Kubernetes architecture consists of a control plane and nodes, working together to manage containerized applications.
  2. Core concepts like Pods, Nodes, Clusters, and Namespaces form the building blocks of any Kubernetes system.
  3. K3s provides a lightweight, easy-to-install Kubernetes distribution perfect for learning and development.
  4. Deployments in Kubernetes allow you to declaratively manage your applications.
  5. Kubernetes makes it easy to scale and update your applications with simple commands.

What's Next?

In upcoming articles in this series, we'll dive deeper into:

  1. Kubernetes Networking: Understanding Services, Ingress, and Network Policies
  2. Persistent Storage in Kubernetes: Working with PersistentVolumes and StorageClasses
  3. Kubernetes Security: Role-Based Access Control (RBAC) and Pod Security Policies
  4. Advanced Workloads: Exploring StatefulSets, DaemonSets, and Jobs
  5. Monitoring and Logging in Kubernetes: Setting up robust observability for your cluster

Each of these topics builds on the foundation we've laid in this article, allowing you to create more complex and robust applications on Kubernetes.

Test Your Knowledge

To reinforce what you've learned, try the following exercises:

  1. Deploy a different application (e.g., a simple Node.js app) to your k3s cluster.
  2. Roll back the Nginx deployment to the previous 1.26.1 version.
  3. Create a service of type ClusterIP for your application and access it from within the cluster.
  4. Set up a cronjob that prints "Hello, Kubernetes!" every minute.
  5. Create a ConfigMap and use it to configure your application.

As you progress, you'll find that Kubernetes opens up a world of possibilities for deploying, scaling, and managing containerized applications. Starting small and expanding will raise new questions, and the search for answers will expand your skillset.

Thanks for your time, and let me know in the comments if you're currently using Kubernetes, or planning to, and on what project(s).

See you in the next one!

References


  1. CNCF, "Cloud Native 2023: The Undisputed Infrastructure of Global Technology" - https://www.cncf.io/reports/cncf-annual-survey-2023/

  2. InfoQ, "Developing Kubernetes Services at Airbnb Scale" - https://www.infoq.com/presentations/airbnb-kubernetes-services/

  3. Open Container Initiative - https://opencontainers.org/

  4. K3s Documentation, "Architecture" - https://docs.k3s.io/architecture

  5. Kubernetes Documentation, "Namespaces" - https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

  6. K3s Documentation, "Using Docker as the Container Runtime" - https://docs.k3s.io/advanced#using-docker-as-the-container-runtime

  7. K3s Documentation, "Service Load Balancer" - https://docs.k3s.io/networking/networking-services#service-load-balancer

  8. Kubernetes Documentation, "Deployments" - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
