Building a Kubernetes Home Lab

Greetings.

Kubernetes (K8s) has quickly become the preferred way to deploy and manage containerized workloads, both on-premises and in the public cloud. As a result, there is a surge in demand for engineers who can deploy scalable and robust solutions across a wide range of application areas. It is therefore necessary to understand how the technology's architecture works and to gain the crucial hands-on skill of deploying solutions in the cloud. In this article, I share insights from a talk on Deploying a Kubernetes Home Lab, hosted on StreamingClouds by Kevin Evans and Robin Smorenburg, featuring Michael Levan as the guest speaker. Strap in.

Why Home Labs?

There is so much information out there. Typically, what ends up happening is this: you read, read some more, watch videos, and listen to podcasts. You end up with all this information, but you haven't used it, for various reasons. Maybe you do not have the opportunity at work, maybe you don't have a home lab (which is why you are here), or perhaps you don't yet have a use case where you can implement it. A solid way to reinforce your learning is to take what you are learning and implement it. It cements the knowledge: you get both the theoretical and the practical side. As a practising techie, you can't have one without the other.

Building a Home Lab.

A set of tools is required to get your setup up and running. For a beginner, minikube is the place to start. It features a straightforward installation guide for each operating system. Once installed, run `minikube start` and it will create a K8s cluster. At this point, you have a development K8s cluster. Never use it for production: it has only one control plane, and the control plane and the worker nodes are bundled together. However, minikube offers you a good introduction.
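
As a minimal sketch, assuming you have minikube and kubectl installed and Docker available as the driver, spinning up and inspecting the development cluster looks something like this:

```bash
# Start a single-node development cluster (the docker driver is one common choice)
minikube start --driver=docker

# Verify that the bundled control-plane/worker node is ready
kubectl get nodes

# Tear the lab down when you are done
minikube delete
```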

For a production-grade setup, you need to go deeper and understand the building blocks of K8s. Learning and understanding how all the various nodes work together is the essence of building a home lab. Once you understand the theory and the basic setup, start playing with [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/), which also runs on all major operating systems. kubeadm allows you to segregate your control plane from your worker nodes. Many backend production K8s tools and environments running in large organizations use kubeadm.
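
A rough sketch of that workflow, assuming two Linux machines with kubeadm, kubelet, and a container runtime already installed (the pod CIDR shown matches Flannel's default, and the token and hash are printed for you by `kubeadm init`):

```bash
# On the control-plane node: bootstrap the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl can talk to the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node: join the cluster using the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```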

Afterwards, you can dive into kubernetes-the-hard-way. Make sure you understand everything mentioned above before getting here; it is the deep end. In kubernetes-the-hard-way there is no automation: Kelsey Hightower builds the certificate store and the certificates that let all the K8s components talk to the API server, so he goes deep into it, binary by binary. What awaits is a highly rough experience if you get into it before understanding the fundamentals.
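
To give a flavour of what "no automation" means, this is roughly how the tutorial has you mint the cluster's certificate authority with cfssl (the `ca-csr.json` config file is something you write yourself by following the guide):

```bash
# Generate the self-signed CA certificate and private key that every
# other component certificate in the cluster will be signed by
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Produces ca.pem (certificate) and ca-key.pem (private key)
ls ca*.pem
```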

It would be best if you also learned how to deploy K8s clusters in the public cloud and to automate and manage them. From a serverless K8s perspective in the AWS cloud, Fargate enables you to connect a Fargate profile to your K8s cluster; interaction with and management of the cluster happen through an API, and there are no worker nodes to manage. The same applies to Google Cloud, where you can spin up clusters under Kubernetes Engine. You can choose the standard (with worker nodes), Autopilot (serverless K8s), or API-specific deployment mode. At some point, we will interact with K8s solely through an API. There will be a service that manages the worker nodes, reinforcing the glorious concept of K8s that enables us to interact with infrastructure through an API.
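
As a hedged sketch of both managed paths (the cluster names and regions below are placeholders), eksctl can create an EKS cluster backed by a Fargate profile, and gcloud can create a GKE Autopilot cluster:

```bash
# AWS: create an EKS cluster with a default Fargate profile (no managed node group)
eksctl create cluster --name home-lab --region us-east-1 --fargate

# Google Cloud: create a GKE Autopilot cluster, where Google manages the nodes
gcloud container clusters create-auto home-lab --region us-central1
```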

Best Approach to Cluster Hardening.

As you set up your home lab, it is vital to continuously think about security and integrate it into your design. Some of the ways you can secure your clusters are listed below:

  • Don’t make the API server IP address public.

  • Ensure that the servers on which the control plane and worker nodes are running are updated, i.e., patched with the latest security updates.

  • Practice role-based access control (RBAC) to limit read and write access to the cluster when you are using kubectl. You can edit the permissions on your local kubeconfig file (see the sketch after this list).

  • Specify the service accounts that run your workloads; avoid the defaults

  • Break down your services into layers and secure them at each layer

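For the RBAC point above, a minimal sketch of granting a user read-only access to pods (the namespace, role, and user names are hypothetical) might look like this:

```bash
# Create a namespaced role that can only read pods
kubectl create role pod-reader --verb=get --verb=list --verb=watch \
  --resource=pods --namespace=dev

# Bind that role to a specific user instead of handing out cluster-admin
kubectl create rolebinding pod-reader-binding --role=pod-reader \
  --user=jane --namespace=dev
```
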
There are open-source frameworks and tools for hardening your clusters. They allow you to check your clusters against security best practices.
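
One widely used example of such tooling is kube-bench, which audits a cluster against the CIS Kubernetes Benchmark; assuming the binary is installed on the node you want to check, a basic run looks roughly like this:

```bash
# Audit the worker-node configuration against the CIS Kubernetes Benchmark
kube-bench run --targets node
```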

In the following article, I evaluate how to choose a Kubernetes platform for Enterprise Deployments. Follow the link below:

How to Choose a Kubernetes Platform for Enterprise Deployments.
