Getting my feet wet with Kubernetes

Recently, I’ve spent some time playing around with Kubernetes (K8s). Having never used it before, I gave it my humble first try. I used it as part of a project where I wanted to self-host some tools on a VPS and write some server code for some life automations, and potentially a blog in the future. You can find the GitHub repo for the project, at the time of writing, here.

Did I need to use K8s? Nope. Should I have used K8s? Probably not. My situation and setup really don’t call for K8s, nor does the full brilliance of K8s really shine in my project. But hey! I thought it was a good way for me to learn a little about K8s and some of the fundamental terminology like ingress, services, deployments, PVs and PVCs etc.

Environments overview

This is how I ended up setting things up. I had three environments, which I wanted to keep as similar to each other as possible. For each environment, I have a K8s namespace…

  • pastureen-production for production
  • pastureen-local for local development
  • pastureen-test for my local test environment

Whilst the production namespace ran on the production VPS running a single-node MicroK8s cluster, the other two namespaces ran on my laptop using the Kubernetes engine that comes with Docker Desktop.
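As a minimal sketch (assuming the default kubeconfig context names for each cluster), getting the namespaces in place is one command each:

```bash
# Production: single-node MicroK8s on the VPS
kubectl --context microk8s create namespace pastureen-production

# Local dev and test: Docker Desktop's built-in cluster
kubectl --context docker-desktop create namespace pastureen-local
kubectl --context docker-desktop create namespace pastureen-test
```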

[Diagram: namespace setup]

Inside each namespace, there are K8s services pointing to self-hosted tools (at this point, I’ve only got NocoDB set up). Each namespace also has a Postgres database. The database storage is mounted via hostPath, since I am only using single-node clusters and also didn’t have time to look too much into StatefulSets and how to correctly host a database within a K8s cluster.
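Roughly, the hostPath arrangement looks like the sketch below; the names, path and size are illustrative, not the repo’s actual values:

```yaml
# Sketch of a hostPath-backed volume for Postgres on a single-node cluster
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/pastureen/postgres   # data lives at a known spot on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: pastureen-local
spec:
  storageClassName: ""   # skip dynamic provisioning; bind to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```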

In the production cluster I will have an API server (this doesn’t actually exist yet at the time of writing) and some cron jobs which run my automated tasks.

On my local namespaces however, instead of an API server, I set up a development container as a service in K8s, with the local project files mounted as a volume in the pod. This meant that for local development I can use kubectl exec -it to run commands and tests inside the pods, connected to the rest of the services in the K8s namespace, whilst still editing the source files from my editor.
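Not the actual manifest, but the shape of it is roughly this; the image name, labels and paths are all hypothetical:

```yaml
# Dev container kept alive so I can exec into it; project source is
# mounted straight from the laptop via hostPath.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-container
  namespace: pastureen-local
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-container
  template:
    metadata:
      labels:
        app: dev-container
    spec:
      containers:
        - name: dev
          image: pastureen/dev:latest
          command: ["sleep", "infinity"]   # idle; work happens via kubectl exec
          volumeMounts:
            - name: src
              mountPath: /workspace
      volumes:
        - name: src
          hostPath:
            path: /Users/me/code/pastureen   # local checkout on the laptop
```

With that running, something like `kubectl -n pastureen-local exec -it deploy/dev-container -- bash` drops you into a shell sitting right next to the database and the other services.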

[Diagram: what’s inside each namespace]

Managing K8s resources and how it all deploys

I decided to use Terraform to manage my K8s resources. I know there are probably better ways of doing this (like Argo CD or Flux CD), but I ended up settling on Terraform as I was already familiar with the tool, and it allowed me to achieve the goal of trying out K8s without getting too bogged down in the deployment process.
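In case that combination is unfamiliar, here’s a bare-bones sketch using the official kubernetes provider; the resource and variables are illustrative, not the actual project’s:

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.kube_context   # e.g. docker-desktop or the MicroK8s context
}

# A K8s Service declared in HCL rather than YAML
resource "kubernetes_service" "nocodb" {
  metadata {
    name      = "nocodb"
    namespace = var.namespace   # pastureen-local / -test / -production
  }
  spec {
    selector = { app = "nocodb" }
    port {
      port        = 8080
      target_port = 8080
    }
  }
}
```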

Having said that, not everything is managed by a single Terraform project, as there are dependencies between resources (e.g. NocoDB requires a Postgres database connection, so a logical database for NocoDB has to be created inside the physical Postgres instance first).

I don’t have an automated pipeline just yet. It’s part manual, part scripted. But the steps and their order are established.

[Diagram: the deployment pipeline]

The pipeline first involves setting up the K8s namespaces and configuring the required secrets. I currently do this manually, editing the secrets with kubectl. Eventually this could be part of a Terraform project, but I still need to figure out how to source the secrets inside the Terraform project; maybe I could reference an AWS SSM parameter value here…
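The manual step is along these lines (the secret and key names are hypothetical):

```bash
# One-off, per namespace; values typed in by hand for now
kubectl -n pastureen-local create secret generic postgres-credentials \
  --from-literal=POSTGRES_USER=postgres \
  --from-literal=POSTGRES_PASSWORD='change-me'
```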

The next step involves building the Docker images referenced by the Terraform projects further down the pipeline. I use a somewhat manual multi-stage build process: I first build the binaries using the dev container, because it’s mounted with the cache and previous build artifacts. Then I copy the output binaries into a new Docker image and push that up as the production image. You can see the details of this in the README here.
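A hedged sketch of that two-step dance; the label, binary path and image name are illustrative:

```bash
# 1. Build inside the dev container, where the cache and previous
#    build artifacts are already mounted.
kubectl -n pastureen-local exec deploy/dev-container -- cabal build

# 2. Copy the resulting binary out of the pod...
POD=$(kubectl -n pastureen-local get pod -l app=dev-container \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n pastureen-local cp "$POD":/workspace/bin/api-server ./api-server

# 3. ...and bake it into a slim production image.
docker build -t pastureen/api-server:latest .
docker push pastureen/api-server:latest
```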

The next three steps I’ve got covered in a Haskell script here (there’s a rough sketch of its shape after the list). The script pretty much:

  1. Deploys the database Terraform project, which sets up the Postgres DB inside the cluster
  2. Looks inside the Postgres DB and determines whether any more logical databases need to be created
  3. Runs a simple custom DB migration system to ensure that all necessary migrations are executed
  4. Deploys the rest of the K8s resources (the dev containers, cron jobs, NocoDB etc.) which rely on the database
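As promised, a sketch of the script’s top level. This is not the actual code; every helper name here is hypothetical, with the real implementations stubbed out:

```haskell
module Main where

import Control.Monad (forM_)
import System.Process (callProcess)

-- Run `terraform apply` for a given project directory.
applyTerraform :: FilePath -> IO ()
applyTerraform dir =
  callProcess "terraform" ["-chdir=" <> dir, "apply", "-auto-approve"]

main :: IO ()
main = do
  -- 1. Stand up the in-cluster Postgres instance
  applyTerraform "terraform/database"

  -- 2. Create any logical databases that don't exist yet
  missing <- missingLogicalDbs ["nocodb", "automations"]
  forM_ missing createLogicalDb

  -- 3. Run any outstanding migrations
  runMigrations "migrations/"

  -- 4. Deploy everything that depends on the database
  applyTerraform "terraform/application"

-- Stubs standing in for the real implementations:
missingLogicalDbs :: [String] -> IO [String]  -- query pg_database, diff against the wanted list
missingLogicalDbs = undefined
createLogicalDb :: String -> IO ()            -- CREATE DATABASE ...
createLogicalDb = undefined
runMigrations :: FilePath -> IO ()            -- the custom migration system
runMigrations = undefined
```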

I may cover the details of the script in another post.

My feelings and what I’ve learnt

In this endeavor, I definitely got the chance to expose myself to the world of K8s. I also think I managed to come up with a local development setup with K8s and Haskell that worked quite well.

Looking back though, I can definitely see how K8s is absolute overkill for the project. In some ways it made life harder rather than easier. One of the pain points for me was cluster storage and volume mounts. Compared with Docker Compose, where a volume mount can be specified by a single line of YAML pointing to a host path, K8s works best when persistent volumes are dynamically allocated and assigned. At one point in the project I had a multi-node setup with Longhorn as a persistent storage solution, and the developer experience was so much better (I had to ditch the multi-node setup due to cost considerations, however).

There’s this mental model where it’s the responsibility of the cluster administrator to set up the cluster storage solution, and all the application developer needs to do is issue a PVC. Maybe it’s my own stubbornness in wanting to always map storage to a hostPath location where I know my files are being stored, but it’s probably also because I’m running a single-node cluster. If it were a multi-node cluster, I can see why the complexity around persistent storage is a must.
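From the application developer’s side, that mental model looks something like this; the storage class name is an assumption (it’s whatever the administrator configured, e.g. Longhorn’s):

```yaml
# The app just claims storage; the cluster's provisioner decides
# where the bytes actually live.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: longhorn   # set up by the cluster administrator
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```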

Another unfamiliarity and challenge for me was K8s ingress. I decided to use Traefik, installed via its Helm chart, as my ingress controller. It was a challenge because both technologies were unfamiliar; setting them up and trying to make sense of an overwhelming amount of configuration was a bit of a stab in the dark. One major snag for me was the idea that a “LoadBalancer” service in K8s is provisioned and managed by the cloud provider hosting the cluster, and doesn’t work right out of the box on a MicroK8s cluster (it did work out of the box on Docker Desktop, which made it really confusing). I ended up setting up Traefik as a “NodePort” service and hacking the cluster config to allow it to be open on ports 80 and 443 here. Again, if I had used K8s as a multi-node orchestrator with a cloud provider, the way it was designed to be used, maybe I wouldn’t have faced so many challenges.
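For reference, the shape of the workaround looks roughly like this; the chart values and file paths reflect my understanding of current Traefik and MicroK8s versions, so yours may differ:

```bash
# 1. Widen the NodePort range so 80/443 are allowed. On MicroK8s, add
#    this flag to /var/snap/microk8s/current/args/kube-apiserver:
#      --service-node-port-range=80-32767
#    then restart: microk8s stop && microk8s start

# 2. Install Traefik as a NodePort service pinned to 80/443:
helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik \
  --set service.type=NodePort \
  --set ports.web.nodePort=80 \
  --set ports.websecure.nodePort=443
```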

In hindsight, I probably would’ve benefited from learning K8s in a different environment, in a context where the tool really shines. But then again, those contexts are almost impossible to come by as a hobbyist, so ehh. Regarding the specific needs of this project though, I probably should’ve gone with just using Terraform to provision Docker containers on the VPS and locally instead; that might have made my life a lot more straightforward. In the end, it’s all about trade-offs…

  • Learn less and make life a lot smoother, with quicker project progress
  • OR learn more, but potentially hit many roadblocks, stressful problems and very slow progress.

That’s definitely something to think about.
