The Home Server Journey - 2: The Control Room

I confess that not adding some application deployment to our first article left me a bit frustrated, but I was unsure about its length, even more so for an author whose reputation alone wouldn't [yet] make people pay more attention than is natural. But let's fix that right away and get something running.

Hello, World!

As we're talking about Internet servers, what better than a Web page? I've found a very fitting test container in jdkelley's simple-http-server, but there was an issue: no image available for the ARM architecture. That's the only reason I had to build one and publish it to my own repository, I swear.
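
For the record, a multi-architecture image can be built and pushed from a regular desktop with Docker's buildx. Something along these lines should do it, though take it as a sketch: it assumes a suitable Dockerfile in the current directory and that you're already logged into your registry:

$ docker buildx build --platform linux/arm64,linux/amd64 -t ancapepe/http-server:python-latest --push .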

If those concepts related to containers are still confusing to you, I recommend following Nana Janashia's tutorials and crash courses. She's an amazing teacher and I wouldn't be able to quickly summarize all the knowledge she provides. Her Docker lectures compilation is linked below:

With that out of the way (thanks, Nana), let's use our test image in our first Kubernetes deployment, where containers are encapsulated as pods. From now on I'll be abusing comments inside the YAML-format manifest files as it's an easier and more compact way to describe each parameter:

apiVersion: v1                        # Specification version
kind: ConfigMap                       # Non-confidential configuration data component
metadata:                             # Identification attributes
  name: welcome-config                  # Component name
data:                                 # Contents list
  welcome.txt: |                        # Data item name and contents
    Hello, world!
---
apiVersion: apps/v1
kind: Deployment                      # Manages a replicated set of application pods
metadata:
  name: welcome-deploy                # Component name
  labels:                             # Markers used for identification by other components
    app: welcome                                      
spec:
  replicas: 2                         # Number of application pod instances
  selector:
    matchLabels:                        # Markers searched for in other components
      app: welcome                                    
  template:                           # Settings for every pod that is part of this deployment
    metadata:
      labels:
        app: welcome
    spec:
      containers:                         # List of deployed containers in each pod replica
      - name: welcome                                 # Container name
        image: ancapepe/http-server:python-latest     # Image used for this container
        ports:                                        # List of exposed network ports for this container
        - containerPort: 8000                           # Port number
          name: welcome-http                            # Optional port label for reference 
        volumeMounts:                                 # Data storages accessible to this container
          - name: welcome-volume                        # Storage name
            mountPath: /serve/welcome.txt               # Storage path inside this container
            subPath: welcome.txt                        # Storage path inside the original volume
      volumes:                            # List of volumes available for mounting
      - name: welcome-volume                # Volume name
        configMap:                          # Use a ConfigMap component as data source
          name: welcome-config              # ConfigMap reference name
---
apiVersion: v1
kind: Service                         # Stable network access point for a set of pods
metadata:
  name: welcome-service                 # Component name
spec:
  type: LoadBalancer                    # Expose service to non-K8s processes
  selector:                             # Bind to deployments with those labels
    app: welcome
  ports:                                # List of exposed ports
    - protocol: TCP                       # Use TCP Internet protocol
      port: 8080                          # Listen on port 8080
      targetPort: welcome-http            # Redirect traffic to this container port

(Copy it to a text editor and save it to something like welcome.yaml)

Among other things, Kubernetes is a solution for horizontal scaling: if your application process is bogged down by too many requests, you may instantiate extra copies of it (replicas) in order to serve a larger demand, even dynamically, although here we'll work with a fixed amount. Moreover, see how services not only centralize access to all pods of a deployment, but also allow us to use a different connection port than the one defined in the Docker image.
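
As an illustration, once our deployment is applied further below, the replica count can be changed on the fly with kubectl scale (the number 4 here is arbitrary):

$ kubectl scale deployment welcome-deploy --replicas=4
deployment.apps/welcome-deploy scaled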

A concept you should keep in mind when working with K8s is idempotence: manifests such as the one above don't describe a sequence of operations to set up your pods and auxiliary components, but the desired final state, from which the operations (to either create or restore the deployment) are derived automatically. That way, trying to re-apply an already [successfully] applied configuration will result in no changes to the cluster.
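
In practice, that means re-running kubectl apply on an unchanged manifest (as we'll do below) should simply report that everything is already in the desired state, roughly like this:

$ kubectl apply -f welcome.yaml
configmap/welcome-config unchanged
deployment.apps/welcome-deploy unchanged
service/welcome-service unchanged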

The time has come to get our hands dirty. Surely you could submit your YAML files by copying each new version to one of your nodes and invoking k3s kubectl via SSH, but how about doing that from the comfort of your desktop, where you originally edited and saved the manifests?

It is possible to install kubectl independently of the K8s cluster (follow the instructions for each platform), but it won't work out of the box, as your local client doesn't know how to find the remote cluster yet:

$ kubectl get nodes                                                                                                                                                                                  
E0916 18:45:31.872817   11709 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0916 18:45:31.873268   11709 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0916 18:45:31.875167   11709 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0916 18:45:31.875837   11709 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0916 18:45:31.877524   11709 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?

In order to get that information, log into one of your cluster machines and get the system configuration:

[ancapepe@ChoppaServer-1 ~]$ k3s kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

(Here authentication data is omitted for safety reasons. In order to show the complete configuration, add the --raw option to the end of the command)

Copy the raw output of the command, change the server IP (just the 127.0.0.1 part) to match your master node's address, and overwrite your desktop's kubeconfig file (on Linux, the default location is <user home directory>/.kube/config) with the modified contents. Now you should be able to use the control client successfully:

$ kubectl get nodes                                                                                                                                                                                
NAME              STATUS     ROLES                  AGE   VERSION
odroidn2-master   Ready      control-plane,master   9d    v1.30.3+k3s1
rpi4-agent        Ready      <none>                 9d    v1.30.3+k3s1

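If you'd rather automate those steps, a one-liner along these lines can fetch and patch the kubeconfig in one go (just a sketch: it assumes SSH access with the ancapepe user and that 192.168.3.10 is your master node's address, so adjust both to your setup):

$ ssh ancapepe@192.168.3.10 "k3s kubectl config view --raw" | sed "s/127.0.0.1/192.168.3.10/" > ~/.kube/config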

Finally, apply our test deployment:

$ kubectl apply -f welcome.yaml
configmap/welcome-config created
deployment.apps/welcome-deploy created
service/welcome-service created
$ kubectl get pods                                                                                                                                                        
NAME                             READY   STATUS    RESTARTS   AGE
welcome-deploy-5bd4cb78b-hvj9z   1/1     Running   0          75s
welcome-deploy-5bd4cb78b-xb99p   1/1     Running   0          75s

If you regret your decision (already?), that operation may be reverted using the same manifest:

$ kubectl delete -f welcome.yaml                                                                                                                                           
configmap "welcome-config" deleted
deployment.apps "welcome-deploy" deleted
service "welcome-service" deleted
$ kubectl get pods                                                                                                                                                        
NAME                             READY   STATUS        RESTARTS   AGE
welcome-deploy-5bd4cb78b-hvj9z   1/1     Terminating   0          114s
welcome-deploy-5bd4cb78b-xb99p   1/1     Terminating   0          114s
# A little while later
$ kubectl get pods 
No resources found in default namespace.

As you can see, pods of a given deployment get a random suffix attached to their names in order to differentiate them. Also, all Kubernetes resources are organized into namespaces to help with resource management. If no namespace is given as a kubectl command option or inside the file itself, the default one is used.
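
For reference, pinning a component to a namespace inside the file itself is just a matter of adding a namespace field to its metadata, as in this small sketch (using the test namespace we're about to create):

metadata:                             # Identification attributes
  name: welcome-config                  # Component name
  namespace: test                       # Namespace this component belongs to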

In order to perform the same operation with a particular namespace, create it first (if not created already):

$ kubectl create namespace test                                                                                                                                          
namespace/test created
$ kubectl apply -f welcome.yaml --namespace=test                                                                                                                           
configmap/welcome-config created
deployment.apps/welcome-deploy created
service/welcome-service created
$ kubectl get all --namespace=test                                                                                                                                         
NAME                                 READY   STATUS    RESTARTS   AGE
pod/welcome-deploy-5bd4cb78b-5df7d   1/1     Running   0          16s
pod/welcome-deploy-5bd4cb78b-9cpdj   1/1     Running   0          16s

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                 PORT(S)          AGE
service/welcome-service   LoadBalancer   10.43.194.209   192.168.3.10,192.168.3.11   8080:31238/TCP   16s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/welcome-deploy   2/2     2            2           16s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/welcome-deploy-5bd4cb78b   2         2         2       16s

(See how namespaces allow us to easily select all the components that we wish to monitor)

With a successful deployment, our Web page should be displayed by accessing one of the external IPs of the LoadBalancer service (which are the node IPs) on port 8080 in any browser (generally you have to explicitly ask for HTTP instead of HTTPS):

Accessing Web application from browser
(Opening the listed file works too!)
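
If you prefer the terminal, curl should return the same content (assuming 192.168.3.10 is one of your node IPs and that the server exposes the mounted file at the root path):

$ curl http://192.168.3.10:8080/welcome.txt
Hello, world!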

Congrats! Go add it to your list of LinkedIn skills (just kidding)

Once more, if you're interested in understanding all those concepts in more depth in order to follow along with this and the next chapters of our guide, Nana comes to the rescue with her Kubernetes tutorial:

A nicer view

That's all well and good, but I bet you can imagine how, as you develop and run more and more components, issuing commands and keeping track of everything with kubectl gets quite repetitive. One may open multiple terminals running the get command with the --watch option for automatic updates, but switching between windows is not that great either.
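
For reference, that would look something like this, with one terminal per resource type you want to follow:

$ kubectl get pods --namespace=test --watch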

Thankfully, for those who prefer a more visual approach, there are friendlier GUIs to help manage your clusters and deployments. Rancher, developed by the same folks behind K3s, is one of them, but honestly I've found it too complicated to set up. In my previous job I came across an alternative that I consider much more practical: Lens.

After installing it, there's not much to do to get it connected to your cluster. Select the option to add a new cluster from a kubeconfig and paste the contents of your local kubectl configuration:

Lens cluster configuration

If everything is correct, you'll be able to access the cluster, visualize your resources, edit manifest files, and even use the integrated terminal to create new components with kubectl (there doesn't seem to be a widget for creation, only for modification or deletion of existing components):

Lens cluster dashboard
(I don't know about you, but that's what I'll be using from now on)

That's it for now! Thanks for reading, and let's start making some real-world stuff next time.

Chapter 3
