Rico Klimpel

Setting up a Kubernetes Cluster with K3S on my linux server

I am currently in the initial stages of creating a "sideproject hub", where I aim to rapidly deploy small coding projects within Docker containers on my own server. To ensure total overengineering from the very beginning, I am in the process of establishing a Kubernetes cluster on my server. This cluster will serve as the foundation for efficiently running my future side projects.

Discover the steps I took to set up a Kubernetes cluster on my Linux server using K3S, and witness the moment when I accessed my domain through a web browser for the first time, greeted by a simple NGINX "hello world" page.

Disclaimer: I'm writing these lines as I try all this stuff out for myself. This is how it worked for me, and I'm happy if I can help someone else with my notes. If anyone knows how to do it in a better way or if any information is incorrect, please let me know!

Prerequisites

  • Running Linux Server (In my case Debian GNU/Linux 11 (bullseye))
  • A Domain pointing to the server and its subdomains (<yourdomain.de> & <awesome-subdomain>.<yourdomain.de>)
  • SSH access to the server (preferably via public key and not with username & password)
  • kubectl on local machine

Install & Run Kubernetes Cluster with K3S

  • SSH login to the server: ssh rico@<yourserver.de>
  • K3S installation information: K3S Installation Docs
  • Run the installation command: curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san <yourserver.de>" sh -
  • Copy Kubernetes Config File: sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
  • Test if the cluster is running: kubectl get nodes → Should show one node

The curl command downloads the K3S installation script from the specified URL. If I got it right, adding your domain as the value for --tls-san ensures that the TLS certificate includes your server domain as a valid identity. The script is then passed to the shell for execution, which triggers the K3S installation with the specified parameters. Piping scripts from curl straight into a shell should always be carefully considered! Only execute scripts that are trustworthy or that you can review.
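To see the mechanics of the piped install without touching a real system, here is a tiny local stand-in: the `VAR=...` prefix before `sh` is visible as an environment variable inside whatever script the pipe executes. The filename and flag value below are placeholders, not part of the real installer.

```shell
# Stand-in "installer" that just reports the flags it was given.
echo 'echo "installing with flags: $INSTALL_K3S_EXEC"' > fake-install.sh

# Same shape as the real one-liner: env var prefix, then the script.
INSTALL_K3S_EXEC="--tls-san yourserver.de" sh fake-install.sh
# → installing with flags: --tls-san yourserver.de
```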

After installing K3S, the Kubernetes configuration file is generated at /etc/rancher/k3s/k3s.yaml. This file contains essential information about the K3S cluster, such as the API server address and access credentials. As ~/.kube/config is the default location for the Kubernetes configuration file, and tools like kubectl look it up there, the cp command is executed.
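A quick sketch of that lookup behavior: kubectl reads ~/.kube/config by default, and the KUBECONFIG environment variable overrides that. The stand-in file below is a made-up minimal config, so nothing real gets overwritten.

```shell
# Create a stand-in kubeconfig (a real k3s.yaml has the same core fields).
cat > ./k3s-demo-config <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point kubectl at it without copying anything into ~/.kube.
export KUBECONFIG="$PWD/k3s-demo-config"
grep 'server:' "$KUBECONFIG"
```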

External Access to the Cluster from Local Machine

  • Copy the Kubernetes configuration file from the server to local machine: scp rico@<yourserver.de>:/home/rico/.kube/config ~/.kube/config
  • Modify the config file, e.g. with nano ~/.kube/config and replace 127.0.0.1 with your own clusters server domain and update cluster name if needed
  • Test access: kubectl get nodes → Should display the same information and work in the same way as if the command were executed on the server itself
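What the config edit in the second step does, shown on a stand-in copy: the server: line that points at the loopback address gets swapped for the public domain (the domain below is a placeholder).

```shell
# Stand-in for the relevant line of ~/.kube/config.
printf 'server: https://127.0.0.1:6443\n' > ./config-demo

# Replace the loopback address with the cluster's public domain.
sed -i 's|https://127\.0\.0\.1:6443|https://yourserver.de:6443|' ./config-demo

cat ./config-demo
# → server: https://yourserver.de:6443
```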

In case you already have a config file with other clusters configured, be smart and don't overwrite it.
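One way to avoid overwriting: keep the new cluster's config in its own file and let kubectl see both by listing them in KUBECONFIG, which accepts a colon-separated list of files. The paths below are placeholders.

```shell
# kubectl merges all files listed in KUBECONFIG at read time,
# so the existing ~/.kube/config stays untouched.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/k3s-config"
echo "$KUBECONFIG"
```

With that set, `kubectl config get-contexts` lists the contexts from both files and `kubectl config use-context` switches between them.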

Run a Hello World Nginx Image as Test Deployment in the Cluster

  • I used this Hello-World-Nginx Container: dockerbogo/docker-nginx-hello-world
  • I started by creating a new Kubernetes namespace for my test projects, but you can also use the default namespace
kubectl create namespace test
  • Create a deployment YAML file hellohub-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellohub
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellohub
  template:
    metadata:
      labels:
        app: hellohub
    spec:
      containers:
        - name: hellohub
          image: dockerbogo/docker-nginx-hello-world
          ports:
            - containerPort: 80
  • Apply this file with kubectl apply -f hellohub-deployment.yaml
  • Check if the pod is running: kubectl get pods --all-namespaces or kubectl -n test get pods → Should show a running hellohub-pod
  • Create a service for the deployment:
apiVersion: v1
kind: Service
metadata:
  name: hellohub-service
  namespace: test
spec:
  selector:
    app: hellohub
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
  • If you created the service in a file called hellohub-service.yaml, you can apply it with kubectl apply -f hellohub-service.yaml. It's also possible to split Kubernetes resource YAML files with `---` and define the service in the same file as the deployment.
  • Check if the service is applied: kubectl -n test get services → Should show a running hellohub-service
  • Test access to Nginx Hello World using port forwarding. You can target either the pod itself or the service: kubectl -n test port-forward <pod-name> 8080:80 or kubectl -n test port-forward svc/hellohub-service 8080:80 (or use other Kubernetes tools like k9s)
  • Open http://localhost:8080, or whatever port you've chosen, in the browser and the site should be shown

The hellohub-deployment.yaml file defines a Kubernetes Deployment named "hellohub" within the "test" namespace. It specifies the dockerbogo/docker-nginx-hello-world Docker image and configures a single replica, meaning only one instance of the container is running. The deployment exposes port 80 and lets Kubernetes manage the pod lifecycle: if the pod crashes, for example, it is restarted immediately.

The hellohub-service.yaml file creates a Kubernetes Service named "hellohub-service" within the "test" namespace. It acts as an internal ClusterIP service, routing traffic to pods labeled "app: hellohub" on port 80, and thereby provides a stable endpoint for accessing the Nginx Hello World deployment. Defining this service allows other components within the cluster to communicate with the "hellohub" deployment seamlessly. With only one pod running, the concept is probably less important, but I was happy to set it up anyway: it gives me a fixed name for port forwarding, so I can address the service without looking up the pod first. I still have to find out what else services are responsible for.
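As mentioned above, the Deployment and the Service can also live in a single file, separated by `---`; applying that one file creates both resources at once. The combined version of the two manifests from this post would look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellohub
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellohub
  template:
    metadata:
      labels:
        app: hellohub
    spec:
      containers:
        - name: hellohub
          image: dockerbogo/docker-nginx-hello-world
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hellohub-service
  namespace: test
spec:
  selector:
    app: hellohub
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
```

A single kubectl apply -f on this file creates or updates both resources together, which keeps related things in one place for small projects like this.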

Make Test Deployment Accessible via Subdomain

  • Create Ingress Resources for accessing Hello World on a subdomain:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hellohub-ingress
  namespace: test
spec:
  rules:
    - host: hellohub.<your-domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hellohub-service
                port:
                  number: 80
  • Nginx Hello World should now be reachable at http://hellohub.<your-domain>/

In Kubernetes, an Ingress provides HTTP and HTTPS routing to services based on external hostnames and paths. It acts as a layer between external requests and the services within the cluster. The Ingress Resource, like the one in the provided YAML, is a configuration file that defines these routing rules. In the given example, the Ingress Resource named "hellohub-ingress" within the "test" namespace is configured to route requests with the hostname "hellohub.<your-domain>" to the "hellohub-service" on port 80. The paths section further refines the routing, indicating that requests to the root path should be directed to the specified service. This Ingress Resource enables the Nginx Hello World application to be reachable at http://hellohub.<your-domain>/.
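One detail worth knowing: K3S ships with Traefik as its bundled ingress controller, which is why the Ingress above works without installing anything extra. If you ever run more than one ingress controller, the standard ingressClassName field selects which one handles the resource. A hedged sketch of the same Ingress with that field set (traefik is the usual class name in K3S, but verify with kubectl get ingressclass on your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hellohub-ingress
  namespace: test
spec:
  ingressClassName: traefik   # pin to K3S's bundled Traefik controller
  rules:
    - host: hellohub.<your-domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hellohub-service
                port:
                  number: 80
```

Omitting the field, as in the original manifest, is fine as long as only one controller is installed.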

In case it does not work, remember that it is not necessarily the cluster itself or the Ingress configuration that is to blame. It may also be due to your DNS settings for the domain, for example.

Uninstalling K3S

Just in case you were also looking for this information. Fortunately, it's quite simple.

  • SSH login to the server
  • Execute K3S uninstall script: /usr/local/bin/k3s-uninstall.sh

Links

Links to the resources I used and to information that helped me in the process.
