Kubernetes Quick Start Guide ☁️⚡️🚀

Deon Pillsbury

Containers are great for isolating software environments and making them easy to package. Docker and Docker Compose work well for running one or more containers on a single server, but say we need to run 100 different containers. This quickly becomes difficult to manage, and we may need more resources than a single server can provide, so we have to stand up additional servers to run some of the containers. We would also want to take the resources available on each server into account so the load is balanced across them. As applications continue to scale and more servers are required, this becomes harder and harder to orchestrate by hand. This is where Kubernetes comes into the picture as the industry-standard container orchestration solution.

Kubernetes manages clusters of anywhere from 1 to 5,000 servers and automatically schedules containers onto the nodes with the most available resources or the ones best suited for the job. You just tell it which containers to run and how many replicas of each you want; a replica is simply one running instance (process) of the container, and Kubernetes will load balance traffic across the replicas. Architecturally, a control plane node (historically called the master) schedules the containers to run on worker nodes.

💡 Refer to Containers Demystified 🐳🤔 for a full Docker container guide

K8s Architecture (diagram source: https://phoenixnap.com/kb/understanding-kubernetes-architecture-diagrams)

Local Setup

Make sure you have Docker Desktop installed, then install kubectl, the command-line tool for interacting with Kubernetes clusters. Finally, install minikube to run a local Kubernetes instance on your computer.

$ kubectl version -o=yaml
clientVersion:
  buildDate: "2023-05-17T14:20:07Z"
  compiler: gc
  gitCommit: 7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647
  gitTreeState: clean
  gitVersion: v1.27.2
  goVersion: go1.20.4
  major: "1"
  minor: "27"
  platform: darwin/arm64
kustomizeVersion: v5.0.1
serverVersion:
  buildDate: "2023-07-19T12:14:49Z"
  compiler: gc
  gitCommit: fa3d7990104d7c1f16943a67f11b154b71f6a132
  gitTreeState: clean
  gitVersion: v1.27.4
  goVersion: go1.20.6
  major: "1"
  minor: "27"
  platform: linux/arm64
$ minikube start --driver=docker
😄  minikube v1.31.2 on Darwin 14.0 (arm64)
✨  Using the docker driver based on user configuration
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.27.4 preload ...
    > preloaded-images-k8s-v18-v1...:  327.74 MiB / 327.74 MiB  100.00% 21.54 M
    > gcr.io/k8s-minikube/kicbase...:  404.50 MiB / 404.50 MiB  100.00% 21.34 M
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🐳  Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Enable the dashboard, metrics-server, and ingress add-ons.

$ minikube addons enable dashboard
💡  dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
🌟  The 'dashboard' addon is enabled

$ minikube addons enable metrics-server
💡  metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
🌟  The 'metrics-server' addon is enabled

$ minikube addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡  After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

Verify that kubectl is connected to the Minikube context.

💡 If you work with multiple Kubernetes clusters, you will want to make sure you are using the minikube context. If you have never used kubectl before, it should already be selected as the only option.

$ kubectl config current-context
minikube

$ kubectl config get-contexts
CURRENT   NAME             CLUSTER           AUTHINFO          NAMESPACE
          docker-desktop   docker-desktop    docker-desktop    default
*         minikube         minikube          minikube          default

$ kubectl config use-context minikube
Switched to context "minikube".

The K8s cluster should now be up and running, and we can verify this by looking at its resources.

$ kubectl get all -A
NAMESPACE              NAME                                             READY   STATUS      RESTARTS      AGE
ingress-nginx          pod/ingress-nginx-admission-create-hb4rq         0/1     Completed   0             11m
ingress-nginx          pod/ingress-nginx-admission-patch-hz9hq          0/1     Completed   2             11m
ingress-nginx          pod/ingress-nginx-controller-7799c6795f-cnj8n    1/1     Running     0             11m
kube-system            pod/coredns-5d78c9869d-qjw4m                     1/1     Running     0             13m
kube-system            pod/etcd-minikube                                1/1     Running     0             14m
kube-system            pod/kube-apiserver-minikube                      1/1     Running     0             14m
kube-system            pod/kube-controller-manager-minikube             1/1     Running     0             14m
kube-system            pod/kube-proxy-52gbh                             1/1     Running     0             13m
kube-system            pod/kube-scheduler-minikube                      1/1     Running     0             14m
kube-system            pod/metrics-server-7746886d4f-xjbwl              1/1     Running     0             11m
kube-system            pod/storage-provisioner                          1/1     Running     1 (13m ago)   14m
kubernetes-dashboard   pod/dashboard-metrics-scraper-5dd9cbfd69-qq9qm   1/1     Running     0             11m
kubernetes-dashboard   pod/kubernetes-dashboard-5c5cfc8747-zs4nx        1/1     Running     0             11m

NAMESPACE              NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default                service/kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      14m
ingress-nginx          service/ingress-nginx-controller             NodePort    10.106.129.227   <none>        80:30736/TCP,443:31016/TCP   11m
ingress-nginx          service/ingress-nginx-controller-admission   ClusterIP   10.97.6.50       <none>        443/TCP                      11m
kube-system            service/kube-dns                             ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       14m
kube-system            service/metrics-server                       ClusterIP   10.110.217.32    <none>        443/TCP                      11m
kubernetes-dashboard   service/dashboard-metrics-scraper            ClusterIP   10.106.106.157   <none>        8000/TCP                     11m
kubernetes-dashboard   service/kubernetes-dashboard                 ClusterIP   10.102.109.102   <none>        80/TCP                       11m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   14m

NAMESPACE              NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx          deployment.apps/ingress-nginx-controller    1/1     1            1           11m
kube-system            deployment.apps/coredns                     1/1     1            1           14m
kube-system            deployment.apps/metrics-server              1/1     1            1           11m
kubernetes-dashboard   deployment.apps/dashboard-metrics-scraper   1/1     1            1           11m
kubernetes-dashboard   deployment.apps/kubernetes-dashboard        1/1     1            1           11m

NAMESPACE              NAME                                                   DESIRED   CURRENT   READY   AGE
ingress-nginx          replicaset.apps/ingress-nginx-controller-7799c6795f    1         1         1       11m
kube-system            replicaset.apps/coredns-5d78c9869d                     1         1         1       13m
kube-system            replicaset.apps/metrics-server-7746886d4f              1         1         1       11m
kubernetes-dashboard   replicaset.apps/dashboard-metrics-scraper-5dd9cbfd69   1         1         1       11m
kubernetes-dashboard   replicaset.apps/kubernetes-dashboard-5c5cfc8747        1         1         1       11m

NAMESPACE       NAME                                       COMPLETIONS   DURATION   AGE
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           16s        11m
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           31s        11m

We can see that the kubernetes-dashboard pod is running, so since we enabled the dashboard add-on we can also view these resources in the Kubernetes Dashboard web app.

$ minikube dashboard
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

K8s Dashboard

Now let's take a look at what all of this means.

Compute

K8s Compute

Pod

The basic building block of Kubernetes is the Pod. Most of the time a pod runs a single container, so it can be loosely thought of as just a container.

💡 Pods do allow multiple containers to be run; the main use case I have encountered is a sidecar container that collects logs from the main container, but multi-container pods are rare in practice.

The main use case for running a pod directly is to exec into a container shell inside the cluster to debug networking or other issues specific to the Kubernetes environment.

$ kubectl run my-shell --rm -i --tty --image busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ #
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # exit
Session ended, resume using 'kubectl attach my-shell -c my-shell -i -t' command when the pod is running
pod "my-shell" deleted
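
For reference, a bare pod can also be declared as a manifest, though you will rarely do this outside of debugging. A minimal sketch (the my-shell-pod.yaml name and the sleep command are just for illustration):

📝 my-shell-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-shell
spec:
  containers:
    - name: my-shell
      image: busybox
      # Keep the container alive so we can exec into it with:
      #   kubectl exec -it my-shell -- /bin/sh
      command: ["sleep", "3600"]

Apply it with kubectl apply -f my-shell-pod.yaml and clean up with kubectl delete pod my-shell.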

Pods are never run independently in production; the main resource used to control pods is called a Deployment.

Deployment

Deployments define which pods/containers to run and how many replicas you want running. Creating a deployment also creates a ReplicaSet, which is responsible for making sure the correct number of pods is running. Each time a deployment is updated, say with a new container image version, a new ReplicaSet is created and the previous one is retained, which allows us to quickly roll back to a previous version in case an update breaks our application.

Create the following deployment spec to run a single container (only 1 replica) using the NGINX 1.24 on Alpine Linux Docker image.

📝 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
  labels:
    app: my-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
        - name: my-server
          image: nginx:1.24-alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 500Mi
              cpu: 500m
            requests:
              memory: 100Mi
              cpu: 100m

Create the deployment, verify the resources have been created and check the logs of the pod container.

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-server created

$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
my-server   1/1     1            1           68s

$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-server-74465678c-776qx   1/1     Running   0          72s

$ kubectl get replicasets
NAME                  DESIRED   CURRENT   READY   AGE
my-server-74465678c   1         1         1       82s

$ kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-server-74465678c-776qx   1/1     Running   0          87s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   67m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-server   1/1     1            1           88s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-server-74465678c   1         1         1       87s

$ kubectl logs my-server-74465678c-776qx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/07 15:31:39 [notice] 1#1: using the "epoll" event method
2023/10/07 15:31:39 [notice] 1#1: nginx/1.24.0
2023/10/07 15:31:39 [notice] 1#1: built by gcc 12.2.1 20220924 (Alpine 12.2.1_git20220924-r4)
2023/10/07 15:31:39 [notice] 1#1: OS: Linux 6.4.16-linuxkit
2023/10/07 15:31:39 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/07 15:31:39 [notice] 1#1: start worker processes
2023/10/07 15:31:39 [notice] 1#1: start worker process 30
2023/10/07 15:31:39 [notice] 1#1: start worker process 31
2023/10/07 15:31:39 [notice] 1#1: start worker process 32
2023/10/07 15:31:39 [notice] 1#1: start worker process 33
2023/10/07 15:31:39 [notice] 1#1: start worker process 34
2023/10/07 15:31:39 [notice] 1#1: start worker process 35
2023/10/07 15:31:39 [notice] 1#1: start worker process 36
2023/10/07 15:31:39 [notice] 1#1: start worker process 37
2023/10/07 15:31:39 [notice] 1#1: start worker process 38
2023/10/07 15:31:39 [notice] 1#1: start worker process 39
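
Alongside kubectl logs, kubectl describe shows a pod's full status plus recent events (scheduling, image pulls, restarts), which is usually the first place to look when a pod will not start. For example, with the pod name from the output above:

$ kubectl describe pod my-server-74465678c-776qx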

Managing Deployments

When a new version of an application we are running is released, we can update our deployment spec to use the new image and apply it the same way. In this case we will update to the NGINX 1.25 image.

📝 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
  labels:
    app: my-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
        - name: my-server
          image: nginx:1.25-alpine # Update image
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 500Mi
              cpu: 500m
            requests:
              memory: 100Mi
              cpu: 100m

Apply the updated config.

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-server configured
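
Editing the manifest is the declarative approach; the same image update can also be made imperatively with kubectl set image, using our deployment and container names. Note that the next kubectl apply of the manifest would revert the change, so this is best for quick experiments:

$ kubectl set image deployment/my-server my-server=nginx:1.25-alpine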

💡 The kubectl get command offers the -o option to print information in different formats. We can use the wide option to show some extra metadata such as image versions in this case.

$ kubectl get --help
Display one or many resources.
...
    -o, --output='':
    Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath,
    jsonpath-as-json, jsonpath-file, custom-columns, custom-columns-file, wide).

We can verify that our deployment was updated with the new image: a new ReplicaSet and pod were created, and the previous ReplicaSet for the 1.24 image is retained (scaled to 0). This is what allows Kubernetes to quickly roll back versions if there are any issues with an update.

$ kubectl get all -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/my-server-6449d849b-cg8hz   1/1     Running   0          2m15s   10.244.0.11   minikube   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   81m   <none>

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deployment.apps/my-server   1/1     1            1           15m   my-server    nginx:1.25-alpine   app=my-server

NAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
replicaset.apps/my-server-6449d849b   1         1         1       2m15s   my-server    nginx:1.25-alpine   app=my-server,pod-template-hash=6449d849b
replicaset.apps/my-server-74465678c   0         0         0       15m     my-server    nginx:1.24-alpine   app=my-server,pod-template-hash=74465678c
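
If the new image had broken the application, the retained ReplicaSet makes a rollback one command away; kubectl rollout can show the revision history and undo the latest update:

$ kubectl rollout history deployment/my-server
$ kubectl rollout undo deployment/my-server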

Running a single container is a great place to start, but as an application's usage grows it will need to be scaled up to handle more load, and this is where Kubernetes really shines. We simply increase the replicas count in our deployment spec, and in a production cluster Kubernetes will automatically schedule the containers onto the best nodes available. Let's increase our NGINX deployment to 3 replicas.

📝 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
  labels:
    app: my-server
spec:
  replicas: 3 # Run with 3 instances
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
        - name: my-server
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 500Mi
              cpu: 500m
            requests:
              memory: 100Mi
              cpu: 100m

Apply this configuration and verify that 3 pods/containers are running.

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-server configured

$ kubectl get all -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/my-server-6449d849b-69c66   1/1     Running   0          3s      10.244.0.12   minikube   <none>           <none>
pod/my-server-6449d849b-cg8hz   1/1     Running   0          7m31s   10.244.0.11   minikube   <none>           <none>
pod/my-server-6449d849b-xs74r   1/1     Running   0          3s      10.244.0.13   minikube   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   86m   <none>

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deployment.apps/my-server   3/3     3            3           20m   my-server    nginx:1.25-alpine   app=my-server

NAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
replicaset.apps/my-server-6449d849b   3         3         3       7m31s   my-server    nginx:1.25-alpine   app=my-server,pod-template-hash=6449d849b
replicaset.apps/my-server-74465678c   0         0         0       20m     my-server    nginx:1.24-alpine   app=my-server,pod-template-hash=74465678c
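
The replica count can also be changed imperatively with kubectl scale, which is handy for quick experiments; as with kubectl set image, the manifest remains the source of truth and a later kubectl apply will reset the count:

$ kubectl scale deployment/my-server --replicas=5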

For applications that are pure backend workers this may be all that is needed, but NGINX is a web server and reverse proxy, so it needs to be reachable over the network.

Networking

K8s Networking

Service

Kubernetes uses Services to connect to running pods. A service is accessible on every node and load balances traffic across the pods/containers wherever they are running.

💡 svc is an abbreviation for service

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   122m

$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   122m

By default a service is created with the type ClusterIP, which means it is only reachable inside the cluster via its cluster IP; this type is typically used in conjunction with a load balancer like HAProxy or a cloud provider's load balancer. To access a service directly via any node's IP address, use the NodePort type, which maps a node port in the 30000-32767 range to the service. The service uses a label selector to figure out which pods to connect to; in this case our deployment is labeled app: my-server. It connects to the pod/container targetPort, which is set to containerPort: 80, and exposes its own port: 8000.

📝 nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-server
  name: my-server-svc
spec:
  type: NodePort
  ports:
    - name: http
      port: 8000
      protocol: TCP
      targetPort: 80
  selector:
    app: my-server # Attach to pods with the same label

Create the service and verify.

$ kubectl apply -f nginx-svc.yaml
service/my-server-svc created

$ kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP          134m   <none>
my-server-svc   NodePort    10.100.20.14   <none>        8000:30444/TCP   32s    app=my-server
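
Inside the cluster, the service is also reachable by its DNS name on the service port, which we can check from a throwaway busybox pod like the one used earlier (a quick sketch; test-client is an arbitrary pod name):

$ kubectl run test-client --rm -i --tty --image busybox -- /bin/sh
/ # wget -qO- http://my-server-svc:8000   # should print the NGINX welcome page HTML
/ # exit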

This service would now be accessible on port 30444 of any node in a production Kubernetes cluster, but since we are using Minikube we need the minikube service command to expose it locally.

$ minikube service my-server-svc --url
http://127.0.0.1:57578
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

Open your web browser to the printed address and you should see the NGINX welcome page; traffic is load balanced across the 3 containers as you refresh the page.

K8s Service NGINX

Ingress Controller

K8s Ingress

In a production environment there will be a load balancer set up in front of an Ingress Controller, a Service Mesh, or some other type of custom router. This lets all traffic be sent to a single load balancer IP address and then be routed to a service based on the domain name or subpath. We are using the NGINX ingress controller here, but service meshes like Istio are becoming increasingly popular since they offer more segmentation, security, and granular control.

💡 Kubernetes has namespaces to separate and organize applications. We have been using the default namespace so far, but the NGINX controller is set up in the ingress-nginx namespace, so to view or access its resources the namespace flag -n ingress-nginx needs to be provided on commands.
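
Namespaces can be listed and created directly with kubectl (the my-team name below is just an illustration):

$ kubectl get namespaces
$ kubectl create namespace my-team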

$ kubectl -n ingress-nginx get all
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-hb4rq        0/1     Completed   0          23h
pod/ingress-nginx-admission-patch-hz9hq         0/1     Completed   2          23h
pod/ingress-nginx-controller-7799c6795f-cnj8n   1/1     Running     0          23h

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.106.129.227   <none>        80:30736/TCP,443:31016/TCP   23h
service/ingress-nginx-controller-admission   ClusterIP   10.97.6.50       <none>        443/TCP                      23h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           23h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-7799c6795f   1         1         1       23h

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           16s        23h
job.batch/ingress-nginx-admission-patch    1/1           31s        23h

We already enabled the ingress add-on for Minikube, which sets up an ingress controller, so now we can create an Ingress definition to route traffic to our service when a user visits the /myApp subpath.

📝 nginx-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-server-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /myApp
            pathType: Prefix
            backend:
              service:
                name: my-server-svc # Service to route to
                port:
                  number: 8000 # Match the service port defined in nginx-svc.yaml

Apply the ingress definition and verify.

$ kubectl apply -f nginx-ingress.yaml
ingress.networking.k8s.io/my-server-ingress created

$ kubectl get ingress
NAME                CLASS   HOSTS   ADDRESS   PORTS   AGE
my-server-ingress   nginx   *                 80      12s
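
Ingress rules can also match on a hostname instead of, or in addition to, a path. A minimal sketch, assuming a hypothetical myapp.example.com domain pointed at the load balancer:

📝 nginx-host-ingress.yaml (illustrative)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-server-host-ingress
spec:
  rules:
    - host: myapp.example.com # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-server-svc
                port:
                  number: 8000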

Since we are using Minikube locally we need to use minikube tunnel to access the ingress controller.

$ minikube tunnel
✅  Tunnel successfully started

📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress my-server-ingress requires privileged ports to be exposed: [80 443]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service my-server-ingress.
Password:

Navigate to http://localhost/myApp in your web browser and we see that the ingress controller has mapped the /myApp route to our NGINX application!

Configuration

Our default NGINX server is working great, but real-world applications need their configuration modified with config files, environment variables, and secrets.

Config Map

Config maps are useful for configuration that does not contain sensitive information. There are two common ways to use them: adding key/value pairs to inject simple environment variables, such as key: value, or mapping a full configuration file to a single key, such as an nginx.conf config. Create a new config map for our NGINX server that contains an nginx.conf file. The key: value entry will not actually be used by the server; it is just included to show how simple key/value entries work.

📝 nginx-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-server-cm
data:
  key: value
  nginx.conf: |
    worker_processes 3;
    error_log /dev/stdout info;
    events {
        worker_connections 2048;
    }
    http {
        include /etc/nginx/mime.types;

        server {
            listen 80;

            location / {
                root /www/data;
                try_files $uri /index.html;
            }
        }
    }

Create and verify this config.

$ kubectl apply -f nginx-cm.yaml
configmap/my-server-cm created

$ kubectl get configmaps
NAME               DATA   AGE
kube-root-ca.crt   1      24h
my-server-cm       1      8s

$ kubectl get configmap my-server-cm -o yaml
apiVersion: v1
data:
  key: value
  nginx.conf: |
    worker_processes 3;
    error_log /dev/stdout info;
    events {
        worker_connections 2048;
    }
    http {
        include /etc/nginx/mime.types;

        server {
            listen 80;

            location / {
                root /www/data;
                try_files $uri /index.html;
            }
        }
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"key":"value","nginx.conf":"worker_processes 3;\nerror_log /dev/stdout info;\nevents {\n    worker_connections 2048;\n}\nhttp {\n    include /etc/nginx/mime.types;\n\n    server {\n        listen 80;\n\n        location / {\n            root /www/data;\n            try_files $uri /index.html;\n        }\n    }\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"my-server-cm","namespace":"default"}}
  creationTimestamp: "2023-10-08T14:35:18Z"
  name: my-server-cm
  namespace: default
  resourceVersion: "46978"
  uid: 41dc8ab0-44cf-4ecc-9514-0330eb2ddd90

We can now use this configuration in our NGINX deployment. Key/value entries can be injected as environment variables with envFrom, while file entries are exposed through a volume plus a volumeMounts mount path that makes the content available as a file in the container.

📝 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
  labels:
    app: my-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
        - name: my-server
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 500Mi
              cpu: 500m
            requests:
              memory: 100Mi
              cpu: 100m
          envFrom: # Add CM key/values to environment variables
            - configMapRef:
                name: my-server-cm
          volumeMounts:
            - name: nginx-conf # Maps to file type CM name
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
      volumes:
        - name: nginx-conf # File type CM
          configMap:
            name: my-server-cm
            items:
              - key: nginx.conf
                path: nginx.conf

Apply the updated deployment config.

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-server configured
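
We can confirm the configuration landed by checking one of the pods; kubectl exec accepts a deployment name and picks a pod for us:

$ kubectl exec deploy/my-server -- printenv key            # should print: value
$ kubectl exec deploy/my-server -- cat /etc/nginx/nginx.conf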

Secret

Secrets are similar to config maps except that they are intended for sensitive values like passwords. The main difference in the spec is that the values are Base64 encoded. kubectl supports .env files, which is useful for generating secrets with the proper Base64 encoding. Including the --dry-run=client -o yaml options only prints the spec to the command line without creating the secret; if these options are omitted, the secret is created on the cluster.

📝 .env

MY_SECRET=SuperSecret123
$ kubectl create secret generic example-secret --from-env-file=.env --dry-run=client -o yaml
apiVersion: v1
data:
  MY_SECRET: U3VwZXJTZWNyZXQxMjM=
kind: Secret
metadata:
  creationTimestamp: null
  name: example-secret

After creating a secret, it can be mapped to a deployment the same way as a config map, just using secretRef instead of configMapRef.

envFrom:
  - secretRef:
      name: my-secret
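
Keep in mind that Base64 is an encoding, not encryption; anyone with read access to a secret can recover the value (assuming the secret above was created without --dry-run):

$ kubectl get secret example-secret -o jsonpath='{.data.MY_SECRET}' | base64 --decode
SuperSecret123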

Storage

Persistent Volume

Persistent Volumes map persistent file storage to containers, similar to volumes in Docker, but typically backed by some type of network-attached storage to support the distributed setup. The PersistentVolume definition represents the actual storage medium, which is usually provided by a cloud provider; a common bare-metal solution is an NFS server. Here are all of the Types of Persistent Volumes. A storage provisioner is already enabled in Minikube, so we can go ahead and look at how to use PVs with deployments; a sketch of a manually defined PV is shown below for reference.
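
A minimal sketch of an NFS-backed PersistentVolume; the server address and export path are hypothetical, and Minikube's provisioner makes this unnecessary locally:

📝 nfs-pv.yaml (illustrative)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5 # hypothetical NFS server
    path: /exports/data # hypothetical export path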

Persistent Volume Claim

Persistent Volume Claims are how you claim available storage and attach it to your pod/container. The most important pieces are how much storage space is needed and how the storage will be used, i.e. its access mode; Kubernetes supports a handful of access modes:

  • ReadWriteOnce (RWO) - the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
  • ReadOnlyMany (ROX) - the volume can be mounted as read-only by many nodes.
  • ReadWriteMany (RWX) - the volume can be mounted as read-write by many nodes.
  • ReadWriteOncePod (RWOP) - the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read or write to that PVC. This is only supported for CSI volumes and Kubernetes version 1.22+.

A PVC can be created to store HTML files for our NGINX deployment to serve instead of the default test page. There are 3 pods running, each of which needs to read the HTML files from disk, and we also want to be able to copy data to the storage through any pod, so our best option is the ReadWriteMany access mode; 1 gigabyte of storage should be more than enough.

📝 nginx-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-server-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi

Create and verify this config.

$ kubectl apply -f nginx-pvc.yaml
persistentvolumeclaim/my-server-pvc created

$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-0ab341dd-3d4a-478f-b39b-1c5f88638419   1Gi        RWX            Delete           Bound    default/my-server-pvc   standard                6s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/my-server-pvc   Bound    pvc-0ab341dd-3d4a-478f-b39b-1c5f88638419   1Gi        RWX            standard       7s

Update the NGINX deployment and attach the persistent volume to the /www/data directory defined in the NGINX config.

📝 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-server
  labels:
    app: my-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-server
  template:
    metadata:
      labels:
        app: my-server
    spec:
      containers:
        - name: my-server
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 500Mi
              cpu: 500m
            requests:
              memory: 100Mi
              cpu: 100m
          envFrom:
            - configMapRef:
                name: my-server-cm
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
            - name: data # Maps to PVC name
              mountPath: /www/data
              subPath: data
      volumes:
        - name: nginx-conf
          configMap:
            name: my-server-cm
            items:
              - key: nginx.conf
                path: nginx.conf
        - name: data # PVC
          persistentVolumeClaim:
            claimName: my-server-pvc

Apply the updated deployment config.

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/my-server configured

Now that the volume is bound to the containers, we can use the kubectl cp command to add a basic index.html file to the server. The file can be copied through any of the NGINX pods since they all mount the same volume.

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
my-server-5556d46697-dstnp   1/1     Running   0          10m
my-server-5556d46697-hlst4   1/1     Running   0          10m
my-server-5556d46697-kwtcf   1/1     Running   0          10m

$ kubectl cp index.html my-server-5556d46697-dstnp:/www/data
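
Because all three pods mount the same volume, we can confirm the file is visible from a different pod than the one we copied through:

$ kubectl exec my-server-5556d46697-hlst4 -- ls /www/data   # should list: index.html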

Make sure your Minikube tunnel is still running, otherwise restart it.

$ minikube tunnel
✅  Tunnel successfully started

📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress my-server-ingress requires privileged ports to be exposed: [80 443]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service my-server-ingress.
Password:

Navigate back to http://localhost/myApp in your web browser and we can see that our custom HTML website is now being served! 🎉

K8s NGINX HTML

Production Kubernetes Providers

These are the core concepts for building Kubernetes cloud applications, but a real application will not be run on your local computer with Minikube. The most popular cloud providers all offer managed Kubernetes clusters for deploying production applications, notably Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).

Coming from Docker Swarm, this is a detailed hands-on intro 🏅