Ježek

Kubernetes by Example

The article explores Kubernetes through practical examples.

The content is structured for clarity and depth while staying focused on real-world use cases. Through the hands-on approach followed throughout, you will gain a solid understanding of core Kubernetes concepts such as pod management and service discovery.

This article was inspired by the book Kubernetes in Action by Marko Lukša, and the official Kubernetes Documentation served as the primary reference while preparing it. I strongly recommend familiarizing yourself with both of these references in advance.

Enjoy!



Kubernetes in Docker

kind is a tool for running local Kubernetes clusters using Docker container nodes.

Create a cluster

# kind-cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerPort: 6443
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40000
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40001
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40002
$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.33.1) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Cluster info

$ kind get clusters
kind
$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Cluster nodes

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   39m   v1.33.1
kind-worker          Ready    <none>          39m   v1.33.1
kind-worker2         Ready    <none>          39m   v1.33.1
kind-worker3         Ready    <none>          39m   v1.33.1

Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

Create a pod

Imperative way

$ kubectl run kubia --image=luksa/kubia --port=8080
pod/kubia created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          5m26s

Declarative way

# pod-basic.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-basic.yaml
pod/kubia created
$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          9s

Logs

$ kubectl logs kubia
Kubia server starting...

Logs from specific container in pod:

$ kubectl logs kubia -c kubia
Kubia server starting...

Port forwarding from host to pod

$ kubectl port-forward kubia 30000:8080
Forwarding from 127.0.0.1:30000 -> 8080
Forwarding from [::1]:30000 -> 8080
$ curl -s localhost:30000
You've hit kubia

Labels and Selectors

Labels are key/value pairs that are attached to objects such as Pods.

Labels

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created
$ kubectl get po --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia          1/1     Running   0          4d22h   <none>
kubia-labels   1/1     Running   0          30s     env=dev,tier=backend
$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          20m     backend   dev
$ kubectl label po kubia-labels env=test
error: 'env' already has a value (dev), and --overwrite is false

$ kubectl label po kubia-labels env=test --overwrite
pod/kubia-labels labeled

$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          24m     backend   test

Selectors

$ kubectl get po -l 'env' --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h25m   env=test,tier=backend

$ kubectl get po -l '!env' --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
kubia   1/1     Running   0          5d1h   <none>

$ kubectl get po -l tier=backend --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h28m   env=test,tier=backend
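Besides equality-based selectors, kubectl also accepts set-based selectors. For example, to list pods whose env label is either dev or test (the output depends on the labels currently applied):

$ kubectl get po -l 'env in (dev,test)' --show-labels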

Annotations

You can use annotations to attach arbitrary non-identifying metadata to objects.

# pod-annotations.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-annotations
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-annotations.yaml
pod/kubia-annotations created

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: https://hub.docker.com/
$ kubectl annotate pod/kubia-annotations imageregistry=nexus.org --overwrite
pod/kubia-annotations annotated

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: nexus.org

Namespaces

Namespaces provide a mechanism for isolating groups of resources within a single cluster.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d2h
kube-node-lease      Active   5d2h
kube-public          Active   5d2h
kube-system          Active   5d2h
local-path-storage   Active   5d2h
$ kubectl get pods --namespace=default
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          19m
kubia-labels        1/1     Running   0          4h15m
$ kubectl create namespace custom-namespace
namespace/custom-namespace created

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.
$ kubectl run nginx --image=nginx --namespace=custom-namespace
pod/nginx created

$ kubectl get pods --namespace=custom-namespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          61s
$ kubectl config set-context --current --namespace=custom-namespace
Context "kind-kind" modified.

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m57s

$ kubectl config set-context --current --namespace=default
Context "kind-kind" modified.

$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          30m
kubia-labels        1/1     Running   0          4h26m
$ kubectl delete ns custom-namespace
namespace "custom-namespace" deleted

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h
$ kubectl delete po --all
pod "kubia" deleted
pod "kubia-annotations" deleted
pod "kubia-labels" deleted

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h

$ kubectl get pods --namespace=default
No resources found in default namespace.

ReplicaSet

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

# replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia

$ kubectl create -f replicaset.yaml
replicaset.apps/kubia created

$ kubectl get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-5l82z   1/1     Running   0          5s
kubia-bkjwk   1/1     Running   0          5s
kubia-k78j5   1/1     Running   0          5s

$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       64s
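Because the ReplicaSet continuously reconciles the actual state with the desired replica count, deleting one of its pods triggers the creation of a replacement. For example (the pod names in your cluster will differ):

$ kubectl delete po kubia-5l82z
pod "kubia-5l82z" deleted

Running kubectl get po again shows three pods, with a freshly generated name in place of the deleted one.
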
$ kubectl delete rs kubia
replicaset.apps "kubia" deleted

$ kubectl get rs
No resources found in default namespace.

$ kubectl get po
NAME          READY   STATUS        RESTARTS   AGE
kubia-5l82z   1/1     Terminating   0          5m30s
kubia-bkjwk   1/1     Terminating   0          5m30s
kubia-k78j5   1/1     Terminating   0          5m30s

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.

# daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1
$ kubectl create -f daemonset.yaml
daemonset.apps/fluentd created

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   0         0         0       0            0           disk=ssd        115s

$ kubectl get po
No resources found in default namespace.

$ kubectl get node
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   5d21h   v1.33.1
kind-worker          Ready    <none>          5d21h   v1.33.1
kind-worker2         Ready    <none>          5d21h   v1.33.1
kind-worker3         Ready    <none>          5d21h   v1.33.1

$ kubectl label node kind-worker3 disk=ssd
node/kind-worker3 labeled

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   1         1         1       1            1           disk=ssd        3m49s

$ kubectl get po
NAME            READY   STATUS    RESTARTS   AGE
fluentd-cslcb   1/1     Running   0          39s
$ kubectl delete ds fluentd
daemonset.apps "fluentd" deleted

$ kubectl get ds
No resources found in default namespace.

$ kubectl get po
No resources found in default namespace.

Jobs

Jobs represent one-off tasks that run to completion and then stop.

# job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
$ kubectl create -f job.yaml
job.batch/pi created

$ kubectl get jobs
NAME   STATUS    COMPLETIONS   DURATION   AGE
pi     Running   0/1           34s        34s

$ kubectl get jobs
NAME   STATUS     COMPLETIONS   DURATION   AGE
pi     Complete   1/1           54s        62s

$ kubectl get po
NAME       READY   STATUS      RESTARTS   AGE
pi-8rdmn   0/1     Completed   0          2m1s

$ kubectl events pod/pi-8rdmn
LAST SEEN   TYPE     REASON             OBJECT         MESSAGE
3m44s       Normal   Scheduled          Pod/pi-8rdmn   Successfully assigned default/pi-8rdmn to kind-worker2
3m44s       Normal   Pulling            Pod/pi-8rdmn   Pulling image "perl:5.34.0"
3m44s       Normal   SuccessfulCreate   Job/pi         Created pod: pi-8rdmn
2m59s       Normal   Pulled             Pod/pi-8rdmn   Successfully pulled image "perl:5.34.0" in 44.842s (44.842s including waiting). Image size: 336374010 bytes.
2m59s       Normal   Created            Pod/pi-8rdmn   Created container: pi
2m59s       Normal   Started            Pod/pi-8rdmn   Started container pi
2m50s       Normal   Completed          Job/pi         Job completed
$ kubectl delete job/pi
job.batch "pi" deleted

$ kubectl get po
No resources found in default namespace.
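A Job can also run its task several times, optionally in parallel, via the completions and parallelism fields. As a sketch, adding these to the spec above would run the pi computation five times, two pods at a time:

spec:
  completions: 5
  parallelism: 2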

CronJob

CronJob starts one-time Jobs on a repeating schedule.
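
The schedule field uses standard five-field cron syntax (minute, hour, day of month, month, day of week), so the "* * * * *" below fires every minute. To run every five minutes instead, you would write:

schedule: "*/5 * * * *"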

# cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl create -f cronjob.yaml
cronjob.batch/hello created

$ kubectl get cronjobs
NAME    SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   <none>     False     0        8s              55s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          30s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          106s
hello-29223075-9r7kx   0/1     Completed   0          46s
$ kubectl delete cronjobs/hello
cronjob.batch "hello" deleted

$ kubectl get cronjobs
No resources found in default namespace.

$ kubectl get pods
No resources found in default namespace.

Service

Service is a method for exposing a network application that is running as one or more Pods in your cluster.

There are several Service types supported in Kubernetes:

  • ClusterIP
  • NodePort
  • ExternalName
  • LoadBalancer

ClusterIP

Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
# service-basic.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  selector:
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created

$ kubectl create -f service-basic.yaml
service/kubia-svc created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21d
kubia-svc    ClusterIP   10.96.158.86   <none>        80/TCP    5s

$ kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
kubia-labels   1/1     Running   0          116s

$ kubectl exec kubia-labels -- curl -s http://10.96.158.86:80
You've hit kubia-labels
$ kubectl delete -f service-basic.yaml
service "kubia-svc" deleted

$ kubectl delete -f pod-labels.yaml
pod "kubia-labels" deleted
# pod-nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: http-web-svc
# service-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-web-svc
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx.yaml
service/nginx-svc created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m51s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    21d
nginx-svc    ClusterIP   10.96.230.243   <none>        8080/TCP   32s

$ kubectl exec nginx -- curl -sI http://10.96.230.243:8080
HTTP/1.1 200 OK
Server: nginx/1.28.0
Date: Thu, 07 Aug 2025 12:09:24 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 23 Apr 2025 11:48:54 GMT
Connection: keep-alive
ETag: "6808d3a6-267"
Accept-Ranges: bytes

$ kubectl exec nginx -- curl -sI http://nginx-svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc.cluster.local:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

ExternalName

Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.

# service-ext.yaml

apiVersion: v1
kind: Service
metadata:
  name: httpbin-service
spec:
  type: ExternalName
  externalName: httpbin.org
$ kubectl create -f service-ext.yaml
service/httpbin-service created

$ kubectl create -f pod-basic.yaml
pod/kubia created

$ kubectl get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
httpbin-service   ExternalName   <none>       httpbin.org   <none>    4m17s
kubernetes        ClusterIP      10.96.0.1    <none>        443/TCP   22d

$ kubectl exec kubia -- curl -sk -X GET https://httpbin-service/uuid -H "accept: application/json"
{
  "uuid": "6a48fe51-a6b6-4e0a-9ef2-381ba7ea2c69"
}
$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

$ kubectl delete -f service-ext.yaml
service "httpbin-service" deleted

NodePort

Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.

# service-nginx-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - port: 8080
    targetPort: http-web-svc
    nodePort: 30666
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx-nodeport.yaml
service/nginx-svc created

$ kubectl get svc nginx-svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nginx-svc   NodePort   10.96.252.35   <none>        8080:30666/TCP   9s

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                      NAMES
da2c842ddfd6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40000->30666/tcp   kind-worker
16bf718b93b6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   127.0.0.1:6443->6443/tcp   kind-control-plane
bb18cefdb180   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40002->30666/tcp   kind-worker3
42cea7794f0b   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40001->30666/tcp   kind-worker2

$ curl -sI http://localhost:40000 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40001 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40002 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx-nodeport.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

LoadBalancer

Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

Let's take a look at how to get a Service of type LoadBalancer working in a kind cluster using Cloud Provider KIND.

# service-lb-demo.yaml

kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: foo-app

---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: bar-app

---
kind: Service
apiVersion: v1
metadata:
  name: http-echo-service
spec:
  type: LoadBalancer
  selector:
    app: http-echo
  ports:
  - port: 5678
    targetPort: 8080
$ kubectl create -f service-lb-demo.yaml
pod/foo-app created
pod/bar-app created
service/http-echo-service created

$ kubectl get svc http-echo-service
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
http-echo-service   LoadBalancer   10.96.97.99   172.18.0.6    5678:31196/TCP   58s

$ kubectl get svc http-echo-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ for _ in {1..4}; do curl -s 172.18.0.6:5678; echo; done
foo-app
bar-app
bar-app
foo-app
$ kubectl delete -f service-lb-demo.yaml
pod "foo-app" deleted
pod "bar-app" deleted
service "http-echo-service" deleted

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.

Ingress Controller

In order for an Ingress to work in your cluster, there must be an Ingress Controller running.

You have to run Cloud Provider KIND to enable the load balancer controller, which the NGINX Ingress Controller will use through the LoadBalancer API in a kind cluster.

$ kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
$ kubectl wait --namespace ingress-nginx \
>   --for=condition=ready pod \
>   --selector=app.kubernetes.io/component=controller \
>   --timeout=90s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w condition met
$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-ldc97        0/1     Completed   0          2m25s
pod/ingress-nginx-admission-patch-zzlh7         0/1     Completed   0          2m25s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w   1/1     Running     0          2m25s

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   2m25s
service/ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      2m25s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m25s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-86bb9f8d4b   1         1         1       2m25s

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           11s        2m25s
job.batch/ingress-nginx-admission-patch    Complete   1/1           12s        2m25s

Ingress resources

The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API. Traffic routing is controlled by rules defined on the Ingress resource.

Basic usage

# pod-foo-bar.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-foo
  labels:
    app: foo
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
      name: http-port

---
apiVersion: v1
kind: Pod
metadata:
  name: kubia-bar
  labels:
    app: bar
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      name: http-port
# service-foo-bar.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-foo-svc
spec:
  selector:
    app: foo
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port

---
apiVersion: v1
kind: Service
metadata:
  name: kubia-bar-svc
spec:
  selector:
    app: bar
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port
# ingress-basic.yaml 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f pod-foo-bar.yaml
pod/kubia-foo created
pod/kubia-bar created

$ kubectl create -f service-foo-bar.yaml
service/kubia-foo-svc created
service/kubia-bar-svc created

$ kubectl create -f ingress-basic.yaml
ingress.networking.k8s.io/kubia created
$ kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   22d
kubia-bar-svc   ClusterIP   10.96.230.115   <none>        80/TCP    4m12s
kubia-foo-svc   ClusterIP   10.96.49.21     <none>        80/TCP    4m13s

$ kubectl get ingress
NAME    CLASS    HOSTS   ADDRESS     PORTS   AGE
kubia   <none>   *       localhost   80      67s

$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   63m
ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      63m
$ kubectl get services \
>    --namespace ingress-nginx \
>    ingress-nginx-controller \
>    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ curl -s http://172.18.0.6:80/foo
You've hit kubia-foo

$ curl -s http://172.18.0.6:80/bar
You've hit kubia-bar

$ curl -s http://172.18.0.6:80/baz
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

In order to reach the Ingress at the localhost address (curl http://localhost/foo), you should define extraPortMappings in the kind cluster configuration, as described in Extra Port Mappings.
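
A minimal sketch of such a configuration, based on the kind documentation (mapping host ports 80 and 443 on the control-plane node is an assumption taken from the kind ingress guide):

# kind-cluster-ingress.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443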

$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

Using a host

# ingress-hosts.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
  - host: bar.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f ingress-hosts.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS                         ADDRESS     PORTS   AGE
kubia   <none>   foo.kubia.com,bar.kubia.com   localhost   80      103s
$ curl -s http://172.18.0.6
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

$ curl -s http://172.18.0.6 -H 'Host: foo.kubia.com'
You've hit kubia-foo

$ curl -s http://172.18.0.6 -H 'Host: bar.kubia.com'
You've hit kubia-bar
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

TLS

You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.

$ openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
............................................+++++
............+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -days 360 -subj //CN=foo.kubia.com

$ kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
secret/tls-secret created
# ingress-tls.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
      - foo.kubia.com
    secretName: tls-secret
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
$ kubectl create -f ingress-tls.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS           ADDRESS     PORTS     AGE
kubia   <none>   foo.kubia.com   localhost   80, 443   2m13s

$ curl -sk https://172.18.0.6:443 -H 'Host: foo.kubia.com'
You've hit kubia-foo
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

$ kubectl delete secret/tls-secret
secret "tls-secret" deleted

$ kubectl delete -f pod-foo-bar.yaml
pod "kubia-foo" deleted
pod "kubia-bar" deleted

$ kubectl delete -f service-foo-bar.yaml
service "kubia-foo-svc" deleted
service "kubia-bar-svc" deleted

Probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.
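
Four check mechanisms are available: exec, httpGet, tcpSocket, and grpc. The examples below use httpGet and exec; a tcpSocket check, which only verifies that a port accepts connections, would look like this sketch:

livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20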

livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container.

# pod-liveness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
$ kubectl create -f pod-liveness-probe.yaml
pod/kubia-liveness created

$ kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          42s

$ kubectl events pod/kubia-liveness
LAST SEEN          TYPE      REASON      OBJECT               MESSAGE
113s               Normal    Scheduled   Pod/kubia-liveness   Successfully assigned default/kubia-liveness to kind-worker3
112s               Normal    Pulling     Pod/kubia-liveness   Pulling image "luksa/kubia-unhealthy"
77s                Normal    Pulled      Pod/kubia-liveness   Successfully pulled image "luksa/kubia-unhealthy" in 34.865s (34.865s including waiting). Image size: 263841919 bytes.
77s                Normal    Created     Pod/kubia-liveness   Created container: kubia
77s                Normal    Started     Pod/kubia-liveness   Started container kubia
2s (x3 over 22s)   Warning   Unhealthy   Pod/kubia-liveness   Liveness probe failed: HTTP probe failed with statuscode: 500
2s                 Normal    Killing     Pod/kubia-liveness   Container kubia failed liveness probe, will be restarted

$ kubectl get po
NAME             READY   STATUS    RESTARTS      AGE
kubia-liveness   1/1     Running   1 (20s ago)   2m41s

readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the EndpointSlice controller removes the Pod's IP address from the EndpointSlices of all Services that match the Pod.

# pod-readiness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-readiness
  labels:
    app: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 10
      periodSeconds: 5
    ports:
    - containerPort: 8080
      name: http-web
# service-readiness-probe.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  type: LoadBalancer
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: http-web
$ kubectl create -f pod-readiness-probe.yaml
pod/kubia-readiness created

$ kubectl create -f service-readiness-probe.yaml
service/kubia-svc created

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   0/1     Running   0          23s

$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        23d
kubia-svc    LoadBalancer   10.96.150.51   172.18.0.7    80:31868/TCP   33s
$ kubectl exec kubia-readiness -- curl -s http://localhost:8080
You've hit kubia-readiness

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
command terminated with exit code 7

$ curl -sv http://172.18.0.7:80
*   Trying 172.18.0.7:80...
* Connected to 172.18.0.7 (172.18.0.7) port 80 (#0)
> GET / HTTP/1.1
> Host: 172.18.0.7
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
$ kubectl exec kubia-readiness -- touch /tmp/ready

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   1/1     Running   0          2m38s

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
You've hit kubia-readiness

$ curl -s http://172.18.0.7:80
You've hit kubia-readiness
$ kubectl delete -f pod-readiness-probe.yaml
pod "kubia-readiness" deleted

$ kubectl delete -f service-readiness-probe.yaml
service "kubia-svc" deleted

startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container.

ports:
- name: liveness-port
  containerPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10

For more information about configuring probes, see Configure Liveness, Readiness and Startup Probes


Volumes

Kubernetes volumes provide a way for containers in a pod to access and share data via the filesystem. Data sharing can be between different local processes within a container, or between different containers, or between Pods.

Kubernetes supports several types of volumes.

Ephemeral Volumes

Ephemeral volumes are temporary storage that are intrinsically linked to the lifecycle of a Pod. Ephemeral volumes are designed for scenarios where data persistence is not required beyond the life of a single Pod.

Kubernetes supports several different kinds of ephemeral volumes for different purposes: emptyDir, configmap, downwardAPI, secret, image, CSI

emptyDir

For a Pod that defines an emptyDir volume, the volume is created when the Pod is assigned to a node. The emptyDir volume is initially empty.

# pod-volume-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /tmp-cache
      name: tmp
  volumes:
  - name: tmp
    emptyDir: {}
$ kubectl create -f pod-volume-emptydir.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxrwxrwx   2 root root 4096 Aug 11 08:13 tmp-cache
$ kubectl delete -f pod-volume-emptydir.yaml
pod "nginx" deleted

You can create a volume in memory using the tmpfs file system:

  - name: tmp
    emptyDir:
      sizeLimit: 500Mi
      medium: Memory

Projected Volumes

A projected volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected: secret, downwardAPI, configMap, serviceAccountToken, clusterTrustBundle
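
As a sketch, a single projected volume can combine a Secret and a ConfigMap under one mount point (my-secret and my-config are hypothetical names, assumed to already exist):

# pod-volume-projected.yaml

apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: app
    image: busybox:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: my-secret
      - configMap:
          name: my-config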

Persistent Volumes

Persistent volumes offer durable storage, meaning the data stored within them persists even after the associated Pods are deleted, restarted, or rescheduled.

PersistentVolume

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins: csi, fc, iscsi, local, nfs, hostPath

hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.

# pod-volume-hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    hostPath:
      path: /data/cache
      type: DirectoryOrCreate
$ kubectl create -f pod-volume-hostpath.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxr-xr-x   2 root root 4096 Aug 11 12:27 cache
$ kubectl delete -f pod-volume-hostpath.yaml
pod "nginx" deleted
# pv-hostpath.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis
spec:
  capacity: 
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/redis
$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Available                          <unset>                          44s

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request for storage by a user. A PersistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.

# pvc-basic.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-redis
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 6s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Bound    default/pvc-redis                  <unset>                          28s
# pod-pvc.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis 
spec:
  containers:
  - name: redis
    image: redis:6.2
    volumeMounts:
    - name: redis-rdb
      mountPath: /data
  volumes:
  - name: redis-rdb
    persistentVolumeClaim:
      claimName: pvc-redis
$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po redis -o jsonpath='{.spec.volumes[?(@.name == "redis-rdb")]}'
{"name":"redis-rdb","persistentVolumeClaim":{"claimName":"pvc-redis"}}
$ kubectl exec redis -- redis-cli save
OK

$ kubectl get po redis -o jsonpath='{.spec.nodeName}'
kind-worker2

$ docker exec kind-worker2 ls -l /data/redis
total 4
-rw------- 1 999 systemd-journal 102 Aug 11 14:47 dump.rdb
$ kubectl delete po/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          37m

$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 9s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          40m

$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
redis   0/1     Pending   0          92s

$ kubectl events pod/redis
LAST SEEN             TYPE      REASON             OBJECT                            MESSAGE
37m                   Normal    Scheduled          Pod/redis                         Successfully assigned default/redis to kind-worker2
37m                   Normal    Pulling            Pod/redis                         Pulling image "redis:6.2"
37m                   Normal    Pulled             Pod/redis                         Successfully pulled image "redis:6.2" in 5.993s (5.993s including waiting). Image size: 40179474 bytes.
37m                   Normal    Created            Pod/redis                         Created container: redis
37m                   Normal    Started            Pod/redis                         Started container redis
6m57s                 Normal    Killing            Pod/redis                         Stopping container redis
2m4s                  Warning   FailedScheduling   Pod/redis                         0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
8s (x16 over 3m51s)   Normal    FailedBinding      PersistentVolumeClaim/pvc-redis   no persistent volumes available for this claim and no storage class is set
$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 61s

$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 2m2s
$ kubectl delete pod/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

Dynamic Volume Provisioning

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes.

StorageClass

A StorageClass provides a way for administrators to describe the classes of storage they offer.

$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  25d
# storageclass-local-path.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: rancher.io/local-path
volumeBindingMode: Immediate
$ kubectl create -f storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  26d
storageclass-redis   rancher.io/local-path   Delete          Immediate              false                  26s

# pvc-sc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-redis
  annotations:
    volume.kubernetes.io/selected-node: kind-worker
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: storageclass-redis
$ kubectl create -f pvc-sc.yaml
persistentvolumeclaim/pvc-dynamic-redis created

$ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Pending                                      storageclass-redis   <unset>                 8s

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Bound    pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            storageclass-redis   <unset>                 26s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS         VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            Delete           Bound    default/pvc-dynamic-redis   storageclass-redis   <unset>                          47s

$ kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io "storageclass-redis" deleted

$ kubectl get pv
No resources found

ConfigMaps

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

Creating ConfigMaps

Imperative way

# application.properties
server.port=8080
spring.profiles.active=development
$ kubectl create configmap my-config \
    --from-literal=foo=bar \
    --from-file=app.props=application.properties
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      61s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |-
    # application.properties
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2025-09-15T20:20:44Z"
  name: my-config
  namespace: default
  resourceVersion: "3636455"
  uid: 9c68ecb1-55ca-469a-b09e-3e1b625cd69b
$ kubectl delete cm my-config
configmap "my-config" deleted

Declarative way

# cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
$ kubectl apply -f cm.yaml
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      19s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"app.props":"server.port=8080\nspring.profiles.active=development\n","foo":"bar"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"my-config","namespace":"default"}}
  creationTimestamp: "2025-09-15T20:27:51Z"
  name: my-config
  namespace: default
  resourceVersion: "3637203"
  uid: a8d9fce1-f2bd-470c-93a2-3a7fcc560bbc

Using ConfigMaps

Consuming an environment variable by a reference key

# pod-cm-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "MY_VAR"]
      image: busybox:latest
      env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: foo
$ kubectl apply -f pod-cm-env.yaml
pod/env-configmap created

$ kubectl logs pod/env-configmap
bar
$ kubectl delete -f pod-cm-env.yaml
pod "env-configmap" deleted
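Kubernetes also expands $(VAR) references in a container's command and args, so the same ConfigMap value can be passed as a command-line argument. A sketch, reusing the MY_VAR definition above:

command: ["/bin/echo", "$(MY_VAR)"]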

Consuming all environment variables from the ConfigMap

# pod-cm-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-from-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "config_foo"]
      image: busybox:latest
      envFrom:
        - prefix: config_
          configMapRef:
            name: my-config
$ kubectl apply -f pod-cm-envfrom.yaml
pod/env-from-configmap created

$ kubectl logs pod/env-from-configmap
bar
$ kubectl delete -f pod-cm-envfrom.yaml
pod "env-from-configmap" deleted

Using configMap volume

# pod-cm-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volumemount
spec:
  containers:
    - name: app
      command: ["cat", "/etc/props/app.props"]
      image: busybox:latest
      volumeMounts:
        - name: app-props
          mountPath: "/etc/props"
          readOnly: true
  volumes:
  - name: app-props
    configMap:
      name: my-config
$ kubectl apply -f pod-cm-volumemount.yaml
pod/configmap-volumemount created

$ kubectl logs pod/configmap-volumemount
server.port=8080
spring.profiles.active=development
$ kubectl delete -f pod-cm-volumemount.yaml
pod "configmap-volumemount" deleted

Using configMap volume with items

# pod-cm-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-items
spec:
  containers:
    - name: app
      command: ["cat", "/etc/configs/app.conf"]
      image: busybox:latest
      volumeMounts:
        - name: config
          mountPath: "/etc/configs"
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-config
        items:
          - key: foo
            path: app.conf
$ kubectl apply -f pod-cm-volume-items.yaml
pod/configmap-volume-items created

$ kubectl logs pod/configmap-volume-items
bar
$ kubectl delete -f pod-cm-volume-items.yaml
pod "configmap-volume-items" deleted

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Default Secrets in a Pod

Every Pod automatically receives a projected volume with its ServiceAccount credentials (token, cluster CA certificate, and namespace) mounted at /var/run/secrets/kubernetes.io/serviceaccount, as the queries below show.

# pod-basic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-basic.yaml
pod/kubia created

$ kubectl get po/kubia -o=jsonpath='{.spec.containers[0].volumeMounts}'
[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-jd9vq","readOnly":true}]

$ kubectl get po/kubia -o=jsonpath='{.spec.volumes[?(@.name == "kube-api-access-jd9vq")].projected.sources}'
[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"items":[{"key":"ca.crt","path":"ca.crt"}],"name":"kube-root-ca.crt"}},{"downwardAPI":{"items":[{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"},"path":"namespace"}]}}]

$ kubectl exec po/kubia -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

Enter fullscreen mode Exit fullscreen mode

Creating Secrets

Imperative way

Opaque Secrets
$ kubectl create secret generic empty-secret
secret/empty-secret created

$ kubectl get secret empty-secret
NAME           TYPE     DATA   AGE
empty-secret   Opaque   0      9s

$ kubectl get secret/empty-secret -o yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:19:07Z"
  name: empty-secret
  namespace: default
  resourceVersion: "6290557"
  uid: 031d7f8d-e96d-4e03-a90f-2cb96308354b
type: Opaque

$ kubectl delete secret/empty-secret
secret "empty-secret" deleted
Enter fullscreen mode Exit fullscreen mode
$ openssl genrsa -out tls.key
Generating RSA private key, 2048 bit long modulus (2 primes)
...............................................................+++++
.................................+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -subj /CN=kubia.com

$ kubectl create secret generic kubia-secret --from-file=tls.key --from-file=tls.crt
secret/kubia-secret created

$ kubectl get secret/kubia-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:26:21Z"
  name: kubia-secret
  namespace: default
  resourceVersion: "6291327"
  uid: a06d4be4-3e21-47ea-8009-d300c1c449f9
type: Opaque

$ kubectl delete secret/kubia-secret
secret "kubia-secret" deleted
Enter fullscreen mode Exit fullscreen mode
$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl get secret/test-secret -o yaml
apiVersion: v1
data:
  password: Mzk1MjgkdmRnN0pi
  username: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T18:21:28Z"
  name: test-secret
  namespace: default
  resourceVersion: "6297117"
  uid: 215daac1-7305-43f4-91c6-c7dbdeca2802
type: Opaque

$ kubectl delete secret/test-secret
secret "test-secret" deleted
Enter fullscreen mode Exit fullscreen mode
TLS Secrets
$ kubectl create secret tls my-tls-secret --key=tls.key --cert=tls.crt
secret/my-tls-secret created

$ kubectl get secret/my-tls-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:37:45Z"
  name: my-tls-secret
  namespace: default
  resourceVersion: "6292515"
  uid: f15b375e-2404-4ca0-a08f-014a0efeec70
type: kubernetes.io/tls

$ kubectl delete secret/my-tls-secret
secret "my-tls-secret" deleted
Enter fullscreen mode Exit fullscreen mode
Docker config Secrets
$ kubectl create secret docker-registry my-docker-registry-secret --docker-username=robert --docker-password=passw123 --docker-server=nexus.registry.com:5000
secret/my-docker-registry-secret created

$ kubectl get secret/my-docker-registry-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJuZXh1cy5yZWdpc3RyeS5jb206NTAwMCI6eyJ1c2VybmFtZSI6InJvYmVydCIsInBhc3N3b3JkIjoicGFzc3cxMjMiLCJhdXRoIjoiY205aVpYSjBPbkJoYzNOM01USXoifX19
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:44:10Z"
  name: my-docker-registry-secret
  namespace: default
  resourceVersion: "6293203"
  uid: c9d05ef7-8c8c-4e2b-bf6f-27f80a45d545
type: kubernetes.io/dockerconfigjson

$ kubectl delete secret/my-docker-registry-secret
secret "my-docker-registry-secret" deleted
Enter fullscreen mode Exit fullscreen mode

Declarative way

Opaque Secrets
$ echo -n 'my-app' | base64
bXktYXBw

$ echo -n '39528$vdg7Jb' | base64
Mzk1MjgkdmRnN0pi
Enter fullscreen mode Exit fullscreen mode
# opaque-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f opaque-secret.yaml
secret/opaque-secret created

$ kubectl get secrets
NAME            TYPE     DATA   AGE
opaque-secret   Opaque   2      4s

$ kubectl delete -f opaque-secret.yaml
secret "opaque-secret" deleted
Enter fullscreen mode Exit fullscreen mode
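
As an alternative to base64-encoding values by hand, the stringData field accepts plain text and the API server encodes it on write. An equivalent sketch of the manifest above (file name illustrative):

# opaque-secret-stringdata.yaml (illustrative)

apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret
type: Opaque
stringData:
  username: my-app
  password: 39528$vdg7Jb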
Docker config Secrets
# dockercfg-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-dockercfg
type: kubernetes.io/dockercfg
data:
  .dockercfg: |
    eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f dockercfg-secret.yaml
secret/secret-dockercfg created

$ kubectl get secrets
NAME               TYPE                      DATA   AGE
secret-dockercfg   kubernetes.io/dockercfg   1      3s

$ kubectl describe secret/secret-dockercfg
Name:         secret-dockercfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockercfg

Data
====
.dockercfg:  56 bytes

$ kubectl delete -f dockercfg-secret.yaml
secret "secret-dockercfg" deleted
Enter fullscreen mode Exit fullscreen mode
Basic authentication Secret
# basicauth-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: pass1234
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f basicauth-secret.yaml
secret/secret-basic-auth created

$ kubectl get secrets
NAME                TYPE                       DATA   AGE
secret-basic-auth   kubernetes.io/basic-auth   2      3s

$ kubectl describe secret/secret-basic-auth
Name:         secret-basic-auth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/basic-auth

Data
====
password:  8 bytes
username:  5 bytes

$ kubectl delete -f basicauth-secret.yaml
secret "secret-basic-auth" deleted
Enter fullscreen mode Exit fullscreen mode

Using Secrets

Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod.

$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl describe secret test-secret
Name:         test-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes
Enter fullscreen mode Exit fullscreen mode

Using Secrets as files from a Pod

# pod-secret-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-secret-volumemount.yaml
pod/secret-test-pod created

$ kubectl get pod secret-test-pod
NAME              READY   STATUS    RESTARTS   AGE
secret-test-pod   1/1     Running   0          30s

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
password
username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/{username,password}
==> /etc/secret-volume/username <==
admin
==> /etc/secret-volume/password <==
39528$vdg7Jb

$ kubectl delete -f pod-secret-volumemount.yaml
pod "secret-test-pod" deleted
Enter fullscreen mode Exit fullscreen mode
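
File permissions inside a secret volume can be tightened with defaultMode (0644 is the default). A sketch of the volumes section above with a stricter, illustrative mode:

  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        defaultMode: 0400   # files readable by the owner only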
Project Secret keys to specific file paths
# pod-secret-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        items:
          - key: username
            path: my-group/my-username
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-secret-volume-items.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
my-group

$ kubectl exec secret-test-pod -- ls /etc/secret-volume/my-group
my-username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/my-group/my-username
admin

$ kubectl delete -f pod-secret-volume-items.yaml
pod "secret-test-pod" deleted
Enter fullscreen mode Exit fullscreen mode

Using Secrets as environment variables

Define a container environment variable with data from a single Secret
# pod-secret-env-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      env:
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: test-secret
            key: password
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-secret-env-var.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo $SECRET_PASSWORD'
39528$vdg7Jb

$ kubectl delete -f pod-secret-env-var.yaml
pod "secret-test-pod" deleted
Enter fullscreen mode Exit fullscreen mode
Define all of the Secret's data as container environment variables
# pod-secret-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
      - secretRef:
          name: test-secret
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-secret-envfrom.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
username: admin
password: 39528$vdg7Jb

$ kubectl delete -f pod-secret-envfrom.yaml
pod "secret-test-pod" deleted
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete secrets test-secret
secret "test-secret" deleted
Enter fullscreen mode Exit fullscreen mode

Deployments

A Deployment is a high-level resource used to manage and scale applications while ensuring they remain in the desired state. It provides a declarative way to define how many Pods should run, which container images they should use, and how updates should be applied.
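
How updates are applied is governed by spec.strategy. The examples below rely on the default RollingUpdate behaviour, which corresponds to this excerpt (the 25% values are the apps/v1 API defaults):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # extra Pods allowed above the desired count during a rollout
      maxUnavailable: 25%    # Pods that may be unavailable during a rollout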

Creating Deployments

Imperative way

$ kubectl create deployment my-nginx-deployment --image=nginx --replicas=3 --port=80
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           20s

$ kubectl rollout status deployment/my-nginx-deployment
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       2m30s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-d4c9q   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-jdvtf   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-mkjsc   1/1     Running   0          2m58s

$ kubectl port-forward deployments/my-nginx-deployment 80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

$ curl -sI localhost:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
Enter fullscreen mode Exit fullscreen mode
$ kubectl set image deployment/my-nginx-deployment nginx=nginx:1.16.1
deployment.apps/my-nginx-deployment image updated

$ kubectl rollout status deployment/my-nginx-deployment
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           5m13s

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   0         0         0       5m31s
my-nginx-deployment-68b8b6c496   3         3         3       101s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-68b8b6c496-6p9jg   1/1     Running   0          118s
my-nginx-deployment-68b8b6c496-mfcnj   1/1     Running   0          2m2s
my-nginx-deployment-68b8b6c496-ngm4b   1/1     Running   0          2m

$ kubectl get po/my-nginx-deployment-68b8b6c496-6p9jg -o jsonpath='{.spec.containers[0].image}'
nginx:1.16.1
Enter fullscreen mode Exit fullscreen mode
$ kubectl rollout history deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

$ kubectl rollout history deployment/my-nginx-deployment --revision=2
deployment.apps/my-nginx-deployment with revision #2
Pod Template:
  Labels:       app=my-nginx-deployment
        pod-template-hash=68b8b6c496
  Containers:
   nginx:
    Image:      nginx:1.16.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>

$ kubectl rollout undo deployment/my-nginx-deployment --to-revision=1
deployment.apps/my-nginx-deployment rolled back

$ kubectl rollout history deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       11m
my-nginx-deployment-68b8b6c496   0         0         0       7m11s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          71s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          73s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          68s

$ kubectl get po/my-nginx-deployment-677c645895-cr2vd -o jsonpath='{.spec.containers[0].image}'
nginx
Enter fullscreen mode Exit fullscreen mode
$ kubectl scale deployment/my-nginx-deployment --replicas=5
deployment.apps/my-nginx-deployment scaled

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   5/5     5            5           14m

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   5         5         5       14m
my-nginx-deployment-68b8b6c496   0         0         0       10m

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          21s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          4m34s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          4m36s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          4m31s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          21s
Enter fullscreen mode Exit fullscreen mode
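
Scaling can also be automated with a HorizontalPodAutoscaler. Note that the autoscaler needs a metrics provider such as metrics-server, which kind does not ship by default, so the command below is illustrative rather than part of the transcript:

$ kubectl autoscale deployment/my-nginx-deployment --min=3 --max=10 --cpu-percent=80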
$ kubectl rollout pause deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment paused

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          3m14s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          7m27s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          7m29s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          7m24s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          3m14s

$ kubectl scale deployment/my-nginx-deployment --replicas=3
deployment.apps/my-nginx-deployment scaled

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m28s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m30s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m25s

$ kubectl set image deployment/my-nginx-deployment nginx=nginx:1.17.2
deployment.apps/my-nginx-deployment image updated

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m43s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m35s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m30s

$ kubectl rollout resume deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment resumed

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-75c7c977bb-hwx6r   1/1     Running   0          32s
my-nginx-deployment-75c7c977bb-qlfhc   1/1     Running   0          19s
my-nginx-deployment-75c7c977bb-z7l59   1/1     Running   0          43s

$ kubectl get po/my-nginx-deployment-75c7c977bb-hwx6r -o jsonpath='{.spec.containers[0].image}'
nginx:1.17.2
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete deploy/my-nginx-deployment
deployment.apps "my-nginx-deployment" deleted

$ kubectl get deploy
No resources found in default namespace.

$ kubectl get rs
No resources found in default namespace.

$ kubectl get po
No resources found in default namespace.
Enter fullscreen mode Exit fullscreen mode

Declarative way

# deployment-basic.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-basic.yaml
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           10s

$ kubectl rollout status deployment/my-nginx-deployment
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get rs
NAME                           DESIRED   CURRENT   READY   AGE
my-nginx-deployment-96b9d695   3         3         3       31s

$ kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
my-nginx-deployment-96b9d695-7hgx5   1/1     Running   0          33s
my-nginx-deployment-96b9d695-nvb6h   1/1     Running   0          33s
my-nginx-deployment-96b9d695-r5t55   1/1     Running   0          33s

$ kubectl delete -f deployment-basic.yaml
deployment.apps "my-nginx-deployment" deleted
Enter fullscreen mode Exit fullscreen mode
# deployment-probes.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes  1;
    events {
      worker_connections  10240;
    }
    http {
      server {
        listen 80;
        server_name  _;
        location ~ ^/(healthz|readyz)$ {
            add_header Content-Type text/plain;
            return 200 'OK';
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    app: my-nginx
spec:
  progressDeadlineSeconds: 600      # Wait for a deployment to make progress before marking it as stalled
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /healthz          # Endpoint for liveness checks
            port: 80
          initialDelaySeconds: 15   # Wait 15 seconds before first liveness probe
          periodSeconds: 10         # Check every 10 seconds
          timeoutSeconds: 5         # Timeout after 5 seconds
          failureThreshold: 3       # Restart container after 3 consecutive failures
        readinessProbe:
          httpGet:
            path: /readyz           # Endpoint for readiness checks
            port: 80
          initialDelaySeconds: 5    # Wait 5 seconds before first readiness probe
          periodSeconds: 5          # Check every 5 seconds
          timeoutSeconds: 3         # Timeout after 3 seconds
          failureThreshold: 1       # Consider not ready after 1 failure
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
            - key: nginx.conf
              path: nginx.conf
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-probes.yaml
configmap/nginx-conf created
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           12s

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-55bc8948d6   3         3         3       52s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-55bc8948d6-4lhdd   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-mz5tx   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-nfkkx   1/1     Running   0          64s

$ kubectl delete -f deployment-probes.yaml
configmap "nginx-conf" deleted
deployment.apps "my-nginx-deployment" deleted
Enter fullscreen mode Exit fullscreen mode

StatefulSet

A StatefulSet is a resource used to manage stateful applications by providing stable, unique network identifiers, persistent storage, and ordered, graceful deployment and scaling for its Pods. StatefulSets are ideal for applications such as databases that require each replica to have a predictable identity and persistent storage, unlike the stateless applications managed by Deployments.
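
The manifest for storageclass-local-path.yaml is not shown in the transcript; a plausible reconstruction, inferred from the kubectl get sc output below (rancher.io/local-path provisioner, Delete reclaim policy, Immediate binding), is:

# storageclass-local-path.yaml (reconstructed)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: Immediate

Note that, as the kubectl get pvc output further down shows, the claims created by the StatefulSet actually bind through the default standard class, because the volumeClaimTemplates omit storageClassName.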

$ kubectl create -f storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  115d
storageclass-redis   rancher.io/local-path   Delete          Immediate              false                  43s
Enter fullscreen mode Exit fullscreen mode
# statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 0.5Gi
Enter fullscreen mode Exit fullscreen mode
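
The StatefulSet above sets serviceName: redis, but the transcript never creates that Service. Stable per-Pod DNS names (redis-0.redis, redis-1.redis, ...) rely on a matching headless Service; a minimal sketch of what it would look like:

# service-redis-headless.yaml (assumed; matches serviceName above)

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None   # headless: per-Pod DNS records instead of a virtual IP
  selector:
    app: redis
  ports:
  - port: 6379
    name: redis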
$ kubectl apply -f statefulset.yaml
statefulset.apps/redis created

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       <unset>                 3m31s
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       <unset>                 2m36s
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       <unset>                 98s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       <unset>                          3m52s
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       <unset>                          2m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       <unset>                          2m57s

$ kubectl get statefulset/redis
NAME    READY   AGE
redis   3/3     4m19s

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4m55s
redis-1   1/1     Running   0          4m
redis-2   1/1     Running   0          3m2s

$ for i in {0..2}; do kubectl exec "redis-$i" -- sh -c 'hostname'; done
redis-0
redis-1
redis-2
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete pod/redis-0
pod "redis-0" deleted

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4s
redis-1   1/1     Running   0          13m
redis-2   1/1     Running   0          12m
Enter fullscreen mode Exit fullscreen mode
$ kubectl scale statefulset/redis --replicas=4
statefulset.apps/redis scaled

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m53s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   0/1     Pending   0          2s

$ kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m56s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          5s

$ kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m58s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          7s

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m59s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   1/1     Running   0          8s
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete statefulset/redis
statefulset.apps "redis" deleted

$ kubectl get po
No resources found in default namespace.

$ kubectl get statefulsets
No resources found in default namespace.

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            Delete           Bound    default/data-redis-3   standard       <unset>                          4m10s
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       <unset>                          21m
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       <unset>                          19m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       <unset>                          20m

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       <unset>                 22m
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       <unset>                 21m
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       <unset>                 20m
data-redis-3   Bound    pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            standard       <unset>                 4m55s
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io "storageclass-redis" deleted

$ kubectl delete pvc data-redis-{0,1,2,3}
persistentvolumeclaim "data-redis-0" deleted
persistentvolumeclaim "data-redis-1" deleted
persistentvolumeclaim "data-redis-2" deleted
persistentvolumeclaim "data-redis-3" deleted

$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pv
No resources found
Enter fullscreen mode Exit fullscreen mode

ServiceAccount

A ServiceAccount provides an identity for processes and applications running within a Kubernetes cluster. ServiceAccounts are designed for non-human entities like Pods, system components, or external tools that need to interact with the Kubernetes API.

Default ServiceAccount

$ kubectl get sa
NAME      SECRETS   AGE
default   0         116d

$ kubectl apply -f pod-basic.yaml
pod/kubia created

$ kubectl get pod/kubia -o jsonpath='{.spec.serviceAccount}'
default

$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted
Enter fullscreen mode Exit fullscreen mode

Creating a ServiceAccount

$ kubectl create sa my-sa
serviceaccount/my-sa created

$ kubectl get sa
NAME      SECRETS   AGE
default   0         116d
my-sa     0         7s

$ kubectl get sa/my-sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2025-11-10T10:27:44Z"
  name: my-sa
  namespace: default
  resourceVersion: "7002078"
  uid: 487bd1fa-353a-420e-be95-6ee876a277f5

$ kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>
Enter fullscreen mode Exit fullscreen mode
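
On Kubernetes v1.24 and later, a short-lived token for the ServiceAccount can also be requested directly, without creating a Secret; the command prints a time-bound JWT to stdout:

$ kubectl create token my-sa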

Associate a Secret with a ServiceAccount

# secret-sa-token.yaml

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: my-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: my-sa
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f secret-sa-token.yaml
secret/my-sa-token created

$ kubectl get secrets
NAME          TYPE                                  DATA   AGE
my-sa-token   kubernetes.io/service-account-token   3      24s

$ kubectl describe secret/my-sa-token
Name:         my-sa-token
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: my-sa
              kubernetes.io/service-account.uid: 487bd1fa-353a-420e-be95-6ee876a277f5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1107 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

$ kubectl get secret/my-sa-token -o=jsonpath='{.data.token}'  | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

$ kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              my-sa-token
Events:              <none>
Enter fullscreen mode Exit fullscreen mode

Assign a ServiceAccount to a Pod

# pod-sa.yaml

apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  serviceAccountName: my-sa
  automountServiceAccountToken: true
  containers:
  - image: alpine/curl
    name: curl
    command: ["sleep", "9999999"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-sa.yaml
pod/curl created

$ kubectl get po
NAME   READY   STATUS    RESTARTS   AGE
curl   1/1     Running   0          4s

$ kubectl get pod/curl -o jsonpath='{.spec.serviceAccount}'
my-sa

$ kubectl exec -it pod/curl -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...9Wd5ONTHu2VyrTfM6u1FAxC72hKWK0_5zpNg
Enter fullscreen mode Exit fullscreen mode
$ kubectl exec -it pod/curl -- sh
/ # NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ # curl -s https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$NS/pods -H "Authorization: Bearer $TOKEN"
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:my-sa\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
Enter fullscreen mode Exit fullscreen mode

RBAC

Role-Based Access Control (RBAC) is a security model that regulates access to resources based on roles granted to subjects (users, groups, or ServiceAccounts) rather than on permissions assigned to individual identities. In Kubernetes, RBAC governs which subjects may perform which verbs on which API resources.

Role

# rbac-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f rbac-role.yaml -n default
role.rbac.authorization.k8s.io/pod-reader created

$ kubectl get role -n default
NAME         CREATED AT
pod-reader   2025-11-11T09:32:20Z
Enter fullscreen mode Exit fullscreen mode
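
The same Role could also have been created imperatively:

$ kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n default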

RoleBinding

# rbac-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f rbac-rolebinding.yaml -n default
rolebinding.rbac.authorization.k8s.io/read-pods-binding created

$ kubectl get rolebindings -n default
NAME                ROLE              AGE
read-pods-binding   Role/pod-reader   35s
Enter fullscreen mode Exit fullscreen mode
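
Before exec-ing into the Pod, the grant can also be checked from outside by impersonating the ServiceAccount; with the binding in place this should report yes:

$ kubectl auth can-i list pods --as=system:serviceaccount:default:my-sa -n default
yes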

Validate ServiceAccount access

$ kubectl exec -it pod/curl -- sh
/ # NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ # curl -s https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$NS/pods -H "Authorization: Bearer $TOKEN"
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "7148388"
  },
  "items": [
    {
      "metadata": {
        "name": "curl",
        "namespace": "default",
        "uid": "4fc1f8e7-9884-42c3-941e-9e27df563592",
        "resourceVersion": "7007001",
        "generation": 1,
        "creationTimestamp": "2025-11-10T11:13:48Z",
...
    }
  ]
}
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete -f rbac-rolebinding.yaml -n default
rolebinding.rbac.authorization.k8s.io "read-pods-binding" deleted

$ kubectl get rolebindings -n default
No resources found in default namespace.

$ kubectl delete role/pod-reader
role.rbac.authorization.k8s.io "pod-reader" deleted

$ kubectl get roles
No resources found in default namespace.
Enter fullscreen mode Exit fullscreen mode

ClusterRole

# rbac-clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f rbac-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/pv-reader created

$ kubectl describe clusterrole/pv-reader
Name:         pv-reader
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources          Non-Resource URLs  Resource Names  Verbs
  ---------          -----------------  --------------  -----
  persistentvolumes  []                 []              [get list]
Enter fullscreen mode Exit fullscreen mode

ClusterRoleBinding

# rbac-clusterrolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pv-binding
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: pv-reader
  apiGroup: rbac.authorization.k8s.io
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f rbac-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/read-pv-binding created

$ kubectl describe clusterrolebinding/read-pv-binding
Name:         read-pv-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  pv-reader
Subjects:
  Kind            Name   Namespace
  ----            ----   ---------
  ServiceAccount  my-sa  default
Enter fullscreen mode Exit fullscreen mode

Validate ServiceAccount access

$ kubectl exec -it pod/curl -- sh
/ # NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ # curl -s https://kubernetes.default.svc.cluster.local/api/v1/persistentvolumes -H "Authorization: Bearer $TOKEN"
{
  "kind": "PersistentVolumeList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "7162702"
  },
  "items": []
}
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete sa/my-sa
serviceaccount "my-sa" deleted

$ kubectl delete clusterrolebinding/read-pv-binding
clusterrolebinding.rbac.authorization.k8s.io "read-pv-binding" deleted

$ kubectl delete clusterrole/pv-reader
clusterrole.rbac.authorization.k8s.io "pv-reader" deleted

$ kubectl delete po/curl
pod "curl" deleted
Enter fullscreen mode Exit fullscreen mode

Pod Security

The Pod Security Admission (PSA) enforces the Pod Security Standards (PSS).

Kubernetes defines a set of labels that you can apply to a namespace to choose which of the predefined PSS levels should be used.

The per-mode level label indicates which policy level (privileged, baseline, or restricted) to apply for the mode (enforce, audit, or warn).

metadata:
  labels:
    pod-security.kubernetes.io/<mode>: <level>
Enter fullscreen mode Exit fullscreen mode
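
A second, optional label can pin which version of the Pod Security Standards is evaluated for a mode (the version value below is only an example):

metadata:
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.33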

Restricted level with warn mode

# ns-psa-warn-restricted.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: psa-warn-restricted
  labels:
    pod-security.kubernetes.io/warn: restricted
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f ns-psa-warn-restricted.yaml
namespace/psa-warn-restricted created
Enter fullscreen mode Exit fullscreen mode
# pod-warn-restricted.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-warn-restricted
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-warn-restricted.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "busybox" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "busybox" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "busybox" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "busybox" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/busybox created

$ kubectl get po -n psa-warn-restricted
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          32s
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete ns/psa-warn-restricted
namespace "psa-warn-restricted" deleted

$ kubectl get po -n psa-warn-restricted
No resources found in psa-warn-restricted namespace.
Enter fullscreen mode Exit fullscreen mode

Restricted level with enforce mode

# ns-psa-enforce-restricted.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: psa-enforce-restricted
  labels:
    pod-security.kubernetes.io/enforce: restricted
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f ns-psa-enforce-restricted.yaml
namespace/psa-enforce-restricted created
Enter fullscreen mode Exit fullscreen mode
# pod-enforce-restricted.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-enforce-restricted
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-enforce-restricted.yaml
Error from server (Forbidden): error when creating "pod-enforce-restricted.yaml": pods "busybox" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "busybox" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "busybox" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "busybox" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "busybox" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Enter fullscreen mode Exit fullscreen mode
# pod-enforce-restricted-v2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-enforce-restricted
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
      runAsUser: 2000
      runAsGroup: 3000
      seccompProfile:
        type: RuntimeDefault
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-enforce-restricted-v2.yaml
pod/busybox created

$ kubectl get po -n psa-enforce-restricted
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          71s
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete ns/psa-enforce-restricted
namespace "psa-enforce-restricted" deleted

$ kubectl get po -n psa-enforce-restricted
No resources found in psa-enforce-restricted namespace.
Enter fullscreen mode Exit fullscreen mode

Baseline level with enforce mode

# ns-psa-enforce-baseline.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: psa-enforce-baseline
  labels:
    pod-security.kubernetes.io/enforce: baseline
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f ns-psa-enforce-baseline.yaml
namespace/psa-enforce-baseline created
Enter fullscreen mode Exit fullscreen mode
# pod-enforce-baseline.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-enforce-baseline
spec:
  hostNetwork: true
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-enforce-baseline.yaml
Error from server (Forbidden): error when creating "pod-enforce-baseline.yaml": pods "busybox" is forbidden: violates PodSecurity "baseline:latest": host namespaces (hostNetwork=true)
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete -f ns-psa-enforce-baseline.yaml
namespace "psa-enforce-baseline" deleted
Enter fullscreen mode Exit fullscreen mode

Privileged level with enforce mode

# ns-psa-enforce-privileged.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: psa-enforce-privileged
  labels:
    pod-security.kubernetes.io/enforce: privileged
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f ns-psa-enforce-privileged.yaml
namespace/psa-enforce-privileged created
Enter fullscreen mode Exit fullscreen mode
# pod-enforce-privileged.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-enforce-privileged
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  securityContext:
    runAsUser: 0
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      privileged: true
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-enforce-privileged.yaml
pod/busybox created

$ kubectl exec -ti pod/busybox -n psa-enforce-privileged -- ps
PID   USER     TIME  COMMAND
    1 root      0:54 {systemd} /sbin/init
  116 root      1h50 /usr/local/bin/containerd
  185 root      5h06 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.3 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.10 --provider-id=kind://docker/kind/kind-worker3 --runtime-cgroups=/system.slice/containerd.service
  358 root      5:49 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id abfaf0f18c5ce3d14ba5236c34ed8048486151649c0183c0d5228240a64cdc39 -address /run/containerd/containerd.sock
  436 root      4:56 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0c430d300ff99ed691cc7daca4b6276b41621c19739462c7fe1275abf8cd4f93 -address /run/containerd/containerd.sock
  494 65535     0:00 /pause
  554 65535     0:00 /pause
  665 root      5:38 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-worker3
  698 root      8:15 /bin/kindnetd
151332 root      0:01 /lib/systemd/systemd-journald
336224 root      0:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6b7c911e749486aca30a8e963ee6d4781391f73b63490e98cbdbb537fe4b538b -address /run/containerd/containerd.sock
336248 root      0:00 /pause
336274 root      0:00 sleep 1h
336627 root      0:00 ps
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete -f ns-psa-enforce-privileged.yaml
namespace "psa-enforce-privileged" deleted
Enter fullscreen mode Exit fullscreen mode

NetworkPolicy

In Kubernetes, NetworkPolicies control which network traffic is allowed between Pods: they specify how a Pod may communicate with other Pods, namespaces, and external endpoints at the IP address and port level.

kind does not enforce NetworkPolicy out of the box, because its default CNI plugin, kindnetd, is a minimal networking implementation without policy support.

Install a CNI Networking Plugin

Recreating a cluster

Therefore, it is necessary to recreate the kind cluster with the default CNI disabled and to install the Calico CNI plugin instead.

$ kind delete cluster
Deleting cluster "kind" ...
Deleted nodes: ["kind-worker" "kind-control-plane" "kind-worker3" "kind-worker2"]
Enter fullscreen mode Exit fullscreen mode
# kind-cluster-cni.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Enter fullscreen mode Exit fullscreen mode
$ kind create cluster --config kind-cluster-cni.yaml --name=kind-calico
Creating cluster "kind-calico" ...
 • Ensuring node image (kindest/node:v1.33.1) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.33.1) 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind-calico"
You can now use your cluster with:

kubectl cluster-info --context kind-kind-calico

Thanks for using kind! 😊
Enter fullscreen mode Exit fullscreen mode
$ kubectl get no
NAME                        STATUS     ROLES           AGE     VERSION
kind-calico-control-plane   NotReady   control-plane   3m43s   v1.33.1
kind-calico-worker          NotReady   <none>          3m30s   v1.33.1
kind-calico-worker2         NotReady   <none>          3m30s   v1.33.1
kind-calico-worker3         NotReady   <none>          3m30s   v1.33.1
Enter fullscreen mode Exit fullscreen mode

Installing Calico plugin

$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.3/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Enter fullscreen mode Exit fullscreen mode
$ kubectl get pods -l k8s-app=calico-node -A --watch
NAMESPACE     NAME                READY   STATUS     RESTARTS   AGE
kube-system   calico-node-c264z   0/1     Init:2/3   0          61s
kube-system   calico-node-d98t7   0/1     Init:2/3   0          61s
kube-system   calico-node-sps7w   0/1     Init:2/3   0          61s
kube-system   calico-node-ssd8q   0/1     Init:2/3   0          61s
kube-system   calico-node-ssd8q   0/1     PodInitializing   0          89s
kube-system   calico-node-d98t7   0/1     PodInitializing   0          89s
kube-system   calico-node-c264z   0/1     Init:2/3          0          89s
kube-system   calico-node-sps7w   0/1     PodInitializing   0          90s
kube-system   calico-node-ssd8q   0/1     Running           0          90s
kube-system   calico-node-d98t7   0/1     Running           0          90s
kube-system   calico-node-c264z   0/1     PodInitializing   0          90s
kube-system   calico-node-sps7w   0/1     Running           0          91s
kube-system   calico-node-c264z   0/1     Running           0          91s
kube-system   calico-node-d98t7   1/1     Running           0          101s
kube-system   calico-node-ssd8q   1/1     Running           0          102s
kube-system   calico-node-c264z   1/1     Running           0          102s
kube-system   calico-node-sps7w   1/1     Running           0          103s

$ kubectl get no
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   4m28s   v1.33.1
kind-calico-worker          Ready    <none>          4m15s   v1.33.1
kind-calico-worker2         Ready    <none>          4m15s   v1.33.1
kind-calico-worker3         Ready    <none>          4m15s   v1.33.1
Enter fullscreen mode Exit fullscreen mode

Setup a NetworkPolicy

# ns-foo-bar.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: foo
  labels:
    tenant: foo
---
apiVersion: v1
kind: Namespace
metadata:
  name: bar
  labels:
    tenant: bar
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f ns-foo-bar.yaml
namespace/foo created
namespace/bar created
Enter fullscreen mode Exit fullscreen mode
# pod-ns-foo-bar.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-foo
  namespace: foo
  labels:
    tier: foo
spec:
  containers:
  - image: nginx:latest
    name: nginx-foo
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-bar
  namespace: bar
  labels:
    tier: bar
spec:
  containers:
  - image: nginx:latest
    name: nginx-bar
    ports:
    - containerPort: 80
      protocol: TCP
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-ns-foo-bar.yaml
pod/nginx-foo created
pod/nginx-bar created
Enter fullscreen mode Exit fullscreen mode
Ingress
$ kubectl get po -n bar -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-bar   1/1     Running   0          2m16s   192.168.52.65   kind-calico-worker2   <none>           <none>

$ kubectl exec pod/nginx-foo -n foo -- curl -sI http://192.168.52.65:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
Date: Sat, 15 Nov 2025 12:34:19 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Oct 2025 12:05:10 GMT
Connection: keep-alive
ETag: "6900b176-267"
Accept-Ranges: bytes
Enter fullscreen mode Exit fullscreen mode
# networkpolicy-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: bar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f networkpolicy-ingress.yaml
networkpolicy.networking.k8s.io/deny-all created

$ kubectl get networkpolicy -n bar
NAME       POD-SELECTOR   AGE
deny-all   <none>         41s

$ kubectl exec pod/nginx-foo -n foo -- curl -svI http://192.168.52.65:80
*   Trying 192.168.52.65:80...
* connect to 192.168.52.65 port 80 from 192.168.28.193 port 43618 failed: Connection timed out
* Failed to connect to 192.168.52.65 port 80 after 135435 ms: Could not connect to server
* closing connection #0
command terminated with exit code 28
Enter fullscreen mode Exit fullscreen mode
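With deny-all in place, traffic can be selectively re-opened with an allow rule. A sketch (file name hypothetical) that would let Pods in the foo namespace reach nginx-bar again, reusing the tenant label from ns-foo-bar.yaml:

# networkpolicy-allow-foo.yaml (illustrative)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-foo
  namespace: bar
spec:
  podSelector:
    matchLabels:
      tier: bar
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: foo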
Egress
$ kubectl get po -n foo -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE                  NOMINATED NODE   READINESS GATES
nginx-foo   1/1     Running   0          15m   192.168.28.193   kind-calico-worker3   <none>           <none>

$ kubectl exec pod/nginx-bar -n bar -- curl -sI http://192.168.28.193:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
Date: Sat, 15 Nov 2025 12:46:20 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Oct 2025 12:05:10 GMT
Connection: keep-alive
ETag: "6900b176-267"
Accept-Ranges: bytes
Enter fullscreen mode Exit fullscreen mode
# networkpolicy-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-pod-bar-egress
  namespace: bar
spec:
  podSelector:
    matchLabels:
      tier: bar
  policyTypes:
  - Egress
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f networkpolicy-egress.yaml
networkpolicy.networking.k8s.io/deny-pod-bar-egress created

$ kubectl get networkpolicy -n bar
NAME                  POD-SELECTOR   AGE
deny-all              <none>         18m
deny-pod-bar-egress   tier=bar       8s

$ kubectl exec pod/nginx-bar -n bar -- curl -svI http://192.168.28.193:80
*   Trying 192.168.28.193:80...
* connect to 192.168.28.193 port 80 from 192.168.52.65 port 58978 failed: Connection timed out
* Failed to connect to 192.168.28.193 port 80 after 135323 ms: Could not connect to server
* closing connection #0
command terminated with exit code 28
Enter fullscreen mode Exit fullscreen mode
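Note that deny-pod-bar-egress blocks all outbound traffic from the tier: bar pod, including DNS lookups. A common companion rule (shown here only as a sketch with illustrative names; it is not applied in this walkthrough) re-allows egress to the cluster DNS in kube-system on port 53:

# networkpolicy-allow-dns.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: bar
spec:
  podSelector:
    matchLabels:
      tier: bar
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
Enter fullscreen mode Exit fullscreen mode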
$ kubectl delete -n bar networkpolicies deny-all deny-pod-bar-egress
networkpolicy.networking.k8s.io "deny-all" deleted
networkpolicy.networking.k8s.io "deny-pod-bar-egress" deleted

$ kubectl delete ns foo bar
namespace "foo" deleted
namespace "bar" deleted
Enter fullscreen mode Exit fullscreen mode

LimitRange

A LimitRange is a namespaced policy that constrains, and supplies defaults for, the resource allocations of Pods and Containers.

$ kubectl apply -f ns-foo-bar.yaml
namespace/foo created
namespace/bar created
Enter fullscreen mode Exit fullscreen mode
# pod-resource-requests-limits.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:
        cpu: 200m
        memory: 10Mi
      limits:
        cpu: 1
        memory: 2Gi
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-resource-requests-limits.yaml -n foo
pod/nginx created
Enter fullscreen mode Exit fullscreen mode
# limitrange.yaml

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-constraints
spec:
  limits:
  - type: Pod
    max:
      memory: 2Gi
      cpu: 1
    min:
      memory: 50Mi
      cpu: 200m      
  - type: Container
    default:
      cpu: 0.5
    defaultRequest:
      cpu: 0.2
    max:
      memory: 1Gi
      cpu: 800m
    min:
      memory: 50Mi
      cpu: 100m
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f limitrange.yaml -n bar
limitrange/resource-constraints created

$ kubectl get limitranges -n bar
NAME                   CREATED AT
resource-constraints   2025-11-17T23:06:33Z

$ kubectl apply -f pod-resource-requests-limits.yaml -n bar
Error from server (Forbidden): error when creating "pod-resource-requests-limits.yaml": pods "nginx" is forbidden: [minimum memory usage per Pod is 50Mi, but request is 10Mi, minimum memory usage per Container is 50Mi, but request is 10Mi, maximum cpu usage per Container is 800m, but limit is 1, maximum memory usage per Container is 1Gi, but limit is 2Gi]
Enter fullscreen mode Exit fullscreen mode
# pod-basic.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f pod-basic.yaml -n bar
pod/kubia created

$ kubectl get pod/kubia -n bar -o jsonpath={.spec.containers[0].resources}
{"limits":{"cpu":"500m","memory":"1Gi"},"requests":{"cpu":"200m","memory":"1Gi"}}
Enter fullscreen mode Exit fullscreen mode
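The kubia container declares no resources of its own, so the LimitRange's Container defaults were applied on admission: cpu comes from defaultRequest (0.2 → 200m) and default (0.5 → 500m), while memory, for which no default is declared, falls back to the Container max (1Gi) for both the request and the limit. kubectl describe prints the full table; expect output along these lines:

$ kubectl describe limitrange resource-constraints -n bar
Name:       resource-constraints
Namespace:  bar
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Pod         cpu       200m  1     -                -              -
Pod         memory    50Mi  2Gi   -                -              -
Container   cpu       100m  800m  200m             500m           -
Container   memory    50Mi  1Gi   1Gi              1Gi            -
Enter fullscreen mode Exit fullscreen mode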
$ kubectl delete -f ns-foo-bar.yaml
namespace "foo" deleted
namespace "bar" deleted
Enter fullscreen mode Exit fullscreen mode

ResourceQuota

A ResourceQuota is an object that lets cluster administrators cap the aggregate consumption of compute resources (CPU, memory, storage) and the number of API objects within a specific namespace.

$ kubectl create namespace baz
namespace/baz created
Enter fullscreen mode Exit fullscreen mode
# quota.yaml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-quota
spec:
  hard:
    requests.cpu: 500m
    requests.memory: 1Gi
    limits.cpu: 2
    limits.memory: 2Gi
    pods: 3
    secrets: 10
Enter fullscreen mode Exit fullscreen mode
$ kubectl create -f quota.yaml -n baz
resourcequota/compute-resources-quota created

$ kubectl describe -n baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     3
requests.cpu     0     500m
requests.memory  0     1Gi
secrets          0     10
Enter fullscreen mode Exit fullscreen mode
# rs-resource-requests-limits.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: 200m
            memory: 10Mi
          limits:
            cpu: 1
            memory: 2Gi
Enter fullscreen mode Exit fullscreen mode
$ kubectl create -f rs-resource-requests-limits.yaml -n baz
replicaset.apps/nginx created

$ kubectl describe -n baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
--------         ----  ----
limits.cpu       1     2
limits.memory    2Gi   2Gi
pods             1     3
requests.cpu     200m  500m
requests.memory  10Mi  1Gi
secrets          0     10
Enter fullscreen mode Exit fullscreen mode
$ kubectl scale -n baz rs/nginx --replicas 3
replicaset.apps/nginx scaled

$ kubectl describe -n baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
--------         ----  ----
limits.cpu       1     2
limits.memory    2Gi   2Gi
pods             1     3
requests.cpu     200m  500m
requests.memory  10Mi  1Gi
secrets          0     10

$ kubectl get rs -n baz
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         1         1       1m52s

$ kubectl events -n baz
LAST SEEN               TYPE      REASON             OBJECT             MESSAGE
111s                    Normal    Scheduled          Pod/nginx-rwqnq    Successfully assigned baz/nginx-rwqnq to kind-calico-worker2
111s                    Normal    SuccessfulCreate   ReplicaSet/nginx   Created pod: nginx-rwqnq
110s                    Normal    Pulling            Pod/nginx-rwqnq    Pulling image "nginx:latest"
109s                    Normal    Started            Pod/nginx-rwqnq    Started container nginx
109s                    Normal    Created            Pod/nginx-rwqnq    Created container: nginx
109s                    Normal    Pulled             Pod/nginx-rwqnq    Successfully pulled image "nginx:latest" in 1.257s (1.257s including waiting). Image size: 59774010 bytes.
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-rvzhm" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-64g4w" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-lgkm2" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-4776k" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-2trm4" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-c5t9z" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-8jkkk" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
26s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-44vv6" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
26s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods "nginx-4m8jm" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
7s (x4 over 24s)        Warning   FailedCreate       ReplicaSet/nginx   (combined from similar events): Error creating: pods "nginx-k65jl" is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory=2Gi, used: limits.memory=2Gi, limited: limits.memory=2Gi
Enter fullscreen mode Exit fullscreen mode
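A quota can also be restricted to pods matching a scope, such as QoS class or priority. As an illustrative sketch (this manifest is not part of the walkthrough), the following quota would count only BestEffort pods, i.e. pods that declare no requests or limits at all:

# quota-scoped.yaml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-pods-quota
spec:
  hard:
    pods: 2
  scopes:
  - BestEffort
Enter fullscreen mode Exit fullscreen mode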
$ kubectl delete ns/baz
namespace "baz" deleted
Enter fullscreen mode Exit fullscreen mode

HorizontalPodAutoscaler

A HorizontalPodAutoscaler automatically scales the number of Pod replicas in a Deployment, ReplicaSet, or StatefulSet based on observed metrics such as CPU utilization.

Installing the Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for built-in autoscaling pipelines.

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

$ kubectl patch -n kube-system deployment metrics-server --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
deployment.apps/metrics-server patched
Enter fullscreen mode Exit fullscreen mode
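The --kubelet-insecure-tls flag is needed here because the kubelets in a kind cluster serve self-signed certificates that the metrics-server cannot verify. Skip it on clusters whose kubelets present properly signed certificates.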
$ kubectl get apiservices | grep metrics.k8s.io
v1beta1.metrics.k8s.io            kube-system/metrics-server   True        97s

$ kubectl top node
NAME                        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-calico-control-plane   146m         3%     1102Mi          13%
kind-calico-worker          52m          1%     426Mi           5%
kind-calico-worker2         49m          1%     303Mi           3%
kind-calico-worker3         49m          1%     383Mi           4%
Enter fullscreen mode Exit fullscreen mode

Set up the HPA

# nginx-service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
  selector:
    run: nginx
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f nginx-service.yaml
deployment.apps/nginx created
service/nginx created

$ kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-54ff6b8849-lnsn6   1/1     Running   0          16s
Enter fullscreen mode Exit fullscreen mode
# hpa.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 30
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f hpa.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created

$ kubectl get hpa
NAME        REFERENCE          TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 0%/30%   1         5         1          16s

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           83s
Enter fullscreen mode Exit fullscreen mode
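For reference, an equivalent autoscaler could have been created imperatively instead of applying hpa.yaml. Note that kubectl autoscale names the HPA after the Deployment (nginx rather than nginx-hpa); the output shown is the expected confirmation message:

$ kubectl autoscale deployment nginx --cpu-percent=30 --min=1 --max=5
horizontalpodautoscaler.autoscaling/nginx autoscaled
Enter fullscreen mode Exit fullscreen mode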
$ kubectl run -ti --rm load-generator --image=alpine/curl --restart=Never --pod-running-timeout=10m -- /bin/sh -c "while true; do curl -sI http://nginx:80; done"
Enter fullscreen mode Exit fullscreen mode
$ kubectl get hpa nginx-hpa --watch
NAME        REFERENCE          TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 0%/30%   1         5         1          62s
nginx-hpa   Deployment/nginx   cpu: 10%/30%   1         5         1          91s
nginx-hpa   Deployment/nginx   cpu: 53%/30%   1         5         1          106s
nginx-hpa   Deployment/nginx   cpu: 48%/30%   1         5         2          2m1s
nginx-hpa   Deployment/nginx   cpu: 29%/30%   1         5         2          2m16s
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          2m31s
nginx-hpa   Deployment/nginx   cpu: 24%/30%   1         5         2          3m1s
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          3m16s
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete po load-generator
pod "load-generator" deleted

$ kubectl get hpa nginx-hpa --watch
NAME        REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          4m3s
nginx-hpa   Deployment/nginx   cpu: 12%/30%   1         5         2          4m16s
nginx-hpa   Deployment/nginx   cpu: 8%/30%    1         5         2          4m31s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         2          4m46s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         2          9m1s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         1          9m16s
Enter fullscreen mode Exit fullscreen mode
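The gap between the load stopping (cpu back to 0% at 4m46s) and the scale-in (9m16s) is the HPA's scale-down stabilization window, which defaults to 300 seconds. autoscaling/v2 exposes it under spec.behavior; as a sketch, this variant of the manifest would scale in after one minute:

# hpa-behavior.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 30
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60
Enter fullscreen mode Exit fullscreen mode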
$ kubectl delete deployments/nginx
deployment.apps "nginx" deleted

$ kubectl delete hpa/nginx-hpa
horizontalpodautoscaler.autoscaling "nginx-hpa" deleted
Enter fullscreen mode Exit fullscreen mode

PodDisruptionBudget

A PodDisruptionBudget is an object that ensures a specified minimum number or percentage of pods remain available during voluntary disruptions, such as node drains or cluster upgrades.

$ kubectl create deployment nginx --image=nginx --replicas=1
deployment.apps/nginx created

$ kubectl get deployments.apps nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           21s
Enter fullscreen mode Exit fullscreen mode
# pdb.yaml

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
Enter fullscreen mode Exit fullscreen mode
$ kubectl create -f pdb.yaml
poddisruptionbudget.policy/nginx-pdb created

$ kubectl get pdb
NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
nginx-pdb   2               N/A               0                     29s
Enter fullscreen mode Exit fullscreen mode
$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-5869d7778c-cftzp   1/1     Running   0          11m   192.168.52.77   kind-calico-worker2   <none>           <none>

$ kubectl drain kind-calico-worker2 --ignore-daemonsets
node/kind-calico-worker2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-9ghkn, kube-system/kube-proxy-zcrm6
evicting pod default/nginx-5869d7778c-cftzp
error when evicting pods/"nginx-5869d7778c-cftzp" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
Enter fullscreen mode Exit fullscreen mode
$ kubectl get no
NAME                        STATUS                     ROLES           AGE    VERSION
kind-calico-control-plane   Ready                      control-plane   116m   v1.33.1
kind-calico-worker          Ready                      <none>          115m   v1.33.1
kind-calico-worker2         Ready,SchedulingDisabled   <none>          115m   v1.33.1
kind-calico-worker3         Ready                      <none>          115m   v1.33.1

$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-5869d7778c-cftzp   1/1     Running   0          16m   192.168.52.77   kind-calico-worker2   <none>           <none>
Enter fullscreen mode Exit fullscreen mode
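With a single replica, minAvailable: 2 can never be satisfied, so ALLOWED DISRUPTIONS stays at 0 and the drain retries indefinitely. One way out (a hypothetical step, not part of this walkthrough) is to scale the Deployment up so that an eviction still leaves two pods available:

$ kubectl scale deployment nginx --replicas=3
deployment.apps/nginx scaled
Enter fullscreen mode Exit fullscreen mode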
$ kubectl uncordon kind-calico-worker2
node/kind-calico-worker2 uncordoned

$ kubectl get no
NAME                        STATUS   ROLES           AGE    VERSION
kind-calico-control-plane   Ready    control-plane   116m   v1.33.1
kind-calico-worker          Ready    <none>          116m   v1.33.1
kind-calico-worker2         Ready    <none>          116m   v1.33.1
kind-calico-worker3         Ready    <none>          116m   v1.33.1
Enter fullscreen mode Exit fullscreen mode
$ kubectl delete pdb/nginx-pdb
poddisruptionbudget.policy "nginx-pdb" deleted

$ kubectl delete deployments.apps/nginx
deployment.apps "nginx" deleted
Enter fullscreen mode Exit fullscreen mode

Taints and Tolerations

Taints and Tolerations are mechanisms that work together to control pod placement on nodes.

Taints

A taint is applied to a node to indicate that the node should not accept certain pods. Taints are key-value pairs with an associated effect: NoSchedule, PreferNoSchedule, or NoExecute.

$ kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   4d12h   v1.33.1
kind-calico-worker          Ready    <none>          4d12h   v1.33.1
kind-calico-worker2         Ready    <none>          4d12h   v1.33.1
kind-calico-worker3         Ready    <none>          4d12h   v1.33.1

$ kubectl get node/kind-calico-control-plane -o jsonpath='{.spec.taints}'
[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]

$ kubectl get node/kind-calico-worker -o jsonpath='{.spec.taints}'
$
Enter fullscreen mode Exit fullscreen mode
$ kubectl taint node kind-calico-worker node-type=production:NoSchedule
node/kind-calico-worker tainted

$ kubectl get node/kind-calico-worker -o jsonpath='{.spec.taints}'
[{"effect":"NoSchedule","key":"node-type","value":"production"}]
Enter fullscreen mode Exit fullscreen mode
$ kubectl create deploy nginx --image nginx --replicas=5
deployment.apps/nginx created

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-5869d7778c-cxngx   kind-calico-worker2   Running
nginx-5869d7778c-d8fm2   kind-calico-worker3   Running
nginx-5869d7778c-lpfjl   kind-calico-worker2   Running
nginx-5869d7778c-mmwhl   kind-calico-worker3   Running
nginx-5869d7778c-mqt2k   kind-calico-worker2   Running
Enter fullscreen mode Exit fullscreen mode

Tolerations

A toleration is applied to a pod and allows that pod to be scheduled on a node with a matching taint.

# deployment-tolerations.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: node-type
        operator: Equal
        value: production
        effect: NoSchedule
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-tolerations.yaml
Warning: resource deployments/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/nginx configured

$ kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6cd5747b88-75vfn   1/1     Running   0          3m40s
nginx-6cd5747b88-gr7r8   1/1     Running   0          3m40s
nginx-6cd5747b88-pqv6s   1/1     Running   0          3m37s
nginx-6cd5747b88-xzpjl   1/1     Running   0          3m36s
nginx-6cd5747b88-zzf7h   1/1     Running   0          3m39s

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-6cd5747b88-75vfn   kind-calico-worker    Running
nginx-6cd5747b88-gr7r8   kind-calico-worker3   Running
nginx-6cd5747b88-pqv6s   kind-calico-worker    Running
nginx-6cd5747b88-xzpjl   kind-calico-worker3   Running
nginx-6cd5747b88-zzf7h   kind-calico-worker2   Running
Enter fullscreen mode Exit fullscreen mode
$ kubectl taint node kind-calico-worker2 node-type=development:NoExecute
node/kind-calico-worker2 tainted

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-6cd5747b88-6k6s2   kind-calico-worker3   Running
nginx-6cd5747b88-75vfn   kind-calico-worker    Running
nginx-6cd5747b88-gr7r8   kind-calico-worker3   Running
nginx-6cd5747b88-pqv6s   kind-calico-worker    Running
nginx-6cd5747b88-xzpjl   kind-calico-worker3   Running
Enter fullscreen mode Exit fullscreen mode
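For NoExecute taints, a toleration can also bound how long the pod keeps running on the tainted node via tolerationSeconds. A minimal sketch (the pod name is illustrative): this pod would survive the taint for 60 seconds and then be evicted:

# pod-toleration-seconds.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-temporary
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: node-type
    operator: Equal
    value: development
    effect: NoExecute
    tolerationSeconds: 60
Enter fullscreen mode Exit fullscreen mode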
$ kubectl delete deployments.apps nginx
deployment.apps "nginx" deleted

$ kubectl taint node kind-calico-worker2 node-type=development:NoExecute-
node/kind-calico-worker2 untainted

$ kubectl taint node kind-calico-worker node-type=production:NoSchedule-
node/kind-calico-worker untainted
Enter fullscreen mode Exit fullscreen mode

Affinity

Affinity gives expressive, flexible control over how pods are scheduled onto nodes within a cluster. It is a more advanced alternative to nodeSelector: it supports soft preferences as well as hard requirements, and rules that reference other pods rather than just node labels.

Node affinity

$ kubectl get no
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   5d11h   v1.33.1
kind-calico-worker          Ready    <none>          5d11h   v1.33.1
kind-calico-worker2         Ready    <none>          5d11h   v1.33.1
kind-calico-worker3         Ready    <none>          5d11h   v1.33.1

$ kubectl label node kind-calico-worker disktype=ssd
node/kind-calico-worker labeled

$ kubectl get no -L disktype
NAME                        STATUS   ROLES           AGE     VERSION   DISKTYPE
kind-calico-control-plane   Ready    control-plane   5d11h   v1.33.1
kind-calico-worker          Ready    <none>          5d11h   v1.33.1   ssd
kind-calico-worker2         Ready    <none>          5d11h   v1.33.1
kind-calico-worker3         Ready    <none>          5d11h   v1.33.1
Enter fullscreen mode Exit fullscreen mode

Schedule a Pod using required node affinity

# deployment-required-nodeaffinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd 
Enter fullscreen mode Exit fullscreen mode
$ kubectl get po
No resources found in default namespace.

$ kubectl apply -f deployment-required-nodeaffinity.yaml
deployment.apps/nginx created

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                 STATUS
nginx-7bdd94c84c-6hxgp   kind-calico-worker   Running
nginx-7bdd94c84c-kcbhv   kind-calico-worker   Running
nginx-7bdd94c84c-kh2f7   kind-calico-worker   Running
nginx-7bdd94c84c-qshhl   kind-calico-worker   Running
nginx-7bdd94c84c-thspc   kind-calico-worker   Running
Enter fullscreen mode Exit fullscreen mode
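requiredDuringSchedulingIgnoredDuringExecution is a hard constraint: had no node carried the disktype=ssd label, all five replicas would have stayed Pending. The IgnoredDuringExecution suffix means already-running pods are left in place if the label is later removed from the node.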

Schedule a Pod using preferred node affinity

$ kubectl label node kind-calico-worker{2,3} gpu=true
node/kind-calico-worker2 labeled
node/kind-calico-worker3 labeled

$ kubectl get no -L disktype,gpu
NAME                        STATUS   ROLES           AGE     VERSION   DISKTYPE   GPU
kind-calico-control-plane   Ready    control-plane   5d12h   v1.33.1
kind-calico-worker          Ready    <none>          5d12h   v1.33.1   ssd
kind-calico-worker2         Ready    <none>          5d12h   v1.33.1              true
kind-calico-worker3         Ready    <none>          5d12h   v1.33.1              true
Enter fullscreen mode Exit fullscreen mode
# deployment-preferred-nodeaffinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference:
              matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
          - weight: 20
            preference:
              matchExpressions:
              - key: gpu
                operator: In
                values:
                - "true"
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-preferred-nodeaffinity.yaml
deployment.apps/nginx configured

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-5898d5cc8d-46z7p   kind-calico-worker2   Running
nginx-5898d5cc8d-5q8rw   kind-calico-worker    Running
nginx-5898d5cc8d-kkmdz   kind-calico-worker    Running
nginx-5898d5cc8d-q95zw   kind-calico-worker    Running
nginx-5898d5cc8d-r27b5   kind-calico-worker3   Running
Enter fullscreen mode Exit fullscreen mode
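The weights are only one input to the scheduler's scoring: they are combined with other priority functions, such as spreading the pods of a ReplicaSet across nodes, which is why two replicas still landed on the gpu-labeled nodes despite the higher weight on disktype=ssd.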
$ kubectl delete deployments.apps nginx
deployment.apps "nginx" deleted
Enter fullscreen mode Exit fullscreen mode

Pod affinity and anti-affinity

$ kubectl run backend -l app=backend --image busybox -- sleep 999999
pod/backend created

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME      NODE                  STATUS
backend   kind-calico-worker3   Running
Enter fullscreen mode Exit fullscreen mode
# deployment-required-podaffinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-required-podaffinity.yaml
deployment.apps/frontend created

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                        NODE                  STATUS
backend                     kind-calico-worker3   Running
frontend-67c67944bf-lnw56   kind-calico-worker3   Pending
frontend-67c67944bf-wc7j5   kind-calico-worker3   Pending
frontend-67c67944bf-wxj65   kind-calico-worker3   Pending
frontend-67c67944bf-xlc9r   kind-calico-worker3   Running
frontend-67c67944bf-xs79k   kind-calico-worker3   Running
Enter fullscreen mode Exit fullscreen mode
# deployment-required-podantiaffinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
Enter fullscreen mode Exit fullscreen mode
$ kubectl apply -f deployment-required-podantiaffinity.yaml
deployment.apps/frontend configured

$ kubectl get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                        NODE                  STATUS
backend                     kind-calico-worker3   Running
frontend-6dd9b5dbb9-cnswz   kind-calico-worker    Running
frontend-6dd9b5dbb9-f8lmq   kind-calico-worker    Running
frontend-6dd9b5dbb9-lph98   kind-calico-worker2   Running
frontend-6dd9b5dbb9-m424v   kind-calico-worker2   Running
frontend-6dd9b5dbb9-mbqmm   kind-calico-worker2   Running
Enter fullscreen mode Exit fullscreen mode
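The same mechanism is often pointed at a workload's own label to spread its replicas across nodes. A sketch (the file name is illustrative; preferred rather than required anti-affinity is used so scheduling still succeeds when there are more replicas than nodes):

# deployment-spread-podantiaffinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: nginx
              topologyKey: kubernetes.io/hostname
Enter fullscreen mode Exit fullscreen mode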
$ kubectl delete deployments.apps frontend
deployment.apps "frontend" deleted

$ kubectl delete pod/backend
pod "backend" deleted
Enter fullscreen mode Exit fullscreen mode

Cleanup

$ kind delete clusters kind-calico
Deleted nodes: ["kind-calico-worker" "kind-calico-worker3" "kind-calico-worker2" "kind-calico-control-plane"]
Deleted clusters: ["kind-calico"]
Enter fullscreen mode Exit fullscreen mode
