Ježek

The Joy of Kubernetes

This article explores Kubernetes through practical examples, in a joyful manner.

The content is structured for clarity and depth while staying focused on real-world use cases. By following the hands-on approach taken throughout, you will gain a solid understanding of core Kubernetes concepts such as pod management and service discovery.

This article was inspired by the book Kubernetes in Action by Marko Lukša, and the official Kubernetes Documentation served as the primary reference while preparing it. I strongly recommend familiarizing yourself with both beforehand.

Enjoy!




Kubernetes in Docker

kind is a tool for running local Kubernetes clusters using Docker container nodes.

Create a cluster

# kind-cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerPort: 6443
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40000
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40001
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40002
$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.33.1) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.33.1) 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Cluster info

$ kind get clusters
kind
$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Cluster nodes

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   39m   v1.33.1
kind-worker          Ready    <none>          39m   v1.33.1
kind-worker2         Ready    <none>          39m   v1.33.1
kind-worker3         Ready    <none>          39m   v1.33.1

Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

Create a pod

Imperative way

$ kubectl run kubia --image=luksa/kubia --port=8080
pod/kubia created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          5m26s

Declarative way

# pod-basic.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-basic.yaml
pod/kubia created
$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          9s

Logs

$ kubectl logs kubia
Kubia server starting...

Logs from specific container in pod:

$ kubectl logs kubia -c kubia
Kubia server starting...

Port forwarding from host to pod

$ kubectl port-forward kubia 30000:8080
Forwarding from 127.0.0.1:30000 -> 8080
Forwarding from [::1]:30000 -> 8080
$ curl -s localhost:30000
You've hit kubia

Labels and Selectors

Labels are key/value pairs that are attached to objects such as Pods.

Labels

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created
$ kubectl get po --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia          1/1     Running   0          4d22h   <none>
kubia-labels   1/1     Running   0          30s     env=dev,tier=backend
$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          20m     backend   dev
$ kubectl label po kubia-labels env=test
error: 'env' already has a value (dev), and --overwrite is false

$ kubectl label po kubia-labels env=test --overwrite
pod/kubia-labels labeled

$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          24m     backend   test

Selectors

$ kubectl get po -l 'env' --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h25m   env=test,tier=backend

$ kubectl get po -l '!env' --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
kubia   1/1     Running   0          5d1h   <none>

$ kubectl get po -l tier=backend --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h28m   env=test,tier=backend

Annotations

You can use annotations to attach arbitrary non-identifying metadata to objects.

# pod-annotations.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-annotations
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-annotations.yaml
pod/kubia-annotations created

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: https://hub.docker.com/
$ kubectl annotate pod/kubia-annotations imageregistry=nexus.org --overwrite
pod/kubia-annotations annotated

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: nexus.org

Namespaces

Namespaces provide a mechanism for isolating groups of resources within a single cluster.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d2h
kube-node-lease      Active   5d2h
kube-public          Active   5d2h
kube-system          Active   5d2h
local-path-storage   Active   5d2h
$ kubectl get pods --namespace=default
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          19m
kubia-labels        1/1     Running   0          4h15m
$ kubectl create namespace custom-namespace
namespace/custom-namespace created

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.
$ kubectl run nginx --image=nginx --namespace=custom-namespace
pod/nginx created

$ kubectl get pods --namespace=custom-namespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          61s
$ kubectl config set-context --current --namespace=custom-namespace
Context "kind-kind" modified.

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m57s

$ kubectl config set-context --current --namespace=default
Context "kind-kind" modified.

$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          30m
kubia-labels        1/1     Running   0          4h26m
$ kubectl delete ns custom-namespace
namespace "custom-namespace" deleted

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h
$ kubectl delete po --all
pod "kubia" deleted
pod "kubia-annotations" deleted
pod "kubia-labels" deleted

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h

$ kubectl get pods --namespace=default
No resources found in default namespace.

ReplicaSet

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

# replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia

$ kubectl create -f replicaset.yaml
replicaset.apps/kubia created

$ kubectl get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-5l82z   1/1     Running   0          5s
kubia-bkjwk   1/1     Running   0          5s
kubia-k78j5   1/1     Running   0          5s

$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       64s
$ kubectl delete rs kubia
replicaset.apps "kubia" deleted

$ kubectl get rs
No resources found in default namespace.

$ kubectl get po
NAME          READY   STATUS        RESTARTS   AGE
kubia-5l82z   1/1     Terminating   0          5m30s
kubia-bkjwk   1/1     Terminating   0          5m30s
kubia-k78j5   1/1     Terminating   0          5m30s

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.

# daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1
$ kubectl create -f daemonset.yaml
daemonset.apps/fluentd created

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   0         0         0       0            0           disk=ssd        115s

$ kubectl get po
No resources found in default namespace.

$ kubectl get node
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   5d21h   v1.33.1
kind-worker          Ready    <none>          5d21h   v1.33.1
kind-worker2         Ready    <none>          5d21h   v1.33.1
kind-worker3         Ready    <none>          5d21h   v1.33.1

$ kubectl label node kind-worker3 disk=ssd
node/kind-worker3 labeled

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   1         1         1       1            1           disk=ssd        3m49s

$ kubectl get po
NAME            READY   STATUS    RESTARTS   AGE
fluentd-cslcb   1/1     Running   0          39s
$ kubectl delete ds fluentd
daemonset.apps "fluentd" deleted

$ kubectl get ds
No resources found in default namespace.

$ kubectl get po
No resources found in default namespace.

Jobs

Jobs represent one-off tasks that run to completion and then stop.

# job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
$ kubectl create -f job.yaml
job.batch/pi created

$ kubectl get jobs
NAME   STATUS    COMPLETIONS   DURATION   AGE
pi     Running   0/1           34s        34s

$ kubectl get jobs
NAME   STATUS     COMPLETIONS   DURATION   AGE
pi     Complete   1/1           54s        62s

$ kubectl get po
NAME       READY   STATUS      RESTARTS   AGE
pi-8rdmn   0/1     Completed   0          2m1s

$ kubectl events pod/pi-8rdmn
LAST SEEN   TYPE     REASON             OBJECT         MESSAGE
3m44s       Normal   Scheduled          Pod/pi-8rdmn   Successfully assigned default/pi-8rdmn to kind-worker2
3m44s       Normal   Pulling            Pod/pi-8rdmn   Pulling image "perl:5.34.0"
3m44s       Normal   SuccessfulCreate   Job/pi         Created pod: pi-8rdmn
2m59s       Normal   Pulled             Pod/pi-8rdmn   Successfully pulled image "perl:5.34.0" in 44.842s (44.842s including waiting). Image size: 336374010 bytes.
2m59s       Normal   Created            Pod/pi-8rdmn   Created container: pi
2m59s       Normal   Started            Pod/pi-8rdmn   Started container pi
2m50s       Normal   Completed          Job/pi         Job completed
$ kubectl delete job/pi
job.batch "pi" deleted

$ kubectl get po
No resources found in default namespace.

CronJob

CronJob starts one-time Jobs on a repeating schedule.

# cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl create -f cronjob.yaml
cronjob.batch/hello created

$ kubectl get cronjobs
NAME    SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   <none>     False     0        8s              55s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          30s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          106s
hello-29223075-9r7kx   0/1     Completed   0          46s
$ kubectl delete cronjobs/hello
cronjob.batch "hello" deleted

$ kubectl get cronjobs
No resources found in default namespace.

$ kubectl get pods
No resources found in default namespace.

Service

Service is a method for exposing a network application that is running as one or more Pods in your cluster.

There are several Service types supported in Kubernetes:

  • ClusterIP
  • NodePort
  • ExternalName
  • LoadBalancer

ClusterIP

Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
# service-basic.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  selector:
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created

$ kubectl create -f service-basic.yaml
service/kubia-svc created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21d
kubia-svc    ClusterIP   10.96.158.86   <none>        80/TCP    5s

$ kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
kubia-labels   1/1     Running   0          116s

$ kubectl exec kubia-labels -- curl -s http://10.96.158.86:80
You've hit kubia-labels
$ kubectl delete -f service-basic.yaml
service "kubia-svc" deleted

$ kubectl delete -f pod-labels.yaml
pod "kubia-labels" deleted
# pod-nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: http-web-svc
# service-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-web-svc
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx.yaml
service/nginx-svc created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m51s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    21d
nginx-svc    ClusterIP   10.96.230.243   <none>        8080/TCP   32s

$ kubectl exec nginx -- curl -sI http://10.96.230.243:8080
HTTP/1.1 200 OK
Server: nginx/1.28.0
Date: Thu, 07 Aug 2025 12:09:24 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 23 Apr 2025 11:48:54 GMT
Connection: keep-alive
ETag: "6808d3a6-267"
Accept-Ranges: bytes

$ kubectl exec nginx -- curl -sI http://nginx-svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc.cluster.local:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

ExternalName

Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.

# service-ext.yaml

apiVersion: v1
kind: Service
metadata:
  name: httpbin-service
spec:
  type: ExternalName
  externalName: httpbin.org
$ kubectl create -f service-ext.yaml
service/httpbin-service created

$ kubectl create -f pod-basic.yaml
pod/kubia created

$ kubectl get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
httpbin-service   ExternalName   <none>       httpbin.org   <none>    4m17s
kubernetes        ClusterIP      10.96.0.1    <none>        443/TCP   22d

$ kubectl exec kubia -- curl -sk -X GET https://httpbin-service/uuid -H "accept: application/json"
{
  "uuid": "6a48fe51-a6b6-4e0a-9ef2-381ba7ea2c69"
}
$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

$ kubectl delete -f service-ext.yaml
service "httpbin-service" deleted

NodePort

Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.

# service-nginx-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - port: 8080
    targetPort: http-web-svc
    nodePort: 30666
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx-nodeport.yaml
service/nginx-svc created

$ kubectl get svc nginx-svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nginx-svc   NodePort   10.96.252.35   <none>        8080:30666/TCP   9s

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                      NAMES
da2c842ddfd6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40000->30666/tcp   kind-worker
16bf718b93b6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   127.0.0.1:6443->6443/tcp   kind-control-plane
bb18cefdb180   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40002->30666/tcp   kind-worker3
42cea7794f0b   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40001->30666/tcp   kind-worker2

$ curl -sI http://localhost:40000 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40001 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40002 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx-nodeport.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

LoadBalancer

Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

Let's take a look at how to get a Service of type LoadBalancer working in a kind cluster using Cloud Provider KIND.

# service-lb-demo.yaml

kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: foo-app

---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: bar-app

---
kind: Service
apiVersion: v1
metadata:
  name: http-echo-service
spec:
  type: LoadBalancer
  selector:
    app: http-echo
  ports:
  - port: 5678
    targetPort: 8080
$ kubectl create -f service-lb-demo.yaml
pod/foo-app created
pod/bar-app created
service/http-echo-service created

$ kubectl get svc http-echo-service
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
http-echo-service   LoadBalancer   10.96.97.99   172.18.0.6    5678:31196/TCP   58s

$ kubectl get svc http-echo-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ for _ in {1..4}; do curl -s 172.18.0.6:5678; echo; done
foo-app
bar-app
bar-app
foo-app
$ kubectl delete -f service-lb-demo.yaml
pod "foo-app" deleted
pod "bar-app" deleted
service "http-echo-service" deleted

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.

Ingress Controller

In order for an Ingress to work in your cluster, there must be an Ingress Controller running.

In a kind cluster, you have to run Cloud Provider KIND to provide the load-balancer implementation that the NGINX Ingress Controller will use through the LoadBalancer API.

$ kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
$ kubectl wait --namespace ingress-nginx \
>   --for=condition=ready pod \
>   --selector=app.kubernetes.io/component=controller \
>   --timeout=90s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w condition met
$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-ldc97        0/1     Completed   0          2m25s
pod/ingress-nginx-admission-patch-zzlh7         0/1     Completed   0          2m25s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w   1/1     Running     0          2m25s

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   2m25s
service/ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      2m25s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m25s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-86bb9f8d4b   1         1         1       2m25s

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           11s        2m25s
job.batch/ingress-nginx-admission-patch    Complete   1/1           12s        2m25s

Ingress resources

The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API. Traffic routing is controlled by rules defined on the Ingress resource.

Basic usage

# pod-foo-bar.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-foo
  labels:
    app: foo
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
      name: http-port

---
apiVersion: v1
kind: Pod
metadata:
  name: kubia-bar
  labels:
    app: bar
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      name: http-port
# service-foo-bar.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-foo-svc
spec:
  selector:
    app: foo
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port

---
apiVersion: v1
kind: Service
metadata:
  name: kubia-bar-svc
spec:
  selector:
    app: bar
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port
# ingress-basic.yaml 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f pod-foo-bar.yaml
pod/kubia-foo created
pod/kubia-bar created

$ kubectl create -f service-foo-bar.yaml
service/kubia-foo-svc created
service/kubia-bar-svc created

$ kubectl create -f ingress-basic.yaml
ingress.networking.k8s.io/kubia created
$ kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   22d
kubia-bar-svc   ClusterIP   10.96.230.115   <none>        80/TCP    4m12s
kubia-foo-svc   ClusterIP   10.96.49.21     <none>        80/TCP    4m13s

$ kubectl get ingress
NAME    CLASS    HOSTS   ADDRESS     PORTS   AGE
kubia   <none>   *       localhost   80      67s

$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   63m
ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      63m
$ kubectl get services \
>    --namespace ingress-nginx \
>    ingress-nginx-controller \
>    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ curl -s http://172.18.0.6:80/foo
You've hit kubia-foo

$ curl -s http://172.18.0.6:80/bar
You've hit kubia-bar

$ curl -s http://172.18.0.6:80/baz
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

In order to reach the Ingress at the localhost address (curl http://localhost/foo), you should define extraPortMappings in the kind cluster configuration, as described in Extra Port Mappings; a sketch follows.
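
A minimal sketch of such a configuration, assuming host ports 80 and 443 should be forwarded to the node where the ingress controller runs (see the kind documentation for the exact setup, including the node label the controller is scheduled on):

# kind-cluster-ingress.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # forward host port 80/443 to the same ports on this node
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP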

$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

Using a host

# ingress-hosts.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
  - host: bar.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f ingress-hosts.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS                         ADDRESS     PORTS   AGE
kubia   <none>   foo.kubia.com,bar.kubia.com   localhost   80      103s
$ curl -s http://172.18.0.6
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

$ curl -s http://172.18.0.6 -H 'Host: foo.kubia.com'
You've hit kubia-foo

$ curl -s http://172.18.0.6 -H 'Host: bar.kubia.com'
You've hit kubia-bar
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

TLS

You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.

$ openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
............................................+++++
............+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -days 360 -subj //CN=foo.kubia.com

$ kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
secret/tls-secret created
# ingress-tls.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
      - foo.kubia.com
    secretName: tls-secret
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
$ kubectl create -f ingress-tls.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS           ADDRESS     PORTS     AGE
kubia   <none>   foo.kubia.com   localhost   80, 443   2m13s

$ curl -sk https://172.18.0.6:443 -H 'Host: foo.kubia.com'
You've hit kubia-foo
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

$ kubectl delete secret/tls-secret
secret "tls-secret" deleted

$ kubectl delete -f pod-foo-bar.yaml
pod "kubia-foo" deleted
pod "kubia-bar" deleted

$ kubectl delete -f service-foo-bar.yaml
service "kubia-foo-svc" deleted
service "kubia-bar-svc" deleted

Probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.

# pod-liveness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
$ kubectl create -f pod-liveness-probe.yaml
pod/kubia-liveness created

$ kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          42s

$ kubectl events pod/kubia-liveness
LAST SEEN          TYPE      REASON      OBJECT               MESSAGE
113s               Normal    Scheduled   Pod/kubia-liveness   Successfully assigned default/kubia-liveness to kind-worker3
112s               Normal    Pulling     Pod/kubia-liveness   Pulling image "luksa/kubia-unhealthy"
77s                Normal    Pulled      Pod/kubia-liveness   Successfully pulled image "luksa/kubia-unhealthy" in 34.865s (34.865s including waiting). Image size: 263841919 bytes.
77s                Normal    Created     Pod/kubia-liveness   Created container: kubia
77s                Normal    Started     Pod/kubia-liveness   Started container kubia
2s (x3 over 22s)   Warning   Unhealthy   Pod/kubia-liveness   Liveness probe failed: HTTP probe failed with statuscode: 500
2s                 Normal    Killing     Pod/kubia-liveness   Container kubia failed liveness probe, will be restarted

$ kubectl get po
NAME             READY   STATUS    RESTARTS      AGE
kubia-liveness   1/1     Running   1 (20s ago)   2m41s

readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the EndpointSlice controller removes the Pod's IP address from the EndpointSlices of all Services that match the Pod.

# pod-readiness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-readiness
  labels:
    app: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 10
      periodSeconds: 5
    ports:
    - containerPort: 8080
      name: http-web
# service-readiness-probe.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  type: LoadBalancer
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: http-web
$ kubectl create -f pod-readiness-probe.yaml
pod/kubia-readiness created

$ kubectl create -f service-readiness-probe.yaml
service/kubia-svc created

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   0/1     Running   0          23s

$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        23d
kubia-svc    LoadBalancer   10.96.150.51   172.18.0.7    80:31868/TCP   33s
$ kubectl exec kubia-readiness -- curl -s http://localhost:8080
You've hit kubia-readiness

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
command terminated with exit code 7

$ curl -sv http://172.18.0.7:80
*   Trying 172.18.0.7:80...
* Connected to 172.18.0.7 (172.18.0.7) port 80 (#0)
> GET / HTTP/1.1
> Host: 172.18.0.7
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
$ kubectl exec kubia-readiness -- touch /tmp/ready

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   1/1     Running   0          2m38s

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
You've hit kubia-readiness

$ curl -s http://172.18.0.7:80
You've hit kubia-readiness
$ kubectl delete -f pod-readiness-probe.yaml
pod "kubia-readiness" deleted

$ kubectl delete -f service-readiness-probe.yaml
service "kubia-svc" deleted

startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container.

ports:
- name: liveness-port
  containerPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10

For more information about configuring probes, see Configure Liveness, Readiness and Startup Probes.


Volumes

Kubernetes volumes provide a way for containers in a pod to access and share data via the filesystem. Data sharing can be between different local processes within a container, or between different containers, or between Pods.

Kubernetes supports several types of volumes.

Ephemeral Volumes

Ephemeral volumes are temporary storage that are intrinsically linked to the lifecycle of a Pod. Ephemeral volumes are designed for scenarios where data persistence is not required beyond the life of a single Pod.

Kubernetes supports several different kinds of ephemeral volumes for different purposes: emptyDir, configMap, downwardAPI, secret, image, CSI

emptyDir

For a Pod that defines an emptyDir volume, the volume is created when the Pod is assigned to a node. The emptyDir volume is initially empty.

# pod-volume-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /tmp-cache
      name: tmp
  volumes:
  - name: tmp
    emptyDir: {}
$ kubectl create -f pod-volume-emptydir.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxrwxrwx   2 root root 4096 Aug 11 08:13 tmp-cache
$ kubectl delete -f pod-volume-emptydir.yaml
pod "nginx" deleted

You can create an in-memory volume using the tmpfs filesystem:

  - name: tmp
    emptyDir:
      sizeLimit: 500Mi
      medium: Memory

Projected Volumes

A projected volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected: secret, downwardAPI, configMap, serviceAccountToken, clusterTrustBundle
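
As a minimal sketch, the pod below projects a downwardAPI item and a ConfigMap key into the same directory; it assumes a ConfigMap named my-config with key app.props, like the one created in the ConfigMaps section later on:

# pod-volume-projected.yaml

apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      # the pod's labels, exposed via the downward API
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
      # a key from an existing ConfigMap (assumed to exist)
      - configMap:
          name: my-config
          items:
          - key: app.props
            path: app.props

Both sources end up as files under the single mount point: /projected-volume/labels and /projected-volume/app.props.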

Persistent Volumes

Persistent volumes offer durable storage, meaning the data stored within them persists even after the associated Pods are deleted, restarted, or rescheduled.

PersistentVolume

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins: csi, fc, iscsi, local, nfs, hostPath

hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.

# pod-volume-hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    hostPath:
      path: /data/cache
      type: DirectoryOrCreate
$ kubectl create -f pod-volume-hostpath.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxr-xr-x   2 root root 4096 Aug 11 12:27 cache
$ kubectl delete -f pod-volume-hostpath.yaml
pod "nginx" deleted
# pv-hostpath.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis
spec:
  capacity: 
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/redis
$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Available                          <unset>                          44s

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request for storage by a user. A PersistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.

# pvc-basic.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-redis
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 6s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Bound    default/pvc-redis                  <unset>                          28s
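The pod-pvc.yaml manifest is not shown here, so below is a minimal sketch reconstructed from the surrounding output: the volume name (redis-rdb) and claim name (pvc-redis) come from the jsonpath query, and the image (redis:6.2) from the pod events; the claim is mounted at /data, where the redis image writes dump.rdb.

# pod-pvc.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:6.2
    volumeMounts:
    # redis saves its RDB snapshot to /data by default
    - mountPath: /data
      name: redis-rdb
  volumes:
  - name: redis-rdb
    persistentVolumeClaim:
      claimName: pvc-redis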
$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po redis -o jsonpath='{.spec.volumes[?(@.name == "redis-rdb")]}'
{"name":"redis-rdb","persistentVolumeClaim":{"claimName":"pvc-redis"}}
$ kubectl exec redis -- redis-cli save
OK

$ kubectl get po redis -o jsonpath='{.spec.nodeName}'
kind-worker2

$ docker exec kind-worker2 ls -l /data/redis
total 4
-rw------- 1 999 systemd-journal 102 Aug 11 14:47 dump.rdb
$ kubectl delete po/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          37m

$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 9s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          40m

$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
redis   0/1     Pending   0          92s

$ kubectl events pod/redis
LAST SEEN             TYPE      REASON             OBJECT                            MESSAGE
37m                   Normal    Scheduled          Pod/redis                         Successfully assigned default/redis to kind-worker2
37m                   Normal    Pulling            Pod/redis                         Pulling image "redis:6.2"
37m                   Normal    Pulled             Pod/redis                         Successfully pulled image "redis:6.2" in 5.993s (5.993s including waiting). Image size: 40179474 bytes.
37m                   Normal    Created            Pod/redis                         Created container: redis
37m                   Normal    Started            Pod/redis                         Started container redis
6m57s                 Normal    Killing            Pod/redis                         Stopping container redis
2m4s                  Warning   FailedScheduling   Pod/redis                         0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
8s (x16 over 3m51s)   Normal    FailedBinding      PersistentVolumeClaim/pvc-redis   no persistent volumes available for this claim and no storage class is set
$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 61s

$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 2m2s
$ kubectl delete pod/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

Dynamic Volume Provisioning

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes.

StorageClass

A StorageClass provides a way for administrators to describe the classes of storage they offer.

$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  25d
# storageclass-local-path.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: rancher.io/local-path
volumeBindingMode: Immediate
$ kubectl create -f storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  26d
storageclass-redis   rancher.io/local-path   Delete          Immediate              false                  26s
# pvc-sc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-redis
  annotations:
    volume.kubernetes.io/selected-node: kind-worker
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: storageclass-redis
$ kubectl create -f pvc-sc.yaml
persistentvolumeclaim/pvc-dynamic-redis created

$ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Pending                                      storageclass-redis   <unset>                 8s

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Bound    pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            storageclass-redis   <unset>                 26s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS         VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            Delete           Bound    default/pvc-dynamic-redis   storageclass-redis   <unset>                          47s
$ kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io "storageclass-redis" deleted

$ kubectl get pv
No resources found

ConfigMaps

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

Creating ConfigMaps

Imperative way

# application.properties
server.port=8080
spring.profiles.active=development
$ kubectl create configmap my-config \
    --from-literal=foo=bar \
    --from-file=app.props=application.properties
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      61s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |-
    # application.properties
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2025-09-15T20:20:44Z"
  name: my-config
  namespace: default
  resourceVersion: "3636455"
  uid: 9c68ecb1-55ca-469a-b09e-3e1b625cd69b
$ kubectl delete cm my-config
configmap "my-config" deleted

Declarative way

# cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
$ kubectl apply -f cm.yaml
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      19s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"app.props":"server.port=8080\nspring.profiles.active=development\n","foo":"bar"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"my-config","namespace":"default"}}
  creationTimestamp: "2025-09-15T20:27:51Z"
  name: my-config
  namespace: default
  resourceVersion: "3637203"
  uid: a8d9fce1-f2bd-470c-93a2-3a7fcc560bbc

Using ConfigMaps

Consuming an environment variable by a reference key

# pod-cm-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "MY_VAR"]
      image: busybox:latest
      env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: foo
$ kubectl apply -f pod-cm-env.yaml
pod/env-configmap created

$ kubectl logs pod/env-configmap
bar
$ kubectl delete -f pod-cm-env.yaml
pod "env-configmap" deleted

Consuming all environment variables from the ConfigMap

# pod-cm-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-from-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "config_foo"]
      image: busybox:latest
      envFrom:
        - prefix: config_
          configMapRef:
            name: my-config
$ kubectl apply -f pod-cm-envfrom.yaml
pod/env-from-configmap created

$ kubectl logs pod/env-from-configmap
bar
$ kubectl delete -f pod-cm-envfrom.yaml
pod "env-from-configmap" deleted

Using configMap volume

# pod-cm-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volumemount
spec:
  containers:
    - name: app
      command: ["cat", "/etc/props/app.props"]
      image: busybox:latest
      volumeMounts:
        - name: app-props
          mountPath: "/etc/props"
          readOnly: true
  volumes:
  - name: app-props
    configMap:
      name: my-config
$ kubectl apply -f pod-cm-volumemount.yaml
pod/configmap-volumemount created

$ kubectl logs pod/configmap-volumemount
server.port=8080
spring.profiles.active=development
$ kubectl delete -f pod-cm-volumemount.yaml
pod "configmap-volumemount" deleted

Using configMap volume with items

# pod-cm-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-items
spec:
  containers:
    - name: app
      command: ["cat", "/etc/configs/app.conf"]
      image: busybox:latest
      volumeMounts:
        - name: config
          mountPath: "/etc/configs"
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-config
        items:
          - key: foo
            path: app.conf
$ kubectl apply -f pod-cm-volume-items.yaml
pod/configmap-volume-items created

$ kubectl logs pod/configmap-volume-items
bar
$ kubectl delete -f pod-cm-volume-items.yaml
pod "configmap-volume-items" deleted
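
Using ConfigMap values in command-line arguments

The ConfigMaps intro also mentions command-line arguments. These work indirectly: you first expose the ConfigMap value as an environment variable and then reference it with the $(VAR) syntax, which Kubernetes expands in command and args before the container starts. A minimal sketch, assuming the same my-config ConfigMap (the pod and file names here are illustrative):

# pod-cm-args.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-args
spec:
  containers:
    - name: app
      # $(MY_VAR) is expanded by Kubernetes, not by a shell
      command: ["echo", "foo is $(MY_VAR)"]
      image: busybox:latest
      env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: foo

kubectl logs pod/configmap-args should then print foo is bar.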

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Default Secrets in a Pod

# pod-basic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl apply -f pod-basic.yaml
pod/kubia created

$ kubectl get po/kubia -o=jsonpath='{.spec.containers[0].volumeMounts}'
[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-jd9vq","readOnly":true}]

$ kubectl get po/kubia -o=jsonpath='{.spec.volumes[?(@.name == "kube-api-access-jd9vq")].projected.sources}'
[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"items":[{"key":"ca.crt","path":"ca.crt"}],"name":"kube-root-ca.crt"}},{"downwardAPI":{"items":[{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"},"path":"namespace"}]}}]

$ kubectl exec po/kubia -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted


Creating Secrets

Imperative way

Opaque Secrets
$ kubectl create secret generic empty-secret
secret/empty-secret created

$ kubectl get secret empty-secret
NAME           TYPE     DATA   AGE
empty-secret   Opaque   0      9s

$ kubectl get secret/empty-secret -o yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:19:07Z"
  name: empty-secret
  namespace: default
  resourceVersion: "6290557"
  uid: 031d7f8d-e96d-4e03-a90f-2cb96308354b
type: Opaque

$ kubectl delete secret/empty-secret
secret "empty-secret" deleted
$ openssl genrsa -out tls.key
Generating RSA private key, 2048 bit long modulus (2 primes)
...............................................................+++++
.................................+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -subj /CN=kubia.com

$ kubectl create secret generic kubia-secret --from-file=tls.key --from-file=tls.crt
secret/kubia-secret created

$ kubectl get secret/kubia-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:26:21Z"
  name: kubia-secret
  namespace: default
  resourceVersion: "6291327"
  uid: a06d4be4-3e21-47ea-8009-d300c1c449f9
type: Opaque

$ kubectl delete secret/kubia-secret
secret "kubia-secret" deleted
$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl get secret/test-secret -o yaml
apiVersion: v1
data:
  password: Mzk1MjgkdmRnN0pi
  username: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T18:21:28Z"
  name: test-secret
  namespace: default
  resourceVersion: "6297117"
  uid: 215daac1-7305-43f4-91c6-c7dbdeca2802
type: Opaque

$ kubectl delete secret/test-secret
secret "test-secret" deleted
TLS Secrets
$ kubectl create secret tls my-tls-secret --key=tls.key --cert=tls.crt
secret/my-tls-secret created

$ kubectl get secret/my-tls-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:37:45Z"
  name: my-tls-secret
  namespace: default
  resourceVersion: "6292515"
  uid: f15b375e-2404-4ca0-a08f-014a0efeec70
type: kubernetes.io/tls

$ kubectl delete secret/my-tls-secret
secret "my-tls-secret" deleted
Docker config Secrets
$ kubectl create secret docker-registry my-docker-registry-secret --docker-username=robert --docker-password=passw123 --docker-server=nexus.registry.com:5000
secret/my-docker-registry-secret created

$ kubectl get secret/my-docker-registry-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJuZXh1cy5yZWdpc3RyeS5jb206NTAwMCI6eyJ1c2VybmFtZSI6InJvYmVydCIsInBhc3N3b3JkIjoicGFzc3cxMjMiLCJhdXRoIjoiY205aVpYSjBPbkJoYzNOM01USXoifX19
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:44:10Z"
  name: my-docker-registry-secret
  namespace: default
  resourceVersion: "6293203"
  uid: c9d05ef7-8c8c-4e2b-bf6f-27f80a45d545
type: kubernetes.io/dockerconfigjson

$ kubectl delete secret/my-docker-registry-secret
secret "my-docker-registry-secret" deleted

Declarative way

Opaque Secrets
$ echo -n 'my-app' | base64
bXktYXBw

$ echo -n '39528$vdg7Jb' | base64
Mzk1MjgkdmRnN0pi
# opaque-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
$ kubectl apply -f opaque-secret.yaml
secret/opaque-secret created

$ kubectl get secrets
NAME            TYPE     DATA   AGE
opaque-secret   Opaque   2      4s

$ kubectl delete -f opaque-secret.yaml
secret "opaque-secret" deleted
Docker config Secrets
# dockercfg-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-dockercfg
type: kubernetes.io/dockercfg
data:
  .dockercfg: |
    eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=
$ kubectl apply -f dockercfg-secret.yaml
secret/secret-dockercfg created

$ kubectl get secrets
NAME               TYPE                      DATA   AGE
secret-dockercfg   kubernetes.io/dockercfg   1      3s

$ kubectl describe secret/secret-dockercfg
Name:         secret-dockercfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockercfg

Data
====
.dockercfg:  56 bytes

$ kubectl delete -f dockercfg-secret.yaml
secret "secret-dockercfg" deleted
Basic authentication Secret
# basicauth-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: pass1234
$ kubectl apply -f basicauth-secret.yaml
secret/secret-basic-auth created

$ kubectl get secrets
NAME                TYPE                       DATA   AGE
secret-basic-auth   kubernetes.io/basic-auth   2      3s

$ kubectl describe secret/secret-basic-auth
Name:         secret-basic-auth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/basic-auth

Data
====
password:  8 bytes
username:  5 bytes

$ kubectl delete -f basicauth-secret.yaml
secret "secret-basic-auth" deleted

Using Secrets

Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod.

$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl describe secret test-secret
Name:         test-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes

Using Secrets as files from a Pod

# pod-secret-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
$ kubectl apply -f pod-secret-volumemount.yaml
pod/secret-test-pod created

$ kubectl get pod secret-test-pod
NAME              READY   STATUS    RESTARTS   AGE
secret-test-pod   1/1     Running   0          30s

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
password
username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/{username,password}
==> /etc/secret-volume/username <==
admin
==> /etc/secret-volume/password <==
39528$vdg7Jb

$ kubectl delete -f pod-secret-volumemount.yaml
pod "secret-test-pod" deleted
Project Secret keys to specific file paths
# pod-secret-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        items:
          - key: username
            path: my-group/my-username
$ kubectl apply -f pod-secret-volume-items.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
my-group

$ kubectl exec secret-test-pod -- ls /etc/secret-volume/my-group
my-username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/my-group/my-username
admin

$ kubectl delete -f pod-secret-volume-items.yaml
pod "secret-test-pod" deleted

Using Secrets as environment variables

Define a container environment variable with data from a single Secret
# pod-secret-env-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      env:
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: test-secret
            key: password
$ kubectl apply -f pod-secret-env-var.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo $SECRET_PASSWORD'
39528$vdg7Jb

$ kubectl delete -f pod-secret-env-var.yaml
pod "secret-test-pod" deleted
Define all of the Secret's data as container environment variables
# pod-secret-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
      - secretRef:
          name: test-secret
$ kubectl apply -f pod-secret-envfrom.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
username: admin
password: 39528$vdg7Jb

$ kubectl delete -f pod-secret-envfrom.yaml
pod "secret-test-pod" deleted
$ kubectl delete secrets test-secret
secret "test-secret" deleted

Deployments

A Deployment is a high-level resource used to manage and scale applications while ensuring they remain in the desired state. It provides a declarative way to define how many Pods should run, which container images they should use, and how updates should be applied.

Creating Deployments

Imperative way

$ kubectl create deployment my-nginx-deployment --image=nginx --replicas=3 --port=80
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           20s

$ kubectl rollout status deployment/my-nginx-deployment
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       2m30s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-d4c9q   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-jdvtf   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-mkjsc   1/1     Running   0          2m58s

$ kubectl port-forward deployments/my-nginx-deployment 80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

$ curl -sI localhost:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
$ kubectl set image deployment/my-nginx-deployment nginx=nginx:1.16.1
deployment.apps/my-nginx-deployment image updated

$ kubectl rollout status deployment/my-nginx-deployment
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "my-nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           5m13s

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   0         0         0       5m31s
my-nginx-deployment-68b8b6c496   3         3         3       101s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-68b8b6c496-6p9jg   1/1     Running   0          118s
my-nginx-deployment-68b8b6c496-mfcnj   1/1     Running   0          2m2s
my-nginx-deployment-68b8b6c496-ngm4b   1/1     Running   0          2m

$ kubectl get po/my-nginx-deployment-68b8b6c496-6p9jg -o jsonpath='{.spec.containers[0].image}'
nginx:1.16.1
$ kubectl rollout history deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

$ kubectl rollout history deployment/my-nginx-deployment --revision=2
deployment.apps/my-nginx-deployment with revision #2
Pod Template:
  Labels:       app=my-nginx-deployment
        pod-template-hash=68b8b6c496
  Containers:
   nginx:
    Image:      nginx:1.16.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>

$ kubectl rollout undo deployment/my-nginx-deployment --to-revision=1
deployment.apps/my-nginx-deployment rolled back

$ kubectl rollout history deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       11m
my-nginx-deployment-68b8b6c496   0         0         0       7m11s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          71s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          73s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          68s

$ kubectl get po/my-nginx-deployment-677c645895-cr2vd -o jsonpath='{.spec.containers[0].image}'
nginx
$ kubectl scale deployment/my-nginx-deployment --replicas=5
deployment.apps/my-nginx-deployment scaled

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   5/5     5            5           14m

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   5         5         5       14m
my-nginx-deployment-68b8b6c496   0         0         0       10m

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          21s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          4m34s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          4m36s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          4m31s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          21s
$ kubectl rollout pause deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment paused

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          3m14s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          7m27s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          7m29s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          7m24s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          3m14s

$ kubectl scale deployment/my-nginx-deployment --replicas=3
deployment.apps/my-nginx-deployment scaled

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m28s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m30s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m25s

$ kubectl set image deployment/my-nginx-deployment nginx=nginx:1.17.2
deployment.apps/my-nginx-deployment image updated

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m43s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m35s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m30s

$ kubectl rollout resume deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment resumed

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-75c7c977bb-hwx6r   1/1     Running   0          32s
my-nginx-deployment-75c7c977bb-qlfhc   1/1     Running   0          19s
my-nginx-deployment-75c7c977bb-z7l59   1/1     Running   0          43s

$ kubectl get po/my-nginx-deployment-75c7c977bb-hwx6r -o jsonpath='{.spec.containers[0].image}'
nginx:1.17.2
$ kubectl delete deploy/my-nginx-deployment
deployment.apps "my-nginx-deployment" deleted

$ kubectl get deploy
No resources found in default namespace.

$ kubectl get rs
No resources found in default namespace.

$ kubectl get po
No resources found in default namespace.

Declarative way

# deployment-basic.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
$ kubectl apply -f deployment-basic.yaml
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           10s

$ kubectl rollout status deployment/my-nginx-deployment
deployment "my-nginx-deployment" successfully rolled out

$ kubectl get rs
NAME                           DESIRED   CURRENT   READY   AGE
my-nginx-deployment-96b9d695   3         3         3       31s

$ kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
my-nginx-deployment-96b9d695-7hgx5   1/1     Running   0          33s
my-nginx-deployment-96b9d695-nvb6h   1/1     Running   0          33s
my-nginx-deployment-96b9d695-r5t55   1/1     Running   0          33s

$ kubectl delete -f deployment-basic.yaml
deployment.apps "my-nginx-deployment" deleted
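
# deployment-probes.yaml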
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes  1;
    events {
      worker_connections  10240;
    }
    http {
      server {
        listen 80;
        server_name  _;
        location ~ ^/(healthz|readyz)$ {
            add_header Content-Type text/plain;
            return 200 'OK';
        }
      }
    }
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    app: my-nginx
spec:
  progressDeadlineSeconds: 600      # Wait for a deployment to make progress before marking it as stalled
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /healthz          # Endpoint for liveness checks
            port: 80
          initialDelaySeconds: 15   # Wait 15 seconds before first liveness probe
          periodSeconds: 10         # Check every 10 seconds
          timeoutSeconds: 5         # Timeout after 5 seconds
          failureThreshold: 3       # Restart container after 3 consecutive failures
        readinessProbe:
          httpGet:
            path: /readyz           # Endpoint for readiness checks
            port: 80
          initialDelaySeconds: 5    # Wait 5 seconds before first readiness probe
          periodSeconds: 5          # Check every 5 seconds
          timeoutSeconds: 3         # Timeout after 3 seconds
          failureThreshold: 1       # Consider not ready after 1 failure
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
            - key: nginx.conf
              path: nginx.conf
$ kubectl apply -f deployment-probes.yaml
configmap/nginx-conf created
deployment.apps/my-nginx-deployment created

$ kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           12s

$ kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-55bc8948d6   3         3         3       52s

$ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-55bc8948d6-4lhdd   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-mz5tx   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-nfkkx   1/1     Running   0          64s
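
# Optional: exercise the probe endpoints by hand before deleting.
# The nginx-conf above answers both paths with 'OK'; local port 8080 is an arbitrary choice.
$ kubectl port-forward deployments/my-nginx-deployment 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

# in a second terminal:
$ curl -s localhost:8080/healthz
OK
$ curl -s localhost:8080/readyz
OK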

$ kubectl delete -f deployment-probes.yaml
configmap "nginx-conf" deleted
deployment.apps "my-nginx-deployment" deleted

StatefulSet

A StatefulSet is a resource used to manage stateful applications by providing stable, unique network identifiers, persistent storage, and ordered, graceful deployment and scaling for Pods. It is ideal for applications such as databases, where each replica needs a predictable identity and its own persistent storage, unlike the stateless workloads managed by Deployments.
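
The two manifests used below aren't listed in full. A sketch of storageclass-local-path.yaml, reconstructed from the kubectl get sc output that follows:

# storageclass-local-path.yaml (reconstructed from the output below)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
provisioner: rancher.io/local-path   # the same local-path provisioner kind ships with
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: false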

$ kubectl create -f storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  115d
storageclass-redis   rancher.io/local-path   Delete          Immediate              false                  43s
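
statefulset.yaml itself isn't shown either; here is a minimal sketch consistent with the outputs below. The Redis image, port, and headless Service are assumptions; the volumeClaimTemplate name (data), size (512Mi), and access mode (RWO) match the PVC listing. storageClassName is omitted, so the claims bind via the default standard class, exactly as the PVC output shows.

# statefulset.yaml (a sketch; image, port, and Service name are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None          # headless Service: gives each Pod a stable DNS name
  selector:
    app: redis
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis       # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data           # yields PVCs data-redis-0, data-redis-1, ...
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 512Mi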
$ kubectl apply -f statefulset.yaml
statefulset.apps/redis created

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       <unset>                 3m31s
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       <unset>                 2m36s
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       <unset>                 98s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       <unset>                          3m52s
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       <unset>                          2m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       <unset>                          2m57s

$ kubectl get statefulset/redis
NAME    READY   AGE
redis   3/3     4m19s

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4m55s
redis-1   1/1     Running   0          4m
redis-2   1/1     Running   0          3m2s

$ for i in {0..2}; do kubectl exec "redis-$i" -- sh -c 'hostname'; done
redis-0
redis-1
redis-2
$ kubectl delete pod/redis-0
pod "redis-0" deleted

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4s
redis-1   1/1     Running   0          13m
redis-2   1/1     Running   0          12m
$ kubectl scale statefulset/redis --replicas=4
statefulset.apps/redis scaled

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m53s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   0/1     Pending   0          2s

$ kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m56s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          5s

$ kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m58s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          7s

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m59s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   1/1     Running   0          8s
$ kubectl delete statefulset/redis
statefulset.apps "redis" deleted

$ kubectl get po
No resources found in default namespace.

$ kubectl get statefulsets
No resources found in default namespace.

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            Delete           Bound    default/data-redis-3   standard       <unset>                          4m10s
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       <unset>                          21m
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       <unset>                          19m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       <unset>                          20m

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       <unset>                 22m
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       <unset>                 21m
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       <unset>                 20m
data-redis-3   Bound    pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            standard       <unset>                 4m55s
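
Deleting the StatefulSet left its PVCs and PVs behind, which is the default behaviour. On recent clusters (the retention policy went GA in Kubernetes v1.32) you can opt into automatic cleanup instead; a sketch of the relevant spec fragment, not applied in this walkthrough:

# StatefulSet spec fragment (optional)
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete    # delete PVCs when the StatefulSet is deleted
    whenScaled: Retain     # keep PVCs of replicas removed by a scale-down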
$ kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io "storageclass-redis" deleted

$ kubectl delete pvc data-redis-{0,1,2,3}
persistentvolumeclaim "data-redis-0" deleted
persistentvolumeclaim "data-redis-1" deleted
persistentvolumeclaim "data-redis-2" deleted
persistentvolumeclaim "data-redis-3" deleted

$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pv
No resources found

ServiceAccount

A ServiceAccount provides an identity for processes and applications running within a Kubernetes cluster. ServiceAccounts are designed for non-human entities like Pods, system components, or external tools that need to interact with the Kubernetes API.

Default ServiceAccount

$ kubectl get sa
NAME      SECRETS   AGE
default   0         116d

$ kubectl apply -f pod-basic.yaml
pod/kubia created

$ kubectl get pod/kubia -o jsonpath='{.spec.serviceAccount}'
default

$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

Creating a ServiceAccount

$ kubectl create sa my-sa
serviceaccount/my-sa created

$ kubectl get sa
NAME      SECRETS   AGE
default   0         116d
my-sa     0         7s

$ kubectl get sa/my-sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2025-11-10T10:27:44Z"
  name: my-sa
  namespace: default
  resourceVersion: "7002078"
  uid: 487bd1fa-353a-420e-be95-6ee876a277f5

$ kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Associate a Secret with a ServiceAccount

# secret-sa-token.yaml

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: my-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: my-sa
$ kubectl apply -f secret-sa-token.yaml
secret/my-sa-token created

$ kubectl get secrets
NAME          TYPE                                  DATA   AGE
my-sa-token   kubernetes.io/service-account-token   3      24s

$ kubectl describe secret/my-sa-token
Name:         my-sa-token
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: my-sa
              kubernetes.io/service-account.uid: 487bd1fa-353a-420e-be95-6ee876a277f5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1107 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

$ kubectl get secret/my-sa-token -o=jsonpath='{.data.token}'  | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

$ kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              my-sa-token
Events:              <none>

Assign a ServiceAccount to a Pod

# pod-sa.yaml

apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  serviceAccountName: my-sa
  automountServiceAccountToken: true
  containers:
  - image: alpine/curl
    name: curl
    command: ["sleep", "9999999"]
$ kubectl apply -f pod-sa.yaml
pod/curl created

$ kubectl get po
NAME   READY   STATUS    RESTARTS   AGE
curl   1/1     Running   0          4s

$ kubectl get pod/curl -o jsonpath='{.spec.serviceAccount}'
my-sa

$ kubectl exec -it pod/curl -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...9Wd5ONTHu2VyrTfM6u1FAxC72hKWK0_5zpNg
$ kubectl exec -it pod/curl -- sh
/ # NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
/ # curl -s https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$NS/pods -H "Authorization: Bearer $TOKEN"
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:my-sa\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
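
The 403 is expected: my-sa has not been granted any permissions yet. As a preview of the RBAC section below, a minimal Role and RoleBinding that would allow the list call might look like this (a sketch; the names are illustrative):

# rbac-pod-reader.yaml (illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader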

RBAC

TODO


Pod Security

TODO


NetworkPolicy

TODO


LimitRange

TODO


ResourceQuota

TODO


HorizontalPodAutoscaler

TODO


PodDisruptionBudget

TODO


Taints and Tolerations

TODO


Affinity

TODO

