<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ježek</title>
    <description>The latest articles on DEV Community by Ježek (@hedgehog).</description>
    <link>https://dev.to/hedgehog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1712568%2Fc283581e-f5d4-4b95-827f-ea64399009b9.jpeg</url>
      <title>DEV Community: Ježek</title>
      <link>https://dev.to/hedgehog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hedgehog"/>
    <language>en</language>
    <item>
      <title>Getting started with Argo CD using the CLI</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Thu, 04 Dec 2025 00:58:54 +0000</pubDate>
      <link>https://dev.to/hedgehog/getting-started-with-argo-cd-using-the-cli-3l7h</link>
      <guid>https://dev.to/hedgehog/getting-started-with-argo-cd-using-the-cli-3l7h</guid>
      <description>&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt; is a declarative, GitOps-based continuous delivery (CD) tool for &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. It automates the deployment and lifecycle management of applications in a cluster by using &lt;a href="https://git-scm.com/" rel="noopener noreferrer"&gt;Git&lt;/a&gt; repositories as the single source of truth for the desired state of the infrastructure.&lt;/p&gt;

&lt;p&gt;This article explains how to set up &lt;code&gt;Argo CD&lt;/code&gt; and deploy an application in a &lt;code&gt;Kubernetes&lt;/code&gt; cluster using the command line and manifests.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kind.sigs.k8s.io" rel="noopener noreferrer"&gt;kind&lt;/a&gt; installed, together with &lt;a href="https://kind.sigs.k8s.io/docs/user/loadbalancer/" rel="noopener noreferrer"&gt;Cloud Provider KIND&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://argo-cd.readthedocs.io/en/stable/cli_installation/" rel="noopener noreferrer"&gt;Argo CD CLI&lt;/a&gt; installed
&lt;/li&gt;
&lt;/ul&gt;
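&lt;p&gt;Both tools can be sanity-checked from the shell before creating the cluster (this assumes the binaries are already on your &lt;code&gt;PATH&lt;/code&gt;; version numbers in the comments are only examples):&lt;/p&gt;

```shell
# Confirm the prerequisites are installed and reachable.
kind version                 # e.g. kind v0.29.0 ...
argocd version --client      # client-only; no server connection needed yet
kubectl version --client
```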




&lt;h2&gt;
  
  
  Kubernetes Cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#  kind-cluster.yaml&lt;/span&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;networking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiServerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6443&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-cluster.yaml
Creating cluster &lt;span class="s2"&gt;"kind"&lt;/span&gt; ...
 • Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼  ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to &lt;span class="s2"&gt;"kind-kind"&lt;/span&gt;
You can now use your cluster with:

kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-kind

Have a &lt;span class="nb"&gt;nice &lt;/span&gt;day! 👋
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   7m2s    v1.33.1
kind-worker          Ready    &amp;lt;none&amp;gt;          6m53s   v1.33.1
kind-worker2         Ready    &amp;lt;none&amp;gt;          6m53s   v1.33.1
kind-worker3         Ready    &amp;lt;none&amp;gt;          6m53s   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
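&lt;p&gt;One detail worth remembering: &lt;code&gt;LoadBalancer&lt;/code&gt; services in a &lt;code&gt;kind&lt;/code&gt; cluster only receive an external IP while the Cloud Provider KIND controller is running. A minimal sketch, run in a separate terminal for the whole session:&lt;/p&gt;

```shell
# Watches for Service objects of type LoadBalancer and assigns them
# an address from the Docker network. Keep it running; on some
# platforms it may need elevated privileges for port mapping.
cloud-provider-kind
```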






&lt;h2&gt;
  
  
  Argo CD
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Set up Argo CD
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace argocd
namespace/argocd created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get namespaces
NAME                 STATUS   AGE
argocd               Active   9s
default              Active   15m
kube-node-lease      Active   15m
kube-public          Active   15m
kube-system          Active   15m
local-path-storage   Active   15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                                   READY   STATUS    RESTARTS        AGE
pod/argocd-application-controller-0                    1/1     Running   0               4m14s
pod/argocd-applicationset-controller-fc5545556-xwf5k   1/1     Running   0               4m15s
pod/argocd-dex-server-f59c65cff-rn6m5                  1/1     Running   1 &lt;span class="o"&gt;(&lt;/span&gt;3m10s ago&lt;span class="o"&gt;)&lt;/span&gt;   4m15s
pod/argocd-notifications-controller-59f6949d7-qmw9h    1/1     Running   0               4m15s
pod/argocd-redis-75c946f559-r7hw9                      1/1     Running   0               4m15s
pod/argocd-repo-server-6959c47c44-zcgvw                1/1     Running   0               4m15s
pod/argocd-server-65544f4864-d8bdg                     1/1     Running   0               4m15s

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/argocd-applicationset-controller          ClusterIP      10.96.198.72    &amp;lt;none&amp;gt;        7000/TCP,8080/TCP            4m15s
service/argocd-dex-server                         ClusterIP      10.96.130.197   &amp;lt;none&amp;gt;        5556/TCP,5557/TCP,5558/TCP   4m15s
service/argocd-metrics                            ClusterIP      10.96.76.149    &amp;lt;none&amp;gt;        8082/TCP                     4m15s
service/argocd-notifications-controller-metrics   ClusterIP      10.96.212.160   &amp;lt;none&amp;gt;        9001/TCP                     4m15s
service/argocd-redis                              ClusterIP      10.96.87.147    &amp;lt;none&amp;gt;        6379/TCP                     4m15s
service/argocd-repo-server                        ClusterIP      10.96.249.213   &amp;lt;none&amp;gt;        8081/TCP,8084/TCP            4m15s
service/argocd-server                             ClusterIP      10.96.182.198   &amp;lt;none&amp;gt;        80:31808/TCP,443:32242/TCP   4m15s
service/argocd-server-metrics                     ClusterIP      10.96.205.52    &amp;lt;none&amp;gt;        8083/TCP                     4m15s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           4m15s
deployment.apps/argocd-dex-server                  1/1     1            1           4m15s
deployment.apps/argocd-notifications-controller    1/1     1            1           4m15s
deployment.apps/argocd-redis                       1/1     1            1           4m15s
deployment.apps/argocd-repo-server                 1/1     1            1           4m15s
deployment.apps/argocd-server                      1/1     1            1           4m15s

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-fc5545556   1         1         1       4m15s
replicaset.apps/argocd-dex-server-f59c65cff                  1         1         1       4m15s
replicaset.apps/argocd-notifications-controller-59f6949d7    1         1         1       4m15s
replicaset.apps/argocd-redis-75c946f559                      1         1         1       4m15s
replicaset.apps/argocd-repo-server-6959c47c44                1         1         1       4m15s
replicaset.apps/argocd-server-65544f4864                     1         1         1       4m15s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     4m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
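&lt;p&gt;Rather than polling &lt;code&gt;kubectl get all&lt;/code&gt; by hand, you can block until every Argo CD pod reports &lt;code&gt;Ready&lt;/code&gt; before moving on (the five-minute timeout here is an arbitrary choice):&lt;/p&gt;

```shell
# Wait for all Argo CD pods to become Ready, up to 5 minutes.
kubectl wait --for=condition=Ready pod --all -n argocd --timeout=300s
```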





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch svc argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec": {"type": "LoadBalancer"}}'&lt;/span&gt;
service/argocd-server patched

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.status.loadBalancer.ingress[0].ip}'&lt;/span&gt;
172.18.0.6

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sIL&lt;/span&gt; http://172.18.0.6:80 &lt;span class="nt"&gt;-k&lt;/span&gt;
HTTP/1.1 307 Temporary Redirect
Content-Type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Location: https://172.18.0.6/
Date: Fri, 28 Nov 2025 14:30:00 GMT

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 788
Content-Security-Policy: frame-ancestors &lt;span class="s1"&gt;'self'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
Content-Type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Vary: Accept-Encoding
X-Frame-Options: sameorigin
X-Xss-Protection: 1
Date: Fri, 28 Nov 2025 14:30:00 GMT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
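&lt;p&gt;If no external IP appears (for example, because &lt;code&gt;cloud-provider-kind&lt;/code&gt; is not running), port-forwarding is a common fallback; the local port &lt;code&gt;8080&lt;/code&gt; below is an arbitrary choice:&lt;/p&gt;

```shell
# Forward a local port to the argocd-server service instead of using
# a LoadBalancer address, then log in against localhost:
#   argocd login localhost:8080 --insecure --username admin
kubectl port-forward svc/argocd-server -n argocd 8080:443
```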





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd admin initial-password &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
CH5hFPo5q1TcHi9-

 This password must be only used &lt;span class="k"&gt;for &lt;/span&gt;first &lt;span class="nb"&gt;time &lt;/span&gt;login. We strongly recommend you update the password using &lt;span class="sb"&gt;`&lt;/span&gt;argocd account update-password&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd login 172.18.0.6 &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--username&lt;/span&gt; admin
Password: CH5hFPo5q1TcHi9-
&lt;span class="s1"&gt;'admin:login'&lt;/span&gt; logged &lt;span class="k"&gt;in &lt;/span&gt;successfully
Context &lt;span class="s1"&gt;'172.18.0.6'&lt;/span&gt; updated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd account update-password
&lt;span class="k"&gt;***&lt;/span&gt; Enter password of currently logged &lt;span class="k"&gt;in &lt;/span&gt;user &lt;span class="o"&gt;(&lt;/span&gt;admin&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;***&lt;/span&gt; Enter new password &lt;span class="k"&gt;for &lt;/span&gt;user admin:
&lt;span class="k"&gt;***&lt;/span&gt; Confirm new password &lt;span class="k"&gt;for &lt;/span&gt;user admin:
Password updated
Context &lt;span class="s1"&gt;'172.18.0.6'&lt;/span&gt; updated

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd delete secret argocd-initial-admin-secret
secret &lt;span class="s2"&gt;"argocd-initial-admin-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd version
argocd: v3.2.0+66b2f30
  BuildDate: 2025-11-04T15:21:01Z
  GitCommit: 66b2f302d91a42cc151808da0eec0846bbe1062c
  GitTreeState: clean
  GoVersion: go1.25.0
  Compiler: gc
  Platform: windows/amd64
argocd-server: v3.2.0+66b2f30
  BuildDate: 2025-11-04T14:51:35Z
  GitCommit: 66b2f302d91a42cc151808da0eec0846bbe1062c
  GitTreeState: clean
  GoVersion: go1.25.0
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.7.0 2025-06-28T07:00:07Z
  Helm Version: v3.18.4+gd80839c
  Kubectl Version: v0.34.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2e7hhqzur4m3bpbg2z6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2e7hhqzur4m3bpbg2z6.png" alt=" " width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Set up a Repository
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd repo add https://github.com/viastakhov/argocd-kustomize-demo.git
Repository &lt;span class="s1"&gt;'https://github.com/viastakhov/argocd-kustomize-demo.git'&lt;/span&gt; added

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd repo list
TYPE  NAME  REPO                                                     INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
git         https://github.com/viastakhov/argocd-kustomize-demo.git  &lt;span class="nb"&gt;false     false  false  false  &lt;/span&gt;Successful
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkauk83e3qtxw7pe1u349.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkauk83e3qtxw7pe1u349.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Project
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/" rel="noopener noreferrer"&gt;Projects&lt;/a&gt; provide a logical grouping of applications, which is useful when Argo CD is used by multiple teams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ./argo-cd/project.yaml &lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AppProject&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;sourceRepos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;clusterResourceWhitelist&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd proj create &lt;span class="nt"&gt;-f&lt;/span&gt; ./argo-cd/project.yaml

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd proj list
NAME     DESCRIPTION  DESTINATIONS  SOURCES  CLUSTER-RESOURCE-WHITELIST  NAMESPACE-RESOURCE-BLACKLIST  SIGNATURE-KEYS  ORPHANED-RESOURCES  DESTINATION-SERVICE-ACCOUNTS
default               &lt;span class="k"&gt;*&lt;/span&gt;,&lt;span class="k"&gt;*&lt;/span&gt;           &lt;span class="k"&gt;*&lt;/span&gt;        &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;                         &amp;lt;none&amp;gt;                        &amp;lt;none&amp;gt;          disabled            &amp;lt;none&amp;gt;
nginx                 &lt;span class="k"&gt;*&lt;/span&gt;,&lt;span class="k"&gt;*&lt;/span&gt;           &lt;span class="k"&gt;*&lt;/span&gt;        &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;                         &amp;lt;none&amp;gt;                        &amp;lt;none&amp;gt;          disabled            &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic3m1wuhav2uzbnf3qym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic3m1wuhav2uzbnf3qym.png" alt=" " width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an Application
&lt;/h3&gt;

&lt;p&gt;An &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications" rel="noopener noreferrer"&gt;Application&lt;/a&gt; is the Kubernetes resource object representing a deployed application instance in an environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ./argo-cd/appset.yaml &lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApplicationSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;goTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;goTemplateOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;missingkey=error"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/viastakhov/argocd-kustomize-demo.git&lt;/span&gt;
      &lt;span class="na"&gt;revision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
      &lt;span class="na"&gt;directories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize/overlays/*&lt;/span&gt;  
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;nginx-{{index&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.path.segments&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2}}'&lt;/span&gt;
      &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/viastakhov/argocd-kustomize-demo.git&lt;/span&gt;
        &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{.path.path}}'&lt;/span&gt;   
      &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{index&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.path.segments&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2}}'&lt;/span&gt;
      &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=false&lt;/span&gt;  
        &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
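&lt;p&gt;The Git generator splits each matched directory path into segments, and &lt;code&gt;{{index .path.segments 2}}&lt;/code&gt; selects the third (0-indexed) segment, so &lt;code&gt;kustomize/overlays/dev&lt;/code&gt; yields &lt;code&gt;dev&lt;/code&gt;. The same split can be illustrated with plain shell:&lt;/p&gt;

```shell
# Mimic {{index .path.segments 2}}: take the third "/"-separated
# field of a matched directory path.
path="kustomize/overlays/dev"
echo "${path}" | cut -d/ -f3    # prints: dev
```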





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd appset create ./argo-cd/appset.yaml
ApplicationSet &lt;span class="s1"&gt;'nginx'&lt;/span&gt; created
Name:               argocd/nginx
Project:            nginx
Server:             https://kubernetes.default.svc
Namespace:          &lt;span class="o"&gt;{{&lt;/span&gt;index .path.segments 2&lt;span class="o"&gt;}}&lt;/span&gt;
Source:
- Repo:             https://github.com/viastakhov/argocd-kustomize-demo.git
  Target:           HEAD
  Path:             &lt;span class="o"&gt;{{&lt;/span&gt;.path.path&lt;span class="o"&gt;}}&lt;/span&gt;
SyncPolicy:         Automated

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd appset list
NAME          PROJECT  SYNCPOLICY  CONDITIONS                                                                                                                                                                                                                                                             REPO                                                     PATH            TARGET
argocd/nginx  nginx    nil         &lt;span class="o"&gt;[{&lt;/span&gt;ParametersGenerated Successfully generated parameters &lt;span class="k"&gt;for &lt;/span&gt;all Applications 2025-12-04 03:04:54 +0300 MSK True ParametersGenerated&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;ResourcesUpToDate All applications have been generated successfully 2025-12-04 03:04:54 +0300 MSK True ApplicationSetUpToDate&lt;span class="o"&gt;}]&lt;/span&gt;  https://github.com/viastakhov/argocd-kustomize-demo.git  &lt;span class="o"&gt;{{&lt;/span&gt;.path.path&lt;span class="o"&gt;}}&lt;/span&gt;  HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app list
NAME               CLUSTER                         NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                     PATH                     TARGET
argocd/nginx-dev   https://kubernetes.default.svc  dev        nginx    Synced  Healthy  Auto        &amp;lt;none&amp;gt;      https://github.com/viastakhov/argocd-kustomize-demo.git  kustomize/overlays/dev   HEAD
argocd/nginx-prod  https://kubernetes.default.svc  prod       nginx    Synced  Healthy  Auto        &amp;lt;none&amp;gt;      https://github.com/viastakhov/argocd-kustomize-demo.git  kustomize/overlays/prod  HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
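&lt;p&gt;Since the sync policy is automated, the applications converge on their own; in a script or CI pipeline you can block until they are healthy (the timeout value is an arbitrary choice):&lt;/p&gt;

```shell
# Wait until the generated applications report Healthy
# (--timeout is in seconds).
argocd app wait nginx-dev nginx-prod --health --timeout 300
```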





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app resources nginx-prod
GROUP  KIND        NAMESPACE   NAME                   ORPHANED
       ConfigMap   production  nginx-conf-c6752d2t6f  No
       Namespace               production             No
       Service     production  nginx-svc              No
apps   Deployment  production  nginx-app              No
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app get nginx-prod
Name:               argocd/nginx-prod
Project:            nginx
Server:             https://kubernetes.default.svc
Namespace:          prod
URL:                https://172.18.0.6/applications/nginx-prod
Source:
- Repo:             https://github.com/viastakhov/argocd-kustomize-demo.git
  Target:           HEAD
  Path:             kustomize/overlays/prod
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to HEAD &lt;span class="o"&gt;(&lt;/span&gt;d70b25e&lt;span class="o"&gt;)&lt;/span&gt;
Health Status:      Healthy

GROUP  KIND        NAMESPACE   NAME                   STATUS   HEALTH   HOOK  MESSAGE
       Namespace   prod        production             Running  Synced         namespace/production created
       ConfigMap   production  nginx-conf-c6752d2t6f  Synced                  configmap/nginx-conf-c6752d2t6f created
       Service     production  nginx-svc              Synced   Healthy        service/nginx-svc created
apps   Deployment  production  nginx-app              Synced   Healthy        deployment.apps/nginx-app created
       Namespace               production             Synced
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna2r36l5hok5j19v6h6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna2r36l5hok5j19v6h6i.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkeci919mfgrs14igd7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkeci919mfgrs14igd7g.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd appset delete nginx
Are you sure you want to delete &lt;span class="s1"&gt;'nginx'&lt;/span&gt; and all its Applications? &lt;span class="o"&gt;[&lt;/span&gt;y/n] y
applicationset &lt;span class="s1"&gt;'nginx'&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd appset list
NAME  PROJECT  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app list
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Bonus
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Applications In Any Namespace
&lt;/h4&gt;

&lt;p&gt;Since version 2.5, Argo CD supports managing &lt;code&gt;Application&lt;/code&gt; resources in namespaces other than the control plane's namespace (usually &lt;code&gt;argocd&lt;/code&gt;), but this feature must be explicitly enabled and configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ./argo-cd/project.yaml &lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AppProject&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;sourceNamespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;sourceRepos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;destinations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;clusterResourceWhitelist&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd proj create &lt;span class="nt"&gt;-f&lt;/span&gt; ./argo-cd/project.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch cm/argocd-cmd-params-cm &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"data": {"application.namespaces": "'&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="s1"&gt;'"}}'&lt;/span&gt;
configmap/argocd-cmd-params-cm patched

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout restart &lt;span class="nt"&gt;-n&lt;/span&gt; argocd deployment argocd-server
deployment.apps/argocd-server restarted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout restart &lt;span class="nt"&gt;-n&lt;/span&gt; argocd statefulset argocd-application-controller
statefulset.apps/argocd-application-controller restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
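&lt;p&gt;The patch above writes the &lt;code&gt;application.namespaces&lt;/code&gt; key into &lt;code&gt;argocd-cmd-params-cm&lt;/code&gt;. For reference, the resulting ConfigMap looks roughly like this (sketch, relevant key only):&lt;/p&gt;

```yaml
# argocd-cmd-params-cm after the patch (only the relevant key shown)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  application.namespaces: "*"   # allow Applications in any namespace
```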





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch clusterrole argocd-server &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'json'&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[{"op": "replace", "path": "/rules/3/resources", "value": ["applications", "applicationsets", "appprojects"]}]'&lt;/span&gt;
clusterrole.rbac.authorization.k8s.io/argocd-server patched

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch clusterrole argocd-server &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'json'&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[{"op": "replace", "path": "/rules/3/verbs", "value": ["get", "watch", "list", "create", "update", "delete", "patch"]}]'&lt;/span&gt;
clusterrole.rbac.authorization.k8s.io/argocd-server patched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns development
namespace/development created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns production
namespace/production created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app create nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repo&lt;/span&gt; https://github.com/viastakhov/argocd-kustomize-demo.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--path&lt;/span&gt; kustomize/overlays/dev &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-server&lt;/span&gt; https://kubernetes.default.svc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-namespace&lt;/span&gt; development &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--sync-policy&lt;/span&gt; auto &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--project&lt;/span&gt; nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--app-namespace&lt;/span&gt; development &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-finalizer&lt;/span&gt;
application &lt;span class="s1"&gt;'nginx'&lt;/span&gt; created

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app create nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repo&lt;/span&gt; https://github.com/viastakhov/argocd-kustomize-demo.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--path&lt;/span&gt; kustomize/overlays/prod &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-server&lt;/span&gt; https://kubernetes.default.svc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-namespace&lt;/span&gt; production &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--sync-policy&lt;/span&gt; auto &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--project&lt;/span&gt; nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--app-namespace&lt;/span&gt; production &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-finalizer&lt;/span&gt;
application &lt;span class="s1"&gt;'nginx'&lt;/span&gt; created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
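&lt;p&gt;For reference, each &lt;code&gt;argocd app create&lt;/code&gt; invocation above corresponds roughly to a declarative &lt;code&gt;Application&lt;/code&gt; manifest whose &lt;code&gt;metadata.namespace&lt;/code&gt; is the application namespace. A minimal sketch for the development application, with field values taken from the command above:&lt;/p&gt;

```yaml
# Sketch of a declarative equivalent of the first `argocd app create` command.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: development   # app lives outside the argocd namespace
  finalizers:
  - resources-finalizer.argocd.argoproj.io   # same effect as --set-finalizer
spec:
  project: nginx
  source:
    repoURL: https://github.com/viastakhov/argocd-kustomize-demo.git
    path: kustomize/overlays/dev
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: development
  syncPolicy:
    automated: {}
```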





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app list
NAME               CLUSTER                         NAMESPACE    PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                     PATH                     TARGET
development/nginx  https://kubernetes.default.svc  development  nginx    Synced  Healthy  Auto        &amp;lt;none&amp;gt;      https://github.com/viastakhov/argocd-kustomize-demo.git  kustomize/overlays/dev
production/nginx   https://kubernetes.default.svc  production   nginx    Synced  Healthy  Auto        &amp;lt;none&amp;gt;      https://github.com/viastakhov/argocd-kustomize-demo.git  kustomize/overlays/prod

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get applications &lt;span class="nt"&gt;-n&lt;/span&gt; development &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME    SYNC STATUS   HEALTH STATUS   REVISION                                   PROJECT
nginx   Synced        Healthy         2550d54636eee62d843f4c56b27b7bdcf351d34a   nginx

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get applications &lt;span class="nt"&gt;-n&lt;/span&gt; production &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME    SYNC STATUS   HEALTH STATUS   REVISION                                   PROJECT
nginx   Synced        Healthy         2550d54636eee62d843f4c56b27b7bdcf351d34a   nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug27a3nwm5c629nq672k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug27a3nwm5c629nq672k.png" alt="Applications" width="800" height="364"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns development
namespace &lt;span class="s2"&gt;"development"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns production
namespace &lt;span class="s2"&gt;"production"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd app list
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>argocd</category>
      <category>gitops</category>
      <category>kustomize</category>
    </item>
    <item>
      <title>Kubernetes by Example</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Wed, 05 Nov 2025 20:24:58 +0000</pubDate>
      <link>https://dev.to/hedgehog/the-joy-of-kubernetes-4ijl</link>
      <guid>https://dev.to/hedgehog/the-joy-of-kubernetes-4ijl</guid>
      <description>&lt;p&gt;The article explores &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; through practical examples.&lt;/p&gt;

&lt;p&gt;The article is structured for clarity and depth while staying focused on real-world use cases. Through this hands-on approach, readers will gain a solid understanding of core Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/" rel="noopener noreferrer"&gt;concepts&lt;/a&gt; such as pod management and service discovery.&lt;/p&gt;

&lt;p&gt;The article was inspired by the book &lt;a href="https://www.manning.com/books/kubernetes-in-action" rel="noopener noreferrer"&gt;Kubernetes in Action&lt;/a&gt; by &lt;a href="https://github.com/luksa" rel="noopener noreferrer"&gt;Marko Lukša&lt;/a&gt;, and the official &lt;a href="https://kubernetes.io/docs/" rel="noopener noreferrer"&gt;Kubernetes Documentation&lt;/a&gt; served as the primary reference while preparing it. I strongly recommend familiarizing yourself with both in advance.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;




&lt;h2&gt;
  
  
  Table Of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes in Docker&lt;/li&gt;
&lt;li&gt;Pods&lt;/li&gt;
&lt;li&gt;Namespaces&lt;/li&gt;
&lt;li&gt;ReplicaSet&lt;/li&gt;
&lt;li&gt;DaemonSet&lt;/li&gt;
&lt;li&gt;Jobs&lt;/li&gt;
&lt;li&gt;CronJob&lt;/li&gt;
&lt;li&gt;Service&lt;/li&gt;
&lt;li&gt;Ingress&lt;/li&gt;
&lt;li&gt;Probes&lt;/li&gt;
&lt;li&gt;Volumes&lt;/li&gt;
&lt;li&gt;ConfigMaps&lt;/li&gt;
&lt;li&gt;Secrets&lt;/li&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;StatefulSet&lt;/li&gt;
&lt;li&gt;ServiceAccount&lt;/li&gt;
&lt;li&gt;RBAC&lt;/li&gt;
&lt;li&gt;Pod Security&lt;/li&gt;
&lt;li&gt;NetworkPolicy&lt;/li&gt;
&lt;li&gt;LimitRange&lt;/li&gt;
&lt;li&gt;ResourceQuota&lt;/li&gt;
&lt;li&gt;HorizontalPodAutoscaler&lt;/li&gt;
&lt;li&gt;PodDisruptionBudget&lt;/li&gt;
&lt;li&gt;Taints and Tolerations&lt;/li&gt;
&lt;li&gt;Affinity&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Kubernetes in Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt; is a tool for running local Kubernetes clusters using Docker container &lt;code&gt;nodes&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a cluster
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kind-cluster.yaml&lt;/span&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;networking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiServerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6443&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30666&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;40000&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30666&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;40001&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30666&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;40002&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-cluster.yaml
Creating cluster &lt;span class="s2"&gt;"kind"&lt;/span&gt; ...
 • Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼  ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to &lt;span class="s2"&gt;"kind-kind"&lt;/span&gt;
You can now use your cluster with:

kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cluster info
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind get clusters
kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-kind
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cluster nodes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   39m   v1.33.1
kind-worker          Ready    &amp;lt;none&amp;gt;          39m   v1.33.1
kind-worker2         Ready    &amp;lt;none&amp;gt;          39m   v1.33.1
kind-worker3         Ready    &amp;lt;none&amp;gt;          39m   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Pods
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="noopener noreferrer"&gt;Pods&lt;/a&gt;  are the smallest deployable units of computing that you can create and manage in Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a pod
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Imperative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run kubia &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;luksa/kubia &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080
pod/kubia created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          5m26s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Declarative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-basic.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod/kubia created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Logs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs kubia
Kubia server starting...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Logs from a specific container in the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs kubia &lt;span class="nt"&gt;-c&lt;/span&gt; kubia
Kubia server starting...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Port forwarding from host to pod
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward kubia 30000:8080
Forwarding from 127.0.0.1:30000 -&amp;gt; 8080
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:30000 -&amp;gt; 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; localhost:30000
You&lt;span class="s1"&gt;'ve hit kubia
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Labels and Selectors
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noopener noreferrer"&gt;Labels&lt;/a&gt; are key/value pairs that are attached to objects such as Pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  Labels
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-labels.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-labels&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-labels.yaml
pod/kubia-labels created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia          1/1     Running   0          4d22h   &amp;lt;none&amp;gt;
kubia-labels   1/1     Running   0          30s     &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev,tier&lt;span class="o"&gt;=&lt;/span&gt;backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;--label-columns&lt;/span&gt; tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          20m     backend   dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label po kubia-labels &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test
&lt;/span&gt;error: &lt;span class="s1"&gt;'env'&lt;/span&gt; already has a value &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;, and &lt;span class="nt"&gt;--overwrite&lt;/span&gt; is &lt;span class="nb"&gt;false&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label po kubia-labels &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
pod/kubia-labels labeled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;--label-columns&lt;/span&gt; tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          24m     backend   &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Selectors
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'env'&lt;/span&gt; &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h25m   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;,tier&lt;span class="o"&gt;=&lt;/span&gt;backend

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'!env'&lt;/span&gt; &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME    READY   STATUS    RESTARTS   AGE    LABELS
kubia   1/1     Running   0          5d1h   &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h28m   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;,tier&lt;span class="o"&gt;=&lt;/span&gt;backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
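&lt;p&gt;Beyond the equality-based and existence selectors shown above, Kubernetes also supports set-based selectors. In manifests (for example, in a &lt;code&gt;ReplicaSet&lt;/code&gt; spec, covered later), label selectors are written declaratively; an illustrative fragment:&lt;/p&gt;

```yaml
# Illustrative selector fragment as it would appear in a workload spec:
# matchLabels is equality-based, matchExpressions is set-based.
selector:
  matchLabels:
    tier: backend
  matchExpressions:
  - key: env
    operator: In    # also: NotIn, Exists, DoesNotExist
    values:
    - dev
    - test
```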



&lt;h3&gt;
  
  
  Annotations
&lt;/h3&gt;

&lt;p&gt;You can use &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="noopener noreferrer"&gt;annotations&lt;/a&gt; to attach arbitrary non-identifying metadata to objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-annotations.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-annotations&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;imageregistry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://hub.docker.com/"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-annotations.yaml
pod/kubia-annotations created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod kubia-annotations | &lt;span class="nb"&gt;grep &lt;/span&gt;Annotations
Annotations:      imageregistry: https://hub.docker.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl annotate pod/kubia-annotations &lt;span class="nv"&gt;imageregistry&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nexus.org &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
pod/kubia-annotations annotated

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod kubia-annotations | &lt;span class="nb"&gt;grep &lt;/span&gt;Annotations
Annotations:      imageregistry: nexus.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Namespaces
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noopener noreferrer"&gt;Namespaces&lt;/a&gt; provide a mechanism for isolating groups of resources within a single cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ns
NAME                 STATUS   AGE
default              Active   5d2h
kube-node-lease      Active   5d2h
kube-public          Active   5d2h
kube-system          Active   5d2h
local-path-storage   Active   5d2h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          19m
kubia-labels        1/1     Running   0          4h15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace custom-namespace
namespace/custom-namespace created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-namespace
No resources found &lt;span class="k"&gt;in &lt;/span&gt;custom-namespace namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
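&lt;p&gt;A namespace can also be created declaratively from a manifest (a minimal sketch; the file name is arbitrary):&lt;/p&gt;

```yaml
# namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
```

&lt;p&gt;Applying it with &lt;code&gt;kubectl create -f namespace.yaml&lt;/code&gt; has the same effect as &lt;code&gt;kubectl create namespace custom-namespace&lt;/code&gt;.&lt;/p&gt;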





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-namespace
pod/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-namespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          61s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-namespace
Context &lt;span class="s2"&gt;"kind-kind"&lt;/span&gt; modified.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m57s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default
Context &lt;span class="s2"&gt;"kind-kind"&lt;/span&gt; modified.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          30m
kubia-labels        1/1     Running   0          4h26m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns custom-namespace
namespace &lt;span class="s2"&gt;"custom-namespace"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-namespace
No resources found &lt;span class="k"&gt;in &lt;/span&gt;custom-namespace namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete po &lt;span class="nt"&gt;--all&lt;/span&gt;
pod &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted
pod &lt;span class="s2"&gt;"kubia-annotations"&lt;/span&gt; deleted
pod &lt;span class="s2"&gt;"kubia-labels"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ReplicaSet
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;ReplicaSet&lt;/a&gt;'s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# replicaset.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; replicaset.yaml
replicaset.apps/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-5l82z   1/1     Running   0          5s
kubia-bkjwk   1/1     Running   0          5s
kubia-k78j5   1/1     Running   0          5s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       64s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete rs kubia
replicaset.apps &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME          READY   STATUS        RESTARTS   AGE
kubia-5l82z   1/1     Terminating   0          5m30s
kubia-bkjwk   1/1     Terminating   0          5m30s
kubia-k78j5   1/1     Terminating   0          5m30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  DaemonSet
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;DaemonSet&lt;/a&gt; ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# daemonset.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/fluentd_elasticsearch/fluentd:v5.0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; daemonset.yaml
daemonset.apps/fluentd created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   0         0         0       0            0           &lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssd        115s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get node
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   5d21h   v1.33.1
kind-worker          Ready    &amp;lt;none&amp;gt;          5d21h   v1.33.1
kind-worker2         Ready    &amp;lt;none&amp;gt;          5d21h   v1.33.1
kind-worker3         Ready    &amp;lt;none&amp;gt;          5d21h   v1.33.1

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label node kind-worker3 &lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssd
node/kind-worker3 labeled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   1         1         1       1            1           &lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssd        3m49s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME            READY   STATUS    RESTARTS   AGE
fluentd-cslcb   1/1     Running   0          39s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ds fluentd
daemonset.apps &lt;span class="s2"&gt;"fluentd"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ds
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Jobs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noopener noreferrer"&gt;Jobs&lt;/a&gt; represent one-off tasks that run to completion and then stop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# job.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pi&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pi&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;perl:5.34.0&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;perl"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-Mbignum=bpi"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-wle"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;print&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;bpi(2000)"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
  &lt;span class="na"&gt;backoffLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; job.yaml
job.batch/pi created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get &lt;span class="nb"&gt;jobs
&lt;/span&gt;NAME   STATUS    COMPLETIONS   DURATION   AGE
pi     Running   0/1           34s        34s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get &lt;span class="nb"&gt;jobs
&lt;/span&gt;NAME   STATUS     COMPLETIONS   DURATION   AGE
pi     Complete   1/1           54s        62s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME       READY   STATUS      RESTARTS   AGE
pi-8rdmn   0/1     Completed   0          2m1s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl events pod/pi-8rdmn
LAST SEEN   TYPE     REASON             OBJECT         MESSAGE
3m44s       Normal   Scheduled          Pod/pi-8rdmn   Successfully assigned default/pi-8rdmn to kind-worker2
3m44s       Normal   Pulling            Pod/pi-8rdmn   Pulling image &lt;span class="s2"&gt;"perl:5.34.0"&lt;/span&gt;
3m44s       Normal   SuccessfulCreate   Job/pi         Created pod: pi-8rdmn
2m59s       Normal   Pulled             Pod/pi-8rdmn   Successfully pulled image &lt;span class="s2"&gt;"perl:5.34.0"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;44.842s &lt;span class="o"&gt;(&lt;/span&gt;44.842s including waiting&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Image size: 336374010 bytes.
2m59s       Normal   Created            Pod/pi-8rdmn   Created container: pi
2m59s       Normal   Started            Pod/pi-8rdmn   Started container pi
2m50s       Normal   Completed          Job/pi         Job completed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete job/pi
job.batch &lt;span class="s2"&gt;"pi"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
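&lt;p&gt;A Job can also run its Pod multiple times. A sketch (assuming the same &lt;code&gt;pi&lt;/code&gt; container as above) using the &lt;code&gt;completions&lt;/code&gt; and &lt;code&gt;parallelism&lt;/code&gt; fields:&lt;/p&gt;

```yaml
# job-parallel.yaml (hypothetical file name)

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 3    # run 3 Pods to successful completion in total
  parallelism: 2    # at most 2 Pods running at the same time
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```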






&lt;h2&gt;
  
  
  CronJob
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noopener noreferrer"&gt;CronJob&lt;/a&gt; starts one-time Jobs on a repeating schedule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cronjob.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
  &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
            &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.28&lt;/span&gt;
            &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/bin/sh&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;date; echo Hello from the Kubernetes cluster&lt;/span&gt;
          &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
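&lt;p&gt;The &lt;code&gt;schedule&lt;/code&gt; field uses standard cron syntax; the five fields (all set to &lt;code&gt;*&lt;/code&gt; above, i.e. every minute) are:&lt;/p&gt;

```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │
# * * * * *
```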





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; cronjob.yaml
cronjob.batch/hello created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cronjobs
NAME    SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;   &amp;lt;none&amp;gt;     False     0        8s              55s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          30s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          106s
hello-29223075-9r7kx   0/1     Completed   0          46s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete cronjobs/hello
cronjob.batch &lt;span class="s2"&gt;"hello"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cronjobs
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Service&lt;/a&gt; is a method for exposing a network application that is running as one or more Pods in your cluster.&lt;/p&gt;

&lt;p&gt;There are several Service types supported in Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ClusterIP&lt;/li&gt;
&lt;li&gt;NodePort&lt;/li&gt;
&lt;li&gt;ExternalName&lt;/li&gt;
&lt;li&gt;LoadBalancer&lt;/li&gt;
&lt;/ul&gt;
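&lt;p&gt;The type is selected with the &lt;code&gt;spec.type&lt;/code&gt; field; when it is omitted, &lt;code&gt;ClusterIP&lt;/code&gt; is used. A minimal sketch (the Service name and selector are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc    # placeholder name
spec:
  type: NodePort       # ClusterIP | NodePort | LoadBalancer | ExternalName
  selector:
    app: example       # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```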

&lt;h3&gt;
  
  
  ClusterIP
&lt;/h3&gt;

&lt;p&gt;Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt; or a &lt;a href="https://gateway-api.sigs.k8s.io/" rel="noopener noreferrer"&gt;Gateway&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-labels.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-labels&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-basic.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-labels.yaml
pod/kubia-labels created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-basic.yaml
service/kubia-svc created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
kubernetes   ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP   21d
kubia-svc    ClusterIP   10.96.158.86   &amp;lt;none&amp;gt;        80/TCP    5s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
kubia-labels   1/1     Running   0          116s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia-labels &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://10.96.158.86:80
You&lt;span class="s1"&gt;'ve hit kubia-labels
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-basic.yaml
service &lt;span class="s2"&gt;"kubia-svc"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-labels.yaml
pod &lt;span class="s2"&gt;"kubia-labels"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-nginx.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:stable&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-web-svc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-nginx.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxy&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-web-svc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-nginx.yaml
pod/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-nginx.yaml
service/nginx-svc created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m51s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
kubernetes   ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP    21d
nginx-svc    ClusterIP   10.96.230.243   &amp;lt;none&amp;gt;        8080/TCP   32s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://10.96.230.243:8080
HTTP/1.1 200 OK
Server: nginx/1.28.0
Date: Thu, 07 Aug 2025 12:09:24 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 23 Apr 2025 11:48:54 GMT
Connection: keep-alive
ETag: &lt;span class="s2"&gt;"6808d3a6-267"&lt;/span&gt;
Accept-Ranges: bytes

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://nginx-svc:8080 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://nginx-svc.default:8080 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://nginx-svc.default.svc:8080 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://nginx-svc.default.svc.cluster.local:8080 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-nginx.yaml
service &lt;span class="s2"&gt;"nginx-svc"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-nginx.yaml
pod &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ExternalName
&lt;/h3&gt;

&lt;p&gt;Maps the Service to the contents of the &lt;code&gt;externalName&lt;/code&gt; field (for example, to the hostname &lt;code&gt;api.foo.bar.example&lt;/code&gt;). The mapping configures your cluster's DNS server to return a &lt;code&gt;CNAME&lt;/code&gt; record with that external hostname value. No proxying of any kind is set up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-ext.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalName&lt;/span&gt;
  &lt;span class="na"&gt;externalName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpbin.org&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-ext.yaml
service/httpbin-service created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
httpbin-service   ExternalName   &amp;lt;none&amp;gt;       httpbin.org   &amp;lt;none&amp;gt;    4m17s
kubernetes        ClusterIP      10.96.0.1    &amp;lt;none&amp;gt;        443/TCP   22d

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sk&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; GET https://httpbin-service/uuid &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"accept: application/json"&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"uuid"&lt;/span&gt;: &lt;span class="s2"&gt;"6a48fe51-a6b6-4e0a-9ef2-381ba7ea2c69"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
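

&lt;p&gt;Because an &lt;code&gt;ExternalName&lt;/code&gt; Service is implemented purely as a DNS &lt;code&gt;CNAME&lt;/code&gt; record, you can also observe the mapping directly. A quick sketch, assuming the container image ships a DNS tool such as &lt;code&gt;nslookup&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl exec kubia -- nslookup httpbin-service
# the resolver should report the CNAME, roughly:
#   httpbin-service.default.svc.cluster.local  canonical name = httpbin.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;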





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-ext.yaml
service &lt;span class="s2"&gt;"httpbin-service"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  NodePort
&lt;/h3&gt;

&lt;p&gt;Exposes the Service on each Node's IP at a static port (the &lt;code&gt;nodePort&lt;/code&gt;, which must fall within the cluster's node port range, &lt;code&gt;30000-32767&lt;/code&gt; by default). To make the node port available, Kubernetes also sets up a cluster IP address, the same as if you had requested a Service of type &lt;code&gt;ClusterIP&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-nginx-nodeport.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxy&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-web-svc&lt;/span&gt;
    &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30666&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-nginx.yaml
pod/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-nginx-nodeport.yaml
service/nginx-svc created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc nginx-svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
nginx-svc   NodePort   10.96.252.35   &amp;lt;none&amp;gt;        8080:30666/TCP   9s

&lt;span class="nv"&gt;$ &lt;/span&gt;docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                      NAMES
da2c842ddfd6   kindest/node:v1.33.1   &lt;span class="s2"&gt;"/usr/local/bin/entr…"&lt;/span&gt;   3 weeks ago   Up 3 weeks   0.0.0.0:40000-&amp;gt;30666/tcp   kind-worker
16bf718b93b6   kindest/node:v1.33.1   &lt;span class="s2"&gt;"/usr/local/bin/entr…"&lt;/span&gt;   3 weeks ago   Up 3 weeks   127.0.0.1:6443-&amp;gt;6443/tcp   kind-control-plane
bb18cefdb180   kindest/node:v1.33.1   &lt;span class="s2"&gt;"/usr/local/bin/entr…"&lt;/span&gt;   3 weeks ago   Up 3 weeks   0.0.0.0:40002-&amp;gt;30666/tcp   kind-worker3
42cea7794f0b   kindest/node:v1.33.1   &lt;span class="s2"&gt;"/usr/local/bin/entr…"&lt;/span&gt;   3 weeks ago   Up 3 weeks   0.0.0.0:40001-&amp;gt;30666/tcp   kind-worker2

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://localhost:40000 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://localhost:40001 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://localhost:40002 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
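

&lt;p&gt;As noted above, a &lt;code&gt;NodePort&lt;/code&gt; Service also gets a cluster IP, so it remains reachable from inside the cluster just like a plain &lt;code&gt;ClusterIP&lt;/code&gt; Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl exec nginx -- curl -sI http://nginx-svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;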





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-nginx-nodeport.yaml
service &lt;span class="s2"&gt;"nginx-svc"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-nginx.yaml
pod &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LoadBalancer
&lt;/h3&gt;

&lt;p&gt;Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.&lt;/p&gt;

&lt;p&gt;Let's look at how to get a Service of type &lt;code&gt;LoadBalancer&lt;/code&gt; working in a &lt;code&gt;kind&lt;/code&gt; cluster using &lt;a href="https://kind.sigs.k8s.io/docs/user/loadbalancer/" rel="noopener noreferrer"&gt;Cloud Provider KIND&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
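
&lt;p&gt;Cloud Provider KIND runs as a standalone binary on the host and watches for &lt;code&gt;LoadBalancer&lt;/code&gt; Services. A minimal sketch following its documentation (the exact install method may vary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ go install sigs.k8s.io/cloud-provider-kind@latest

# keep it running in a separate terminal; it may need elevated
# privileges to set up the load-balancer containers
$ cloud-provider-kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;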

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-lb-demo.yaml&lt;/span&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-echo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/agnhost&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serve-hostname&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--http=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--port=8080&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/e2e-test-images/agnhost:2.39&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo-app&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-echo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/agnhost&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serve-hostname&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--http=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--port=8080&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/e2e-test-images/agnhost:2.39&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar-app&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-echo-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-echo&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-lb-demo.yaml
pod/foo-app created
pod/bar-app created
service/http-echo-service created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc http-echo-service
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
http-echo-service   LoadBalancer   10.96.97.99   172.18.0.6    5678:31196/TCP   58s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc http-echo-service &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.status.loadBalancer.ingress[0].ip}'&lt;/span&gt;
172.18.0.6

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;_ &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..4&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; 172.18.0.6:5678&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
&lt;/span&gt;foo-app
bar-app
bar-app
foo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-lb-demo.yaml
pod &lt;span class="s2"&gt;"foo-app"&lt;/span&gt; deleted
pod &lt;span class="s2"&gt;"bar-app"&lt;/span&gt; deleted
service &lt;span class="s2"&gt;"http-echo-service"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Ingress
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt; exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ingress Controller
&lt;/h3&gt;

&lt;p&gt;In order for an &lt;code&gt;Ingress&lt;/code&gt; to work in your cluster, there must be an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="noopener noreferrer"&gt;Ingress Controller&lt;/a&gt; running. &lt;/p&gt;

&lt;p&gt;In a &lt;code&gt;kind&lt;/code&gt; cluster, you also have to run &lt;a href="https://kind.sigs.k8s.io/docs/user/loadbalancer/" rel="noopener noreferrer"&gt;Cloud Provider KIND&lt;/a&gt;: it provides the load-balancer implementation that the &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/main/README.md" rel="noopener noreferrer"&gt;Nginx Ingress&lt;/a&gt; controller consumes through the LoadBalancer API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;   &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ready pod &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;   &lt;span class="nt"&gt;--selector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;app.kubernetes.io/component&lt;span class="o"&gt;=&lt;/span&gt;controller &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;   &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w condition met
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-ldc97        0/1     Completed   0          2m25s
pod/ingress-nginx-admission-patch-zzlh7         0/1     Completed   0          2m25s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w   1/1     Running     0          2m25s

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   2m25s
service/ingress-nginx-controller-admission   ClusterIP      10.96.50.204   &amp;lt;none&amp;gt;        443/TCP                      2m25s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m25s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-86bb9f8d4b   1         1         1       2m25s

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           11s        2m25s
job.batch/ingress-nginx-admission-patch    Complete   1/1           12s        2m25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ingress resources
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;Ingress&lt;/code&gt; concept lets you map traffic to different backends; traffic routing is controlled by rules you define on the Ingress resource via the Kubernetes API.&lt;/p&gt;

&lt;h4&gt;
  
  
  Basic usage
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-foo-bar.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-foo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-bar&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-foo-bar.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-foo-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-bar-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-port&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ingress-basic.yaml &lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/foo&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-foo-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bar&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-bar-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-foo-bar.yaml
pod/kubia-foo created
pod/kubia-bar created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-foo-bar.yaml
service/kubia-foo-svc created
service/kubia-bar-svc created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; ingress-basic.yaml
ingress.networking.k8s.io/kubia created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
kubernetes      ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP   22d
kubia-bar-svc   ClusterIP   10.96.230.115   &amp;lt;none&amp;gt;        80/TCP    4m12s
kubia-foo-svc   ClusterIP   10.96.49.21     &amp;lt;none&amp;gt;        80/TCP    4m13s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ingress
NAME    CLASS    HOSTS   ADDRESS     PORTS   AGE
kubia   &amp;lt;none&amp;gt;   &lt;span class="k"&gt;*&lt;/span&gt;       localhost   80      67s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   63m
ingress-nginx-controller-admission   ClusterIP      10.96.50.204   &amp;lt;none&amp;gt;        443/TCP                      63m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get services &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;    &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;    ingress-nginx-controller &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;    &lt;span class="nt"&gt;--output&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.status.loadBalancer.ingress[0].ip}'&lt;/span&gt;
172.18.0.6

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://172.18.0.6:80/foo
You&lt;span class="s1"&gt;'ve hit kubia-foo

$ curl -s http://172.18.0.6:80/bar
You'&lt;/span&gt;ve hit kubia-bar

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://172.18.0.6:80/baz
&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;To reach the ingress at &lt;code&gt;localhost&lt;/code&gt; (&lt;code&gt;curl http://localhost/foo&lt;/code&gt;), define &lt;code&gt;extraPortMappings&lt;/code&gt; in the &lt;code&gt;kind&lt;/code&gt; cluster configuration as described in &lt;a href="https://kind.sigs.k8s.io/docs/user/configuration/#extra-port-mappings" rel="noopener noreferrer"&gt;Extra Port Mappings&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
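A minimal `kind` configuration with such mappings, adapted from the kind documentation, might look like this (a sketch: which node should carry the mappings depends on where your ingress controller is scheduled — the control-plane node is assumed here):

```yaml
# kind-config.yaml -- forward host ports 80/443 into the node, so that
# http://localhost/... reaches the ingress controller running in the cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```

Create the cluster with `kind create cluster --config kind-config.yaml`.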

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ingress/kubia
ingress.networking.k8s.io &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using a host
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ingress-hosts.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo.kubia.com&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-foo-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar.kubia.com&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-bar-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; ingress-hosts.yaml
ingress.networking.k8s.io/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ingress/kubia
NAME    CLASS    HOSTS                         ADDRESS     PORTS   AGE
kubia   &amp;lt;none&amp;gt;   foo.kubia.com,bar.kubia.com   localhost   80      103s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
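If you would rather address the services by host name than pass an explicit `Host` header, you can point the names at the ingress controller's external IP in `/etc/hosts` (illustrative; `172.18.0.6` is the `EXTERNAL-IP` from the earlier `kubectl -n ingress-nginx get svc` output):

```
# /etc/hosts -- illustrative entry; substitute your controller's EXTERNAL-IP
172.18.0.6  foo.kubia.com bar.kubia.com
```

After that, `curl http://foo.kubia.com/` resolves to the ingress directly.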





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://172.18.0.6
&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;title&amp;gt;404 Not Found&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;404 Not Found&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://172.18.0.6 &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Host: foo.kubia.com'&lt;/span&gt;
You&lt;span class="s1"&gt;'ve hit kubia-foo                               '&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://172.18.0.6 &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Host: bar.kubia.com'&lt;/span&gt;
You&lt;span class="s1"&gt;'ve hit kubia-bar
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ingress/kubia
ingress.networking.k8s.io &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  TLS
&lt;/h4&gt;

&lt;p&gt;You can secure an Ingress by specifying a &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets" rel="noopener noreferrer"&gt;Secret&lt;/a&gt; that contains a TLS private key and certificate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;openssl genrsa &lt;span class="nt"&gt;-out&lt;/span&gt; tls.key 2048
Generating RSA private key, 2048 bit long modulus &lt;span class="o"&gt;(&lt;/span&gt;2 primes&lt;span class="o"&gt;)&lt;/span&gt;
............................................+++++
............+++++
e is 65537 &lt;span class="o"&gt;(&lt;/span&gt;0x010001&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; tls.key &lt;span class="nt"&gt;-out&lt;/span&gt; tls.crt &lt;span class="nt"&gt;-days&lt;/span&gt; 360 &lt;span class="nt"&gt;-subj&lt;/span&gt; //CN&lt;span class="o"&gt;=&lt;/span&gt;foo.kubia.com

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret tls tls-secret &lt;span class="nt"&gt;--cert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.crt &lt;span class="nt"&gt;--key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.key
secret/tls-secret created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
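Before wiring the Secret into an Ingress, it can be worth confirming the certificate's subject (a sketch with throwaway `demo.key`/`demo.crt` file names; `-subj /CN=...` with a single slash is the usual form — the doubled slash above works around MSYS path conversion in Git Bash):

```shell
# Generate a throwaway self-signed cert and check its CN before creating
# the Kubernetes Secret (demo.key/demo.crt are illustrative names).
openssl genrsa -out demo.key 2048
openssl req -new -x509 -key demo.key -out demo.crt -days 360 -subj /CN=foo.kubia.com
openssl x509 -in demo.crt -noout -subject   # subject should show foo.kubia.com
```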





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ingress-tls.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;foo.kubia.com&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tls-secret&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo.kubia.com&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-foo-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; ingress-tls.yaml
ingress.networking.k8s.io/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ingress/kubia
NAME    CLASS    HOSTS           ADDRESS     PORTS     AGE
kubia   &amp;lt;none&amp;gt;   foo.kubia.com   localhost   80, 443   2m13s

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sk&lt;/span&gt; https://172.18.0.6:443 &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Host: foo.kubia.com'&lt;/span&gt;
You&lt;span class="s1"&gt;'ve hit kubia-foo
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ingress/kubia
ingress.networking.k8s.io &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/tls-secret
secret &lt;span class="s2"&gt;"tls-secret"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-foo-bar.yaml
pod &lt;span class="s2"&gt;"kubia-foo"&lt;/span&gt; deleted
pod &lt;span class="s2"&gt;"kubia-bar"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-foo-bar.yaml
service &lt;span class="s2"&gt;"kubia-foo-svc"&lt;/span&gt; deleted
service &lt;span class="s2"&gt;"kubia-bar-svc"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Probes
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noopener noreferrer"&gt;probe&lt;/a&gt; is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.&lt;/p&gt;

&lt;h3&gt;
  
  
  livenessProbe
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-liveness-probe.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-liveness&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia-unhealthy&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-liveness-probe.yaml
pod/kubia-liveness created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          42s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl events pod/kubia-liveness
LAST SEEN          TYPE      REASON      OBJECT               MESSAGE
113s               Normal    Scheduled   Pod/kubia-liveness   Successfully assigned default/kubia-liveness to kind-worker3
112s               Normal    Pulling     Pod/kubia-liveness   Pulling image &lt;span class="s2"&gt;"luksa/kubia-unhealthy"&lt;/span&gt;
77s                Normal    Pulled      Pod/kubia-liveness   Successfully pulled image &lt;span class="s2"&gt;"luksa/kubia-unhealthy"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;34.865s &lt;span class="o"&gt;(&lt;/span&gt;34.865s including waiting&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Image size: 263841919 bytes.
77s                Normal    Created     Pod/kubia-liveness   Created container: kubia
77s                Normal    Started     Pod/kubia-liveness   Started container kubia
2s &lt;span class="o"&gt;(&lt;/span&gt;x3 over 22s&lt;span class="o"&gt;)&lt;/span&gt;   Warning   Unhealthy   Pod/kubia-liveness   Liveness probe failed: HTTP probe failed with statuscode: 500
2s                 Normal    Killing     Pod/kubia-liveness   Container kubia failed liveness probe, will be restarted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME             READY   STATUS    RESTARTS      AGE
kubia-liveness   1/1     Running   1 &lt;span class="o"&gt;(&lt;/span&gt;20s ago&lt;span class="o"&gt;)&lt;/span&gt;   2m41s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  readinessProbe
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Indicates whether the container is ready to respond to requests. If the readiness probe fails, the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/" rel="noopener noreferrer"&gt;EndpointSlice&lt;/a&gt; controller removes the Pod's IP address from the &lt;code&gt;EndpointSlices&lt;/code&gt; of all Services that match the Pod&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
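An `exec` probe succeeds exactly when its command exits with status 0. The kubelet's check can be mimicked locally (a sketch run on your own machine, not inside the container; `/tmp/ready` mirrors the file the manifest below probes for):

```shell
# An exec probe passes iff the command exits 0 -- the same criterion the
# kubelet applies to `cat /tmp/ready`.
rm -f /tmp/ready
if cat /tmp/ready >/dev/null 2>&1; then echo ready; else echo "not ready"; fi  # -> not ready

touch /tmp/ready   # the same switch we later flip inside the container
if cat /tmp/ready >/dev/null 2>&1; then echo ready; else echo "not ready"; fi  # -> ready
```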

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-readiness-probe.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-readiness&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/tmp/ready&lt;/span&gt;
      &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-web&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# service-readiness-probe.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-web&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-readiness-probe.yaml
pod/kubia-readiness created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-readiness-probe.yaml
service/kubia-svc created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   0/1     Running   0          23s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
kubernetes   ClusterIP      10.96.0.1      &amp;lt;none&amp;gt;        443/TCP        23d
kubia-svc    LoadBalancer   10.96.150.51   172.18.0.7    80:31868/TCP   33s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia-readiness &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8080
You&lt;span class="s1"&gt;'ve hit kubia-readiness'&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia-readiness &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://kubia-svc:80
&lt;span class="nb"&gt;command &lt;/span&gt;terminated with &lt;span class="nb"&gt;exit &lt;/span&gt;code 7

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sv&lt;/span&gt; http://172.18.0.7:80
&lt;span class="k"&gt;*&lt;/span&gt;   Trying 172.18.0.7:80...
&lt;span class="k"&gt;*&lt;/span&gt; Connected to 172.18.0.7 &lt;span class="o"&gt;(&lt;/span&gt;172.18.0.7&lt;span class="o"&gt;)&lt;/span&gt; port 80 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="c"&gt;#0)&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; GET / HTTP/1.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Host: 172.18.0.7
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; User-Agent: curl/7.79.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;*&lt;/span&gt; Empty reply from server
&lt;span class="k"&gt;*&lt;/span&gt; Closing connection 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia-readiness &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;touch &lt;/span&gt;tmp/ready

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   1/1     Running   0          2m38s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;kubia-readiness &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://kubia-svc:80
You&lt;span class="s1"&gt;'ve hit kubia-readiness

$ curl -s http://172.18.0.7:80
You'&lt;/span&gt;ve hit kubia-readiness
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-readiness-probe.yaml
pod &lt;span class="s2"&gt;"kubia-readiness"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; service-readiness-probe.yaml
service &lt;span class="s2"&gt;"kubia-svc"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  startupProbe
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-port&lt;/span&gt;
  &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;

&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-port&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;

&lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-port&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
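With the settings above, the application gets up to `failureThreshold × periodSeconds = 30 × 10 = 300` seconds to finish starting; once the startup probe succeeds, the much stricter liveness probe (`1 × 10` s) takes over. A quick check of that arithmetic:

```shell
# Worst-case startup window implied by the startupProbe values above.
failureThreshold=30
periodSeconds=10
echo "$((failureThreshold * periodSeconds)) seconds"   # -> 300 seconds
```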



&lt;p&gt;For more information about configuring probes, see &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noopener noreferrer"&gt;Configure Liveness, Readiness and Startup Probes&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Volumes
&lt;/h2&gt;

&lt;p&gt;Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noopener noreferrer"&gt;volumes&lt;/a&gt; provide a way for containers in a Pod to access and share data via the filesystem. Depending on the volume type, data can be shared between processes within a container, between containers in the same Pod, or even between Pods.&lt;/p&gt;

&lt;p&gt;Kubernetes supports several &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#volume-types" rel="noopener noreferrer"&gt;types of volumes&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ephemeral Volumes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/" rel="noopener noreferrer"&gt;Ephemeral volumes&lt;/a&gt; are temporary storage that are intrinsically linked to the lifecycle of a Pod. Ephemeral volumes are designed for scenarios where data persistence is not required beyond the life of a single Pod.&lt;/p&gt;

&lt;p&gt;Kubernetes supports several kinds of ephemeral volumes for different purposes: &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noopener noreferrer"&gt;emptyDir&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="noopener noreferrer"&gt;configMap&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#downwardapi" rel="noopener noreferrer"&gt;downwardAPI&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#secret" rel="noopener noreferrer"&gt;secret&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#image" rel="noopener noreferrer"&gt;image&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes" rel="noopener noreferrer"&gt;CSI&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  emptyDir
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;For a Pod that defines an &lt;code&gt;emptyDir&lt;/code&gt; volume, the volume is created when the Pod is assigned to a node. The &lt;code&gt;emptyDir&lt;/code&gt; volume is initially empty.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-volume-emptydir.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:stable&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp-cache&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tmp&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tmp&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-volume-emptydir.yaml
pod/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;cache
drwxrwxrwx   2 root root 4096 Aug 11 08:13 tmp-cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-volume-emptydir.yaml
pod &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can create a volume in memory using the &lt;code&gt;tmpfs&lt;/code&gt; file system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tmp&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;sizeLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500Mi&lt;/span&gt;
      &lt;span class="na"&gt;medium&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Memory&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Projected Volumes
&lt;/h3&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/projected-volumes/" rel="noopener noreferrer"&gt;projected volume&lt;/a&gt; maps several existing volume sources into the same directory.&lt;/p&gt;

&lt;p&gt;Currently, the following types of volume sources can be projected: &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#secret" rel="noopener noreferrer"&gt;secret&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#downwardapi" rel="noopener noreferrer"&gt;downwardAPI&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="noopener noreferrer"&gt;configMap&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/projected-volumes/#serviceaccounttoken" rel="noopener noreferrer"&gt;serviceAccountToken&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/projected-volumes/#clustertrustbundle" rel="noopener noreferrer"&gt;clusterTrustBundle&lt;/a&gt;&lt;/p&gt;
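&lt;p&gt;As a minimal sketch, a Pod can project a &lt;code&gt;configMap&lt;/code&gt; and the &lt;code&gt;downwardAPI&lt;/code&gt; into a single mount (the ConfigMap name &lt;code&gt;app-config&lt;/code&gt; here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# pod-volume-projected.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /projected
      name: all-in-one
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: app-config        # hypothetical ConfigMap
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;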

&lt;h3&gt;
  
  
  Persistent Volumes
&lt;/h3&gt;

&lt;p&gt;Persistent volumes offer durable storage, meaning the data stored within them persists even after the associated Pods are deleted, restarted, or rescheduled.&lt;/p&gt;

&lt;h4&gt;
  
  
  PersistentVolume
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noopener noreferrer"&gt;PersistentVolume&lt;/a&gt; (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using &lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noopener noreferrer"&gt;Storage Classes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PersistentVolume&lt;/code&gt; types are implemented as plugins. Kubernetes currently supports the following plugins: &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#csi" rel="noopener noreferrer"&gt;csi&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#fc" rel="noopener noreferrer"&gt;fc&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#iscsi" rel="noopener noreferrer"&gt;iscsi&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noopener noreferrer"&gt;local&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="noopener noreferrer"&gt;nfs&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noopener noreferrer"&gt;hostPath&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  hostPath
&lt;/h5&gt;

&lt;p&gt;A &lt;code&gt;hostPath&lt;/code&gt; volume mounts a file or directory from the host node's filesystem into your Pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-volume-hostpath.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:stable&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cache&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
    &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/cache&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DirectoryOrCreate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-volume-hostpath.yaml
pod/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;nginx &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;cache
drwxr-xr-x   2 root root 4096 Aug 11 12:27 cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-volume-hostpath.yaml
pod &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pv-hostpath.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pv-redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadOnlyMany&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
  &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pv-hostpath.yaml
persistentvolume/pv-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis     1Gi        RWO,ROX        Retain           Available                          &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          44s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  PersistentVolumeClaim
&lt;/h4&gt;

&lt;p&gt;A &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; (PVC) is a request for storage by a user. A &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; volume is used to mount a &lt;code&gt;PersistentVolume&lt;/code&gt; into a Pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pvc-basic.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.5Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 6s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Bound    default/pvc-redis                  &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          28s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-pvc.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt; 
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:6.2&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-rdb&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-rdb&lt;/span&gt;
    &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-pvc.yaml
pod/redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po redis &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.volumes[?(@.name == "redis-rdb")]}'&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"redis-rdb"&lt;/span&gt;,&lt;span class="s2"&gt;"persistentVolumeClaim"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"claimName"&lt;/span&gt;:&lt;span class="s2"&gt;"pvc-redis"&lt;/span&gt;&lt;span class="o"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;redis &lt;span class="nt"&gt;--&lt;/span&gt; redis-cli save
OK

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po redis &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.nodeName}'&lt;/span&gt;
kind-worker2

&lt;span class="nv"&gt;$ &lt;/span&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;kind-worker2 &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; tmp/redis
total 4
&lt;span class="nt"&gt;-rw-------&lt;/span&gt; 1 999 systemd-journal 102 Aug 11 14:47 dump.rdb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete po/redis
pod &lt;span class="s2"&gt;"redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc/pvc-redis
persistentvolumeclaim &lt;span class="s2"&gt;"pvc-redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          37m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 9s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          40m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod-pvc.yaml
pod/redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
redis   0/1     Pending   0          92s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl events pod/redis
LAST SEEN             TYPE      REASON             OBJECT                            MESSAGE
37m                   Normal    Scheduled          Pod/redis                         Successfully assigned default/redis to kind-worker2
37m                   Normal    Pulling            Pod/redis                         Pulling image &lt;span class="s2"&gt;"redis:6.2"&lt;/span&gt;
37m                   Normal    Pulled             Pod/redis                         Successfully pulled image &lt;span class="s2"&gt;"redis:6.2"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;5.993s &lt;span class="o"&gt;(&lt;/span&gt;5.993s including waiting&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Image size: 40179474 bytes.
37m                   Normal    Created            Pod/redis                         Created container: redis
37m                   Normal    Started            Pod/redis                         Started container redis
6m57s                 Normal    Killing            Pod/redis                         Stopping container redis
2m4s                  Warning   FailedScheduling   Pod/redis                         0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful &lt;span class="k"&gt;for &lt;/span&gt;scheduling.
8s &lt;span class="o"&gt;(&lt;/span&gt;x16 over 3m51s&lt;span class="o"&gt;)&lt;/span&gt;   Normal    FailedBinding      PersistentVolumeClaim/pvc-redis   no persistent volumes available &lt;span class="k"&gt;for &lt;/span&gt;this claim and no storage class is &lt;span class="nb"&gt;set&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pv/pv-redis
persistentvolume &lt;span class="s2"&gt;"pv-redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 61s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pv-hostpath.yaml
persistentvolume/pv-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 2m2s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pod/redis
pod &lt;span class="s2"&gt;"redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc/pvc-redis
persistentvolumeclaim &lt;span class="s2"&gt;"pvc-redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pv/pv-redis
persistentvolume &lt;span class="s2"&gt;"pv-redis"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Dynamic Volume Provisioning
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noopener noreferrer"&gt;Dynamic volume provisioning&lt;/a&gt; allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create &lt;code&gt;PersistentVolume&lt;/code&gt; objects to represent them in Kubernetes.&lt;/p&gt;

&lt;h5&gt;
  
  
  StorageClass
&lt;/h5&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noopener noreferrer"&gt;StorageClass&lt;/a&gt; provides a way for administrators to describe the classes of storage they offer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard &lt;span class="o"&gt;(&lt;/span&gt;default&lt;span class="o"&gt;)&lt;/span&gt;   rancher.io/local-path   Delete          WaitForFirstConsumer   &lt;span class="nb"&gt;false                  &lt;/span&gt;25d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# storageclass-local-path.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storageclass-redis&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storageclass.kubernetes.io/is-default-class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rancher.io/local-path&lt;/span&gt;
&lt;span class="na"&gt;volumeBindingMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Immediate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard &lt;span class="o"&gt;(&lt;/span&gt;default&lt;span class="o"&gt;)&lt;/span&gt;   rancher.io/local-path   Delete          WaitForFirstConsumer   &lt;span class="nb"&gt;false                  &lt;/span&gt;26d
storageclass-redis   rancher.io/local-path   Delete          Immediate              &lt;span class="nb"&gt;false                  &lt;/span&gt;26s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pvc-sc.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-dynamic-redis&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;volume.kubernetes.io/selected-node&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind-worker&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.5Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storageclass-redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pvc-sc.yaml
persistentvolumeclaim/pvc-dynamic-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Pending                                      storageclass-redis   &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 8s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Bound    pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            storageclass-redis   &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 26s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS         VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            Delete           Bound    default/pvc-dynamic-redis   storageclass-redis   &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          47s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete sc/st
sc/standard            sc/storageclass-redis

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io &lt;span class="s2"&gt;"storageclass-redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
No resources found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ConfigMaps
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noopener noreferrer"&gt;ConfigMap&lt;/a&gt; is an API object used to store non-confidential data in key-value pairs. &lt;code&gt;Pods&lt;/code&gt; can consume &lt;code&gt;ConfigMaps&lt;/code&gt; as environment variables, command-line arguments, or as configuration files in a &lt;code&gt;volume&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating ConfigMaps
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Imperative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="c"&gt;# application.properties
&lt;/span&gt;&lt;span class="py"&gt;server.port&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;8080&lt;/span&gt;
&lt;span class="py"&gt;spring.profiles.active&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;development&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap my-config &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;app.props&lt;span class="o"&gt;=&lt;/span&gt;application.properties
configmap/my-config created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      61s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm/my-config &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  app.props: |-
    &lt;span class="c"&gt;# application.properties&lt;/span&gt;
    server.port&lt;span class="o"&gt;=&lt;/span&gt;8080
    spring.profiles.active&lt;span class="o"&gt;=&lt;/span&gt;development
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-09-15T20:20:44Z"&lt;/span&gt;
  name: my-config
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"3636455"&lt;/span&gt;
  uid: 9c68ecb1-55ca-469a-b09e-3e1b625cd69b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete cm my-config
configmap &lt;span class="s2"&gt;"my-config"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Declarative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cm.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app.props&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;server.port=8080&lt;/span&gt;
    &lt;span class="s"&gt;spring.profiles.active=development&lt;/span&gt;
  &lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; cm.yaml
configmap/my-config created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      19s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm/my-config &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  app.props: |
    server.port&lt;span class="o"&gt;=&lt;/span&gt;8080
    spring.profiles.active&lt;span class="o"&gt;=&lt;/span&gt;development
  foo: bar
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"v1"&lt;/span&gt;,&lt;span class="s2"&gt;"data"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"app.props"&lt;/span&gt;:&lt;span class="s2"&gt;"server.port=8080&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;spring.profiles.active=development&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;,&lt;span class="s2"&gt;"foo"&lt;/span&gt;:&lt;span class="s2"&gt;"bar"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,&lt;span class="s2"&gt;"kind"&lt;/span&gt;:&lt;span class="s2"&gt;"ConfigMap"&lt;/span&gt;,&lt;span class="s2"&gt;"metadata"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"annotations"&lt;/span&gt;:&lt;span class="o"&gt;{}&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"my-config"&lt;/span&gt;,&lt;span class="s2"&gt;"namespace"&lt;/span&gt;:&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}}&lt;/span&gt;
  creationTimestamp: &lt;span class="s2"&gt;"2025-09-15T20:27:51Z"&lt;/span&gt;
  name: my-config
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"3637203"&lt;/span&gt;
  uid: a8d9fce1-f2bd-470c-93a2-3a7fcc560bbc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using ConfigMaps
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Consuming a single ConfigMap key as an environment variable
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-cm-env.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env-configmap&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;printenv"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MY_VAR"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MY_VAR&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-env.yaml
pod/env-configmap created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs pod/env-configmap
bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-env.yaml
pod &lt;span class="s2"&gt;"env-configmap"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Consuming all ConfigMap entries as environment variables
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-cm-envfrom.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env-from-configmap&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;printenv"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;config_foo"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
      &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_&lt;/span&gt;
          &lt;span class="na"&gt;configMapRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-envfrom.yaml
pod/env-from-configmap created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs pod/env-from-configmap
bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-envfrom.yaml
pod &lt;span class="s2"&gt;"env-from-configmap"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using a configMap volume
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-cm-volumemount.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;configmap-volumemount&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cat"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/etc/props/app.props"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-props&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/etc/props"&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-props&lt;/span&gt;
    &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-volumemount.yaml
pod/configmap-volumemount created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs pod/configmap-volumemount
server.port&lt;span class="o"&gt;=&lt;/span&gt;8080
spring.profiles.active&lt;span class="o"&gt;=&lt;/span&gt;development
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-volumemount.yaml
pod &lt;span class="s2"&gt;"configmap-volumemount"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using a configMap volume with items
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-cm-volume-items.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;configmap-volume-items&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cat"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/etc/configs/app.conf"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/etc/configs"&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
      &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config&lt;/span&gt;
        &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-volume-items.yaml
pod/configmap-volume-items created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs pod/configmap-volume-items
bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-cm-volume-items.yaml
pod &lt;span class="s2"&gt;"configmap-volume-items"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Secrets
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Secret&lt;/a&gt; is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a &lt;code&gt;Pod&lt;/code&gt; specification or in a container image. Using a &lt;code&gt;Secret&lt;/code&gt; means that you don't need to include confidential data in your application code..&lt;/p&gt;
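
&lt;p&gt;Keep in mind that a &lt;code&gt;Secret&lt;/code&gt; only base64-encodes its values; it does not encrypt them, so anyone who can read the object can recover the plaintext. A quick round trip illustrates this (a local shell sketch, no cluster required):&lt;/p&gt;

```shell
# Encode a value the way kubectl stores it under .data
encoded=$(printf '%s' '39528$vdg7Jb' | base64)
echo "$encoded"    # Mzk1MjgkdmRnN0pi

# Decode it back -- no key or passphrase is involved
printf '%s' "$encoded" | base64 -d    # 39528$vdg7Jb
```

&lt;p&gt;For real protection at rest, enable encryption of Secret resources in etcd or use an external secret manager.&lt;/p&gt;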

&lt;h3&gt;
  
  
  Default Secrets in a Pod
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-basic.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po/kubia &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.containers[0].volumeMounts}'&lt;/span&gt;
&lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"mountPath"&lt;/span&gt;:&lt;span class="s2"&gt;"/var/run/secrets/kubernetes.io/serviceaccount"&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"kube-api-access-jd9vq"&lt;/span&gt;,&lt;span class="s2"&gt;"readOnly"&lt;/span&gt;:true&lt;span class="o"&gt;}]&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po/kubia &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.volumes[?(@.name == "kube-api-access-jd9vq")].projected.sources}'&lt;/span&gt;
&lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"serviceAccountToken"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"expirationSeconds"&lt;/span&gt;:3607,&lt;span class="s2"&gt;"path"&lt;/span&gt;:&lt;span class="s2"&gt;"token"&lt;/span&gt;&lt;span class="o"&gt;}}&lt;/span&gt;,&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"configMap"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"items"&lt;/span&gt;:[&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"key"&lt;/span&gt;:&lt;span class="s2"&gt;"ca.crt"&lt;/span&gt;,&lt;span class="s2"&gt;"path"&lt;/span&gt;:&lt;span class="s2"&gt;"ca.crt"&lt;/span&gt;&lt;span class="o"&gt;}]&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"kube-root-ca.crt"&lt;/span&gt;&lt;span class="o"&gt;}}&lt;/span&gt;,&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"downwardAPI"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"items"&lt;/span&gt;:[&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"fieldRef"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"v1"&lt;/span&gt;,&lt;span class="s2"&gt;"fieldPath"&lt;/span&gt;:&lt;span class="s2"&gt;"metadata.namespace"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,&lt;span class="s2"&gt;"path"&lt;/span&gt;:&lt;span class="s2"&gt;"namespace"&lt;/span&gt;&lt;span class="o"&gt;}]}}]&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;po/kubia &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating Secrets
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Imperative way
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Opaque Secrets
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic empty-secret
secret/empty-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret empty-secret
NAME           TYPE     DATA   AGE
empty-secret   Opaque   0      9s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/empty-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-05T17:19:07Z"&lt;/span&gt;
  name: empty-secret
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"6290557"&lt;/span&gt;
  uid: 031d7f8d-e96d-4e03-a90f-2cb96308354b
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/empty-secret
secret &lt;span class="s2"&gt;"empty-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;openssl genrsa &lt;span class="nt"&gt;-out&lt;/span&gt; tls.key
Generating RSA private key, 2048 bit long modulus &lt;span class="o"&gt;(&lt;/span&gt;2 primes&lt;span class="o"&gt;)&lt;/span&gt;
...............................................................+++++
.................................+++++
e is 65537 &lt;span class="o"&gt;(&lt;/span&gt;0x010001&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; tls.key &lt;span class="nt"&gt;-out&lt;/span&gt; tls.crt &lt;span class="nt"&gt;-subj&lt;/span&gt; /CN&lt;span class="o"&gt;=&lt;/span&gt;kubia.com

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic kubia-secret &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.key &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.crt
secret/kubia-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/kubia-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo&lt;span class="o"&gt;=&lt;/span&gt;
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-05T17:26:21Z"&lt;/span&gt;
  name: kubia-secret
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"6291327"&lt;/span&gt;
  uid: a06d4be4-3e21-47ea-8009-d300c1c449f9
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/kubia-secret
secret &lt;span class="s2"&gt;"kubia-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic test-secret &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'username=admin'&lt;/span&gt; &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'password=39528$vdg7Jb'&lt;/span&gt;
secret/test-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/test-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  password: Mzk1MjgkdmRnN0pi
  username: &lt;span class="nv"&gt;YWRtaW4&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-05T18:21:28Z"&lt;/span&gt;
  name: test-secret
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"6297117"&lt;/span&gt;
  uid: 215daac1-7305-43f4-91c6-c7dbdeca2802
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/test-secret
secret &lt;span class="s2"&gt;"test-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  TLS Secrets
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret tls my-tls-secret &lt;span class="nt"&gt;--key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.key &lt;span class="nt"&gt;--cert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tls.crt
secret/my-tls-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/my-tls-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo&lt;span class="o"&gt;=&lt;/span&gt;
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-05T17:37:45Z"&lt;/span&gt;
  name: my-tls-secret
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"6292515"&lt;/span&gt;
  uid: f15b375e-2404-4ca0-a08f-014a0efeec70
&lt;span class="nb"&gt;type&lt;/span&gt;: kubernetes.io/tls

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/my-tls-secret
secret &lt;span class="s2"&gt;"my-tls-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Docker config Secrets
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret docker-registry my-docker-registry-secret &lt;span class="nt"&gt;--docker-username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;robert &lt;span class="nt"&gt;--docker-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;passw123 &lt;span class="nt"&gt;--docker-server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nexus.registry.com:5000
secret/my-docker-registry-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/my-docker-registry-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJuZXh1cy5yZWdpc3RyeS5jb206NTAwMCI6eyJ1c2VybmFtZSI6InJvYmVydCIsInBhc3N3b3JkIjoicGFzc3cxMjMiLCJhdXRoIjoiY205aVpYSjBPbkJoYzNOM01USXoifX19
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-05T17:44:10Z"&lt;/span&gt;
  name: my-docker-registry-secret
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"6293203"&lt;/span&gt;
  uid: c9d05ef7-8c8c-4e2b-bf6f-27f80a45d545
&lt;span class="nb"&gt;type&lt;/span&gt;: kubernetes.io/dockerconfigjson

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secret/my-docker-registry-secret
secret &lt;span class="s2"&gt;"my-docker-registry-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
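
&lt;p&gt;The &lt;code&gt;.dockerconfigjson&lt;/code&gt; value is just the base64-encoded Docker config file, and its nested &lt;code&gt;auth&lt;/code&gt; field is in turn base64 of &lt;code&gt;username:password&lt;/code&gt;, so the registry credentials are recoverable by anyone who can read the &lt;code&gt;Secret&lt;/code&gt;. A local shell sketch using the values from the transcript above:&lt;/p&gt;

```shell
# Decode the nested "auth" field taken from the .dockerconfigjson payload
printf '%s' 'cm9iZXJ0OnBhc3N3MTIz' | base64 -d    # robert:passw123
```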



&lt;h4&gt;
  
  
  Declarative way
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Opaque Secrets
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'my-app'&lt;/span&gt; | &lt;span class="nb"&gt;base64
&lt;/span&gt;bXktYXBw

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'39528$vdg7Jb'&lt;/span&gt; | &lt;span class="nb"&gt;base64
&lt;/span&gt;Mzk1MjgkdmRnN0pi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# opaque-secret.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;opaque-secret&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bXktYXBw&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mzk1MjgkdmRnN0pi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; opaque-secret.yaml
secret/opaque-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secrets
NAME            TYPE     DATA   AGE
opaque-secret   Opaque   2      4s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; opaque-secret.yaml
secret &lt;span class="s2"&gt;"opaque-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
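&lt;p&gt;Keep in mind that the values under &lt;code&gt;data:&lt;/code&gt; are reversible base64, not encryption. A one-line sketch with the password value used above:&lt;/p&gt;

```shell
# Anyone who can read the Secret manifest can recover the plaintext.
printf '%s' 'Mzk1MjgkdmRnN0pi' | base64 -d
# 39528$vdg7Jb
```

&lt;p&gt;This is why protecting &lt;code&gt;Secrets&lt;/code&gt; relies on RBAC and encryption at rest, not on the base64 representation itself.&lt;/p&gt;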



&lt;h5&gt;
  
  
  Docker config Secrets
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# dockercfg-secret.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-dockercfg&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/dockercfg&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;.dockercfg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; dockercfg-secret.yaml
secret/secret-dockercfg created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secrets
NAME               TYPE                      DATA   AGE
secret-dockercfg   kubernetes.io/dockercfg   1      3s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe secret/secret-dockercfg
Name:         secret-dockercfg
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Type:  kubernetes.io/dockercfg

Data
&lt;span class="o"&gt;====&lt;/span&gt;
.dockercfg:  56 bytes

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; dockercfg-secret.yaml
secret &lt;span class="s2"&gt;"secret-dockercfg"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Basic authentication Secret
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# basicauth-secret.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-basic-auth&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/basic-auth&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pass1234&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; basicauth-secret.yaml
secret/secret-basic-auth created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secrets
NAME                TYPE                       DATA   AGE
secret-basic-auth   kubernetes.io/basic-auth   2      3s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe secret/secret-basic-auth
Name:         secret-basic-auth
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Type:  kubernetes.io/basic-auth

Data
&lt;span class="o"&gt;====&lt;/span&gt;
password:  8 bytes
username:  5 bytes

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; basicauth-secret.yaml
secret &lt;span class="s2"&gt;"secret-basic-auth"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
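&lt;p&gt;Unlike &lt;code&gt;data:&lt;/code&gt;, the &lt;code&gt;stringData:&lt;/code&gt; field accepts plaintext for convenience; the API server base64-encodes it into &lt;code&gt;data:&lt;/code&gt; on write. A quick sketch of the equivalent encoding for the two fields above:&lt;/p&gt;

```shell
# What the API server stores under data: for the stringData fields above.
printf '%s' 'admin'    | base64   # YWRtaW4=
printf '%s' 'pass1234' | base64   # cGFzczEyMzQ=
```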



&lt;h3&gt;
  
  
  Using Secrets
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Secrets&lt;/code&gt; can be mounted as data volumes or exposed as environment variables to be used by a container in a &lt;code&gt;Pod&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic test-secret &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'username=admin'&lt;/span&gt; &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'password=39528$vdg7Jb'&lt;/span&gt;
secret/test-secret created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe secret test-secre
Name:         test-secret
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Type:  Opaque

Data
&lt;span class="o"&gt;====&lt;/span&gt;
password:  12 bytes
username:  5 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using Secrets as files from a Pod
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-secret-volumemount.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/secret-volume&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
      &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-volumemount.yaml
pod/secret-test-pod created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod secret-test-pod
NAME              READY   STATUS    RESTARTS   AGE
secret-test-pod   1/1     Running   0          30s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; /etc/secret-volume
password
username

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;head&lt;/span&gt; /etc/secret-volume/&lt;span class="o"&gt;{&lt;/span&gt;username,password&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; /etc/secret-volume/username &amp;lt;&lt;span class="o"&gt;==&lt;/span&gt;
admin
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; /etc/secret-volume/password &amp;lt;&lt;span class="o"&gt;==&lt;/span&gt;
39528&lt;span class="nv"&gt;$vdg7Jb&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-volumemount.yaml
pod &lt;span class="s2"&gt;"secret-test-pod"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Project Secret keys to specific file paths
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-secret-volume-items.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/secret-volume&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-volume&lt;/span&gt;
      &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-secret&lt;/span&gt;
        &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-group/my-username&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-volume-items.yaml
pod/secret-test-pod created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; /etc/secret-volume
my-group

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; /etc/secret-volume/my-group
my-username

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;head&lt;/span&gt; /etc/secret-volume/my-group/my-username
admin

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-volume-items.yaml
pod &lt;span class="s2"&gt;"secret-test-pod"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using Secrets as environment variables
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Define a container environment variable with data from a single Secret
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-secret-env-var.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SECRET_PASSWORD&lt;/span&gt;
        &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-secret&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-env-var.yaml
pod/secret-test-pod created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'echo $SECRET_PASSWORD'&lt;/span&gt;
39528&lt;span class="nv"&gt;$vdg7Jb&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-env-var.yaml
pod &lt;span class="s2"&gt;"secret-test-pod"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Define all of the Secret's data as container environment variables
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-secret-envfrom.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-envfrom.yaml
pod/secret-test-pod created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;secret-test-pod &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'echo "username: $username\npassword: $password\n"'&lt;/span&gt;
username: admin
password: 39528&lt;span class="nv"&gt;$vdg7Jb&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-secret-envfrom.yaml
pod &lt;span class="s2"&gt;"secret-test-pod"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete secrets test-secret
secret &lt;span class="s2"&gt;"test-secret"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Deployments
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; is a high-level resource used to manage and scale applications while ensuring they remain in the desired state. It provides a declarative way to define how many Pods should run, which container images they should use, and how updates should be applied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Deployments
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Imperative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create deployment my-nginx-deployment &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
deployment.apps/my-nginx-deployment created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           20s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout status deployment/my-nginx-deployment
deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; successfully rolled out

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       2m30s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-d4c9q   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-jdvtf   1/1     Running   0          2m58s
my-nginx-deployment-677c645895-mkjsc   1/1     Running   0          2m58s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward deployments/my-nginx-deployment 80
Forwarding from 127.0.0.1:80 -&amp;gt; 80
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:80 -&amp;gt; 80

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sI&lt;/span&gt; localhost:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deployment/my-nginx-deployment &lt;span class="nv"&gt;nginx&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:1.16.1
deployment.apps/my-nginx-deployment image updated

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout status deployment/my-nginx-deployment
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 1 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 1 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 2 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 2 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 2 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 2 out of 3 new replicas have been updated...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 1 old replicas are pending termination...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; rollout to finish: 1 old replicas are pending termination...
deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; successfully rolled out

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           5m13s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   0         0         0       5m31s
my-nginx-deployment-68b8b6c496   3         3         3       101s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-68b8b6c496-6p9jg   1/1     Running   0          118s
my-nginx-deployment-68b8b6c496-mfcnj   1/1     Running   0          2m2s
my-nginx-deployment-68b8b6c496-ngm4b   1/1     Running   0          2m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po/my-nginx-deployment-68b8b6c496-6p9jg &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.containers[0].image}'&lt;/span&gt;
nginx:1.16.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout &lt;span class="nb"&gt;history &lt;/span&gt;deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
1         &amp;lt;none&amp;gt;
2         &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout &lt;span class="nb"&gt;history &lt;/span&gt;deployment/my-nginx-deployment &lt;span class="nt"&gt;--revision&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
deployment.apps/my-nginx-deployment with revision &lt;span class="c"&gt;#2&lt;/span&gt;
Pod Template:
  Labels:       &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-nginx-deployment
        pod-template-hash&lt;span class="o"&gt;=&lt;/span&gt;68b8b6c496
  Containers:
   nginx:
    Image:      nginx:1.16.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        &amp;lt;none&amp;gt;
    Mounts:     &amp;lt;none&amp;gt;
  Volumes:      &amp;lt;none&amp;gt;
  Node-Selectors:       &amp;lt;none&amp;gt;
  Tolerations:  &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout undo deployment/my-nginx-deployment &lt;span class="nt"&gt;--to-revision&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
deployment.apps/my-nginx-deployment rolled back

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout &lt;span class="nb"&gt;history &lt;/span&gt;deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment
REVISION  CHANGE-CAUSE
2         &amp;lt;none&amp;gt;
3         &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   3         3         3       11m
my-nginx-deployment-68b8b6c496   0         0         0       7m11s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          71s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          73s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          68s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po/my-nginx-deployment-677c645895-cr2vd &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.containers[0].image}'&lt;/span&gt;
nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl scale deployment/my-nginx-deployment &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
deployment.apps/my-nginx-deployment scaled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   5/5     5            5           14m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-677c645895   5         5         5       14m
my-nginx-deployment-68b8b6c496   0         0         0       10m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          21s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          4m34s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          4m36s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          4m31s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout pause deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment paused

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-9zrmk   1/1     Running   0          3m14s
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          7m27s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          7m29s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          7m24s
my-nginx-deployment-677c645895-qk4b5   1/1     Running   0          3m14s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl scale deployment/my-nginx-deployment &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
deployment.apps/my-nginx-deployment scaled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m28s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m30s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m25s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deployment/my-nginx-deployment &lt;span class="nv"&gt;nginx&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:1.17.2
deployment.apps/my-nginx-deployment image updated

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-677c645895-cr2vd   1/1     Running   0          8m43s
my-nginx-deployment-677c645895-cxbpn   1/1     Running   0          8m35s
my-nginx-deployment-677c645895-l67cc   1/1     Running   0          8m30s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout resume deployment/my-nginx-deployment
deployment.apps/my-nginx-deployment resumed

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-75c7c977bb-hwx6r   1/1     Running   0          32s
my-nginx-deployment-75c7c977bb-qlfhc   1/1     Running   0          19s
my-nginx-deployment-75c7c977bb-z7l59   1/1     Running   0          43s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po/my-nginx-deployment-75c7c977bb-hwx6r &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.containers[0].image}'&lt;/span&gt;
nginx:1.17.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
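

&lt;p&gt;After an image update like the one above, &lt;code&gt;kubectl rollout&lt;/code&gt; can also inspect and revert revisions. A quick sketch (commands only, against the same Deployment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the recorded revisions of the Deployment
$ kubectl rollout history deployment/my-nginx-deployment

# Roll back to the previous revision
$ kubectl rollout undo deployment/my-nginx-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;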





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deploy/my-nginx-deployment
deployment.apps &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Declarative way
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-basic.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-basic.yaml
deployment.apps/my-nginx-deployment created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           10s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout status deployment/my-nginx-deployment
deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; successfully rolled out

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout status deployment/my-nginx-deployment
deployment &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; successfully rolled out

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                           DESIRED   CURRENT   READY   AGE
my-nginx-deployment-96b9d695   3         3         3       31s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
my-nginx-deployment-96b9d695-7hgx5   1/1     Running   0          33s
my-nginx-deployment-96b9d695-nvb6h   1/1     Running   0          33s
my-nginx-deployment-96b9d695-r5t55   1/1     Running   0          33s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-basic.yaml
deployment.apps &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nginx.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;user nginx;&lt;/span&gt;
    &lt;span class="s"&gt;worker_processes  1;&lt;/span&gt;
    &lt;span class="s"&gt;events {&lt;/span&gt;
      &lt;span class="s"&gt;worker_connections  10240;&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;http {&lt;/span&gt;
      &lt;span class="s"&gt;server {&lt;/span&gt;
        &lt;span class="s"&gt;listen 80;&lt;/span&gt;
        &lt;span class="s"&gt;server_name  _;&lt;/span&gt;
        &lt;span class="s"&gt;location ~ ^/(healthz|readyz)$ {&lt;/span&gt;
            &lt;span class="s"&gt;add_header Content-Type text/plain;&lt;/span&gt;
            &lt;span class="s"&gt;return 200 'OK';&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="s"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# deployment-probes.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;      &lt;span class="c1"&gt;# Wait for a deployment to make progress before marking it as stalled&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;          &lt;span class="c1"&gt;# Endpoint for liveness checks&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;   &lt;span class="c1"&gt;# Wait 15 seconds before first liveness probe&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;         &lt;span class="c1"&gt;# Check every 10 seconds&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;         &lt;span class="c1"&gt;# Timeout after 5 seconds&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;       &lt;span class="c1"&gt;# Restart container after 3 consecutive failures&lt;/span&gt;
        &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/readyz&lt;/span&gt;           &lt;span class="c1"&gt;# Endpoint for readiness checks&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;    &lt;span class="c1"&gt;# Wait 5 seconds before first readiness probe&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;          &lt;span class="c1"&gt;# Check every 5 seconds&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;         &lt;span class="c1"&gt;# Timeout after 3 seconds&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;       &lt;span class="c1"&gt;# Consider not ready after 1 failure&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/nginx/nginx.conf&lt;/span&gt;
          &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.conf&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
        &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
          &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.conf&lt;/span&gt;
              &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-probes.yaml
configmap/nginx-conf created
deployment.apps/my-nginx-deployment created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx-deployment   3/3     3            3           12s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
my-nginx-deployment-55bc8948d6   3         3         3       52s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
my-nginx-deployment-55bc8948d6-4lhdd   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-mz5tx   1/1     Running   0          64s
my-nginx-deployment-55bc8948d6-nfkkx   1/1     Running   0          64s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-probes.yaml
configmap &lt;span class="s2"&gt;"nginx-conf"&lt;/span&gt; deleted
deployment.apps &lt;span class="s2"&gt;"my-nginx-deployment"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
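

&lt;p&gt;While the pods are running, one way to check the probe endpoints defined in the &lt;code&gt;ConfigMap&lt;/code&gt; is to port-forward and curl them locally (a sketch; local port 8080 is an arbitrary choice, and &lt;code&gt;port-forward&lt;/code&gt; blocks, so run it in a separate terminal):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Forward local port 8080 to port 80 of one pod of the Deployment
$ kubectl port-forward deployment/my-nginx-deployment 8080:80

# Both endpoints should answer 200 OK per the nginx location block
$ curl -i http://localhost:8080/healthz
$ curl -i http://localhost:8080/readyz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;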






&lt;h2&gt;
  
  
  StatefulSet
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; is a resource used to manage stateful applications by providing stable, unique network identifiers, persistent storage, and ordered, graceful deployment and scaling for pods. They are ideal for applications like databases that require each replica to have a predictable identity and persistent storage, unlike stateless applications managed by &lt;code&gt;Deployments&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
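
&lt;p&gt;The example below needs a &lt;code&gt;StorageClass&lt;/code&gt;. Its manifest is not shown here, so this is a sketch of what &lt;code&gt;storageclass-local-path.yaml&lt;/code&gt; could contain, reconstructed from the &lt;code&gt;kubectl get sc&lt;/code&gt; output that follows (provisioner, reclaim policy, and binding mode):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# storageclass-local-path.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
provisioner: rancher.io/local-path   # same provisioner as the default kind class
reclaimPolicy: Delete
volumeBindingMode: Immediate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;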

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard &lt;span class="o"&gt;(&lt;/span&gt;default&lt;span class="o"&gt;)&lt;/span&gt;   rancher.io/local-path   Delete          WaitForFirstConsumer   &lt;span class="nb"&gt;false                  &lt;/span&gt;115d
storageclass-redis   rancher.io/local-path   Delete          Immediate              &lt;span class="nb"&gt;false                  &lt;/span&gt;43s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# statefulset.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StatefulSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;
  &lt;span class="na"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ReadWriteOnce"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.5Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
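

&lt;p&gt;The &lt;code&gt;serviceName: redis&lt;/code&gt; field refers to a headless &lt;code&gt;Service&lt;/code&gt; that gives each pod a stable DNS name such as &lt;code&gt;redis-0.redis&lt;/code&gt;. It is not created by the commands here, so the manifest below is an assumed sketch of what it would look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# service-redis-headless.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None        # headless: no virtual IP, DNS resolves to the pod IPs
  selector:
    app: redis
  ports:
  - port: 6379
    name: redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;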





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; statefulset.yaml
statefulset.apps/redis created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 3m31s
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 2m36s
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 98s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          3m52s
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          2m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          2m57s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get statefulset/redis
NAME    READY   AGE
redis   3/3     4m19s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4m55s
redis-1   1/1     Running   0          4m
redis-2   1/1     Running   0          3m2s

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;0..2&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"redis-&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
&lt;/span&gt;redis-0
redis-1
redis-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pod/redis-0
pod &lt;span class="s2"&gt;"redis-0"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          4s
redis-1   1/1     Running   0          13m
redis-2   1/1     Running   0          12m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl scale statefulset/redis &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4
statefulset.apps/redis scaled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m53s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   0/1     Pending   0          2s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m56s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          5s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
redis-0   1/1     Running             0          2m58s
redis-1   1/1     Running             0          16m
redis-2   1/1     Running             0          15m
redis-3   0/1     ContainerCreating   0          7s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
redis-0   1/1     Running   0          2m59s
redis-1   1/1     Running   0          16m
redis-2   1/1     Running   0          15m
redis-3   1/1     Running   0          8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete statefulset/redis
statefulset.apps &lt;span class="s2"&gt;"redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get statefulsets
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            Delete           Bound    default/data-redis-3   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          4m10s
pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            Delete           Bound    default/data-redis-0   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          21m
pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            Delete           Bound    default/data-redis-2   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          19m
pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            Delete           Bound    default/data-redis-1   standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                          20m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-redis-0   Bound    pvc-948cc23d-c7a8-4caf-9206-1b52c8f31f75   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 22m
data-redis-1   Bound    pvc-f9ff0efa-dd70-44c6-8601-72f17430f848   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 21m
data-redis-2   Bound    pvc-b2079d26-8db5-4855-b81b-6f1c78aeab0e   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 20m
data-redis-3   Bound    pvc-1b2cab34-7cce-4583-8cd3-3e7fce32f72c   512Mi      RWO            standard       &amp;lt;&lt;span class="nb"&gt;unset&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                 4m55s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io &lt;span class="s2"&gt;"storageclass-redis"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-redis-&lt;span class="o"&gt;{&lt;/span&gt;0,1,2,3&lt;span class="o"&gt;}&lt;/span&gt;
persistentvolumeclaim &lt;span class="s2"&gt;"data-redis-0"&lt;/span&gt; deleted
persistentvolumeclaim &lt;span class="s2"&gt;"data-redis-1"&lt;/span&gt; deleted
persistentvolumeclaim &lt;span class="s2"&gt;"data-redis-2"&lt;/span&gt; deleted
persistentvolumeclaim &lt;span class="s2"&gt;"data-redis-3"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pvc
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
No resources found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ServiceAccount
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/security/service-accounts/" rel="noopener noreferrer"&gt;ServiceAccount&lt;/a&gt; provides an identity for processes and applications running within a Kubernetes cluster. &lt;code&gt;ServiceAccounts&lt;/code&gt; are designed for non-human entities like Pods, system components, or external tools that need to interact with the Kubernetes API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Default ServiceAccount
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sa
NAME      SECRETS   AGE
default   0         116d

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod/kubia &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.serviceAccount}'&lt;/span&gt;
default

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml
pod &lt;span class="s2"&gt;"kubia"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a ServiceAccount
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create sa my-sa
serviceaccount/my-sa created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sa
NAME      SECRETS   AGE
default   0         116d
my-sa     0         7s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sa/my-sa &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-11-10T10:27:44Z"&lt;/span&gt;
  name: my-sa
  namespace: default
  resourceVersion: &lt;span class="s2"&gt;"7002078"&lt;/span&gt;
  uid: 487bd1fa-353a-420e-be95-6ee876a277f5

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              &amp;lt;none&amp;gt;
Annotations:         &amp;lt;none&amp;gt;
Image pull secrets:  &amp;lt;none&amp;gt;
Mountable secrets:   &amp;lt;none&amp;gt;
Tokens:              &amp;lt;none&amp;gt;
Events:              &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
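

&lt;p&gt;To run a pod under the new account instead of &lt;code&gt;default&lt;/code&gt;, set &lt;code&gt;spec.serviceAccountName&lt;/code&gt;. A minimal pod manifest (the pod name is hypothetical, for illustration only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: sa-demo                # hypothetical pod name
spec:
  serviceAccountName: my-sa    # attach the ServiceAccount created above
  containers:
  - name: main
    image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;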



&lt;h3&gt;
  
  
  Associate a Secret with a ServiceAccount
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# secret-sa-token.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/service-account-token&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-sa-token&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/service-account.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-sa&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret-sa-token.yaml
secret/my-sa-token created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secrets
NAME          TYPE                                  DATA   AGE
my-sa-token   kubernetes.io/service-account-token   3      24s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe secret/my-sa-token
Name:         my-sa-token
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  kubernetes.io/service-account.name: my-sa
              kubernetes.io/service-account.uid: 487bd1fa-353a-420e-be95-6ee876a277f5

Type:  kubernetes.io/service-account-token

Data
&lt;span class="o"&gt;====&lt;/span&gt;
ca.crt:     1107 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret/my-sa-token &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.token}'&lt;/span&gt;  | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...HLmhPxTcMYPc3WNUWIS4t_8E3556087H4f1e-13y8B_dUYYzh-B7NJuOIOp31_eiAxhYzaQYGw

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe sa/my-sa
Name:                my-sa
Namespace:           default
Labels:              &amp;lt;none&amp;gt;
Annotations:         &amp;lt;none&amp;gt;
Image pull secrets:  &amp;lt;none&amp;gt;
Mountable secrets:   &amp;lt;none&amp;gt;
Tokens:              my-sa-token
Events:              &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Assign a ServiceAccount to a Pod
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-sa.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-sa&lt;/span&gt;
  &lt;span class="na"&gt;automountServiceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine/curl&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9999999"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-sa.yaml
pod/curl created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME   READY   STATUS    RESTARTS   AGE
curl   1/1     Running   0          4s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod/curl &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.serviceAccount}'&lt;/span&gt;
my-sa

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/curl &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;cat&lt;/span&gt; /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxiTHE0d29pbFBiUXpXNkI1bWxoMHFQNVZCa2o1cFl1c3...9Wd5ONTHu2VyrTfM6u1FAxC72hKWK0_5zpNg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/curl &lt;span class="nt"&gt;--&lt;/span&gt; sh
/ &lt;span class="c"&gt;# NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)&lt;/span&gt;
/ &lt;span class="c"&gt;# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&lt;/span&gt;
/ &lt;span class="c"&gt;# export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt&lt;/span&gt;
/ &lt;span class="c"&gt;# curl -s https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$NS/pods -H "Authorization: Bearer $TOKEN"&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"Status"&lt;/span&gt;,
  &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"v1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
  &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"Failure"&lt;/span&gt;,
  &lt;span class="s2"&gt;"message"&lt;/span&gt;: &lt;span class="s2"&gt;"pods is forbidden: User &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;system:serviceaccount:default:my-sa&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; cannot list resource &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;pods&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; in API group &lt;/span&gt;&lt;span class="se"&gt;\"\"&lt;/span&gt;&lt;span class="s2"&gt; in the namespace &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;default&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;,
  &lt;span class="s2"&gt;"reason"&lt;/span&gt;: &lt;span class="s2"&gt;"Forbidden"&lt;/span&gt;,
  &lt;span class="s2"&gt;"details"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"pods"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"code"&lt;/span&gt;: 403
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  RBAC
&lt;/h2&gt;

&lt;p&gt;Role-Based Access Control (&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;RBAC&lt;/a&gt;) is an authorization model that grants permissions based on the roles assigned to subjects (users, groups, or service accounts) rather than to individual identities. In Kubernetes, RBAC is configured through four API objects: &lt;code&gt;Role&lt;/code&gt;, &lt;code&gt;ClusterRole&lt;/code&gt;, &lt;code&gt;RoleBinding&lt;/code&gt;, and &lt;code&gt;ClusterRoleBinding&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Role
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rbac-role.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rbac-role.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; default
role.rbac.authorization.k8s.io/pod-reader created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get role &lt;span class="nt"&gt;-n&lt;/span&gt; default
NAME         CREATED AT
pod-reader   2025-11-11T09:32:20Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RoleBinding
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rbac-rolebinding.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read-pods-binding&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-sa&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod-reader&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rbac-rolebinding.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; default
rolebinding.rbac.authorization.k8s.io/read-pods-binding created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rolebindings &lt;span class="nt"&gt;-n&lt;/span&gt; default
NAME                ROLE              AGE
read-pods-binding   Role/pod-reader   35s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
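
&lt;p&gt;As a quick check, &lt;code&gt;kubectl auth can-i&lt;/code&gt; can impersonate the &lt;code&gt;ServiceAccount&lt;/code&gt; to confirm what the binding allows (this assumes the &lt;code&gt;Role&lt;/code&gt; and &lt;code&gt;RoleBinding&lt;/code&gt; above have been applied):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl auth can-i list pods --as=system:serviceaccount:default:my-sa -n default
yes

$ kubectl auth can-i delete pods --as=system:serviceaccount:default:my-sa -n default
no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;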



&lt;h4&gt;
  
  
  Validate ServiceAccount access
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/curl &lt;span class="nt"&gt;--&lt;/span&gt; sh
/ &lt;span class="c"&gt;# NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)&lt;/span&gt;
/ &lt;span class="c"&gt;# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&lt;/span&gt;
/ &lt;span class="c"&gt;# export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt&lt;/span&gt;
/ &lt;span class="c"&gt;# curl -s https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$NS/pods -H "Authorization: Bearer $TOKEN"&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"PodList"&lt;/span&gt;,
  &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"v1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"resourceVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"7148388"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"items"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"curl"&lt;/span&gt;,
        &lt;span class="s2"&gt;"namespace"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;,
        &lt;span class="s2"&gt;"uid"&lt;/span&gt;: &lt;span class="s2"&gt;"4fc1f8e7-9884-42c3-941e-9e27df563592"&lt;/span&gt;,
        &lt;span class="s2"&gt;"resourceVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"7007001"&lt;/span&gt;,
        &lt;span class="s2"&gt;"generation"&lt;/span&gt;: 1,
        &lt;span class="s2"&gt;"creationTimestamp"&lt;/span&gt;: &lt;span class="s2"&gt;"2025-11-10T11:13:48Z"&lt;/span&gt;,
...
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; rbac-rolebinding.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; default
rolebinding.rbac.authorization.k8s.io &lt;span class="s2"&gt;"read-pods-binding"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rolebindings &lt;span class="nt"&gt;-n&lt;/span&gt; default
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete role/pod-reader
role.rbac.authorization.k8s.io &lt;span class="s2"&gt;"pod-reader"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get roles
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ClusterRole
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rbac-clusterrole.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pv-reader&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;persistentvolumes"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rbac-clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/pv-reader created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe clusterrole/pv-reader
Name:         pv-reader
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
PolicyRule:
  Resources          Non-Resource URLs  Resource Names  Verbs
  &lt;span class="nt"&gt;---------&lt;/span&gt;          &lt;span class="nt"&gt;-----------------&lt;/span&gt;  &lt;span class="nt"&gt;--------------&lt;/span&gt;  &lt;span class="nt"&gt;-----&lt;/span&gt;
  persistentvolumes  &lt;span class="o"&gt;[]&lt;/span&gt;                 &lt;span class="o"&gt;[]&lt;/span&gt;              &lt;span class="o"&gt;[&lt;/span&gt;get list]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ClusterRoleBinding
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rbac-clusterrolebinding.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read-pv-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-sa&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pv-reader&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rbac-clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/read-pv-binding created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe clusterrolebinding/read-pv-binding
Name:         read-pv-binding
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
Role:
  Kind:  ClusterRole
  Name:  pv-reader
Subjects:
  Kind            Name   Namespace
  &lt;span class="nt"&gt;----&lt;/span&gt;            &lt;span class="nt"&gt;----&lt;/span&gt;   &lt;span class="nt"&gt;---------&lt;/span&gt;
  ServiceAccount  my-sa  default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
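
&lt;p&gt;Before querying the API from inside the pod, you can confirm the cluster-scoped permission with &lt;code&gt;kubectl auth can-i&lt;/code&gt;, impersonating the &lt;code&gt;ServiceAccount&lt;/code&gt; (this assumes the &lt;code&gt;ClusterRole&lt;/code&gt; and &lt;code&gt;ClusterRoleBinding&lt;/code&gt; above have been applied):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:my-sa
yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;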



&lt;h4&gt;
  
  
  Validate ServiceAccount access
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pod/curl &lt;span class="nt"&gt;--&lt;/span&gt; sh
/ &lt;span class="c"&gt;# NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)&lt;/span&gt;
/ &lt;span class="c"&gt;# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&lt;/span&gt;
/ &lt;span class="c"&gt;# export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt&lt;/span&gt;
/ &lt;span class="c"&gt;# curl -s https://kubernetes.default.svc.cluster.local/api/v1/persistentvolumes -H "Authorization: Bearer $TOKEN"&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"PersistentVolumeList"&lt;/span&gt;,
  &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"v1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"resourceVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"7162702"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"items"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete sa/my-sa
serviceaccount &lt;span class="s2"&gt;"my-sa"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete clusterrolebinding/read-pv-binding
clusterrolebinding.rbac.authorization.k8s.io &lt;span class="s2"&gt;"read-pv-binding"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete clusterrole/pv-reader
clusterrole.rbac.authorization.k8s.io &lt;span class="s2"&gt;"pv-reader"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete po/curl
pod &lt;span class="s2"&gt;"curl"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Pod Security
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-admission/" rel="noopener noreferrer"&gt;Pod Security Admission&lt;/a&gt; (PSA) controller enforces the &lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" rel="noopener noreferrer"&gt;Pod Security Standards&lt;/a&gt; (PSS) at the namespace level.&lt;/p&gt;

&lt;p&gt;Kubernetes defines a set of namespace labels that select which of the predefined &lt;code&gt;PSS&lt;/code&gt; levels applies to the Pods in a given namespace.&lt;/p&gt;

&lt;p&gt;The per-mode level label indicates which policy level (&lt;code&gt;privileged&lt;/code&gt;, &lt;code&gt;baseline&lt;/code&gt;, or &lt;code&gt;restricted&lt;/code&gt;) to apply for the mode (&lt;code&gt;enforce&lt;/code&gt;, &lt;code&gt;audit&lt;/code&gt;, or &lt;code&gt;warn&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pod-security.kubernetes.io/&amp;lt;mode&amp;gt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;level&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
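
&lt;p&gt;The same labels can also be applied to an existing namespace with &lt;code&gt;kubectl label&lt;/code&gt; (the namespace name &lt;code&gt;my-ns&lt;/code&gt; here is just a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl label --overwrite ns my-ns pod-security.kubernetes.io/warn=restricted
namespace/my-ns labeled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;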



&lt;h3&gt;
  
  
  Restricted level with warn mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ns-psa-warn-restricted.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-warn-restricted&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pod-security.kubernetes.io/warn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restricted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-warn-restricted.yaml
namespace/psa-warn-restricted created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-warn-restricted.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-warn-restricted&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.35.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-warn-restricted.yaml
Warning: would violate PodSecurity &lt;span class="s2"&gt;"restricted:latest"&lt;/span&gt;: allowPrivilegeEscalation &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.allowPrivilegeEscalation&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, unrestricted capabilities &lt;span class="o"&gt;(&lt;/span&gt;container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.capabilities.drop&lt;span class="o"&gt;=[&lt;/span&gt;&lt;span class="s2"&gt;"ALL"&lt;/span&gt;&lt;span class="o"&gt;])&lt;/span&gt;, runAsNonRoot &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;pod or container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.runAsNonRoot&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, seccompProfile &lt;span class="o"&gt;(&lt;/span&gt;pod or container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.seccompProfile.type to &lt;span class="s2"&gt;"RuntimeDefault"&lt;/span&gt; or &lt;span class="s2"&gt;"Localhost"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
pod/busybox created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; psa-warn-restricted
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns/psa-warn-restricted
namespace &lt;span class="s2"&gt;"psa-warn-restricted"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; psa-warn-restricted
No resources found &lt;span class="k"&gt;in &lt;/span&gt;psa-warn-restricted namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Restricted level with enforce mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ns-psa-enforce-restricted.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-restricted&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pod-security.kubernetes.io/enforce&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restricted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-enforce-restricted.yaml
namespace/psa-enforce-restricted created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-enforce-restricted.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-restricted&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.35.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-enforce-restricted.yaml
Error from server &lt;span class="o"&gt;(&lt;/span&gt;Forbidden&lt;span class="o"&gt;)&lt;/span&gt;: error when creating &lt;span class="s2"&gt;"pod-enforce-restricted.yaml"&lt;/span&gt;: pods &lt;span class="s2"&gt;"busybox"&lt;/span&gt; is forbidden: violates PodSecurity &lt;span class="s2"&gt;"restricted:latest"&lt;/span&gt;: allowPrivilegeEscalation &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.allowPrivilegeEscalation&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, unrestricted capabilities &lt;span class="o"&gt;(&lt;/span&gt;container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.capabilities.drop&lt;span class="o"&gt;=[&lt;/span&gt;&lt;span class="s2"&gt;"ALL"&lt;/span&gt;&lt;span class="o"&gt;])&lt;/span&gt;, runAsNonRoot &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;pod or container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.runAsNonRoot&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, seccompProfile &lt;span class="o"&gt;(&lt;/span&gt;pod or container &lt;span class="s2"&gt;"busybox"&lt;/span&gt; must &lt;span class="nb"&gt;set &lt;/span&gt;securityContext.seccompProfile.type to &lt;span class="s2"&gt;"RuntimeDefault"&lt;/span&gt; or &lt;span class="s2"&gt;"Localhost"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-enforce-restricted-v2.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-restricted&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.35.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;allowPrivilegeEscalation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;drop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ALL"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;runAsNonRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;runAsUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000&lt;/span&gt;
      &lt;span class="na"&gt;runAsGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
      &lt;span class="na"&gt;seccompProfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RuntimeDefault&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-enforce-restricted-v2.yaml
pod/busybox created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; psa-enforce-restricted
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          71s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns/psa-enforce-restricted
namespace &lt;span class="s2"&gt;"psa-enforce-restricted"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; psa-enforce-restricted
No resources found &lt;span class="k"&gt;in &lt;/span&gt;psa-enforce-restricted namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Baseline level with enforce mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ns-psa-enforce-baseline.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-baseline&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pod-security.kubernetes.io/enforce&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;baseline&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-enforce-baseline.yaml
namespace/psa-enforce-baseline created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-enforce-baseline.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-baseline&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hostNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.35.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-enforce-baseline.yaml
Error from server &lt;span class="o"&gt;(&lt;/span&gt;Forbidden&lt;span class="o"&gt;)&lt;/span&gt;: error when creating &lt;span class="s2"&gt;"pod-enforce-baseline.yaml"&lt;/span&gt;: pods &lt;span class="s2"&gt;"busybox"&lt;/span&gt; is forbidden: violates PodSecurity &lt;span class="s2"&gt;"baseline:latest"&lt;/span&gt;: host namespaces &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;hostNetwork&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Removing &lt;code&gt;hostNetwork: true&lt;/code&gt; is enough to satisfy the baseline level, which rejects host namespaces and similarly privileged features but does not require the explicit &lt;code&gt;securityContext&lt;/code&gt; hardening that restricted does. A sketch of a compliant manifest (a hypothetical &lt;code&gt;pod-enforce-baseline-v2.yaml&lt;/code&gt;, not part of the original walkthrough):&lt;/p&gt;
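```yaml
# pod-enforce-baseline-v2.yaml (hypothetical follow-up; same pod with hostNetwork dropped)

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: psa-enforce-baseline
spec:
  containers:
  - image: busybox:1.35.0
    name: busybox
    command: ["sh", "-c", "sleep 1h"]
```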





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-enforce-baseline.yaml
namespace &lt;span class="s2"&gt;"psa-enforce-baseline"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Privileged level with enforce mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ns-psa-enforce-privileged.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-privileged&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pod-security.kubernetes.io/enforce&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;privileged&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-enforce-privileged.yaml
namespace/psa-enforce-privileged created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-enforce-privileged.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;psa-enforce-privileged&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hostNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;hostPID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;hostIPC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runAsUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.35.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1h"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-enforce-privileged.yaml
pod/busybox created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-ti&lt;/span&gt; pod/busybox &lt;span class="nt"&gt;-n&lt;/span&gt; psa-enforce-privileged &lt;span class="nt"&gt;--&lt;/span&gt; ps
PID   USER     TIME  COMMAND
    1 root      0:54 &lt;span class="o"&gt;{&lt;/span&gt;systemd&lt;span class="o"&gt;}&lt;/span&gt; /sbin/init
  116 root      1h50 /usr/local/bin/containerd
  185 root      5h06 /usr/bin/kubelet &lt;span class="nt"&gt;--bootstrap-kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/bootstrap-kubelet.conf &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/kubelet.conf &lt;span class="nt"&gt;--config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/kubelet/config.yaml &lt;span class="nt"&gt;--container-runtime-endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;unix:///run/containerd/containerd.sock &lt;span class="nt"&gt;--node-ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.18.0.3 &lt;span class="nt"&gt;--node-labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nt"&gt;--pod-infra-container-image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;registry.k8s.io/pause:3.10 &lt;span class="nt"&gt;--provider-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind://docker/kind/kind-worker3 &lt;span class="nt"&gt;--runtime-cgroups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/system.slice/containerd.service
  358 root      5:49 /usr/local/bin/containerd-shim-runc-v2 &lt;span class="nt"&gt;-namespace&lt;/span&gt; k8s.io &lt;span class="nt"&gt;-id&lt;/span&gt; abfaf0f18c5ce3d14ba5236c34ed8048486151649c0183c0d5228240a64cdc39 &lt;span class="nt"&gt;-address&lt;/span&gt; /run/containerd/containerd.sock
  436 root      4:56 /usr/local/bin/containerd-shim-runc-v2 &lt;span class="nt"&gt;-namespace&lt;/span&gt; k8s.io &lt;span class="nt"&gt;-id&lt;/span&gt; 0c430d300ff99ed691cc7daca4b6276b41621c19739462c7fe1275abf8cd4f93 &lt;span class="nt"&gt;-address&lt;/span&gt; /run/containerd/containerd.sock
  494 65535     0:00 /pause
  554 65535     0:00 /pause
  665 root      5:38 /usr/local/bin/kube-proxy &lt;span class="nt"&gt;--config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/kube-proxy/config.conf &lt;span class="nt"&gt;--hostname-override&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind-worker3
  698 root      8:15 /bin/kindnetd
151332 root      0:01 /lib/systemd/systemd-journald
336224 root      0:00 /usr/local/bin/containerd-shim-runc-v2 &lt;span class="nt"&gt;-namespace&lt;/span&gt; k8s.io &lt;span class="nt"&gt;-id&lt;/span&gt; 6b7c911e749486aca30a8e963ee6d4781391f73b63490e98cbdbb537fe4b538b &lt;span class="nt"&gt;-address&lt;/span&gt; /run/containerd/containerd.sock
336248 root      0:00 /pause
336274 root      0:00 &lt;span class="nb"&gt;sleep &lt;/span&gt;1h
336627 root      0:00 ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; ns-psa-enforce-privileged.yaml
namespace &lt;span class="s2"&gt;"psa-enforce-privileged"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
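&lt;p&gt;Before enforcing a stricter level on a namespace that already has workloads, you can ask the admission controller to evaluate the running pods with a server-side dry run. This sketch assumes a live cluster and a namespace named &lt;code&gt;default&lt;/code&gt;; nothing is changed because of &lt;code&gt;--dry-run=server&lt;/code&gt;:&lt;/p&gt;

```shell
# Preview which existing pods would violate the restricted level,
# without actually relabeling the namespace
$ kubectl label --dry-run=server --overwrite ns default \
    pod-security.kubernetes.io/enforce=restricted
```

&lt;p&gt;Any violations are printed as warnings, so you can fix workloads before switching the label for real.&lt;/p&gt;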






&lt;h2&gt;
  
  
  NetworkPolicy
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Network Policies&lt;/a&gt; control traffic flow between pods, namespaces, and external endpoints at the IP address and port level, serving a role analogous to firewall rules on traditional network devices such as switches and routers.&lt;/p&gt;

&lt;p&gt;By default, &lt;code&gt;kind&lt;/code&gt; does not support &lt;a href="https://github.com/kubernetes-sigs/kind/issues/842" rel="noopener noreferrer"&gt;NetworkPolicy&lt;/a&gt;, because it ships with a simple networking implementation, &lt;code&gt;kindnetd&lt;/code&gt;, as its &lt;a href="https://kind.sigs.k8s.io/docs/user/configuration/#disable-default-cni" rel="noopener noreferrer"&gt;default CNI plugin&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install a CNI Networking Plugin
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Recreating a cluster
&lt;/h4&gt;

&lt;p&gt;Therefore, it is necessary to recreate the &lt;code&gt;kind&lt;/code&gt; cluster with the default CNI disabled and install the &lt;a href="https://github.com/projectcalico/calico" rel="noopener noreferrer"&gt;Calico&lt;/a&gt; CNI plugin instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind delete cluster
Deleting cluster &lt;span class="s2"&gt;"kind"&lt;/span&gt; ...
Deleted nodes: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kind-worker"&lt;/span&gt; &lt;span class="s2"&gt;"kind-control-plane"&lt;/span&gt; &lt;span class="s2"&gt;"kind-worker3"&lt;/span&gt; &lt;span class="s2"&gt;"kind-worker2"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kind-cluster-cni.yaml&lt;/span&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;networking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;disableDefaultCNI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;podSubnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.0.0/16&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-cluster-cni.yaml &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kind-calico
Creating cluster &lt;span class="s2"&gt;"kind-calico"&lt;/span&gt; ...
 • Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼  ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.33.1&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to &lt;span class="s2"&gt;"kind-kind-calico"&lt;/span&gt;
You can now use your cluster with:

kubectl cluster-info &lt;span class="nt"&gt;--context&lt;/span&gt; kind-kind-calico

Thanks &lt;span class="k"&gt;for &lt;/span&gt;using kind! 😊
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no
NAME                        STATUS     ROLES           AGE     VERSION
kind-calico-control-plane   NotReady   control-plane   3m43s   v1.33.1
kind-calico-worker          NotReady   &amp;lt;none&amp;gt;          3m30s   v1.33.1
kind-calico-worker2         NotReady   &amp;lt;none&amp;gt;          3m30s   v1.33.1
kind-calico-worker3         NotReady   &amp;lt;none&amp;gt;          3m30s   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Installing the Calico plugin
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.28.3/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; k8s-app&lt;span class="o"&gt;=&lt;/span&gt;calico-node &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAMESPACE     NAME                READY   STATUS     RESTARTS   AGE
kube-system   calico-node-c264z   0/1     Init:2/3   0          61s
kube-system   calico-node-d98t7   0/1     Init:2/3   0          61s
kube-system   calico-node-sps7w   0/1     Init:2/3   0          61s
kube-system   calico-node-ssd8q   0/1     Init:2/3   0          61s
kube-system   calico-node-ssd8q   0/1     PodInitializing   0          89s
kube-system   calico-node-d98t7   0/1     PodInitializing   0          89s
kube-system   calico-node-c264z   0/1     Init:2/3          0          89s
kube-system   calico-node-sps7w   0/1     PodInitializing   0          90s
kube-system   calico-node-ssd8q   0/1     Running           0          90s
kube-system   calico-node-d98t7   0/1     Running           0          90s
kube-system   calico-node-c264z   0/1     PodInitializing   0          90s
kube-system   calico-node-sps7w   0/1     Running           0          91s
kube-system   calico-node-c264z   0/1     Running           0          91s
kube-system   calico-node-d98t7   1/1     Running           0          101s
kube-system   calico-node-ssd8q   1/1     Running           0          102s
kube-system   calico-node-c264z   1/1     Running           0          102s
kube-system   calico-node-sps7w   1/1     Running           0          103s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   4m28s   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          4m15s   v1.33.1
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          4m15s   v1.33.1
kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          4m15s   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Setting up a NetworkPolicy
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ns-foo-bar.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tenant&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tenant&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-foo-bar.yaml
namespace/foo created
namespace/bar created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-ns-foo-bar.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-foo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-foo&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-bar&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-bar&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-ns-foo-bar.yaml
pod/nginx-foo created
pod/nginx-bar created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Ingress
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; bar &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME        READY   STATUS    RESTARTS   AGE     IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-bar   1/1     Running   0          2m16s   192.168.52.65   kind-calico-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;pod/nginx-foo &lt;span class="nt"&gt;-n&lt;/span&gt; foo &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://192.168.52.65:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
Date: Sat, 15 Nov 2025 12:34:19 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Oct 2025 12:05:10 GMT
Connection: keep-alive
ETag: &lt;span class="s2"&gt;"6900b176-267"&lt;/span&gt;
Accept-Ranges: bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# networkpolicy-ingress.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deny-all&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; networkpolicy-ingress.yaml
networkpolicy.networking.k8s.io/deny-all created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get networkpolicy &lt;span class="nt"&gt;-n&lt;/span&gt; bar
NAME       POD-SELECTOR   AGE
deny-all   &amp;lt;none&amp;gt;         41s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;pod/nginx-foo &lt;span class="nt"&gt;-n&lt;/span&gt; foo &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-svI&lt;/span&gt; http://192.168.52.65:80
&lt;span class="k"&gt;*&lt;/span&gt;   Trying 192.168.52.65:80...
&lt;span class="k"&gt;*&lt;/span&gt; connect to 192.168.52.65 port 80 from 192.168.28.193 port 43618 failed: Connection timed out
&lt;span class="k"&gt;*&lt;/span&gt; Failed to connect to 192.168.52.65 port 80 after 135435 ms: Could not connect to server
&lt;span class="k"&gt;*&lt;/span&gt; closing connection &lt;span class="c"&gt;#0&lt;/span&gt;
&lt;span class="nb"&gt;command &lt;/span&gt;terminated with &lt;span class="nb"&gt;exit &lt;/span&gt;code 28
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Egress
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; foo &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE                  NOMINATED NODE   READINESS GATES
nginx-foo   1/1     Running   0          15m   192.168.28.193   kind-calico-worker3   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;pod/nginx-bar &lt;span class="nt"&gt;-n&lt;/span&gt; bar &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-sI&lt;/span&gt; http://192.168.28.193:80
HTTP/1.1 200 OK
Server: nginx/1.29.3
Date: Sat, 15 Nov 2025 12:46:20 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Oct 2025 12:05:10 GMT
Connection: keep-alive
ETag: &lt;span class="s2"&gt;"6900b176-267"&lt;/span&gt;
Accept-Ranges: bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# networkpolicy-egress.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deny-pod-bar-egress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Egress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; networkpolicy-egress.yaml
networkpolicy.networking.k8s.io/deny-pod-bar-egress created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get networkpolicy &lt;span class="nt"&gt;-n&lt;/span&gt; bar
NAME                  POD-SELECTOR   AGE
deny-all              &amp;lt;none&amp;gt;         18m
deny-pod-bar-egress   &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar       8s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;pod/nginx-bar &lt;span class="nt"&gt;-n&lt;/span&gt; bar &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-svI&lt;/span&gt; http://192.168.28.193:80
&lt;span class="k"&gt;*&lt;/span&gt;   Trying 192.168.28.193:80...
&lt;span class="k"&gt;*&lt;/span&gt; connect to 192.168.28.193 port 80 from 192.168.52.65 port 58978 failed: Connection timed out
&lt;span class="k"&gt;*&lt;/span&gt; Failed to connect to 192.168.28.193 port 80 after 135323 ms: Could not connect to server
&lt;span class="k"&gt;*&lt;/span&gt; closing connection &lt;span class="c"&gt;#0&lt;/span&gt;
&lt;span class="nb"&gt;command &lt;/span&gt;terminated with &lt;span class="nb"&gt;exit &lt;/span&gt;code 28
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-n&lt;/span&gt; bar networkpolicies deny-all deny-pod-bar-egress
networkpolicy.networking.k8s.io &lt;span class="s2"&gt;"deny-all"&lt;/span&gt; deleted
networkpolicy.networking.k8s.io &lt;span class="s2"&gt;"deny-pod-bar-egress"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns foo bar
namespace &lt;span class="s2"&gt;"foo"&lt;/span&gt; deleted
namespace &lt;span class="s2"&gt;"bar"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  LimitRange &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="noopener noreferrer"&gt;LimitRange&lt;/a&gt; is a policy defined within a specific namespace to constrain resource allocations for Pods and Containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ns-foo-bar.yaml
namespace/foo created
namespace/bar created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-resource-requests-limits.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200m&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Mi&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-resource-requests-limits.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; foo
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# limitrange.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LimitRange&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;resource-constraints&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Mi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200m&lt;/span&gt;      
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Container&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.5&lt;/span&gt;
    &lt;span class="na"&gt;defaultRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;800m&lt;/span&gt;
    &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Mi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; limitrange.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; bar
limitrange/resource-constraints created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get limitranges &lt;span class="nt"&gt;-n&lt;/span&gt; bar
NAME                   CREATED AT
resource-constraints   2025-11-17T23:06:33Z

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-resource-requests-limits.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; bar
Error from server &lt;span class="o"&gt;(&lt;/span&gt;Forbidden&lt;span class="o"&gt;)&lt;/span&gt;: error when creating &lt;span class="s2"&gt;"pod-resource-requests-limits.yaml"&lt;/span&gt;: pods &lt;span class="s2"&gt;"nginx"&lt;/span&gt; is forbidden: &lt;span class="o"&gt;[&lt;/span&gt;minimum memory usage per Pod is 50Mi, but request is 10Mi, minimum memory usage per Container is 50Mi, but request is 10Mi, maximum cpu usage per Container is 800m, but limit is 1, maximum memory usage per Container is 1Gi, but limit is 2Gi]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-basic.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luksa/kubia&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubia&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-basic.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; bar
pod/kubia created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod/kubia &lt;span class="nt"&gt;-n&lt;/span&gt; bar &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;.spec.containers[0].resources&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"limits"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"cpu"&lt;/span&gt;:&lt;span class="s2"&gt;"500m"&lt;/span&gt;,&lt;span class="s2"&gt;"memory"&lt;/span&gt;:&lt;span class="s2"&gt;"1Gi"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,&lt;span class="s2"&gt;"requests"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"cpu"&lt;/span&gt;:&lt;span class="s2"&gt;"200m"&lt;/span&gt;,&lt;span class="s2"&gt;"memory"&lt;/span&gt;:&lt;span class="s2"&gt;"1Gi"&lt;/span&gt;&lt;span class="o"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; ns-foo-bar.yaml
namespace &lt;span class="s2"&gt;"foo"&lt;/span&gt; deleted
namespace &lt;span class="s2"&gt;"bar"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ResourceQuota &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="noopener noreferrer"&gt;ResourceQuota&lt;/a&gt; is an object that allows cluster administrators to limit the aggregated consumption of compute resources (CPU, memory, storage) and the number of API objects within a specific namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace baz
namespace/baz created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# quota.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceQuota&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;compute-resources-quota&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
    &lt;span class="na"&gt;requests.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
    &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; quota.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; baz
resourcequota/compute-resources-quota created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe &lt;span class="nt"&gt;-n&lt;/span&gt; baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
&lt;span class="nt"&gt;--------&lt;/span&gt;         &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     3
requests.cpu     0     500m
requests.memory  0     1Gi
secrets          0     10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rs-resource-requests-limits.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Mi&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; rs-resource-requests-limits.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; baz
replicaset.apps/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe &lt;span class="nt"&gt;-n&lt;/span&gt; baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
&lt;span class="nt"&gt;--------&lt;/span&gt;         &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;
limits.cpu       1     2
limits.memory    2Gi   2Gi
pods             1     3
requests.cpu     200m  500m
requests.memory  10Mi  1Gi
secrets          0     10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl scale &lt;span class="nt"&gt;-n&lt;/span&gt; baz rs/nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt; 3
replicaset.apps/nginx scaled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe &lt;span class="nt"&gt;-n&lt;/span&gt; baz quota
Name:            compute-resources-quota
Namespace:       baz
Resource         Used  Hard
&lt;span class="nt"&gt;--------&lt;/span&gt;         &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;
limits.cpu       1     2
limits.memory    2Gi   2Gi
pods             1     3
requests.cpu     200m  500m
requests.memory  10Mi  1Gi
secrets          0     10

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get rs &lt;span class="nt"&gt;-n&lt;/span&gt; baz
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         1         1       1m52s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl events &lt;span class="nt"&gt;-n&lt;/span&gt; baz rs/nginx
LAST SEEN               TYPE      REASON             OBJECT             MESSAGE
111s                    Normal    Scheduled          Pod/nginx-rwqnq    Successfully assigned baz/nginx-rwqnq to kind-calico-worker2
111s                    Normal    SuccessfulCreate   ReplicaSet/nginx   Created pod: nginx-rwqnq
110s                    Normal    Pulling            Pod/nginx-rwqnq    Pulling image &lt;span class="s2"&gt;"nginx:latest"&lt;/span&gt;
109s                    Normal    Started            Pod/nginx-rwqnq    Started container nginx
109s                    Normal    Created            Pod/nginx-rwqnq    Created container: nginx
109s                    Normal    Pulled             Pod/nginx-rwqnq    Successfully pulled image &lt;span class="s2"&gt;"nginx:latest"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;1.257s &lt;span class="o"&gt;(&lt;/span&gt;1.257s including waiting&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Image size: 59774010 bytes.
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-rvzhm"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-64g4w"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-lgkm2"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-4776k"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-2trm4"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-c5t9z"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
27s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-8jkkk"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
26s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-44vv6"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
26s                     Warning   FailedCreate       ReplicaSet/nginx   Error creating: pods &lt;span class="s2"&gt;"nginx-4m8jm"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
7s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 24s&lt;span class="o"&gt;)&lt;/span&gt;        Warning   FailedCreate       ReplicaSet/nginx   &lt;span class="o"&gt;(&lt;/span&gt;combined from similar events&lt;span class="o"&gt;)&lt;/span&gt;: Error creating: pods &lt;span class="s2"&gt;"nginx-k65jl"&lt;/span&gt; is forbidden: exceeded quota: compute-resources-quota, requested: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, used: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi, limited: limits.memory&lt;span class="o"&gt;=&lt;/span&gt;2Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete ns/baz
namespace &lt;span class="s2"&gt;"baz"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  HorizontalPodAutoscaler
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;Horizontal Pod Autoscaler&lt;/a&gt; (HPA) is a component that automatically scales the number of &lt;code&gt;Pod&lt;/code&gt; replicas in a &lt;code&gt;Deployment&lt;/code&gt;, &lt;code&gt;ReplicaSet&lt;/code&gt;, or &lt;code&gt;StatefulSet&lt;/code&gt; based on observed metrics such as CPU or memory utilization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the Metrics Server
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/metrics-server/" rel="noopener noreferrer"&gt;Metrics Server&lt;/a&gt; is a scalable, efficient source of container resource metrics for built-in autoscaling pipelines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment metrics-server &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;json &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'&lt;/span&gt;
deployment.apps/metrics-server patched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get apiservices | &lt;span class="nb"&gt;grep &lt;/span&gt;metrics.k8s.io
v1beta1.metrics.k8s.io            kube-system/metrics-server   True        97s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl top node
NAME                        CPU&lt;span class="o"&gt;(&lt;/span&gt;cores&lt;span class="o"&gt;)&lt;/span&gt;   CPU%   MEMORY&lt;span class="o"&gt;(&lt;/span&gt;bytes&lt;span class="o"&gt;)&lt;/span&gt;   MEMORY%
kind-calico-control-plane   146m         3%     1102Mi          13%
kind-calico-worker          52m          1%     426Mi           5%
kind-calico-worker2         49m          1%     303Mi           3%
kind-calico-worker3         49m          1%     383Mi           4%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the HPA
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# nginx-service.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-service.yaml
deployment.apps/nginx created
service/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-54ff6b8849-lnsn6   1/1     Running   0          16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# hpa.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resource&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Utilization&lt;/span&gt;
          &lt;span class="na"&gt;averageUtilization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
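&lt;p&gt;The same policy can also be created imperatively; a sketch of the equivalent &lt;code&gt;kubectl autoscale&lt;/code&gt; command (note that, unlike the manifest, it can only target average CPU utilization):&lt;/p&gt;

```shell
# Imperative equivalent of hpa.yaml: scale the nginx Deployment
# between 1 and 5 replicas, targeting 30% average CPU utilization.
kubectl autoscale deployment nginx --cpu-percent=30 --min=1 --max=5
```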





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; hpa.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get hpa
NAME        REFERENCE          TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 0%/30%   1         5         1          16s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           83s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run &lt;span class="nt"&gt;-ti&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; load-generator &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alpine/curl &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--pod-running-timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10m &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"while true; do curl -sI http://nginx:80; done"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get hpa nginx-hpa &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME        REFERENCE          TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 0%/30%   1         5         1          62s
nginx-hpa   Deployment/nginx   cpu: 10%/30%   1         5         1          91s
nginx-hpa   Deployment/nginx   cpu: 53%/30%   1         5         1          106s
nginx-hpa   Deployment/nginx   cpu: 48%/30%   1         5         2          2m1s
nginx-hpa   Deployment/nginx   cpu: 29%/30%   1         5         2          2m16s
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          2m31s
nginx-hpa   Deployment/nginx   cpu: 24%/30%   1         5         2          3m1s
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          3m16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete po load-generator
pod &lt;span class="s2"&gt;"load-generator"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get hpa nginx-hpa &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME        REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   cpu: 25%/30%   1         5         2          4m3s
nginx-hpa   Deployment/nginx   cpu: 12%/30%   1         5         2          4m16s
nginx-hpa   Deployment/nginx   cpu: 8%/30%    1         5         2          4m31s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         2          4m46s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         2          9m1s
nginx-hpa   Deployment/nginx   cpu: 0%/30%    1         5         1          9m16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
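&lt;p&gt;The lag before the replica count drops back to 1 is the HPA's default scale-down stabilization window of 300 seconds. It can be tuned through the &lt;code&gt;behavior&lt;/code&gt; field of the &lt;code&gt;autoscaling/v2&lt;/code&gt; spec; a minimal sketch of an optional addition to &lt;code&gt;hpa.yaml&lt;/code&gt;:&lt;/p&gt;

```yaml
# Optional addition to the HPA spec: shorten the scale-down
# stabilization window from the default 300s to 60s.
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60
```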





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments/nginx
deployment.apps &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete hpa/nginx-hpa
horizontalpodautoscaler.autoscaling &lt;span class="s2"&gt;"nginx-hpa"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  PodDisruptionBudget
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets" rel="noopener noreferrer"&gt;PodDisruptionBudget&lt;/a&gt; ensures that a specified minimum number or percentage of pods remains available during voluntary disruptions such as node drains or rolling upgrades. In the example below, &lt;code&gt;minAvailable: 2&lt;/code&gt; against a single-replica deployment leaves zero allowed disruptions, so every eviction attempt is blocked.&lt;br&gt;
&lt;/p&gt;
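&lt;p&gt;A budget can alternatively be expressed with &lt;code&gt;maxUnavailable&lt;/code&gt;, which is often more convenient for deployments whose replica count changes; a sketch:&lt;/p&gt;

```yaml
# Alternative form: allow at most one pod matching app=nginx
# to be down during voluntary disruptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb-max
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
```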

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create deployment nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
deployment.apps/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deployments.apps nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pdb.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PodDisruptionBudget&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pdb&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;minAvailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pdb.yaml
poddisruptionbudget.policy/nginx-pdb created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pdb
NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
nginx-pdb   2               N/A               0                     29s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-5869d7778c-cftzp   1/1     Running   0          11m   192.168.52.77   kind-calico-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl drain kind-calico-worker2 &lt;span class="nt"&gt;--ignore-daemonsets&lt;/span&gt;
node/kind-calico-worker2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-9ghkn, kube-system/kube-proxy-zcrm6
evicting pod default/nginx-5869d7778c-cftzp
error when evicting pods/&lt;span class="s2"&gt;"nginx-5869d7778c-cftzp"&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;will retry after 5s&lt;span class="o"&gt;)&lt;/span&gt;: Cannot evict pod as it would violate the pod&lt;span class="s1"&gt;'s disruption budget.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no
NAME                        STATUS                     ROLES           AGE    VERSION
kind-calico-control-plane   Ready                      control-plane   116m   v1.33.1
kind-calico-worker          Ready                      &amp;lt;none&amp;gt;          115m   v1.33.1
kind-calico-worker2         Ready,SchedulingDisabled   &amp;lt;none&amp;gt;          115m   v1.33.1
kind-calico-worker3         Ready                      &amp;lt;none&amp;gt;          115m   v1.33.1

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
nginx-5869d7778c-cftzp   1/1     Running   0          16m   192.168.52.77   kind-calico-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl uncordon kind-calico-worker2
node/kind-calico-worker2 uncordoned

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no
NAME                        STATUS   ROLES           AGE    VERSION
kind-calico-control-plane   Ready    control-plane   116m   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          116m   v1.33.1
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          116m   v1.33.1
kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          116m   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pdb/nginx-pdb
poddisruptionbudget.policy &lt;span class="s2"&gt;"nginx-pdb"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments.apps/nginx
deployment.apps &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Taints and Tolerations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="noopener noreferrer"&gt;Taints and Tolerations&lt;/a&gt; are mechanisms that work together to control pod placement on nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Taints
&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;taint&lt;/code&gt; is applied to a node to indicate that the node should not accept certain pods. &lt;code&gt;Taints&lt;/code&gt; are key-value pairs with one of three effects: &lt;code&gt;NoSchedule&lt;/code&gt;, &lt;code&gt;PreferNoSchedule&lt;/code&gt;, or &lt;code&gt;NoExecute&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   4d12h   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          4d12h   v1.33.1
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          4d12h   v1.33.1
kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          4d12h   v1.33.1

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get node/kind-calico-control-plane &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.taints}'&lt;/span&gt;
&lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"effect"&lt;/span&gt;:&lt;span class="s2"&gt;"NoSchedule"&lt;/span&gt;,&lt;span class="s2"&gt;"key"&lt;/span&gt;:&lt;span class="s2"&gt;"node-role.kubernetes.io/control-plane"&lt;/span&gt;&lt;span class="o"&gt;}]&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get node/kind-calico-worker &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.taints}'&lt;/span&gt;
&lt;span class="err"&gt;$&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl taint node kind-calico-worker node-type&lt;span class="o"&gt;=&lt;/span&gt;production:NoSchedule
node/kind-calico-worker tainted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get node/kind-calico-worker &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.taints}'&lt;/span&gt;
&lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"effect"&lt;/span&gt;:&lt;span class="s2"&gt;"NoSchedule"&lt;/span&gt;,&lt;span class="s2"&gt;"key"&lt;/span&gt;:&lt;span class="s2"&gt;"node-type"&lt;/span&gt;,&lt;span class="s2"&gt;"value"&lt;/span&gt;:&lt;span class="s2"&gt;"production"&lt;/span&gt;&lt;span class="o"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create deploy nginx &lt;span class="nt"&gt;--image&lt;/span&gt; nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
deployment.apps/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-5869d7778c-cxngx   kind-calico-worker2   Running
nginx-5869d7778c-d8fm2   kind-calico-worker3   Running
nginx-5869d7778c-lpfjl   kind-calico-worker2   Running
nginx-5869d7778c-mmwhl   kind-calico-worker3   Running
nginx-5869d7778c-mqt2k   kind-calico-worker2   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tolerations
&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;toleration&lt;/code&gt; is applied to a pod and allows that pod to be scheduled on a node with a matching taint. It permits placement on the tainted node but does not require it, so tolerating pods may still land on other nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-tolerations.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-type&lt;/span&gt;
    &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Equal&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
    &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-tolerations.yaml
Warning: resource deployments/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/nginx configured

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6cd5747b88-75vfn   1/1     Running   0          3m40s
nginx-6cd5747b88-gr7r8   1/1     Running   0          3m40s
nginx-6cd5747b88-pqv6s   1/1     Running   0          3m37s
nginx-6cd5747b88-xzpjl   1/1     Running   0          3m36s
nginx-6cd5747b88-zzf7h   1/1     Running   0          3m39s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-6cd5747b88-75vfn   kind-calico-worker    Running
nginx-6cd5747b88-gr7r8   kind-calico-worker3   Running
nginx-6cd5747b88-pqv6s   kind-calico-worker    Running
nginx-6cd5747b88-xzpjl   kind-calico-worker3   Running
nginx-6cd5747b88-zzf7h   kind-calico-worker2   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl taint node kind-calico-worker2 node-type&lt;span class="o"&gt;=&lt;/span&gt;development:NoExecute
node/kind-calico-worker2 tainted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-6cd5747b88-6k6s2   kind-calico-worker3   Running
nginx-6cd5747b88-75vfn   kind-calico-worker    Running
nginx-6cd5747b88-gr7r8   kind-calico-worker3   Running
nginx-6cd5747b88-pqv6s   kind-calico-worker    Running
nginx-6cd5747b88-xzpjl   kind-calico-worker3   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments.apps nginx
deployment.apps &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl taint node kind-calico-worker2 node-type&lt;span class="o"&gt;=&lt;/span&gt;development:NoExecute-
node/kind-calico-worker2 untainted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl taint node kind-calico-worker node-type&lt;span class="o"&gt;=&lt;/span&gt;production:NoSchedule-
node/kind-calico-worker untainted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Affinity
&lt;/h2&gt;

&lt;p&gt;Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="noopener noreferrer"&gt;affinity&lt;/a&gt; gives expressive, flexible control over how pods are scheduled onto nodes within a cluster, and serves as a more advanced alternative to &lt;code&gt;nodeSelector&lt;/code&gt; for directing pod placement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Node affinity
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no
NAME                        STATUS   ROLES           AGE     VERSION
kind-calico-control-plane   Ready    control-plane   5d11h   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1
kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label node kind-calico-worker &lt;span class="nv"&gt;disktype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssd
node/kind-calico-worker labeled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no &lt;span class="nt"&gt;-L&lt;/span&gt; disktype
NAME                        STATUS   ROLES           AGE     VERSION   DISKTYPE
kind-calico-control-plane   Ready    control-plane   5d11h   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1   ssd
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1
kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          5d11h   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Schedule a Pod using required node affinity
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-required-nodeaffinity.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disktype&lt;/span&gt;
                &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
                &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po
No resources found &lt;span class="k"&gt;in &lt;/span&gt;default namespace.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-required-nodeaffinity.yaml
deployment.apps/nginx created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                 STATUS
nginx-7bdd94c84c-6hxgp   kind-calico-worker   Running
nginx-7bdd94c84c-kcbhv   kind-calico-worker   Running
nginx-7bdd94c84c-kh2f7   kind-calico-worker   Running
nginx-7bdd94c84c-qshhl   kind-calico-worker   Running
nginx-7bdd94c84c-thspc   kind-calico-worker   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Schedule a Pod using preferred node affinity
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label node kind-calico-worker&lt;span class="o"&gt;{&lt;/span&gt;2,3&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nv"&gt;gpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
&lt;/span&gt;node/kind-calico-worker2 labeled
node/kind-calico-worker3 labeled

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get no &lt;span class="nt"&gt;-L&lt;/span&gt; disktype,gpu
NAME                        STATUS   ROLES           AGE     VERSION   DISKTYPE   GPU
kind-calico-control-plane   Ready    control-plane   5d12h   v1.33.1
kind-calico-worker          Ready    &amp;lt;none&amp;gt;          5d12h   v1.33.1   ssd
kind-calico-worker2         Ready    &amp;lt;none&amp;gt;          5d12h   v1.33.1              &lt;span class="nb"&gt;true
&lt;/span&gt;kind-calico-worker3         Ready    &amp;lt;none&amp;gt;          5d12h   v1.33.1              &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-preffered-nodeaffinity.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
            &lt;span class="na"&gt;preference&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disktype&lt;/span&gt;
                &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
                &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
            &lt;span class="na"&gt;preference&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gpu&lt;/span&gt;
                &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
                &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-preffered-nodeaffinity.yaml
deployment.apps/nginx configured

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                     NODE                  STATUS
nginx-5898d5cc8d-46z7p   kind-calico-worker2   Running
nginx-5898d5cc8d-5q8rw   kind-calico-worker    Running
nginx-5898d5cc8d-kkmdz   kind-calico-worker    Running
nginx-5898d5cc8d-q95zw   kind-calico-worker    Running
nginx-5898d5cc8d-r27b5   kind-calico-worker3   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Preferred affinity is only a scoring hint: the scheduler adds matching weights to each node's score together with its other priorities, such as spreading pods across nodes, so most replicas land on the &lt;code&gt;disktype=ssd&lt;/code&gt; node but exclusive placement there is not guaranteed.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments.apps nginx
deployment.apps &lt;span class="s2"&gt;"nginx"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Pod affinity and anti-affinity
&lt;/h3&gt;

&lt;p&gt;Pod affinity and anti-affinity constrain scheduling based on the labels of pods already running on nodes, rather than on node labels; &lt;code&gt;topologyKey&lt;/code&gt; defines the topology domain, for example &lt;code&gt;kubernetes.io/hostname&lt;/code&gt; for a single node, within which the rule is evaluated. Start by running a &lt;code&gt;backend&lt;/code&gt; pod:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run backend &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend &lt;span class="nt"&gt;--image&lt;/span&gt; busybox &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;sleep &lt;/span&gt;999999
pod/backend created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME      NODE                  STATUS
backend   kind-calico-worker3   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-required-podaffinity.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;podAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
            &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-required-podaffinity.yaml
deployment.apps/frontend created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                        NODE                  STATUS
backend                     kind-calico-worker3   Running
frontend-67c67944bf-lnw56   kind-calico-worker3   Pending
frontend-67c67944bf-wc7j5   kind-calico-worker3   Pending
frontend-67c67944bf-wxj65   kind-calico-worker3   Pending
frontend-67c67944bf-xlc9r   kind-calico-worker3   Running
frontend-67c67944bf-xs79k   kind-calico-worker3   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the rule uses &lt;code&gt;topologyKey: kubernetes.io/hostname&lt;/code&gt;, every &lt;code&gt;frontend&lt;/code&gt; replica is scheduled onto the node that runs the &lt;code&gt;backend&lt;/code&gt; pod; the &lt;code&gt;Pending&lt;/code&gt; pods have already been assigned to &lt;code&gt;kind-calico-worker3&lt;/code&gt; and are still starting.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment-required-podantiaffinity.yaml&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;podAntiAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
            &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment-required-podantiaffinity.yaml
deployment.apps/frontend configured

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase
NAME                        NODE                  STATUS
backend                     kind-calico-worker3   Running
frontend-6dd9b5dbb9-cnswz   kind-calico-worker    Running
frontend-6dd9b5dbb9-f8lmq   kind-calico-worker    Running
frontend-6dd9b5dbb9-lph98   kind-calico-worker2   Running
frontend-6dd9b5dbb9-m424v   kind-calico-worker2   Running
frontend-6dd9b5dbb9-mbqmm   kind-calico-worker2   Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With anti-affinity the result is inverted: all &lt;code&gt;frontend&lt;/code&gt; replicas avoid &lt;code&gt;kind-calico-worker3&lt;/code&gt;, the node hosting the &lt;code&gt;backend&lt;/code&gt; pod.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments.apps frontend
deployment.apps &lt;span class="s2"&gt;"frontend"&lt;/span&gt; deleted

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pod/backend
pod &lt;span class="s2"&gt;"backend"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind delete clusters kind-calico
Deleted nodes: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kind-calico-worker"&lt;/span&gt; &lt;span class="s2"&gt;"kind-calico-worker3"&lt;/span&gt; &lt;span class="s2"&gt;"kind-calico-worker2"&lt;/span&gt; &lt;span class="s2"&gt;"kind-calico-control-plane"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
Deleted clusters: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"kind-calico"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






</description>
      <category>kubernetes</category>
      <category>kind</category>
      <category>devops</category>
    </item>
    <item>
      <title>Configure TLS in Redis Cluster</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Wed, 09 Apr 2025 18:32:08 +0000</pubDate>
      <link>https://dev.to/hedgehog/configure-tls-in-redis-cluster-50ec</link>
      <guid>https://dev.to/hedgehog/configure-tls-in-redis-cluster-50ec</guid>
      <description>&lt;p&gt;&lt;a href="https://ru.wikipedia.org/wiki/TLS" rel="noopener noreferrer"&gt;Transport Layer Security (TLS)&lt;/a&gt; is a cryptographic protocol designed to provide communications security over a computer network, such as the Internet. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;® also supports TLS natively.&lt;/p&gt;

&lt;p&gt;This article shows how to configure TLS in a Redis Cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You have installed a Redis Cluster as described in the article &lt;a href="https://dev.to/hedgehog/setup-a-redis-cluster-using-redis-stack-4jdl"&gt;Setup a Redis Cluster with using Redis Stack&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have certificates issued for the servers of your cluster nodes&lt;/li&gt;
&lt;/ul&gt;
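&lt;p&gt;If you only need certificates for a test environment, a throwaway CA and a node certificate can be generated with &lt;code&gt;openssl&lt;/code&gt; (a minimal sketch; the file names &lt;code&gt;redis.crt&lt;/code&gt; and &lt;code&gt;redis.key&lt;/code&gt; match the configuration below, while the subjects and lifetimes are placeholders):&lt;/p&gt;

```shell
# Self-signed test CA (placeholder subject; adjust for your environment)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 \
    -subj "/CN=Redis Test CA" -out ca.crt

# Key and CSR for a cluster node, signed by the test CA
openssl genrsa -out redis.key 2048
openssl req -new -key redis.key -subj "/CN=redis-node" -out redis.csr
openssl x509 -req -in redis.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -sha256 -out redis.crt
```

&lt;p&gt;Note that &lt;code&gt;tls-ca-cert-dir&lt;/code&gt; relies on OpenSSL's hashed directory lookup, so the CA certificate placed there needs a hash-named link (created with &lt;code&gt;openssl rehash&lt;/code&gt; or &lt;code&gt;c_rehash&lt;/code&gt;).&lt;/p&gt;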




&lt;h2&gt;
  
  
  Configure Redis configuration files
&lt;/h2&gt;

&lt;p&gt;On all nodes, add the following mandatory settings to the &lt;code&gt;redis_7000.conf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# By default, TLS/SSL is disabled. To enable it, the "tls-port" configuration
# directive can be used to define TLS-listening ports. To enable TLS on the default port, use:
#
&lt;/span&gt;&lt;span class="err"&gt;port&lt;/span&gt; &lt;span class="err"&gt;0&lt;/span&gt;
&lt;span class="err"&gt;tls-port&lt;/span&gt; &lt;span class="err"&gt;7000&lt;/span&gt;

&lt;span class="c"&gt;# The cluster port is the port that the cluster bus will listen for inbound connections on. When set 
# to the default value, 0, it will be bound to the command port + 10000. Setting this value requires 
# you to specify the cluster bus port when executing cluster meet.
#
&lt;/span&gt;&lt;span class="err"&gt;cluster-port&lt;/span&gt; &lt;span class="err"&gt;17000&lt;/span&gt;

&lt;span class="c"&gt;# Configure a X.509 certificate and private key to use for authenticating the
# server to connected clients, masters or cluster peers.  These files should be PEM formatted.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-cert-file&lt;/span&gt; &lt;span class="err"&gt;redis.crt&lt;/span&gt;
&lt;span class="err"&gt;tls-key-file&lt;/span&gt; &lt;span class="err"&gt;redis.key&lt;/span&gt;

&lt;span class="c"&gt;# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL clients and peers.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-ca-cert-dir&lt;/span&gt; &lt;span class="err"&gt;/etc/ssl/certs&lt;/span&gt;

&lt;span class="c"&gt;# By default, clients (including replica servers) on a TLS port are required
# to authenticate using valid client side certificates.
# If "no" is specified, client certificates are not required and not accepted.
# If "optional" is specified, client certificates are accepted and must be
# valid if provided, but are not required.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-auth-clients&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;# By default, a Redis replica does not attempt to establish a TLS connection with its master.
# Use the following directive to enable TLS on replication links.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-replication&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;# By default, the Redis Cluster bus uses a plain TCP connection. To enable
# TLS for the bus protocol, use the following directive:
#
&lt;/span&gt;&lt;span class="err"&gt;tls-cluster&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended
# that older formally deprecated versions are kept disabled to reduce the attack surface.
# You can explicitly specify TLS versions to support.
# Allowed values are case insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2",
# "TLSv1.3" (OpenSSL &amp;gt;= 1.1.1) or any combination.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-protocols&lt;/span&gt; &lt;span class="err"&gt;"TLSv1.2"&lt;/span&gt;

&lt;span class="c"&gt;# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information about the syntax of this string.
# Note: this configuration applies only to &amp;lt;= TLSv1.2.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-ciphers&lt;/span&gt; &lt;span class="err"&gt;DEFAULT:!MEDIUM&lt;/span&gt;

&lt;span class="c"&gt;# When choosing a cipher, use the server's preference instead of the client
# preference. By default, the server follows the client's preference.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-prefer-server-ciphers&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;# Enable TLS session caching to allow faster and less expensive
# reconnections by clients that support it.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-session-caching&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;# Change the default number of TLS sessions cached. A zero value sets the cache
# to unlimited size. The default size is 20480.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-session-cache-size&lt;/span&gt; &lt;span class="err"&gt;20480&lt;/span&gt;

&lt;span class="c"&gt;# Change the default timeout of cached TLS sessions. The default timeout is 300 seconds.
#
&lt;/span&gt;&lt;span class="err"&gt;tls-session-cache-timeout&lt;/span&gt; &lt;span class="err"&gt;300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the same settings, adjusted for the ports, in the &lt;code&gt;redis_7001.conf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;port&lt;/span&gt; &lt;span class="err"&gt;0&lt;/span&gt;
&lt;span class="err"&gt;tls-port&lt;/span&gt; &lt;span class="err"&gt;7001&lt;/span&gt;
&lt;span class="err"&gt;cluster-port&lt;/span&gt; &lt;span class="err"&gt;17001&lt;/span&gt;

&lt;span class="err"&gt;tls-cert-file&lt;/span&gt; &lt;span class="err"&gt;redis.crt&lt;/span&gt;
&lt;span class="err"&gt;tls-key-file&lt;/span&gt; &lt;span class="err"&gt;redis.key&lt;/span&gt;
&lt;span class="err"&gt;tls-ca-cert-dir&lt;/span&gt; &lt;span class="err"&gt;/etc/ssl/certs&lt;/span&gt;

&lt;span class="err"&gt;tls-auth-clients&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;tls-replication&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;tls-cluster&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="err"&gt;tls-protocols&lt;/span&gt; &lt;span class="err"&gt;"TLSv1.2"&lt;/span&gt;
&lt;span class="err"&gt;tls-ciphers&lt;/span&gt; &lt;span class="err"&gt;DEFAULT:!MEDIUM&lt;/span&gt;
&lt;span class="err"&gt;tls-prefer-server-ciphers&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="err"&gt;tls-session-caching&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;tls-session-cache-size&lt;/span&gt; &lt;span class="err"&gt;20480&lt;/span&gt;
&lt;span class="err"&gt;tls-session-cache-timeout&lt;/span&gt; &lt;span class="err"&gt;300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Redis commands with TLS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--cluster&lt;/span&gt; create &lt;span class="se"&gt;\&lt;/span&gt;
    10.0.0.124:7000 10.0.0.125:7000 10.0.0.126:7000 &lt;span class="se"&gt;\&lt;/span&gt;
    10.0.0.124:7001 10.0.0.125:7001 10.0.0.126:7001 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cluster-replicas&lt;/span&gt; 1 &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--cert&lt;/span&gt; redis.crt &lt;span class="nt"&gt;--key&lt;/span&gt; redis.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;List the cluster nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.124 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--cert&lt;/span&gt; redis.crt &lt;span class="nt"&gt;--key&lt;/span&gt; redis.key &lt;span class="se"&gt;\&lt;/span&gt;
    cluster nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--cluster&lt;/span&gt; check 10.0.0.124:7000 &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--cert&lt;/span&gt; redis.crt &lt;span class="nt"&gt;--key&lt;/span&gt; redis.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Interactive mode:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.124 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--cert&lt;/span&gt; redis.crt &lt;span class="nt"&gt;--key&lt;/span&gt; redis.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redis</category>
      <category>tls</category>
      <category>ssl</category>
    </item>
    <item>
      <title>Comprehensive guide to</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Tue, 01 Apr 2025 09:35:24 +0000</pubDate>
      <link>https://dev.to/hedgehog/-3n2e</link>
      <guid>https://dev.to/hedgehog/-3n2e</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/hedgehog/setup-a-redis-cluster-using-redis-stack-4jdl"&gt;Setup a Redis Cluster using Redis Stack&lt;/a&gt; (Ježek, Mar 27 '25, 8 min read, #redis #devops)&lt;/p&gt;</description>
      <category>redis</category>
    </item>
    <item>
      <title>Setup a Redis Cluster using Redis Stack</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Thu, 27 Mar 2025 03:35:12 +0000</pubDate>
      <link>https://dev.to/hedgehog/setup-a-redis-cluster-using-redis-stack-4jdl</link>
      <guid>https://dev.to/hedgehog/setup-a-redis-cluster-using-redis-stack-4jdl</guid>
      <description>&lt;p&gt;&lt;a href="http://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;® Cluster is a fully distributed implementation with automated sharding capabilities (horizontal scaling capabilities), designed for high performance and linear scaling up to 1000 nodes.&lt;/p&gt;

&lt;p&gt;This article shows how to set up a Redis Cluster using &lt;a href="https://redis.io/about/about-stack/" rel="noopener noreferrer"&gt;Redis Stack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpuw27lta056dlge4tmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpuw27lta056dlge4tmb.png" alt=" " width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Servers with Linux preinstalled, e.g.:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Host name&lt;/th&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;redis-1&lt;/td&gt;
&lt;td&gt;10.0.0.124&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;redis-2&lt;/td&gt;
&lt;td&gt;10.0.0.125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;redis-3&lt;/td&gt;
&lt;td&gt;10.0.0.126&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
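&lt;p&gt;Optionally, if you prefer the host names over raw IPs in the commands below, you can map them in &lt;code&gt;/etc/hosts&lt;/code&gt; on each server (a convenience assumption; the commands in this article use the IPs directly):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10.0.0.124 redis-1
10.0.0.125 redis-2
10.0.0.126 redis-3
&lt;/code&gt;&lt;/pre&gt;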

&lt;p&gt;Cluster topology:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;redis-1&lt;/th&gt;
&lt;th&gt;redis-2&lt;/th&gt;
&lt;th&gt;redis-3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary M1 (port 7000)&lt;/td&gt;
&lt;td&gt;Primary M2 (port 7000)&lt;/td&gt;
&lt;td&gt;Primary M3 (port 7000)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replica S3 (port 7001)&lt;/td&gt;
&lt;td&gt;Replica S1 (port 7001)&lt;/td&gt;
&lt;td&gt;Replica S2 (port 7001)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Setup Redis Cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install and configure three Redis nodes of the cluster
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Connect to all hosts.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/yum.repos.d/redis.repo&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Redis]&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Redis&lt;/span&gt;
&lt;span class="py"&gt;baseurl&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;http://packages.redis.io/rpm/rhel9 # replace rhel7 with the appropriate value for your platform&lt;/span&gt;
&lt;span class="py"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;
&lt;span class="py"&gt;gpgcheck&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following commands to install &lt;code&gt;Redis Stack&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://packages.redis.io/gpg &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/redis.key
&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; /tmp/redis.key
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;epel-release
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;redis-stack-server
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl disable redis-stack-server.service
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: in order to start the &lt;code&gt;redis-stack-server&lt;/code&gt; service, you may need to install the &lt;code&gt;libssl&lt;/code&gt; module (e.g. &lt;code&gt;libssl.so.10&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum compat-openssl10.x86-64
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/rc.local&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh -e

echo never &amp;gt; /sys/kernel/mm/transparent_hugepage/enabled
sysctl -w net.core.somaxconn=65535

exit 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Give executable permissions to the &lt;code&gt;/etc/rc.local&lt;/code&gt; file by running the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; +x /etc/rc.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the &lt;code&gt;/etc/sysctl.conf&lt;/code&gt; file and add the following line at the end:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;vm.overcommit_memory&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set &lt;code&gt;ulimit&lt;/code&gt; values to ensure optimal compatibility and performance for Redis. Create the file &lt;code&gt;/etc/security/limits.d/90-redis.conf&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;@redis&lt;/span&gt; &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="err"&gt;nofile&lt;/span&gt; &lt;span class="err"&gt;20480&lt;/span&gt;
&lt;span class="err"&gt;@redis&lt;/span&gt; &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="err"&gt;stack&lt;/span&gt;  &lt;span class="err"&gt;10240&lt;/span&gt;
&lt;span class="err"&gt;@redis&lt;/span&gt; &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="err"&gt;nproc&lt;/span&gt;  &lt;span class="err"&gt;10240&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the necessary folders by running the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/redis/cluster/&lt;span class="o"&gt;{&lt;/span&gt;7000,7001&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/redis/&lt;span class="o"&gt;{&lt;/span&gt;7000,7001&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/&lt;span class="o"&gt;{&lt;/span&gt;run,log&lt;span class="o"&gt;}&lt;/span&gt;/redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/redis/cluster/7000/redis_7000.conf&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;protected-mode&lt;/span&gt; &lt;span class="err"&gt;no&lt;/span&gt;
&lt;span class="err"&gt;port&lt;/span&gt; &lt;span class="err"&gt;7000&lt;/span&gt;
&lt;span class="err"&gt;bind&lt;/span&gt; &lt;span class="err"&gt;0.0.0.0&lt;/span&gt;

&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/rediscompat.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisearch.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redistimeseries.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/rejson.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisbloom.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisgears.so&lt;/span&gt; &lt;span class="err"&gt;v8-plugin-path&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/libredisgears_v8_plugin.so&lt;/span&gt;

&lt;span class="err"&gt;dir&lt;/span&gt; &lt;span class="err"&gt;/var/lib/redis/7000/&lt;/span&gt;
&lt;span class="err"&gt;appendonly&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="err"&gt;cluster-enabled&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;cluster-node-timeout&lt;/span&gt; &lt;span class="err"&gt;5000&lt;/span&gt;
&lt;span class="err"&gt;cluster-config-file&lt;/span&gt; &lt;span class="err"&gt;/etc/redis/cluster/7000/nodes_7000.conf&lt;/span&gt;

&lt;span class="err"&gt;pidfile&lt;/span&gt; &lt;span class="err"&gt;/var/run/redis/redis_7000.pid&lt;/span&gt;
&lt;span class="err"&gt;logfile&lt;/span&gt; &lt;span class="err"&gt;/var/log/redis/redis_7000.log&lt;/span&gt;
&lt;span class="err"&gt;loglevel&lt;/span&gt; &lt;span class="err"&gt;notice&lt;/span&gt;

&lt;span class="err"&gt;requirepass&lt;/span&gt; &lt;span class="nn"&gt;[ACCESSKEY]&lt;/span&gt;
&lt;span class="err"&gt;masterauth&lt;/span&gt; &lt;span class="nn"&gt;[ACCESSKEY]&lt;/span&gt;

&lt;span class="err"&gt;enable-debug-command&lt;/span&gt; &lt;span class="err"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/redis/cluster/7001/redis_7001.conf&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;protected-mode&lt;/span&gt; &lt;span class="err"&gt;no&lt;/span&gt;
&lt;span class="err"&gt;port&lt;/span&gt; &lt;span class="err"&gt;7001&lt;/span&gt;
&lt;span class="err"&gt;bind&lt;/span&gt; &lt;span class="err"&gt;0.0.0.0&lt;/span&gt;

&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/rediscompat.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisearch.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redistimeseries.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/rejson.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisbloom.so&lt;/span&gt;
&lt;span class="err"&gt;loadmodule&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/redisgears.so&lt;/span&gt; &lt;span class="err"&gt;v8-plugin-path&lt;/span&gt; &lt;span class="err"&gt;/opt/redis-stack/lib/libredisgears_v8_plugin.so&lt;/span&gt;

&lt;span class="err"&gt;dir&lt;/span&gt; &lt;span class="err"&gt;/var/lib/redis/7001/&lt;/span&gt;
&lt;span class="err"&gt;appendonly&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;

&lt;span class="err"&gt;cluster-enabled&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;cluster-node-timeout&lt;/span&gt; &lt;span class="err"&gt;5000&lt;/span&gt;
&lt;span class="err"&gt;cluster-config-file&lt;/span&gt; &lt;span class="err"&gt;/etc/redis/cluster/7001/nodes_7001.conf&lt;/span&gt;

&lt;span class="err"&gt;pidfile&lt;/span&gt; &lt;span class="err"&gt;/var/run/redis/redis_7001.pid&lt;/span&gt;
&lt;span class="err"&gt;logfile&lt;/span&gt; &lt;span class="err"&gt;/var/log/redis/redis_7001.log&lt;/span&gt;
&lt;span class="err"&gt;loglevel&lt;/span&gt; &lt;span class="err"&gt;notice&lt;/span&gt;

&lt;span class="err"&gt;requirepass&lt;/span&gt; &lt;span class="nn"&gt;[ACCESSKEY]&lt;/span&gt;
&lt;span class="err"&gt;masterauth&lt;/span&gt; &lt;span class="nn"&gt;[ACCESSKEY]&lt;/span&gt;

&lt;span class="err"&gt;enable-debug-command&lt;/span&gt; &lt;span class="err"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Give the &lt;code&gt;redis&lt;/code&gt; user and &lt;code&gt;redis&lt;/code&gt; group used by the Redis services the correct permissions on the Redis directories by running the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo chown &lt;/span&gt;redis:redis &lt;span class="nt"&gt;-R&lt;/span&gt; /var/lib/redis
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;770 &lt;span class="nt"&gt;-R&lt;/span&gt; /var/lib/redis
&lt;span class="nb"&gt;sudo chown &lt;/span&gt;redis:redis &lt;span class="nt"&gt;-R&lt;/span&gt; /etc/redis
&lt;span class="nb"&gt;sudo chown &lt;/span&gt;redis:redis /var/&lt;span class="o"&gt;{&lt;/span&gt;run,log&lt;span class="o"&gt;}&lt;/span&gt;/redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/systemd/system/redis_7000.service&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Redis key-value database on 7000&lt;/span&gt;
&lt;span class="py"&gt;Wants&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;
&lt;span class="py"&gt;After&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/opt/redis-stack/bin/redis-server /etc/redis/cluster/7000/redis_7000.conf --daemonize no --supervised no&lt;/span&gt;
&lt;span class="py"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;simple&lt;/span&gt;
&lt;span class="py"&gt;User&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;Group&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;RuntimeDirectory&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;RuntimeDirectoryMode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0755&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;multi-user.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the file &lt;code&gt;/etc/systemd/system/redis_7001.service&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Redis key-value database on 7001&lt;/span&gt;
&lt;span class="py"&gt;Wants&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;
&lt;span class="py"&gt;After&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/usr/bin/redis-server /etc/redis/cluster/7001/redis_7001.conf --daemonize no --supervised no&lt;/span&gt;
&lt;span class="py"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;simple&lt;/span&gt;
&lt;span class="py"&gt;User&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;Group&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;RuntimeDirectory&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="py"&gt;RuntimeDirectoryMode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0755&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;multi-user.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Tell &lt;code&gt;systemd&lt;/code&gt; to start the two services &lt;code&gt;redis_7000.service&lt;/code&gt; and &lt;code&gt;redis_7001.service&lt;/code&gt; automatically at server boot by running the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; /etc/systemd/system/redis_7000.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; /etc/systemd/system/redis_7001.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reboot the server:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
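&lt;p&gt;As a side note, the 7000 and 7001 files created above differ only in the port number, so the second set can be derived from the first with a simple substitution. A minimal demo of the pattern on a scratch copy (on the real servers, apply it to the &lt;code&gt;/etc/redis/cluster&lt;/code&gt; paths with &lt;code&gt;sudo&lt;/code&gt;):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Derive the 7001 config from the 7000 one by rewriting the port
tmp=$(mktemp -d)
printf 'port 7000\ndir /var/lib/redis/7000/\n' &gt; "$tmp/redis_7000.conf"
sed 's/7000/7001/g' "$tmp/redis_7000.conf" &gt; "$tmp/redis_7001.conf"
cat "$tmp/redis_7001.conf"   # prints: port 7001 / dir /var/lib/redis/7001/
&lt;/code&gt;&lt;/pre&gt;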

&lt;h3&gt;
  
  
  Validate the installation of the Redis services
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Check the status of the &lt;code&gt;redis_7000&lt;/code&gt; and &lt;code&gt;redis_7001&lt;/code&gt; services by running the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status redis_7000
● redis_7000.service - Redis key-value database on 7000
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/etc/systemd/system/redis_7000.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: disabled&lt;span class="o"&gt;)&lt;/span&gt;
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Thu 2025-03-27 03:42:15 UTC&lt;span class="p"&gt;;&lt;/span&gt; 22min ago
Main PID: 727 &lt;span class="o"&gt;(&lt;/span&gt;redis-server&lt;span class="o"&gt;)&lt;/span&gt;
  Tasks: 7 &lt;span class="o"&gt;(&lt;/span&gt;limit: 4650&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 18.9M
    CPU: 3.365s
 CGroup: /system.slice/redis_7000.service
         └─ 727 &lt;span class="s2"&gt;"/opt/redis-stack/bin/redis-server *:7000 [cluster]"&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the &lt;code&gt;redis_7000&lt;/code&gt; and &lt;code&gt;redis_7001&lt;/code&gt; logs on all servers:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 3 /var/log/redis/redis_7000.log
727:M 27 Mar 2025 03:42:17.641 &lt;span class="k"&gt;*&lt;/span&gt; DB loaded from append only file: 0.312 seconds
727:M 27 Mar 2025 03:42:17.641 &lt;span class="k"&gt;*&lt;/span&gt; Opening AOF incr file appendonly.aof.1.incr.aof on server start
727:M 27 Mar 2025 03:42:17.642 &lt;span class="k"&gt;*&lt;/span&gt; Ready to accept connections tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You shouldn't see any warnings in the log files.&lt;/p&gt;

&lt;p&gt;After checking that all servers are well configured, you can proceed with the Redis cluster configuration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Set up the Redis Cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configure the cluster
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Connect to one of the Redis servers.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To create the cluster, run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--cluster&lt;/span&gt; create &lt;span class="se"&gt;\&lt;/span&gt;
    10.0.0.124:7000 10.0.0.125:7000 10.0.0.126:7000 &lt;span class="se"&gt;\&lt;/span&gt;
    10.0.0.124:7001 10.0.0.125:7001 10.0.0.126:7001 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cluster-replicas&lt;/span&gt; 1 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY]
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The output should be similar to the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; Performing hash slots allocation on 6 nodes...
Master[0] -&amp;gt; Slots 0 - 5460
Master[1] -&amp;gt; Slots 5461 - 10922
Master[2] -&amp;gt; Slots 10923 - 16383
Adding replica 10.0.0.125:7001 to 10.0.0.124:7000
Adding replica 10.0.0.126:7001 to 10.0.0.125:7000
Adding replica 10.0.0.124:7001 to 10.0.0.126:7000
M: 41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000
   slots:[0-5460] (5461 slots) master
M: 2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000
   slots:[5461-10922] (5462 slots) master
M: 0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000
   slots:[10923-16383] (5461 slots) master
S: 779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001
   replicates 0848285de9827407af8ee8da81bcc645be57793c
S: b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001
   replicates 41b58d5cea81103d296ee70e31aa56dfa0f1aa30
S: 095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001
   replicates 2ae13d480218fc91235bdc727c57edef941817a7
Can I set the above configuration? (type 'yes' to accept):
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type &lt;code&gt;yes&lt;/code&gt; and press &lt;code&gt;Enter&lt;/code&gt; to accept the proposed configuration. &lt;br&gt;
You get the configuration details in the output:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Can I set the above configuration? (type 'yes' to accept): yes
&amp;gt;&amp;gt;&amp;gt; Nodes configuration updated
&amp;gt;&amp;gt;&amp;gt; Assign a different config epoch to each node
&amp;gt;&amp;gt;&amp;gt; Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
&amp;gt;&amp;gt;&amp;gt; Performing Cluster Check (using node 10.0.0.124:7000)
M: 41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001
slots: (0 slots) slave
replicates 2ae13d480218fc91235bdc727c57edef941817a7
S: 779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001
slots: (0 slots) slave
replicates 0848285de9827407af8ee8da81bcc645be57793c
S: b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001
slots: (0 slots) slave
replicates 41b58d5cea81103d296ee70e31aa56dfa0f1aa30
M: 0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
&amp;gt;&amp;gt;&amp;gt; Check for open slots...
&amp;gt;&amp;gt;&amp;gt; Check slots coverage...
[OK] All 16384 slots covered.
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
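&lt;p&gt;The 3 Master / 3 Replica split is not arbitrary: with N nodes and &lt;code&gt;--cluster-replicas&lt;/code&gt; R, &lt;code&gt;redis-cli&lt;/code&gt; allocates N/(R+1) masters and assigns the 16384 hash slots only to them. A quick sanity check of that arithmetic in the shell:&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;nodes=6; replicas=1
masters=$(( nodes / (replicas + 1) ))
echo "${masters} masters, $(( nodes - masters )) replicas"   # 3 masters, 3 replicas
&lt;/code&gt;&lt;/pre&gt;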

&lt;h3&gt;
  
  
  Checking the Redis Cluster
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.124 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] cluster nodes
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The output should be similar to the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000@17000 master - 0 1743042394000 2 connected 5461-10922
095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001@17001 slave 2ae13d480218fc91235bdc727c57edef941817a7 0 1743042394892 2 connected
779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001@17001 slave 0848285de9827407af8ee8da81bcc645be57793c 0 1743042394587 3 connected
b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001@17001 slave 41b58d5cea81103d296ee70e31aa56dfa0f1aa30 0 1743042394586 1 connected
41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000@17000 myself,master - 0 0 1 connected 0-5460
0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000@17000 master - 0 1743042393568 3 connected 10923-16383 
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Since everything looks OK, run several &lt;code&gt;SET&lt;/code&gt; and &lt;code&gt;GET&lt;/code&gt; commands to test whether the Redis Cluster behaves as expected.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect to Redis Master:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.124 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY]
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run several &lt;code&gt;SET&lt;/code&gt; and &lt;code&gt;GET&lt;/code&gt; commands to check the behavior of the Redis Cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;10.0.0.124:7000&amp;gt; SET foo bar
-&amp;gt; Redirected to slot &lt;span class="o"&gt;[&lt;/span&gt;12182] located at 10.0.0.126:7000
OK
10.0.0.124:7000&amp;gt; GET foo
&lt;span class="s2"&gt;"bar"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;As you can see, Redis redirects the key to the correct hash slot.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
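&lt;p&gt;Incidentally, the flags column of a &lt;code&gt;cluster nodes&lt;/code&gt; listing lends itself to quick summaries. A small &lt;code&gt;awk&lt;/code&gt; filter, shown here against a saved copy of the listing above (pipe the live &lt;code&gt;redis-cli ... cluster nodes&lt;/code&gt; output through the same filter):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Count nodes per role; the role is the last comma-separated flag
awk '{ n = split($3, f, ","); count[f[n]]++ }
     END { for (r in count) print r, count[r] }' &lt;&lt;'EOF' | sort
2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000@17000 master - 0 1743042394000 2 connected 5461-10922
095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001@17001 slave 2ae13d480218fc91235bdc727c57edef941817a7 0 1743042394892 2 connected
779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001@17001 slave 0848285de9827407af8ee8da81bcc645be57793c 0 1743042394587 3 connected
b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001@17001 slave 41b58d5cea81103d296ee70e31aa56dfa0f1aa30 0 1743042394586 1 connected
41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000@17000 myself,master - 0 0 1 connected 0-5460
0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000@17000 master - 0 1743042393568 3 connected 10923-16383
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For a healthy 3×2 cluster this prints &lt;code&gt;master 3&lt;/code&gt; and &lt;code&gt;slave 3&lt;/code&gt;.&lt;/p&gt;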




&lt;h2&gt;
  
  
  Testing the failover
&lt;/h2&gt;

&lt;p&gt;This section shows how to test the failover behavior of a Redis Cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Master node failure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 127.0.0.0 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] DEBUG &lt;span class="nb"&gt;sleep &lt;/span&gt;40
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command makes the Master unreachable by the replicas for 40 seconds, forcing its Replica (port 7001) on server &lt;code&gt;redis-2&lt;/code&gt; to take over and promote itself to Master.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check roles:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 127.0.0.0 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] info replication
&lt;span class="c"&gt;# Replication&lt;/span&gt;
role:slave
master_host:10.0.0.125
master_port:7001
...

&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.125 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] info replication
&lt;span class="c"&gt;# Replication&lt;/span&gt;
role:master
connected_slaves:1
slave0:ip&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.126,port&lt;span class="o"&gt;=&lt;/span&gt;7001,state&lt;span class="o"&gt;=&lt;/span&gt;online,offset&lt;span class="o"&gt;=&lt;/span&gt;3410,lag&lt;span class="o"&gt;=&lt;/span&gt;1

&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.124 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] cluster nodes
41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000@17000 myself,slave b2fa3c3902639db13e0a245a4f08f8657a4d06ac 0 0 7 connected
2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000@17000 master - 0 1743045119565 2 connected 5461-10922
0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000@17000 master - 0 1743045118000 3 connected 10923-16383
779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001@17001 slave 0848285de9827407af8ee8da81bcc645be57793c 0 1743045119564 3 connected
b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001@17001 master - 0 1743045119053 7 connected 0-5460
095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001@17001 slave 2ae13d480218fc91235bdc727c57edef941817a7 0 1743045118035 2 connected
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;From the output, you can see that the Replica running on &lt;code&gt;redis-2&lt;/code&gt; was promoted to Master.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
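&lt;p&gt;If you repeat this failover test, checking the roles can be scripted. Below is a small Python sketch (my own illustration, not part of the original setup) that counts masters and replicas in &lt;code&gt;cluster nodes&lt;/code&gt; output, whose third field holds the comma-separated role flags:&lt;/p&gt;

```python
def summarize_roles(cluster_nodes_output: str) -> dict:
    """Count masters and replicas in `redis-cli cluster nodes` output.

    Each line looks like: <id> <ip:port@cport> <flags> <master-id> ...
    where <flags> is comma-separated, e.g. "myself,slave" or "master".
    """
    counts = {"master": 0, "slave": 0}
    for line in cluster_nodes_output.strip().splitlines():
        flags = line.split()[2].split(",")
        counts["master" if "master" in flags else "slave"] += 1
    return counts


# Three lines taken from the output above.
sample = """\
41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000@17000 myself,slave b2fa3c3902639db13e0a245a4f08f8657a4d06ac 0 0 7 connected
2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000@17000 master - 0 1743045119565 2 connected 5461-10922
b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001@17001 master - 0 1743045119053 7 connected 0-5460
"""
print(summarize_roles(sample))  # {'master': 2, 'slave': 1}
```

&lt;p&gt;A healthy 3x2 cluster should always report three masters and three replicas, whichever nodes currently hold those roles.&lt;/p&gt;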

&lt;h3&gt;
  
  
  Replica node failure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the Replica on server &lt;code&gt;redis-3&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 127.0.0.0 &lt;span class="nt"&gt;-p&lt;/span&gt; 7001 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] DEBUG &lt;span class="nb"&gt;sleep &lt;/span&gt;40
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check roles in the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.126 &lt;span class="nt"&gt;-p&lt;/span&gt; 7000 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;ACCESSKEY] cluster nodes
41b58d5cea81103d296ee70e31aa56dfa0f1aa30 10.0.0.124:7000@17000 slave b2fa3c3902639db13e0a245a4f08f8657a4d06ac 0 1743046096000 7 connected
779283793850c6c8a30425ae2ef780114585a64e 10.0.0.124:7001@17001 slave 0848285de9827407af8ee8da81bcc645be57793c 0 1743046097459 3 connected
2ae13d480218fc91235bdc727c57edef941817a7 10.0.0.125:7000@17000 master - 0 1743046097900 2 connected 5461-10922
b2fa3c3902639db13e0a245a4f08f8657a4d06ac 10.0.0.125:7001@17001 master - 0 1743046096395 7 connected 0-5460
0848285de9827407af8ee8da81bcc645be57793c 10.0.0.126:7000@17000 myself,master - 0 0 3 connected 10923-16383
095098c7e15b2e1021e215abe0ace4fd0f841328 10.0.0.126:7001@17001 slave 2ae13d480218fc91235bdc727c57edef941817a7 0 1743046095519 2 connected
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;As you can see, the cluster topology hasn't changed: a Replica failure doesn't trigger a failover.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congrats! You have configured the Redis Cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we have seen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to install the Redis Stack Server&lt;/li&gt;
&lt;li&gt;how to set up the Redis Cluster&lt;/li&gt;
&lt;li&gt;how to test the failover of a cluster.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.toInstall%20Redis%20Stack%20on%20Linux"&gt;Install Redis Stack on Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://success.outsystems.com/documentation/how_to_guides/infrastructure/configuring_outsystems_with_redis_in_memory_session_storage/set_up_a_redis_cluster_for_production_environments/" rel="noopener noreferrer"&gt;Set up a Redis Cluster for Production environments&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://severalnines.com/blog/installing-redis-cluster-cluster-mode-enabled-auto-failover/" rel="noopener noreferrer"&gt;Installing Redis Cluster (cluster mode enabled) with auto failover&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redis</category>
      <category>devops</category>
    </item>
    <item>
      <title>Setup a Docker Swarm cluster using Vagrant</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Mon, 21 Oct 2024 23:48:00 +0000</pubDate>
      <link>https://dev.to/hedgehog/setup-a-docker-swarm-cluster-mkn</link>
      <guid>https://dev.to/hedgehog/setup-a-docker-swarm-cluster-mkn</guid>
      <description>&lt;p&gt;Docker includes &lt;a href="https://docs.docker.com/engine/swarm/" rel="noopener noreferrer"&gt;Swarm mode&lt;/a&gt; for natively managing a cluster of Docker Engines called a swarm.&lt;/p&gt;

&lt;p&gt;There is an excellent tutorial, &lt;a href="https://docs.docker.com/engine/swarm/swarm-tutorial/" rel="noopener noreferrer"&gt;Getting started with Swarm mode&lt;/a&gt;, which describes how to create a Docker Swarm cluster with 1 manager and 2 worker nodes.&lt;/p&gt;

&lt;p&gt;This article shows how to set up a high-availability (HA) Docker Swarm cluster using &lt;a href="https://developer.hashicorp.com/vagrant" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Preconditions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You have a computer with at least 12 GB of free RAM&lt;/li&gt;
&lt;li&gt;You have installed &lt;a href="https://developer.hashicorp.com/vagrant" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt; on this computer&lt;/li&gt;
&lt;li&gt;You have installed a virtualization product such as &lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;VirtualBox&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initialize a project directory
&lt;/h3&gt;

&lt;p&gt;Make a new directory for the project you will work in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;docker-swarm-ha
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;docker-swarm-ha
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Inventory file
&lt;/h3&gt;

&lt;p&gt;Create an &lt;code&gt;inventory.yaml&lt;/code&gt; file in your current directory, and open the file in your favorite editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;inventory.yaml
&lt;span class="c"&gt;# vim inventory.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fill in the file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Manager&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;drain&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manager-1&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.10&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2048&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Manager&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;drain&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manager-2&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.11&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2048&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Manager&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;drain&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manager-3&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.12&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2048&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Worker&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;active&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-1&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.13&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4096&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Worker&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;active&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-2&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.14&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4096&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Worker&lt;/span&gt;
  &lt;span class="na"&gt;availability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;active&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-3&lt;/span&gt;
  &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.15&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;centos/8&lt;/span&gt;
  &lt;span class="na"&gt;ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4096&lt;/span&gt;
  &lt;span class="na"&gt;cpus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the inventory file for your Docker Swarm cluster, with the relevant virtual machine (VM) characteristics.&lt;/p&gt;
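&lt;p&gt;A note on the manager count: Swarm managers maintain consensus via Raft, so a cluster with N managers needs a quorum of (N/2)+1 and tolerates the loss of (N-1)/2 managers; the three managers above therefore tolerate one failure. A quick Python sketch (the inventory is inlined here for illustration instead of being loaded from &lt;code&gt;inventory.yaml&lt;/code&gt;):&lt;/p&gt;

```python
# Inlined subset of the inventory above (roles only), for illustration.
inventory = [
    {"role": "Manager", "hostname": "manager-1"},
    {"role": "Manager", "hostname": "manager-2"},
    {"role": "Manager", "hostname": "manager-3"},
    {"role": "Worker", "hostname": "worker-1"},
    {"role": "Worker", "hostname": "worker-2"},
    {"role": "Worker", "hostname": "worker-3"},
]

managers = [m for m in inventory if m["role"] == "Manager"]
quorum = len(managers) // 2 + 1           # Raft majority needed to stay writable
fault_tolerance = (len(managers) - 1) // 2  # manager losses the cluster survives

print(quorum, fault_tolerance)  # 2 1
```

&lt;p&gt;This is why an odd number of managers (3, 5, 7) is the usual recommendation: adding a fourth manager raises the quorum without improving fault tolerance.&lt;/p&gt;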

&lt;h3&gt;
  
  
  Vagrant file
&lt;/h3&gt;

&lt;p&gt;Create a Vagrantfile in your current directory, and open the file in your favorite editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;Vagrantfile
&lt;span class="c"&gt;# code .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to write the Vagrantfile in Ruby.&lt;/p&gt;

&lt;h4&gt;
  
  
  At the top of the file, add the following code
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -*- mode: ruby -*-&lt;/span&gt;
&lt;span class="c1"&gt;# vi: set ft=ruby :&lt;/span&gt;

&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'yaml'&lt;/span&gt;

&lt;span class="n"&gt;current_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dirname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expand_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kp"&gt;__FILE__&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;servers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;YAML&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;current_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/inventory.yaml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;leader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;favor&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;favor&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s1"&gt;'manager-1'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Docker Swarm"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we load the inventory file and then define the leader node of the cluster. This node is used to initialize the Docker Swarm.&lt;br&gt;
We also define &lt;code&gt;Docker Swarm&lt;/code&gt; as the group for the VMs in the VirtualBox GUI.&lt;/p&gt;
&lt;h4&gt;
  
  
  Add Vagrant configuration below
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;

&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;"2"&lt;/code&gt; in &lt;code&gt;Vagrant.configure&lt;/code&gt; configures the configuration version.&lt;br&gt;
Inside this block is the place where the main code will be placed. &lt;/p&gt;
&lt;h4&gt;
  
  
  Define all VMs in accordance with the inventory file
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'image'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ip'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="ss"&gt;:virtualbox&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
          &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;customize&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"modifyvm"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"--groups"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;group&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'role'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ram'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'cpus'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Define hosts file on all VMs
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Setup hosts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :shell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:args&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ip'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt; &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
      sudo echo "$1  $2"  &amp;gt;&amp;gt; /etc/hosts
&lt;/span&gt;&lt;span class="no"&gt;    SHELL&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
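&lt;p&gt;The provisioner above appends one &lt;code&gt;"&amp;lt;ip&amp;gt;  &amp;lt;hostname&amp;gt;"&lt;/code&gt; line per VM to &lt;code&gt;/etc/hosts&lt;/code&gt;, so every node can resolve every other node by hostname. The resulting entries can be sketched as (Python used for illustration only, mirroring the inventory above):&lt;/p&gt;

```python
# IP/hostname pairs from inventory.yaml, inlined for illustration.
inventory = [
    ("192.168.100.10", "manager-1"),
    ("192.168.100.11", "manager-2"),
    ("192.168.100.12", "manager-3"),
    ("192.168.100.13", "worker-1"),
    ("192.168.100.14", "worker-2"),
    ("192.168.100.15", "worker-3"),
]

# Each VM ends up with one "<ip>  <hostname>" line per node in /etc/hosts.
hosts_lines = [f"{ip}  {hostname}" for ip, hostname in inventory]
print(hosts_lines[0])  # 192.168.100.10  manager-1
```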

&lt;h4&gt;
  
  
  Setup Linux package repositories
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Setup repositories"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :shell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;privileged: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
    sed -i s/mirror.centos.org/vault.centos.org/g /etc/yum.repos.d/CentOS-*.repo
    sed -i s/^mirrorlist=http/#mirrorlist=http/g /etc/yum.repos.d/CentOS-*.repo
    sed -i s/^#.*baseurl=http/baseurl=http/g /etc/yum.repos.d/CentOS-*.repo
    yum clean all
&lt;/span&gt;&lt;span class="no"&gt;  SHELL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Install Docker Engine on all VMs
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Install Docker"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :docker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Run Redis as a cache in a Docker container
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Run Redis cache"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :docker&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt; &lt;span class="s2"&gt;"redis"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;cmd: &lt;/span&gt;&lt;span class="s2"&gt;"--bind 0.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;args: &lt;/span&gt;&lt;span class="s2"&gt;"-p 0.0.0.0:6379:6379"&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In our case, the Redis cache is used to temporarily store the Docker Swarm join tokens.&lt;/p&gt;
&lt;h4&gt;
  
  
  Initialize docker swarm
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s2"&gt;"manager-1"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Swarm init"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :shell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;privileged: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:args&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;leader&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ip'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;leader&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'availability'&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
      &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="s2"&gt;"docker swarm init --advertise-addr $1 --availability $2 || true"&lt;/span&gt;

    &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Save join-tokens"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :shell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;privileged: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
        docker exec redis redis-cli set manager-join-token $(docker swarm join-token manager -q)
        docker exec redis redis-cli set worker-join-token $(docker swarm join-token worker -q)
&lt;/span&gt;&lt;span class="no"&gt;      SHELL&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As you can see above, the join tokens are saved into the Redis cache.&lt;/p&gt;
&lt;h4&gt;
  
  
  And finally, join all remaining VMs to the Docker Swarm cluster
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;favor&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;favor&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s1"&gt;'manager-1'&lt;/span&gt; &lt;span class="p"&gt;}.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"Add a &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'role'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; to the swarm"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;type: :shell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;privileged: &lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="ss"&gt;:args&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;leader&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ip'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'role'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;downcase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'availability'&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
        &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
          token=$(docker exec redis redis-cli -h $1 get $2-join-token)
          docker swarm join --availability $3 --token $token $1 || true
&lt;/span&gt;&lt;span class="no"&gt;        SHELL&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Boot an environment
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Bring up a virtual machine
&lt;/h3&gt;

&lt;p&gt;Run the following from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this command finishes, you will have six VMs running as a Docker Swarm cluster.&lt;/p&gt;
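&lt;p&gt;As a quick sanity check, you can confirm from the host that all machines came up before connecting to them:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;$ vagrant status
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Each of the six machines should be reported in the &lt;code&gt;running&lt;/code&gt; state.&lt;/p&gt;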

&lt;h3&gt;
  
  
  Check swarm nodes
&lt;/h3&gt;

&lt;p&gt;Connect to a manager node using SSH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant ssh manager-1
&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="err"&gt;$&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, verify the status of the cluster nodes using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;docker node &lt;span class="nb"&gt;ls
&lt;/span&gt;ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
illxi93is8446ghgl86lihg1f &lt;span class="k"&gt;*&lt;/span&gt;   manager-1   Ready     Drain          Leader           26.1.3
es0ktcmq5d9403p16b688knat     manager-2   Ready     Drain          Reachable        26.1.3
j1wnwxnysm763eex9r024h8if     manager-3   Ready     Drain          Reachable        26.1.3
comemxromzh4y8a6ciou6smk8     worker-1    Ready     Active                          26.1.3
smz5sf6vdbzszc850zxwddfie     worker-2    Ready     Active                          26.1.3
p7r1khx4h8famoi9fb0itx8kg     worker-3    Ready     Active                          26.1.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  VirtualBox GUI
&lt;/h3&gt;

&lt;p&gt;The VirtualBox GUI should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6305xae6wni5sqpepqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6305xae6wni5sqpepqm.png" alt="VirualBox GUI" width="770" height="554"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Deploy a service
&lt;/h2&gt;

&lt;p&gt;To verify that everything works, we will deploy a simple Docker service to the swarm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker service create &lt;span class="nt"&gt;--name&lt;/span&gt; my_web &lt;span class="se"&gt;\&lt;/span&gt;
                        &lt;span class="nt"&gt;--replicas&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
                        &lt;span class="nt"&gt;--publish&lt;/span&gt; &lt;span class="nv"&gt;published&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080,target&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="se"&gt;\&lt;/span&gt;
                        nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service is scheduled on available nodes. To confirm that the service was created and started successfully, use the &lt;code&gt;docker service ls&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;docker service create &lt;span class="nt"&gt;--name&lt;/span&gt; my_web &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                         &lt;span class="nt"&gt;--replicas&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                         &lt;span class="nt"&gt;--publish&lt;/span&gt; &lt;span class="nv"&gt;published&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080,target&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;                         nginx:latest
s3vgnoaca0p14jgd27ag52th4
overall progress: 3 out of 3 tasks
1/3: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt;
2/3: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt;
3/3: running   &lt;span class="o"&gt;[==================================================&amp;gt;]&lt;/span&gt;
verify: Service s3vgnoaca0p14jgd27ag52th4 converged
&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;docker service &lt;span class="nb"&gt;ls
&lt;/span&gt;ID             NAME      MODE         REPLICAS   IMAGE          PORTS
s3vgnoaca0p1   my_web    replicated   3/3        nginx:latest   &lt;span class="k"&gt;*&lt;/span&gt;:8080-&amp;gt;80/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now open the newly deployed nginx service in a web browser using any of the VMs' IP addresses:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u0bkhuawjjap8vhi2hu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u0bkhuawjjap8vhi2hu.png" alt="Nginx Welcome page" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;To delete the Docker service, use the &lt;code&gt;docker service rm&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;docker service &lt;span class="nb"&gt;rm &lt;/span&gt;my_web
my_web
&lt;span class="o"&gt;[&lt;/span&gt;vagrant@manager-1 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;docker service &lt;span class="nb"&gt;ls
&lt;/span&gt;ID        NAME      MODE      REPLICAS   IMAGE     PORTS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To stop VMs that Vagrant is managing and remove all the resources created during the machine-creation process, use the &lt;code&gt;vagrant destroy&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant destroy
    worker-3: Are you sure you want to destroy the &lt;span class="s1"&gt;'worker-3'&lt;/span&gt; VM? &lt;span class="o"&gt;[&lt;/span&gt;y/N]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To gracefully shut down the VMs, use the &lt;code&gt;vagrant halt&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant halt
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; worker-3: Attempting graceful shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; worker-2: Attempting graceful shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; worker-2: Forcing shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; worker-1: Attempting graceful shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; manager-3: Attempting graceful shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; manager-2: Attempting graceful shutdown of VM...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; manager-1: Attempting graceful shutdown of VM...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we’ve seen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to easily set up a high-availability (HA) Docker Swarm cluster using &lt;a href="https://developer.hashicorp.com/vagrant" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;how to deploy a Docker service to a swarm cluster&lt;/li&gt;
&lt;li&gt;how to clean up the environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The codebase for this article can be found &lt;a href="https://github.com/viastakhov/vagrants/blob/master/docker-swarm/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
      <category>vagrant</category>
      <category>devops</category>
    </item>
    <item>
      <title>Run Ansible on Windows without WSL</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Wed, 25 Sep 2024 11:47:49 +0000</pubDate>
      <link>https://dev.to/hedgehog/run-ansible-on-windows-without-wsl-5hd3</link>
      <guid>https://dev.to/hedgehog/run-ansible-on-windows-without-wsl-5hd3</guid>
      <description>&lt;p&gt;Can Ansible run on Windows?&lt;br&gt;
There is the simple, uncomplicated answer in &lt;a href="https://docs.ansible.com/ansible/latest/os_guide/windows_faq.html#can-ansible-run-on-windows" rel="noopener noreferrer"&gt;ansible.com&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host natively, though it can run under the Windows Subsystem for Linux (WSL).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My friend, now I am going to show you how you can run Ansible without WSL. Let's go!&lt;/p&gt;

&lt;p&gt;First of all, to run Ansible on Windows without WSL, you have to install &lt;a href="https://www.cygwin.com/install.html" rel="noopener noreferrer"&gt;Cygwin&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have installed Cygwin for Windows, you need to install the Cygwin packages required by Ansible.&lt;/p&gt;
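&lt;p&gt;As a side note, the Cygwin installer can also be driven from the command line: &lt;code&gt;-q&lt;/code&gt; runs it unattended and &lt;code&gt;-P&lt;/code&gt; selects packages. A minimal sketch (package set assumed; the GUI walkthrough below achieves the same result):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;C:\&gt; cygwin-setup-x86_64.exe -q -P ansible
&lt;/code&gt;&lt;/pre&gt;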


&lt;h2&gt;
  
  
  Install from the Internet
&lt;/h2&gt;

&lt;p&gt;This is the easiest way to install Cygwin packages. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;cygwin-setup-x86_64.exe&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtt74clt1sr8rybq8rho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtt74clt1sr8rybq8rho.png" alt="Cygwin #1" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Press &lt;code&gt;Next&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;code&gt;Install from Internet&lt;/code&gt;, and press &lt;code&gt;Next&lt;/code&gt; twice:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3y2sjg7yexprexswbnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3y2sjg7yexprexswbnt.png" alt="Cygwin #2" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a local package directory, and press &lt;code&gt;Next&lt;/code&gt;: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq72fakw319tx7bx59tl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq72fakw319tx7bx59tl5.png" alt="Cygwin #3" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select appropriate connection type, and press &lt;code&gt;Next&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s4ozsmu0lzz399rai8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s4ozsmu0lzz399rai8b.png" alt="Cygwin #4" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose appropriate download site, and press &lt;code&gt;Next&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3satl2y8wc9rdl259uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3satl2y8wc9rdl259uj.png" alt="Cygwin #5" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;code&gt;ansible&lt;/code&gt; and any other packages you require, then press &lt;code&gt;Next&lt;/code&gt; twice:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kczkezdikwao8bkayo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kczkezdikwao8bkayo9.png" alt="Cygwin #6" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the installation to complete&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi62j9bqm622ukomu8xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi62j9bqm622ukomu8xp.png" alt="Cygwin #7" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run installed &lt;code&gt;Cygwin64 Terminal&lt;/code&gt;, and check &lt;code&gt;ansible&lt;/code&gt; version:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="nt"&gt;--version&lt;/span&gt;
ansible 2.8.4
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Test whether &lt;code&gt;ansible&lt;/code&gt; is working correctly:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible 127.0.0.1  &lt;span class="nt"&gt;-m&lt;/span&gt; shell &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s1"&gt;'echo Hello world!'&lt;/span&gt;
127.0.0.1 | CHANGED | &lt;span class="nv"&gt;rc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;
Hello world!
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;Congrats, my friend, you have installed &lt;code&gt;Ansible&lt;/code&gt; on Windows without WSL!&lt;/p&gt;




&lt;h2&gt;
  
  
  Install from the local directory (portable)
&lt;/h2&gt;

&lt;p&gt;This installation option is useful for those who do not have Internet access.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First of all, you have to archive the folder where the &lt;code&gt;cygwin&lt;/code&gt; packages were downloaded (in my case: &lt;code&gt;c:\cygwin-packages\&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy &lt;code&gt;cygwin-setup-x86_64.exe&lt;/code&gt; and the archive to the computer w/o Internet access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extract the archive to an appropriate folder&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;cygwin-setup-x86_64.exe&lt;/code&gt;, and press &lt;code&gt;Next&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;code&gt;Install from Local Directory&lt;/code&gt;, and press &lt;code&gt;Next&lt;/code&gt; twice:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrjxgzj87y9211p3szl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrjxgzj87y9211p3szl3.png" alt="Cygwin #2-1" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a local package directory where you have extracted the archive with packages, and press &lt;code&gt;Next&lt;/code&gt;: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmksmr93fu9vq8q0d4wq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmksmr93fu9vq8q0d4wq.png" alt="Cygwin #2-2" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;code&gt;ansible&lt;/code&gt; and any other packages you require, then press &lt;code&gt;Next&lt;/code&gt; twice:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk52b7532pmi3ynpkhib7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk52b7532pmi3ynpkhib7.png" alt="Cygwin #2-3" width="708" height="483"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the installation to complete&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run installed &lt;code&gt;Cygwin64 Terminal&lt;/code&gt;, and check &lt;code&gt;ansible&lt;/code&gt; version:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="nt"&gt;--version&lt;/span&gt;
ansible 2.8.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Test whether &lt;code&gt;ansible&lt;/code&gt; is working correctly:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible 127.0.0.1  &lt;span class="nt"&gt;-m&lt;/span&gt; shell &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s1"&gt;'echo Hello world!'&lt;/span&gt;
127.0.0.1 | CHANGED | &lt;span class="nv"&gt;rc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;
Hello world!
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congrats, my friend, you have installed &lt;code&gt;Ansible&lt;/code&gt; on Windows without WSL, on a computer without Internet access!&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we have seen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to install &lt;em&gt;Ansible&lt;/em&gt; on Windows w/o WSL with Internet access&lt;/li&gt;
&lt;li&gt;how to install &lt;em&gt;Ansible&lt;/em&gt; on Windows w/o WSL and w/o Internet access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this post was useful for you.&lt;br&gt;
See you soon!&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>cygwin</category>
      <category>devops</category>
    </item>
    <item>
      <title>Set up Redis diskless replication</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Tue, 24 Sep 2024 20:16:12 +0000</pubDate>
      <link>https://dev.to/hedgehog/set-up-redis-diskless-replication-359</link>
      <guid>https://dev.to/hedgehog/set-up-redis-diskless-replication-359</guid>
      <description>&lt;p&gt;If you are building a production-grade application and your application uses Redis database (RDB) then you should replicate your data, so that in case of any disaster for your master data you can still use the replicated data.&lt;/p&gt;

&lt;p&gt;Redis provides replication in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Master-Slave replication&lt;/li&gt;
&lt;li&gt;Redis Cluster Replication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most basic form of replication in Redis is Master-Slave replication. Data from the Master node is replicated to one or more Slave nodes (Replicas). Replicas can serve read operations, but all write operations are performed on the Master.&lt;/p&gt;
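&lt;p&gt;For reference, a node is attached to a Master at runtime with the &lt;code&gt;REPLICAOF&lt;/code&gt; command (&lt;code&gt;SLAVEOF&lt;/code&gt; in older Redis versions); the address below is an example:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;$ redis-cli REPLICAOF 192.168.10.2 6379
OK
&lt;/code&gt;&lt;/pre&gt;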

&lt;p&gt;Master-Slave replication is a method of replicating RDB in order to improve performance and redundancy. The system has a Master that acts as the interface to the outside world, handling all external read and write requests. Whenever a change is made to the Master RDB, the change is propagated to the Replica connected to the Master. Master-Slave replication can be synchronous (in which changes to the replica RDB are made instantaneously) or asynchronous (in which changes are made only after some time).&lt;/p&gt;

&lt;p&gt;The use cases of Master-Slave replication include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improving performance by scaling out the read workload to multiple slave RDBs.&lt;/li&gt;
&lt;li&gt;Creating backups from the replica RDB, without disrupting the master RDB.&lt;/li&gt;
&lt;li&gt;Running BI and analytics workloads on the slave RDB, without disrupting the master RDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, Master-Slave replication in Redis is asynchronous. It is also non-blocking, which means that the Master can continue to operate while the Replicas synchronize the data. In addition, a Replica can handle queries using an out-of-date version of the RDB, except for a brief period during which the new data is loaded.&lt;/p&gt;

&lt;p&gt;Redis Replicas are able to perform a partial resynchronization with the Master if the replication link is lost for a relatively short time. New Replicas, and reconnecting Replicas that cannot continue the replication process by receiving only the differences, need to perform what is called a "full synchronization", in which an RDB file is transmitted from the Master to the Replicas.&lt;/p&gt;

&lt;p&gt;The transmission can happen in two different ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;disk-backed: the Master creates a new process &lt;code&gt;redis-rdb-bgsave&lt;/code&gt; that writes the RDB file on disk. Later the file is transferred by the parent process &lt;code&gt;redis-server&lt;/code&gt; to the Replicas incrementally.&lt;/li&gt;
&lt;li&gt;diskless: the Master creates a new process &lt;code&gt;redis-rdb-to-slaves&lt;/code&gt; that directly writes the RDB file to replica sockets, without touching the disk at all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6z3wklkh6x4zteu4p8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6z3wklkh6x4zteu4p8a.png" alt="Replication strategy" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With slow disks and fast (large bandwidth) networks, diskless replication works better: this can provide faster synchronization times.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;To persist the RDB on disk, you have to define &lt;code&gt;save&lt;/code&gt; directives in the &lt;code&gt;redis.conf&lt;/code&gt; configuration file; you can also run the &lt;code&gt;BGSAVE&lt;/code&gt; command manually.&lt;/p&gt;
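&lt;p&gt;For example, a stock &lt;code&gt;redis.conf&lt;/code&gt; typically ships with snapshot rules like these (snapshot after 900 s if at least 1 key changed, 300 s / 10 keys, 60 s / 10000 keys):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;save 900 1
save 300 10
save 60 10000
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;or a snapshot can be triggered by hand:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;$ redis-cli BGSAVE
Background saving started
&lt;/code&gt;&lt;/pre&gt;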

&lt;p&gt;According to &lt;a href="https://redis.io/docs/manual/persistence/#rdb-disadvantages" rel="noopener noreferrer"&gt;Redis RDB disadvantages&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The name of the child process is &lt;code&gt;redis-rdb-bgsave&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://linux.die.net/man/2/fork" rel="noopener noreferrer"&gt;Linux man page&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent. Under Linux, fork() is implemented using copy-on-write pages, so the only penalty that it incurs is the time and memory required to duplicate the parent's page tables, and to create a unique task structure for the child.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thus, fork() can cause the Master to freeze when performing BGSAVE, the related problem is described in the issue &lt;a href="https://github.com/redis/redis/issues/9503" rel="noopener noreferrer"&gt;#9503&lt;/a&gt; and &lt;code&gt;antirez&lt;/code&gt; blog posts &lt;a href="http://antirez.com/news/83" rel="noopener noreferrer"&gt;#83&lt;/a&gt;, &lt;a href="http://antirez.com/news/84" rel="noopener noreferrer"&gt;#84&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;diskless replication&lt;/code&gt; is an option to mitigate the problem.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You have installed Redis with Sentinel in accordance with the article &lt;a href="https://dev.to/hedgehog/set-up-a-redis-sentinel-3m50"&gt;Set up a Redis Sentinel&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the diskless replication
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To prevent the &lt;code&gt;redis-rdb-bgsave&lt;/code&gt; child process from being forked, disable RDB persistence by commenting out all "save" lines in &lt;code&gt;/etc/redis/redis.conf&lt;/code&gt; on all nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# save 900 1
# save 300 10
# save 60 10000
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To enable diskless replication, set the following mandatory parameters on all nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-diskless-load on-empty-db
&lt;/code&gt;&lt;/pre&gt;


&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;repl-diskless-sync-delay 5&lt;/code&gt;: the delay in seconds the server waits in order to spawn the child that transfers the RDB via socket to the replicas.&lt;br&gt;
&lt;code&gt;repl-diskless-load on-empty-db&lt;/code&gt;: use diskless loading the RDB directly from the socket only when it is completely safe for Replica.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ol&gt;
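&lt;p&gt;After restarting Redis on each node, you can confirm that the setting took effect (authentication options omitted; add &lt;code&gt;-a&lt;/code&gt; if your setup requires a password):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;$ redis-cli CONFIG GET repl-diskless-sync
1) "repl-diskless-sync"
2) "yes"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;During a subsequent full synchronization, the Master log should mention a BGSAVE with target "replicas sockets" rather than "disk" (exact wording varies by Redis version).&lt;/p&gt;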

&lt;p&gt;When diskless replication is enabled, several scenarios can occur:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Slave node is rebooted:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff1dy4fg4ip4wwwtwqgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff1dy4fg4ip4wwwtwqgf.png" alt="Scenario #1" width="568" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This scenario is normal. After the Slave node reboots, the Master performs a "full synchronization" with the Replica.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Master node is powered off:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wy2hc46d5rgycuke5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wy2hc46d5rgycuke5p.png" alt="Scenario #2" width="561" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This scenario is also normal, because a node sometimes has to be shut down for a long time for maintenance. In this case, Sentinel promotes a Replica node to Master, and the Replica's in-memory RDB remains the same as before the failover.&lt;/p&gt;

&lt;p&gt;Once the old Master node is back up, it becomes a Replica and a "full synchronization" is performed as in Scenario #1.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The process "redis-server" has been killed on Master node&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn0ysmyxdmeyjb41dhwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn0ysmyxdmeyjb41dhwk.png" alt="Scenario #3a" width="798" height="951"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This scenario is dangerous because of the risk of data loss on all nodes as described in &lt;code&gt;antirez&lt;/code&gt; blog post &lt;a href="http://antirez.com/news/80" rel="noopener noreferrer"&gt;#80&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To eliminate the possibility of the dangerous Scenario #3, add a delay before starting the &lt;code&gt;redis-server&lt;/code&gt; process on the Master node, so that Sentinel has time to promote one of the Replicas to Master:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7idelefz5lwh59ihfla7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7idelefz5lwh59ihfla7.png" alt="Scenario #3b" width="733" height="1275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Save the following bash script on all nodes as &lt;code&gt;redis-wait-for-slave-role.sh&lt;/code&gt; in the folder &lt;code&gt;/usr/local/bin/&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-ne&lt;/span&gt; 2 &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Illegal number of parameters"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;2
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;","&lt;/span&gt;
&lt;span class="nv"&gt;TIME_OUT_COMMAND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5s

&lt;span class="nv"&gt;redis_conf_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;hosts_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nv"&gt;my_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;hostname&lt;/span&gt; &lt;span class="nt"&gt;--ip-address&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/^requirepass/ {print $2}'&lt;/span&gt; &lt;span class="nv"&gt;$redis_conf_path&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'\"'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

get_role&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"Getting a role from the node '&lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="s2"&gt;' ... "&lt;/span&gt;  1&amp;gt;&amp;amp;2
  &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;timeout&lt;/span&gt; &lt;span class="nv"&gt;$TIME_OUT_COMMAND&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
               redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--pass&lt;/span&gt; &lt;span class="nv"&gt;$password&lt;/span&gt; &lt;span class="nt"&gt;--no-auth-warning&lt;/span&gt; info replication | &lt;span class="se"&gt;\&lt;/span&gt;
               &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/^role/ {split($0,a,":");print a[2]}'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
               &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'\r'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$role&lt;/span&gt; 1&amp;gt;&amp;amp;2
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$role&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; :&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  for &lt;/span&gt;host &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$hosts_ip&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nv"&gt;$my_ip&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nv"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;get_role&lt;span class="si"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$role&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'master'&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;break &lt;/span&gt;2
      &lt;span class="k"&gt;fi
    fi 
  done

  &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;1
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make the file executable:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; +x /usr/local/bin/redis-wait-for-slave-role.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on all nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl edit redis-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add a new &lt;code&gt;[Service]&lt;/code&gt; section:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Service]
ExecStartPre=/usr/local/bin/redis-wait-for-slave-role.sh  /etc/redis/redis.conf 10.0.0.21,10.0.0.22,10.0.0.23
TimeoutStartSec=infinity
&lt;/code&gt;&lt;/pre&gt;


&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;10.0.0.21,10.0.0.22,10.0.0.23&lt;/code&gt;: IPs of Master and Slave nodes&lt;br&gt;
&lt;code&gt;TimeoutStartSec=infinity&lt;/code&gt;: if you are using a version of &lt;code&gt;systemd&lt;/code&gt; older than &lt;code&gt;229&lt;/code&gt;, you will need to use &lt;code&gt;0&lt;/code&gt; instead of &lt;code&gt;infinity&lt;/code&gt; to disable the timeout. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the &lt;code&gt;redis-server&lt;/code&gt; service on the &lt;strong&gt;Slave&lt;/strong&gt; nodes only:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart redis-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check that the &lt;code&gt;redis-server&lt;/code&gt; service has started:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status redis-server.service
● redis-server.service - Advanced key-value store
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-server.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
Drop-In: /etc/systemd/system/redis-server.service.d
         └─override.conf
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2024-09-24 14:53:07 MSK&lt;span class="p"&gt;;&lt;/span&gt; 17s ago
   Docs: http://redis.io/documentation,
         man:redis-server&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
Process: 390540 &lt;span class="nv"&gt;ExecStartPre&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/redis-wait-for-slave-role.sh /etc/redis/redis.conf 10.0.0.21,10.0.0.22,10.0.0.23 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;exited, &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0/SUCCESS&lt;span class="o"&gt;)&lt;/span&gt;
Main PID: 390549 &lt;span class="o"&gt;(&lt;/span&gt;redis-server&lt;span class="o"&gt;)&lt;/span&gt;
 Status: &lt;span class="s2"&gt;"MASTER &amp;lt;-&amp;gt; REPLICA sync: Finished with success. Ready to accept connections in read-write mode."&lt;/span&gt;
  Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 7057&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 2.2G
    CPU: 6.536s
 CGroup: /system.slice/redis-server.service
         └─390549 /usr/bin/redis-server 0.0.0.0:6379    
Sep 24 14:53:06 redis-3 systemd[1]: Starting Advanced key-value store...
Sep 24 14:53:06 redis-3 redis-wait-for-slave-role.sh[390543]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.21'&lt;/span&gt; ... master
Sep 24 14:53:07 redis-3 systemd[1]: Started Advanced key-value store.
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the &lt;code&gt;redis-server&lt;/code&gt; service on the old &lt;strong&gt;Master&lt;/strong&gt; node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart redis-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check that the &lt;code&gt;redis-server&lt;/code&gt; service has started:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status redis-server.service
● redis-server.service - Advanced key-value store
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-server.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
Drop-In: /etc/systemd/system/redis-server.service.d
         └─override.conf
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2024-09-24 14:58:33 MSK&lt;span class="p"&gt;;&lt;/span&gt; 6min ago
   Docs: http://redis.io/documentation,
         man:redis-server&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
Process: 27935 &lt;span class="nv"&gt;ExecStartPre&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/redis-wait-for-slave-role.sh /etc/redis/redis.conf 10.0.0.21,10.0.0.22,10.0.0.23 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;exited, &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0/SUCCESS&lt;span class="o"&gt;)&lt;/span&gt;
Main PID: 28014 &lt;span class="o"&gt;(&lt;/span&gt;redis-server&lt;span class="o"&gt;)&lt;/span&gt;
 Status: &lt;span class="s2"&gt;"MASTER &amp;lt;-&amp;gt; REPLICA sync: Finished with success. Ready to accept connections in read-write mode."&lt;/span&gt;
  Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 7057&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 2.2G
    CPU: 9.343s
 CGroup: /system.slice/redis-server.service
         └─28014 /usr/bin/redis-server 0.0.0.0:6379
Sep 24 14:58:16 redis-1 redis-wait-for-slave-role.sh[27969]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.22'&lt;/span&gt; ... slave
Sep 24 14:58:19 redis-1 redis-wait-for-slave-role.sh[27976]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.23'&lt;/span&gt; ... slave
Sep 24 14:58:25 redis-1 redis-wait-for-slave-role.sh[27984]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.22'&lt;/span&gt; ... slave
Sep 24 14:58:25 redis-1 redis-wait-for-slave-role.sh[27991]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.23'&lt;/span&gt; ... slave
Sep 24 14:58:31 redis-1 redis-wait-for-slave-role.sh[27999]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.22'&lt;/span&gt; ... slave
Sep 24 14:58:31 redis-1 redis-wait-for-slave-role.sh[28007]: Getting a role from the node &lt;span class="s1"&gt;'10.0.0.23'&lt;/span&gt; ... master
Sep 24 14:58:33 redis-1 systemd[1]: Started Advanced key-value store.
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
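&lt;p&gt;The heart of the wait script above is the &lt;code&gt;awk&lt;/code&gt;/&lt;code&gt;tr&lt;/code&gt; pipeline that extracts the &lt;code&gt;role&lt;/code&gt; field from a &lt;code&gt;redis-cli info replication&lt;/code&gt; reply. Here is a minimal sketch of that extraction, run against a mocked reply instead of a live node (the sample text is hypothetical):&lt;/p&gt;

```shell
#!/bin/bash
# Mocked "INFO replication" reply; a live node would return this from:
#   redis-cli -h <host> -p 6379 --pass <password> info replication
sample_reply=$'# Replication\r\nrole:master\r\nconnected_slaves:2\r'

# Same extraction as in redis-wait-for-slave-role.sh: take the line
# starting with "role", split on ":", strip the trailing carriage return.
role=$(printf '%s' "$sample_reply" \
       | awk '/^role/ {split($0,a,":"); print a[2]}' \
       | tr -d '\r')
echo "$role"   # prints: master
```

The `tr -d '\r'` step matters because `redis-cli` replies use CRLF line endings, and a stray carriage return would break the string comparison in the wait loop.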




&lt;h2&gt;
  
  
  Testing the failover
&lt;/h2&gt;

&lt;p&gt;This section shows how to test the failover of the highly available Redis setup using Sentinel with diskless replication enabled.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Find the current Master node by running the following command on a Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 26379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; sentinel get-master-addr-by-name mymaster
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.23"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set a new test key on the Master:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.23 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="nb"&gt;set &lt;/span&gt;test_key Hello!
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the value of this key on all Replicas:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.21 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.22 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Trigger the failover manually by running the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 26379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; sentinel failover mymaster
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Find the new Master node by running the following command on a Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 26379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; sentinel get-master-addr-by-name mymaster
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.21"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the value of the test key on the new Master and all Replicas:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.21 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.22 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.23 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delete the RDB file on the Master node (never do this in a production environment):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; /var/lib/redis/dump.rdb 
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kill the &lt;code&gt;redis-server&lt;/code&gt; process on the Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pkill &lt;span class="s1"&gt;'redis-server'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Wait for the &lt;code&gt;redis-server&lt;/code&gt; process to start:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;systemctl status redis-server.service 
● redis-server.service - Advanced key-value store
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-server.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
Drop-In: /etc/systemd/system/redis-server.service.d
         └─override.conf
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2024-09-24 21:42:13 MSK&lt;span class="p"&gt;;&lt;/span&gt; 23s ago
   Docs: http://redis.io/documentation,
         man:redis-server&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
Process: 63631 &lt;span class="nv"&gt;ExecStartPre&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/redis-wait-for-slave-role.sh /etc/redis/redis.conf 10.0.0.21,10.0.0.22,10.0.0.23 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;exited, &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0/SUCCESS&lt;span class="o"&gt;)&lt;/span&gt;
Main PID: 64578 &lt;span class="o"&gt;(&lt;/span&gt;redis-server&lt;span class="o"&gt;)&lt;/span&gt;
 Status: &lt;span class="s2"&gt;"Ready to accept connections"&lt;/span&gt;
  Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 7057&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 69.1M
    CPU: 1.163s
 CGroup: /system.slice/redis-server.service
         └─64578 /usr/bin/redis-server 0.0.0.0:6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the value of the test key on the new Master and all Replicas:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.21 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.22 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 10.0.0.23 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; get test_key
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"Hello!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
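&lt;p&gt;The per-node checks above can also be scripted. Below is a sketch that compares the test key across all nodes; the &lt;code&gt;redis-cli&lt;/code&gt; call is mocked with a lookup function so the example runs without a live cluster:&lt;/p&gt;

```shell
#!/bin/bash
# Stand-in for: redis-cli -h "$1" -p 6379 --pass <password> get test_key
# (mocked here so the sketch is runnable without a live cluster).
get_key() {
  case "$1" in
    10.0.0.21|10.0.0.22|10.0.0.23) echo "Hello!" ;;
  esac
}

expected="Hello!"
ok=yes
for host in 10.0.0.21 10.0.0.22 10.0.0.23; do
  # Any node returning a different value marks the check as failed.
  [ "$(get_key "$host")" = "$expected" ] || ok=no
done
echo "$ok"   # prints: yes
```

On a real deployment, replacing `get_key` with the actual `redis-cli` invocation turns this into a quick post-failover consistency check.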




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The concept of replication without persistence is impressive, even if it understandably raises some wariness. Diskless replication removes the undesirable side effect of slow disk I/O during synchronization: the forked child process streams the RDB snapshot directly to the Replicas' sockets instead of writing it to disk first, which matters most with large datasets in memory.&lt;/p&gt;

</description>
      <category>redis</category>
    </item>
    <item>
      <title>Set up a Redis Sentinel</title>
      <dc:creator>Ježek</dc:creator>
      <pubDate>Wed, 18 Sep 2024 21:59:49 +0000</pubDate>
      <link>https://dev.to/hedgehog/set-up-a-redis-sentinel-3m50</link>
      <guid>https://dev.to/hedgehog/set-up-a-redis-sentinel-3m50</guid>
      <description>&lt;p&gt;&lt;a href="http://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;® Sentinel provides high availability for Redis. Redis Sentinel also provides other collateral tasks such as monitoring, notifications and acts as a configuration provider for clients.&lt;/p&gt;

&lt;p&gt;This article shows how to set up a high-availability architecture for Redis using Sentinel in Debian GNU/Linux:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyr41eqttzsgp8f0xur9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyr41eqttzsgp8f0xur9g.png" alt="Redis Sentinel" width="800" height="889"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Servers with Debian preinstalled, for example:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Host name&lt;/th&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;redis-1&lt;/td&gt;
&lt;td&gt;10.0.0.21&lt;/td&gt;
&lt;td&gt;Master node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;redis-2&lt;/td&gt;
&lt;td&gt;10.0.0.22&lt;/td&gt;
&lt;td&gt;Slave node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;redis-3&lt;/td&gt;
&lt;td&gt;10.0.0.23&lt;/td&gt;
&lt;td&gt;Slave node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sentinel-1&lt;/td&gt;
&lt;td&gt;10.0.0.24&lt;/td&gt;
&lt;td&gt;Sentinel node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sentinel-2&lt;/td&gt;
&lt;td&gt;10.0.0.25&lt;/td&gt;
&lt;td&gt;Sentinel node&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sentinel-3&lt;/td&gt;
&lt;td&gt;10.0.0.26&lt;/td&gt;
&lt;td&gt;Sentinel node&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Setup Redis replication
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install Redis
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Connect to the hosts with the roles &lt;code&gt;Master node&lt;/code&gt; and &lt;code&gt;Slave node&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; redis-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable the redis-server service so it starts when the Redis server boots:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;redis-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check redis-server status:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;systemctl status redis-server
● redis-server.service - Advanced key-value store
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-server.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendor &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Mon 2024-09-16 21:06:13&lt;span class="p"&gt;;&lt;/span&gt; 19min ago
   Docs: http://redis.io/documentation,
         man:redis-server&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
Main PID: 3057 &lt;span class="o"&gt;(&lt;/span&gt;redis-server&lt;span class="o"&gt;)&lt;/span&gt;
 Status: &lt;span class="s2"&gt;"Ready to accept connections"&lt;/span&gt;
  Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 2307&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 9.2M
    CPU: 2.146s
 CGroup: /system.slice/redis-server.service
         └─3057 /usr/bin/redis-server 127.0.0.1:6379

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The service is running and ready to accept connections.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Set up the Master node
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the configuration file at &lt;code&gt;/etc/redis/redis.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the following mandatory parameters:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected-mode yes
bind &amp;lt;masterip: 10.0.0.21&amp;gt; 127.0.0.1
requirepass &amp;lt;master-password&amp;gt;
masterauth &amp;lt;master-password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the redis-server service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart redis-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure that Redis accepts connections with authentication:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; info
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
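&lt;p&gt;The &lt;code&gt;requirepass&lt;/code&gt; value set above can later be read back from the config with a small &lt;code&gt;awk&lt;/code&gt; one-liner, which is handy for scripts. A sketch against a hypothetical config written to a temporary file (on a real node the path would be &lt;code&gt;/etc/redis/redis.conf&lt;/code&gt;):&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical redis.conf fragment written to a temp file for this demo.
conf=$(mktemp)
printf 'bind 10.0.0.21 127.0.0.1\nrequirepass "s3cret"\n' > "$conf"

# Extract the password: second field of the requirepass line, quotes stripped.
password=$(awk '/^requirepass/ {print $2}' "$conf" | tr -d '"')
echo "$password"   # prints: s3cret
rm -f "$conf"
```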

&lt;h3&gt;
  
  
  Set up the Slave nodes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the configuration file at &lt;code&gt;/etc/redis/redis.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the following mandatory parameters:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind &amp;lt;slaveip: 10.0.0.21/22&amp;gt; 127.0.0.1
requirepass &amp;lt;master-password&amp;gt;
masterauth &amp;lt;master-password&amp;gt;
replicaof &amp;lt;masterip: 10.0.0.21&amp;gt; 6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the redis-server service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart redis-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure that Redis accepts connections with authentication:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; info
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Check Redis replication setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Check the redis-server log on the Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tail /var/log/redis/redis-server.log -n 20
3498:M 16 Sep 2024 22:25:53.086 * RDB memory usage when created 0.77 Mb
3498:M 16 Sep 2024 22:25:53.086 * DB loaded from disk: 0.000 seconds
3498:M 16 Sep 2024 22:25:53.086 * Ready to accept connections
3498:M 16 Sep 2024 23:29:52.119 * Replica 10.0.0.22:6379 asks for synchronization
3498:M 16 Sep 2024 23:29:52.119 * Full resync requested by replica 10.0.0.22:6379
3498:M 16 Sep 2024 23:29:52.119 * Replication backlog created, my new replication IDs are '368571d1e52c29535615e0043a3a6d6ddc1db70b' and '0000000000000000000000000000000000000000'
3498:M 16 Sep 2024 23:29:52.119 * Starting BGSAVE for SYNC with target: disk
3498:M 16 Sep 2024 23:29:52.119 * Background saving started by pid 3684
3684:C 16 Sep 2024 23:29:52.132 * DB saved on disk
3684:C 16 Sep 2024 23:29:52.133 * RDB: 0 MB of memory used by copy-on-write
3498:M 16 Sep 2024 23:29:52.185 * Background saving terminated with success
3498:M 16 Sep 2024 23:29:52.185 * Synchronization with replica 10.0.0.22:6379 succeeded
3498:M 16 Sep 2024 23:30:09.952 * Replica 10.0.0.23:6379 asks for synchronization
3498:M 16 Sep 2024 23:30:09.952 * Full resync requested by replica 10.0.0.23:6379
3498:M 16 Sep 2024 23:30:09.952 * Starting BGSAVE for SYNC with target: disk
3498:M 16 Sep 2024 23:30:09.952 * Background saving started by pid 3688
3688:C 16 Sep 2024 23:30:09.965 * DB saved on disk
3688:C 16 Sep 2024 23:30:09.965 * RDB: 0 MB of memory used by copy-on-write
3498:M 16 Sep 2024 23:30:09.985 * Background saving terminated with success
3498:M 16 Sep 2024 23:30:09.985 * Synchronization with replica 10.0.0.23:6379 succeeded
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the replication status on the Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; info replication
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="c"&gt;# Replication&lt;/span&gt;
role:master
connected_slaves:2
slave0:ip&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.22,port&lt;span class="o"&gt;=&lt;/span&gt;6379,state&lt;span class="o"&gt;=&lt;/span&gt;online,offset&lt;span class="o"&gt;=&lt;/span&gt;1652,lag&lt;span class="o"&gt;=&lt;/span&gt;1
slave1:ip&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.23,port&lt;span class="o"&gt;=&lt;/span&gt;6379,state&lt;span class="o"&gt;=&lt;/span&gt;online,offset&lt;span class="o"&gt;=&lt;/span&gt;1652,lag&lt;span class="o"&gt;=&lt;/span&gt;1
master_replid:368571d1e52c29535615e0043a3a6d6ddc1db70b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1652
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1652
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; &lt;span class="nb"&gt;set &lt;/span&gt;abc 123
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the Slave nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; get abc
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="s2"&gt;"123"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
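&lt;p&gt;The &lt;code&gt;connected_slaves&lt;/code&gt; counter shown in the replication status above is useful for scripted health checks. Below is a sketch that parses it from a captured reply (mocked here; on a real node, pipe the output of &lt;code&gt;redis-cli --askpass info replication&lt;/code&gt; in instead):&lt;/p&gt;

```shell
#!/bin/bash
# Mocked "INFO replication" reply as returned by redis-cli on the Master.
reply=$'# Replication\r\nrole:master\r\nconnected_slaves:2\r\nslave0:ip=10.0.0.22,port=6379,state=online\r'

# Field after the colon on the connected_slaves line, carriage return stripped.
slaves=$(printf '%s' "$reply" \
         | awk -F: '/^connected_slaves/ {print $2}' \
         | tr -d '\r')
echo "$slaves"   # prints: 2
```

Alerting when this count drops below the expected number of Replicas catches a broken replication link before a failover exposes it.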

&lt;p&gt;Congrats! Master-to-Slave data replication succeeded.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup Redis Sentinel
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install Redis Sentinel
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Connect to the hosts with the role &lt;code&gt;Sentinel node&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable the redis-sentinel service so it starts when the Sentinel server boots:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check redis-sentinel status:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;systemctl status redis-sentinel
● redis-sentinel.service - Advanced key-value store
 Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-sentinel.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; vendo&amp;gt;
 Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2024-09-17 00:56:22 MSK&lt;span class="p"&gt;;&lt;/span&gt; 1min 29s ago
   Docs: http://redis.io/documentation,
         man:redis-sentinel&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
Main PID: 3259 &lt;span class="o"&gt;(&lt;/span&gt;redis-sentinel&lt;span class="o"&gt;)&lt;/span&gt;
 Status: &lt;span class="s2"&gt;"Ready to accept connections"&lt;/span&gt;
  Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 2307&lt;span class="o"&gt;)&lt;/span&gt;
 Memory: 7.2M
    CPU: 190ms
 CGroup: /system.slice/redis-sentinel.service
         └─3259 /usr/bin/redis-sentinel 127.0.0.1:26379 &lt;span class="o"&gt;[&lt;/span&gt;sentinel]
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Set up Sentinel nodes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the configuration file at &lt;code&gt;/etc/redis/sentinel.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the following mandatory parameters:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected-mode yes
bind &amp;lt;sentinelip: 10.0.0.24/25/26&amp;gt; 127.0.0.1
sentinel monitor mymaster &amp;lt;masterip: 10.0.0.21&amp;gt; 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster &amp;lt;master-password&amp;gt;
requirepass &amp;lt;sentinel-password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the redis-sentinel service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure that Redis Sentinel accepts connections with authentication:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 26379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; info sentinel
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
&lt;span class="c"&gt;# Sentinel&lt;/span&gt;
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name&lt;span class="o"&gt;=&lt;/span&gt;mymaster,status&lt;span class="o"&gt;=&lt;/span&gt;ok,address&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.21:6379,slaves&lt;span class="o"&gt;=&lt;/span&gt;2,sentinels&lt;span class="o"&gt;=&lt;/span&gt;3

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
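The `master0:` line returned by `info sentinel` packs the monitored topology into comma-separated `key=value` pairs, which is handy for scripted health checks. The parser below is an illustrative sketch (not part of redis-cli), fed with the exact line from the output above:

```python
# Parse the master0 line from `redis-cli -p 26379 --askpass info sentinel`
# into a dict, so a health-check script can assert on status and topology.

def parse_master_line(line: str) -> dict:
    # e.g. "master0:name=mymaster,status=ok,address=10.0.0.21:6379,slaves=2,sentinels=3"
    _, _, fields = line.partition(":")            # drop the "master0" prefix
    return dict(f.split("=", 1) for f in fields.split(","))

info_line = "master0:name=mymaster,status=ok,address=10.0.0.21:6379,slaves=2,sentinels=3"
master = parse_master_line(info_line)
```

A monitoring script would typically alert whenever `status` is not `ok` or `sentinels` drops below the configured quorum.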




&lt;h2&gt;
  
  
  Testing the failover
&lt;/h2&gt;

&lt;p&gt;This section shows how to test the failover of the high availability Redis using Sentinel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Master node failure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; client pause 40000
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: the pause of &lt;code&gt;client pause 40000&lt;/code&gt; is chosen to exceed &lt;code&gt;sentinel down-after-milliseconds mymaster 30000&lt;/code&gt;, so that Sentinel has time to mark the Master as down.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on one of the Sentinel nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-10&lt;/span&gt; /var/log/redis/redis-sentinel.log
6175:X 18 Sep 2024 21:39:49.309 &lt;span class="c"&gt;# +sdown master mymaster 10.0.0.21 6379&lt;/span&gt;
6175:X 18 Sep 2024 21:39:49.555 &lt;span class="c"&gt;# +new-epoch 1&lt;/span&gt;
6175:X 18 Sep 2024 21:39:49.558 &lt;span class="c"&gt;# +vote-for-leader 77fc4fbc9c6c7d063d62e38f528910ab67e8d907 1&lt;/span&gt;
6175:X 18 Sep 2024 21:39:50.450 &lt;span class="c"&gt;# +odown master mymaster 10.0.0.21 6379 #quorum 3/2&lt;/span&gt;
6175:X 18 Sep 2024 21:39:50.451 &lt;span class="c"&gt;# Next failover delay: I will not start a failover before Wed Sep 18 21:45:50 2024&lt;/span&gt;
6175:X 18 Sep 2024 21:39:51.853 &lt;span class="c"&gt;# +config-update-from sentinel 77fc4fbc9c6c7d063d62e38f528910ab67e8d907 10.0.0.26 26379 @ mymaster 10.0.0.21 6379&lt;/span&gt;
6175:X 18 Sep 2024 21:39:51.853 &lt;span class="c"&gt;# +switch-master mymaster 10.0.0.21 6379 10.0.0.22 6379&lt;/span&gt;
6175:X 18 Sep 2024 21:39:51.853 &lt;span class="k"&gt;*&lt;/span&gt; +slave slave 10.0.0.23:6379 10.0.0.23 6379 @ mymaster 10.0.0.22 6379
6175:X 18 Sep 2024 21:39:51.853 &lt;span class="k"&gt;*&lt;/span&gt; +slave slave 10.0.0.21:6379 10.0.0.21 6379 @ mymaster 10.0.0.22 6379
6175:X 18 Sep 2024 21:40:09.323 &lt;span class="k"&gt;*&lt;/span&gt; +convert-to-slave slave 10.0.0.21:6379 10.0.0.21 6379 @ mymaster 10.0.0.22 6379
&lt;/code&gt;&lt;/pre&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: the line &lt;code&gt;Next failover delay: I will not start a failover before Wed Sep 18 21:45:50 2024&lt;/code&gt; appears because &lt;code&gt;sentinel failover-timeout mymaster 180000&lt;/code&gt; prevents this Sentinel from retrying a failover on the same master within that window.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on one of the Sentinel nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 26379 &lt;span class="nt"&gt;--askpass&lt;/span&gt; sentinel get-master-addr-by-name mymaster
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.22"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Redis node with IP &lt;code&gt;10.0.0.22&lt;/code&gt; is the new Master node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the new Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; info replication
&lt;span class="c"&gt;# Replication&lt;/span&gt;
role:master
connected_slaves:2
slave0:ip&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.23,port&lt;span class="o"&gt;=&lt;/span&gt;6379,state&lt;span class="o"&gt;=&lt;/span&gt;online,offset&lt;span class="o"&gt;=&lt;/span&gt;519426,lag&lt;span class="o"&gt;=&lt;/span&gt;1
slave1:ip&lt;span class="o"&gt;=&lt;/span&gt;10.0.0.21,port&lt;span class="o"&gt;=&lt;/span&gt;6379,state&lt;span class="o"&gt;=&lt;/span&gt;online,offset&lt;span class="o"&gt;=&lt;/span&gt;519559,lag&lt;span class="o"&gt;=&lt;/span&gt;0
master_replid:ade635098e62dc68f3ee65acf37970f98e1a2f8a
master_replid2:8777248b814d71f4c609f437c37a529e01b7d0c5
master_repl_offset:519692
second_repl_offset:195253
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:519692
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
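The value `40000` in `client pause 40000` is not arbitrary: the pause must outlast `down-after-milliseconds` (30000), otherwise the Master starts answering pings again before any Sentinel marks it subjectively down and no failover is triggered. A minimal sketch of that constraint (function and variable names are illustrative):

```python
# The client pause must exceed sentinel down-after-milliseconds, or the
# "failed" master recovers before Sentinel declares it subjectively down.

DOWN_AFTER_MS = 30_000   # sentinel down-after-milliseconds mymaster 30000
PAUSE_MS = 40_000        # redis-cli --askpass client pause 40000

def pause_triggers_sdown(pause_ms: int, down_after_ms: int = DOWN_AFTER_MS) -> bool:
    return pause_ms > down_after_ms

triggers = pause_triggers_sdown(PAUSE_MS)
```

For example, pausing for only 20 seconds against this configuration would never produce the `+sdown` log entry shown above.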

&lt;h3&gt;
  
  
  Sentinel node failure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the first Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check &lt;code&gt;redis-sentinel.log&lt;/code&gt; on the second Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; /var/log/redis/redis-sentinel.log
5587:X 18 Sep 2024 22:14:36.112 &lt;span class="c"&gt;# +sdown sentinel d01aa131dc0ba6cc4069f3ec67bf6b4fa6b501f8 10.0.0.24 26379 @ mymaster 10.0.0.22 6379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; client pause 40000
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check &lt;code&gt;redis-sentinel.log&lt;/code&gt; on the second Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-10&lt;/span&gt; /var/log/redis/redis-sentinel.log
5587:X 18 Sep 2024 22:20:49.232 &lt;span class="c"&gt;# +failover-state-reconf-slaves master mymaster 10.0.0.22 6379&lt;/span&gt;
5587:X 18 Sep 2024 22:20:49.255 &lt;span class="k"&gt;*&lt;/span&gt; +slave-reconf-sent slave 10.0.0.23:6379 10.0.0.23 6379 @ mymaster 10.0.0.22 6379
5587:X 18 Sep 2024 22:20:49.691 &lt;span class="c"&gt;# -odown master mymaster 10.0.0.22 6379&lt;/span&gt;
5587:X 18 Sep 2024 22:20:54.812 &lt;span class="c"&gt;# -sdown master mymaster 10.0.0.22 6379&lt;/span&gt;
5587:X 18 Sep 2024 22:20:56.253 &lt;span class="k"&gt;*&lt;/span&gt; +slave-reconf-inprog slave 10.0.0.23:6379 10.0.0.23 6379 @ mymaster 10.0.0.22 6379
5587:X 18 Sep 2024 22:20:56.254 &lt;span class="k"&gt;*&lt;/span&gt; +slave-reconf-done slave 10.0.0.23:6379 10.0.0.23 6379 @ mymaster 10.0.0.22 6379
5587:X 18 Sep 2024 22:20:56.278 &lt;span class="c"&gt;# +failover-end master mymaster 10.0.0.22 6379&lt;/span&gt;
5587:X 18 Sep 2024 22:20:56.278 &lt;span class="c"&gt;# +switch-master mymaster 10.0.0.22 6379 10.0.0.21 6379&lt;/span&gt;
5587:X 18 Sep 2024 22:20:56.279 &lt;span class="k"&gt;*&lt;/span&gt; +slave slave 10.0.0.23:6379 10.0.0.23 6379 @ mymaster 10.0.0.21 6379
5587:X 18 Sep 2024 22:20:56.279 &lt;span class="k"&gt;*&lt;/span&gt; +slave slave 10.0.0.22:6379 10.0.0.22 6379 @ mymaster 10.0.0.21 6379
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;According to the log, the node with IP &lt;code&gt;10.0.0.21&lt;/code&gt; is the new Master node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on the second Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check &lt;code&gt;redis-sentinel.log&lt;/code&gt; on the third Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; /var/log/redis/redis-sentinel.log
5406:X 18 Sep 2024 22:26:57.116 &lt;span class="c"&gt;# +sdown sentinel 4a1264d7e6946072945569a8a6c91ad5bff182aa 10.0.0.25 26379 @ mymaster 10.0.0.21 6379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;At this point, only a single Sentinel node (&lt;code&gt;10.0.0.26&lt;/code&gt;) is running, and the Redis node with IP &lt;code&gt;10.0.0.21&lt;/code&gt; is still the Master node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command on Master node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--askpass&lt;/span&gt; client pause 40000
Please input password: &lt;span class="k"&gt;****************&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check &lt;code&gt;redis-sentinel.log&lt;/code&gt; on the third Sentinel node:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-2&lt;/span&gt; /var/log/redis/redis-sentinel.log
5406:X 18 Sep 2024 22:26:57.116 &lt;span class="c"&gt;# +sdown sentinel 4a1264d7e6946072945569a8a6c91ad5bff182aa 10.0.0.25 26379 @ mymaster 10.0.0.21 6379&lt;/span&gt;
5406:X 18 Sep 2024 22:42:37.815 &lt;span class="c"&gt;# +sdown master mymaster 10.0.0.21 6379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;According to the log, the Master node is down, but no vote for a new leader took place: without a majority of the Sentinel nodes, a failover cannot be authorized.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Don't forget to start the &lt;code&gt;redis-sentinel&lt;/code&gt; service again on the first and second Sentinel nodes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
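The behavior observed above follows from Sentinel's two separate thresholds: the configured quorum (here 2) controls when enough Sentinels agree to mark the master objectively down (`+odown`), while a strict majority of all Sentinels is required to elect a leader and authorize the failover itself. A simplified model of those two checks (this is an intuition aid, not Sentinel's actual implementation):

```python
# Simplified model of Sentinel's failover preconditions (assumptions:
# 3 sentinels, quorum 2, as configured in sentinel.conf above).

TOTAL_SENTINELS = 3
QUORUM = 2   # from: sentinel monitor mymaster 10.0.0.21 6379 2

def can_mark_odown(agreeing: int, quorum: int = QUORUM) -> bool:
    # Enough sentinels agree the master is subjectively down -> +odown.
    return agreeing >= quorum

def can_authorize_failover(alive: int, total: int = TOTAL_SENTINELS) -> bool:
    # Leader election needs a strict majority of ALL configured sentinels.
    return alive >= total // 2 + 1
```

With all three Sentinels up, both checks pass and the failover runs, as in the first test. With only `10.0.0.26` left alive, `+sdown` is still logged, but no vote happens because a single Sentinel is not a majority of three.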

&lt;p&gt;Congrats! You have configured Redis Sentinel.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we’ve seen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to set up Redis Master-Slave replication&lt;/li&gt;
&lt;li&gt;how to set up Redis Sentinel&lt;/li&gt;
&lt;li&gt;how to test the failover of the high availability Redis using Sentinel.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redis</category>
    </item>
  </channel>
</rss>
