<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thodoris Velmachos</title>
    <description>The latest articles on DEV Community by Thodoris Velmachos (@tvelmachos).</description>
    <link>https://dev.to/tvelmachos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F915750%2F1417e26d-2a75-4b5f-bcfe-0ef65e2d9a97.jpg</url>
      <title>DEV Community: Thodoris Velmachos</title>
      <link>https://dev.to/tvelmachos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tvelmachos"/>
    <language>en</language>
    <item>
      <title>Do you want to change the StorageClass of your PVs?</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Fri, 17 Jan 2025 18:45:57 +0000</pubDate>
      <link>https://dev.to/tvelmachos/do-you-want-to-change-storageclass-to-your-pvs-5hb2</link>
      <guid>https://dev.to/tvelmachos/do-you-want-to-change-storageclass-to-your-pvs-5hb2</guid>
      <description>&lt;p&gt;If the answer is yes, keep reading. A few days ago one of my clients asked me to migrate their Kubernetes cluster from GKE to a different cloud provider as an unmanaged cluster. I believe the most difficult and time-consuming task is migrating the data from the GKE cluster to the unmanaged cluster, so I decided to use Longhorn as part of my solution. In the GKE cluster, however, we did not use Longhorn; the PersistentVolumes used the GCP-native StorageClass (kubernetes.io/gce-pd). After googling the problem, I decided to use rsync to copy the data between PVs. The approach I followed is the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision the respective PVC/PV with the longhorn StorageClass.&lt;/li&gt;
&lt;li&gt;Build a simple rsync Docker image.&lt;/li&gt;
&lt;li&gt;Deploy a simple Pod that copies the data from the original PVC to the new PVC.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | k apply -f-
apiVersion: v1
kind: Pod
metadata:
  name: migrate-pv-1
  namespace: default
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - "&amp;lt;nodename&amp;gt;"
  containers:
    - command:
      - sh
      - -c
      - |
        set -x
        n=0
        rc=1
        retries=10
        attempts=$((retries+1))
        period=5

        while [[ $n -le $retries ]]
        do
          rsync -av --info=progress2,misc0,flist0 --no-inc-recursive -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=5" -z /source/ /dest/  &amp;amp;&amp;amp; rc=0 &amp;amp;&amp;amp; break
          n=$((n+1))
          echo "rsync attempt $n/$attempts failed, waiting $period seconds before trying again"
          sleep $period
        done

        if [[ $rc -ne 0 ]]; then
          echo "rsync job failed after $retries retries"
        fi
        exit $rc
      image: &amp;lt;registry&amp;gt;/rsync-image:v1
      name: rsync
      volumeMounts:
      - mountPath: /source
        name: vol-0
        readOnly: true
      - mountPath: /dest
        name: vol-1
  restartPolicy: Never
  volumes:
  - name: vol-0
    persistentVolumeClaim:
      claimName: data-minio-distributed-0
      readOnly: true
  - name: vol-1
    persistentVolumeClaim:
      claimName: minio-distributed-0
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
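&lt;p&gt;As a sanity check after the copy, you can verify that both volumes hold the same files. The following is a minimal Python sketch of such a check, assuming the Pod's /source and /dest mounts:&lt;/p&gt;

```python
import hashlib
import os

# Hypothetical verification helper (not part of the original Pod): hash
# every file under a root and compare the relative-path/digest maps of the
# source and destination mounts.

def tree_digest(root):
    """Map each file path (relative to root) to its SHA-256 hex digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as fh:
                digests[rel] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def trees_match(source_root, dest_root):
    """True when both trees hold exactly the same files with the same bytes."""
    return tree_digest(source_root) == tree_digest(dest_root)
```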



&lt;p&gt;Another option is to use the following awesome project: &lt;a href="https://github.com/utkuozdemir/pv-migrate" rel="noopener noreferrer"&gt;https://github.com/utkuozdemir/pv-migrate&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Do you want to upgrade MongoDB from version 5 to 7 on K8s?</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Fri, 19 Jul 2024 13:59:36 +0000</pubDate>
      <link>https://dev.to/tvelmachos/do-you-want-to-upgrade-mongodb-from-version-5-to-7-on-k8s-1dmb</link>
      <guid>https://dev.to/tvelmachos/do-you-want-to-upgrade-mongodb-from-version-5-to-7-on-k8s-1dmb</guid>
      <description>&lt;p&gt;Are you wondering how you can upgrade MongoDB from version 5 to 7 without affecting the old MongoDB instance and without spending time and resources migrating the data from the old instance to the new one?&lt;/p&gt;

&lt;p&gt;The answer, at least regarding the data, is Velero. As you can see from the following snippet, I use Velero to take incremental snapshots of the stateful workloads running in the K8s clusters (GKE), and Helm to deploy a new instance of MongoDB (the values below are for testing only). It goes without saying that if you want to deploy any workload to K8s you should use GitOps tools like FluxCD or ArgoCD instead of deploying imperatively with CLI tools such as helm and kubectl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Clone Volume
1. velero restore create test-mongorestore-only-pvc-pv --include-resources PersistentVolume,PersistentVolumeClaim --from-backup backup-app-mongodb-auto-20240719020007 --namespace-mappings default:dblabs
2. Install a MongoDB 6.0 instance with Helm, reusing the restored PVC (values.yaml below):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install test-mongo-upgrade-7 bitnami/mongodb --version 12.1.31 -f values.yaml
---
image:
  tag: "6.0"
auth:
  enabled: true
  rootPassword: "&amp;lt;same pass&amp;gt;"
persistence:
  enabled: true
  existingClaim: datadir-mongodb-0
3. Set featureCompatibilityVersion = 6.0
db.adminCommand({ setFeatureCompatibilityVersion: "6.0" });
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ).featureCompatibilityVersion.version
4. helm upgrade --install test-mongo-upgrade-6 bitnami/mongodb --version 12.1.31 -f values.yaml
---
image:
  tag: "7.0"
auth:
  enabled: true
  rootPassword: "&amp;lt;same pass&amp;gt;"
persistence:
  enabled: true
  existingClaim: datadir-mongodb-0
5. Set featureCompatibilityVersion = 7.0
db.adminCommand({ setFeatureCompatibilityVersion: "7.0", confirm: true });
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ).featureCompatibilityVersion.version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
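&lt;p&gt;The reason for the two-step upgrade is that MongoDB only lets you raise featureCompatibilityVersion one major release at a time, so 5.0 must pass through 6.0 before reaching 7.0. A tiny Python sketch of that rule (illustrative version list and hypothetical helper names):&lt;/p&gt;

```python
# Illustrative only: the supported FCV values and the one-step-at-a-time
# upgrade rule, as applied in the steps above.
KNOWN_VERSIONS = ["4.4", "5.0", "6.0", "7.0"]

def fcv_upgrade_path(current, target):
    """Ordered FCV values to set, e.g. 5.0 to 7.0 gives ['6.0', '7.0']."""
    start = KNOWN_VERSIONS.index(current)
    end = KNOWN_VERSIONS.index(target)
    return KNOWN_VERSIONS[start + 1 : end + 1]

def set_fcv_command(version):
    """The adminCommand document sent at each step (newer servers require confirm)."""
    return {"setFeatureCompatibilityVersion": version, "confirm": True}
```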



&lt;p&gt;More information can be found in the following links:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://www.digitalocean.com/community/questions/mongodb-updated-this-morning-to-7-0-1-now-wont-restart

https://www.mongodb.com/docs/manual/reference/command/setFeatureCompatibilityVersion/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I hope it helps. Cheers!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>mongodb</category>
      <category>backup</category>
    </item>
    <item>
      <title>Reindex Indices with Python</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Fri, 15 Dec 2023 21:18:43 +0000</pubDate>
      <link>https://dev.to/tvelmachos/reindex-indices-with-python-25em</link>
      <guid>https://dev.to/tvelmachos/reindex-indices-with-python-25em</guid>
      <description>&lt;p&gt;Hello, do you want to reindex multiple indices, and until now you have been doing it manually? Then check out the repo: I created this simple Python script to automate this time-consuming process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ey0EPKwP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqe1dou236cbelufj3k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ey0EPKwP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqe1dou236cbelufj3k0.png" alt="Image description" width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Link to the repo: &lt;a href="https://github.com/t-velmachos/py-elastic-reindex-script"&gt;reindex script&lt;/a&gt;&lt;/p&gt;
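&lt;p&gt;For a rough idea of what gets automated: for each index, a request is issued against the Elasticsearch _reindex API, whose body names a source and a destination index. A minimal Python sketch (the "-v2" destination suffix is a hypothetical convention, not taken from the repo):&lt;/p&gt;

```python
# Sketch of the _reindex request body built per index; POST each body to
# the cluster's /_reindex endpoint. The "-v2" suffix is an assumption.

def build_reindex_body(src_index, dest_suffix="-v2"):
    return {
        "source": {"index": src_index},
        "dest": {"index": src_index + dest_suffix},
    }

def build_all(indices):
    """One _reindex body per index, ready to POST to /_reindex."""
    return [build_reindex_body(idx) for idx in indices]
```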

&lt;p&gt;Give it a try. I hope it helps. Cheers!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>elasticsearch</category>
      <category>script</category>
    </item>
    <item>
      <title>Consul Service Discovery on Prometheus.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Fri, 31 Mar 2023 17:45:51 +0000</pubDate>
      <link>https://dev.to/tvelmachos/consul-service-discovery-on-prometheus-3o2d</link>
      <guid>https://dev.to/tvelmachos/consul-service-discovery-on-prometheus-3o2d</guid>
      <description>&lt;p&gt;Hello, I hope you are well and safe. Do you scrape metrics from your Kubernetes workloads dynamically, using the Consul Service Catalog as a service discovery method?&lt;/p&gt;

&lt;p&gt;I would like to mention that all the workloads are running on Kubernetes, so in order to monitor them I am using Prometheus Operator, as a result, I have configured it to identify the pods that match specific criteria in the metadata fields.&lt;/p&gt;

&lt;p&gt;Finally, in the following example, I will try to show you how you can define a Prometheus scraper that uses Consul Service Catalog as a Service Registry.&lt;/p&gt;

&lt;p&gt;Let's Dive in...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          - job_name: "test-consul-sd-backend"
            consul_sd_configs:
              - server: "consul-server.consul-system.svc.cluster.local:8500"
            relabel_configs:
              - source_labels: [__meta_consul_service]
                regex: .*backend.*
                action: keep
              - source_labels: [__meta_consul_service]
                target_label: job
            scrape_interval: 60s
            metrics_path: "/api/metrics"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
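&lt;p&gt;To make the relabeling explicit: Prometheus keeps only the Consul services whose name fully matches the regex (Prometheus anchors its regexes), then copies the service name into the job label. A small Python sketch of that behavior, with made-up service names:&lt;/p&gt;

```python
import re

# Toy model of the two relabel_configs above: the "keep" action drops
# non-matching targets; the second rule sets the "job" label from
# __meta_consul_service.

def relabel(targets, pattern=".*backend.*"):
    kept = []
    for labels in targets:
        service = labels["__meta_consul_service"]
        if re.fullmatch(pattern, service):
            kept.append(dict(labels, job=service))
    return kept
```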



&lt;p&gt;Of course, you can use any of the fields found in the Prometheus documentation: &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config" rel="noopener noreferrer"&gt;https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu7jc6wdvif79j0ckvtb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu7jc6wdvif79j0ckvtb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmg2v045riopqh37adu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmg2v045riopqh37adu3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you like the tutorial. If you do, give it a thumbs up and follow me on Twitter; you can also subscribe to my Newsletter so you don't miss any of the upcoming tutorials.&lt;/p&gt;

&lt;p&gt;Media Attribution&lt;/p&gt;

&lt;p&gt;I would like to thank Clark Tibbs for designing the awesome photo I am using in my posts.&lt;/p&gt;

&lt;p&gt;Thank you, Cheers!!!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>observabillity</category>
    </item>
    <item>
      <title>Do you want to enforce Least Privilege Principle to k8s Api Server?</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Mon, 05 Dec 2022 22:23:47 +0000</pubDate>
      <link>https://dev.to/tvelmachos/do-you-want-to-enforce-least-privilege-principle-to-k8s-api-server-3d8o</link>
      <guid>https://dev.to/tvelmachos/do-you-want-to-enforce-least-privilege-principle-to-k8s-api-server-3d8o</guid>
      <description>&lt;p&gt;Hello, in this tutorial I will describe the steps needed to provide access to Kubernetes clusters in an easier and more controlled way, leveraging Kubernetes RBAC to fine-grain the permissions assigned to k8s users, i.e. development teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://goteleport.com" rel="noopener noreferrer"&gt;Teleport &lt;/a&gt;in the rescue, let's Dive in...&lt;/p&gt;

&lt;p&gt;Prerequisite Steps:&lt;br&gt;
&lt;strong&gt;A Teleport Instance, please see the following links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://goteleport.com/docs/kubernetes-access/getting-started/" rel="noopener noreferrer"&gt;https://goteleport.com/docs/kubernetes-access/getting-started/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://goteleport.com/docs/deploy-a-cluster/open-source/" rel="noopener noreferrer"&gt;https://goteleport.com/docs/deploy-a-cluster/open-source/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/tvelmachos/teleport-database-access-management-4b53"&gt;https://dev.to/tvelmachos/teleport-database-access-management-4b53&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's proceed with the next steps.&lt;/p&gt;
&lt;h4&gt;
  
  
  On the k8s cluster, let's create a ServiceAccount, a ClusterRole, and a RoleBinding.
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The imperative way
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: developers-view-sa
  namespace: default

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developers-view-cr 
rules:
- verbs: ["get", "list", "watch"]
  resources: 
  - namespaces
  - services
  - endpoints
  - pods
  - deployments
  - configmaps
  - jobs
  - cronjobs
  - daemonsets
  - statefulsets
  - replicasets
  - persistentvolumes
  apiGroups: ["","apps","batch"]
- verbs: ["get", "list", "watch", "create"]
  resources:
  - pods/portforward
  - services/portforward
  apiGroups: [""]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-view-rb
  namespace: default
subjects:
- kind: ServiceAccount
  name: developers-view-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: developers-view-cr
  apiGroup: rbac.authorization.k8s.io
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
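&lt;p&gt;To see what this grants, here is a toy Python model of the rule evaluation (illustration only; the real RBAC authorizer also checks apiGroups, resourceNames, and more): a request is allowed when some rule lists both the verb and the resource.&lt;/p&gt;

```python
# Toy rules mirroring the ClusterRole above (abridged resource list).
VIEW_RULES = [
    {"verbs": ["get", "list", "watch"],
     "resources": ["namespaces", "services", "endpoints", "pods",
                   "deployments", "configmaps", "statefulsets"]},
    {"verbs": ["get", "list", "watch", "create"],
     "resources": ["pods/portforward", "services/portforward"]},
]

def allowed(verb, resource, rules=VIEW_RULES):
    """A request passes when any rule lists both its verb and its resource."""
    for rule in rules:
        if verb in rule["verbs"] and resource in rule["resources"]:
            return True
    return False
```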


&lt;p&gt;Finally, you need to create a role and assign it to the members of your development team. Personally, I prefer to leverage GitHub SSO in order to avoid creating the users manually; see the following link: &lt;a href="https://goteleport.com/docs/kubernetes-access/controls/" rel="noopener noreferrer"&gt;https://goteleport.com/docs/kubernetes-access/controls/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Log in to the Web Portal, go to Team, then to Auth Connectors, and create an Auth Connector.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: github
metadata:
  name: github
spec:
  client_id: xxxxxxxxxxxxxxx
  client_secret: xxxxxxxxxxxxxxx
  display: GitHub
  endpoint_url: ""
  redirect_url: https://&amp;lt;domain&amp;gt;/v1/webapi/github/callback
  teams_to_logins:
  - logins:
    - access
    - &amp;lt;k8s-role&amp;gt; i.e kube-dev-access
    organization: &amp;lt;GithubOrg&amp;gt;
    team: &amp;lt;GithubTeam&amp;gt;
  teams_to_roles: null
version: v3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then go to Roles and create a new Role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: role
metadata:
  id: 1670274429591976402
  name: kube-dev-access
spec:
  allow:
    kubernetes_groups:
    - developers-view-cr
    kubernetes_labels:
      '*': '*'
    kubernetes_users:
    - system:serviceaccount:default:developers-view-sa
    rules:
    - resources:
      - '*'
      verbs:
      - get
      - list
      - watch
  deny: {}
  options:
    cert_format: standard
    create_host_user: false
    desktop_clipboard: true
    desktop_directory_sharing: true
    enhanced_recording:
    - command
    - network
    forward_agent: false
    max_session_ttl: 30h0m0s
    pin_source_ip: false
    port_forwarding: true
    record_session:
      default: best_effort
      desktop: true
    ssh_file_copy: true
version: v5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope you like the tutorial. If you do, give it a thumbs up and follow me on &lt;a href="https://twitter.com/TVelmachos" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;; you can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share" rel="noopener noreferrer"&gt;Newsletter&lt;/a&gt; so you don't miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs" rel="noopener noreferrer"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI" rel="noopener noreferrer"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

&lt;p&gt;Happy Teleporting, Thank you, Cheers!!!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How to Migrate Indices from Elasticsearch.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Mon, 14 Nov 2022 22:01:39 +0000</pubDate>
      <link>https://dev.to/tvelmachos/how-to-migrate-indices-from-elasticsearch-47mb</link>
      <guid>https://dev.to/tvelmachos/how-to-migrate-indices-from-elasticsearch-47mb</guid>
      <description>&lt;p&gt;Hello, in this tutorial I will describe the steps you need to follow in order to migrate indices from one Elasticsearch instance to another. To accomplish this I will use a fantastic CLI called &lt;a href="https://github.com/elasticsearch-dump/elasticsearch-dump"&gt;elasticsearch-dump&lt;/a&gt; to perform the necessary operations (PUT/GET) and &lt;a href="https://min.io/"&gt;Minio&lt;/a&gt;, an S3-compatible object storage, to temporarily store the ES indices.&lt;/p&gt;

&lt;p&gt;Let's dive in...&lt;/p&gt;

&lt;p&gt;Step1. Clone the repository and build the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. git clone https://github.com/elasticsearch-dump/elasticsearch-dump

2. cd elasticsearch-dump 

3. Add any missing packages like curl nano e.t.c
### Start DockerFile 
FROM node:14-buster-slim
LABEL maintainer="ferronrsmith@gmail.com"
ARG ES_DUMP_VER
ENV ES_DUMP_VER=${ES_DUMP_VER:-latest}
ENV NODE_ENV production

RUN npm install elasticdump@${ES_DUMP_VER} -g
RUN apt-get update \
    &amp;amp;&amp;amp; apt-get install -y curl \
    &amp;amp;&amp;amp; rm -rf /var/lib/{apt,dpkg,cache,log}
COPY docker-entrypoint.sh /usr/local/bin/

ENTRYPOINT ["docker-entrypoint.sh"]

CMD ["elasticdump"]

4. docker build . -t &amp;lt;dockerhub-user&amp;gt;/&amp;lt;image-name&amp;gt;:&amp;lt;release-version&amp;gt;

5. docker login -u &amp;lt;username&amp;gt;

6. docker push &amp;lt;dockerhub-user&amp;gt;/&amp;lt;image-name&amp;gt;:&amp;lt;release-version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step2. Create a PersistentVolumeClaim for the Kubernetes Job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: elasticdumpclient-pvc
  name: elasticdumpclient-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: &amp;lt;depends on the Cloud Provider&amp;gt;
  volumeMode: Filesystem
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step3. Create a new Kubernetes Job and mount the previously provisioned Persistent Volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: elasticdumpclient
  labels:
    app: elasticdumpclient
spec:
  template:
    spec:
      initContainers:
        - name: fix-data-dir-permissions
          image: alpine:3.16.2
          command:
            - chown
            - -R  
            - 1001:1001
            - /data
          volumeMounts:
            - name: elasticdumpclient-pvc
              mountPath: /data
      containers:
        - name: elasticdumpclient
          image:  thvelmachos/elasticdump:6.68.0-v1
          command: ["/bin/sh", "-ec", "sleep 460000"] 
          imagePullPolicy: Always
          volumeMounts:
            - name: elasticdumpclient-pvc
              mountPath: /data
      restartPolicy: Never
      volumes:
        - name: elasticdumpclient-pvc
          persistentVolumeClaim:
            claimName: elasticdumpclient-pvc
  backoffLimit: 1
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step4. Upload a script to the Kubernetes Job's Pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp &amp;lt;local-path&amp;gt;/migrate-es-indices.sh &amp;lt;pod-name&amp;gt;:/data/migrate-es-indices.sh

# cat migrate-es-indices.sh
#!/bin/sh
arr=$(curl -X GET -L 'http://elasticsearch-master.default.svc.cluster.local:9200/_cat/indices/?pretty&amp;amp;s=store.size:desc' | grep -E '[^access](logs-)' | awk '{print $3}')

for idx in $arr; do
  echo "Working: $idx"
  elasticdump --s3AccessKeyId "&amp;lt;access-key&amp;gt;" --s3SecretAccessKey "&amp;lt;secret-key&amp;gt;" --input=http://elasticsearch-master.default.svc.cluster.local:9200/$idx --output "s3://migrate-elastic-indices/$idx.json" --s3ForcePathStyle true --s3Endpoint https://&amp;lt;minio-endpoint&amp;gt;:&amp;lt;port&amp;gt;
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
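&lt;p&gt;For clarity, the index selection the script performs with grep/awk can be sketched in Python: each row of the _cat/indices output has the index name in the third column, and we keep only the names with the logs- prefix (the sample rows in use are made up):&lt;/p&gt;

```python
# Sketch of the grep/awk selection: pick column 3 (the index name) from
# each _cat/indices row and keep names starting with the given prefix.

def select_indices(cat_output, prefix="logs-"):
    names = []
    for line in cat_output.strip().splitlines():
        cols = line.split()
        try:
            name = cols[2]
        except IndexError:
            continue  # skip blank or malformed rows
        if name.startswith(prefix):
            names.append(name)
    return names
```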



&lt;p&gt;Step5. Execute the script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x /data/migrate-es-indices.sh
# Send job to the Background with nohup
nohup sh /data/migrate-es-indices.sh  &amp;gt; /data/migrate.out  2&amp;gt;&amp;amp;1  &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it, Enjoy!!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sPe62tZ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84dyg1jinbeufrtgjlok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sPe62tZ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84dyg1jinbeufrtgjlok.png" alt="Image description" width="880" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you like the tutorial. If you do, give it a thumbs up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;; you can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you don't miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>Deploy Promtail as a Sidecar to your Main App.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Sat, 12 Nov 2022 11:48:27 +0000</pubDate>
      <link>https://dev.to/tvelmachos/deploy-promtail-as-a-sidecar-to-you-main-app-2fk5</link>
      <guid>https://dev.to/tvelmachos/deploy-promtail-as-a-sidecar-to-you-main-app-2fk5</guid>
      <description>&lt;p&gt;Hello, in this tutorial the goal is to describe the steps needed to deploy Promtail as a sidecar container to your app, so that you ship only the logs you need to a log management system; in our case we will use &lt;a href="https://grafana.com/oss/loki/"&gt;Grafana Loki&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we start, I would like to explain the reasoning behind the two Kubernetes objects used here: a &lt;a href="https://t-velmachos.notion.site/1df9773802444753973d8c15d4047a61"&gt;Configmap&lt;/a&gt; and an &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/"&gt;emptyDir Volume&lt;/a&gt;. We use the emptyDir volume to create a shared temporary space between the containers, where the app logs reside, and we use the ConfigMap to store the Promtail configuration, which defines which files to monitor and where to ship the logs (in our case, the Loki URL).&lt;/p&gt;

&lt;p&gt;So, let's dive in…&lt;/p&gt;

&lt;p&gt;Step1. Create the ConfigMap to store the configuration (promtail.yaml) for Promtail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-sidecar-config-map
data:
  promtail.yaml: |
      server:
        http_listen_port: 9080
        grpc_listen_port: 0
        log_level: "debug"
      positions:
        filename: /tmp/positions.yaml
      clients: # Specify target
        - url: http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push
      scrape_configs:
        - job_name:  "&amp;lt;app-name&amp;gt;" 
          static_configs: 
            - targets: 
                - localhost 
              labels:
                app: "storage-service"
                environment: "&amp;lt;environment-name&amp;gt;" 
                __path__: /app/logs/*.log # Any file .log in the EmptyDir Volume.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step2. Make the necessary changes in the Deployment Manifest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: &amp;lt;app-service&amp;gt;
  labels:
    app: &amp;lt;app-service&amp;gt;
spec:
  replicas: 1
  selector:
    matchLabels:
      app: &amp;lt;app-service&amp;gt;
  template:
    metadata:
      labels:
        app: &amp;lt;app-service&amp;gt;
    spec:
      containers:
        - name: &amp;lt;app-service&amp;gt;
          image: &amp;lt;your-name&amp;gt;/&amp;lt;app-service&amp;gt;
          imagePullPolicy: Always
          ports:
            - containerPort: &amp;lt;app-port&amp;gt;
          readinessProbe:
            exec:
              command: ["&amp;lt;your health-check&amp;gt;"]
            initialDelaySeconds: 5
          livenessProbe:
            exec:
              command: ["&amp;lt;your health-check&amp;gt;"]
            initialDelaySeconds: 10
          env:
            - name: &amp;lt;ENV-VAR-1&amp;gt;
              valueFrom:
                configMapKeyRef:
                  name: &amp;lt;app-service&amp;gt;-config-map
                  key: appName
            - name:  &amp;lt;ENV-VAR-2&amp;gt;
              valueFrom:
                secretKeyRef:
                  name: &amp;lt;app-service&amp;gt;-secret
                  key: &amp;lt;secret-key&amp;gt;
          volumeMounts:
           - name: shared-logs # shared space monitored with Promtail
             mountPath: /app/logs
        # Sidecar Container Promtail
        - name: promtail
          image: grafana/promtail:master
          args: 
            - "-config.file=/etc/promtail/promtail.yaml" # Found in the ConfigMap
          volumeMounts:
            - name: config
              mountPath: /etc/promtail
            - name: shared-logs # shared space
              mountPath: /app/logs
      imagePullSecrets:
        - name: &amp;lt;registry-secret&amp;gt; # if needed
      volumes:
         - name: config
           configMap:
            name: promtail-sidecar-config-map
         - name: shared-logs  # shared space monitored with Promtail
           emptyDir:
             sizeLimit: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope you like the tutorial. If you do, give it a thumbs up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;; you can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you don't miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>observabillity</category>
      <category>grafana</category>
    </item>
    <item>
      <title>How to Backup a Mongo DB In K8s.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Fri, 11 Nov 2022 19:39:33 +0000</pubDate>
      <link>https://dev.to/tvelmachos/how-to-backup-a-mongo-db-in-k8s-3897</link>
      <guid>https://dev.to/tvelmachos/how-to-backup-a-mongo-db-in-k8s-3897</guid>
      <description>&lt;p&gt;Hello, in this tutorial the goal is to describe the steps needed to back up a Mongo database in a simple, straightforward way, so I will leverage a Kubernetes Job to perform this task.&lt;/p&gt;

&lt;p&gt;The commands for Mongo backup/restore are the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- mongodump --uri 'mongodb://root:$MONGODB_ROOT_PASSWORD@mongo-mongodb.default.svc.cluster.local:27017/org1?authSource=admin&amp;amp;ext.auth.askPassword=true' --gzip --archive &amp;gt; /data/dump_&amp;lt;db-name&amp;gt;.gz

- mongorestore --uri "mongodb://root:&amp;lt;password&amp;gt;@mongodb.default.svc.cluster.local:27017/&amp;lt;sdb-name&amp;gt;?authMechanism=SCRAM-SHA-256&amp;amp;authSource=admin"  --gzip --archive=/data/mongo/dump_&amp;lt;db-name&amp;gt;.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
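&lt;p&gt;One gotcha with these URIs: a password containing URI-reserved characters must be percent-encoded, or the connection string breaks. A minimal Python sketch, with a hypothetical helper name and the host/database from the commands above:&lt;/p&gt;

```python
from urllib.parse import quote_plus

# Hypothetical helper: assemble a MongoDB connection URI with the user
# and password percent-encoded, as recommended for credentials that
# contain characters like "@" or spaces.

def mongo_uri(user, password, host, db, auth_source="admin"):
    return "mongodb://%s:%s@%s:27017/%s?authSource=%s" % (
        quote_plus(user), quote_plus(password), host, db, auth_source)
```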



&lt;p&gt;So, Lets Start...&lt;/p&gt;

&lt;p&gt;Step1. Provision a Persistent Volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: mongodumpclient-pvc
  name: mongodumpclient-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: standard # GKE -- depends on the Cloud Provider
  volumeMode: Filesystem
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
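&lt;p&gt;Before moving on, it is worth checking that the claim actually reached the &lt;code&gt;Bound&lt;/code&gt; state (a quick sanity check, assuming the &lt;code&gt;default&lt;/code&gt; namespace used above):&lt;/p&gt;

```shell
# The STATUS column should read "Bound" before the Job is started
kubectl get pvc mongodumpclient-pvc -n default
```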



&lt;p&gt;Step 2. Start a Kubernetes Job that references the previously provisioned PVC (Persistent Volume Claim). Before the dump starts, an init container changes the owner of the volume mount (i.e. "/data") so that mongodump is able to save the dump there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Kubernetes Dump Job
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: mongodump
  labels:
    app: mongodump
spec:
  template:
    spec:
      initContainers:
        - name: fix-data-dir-permissions
          image: alpine:3.16.2
          command:
            - chown
            - -R  
            - 1001:1001
            - /data
          volumeMounts:
            - name: mongodumpclient-pvc
              mountPath: /data
      containers:
        - name: mongodump
          image: docker.io/bitnami/mongodb:5.0.3 
          command: ["/bin/sh", "-ec", "sleep 120 &amp;amp;&amp;amp; mongodump --uri \"mongodb://root:$MONGODB_ROOT_PASSWORD@mongo-mongodb.default.svc.cluster.local:27017/org1?authSource=admin\" --gzip --archive=/data/dump_&amp;lt;db-name&amp;gt;.gz"]
          imagePullPolicy: Always
          env:
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: mongodb-root-password
          volumeMounts:
            - name: mongodumpclient-pvc
              mountPath: /data
      restartPolicy: Never
      volumes:
        - name: mongodumpclient-pvc
          persistentVolumeClaim:
            claimName: mongodumpclient-pvc
  backoffLimit: 1
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
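&lt;p&gt;Once the Job is applied, you can wait for it to complete and inspect its logs (a small helper sketch; the Job name and label come from the manifest above):&lt;/p&gt;

```shell
# Wait (up to 10 minutes) for the dump Job to finish, then inspect its output
kubectl wait --for=condition=complete job/mongodump --timeout=600s
kubectl logs job/mongodump

# The Pod name (needed for kubectl cp in the next step) can be retrieved with:
kubectl get pods -l job-name=mongodump -o name
```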



&lt;p&gt;Step 3. Finally, retrieve the MongoDB dump file from the Job's Pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp mongodump-&amp;lt;random&amp;gt;:/data/dump_&amp;lt;db-name&amp;gt;.gz /&amp;lt;local-path&amp;gt;/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, in the new cluster, we follow the same approach in order to restore the database to the other MongoDB instance.&lt;/p&gt;
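&lt;p&gt;As a rough sketch of the restore side (assuming a client Pod in the new cluster with a volume mounted at &lt;code&gt;/data/mongo&lt;/code&gt;; all names are placeholders):&lt;/p&gt;

```shell
# Copy the dump into the client Pod, then run mongorestore from inside it
kubectl cp /&amp;lt;local-path&amp;gt;/dump_&amp;lt;db-name&amp;gt;.gz &amp;lt;client-pod&amp;gt;:/data/mongo/
kubectl exec -it &amp;lt;client-pod&amp;gt; -- mongorestore --uri "mongodb://root:&amp;lt;password&amp;gt;@mongodb.default.svc.cluster.local:27017/&amp;lt;db-name&amp;gt;?authMechanism=SCRAM-SHA-256&amp;amp;authSource=admin" --gzip --archive=/data/mongo/dump_&amp;lt;db-name&amp;gt;.gz
```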

&lt;p&gt;I hope you liked the tutorial; if you did, give it a thumbs-up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;. You can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you do not miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>mongodb</category>
      <category>operations</category>
    </item>
    <item>
      <title>Pods running in GKE are not mounting their PVs.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Thu, 03 Nov 2022 21:17:49 +0000</pubDate>
      <link>https://dev.to/tvelmachos/pods-running-in-gke-are-not-mounting-their-pvs-217</link>
      <guid>https://dev.to/tvelmachos/pods-running-in-gke-are-not-mounting-their-pvs-217</guid>
      <description>&lt;p&gt;If you have faced the following problem (see the screenshot), then most probably the specific node where the persistent volume resides is not healthy; more specifically, the kubelet on that node is not working properly!&lt;/p&gt;

&lt;p&gt;So, I would suggest the following two actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the VolumeAttachments; more details can be found at the following link: &lt;a href="https://veducate.co.uk/kubelet-unable-attach-volumes/" rel="noopener noreferrer"&gt;https://veducate.co.uk/kubelet-unable-attach-volumes/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
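&lt;p&gt;In practice this boils down to a couple of commands (the names below are placeholders):&lt;/p&gt;

```shell
# List the VolumeAttachment objects and look for ones stuck on the bad node
kubectl get volumeattachments
kubectl describe volumeattachment &amp;lt;volumeattachment-name&amp;gt;

# Cross-check which node the affected Pod is scheduled on
kubectl get pod &amp;lt;pod-name&amp;gt; -o wide
```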

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbbkaqdulab325s5ouow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbbkaqdulab325s5ouow.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See also the Google documentation on how to add a new node to the node pool:&lt;br&gt;
&lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools" rel="noopener noreferrer"&gt;https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Add a new node to the node pool and drain the problematic one!
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl drain &amp;lt;node-name&amp;gt; --ignore daemonsets -- delete-emptydir-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskhvlvcc446ktwfe9o2a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskhvlvcc446ktwfe9o2a.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you liked the tutorial; if you did, give it a thumbs-up and follow me on &lt;a href="https://twitter.com/TVelmachos" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;. You can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share" rel="noopener noreferrer"&gt;Newsletter&lt;/a&gt; so you do not miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs" rel="noopener noreferrer"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI" rel="noopener noreferrer"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>troubleshooting</category>
      <category>gke</category>
    </item>
    <item>
      <title>Do you know the project called Otomi?</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Thu, 03 Nov 2022 20:34:06 +0000</pubDate>
      <link>https://dev.to/tvelmachos/do-you-know-the-project-called-otomi-3epo</link>
      <guid>https://dev.to/tvelmachos/do-you-know-the-project-called-otomi-3epo</guid>
      <description>&lt;p&gt;Have you heard of this awesome project called &lt;strong&gt;Otomi&lt;/strong&gt; (&lt;a href="https://otomi.io/"&gt;https://otomi.io/&lt;/a&gt;) by Red Kubes (&lt;a href="https://www.redkubes.com/"&gt;https://www.redkubes.com/&lt;/a&gt;)?&lt;/p&gt;

&lt;p&gt;Documentation: &lt;a href="https://otomi.io/docs/installation/get-started"&gt;https://otomi.io/docs/installation/get-started&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The installation is super easy (a simple Helm install); I have installed it on top of k3s (&lt;a href="https://k3s.io/"&gt;https://k3s.io/&lt;/a&gt;)!&lt;/p&gt;
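&lt;p&gt;For reference, the install boils down to something like the following (a sketch from memory; the chart repository URL and required values are assumptions, so verify them against the get-started docs linked above):&lt;/p&gt;

```shell
# Assumed chart repo URL -- check https://otomi.io/docs/installation/get-started
helm repo add otomi https://otomi.io/otomi-core
helm repo update

# Provider-specific values (cluster name, provider, etc.) are required per the docs
helm install otomi otomi/otomi -f values.yaml
```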

&lt;p&gt;The most awesome thing about Otomi is that it comes with all the goodies baked in!&lt;/p&gt;

&lt;p&gt;Visit the following URL to see all the features:&lt;br&gt;
&lt;a href="https://www.redkubes.com/features-otomi-container-platform/"&gt;https://www.redkubes.com/features-otomi-container-platform/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security Best Practices&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p_170Wsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pd9h06ley4xnr8nqba0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p_170Wsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pd9h06ley4xnr8nqba0c.png" alt="Image description" width="880" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  A Complete Suite of Applications
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_6YyDv8j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctlxrho2c3je8d705i2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_6YyDv8j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctlxrho2c3je8d705i2p.png" alt="Image description" width="880" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, the best thing is that everything is managed the GitOps way! Literally any change is pushed to the management Gitea and deployed with Drone (&lt;a href="https://docs.drone.io/"&gt;https://docs.drone.io/&lt;/a&gt;); of course, you can integrate your favorite CD tool like ArgoCD or FluxCD!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V8ctB34E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0b0xcm54vw3rz1jl7t0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V8ctB34E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0b0xcm54vw3rz1jl7t0d.png" alt="Image description" width="880" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GTY6x_Dq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uujhgtvrhnhfhxdcu5nz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GTY6x_Dq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uujhgtvrhnhfhxdcu5nz.png" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think it is awesome; check it out and tell me in the comments if you like it!&lt;/p&gt;

&lt;p&gt;I hope you liked the tutorial; if you did, give it a thumbs-up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;. You can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you do not miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>gitops</category>
      <category>paas</category>
    </item>
    <item>
      <title>How to execute Ansible Playbook with a Github Action.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Thu, 03 Nov 2022 20:14:34 +0000</pubDate>
      <link>https://dev.to/tvelmachos/how-to-execute-ansible-playbook-with-a-github-action-5819</link>
      <guid>https://dev.to/tvelmachos/how-to-execute-ansible-playbook-with-a-github-action-5819</guid>
      <description>&lt;p&gt;Hello! Are you wondering whether you can execute an Ansible playbook with a GitHub Action? Then please continue reading.&lt;/p&gt;

&lt;p&gt;In this particular case, I needed to build a Docker image, push it to Docker Hub, and afterwards manipulate the Docker Compose manifest in order to update the service of the stacks running in Swarm mode.&lt;/p&gt;

&lt;p&gt;So, Let's Dive In...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
name: Build-Push-Deploy-Docker-Image
on:
  workflow_dispatch:
    inputs:
      imageTag:
        description: "Image Tag"
        required: true
        default: "0.0.0"
      action:
        description: "Action to run: 'both' runs both the CI and CD stages"
        required: true
        default: "both"
jobs:
  build:
    if: inputs.action == 'both'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASS }}
      - name: Prep Docker Image (Metadata)
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: # &amp;lt;specify the image name&amp;gt;
          tags: ${{ inputs.imageTag }} # Tag
      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: ./
          platforms: linux/amd64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

  deployAnsible:
    if: inputs.action == 'both'
    needs: build
    runs-on: ubuntu-latest
    env: 
      SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }} 
      NEW_IMAGE_TAG: app:${{ inputs.imageTag }}
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/actions/ansible
        with: 
          playbook: ./.github/ansible/playbook.yml
          inventory: ./.github/ansible/inventory

  testDeployAnsible:
    if: inputs.action == 'test-cd'
    runs-on: ubuntu-latest
    env: 
      SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }} 
      NEW_IMAGE_TAG: app:${{ inputs.imageTag }}
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/actions/ansible
        with: 
          playbook: ./.github/ansible/playbook.yml
          inventory: ./.github/ansible/inventory


#./.github/ansible/playbook.yml
---
- hosts: all
  remote_user: deploy
  gather_facts: false

  pre_tasks:
    - name: Loading environment variables
      tags: always
      set_fact:
        SSH_PRIVATE_KEY: "{{ lookup('env', 'SSH_PRIVATE_KEY') }}"
        NEW_IMAGE_TAG: "{{ lookup('env', 'NEW_IMAGE_TAG') }}"

  tasks:
    - name: create .ssh directory
      file:
        path: /root/.ssh/
        state: directory
        mode: '0700' # directories need the execute bit to be traversable
      delegate_to: localhost
      delegate_facts: true

    - name: Write SSH Key 
      copy:
        content: "{{ SSH_PRIVATE_KEY }}"
        dest: /root/.ssh/ansible_key
        mode: 0400 
      delegate_to: localhost
      delegate_facts: true

    - name: Get hostname
      shell: echo "$(hostname)"
      register: result

    ### Actual Deploy of the app new version
    - name: Backup Stack File
      shell: "cp -vf /home/deploy/&amp;lt;app&amp;gt;/&amp;lt;stack&amp;gt;.yaml /home/deploy/&amp;lt;app&amp;gt;/&amp;lt;stack&amp;gt;.yaml.bak"
      when:  result.stdout == "&amp;lt;hostname&amp;gt;"

    - name: Modify the App Version
      shell: sed -i -E 's/app:[0-9\.]+/{{ NEW_IMAGE_TAG }}/g' /home/deploy/&amp;lt;app&amp;gt;/&amp;lt;stack&amp;gt;.yaml
      when:  result.stdout == "&amp;lt;hostname&amp;gt;"

    - name: Restart Stack to Pull Image
      shell: "docker stack deploy -c /home/deploy/&amp;lt;app&amp;gt;/&amp;lt;stack&amp;gt;.yaml &amp;lt;app&amp;gt;-stack --with-registry-auth"
      when:  result.stdout == "&amp;lt;hostname&amp;gt;"

    - name: Sleep for 10 seconds 
      ansible.builtin.wait_for:
        timeout: 10
      delegate_to: localhost

    - name: List the deployed Docker Swarm services
      shell: "docker service ls"
      when:  result.stdout == "&amp;lt;hostname&amp;gt;"

    - name: List the running Docker containers
      shell: "docker ps -a --filter status=running"
      when:  result.stdout == "&amp;lt;hostname&amp;gt;"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
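&lt;p&gt;The workflow above references a local composite action at &lt;code&gt;./.github/actions/ansible&lt;/code&gt; which is not shown in the post; a minimal, hypothetical &lt;code&gt;action.yml&lt;/code&gt; for it could look like this:&lt;/p&gt;

```yaml
# Hypothetical sketch of ./.github/actions/ansible/action.yml (not part of
# the original post): a composite action that installs Ansible and runs
# the playbook against the given inventory.
name: "Run Ansible Playbook"
inputs:
  playbook:
    description: "Path to the playbook"
    required: true
  inventory:
    description: "Path to the inventory"
    required: true
runs:
  using: "composite"
  steps:
    - name: Install Ansible
      run: pip install ansible
      shell: bash
    - name: Run playbook
      run: ansible-playbook -i ${{ inputs.inventory }} ${{ inputs.playbook }}
      shell: bash
```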


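&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; image bump used in the "Modify the App Version" task can be tried locally (a toy example with a hypothetical tag):&lt;/p&gt;

```shell
# Simulate the playbook's version bump: rewrite app:1.2.3 to the new tag
NEW_IMAGE_TAG="app:2.0.0"
printf 'image: app:1.2.3\n' | sed -E "s/app:[0-9.]+/${NEW_IMAGE_TAG}/g"
# prints: image: app:2.0.0
```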

&lt;p&gt;I hope you liked the tutorial; if you did, give it a thumbs-up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;. You can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you do not miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>ansible</category>
      <category>github</category>
    </item>
    <item>
      <title>How to Update a Kubernetes Image Imperatively.</title>
      <dc:creator>Thodoris Velmachos</dc:creator>
      <pubDate>Thu, 03 Nov 2022 19:33:08 +0000</pubDate>
      <link>https://dev.to/tvelmachos/how-to-update-an-kubernetes-image-3n63</link>
      <guid>https://dev.to/tvelmachos/how-to-update-an-kubernetes-image-3n63</guid>
      <description>&lt;p&gt;In the rare case that you need to update the container images manually, instead of managing the Deployment the GitOps way via FluxCD or ArgoCD, see the commands below.&lt;/p&gt;

&lt;p&gt;With FluxCD you can do the following:&lt;/p&gt;

&lt;h4&gt;
  
  
  Ask FluxCD to stop managing the Deployment / Statefulset.
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flux suspend kustomization &amp;lt;kustomization-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Update the image with the following commands.
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image statefulsets.apps/&amp;lt;statefulset-name&amp;gt; &amp;lt;container-name&amp;gt;=&amp;lt;image-name&amp;gt;
OR
kubectl patch statefulsets.apps/&amp;lt;statefulset-name&amp;gt; -p '{"spec":{"template":{"spec":{"containers":[{"name":"&amp;lt;container-name&amp;gt;","image":"&amp;lt;image-name&amp;gt;"}]}}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Ensure the Image has been updated.
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get statefulsets.apps/&amp;lt;statefulset-name&amp;gt;-o=jsonpath='{$.spec.template.spec.containers[:1].image}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
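&lt;p&gt;When you eventually want FluxCD to take over again, resume the kustomization; keep in mind that reconciliation will then bring the cluster back in line with what is in Git, so push the new image tag there first if you want to keep it:&lt;/p&gt;

```shell
flux resume kustomization &amp;lt;kustomization-name&amp;gt;
```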



&lt;p&gt;I hope you liked the tutorial; if you did, give it a thumbs-up and follow me on &lt;a href="https://twitter.com/TVelmachos"&gt;Twitter&lt;/a&gt;. You can also subscribe to my &lt;a href="https://dashboard.mailerlite.com/forms/167581/67759331736553243/share"&gt;Newsletter&lt;/a&gt; so you do not miss any of the upcoming tutorials.&lt;/p&gt;

&lt;h4&gt;
  
  
  Media Attribution
&lt;/h4&gt;

&lt;p&gt;I would like to thank &lt;a href="https://unsplash.com/@clarktibbs"&gt;Clark Tibbs&lt;/a&gt; for designing the awesome &lt;a href="https://unsplash.com/photos/oqStl2L5oxI"&gt;photo &lt;/a&gt;I am using in my posts.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
