<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kyriakos Akriotis</title>
    <description>The latest articles on DEV Community by Kyriakos Akriotis (@akyriako).</description>
    <link>https://dev.to/akyriako</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1027212%2F81e770a7-b1bf-4e42-9426-d0cf05c06947.jpg</url>
      <title>DEV Community: Kyriakos Akriotis</title>
      <link>https://dev.to/akyriako</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akyriako"/>
    <language>en</language>
    <item>
      <title>Kubernetes Logging with Grafana Loki &amp; Promtail in under 10 minutes</title>
      <dc:creator>Kyriakos Akriotis</dc:creator>
      <pubDate>Tue, 21 Feb 2023 08:49:59 +0000</pubDate>
      <link>https://dev.to/akyriako/kubernetes-logging-with-grafana-loki-promtail-in-under-10-minutes-3o9j</link>
      <guid>https://dev.to/akyriako/kubernetes-logging-with-grafana-loki-promtail-in-under-10-minutes-3o9j</guid>
      <description>&lt;h2&gt;
  
  
  What is the goal?
&lt;/h2&gt;

&lt;p&gt;After completing this lab, we will have consolidated all the logs generated in our Kubernetes cluster in a tidy, neat, real-time dashboard in Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we going to need?
&lt;/h2&gt;

&lt;p&gt;We are going to need a:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Grafana installation.&lt;/li&gt;
&lt;li&gt;Grafana Loki installation.&lt;/li&gt;
&lt;li&gt;Promtail agent on every node of the Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you don’t have a Kubernetes cluster already in place, follow this guide to get started quickly with a containerised variant based on &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;K3S&lt;/a&gt;:&lt;br&gt;
&lt;a href="https://akyriako.medium.com/provision-a-high-availability-k3s-cluster-with-k3d-a7519f476c9c" rel="noopener noreferrer"&gt;Provision a Highly Available K3S Cluster with K3D&lt;/a&gt;&lt;br&gt;
or this one if you want a full-blown environment based on virtual machines in the cloud or on-premises:&lt;br&gt;
&lt;a href="https://akyriako.medium.com/install-kubernetes-on-ubuntu-20-04-f1791e8cf799" rel="noopener noreferrer"&gt;Install Kubernetes 1.26 on Ubuntu 20.04&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is Grafana?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana &lt;/a&gt; is an analytics and interactive visualisation platform. It provides a rich variety of charts, graphs, and alerts and connects to plead of supported data sources as Prometheus, time-series databases or the known RDBMs. It allows you to query, visualise, create alerts on your metrics regardless where they are stored.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can think of it as the equivalent of Kibana in the ELK stack.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The installation is fairly simple and we are going to perform it via &lt;em&gt;Helm&lt;/em&gt;. If you don’t have Helm installed on your workstation already, you can install it with brew if you are working on &lt;strong&gt;macOS&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or with the following bash commands if you are working on &lt;strong&gt;Debian/Ubuntu&lt;/strong&gt; Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg &amp;gt; /dev/null

sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list

sudo apt-get update
sudo apt-get install helm --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that behind us, we can now install the Helm chart for Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm install grafana grafana/grafana --namespace grafana --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see what's provisioned so far:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all -n grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwjuzpf446q161pgradp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwjuzpf446q161pgradp.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;service/grafana&lt;/code&gt; service would be of type &lt;code&gt;ClusterIP&lt;/code&gt; in a vanilla installation; in my case I am already using MetalLB as a network load balancer in my cluster and I have patched the service to type &lt;code&gt;LoadBalancer&lt;/code&gt;. Feel free to ignore this, as we are going to port-forward this service later.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is Grafana Loki &amp;amp; Promtail?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Grafana Loki&lt;/a&gt; is a logs aggregation system, more specifically as stated in their website: ”&lt;em&gt;is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.&lt;/em&gt;” It’s a fairly new open source project that was started in 2018 at Grafana Labs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4755w0pxv4y9bzmpuq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4755w0pxv4y9bzmpuq5.png" alt=" " width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Loki uses Promtail to aggregate logs. &lt;a href="https://grafana.com/docs/loki/latest/clients/promtail/" rel="noopener noreferrer"&gt;Promtail&lt;/a&gt; is a &lt;strong&gt;logs collector agent&lt;/strong&gt; that collects, (re)labels and ships logs to Loki. It is built specifically for Loki — an instance of Promtail will run on every Kubernetes node. It uses the exact same service discovery as &lt;em&gt;Prometheus&lt;/em&gt; and supports similar methods for labeling, transforming, and filtering logs before their ingestion into Loki.&lt;/p&gt;

&lt;p&gt;Loki &lt;strong&gt;doesn’t index&lt;/strong&gt; the actual text of the logs. The log entries are grouped into streams and then indexed with labels. In that way, Loki not only reduces the overall costs but also reduces the time between the ingestion of log entries and their availability in queries.&lt;/p&gt;

&lt;p&gt;It comes with its own query language, &lt;em&gt;LogQL&lt;/em&gt;, which can be used from its own command-line interface or directly from Grafana. Last but not least, it can tightly integrate with the Alert Manager of Prometheus — though, the last two are &lt;em&gt;out of the scope of this article&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can think of it as the equivalent (not 1-to-1, but in a broader context) of Elasticsearch in the ELK stack.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Loki consists of multiple &lt;a href="https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/" rel="noopener noreferrer"&gt;components/microservices&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklqq0gd9tw11xbe13bos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklqq0gd9tw11xbe13bos.png" alt=" " width="726" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;that can be deployed in &lt;em&gt;3 different modes&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode" rel="noopener noreferrer"&gt;Monolithic&lt;/a&gt; mode, where all of Loki’s microservice components run inside a single process as a single binary.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#simple-scalable-deployment-mode" rel="noopener noreferrer"&gt;Simple Scalable&lt;/a&gt; mode, if you want to separate the read and write paths.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode" rel="noopener noreferrer"&gt;Microservices&lt;/a&gt; mode, where every Loki component runs as a distinct process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The scalable installation requires an S3-compatible object store such as AWS S3, Google Cloud Storage, Open Telekom Cloud OBS or a self-hosted store such as &lt;a href="https://min.io/" rel="noopener noreferrer"&gt;MinIO&lt;/a&gt;. In the &lt;em&gt;monolithic&lt;/em&gt; deployment mode only the filesystem can be used for storage. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For more information concerning how to configure Loki's storage consult this &lt;a href="https://grafana.com/docs/loki/latest/installation/helm/configure-storage/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this lab, we are going to use the microservices deployment mode with &lt;a href="https://open-telekom-cloud.com/en/products-services/core-services/object-storage-service" rel="noopener noreferrer"&gt;Open Telekom Cloud OBS&lt;/a&gt; as Loki’s storage. The installation (and essentially the configuration) of Loki and Promtail is performed by two distinct and independent charts.&lt;/p&gt;

&lt;p&gt;First, let’s download the default chart values for each chart and make the necessary changes. For Loki (given that you also chose to go with the &lt;code&gt;loki-distributed&lt;/code&gt; chart):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values grafana/loki-distributed &amp;gt; loki-distributed-overrides.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are planning to go with an S3 compatible storage and not with the filesystem, make the following changes to your chart values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfnum3l7xysczhxsk1da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfnum3l7xysczhxsk1da.png" alt=" " width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgzvdvptay96wj25ld9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgzvdvptay96wj25ld9r.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The format of S3 endpoint is: &lt;br&gt;
&lt;code&gt;s3://{AK}:{SK}@{endpoint}/{region}/{bucket}&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next let's enable and configure the compactor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzclecz5izp8ys38d8jrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzclecz5izp8ys38d8jrf.png" alt=" " width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoj7eq0tbx1qtkouig30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoj7eq0tbx1qtkouig30.png" alt=" " width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Loki’s values are now set; let’s install it and move on to Promtail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install --values loki-distributed-overrides.yaml loki grafana/loki-distributed -n grafana-loki --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values grafana/promtail &amp;gt; promtail-overrides.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get all the components that we installed from the Loki chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all -n grafana-loki
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqw4owp28d8y5yz7f819.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqw4owp28d8y5yz7f819.png" alt=" " width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to need the endpoint of Loki’s gateway, which is the designated endpoint Promtail will use to push logs to Loki. In our case that would be &lt;code&gt;loki-loki-distributed-gateway.grafana-loki.svc.cluster.local&lt;/code&gt;, so let’s add it in the Promtail chart values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh04mc624p9b4ade8pyow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh04mc624p9b4ade8pyow.png" alt=" " width="800" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are now ready to deploy Promtail. Run the command and wait a bit until all pods reach a Ready state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install --values promtail-overrides.yaml promtail grafana/promtail -n grafana-loki
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure Grafana Data Sources &amp;amp; Dashboard
&lt;/h2&gt;

&lt;p&gt;All the deployments are now complete; it is time to set up Grafana. As we saw before, Grafana comes with a simple service, so let’s port-forward it and access Grafana directly at &lt;code&gt;http://localhost:8080/&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward service/grafana 8080:80 -n grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Of course you are free to expose this service in a different way, either by assigning it an external IP via a load balancer or as an ingress route via the ingress solution of your choice.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyt7jk6dmjqdgr6w9ygy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyt7jk6dmjqdgr6w9ygy.jpeg" alt=" " width="800" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are going to need credentials in order to log in. The default user is admin, but the password takes a bit of work to retrieve. Get all the &lt;code&gt;Secrets&lt;/code&gt; in the grafana namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secrets -n grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw98icwahyna7372rewg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw98icwahyna7372rewg5.png" alt=" " width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where our password lives. Let’s extract it and decode it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret grafana -n grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now in. Next, we need to add Grafana Loki as a data source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb75gywvo5pkwebbx20ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb75gywvo5pkwebbx20ui.png" alt=" " width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss8ykogixzq0m056ykk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss8ykogixzq0m056ykk1.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As URL, use the endpoint of the Grafana Loki gateway service: &lt;code&gt;http://loki-loki-distributed-gateway.grafana-loki.svc.cluster.local&lt;/code&gt;. Test, save and exit.&lt;/p&gt;

&lt;p&gt;As a last step, we need to add a dashboard in order to finally see our logs. At the very beginning you can stand on the shoulders of existing ones and then tailor them to your needs. A good stepping stone is this &lt;a href="https://grafana.com/grafana/dashboards/15141-kubernetes-service-logs/" rel="noopener noreferrer"&gt;one&lt;/a&gt;. Copy the dashboard template ID from the web page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgiuwjigp7dj9nu7xnsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgiuwjigp7dj9nu7xnsy.png" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and in your Grafana environment, choose to Import a new Dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vl7n8lfqcoo9bkl8owg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vl7n8lfqcoo9bkl8owg.png" alt=" " width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste the template ID we just acquired and load the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceb90laiobkkxlu76ydj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceb90laiobkkxlu76ydj.png" alt=" " width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now all the puzzle pieces should come together, and you should be able to see logs from your Kubernetes workloads directly in your Grafana interface in near real time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw7d3vdaq63xjwqjvo42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw7d3vdaq63xjwqjvo42.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Admittedly, when it comes to Kubernetes monitoring and observability this only scratches the surface, but it is nevertheless a robust first step that you can accomplish with minimal effort in less than 10 minutes.&lt;/p&gt;

&lt;p&gt;Stay tuned for more Kubernetes topics.&lt;/p&gt;

&lt;p&gt;Article originally posted at &lt;a href="https://akyriako.medium.com/kubernetes-logging-with-grafana-loki-promtail-in-under-10-minutes-d2847d526f9e" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>authentication</category>
      <category>security</category>
      <category>developer</category>
      <category>help</category>
    </item>
    <item>
      <title>Strato DynDNS Controller for Kubernetes</title>
      <dc:creator>Kyriakos Akriotis</dc:creator>
      <pubDate>Wed, 15 Feb 2023 08:01:39 +0000</pubDate>
      <link>https://dev.to/akyriako/strato-dyndns-controller-for-kubernetes-4p2m</link>
      <guid>https://dev.to/akyriako/strato-dyndns-controller-for-kubernetes-4p2m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;For some years, I have gradually moved my home lab’s workloads from virtual machines to Docker containers and eventually to Kubernetes, and I was looking for an efficient solution to the problem of keeping my domains’ DNS records in sync with the dynamic IP address assigned by my ISP.&lt;/p&gt;

&lt;p&gt;I used &lt;a href="https://www.directupdate.net/" rel="noopener noreferrer"&gt;DirectUpdate&lt;/a&gt; for a long time, which, although it costs ~25EUR and is really worth its money, comes with a downside: it runs only on Windows, and after a point wasting so many resources only for a simple DynDNS-updater client was an overkill. So I started looking into other solutions like Cloudflare, DigitalOcean, No-IP DynDNS, and others. But I was still not satisfied. I was pretty bored jumping across different dashboards, providers, and panels to periodically have an overview of the DNS records of my domains so I make sure my reverse proxy and my Kubernetes Ingress were not in trouble. I decided I needed my very own solution (why not?) that had to fulfil three criteria:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It shouldn’t cost me a penny&lt;/li&gt;
&lt;li&gt;It should integrate with Kubernetes, so I wouldn’t need to jump from dashboard to dashboard&lt;/li&gt;
&lt;li&gt;It should be fully autonomous, self-healing, and periodic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The obvious solution meeting all those criteria was to go for a custom Kubernetes controller with custom CRDs, and what better tool to start with than Kubebuilder? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/kubebuilder" rel="noopener noreferrer"&gt;Kubebuilder&lt;/a&gt; is a framework for building Kubernetes APIs using &lt;a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions" rel="noopener noreferrer"&gt;custom resource definitions (CRDs)&lt;/a&gt;. It does all the heavy lifting for us, building the project structure and scaffolding the basic components needed to code, build and deploy our artifacts.&lt;/p&gt;

&lt;p&gt;In a nutshell, the story is pretty simple and consists mainly of two parts: you extend the Kubernetes control plane by expressing your artifacts as custom resource definitions (CRDs), and you create a custom controller which, either periodically or in response to changes on these CRs, tries to adjust the actual observed state of those CRs so that it matches the desired one.&lt;/p&gt;

&lt;p&gt;In our case, this translates to a CRD, which will be called &lt;code&gt;Domain&lt;/code&gt; and is practically a representation of the domain (or subdomain) whose DNS records you want to periodically update on &lt;a href="https://www.strato.de/" rel="noopener noreferrer"&gt;STRATO&lt;/a&gt;, and a custom Controller that takes over the Sisyphean task of reconciling the CRs’ states and propagating the IP changes to the STRATO DynDNS endpoints.&lt;/p&gt;

&lt;p&gt;Additionally, we will need a Secret, but its role is purely supplementary, as it contributes only as a safekeeper for the credentials required to issue requests against the STRATO DynDNS endpoints.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why STRATO in the first place? Simply because this is where I have registered all my domains.&lt;/p&gt;

&lt;p&gt;Strato AG is a German internet hosting service provider with headquarters in Berlin. It is a subsidiary of United Internet AG that bought it from Deutsche Telekom AG back in 2016. Strato operates mainly in Germany, the Netherlands, Spain, France, UK and Sweden and serves more than 2 million customers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Custom Resource Definition (CRD)
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Domain&lt;/code&gt; consists mainly of two properties (what &lt;code&gt;TypeMeta&lt;/code&gt; and &lt;code&gt;ObjectMeta&lt;/code&gt; are you can look up in the Kubebuilder book), which we have briefly discussed earlier. &lt;code&gt;Spec&lt;/code&gt;, of type &lt;code&gt;DomainSpec&lt;/code&gt;, is the desired state and &lt;code&gt;Status&lt;/code&gt;, of type &lt;code&gt;DomainStatus&lt;/code&gt;, is the actual (observed) state of our &lt;code&gt;Domain&lt;/code&gt; Custom Resource (CR) at any given moment.&lt;/p&gt;

&lt;p&gt;If you look closely, the struct is decorated with a bunch of attributes prefixed with &lt;code&gt;+kubebuilder:printcolumn&lt;/code&gt;; they dictate which columns will be displayed when we inquire about an object, or a list of objects, of that &lt;code&gt;Kind&lt;/code&gt;, for example with &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get domains --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The value of each column can either derive from the desired state (&lt;code&gt;.spec.XXX&lt;/code&gt;) or from the observed state (&lt;code&gt;.status.XXX&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Domain is the Schema for the domains API
// +kubebuilder:printcolumn:name="Fqdn",type=string,JSONPath=`.spec.fqdn`
// +kubebuilder:printcolumn:name="IP Address",type=string,JSONPath=`.status.ipAddress`
// +kubebuilder:printcolumn:name="Mode",type=string,JSONPath=`.status.mode`
// +kubebuilder:printcolumn:name="Successful",type=boolean,JSONPath=`.status.lastResult`
// +kubebuilder:printcolumn:name="Last Run",type=string,JSONPath=`.status.lastLoop`
// +kubebuilder:printcolumn:name="Enabled",type=boolean,JSONPath=`.spec.enabled`
type Domain struct {
 metav1.TypeMeta   `json:",inline"`
 metav1.ObjectMeta `json:"metadata,omitempty"`

 Spec   DomainSpec   `json:"spec,omitempty"`
 Status DomainStatus `json:"status,omitempty"`
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The desired state, &lt;code&gt;DomainSpec&lt;/code&gt;, has five properties. &lt;code&gt;Fqdn&lt;/code&gt; is the fully qualified name of the domain or subdomain you want to track. &lt;code&gt;IpAddress&lt;/code&gt; is optional: if it is set, we implicitly enforce manual mode, and when it is empty, our Controller will discover the current IP address assigned to us by our ISP (dynamic mode). &lt;code&gt;Enabled&lt;/code&gt; needs no further explanation. &lt;code&gt;IntervalInMinutes&lt;/code&gt; defines the interval between two consecutive reconciliation loops, and &lt;code&gt;Password&lt;/code&gt; is a reference to the Secret resource that holds the password for our STRATO DynDNS service.&lt;/p&gt;

&lt;p&gt;Those properties can also be decorated with attributes that enforce or dictate various behavioural aspects of the object. For instance, we enforce validation via a regular expression for &lt;code&gt;Fqdn&lt;/code&gt;, to make sure it is a valid domain name, and for &lt;code&gt;IpAddress&lt;/code&gt;, to make sure it &lt;u&gt;is a valid IPv4 address&lt;/u&gt;. For &lt;code&gt;IntervalInMinutes&lt;/code&gt; we want to ensure that reconciliation cannot run more frequently than every five minutes, and in case of absence, that is the default value assigned automatically when deployed.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// DomainSpec defines the desired state of Domain
type DomainSpec struct {
 // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
 // Important: Run "make" to regenerate code after modifying this file

 // +kubebuilder:validation:Required
 // +kubebuilder:validation:Pattern:=`^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$`
 Fqdn string `json:"fqdn"`

 // +optional
 // +kubebuilder:validation:Pattern:=`^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$`
 IpAddress *string `json:"ipAddress,omitempty"`

 // +optional
 // +kubebuilder:default:=true
 // +kubebuilder:validation:Type=boolean
 Enabled bool `json:"enabled,omitempty"`

 // +optional
 // +kubebuilder:default=5
 // +kubebuilder:validation:Minimum=5
 IntervalInMinutes *int32 `json:"interval,omitempty"`

 Password *v1.SecretReference `json:"password"`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The observed state, &lt;code&gt;DomainStatus&lt;/code&gt;, is way simpler. Its values are calculated in every reconciliation loop, either from the outcome of the reconciliation (&lt;code&gt;IpAddress&lt;/code&gt;, the IP that was written to the STRATO records; &lt;code&gt;LastReconciliationLoop&lt;/code&gt;, when the last update attempt took place; and &lt;code&gt;LastReconciliationResult&lt;/code&gt;, whether that attempt was successful) or from the desired state processed in that loop (&lt;code&gt;Enabled&lt;/code&gt; or &lt;code&gt;Mode&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// DomainStatus defines the observed state of Domain
type DomainStatus struct {
 // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
 // Important: Run "make" to regenerate code after modifying this file
 Enabled                  bool         `json:"enabled,omitempty"`
 IpAddress                string       `json:"ipAddress,omitempty"`
 Mode                     string       `json:"mode,omitempty"`
 LastReconciliationLoop   *metav1.Time `json:"lastLoop,omitempty"`
 LastReconciliationResult *bool        `json:"lastResult,omitempty"`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;When we are done coding those structs (you can find them under &lt;code&gt;/api/v1alpha1/domain_types.go&lt;/code&gt;), we can update the rest of our project with Kubebuilder and install them as CRDs in our development cluster.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make manifests
make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Running &lt;code&gt;make manifests&lt;/code&gt; will create, among other things, some sample YAML files under &lt;code&gt;/config/samples&lt;/code&gt; based on the structs we coded earlier.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: dyndns.contrib.strato.com/v1alpha1
kind: Domain
metadata:
  name: www-example-de
spec:
  fqdn: "www.example.de"
  enabled: true
  interval: 5
  password:
    name: strato-dyndns-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Change the values so they point to a domain or subdomain of yours.&lt;/p&gt;
&lt;h2&gt;
  
  
  Secret
&lt;/h2&gt;

&lt;p&gt;Manifests will not create a scaffold for the &lt;code&gt;Secret&lt;/code&gt;, as it is not a CRD but a core Kubernetes resource; we have to create it ourselves. STRATO DynDNS endpoints require a username and a password. The username is always the domain or subdomain itself, and the password is either the one you created when you activated DynDNS for this (sub)domain or the DynDNS master password of your STRATO customer account. Whichever you choose, before proceeding with the Secret’s YAML we need to &lt;strong&gt;encode&lt;/strong&gt; this password in &lt;strong&gt;base64&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n "password" | base64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Create a new YAML file under &lt;code&gt;/config/samples&lt;/code&gt;; as its &lt;code&gt;metadata.name&lt;/code&gt; declare the &lt;code&gt;password.name&lt;/code&gt; you used in the Domain YAML, and as &lt;code&gt;data.password&lt;/code&gt; the encoded value you just generated.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: strato-dyndns-password
type: Opaque
data:
  password: cGFzc3dvcmQ=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Remember, the value is &lt;u&gt;encoded&lt;/u&gt; and NOT &lt;strong&gt;encrypted&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
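That warning is easy to demonstrate. A minimal Go sketch (not part of the controller) shows that anyone who can read the Secret can recover the plaintext instantly:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSecret reverses the base64 encoding used for Secret values.
// base64 is a reversible encoding, not encryption.
func decodeSecret(encoded string) (string, error) {
	decoded, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	return string(decoded), nil
}

func main() {
	// The value stored in data.password above decodes back instantly.
	plaintext, err := decodeSecret("cGFzc3dvcmQ=")
	if err != nil {
		panic(err)
	}
	fmt.Println(plaintext) // prints "password"
}
```

If the credential needs real protection at rest, consider enabling encryption of Secrets in etcd or using an external secret store.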

&lt;p&gt;Deploy both YAMLs to your cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f config/samples
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If everything worked out, you will see a &lt;code&gt;www-example-de&lt;/code&gt; resource when you list the domains, and a &lt;code&gt;strato-dyndns-password&lt;/code&gt; when you list the secrets in your cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get domains --all-namespaces
kubectl get secrets --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Custom Controller
&lt;/h2&gt;

&lt;p&gt;As mentioned before, it’s beyond the scope of this article to explain how &lt;strong&gt;a&lt;/strong&gt; Custom Controller works, so I will stick to how &lt;strong&gt;this&lt;/strong&gt; Controller works. Do your prep if that is a new topic for you.&lt;/p&gt;

&lt;p&gt;First, we want to ensure that our Controller has adequate permissions to watch and update various resources. We want, of course, full control over &lt;code&gt;Domains&lt;/code&gt;, but we additionally need to get and watch &lt;code&gt;Secrets&lt;/code&gt; and to create or patch Kubernetes Events. We manage this with the &lt;code&gt;+kubebuilder:rbac&lt;/code&gt; markers.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains/finalizers,verbs=update
//+kubebuilder:rbac:groups="",resources=events,verbs=create;patch
//+kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;When you issue &lt;code&gt;make manifests&lt;/code&gt;, a set of YAML files will be created under &lt;code&gt;/config/rbac&lt;/code&gt; based on those markers.&lt;/p&gt;

&lt;p&gt;The flow of our reconciliation loop is simple. Get the &lt;code&gt;Domain&lt;/code&gt;. If it is not found, terminate the loop permanently and &lt;strong&gt;don’t requeue&lt;/strong&gt;; any other fetch error is returned, so the request gets requeued.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var domain dyndnsv1alpha1.Domain
 if err := r.Get(ctx, req.NamespacedName, &amp;amp;domain); err != nil {
  if apierrors.IsNotFound(err) {
   logger.Error(err, "finding Domain failed")
   return ctrl.Result{}, nil
  }

  logger.Error(err, "fetching Domain failed")
  return ctrl.Result{}, err
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, check the desired state (&lt;code&gt;.Spec.Enabled&lt;/code&gt;). If the domain is disabled, update the status of the CR in Kubernetes (&lt;code&gt;.Status.Enabled&lt;/code&gt;) accordingly and exit the reconciliation loop permanently.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// update status and break reconciliation loop if is not enabled
 if !domain.Spec.Enabled {
  domainCopy.Status.Enabled = domain.Spec.Enabled
  // update the status of the CR
  if err := r.Status().Update(ctx, &amp;amp;domainCopy); err != nil {
   logger.Error(err, "updating status failed")

   requeueAfterUpdateStatusFailure := time.Now().Add(time.Second * time.Duration(15))
   return ctrl.Result{RequeueAfter: time.Until(requeueAfterUpdateStatusFailure)}, err
  }

  return ctrl.Result{}, nil
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Ensure an acceptable interval is in place and decide whether the desired state dictates proceeding in &lt;code&gt;Manual&lt;/code&gt; or &lt;code&gt;Dynamic&lt;/code&gt; mode.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// define interval between reconciliation loops
 interval := defaultIntervalInMinutes
 if domain.Spec.IntervalInMinutes != nil {
  interval = *domain.Spec.IntervalInMinutes
 }

 // change mode to manual in presence of an explicit ip address in specs
 if domain.Spec.IpAddress != nil {
  mode = Manual
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If the reconciliation loop kicked in earlier than the interval dictates (perhaps because of an external change in the YAML or an internal Kubernetes event), skip this turn and requeue for the next scheduled execution.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Otherwise, we might flood STRATO with frequent requests, and we don’t want that: we would either hit the rate limiter of Kubernetes or of STRATO itself, and the last thing you want is to be benched for a period of time for abusing their API.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// is reconciliation loop started too soon because of an external event?
 if domain.Status.LastReconciliationLoop != nil &amp;amp;&amp;amp; mode == Dynamic {
  if time.Since(domain.Status.LastReconciliationLoop.Time) &amp;lt; (time.Minute*time.Duration(interval)) &amp;amp;&amp;amp; wasSuccess {
   sinceLastRunDuration := time.Since(domain.Status.LastReconciliationLoop.Time)
   intervalDuration := time.Minute * time.Duration(interval)
   requeueAfter := intervalDuration - sinceLastRunDuration

   logger.Info("skipped turn", "sinceLastRun", sinceLastRunDuration, "requeueAfter", requeueAfter)
   return ctrl.Result{RequeueAfter: requeueAfter}, nil
  }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If mode is &lt;code&gt;Manual&lt;/code&gt;, our IP address is the one defined in the desired state (&lt;code&gt;.Spec.IpAddress&lt;/code&gt;). Otherwise, we discover our external IP address, the one our ISP assigned to our router.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;currentIpAddress := domain.Status.IpAddress
 var newIpAddress *string

 switch mode {
 case Dynamic:
  externalIpAddress, err := r.getExternalIpAddress()
  if err != nil {
   logger.Error(err, "retrieving external ip failed")
   r.Recorder.Eventf(instance, v1core.EventTypeWarning, "RetrieveExternalIpFailed", err.Error())

   success = false
  } else {
   newIpAddress = externalIpAddress
  }
 case Manual:
  newIpAddress = domain.Spec.IpAddress
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
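The body of getExternalIpAddress is not shown here; conceptually it queries a public "what is my IP" endpoint and validates the answer before using it. The validation half could look like the following sketch (the helper name is illustrative, not the controller's actual code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseExternalIP validates the raw body returned by a "what is my IP"
// service and hands it back as a *string, mirroring the controller's use
// of *string for optional addresses. Illustrative helper only.
func parseExternalIP(body string) (*string, error) {
	candidate := strings.TrimSpace(body)
	ip := net.ParseIP(candidate)
	if ip == nil || ip.To4() == nil {
		return nil, fmt.Errorf("%q is not a valid IPv4 address", candidate)
	}
	result := ip.String()
	return &result, nil
}

func main() {
	ip, err := parseExternalIP("203.0.113.42\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(*ip) // 203.0.113.42
}
```

Validating the response before writing it anywhere protects both the status and the DNS record from a malformed answer or an HTML error page returned by the lookup service.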


&lt;p&gt;If the new desired IP address matches the observed state, do nothing; remember to play nice and not abuse their endpoints for no reason. Otherwise, get the Secret, retrieve the password, and propagate the desired changes to the STRATO DNS servers.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// proceed to update Strato DynDNS only if a valid IP address was found
 if newIpAddress != nil {
  // if last reconciliation loop was successful and there is no ip change skip the loop
  if *newIpAddress == currentIpAddress &amp;amp;&amp;amp; wasSuccess {
   logger.Info("updating dyndns skipped, ip is up-to-date", "ipAddress", currentIpAddress, "mode", mode.String())
   r.Recorder.Event(instance, v1core.EventTypeNormal, "DynDnsUpdateSkipped", "updating skipped, ip is up-to-date")
  } else {
   logger.Info("updating dyndns", "ipAddress", newIpAddress, "mode", mode.String())

   passwordRef := domain.Spec.Password
   objectKey := client.ObjectKey{
    Namespace: req.Namespace,
    Name:      passwordRef.Name,
   }

   var secret v1core.Secret
   if err := r.Get(ctx, objectKey, &amp;amp;secret); err != nil {
    if apierrors.IsNotFound(err) {
     logger.Error(err, "finding Secret failed")
     return ctrl.Result{}, nil
    }

    logger.Error(err, "fetching Secret failed")
    return ctrl.Result{}, err
   }

   password := string(secret.Data["password"])
   if err := r.updateDns(domain.Spec.Fqdn, domain.Spec.Fqdn, password, *newIpAddress); err != nil {
    logger.Error(err, "updating dyndns failed")
    r.Recorder.Eventf(instance, v1core.EventTypeWarning, "DynDnsUpdateFailed", err.Error())

    success = false
   } else {
    logger.Info("updating dyndns completed")
    r.Recorder.Eventf(instance, v1core.EventTypeNormal, "DynDnsUpdateCompleted", "updating dyndns completed")

    success = true
   }
  }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Updating STRATO DynDNS is as easy as pie: you only need to issue a single &lt;strong&gt;GET&lt;/strong&gt; request, which looks like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://%s:%s@dyndns.strato.com/nic/update?hostname=%s&amp;amp;myip=%s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The first two parameters are the &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; respectively, &lt;code&gt;hostname&lt;/code&gt; is your (sub)domain name, and &lt;code&gt;myip&lt;/code&gt; is the new IP address you want written to the DNS records.&lt;/p&gt;
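Assembling that URL safely is the kind of thing updateDns would do internally. A hedged sketch in Go follows; the function name and structure are illustrative, and the controller's actual implementation may differ:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildUpdateURL assembles the STRATO DynDNS update endpoint described
// above. For STRATO, the username is the (sub)domain itself. Sketch only.
func buildUpdateURL(username, password, hostname, myip string) string {
	u := url.URL{
		Scheme: "https",
		User:   url.UserPassword(username, password),
		Host:   "dyndns.strato.com",
		Path:   "/nic/update",
	}
	q := url.Values{}
	q.Set("hostname", hostname)
	q.Set("myip", myip)
	u.RawQuery = q.Encode() // query keys are emitted in sorted order
	return u.String()
}

func main() {
	fmt.Println(buildUpdateURL("www.example.de", "secret", "www.example.de", "203.0.113.42"))
}
```

Issuing the request is then a plain `http.Get` on the returned URL; `net/url` takes care of escaping the credentials and query values.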

&lt;p&gt;Lastly, we update the status of our CR and schedule the next reconciliation loop:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// update the status of the CR no matter what, but assign a new IP address in the status
 // only when Strato DynDNS update was successful
 if success {
  domainCopy.Status.IpAddress = *newIpAddress
 }

 domainCopy.Status.LastReconciliationLoop = &amp;amp;v1meta.Time{Time: time.Now()}
 domainCopy.Status.LastReconciliationResult = &amp;amp;success
 domainCopy.Status.Enabled = domain.Spec.Enabled
 domainCopy.Status.Mode = mode.String()

 // update the status of the CR
 if err := r.Status().Update(ctx, &amp;amp;domainCopy); err != nil {
  logger.Error(err, "updating status failed")

  requeueAfterUpdateStatusFailure := time.Now().Add(time.Second * time.Duration(15))
  return ctrl.Result{RequeueAfter: time.Until(requeueAfterUpdateStatusFailure)}, err
 }

 // if Mode is Manual, and we updated DynDNS with success, then we don't requeue, and we will rely only on
 // events that will be triggered externally from YAML updates of the CR
 if mode == Manual &amp;amp;&amp;amp; success {
  return ctrl.Result{}, nil
 }

 requeueAfter := time.Now().Add(time.Minute * time.Duration(interval))

 logger.Info("requeue", "nextRun", requeueAfter.Local().Format(time.RFC822))
 logger.V(10).Info("finished dyndns update")

 return ctrl.Result{RequeueAfter: time.Until(requeueAfter)}, nil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now we are ready to try our controller, running it externally without deploying it to the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pc8i4ym8dim7vowxs64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pc8i4ym8dim7vowxs64.png" alt="Updating STRATO DynDNS for our domain was successful!" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bashaa288asfl8sq0cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bashaa288asfl8sq0cr.png" alt="kubectl describe… of our Domain via K9S, after reconciliation and update" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfp6o2ar3w995yzi5467.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfp6o2ar3w995yzi5467.png" alt="kubectl get domains — all-namespaces command as it looks in K9S" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You can find the whole source code in GitHub along with instructions on how to build this as a container and deploy it to your cluster:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/akyriako" rel="noopener noreferrer"&gt;
        akyriako
      &lt;/a&gt; / &lt;a href="https://github.com/akyriako/strato-dyndns" rel="noopener noreferrer"&gt;
        strato-dyndns
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Strato DynDNS Controller updates your domains' DNS records on STRATO AG. A custom Controller is observing Domain CRs and syncing their desired state with STRATO DNS servers. THIS SOFTWARE IS IN NO WAY ASSOCIATED OR AFFILIATED WITH STRATO AG
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Strato DynDNS Controller for Kubernetes&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Strato DynDNS Controller updates your domains' DNS records on STRATO AG using Kubernetes Custom Resources and Controller&lt;/p&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/akyriako/strato-dyndnsassets/SCR-20221124-gda.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fakyriako%2Fstrato-dyndnsassets%2FSCR-20221124-gda.png" alt="k9s domains list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Disclaimer&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;THIS SOFTWARE IS IN NO WAY ASSOCIATED OR AFFILIATED WITH &lt;a href="https://www.strato.de" rel="nofollow noopener noreferrer"&gt;STRATO AG&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Description&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;A custom Controller is observing Domain CRs and syncing their desired state with STRATO DNS servers. You can either define explicitly an IPv4 address (Manual mode) or let the Controller discover the public IPv4 address assigned to you by your ISP (Dynamic mode)&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Getting Started&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;You’ll need a Kubernetes cluster to run against. You can use &lt;a href="https://sigs.k8s.io/kind" rel="nofollow noopener noreferrer"&gt;KIND&lt;/a&gt; or &lt;a href="https://k3d.io/v5.4.6/" rel="nofollow noopener noreferrer"&gt;K3D&lt;/a&gt; to get a local cluster for testing, or run against a remote cluster.
&lt;strong&gt;Note:&lt;/strong&gt; Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster &lt;code&gt;kubectl cluster-info&lt;/code&gt; shows).&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Running on the cluster&lt;/h3&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Build and push your image to the location specified by &lt;code&gt;IMG&lt;/code&gt; in &lt;code&gt;Makefile&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Image URL to use all&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/akyriako/strato-dyndns" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Try out this controller, and feel free to fork the repo and extend it as you see fit, or drop your feedback in the comments below or on GitHub. Till the next time…&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
      <category>mentorship</category>
      <category>community</category>
    </item>
    <item>
      <title>Merge multiple kubeconfig files</title>
      <dc:creator>Kyriakos Akriotis</dc:creator>
      <pubDate>Wed, 15 Feb 2023 07:24:04 +0000</pubDate>
      <link>https://dev.to/akyriako/merge-multiple-kubeconfig-files-20gb</link>
      <guid>https://dev.to/akyriako/merge-multiple-kubeconfig-files-20gb</guid>
      <description>&lt;p&gt;We use &lt;code&gt;kubeconfig&lt;/code&gt; files to organize information about clusters, users, namespaces, and authentication mechanisms. &lt;code&gt;kubectl&lt;/code&gt; command-line tool itself, uses &lt;code&gt;kubeconfig&lt;/code&gt; files to source the information it needs in order to connect and communicate with the API server of a cluster.&lt;/p&gt;

&lt;p&gt;By default, &lt;code&gt;kubectl&lt;/code&gt; looks for a file named &lt;code&gt;config&lt;/code&gt; that lives under the &lt;code&gt;$HOME/.kube&lt;/code&gt; directory. You can add multiple cluster entries to that file, or specify additional kubeconfig files by setting the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable or the &lt;code&gt;--kubeconfig&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;When you have multiple &lt;code&gt;kubeconfig&lt;/code&gt; files, you may want to merge them into one instead of juggling separate files and switching among them with the &lt;code&gt;--kubeconfig&lt;/code&gt; flag. I personally can never remember how to merge those files from the command line, and happily there is no need to: there is a simpler way, so please don’t waste brain cells on complicated bash commands.&lt;/p&gt;

&lt;p&gt;As mentioned before, the &lt;strong&gt;easy way&lt;/strong&gt; is to set the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable. You can specify multiple config files there, separated by a colon (&lt;strong&gt;:&lt;/strong&gt;), and &lt;code&gt;kubectl&lt;/code&gt; will merge them automatically for you.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=~/.kube/config:~/.rancher/local:~/.kube/kubeconfig.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Special treat: You can mix and match YAML and JSON files!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you now want to see the merged configuration that &lt;code&gt;kubectl&lt;/code&gt; is working with, just issue the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config view
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and if you want to export this configuration for future use as a single file you can do it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config view --flatten &amp;gt; my-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then you can replace your &lt;code&gt;~/.kube/config&lt;/code&gt; file with the file above for permanent effect.&lt;/p&gt;

&lt;p&gt;Don’t over-complicate stuff, don’t try to memorize stuff.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Photo by Growtika Developer Marketing Agency on Unsplash&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
