<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Diomede</title>
    <description>The latest articles on DEV Community by Diomede (@oen).</description>
    <link>https://dev.to/oen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F364093%2F637db8a0-ced8-4a4f-a86b-9ac4dbcfa822.jpeg</url>
      <title>DEV Community: Diomede</title>
      <link>https://dev.to/oen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oen"/>
    <language>en</language>
    <item>
      <title>Forecastle: my services Dashboard of choice</title>
      <dc:creator>Diomede</dc:creator>
      <pubDate>Thu, 13 May 2021 07:22:40 +0000</pubDate>
      <link>https://dev.to/oen/forecastle-my-services-dashboard-of-choice-1h8m</link>
      <guid>https://dev.to/oen/forecastle-my-services-dashboard-of-choice-1h8m</guid>
      <description>&lt;p&gt;Some weeks ago, I started the migration of my homelab from many docker-compose files to a more organized and reliable Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;In both these scenarios, I needed a way to collect all the links to the various services I have in one place. In the docker-compose implementation, I've used &lt;a href="https://github.com/linuxserver/Heimdall"&gt;Heimdall&lt;/a&gt;, a great tool that gets the job done, but it requires some manual work. With the Kubernetes version of my homelab, I'm trying to automate as many tasks as possible.&lt;/p&gt;

&lt;p&gt;I don't remember what search phrase I used to find this (I couldn't replicate the results), but I came across &lt;a href="https://github.com/stakater/Forecastle"&gt;Forecastle&lt;/a&gt; and decided to give it a try.&lt;/p&gt;

&lt;p&gt;The installation is pretty simple: follow the steps described in the repository, and you're good to go. I went for the "Vanilla Manifests" path and ended up copying the manifest into my homelab repo to inspect it better.&lt;/p&gt;
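&lt;p&gt;For reference, the "Vanilla Manifests" install boils down to applying the manifest straight from the repository (the exact path may change between releases, so double-check the README first):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/stakater/Forecastle/master/deployments/kubernetes/forecastle.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;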

&lt;p&gt;Looking into the manifest, I see that it creates the classic resources (Deployment, Service, ConfigMap, etc.) and also a CRD called &lt;strong&gt;ForecastleApp&lt;/strong&gt;; later on, we'll see how it can be used.&lt;/p&gt;

&lt;p&gt;As mentioned in my previous post, where I described &lt;a href="https://dev.to/oen/kubernetes-external-dns-pi-hole-and-a-custom-domain-3nj9"&gt;my journey with &lt;strong&gt;external-dns&lt;/strong&gt; and &lt;strong&gt;Pi-hole&lt;/strong&gt;&lt;/a&gt;, I use &lt;a href="https://kustomize.io/"&gt;Kustomize&lt;/a&gt; to manage all my manifests. After downloading the &lt;strong&gt;Forecastle&lt;/strong&gt; manifest, I've added it to a &lt;code&gt;kustomization.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kustomization&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize.config.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Forecastle&lt;/span&gt;
&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;forecastle.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ingress.yml&lt;/span&gt;
&lt;span class="na"&gt;patchesStrategicMerge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;configmap-patch.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other than the &lt;code&gt;forecastle.yaml&lt;/code&gt; file (the one copied from the repo), you'll notice the &lt;code&gt;ingress.yml&lt;/code&gt; and the &lt;code&gt;configmap-patch.yml&lt;/code&gt;.&lt;/p&gt;
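&lt;p&gt;With everything listed in the &lt;code&gt;kustomization.yml&lt;/code&gt;, building and applying the whole stack is a single command (assuming a reasonably recent &lt;code&gt;kubectl&lt;/code&gt;, which bundles Kustomize):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -k .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;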

&lt;p&gt;The Ingress is nothing special, just an internal domain that points to the &lt;code&gt;forecastle&lt;/code&gt; service created by the base manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;forecastle&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hub.diomedet.internal&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;forecastle&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: if you want to know how to create an internal domain, check the post I've linked above ;)&lt;/p&gt;

&lt;p&gt;And then, we have the &lt;code&gt;configmap-patch.yml&lt;/code&gt; containing a patch applied to the &lt;strong&gt;Forecastle&lt;/strong&gt; ConfigMap. Using this patch method allows me not to worry about the base file, which I can re-download whenever I want. Kustomize will keep applying the patch as long as the ConfigMap name doesn't change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;forecastle&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;config.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;crdEnabled: true&lt;/span&gt;
    &lt;span class="s"&gt;namespaceSelector:&lt;/span&gt;
      &lt;span class="s"&gt;any: true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This patch is essential. It tells &lt;strong&gt;Forecastle&lt;/strong&gt; to watch all the namespaces and to enable the CRDs.&lt;/p&gt;

&lt;p&gt;For me, the &lt;code&gt;crdEnabled: true&lt;/code&gt; option is especially important because I have some services outside the cluster that I want to have in my dashboard; this allows me to create a &lt;strong&gt;ForecastleApp&lt;/strong&gt; like this one:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;unraid.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;forecastle.stakater.com/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ForecastleApp&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unraid&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Unraid&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Infrastructure&lt;/span&gt;
  &lt;span class="na"&gt;icon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;icon_url&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://&amp;lt;unraid_url&amp;gt;:3080&lt;/span&gt;
  &lt;span class="na"&gt;networkRestricted&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For all the services inside the cluster that need to be in the main dashboard, I use annotations on the Ingress, like:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;home-assistant/ingress.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="s"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;letsencrypt-prod"&lt;/span&gt;
    &lt;span class="s"&gt;forecastle.stakater.com/expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="s"&gt;forecastle.stakater.com/appName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Home Assistant&lt;/span&gt;
    &lt;span class="s"&gt;forecastle.stakater.com/group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Home Automation&lt;/span&gt;
    &lt;span class="s"&gt;forecastle.stakater.com/icon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://&amp;lt;ha_url&amp;gt;/static/icons/favicon-apple-180x180.png&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;ha_url&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ha&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;ha_url&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Technically, Home Assistant is not inside the cluster yet, but this Ingress fits the example I need. If you want to know how I've exposed Home Assistant through the cluster, check my other post on &lt;a href="https://dev.to/oen/expose-an-external-resource-with-a-kubernetes-ingress-om"&gt;how to expose an external resource with Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In conclusion, I now have two ways of automatically creating a new entry in my services dashboard; no more manual inserts! &lt;/p&gt;

&lt;p&gt;Heimdall does more than collect links; for some apps, like Pi-hole, it can show you some stats (like the number of blocked queries) right in the dashboard, without even opening the link. Even though many of my apps were supported by Heimdall, I never used that feature much, so I'm not going to miss it. &lt;/p&gt;

&lt;p&gt;There is one thing I'm going to miss about Heimdall: it comes with many icons you only have to pick from.&lt;/p&gt;

&lt;p&gt;Anyway, I'm not saying that &lt;strong&gt;Forecastle&lt;/strong&gt; is better than &lt;strong&gt;Heimdall&lt;/strong&gt;; it only suits my use case better.&lt;/p&gt;

&lt;p&gt;See you next time, bye!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>homelab</category>
    </item>
    <item>
      <title>Kubernetes, external-dns, Pi-hole and a custom domain</title>
      <dc:creator>Diomede</dc:creator>
      <pubDate>Sat, 08 May 2021 16:29:32 +0000</pubDate>
      <link>https://dev.to/oen/kubernetes-external-dns-pi-hole-and-a-custom-domain-3nj9</link>
      <guid>https://dev.to/oen/kubernetes-external-dns-pi-hole-and-a-custom-domain-3nj9</guid>
      <description>&lt;p&gt;During these days, I'm tidying up my homelab and found the necessity of having an internal domain to expose some apps inside my local network but not to the internet.&lt;/p&gt;

&lt;p&gt;For example, I use &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt; to store secrets, and I want an easy way to access the web UI rather than using the IP address. The solution in Kubernetes is to create an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;. Right now, I only have Ingresses with my main domain &lt;code&gt;diomedet.com&lt;/code&gt;, but if I use it, the service will be exposed to the whole internet, and I don't want that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;external-dns&lt;/a&gt; is my tool of choice to handle the synchronization between my Ingresses and the DNS provider; on my local network, I use &lt;a href="https://pi-hole.net/"&gt;Pi-hole&lt;/a&gt; to filter all my DNS request and to block some of them.&lt;/p&gt;

&lt;p&gt;Pi-hole already has a "Local DNS Records" section where you can list an arbitrary domain and point it to a specific IP inside or outside your network.&lt;br&gt;
So, if there were a way to make &lt;strong&gt;external-dns&lt;/strong&gt; update that list, what I'm trying to achieve would be possible with a bit of effort. Unfortunately, at the time of writing there is no way to update the list of local DNS records on Pi-hole programmatically, so we have to find another way.&lt;/p&gt;

&lt;p&gt;Messing around with the Pi-hole interface, I noticed that under "Settings -&amp;gt; DNS" you can choose which DNS servers receive all the incoming requests that the blacklist has not blocked. Besides the classic list of "Upstream DNS Servers", there is also a list of custom upstream DNS servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UejvbvzE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.diomedet.com/imgs/posts/kubernetes-external-dns-pihole/pihole-dns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UejvbvzE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.diomedet.com/imgs/posts/kubernetes-external-dns-pihole/pihole-dns.png" alt="Pi-hole DNS Settings"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;So, the idea is to create a custom DNS server that can be updated by &lt;strong&gt;external-dns&lt;/strong&gt; and used by Pi-hole as an &lt;strong&gt;upstream DNS server&lt;/strong&gt;. In this way, every Ingress with my internal domain will be resolved to the IP of my Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Great, we've got a plan. Now it's time to make it real!&lt;/p&gt;
&lt;h2&gt;
  
  
  First things first, we need a DNS server
&lt;/h2&gt;

&lt;p&gt;Scouting among the providers supported by &lt;strong&gt;external-dns&lt;/strong&gt;, there are a bunch of choices that can be self-hosted, something like &lt;a href="https://www.powerdns.com/"&gt;PowerDNS&lt;/a&gt; or &lt;a href="https://coredns.io/"&gt;CoreDNS&lt;/a&gt;. At this point I was like: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"mmh interesting, CoreDNS is the one used by Kubernetes internally must be a good choice, let's go with it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A colleague suggested using &lt;strong&gt;PowerDNS&lt;/strong&gt;, but I was already set on my path, so I stuck with &lt;strong&gt;CoreDNS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To be clear, it's not a wrong choice, but it might be a little overkill for this specific purpose. Let's see what difficulties this path had in store for us.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;external-dns&lt;/strong&gt; repo, there is a folder &lt;code&gt;docs/tutorials&lt;/code&gt; with a markdown file for each supported provider (I think; I didn't count). We're looking for the CoreDNS one: &lt;a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md"&gt;https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a tutorial for minikube but, ignoring that part, we can use it for any Kubernetes cluster. Bonus point: it even shows us how to install &lt;strong&gt;CoreDNS&lt;/strong&gt;. Great, two birds with one stone.&lt;/p&gt;

&lt;p&gt;If you've opened the file, you can see from the very beginning that the birds are not two anymore but three. The more, the merrier, right?!&lt;/p&gt;

&lt;p&gt;If you haven't opened the link, let me recap what we need to install:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CoreDNS (obviously)&lt;/li&gt;
&lt;li&gt;etcd&lt;/li&gt;
&lt;li&gt;another instance of external-dns (you need an instance of external-dns for each DNS provider you're going to support)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Wait, why do we need etcd?
&lt;/h2&gt;

&lt;p&gt;We need etcd because it is how &lt;strong&gt;external-dns&lt;/strong&gt; talks to &lt;strong&gt;CoreDNS&lt;/strong&gt;. We have to create a section in the CoreDNS configuration that tells it to read values from a specific path on the &lt;strong&gt;etcd&lt;/strong&gt; instance we're going to configure, and external-dns will update that same path with the information about the Ingresses we create with our internal domain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before switching to &lt;strong&gt;etcd&lt;/strong&gt; directly, &lt;strong&gt;CoreDNS&lt;/strong&gt; used &lt;a href="https://github.com/skynetservices/skydns"&gt;SkyDNS&lt;/a&gt; (a service built on top of etcd) to serve these kinds of requests, so in the manifest files you'll find some residue of that implementation.&lt;/p&gt;
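&lt;p&gt;To give you an idea (this is a sketch for my setup, not the exact file from the tutorial), the relevant block in the CoreDNS &lt;code&gt;Corefile&lt;/code&gt; uses the &lt;code&gt;etcd&lt;/code&gt; plugin, and the legacy SkyDNS path shows up as the key prefix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;diomedet.internal:53 {
    etcd diomedet.internal {
        path /skydns
        endpoint http://etcd:2379
    }
    cache 30
    log
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;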
&lt;h2&gt;
  
  
  Install etcd
&lt;/h2&gt;

&lt;p&gt;Let's get down to business and install &lt;strong&gt;etcd&lt;/strong&gt;. After all, it is a core component of Kubernetes; there's nothing wrong with learning more about it.&lt;br&gt;
Just so you know, don't use the cluster's internal &lt;strong&gt;etcd&lt;/strong&gt; for a user application (like the one we want to install here); it is not meant for that.&lt;/p&gt;

&lt;p&gt;The tutorial linked above suggests we use the &lt;a href="https://github.com/coreos/etcd-operator"&gt;etcd-operator&lt;/a&gt; and use &lt;a href="https://raw.githubusercontent.com/coreos/etcd-operator/HEAD/example/example-etcd-cluster.yaml"&gt;https://raw.githubusercontent.com/coreos/etcd-operator/HEAD/example/example-etcd-cluster.yaml&lt;/a&gt; to create our etcd cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Great, an operator; nothing simpler than that...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Slow down: the &lt;code&gt;etcd-operator&lt;/code&gt; repo was archived more than a year ago. Even if it could work for a case like this, we don't want to install an operator that is no longer maintained, so let's see how to deploy etcd manually.&lt;/p&gt;

&lt;p&gt;After searching around, I ended up on this documentation page, &lt;a href="https://etcd.io/docs/v3.4/op-guide/container/#docker"&gt;https://etcd.io/docs/v3.4/op-guide/container/#docker&lt;/a&gt;, which shows how to deploy etcd with a single-node configuration; perfect, exactly what we need here.&lt;/p&gt;

&lt;p&gt;Basically, we need to port the command shown in the link into a manifest for Kubernetes:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;docker run from etcd documentation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 2379:2379 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 2380:2380 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DATA_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/etcd-data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; etcd &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REGISTRY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:latest &lt;span class="se"&gt;\&lt;/span&gt;
  /usr/local/bin/etcd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data-dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etcd-data &lt;span class="nt"&gt;--name&lt;/span&gt; node1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--initial-advertise-peer-urls&lt;/span&gt; http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:2380 &lt;span class="nt"&gt;--listen-peer-urls&lt;/span&gt; http://0.0.0.0:2380 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--advertise-client-urls&lt;/span&gt; http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:2379 &lt;span class="nt"&gt;--listen-client-urls&lt;/span&gt; http://0.0.0.0:2379 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--initial-cluster&lt;/span&gt; &lt;span class="nv"&gt;node1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:2380
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Manifest file &lt;em&gt;etc-sts.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StatefulSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;updateStrategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnDelete&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="s"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/etcd-development/etcd:latest&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin/etcd&lt;/span&gt;
          &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_NAME&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node1&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_DATA_DIR&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etcd-data&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_LISTEN_PEER_URLS&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://0.0.0.0:2380&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_LISTEN_CLIENT_URLS&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://0.0.0.0:2379&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_INITIAL_ADVERTISE_PEER_URLS&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://0.0.0.0:2380&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_ADVERTISE_CLIENT_URLS&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://0.0.0.0:2379&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_INITIAL_CLUSTER&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node1=http://0.0.0.0:2380"&lt;/span&gt;
          &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
              &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etcd-data&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2379&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2380&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peer&lt;/span&gt;
  &lt;span class="na"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
      &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt; &lt;span class="c1"&gt;# we don't need much space to store DNS information&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're going to use a &lt;code&gt;StatefulSet&lt;/code&gt; because &lt;code&gt;etcd&lt;/code&gt; is a stateful app and needs a volume to persist its data. Unlike the classic &lt;code&gt;Deployment&lt;/code&gt;, with a &lt;code&gt;StatefulSet&lt;/code&gt; we're certain that the generated pod will always receive the same name and that the volume attached to it will always be the same. &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/"&gt;More on StatefulSets&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only noticeable difference between the &lt;code&gt;docker run ...&lt;/code&gt; command and this manifest file is that we're using environment variables instead of configuration flags. I had some trouble getting the flags to work, and I prefer the environment variables anyway; here is the list of &lt;a href="https://etcd.io/docs/v3.4/op-guide/configuration/"&gt;etcd configuration flags&lt;/a&gt; with their matching variables.&lt;/p&gt;
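&lt;p&gt;The mapping is mechanical: every flag becomes an uppercase variable prefixed with &lt;code&gt;ETCD_&lt;/code&gt;, with dashes turned into underscores. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# flag                      # matching environment variable
--name                      ETCD_NAME
--advertise-client-urls     ETCD_ADVERTISE_CLIENT_URLS
--listen-peer-urls          ETCD_LISTEN_PEER_URLS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;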

&lt;p&gt;Now, to expose &lt;code&gt;etcd&lt;/code&gt; to the other applications in the cluster, we need to create a &lt;code&gt;Service&lt;/code&gt; too:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;etcd-service.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2379&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2379&lt;/span&gt;   
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2380&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2380&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing special here, but this completes the manifest needed for our etcd instance.&lt;/p&gt;
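&lt;p&gt;Before moving on, you can sanity-check the new instance with &lt;code&gt;etcdctl&lt;/code&gt;, which ships inside the etcd image (here I assume the StatefulSet generated a pod named &lt;code&gt;etcd-0&lt;/code&gt;; adjust to your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# write a test key and read it back
kubectl exec etcd-0 -- etcdctl put /test/hello world
kubectl exec etcd-0 -- etcdctl get /test/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;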

&lt;h2&gt;
  
  
  Install CoreDNS
&lt;/h2&gt;

&lt;p&gt;Now that we have our &lt;strong&gt;etcd&lt;/strong&gt;, we can continue with the tutorial and install our custom version of CoreDNS. You can use &lt;code&gt;helm&lt;/code&gt; to install it, or, for a more instructive approach, you can use &lt;code&gt;helm template&lt;/code&gt; to render the files and apply them manually or with kustomize. This way, you can inspect them individually to see what the chart will create in your cluster.&lt;/p&gt;

&lt;p&gt;Since my homelab is a way to learn more about Kubernetes, I chose to render the files with &lt;code&gt;helm template&lt;/code&gt; and use &lt;a href="https://kustomize.io/"&gt;kustomize&lt;/a&gt; to apply them later.&lt;/p&gt;

&lt;p&gt;Whichever way you choose, the important part is to correctly set a couple of options inside the &lt;code&gt;values.yml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# if you don't have RBAC enabled on your cluster, I think you can set this to false&lt;/span&gt;
&lt;span class="na"&gt;rbac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;span class="c1"&gt;# isClusterService specifies whether the chart should be deployed as cluster-service or regular k8s app.&lt;/span&gt;
&lt;span class="na"&gt;isClusterService&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;zones&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;53&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="c1"&gt;# all other plugins&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;forward&lt;/span&gt;
    &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;. 8.8.8.8:53&lt;/span&gt; &lt;span class="c1"&gt;# tells where to forward all the DNS requests that CoreDNS can't solve&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;etcd&lt;/span&gt;
    &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;diomedet.internal&lt;/span&gt; &lt;span class="c1"&gt;# insert your domain here&lt;/span&gt;
    &lt;span class="na"&gt;configBlock&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
      &lt;span class="s"&gt;stubzones&lt;/span&gt;
      &lt;span class="s"&gt;path /skydns&lt;/span&gt;
      &lt;span class="s"&gt;endpoint http://etcd:2379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important part is the last one: we configure the &lt;code&gt;etcd&lt;/code&gt; plugin and tell &lt;strong&gt;CoreDNS&lt;/strong&gt; to look inside &lt;code&gt;http://etcd:2379&lt;/code&gt; to find the information about the domain &lt;code&gt;diomedet.internal&lt;/code&gt; (my internal domain).&lt;/p&gt;
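&lt;p&gt;Under the hood, the &lt;code&gt;etcd&lt;/code&gt; plugin uses the SkyDNS record format: the domain labels are reversed and appended to the configured path to build the etcd key. Once everything is running, you can peek at what's stored (assuming a pod named &lt;code&gt;etcd-0&lt;/code&gt;; the value below is just illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# vault.diomedet.internal becomes the key /skydns/internal/diomedet/vault
kubectl exec etcd-0 -- etcdctl get --prefix /skydns/internal/diomedet
/skydns/internal/diomedet/vault
{"host":"10.10.5.123","ttl":30}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;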

&lt;p&gt;Also, the &lt;code&gt;forward&lt;/code&gt; part is important; it tells CoreDNS where to redirect all the DNS queries it can't resolve. Later on, I'll explain why it is crucial.&lt;/p&gt;

&lt;p&gt;With these values, we can run the command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;helm template custom coredns/coredns --output-dir . --values values.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;(&lt;code&gt;custom&lt;/code&gt; is the name of my release, so the resources will be named &lt;code&gt;custom-coredns&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;Helm will create a folder &lt;code&gt;coredns/templates&lt;/code&gt; with five files in it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;coredns/templates
├── clusterrole.yaml
├── clusterrolebinding.yaml
├── configmap.yaml
├── deployment.yaml
└── service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the only thing we have to do is &lt;code&gt;kubectl apply&lt;/code&gt; these files, and we'll end up with a working CoreDNS instance. Working, but still not reachable from outside the cluster: if you have &lt;a href="https://metallb.universe.tf/"&gt;MetalLB&lt;/a&gt; configured, you can change the &lt;code&gt;ServiceType&lt;/code&gt; from &lt;code&gt;ClusterIP&lt;/code&gt; to &lt;code&gt;LoadBalancer&lt;/code&gt; to get an IP.&lt;br&gt;
I don't have this feature in my cluster yet, so for now I'm going to use the &lt;code&gt;NodePort&lt;/code&gt; type; this allows me to point a port of my node to the service.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;kustomize&lt;/code&gt;, there is the concept of patches, so I can create a patch that modifies the &lt;code&gt;service.yaml&lt;/code&gt; file without touching it directly. I prefer this way: if I have to re-run &lt;code&gt;helm template ...&lt;/code&gt;, I don't have to worry about any modifications I've made, because kustomize will re-apply the patch.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;patches/service.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-coredns&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;53&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;UDP&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;udp-53&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;30053&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;53&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;TCP&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;tcp-53&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;30053&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I tell Kubernetes to use port &lt;code&gt;30053&lt;/code&gt; for both &lt;code&gt;UDP&lt;/code&gt; and &lt;code&gt;TCP&lt;/code&gt;. With a &lt;code&gt;NodePort&lt;/code&gt; service, you can use ports from 30000 to 32767 unless you change the default range.&lt;/p&gt;
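&lt;p&gt;If you manage your own control plane, that range is set by a kube-apiserver flag, shown here with its default value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kube-apiserver ... --service-node-port-range=30000-32767
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;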

&lt;p&gt;To wrap it up, here my &lt;code&gt;kustomization.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kustomization&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize.config.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-coredns&lt;/span&gt;
&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;templates/clusterrole.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;templates/clusterrolebinding.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;templates/configmap.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;templates/deployment.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;templates/service.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;etcd-sts.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;etcd-service.yml&lt;/span&gt;
&lt;span class="na"&gt;patchesStrategicMerge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;patches/service.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you followed my path, you should already have all the files you need. If you've installed the helm chart directly, you can always change the service manifest directly on Kubernetes. You can even set the &lt;code&gt;serviceType&lt;/code&gt; in the &lt;code&gt;values.yaml&lt;/code&gt; file, but I didn't find a way to specify the &lt;code&gt;nodePort&lt;/code&gt; to use, so I decided to go with the patch.&lt;/p&gt;
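&lt;p&gt;With the &lt;code&gt;kustomization.yml&lt;/code&gt; in place, applying everything is a one-liner from the folder that contains it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl kustomize .   # render the patched manifests without applying them
kubectl apply -k .    # build and apply in one step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;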

&lt;h2&gt;
  
  
  Finally, install external-dns
&lt;/h2&gt;

&lt;p&gt;Now we can finally install the instance of &lt;strong&gt;external-dns&lt;/strong&gt; that will monitor the &lt;code&gt;Ingress&lt;/code&gt; created with our internal domain.&lt;/p&gt;

&lt;p&gt;I have RBAC enabled on my cluster, so my manifests look like this:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;external-dns.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;services"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endpoints"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extensions"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;networking.k8s.io"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingresses"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nodes"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns-viewer&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Recreate&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s.gcr.io/external-dns/external-dns:v0.7.6&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--source=ingress&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--provider=coredns&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--log-level=debug&lt;/span&gt; &lt;span class="c1"&gt;# debug only&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--domain-filter=diomedet.internal&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ETCD_URLS&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://etcd.custom-coredns:2379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't have RBAC enabled, you only need the &lt;code&gt;Deployment&lt;/code&gt; manifest.&lt;/p&gt;

&lt;p&gt;This is the most straightforward part: just set &lt;code&gt;ETCD_URLS&lt;/code&gt; to the correct value, and you're good to go. I have deployed my &lt;strong&gt;external-dns&lt;/strong&gt; in a different namespace than the &lt;strong&gt;etcd&lt;/strong&gt; one, so in the &lt;code&gt;ETCD_URLS&lt;/code&gt; variable I have to include the namespace in the service address: &lt;code&gt;http://etcd.custom-coredns:2379&lt;/code&gt;&lt;/p&gt;
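&lt;p&gt;As a reminder, these are the forms a Service address can take when reached from another namespace (the names here are mine; adjust them to your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://etcd:2379                                    # works only from the same namespace
http://etcd.custom-coredns:2379                     # service.namespace
http://etcd.custom-coredns.svc.cluster.local:2379   # fully qualified name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;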

&lt;p&gt;Once you've applied your manifests, you can create an Ingress with the internal domain you chose; in my case it's something like:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;vault/ingress.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault-ui-internal&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault.diomedet.internal&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you create an Ingress with your internal domain, you should see a log line like the following one on the external-dns pod:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;level=debug msg="Endpoints generated from ingress: vault/vault-ui-internal: [vault.diomedet.internal 0 IN A  10.10.5.123 []]"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;10.10.5.123&lt;/code&gt; is the IP address of my Kubernetes cluster. The cluster is called "Scyther", and Scyther's Pokédex number is #123, which explains my IP choice; not that you asked, but here it is anyway :P&lt;/p&gt;

&lt;p&gt;Now, if I use &lt;code&gt;dig&lt;/code&gt; to check the name resolution, it should work correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ dig @10.10.5.123 &lt;span class="nt"&gt;-p&lt;/span&gt; 30053 vault.diomedet.internal

&lt;span class="p"&gt;;&lt;/span&gt; &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; DiG 9.11.3-1ubuntu1.13-Ubuntu &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; @10.10.5.123 &lt;span class="nt"&gt;-p&lt;/span&gt; 30053 vault.diomedet.internal
&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;1 server found&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;;;&lt;/span&gt; global options: +cmd
&lt;span class="p"&gt;;;&lt;/span&gt; Got answer:
&lt;span class="p"&gt;;;&lt;/span&gt; -&amp;gt;&amp;gt;HEADER&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt; &lt;span class="no"&gt;opcode&lt;/span&gt;&lt;span class="sh"&gt;: QUERY, status: NOERROR, id: 5546
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: bc40278d825e2b16 (echoed)
;; QUESTION SECTION:
;vault.diomedet.internal.       IN      A

;; ANSWER SECTION:
vault.diomedet.internal. 30     IN      A       10.10.5.123

;; Query time: 3 msec
;; SERVER: 10.10.5.123#30053(10.10.5.123)
;; WHEN: Sat May 08 16:11:48 CEST 2021
;; MSG SIZE  rcvd: 103
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But if I run the &lt;code&gt;nslookup&lt;/code&gt; command, I still get an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ nslookup vault.diomedet.internal
Server:         172.29.96.1
Address:        172.29.96.1#53

&lt;span class="k"&gt;**&lt;/span&gt; server can&lt;span class="s1"&gt;'t find vault.diomedet.internal: NXDOMAIN
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This error appears because we still have to change the Pi-hole configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Pi-hole to use our new DNS Server
&lt;/h2&gt;

&lt;p&gt;To configure Pi-hole, you need to return to the DNS Settings tab &lt;code&gt;http://pihole.local/admin/settings.php?tab=dns&lt;/code&gt;, uncheck all the "Upstream DNS Servers", and insert your custom one, in my case &lt;code&gt;10.10.5.123#30053&lt;/code&gt; (the # is used to specify the port).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bHqrNQwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.diomedet.com/imgs/posts/kubernetes-external-dns-pihole/pihole-dns-updated.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bHqrNQwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.diomedet.com/imgs/posts/kubernetes-external-dns-pihole/pihole-dns-updated.png" alt="Pi-hole DNS Settings Updated"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Now, if you run the &lt;code&gt;nslookup&lt;/code&gt; command again, you should end with the correct result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ nslookup vault.diomedet.internal
Server:         172.29.96.1
Address:        172.29.96.1#53

Non-authoritative answer:
Name:   vault.diomedet.internal
Address: 10.10.5.123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! We can create as many Ingresses with our internal domain as we want, and they will always resolve to our cluster IP.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Unfortunately, in this scenario our instance of &lt;strong&gt;CoreDNS&lt;/strong&gt; becomes a single point of failure for our network. If something happens to our cluster or the CoreDNS pod stops, we'll lose the ability to resolve domain names. I'm still searching for a more reliable solution, but for now I have to live with this downside.&lt;/p&gt;

&lt;p&gt;Do you remember the &lt;code&gt;forward&lt;/code&gt; value that we set on the &lt;code&gt;values.yaml&lt;/code&gt; for &lt;strong&gt;CoreDNS&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;That option has become the only way to choose which DNS server resolves all the requests that can't be resolved internally and aren't blocked by Pi-hole. This is because if we check any of the other "Upstream DNS Servers", we'll lose the ability to resolve our internal domain.&lt;/p&gt;

&lt;p&gt;I have some ideas on how to solve that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A second Pi-hole that is going to be my "Custom 2" upstream DNS Server&lt;/li&gt;
&lt;li&gt;An ingress that masks the IP of the DNS server I want to use, something like I've done in a previous post &lt;a href="https://dev.to/oen/expose-an-external-resource-with-a-kubernetes-ingress-om"&gt;Expose an external resource with a Kubernetes Ingress&lt;/a&gt;. A mask is needed because if you insert &lt;code&gt;8.8.8.8&lt;/code&gt; into the "Custom 2" field, Pi-hole will automatically check the Google server for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But I haven't tested any of that, so, for today, this is it.&lt;/p&gt;

&lt;p&gt;I'm also looking for a way to have a certificate on my internal domain, so I don't get those annoying alerts when I'm trying to access my apps via &lt;code&gt;HTTPS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I hope you've found this article helpful. Stay tuned for future updates!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>homelab</category>
      <category>pihole</category>
    </item>
    <item>
      <title>Expose an external resource with a Kubernetes Ingress</title>
      <dc:creator>Diomede</dc:creator>
      <pubDate>Fri, 07 May 2021 09:10:25 +0000</pubDate>
      <link>https://dev.to/oen/expose-an-external-resource-with-a-kubernetes-ingress-om</link>
      <guid>https://dev.to/oen/expose-an-external-resource-with-a-kubernetes-ingress-om</guid>
      <description>&lt;p&gt;The other night I was wandering around my homelab and noticed that I didn't correctly expose my new home assistant instance to the internet. Some months ago, home assistant (from now on HA) found its new permanent home, a Rasperry PI 4 4GB; since we're in lockdown, I didn't need to reach HA outside my local network.&lt;/p&gt;

&lt;p&gt;I'm still working from home (and loving it), but it was time to fix this. While weighing up the best way to expose it, I had an idea: why not use an Ingress in my Kubernetes cluster?&lt;/p&gt;

&lt;p&gt;It might seem a little odd, but hear me out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I already have a Kubernetes cluster, with &lt;a href="https://cert-manager.io/docs/"&gt;cert-manager&lt;/a&gt; configured and working&lt;/li&gt;
&lt;li&gt;I'm planning to move some services into the cluster, and they'll need to reach HA anyway&lt;/li&gt;
&lt;li&gt;I want to learn as much as I can about Kubernetes, and this seemed like a good opportunity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, to achieve my goal, I need to answer one question: how can I point a Kubernetes Service to an external resource?&lt;/p&gt;

&lt;p&gt;Wait, why a Service? Didn't you mention an Ingress before? Yes, you're right, but bear with me.&lt;/p&gt;

&lt;p&gt;In Kubernetes, an Ingress needs a Service to which it can redirect the requests it receives, so to properly expose HA with an Ingress, we first need a Service that points to it.&lt;/p&gt;

&lt;p&gt;Usually, a Service is used to expose a set of Pods; this way, if different applications need to talk to each other, they don't have to use the Pod names (which can change at any moment) but can use the Service name instead.&lt;/p&gt;

&lt;p&gt;When we create a Service, another resource is created automatically: an Endpoints object. It holds the references (the IP addresses) of all the Pods that match the selector we've specified in the Service spec.&lt;/p&gt;

&lt;p&gt;The Endpoints object created this way has the same name as the Service.&lt;/p&gt;

&lt;p&gt;But here's the thing: we can also create a Service without a selector. If we do, we have to create the Endpoints object ourselves, and it can point to an IP outside the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;So let's look at the YAML we need to expose HA through a Kubernetes Ingress.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;endpoint.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Endpoints&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
&lt;span class="na"&gt;subsets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt; &lt;span class="c1"&gt;# Insert your home-assistant IP here&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ha&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8123&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;service.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ha&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8123&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;clusterIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: we set the &lt;code&gt;clusterIP&lt;/code&gt; property to &lt;code&gt;None&lt;/code&gt; on purpose; this tells Kubernetes not to assign an IP to this Service, since we don't need one. A Service like this is also known as a headless Service.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ingress.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="s"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;letsencrypt-prod"&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ha.awesome-domain.com&lt;/span&gt; &lt;span class="c1"&gt;# insert your domain here&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ha&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ha.awesome-domain.com&lt;/span&gt; &lt;span class="c1"&gt;# insert your domain here&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;home-assistant-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we run &lt;code&gt;kubectl apply -f ...&lt;/code&gt; for these three files and wait for cert-manager to finish its work, we end up with a domain, backed by a valid certificate, that points to our Home Assistant instance outside of Kubernetes.&lt;/p&gt;
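&lt;p&gt;For reference, a minimal sketch of the commands involved (file names match the snippets above; the two checks at the end are optional and need a working cluster):&lt;/p&gt;

```shell
# Create the Endpoints, Service, and Ingress described above
kubectl apply -f endpoint.yml
kubectl apply -f service.yml
kubectl apply -f ingress.yml

# Verify that the Service picked up the manually created Endpoints
kubectl get endpoints home-assistant

# Once cert-manager has done its work, the TLS secret should exist
kubectl get secret home-assistant-tls
```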

&lt;p&gt;And remember when I said I want to move some services that use HA into my k8s cluster? Now all I have to do is deploy them and use &lt;code&gt;home-assistant&lt;/code&gt; as the URL for HA instead of its IP address.&lt;/p&gt;

&lt;p&gt;For the sake of completeness, if you already have a local domain for your HA instance, you can skip the creation of the Endpoints object and use the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname"&gt;Service's &lt;code&gt;externalName&lt;/code&gt; property&lt;/a&gt; directly.&lt;/p&gt;
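&lt;p&gt;A sketch of that variant (&lt;code&gt;ha.local.example&lt;/code&gt; is a placeholder for your existing local domain):&lt;/p&gt;

```yaml
# Alternative to the Endpoints + headless Service pair above: a Service
# of type ExternalName resolves to a CNAME for an existing DNS name
# instead of pointing at an IP (the domain below is a placeholder)
apiVersion: v1
kind: Service
metadata:
  name: home-assistant
spec:
  type: ExternalName
  externalName: ha.local.example
```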

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>homelab</category>
      <category>homeassistant</category>
    </item>
  </channel>
</rss>
