<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: donnercody | Thoren</title>
    <description>The latest articles on DEV Community by donnercody | Thoren (@donnercody).</description>
    <link>https://dev.to/donnercody</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1478255%2Fa2080169-b728-434c-92d8-c7332b12acc8.jpeg</url>
      <title>DEV Community: donnercody | Thoren</title>
      <link>https://dev.to/donnercody</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/donnercody"/>
    <language>en</language>
    <item>
      <title>Lowest Price for your own kubernetes using Hetzner Cloud(incl. Storage Provisioner)</title>
      <dc:creator>donnercody | Thoren</dc:creator>
      <pubDate>Sun, 12 May 2024 21:20:24 +0000</pubDate>
      <link>https://dev.to/donnercody/lowest-price-for-your-own-kubernetes-using-hetzner-cloudincl-storage-provisioner-3db</link>
      <guid>https://dev.to/donnercody/lowest-price-for-your-own-kubernetes-using-hetzner-cloudincl-storage-provisioner-3db</guid>
      <description>&lt;h2&gt;
  
  
  Lowest Price for your own kubernetes using Hetzner Cloud(incl. Storage Provisioner)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F9976%2F1%2AeT_7sdVv6jND7MKMv15Yeg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F9976%2F1%2AeT_7sdVv6jND7MKMv15Yeg.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my day job as a software architect and developer, I use Kubernetes every day. It is amazing software and helps a lot, not only for scaling to high availability but also for testing and deploying Docker containers in a cloud environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But when it comes to my home lab, all options for a Kubernetes cloud are very expensive.&lt;/strong&gt; (Scaleway is $150 for a “basic” home lab with some power; DigitalOcean is much more expensive.)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;10 vCPU&lt;/strong&gt;, &lt;strong&gt;24 GB&lt;/strong&gt; RAM, &lt;strong&gt;100 GB&lt;/strong&gt; disk for &lt;em&gt;$50 / month&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I decided to try self-hosting it in the Hetzner cloud environment.&lt;/p&gt;

&lt;p&gt;Here is a step-by-step tutorial on how you can create your own Kubernetes home lab with more than 30 GB, for no more than $60 per month.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create a Hetzner Cloud Account
&lt;/h2&gt;

&lt;p&gt;You need to create a Hetzner cloud account.&lt;/p&gt;

&lt;p&gt;Create a project: “homelab”. The goal is to create 3 different nodes inside of the project so that in the end it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3054%2F1%2AsUBEPWGnOVmSRyNtD5fH0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3054%2F1%2AsUBEPWGnOVmSRyNtD5fH0Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Create each server (Master, Worker and Persistence)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2714%2F1%2AefDpSKsuDzr146nU4nvcBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2714%2F1%2AefDpSKsuDzr146nU4nvcBA.png" alt="We choose Helsinki because it's the cheapest location."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For kube-master and kube-worker1, select a CPX31 so you have enough power.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2594%2F1%2A6uWL2McwS7REFCU8RLFIvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2594%2F1%2A6uWL2McwS7REFCU8RLFIvg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the master and worker, select the latest Ubuntu version available on Hetzner cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2552%2F1%2AqumQYW93fwoonolq7GG5uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2552%2F1%2AqumQYW93fwoonolq7GG5uw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4oAIFNeHiDsIpy7_km2zTg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4oAIFNeHiDsIpy7_km2zTg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For kube-persistence, select the CX31 flavor and CentOS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2556%2F1%2AX2gBd4-IbZYv6KiRe_-5Mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2556%2F1%2AX2gBd4-IbZYv6KiRe_-5Mw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also create a 100 GB volume for later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AnoHSNXSQa-M8tvi1exDelQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AnoHSNXSQa-M8tvi1exDelQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AQGbjwaORKVKxqrXsBp1TXA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AQGbjwaORKVKxqrXsBp1TXA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2APbaFXsuf9Px1lSHfr6aMFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2APbaFXsuf9Px1lSHfr6aMFA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, your servers should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3054%2F1%2AsUBEPWGnOVmSRyNtD5fH0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3054%2F1%2AsUBEPWGnOVmSRyNtD5fH0Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Private Network Configuration
&lt;/h2&gt;

&lt;p&gt;Create a private network and add all nodes to this private network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ABj7rfFMGqpbvlPT5KRBlsQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ABj7rfFMGqpbvlPT5KRBlsQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3008%2F1%2AwfwYtonl2vgBrvcJNsB9Uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3008%2F1%2AwfwYtonl2vgBrvcJNsB9Uw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Prepare Everything
&lt;/h2&gt;

&lt;p&gt;To make things easier to follow, we have 3 nodes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;kube-master — 10.0.0.4&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kube-worker1 — 10.0.0.2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kube-persistence — 10.0.0.3&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can have more than 3 nodes — simply repeat the steps above for each one. Alternatively, set up these 3 and then create an image from your kube-worker1.&lt;/p&gt;
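&lt;p&gt;With the fixed private IPs above, it can also help to give the nodes names on each machine. A minimal sketch (a temp file stands in for /etc/hosts; this is a convenience, not something the tutorial requires):&lt;/p&gt;

```shell
# Sketch: map the three private IPs to hostnames, hosts-file style.
# A temp file stands in for /etc/hosts on each node.
HOSTS=$(mktemp)
printf '%s\n' \
  '10.0.0.4 kube-master' \
  '10.0.0.2 kube-worker1' \
  '10.0.0.3 kube-persistence' | tee -a "$HOSTS"
# count the entries we just wrote
grep -c kube "$HOSTS"
```

&lt;p&gt;Appending these lines to the real /etc/hosts lets you type ssh root@kube-persistence instead of remembering IPs.&lt;/p&gt;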

&lt;p&gt;Update all packages on all machines to the newest versions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-master:~# 
kube-worker:~#

sudo apt-get update

kube-persistence:~#

sudo yum update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create an ssh key pair on the master and save it in your password tool. You will need it later.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-master:~#

sudo ssh-keygen -t rsa -b 4096
sudo cat /root/.ssh/id_rsa.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, go to kube-master, kube-worker1, and kube-persistence and add the &lt;strong&gt;public key&lt;/strong&gt; to the file &lt;strong&gt;/root/.ssh/authorized_keys&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-worker1:~# 
&amp;amp;
kube-master:~#

echo "$yourpublickey" &amp;gt;&amp;gt; /root/.ssh/authorized_keys
# restart ssh service
sudo systemctl restart ssh.service

kube-persistence:~#

echo "$yourpublickey" &amp;gt;&amp;gt; /root/.ssh/authorized_keys
# restart ssh service
service sshd restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
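&lt;p&gt;If you ever re-run the setup, the echo above appends the key again. A small guard makes the append idempotent; a sketch, with a temp file standing in for /root/.ssh/authorized_keys and a made-up key:&lt;/p&gt;

```shell
# Sketch: append a public key only if it is not already present,
# so re-running the node setup never duplicates entries.
# A temp file stands in for /root/.ssh/authorized_keys; the key is fake.
AUTH_KEYS=$(mktemp)
PUBKEY="ssh-rsa AAAAexamplekey root@kube-master"

add_key() {
  # -qxF: quiet, whole-line, fixed-string match
  if ! grep -qxF "$PUBKEY" "$AUTH_KEYS"; then
    echo "$PUBKEY" | tee -a "$AUTH_KEYS"
  fi
}

add_key
add_key   # second call is a no-op
grep -c ssh-rsa "$AUTH_KEYS"
```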

&lt;p&gt;Now let's validate that you can connect to the other nodes from the master.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-master1:~#

ssh root@10.0.0.3
# you should now be connected to the persistence node; repeat for the worker
[root@kube-persistence ~]#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  4. Install Kubernetes with Kubespray and Ansible
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-master1:~#

sudo apt install python3-pip -y
sudo apt install python3-virtualenv -y

git clone https://github.com/kubernetes-sigs/kubespray.git

VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
ANSIBLE_VERSION=2.12
virtualenv  --python=$(which python3) $VENVDIR
source $VENVDIR/bin/activate 


(kubespray-venv) kube-master1:~# 
cd $KUBESPRAYDIR

pip install -U -r requirements.txt

cp -rfp inventory/sample inventory/mycluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, declare the IP addresses before starting kubespray.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

declare -a IPS=(10.0.0.4 10.0.0.2)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now check that “calico” is configured as the network plugin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# ----
# find this Line:
## Choose the network plugin (cilium, calico, kube-ovn, weave, or flann...
# ...
kube_network_plugin: calico
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, run the Ansible script and install Kubernetes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Now wait 10–20 minutes until the installation has finished.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Error Handling
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [bootstrap-os : Fetch /etc/os-release] **************************************************************************************************************
fatal: [node2]: UNREACHABLE! =&amp;gt; {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.0.0.4' (ED25519) to the list of known hosts.\r\nroot@10.0.0.4: Permission denied (publickey,password).", "unreachable": true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you see the error above, it means you missed adding your public key to the /root/.ssh/authorized_keys file in the preparation step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's validate the installation
&lt;/h3&gt;

&lt;p&gt;If you see output like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
container-engine/containerd : Containerd | Unpack containerd archive ------------------------------------------ 3.53s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources --------------------------------------------------- 3.44s
container-engine/crictl : Extract_file | Unpacking archive ---------------------------------------------------- 3.37s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That means your installation is finished. Now let's execute some commands to check.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AMEWm-L4aOeWkyY-zy0qLew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AMEWm-L4aOeWkyY-zy0qLew.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hooray!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Download config and test with client
&lt;/h2&gt;

&lt;p&gt;Now you can show and download the kube config file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

cat /root/.kube/config

# Copy the output of this to your local ~/.kube/config

# You only need to change this line:
# server: https://127.0.0.1:6443
# to: https://YOUR_PUBLIC_MASTER_IP:6443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
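&lt;p&gt;The server-line change can also be scripted with sed. A sketch, using a temp file in place of the copied config and the documentation address 203.0.113.10 as a placeholder for your master's public IP:&lt;/p&gt;

```shell
# Sketch: rewrite the API server address in a copied kube config.
# 203.0.113.10 is a placeholder for your master's public IP.
CONF=$(mktemp)
printf '    server: https://127.0.0.1:6443\n' | tee "$CONF"
sed -i'' 's|https://127.0.0.1:6443|https://203.0.113.10:6443|' "$CONF"
grep server "$CONF"
```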

&lt;blockquote&gt;
&lt;p&gt;You can also merge multiple Kubernetes clusters into one config file by merging the “clusters”, “contexts” and “users” lists.&lt;/p&gt;
&lt;/blockquote&gt;
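&lt;p&gt;One way to do that merge, assuming the downloaded config was saved under the hypothetical name ~/.kube/homelab: kubectl reads a colon-separated list of files from the KUBECONFIG variable, and kubectl config view --flatten can write them out as a single file. A sketch:&lt;/p&gt;

```shell
# Sketch: point KUBECONFIG at both files, then flatten them into one.
# ~/.kube/homelab is a hypothetical name for the downloaded config.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/homelab"
echo "$KUBECONFIG"
# then merge with:
#   kubectl config view --flatten | tee "$HOME/.kube/merged-config"
```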

&lt;p&gt;Now let's see if we can connect to the cloud.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your-local-env: ~#

kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Apkq66DDSvlKVtw6IWAhMSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Apkq66DDSvlKVtw6IWAhMSg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. (Optional) Use Aptakube to get a good dashboard for your cloud
&lt;/h2&gt;

&lt;p&gt;When you switch between different kube systems, it becomes very annoying to change your config file every time before connecting. With the “&lt;a href="https://aptakube.com/" rel="noopener noreferrer"&gt;aptakube&lt;/a&gt;” application it becomes very easy to connect, see all your relevant information, or open shells.&lt;br&gt;
&lt;a href="https://aptakube.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Aptakube - Kubernetes GUI for Mac, Windows &amp;amp; Linux&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Install Metrics Server
&lt;/h2&gt;

&lt;p&gt;When you use aptakube, you will notice that the “nodes” tab doesn't show any CPU or memory usage. To change this, you need to install the metrics server in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3124%2F1%2AuEQjE9A0ot8-q9lcp_wcmA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3124%2F1%2AuEQjE9A0ot8-q9lcp_wcmA.png" alt="Missing the metrics of the nodes."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(kubespray-venv) kube-master1:~#

curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

nano components.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now add this line to components.yaml, since the cluster uses self-signed certificates.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- --kubelet-insecure-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
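&lt;p&gt;Instead of editing by hand, the flag can be inserted with sed. A sketch on a minimal stand-in for the args list (the real components.yaml has more surrounding context, and the existing arguments may differ between releases):&lt;/p&gt;

```shell
# Sketch: add --kubelet-insecure-tls below an existing container arg.
# A two-line stand-in for the metrics-server args list is used here.
YAML=$(mktemp)
printf '        args:\n        - --secure-port=10250\n' | tee "$YAML"
sed -i'' 's/^\( *- --secure-port.*\)$/\1\n        - --kubelet-insecure-tls/' "$YAML"
cat "$YAML"
```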

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AXZtansrmJVJlzSqDfGjvGw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AXZtansrmJVJlzSqDfGjvGw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, run the kubectl apply command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f components.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check that the pod is running with kubectl get pods. If you now run kubectl top nodes or check &lt;a href="https://aptakube.com/" rel="noopener noreferrer"&gt;aptakube&lt;/a&gt;, it should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3088%2F1%2A3RvBAbn4le0bBGuzRZhDAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3088%2F1%2A3RvBAbn4le0bBGuzRZhDAA.png" alt="Hooray! We see some performance metrics from our nodes."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Finally getting some metrics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  8. Install a Persistence Server
&lt;/h2&gt;

&lt;p&gt;After installing Kubernetes, you need a persistence layer inside your cluster; otherwise you cannot launch pods that require persistent volume claims. For that, you need to install a storage class.&lt;/p&gt;

&lt;p&gt;To keep it simple, create an NFS server with a shared directory and add it as a storage class in Kubernetes. Kubernetes then creates a directory for each persistent volume and stores the data inside. For more flexibility, put the share on a Hetzner volume so you can shrink or extend it as needed.&lt;/p&gt;

&lt;p&gt;Connect with the shell to your persistence machine.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-persistence:~#

sudo yum install -y nfs-utils
systemctl start nfs-server rpcbind
systemctl enable nfs-server rpcbind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, change into your mounted volume at /mnt/HC_Volume_XXXX/ and create a directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-persistence:~#

cd /mnt/HC_Volume_X/
mkdir nfsshare

chmod 777 nfsshare
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the next step add the directory to the shared entries.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-persistence:~#

nano /etc/exports

# Add this entry:
/mnt/HC_Volume_XXXXX/nfsshare 10.0.0.0/24(rw,sync,no_root_squash)

# save the file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
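&lt;p&gt;The export entry is easy to get wrong; composing it from its pieces makes the format explicit. A sketch (the XXXXX placeholder stays — substitute your real mount path):&lt;/p&gt;

```shell
# Sketch: compose the /etc/exports entry from its parts.
# HC_Volume_XXXXX is the placeholder from above, not a real path.
SHARE=/mnt/HC_Volume_XXXXX/nfsshare
SUBNET=10.0.0.0/24
OPTS=rw,sync,no_root_squash
ENTRY="$SHARE $SUBNET($OPTS)"
echo "$ENTRY"
```

&lt;p&gt;no_root_squash is what lets root on the clients act as root on the share; sync trades some write speed for safety.&lt;/p&gt;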

&lt;p&gt;Now restart the export process.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-persistence:~#

exportfs -r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Let's check the mounted directory from the master and worker machines
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-master1:~# 
+
kube-worker1:~#

apt install nfs-common

showmount -e 10.0.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command should then show you:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Export list for 10.0.0.3:&lt;br&gt;
/mnt/HC_Volume_XXXX/nfsshare 10.0.0.0/24&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Create a namespace for the provisioner
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace k8s-nfs-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, check out the repository for the NFS provisioner.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator 

cd kubernetes-incubator/nfs-client/

sed -i'' "s/namespace:.*/namespace: k8s-nfs-storage/g" ./deploy/rbac.yaml
sed -i'' "s/namespace:.*/namespace: k8s-nfs-storage/g" ./deploy/deployment.yaml

kubectl create -f ./deploy/rbac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now edit the provisioner yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano ./deploy/deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Modify the parts that are orange in the original file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2280%2F1%2AReIa9o5jhn4Vyv6WCCOSqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2280%2F1%2AReIa9o5jhn4Vyv6WCCOSqg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Modify the storage class and add the same identifier as above for the provisioner name.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano deploy/class.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2952%2F1%2ARzUXMNnVXDOW7QvbaHGg5A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2952%2F1%2ARzUXMNnVXDOW7QvbaHGg5A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next create the provisioner and the storage class.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f deploy/class.yaml
kubectl create -f deploy/deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, you should see the storage class in your Kubernetes cluster. Set this storage class as the default.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4272%2F1%2AMwGhTLqzMTHE8J5sMFe1cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4272%2F1%2AMwGhTLqzMTHE8J5sMFe1cg.png" alt="Hooray here is our new storage class."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Let's test with a PostgreSQL Database (Optional)
&lt;/h2&gt;

&lt;p&gt;If you want to test your Kubernetes cluster, use Helm to create a PostgreSQL database that claims a persistent volume.&lt;/p&gt;

&lt;p&gt;The following command deploys a Helm chart with everything needed for a Postgres database.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install postgresdb1 oci://registry-1.docker.io/bitnamicharts/postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, you will see some persistent volume claims and persistent volumes inside of your Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4036%2F1%2AYS1vZDqXMgkg61eYzZm_rA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4036%2F1%2AYS1vZDqXMgkg61eYzZm_rA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you look into the shared mount on kube-persistence, you will see a directory like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3468%2F1%2AemZ09_mC4xtnCo9ruQgI1Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3468%2F1%2AemZ09_mC4xtnCo9ruQgI1Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You made it! Congrats!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>This Little kubernetes Port Forwarder Helped me to Save a lot of Time and Headache</title>
      <dc:creator>donnercody | Thoren</dc:creator>
      <pubDate>Wed, 08 May 2024 14:48:32 +0000</pubDate>
      <link>https://dev.to/donnercody/this-little-kubernetes-port-forwarder-helped-me-to-save-a-lot-of-time-and-headache-1ba9</link>
      <guid>https://dev.to/donnercody/this-little-kubernetes-port-forwarder-helped-me-to-save-a-lot-of-time-and-headache-1ba9</guid>
      <description>&lt;p&gt;I have built open-source kubelinkr to get connected to different kubernetes systems with one click — it saves me a lot of time.&lt;/p&gt;

&lt;p&gt;When you are a web developer or full-stack developer, you need to test and debug systems daily. So in my case, I have more than 10 kubernetes systems that I use for developing different projects.&lt;/p&gt;

&lt;p&gt;Sometimes I need to connect from my local machine directly to databases inside a kubernetes; other times I only need to access an application inside kubernetes without it being publicly available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEACREt5YIGOZcGuAUH4ojQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEACREt5YIGOZcGuAUH4ojQ.png" alt="Image of kubelinkr structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before I developed kubelinkr, I switched between the kubernetes systems by changing the “current-context” inside the config and then connecting and disconnecting via the terminal each time. After doing that ten times a day, it got annoying.&lt;/p&gt;

&lt;h3&gt;
  
  
  But why not use Kubeforwarder?
&lt;/h3&gt;

&lt;p&gt;I know some of you would argue, “But there is &lt;a href="https://kube-forwarder.pixelpoint.io/" rel="noopener noreferrer"&gt;kubeforwarder&lt;/a&gt;, which does the same.” Yeah, but I tried kubeforwarder for MySQL and MongoDB port forwarding, and in both cases I got a lot of connection drops, which is very bad for my applications.&lt;/p&gt;

&lt;p&gt;That's why I built kubelinkr — and for me, it's doing a great job so far.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to download kubelinkr
&lt;/h2&gt;

&lt;p&gt;You can download kubelinkr for free on my GitHub:&lt;br&gt;
&lt;a href="https://github.com/donnercody/kubelinkr" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub - donnercody/kubelinkr: A Menubar One-Click Application for Kubernetes forwarding multiple…&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please check it out and give me some feedback.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to use kubelinkr
&lt;/h2&gt;

&lt;p&gt;After you have downloaded and installed kubelinkr, it will show a small icon in your tray.&lt;/p&gt;

&lt;p&gt;When you click on that icon, you will see a dialog where you can connect to your different projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2038%2F1%2AghMT6SAu3g-CWHCrpPMy3Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2038%2F1%2AghMT6SAu3g-CWHCrpPMy3Q.png" alt="Click on the play or stop buttons to connect to different kubernetes."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you click “Play” on any of the projects in the image above, its port forwards start immediately and you can reach your systems at “localhost:yourport” on your local machine. Pressing “Stop” stops all of the project's port forwards at once.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to configure kubelinkr
&lt;/h2&gt;

&lt;p&gt;First things first: the configuration for kubelinkr is your kube config file at &lt;strong&gt;“~/.kube/config”&lt;/strong&gt;.&lt;/p&gt;
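&lt;p&gt;For reference, a minimal kube config has roughly this shape (the names, server address, and token below are placeholders); kubelinkr lists the contexts it finds in this file:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Config
current-context: project-a-test
clusters:
  - name: project-a-test
    cluster:
      server: https://203.0.113.10:6443
contexts:
  - name: project-a-test
    context:
      cluster: project-a-test
      user: admin-test
users:
  - name: admin-test
    user:
      token: REDACTED
```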

&lt;p&gt;When you open kubelinkr for the first time, it will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ACpqgOt4ZuDL1nHXbGSNJKA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ACpqgOt4ZuDL1nHXbGSNJKA.png" alt="add new project in kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a new project, click “Add New Project”.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a project?
&lt;/h3&gt;

&lt;p&gt;For my purposes, a project is a set of port forwards to one or more Kubernetes clusters. I want to start, stop, and switch between different projects so I can quickly test and validate.&lt;/p&gt;

&lt;p&gt;I also use different “stages” as separate projects (for example, project-a test and project-a live are different projects with the same port-forward setup pointing at different clusters).&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a project
&lt;/h3&gt;

&lt;p&gt;Enter any name in the dialog and click “Create”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AsNNu8hV8CYavxQ16S9Oc_A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AsNNu8hV8CYavxQ16S9Oc_A.png" alt="Create a new project (kubelinkr)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, you can create the port forwards for that project. Expand the newly created project and click “Create new Port Forward”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A--fzVnmhJycJc09GA_EwKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A--fzVnmhJycJc09GA_EwKw.png" alt="Create a new port forwarding for your project (kubelinkr)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now your kube config comes into play: select the cluster you want to create a port forward to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2An2uMR8Js5fG9x-9aEWafPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2An2uMR8Js5fG9x-9aEWafPg.png" alt="select your namespace | kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then select the namespace that contains the pods you want to connect to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Ayn1UFlcyEhBlqe1VD1_TKA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Ayn1UFlcyEhBlqe1VD1_TKA.png" alt="select the target pod  | kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this step, you can select the target pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AB-RXuoc1GMhNZkDFCbcb8Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AB-RXuoc1GMhNZkDFCbcb8Q.png" alt="modify ports  | kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, you enter your local and remote ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AeAoYEG8nytlkfY9Se4D1rQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AeAoYEG8nytlkfY9Se4D1rQ.png" alt="modify remote ports | kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click “Create” in the dialog:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2034%2F1%2Ab3gc7hrfKXZd8ONYohxMpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2034%2F1%2Ab3gc7hrfKXZd8ONYohxMpw.png" alt="create the project  | kubelinkr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEJ-QgizBvupWnGrFaTBysA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEJ-QgizBvupWnGrFaTBysA.png" alt="Showing the port forwards in your created project."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Finished! Now you can press “Play” to connect to your project in Kubernetes and forward any pod to your local machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Do you want your own cheap kubernetes?
&lt;/h2&gt;

&lt;p&gt;Here is another story that may be interesting for you.&lt;br&gt;
&lt;a href="https://medium.com/@thoren.lederer/host-a-on-premise-kubernetes-cloud-using-hetzner-and-save-a-lot-of-money-nfs-metrics-and-more-157f467d7977" rel="noopener noreferrer"&gt;&lt;strong&gt;Host an On-Premise Kubernetes Cloud Using Hetzner and Save a lot of Money. (NFS, Metrics and More)&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading,&lt;/p&gt;

&lt;p&gt;Thoren&lt;/p&gt;

</description>
    </item>
    <item>
      <title>MongoDB aggregation lookup to compare a string and ObjectId field easily</title>
      <dc:creator>donnercody | Thoren</dc:creator>
      <pubDate>Wed, 08 May 2024 14:43:48 +0000</pubDate>
      <link>https://dev.to/donnercody/mongodb-aggregation-lookup-to-compare-a-string-and-objectid-field-easily-g4k</link>
      <guid>https://dev.to/donnercody/mongodb-aggregation-lookup-to-compare-a-string-and-objectid-field-easily-g4k</guid>
      <description>&lt;p&gt;If you are looking for a solution to aggregate a lookup between string and ObjectId, look no further.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2700%2F1%2AflMFmGJDD3FvVXthLS2HGg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2700%2F1%2AflMFmGJDD3FvVXthLS2HGg.jpeg" alt="Mapping between ObjectId and String"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you have the following collections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2700%2F1%2AChHVoBH-CIrw940_PXaVcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2700%2F1%2AChHVoBH-CIrw940_PXaVcg.png" alt="Example Image of the mongo data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you want to expand the “apartments” field with the full documents from the “apartments” collection. Usually, you would do this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.getCollection("buildings").aggregate([
 {
     $lookup: {
         from: "apartments",
         localField: "apartments._id",
         foreignField: "_id",
         as: "apartments"
     }
 }
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Your result will most likely be this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
 ...
  "apartments" : [  ], 
 ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The “apartments” field is empty because &lt;strong&gt;$lookup matches on strict equality, and a string never equals an ObjectId&lt;/strong&gt;, even when both carry the same hex value.&lt;/p&gt;
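&lt;p&gt;To make the type mismatch concrete, here is a plain-JavaScript sketch (a stub class stands in for the BSON ObjectId type, so this is an illustration, not driver code):&lt;/p&gt;

```javascript
// Stub standing in for the BSON ObjectId type (illustration only).
class ObjectId {
  constructor(hex) { this.hex = hex; }
  equals(other) {
    // Equality requires the same type AND the same hex value.
    if (other instanceof ObjectId) { return other.hex === this.hex; }
    return false;
  }
}

// What "buildings" embeds: the _id is a plain string.
const embeddedId = "6425695ba69a66001b667dc5";
// What "apartments" stores: a real ObjectId with the same hex value.
const storedId = new ObjectId("6425695ba69a66001b667dc5");

// A join on equality finds nothing, because the types differ:
console.log(storedId.equals(embeddedId)); // false

// Converting the string first (what $toObjectId does) makes it match:
console.log(storedId.equals(new ObjectId(embeddedId))); // true
```

&lt;p&gt;The fix is to perform exactly that conversion inside the pipeline, before the lookup runs.&lt;/p&gt;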

&lt;h2&gt;
  
  
  How to fix this without changing your data or suffering the poor performance of “unwinds”
&lt;/h2&gt;

&lt;p&gt;Most solutions on the web use $unwind and then $addFields, but this is very slow when you have a large number of documents.&lt;/p&gt;

&lt;p&gt;Here is a much simpler solution: use &lt;strong&gt;&lt;em&gt;$addFields with $map&lt;/em&gt;&lt;/strong&gt;. (Note that $toObjectId requires MongoDB 4.0 or later and raises an error if a string is not a valid ObjectId.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[{
    $addFields: {
      "apartments": {
        $map: {
          input: "$apartments",
          in: {
            $toObjectId: "$$this._id" 
          }
        }
      }
    }
},
{
     $lookup: {
         from: "apartments",
         localField: "apartments",
         foreignField: "_id",
         as: "apartments"
     }
 }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After running the pipeline above, the result looks the way you would expect:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ ...
  "title" : "Pearl Court", 
  "apartments" : 
      [ 
        { 
            "_id" : ObjectId("6425695ba69a66001b667dc5"), 
            "title" : "Unit 007", 
            "description" : "Lorem ipsum..." 
            ...
        } 
      ],   
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I hope this saved you a lot of time searching for a solution to expand your collections.&lt;/p&gt;

&lt;p&gt;Thanks for reading,&lt;/p&gt;

&lt;p&gt;Thoren&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
