<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Wellnitz</title>
    <description>The latest articles on DEV Community by Alex Wellnitz (@alexohneander).</description>
    <link>https://dev.to/alexohneander</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F95391%2Ffd1a34ac-a69a-4949-8bb6-ac5246bfd9b7.jpeg</url>
      <title>DEV Community: Alex Wellnitz</title>
      <link>https://dev.to/alexohneander</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexohneander"/>
    <language>en</language>
    <item>
      <title>Highly scalable Minecraft cluster</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Sat, 04 Nov 2023 09:21:05 +0000</pubDate>
      <link>https://dev.to/alexohneander/highly-scalable-minecraft-cluster-4447</link>
      <guid>https://dev.to/alexohneander/highly-scalable-minecraft-cluster-4447</guid>
      <description>&lt;p&gt;Are you planning a very large Minecraft LAN party? Then this article is for you. Here I show you how to set up a highly scalable Minecraft cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Minecraft cluster?
&lt;/h3&gt;

&lt;p&gt;A Minecraft cluster is a server network made up of multiple Minecraft servers. The servers are connected to each other over a network and share a single world, so you and your friends play on what looks like one server but is actually several.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does a Minecraft cluster work?
&lt;/h3&gt;

&lt;p&gt;A Minecraft cluster consists of several components. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FMultiPaper%2FMultiPaper%2Fraw%2Fmain%2Fassets%2Fmultipaper-diagram.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FMultiPaper%2FMultiPaper%2Fraw%2Fmain%2Fassets%2Fmultipaper-diagram.jpg" alt="Minecraft cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Master database
&lt;/h4&gt;

&lt;p&gt;First, there is the master database. This database allows servers to store data in a central location that all servers can access. Servers store chunks, maps, level.dat, player data, banned players, and more in this database. This database also records which chunk belongs to which server and coordinates communication between servers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Server
&lt;/h4&gt;

&lt;p&gt;The master database is great for storing data, but not so good at synchronizing data in real time between servers. This is where peer-to-peer communication comes in. Each server establishes a connection to another server so that data between them can be updated in real time. When a player on server A attacks another player on server B, server A sends this data directly to server B so that server B can damage the player and apply any knockback.&lt;/p&gt;

&lt;h4&gt;
  
  
  Load Balancer
&lt;/h4&gt;

&lt;p&gt;The load balancer is the last component of the cluster. It automatically distributes incoming players across the individual servers so that the load is spread evenly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do I need multiple servers?
&lt;/h3&gt;

&lt;p&gt;With multiple servers, we can spread the load and host more players without overloading any single server. This setup also makes it easy to add servers when the player count grows and to remove them again when it shrinks.&lt;/p&gt;
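&lt;p&gt;Since we will run the cluster on Kubernetes below, adding or removing servers is a single command. A sketch, assuming the chart deploys the game servers as a Deployment named &lt;code&gt;multipaper-server&lt;/code&gt; (a hypothetical name; check &lt;code&gt;kubectl get deployments -n minecraft&lt;/code&gt; for the real one):&lt;/p&gt;

```shell
# Scale the game servers up to 5 replicas (deployment name is hypothetical).
NAMESPACE=minecraft
REPLICAS=5
kubectl scale deployment multipaper-server --replicas "$REPLICAS" -n "$NAMESPACE"
```

&lt;p&gt;Scaling back down works the same way with a smaller replica count.&lt;/p&gt;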

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;p&gt;You should be familiar with Kubernetes and have set up a Kubernetes cluster. I recommend &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You should also be familiar with Helm. I recommend &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm 3&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;First, you should clone the repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:alexohneander/MultiPaperHelm.git
&lt;span class="nb"&gt;cd &lt;/span&gt;MultiPaperHelm/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I installed the entire setup in a separate namespace. You can create this namespace with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace minecraft
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we install the Minecraft cluster with Helm.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;multipaper &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; minecraft
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the Helm chart is installed, you can view the port of the proxy service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe service multipaper-master-proxy &lt;span class="nt"&gt;-n&lt;/span&gt; minecraft
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the port you need to enter in your Minecraft client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;The Helm chart creates several ConfigMaps. In these ConfigMaps, you can customize the configuration of your cluster.&lt;/p&gt;

&lt;p&gt;For example, you can set the maximum number of players or change the server description.&lt;/p&gt;
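&lt;p&gt;A MultiPaper server reads a standard &lt;code&gt;server.properties&lt;/code&gt;, so the values in the ConfigMap look like any other Minecraft server config. A minimal sketch (the keys are standard Minecraft settings, the values are only examples):&lt;/p&gt;

```ini
# Illustrative server.properties values
max-players=200
motd=My highly scalable Minecraft cluster
view-distance=8
```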

&lt;p&gt;For more information on the individual config files, see &lt;a href="https://github.com/MultiPaper/MultiPaper" rel="noopener noreferrer"&gt;MultiPaper&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this setup, you can easily run a highly scalable Minecraft cluster: add new servers when you get more players, and remove them when the player count drops.&lt;/p&gt;

&lt;p&gt;You can test this setup under the following Server Address: &lt;code&gt;minecraft.alexohneander.de:31732&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to contact me via &lt;a href="mailto:moin@wellnitz-alex.de"&gt;email&lt;/a&gt; or on &lt;a href="https://matrix.to/#/@alexohneander:dev-null.rocks" rel="noopener noreferrer"&gt;Matrix&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minecraft</category>
      <category>cluster</category>
    </item>
    <item>
      <title>Writing Backup Scripts with Borg</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Mon, 19 Sep 2022 07:18:46 +0000</pubDate>
      <link>https://dev.to/alexohneander/writing-backup-scripts-with-borg-2mad</link>
      <guid>https://dev.to/alexohneander/writing-backup-scripts-with-borg-2mad</guid>
      <description>&lt;p&gt;Since we all know that the first rule is "no backup, no pity", I'll show you how you can use Borg to back up your important data in an encrypted way with relative ease.&lt;/p&gt;

&lt;p&gt;If you want to use an external hard drive instead of a second computer, you can adjust this later in the script and skip the steps that involve the second computer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;2 Linux Computers&lt;/li&gt;
&lt;li&gt;Borg&lt;/li&gt;
&lt;li&gt;SSH&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;More than 5 brain cells&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;First we need to install Borg on both computers, so that we can back up on one and store the archives on the other.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;borgbackup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create a Borg repository. We can either use an external target or a local path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;External Target:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;borg init &lt;span class="nt"&gt;--encryption&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;repokey ssh://user@192.168.2.42:22/mnt/backup/borg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Local Path:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;borg init &lt;span class="nt"&gt;--encryption&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;repokey /path/to/backup_folder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using an external destination, I recommend storing your SSH key on the destination.&lt;br&gt;
This way you don't have to enter a password, and it is simply nicer from my point of view.&lt;/p&gt;

&lt;p&gt;Once you have created everything and prepared the script with your parameters, I recommend that you run the script as a CronJob so that you no longer have to remember to back up your things yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;crontab example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Minute Hour    Day    Month  Day(Week)      command&lt;/span&gt;
&lt;span class="c"&gt;#(0-59) (0-23)  (1-31)  (1-12)  (1-7;1=Mo)&lt;/span&gt;
00 2 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /srv/scripts/borgBackup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Automated script
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="c"&gt;# VARS&lt;/span&gt;
&lt;span class="nv"&gt;BACKUPSERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"192.168.2.42"&lt;/span&gt;
&lt;span class="nv"&gt;BACKUPDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/mnt/backup/borg"&lt;/span&gt;

&lt;span class="c"&gt;# Here you can either use your external destination or the local path.&lt;/span&gt;
&lt;span class="c"&gt;# External target&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;BORG_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user@&lt;/span&gt;&lt;span class="nv"&gt;$BACKUPSERVER&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Local path&lt;/span&gt;
&lt;span class="c"&gt;# export BORG_REPO=/path/to/backup_folder&lt;/span&gt;

&lt;span class="c"&gt;# Your repository password must be stored here.&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;BORG_PASSPHRASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'S0m3th1ngV3ryC0mpl1c4t3d'&lt;/span&gt;

info&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;%s %s&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;trap&lt;/span&gt; &lt;span class="s1"&gt;'echo $( date ) Backup interrupted &amp;gt;&amp;amp;2; exit 2'&lt;/span&gt; INT TERM

info &lt;span class="s2"&gt;"Start backup"&lt;/span&gt;

&lt;span class="c"&gt;#Here the backup is created, adjust it the way you would like to have it.&lt;/span&gt;
borg create                    &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--stats&lt;/span&gt;                    &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--compression&lt;/span&gt; lz4          &lt;span class="se"&gt;\&lt;/span&gt;
    ::&lt;span class="s1"&gt;'BackupName-{now}'&lt;/span&gt;        &lt;span class="se"&gt;\&lt;/span&gt;
    /etc/nginx                  &lt;span class="se"&gt;\&lt;/span&gt;
    /home/user

&lt;span class="nv"&gt;backup_exit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$?&lt;/span&gt;

info &lt;span class="s2"&gt;"Deleting old backups"&lt;/span&gt;
&lt;span class="c"&gt;# Automatic deletion of old backups&lt;/span&gt;
borg prune                          &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--prefix&lt;/span&gt; &lt;span class="s1"&gt;'BackupName-'&lt;/span&gt;          &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--keep-daily&lt;/span&gt;    7              &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--keep-weekly&lt;/span&gt;  4                &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--keep-monthly&lt;/span&gt;  6

&lt;span class="nv"&gt;prune_exit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$?&lt;/span&gt;

&lt;span class="c"&gt;# Information on whether the backup worked.&lt;/span&gt;
&lt;span class="nv"&gt;global_exit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt; backup_exit &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; prune_exit ? backup_exit : prune_exit &lt;span class="k"&gt;))&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;global_exit&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;info &lt;span class="s2"&gt;"Backup and Prune finished successfully"&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;global_exit&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;info &lt;span class="s2"&gt;"Backup and/or Prune finished with warnings"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;info &lt;span class="s2"&gt;"Backup and/or Prune finished with errors"&lt;/span&gt;
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;global_exit&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get your data from the backup
&lt;/h3&gt;

&lt;p&gt;First, we create a temporary directory in which we can mount the backup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /tmp/borg-backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once our mount point is created, we can mount our backup repo.&lt;br&gt;
At this point, remember to use the same external destination or local path that you chose when creating the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;borg mount ssh://user@192.168.2.42/mnt/backup/borg /tmp/borg-backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once our repo is mounted, we can change into the directory and restore files via &lt;strong&gt;rsync&lt;/strong&gt; or &lt;strong&gt;cp&lt;/strong&gt;.&lt;/p&gt;
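&lt;p&gt;A minimal restore sketch (the archive name is a made-up example; list the real ones with &lt;code&gt;ls /tmp/borg-backup&lt;/code&gt;):&lt;/p&gt;

```shell
MOUNTPOINT=/tmp/borg-backup
# Archive name is an example; pick a real one from: ls /tmp/borg-backup
ARCHIVE=BackupName-2022-09-19T07:00:00
# Copy the files you need back into place.
cp -a "$MOUNTPOINT/$ARCHIVE/home/user/." /home/user/
# Unmount the repository when you are done.
borg umount "$MOUNTPOINT"
```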

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I hope you were able to follow everything and will now back up your shit sensibly. Because without a backup, we are all lost!&lt;/p&gt;

</description>
      <category>borg</category>
      <category>linux</category>
      <category>backup</category>
    </item>
    <item>
      <title>Why Docker isn't always a good idea Part 1</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Thu, 15 Sep 2022 13:01:24 +0000</pubDate>
      <link>https://dev.to/alexohneander/why-docker-isnt-always-a-good-idea-part-1-5ha1</link>
      <guid>https://dev.to/alexohneander/why-docker-isnt-always-a-good-idea-part-1-5ha1</guid>
      <description>&lt;p&gt;To briefly explain the situation:&lt;br&gt;
We have &lt;strong&gt;HAProxy&lt;/strong&gt; running on a Debian server as a Docker container. It is the entry node to a &lt;strong&gt;Docker Swarm&lt;/strong&gt; cluster.&lt;/p&gt;

&lt;p&gt;Now, in the last few days, there have been several small outages of the websites running in the &lt;strong&gt;Docker Swarm&lt;/strong&gt; cluster. After getting an overview, we noticed that no new connections could be established.&lt;/p&gt;

&lt;p&gt;As soon as we restarted the &lt;strong&gt;HAProxy&lt;/strong&gt;, everything went back to normal. After that I did some research on TCP connections and found out that there is a socket limit.&lt;/p&gt;

&lt;p&gt;In Linux, there is a limit on the number of sockets that can be open at the same time. What I did not understand at first is that this limit applies per client: a single client can establish at most &lt;strong&gt;65535&lt;/strong&gt; socket connections to a server, because that is the size of the TCP port space.&lt;/p&gt;

&lt;p&gt;This limit refers to the range of ephemeral ports that you release. We had about 35k ports available on our server (&lt;strong&gt;HAProxy&lt;/strong&gt;), and the sites went down whenever this limit was reached. Thinking about it for a moment, we should never hit that limit, since it applies per client. The problem was that Docker's overlay network did not route the client address cleanly through the NAT, so every connection appeared to come from a single client.&lt;/p&gt;
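&lt;p&gt;You can check the ephemeral port range your kernel actually hands out for outgoing connections. A quick sketch (Linux only; the typical default range is 32768 to 60999, so roughly 28k usable ports per source IP):&lt;/p&gt;

```shell
# The kernel's ephemeral port range limits outgoing connections per source IP.
cat /proc/sys/net/ipv4/ip_local_port_range
# Number of usable source ports for one source IP talking to one destination:
awk '{ print $2 - $1 + 1 }' /proc/sys/net/ipv4/ip_local_port_range
```

&lt;p&gt;If every connection arrives from one source IP, this number is your effective concurrent-connection ceiling.&lt;/p&gt;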

&lt;p&gt;After stopping the Docker container and installing &lt;strong&gt;HAProxy&lt;/strong&gt; natively on the server, we were able to get past that limit as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumption
&lt;/h2&gt;

&lt;p&gt;Because we NAT all the requests through the Docker network, the source address is always the same. This is how we reach the socket limit. If we omit the NAT and use HAProxy natively, we do not reach this limit, because the source address is no longer always the same.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fthumb%2Fc%2Fc7%2FNAT_Concept-en.svg%2F1920px-NAT_Concept-en.svg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fthumb%2Fc%2Fc7%2FNAT_Concept-en.svg%2F1920px-NAT_Concept-en.svg.png" title="Network Address Translation" alt="Network Address Translation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this setup, we add overhead to the system that we don't need: an extra abstraction layer, every request has to pass through the Docker network, and we run into the socket limit. All of these points fall away when we run HAProxy natively.&lt;/p&gt;

&lt;p&gt;If we run a lot of microservices, something like Docker is important, because the containers share the kernel and deployment becomes much easier.&lt;/p&gt;

&lt;p&gt;But if we have only one application that is very important, it is better to keep it simple.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>network</category>
      <category>haproxy</category>
    </item>
    <item>
      <title>Baremetal CNI Setup with Cilium</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Thu, 20 Jan 2022 13:04:48 +0000</pubDate>
      <link>https://dev.to/alexohneander/baremetal-cni-setup-with-cilium-6df</link>
      <guid>https://dev.to/alexohneander/baremetal-cni-setup-with-cilium-6df</guid>
      <description>&lt;p&gt;In a freshly set up Kubernetes cluster, we need a so-called CNI. This CNI is not always present after installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Container Network Interface (CNI)?
&lt;/h2&gt;

&lt;p&gt;CNI is a network framework that allows the dynamic configuration of networking resources through a group of specifications and libraries written in Go. The specification outlines an interface for a plugin that configures the network, provisions IP addresses, and maintains multi-host connectivity.&lt;/p&gt;

&lt;p&gt;In the Kubernetes context, the CNI seamlessly integrates with the kubelet to allow automatic network configuration between pods using an underlay or overlay network. An underlay network is defined at the physical level of the networking layer composed of routers and switches. In contrast, the overlay network uses a virtual interface like VxLAN to encapsulate the network traffic.&lt;/p&gt;

&lt;p&gt;Once the network configuration type is specified, the runtime defines a network for containers to join and calls the CNI plugin to add the interface into the container namespace, allocating the linked subnetwork and routes via calls to an IPAM (IP Address Management) plugin.&lt;/p&gt;

&lt;p&gt;In addition to Kubernetes networking, CNI also supports Kubernetes-based platforms like OpenShift to provide unified container communication across the cluster through a software-defined networking (SDN) approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Cilium?
&lt;/h3&gt;

&lt;p&gt;Cilium is an open-source, highly scalable Kubernetes CNI solution developed by Linux kernel developers. Cilium secures network connectivity between Kubernetes services by adding high-level application rules utilizing eBPF filtering technology. Cilium is deployed as a daemon &lt;code&gt;cilium-agent&lt;/code&gt; on each node of the Kubernetes cluster, where it manages operations and translates the network definitions into eBPF programs.&lt;/p&gt;

&lt;p&gt;Communication between pods happens over an overlay network or via a routing protocol; both IPv4 and IPv6 addresses are supported in either case. The overlay implementation uses VXLAN tunneling for packet encapsulation, while native routing uses unencapsulated BGP.&lt;/p&gt;

&lt;p&gt;Cilium can be used across multiple Kubernetes clusters and provides multi-CNI features, a high level of inspection, and pod-to-pod connectivity across all clusters.&lt;/p&gt;

&lt;p&gt;Its network- and application-layer awareness enables packet inspection down to the application protocol the packets are using.&lt;/p&gt;

&lt;p&gt;Cilium also has support for Kubernetes Network Policies through HTTP request filters. The policy configuration can be written as a YAML or JSON file and offers both ingress and egress enforcement. Admins can accept or reject requests based on the request method or path header while integrating policies with a service mesh like Istio.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preparation
&lt;/h3&gt;

&lt;p&gt;For the installation we need the Cilium CLI.&lt;br&gt;
We can install this with the following commands:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--remote-name-all&lt;/span&gt; https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz&lt;span class="o"&gt;{&lt;/span&gt;,.sha256sum&lt;span class="o"&gt;}&lt;/span&gt;
shasum &lt;span class="nt"&gt;-a&lt;/span&gt; 256 &lt;span class="nt"&gt;-c&lt;/span&gt; cilium-darwin-amd64.tar.gz.sha256sum
&lt;span class="nb"&gt;sudo tar &lt;/span&gt;xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
&lt;span class="nb"&gt;rm &lt;/span&gt;cilium-darwin-amd64.tar.gz&lt;span class="o"&gt;{&lt;/span&gt;,.sha256sum&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--remote-name-all&lt;/span&gt; https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz&lt;span class="o"&gt;{&lt;/span&gt;,.sha256sum&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;sha256sum&lt;/span&gt; &lt;span class="nt"&gt;--check&lt;/span&gt; cilium-linux-amd64.tar.gz.sha256sum
&lt;span class="nb"&gt;sudo tar &lt;/span&gt;xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
&lt;span class="nb"&gt;rm &lt;/span&gt;cilium-linux-amd64.tar.gz&lt;span class="o"&gt;{&lt;/span&gt;,.sha256sum&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install Cilium
&lt;/h3&gt;

&lt;p&gt;You can install Cilium on any Kubernetes cluster with these generic instructions. The installer will attempt to pick the best configuration options for you automatically.&lt;/p&gt;

&lt;h4&gt;
  
  
  Requirements
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes must be configured to use CNI&lt;/li&gt;
&lt;li&gt;Linux kernel &amp;gt;= 4.9.17&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Install
&lt;/h4&gt;

&lt;p&gt;Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the installation fails for some reason, run &lt;code&gt;cilium status&lt;/code&gt; to retrieve the overall status of the Cilium deployment and inspect the logs of whatever pods are failing to be deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Validate the Installation
&lt;/h3&gt;

&lt;p&gt;To validate that Cilium has been properly installed, you can run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cilium status &lt;span class="nt"&gt;--wait&lt;/span&gt;
   /¯¯&lt;span class="se"&gt;\&lt;/span&gt;
/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\ &lt;/span&gt;   Cilium:         OK
&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/    Operator:       OK
/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\ &lt;/span&gt;   Hubble:         disabled
&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/    ClusterMesh:    disabled
   &lt;span class="se"&gt;\_&lt;/span&gt;_/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to validate that your cluster has proper network connectivity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cilium connectivity &lt;span class="nb"&gt;test
&lt;/span&gt;ℹ️  Monitor aggregation detected, will skip some flow validation steps
✨ &lt;span class="o"&gt;[&lt;/span&gt;k8s-cluster] Creating namespace &lt;span class="k"&gt;for &lt;/span&gt;connectivity check...
&lt;span class="o"&gt;(&lt;/span&gt;...&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;---------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 Test Report
&lt;span class="nt"&gt;---------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
✅ 69/69 tests successful &lt;span class="o"&gt;(&lt;/span&gt;0 warnings&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cni</category>
      <category>cluster</category>
      <category>devops</category>
    </item>
    <item>
      <title>Site to Site VPN for Google Kubernetes Engine</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Thu, 06 May 2021 06:37:08 +0000</pubDate>
      <link>https://dev.to/alexohneander/site-to-site-vpn-for-google-kubernetes-engine-3ign</link>
      <guid>https://dev.to/alexohneander/site-to-site-vpn-for-google-kubernetes-engine-3ign</guid>
<description>&lt;p&gt;In this tutorial I will briefly and concisely explain how to set up a site-to-site VPN for the Google Cloud network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;We need two virtual machines: one on our office side and one on the Google side.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup OpenVPN Clients
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Site-to-Site Client Office Side
&lt;/h5&gt;

&lt;p&gt;First we need to install OpenVPN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;openvpn &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that we add our OpenVPN configuration under this path &lt;code&gt;/etc/openvpn/s2s.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;s2s.conf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use a dynamic tun device.
# For Linux 2.2 or non-Linux OSes,
# you may want to use an explicit
# unit number such as "tun1".
# OpenVPN also supports virtual
# ethernet "tap" devices.
dev tun

# Our OpenVPN peer is the Google gateway.
remote IP_GOOGLE_VPN_CLIENT 

ifconfig 4.1.0.2 4.1.0.1

route 10.156.0.0 255.255.240.0            # Google Cloud VM Network
route 10.24.0.0 255.252.0.0               # Google Kubernetes Pod Network

push "route 192.168.10.0 255.255.255.0"   # Office Network 

# Our pre-shared static key
#secret static.key

# Cipher to use
cipher AES-256-CBC

port 1195

user nobody
group nogroup

# Uncomment this section for a more reliable detection when a system
# loses its connection.  For example, dial-ups or laptops that
# travel to other locations.
 ping 15
 ping-restart 45
 ping-timer-rem
 persist-tun
 persist-key

# Verbosity level.
# 0 -- quiet except for fatal errors.
# 1 -- mostly quiet, but display non-fatal network errors.
# 3 -- medium output, good for normal operation.
# 9 -- verbose, good for troubleshooting
verb 3

log /etc/openvpn/s2s.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
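&lt;p&gt;Note that the &lt;code&gt;secret static.key&lt;/code&gt; line is commented out in the config above. If you enable it, both sides need the same pre-shared key; it can be generated once and copied to the peer over a secure channel:&lt;/p&gt;

```shell
# generate a pre-shared static key for the tunnel (classic OpenVPN 2.x syntax)
openvpn --genkey --secret /etc/openvpn/static.key
```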



&lt;p&gt;We also have to enable IPv4 forwarding in the kernel, so we open &lt;code&gt;/etc/sysctl.conf&lt;/code&gt;, uncomment the following line, and apply it with &lt;code&gt;sysctl -p&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net.ipv4.ip_forward=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then start our OpenVPN client with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start openvpn@s2s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
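&lt;p&gt;So that the tunnel also comes back after a reboot, it may be worth enabling the unit as well (a small addition that is not part of the original setup):&lt;/p&gt;

```shell
# start the tunnel at boot and check its current state
systemctl enable openvpn@s2s
systemctl status openvpn@s2s --no-pager
```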



&lt;p&gt;On the office side we also have to open the OpenVPN port (UDP 1195) so that the other side can connect.&lt;/p&gt;
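&lt;p&gt;How exactly the port is opened depends on your firewall; as a sketch, assuming plain iptables is in use:&lt;/p&gt;

```shell
# allow incoming OpenVPN traffic on UDP 1195 (adjust to your firewall setup)
iptables -A INPUT -p udp --dport 1195 -j ACCEPT
```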

&lt;h5&gt;
  
  
  Site-to-Site Client Google Side
&lt;/h5&gt;

&lt;p&gt;When setting up the OpenVPN client on the Google side, one setting has to be enabled when the machine is created. In the instance's network settings, enable the following option:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FOXEkhxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FOXEkhxo.png" alt="Google Cloud Network Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this side, too, we install OpenVPN and then add this config at &lt;code&gt;/etc/openvpn/s2s.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use a dynamic tun device.
# For Linux 2.2 or non-Linux OSes,
# you may want to use an explicit
# unit number such as "tun1".
# OpenVPN also supports virtual
# ethernet "tap" devices.
dev tun

# Our OpenVPN peer is the Office gateway.
remote IP_OFFICE_VPN_CLIENT 

ifconfig 4.1.0.1 4.1.0.2

route 192.168.10.0 255.255.255.0          # Office Network

push "route 10.156.0.0 255.255.240.0"     # Google Cloud VM Network
push "route 10.24.0.0 255.252.0.0"        # Google Kubernetes Pod Network

# Our pre-shared static key
#secret static.key

# Cipher to use
cipher AES-256-CBC

port 1195

user nobody
group nogroup

# Uncomment this section for a more reliable detection when a system
# loses its connection.  For example, dial-ups or laptops that
# travel to other locations.
 ping 15
 ping-restart 45
 ping-timer-rem
 persist-tun
 persist-key

# Verbosity level.
# 0 -- quiet except for fatal errors.
# 1 -- mostly quiet, but display non-fatal network errors.
# 3 -- medium output, good for normal operation.
# 9 -- verbose, good for troubleshooting
verb 3

log /etc/openvpn/s2s.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also have to enable IPv4 forwarding in the kernel, so we open &lt;code&gt;/etc/sysctl.conf&lt;/code&gt;, uncomment the following line, and apply it with &lt;code&gt;sysctl -p&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net.ipv4.ip_forward=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Connection test
&lt;/h5&gt;

&lt;p&gt;Now that both clients are configured, we can test the connection. Start both clients with systemctl, then follow the logs with &lt;code&gt;tail -f /etc/openvpn/s2s.log&lt;/code&gt; and wait for this message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Wed May  5 08:28:01 2021 /sbin/ip route add 10.28.0.0/20 via 4.1.0.1
Wed May  5 08:28:01 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]0.0.0.0:1195
Wed May  5 08:28:01 2021 Socket Buffers: R=[212992-&amp;gt;212992] S=[212992-&amp;gt;212992]
Wed May  5 08:28:01 2021 UDP link local (bound): [AF_INET][undef]:1195
Wed May  5 08:28:01 2021 UDP link remote: [AF_INET]0.0.0.0:1195
Wed May  5 08:28:01 2021 GID set to nogroup
Wed May  5 08:28:01 2021 UID set to nobody
Wed May  5 08:28:11 2021 Peer Connection Initiated with [AF_INET]0.0.0.0:1195
Wed May  5 08:28:12 2021 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed May  5 08:28:12 2021 Initialization Sequence Completed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we can't establish a connection, we need to check whether UDP port 1195 is open on both sides.&lt;/p&gt;
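&lt;p&gt;A quick way to narrow problems down is to ping across the tunnel first and only then a host in the remote network (run from the office side; 10.156.0.2 is a hypothetical VM address in the Google Cloud VM network):&lt;/p&gt;

```shell
# 1) the tunnel endpoint on the Google side (from the office ifconfig line above)
ping -c 3 4.1.0.1
# 2) a machine behind the tunnel, to verify routing and IP forwarding
ping -c 3 10.156.0.2
```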

&lt;h4&gt;
  
  
  Routing Google Cloud Network
&lt;/h4&gt;

&lt;p&gt;Once both clients are installed and configured, we need to set the routes on the Google side. I won't cover the office side here, as it differs from setup to setup, but the Google networks have to be routed there as well.&lt;/p&gt;

&lt;p&gt;To set the route on Google, go to the VPC network settings and then to Routes. There you specify your office network so that clients in the Google network know how to reach it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6Q2Drf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6Q2Drf4.png" alt="Google Cloud Network Route"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  IP-Masquerade-Agent
&lt;/h4&gt;

&lt;p&gt;IP masquerading is a form of network address translation (NAT) used to perform many-to-one IP address translations, which allows multiple clients to access a destination using a single IP address. A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses. This is useful in environments that expect to only receive packets from node IP addresses.&lt;/p&gt;

&lt;p&gt;You have to edit the ip-masq-agent: its configuration is what lets the pods on the nodes reach other parts of the GCP VPC network, and in particular the VPN. In other words, it allows pods to communicate with the devices that are reachable through the VPN.&lt;/p&gt;

&lt;p&gt;Everything here happens in the kube-system namespace. First we create the ConfigMap that configures the ip-masq-agent; put this in a file named &lt;code&gt;config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;nonMasqueradeCIDRs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.24.0.0/14&lt;/span&gt;  &lt;span class="c1"&gt;# The IPv4 CIDR the cluster is using for Pods (required)&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.156.0.0/20&lt;/span&gt; &lt;span class="c1"&gt;# The IPv4 CIDR of the subnetwork the cluster is using for Nodes (optional, works without but I guess its better with it)&lt;/span&gt;
&lt;span class="na"&gt;masqLinkLocal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;resyncInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;60s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run &lt;code&gt;kubectl create configmap ip-masq-agent --from-file config --namespace kube-system&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Afterwards, deploy the ip-masq-agent itself; put this in an &lt;code&gt;ip-masq-agent.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip-masq-agent&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip-masq-agent&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hostNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip-masq-agent&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google-containers/ip-masq-agent-amd64:v2.4.1&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--masq-chain=IP-MASQ&lt;/span&gt;
            &lt;span class="c1"&gt;# To non-masquerade reserved IP ranges by default, uncomment the line below.&lt;/span&gt;
            &lt;span class="c1"&gt;# - --nomasq-all-reserved-ranges&lt;/span&gt;
        &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
            &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/config&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
          &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Note this ConfigMap must be created in the same namespace as the daemon pods - this spec uses kube-system&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip-masq-agent&lt;/span&gt;
            &lt;span class="na"&gt;optional&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
            &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="c1"&gt;# The daemon looks for its config in a YAML file at /etc/config/ip-masq-agent&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
                &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip-masq-agent&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoExecute&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CriticalAddonsOnly"&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Exists"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply it with &lt;code&gt;kubectl -n kube-system apply -f ip-masq-agent.yml&lt;/code&gt;.&lt;/p&gt;
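&lt;p&gt;To check that the agent is actually running on every node, you can look at the DaemonSet and its logs (standard kubectl commands, nothing specific to this setup):&lt;/p&gt;

```shell
# every node should report a ready ip-masq-agent pod
kubectl -n kube-system get daemonset ip-masq-agent
# the agent logs show which CIDRs it treats as non-masquerade
kubectl -n kube-system logs -l k8s-app=ip-masq-agent --tail=20
```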

&lt;p&gt;Our site-to-site VPN should now be set up. Test whether you can ping the pods and whether all other services behave as you expect.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>openvpn</category>
      <category>google</category>
    </item>
    <item>
      <title>Kubernetes Storage Backup Discussion</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Mon, 12 Apr 2021 07:55:05 +0000</pubDate>
      <link>https://dev.to/alexohneander/kubernetes-storage-backup-discussion-3hb</link>
      <guid>https://dev.to/alexohneander/kubernetes-storage-backup-discussion-3hb</guid>
      <description>&lt;p&gt;I would be interested to know how you back up your storage and what tools you use.&lt;/p&gt;

&lt;p&gt;In our development environment we use an NFS share, which is backed up by a cronjob every 4 hours.&lt;br&gt;
MySQL and MongoDB are also backed up.&lt;/p&gt;

&lt;p&gt;In the live environment we don't use NFS, for many reasons.&lt;br&gt;
Here I ask myself how to back up the individual ReadWriteOnce volumes.&lt;br&gt;
So far my idea is: shut down the deployment with a cronjob, mount the storage into a backup image, and once that is done, start the deployment again.&lt;/p&gt;

&lt;p&gt;How do you do that?&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>backup</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Backup MySQL Databases in Kubernetes</title>
      <dc:creator>Alex Wellnitz</dc:creator>
      <pubDate>Wed, 03 Mar 2021 14:22:42 +0000</pubDate>
      <link>https://dev.to/alexohneander/backup-mysql-databases-in-kubernetes-2g6k</link>
      <guid>https://dev.to/alexohneander/backup-mysql-databases-in-kubernetes-2g6k</guid>
      <description>&lt;p&gt;In this post, we show you how to back up a MySQL server using Kubernetes CronJobs.&lt;/p&gt;

&lt;p&gt;In our case, we do not have a managed MySQL server, but we still want to back it up to our NAS so that we have a copy in case of emergency.&lt;br&gt;
For this we first build a container image that can execute our tasks, since we will certainly need several jobs to back up our cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  CronJob Agent Container
&lt;/h2&gt;

&lt;p&gt;First, we'll show you our Dockerfile so you know what we need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:3.10&lt;/span&gt;

&lt;span class="c"&gt;# Update&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk &lt;span class="nt"&gt;--update&lt;/span&gt; add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; bash nodejs-current yarn curl busybox-extras vim rsync git mysql-client openssh-client 
&lt;span class="k"&gt;RUN &lt;/span&gt;curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv&lt;/span&gt; ./kubectl /usr/local/bin/kubectl

&lt;span class="c"&gt;# Scripts&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /srv/jobs
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; jobs/* /srv/jobs/&lt;/span&gt;

&lt;span class="c"&gt;# Backup Folder&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /var/backup
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /var/backup/mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
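&lt;p&gt;The image then has to be built and pushed so the cluster can pull it; the image name below simply mirrors the placeholder used in the CronJob manifest further down:&lt;/p&gt;

```shell
# build and publish the agent image (replace xxx with your registry/user)
docker build -t xxx/cronjob-agent .
docker push xxx/cronjob-agent
```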



&lt;h2&gt;
  
  
  Backup Script
&lt;/h2&gt;

&lt;p&gt;And now our backup script which the container executes.&lt;/p&gt;

&lt;p&gt;Our script is quite simple: using the mysql client we list all databases, export each one as an SQL file, pack the dumps into a tar.gz archive, and send the archive to our NAS. The CronJob below runs this every 8 hours.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;############# SET VARIABLES #############&lt;/span&gt;

&lt;span class="c"&gt;# Env Variables&lt;/span&gt;
&lt;span class="nv"&gt;BACKUPSERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"8.8.8.8"&lt;/span&gt; &lt;span class="c"&gt;# Backup Server Ip&lt;/span&gt;
&lt;span class="nv"&gt;BACKUPDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/backup/mysql
&lt;span class="nv"&gt;BACKUPREMOTEDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/mnt/backup/kubernetes/"&lt;/span&gt;
&lt;span class="nv"&gt;HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mariadb.default"&lt;/span&gt;
&lt;span class="nv"&gt;NOW&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +&lt;span class="s2"&gt;"%Y-%m-%d"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;STARTTIME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +&lt;span class="s2"&gt;"%s"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysqlUser
&lt;span class="nv"&gt;PASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysqlPassword


&lt;span class="c"&gt;############# BUILD ENVIROMENT #############&lt;/span&gt;
&lt;span class="c"&gt;# Check if temp Backup Directory is empty&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Take action &lt;/span&gt;&lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;&lt;span class="s2"&gt; is not Empty"&lt;/span&gt;
    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;.gz
    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;.mysql
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;&lt;span class="s2"&gt; is Empty"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;############# BACKUP SQL DATABASES #############&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;DB &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;mysql &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="nv"&gt;$PASS&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="nv"&gt;$HOST&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'show databases'&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--skip-column-names&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;mysqldump &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="nv"&gt;$PASS&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="nv"&gt;$HOST&lt;/span&gt; &lt;span class="nt"&gt;--lock-tables&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="nv"&gt;$DB&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$DB&lt;/span&gt;&lt;span class="s2"&gt;.sql"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;############# ZIP BACKUP #############&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-zcvf&lt;/span&gt; backup-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOW&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.tar.gz &lt;span class="k"&gt;*&lt;/span&gt;.sql

&lt;span class="c"&gt;############# MOVE BACKUP TO REMOTE #############&lt;/span&gt;
rsync &lt;span class="nt"&gt;-avz&lt;/span&gt; &lt;span class="nv"&gt;$BACKUPDIR&lt;/span&gt;/backup-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOW&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.tar.gz root@&lt;span class="nv"&gt;$BACKUPSERVER&lt;/span&gt;:&lt;span class="nv"&gt;$BACKUPREMOTEDIR&lt;/span&gt;

&lt;span class="c"&gt;# done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
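&lt;p&gt;The script only ever adds archives on the NAS. A possible addition, not part of the original script, is to prune old archives on the backup server; a minimal sketch, assuming GNU find and a default 14-day retention:&lt;/p&gt;

```shell
#!/bin/bash
# prune_backups DIR DAYS - delete backup archives in DIR older than DAYS days
# (hypothetical helper; the default path and retention are assumptions)
prune_backups() {
    local dir="${1:-/mnt/backup/kubernetes}"
    local days="${2:-14}"
    find "$dir" -name 'backup-*.tar.gz' -mtime +"$days" -delete
}
```

Called for example as `prune_backups /mnt/backup/kubernetes 14` from the NAS's own crontab.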



&lt;h2&gt;
  
  
  Kubernetes CronJob Deployment
&lt;/h2&gt;

&lt;p&gt;Finally, here is the Kubernetes deployment for our agent.&lt;/p&gt;

&lt;p&gt;In the deployment, our agent is defined as a CronJob that runs every 8 hours.&lt;br&gt;
In addition, we mount an SSH key from a ConfigMap so that the agent can write to the NAS with a reasonable level of security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backup-mariadb&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;8&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
  &lt;span class="na"&gt;successfulJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;failedJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-agent&lt;/span&gt;
              &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;xxx/cronjob-agent&lt;/span&gt;
              &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bash"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/srv/jobs/backup-mariadb.sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
              &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.ssh/id_rsa.pub&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-default-config&lt;/span&gt;
                  &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;id_rsa.pub&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.ssh/id_rsa&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-default-config&lt;/span&gt;
                  &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;id_rsa&lt;/span&gt;
                  &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/root/.ssh/config&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-default-config&lt;/span&gt;
                  &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
          &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-default-config&lt;/span&gt;
              &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cronjob-default-config&lt;/span&gt;
                &lt;span class="na"&gt;defaultMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;
          &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
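&lt;p&gt;After saving the manifest, apply it and, if you don't want to wait for the schedule, trigger a run by hand (standard kubectl; the file name is an assumption):&lt;/p&gt;

```shell
kubectl apply -f backup-mariadb.yml
# create a one-off job from the CronJob to test the backup immediately
kubectl create job backup-mariadb-manual --from=cronjob/backup-mariadb
kubectl logs -f job/backup-mariadb-manual
```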



</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>cronjob</category>
      <category>bash</category>
    </item>
  </channel>
</rss>
