<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arthur</title>
    <description>The latest articles on DEV Community by Arthur (@arthurkay).</description>
    <link>https://dev.to/arthurkay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F167611%2Fe737fe29-4581-4cca-adcd-f6d8780d45fd.jpeg</url>
      <title>DEV Community: Arthur</title>
      <link>https://dev.to/arthurkay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arthurkay"/>
    <language>en</language>
    <item>
      <title>Setting Up OpenZFS on Rocky Linux</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Thu, 28 Sep 2023 22:45:17 +0000</pubDate>
      <link>https://dev.to/arthurkay/setting-up-openzfs-on-rocky-linux-351m</link>
      <guid>https://dev.to/arthurkay/setting-up-openzfs-on-rocky-linux-351m</guid>
      <description>&lt;p&gt;OpenZFS on RHEL-based distributions has two main implementations, outlined below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;DKMS (Dynamic Kernel Module Support)&lt;br&gt;
This implementation is built on the premise that the system should automatically recompile all DKMS modules whenever a new kernel version is installed. This allows drivers and devices outside the mainline kernel to continue working after a Linux kernel upgrade.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kABI-tracking kmod packages&lt;br&gt;
The Kernel Application Binary Interface (kABI) is a set of in-kernel symbols used by drivers and other kernel modules. Each major and minor RHEL kernel release whitelists a set of in-kernel symbols. A kABI-tracking kmod package contains a kernel module that is compatible with a given kABI, that is, with a given major and minor release of the EL kernel.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The RHEL OpenZFS packages are provided by the following repository:&lt;/p&gt;

&lt;p&gt;For EL7:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;https://zfsonlinux.org/epel/zfs-release-2-3&lt;span class="si"&gt;$(&lt;/span&gt;rpm &lt;span class="nt"&gt;--eval&lt;/span&gt; &lt;span class="s2"&gt;"%{dist}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;.noarch.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and for EL8 and 9:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
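&lt;p&gt;The &lt;code&gt;$(rpm --eval "%{dist}")&lt;/code&gt; fragment expands to the distribution tag of the running system, which is why the same command works on every release. As a sketch, assuming the tag is &lt;code&gt;.el9&lt;/code&gt;:&lt;/p&gt;

```shell
# Emulate the URL construction; on a real EL9 host, `rpm --eval "%{dist}"`
# would produce the ".el9" value hard-coded here for illustration.
dist=".el9"
url="https://zfsonlinux.org/epel/zfs-release-2-3${dist}.noarch.rpm"
echo "$url"
# prints: https://zfsonlinux.org/epel/zfs-release-2-3.el9.noarch.rpm
```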



&lt;p&gt;After adding that repository, update your repository cache with either:&lt;/p&gt;

&lt;p&gt;EL7&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;EL8 and above&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dnf update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you have the option to install either the DKMS or the kABI-tracking kmod style packages.&lt;br&gt;
I have not had any luck with the DKMS style package (which is the default), so my personal preference is the kABI-tracking kmod.&lt;/p&gt;

&lt;p&gt;The commands that follow only show how to install OpenZFS on EL 8 and above. If you need to do this on EL 7, replace &lt;code&gt;dnf&lt;/code&gt; with &lt;code&gt;yum&lt;/code&gt; and &lt;code&gt;dnf config-manager&lt;/code&gt; with &lt;code&gt;yum-config-manager&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;DKMS&lt;/p&gt;

&lt;p&gt;Installing the DKMS style requires the following three (3) commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; epel-release
dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kernel-devel
dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kABI Style&lt;/p&gt;

&lt;p&gt;To install the kABI-tracking kmod, disable the default DKMS repository and then install zfs with the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dnf config-manager &lt;span class="nt"&gt;--disable&lt;/span&gt; zfs
dnf config-manager &lt;span class="nt"&gt;--enable&lt;/span&gt; zfs-kmod
dnf &lt;span class="nb"&gt;install &lt;/span&gt;zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, the OpenZFS kernel modules are loaded automatically when a ZFS pool is detected. If you would prefer to always load the modules at boot time, you can add a configuration file under &lt;code&gt;/etc/modules-load.d&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Below is a helper command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo &lt;/span&gt;zfs &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/etc/modules-load.d/zfs.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can confirm that zfs is set up properly by loading the module with modprobe, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/sbin/modprobe zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the command returns silently, all is well. Otherwise, get a cup of coffee and have fun debugging the issue.&lt;/p&gt;
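&lt;p&gt;If you would rather script the check, the loaded-module list from &lt;code&gt;lsmod&lt;/code&gt; can be inspected. The &lt;code&gt;check_module&lt;/code&gt; helper below is purely illustrative:&lt;/p&gt;

```shell
# check_module NAME reads `lsmod`-style output on stdin and reports
# whether NAME appears in the first column.
check_module() {
  if grep -q "^$1 "; then
    echo "$1 module loaded"
  else
    echo "$1 module not loaded"
  fi
}

# On a real system:
#   lsmod | check_module zfs
```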

</description>
      <category>zfs</category>
      <category>opensource</category>
      <category>filesystem</category>
      <category>linux</category>
    </item>
    <item>
      <title>Kubernetes (RKE2)</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Thu, 17 Aug 2023 13:08:00 +0000</pubDate>
      <link>https://dev.to/arthurkay/kubernetes-rke2-eoi</link>
      <guid>https://dev.to/arthurkay/kubernetes-rke2-eoi</guid>
      <description>&lt;p&gt;This guide will help you quickly launch a cluster with default options for Rancher Kubernetes Engine (RKE2).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Make sure your environment fulfills the requirements. If NetworkManager is installed and enabled on your hosts, ensure that it is configured to ignore CNI-managed interfaces.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NOTE:&lt;/code&gt; For RKE2 versions 1.21 and higher, if the host kernel supports AppArmor, the AppArmor tools (usually available via the apparmor-parser package) must also be present prior to installing RKE2.&lt;/p&gt;

&lt;p&gt;The RKE2 installation process must be run as the root user or through sudo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server Node Installation
&lt;/h2&gt;

&lt;p&gt;RKE2 provides an installation script that is a convenient way to install it as a service on systemd-based systems. This script is available at &lt;a href="https://get.rke2.io"&gt;https://get.rke2.io&lt;/a&gt;. To install RKE2 using this method, do the following:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Run the installer
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.rke2.io | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install the rke2-server service and the rke2 binary onto your machine. It will fail unless it is run as the root user or through sudo.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Enable the rke2-server service
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;rke2-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Start the service
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start rke2-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Follow the logs, if you like
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;journalctl &lt;span class="nt"&gt;-u&lt;/span&gt; rke2-server &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The rke2-server service will be installed and configured to automatically restart after node reboots or if the process crashes or is killed.&lt;br&gt;
Additional utilities will be installed at /var/lib/rancher/rke2/bin/.&lt;/p&gt;

&lt;p&gt;They include: kubectl, crictl, and ctr. Note that these are not on your path by default.&lt;/p&gt;

&lt;p&gt;Two cleanup scripts, rke2-killall.sh and rke2-uninstall.sh, will be installed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/usr/local/bin for regular file systems&lt;/li&gt;
&lt;li&gt;/opt/rke2/bin for read-only and btrfs file systems&lt;/li&gt;
&lt;li&gt;INSTALL_RKE2_TAR_PREFIX/bin if INSTALL_RKE2_TAR_PREFIX is set&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A kubeconfig file will be written to /etc/rancher/rke2/rke2.yaml.&lt;/p&gt;

&lt;p&gt;A token that can be used to register other servers or agent nodes will be created at /var/lib/rancher/rke2/server/node-token.&lt;/p&gt;
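&lt;p&gt;To talk to the new cluster from the server node itself, you can point the bundled kubectl at the generated kubeconfig. A minimal sketch:&lt;/p&gt;

```shell
# Use the kubeconfig written by rke2-server and the utilities bundled with RKE2
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH="$PATH:/var/lib/rancher/rke2/bin"

# Then confirm the control plane is up:
#   kubectl get nodes
```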

</description>
    </item>
    <item>
      <title>Setting Up Kubernetes Cluster with K3S</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Tue, 18 Apr 2023 11:51:55 +0000</pubDate>
      <link>https://dev.to/arthurkay/setting-up-kubernetes-cluster-with-k3s-4m7k</link>
      <guid>https://dev.to/arthurkay/setting-up-kubernetes-cluster-with-k3s-4m7k</guid>
      <description>&lt;p&gt;Deploying a high availability Kubernetes cluster using k3s with three masters and etcd as the storage backend is a reliable way to ensure that your applications keep running even when one or more nodes fail. In this article, we will guide you through the process of setting up such a cluster with detailed examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you start deploying a k3s cluster with high availability, you will need to prepare the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Three or more servers running any Linux distribution (these servers will be used as Kubernetes nodes).&lt;/li&gt;
&lt;li&gt;A user account with sudo privileges on each server.&lt;/li&gt;
&lt;li&gt;A working network connection between the servers.&lt;/li&gt;
&lt;li&gt;A firewall on each server allowing traffic on ports 6443, 2379, 2380, and 8472.&lt;/li&gt;
&lt;/ul&gt;
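&lt;p&gt;On servers that use firewalld, the listed ports can be opened as sketched below (firewalld is an assumption about your setup; note that 8472 is UDP, used by the flannel VXLAN overlay, while the others are TCP):&lt;/p&gt;

```shell
# Open the K3s/etcd ports; skips gracefully when firewalld is not installed
if command -v firewall-cmd >/dev/null 2>/dev/null; then
  for port in 6443/tcp 2379/tcp 2380/tcp 8472/udp; do
    firewall-cmd --permanent --add-port="$port"
  done
  firewall-cmd --reload
else
  echo "firewall-cmd not found; open the ports with your firewall of choice"
fi
```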

&lt;h2&gt;
  
  
  NEW CLUSTER
&lt;/h2&gt;

&lt;p&gt;To run K3s in this mode, you must have an odd number of server nodes. We recommend starting with three. The odd number of nodes prevents a split brain: a state of a server cluster where nodes diverge from each other and conflict when handling incoming I/O operations. The servers may record the same data inconsistently or compete for resources. With an odd number of nodes there will always be a majority, which is elected as the source of truth when a conflict arises.&lt;br&gt;
To get started, launch a server node with the cluster-init flag to enable clustering, and a token that will be used as a shared secret to join additional servers to the cluster. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; Replace the token with something secure and keep it safe from third-party access&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io |K3S_TOKEN&lt;span class="o"&gt;=&lt;/span&gt;123456789 sh &lt;span class="nt"&gt;-s&lt;/span&gt; - server &lt;span class="nt"&gt;--cluster-init&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After launching the first server, join the second and third servers to the cluster using the shared secret and the IP address of the first node (the one used in the initial step). In this scenario, we will assume that IP is 10.51.10.10:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;K3S_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;123456789 sh &lt;span class="nt"&gt;-s&lt;/span&gt; - server &lt;span class="nt"&gt;--server&lt;/span&gt; https://10.51.10.10:6443

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a highly available control plane. Any successfully clustered server can be used in the --server argument to join additional server and worker nodes. Joining additional worker nodes to the cluster follows the same procedure as a single-server cluster.&lt;br&gt;
There are a few config flags that must be the same on all server nodes (you don't have to worry about these if you use the method above, as it configures them automatically):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network-related flags: --cluster-dns, --cluster-domain, --cluster-cidr, --service-cidr&lt;/li&gt;
&lt;li&gt;Flags controlling the deployment of certain components: --disable-helm-controller, --disable-kube-proxy, --disable-network-policy and any component passed to --disable&lt;/li&gt;
&lt;li&gt;Feature-related flags: --secrets-encryption&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Cluster Access
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;kubeconfig&lt;/code&gt; file stored at &lt;code&gt;/etc/rancher/k3s/k3s.yaml&lt;/code&gt; is used to configure access to the Kubernetes cluster. If you have installed upstream Kubernetes command line tools such as kubectl or helm, you will need to configure them with the correct kubeconfig path. This can be done by either exporting the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable or invoking the --kubeconfig command line flag. Refer to the examples below for details.&lt;br&gt;
The configuration file &lt;code&gt;/etc/rancher/k3s/k3s.yaml&lt;/code&gt; can be found on any of the master servers in your cluster. Copy this file to your local machine and install &lt;code&gt;kubectl&lt;/code&gt; on your local dev machine.&lt;br&gt;
Check the official Kubernetes website for instructions on how to install kubectl on your operating system.&lt;br&gt;
Then leverage the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/rancher/k3s/k3s.yaml

kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you get a list of namespaces on your cluster, then you are good to go.&lt;br&gt;
While this setup is sufficient to get you started, we also need a highly available storage service. K3S ships with hostPath storage, which is not ideal for HA workloads.&lt;/p&gt;
&lt;h2&gt;
  
  
  Longhorn
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://longhorn.io"&gt;Longhorn&lt;/a&gt; delivers simplified, easy-to-deploy and upgrade, 100% open source, cloud-native persistent block storage without the cost overhead of open core or proprietary alternatives.&lt;br&gt;
The Longhorn block storage can easily be added to your cluster with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have now deployed an enterprise-grade Kubernetes cluster with k3s and can start deploying workloads to it. Some components to take note of: you already have &lt;a href="https://traefik.io"&gt;Traefik&lt;/a&gt; installed for ingress, &lt;a href="https://longhorn.io"&gt;Longhorn&lt;/a&gt; will handle storage, and &lt;a href="https://containerd.io"&gt;containerd&lt;/a&gt; is the container runtime.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Kubernetes Ingress With Traefik &amp; SSL Certificate</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Tue, 18 Apr 2023 11:34:35 +0000</pubDate>
      <link>https://dev.to/arthurkay/kubernetes-ingress-with-traefik-ssl-certificate-2p86</link>
      <guid>https://dev.to/arthurkay/kubernetes-ingress-with-traefik-ssl-certificate-2p86</guid>
      <description>&lt;h2&gt;
  
  
  Create Secret
&lt;/h2&gt;

&lt;p&gt;SSL certificates in Kubernetes are stored as secrets.&lt;br&gt;
Create a TLS secret from your private key and certificate with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
kubectl create secret tls example-co-zm-ssl &lt;span class="nt"&gt;--key&lt;/span&gt; private.key &lt;span class="nt"&gt;--cert&lt;/span&gt; cert.crt &lt;span class="nt"&gt;-n&lt;/span&gt; longhorn-system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command creates a TLS secret in the longhorn-system namespace.&lt;/p&gt;

&lt;p&gt;We need to create another secret to store the basic authentication users. Generate an entry with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
htpasswd &lt;span class="nt"&gt;-nb&lt;/span&gt; arthur P@55w0rd | openssl &lt;span class="nb"&gt;base64&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the output to the &lt;code&gt;secret.yaml&lt;/code&gt; file, to look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;basic-auth-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;YXJ0aHVyOiRhcHIxJER1dXdPUmtMJGlsOFJiZnpiNjgzdGpPU0dLSGxNczAKCg==&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
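&lt;p&gt;Note that values under &lt;code&gt;data:&lt;/code&gt; must be base64 encoded; Kubernetes decodes them when the secret is consumed. Assuming the coreutils &lt;code&gt;base64&lt;/code&gt; tool is available, you can verify what the value holds:&lt;/p&gt;

```shell
# Decode the stored users value to confirm it contains the htpasswd entry
printf '%s' 'YXJ0aHVyOiRhcHIxJER1dXdPUmtMJGlsOFJiZnpiNjgzdGpPU0dLSGxNczAKCg==' | base64 -d
# prints: arthur:$apr1$DuuwORkL$il8Rbfzb683tjOSGKHlMs0
```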



&lt;p&gt;With all the certificates out of the way, we can now create the two middlewares: one to redirect traffic that originates from http to https,&lt;br&gt;
and the other to present a basic authentication prompt so users must provide a username and password to access the resource at the URL.&lt;/p&gt;
&lt;h2&gt;
  
  
  Basic Authentication
&lt;/h2&gt;

&lt;p&gt;Create a basic authentication middleware with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik.containo.us/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Middleware&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;basic-auth&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;basicAuth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;basic-auth-secret&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This middleware contains the basic authentication secret that stores the username password combination for allowed users.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  HTTP to HTTPS Middleware
&lt;/h2&gt;

&lt;p&gt;Create a middleware to redirect all HTTP traffic to HTTPS with the content below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik.containo.us/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Middleware&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enforce-https&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;redirectScheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
    &lt;span class="na"&gt;permanent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The above middleware contains the scheme to redirect to, which is https, and the permanent attribute set to true to make this a permanent 301 redirect.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With all that done, you can create an ingress that contains the following contents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;traefik.ingress.kubernetes.io/router.middlewares&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system-enforce-https@kubernetescrd,longhorn-system-basic-auth@kubernetescrd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-co-zm-ssl&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.example.co.zm&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-frontend&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above ingress contains annotations that add two middlewares to this ingress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The middleware reference must follow the pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&amp;lt; namespace &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;-&amp;lt; middleware &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;@kubernetescrd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the basic-auth middleware, which is in the longhorn-system namespace, this translates to &lt;code&gt;longhorn-system-basic-auth@kubernetescrd&lt;/code&gt;.&lt;br&gt;
Middlewares can be chained together by separating them with commas.&lt;/p&gt;
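&lt;p&gt;The naming rule is mechanical enough to capture in a line of shell; the &lt;code&gt;middleware_ref&lt;/code&gt; helper below is purely illustrative:&lt;/p&gt;

```shell
# Build a Traefik middleware reference from a namespace and a middleware name
middleware_ref() {
  printf '%s-%s@kubernetescrd' "$1" "$2"
}

middleware_ref longhorn-system basic-auth
# prints: longhorn-system-basic-auth@kubernetescrd
```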

&lt;p&gt;The &lt;code&gt;tls&lt;/code&gt; part contains a list of TLS secrets; in our case, we only have one.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;rules&lt;/code&gt; dictate how route matching is processed, which service to connect to, and the port on that service.&lt;/p&gt;

&lt;p&gt;To apply these configurations, use &lt;code&gt;kubectl apply -f&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret.yaml &lt;span class="nt"&gt;-f&lt;/span&gt; basic-auth-middleware.yaml &lt;span class="nt"&gt;-f&lt;/span&gt; https-middleware.yaml  &lt;span class="nt"&gt;-f&lt;/span&gt; longhorn-ingress.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This assumes that your Kubernetes cluster already has Traefik set up as the ingress controller and Longhorn as the block storage controller.&lt;br&gt;
Otherwise, you might need to modify this to suit the ingress controller being used and the service being connected to.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k3s</category>
    </item>
    <item>
      <title>Kubernetes kubeconfig scoped to a namespace</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Sun, 09 Apr 2023 12:10:41 +0000</pubDate>
      <link>https://dev.to/arthurkay/kubernetes-kubeconfig-scoped-to-a-namespace-2nbj</link>
      <guid>https://dev.to/arthurkay/kubernetes-kubeconfig-scoped-to-a-namespace-2nbj</guid>
      <description>&lt;p&gt;This article is meant to be a guide in setting up a multi-user namespace scoped kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.&lt;/p&gt;

&lt;p&gt;While kubernetes does not have a notion of users, it has what are called service accounts. These are accounts which define the scope of the role(s) or operations which can be performed on different kubernetes resources. A service account provides an identity for processes that run in a Pod.&lt;/p&gt;

&lt;p&gt;Before you can access the Kubernetes API service, a service account with the necessary roles is required.&lt;br&gt;
This article assumes that you already have roles and namespaces set up. You can ignore the namespace if you don't want to scope the service account to a namespace.&lt;/p&gt;

&lt;p&gt;To create a service account, apply the following manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;devspace&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arthur&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
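&lt;p&gt;If you still need a role, a minimal namespace-scoped Role and RoleBinding for this service account might look like the following sketch (the &lt;code&gt;devspace-admin&lt;/code&gt; name and the rules shown are illustrative assumptions, not part of the original setup):&lt;/p&gt;

```shell
# Write an illustrative Role + RoleBinding manifest for the "arthur" service
# account, then apply it with: kubectl apply -f rbac.yaml
printf '%s\n' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: Role' \
  'metadata:' \
  '  namespace: devspace' \
  '  name: devspace-admin' \
  'rules:' \
  '- apiGroups: ["", "apps", "batch"]' \
  '  resources: ["*"]' \
  '  verbs: ["*"]' \
  '---' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: RoleBinding' \
  'metadata:' \
  '  namespace: devspace' \
  '  name: arthur-devspace-admin' \
  'subjects:' \
  '- kind: ServiceAccount' \
  '  name: arthur' \
  '  namespace: devspace' \
  'roleRef:' \
  '  kind: Role' \
  '  name: devspace-admin' \
  '  apiGroup: rbac.authorization.k8s.io' \
  > rbac.yaml
```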



&lt;p&gt;Aside from the above, you also need to create a secret before you can get the token to use with your service account, as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;devspace&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auth-secret&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/service-account.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arthur&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/service-account-token&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the service account and token created, we can proceed to creating a kubeconfig file (used to authenticate operations sent to the API service).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kubeconfig&lt;/code&gt; file is a YAML file that can be generated by the bash script below after replacing the values with your own.&lt;br&gt;
Create a bash script file and give it a name, e.g. &lt;code&gt;kubeconfig.sh&lt;/code&gt;, then make it executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./kubeconfig.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and finally add the content below to the file. Make any changes to suit your needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env sh&lt;/span&gt;

&lt;span class="c"&gt;# The script returns a kubeconfig for the ServiceAccount given&lt;/span&gt;
&lt;span class="c"&gt;# you need to have kubectl on PATH with the context set to the cluster you want to create the config for&lt;/span&gt;

&lt;span class="c"&gt;# Cosmetics for the created config&lt;/span&gt;
&lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'SwiftCloudCluster'&lt;/span&gt;
&lt;span class="c"&gt;# your server address goes here get it via `kubectl cluster-info`&lt;/span&gt;
&lt;span class="nv"&gt;server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'https://kube-master:6443'&lt;/span&gt;
&lt;span class="c"&gt;# the Namespace and ServiceAccount name that is used for the config&lt;/span&gt;
&lt;span class="nv"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'devspace'&lt;/span&gt;
&lt;span class="nv"&gt;serviceAccount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'arthur'&lt;/span&gt;

&lt;span class="c"&gt;# The following automation does not work from Kubernetes 1.24 and up.&lt;/span&gt;
&lt;span class="c"&gt;# You need to&lt;/span&gt;
&lt;span class="c"&gt;# define a Secret, reference the ServiceAccount there and set the secretName as described in the [article](dev.to/arthurkay)!&lt;/span&gt;
&lt;span class="c"&gt;# See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-long-lived-api-token-for-a-serviceaccount for details&lt;/span&gt;
&lt;span class="c"&gt;#secretName=$(kubectl --namespace="$namespace" get serviceAccount "$serviceAccount" -o=jsonpath='{.secrets[0].name}')&lt;/span&gt;

&lt;span class="c"&gt;# For kubernetes v1.24 and above, use:&lt;/span&gt;
&lt;span class="nv"&gt;secretName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"arthur-secret"&lt;/span&gt;

&lt;span class="c"&gt;######################&lt;/span&gt;
&lt;span class="c"&gt;# actual script starts&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; errexit


&lt;span class="nv"&gt;ca&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$namespace&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get secret/&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$secretName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.ca\.crt}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$namespace&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; get secret/&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$secretName&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.token}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"
---
apiVersion: v1
kind: Config
clusters:
  - name: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
    cluster:
      certificate-authority-data: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ca&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
      server: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;server&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
contexts:
  - name: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;serviceAccount&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
    context:
      cluster: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
      namespace: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;namespace&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
      user: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;serviceAccount&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
users:
  - name: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;serviceAccount&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
    user:
      token: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;token&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
current-context: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;serviceAccount&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create the actual kubeconfig file, execute the bash script and redirect its output to a file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./kubeconfig.sh &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a file, &lt;code&gt;kubeconfig&lt;/code&gt;, that can be used to authenticate with your Kubernetes cluster.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
    </item>
    <item>
      <title>K3S Node Status (NotReady)</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Sun, 09 Apr 2023 06:56:24 +0000</pubDate>
      <link>https://dev.to/arthurkay/k3s-node-status-notready-4j3m</link>
      <guid>https://dev.to/arthurkay/k3s-node-status-notready-4j3m</guid>
      <description>&lt;p&gt;K3S is a lightweight, production-grade Kubernetes distribution.&lt;br&gt;
It is lightweight because many unnecessary components have been removed, including cloud providers, beta features, and legacy defaults, and the whole deployment runs as a single binary.&lt;/p&gt;

&lt;p&gt;Having a single binary makes deployment a whole lot easier. Now, back to the topic of discussion.&lt;/p&gt;

&lt;p&gt;A NotReady status can be caused by many things: the node may have joined the cluster initially but is no longer able to communicate with it, or the underlying machine may not be running. This can be fixed by starting the machine if it's off, or by making sure the node can establish two-way communication with the other nodes in the cluster. In most cases this turns out to be a networking issue on the underlying host machine.&lt;/p&gt;

&lt;p&gt;If, upon checking the machine, it turns out that the machine is running and network connectivity is fine, check the status of the node using &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe node &amp;lt;node-hostname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command gives a very verbose output of the state of the node. Among the things to look out for in this output are memory pressure, disk pressure, and PID pressure.&lt;/p&gt;

&lt;p&gt;Pressure in any of the above can cause the kubelet to fail to run and hence make the node unusable.&lt;/p&gt;

&lt;p&gt;If you notice that the kubelet is not able to start due to memory pressure, or CPU &amp;amp; memory limits going beyond 100%, this is a sign of over-committing resources on your node. Over-committing happens when the sum of all Kubernetes resource limits is bigger than the capacity of that resource on the node. When you over-commit resources in your cluster, everything might run perfectly under normal conditions, but in high-load scenarios the containers can consume CPU and memory up to their limits, and in some instances make your node unavailable.&lt;/p&gt;

&lt;p&gt;To make your node available with over-committed resources, you'll need to allow this in the kernel by creating a file in &lt;code&gt;/etc/sysctl.d&lt;/code&gt;; let's call this file &lt;code&gt;/etc/sysctl.d/50-kubelet.conf&lt;/code&gt;. In that file, add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;.&lt;span class="n"&gt;panic&lt;/span&gt;=&lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="n"&gt;kernel&lt;/span&gt;.&lt;span class="n"&gt;panic_on_oops&lt;/span&gt;=&lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="n"&gt;vm&lt;/span&gt;.&lt;span class="n"&gt;overcommit_memory&lt;/span&gt;=&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable the changes just added, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, restart the k3s daemon if need be (K3S is set to auto-restart by default).&lt;/p&gt;

&lt;p&gt;Another thing that might cause this is the &lt;code&gt;invalidDiskCapacity 0&lt;/code&gt; warning. It normally clears up after some time, but while it shows up, the node won't be available. For further debugging, check GitHub issue &lt;a href="https://github.com/k3s-io/k3s/issues/1857"&gt;#1857&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While the above will allow the node to be available with over-committed resources, it is not good practice. To solve the issue properly, add compute resource requests and limits to your deployment manifests that are well within your node's capacity.&lt;/p&gt;
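
&lt;p&gt;As a sketch, a container spec in a deployment manifest with explicit requests and limits could look like the following (the CPU and memory values are illustrative; size them for your own workloads):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;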

&lt;p&gt;An even better approach is to enforce limits at the namespace level, e.g. with a LimitRange or ResourceQuota, so pods cannot use more than allocated.&lt;/p&gt;
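
&lt;p&gt;One way to enforce this is a LimitRange in the namespace, which applies default requests and limits to containers that don't specify their own. A minimal sketch (the name, namespace, and values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "250m"
        memory: "128Mi"
      default:
        cpu: "500m"
        memory: "256Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;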

</description>
      <category>kubernetes</category>
      <category>k3s</category>
      <category>k8s</category>
      <category>kubelet</category>
    </item>
    <item>
      <title>GCP App Engine Primer</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Mon, 13 Mar 2023 15:40:20 +0000</pubDate>
      <link>https://dev.to/arthurkay/gcp-app-engine-primer-kp</link>
      <guid>https://dev.to/arthurkay/gcp-app-engine-primer-kp</guid>
      <description>&lt;p&gt;Google App Engine is a cloud computing platform as a service for developing and hosting web applications in Google-managed data centres.&lt;/p&gt;

&lt;p&gt;This comes with the added advantage of running apps load balanced across multiple servers without the worry of managing underlying infrastructure.&lt;/p&gt;

&lt;p&gt;The other cool thing is that you get a free SSL certificate and URL endpoint provided by Google. While this is fine, a lot of the time we want our own domain. Thankfully, this is also supported out of the box.&lt;/p&gt;

&lt;p&gt;But before you can use a custom domain, you need to have one. If you don't have one, you can purchase one from any domain registrar, even Google itself.&lt;/p&gt;

&lt;p&gt;With that out of the way, head over to your &lt;a href="https://console.cloud.google.com/"&gt;Google Cloud console&lt;/a&gt;.&lt;br&gt;
This assumes you already have a GCP project with the App Engine API enabled.&lt;/p&gt;

&lt;p&gt;If you have not yet done that, you can search for App Engine in the Cloud console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AmnXIRIQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvbffzaj9u3kswfg54j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AmnXIRIQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnvbffzaj9u3kswfg54j.png" alt="Search for app engine" width="708" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then proceed to creating a new project. After you have that taken care of, you can go ahead and get the gcloud CLI (SDK, as Google incorrectly calls it).&lt;/p&gt;

&lt;p&gt;This can be installed from &lt;a href="https://cloud.google.com/sdk/docs/install"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Log the gcloud CLI in to your GCP account with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud auth login

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command opens your default browser, asking you to log in to your GCP account. Once authorised, the CLI can be used to deploy your App Engine service(s).&lt;/p&gt;

&lt;h1&gt;
  
  
  App Engine Environments
&lt;/h1&gt;

&lt;p&gt;App Engine provides two deployment environments, i.e. standard and flexible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standard
&lt;/h2&gt;

&lt;p&gt;According to Google, in the standard environment, the "applications run in a secure, sandboxed environment, allowing the standard environment to distribute requests across multiple servers and scale servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server". Aside from this, the only supported languages under the standard environment are PHP, Node.js, Go, Java, Ruby, and Python as of the time of this writing.&lt;/p&gt;

&lt;p&gt;The standard environment also provides a free tier if resource consumption is low.&lt;/p&gt;

&lt;p&gt;For the actual free tier resources, follow this &lt;a href="https://cloud.google.com/free/docs/free-cloud-features#app-engine"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flexible
&lt;/h2&gt;

&lt;p&gt;The flexible environment has everything the standard one has, with more "flexibility", as the name suggests.&lt;br&gt;
This means that you are not limited in your choice of programming language.&lt;/p&gt;

&lt;p&gt;Using a programming language that is not part of the standard app engine programming languages is as easy as adding your own Dockerfile, and setting the runtime value to custom in the &lt;code&gt;app.yaml&lt;/code&gt; file.&lt;/p&gt;
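
&lt;p&gt;As a minimal sketch, such an &lt;code&gt;app.yaml&lt;/code&gt; would contain the following (custom runtimes are only available in the flexible environment, selected by &lt;code&gt;env: flex&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;runtime: custom
env: flex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;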

&lt;p&gt;Aside from this, you can also specify the compute resources (i.e. CPU, RAM, etc.) to allocate to the App Engine instances.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This is going to be a series of articles meant to help one get started with GCP app engine, and the next article will dive into adding the &lt;code&gt;app.yaml&lt;/code&gt; file to your project to make it deployable to app engine.&lt;/p&gt;

&lt;p&gt;Be on the lookout.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>appengine</category>
      <category>serverless</category>
      <category>paas</category>
    </item>
    <item>
      <title>Self Hosted Supabase with External Postgresql</title>
      <dc:creator>Arthur</dc:creator>
      <pubDate>Tue, 28 Feb 2023 18:24:35 +0000</pubDate>
      <link>https://dev.to/arthurkay/self-hosted-supabase-with-external-postgresql-apd</link>
      <guid>https://dev.to/arthurkay/self-hosted-supabase-with-external-postgresql-apd</guid>
      <description>&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt; is a self-hosted open source application back-end with a cloud offering (hosted by the developers of the platform). The platform is considered an open source firebase alternative by many. Whether or not that statement is true, is a discussion for another day. &lt;/p&gt;

&lt;p&gt;What really stands out most for me is the on-the-fly API &amp;amp; API documentation; most importantly, the simplified &lt;code&gt;Row Level Security&lt;/code&gt; and &lt;code&gt;realtime&lt;/code&gt; notifications are by far the biggest selling points.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Linux (Debian)&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Docker-compose&lt;/li&gt;
&lt;li&gt;postgres&lt;/li&gt;
&lt;li&gt;make&lt;/li&gt;
&lt;li&gt;git&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Dependencies
&lt;/h2&gt;

&lt;p&gt;Install &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwj_4LD1-aj5AhWJdcAKHfbWAMEQFnoECAoQAQ&amp;amp;url=https%3A%2F%2Fwww.postgresql.org%2Fdownload%2F&amp;amp;usg=AOvVaw1s9I9j-sVmmwVRYb_trcix"&gt;postgres&lt;/a&gt; and configure it to allow tcp connections from networks other than localhost.&lt;/p&gt;

&lt;p&gt;You'll need to edit the &lt;code&gt;postgresql.conf&lt;/code&gt; and &lt;code&gt;pg_hba.conf&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;Enter your psql terminal:&lt;/p&gt;

&lt;p&gt;Get the path of the &lt;code&gt;postgresql.conf&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;config_file&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You'll get the following output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;               &lt;span class="n"&gt;config_file&lt;/span&gt;               
&lt;span class="c1"&gt;-----------------------------------------&lt;/span&gt;
 &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;postgresql&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;postgresql&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conf&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open the file, then change the line:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt; &lt;span class="n"&gt;listen_addresses&lt;/span&gt; = &lt;span class="s1"&gt;'localhost'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt; &lt;span class="n"&gt;listen_addresses&lt;/span&gt; = &lt;span class="s1"&gt;'*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Get the &lt;code&gt;pg_hba.conf&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;hba_file&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You'll get the following output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;              hba_file               
&lt;span class="nt"&gt;-------------------------------------&lt;/span&gt;
 /etc/postgresql/12/main/pg_hba.conf
&lt;span class="o"&gt;(&lt;/span&gt;1 row&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open this file and add the following lines:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;host&lt;/span&gt;    &lt;span class="n"&gt;all&lt;/span&gt;             &lt;span class="n"&gt;all&lt;/span&gt;              &lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;0&lt;/span&gt;                       &lt;span class="n"&gt;md5&lt;/span&gt;
&lt;span class="n"&gt;host&lt;/span&gt;    &lt;span class="n"&gt;all&lt;/span&gt;             &lt;span class="n"&gt;all&lt;/span&gt;              ::/&lt;span class="m"&gt;0&lt;/span&gt;                            &lt;span class="n"&gt;md5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then restart the postgres daemon for the above changes to take effect.&lt;br&gt;
The default postgres user, &lt;strong&gt;postgres&lt;/strong&gt;, does not have a password; now would be the right time to create a user with a password that will be used by supabase, and grant that user superuser along with the login, create db, create role, and bypass RLS attributes.&lt;/p&gt;
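
&lt;p&gt;A sketch of such a statement, run from the psql terminal (the role name and password here are placeholders; choose your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE ROLE supabase_admin WITH LOGIN SUPERUSER CREATEDB CREATEROLE BYPASSRLS
  PASSWORD 'your-strong-password';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;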

&lt;p&gt;To install custom postgres extensions, the postgresql dev packages and make are required. To get these:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;postgresql-server-dev-XX cmake make

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Where &lt;code&gt;XX&lt;/code&gt; is the version of your postgresql db. In my case, the above command is:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;postgresql-server-dev-12 cmake make

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Install &lt;a href="https://docs.docker.com/engine/install/"&gt;docker&lt;/a&gt; and &lt;a href="https://docs.docker.com/compose/install/"&gt;docker-compose&lt;/a&gt;. You'll also need to install &lt;a href="https://docs.docker.com/compose/install/"&gt;Git&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Get the pgjwt source code from GitHub by cloning the project to your local machine:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/michelp/pgjwt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then change directory into the pgjwt directory, then:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;make &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The above builds and installs the pgjwt extension files; the extension can then be enabled in postgres with &lt;strong&gt;create extension pgjwt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At this point we need to get supabase from GitHub; clone the repository to your local machine and cd into the project directory&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/supabase/supabase &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;supabase

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Setting up secrets
&lt;/h2&gt;

&lt;p&gt;Whilst you can use the defaults provided, it is advisable to set your own secrets.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I recommend you change the default anon and service keys in the &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Use your JWT_SECRET to generate anon and service API keys using the &lt;a href="https://supabase.com/docs/guides/hosting/overview#api-keys"&gt;JWT Generator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Replace the values in these files:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt; .&lt;span class="n"&gt;env&lt;/span&gt;:
    &lt;span class="n"&gt;ANON_KEY&lt;/span&gt; - &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;an&lt;/span&gt; &lt;span class="n"&gt;anon&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;
    &lt;span class="n"&gt;SERVICE_ROLE_KEY&lt;/span&gt; - &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;
 &lt;span class="n"&gt;volumes&lt;/span&gt;/&lt;span class="n"&gt;api&lt;/span&gt;/&lt;span class="n"&gt;kong&lt;/span&gt;.&lt;span class="n"&gt;yml&lt;/span&gt;
    &lt;span class="n"&gt;anon&lt;/span&gt; - &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;an&lt;/span&gt; &lt;span class="n"&gt;anon&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;
    &lt;span class="n"&gt;service_role&lt;/span&gt; - &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Having made the above changes, add the credentials for your external postgresql database to the &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
POSTGRES_HOST=host.docker.internal
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PORT=5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;host.docker.internal&lt;/code&gt; value for &lt;code&gt;POSTGRES_HOST&lt;/code&gt; makes it possible for the containers to connect to the host machine. Further, the &lt;code&gt;docker-compose.yaml&lt;/code&gt; has to be changed to include the lines:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;extra_hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;host.docker.internal:host-gateway"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;at the end of each service to allow it to connect to the host, and the &lt;code&gt;depends_on: db&lt;/code&gt; entries removed everywhere in the file. The final file will look as shown below:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;h2&gt;
  
  
  Migrations
&lt;/h2&gt;

&lt;p&gt;You are now ready to run database migrations. Migrations are essentially table schemas that supabase needs to initialise and set up everything it requires. These also include postgres extensions. Whilst self-hosted supabase now supports GraphQL, I will not include its configuration as it only works with postgres 14, which I have not yet tried.&lt;br&gt;
While in the same supabase project, navigate to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;docker/volumes/db/init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In that directory you'll find SQL files that need to be run in order.&lt;/p&gt;
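
&lt;p&gt;As a sketch, assuming the connection details from the &lt;code&gt;.env&lt;/code&gt; file above, the files can be applied with a small loop; shell glob expansion is lexically sorted, which preserves the numbered order of the files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;for f in ./*.sql; do
  psql "postgresql://postgres:your-password@localhost:5432/postgres" -f "$f"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;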

&lt;p&gt;And you should be good to go.&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
