<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jxtro</title>
    <description>The latest articles on DEV Community by jxtro (@justrox).</description>
    <link>https://dev.to/justrox</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F456655%2Ff19cf0c2-5d36-4023-8dd7-4436e5207d3e.jpg</url>
      <title>DEV Community: jxtro</title>
      <link>https://dev.to/justrox</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/justrox"/>
    <language>en</language>
    <item>
      <title>Kubernetes vs Philippine Power Outages - On setting up k0s over Tailscale</title>
      <dc:creator>jxtro</dc:creator>
      <pubDate>Mon, 01 Jul 2024 16:35:38 +0000</pubDate>
      <link>https://dev.to/justrox/kubernetes-vs-philippine-power-outages-on-setting-up-k0s-over-tailscale-2k97</link>
      <guid>https://dev.to/justrox/kubernetes-vs-philippine-power-outages-on-setting-up-k0s-over-tailscale-2k97</guid>
      <description>&lt;p&gt;Building a reliable IT system in the Philippines presents unique challenges such as frequent power outages and unreliable internet connectivity.&lt;/p&gt;

&lt;p&gt;To address these issues effectively, our team has implemented a resilient setup ensuring uninterrupted access to critical services for our end-users. &lt;/p&gt;

&lt;p&gt;This guide will walk you through a similar setup using Tailscale and k0s, which can be replicated in your homelab environment. If you're curious only about the setup, feel free to jump to Section II.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-3.png" alt="Philippine Map"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I. The Challenge
&lt;/h2&gt;

&lt;p&gt;To give you some background, our team manages multiple projects for various clients, hosting most services on local servers near them. However, a significant issue we face is frequent power interruptions due to maintenance or emergencies at local substations. These disruptions occur nearly every week, sometimes lasting 8-12 hours, effectively rendering our services unavailable for entire workdays.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2FPasted-image-20240701004946.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2FPasted-image-20240701004946.png" alt="Power Interruption assignment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Side-note: Why not host in the cloud?
&lt;/h3&gt;

&lt;p&gt;Well, it's a bit complicated. Some services work well in the cloud (so we've put them there), but others have their own unique needs. For example, let's take a closer look at two of our main projects: &lt;a href="https://impactville.com" rel="noopener noreferrer"&gt;Impactville&lt;/a&gt; and &lt;a href="https://lupain.ai/landing/new" rel="noopener noreferrer"&gt;Lupain.AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Impactville deals with private data from organizations that prefer not to store it in the cloud due to privacy concerns. Meanwhile, Lupain.AI handles sensitive land data from local government units, requiring secure and local storage.&lt;/p&gt;

&lt;p&gt;From a cost perspective, Lupain.AI involves intensive processing of satellite data. Using cloud resources could drive up costs, especially with the increasing USD-PHP exchange rates. It's more cost-effective for us to manage these tasks using a self-hosted cluster of GPU nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Back to the challenge
&lt;/h3&gt;

&lt;p&gt;To summarize our scenario:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have full control and ownership of nodes distributed across the country.&lt;/li&gt;
&lt;li&gt;Our goal is to achieve fault-tolerant services with minimal downtime (ideally under 1 minute, with 5-10 minutes being acceptable).&lt;/li&gt;
&lt;li&gt;Data redundancy and high availability are essential requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-2.png" alt="Servers distributed across the country"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given these needs, the most viable solution is to set up an orchestrator capable of detecting downtimes, automatically rescheduling services, and distributing them across a cluster of nodes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-5.png" alt="Interconnected servers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this setup, if one server experiences a power outage, the services will be temporarily shifted to other servers until normal operations resume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-4.png" alt="Traffic routing to another server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Therefore, adopting &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is an obvious choice for us. Kubernetes is an open-source system designed specifically for automating deployment, scaling, and management of containerized applications.&lt;br&gt;
This guide will walk you through the basic setup of deploying your own Kubernetes cluster using &lt;a href="https://k0sproject.io/" rel="noopener noreferrer"&gt;k0s&lt;/a&gt; and &lt;a href="https://tailscale.com/" rel="noopener noreferrer"&gt;Tailscale&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: The setup described here is simplified and may differ from our production setup. Our production environment addresses various complexities such as ISP DNS issues &lt;a href="https://answers.netlify.com/t/every-netlify-site-i-visit-cant-be-reached-from-the-philippines/49205?page=2" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;a href="https://www.reddit.com/r/PinoyProgrammer/comments/wo7qcl/any_pldt_dev_here_why_pldt_blocks_netlify/" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;, rate-limiting, weather-related challenges for Starlink-connected nodes, server security hardening, encryption of redundant data, and cluster ingress. These topics require detailed discussions and are either reserved for future posts or treated as internal know-how.&lt;/p&gt;

&lt;p&gt;For a small homelab setup, however, this guide should provide sufficient guidance.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes setup
&lt;/h2&gt;

&lt;p&gt;In this guide, we'll set up a Kubernetes cluster using k0s and connect our nodes via Tailscale. Here's an overview of the technologies involved:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;k0s&lt;/strong&gt; is an open-source Kubernetes distribution designed for simplicity and versatility. It includes all necessary features to build a Kubernetes cluster and is lightweight enough to run on various environments such as cloud, bare metal, edge, and IoT devices. Its minimal setup and easy configuration make it ideal for our needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tailscale&lt;/strong&gt; While any VPN could be used, Tailscale stands out for its ease of setup, comprehensive documentation, and reliable networking capabilities. MagicDNS, which simplifies DNS management, adds an extra layer of convenience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distributed storage&lt;/strong&gt; For distributed storage, you have various options to choose from. The simplest approach is setting up NFS or NAS, though configuring it for high availability (HA) may be required. In our setup, we've chosen to use SeaweedFS, a distributed storage system that provides scalability and efficient management of large data volumes. Note that configuring SeaweedFS for HA is beyond the scope of this guide.&lt;/p&gt;
&lt;h3&gt;
  
  
  Instructions
&lt;/h3&gt;

&lt;p&gt;To begin setting up your Kubernetes cluster, follow these steps:&lt;/p&gt;
&lt;h4&gt;
  
  
  1. Manual node setup
&lt;/h4&gt;

&lt;p&gt;First, ensure SSH is configured securely to access your nodes. Verify you have SSH access to all nodes and that they use key-based authentication. Disable password authentication temporarily for cluster setup; you can re-enable it later in your SSH configuration.&lt;/p&gt;
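As a rough sketch, key-based access can be prepared like this; the key path and the root@node1 address are placeholders for your own values:

```shell
# Generate a key pair on your control machine, if you don't have one yet
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to each node (replace root@node1 with your user and host)
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@node1

# On each node, turn off password logins for the duration of the setup
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
```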
&lt;h4&gt;
  
  
  2. Connect them over VPN
&lt;/h4&gt;

&lt;p&gt;Next, establish a secure connection between your nodes using Tailscale:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up for Tailscale and add your devices to the network. Check &lt;a href="https://tailscale.com/" rel="noopener noreferrer"&gt;https://tailscale.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow Tailscale's instructions to install and connect Tailscale to your network. See &lt;a href="https://tailscale.com/kb/1017/install" rel="noopener noreferrer"&gt;https://tailscale.com/kb/1017/install&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check that tailscale0 appears in your network interfaces (e.g., via ifconfig).&lt;/li&gt;
&lt;li&gt;Ensure your control machine, where you'll run kubectl, is also connected to the Tailscale network.&lt;/li&gt;
&lt;/ol&gt;
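On a typical Linux node, the steps above come down to a few commands; the install script is Tailscale's official convenience installer:

```shell
# Install Tailscale (Linux convenience script)
curl -fsSL https://tailscale.com/install.sh | sh

# Authenticate this node and join your tailnet
sudo tailscale up

# Verify the tailscale0 interface is up and note its 100.x.y.z address
ip addr show tailscale0
tailscale status
```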

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsgp1.digitaloceanspaces.com%2Fjustrox%2Fjustrox-blog%2F2024%2F06%2Fimage-1.png" alt="Tailscale network interface"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  3. Install k0s
&lt;/h4&gt;

&lt;p&gt;To set up your Kubernetes cluster, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install k0s in your control machine.&lt;/strong&gt; Begin by installing k0s on your control machine. You can find detailed instructions at &lt;a href="https://docs.k0sproject.io/stable/install/" rel="noopener noreferrer"&gt;k0s Installation&lt;/a&gt;. Use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSLf https://get.k0s.sh | sudo sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use k0sctl for automated deployment:&lt;/strong&gt; To streamline the installation across nodes, use k0sctl:&lt;/p&gt;

&lt;p&gt;Install k0sctl depending on your OS. Refer to &lt;a href="https://github.com/k0sproject/k0sctl#installation" rel="noopener noreferrer"&gt;k0sctl Installation&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install k0sproject/tap/k0sctl  # macOS (Homebrew)
choco install k0sctl                # Windows (Chocolatey)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Generate a k0sctl configuration file:&lt;/strong&gt; Create a k0sctl configuration file to define your cluster setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0sctl init &amp;gt; k0sctl.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Customize the configuration as needed. Here's an example configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - role: controller
    ssh:
      address: 10.0.0.1 # replace with the controller's IP address
      user: root
      keyPath: ~/.ssh/id_rsa
  - role: worker
    ssh:
      address: 10.0.0.2 # replace with the worker's IP address
      user: root
      keyPath: ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For further customization, refer to the &lt;a href="https://docs.k0sproject.io/stable/configuration/" rel="noopener noreferrer"&gt;k0s configuration documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bootstrap the Cluster:&lt;/strong&gt; To initialize and deploy your Kubernetes cluster, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0sctl apply --config k0sctl.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;k0sctl will automatically install and deploy k0s on the designated machines within your network, configuring the Kubernetes cluster for operation. Once deployed, generate the kubeconfig file to manage the cluster using kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0sctl kubeconfig &amp;gt; kubeconfig
kubectl get pods --kubeconfig kubeconfig -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Uninstall or Reset the Cluster:&lt;/strong&gt; If you need to reconfigure or remove the cluster, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0sctl reset --config k0sctl.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command resets the cluster configuration, allowing for subsequent deployments or modifications as needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Configuration
&lt;/h4&gt;

&lt;p&gt;While the previous instructions provide a standard setup, additional configurations are necessary to integrate with Tailscale and manage private container registries effectively.&lt;/p&gt;

&lt;h5&gt;
  
  
  Tailscale-connected Nodes
&lt;/h5&gt;

&lt;p&gt;To ensure proper IP assignment by k0sctl for Tailscale-connected machines, specify the correct network interface in the configuration. For Tailscale, use tailscale0. Here’s an example configuration snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
    - ssh:
        address: machine-1
      privateInterface: tailscale0
      role: controller

    - ssh:
        address: node-2
      privateInterface: tailscale0
      role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Private Container Registry
&lt;/h5&gt;

&lt;p&gt;If your application's images need extra privacy, chances are you're storing them in a private registry. To ensure that k0s (specifically, containerd) pulls images from the right registry, follow these instructions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a custom configuration file for containerd on each worker node:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/k0s/containerd.d/registry.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[plugins."io.containerd.grpc.v1.cri".registry]
   config_path = "/etc/containerd/certs.d"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This directs containerd to look for hosts in &lt;code&gt;/etc/containerd/certs.d&lt;/code&gt;.&lt;/p&gt;
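Note that the directory tree under /etc/containerd/certs.d has to exist before the per-registry file in the next step can be created; registry.example.com:5000 below is a placeholder for your own registry address:

```shell
# Create the per-registry directory that containerd will scan for hosts.toml
sudo mkdir -p "/etc/containerd/certs.d/registry.example.com:5000"
```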

&lt;ol start="2"&gt;
&lt;li&gt;Create a &lt;code&gt;hosts.toml&lt;/code&gt; file for your registry domain or IP
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/containerd/certs.d/&amp;lt;registry-domain or ip&amp;gt;:&amp;lt;registry port&amp;gt;/hosts.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Populate it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server = "http://&amp;lt;domain or ip&amp;gt;:&amp;lt;port&amp;gt;"

[host."http://&amp;lt;domain or ip&amp;gt;:&amp;lt;port&amp;gt;"]
  skip_verify = true # If your registry is not configured via TLS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: For production environments, ensure TLS certificates are correctly configured. Refer to &lt;a href="https://github.com/containerd/containerd/blob/main/docs/hosts.md" rel="noopener noreferrer"&gt;containerd documentation&lt;/a&gt; for additional configuration details.&lt;br&gt;
Once configured, k0s will utilize these settings to pull private images from your registry as needed.&lt;/p&gt;
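To sanity-check the registry settings on a worker, you can restart k0s and pull a test image straight through containerd using the ctr CLI bundled with k0s; the registry address and image name below are placeholders:

```shell
# Restart k0s so containerd reloads its configuration
sudo k0s stop
sudo k0s start

# Attempt a pull directly through containerd (drop --plain-http if using TLS)
sudo k0s ctr images pull --plain-http registry.example.com:5000/myapp:latest
```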
&lt;h5&gt;
  
  
  Networking
&lt;/h5&gt;

&lt;p&gt;Network configuration proved to be quite challenging, consuming considerable time and effort as we troubleshot various issues with our server setups. After days of painstaking work, here's what we uncovered.&lt;/p&gt;

&lt;p&gt;For networking, k0s supports various providers for managing inter-pod networking, known formally as a Container Network Interface (CNI). For more detailed information about k0s networking capabilities, you can refer to the official documentation &lt;a href="https://docs.k0sproject.io/stable/networking" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By default, k0s uses the kube-router CNI, known for its lightweight and performant nature. However, we encountered specific issues where inter-pod communication between nodes failed. Key diagnostics such as nslookup failing to connect to nameservers and traceroute showing asterisks led us to investigate further.&lt;/p&gt;
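One quick way to reproduce this kind of failure is to run the lookup from a short-lived pod; busybox is just one commonly used debugging image here:

```shell
# On a broken CNI, this lookup times out instead of resolving
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local
```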

&lt;p&gt;After extensive troubleshooting involving iptables, we determined that kube-router was not utilizing the correct interface for communication—in our case, Tailscale. Additionally, kube-router does not currently support explicitly setting the network interface and may not add this functionality in the near future (refer to &lt;a href="https://github.com/cloudnativelabs/kube-router/issues/567" rel="noopener noreferrer"&gt;GitHub issue #567&lt;/a&gt;). As a result, we've made the decision to transition to a different CNI.&lt;/p&gt;

&lt;p&gt;Another built-in CNI option for k0s is Calico, which offers more flexible configuration options, including network interface settings. If you're encountering issues with kube-router and need to switch to Calico, you can use the following configuration during cluster bootstrap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  k0s:
    config:
      spec:
        network:
          provider: calico # &amp;lt;-- use Calico
          calico:
            envVars:
              IP_AUTODETECTION_METHOD: "interface=tailscale0" # &amp;lt;-- use tailscale
  hosts:
      - role: controller
        ssh:
          address: 10.0.0.1 # replace with the controller's IP address
          user: root
          keyPath: ~/.ssh/id_rsa
      - role: worker
        ssh:
          address: 10.0.0.2 # replace with the worker's IP address
          user: root
          keyPath: ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's important to note potential edge cases when integrating Calico with Tailscale as discussed &lt;a href="https://github.com/tailscale/tailscale/issues/591" rel="noopener noreferrer"&gt;here&lt;/a&gt;. To avoid conflicts, we recommend remapping Calico's netfilter packet marks. This ensures compatibility and smooth operation in your network setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  k0s:
    config:
      spec:
        network:
          provider: calico
          calico:
            envVars:
              FELIX_IPTABLESMARKMASK: "0xff00ff00" # &amp;lt;- use mask
              IP_AUTODETECTION_METHOD: "interface=tailscale0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After redeployment, pods can now communicate with each other across different nodes!&lt;/p&gt;
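As a sanity check, you can pin two throwaway pods to different nodes and ping one from the other; the pod names, image, and node names below are placeholders:

```shell
# Start one pod per node (adjust the nodeName values to your cluster)
kubectl run ping-a --image=busybox:1.36 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"machine-1"}}' -- sleep 3600
kubectl run ping-b --image=busybox:1.36 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-2"}}' -- sleep 3600

# Ping pod B's address from pod A across the Tailscale-backed network
POD_B_IP=$(kubectl get pod ping-b -o jsonpath='{.status.podIP}')
kubectl exec ping-a -- ping -c 3 "$POD_B_IP"
```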

&lt;h5&gt;
  
  
  Node-local load balancing
&lt;/h5&gt;

&lt;p&gt;With the nodes set up, our Kubernetes cluster now handles inter-node communication effectively, even during power outages. However, there's an important scenario we need to address: what happens if the control node experiences an outage? Without a functioning control node, there's no orchestrator to manage pod events, which could lead to downtime for critical services.&lt;/p&gt;

&lt;p&gt;To ensure continuous operation, it's essential to plan for high availability of the control plane. This can be achieved by setting up multiple control plane nodes within the cluster.&lt;/p&gt;

&lt;p&gt;Fortunately, k0s offers a built-in solution for this with &lt;a href="https://docs.k0sproject.io/stable/nllb/" rel="noopener noreferrer"&gt;Node-local load balancing&lt;/a&gt;. Adjusting a small portion of the configuration allows us to enhance our setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  k0s:
    config:
      spec:
        network:
          nodeLocalLoadBalancing:
            enabled: true
            type: EnvoyProxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Use your Kubernetes cluster
&lt;/h4&gt;

&lt;p&gt;Now that your Kubernetes cluster is deployed and configured using the steps outlined above, the final step is to set up kubectl, the Kubernetes command-line tool, on your local machine. This tool allows you to manage your cluster effectively.&lt;/p&gt;

&lt;p&gt;Follow these steps to complete the setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install kubectl&lt;/strong&gt;: Install the Kubernetes command-line tool, kubectl, on your local machine. You can download it from the official Kubernetes documentation or use package managers like apt or brew.&lt;/p&gt;
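For example, on Linux the official binary can be fetched directly; these commands mirror the upstream Kubernetes install instructions:

```shell
# Download the latest stable kubectl binary for Linux x86_64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# Or use a package manager, e.g. on macOS
brew install kubectl

# Confirm the client is installed
kubectl version --client
```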

&lt;p&gt;&lt;strong&gt;Configure kubeconfig&lt;/strong&gt;: Once your cluster is deployed, set the generated kubeconfig file as your default configuration by copying it to the appropriate directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/.kube
cp kubeconfig ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step ensures that kubectl uses the correct credentials and configuration to access your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify setup:&lt;/strong&gt; Confirm that kubectl is correctly configured by checking the status of pods in your cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will list all pods across all namespaces (-A flag), indicating that your cluster is operational and ready to deploy applications.&lt;/p&gt;

&lt;p&gt;With kubectl configured, you're now equipped to manage and orchestrate containerized applications on your Kubernetes cluster seamlessly!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Thank you for reading! I appreciate you taking the time to read up to this point ❤️. If you find any parts confusing or have any issues with replication, I'd be happy to help. Just shoot me an email (&lt;a href="mailto:thepiesaresquared@gmail.com"&gt;thepiesaresquared@gmail.com&lt;/a&gt;) or DM/tweet me at &lt;a href="https://twitter.com/justfizzbuzz" rel="noopener noreferrer"&gt;@justfizzbuzz&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I thoroughly enjoyed the process of making this guide! If you're interested in more posts like this, I invite you to subscribe to this blog, or let's connect and share our posts on &lt;a href="https://twitter.com/justfizzbuzz" rel="noopener noreferrer"&gt;Twitter/X&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>opensource</category>
      <category>docker</category>
    </item>
    <item>
      <title>Self-hosting Ghost with Docker and PlanetScale</title>
      <dc:creator>jxtro</dc:creator>
      <pubDate>Sun, 14 Jan 2024 13:27:07 +0000</pubDate>
      <link>https://dev.to/justrox/self-hosting-ghost-with-docker-and-planetscale-4do3</link>
      <guid>https://dev.to/justrox/self-hosting-ghost-with-docker-and-planetscale-4do3</guid>
      <description>&lt;p&gt;&lt;a href="https://planetscale.com/" rel="noopener noreferrer"&gt;PlanetScale&lt;/a&gt; and &lt;a href="https://ghost.org/" rel="noopener noreferrer"&gt;Ghost&lt;/a&gt; were previously incompatible due to differences in their support for foreign key constraints. With PlanetScale now &lt;a href="https://planetscale.com/blog/announcing-foreign-key-constraints-support" rel="noopener noreferrer"&gt;supporting foreign key constraints&lt;/a&gt;, a seamless collaboration between the two is achievable. Nonetheless, there remain minor incompatibilities that require resolution.&lt;/p&gt;

&lt;p&gt;The first part of this post will show you how to set up Ghost using Docker and PlanetScale. In the second part, we'll talk about the issues when putting PlanetScale and Ghost together and how to fix them. If you just want to get things going fast, you can skip the other parts.&lt;/p&gt;




&lt;p&gt;ℹ️ This post was originally published on &lt;a href="https://justrox.me/ghost-blog-planet-scale/?ref=devto" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;. If you encounter any formatting issues here, you might find better readability in the original post.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1 - Setting up Ghost and PlanetScale
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A word of warning
&lt;/h3&gt;

&lt;p&gt;For PlanetScale to work with Ghost, make sure foreign key constraints are turned on. It's important to note that foreign key constraints are in beta on PlanetScale. If you decide to revert your database from beta, it could potentially disrupt your Ghost website. Additional information can be found &lt;a href="https://planetscale.com/blog/announcing-foreign-key-constraints-support" rel="noopener noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://planetscale.com/docs/concepts/foreign-key-constraints?ref=justrox.me" rel="noopener noreferrer"&gt;here&lt;/a&gt;. This is a crucial point to weigh before changing providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Setting up database
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Create a database in PlanetScale
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Ftntgbkrbu3omspujsg3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Ftntgbkrbu3omspujsg3s.png" alt="PlanetScale"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Begin by creating a new database in PlanetScale. Follow the official quickstart guide available here. Assign a name to your database and make sure to jot down your connection details, such as the host, username, and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-5.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fdlmp87mvunwcsxt8pdu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-5.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fdlmp87mvunwcsxt8pdu5.png" alt="Create database"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Activate "Foreign key constraints" in your database.
&lt;/h4&gt;

&lt;p&gt;Ghost requires the use of foreign key constraints to function properly. Head to your database settings by selecting Database &amp;gt; Settings &amp;gt; Beta Features. Then, click on "enroll" next to "Foreign key constraints" to enable this crucial feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fvlyi8zg26xyumfkcdznc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fvlyi8zg26xyumfkcdznc.png" alt="Enable foreign key constraints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Let's now move on to setting up Ghost itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  B. Setting up Ghost
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Frksqzqyqiuj8socsihtc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Frksqzqyqiuj8socsihtc.png" alt="Ghost"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Install Ghost.
&lt;/h4&gt;

&lt;p&gt;Installing Ghost can be done in various ways, and you can choose the one that suits you best by referring to its official installation guide. However, for the sake of reproducibility, this guide extends Ghost's official Docker image.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Customize your Ghost installation.
&lt;/h4&gt;

&lt;p&gt;At this point, you have the option to add themes and adapters. However, delving into these aspects is beyond the scope of this article.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Patch the Ghost installation.
&lt;/h4&gt;

&lt;p&gt;Ghost and PlanetScale have a minor incompatibility, which can be resolved by applying a patch. To do this, create a script named patch.sh with the following content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/JustroX/ghost-docker-planetscale/blob/main/bin/patch.sh?ref=justrox.me" rel="noopener noreferrer"&gt;https://github.com/JustroX/ghost-docker-planetscale/blob/main/bin/patch.sh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, execute the patch by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x patch.sh
./patch.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please be aware that, at the time of writing, this patch has only been tested with the latest Docker image (&lt;code&gt;ghost:5.75.3&lt;/code&gt;), and its effectiveness may vary in future versions (or it might become unnecessary). The specifics of the patch will be elaborated upon in the second part of this guide.&lt;/p&gt;

&lt;p&gt;For Docker installation, here's the corresponding &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ghost:latest
COPY patch.sh patch.sh
RUN chmod +x patch.sh &amp;amp;&amp;amp; ./patch.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming everything goes smoothly, Ghost should now be configured to work seamlessly with PlanetScale. Now, let's proceed to connect them.&lt;/p&gt;

&lt;h3&gt;
  
  
  C. Connecting to the database
&lt;/h3&gt;

&lt;p&gt;For configuring the database connection, use the PlanetScale connection details obtained from the previous steps and paste them into your Ghost configuration. Keep in mind that the value for &lt;code&gt;database__connection__ssl&lt;/code&gt; should be set to &lt;code&gt;[{"rejectUnauthorized":true}]&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;database__client=mysql
database__connection__database=&amp;lt;Your PlanetScale Database Name&amp;gt;
database__connection__host=&amp;lt;Your PlanetScale Database Host&amp;gt;
database__connection__password=&amp;lt;Your PlanetScale Database Password&amp;gt;
database__connection__user=&amp;lt;Your PlanetScale Database User&amp;gt;
database__connection__ssl=[{"rejectUnauthorized":true}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rest of the configuration process is akin to setting up a standard Ghost installation, which you can refer to in the &lt;a href="https://ghost.org/docs/config/?ref=justrox.me" rel="noopener noreferrer"&gt;official reference found here&lt;/a&gt;.&lt;/p&gt;
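&lt;p&gt;As an aside, the double underscores in these keys are how Ghost maps flat environment variables onto its nested configuration: &lt;code&gt;database__connection__host&lt;/code&gt; becomes &lt;code&gt;database.connection.host&lt;/code&gt;. Here is a minimal sketch of that mapping idea (illustrative only, not Ghost's actual config loader):&lt;/p&gt;

```javascript
// Illustrative sketch: expand double-underscored env keys into a nested
// config object, the way Ghost's configuration treats them.
// This is NOT Ghost's actual loader, just the idea behind the naming.
function envToConfig(env) {
  const config = {};
  for (const [key, value] of Object.entries(env)) {
    const path = key.split('__');
    let node = config;
    // Create/walk intermediate objects for all but the last segment.
    for (const part of path.slice(0, -1)) {
      node = node[part] = node[part] || {};
    }
    node[path[path.length - 1]] = value;
  }
  return config;
}

const config = envToConfig({
  database__client: 'mysql',
  database__connection__host: 'example.psdb.cloud', // hypothetical host
});
console.log(config.database.connection.host); // 'example.psdb.cloud'
```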

&lt;h3&gt;
  
  
  D. Run your Ghost instance
&lt;/h3&gt;

&lt;p&gt;For Docker installation, build and launch your container with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build . --tag ghost_example
docker run --env-file ./env ghost_example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming a smooth process, you should observe your Ghost blog initializing and generating the necessary database tables. Once this initialization is complete, your Ghost blog integrated with PlanetScale should be live and ready!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fiahzjqgayjuywtwbti10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fiahzjqgayjuywtwbti10.png" alt="Ghost instance initializing the database"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  E. Deployment (Optional)
&lt;/h3&gt;

&lt;p&gt;I've created an example repository here for those looking to swiftly deploy a Ghost blog using PlanetScale. Simply follow the aforementioned steps for setting up the database and connecting to it. Beyond that, all that's required is to build and run the Dockerfile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/JustroX/ghost-docker-planetscale" rel="noopener noreferrer"&gt;Github link here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tested this using &lt;a href="https://railway.app/?referralCode=oUrsXu" rel="noopener noreferrer"&gt;Railway&lt;/a&gt;, where all that's needed is to provide the repo fork and the necessary environment variables. &lt;a href="https://railway.app/?referralCode=oUrsXu" rel="noopener noreferrer"&gt;Railway&lt;/a&gt; then automatically identifies the Dockerfile for building and running. The outcome is my &lt;a href="https://justrox.me" rel="noopener noreferrer"&gt;blog website&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  F. Conclusion
&lt;/h3&gt;

&lt;p&gt;That concludes the setup and deployment guide. Thank you for reading! If any parts are unclear or if you encounter issues with replication, feel free to reach out. I'd be happy to help. Just shoot me an email (&lt;a href="mailto:thepiesaresquared@gmail.com"&gt;thepiesaresquared@gmail.com&lt;/a&gt;) or DM/tweet me at &lt;a href="https://twitter.com/justfizzbuzz" rel="noopener noreferrer"&gt;@justfizzbuzz&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're intrigued by the challenges I encountered while integrating PlanetScale and Ghost, along with the eventual solution in the form of a patch, you can proceed to Part 2. 😄&lt;/p&gt;




&lt;p&gt;ℹ️ This post is also published on &lt;a href="https://justrox.me/ghost-blog-planet-scale/?ref=devto" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;. If you found this post helpful, you might discover other useful posts there as well!&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 2 - Integrating Ghost 🤝 PlanetScale
&lt;/h2&gt;

&lt;p&gt;In my view, there were three significant hurdles that fellow self-hosters and I faced when attempting to integrate Ghost and PlanetScale:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configuring the value for &lt;code&gt;database.connection.ssl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Ensuring foreign key constraints are supported by PlanetScale.&lt;/li&gt;
&lt;li&gt;Addressing Ghost setup failures during initialization.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  A. Configuring the value for &lt;code&gt;database.connection.ssl&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;To overcome this initial challenge, it's crucial to establish a secure connection to the database since it resides outside the same network as the server. Attempting a direct connection without SSL/TLS results in the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fxaglqbfrba5guwa1h1wt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fxaglqbfrba5guwa1h1wt.png" alt="Error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While seemingly minor, locating the correct value for &lt;code&gt;database.connection.ssl&lt;/code&gt; can be challenging. Experienced developers might find it obvious, but for those who simply copy-paste from PlanetScale, it may not be as apparent. The community worked out this configuration value back in March of last year, as &lt;a href="https://forum.ghost.org/t/self-hosting-ghost-with-docker-and-planetscale/" rel="noopener noreferrer"&gt;discussed in this forum thread&lt;/a&gt;. From that discussion, the correct value should be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;database__connection__ssl=[{"rejectUnauthorized":true}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  B. Foreign key constraints support in PlanetScale
&lt;/h3&gt;

&lt;p&gt;The second and primary challenge arose from the absence of foreign key constraint support in PlanetScale at the time. This was a significant hurdle, as Ghost relies heavily on the feature. In a &lt;a href="https://github.com/planetscale/discussion/discussions/88?ref=justrox.me#discussioncomment-1236473" rel="noopener noreferrer"&gt;GitHub discussion&lt;/a&gt;, a PlanetScale rep said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It looks there are at least a couple of big blockers here ... Seems like [Ghost] makes heavy use of foreign key constraints, a feature we don't support on PlanetScale&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the point where most integration discussions came to a halt.&lt;/p&gt;

&lt;p&gt;Fast forward to last month: PlanetScale made a significant announcement introducing foreign key constraints support as a beta feature:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://planetscale.com/blog/announcing-foreign-key-constraints-support?ref=justrox.me" rel="noopener noreferrer"&gt;Announcing foreign key constraints support — PlanetScale&lt;br&gt;
You can now use foreign key constraints in PlanetScale databases&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this recent development, the integration is now technically feasible, making it a great time to resume and advance the integration efforts.&lt;/p&gt;
&lt;h3&gt;
  
  
  C. Addressing Ghost setup failures during initialization
&lt;/h3&gt;
&lt;h4&gt;
  
  
  i. The Problem
&lt;/h4&gt;

&lt;p&gt;This particular challenge is a bit tricky. PlanetScale creates the database for you, and all you need to do is connect to it. Ghost, however, encounters startup issues because it erroneously assumes that the database still needs to be created: it attempts to create the database despite its existence, failing to detect that the PlanetScale database has already been provisioned.&lt;/p&gt;

&lt;p&gt;This concern was previously addressed in this &lt;a href="https://github.com/planetscale/discussion/discussions/88?ref=justrox.me#discussioncomment-1236473" rel="noopener noreferrer"&gt;discussion&lt;/a&gt; and ended with the following conclusion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It doesn't appear that there's a ghost CLI option to run the initial setup without attempting to create the database, or any other built-in workaround for the problem you encountered.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To address this, my initial plan was centered around creating a Ghost fork and incorporating an option to skip the database creation step. Luckily, looking back, the solution turned out to be remarkably straightforward, with the bulk of my time dedicated to tracing the error and navigating through the code.&lt;/p&gt;
&lt;h4&gt;
  
  
  ii. Tracing Ghost's code
&lt;/h4&gt;

&lt;p&gt;My first step was to trace the source of the error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-4.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fz185bghvqvst94ukgk6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-4.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fz185bghvqvst94ukgk6s.png" alt="Error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given my limited familiarity with the codebase, I followed the execution flow of the source code, available at &lt;a href="https://github.com/TryGhost/Ghost/blob/main/ghost/core" rel="noopener noreferrer"&gt;https://github.com/TryGhost/Ghost/blob/main/ghost/core&lt;/a&gt;, starting from Ghost's initialization and continuing until I found the code responsible for creating the database. I tracked the references through &lt;code&gt;index.js&lt;/code&gt; -&amp;gt; &lt;code&gt;ghost.js&lt;/code&gt; -&amp;gt; &lt;code&gt;boot.js&lt;/code&gt; -&amp;gt; &lt;code&gt;DatabaseStateManager.js&lt;/code&gt;, until reaching what appeared to be a dead end.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;DatabaseStateManager.js&lt;/code&gt;, Ghost triggers database initialization by invoking the &lt;code&gt;init&lt;/code&gt; method on a &lt;code&gt;knexMigrator&lt;/code&gt; instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  await this.knexMigrator.init();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;knexMigrator&lt;/code&gt; is initialized using a class imported from another package named &lt;code&gt;knex-migrator&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const KnexMigrator = require('knex-migrator');
...
this.knexMigrator = new KnexMigrator({
  knexMigratorFilePath
});
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Given that it originates from another package, it seems like a dead-end because it likely adheres to a standard protocol for database initialization that might be fundamentally incompatible with how PlanetScale is configured. &lt;/p&gt;

&lt;p&gt;Additionally, tracing the code upward suggests there might be another package that needs forking to resolve this integration challenge between Ghost and PlanetScale.&lt;/p&gt;

&lt;p&gt;Confronted with this challenge, I spent hours attempting to find a workaround to skip the database initialization within the &lt;code&gt;ghost/core&lt;/code&gt; codebase. That search, however, turned out to be the real dead end; the breakthrough came when I decided to delve deeper into the &lt;code&gt;knex-migrator&lt;/code&gt; package.&lt;/p&gt;

&lt;h4&gt;
  
  
  iii. Tracing knex-migrator package
&lt;/h4&gt;

&lt;p&gt;Having exhausted other options, I dug into the &lt;code&gt;knex-migrator&lt;/code&gt; package itself. The turning point came when I discovered that the package is owned by the TryGhost organization; you can view the repository at &lt;a href="https://github.com/TryGhost/knex-migrator" rel="noopener noreferrer"&gt;https://github.com/TryGhost/knex-migrator&lt;/a&gt;. This indicated that the package was designed with Ghost in mind, and a potential fix might be just a pull request away.&lt;/p&gt;

&lt;p&gt;Resuming the code tracing, I eventually narrowed down the origin of the error by identifying the precise SQL command sent to PlanetScale. See line 126.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fjgmpiqz3m39vvquz8jxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fjgmpiqz3m39vvquz8jxe.png" alt="Source code of knex-migrator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notably, at line 130, it became evident that the scenario where the database already exists had been accounted for. This was somewhat surprising, given Ghost's initial error about the database already existing in PlanetScale. It implies that with PlanetScale, the SQL command triggers a different error number than the expected &lt;code&gt;1007&lt;/code&gt; when attempting to create a database that already exists.&lt;/p&gt;

&lt;p&gt;To confirm this, I inserted a &lt;code&gt;console.error&lt;/code&gt; to log the error being caught at line 129, revealing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fuxwdb6xyhmbf2x0ioqh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-2.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fuxwdb6xyhmbf2x0ioqh9.png" alt="Error trace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bingo! The error's errno is indeed different from &lt;code&gt;1007&lt;/code&gt;, which ultimately leads to the crash of the Ghost setup.&lt;/p&gt;

&lt;h4&gt;
  
  
  iv. Creating the patch
&lt;/h4&gt;

&lt;p&gt;Based on the findings above, it appears that I don't necessarily need to create a Ghost setup option to skip database creation. Instead, I need Ghost to recognize that the error number originating from PlanetScale's database is not &lt;code&gt;1007&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To really resolve this issue, ensuring compatibility in error numbers is crucial. At this point, I don't have insight into where this difference in error numbers originates. It could be from the driver Ghost has utilized for communication with the MySQL database, or it might be specific to PlanetScale's implementation of the error. &lt;strong&gt;Individuals from the PlanetScale or Ghost's team reading this blog might have more ideas on this aspect than I do. 😀&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But for my specific case, my primary goal is a functional deployment of Ghost. As a result, I am currently content with patching the &lt;code&gt;current/node_modules/knex-migrator/lib/database.js&lt;/code&gt; file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// CASE: DB exists&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;errno&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1007&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="c1"&gt;// Here's the patch&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isPlanetScaleDBExists&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;errno&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1105&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; 
        &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sqlMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;database exists&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isPlanetScaleDBExists&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;



      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DatabaseError&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DATABASE_CREATION_FAILED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To facilitate this, I crafted a script that patches Ghost during Docker build time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file_path="current/node_modules/knex-migrator/lib/database.js"
text_to_add="const isDBExists = err.errno == 1105 &amp;amp;&amp;amp; err.sqlMessage.endsWith('database exists');\\
if(isDBExists) return Promise.resolve();\\
"
line_number=129

sed -i "${line_number}i${text_to_add}" "$file_path"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above is designed to inject JavaScript code into the source code of the knex-migrator.&lt;/p&gt;

&lt;h4&gt;
  
  
  v. Final result
&lt;/h4&gt;

&lt;p&gt;After crafting the patch, it took an additional hour or two to ensure my environment variables were correctly configured. Once everything fell into place, I was thrilled to finally deploy my Ghost website!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fiahzjqgayjuywtwbti10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-3.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fiahzjqgayjuywtwbti10.png" alt="Creating tables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the successful import of my blogs from the previous deployment, this Ghost site was up and running smoothly, with the database now hosted on PlanetScale! 🙌&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-4.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fhwehlmtbrt5ptibx2n5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres-4.cloudinary.com%2Fplanzen%2Fimage%2Fupload%2Fq_auto%2Fv1%2Fjustrox%2F2024%2F01%2Fhwehlmtbrt5ptibx2n5t.png" alt="Ghost site using PlanetScale"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

&lt;p&gt;I appreciate you taking the time to read up to this point ❤️. If you find any parts confusing or have any issues with replication, I'd be happy to help. Just shoot me an email (&lt;a href="mailto:thepiesaresquared@gmail.com"&gt;thepiesaresquared@gmail.com&lt;/a&gt;) or DM/tweet me at &lt;a href="https://twitter.com/justfizzbuzz" rel="noopener noreferrer"&gt;@justfizzbuzz&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I thoroughly enjoyed the process of making PlanetScale and Ghost work seamlessly together! If you're interested in more posts like this, I invite you to subscribe to &lt;a href="https://justrox.me?ref=devto" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;, or let's connect and share our &lt;a href="https://twitter.com/justfizzbuzz" rel="noopener noreferrer"&gt;posts on Twitter&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Thank you 👨‍💻&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>tutorial</category>
      <category>docker</category>
    </item>
    <item>
      <title>CRDTs: A beginner's overview for building a collaborative app</title>
      <dc:creator>jxtro</dc:creator>
      <pubDate>Sat, 03 Dec 2022 11:50:26 +0000</pubDate>
      <link>https://dev.to/justrox/crdts-a-beginners-overview-for-building-a-collaborative-app-1a38</link>
      <guid>https://dev.to/justrox/crdts-a-beginners-overview-for-building-a-collaborative-app-1a38</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This is also published in &lt;a href="https://justrox.me/g/post/slug/crdts-a-beginners-overview-for-building-a-collaborative-app"&gt;my blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Conflict-free replicated data types (CRDTs) are a family of data structures I find truly fascinating. In essence, they enable distributed machines to eventually sync their data over time, i.e., eventual consistency.&lt;/p&gt;

&lt;p&gt;To illustrate what I mean, consider the following scenario:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Two people, A and B, are doing collaborative work in a shared document, e.g., Google Docs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no centralized server managing the state between A and B.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A sends state updates to B, then B changes its state to reflect A's updates, and vice versa.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, for some unknown reason, they get disconnected from each other for some time. During this interval, both people make changes to the document. Once they are reconnected, how should the program resolve their updates such that both end up with equivalent states?&lt;/p&gt;

&lt;p&gt;One simple approach is to attach timestamps to the updates and order events by timestamp. This approach, however, is predicated on the assumption that both machines have synchronized clocks that are immune to drift and tampering. Depending on your use case, this may result in data corruption.&lt;/p&gt;
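&lt;p&gt;To make the clock assumption concrete, here is a minimal last-write-wins register sketch (illustrative only, not a production implementation), showing how a skewed clock can silently discard a genuinely newer edit:&lt;/p&gt;

```javascript
// Minimal last-write-wins (LWW) register: each update carries a timestamp,
// and merge simply keeps the value with the larger timestamp.
// Illustrative sketch only; it inherits the synchronized-clock assumption.
function mergeLWW(a, b) {
  return b.ts > a.ts ? b : a;
}

// A's clock is accurate; B's clock runs 60 seconds behind (drift).
const fromA = { value: 'draft v1', ts: 1000 };
const fromB = { value: 'draft v2', ts: 940 }; // written LATER in real time

// B's genuinely newer edit loses because of its skewed timestamp.
const merged = mergeLWW(fromA, fromB);
console.log(merged.value); // 'draft v1' -- B's update is silently dropped
```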

&lt;p&gt;CRDTs are intended to address this issue. Essentially, the goal is to structure the state updates in such a way that merge conflicts cannot occur*. This is accomplished by creating a data structure that has the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Merging updates is commutative: given a MERGE function that applies state updates, the order in which updates are applied is irrelevant to the resulting state.&lt;/p&gt;

&lt;p&gt;C = MERGE(A, B) = MERGE(B, A)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Merging updates is idempotent: merging the same updates more than once results in the same state.&lt;/p&gt;

&lt;p&gt;C = MERGE(A, B) = MERGE(C, A) = MERGE(C, B)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, updates can be combined in any order, any number of times, and still produce the same result. This makes CRDTs ideal for distributed systems where packets may arrive duplicated, late, or out of order.&lt;/p&gt;
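&lt;p&gt;Both properties can be seen in one of the simplest CRDTs, the grow-only counter (G-Counter), where each replica tracks its own increments and merging takes the element-wise maximum. The sketch below is illustrative only, not how libraries like Y.js or Automerge implement things internally:&lt;/p&gt;

```javascript
// Grow-only counter (G-Counter): each replica tracks its own increment
// count, and merge takes the element-wise maximum per replica.
function increment(state, replicaId) {
  return { ...state, [replicaId]: (state[replicaId] || 0) + 1 };
}

function merge(a, b) {
  const out = { ...a };
  for (const [id, count] of Object.entries(b)) {
    out[id] = Math.max(out[id] || 0, count);
  }
  return out;
}

function total(state) {
  return Object.values(state).reduce((sum, n) => sum + n, 0);
}

// Replicas A and B diverge while disconnected...
const a = increment(increment({}, 'A'), 'A'); // { A: 2 }
const b = increment({}, 'B');                 // { B: 1 }

// ...then reconnect. Merge order doesn't matter (commutative),
// and re-merging changes nothing (idempotent).
const ab = merge(a, b);
const ba = merge(b, a);
console.log(total(ab), total(ba));            // 3 3
console.log(total(merge(ab, a)));             // 3
```

&lt;p&gt;Because the merge is an element-wise maximum, it is commutative, associative, and idempotent by construction, which is exactly what the two properties above demand.&lt;/p&gt;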

&lt;p&gt;This approach is quite powerful and is widely used in distributed systems today. Its ability to resolve conflicts in distributed databases, as well as to synchronize the nodes of a distributed system, makes it an essential tool in today's systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Playing with CRDTs
&lt;/h2&gt;

&lt;p&gt;There are a lot of implementations of CRDTs out there. In JavaScript, for instance, we have Y.js (&lt;a href="https://github.com/yjs/yjs"&gt;https://github.com/yjs/yjs&lt;/a&gt;) and Automerge (&lt;a href="https://github.com/automerge/automerge"&gt;https://github.com/automerge/automerge&lt;/a&gt;). There's also a Y.js demo (&lt;a href="https://demos.yjs.dev/prosemirror/prosemirror.html"&gt;https://demos.yjs.dev/prosemirror/prosemirror.html&lt;/a&gt;) that lets you play around with them and have a collaborative app running in just a few seconds. All messages are exchanged via WebRTC while the state is managed via CRDTs, making it a great sandbox for understanding how CRDTs work.&lt;/p&gt;

&lt;p&gt;I also attempted to build an offline-first application with Y.js-powered device syncing. The goal is to create a habit-tracking app that does not rely on a central server (&lt;a href="https://habit-board.netlify.app"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk1gaw6hollullb1qmgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnk1gaw6hollullb1qmgr.png" alt="Tweet snippet showcasing demo of the application" width="800" height="814"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/justfizzbuzz"&gt;@justfizzbuzz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had a lot of fun messing around with it. Overall, CRDTs are an incredibly useful structure for distributed systems and are gaining a lot of momentum. It's definitely worth investing time in them to better understand how they work and how you can apply them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, CRDTs provide an efficient way to keep distributed systems in sync. By structuring their data so that merging is commutative and idempotent, they remain consistent regardless of delivery order or duplicated packets. This technique is widely used in distributed systems today and is gaining more and more traction. It's definitely worth looking into this structure to understand how it works and how you can apply it in your own systems.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;* I may be relaxing some jargon here; what I mean is that there is a deterministic algorithm or scheme guaranteed to combine any state updates such that no merge conflict demanding manual resolution is produced.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;References&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=M8-WFTjZoA0&amp;amp;t=1821s&amp;amp;ab_channel=TL%3BDR%2F%2FJavaScriptcodecastsforworkingdevs"&gt;https://www.youtube.com/watch?v=M8-WFTjZoA0&amp;amp;t=1821s&amp;amp;ab_channel=TL%3BDR%2F%2FJavaScriptcodecastsforworkingdevs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=iEFcmfmdh2w&amp;amp;t=835s&amp;amp;ab_channel=CodingTech"&gt;https://www.youtube.com/watch?v=iEFcmfmdh2w&amp;amp;t=835s&amp;amp;ab_channel=CodingTech&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>crdt</category>
    </item>
  </channel>
</rss>
