<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nabeel Sulieman</title>
    <description>The latest articles on DEV Community by Nabeel Sulieman (@nabsul).</description>
    <link>https://dev.to/nabsul</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F97509%2F4af776c8-3f76-4934-8c18-a58c2c3bc8a8.jpeg</url>
      <title>DEV Community: Nabeel Sulieman</title>
      <link>https://dev.to/nabsul</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nabsul"/>
    <language>en</language>
    <item>
      <title>Fast Multi-Platform Builds on GitHub</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Mon, 09 Feb 2026 15:15:00 +0000</pubDate>
      <link>https://dev.to/nabsul/fast-multi-platform-builds-on-github-3jb0</link>
      <guid>https://dev.to/nabsul/fast-multi-platform-builds-on-github-3jb0</guid>
      <description>&lt;p&gt;If you want to build multi-architecture Docker containers in GitHub Actions, the standard recommendation you'll find online is to install &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/qemu.yml" rel="noopener noreferrer"&gt;BuildX and QEMU&lt;/a&gt;. The downside of this approach is that QEMU emulation is about 10x slower than native hardware. Building my simple Hello World project went from 30 seconds to 3 minutes.&lt;/p&gt;

&lt;p&gt;In this post, I will show you several ways to speed up your builds. The options you have are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switching to runners that have cross-platform remote builds pre-configured&lt;/li&gt;
&lt;li&gt;Building single-architecture images in a matrix and merging them manually&lt;/li&gt;
&lt;li&gt;Using GitHub Actions instances as remote builders with Tailscale&lt;/li&gt;
&lt;li&gt;Using a Kubernetes cluster for remote builds&lt;/li&gt;
&lt;li&gt;Setting up your own machines for remote builds&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Switching to a Different Runner Provider
&lt;/h2&gt;

&lt;p&gt;This is the simplest solution, but it requires signing up for a new service and will cost some money. Several alternative runner providers offer runners that come preconfigured with BuildX and native-hardware remote builders.&lt;/p&gt;

&lt;p&gt;One such provider that I tried is &lt;a href="https://namespace.so/" rel="noopener noreferrer"&gt;Namespace.so&lt;/a&gt;. Signing up was fast and easy, and &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/namespace.yml" rel="noopener noreferrer"&gt;switching to them in my workflows&lt;/a&gt; only required changing the &lt;code&gt;runs-on&lt;/code&gt; field in my YAML.&lt;/p&gt;
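&lt;p&gt;The change is literally one line per job; the runner label below is a placeholder, since the real label comes from the profile you configure in the Namespace dashboard:&lt;/p&gt;

```yaml
jobs:
  build:
    # was: runs-on: ubuntu-latest
    runs-on: namespace-profile-my-builder  # hypothetical profile label
```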

&lt;h2&gt;
  
  
  Building Single Images and Merging
&lt;/h2&gt;

&lt;p&gt;This option is probably the simplest way to get cross-platform builds without leaving GitHub. You build an individual image for each of the architectures that you want, then merge them with a buildx command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx imagetools create &lt;span class="nt"&gt;-t&lt;/span&gt; nabsul/myproject:v1.0.0 nabsul/myproject:v1.0.0-amd64 nabsul/myproject:v1.0.0-arm64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/matrix.yml" rel="noopener noreferrer"&gt;this example&lt;/a&gt;, I use a matrix of jobs to reduce duplicate YAML, and then a &lt;code&gt;merge&lt;/code&gt; job to create the final image.&lt;/p&gt;
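&lt;p&gt;A sketch of the pattern (runner labels, tags, and step details are illustrative; GitHub's hosted ARM runners use labels like &lt;code&gt;ubuntu-24.04-arm&lt;/code&gt;):&lt;/p&gt;

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - arch: amd64
            runner: ubuntu-24.04
          - arch: arm64
            runner: ubuntu-24.04-arm
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # each job builds natively for its own architecture
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: nabsul/myproject:v1.0.0-${{ matrix.arch }}
  merge:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # stitch the per-arch images into one multi-arch manifest
      - run: docker buildx imagetools create -t nabsul/myproject:v1.0.0 nabsul/myproject:v1.0.0-amd64 nabsul/myproject:v1.0.0-arm64
```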

&lt;h2&gt;
  
  
  GitHub Remote Builders with Tailscale
&lt;/h2&gt;

&lt;p&gt;Honestly, &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/tailscale.yml" rel="noopener noreferrer"&gt;this is cool in a nerdy way&lt;/a&gt;, but I wouldn't recommend it for production. For each hardware architecture, I spin up a job that starts a buildkitd server and joins my tailnet under a pre-determined hostname, so the build job can find the machine. The final step in the job is &lt;code&gt;printf "HTTP/1.1 200 OK\r\nContent-Length: 16\r\n\r\nShutting down..." | nc -l -p 8080&lt;/code&gt;, which simply waits until someone hits port 8080 and then shuts down.&lt;/p&gt;

&lt;p&gt;The main build step configures itself with the remote builders from the previous step. It then joins the tailnet and uses those remote instances to do a cross-platform build. After the build is done, I use a &lt;code&gt;curl&lt;/code&gt; command to cause the other jobs to end.&lt;/p&gt;

&lt;p&gt;Like I said, this is a pretty cool setup, but there's just so much that can go wrong. If the &lt;code&gt;curl&lt;/code&gt; fails, you'll get jobs that hang for a long time, and you'll have to worry about tailnet configurations and security.&lt;/p&gt;
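&lt;p&gt;The builder configuration itself is just BuildX's &lt;code&gt;remote&lt;/code&gt; driver pointed at the tailnet hostnames; something along these lines (hostnames and port are whatever you chose for the builder jobs):&lt;/p&gt;

```shell
# the first endpoint creates the builder; --append adds the second architecture
docker buildx create --name tailnet-builder --driver remote tcp://builder-amd64:1234
docker buildx create --name tailnet-builder --append --driver remote tcp://builder-arm64:1234
docker buildx build --builder tailnet-builder --platform linux/amd64,linux/arm64 -t nabsul/myproject:v1.0.0 --push .
```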

&lt;h2&gt;
  
  
  Kubernetes Remote Builders
&lt;/h2&gt;

&lt;p&gt;If you happen to have a Kubernetes cluster that has both Intel and ARM nodes in it, you can use them as remote builders. In &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/remote-k8s.yml" rel="noopener noreferrer"&gt;this example&lt;/a&gt;, I create a temporary namespace for each build, run the builds there, and then clean up afterwards.&lt;/p&gt;

&lt;p&gt;Overall this is not a bad option if you already have a Kubernetes cluster being used for other purposes. But you probably don't want to be creating one just for the purpose of your builds.&lt;/p&gt;
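&lt;p&gt;Under the hood this uses BuildX's &lt;code&gt;kubernetes&lt;/code&gt; driver, roughly like the following (the namespace and node selectors are examples; adjust them to your cluster's labels):&lt;/p&gt;

```shell
# one builder node per architecture, pinned to matching cluster nodes
docker buildx create --name k8s-builder --driver kubernetes \
  --driver-opt namespace=buildkit,nodeselector=kubernetes.io/arch=amd64 --platform linux/amd64
docker buildx create --name k8s-builder --append --driver kubernetes \
  --driver-opt namespace=buildkit,nodeselector=kubernetes.io/arch=arm64 --platform linux/arm64
```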

&lt;h2&gt;
  
  
  TCP Remote Builders
&lt;/h2&gt;

&lt;p&gt;You can also run a standalone VM for each hardware type you need and use them as remote builders. In &lt;a href="https://github.com/nabsul/gh/blob/main/.github/workflows/cross-platform-builds/remote-tcp.yml" rel="noopener noreferrer"&gt;this example&lt;/a&gt;, I created one ARM and one Intel VM and secured them with TLS certs. I then configured BuildX to use those remote builders for the build. You could also leverage Tailscale here and avoid the need for TLS certificates.&lt;/p&gt;
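&lt;p&gt;With the &lt;code&gt;remote&lt;/code&gt; driver, the TLS material is passed as driver options; the hostnames, port, and file paths below are placeholders:&lt;/p&gt;

```shell
docker buildx create --name tcp-builder --driver remote tcp://intel-builder.example.com:9999 \
  --driver-opt cacert=ca.pem,cert=client.pem,key=client-key.pem
docker buildx create --name tcp-builder --append --driver remote tcp://arm-builder.example.com:9999 \
  --driver-opt cacert=ca.pem,cert=client.pem,key=client-key.pem
```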

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So there you have it, several options to get faster multi-architecture builds on GitHub Actions. Personally, I currently lean towards Namespace.so simply because it only costs me about $2 a month and I'm lazy. If Namespace started to get expensive, I would probably go with the separate builds and merge pattern. And if my builds were starting to get expensive on GitHub, I might look into setting up builders at home and doing remote builds over Tailscale.&lt;/p&gt;

</description>
      <category>github</category>
      <category>docker</category>
      <category>tailscale</category>
    </item>
    <item>
      <title>Talos Kubernetes in Five Minutes</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Sun, 28 Sep 2025 22:20:00 +0000</pubDate>
      <link>https://dev.to/nabsul/talos-kubernetes-in-five-minutes-1p1h</link>
      <guid>https://dev.to/nabsul/talos-kubernetes-in-five-minutes-1p1h</guid>
      <description>&lt;p&gt;Original post: &lt;a href="https://nabeel.dev/2025/09/28/talos-in-five" rel="noopener noreferrer"&gt;https://nabeel.dev/2025/09/28/talos-in-five&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.siderolabs.com/talos-linux/" rel="noopener noreferrer"&gt;Talos Linux&lt;/a&gt; is an OS designed specifically for running Kubernetes.&lt;br&gt;
It is locked down with no SSH access. All operations are done through a secured API.&lt;br&gt;
The documentation is (understandably) catered to setting up multi-node Kubernetes clusters that are resilient to failure.&lt;br&gt;
But what if you want the cheapest possible Kubernetes cluster, say for testing, where reliability isn't critical?&lt;/p&gt;

&lt;p&gt;In this article I'll show you how to set up a simple single-node Talos cluster in less than five minutes.&lt;br&gt;
By following these instructions, you can have a full Kubernetes cluster running on a single VM,&lt;br&gt;
without the extra costs of control planes and load balancers that cloud providers normally add onto their Kubernetes services.&lt;/p&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The basic outline of steps to create a single-node cluster is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get a Talos ISO image&lt;/li&gt;
&lt;li&gt;Create a blank Talos VM instance&lt;/li&gt;
&lt;li&gt;Update your config to allow workloads on control plane nodes&lt;/li&gt;
&lt;li&gt;Initialize the Talos VM and bootstrap the cluster&lt;/li&gt;
&lt;li&gt;Install MetalLB&lt;/li&gt;
&lt;li&gt;Install Envoy Gateway&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 0: Get a Talos ISO Image
&lt;/h2&gt;

&lt;p&gt;Okay, this is where I cheat a little. I'm not counting the time it takes to download and upload a Talos VM image as part of the 5 minutes.&lt;br&gt;
This step depends on which cloud provider (or home lab setup) you have.&lt;br&gt;
The good news is that the &lt;a href="https://www.talos.dev/v1.11/talos-guides/install/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; is quite good.&lt;br&gt;
Find the section that matches your setup and follow those instructions.&lt;/p&gt;

&lt;p&gt;Essentially, you are going to be downloading a Talos Linux ISO.&lt;br&gt;
If you are using a cloud provider (Azure, AWS, OCI, DigitalOcean, etc.),&lt;br&gt;
you will then need to upload that image so that VMs can be created from that image.&lt;br&gt;
I have done this on DigitalOcean and Oracle Cloud. It takes a bit of time, maybe 10-15 minutes,&lt;br&gt;
but it's not hard and you only need to do it once to create as many VMs as you like going forward.&lt;/p&gt;
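&lt;p&gt;On DigitalOcean, for example, the upload is a "custom image" created from a URL; the name and image URL below are placeholders for wherever you got your Talos disk image:&lt;/p&gt;

```shell
doctl compute image create talos-linux \
  --region sfo3 \
  --image-url https://example.com/talos-digital-ocean-amd64.raw.gz \
  --image-description "Talos Linux"
```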
&lt;h2&gt;
  
  
  Step 1: Create a Talos VM
&lt;/h2&gt;

&lt;p&gt;Next you will need to create a Talos Linux VM (or server if you're installing on bare metal).&lt;br&gt;
As with the previous section, you will need to follow the instructions based on the infrastructure you are using.&lt;br&gt;
I've been most recently using DigitalOcean and automating everything with PowerShell.&lt;br&gt;
For me, creating a new blank Talos VM looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doctl compute droplet create &lt;span class="nt"&gt;--region&lt;/span&gt; sfo3 &lt;span class="nt"&gt;--image&lt;/span&gt; &lt;span class="nv"&gt;$talosImageId&lt;/span&gt; &lt;span class="nt"&gt;--size&lt;/span&gt; s-2vcpu-4gb &lt;span class="nt"&gt;--enable-private-networking&lt;/span&gt; &lt;span class="nt"&gt;--ssh-keys&lt;/span&gt; &lt;span class="nv"&gt;$sshKeyId&lt;/span&gt; &lt;span class="nv"&gt;$vmName&lt;/span&gt; &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating your blank VM, &lt;strong&gt;&lt;em&gt;DO NOT&lt;/em&gt;&lt;/strong&gt; follow any other instructions from the documentation!&lt;br&gt;
Specifically, do not execute any of the &lt;code&gt;talosctl&lt;/code&gt; commands described there.&lt;br&gt;
This is where we will diverge from the official documentation.&lt;/p&gt;

&lt;p&gt;Once your VM or machine is created, make note of its IP address for the following steps.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Bootstrap the Cluster
&lt;/h2&gt;

&lt;p&gt;Now we are going to initialize our Talos Kubernetes cluster.&lt;br&gt;
Do this with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl gen config &lt;span class="nv"&gt;$vmName&lt;/span&gt; &lt;span class="s2"&gt;"https://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:6443"&lt;/span&gt; &lt;span class="nt"&gt;--additional-sans&lt;/span&gt; &lt;span class="nv"&gt;$VM_IP&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG_DIR&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG_DIR&lt;/span&gt;&lt;span class="s2"&gt;/talosconfig"&lt;/span&gt;
talosctl config endpoint &lt;span class="nv"&gt;$VM_IP&lt;/span&gt;
talosctl config node &lt;span class="nv"&gt;$VM_IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a directory and populate it with an auto-generated cert and some default configuration files.&lt;br&gt;
Note the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--additional-sans&lt;/code&gt; ensures that the certificate is valid for the VM's public IP address&lt;/li&gt;
&lt;li&gt;Setting the &lt;code&gt;TALOSCONFIG&lt;/code&gt; environment variable means you don't have to add &lt;code&gt;--talosconfig mydir/talosconfig&lt;/code&gt; every time you use &lt;code&gt;talosctl&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Talos normally configures separate control plane and worker nodes.&lt;br&gt;
This is good practice for production clusters, but is expensive when you just want to test or kick the tires.&lt;br&gt;
Instead, we want to create a single control plane VM that will also be our worker.&lt;br&gt;
To do this, edit &lt;code&gt;controlplane.yaml&lt;/code&gt; in the Talos config directory.&lt;br&gt;
Scroll to the end of the file and uncomment (remove the &lt;code&gt;#&lt;/code&gt;) the line &lt;code&gt;# allowSchedulingOnControlPlanes: true&lt;/code&gt;.&lt;/p&gt;
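&lt;p&gt;If you'd rather script that edit than open an editor, a one-liner like this works (assuming &lt;code&gt;$CONFIG_DIR&lt;/code&gt; is the output directory from &lt;code&gt;gen config&lt;/code&gt;, and GNU sed; on macOS use &lt;code&gt;sed -i ''&lt;/code&gt;):&lt;/p&gt;

```shell
# Uncomment the setting so the control plane node also runs workloads
sed -i 's/# allowSchedulingOnControlPlanes: true/allowSchedulingOnControlPlanes: true/' \
  "$CONFIG_DIR/controlplane.yaml"
```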

&lt;p&gt;Now we are ready to initialize the VM. By default, a freshly created VM waits for someone to configure it.&lt;br&gt;
Once you run this command, the VM is locked down to only work with the certificate that you generated with the &lt;code&gt;talosctl gen config&lt;/code&gt; command.&lt;br&gt;
Technically, there's a risk that someone could randomly beat you to configuring the VM and take ownership.&lt;br&gt;
The likelihood of this happening is very low, but if it did, you would see a failure in the &lt;code&gt;apply-config&lt;/code&gt; command,&lt;br&gt;
and you would simply delete the VM.&lt;br&gt;
There are more secure ways to do this, specifically generating an ISO that is preconfigured to only respond to your cert.&lt;br&gt;
However, that is beyond the scope of this simple tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl apply-config &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="nv"&gt;$VM_IP&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG_DIR&lt;/span&gt;&lt;span class="s2"&gt;/controlplane.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Give the VM a few seconds (I wait 10) to apply the configuration, then run &lt;code&gt;talosctl bootstrap&lt;/code&gt;.&lt;br&gt;
You can then run &lt;code&gt;talosctl health&lt;/code&gt; or &lt;code&gt;talosctl dashboard&lt;/code&gt; to watch the cluster come alive in real-time.&lt;/p&gt;
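&lt;p&gt;Put together, the bootstrap sequence is just (the health timeout is my own choice, not required):&lt;/p&gt;

```shell
sleep 10                            # give the node time to apply the config
talosctl bootstrap                  # initialize etcd on this single node
talosctl health --wait-timeout 10m  # block until the cluster reports healthy
```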

&lt;p&gt;At this point, your Kubernetes cluster is alive and you just need to generate the kubeconfig to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl kubeconfig &lt;span class="nv"&gt;$CONFIG_DIR&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG_DIR&lt;/span&gt;/kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now be able to run commands like &lt;code&gt;kubectl get pods --all-namespaces&lt;/code&gt; or &lt;code&gt;k9s&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Install MetalLB
&lt;/h2&gt;

&lt;p&gt;Have you ever created a &lt;code&gt;LoadBalancer&lt;/code&gt; service in AKS, EKS, etc., and had the cloud provision a load balancer that routes traffic to your cluster?&lt;br&gt;
MetalLB gives you that same functionality, but for free on your bare VM.&lt;br&gt;
You can install MetalLB as &lt;a href="https://metallb.io/installation/#installation-by-manifest" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt; prescribes with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5m &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--all&lt;/span&gt; deployments &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we need to configure an &lt;code&gt;IPAddressPool&lt;/code&gt; so MetalLB is aware of the IP address we want it to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPAddressPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lab-pool&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.1.100/32&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your VM's public IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;192.168.1.100&lt;/code&gt; with the public IP address of your VM, save the YAML to &lt;code&gt;metallb-ipaddresspool.yaml&lt;/code&gt; and then run &lt;code&gt;kubectl apply -f metallb-ipaddresspool.yaml&lt;/code&gt;.&lt;br&gt;
Congratulations, you now have MetalLB installed and ready to work with your Gateway Controller.&lt;/p&gt;
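&lt;p&gt;Note that MetalLB's docs also describe an &lt;code&gt;L2Advertisement&lt;/code&gt; resource that tells MetalLB to announce addresses from a pool. Since the pool here is the VM's own IP, you may not strictly need it, but it doesn't hurt to apply one alongside the pool:&lt;/p&gt;

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - lab-pool
```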
&lt;h2&gt;
  
  
  Step 4: Install Envoy Gateway Controller
&lt;/h2&gt;

&lt;p&gt;Finally, you will probably want to use the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/" rel="noopener noreferrer"&gt;Kubernetes Gateway API&lt;/a&gt;&lt;br&gt;
to route traffic through the public IP address to services running in your cluster.&lt;br&gt;
I found that Envoy Gateway was the easiest solution to achieve this.&lt;br&gt;
The &lt;a href="https://gateway.envoyproxy.io/docs/tasks/quickstart/" rel="noopener noreferrer"&gt;quick start documentation&lt;/a&gt; worked flawlessly, but in summary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;eg oci://docker.io/envoyproxy/gateway-helm &lt;span class="nt"&gt;--version&lt;/span&gt; v1.5.1 &lt;span class="nt"&gt;-n&lt;/span&gt; envoy-gateway-system &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5m &lt;span class="nt"&gt;-n&lt;/span&gt; envoy-gateway-system deployment/envoy-gateway &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can test that everything works as it should with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/envoyproxy/gateway/releases/download/v1.5.1/quickstart.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; default
curl &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s2"&gt;"Host: www.example.com"&lt;/span&gt; http://&lt;span class="nv"&gt;$VM_IP&lt;/span&gt;/get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! A simple single-node Kubernetes cluster in less than the time it took to read this article.&lt;br&gt;
You can create as many as you like, tear them down, and create more when you need them.&lt;br&gt;
I ended up automating all of this in a PowerShell script, and the time to run is 3-4 minutes.&lt;br&gt;
This script likely won't work right out of the box for you, but it should be fairly easy to adapt it if you like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Getting VM parameters..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$sshKey&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ssh-key&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ConvertFrom-Json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="bp"&gt;$_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'dummy'&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$imageId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ConvertFrom-Json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="bp"&gt;$_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Talos'&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$timestamp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-Date&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Format&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"yyyyMMdd-HHmmss"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"kcert-test-&lt;/span&gt;&lt;span class="nv"&gt;$timestamp&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Out-Null&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Creating droplet..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$vmJson&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;droplet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;sfo3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$imageId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;s-2vcpu-4gb&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--enable-private-networking&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--ssh-keys&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$sshKey&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--wait&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$vm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmJson&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ConvertFrom-Json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;networks&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;v4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="bp"&gt;$_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-eq&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'public'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Select-Object&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-ExpandProperty&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ip_address&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VM created with IP address: &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="nx"&gt;/ip.txt&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Initializing Talos cluster at &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;gen&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://&lt;/span&gt;&lt;span class="nv"&gt;${vmIp}&lt;/span&gt;&lt;span class="s2"&gt;:6443"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--additional-sans&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Resolve-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="s2"&gt;/talosconfig"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Path&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;endpoint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$yaml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-Content&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;${vmName}&lt;/span&gt;&lt;span class="s2"&gt;/controlplane.yaml"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$yaml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$yaml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'# allowSchedulingOnControlPlanes:'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'allowSchedulingOnControlPlanes:'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Set-Content&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;${vmName}&lt;/span&gt;&lt;span class="s2"&gt;/controlplane.yaml"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Value&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;apply-config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--insecure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--nodes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;${vmName}&lt;/span&gt;&lt;span class="s2"&gt;/controlplane.yaml"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Sleeping for 10 seconds to allow the node to initialize..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Start-Sleep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Seconds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bootstrap&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Sleeping for 10 seconds to allow the cluster to stabilize..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Start-Sleep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Seconds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;10&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;health&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;talosctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;kubeconfig&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Resolve-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="s2"&gt;/kubeconfig"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Path&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Setting up MetalLB"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;kubectl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;kubectl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wait&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;available&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;deployments&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;metallb-system&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;@"
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - &lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="sh"&gt;/32
"@&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;kubectl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Setting up Envoy"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;helm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;eg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;oci://docker.io/envoyproxy/gateway-helm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;v1.5.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;envoy-gateway-system&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--create-namespace&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;kubectl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wait&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;envoy-gateway-system&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;deployment/envoy-gateway&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Available&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Here are your environment variables:"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$envVars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@(&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;`$&lt;/span&gt;&lt;span class="s2"&gt;env:KUBECONFIG = '&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;`$&lt;/span&gt;&lt;span class="s2"&gt;env:TALOSCONFIG = '&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;`$&lt;/span&gt;&lt;span class="s2"&gt;env:VMIP = '&lt;/span&gt;&lt;span class="nv"&gt;$vmIp&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$envVars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ForEach-Object&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;$_&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$envVars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Out-File&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-FilePath&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vmName&lt;/span&gt;&lt;span class="s2"&gt;/env.txt"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Encoding&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;utf8&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>talos</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>NATS: You Need it Now!</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Sun, 18 Sep 2022 15:58:09 +0000</pubDate>
      <link>https://dev.to/nabsul/nats-you-need-it-now-15lf</link>
      <guid>https://dev.to/nabsul/nats-you-need-it-now-15lf</guid>
      <description>&lt;p&gt;Original Post: &lt;a href="https://nabeel.dev/2022/09/17/nats"&gt;https://nabeel.dev/2022/09/17/nats&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are running Kubernetes, or really any kind of microservice architecture, you will eventually run into challenges with communication and synchronization between your instances. To solve this, I recommend deploying an instance of &lt;a href="https://nats.io"&gt;NATS&lt;/a&gt; as part of your initial infrastructure setup.&lt;br&gt;
NATS is great because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's tiny, lightweight, and easy to run&lt;/li&gt;
&lt;li&gt;A single instance will likely be sufficient for the needs of your whole cluster&lt;/li&gt;
&lt;li&gt;It will be there, ready and waiting, when you need it&lt;/li&gt;
&lt;li&gt;It solves the problem of one-to-many communication&lt;/li&gt;
&lt;li&gt;It can be used to build extensible event-driven systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is NATS?
&lt;/h2&gt;

&lt;p&gt;NATS is a lightweight, easy-to-deploy service that provides pub-sub functionality with very little fuss. It is a tiny application, written in Go, that listens on a port for connections from clients.&lt;/p&gt;

&lt;p&gt;The NATS executable is a few MB in size and runs out of the box with sensible defaults. It has no dependencies or required configuration parameters. As a Kubernetes service, it can be deployed &lt;a href="https://gist.github.com/nabsul/11eccd4536cdf1a872293f0bc0dd868e"&gt;very easily with this YAML&lt;/a&gt;. With that simple deployment, your microservices can use NATS by connecting to &lt;code&gt;nats://nats:4222&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Clients can send and receive messages to each other by publishing and subscribing to subjects. For example, two clients could be subscribed to subject &lt;code&gt;x&lt;/code&gt;. If any client publishes a message to subject &lt;code&gt;x&lt;/code&gt;, all subscribed clients will receive that message.&lt;/p&gt;
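&lt;p&gt;As a quick sketch of what this looks like from a client (assuming the &lt;code&gt;nats&lt;/code&gt; npm package's v2 API and the &lt;code&gt;nats://nats:4222&lt;/code&gt; address from above; it needs a running NATS server to actually connect, so treat it as illustrative):&lt;/p&gt;

```typescript
// Sketch: publish and subscribe to subject "x" with the nats npm
// package (v2 API). Requires a reachable NATS server.
import { connect, StringCodec } from "nats";

async function main() {
  const sc = StringCodec();
  const nc = await connect({ servers: "nats://nats:4222" });

  // Any number of clients can subscribe to the same subject.
  const sub = nc.subscribe("x");
  (async () => {
    for await (const m of sub) {
      console.log("received:", sc.decode(m.data));
    }
  })();

  // Every client subscribed to "x" receives this message.
  nc.publish("x", sc.encode("hello"));
  await nc.drain(); // flush pending messages and close
}

main();
```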

&lt;h2&gt;
  
  
  NATS Use-Cases
&lt;/h2&gt;

&lt;p&gt;NATS can replace and streamline many service-to-service communication scenarios. The following sections describe a few of them:&lt;/p&gt;

&lt;h3&gt;
  
  
  Broadcast to All Instances of a Distributed Service
&lt;/h3&gt;

&lt;p&gt;This was my first use for NATS. I had a deployment with multiple instances running in the cluster, and whenever a configuration change was made, I needed all instances to reload their configuration from a database.&lt;/p&gt;

&lt;p&gt;To solve this problem, every instance of my service subscribes to &lt;code&gt;myapp.refresh&lt;/code&gt;. When the configuration changes, I publish a message to that subject, and every instance responds by reloading its configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ping-Pong
&lt;/h3&gt;

&lt;p&gt;Want to get some information or a status report from every running instance of your service? The flow looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All instances listen to &lt;code&gt;myapp.ping&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The requesting instance starts listening on a unique temporary subject, e.g. &lt;code&gt;myapp.pong.[UNIQUE_GUID]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;It then publishes a message such as &lt;code&gt;replyto=myapp.pong.[UNIQUE_GUID]&lt;/code&gt; to the &lt;code&gt;myapp.ping&lt;/code&gt; subject&lt;/li&gt;
&lt;li&gt;Every instance listening to &lt;code&gt;myapp.ping&lt;/code&gt; then responds on the &lt;code&gt;myapp.pong.[UNIQUE_GUID]&lt;/code&gt; subject with the relevant information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can listen to the &lt;code&gt;myapp.pong.[UNIQUE_GUID]&lt;/code&gt; subject for a certain amount of time and then unsubscribe from it. It should only take a few milliseconds to receive messages from all listening instances.&lt;/p&gt;
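&lt;p&gt;The flow can be sketched with a tiny in-memory stand-in for the broker (illustrative only; a real service would go through an actual NATS client, and the instance names and inbox suffix below are made up):&lt;/p&gt;

```typescript
// Tiny in-memory stand-in for a pub-sub broker, used only to illustrate
// the ping-pong flow. A real deployment would use a NATS client instead.
type Handler = (msg: string) => void;

class Bus {
  private subs = new Map();

  subscribe(subject: string, handler: Handler): void {
    const list = this.subs.get(subject) ?? [];
    list.push(handler);
    this.subs.set(subject, list);
  }

  publish(subject: string, msg: string): void {
    for (const h of this.subs.get(subject) ?? []) h(msg);
  }
}

const bus = new Bus();
const replies: string[] = [];

// Every instance listens on myapp.ping and replies to whatever
// subject the requester names in the message.
for (const id of ["instance-1", "instance-2", "instance-3"]) {
  bus.subscribe("myapp.ping", (replyTo) => bus.publish(replyTo, id + ": ok"));
}

// The requester listens on a unique temporary subject, then pings.
const inbox = "myapp.pong.1b9e"; // stands in for a freshly generated GUID
bus.subscribe(inbox, (msg) => replies.push(msg));
bus.publish("myapp.ping", inbox);

console.log(replies); // one reply per instance
```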

&lt;h3&gt;
  
  
  Event-Driven Systems
&lt;/h3&gt;

&lt;p&gt;The beauty of NATS is that multiple clients can subscribe to the same subject without any fancy configuration or setup. This can be very handy when building a future-proof system that can easily be extended. Take the following scenario for example:&lt;/p&gt;

&lt;p&gt;Imagine you are running a microservice-based e-commerce system.&lt;br&gt;
One microservice handles payments and another one handles the front-end UI that customers see. The front-end might send a message requesting that a payment be processed (using NATS or a REST API), and then it might listen on a predetermined subject (&lt;code&gt;payments.updates.[TXN_ID]&lt;/code&gt; for example) for a notification that the payment has completed.&lt;/p&gt;

&lt;p&gt;Imagine now that you want to add a quota system that automatically updates inventory numbers whenever a purchase is made. You might be tempted to add that logic to either your front-end or your payment microservice, but this functionality doesn't logically fit into either of them. With NATS, you could instead create a new microservice that subscribes to &lt;code&gt;payments.updates.*&lt;/code&gt; to receive notifications of all payment updates and performs the desired action, all without modifying any of the existing services.&lt;/p&gt;
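&lt;p&gt;Subjects are dot-separated tokens, and subscription patterns support wildcards: &lt;code&gt;*&lt;/code&gt; matches exactly one token and &lt;code&gt;&gt;&lt;/code&gt; matches one or more trailing tokens. Here is a small sketch of those matching rules (my own illustration of the documented behavior, not actual NATS server code):&lt;/p&gt;

```typescript
// Sketch of NATS-style subject matching rules: "*" matches exactly one
// token, ">" matches one or more trailing tokens. Illustrative only,
// not the NATS server's implementation.
function matchSubject(pattern: string, subject: string): boolean {
  const p = pattern.split(".");
  const s = subject.split(".");
  for (const [i, tok] of p.entries()) {
    if (tok === ">") return s.length > i; // ">" swallows the rest
    if (i >= s.length) return false;      // subject ran out of tokens
    if (tok !== "*" && tok !== s[i]) return false;
  }
  return p.length === s.length;
}

console.log(matchSubject("payments.updates.*", "payments.updates.txn123")); // true
console.log(matchSubject("payments.updates.*", "payments.refunds.txn123")); // false
```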

&lt;h2&gt;
  
  
  Performance Concerns
&lt;/h2&gt;

&lt;p&gt;A simple instance of NATS should be fine for most workloads. Some possible concerns might be:&lt;/p&gt;

&lt;h3&gt;
  
  
  Speed
&lt;/h3&gt;

&lt;p&gt;Although using NATS involves an extra network hop compared to direct communication, remember that this all happens over an already-open TCP connection (no handshake overhead),&lt;br&gt;
and that it will most likely be communication between machines that are physically quite close to each other. The round-trip times I typically observe in my Digital Ocean cluster are under 70ms (roughly 35ms one-way).&lt;/p&gt;

&lt;h3&gt;
  
  
  Volume
&lt;/h3&gt;

&lt;p&gt;You might be worried that a single instance won't be able to handle the number of services and messages that you need to send.&lt;br&gt;
But remember, a single instance of NATS should easily handle thousands of simultaneous connections. Furthermore, NATS is fairly stateless and should not be demanding in terms of CPU or memory. It simply receives a message, forwards it to all subscribers, and then forgets about it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliability
&lt;/h3&gt;

&lt;p&gt;What happens if NATS goes offline? What about network issues? These are valid concerns, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Statistically speaking, the smaller your cluster, the rarer outages are&lt;/li&gt;
&lt;li&gt;For non-mission-critical applications a small outage is likely not going to cause major issues&lt;/li&gt;
&lt;li&gt;NATS doesn't really make this problem worse, it exists either way&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your reliability needs really aren't met by a simple NATS instance, there are solutions: ACKs and retries, periodic refreshes from a persisted source of truth, or running NATS in a &lt;a href="https://docs.nats.io/running-a-nats-service/nats-kubernetes#nats-ha-setup"&gt;high-availability configuration&lt;/a&gt;&lt;br&gt;
(also see &lt;a href="https://docs.nats.io/nats-concepts/jetstream"&gt;Jetstream documentation&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;NATS is an easy-to-use service that provides extremely useful functionality for today's distributed microservices. It strikes the right balance of simplicity vs. performance for many applications, and it can grow as your needs do. I also highly recommend checking out this &lt;a href="https://changelog.com/gotime/130"&gt;Changelog podcast episode about NATS&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nats</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Replacing YAML with TypeScript</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Sat, 30 Apr 2022 17:39:07 +0000</pubDate>
      <link>https://dev.to/nabsul/replacing-yaml-with-typescript-pg6</link>
      <guid>https://dev.to/nabsul/replacing-yaml-with-typescript-pg6</guid>
      <description>&lt;p&gt;Original Post: &lt;a href="https://nabeel.dev/2022/04/30/yaml-alternative"&gt;https://nabeel.dev/2022/04/30/yaml-alternative&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Are you tired of copy-pasting and editing a ton of YAML files? In this post I suggest using TypeScript to define your services, and either Handlebars templates or the Kubernetes NodeJS client to more easily manage your deployments. You can find sample code that demonstrates this at &lt;a href="https://github.com/nabsul/k8s-yaml-alternative"&gt;https://github.com/nabsul/k8s-yaml-alternative&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I've been using YAML files checked into a Git repo to manage my Kubernetes deployment for many years now. But as the number of deployments grows, this approach starts to run into problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A lot of the YAML is boilerplate that is repeated over and over again.&lt;/li&gt;
&lt;li&gt;Global changes are tedious, requiring editing all the files in your repo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've recently started experimenting with managing my deployments in a different way. In this post I will describe the steps I took to reach this design. The example I will be using is a cluster running three applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An NGINX service with two replicas exposing port 80&lt;/li&gt;
&lt;li&gt;A single instance of NATS exposing ports 4222 and 8222&lt;/li&gt;
&lt;li&gt;A custom application that doesn't have ports but requires some environment variables &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Start with your Own Definition
&lt;/h2&gt;

&lt;p&gt;I usually start building out my deployments by first deciding what language/system I will use. For my cluster this has been the YAML files, but it could also have been something like Terraform or Helm charts. With that approach, my application requirements take a back seat and the focus becomes: "What does this system require to work?". Take for example a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment"&gt;Kubernetes deployment&lt;/a&gt; specified in YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s-ts-test&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service1&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;port80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What parts of the above specification do I really care about or control? The number of replicas, the image tag, and the application port. That's it: three lines out of twenty. Everything else is boilerplate.&lt;/p&gt;

&lt;p&gt;Instead of trying to fit our application into what Kubernetes wants, let's start with a specification that only includes the details we care about. Based on the initial list of services that I want to run, we can define everything as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;services&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
    &lt;span class="na"&gt;service1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nginx:latest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;service2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nats:latest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4222&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8222&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;service3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;app:latest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;VAR1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Some Value&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;VAR2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;another value&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting with the specification like this keeps me focused on the details that I care about. You can clearly see how many applications I plan to deploy, and how those applications differ from each other. We can worry about YAML/Terraform/Helm details later. Moreover, I can change my mind about the YAML/Terraform/Helm question and still keep this definition as the starting point for everything I do later.&lt;/p&gt;
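&lt;p&gt;Since this definition lives in TypeScript, the shape of the specification can itself be typed, so a typo in a field name fails at compile time. A sketch (the interface name is my own invention, not from the repo):&lt;/p&gt;

```typescript
// Sketch: typing the custom service definition above. The interface
// name and optional fields are assumptions for illustration.
interface ServiceSpec {
  image: string;
  replicas: number;
  ports?: number[];
  env?: { [name: string]: string };
}

const services: { [name: string]: ServiceSpec } = {
  service1: { image: "nginx:latest", replicas: 2, ports: [80] },
  service2: { image: "nats:latest", replicas: 1, ports: [4222, 8222] },
  service3: {
    image: "app:latest",
    replicas: 1,
    env: { VAR1: "Some Value", VAR2: "another value" },
  },
};

console.log(Object.keys(services).length); // 3
```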

&lt;h2&gt;
  
  
  From Custom Definition to YAML
&lt;/h2&gt;

&lt;p&gt;Converting this custom specification into regular YAML can easily be done using &lt;a href="https://handlebarsjs.com/"&gt;Handlebars&lt;/a&gt; templates. You can see all of the &lt;a href="https://github.com/nabsul/k8s-yaml-alternative/tree/main/templates"&gt;templates here&lt;/a&gt;, but here is a small sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    spec:
      containers:
      - name: {{name}}
        image: {{image}}
        {{#if ports}}
        ports:
        {{#each ports}}
        - containerPort: {{this}}
          name: port{{this}}
        {{/each}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Handlebars library is used in &lt;code&gt;/generate.ts&lt;/code&gt; to create all of the YAML in one go. With this approach, notice that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All of my deployments are guaranteed to look the same.&lt;/li&gt;
&lt;li&gt;If I need to make a change to all of my services (API version or namespace change for example) I can easily do it with one template change.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can generate &lt;code&gt;/generated.yaml&lt;/code&gt; by running &lt;code&gt;npm run generate&lt;/code&gt;. You can then deploy all of the services in one go with &lt;code&gt;kubectl apply -f generated.yaml&lt;/code&gt;.&lt;/p&gt;
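&lt;p&gt;At its core, the generation step just renders each service through a template and concatenates the results into one multi-document YAML stream. A minimal stand-in for what &lt;code&gt;/generate.ts&lt;/code&gt; does, using a naive &lt;code&gt;{{key}}&lt;/code&gt; replacer instead of the real Handlebars library (assumption: the actual script uses Handlebars proper, with its full &lt;code&gt;#if&lt;/code&gt;/&lt;code&gt;#each&lt;/code&gt; support):&lt;/p&gt;

```typescript
// Minimal stand-in for the generation step: render a template per
// service, then join the results with YAML document separators.
// A naive {{key}} replacer substitutes for the real Handlebars library.
function render(template: string, ctx: { [key: string]: string }): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_m: string, key: string) => ctx[key] ?? "");
}

const template = [
  "    spec:",
  "      containers:",
  "      - name: {{name}}",
  "        image: {{image}}",
].join("\n");

const services = {
  service1: { image: "nginx:latest" },
  service2: { image: "nats:latest" },
};

const yaml = Object.entries(services)
  .map(([name, s]) => render(template, { name, image: s.image }))
  .join("\n---\n");

console.log(yaml);
```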

&lt;h2&gt;
  
  
  Skipping YAML Altogether
&lt;/h2&gt;

&lt;p&gt;In this repo I also demonstrate how to eliminate the need for YAML completely. Instead of generating YAML and running &lt;code&gt;kubectl&lt;/code&gt;, I use the &lt;code&gt;@kubernetes/client-node&lt;/code&gt; npm package to deploy directly to Kubernetes. This requires a little more work than YAML generation, but it has several advantages. The biggest is that the npm package includes TypeScript definitions for &lt;code&gt;V1Secret&lt;/code&gt;, &lt;code&gt;V1Service&lt;/code&gt;, and so on, which makes your deployment strongly typed and reduces the possibility of errors compared to hand-authoring YAML. You can follow the &lt;code&gt;/deploy.ts&lt;/code&gt; script to see how this all works, but at the heart of it is a simple loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;services&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Deploying &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt; started`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;makeSecret&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;makeDeployment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;makeService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Deploying &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt; complete\n`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
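&lt;p&gt;To give a flavor of what a helper like &lt;code&gt;makeSecret&lt;/code&gt; does, here is a minimal sketch. The &lt;code&gt;V1Secret&lt;/code&gt; interface below is a stripped-down stand-in for the real type that &lt;code&gt;@kubernetes/client-node&lt;/code&gt; exports, and the helper and field names are my own illustration, not the actual &lt;code&gt;deploy.ts&lt;/code&gt; code:&lt;/p&gt;

```typescript
// Stripped-down stand-in for the V1Secret type. In a real script it comes
// from the @kubernetes/client-node package instead of being declared here.
interface V1Secret {
  apiVersion: string;
  kind: string;
  metadata: { name: string; namespace: string };
  data: { [key: string]: string }; // values must be base64-encoded
}

// Build a typed Secret object instead of templating YAML text.
function buildSecret(namespace: string, name: string, values: { [key: string]: string }): V1Secret {
  const data: { [key: string]: string } = {};
  for (const [k, v] of Object.entries(values)) {
    data[k] = Buffer.from(v).toString("base64");
  }
  return { apiVersion: "v1", kind: "Secret", metadata: { name, namespace }, data };
}

const secret = buildSecret("blog", "db-creds", { DB_PASSWORD: "hunter2" });
console.log(secret.data.DB_PASSWORD); // base64 of "hunter2"

// With the real client, deploying is roughly (exact call shape varies by
// client version, so treat these commented lines as pseudocode):
// const kc = new KubeConfig(); kc.loadFromDefault();
// await kc.makeApiClient(CoreV1Api).createNamespacedSecret(namespace, secret);
```

&lt;p&gt;The point is that a misspelled field name fails at compile time instead of at &lt;code&gt;kubectl apply&lt;/code&gt; time.&lt;/p&gt;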



&lt;h2&gt;
  
  
  What Next?
&lt;/h2&gt;

&lt;p&gt;I'm only starting to rethink how I want to manage and deploy my Kubernetes clusters. I think this is a good start, and I hope to share more learnings as I continue to experiment. As next steps I'm going to be looking into removing more of the "click-ops" that I do when creating my cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logging into the DigitalOcean dashboard to create a new cluster&lt;/li&gt;
&lt;li&gt;Manually setting up the load balancer&lt;/li&gt;
&lt;li&gt;Finding the load balancer IP address and configuring DNS to point to the new cluster&lt;/li&gt;
&lt;li&gt;Configuring all the infrastructure (docker repo, AWS) and application secrets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As for challenges I foresee: In this example, the applications are defined generically such that you could use them even if you wanted to skip Kubernetes altogether. However, I have two Kubernetes-specific applications that I run in the cluster: &lt;a href="https://github.com/nabsul/kcert"&gt;KCert&lt;/a&gt; and &lt;a href="https://github.com/nabsul/k8s-ecr-login-renew"&gt;k8s-ecr-login-renew&lt;/a&gt;. These applications require special Kubernetes configurations around service accounts and permissions. I'm not yet sure how to cleanly encode those.&lt;/p&gt;

&lt;p&gt;If you like this idea, give it a try in your own setup. If you're comfortable writing code in NodeJS/TypeScript, try out the Kubernetes client approach. Or if you're just looking to simplify your templates, give Handlebars templates a try.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>typescript</category>
      <category>yaml</category>
    </item>
    <item>
      <title>My "Artisinal" Ingress</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Tue, 19 Apr 2022 00:07:42 +0000</pubDate>
      <link>https://dev.to/nabsul/my-artisinal-ingress-28me</link>
      <guid>https://dev.to/nabsul/my-artisinal-ingress-28me</guid>
      <description>&lt;p&gt;Original Post: &lt;a href="https://nabeel.dev/2022/04/17/myingress-intro"&gt;https://nabeel.dev/2022/04/17/myingress-intro&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I built a replacement for &lt;code&gt;nginx&lt;/code&gt; and &lt;code&gt;cert-manager&lt;/code&gt; in my Kubernetes cluster. It leverages &lt;a href="https://nats.io"&gt;NATS&lt;/a&gt; and &lt;a href="https://www.cockroachlabs.com/"&gt;CockroachDB&lt;/a&gt;, and is written in .NET Core C#.&lt;/p&gt;

&lt;p&gt;It's simple, easy to set up, and easy to understand. It features a web interface for management and configuration. It's also horizontally scalable out of the box and aims to follow all the best practices for high availability and observability.&lt;/p&gt;

&lt;p&gt;Finally, it's pre-alpha at the moment and I'm not ready to open-source the project. However, I'm looking for like-minded people who might be interested in turning this into something more broadly useful.&lt;/p&gt;

&lt;h1&gt;
  
  
  Background
&lt;/h1&gt;

&lt;p&gt;For the past year or so, I've been working on replacing &lt;code&gt;cert-manager&lt;/code&gt; in my Kubernetes cluster. It started with a &lt;code&gt;cert-manager&lt;/code&gt; outage caused by a DNS bug, which led me to &lt;a href="https://nabeel.blog/2020/10/23/k8s-letsencrypt-manual"&gt;learn how to manually manage certificates&lt;/a&gt;. I then automated all of that with &lt;a href="https://nabeel.blog/2021/02/06/kcert"&gt;KCert&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When I got KCert done, I realized there was another part of my setup that could be improved: the NGINX Ingress controller. So I decided to continue the effort and replace that as well.&lt;/p&gt;

&lt;p&gt;And that is how I created My "Artisinal" Ingress. I call it artisinal because I built it to my own personal taste. So far I'm pleased with the result. I'm using it in my personal Kubernetes cluster, which is serving the page you are reading right now.&lt;/p&gt;

&lt;h1&gt;
  
  
  Design Decisions
&lt;/h1&gt;

&lt;p&gt;When I decided to build a replacement for NGINX Ingress Controller, I came up with several goals:&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Setup
&lt;/h2&gt;

&lt;p&gt;I'm not a fan of &lt;a href="https://helm.sh"&gt;Helm&lt;/a&gt; or &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;CRDs&lt;/a&gt; (more accurately: I love the idea of CRDs, but I think they're often used unnecessarily). They are frequently used to create overly complex systems, which makes it extremely difficult to debug when things go wrong. Sometimes your only option is to delete the whole cluster and start again.&lt;/p&gt;

&lt;p&gt;For example, take a look at the &lt;a href="https://cert-manager.io/v0.14-docs/installation/kubernetes/"&gt;cert-manager&lt;/a&gt; and &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start/"&gt;NGINX Ingress Controller&lt;/a&gt; installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The installation of &lt;code&gt;cert-manager&lt;/code&gt; requires 7000+ lines of yaml&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cert-manager&lt;/code&gt; runs three pods in the cluster (Is cert management really that complex?)&lt;/li&gt;
&lt;li&gt;NGINX Ingress controller can be installed via Helm or &lt;a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml"&gt;almost 700 lines of yaml&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is all this necessary? What if something goes wrong? Will you know how to debug and fix any of this?&lt;/p&gt;

&lt;h2&gt;
  
  
  No State in Kubernetes
&lt;/h2&gt;

&lt;p&gt;New versions of Kubernetes are released quite frequently. The easiest way for me to stay up to date is to create a brand-new cluster, move all my services there, and destroy the old one.&lt;/p&gt;

&lt;p&gt;This requires copying everything over from the old cluster to the new one. While my deployment and service definitions are checked into a git repository, secrets and certificates are not. Those objects need to be manually copied over.&lt;/p&gt;

&lt;p&gt;For this reason I decided to eliminate the need to copy certificates and ingress configurations. I chose to store the state of my ingress controller in a central CockroachDB store. This really could have been anything: S3, Azure Key Vault, etc. But the main idea is to store all of this information &lt;em&gt;outside&lt;/em&gt; of the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;This approach has two advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I don't have to copy Kubernetes certificates from one cluster to another&lt;/li&gt;
&lt;li&gt;I can deploy multiple Kubernetes clusters that rely on the same source of truth&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Must be Easy to Scale Horizontally
&lt;/h2&gt;

&lt;p&gt;My first ingress controller in Kubernetes was Traefik, and I was happy with it for a long time. Then I discovered that it doesn't support multiple instances (what they called high availability): certificates were stored on the local disk and couldn't be shared across multiple instances of the service. The paid version of Traefik did not have this limitation, and that did not sit well with me. I even &lt;a href="https://nabeel.blog/2019/11/traefik"&gt;tried to fix that myself&lt;/a&gt;, and eventually &lt;a href="https://nabeel.blog/2020/03/07/traefik-stop"&gt;gave up&lt;/a&gt; and moved to NGINX.&lt;/p&gt;

&lt;p&gt;For this reason, I set out from the start to design my ingress controller to scale seamlessly. Using CockroachDB as my data store is the first part of solving this, but there is also the problem of keeping all nodes synchronized when things change. I decided to leverage &lt;a href="https://nats.io"&gt;NATS&lt;/a&gt; for this purpose, which made it easy for all instances of the service to stay synchronized and exchange messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Built in Certificate Management
&lt;/h2&gt;

&lt;p&gt;I thought about using &lt;a href="https://nabeel.blog/2021/03/21/kcert-release"&gt;KCert&lt;/a&gt; together with this new system, but decided against it. It feels like they should be two separate systems, but with ACME HTTP challenges in particular, it becomes difficult to cleanly separate the two.&lt;/p&gt;

&lt;p&gt;The other issue is that KCert is specifically geared towards working with Kubernetes ingresses and certificates. However, I had decided that I want to store that information outside of Kubernetes for this project. I therefore couldn't use KCert without decoupling it completely from Kubernetes and making it much more complex.&lt;/p&gt;

&lt;p&gt;I therefore decided to build certificate management directly into my new controller. This wasn't too much work, since I could reuse the code I wrote in KCert.&lt;/p&gt;

&lt;h2&gt;
  
  
  Good Observability Practices
&lt;/h2&gt;

&lt;p&gt;I wanted to make sure that the system is easy to debug, monitor, and maintain. For the monitoring piece, I tried Azure's Application Insights, Datadog, and Honeycomb. All of these options are great, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I'm sure there are other great options out there.&lt;/li&gt;
&lt;li&gt;Pulling in all those client libraries doesn't feel right.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm therefore leaning towards a more generic approach: I will use &lt;a href="https://opentelemetry.io/"&gt;Open Telemetry&lt;/a&gt;, which is the standard the industry is converging to. Most monitoring systems support Open Telemetry, either natively or through side-car shims.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Considerations
&lt;/h2&gt;

&lt;p&gt;If I were optimizing for broad community adoption, I would have written this in Go or Rust. However, I really enjoy writing in C# and I can practice Go and Rust at work. For this reason I decided to go with .NET Core C# and used &lt;a href="https://microsoft.github.io/reverse-proxy/"&gt;YARP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, I've been looking for an excuse to learn and use both CockroachDB and NATS. I use CockroachDB's cloud service as my data store and NATS to keep my load balancer instances synchronized.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where's the Code?
&lt;/h1&gt;

&lt;p&gt;The project is currently private and I'm probably not going to open-source it any time soon. I expect that I will open-source it at some point, but for now I want the freedom to make drastic design changes without worrying about affecting anyone using the code.&lt;/p&gt;

&lt;p&gt;I would however love to collaborate on this idea if there is interest. If you are interested in seeing the code and helping turn it into a usable open-source project, please reach out! The easiest ways to contact me are &lt;a href="https://www.linkedin.com/in/nabsul/"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://twitter.com/nabsul"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>nginx</category>
    </item>
    <item>
      <title>KCert - V1 Release</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Thu, 03 Mar 2022 00:39:27 +0000</pubDate>
      <link>https://dev.to/nabsul/kcert-v1-release-a91</link>
      <guid>https://dev.to/nabsul/kcert-v1-release-a91</guid>
      <description>&lt;p&gt;Original post: &lt;a href="https://nabeel.blog/2022/02/27/kcert-v1"&gt;https://nabeel.blog/2022/02/27/kcert-v1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's been about a year since I introduced my certificate manager: &lt;a href="https://nabeel.blog/2021/03/21/kcert-release"&gt;KCert&lt;/a&gt;. For my specific needs KCert was a great improvement over the ubiquitous &lt;a href="https://cert-manager.io/docs/"&gt;cert-manager&lt;/a&gt;. You can read about the differences/advantages in the project's &lt;a href="https://github.com/nabsul/kcert/blob/main/README.md"&gt;README&lt;/a&gt;. However, I did not take the time to review the design to make it more generally useful.&lt;/p&gt;

&lt;p&gt;This month I finally got around to revisiting KCert. I have redesigned several aspects of the tool, and I think what I have now is much more refined, simplified, and easy to use.&lt;/p&gt;

&lt;p&gt;The changes I've made are:&lt;/p&gt;

&lt;h1&gt;
  
  
  Get Started Fast
&lt;/h1&gt;

&lt;p&gt;A basic installation of KCert is now super fast. You should be able to get started in a matter of minutes. Just edit three lines of the provided &lt;code&gt;deploy.yml&lt;/code&gt; file and use &lt;code&gt;kubectl apply&lt;/code&gt; to deploy it to your cluster. You can now start creating ingresses and KCert will issue the needed certificates. It's that simple!&lt;/p&gt;

&lt;h1&gt;
  
  
  Less Reliance on a UI
&lt;/h1&gt;

&lt;p&gt;Before the refactor, setting up KCert required entering your initial settings via the web UI. I came to the conclusion that this is not consistent with how Kubernetes is usually managed. Everything is now configured in the more standard "config as code" approach.&lt;/p&gt;

&lt;p&gt;The web UI still exists, but it is now mostly a read-only view of the tool's status. The only actions you can take there are sending a test email and manually renewing certificates (and I'm considering removing that second feature).&lt;/p&gt;

&lt;h1&gt;
  
  
  Watching for Ingress Changes
&lt;/h1&gt;

&lt;p&gt;Before this release, certificates were created in the web UI with a browser-based form. The secret name and hosts are entered and KCert creates a Kubernetes secret based on that information. That is all gone now.&lt;/p&gt;

&lt;p&gt;Instead, KCert now watches for changes to ingresses marked with the &lt;code&gt;kcert.dev/ingress=managed&lt;/code&gt; label. Whenever a change occurs, KCert checks whether it needs to issue new certificates. KCert supports issuing multi-host (but not wildcard) certificates. If multiple TLS definitions exist across different Ingress definitions, a single certificate will be created for all referenced hosts.&lt;/p&gt;
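&lt;p&gt;The merging behavior can be sketched in a few lines. This is a hypothetical TypeScript illustration of the logic described above (KCert itself is written in C#, and these type shapes are simplified):&lt;/p&gt;

```typescript
// Simplified shapes for the relevant parts of a Kubernetes Ingress.
interface IngressTls { secretName: string; hosts: string[] }
interface Ingress { labels: { [key: string]: string }; tls: IngressTls[] }

// For each certificate secret, collect every host referenced by any managed
// Ingress. TLS entries from different Ingresses that point at the same
// secret are merged into one multi-host certificate.
function hostsPerSecret(ingresses: Ingress[]): Map<string, Set<string>> {
  const result = new Map<string, Set<string>>();
  for (const ing of ingresses) {
    if (ing.labels["kcert.dev/ingress"] !== "managed") continue; // label filter
    for (const tls of ing.tls) {
      const hosts = result.get(tls.secretName) ?? new Set<string>();
      tls.hosts.forEach((h) => hosts.add(h));
      result.set(tls.secretName, hosts);
    }
  }
  return result;
}

const certs = hostsPerSecret([
  { labels: { "kcert.dev/ingress": "managed" }, tls: [{ secretName: "site-tls", hosts: ["example.com"] }] },
  { labels: { "kcert.dev/ingress": "managed" }, tls: [{ secretName: "site-tls", hosts: ["www.example.com"] }] },
  { labels: {}, tls: [{ secretName: "other-tls", hosts: ["ignored.example.com"] }] },
]);
console.log(certs.get("site-tls")); // one cert covering both hosts
```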

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FLdWtUd7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1sqvpo66gl94j8921om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FLdWtUd7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1sqvpo66gl94j8921om.png" alt="KCert Main Page" width="880" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This project has been dormant for a while, but I'm really excited about this latest update. I think it's finally in a state that should make it useful to many people.&lt;/p&gt;

&lt;p&gt;So give it a try!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>KCert: Simple Let's Encrypt for Kubernetes</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Sun, 28 Mar 2021 21:40:14 +0000</pubDate>
      <link>https://dev.to/nabsul/kcert-simple-let-s-encrypt-for-kubernetes-23im</link>
      <guid>https://dev.to/nabsul/kcert-simple-let-s-encrypt-for-kubernetes-23im</guid>
      <description>&lt;p&gt;Original post &lt;a href="https://nabeel.blog/2021/03/21/kcert-release"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last month I &lt;a href="https://nabeel.blog/2021/02/06/kcert"&gt;wrote about a tool&lt;/a&gt; I've been using in my cluster to manage Let's Encrypt certificates. For me, building this tool has been a fantastic learning experience, and as the author I am extremely pleased with the result. I doubt I will ever go back to using cert-manager.&lt;/p&gt;

&lt;p&gt;Today I would like to announce that &lt;a href="https://github.com/nabsul/kcert"&gt;KCert&lt;/a&gt; is stable enough for broader usage. I would love for people to try it out, submit feedback, and help me turn this into a tool that is useful to more people than just myself.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is KCert?
&lt;/h1&gt;

&lt;p&gt;KCert is meant to replace &lt;a href="https://cert-manager.io/docs/"&gt;cert-manager&lt;/a&gt; in your Kubernetes cluster. It offers a simple alternative to the complex system that is cert-manager. Instead of thousands of lines of yaml, multiple services and custom resource types, KCert runs as a simple, single-instance service in your cluster. It will automatically renew certs before they expire and can send you email notifications of actions taken. It also has a web UI for manual configuration and management.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Reliable is the Tool?
&lt;/h1&gt;

&lt;p&gt;I've been running KCert in my own cluster since December. Over the past few months, I've been tweaking the UI and reorganizing the way the tool works to make it as simple as possible. Of course, as the author of the tool I am biased and my experience is not the same as a new user's. I'm very keen to see if other people will be as excited about this tool as I am.&lt;/p&gt;

&lt;h1&gt;
  
  
  What If I Have Trouble Using KCert?
&lt;/h1&gt;

&lt;p&gt;If you have any issues or questions using KCert, please submit your question or feature request in &lt;a href="https://github.com/nabsul/kcert/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Potential Caveats
&lt;/h1&gt;

&lt;p&gt;This tool works great for me in my cluster and environment. However, as a side project, I've only been working on it part time. This is not a perfectly polished, production-grade tool. There are no automated tests and I've only tested it in my own cluster with the following setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes 1.19.3 on DigitalOcean&lt;/li&gt;
&lt;li&gt;Standard nginx ingress controller&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I suspect that folks with different configurations will encounter bugs. If you take the time to describe the issues you face, I will happily try to resolve them. Specifically, as requests come in, I expect to expand KCert to support:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Other types of Ingress controllers (HAProxy, Traefik, Kong, etc.)&lt;/li&gt;
&lt;li&gt;Other architectures such as ARM for Raspberry Pi&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;Trying out KCert is extremely simple and low-risk. Simply apply the &lt;code&gt;deploy.yml&lt;/code&gt; file to your cluster. Unlike cert-manager, the file is less than 100 lines and should be easy to follow and understand.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;deploy.yml&lt;/code&gt; will create a few resources in a new namespace called &lt;code&gt;kcert&lt;/code&gt;, as well as a global ClusterRole and ClusterRoleBinding for accessing TLS certs. Deleting KCert is as simple as deleting those resources.&lt;/p&gt;

&lt;p&gt;Please let me know how it goes!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>letsencrypt</category>
    </item>
    <item>
      <title>DigitalOcean Hackathon Submission: Meal Match</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Mon, 11 Jan 2021 06:26:07 +0000</pubDate>
      <link>https://dev.to/nabsul/digital-hackathon-submission-meal-match-4d7o</link>
      <guid>https://dev.to/nabsul/digital-hackathon-submission-meal-match-4d7o</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;An App to help families and friends decide what to eat.&lt;/p&gt;

&lt;h3&gt;
  
  
  Category Submission:
&lt;/h3&gt;

&lt;p&gt;Random Roulette&lt;/p&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://mealmatch.mydemo.dev/"&gt;https://mealmatch.mydemo.dev/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wtESSS-y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wtESSS-y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot1.png" alt="Main Page Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BEyQEIiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BEyQEIiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot2.png" alt="Voting Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9TtNx5gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9TtNx5gl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/mealmatch/app/raw/main/screenshots/screenshot3.png" alt="Results Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;Day after day, families, couples, and any group of people wanting to eat together face the age-old question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What's for dinner tonight?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This app can help. Build your own list of options, or search and copy results from Yelp. Share a link and see what people like! The vote can be set to automatically end when enough matches are found.&lt;/p&gt;
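&lt;p&gt;The matching rule ("end when enough matches are found") boils down to finding options that every participant has liked. A hypothetical sketch of that logic (the real app is .NET with VueJS; the names here are invented):&lt;/p&gt;

```typescript
// Votes per participant: the set of option IDs each person swiped "yes" on.
type Votes = Map<string, Set<string>>;

// An option is a match when every participant has liked it.
function findMatches(votes: Votes): string[] {
  const lists = [...votes.values()];
  if (lists.length === 0) return [];
  return [...lists[0]].filter((opt) => lists.every((s) => s.has(opt)));
}

// The session can auto-close once findMatches(votes).length reaches a threshold.
const votes: Votes = new Map([
  ["amira", new Set(["sushi", "tacos", "pizza"])],
  ["omar", new Set(["tacos", "pizza"])],
]);
console.log(findMatches(votes)); // options everyone liked
```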

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/mealmatch/app"&gt;https://github.com/mealmatch/app&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;p&gt;MIT&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;My brother-in-law came up with this idea: Wouldn't it be great if there was something like a dating app, but for deciding what to eat?&lt;/p&gt;

&lt;h3&gt;
  
  
  How I built it
&lt;/h3&gt;

&lt;p&gt;I started building this app in .NET Core because that is what I'm most comfortable with. The front page needed a bit of client-side magic, so I threw in some VueJS to handle that. The VueJS was simple enough that I could add it without any additional build (webpack) step.&lt;/p&gt;

&lt;p&gt;I also wanted to try to build an app that respects privacy and doesn't even ask users for their emails. Vote sessions are assigned a random GUID. The user can bookmark the URL if they want to return to the page.&lt;/p&gt;
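&lt;p&gt;The whole "account" model is just an unguessable link. A hypothetical sketch of that idea (the URL path shape is invented for illustration):&lt;/p&gt;

```typescript
import { randomUUID } from "crypto";

// No logins and no emails: a voting session is identified only by a random
// GUID in its URL, so knowing (or bookmarking) the link is the only credential.
function newSessionUrl(baseUrl: string): string {
  return `${baseUrl}/session/${randomUUID()}`;
}

console.log(newSessionUrl("https://mealmatch.mydemo.dev"));
```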

&lt;p&gt;For data storage I used Azure Table Storage. It's a basic NoSQL database that's easy to use.&lt;/p&gt;

&lt;p&gt;For getting Yelp results, I created a developer account at Yelp and used their REST API. I tried my best to follow their usage guidelines and make clear which results came from their data.&lt;/p&gt;

&lt;p&gt;I tested the app locally until I got everything working. From there I created a Dockerfile to build it, put it on GitHub, and created the DigitalOcean App Platform instance.&lt;/p&gt;

&lt;p&gt;I was pleasantly surprised at how seamless DigitalOcean's App Platform was. The platform automatically built my Docker image and deployed it. Adding the needed environment variables and a custom domain was intuitive and took very little effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources/Info
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://dotnet.microsoft.com/"&gt;https://dotnet.microsoft.com/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://vuejs.org/"&gt;https://vuejs.org/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.yelp.com/developers/documentation/v3/business_search"&gt;https://www.yelp.com/developers/documentation/v3/business_search&lt;/a&gt;&lt;br&gt;
&lt;a href="https://azure.microsoft.com/en-us/services/storage/tables"&gt;https://azure.microsoft.com/en-us/services/storage/tables&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dohackathon</category>
    </item>
    <item>
      <title>How to Effectively use a Single Monitor</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Fri, 27 Mar 2020 18:49:52 +0000</pubDate>
      <link>https://dev.to/nabsul/wfh-tip-how-to-effectively-use-a-single-monitor-314l</link>
      <guid>https://dev.to/nabsul/wfh-tip-how-to-effectively-use-a-single-monitor-314l</guid>
      <description>&lt;p&gt;Original post: &lt;a href="https://nabeel.dev/2020/03/27/single-monitor/"&gt;https://nabeel.dev/2020/03/27/single-monitor/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many people I know love working with multiple monitors. Personally, I'm not a huge fan. The only times I find myself using a second monitor are when I'm on a video conference (or watching a video) and want to keep working while I listen. Other than that, I'm generally a single-monitor user.&lt;/p&gt;

&lt;p&gt;I feel like using a single monitor helps keep me focused. Instead of moving my head left and right between monitors, I use keyboard shortcuts to quickly switch windows on my screen. With many people working from home these days, I imagine many don't have the space for two monitors anyway.&lt;/p&gt;

&lt;p&gt;Here are some tricks I use to make the most of my single monitor:&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Desktops
&lt;/h2&gt;

&lt;p&gt;I use virtual desktops the way many people use multiple monitors. In Windows, you can press &lt;code&gt;win+tab&lt;/code&gt; to view all your windows, and at the top you'll see a list of virtual desktops. Press the &lt;code&gt;+&lt;/code&gt; sign to create as many as you need.&lt;/p&gt;

&lt;p&gt;Switching between virtual desktops is fast and easy: hold the &lt;code&gt;ctrl&lt;/code&gt; and &lt;code&gt;win&lt;/code&gt; keys down, and press left or right to move between desktops. If you're not already comfortable with the &lt;code&gt;alt+tab&lt;/code&gt; keyboard shortcut, then I highly recommend mastering it to quickly switch between windows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Split the Screen
&lt;/h2&gt;

&lt;p&gt;The most common justification I hear for two monitors is: "I need to see two documents side-by-side." For most cases there's a simple alternative: split screen. In fact, I would argue that split screen is better than two monitors, since the two documents can be placed closer together (or even overlapping!) to make side-by-side comparisons even easier.&lt;/p&gt;

&lt;p&gt;In Windows I use &lt;code&gt;win+left&lt;/code&gt; and &lt;code&gt;win+right&lt;/code&gt; all the time for this purpose. That keyboard shortcut automatically sizes a window to occupy the left or right half of your screen respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Invest in a Good Monitor
&lt;/h2&gt;

&lt;p&gt;In my home office, I use a 27" Dell LED monitor. Not all 27" monitors are created equal. Consider getting a high-quality monitor and not the cheapest one possible. And definitely consider getting one high-quality monitor over two low-quality monitors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Laptop is your Second Monitor
&lt;/h2&gt;

&lt;p&gt;My laptop is my primary work machine. I have docking stations at work and at home, so I just plug in and start working wherever I am. Usually my laptop's screen is turned off while I'm plugged into the docking station. For the few situations where I want a second monitor, I simply turn that screen on.&lt;/p&gt;

&lt;h2&gt;
  
  
  End
&lt;/h2&gt;

</description>
      <category>productivity</category>
      <category>wfh</category>
    </item>
    <item>
      <title>Connecting Kubernetes to AWS ECR</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Mon, 23 Mar 2020 21:22:57 +0000</pubDate>
      <link>https://dev.to/nabsul/connecting-kubernetes-to-aws-ecr-25md</link>
      <guid>https://dev.to/nabsul/connecting-kubernetes-to-aws-ecr-25md</guid>
      <description>&lt;p&gt;I'm pleased to announce the release of &lt;code&gt;k8s-ecr-login-renew&lt;/code&gt; &lt;br&gt;
(&lt;a href="https://github.com/nabsul/k8s-ecr-login-renew"&gt;GitHub&lt;/a&gt; / &lt;a href="https://hub.docker.com/repository/docker/nabsul/k8s-ecr-login-renew"&gt;Docker&lt;/a&gt;). It's a small tool written in Go that simplifies working with Amazon's Elastic Container Registry (ECR). It addresses the fact that ECR Docker login credentials &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login-password.html"&gt;expire every 12 hours&lt;/a&gt;. &lt;code&gt;k8s-ecr-login-renew&lt;/code&gt; solves this by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetching Docker login credentials from AWS ECR&lt;/li&gt;
&lt;li&gt;Creating/Updating a Docker login secret in Kubernetes&lt;/li&gt;
&lt;li&gt;Running as a cron job to prevent the Docker secret from expiring&lt;/li&gt;
&lt;/ul&gt;
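&lt;p&gt;Concretely, each run produces a fresh &lt;code&gt;kubernetes.io/dockerconfigjson&lt;/code&gt; payload. Here is a hypothetical sketch of that payload's shape in TypeScript (the tool itself is written in Go):&lt;/p&gt;

```typescript
// ECR issues a login token (username "AWS") that expires after 12 hours.
// Kubernetes image pulls expect it wrapped in a .dockerconfigjson payload
// whose `auth` field is base64("AWS:<token>").
function dockerConfigJson(server: string, token: string): string {
  const auth = Buffer.from(`AWS:${token}`).toString("base64");
  return JSON.stringify({ auths: { [server]: { auth } } });
}

// The cron job upserts this as a Secret on a schedule shorter than the
// 12-hour expiry, so image pulls from ECR never start failing.
console.log(dockerConfigJson("123456789012.dkr.ecr.us-west-2.amazonaws.com", "example-token"));
```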

&lt;p&gt;The source code and Docker image are published here: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source Code: &lt;a href="https://github.com/nabsul/k8s-ecr-login-renew"&gt;https://github.com/nabsul/k8s-ecr-login-renew&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Image: &lt;a href="https://hub.docker.com/repository/docker/nabsul/k8s-ecr-login-renew"&gt;https://hub.docker.com/repository/docker/nabsul/k8s-ecr-login-renew&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm also quite proud of the &lt;a href="https://github.com/nabsul/k8s-ecr-login-renew/blob/master/README.md"&gt;README&lt;/a&gt; and example code. My hope is that they will make getting started extremely easy.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;

&lt;p&gt;PS: Feedback is welcome and desired! Please also let me know if you found this tool useful (or if you had trouble using it).&lt;/p&gt;

</description>
      <category>k8s</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>docker</category>
    </item>
    <item>
      <title>Analyzing WordPress Hook Usage with Azure Data Lake</title>
      <dc:creator>Nabeel Sulieman</dc:creator>
      <pubDate>Fri, 31 May 2019 19:05:09 +0000</pubDate>
      <link>https://dev.to/nabsul/analyzing-wordpress-hook-usage-with-azure-data-lake-nb0</link>
      <guid>https://dev.to/nabsul/analyzing-wordpress-hook-usage-with-azure-data-lake-nb0</guid>
      <description>&lt;p&gt;Note: I wrote this post in 2017, so keep in mind that the code may need updating and things like Azure prices have probably changed.&lt;/p&gt;

&lt;p&gt;WordPress provides a large number of hooks that allow plugins to extend and modify its behavior. A few months ago, I was curious about which of these hooks are popular, and which of them are hardly ever used. I was also looking for an excuse to give Microsoft’s Data Lake Analytics a spin. U-SQL looked especially attractive as it brought back fond memories of petabyte-scale data crunching at Bing.&lt;/p&gt;

&lt;p&gt;With that in mind, I set out to build some tools that would calculate the usage of WordPress’s hooks. Breaking that up into smaller steps, I came up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Crawl all published plugins on WordPress.org&lt;/li&gt;
&lt;li&gt;Extract which hooks are used by each plugin&lt;/li&gt;
&lt;li&gt;Extract a list of WordPress hooks&lt;/li&gt;
&lt;li&gt;For each WordPress hook, calculate its usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the technical side, I set the following goals for this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The code should be developed in C# and U-SQL&lt;/li&gt;
&lt;li&gt;The project should use .NET Core so that it’s cross-platform (Windows, Linux, Mac)&lt;/li&gt;
&lt;li&gt;The project should be usable in Visual Studio, VS Code or from the command line&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article I talk about the approach and algorithms in general. For the nitty-gritty details, you can check out the source code here: &lt;a href="https://github.com/nabsul/WordPressPluginAnalytics"&gt;https://github.com/nabsul/WordPressPluginAnalytics&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://github.com/nabsul/WordPressPluginAnalytics/blob/master/README.md"&gt;README.md&lt;/a&gt; file for instructions on building and running the code.&lt;/p&gt;

&lt;h2&gt;Crawling for Plugins&lt;/h2&gt;

&lt;p&gt;I decided to crawl the &lt;a href="https://wordpress.org/plugins/?s="&gt;WordPress.org plugins directory&lt;/a&gt; to extract a list of all the plugins. All of the plugins can also be accessed from a common SVN repository, but with its different branch and tag folders, that felt slightly more tedious than crawling the HTML pages to extract the official link to each zip file. The HtmlAgilityPack library makes parsing HTML and extracting information very easy. I used it to parse each directory page for the links to the individual plugin pages, and then parsed each plugin page for its zip file URL.&lt;/p&gt;
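&lt;p&gt;The original crawler is written in C# with HtmlAgilityPack; purely as an illustration of the idea, here is a minimal Python sketch that pulls zip links out of a page with a regex instead of a real HTML parser (the exact URL pattern is an assumption):&lt;/p&gt;

```python
import re

# Hypothetical simplification: match plugin zip download links in a page.
# The real crawler walks the parsed HTML tree with HtmlAgilityPack instead.
ZIP_RE = re.compile(r'href="(https://downloads\.wordpress\.org/plugin/[^"]+\.zip)"')

def extract_zip_urls(page_html):
    """Return all plugin .zip download links found in the page source."""
    return ZIP_RE.findall(page_html)

sample = 'a href="https://downloads.wordpress.org/plugin/akismet.5.0.zip" class="download"'
print(extract_zip_urls(sample))
```

A regex is fragile compared to a proper HTML parser, which is exactly why the post reaches for HtmlAgilityPack; this sketch only shows the shape of the extraction step.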

&lt;p&gt;Once I had the zip file URL, I downloaded the archive and uploaded it to Azure Blob Storage. I considered skipping this step and working directly with the data from WordPress.org, but storing a copy gave me a stable snapshot of the original data to experiment on without repeatedly hitting wordpress.org for the same data.&lt;/p&gt;

&lt;p&gt;Running the process sequentially took nearly 5 hours from a Digital Ocean droplet, but about 90% of that time was spent waiting on I/O. Adding some parallelism therefore made a lot of sense, and it was as simple as fetching all 12 plugins on each page in parallel. This brought the run time down to just over an hour.&lt;/p&gt;
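&lt;p&gt;As a rough sketch of that per-page fan-out (in Python rather than the post's C#, with a stand-in &lt;code&gt;download_plugin&lt;/code&gt; function in place of the real fetch-and-upload step):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the real work: fetch a plugin zip and upload it to storage.
def download_plugin(url):
    return (url, "ok")

def process_page(plugin_urls):
    # One worker per plugin on the page; directory pages list 12 plugins,
    # so an I/O-bound page finishes in roughly the time of its slowest fetch.
    with ThreadPoolExecutor(max_workers=12) as pool:
        return list(pool.map(download_plugin, plugin_urls))

results = process_page(["url-%d" % i for i in range(12)])
print(len(results))
```

Because the work is almost entirely network waits, threads (or async I/O) capture nearly all of the available speedup without any CPU-level parallelism.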

&lt;h2&gt;Extracting Data&lt;/h2&gt;

&lt;p&gt;With the raw data in hand, the next step was to extract useful information from it. I used &lt;code&gt;System.IO.Compression.ZipArchive&lt;/code&gt; to iterate over the PHP files in each zip file. I initially considered writing my own code to parse the PHP, but quickly gave up on the idea when I realized how complicated that would get. Instead I looked around and found &lt;code&gt;Devsense.Php.Parser&lt;/code&gt;. With this library I could work directly on tokenized data and avoid all the hassle of parsing the text myself.&lt;/p&gt;

&lt;p&gt;With that library, I extracted each hook usage and creation from the PHP files. I only counted instances where the hook name is a constant string, since it would be impossible to predict the hook name for code like &lt;code&gt;add_action( "updated_$myvar", ...)&lt;/code&gt;.&lt;/p&gt;
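&lt;p&gt;A heavily simplified sketch of that constant-string rule, using a Python regex in place of the real &lt;code&gt;Devsense.Php.Parser&lt;/code&gt; tokenizer (the list of hook functions and the name pattern are assumptions for illustration):&lt;/p&gt;

```python
import re

# Hypothetical stand-in for the tokenizer-based extraction: only match
# hook calls whose first argument is a constant string literal, which
# mirrors the post's rule of skipping dynamic names like "updated_$myvar".
HOOK_RE = re.compile(
    r"(add_action|add_filter|do_action|apply_filters)\s*\(\s*['\"]([\w./-]+)['\"]")

def extract_hooks(php_source):
    """Return (call, hook_name) pairs for constant-string hook names only."""
    return HOOK_RE.findall(php_source)

php = "add_action('init', 'my_init'); do_action(\"updated_$myvar\");"
print(extract_hooks(php))
```

The dynamic hook name fails the name pattern and is dropped, while the constant one is captured. A real tokenizer also handles comments, nested strings, and concatenation, which is why the post avoids hand-rolled parsing.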

&lt;p&gt;The final result needed to be in a format that can be easily analyzed with U-SQL and Azure Data Lake Analytics. U-SQL comes with built-in TSV extractors, so if you upload your raw data in that format, you don't need custom C# code to process it. Data Lake Analytics can also automatically decompress gzipped files, which is great since my TSV files compress to about 10% of their original size.&lt;/p&gt;
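&lt;p&gt;Producing that gzipped TSV output is straightforward; here is a small Python sketch (the column layout is made up for illustration, the original tool is C#):&lt;/p&gt;

```python
import gzip
import io

# Hypothetical record layout: plugin name, hook call, hook name.
rows = [
    ("akismet", "add_action", "init"),
    ("akismet", "apply_filters", "the_content"),
]

# Write gzip-compressed TSV, the format U-SQL's built-in extractors can
# read directly (Data Lake Analytics decompresses .gz inputs on the fly).
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    for row in rows:
        f.write("\t".join(row) + "\n")

data = buf.getvalue()
# The compressed payload round-trips back to the original TSV lines.
print(gzip.decompress(data).decode("utf-8").splitlines())
```

Keeping the upload format dumb (compressed TSV) pushes all the expensive parsing out of the paid analytics job, which matters for cost, as discussed below.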

&lt;p&gt;Extracting the data from the plugins took less than an hour, so I didn't bother parallelizing that part of the code.&lt;/p&gt;

&lt;h2&gt;Running the Analysis&lt;/h2&gt;

&lt;p&gt;The final step of the process is running a U-SQL script to analyze the data and generate the report. You can upload the data manually or with the command-line tool included in the project. At this point you should have two extraction files: one for the WordPress source code and one for all the plugins. You can then edit and submit the U-SQL script manually, or, if you followed the naming conventions used in the program, submit the job with the same command-line tool.&lt;/p&gt;

&lt;p&gt;U-SQL is a SQL-like language. If you’re familiar with SQL, the code in the script should all make sense. The raw data is read from the uploaded files. The WordPress data is filtered by hooks created and the plugins are filtered by hooks used. Hook usage is counted using a &lt;code&gt;GROUP BY&lt;/code&gt; statement. The hooks from WordPress and the plugins are then cross-referenced using a &lt;code&gt;JOIN&lt;/code&gt;. The graph of the job looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SkznYC27--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w9px72dcl60oxrj5ezhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SkznYC27--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w9px72dcl60oxrj5ezhh.png" alt="U-SQL Job"&gt;&lt;/a&gt;&lt;/p&gt;
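&lt;p&gt;The same GROUP BY and JOIN logic can be sketched in a few lines of Python, with toy data, purely to illustrate what the U-SQL script computes:&lt;/p&gt;

```python
from collections import Counter

# Toy emulation of the U-SQL job: count hook usage with GROUP BY
# semantics, then JOIN against the hooks that WordPress core creates.
wp_hooks = {"init", "the_content", "admin_menu"}  # from the core extraction
plugin_usages = [  # (plugin, hook) rows from the plugin extraction file
    ("akismet", "init"),
    ("jetpack", "init"),
    ("jetpack", "the_content"),
]

# GROUP BY hook: total call sites, and distinct plugins using each hook.
num_usages = Counter(hook for _, hook in plugin_usages)
num_plugins = Counter(hook for _, hook in set(plugin_usages))

# Inner JOIN: keep only hooks that core defines and plugins actually use.
report = sorted(
    (hook, num_plugins[hook], num_usages[hook])
    for hook in wp_hooks
    if hook in num_usages
)
print(report)
```

In the real job these steps run as distributed U-SQL stages over the uploaded TSV files; the dict-and-Counter version above is just the single-machine analogue of the query plan shown in the graph.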

&lt;h2&gt;The Cost of Data Lake Analytics&lt;/h2&gt;

&lt;p&gt;The job takes a couple of minutes to run and costs around $0.03 (US). However, I learned a few important lessons about the pricing of Data Lake jobs. First, when running on just a few GB of data, make sure you run with a parallelism of 1. Increasing the parallelism on a small data set is just a waste of money: my 3-cent job cost 12 cents when I ran it with a parallelism of 5. I also suspect that compressing my data files helped reduce the cost, since less data travelling over the network often results in significantly faster (and cheaper) jobs.&lt;/p&gt;

&lt;p&gt;The second and more important point is about using custom code and libraries in your scripts. It is possible to upload and use custom .NET DLLs in your U-SQL scripts, but I highly recommend avoiding that unless it's absolutely necessary. I experimented with uploading the individual plugin zip files to Data Lake storage and using a custom extractor library that directly processed the zip files and tokenized the PHP. Running that job cost around $5. That is far more than the cost of working on TSV files, but it makes sense: doing the zip extraction and PHP parsing on Azure's analytics infrastructure consumes far more (billed) CPU cycles than doing most of the pre-processing up front.&lt;/p&gt;

&lt;p&gt;As you can see, unlike simpler services like storage, the cost of using this type of service can vary widely depending on how you design your data pipelines. It is therefore important to spend some time researching and carefully considering these decisions before settling on an approach.&lt;/p&gt;

&lt;h2&gt;Viewing the Results&lt;/h2&gt;

&lt;p&gt;The final result of running the script is a small TSV-formatted report with the following pieces of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hook Name: The name of the hook, prefixed with &lt;code&gt;action_&lt;/code&gt; or &lt;code&gt;filter_&lt;/code&gt; to differentiate the two types of hooks&lt;/li&gt;
&lt;li&gt;Num Plugins: Number of plugins using the hook&lt;/li&gt;
&lt;li&gt;Num Usages: Number of times the hook is used&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data can be imported into a spreadsheet for further analysis and charting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://1drv.ms/x/s!AoNGbuElNYPMjMUVzq5931eX9YzSuA"&gt;https://1drv.ms/x/s!AoNGbuElNYPMjMUVzq5931eX9YzSuA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusions&lt;/h2&gt;

&lt;p&gt;Overall, there was definitely a learning curve to the Azure Data Lake services, but it wasn't too bad. I'm curious how all of this could be done in the Hadoop ecosystem, which I'm much less familiar with. If anyone would like to try replicating these results in Hadoop, I would greatly appreciate a tutorial and/or shared source code.&lt;/p&gt;

&lt;p&gt;This code could easily be expanded to perform other types of analysis. For example, it might be interesting to see the usage of various WordPress functions and classes. It also might be interesting to reduce the list of plugins to the most popular ones to get more realistic usage information for the hooks.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>azure</category>
      <category>wordpress</category>
    </item>
  </channel>
</rss>
