<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: LoftLabs</title>
    <description>The latest articles on DEV Community by LoftLabs (@loft).</description>
    <link>https://dev.to/loft</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3686%2Fe5c44179-dafb-498d-a6b7-09d03a0bff6f.png</url>
      <title>DEV Community: LoftLabs</title>
      <link>https://dev.to/loft</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/loft"/>
    <language>en</language>
    <item>
      <title>Reimagining Local Kubernetes: Replacing Kind with vind — A Deep Dive</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Mon, 02 Mar 2026 17:12:16 +0000</pubDate>
      <link>https://dev.to/loft/reimagining-local-kubernetes-replacing-kind-with-vind-a-deep-dive-4gkn</link>
      <guid>https://dev.to/loft/reimagining-local-kubernetes-replacing-kind-with-vind-a-deep-dive-4gkn</guid>
      <description>&lt;p&gt;Kubernetes developers including myself have long relied on tools like KinD, aka, &lt;strong&gt;kind (Kubernetes in Docker)&lt;/strong&gt; to spin up disposable clusters locally for development, testing, and CI/CD workflows. I love the product and have used it many times but there were certain limitations which were a bit annoying like not being able to use service type LoadBalancer, accessing the homelab kind clusters from the web, using the pull through cache, adding a GPU node to your local kind cluster.&lt;/p&gt;

&lt;p&gt;Introducing &lt;strong&gt;vind (vCluster in Docker)&lt;/strong&gt; - an open source alternative to kind, or, you could say, kind on steroids. vind runs Kubernetes clusters as first-class Docker containers, offering improved performance, modern features, and a better developer experience.&lt;/p&gt;

&lt;p&gt;In this post, we'll explore what vind is, how it compares to kind, and walk through real-world usage with examples from the vind repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is vind?
&lt;/h2&gt;

&lt;p&gt;At its core, vind is a way to run Kubernetes clusters directly as Docker containers. You can do that with kind and other tooling too, so why vind, and why should you try it? Well, here is the thing:&lt;/p&gt;

&lt;h3&gt;
  
  
  vind gives you
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;free vCluster Platform UI&lt;/strong&gt; that you can use to access your local cluster from anywhere in the world; a perfect use case is giving a homelab demo at a conference&lt;/li&gt;
&lt;li&gt;Native support for services of type LoadBalancer&lt;/li&gt;
&lt;li&gt;A pull-through image cache via the Docker daemon&lt;/li&gt;
&lt;li&gt;Easy multi-node Kubernetes cluster creation&lt;/li&gt;
&lt;li&gt;Attaching external nodes, such as an EC2 instance or a GPU machine, to your local cluster via the vCluster VPN, with no additional tooling required&lt;/li&gt;
&lt;li&gt;Flexible CNI choices&lt;/li&gt;
&lt;li&gt;Sleep and wake for your cluster: pause clusters to save resources and resume them instantly&lt;/li&gt;
&lt;li&gt;Cluster snapshots and backups, coming in the next release&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, vind is "a better kind" — bringing modern features that matter for real developer workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  vind vs Kind
&lt;/h2&gt;

&lt;p&gt;Let's see how vind stacks up against Kind:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;vind&lt;/th&gt;
&lt;th&gt;Kind&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Built-in UI&lt;/td&gt;
&lt;td&gt;✅ via vCluster Platform&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sleep/Wake&lt;/td&gt;
&lt;td&gt;✅ Native&lt;/td&gt;
&lt;td&gt;❌ requires delete &amp;amp; recreate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Load Balancers&lt;/td&gt;
&lt;td&gt;✅ Automatic&lt;/td&gt;
&lt;td&gt;❌ Manual setup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Caching&lt;/td&gt;
&lt;td&gt;✅ Via Docker cache&lt;/td&gt;
&lt;td&gt;❌ External registries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;External Nodes&lt;/td&gt;
&lt;td&gt;✅ Supported&lt;/td&gt;
&lt;td&gt;❌ Local only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CNI / CSI Options&lt;/td&gt;
&lt;td&gt;✅ Flexible&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Snapshots&lt;/td&gt;
&lt;td&gt;🕐 Coming soon&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Kind is useful for many use cases, but vind expands on those capabilities with support for modern tooling like &lt;strong&gt;vCluster&lt;/strong&gt; and the &lt;strong&gt;Docker driver&lt;/strong&gt;, providing a richer, multi-cloud developer experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing and Setting Up vind
&lt;/h2&gt;

&lt;p&gt;The docs are up to date with the instructions you need, so we will follow them and then attach an external node to a local cluster.&lt;/p&gt;

&lt;p&gt;Before we dive into examples, let's get vind installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Prerequisites
&lt;/h3&gt;

&lt;p&gt;Ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed and running&lt;/li&gt;
&lt;li&gt;vCluster CLI (v0.31.0 or later)&lt;/li&gt;
&lt;/ul&gt;
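&lt;p&gt;A quick way to sanity-check the CLI against the v0.31.0 minimum (a sketch: the &lt;code&gt;INSTALLED&lt;/code&gt; value is a placeholder that you would parse out of &lt;code&gt;vcluster version&lt;/code&gt; yourself):&lt;/p&gt;

```shell
# Sketch: compare an installed vCluster CLI version against the required
# minimum using sort -V. INSTALLED is a placeholder -- in practice, parse
# it from the output of `vcluster version`.
REQUIRED="0.31.0"
INSTALLED="0.31.0"
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "CLI version OK (>= v$REQUIRED)"
else
  echo "Please upgrade the vCluster CLI to v$REQUIRED or later"
fi
```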

&lt;h3&gt;
  
  
  2. Install / Update vCluster CLI
&lt;/h3&gt;

&lt;p&gt;If you have never installed the vCluster CLI, you can install it from &lt;a href="https://www.vcluster.com/install" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you already have the CLI, upgrade it and switch to the Docker driver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster upgrade &lt;span class="nt"&gt;--version&lt;/span&gt; v0.31.0
vcluster use driver docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnim2milttodmxieg5ejb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnim2milttodmxieg5ejb.jpeg" alt="vCluster CLI upgrade" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Start Optional UI
&lt;/h3&gt;

&lt;p&gt;The vCluster Platform UI provides a clean web interface for managing your clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster platform start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mrmovdhi7zgzlwjk35g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mrmovdhi7zgzlwjk35g.jpeg" alt="vCluster Platform Start" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once booted, you can manage clusters visually — something kind doesn't provide out of the box.&lt;/p&gt;

&lt;p&gt;You will get login details that you can use to sign in and manage your clusters through the UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetie0k6vzk7050ffc5rv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetie0k6vzk7050ffc5rv.jpeg" alt="vCluster Platform UI Login" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Your First vind Cluster
&lt;/h3&gt;

&lt;p&gt;Once set up, creating your first Kubernetes cluster using vind is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, validate the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qg4gwn9ky3r5xk3w69e.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qg4gwn9ky3r5xk3w69e.jpeg" alt="vind cluster creation" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You now have a &lt;strong&gt;Kubernetes cluster running inside Docker&lt;/strong&gt;, ready for development.&lt;/p&gt;
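&lt;p&gt;Since the cluster is just a set of Docker containers, regular Docker tooling applies. As a sketch (the exact container naming is an assumption; check &lt;code&gt;docker ps&lt;/code&gt; on your own machine), you can filter for your cluster's containers:&lt;/p&gt;

```shell
# Helper: filter `docker ps` output down to one cluster's containers.
# Assumption: vind container names include the cluster name -- verify
# this against your own `docker ps` output.
list_cluster_containers() {
  grep -i "$1"   # stdin: docker ps --format '{{.Names}}\t{{.Status}}'
}
# Usage:
#   docker ps --format '{{.Names}}\t{{.Status}}' | list_cluster_containers my-cluster
```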

&lt;h2&gt;
  
  
  Cluster with multiple nodes?
&lt;/h2&gt;

&lt;p&gt;Let's try to create a cluster with multiple nodes.&lt;/p&gt;

&lt;p&gt;You can use one of the examples — &lt;a href="https://github.com/loft-sh/vind/blob/main/examples/multi-node-cluster.yaml" rel="noopener noreferrer"&gt;multi-node-cluster.yaml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;experimental&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Network configuration&lt;/span&gt;
    &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vind-multi-node"&lt;/span&gt;

    &lt;span class="c1"&gt;# Load balancer and registry proxy are enabled by default&lt;/span&gt;

    &lt;span class="c1"&gt;# Additional worker nodes&lt;/span&gt;
    &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-1&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CUSTOM_VAR=value1"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-2&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CUSTOM_VAR=value2"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-3&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CUSTOM_VAR=value3"&lt;/span&gt;
&lt;span class="na"&gt;privateNodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;vpn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;nodeToNode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create my-vcluster &lt;span class="nt"&gt;-f&lt;/span&gt; multi-node-cluster.yaml &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vqw272gsob1uthiq4fk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vqw272gsob1uthiq4fk.jpeg" alt="Multi-node cluster creation" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Boom! You have a multi-node Kubernetes cluster!&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding an external node
&lt;/h2&gt;

&lt;p&gt;This is my favourite one!&lt;/p&gt;

&lt;p&gt;If you want to attach a GPU or other external node to your local cluster, you can do so using the vCluster VPN that comes with the vCluster Free tier.&lt;/p&gt;

&lt;p&gt;Add the following to the multi-node YAML manifest (the example above already includes it) and recreate the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;privateNodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;vpn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;nodeToNode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster token createdemo ~ vcluster token create

curl &lt;span class="nt"&gt;-fsSLk&lt;/span&gt; &lt;span class="s2"&gt;"https://25punio.loft.host/kubernetes/project/default/virtualcluster/my-cluster/node/join?token=eerawx.dwets9a8adfw52gz"&lt;/span&gt; | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
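&lt;p&gt;For reference, the join URL that &lt;code&gt;vcluster token create&lt;/code&gt; prints follows a fixed pattern. Here is a sketch with the values from this walkthrough; substitute your own platform host, project, cluster name, and token:&lt;/p&gt;

```shell
# Sketch: how the node join URL is assembled. All values below are the
# ones from this walkthrough -- yours come from your platform and from
# the output of `vcluster token create`.
PLATFORM_HOST="25punio.loft.host"
PROJECT="default"
VCLUSTER="my-cluster"
TOKEN="eerawx.dwets9a8adfw52gz"
JOIN_URL="https://${PLATFORM_HOST}/kubernetes/project/${PROJECT}/virtualcluster/${VCLUSTER}/node/join?token=${TOKEN}"
echo "$JOIN_URL"
# On the node to be joined, run:  curl -fsSLk "$JOIN_URL" | sh -
```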



&lt;p&gt;Now you can create an instance in any cloud provider; I have created one in Google Cloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instances create my-vm &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e2-micro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu-2204-lts &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image-project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu-os-cloud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ejuz7h3ne91py219937.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ejuz7h3ne91py219937.jpeg" alt="GCP instance creation" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, SSH into the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh my-vm &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the curl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;saiyam@my-vm:~$ sudo su 
root@my-vm:/home/saiyam# curl -fsSLk "https://25punio.loft.host/kubernetes/project/default/virtualcluster/my-cluster/node/join?token=eerawx.hdnueyd9a8myfr52gz" | sh -
Detected OS: ubuntu
Preparing node for Kubernetes installation...
Kubernetes version: v1.34.0
Installing Kubernetes binaries...
Downloading Kubernetes binaries from https://github.com/loft-sh/kubernetes/releases/download...
Loading bridge and br_netfilter modules...
insmod /lib/modules/6.8.0-1046-gcp/kernel/net/llc/llc.ko 
insmod /lib/modules/6.8.0-1046-gcp/kernel/net/802/stp.ko 
insmod /lib/modules/6.8.0-1046-gcp/kernel/net/bridge/bridge.ko 
insmod /lib/modules/6.8.0-1046-gcp/kernel/net/bridge/br_netfilter.ko 
Activating ip_forward...
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Resetting node...
Ensuring kubelet is stopped...
kubelet service not found
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/60-gce-network-security.conf ...
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
kernel.randomize_va_space = 2
kernel.panic = 10
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /etc/sysctl.d/99-gce-strict-reverse-path-filtering.conf ...
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/99-tailscale.conf ...
* Applying /etc/sysctl.conf ...
Starting vcluster-vpn...
Created symlink /etc/systemd/system/multi-user.target.wants/vcluster-vpn.service → /etc/systemd/system/vcluster-vpn.service.
Waiting for vcluster-vpn to be ready...
Waiting for vcluster-vpn to be ready...
Configuring node to node vpn...
Waiting for a tailscale ip...
Starting containerd...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
Importing pause image...
registry.k8s.io/pause:3.10               saved 
application/vnd.oci.image.manifest.v1+json sha256:a883b8d67f5fe8ae50f857fb4c11c789913d31edff664135b9d4df44d3cb85cb
Importing elapsed: 0.2 s total:   0.0 B (0.0 B/s) 
Starting kubelet...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
Installation successful!
Joining node into cluster...
[preflight] Running pre-flight checks
W0209 16:03:28.474612    1993 file.go:102] [discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
W0209 16:03:30.253845    1993 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [10.109.18.131]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501478601s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This node has joined the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Certificate signing request was sent to apiserver and a response was received.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Kubelet was informed of the new secure connection details.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run &lt;code&gt;kubectl get nodes&lt;/code&gt; on the control plane to see this node join the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z1z0cb0s1wn2whvwvbb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z1z0cb0s1wn2whvwvbb.jpeg" alt="External node joined" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How cool is this!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9a16ioe1e524f3pfdb8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9a16ioe1e524f3pfdb8.jpeg" alt="Cluster with external node" width="800" height="459"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE            NAME                                      READY   STATUS    RESTARTS   AGE
kube-flannel         kube-flannel-ds-nfb6g                     1/1     Running   0          14m
kube-flannel         kube-flannel-ds-nkxfq                     1/1     Running   0          15m
kube-flannel         kube-flannel-ds-r28k7                     1/1     Running   0          42s
kube-flannel         kube-flannel-ds-tr9cm                     1/1     Running   0          14m
kube-flannel         kube-flannel-ds-xx7p7                     1/1     Running   0          14m
kube-system          coredns-75bb76df-5tbdc                    1/1     Running   0          15m
kube-system          kube-proxy-hfgw7                          1/1     Running   0          15m
kube-system          kube-proxy-l86w5                          1/1     Running   0          42s
kube-system          kube-proxy-lqprr                          1/1     Running   0          14m
kube-system          kube-proxy-sz6f2                          1/1     Running   0          14m
kube-system          kube-proxy-wrkzl                          1/1     Running   0          14m
local-path-storage   local-path-provisioner-6f6fd5d9d9-4tpvj   1/1     Running   0          15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many other features, like &lt;strong&gt;Sleep Mode&lt;/strong&gt;, which lets you sleep and wake a vind Kubernetes cluster. Accessing your clusters from the UI and attaching external nodes, along with LoadBalancer services working without any additional third-party tooling, is just amazing!&lt;/p&gt;

&lt;p&gt;So I have replaced my kind setup with vind.&lt;/p&gt;

&lt;p&gt;For teams focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast local cluster spin-up&lt;/li&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Kubernetes testing&lt;/li&gt;
&lt;li&gt;Hybrid workflows&lt;/li&gt;
&lt;li&gt;Efficient resource usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…vind is worth serious consideration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Kind&lt;/th&gt;
&lt;th&gt;vind&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Local cluster creation&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ease of management&lt;/td&gt;
&lt;td&gt;⚠️ CLI only&lt;/td&gt;
&lt;td&gt;✅ UI available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource efficiency&lt;/td&gt;
&lt;td&gt;⚠️ Must recreate&lt;/td&gt;
&lt;td&gt;✅ Sleep/Wake&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Networking &amp;amp; LB&lt;/td&gt;
&lt;td&gt;⚠️ requires plugins&lt;/td&gt;
&lt;td&gt;✅ automatic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developer experience&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;td&gt;Great&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're evaluating a modern replacement for kind in your tooling, especially for team and CI/dev workflow use cases, vind puts a compelling stake in the ground.&lt;/p&gt;

&lt;p&gt;⭐ &lt;a href="https://github.com/loft-sh/vind/tree/main" rel="noopener noreferrer"&gt;Star the repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open issues and let's discuss how to make it even better!&lt;/p&gt;

&lt;p&gt;💬 &lt;a href="https://slack.vcluster.com" rel="noopener noreferrer"&gt;Join our Slack&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>vCluster Free: Enterprise Kubernetes Features at No Cost</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Mon, 23 Feb 2026 13:01:22 +0000</pubDate>
      <link>https://dev.to/loft/vcluster-free-enterprise-kubernetes-features-at-no-cost-1e09</link>
      <guid>https://dev.to/loft/vcluster-free-enterprise-kubernetes-features-at-no-cost-1e09</guid>
      <description>&lt;p&gt;We had such an amazing year 2025 and our commercial offering is gaining a ton of traction with large enterprises, neoclouds and AI factories alike. We doubled our team size and almost tripled our revenue in 2025. This success is rooted in our strong open source project and in our vCluster community, so to kick off 2026, we wanted to make sure to give back to our strongest supporters in the community and we thought hard about how to do this. The result of this are two additions to vCluster that we’ve been working on over the past few months and we’re announcing both of them today:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;vind - vCluster in Docker&lt;/strong&gt; (100% open source, no strings attached)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vCluster Free&lt;/strong&gt; - Our New Free Tier (makes many Enterprise features available for free)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To learn more about vind, check out our livestream today on &lt;a href="https://www.youtube.com/watch?v=In8vzpKecLs" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; or &lt;a href="https://www.linkedin.com/events/7417677957579718656/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. To learn more about vCluster Free, we will run a separate livestream next week (YouTube, LinkedIn), but of course we’ll show some of it off today. For details about vind and vCluster Free outside of these livestreams, keep reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is vCluster Free?
&lt;/h2&gt;

&lt;p&gt;vCluster Free gives anyone access to many of our enterprise features at no cost. The following features are included in this free tier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CRD Sync&lt;/li&gt;
&lt;li&gt;Sync Patches&lt;/li&gt;
&lt;li&gt;Namespace 1:1 Syncing&lt;/li&gt;
&lt;li&gt;Custom DNS Entries&lt;/li&gt;
&lt;li&gt;Embedded etcd&lt;/li&gt;
&lt;li&gt;Private Nodes&lt;/li&gt;
&lt;li&gt;Auto Nodes&lt;/li&gt;
&lt;li&gt;Standalone&lt;/li&gt;
&lt;li&gt;vCluster Platform core features enabling self-service: Templates, CRDs, User Management, RBAC, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's quite a long list, isn't it? All of these features were previously part of our Enterprise plan, but now you can use them for free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not open source?
&lt;/h2&gt;

&lt;p&gt;We're an open source company deeply committed to ensuring that vCluster thrives as an open source project. To ensure this long-term, we need to build a successful business around it. That's the best way to make open source sustainable.&lt;/p&gt;

&lt;p&gt;If we open sourced all of the features above, many enterprises would build around the open source project instead of purchasing our commercial offering, which would make it difficult to sustain the business and continue investing in vCluster. This is already happening today—vCluster provides significant value and enables teams to build real platforms on top of the open source project. Even commercial vendors like SpectroCloud, Rafay, Taikun Cloud, Uffizzi, and others have integrated open source vCluster into their products.&lt;/p&gt;

&lt;p&gt;There's clearly a lot of value in vCluster open source. We're continuing to add more features, including our announcement of vind today, which allows you to run vCluster in Docker similar to KinD—entirely available in our open source project. &lt;a href="https://www.linkedin.com/events/7417677957579718656/" rel="noopener noreferrer"&gt;Watch our livestream&lt;/a&gt; to learn more about vind. We will continue investing in our open source project and building features valuable to our open source community.&lt;/p&gt;

&lt;p&gt;However, to make more of our enterprise features available to the community in a way that doesn't cannibalize our Enterprise offering, enable competitors, or reduce large enterprises' interest in our commercial offering, we decided to launch vCluster Free—an offering between vCluster Open Source and vCluster Enterprise. We designed this free tier to be actually useful and set the limits intentionally high so you can get significant value from it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the limits of vCluster Free?
&lt;/h2&gt;

&lt;p&gt;This free tier is not a trial but our attempt to make more enterprise features permanently available to the community. There is no time limit on how long you can use this free tier. Instead, it's limited by infrastructure size:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up to 64 vCPU cores&lt;/li&gt;
&lt;li&gt;Up to 32 GPUs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As long as you stay below these numbers, you can use any of the features above and you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlimited Users&lt;/li&gt;
&lt;li&gt;Unlimited Host Clusters&lt;/li&gt;
&lt;li&gt;Unlimited Virtual Clusters (max 1 HA)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We wanted this free tier to be actually useful and to enable anyone who loves vCluster to access more of the features we're building for our enterprise customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can I do with vCluster Free?
&lt;/h2&gt;

&lt;p&gt;vCluster Free lets anyone use many of our most advanced vCluster Enterprise features without speaking to our sales team or providing a credit card. We believe the limits above are high enough to cover many use cases entirely. Such use cases include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCluster as dev environments in startups or R&amp;amp;D teams within larger orgs&lt;/li&gt;
&lt;li&gt;Ephemeral CI environments for running e2e tests (especially for operators/CRDs)&lt;/li&gt;
&lt;li&gt;Simulating cluster changes (upgrading platform components)&lt;/li&gt;
&lt;li&gt;Running vendor software in a more isolated way (e.g., GitLab runners, etc.)&lt;/li&gt;
&lt;li&gt;Ephemeral clusters for demos (product demos, conference talks, etc.)&lt;/li&gt;
&lt;li&gt;Running a small production cluster&lt;/li&gt;
&lt;li&gt;Home labs and personal test environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are probably many more use cases. Many of them can already be addressed by our open source project, but with the new vCluster Free offering, you get access to even more functionality, a great UI, plus all the CRDs and controllers that come with vCluster Platform.&lt;/p&gt;

&lt;p&gt;I hope you enjoy this new offering. You can try it today by signing up for &lt;a href="https://vcluster.cloud/" rel="noopener noreferrer"&gt;vCluster Cloud&lt;/a&gt; or spinning up vCluster Platform using any of the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run in Docker (for local testing)
vcluster platform start --docker
# Run in Kubernetes
vcluster platform start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re excited to see what you build with vCluster Free.&lt;/p&gt;

&lt;p&gt;Watch the livestream demo &lt;a href="https://www.linkedin.com/events/7422544917408755712/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Originally published at &lt;a href="https://www.vcluster.com" rel="noopener noreferrer"&gt;https://www.vcluster.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>One giant Kubernetes cluster for everything</title>
      <dc:creator>Nicolas Fränkel</dc:creator>
      <pubDate>Thu, 20 Mar 2025 09:02:00 +0000</pubDate>
      <link>https://dev.to/loft/one-giant-kubernetes-cluster-for-everything-1bm6</link>
      <guid>https://dev.to/loft/one-giant-kubernetes-cluster-for-everything-1bm6</guid>
      <description>&lt;p&gt;The ideal size of your Kubernetes clusters is a day 0 question and demands a definite answer.&lt;/p&gt;

&lt;p&gt;You find one giant cluster on one end of the spectrum and many small-sized ones on the other, with every combination in between. This decision will impact your organization for years to come. Worse, if you decide to change your topology, you're in for a time-wasting and expensive ride.&lt;/p&gt;

&lt;p&gt;I want to list each approach's pros and cons in this post. Then, I'll settle the discussion once and for all and argue why selecting the giant cluster option is better.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one giant cluster approach
&lt;/h2&gt;

&lt;p&gt;Deciding on a single giant cluster has great benefits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better resource utilization
&lt;/h3&gt;

&lt;p&gt;Kubernetes was designed to handle large-scale deployments, initially focusing on managing thousands of nodes to support extensive and complex containerized applications. This scalability was a key feature from its inception, enabling it to orchestrate resources across vast, distributed systems efficiently.&lt;/p&gt;

&lt;p&gt;Thus, a Kubernetes cluster is a scheduler at its core: it knows how to run workloads on nodes according to constraints. Without constraints, it will happily balance workloads across its available nodes. If you split the cluster into multiple clusters, you lose this benefit: one cluster can sit idle while another is close to resource starvation and must evict pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lower operational overhead
&lt;/h3&gt;

&lt;p&gt;Good Kubernetes practices mandate that you back up your etcd data, monitor your cluster metrics, log your cluster events, provide security-related tools, etc. Size aside, it stands to reason that it's more time-effective to operate fewer clusters.&lt;/p&gt;

&lt;p&gt;For example, regarding metrics, you'd set up a single Prometheus instance, potentially clustered to handle additional traffic, and be done with it. Automation can mitigate the repetitive aspect of installing and maintaining an instance for each cluster, but you'll still end up with as many instances as you have clusters (or more). Prometheus is just one example because many cluster admins have a long list of tools they run in every cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Straightforward networking and service communication
&lt;/h3&gt;

&lt;p&gt;Service-to-service communication inside a single cluster is straightforward. Point to &lt;code&gt;&amp;lt;service-name&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local&lt;/code&gt; and be done with it. Even better, you only need the service name part inside the same namespace.&lt;/p&gt;

&lt;p&gt;You'll need a tool to help with inter-cluster communication, from the simplicity of ExternalDNS with LoadBalancer services to the complexity of a full-fledged service mesh like Istio—and both ends of the spectrum carry time and operational costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplified governance
&lt;/h3&gt;

&lt;p&gt;With a single cluster, every object lives in the same place, so you can enforce a centralized set of policies with a standardized approach. For example, you can create a namespace per team and environment, restricting access to only that team's members.&lt;/p&gt;

&lt;p&gt;Once you start having multiple clusters, even if you take the same approach, you'll duplicate the policy rules across clusters, with potential differences that will drift further with time.&lt;/p&gt;
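&lt;p&gt;As a minimal sketch, the namespace-per-team pattern above can be enforced with standard Kubernetes RBAC (the names &lt;code&gt;team-a&lt;/code&gt; and &lt;code&gt;team-a-dev&lt;/code&gt; are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One namespace per team and environment
kubectl create namespace team-a-dev
# Grant the team's group edit rights in its namespace only
kubectl create rolebinding team-a-dev-edit \
  --clusterrole=edit \
  --group=team-a \
  --namespace=team-a-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With a single cluster, this policy is defined once; with multiple clusters, it must be replicated (and kept in sync) everywhere.&lt;/p&gt;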

&lt;h3&gt;
  
  
  Cost efficiency
&lt;/h3&gt;

&lt;p&gt;A single cluster means a single control plane, simplifying management and reducing overhead. A control plane is essential for orchestration, but it doesn't run business applications itself, so every additional control plane is pure infrastructure cost.&lt;/p&gt;

&lt;p&gt;Additionally, many of the points above tie into cost optimization. With a single cluster, you only need to configure monitoring (&lt;em&gt;e.g.&lt;/em&gt;, Prometheus), logging, and security tools once, reducing duplication. In-place automation streamlines operations, helping manage costs without adding unnecessary infrastructure expenses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Downsides of a one giant cluster approach
&lt;/h2&gt;

&lt;p&gt;Unfortunately, the giant cluster option is not only unicorns and rainbows; there are definite downsides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Larger blast radius
&lt;/h3&gt;

&lt;p&gt;The larger the cluster, the more teams use it. Unfortunately, this means that if something bad happens to the cluster, it wreaks havoc on the work of more teams. This holds regardless of whether the outage results from a malicious actor, a wrong configuration, or resource starvation.&lt;/p&gt;

&lt;p&gt;If a malicious actor does breach the cluster, a larger cluster exposes more workloads, and the actor can compromise more of them.&lt;/p&gt;

&lt;p&gt;Even without malicious actors, every maintenance operation and upgrade on a cluster can affect its users; the bigger the cluster, the larger the potential impact. When planning an upgrade for a single cluster, you need to conduct an impact analysis that encompasses all users and teams in the organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complex multi-tenancy management
&lt;/h3&gt;

&lt;p&gt;Even within a single organization, multiple teams will use the Kubernetes cluster. Even if every team member behaves professionally, we must set strict policies to avoid issues. If the cluster resembles a building, you'd still put locks on your apartment even if you have friendly neighbors. Likewise, the cluster administrator must enforce strict rules to make sharing a cluster acceptable. At the very least, we need strict namespace isolation to avoid unnecessary access, plus resource quotas to enforce fairness across teams. The problems of sharing a single cluster across teams in one organization multiply a hundredfold if the cluster is shared across several organizations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability limits
&lt;/h3&gt;

&lt;p&gt;Regardless of how exceptional Kubernetes is, it's still a physical system with physical limits. The Kubernetes documentation lists tested boundaries (for example, at most 5,000 nodes and 150,000 total pods per cluster), but even if you never reach them, getting close will require excellent system administration skills and some &lt;a href="https://openai.com/index/scaling-kubernetes-to-2500-nodes/" rel="noopener noreferrer"&gt;fine-tuning&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Even then, the more load you put on a Kubernetes API server, the more sluggish your system will be. If you're lucky, it will degrade linearly, but chances are it will hit some system limit and degrade all at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster-wide objects
&lt;/h3&gt;

&lt;p&gt;Most Kubernetes objects are namespace-scoped, but some are cluster-scoped. A cluster-scoped object exists once for the entire cluster rather than once per namespace. For example, a CustomResourceDefinition is cluster-scoped.&lt;/p&gt;

&lt;p&gt;It means that if a team wants to use v1 of a CRD, then every team on the same cluster is stuck with v1 if they wish to use this CRD. Worse, if any team wants to upgrade to v2, they must coordinate across all teams using the CRD to synchronize the upgrade.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's the ideal size, then?
&lt;/h2&gt;

&lt;p&gt;I could describe the pros and cons of very granular clusters, but they are the mirror image of what we have just seen. For example, very granular clusters allow each team to work on their version of a CRD without stepping on another team's toes. For this reason, I'll avoid repeating myself.&lt;/p&gt;

&lt;p&gt;Most, if not all, articles evaluating the pros and cons of each end of the spectrum advise a meet-in-the-middle approach: "a couple" of clusters to mitigate the worst aspects of each extreme approach. It's all well and good, but none of them, at least none I've read, tell precisely how many "a couple" is. Is it a cluster per environment, &lt;em&gt;i.e.&lt;/em&gt;, production, staging, or development? Is it a cluster per team?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol9t2z8hjsaxbdvqqcge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol9t2z8hjsaxbdvqqcge.png" alt="What's the ideal cluster topology?" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'll take a risk and advertise for two clusters: one for production and the other for everything else. How would you manage the cons mentioned above? Read on.&lt;/p&gt;

&lt;h2&gt;
  
  
  vCluster
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt; is an open source product that lets you create so-called virtual clusters. vCluster is part of the CNCF landscape, specifically a &lt;a href="https://www.cncf.io/training/certification/software-conformance/" rel="noopener noreferrer"&gt;certified Kubernetes distribution&lt;/a&gt;. Being a certified distro means a virtual cluster offers every Kubernetes API you can expect, and you can deploy any application to it just like any other Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;vCluster operates by creating the virtual cluster in a dedicated namespace. You can specify the latter, or vCluster will infer it from the virtual cluster's name. By default, it creates a control plane using the vanilla k8s distribution, but you can choose another one, such as k3s. Likewise, by default, it stores its data in an SQLite database, which works particularly well for temporary and pre-production clusters, such as those you create for a pull request. Alternatively, you can rely on a regular etcd or even an external database such as MySQL or Postgres as a data store for more permanent usage and better resilience and scalability of the virtual cluster.&lt;/p&gt;
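&lt;p&gt;As a hedged sketch, both choices live in the &lt;code&gt;vcluster.yaml&lt;/code&gt; configuration file; check the vCluster docs for the exact schema of your version, and note the connection string below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controlPlane:
  distro:
    k3s:
      enabled: true        # use k3s instead of the vanilla k8s distro
  backingStore:
    database:
      external:
        enabled: true      # replace the default embedded SQLite store
        dataSource: "mysql://user:password@tcp(mysql:3306)/vcluster"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;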

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka7y5y0cyfafm5mku1r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka7y5y0cyfafm5mku1r8.png" alt="Virtual clusters inside a host cluster" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've created a virtual cluster via the CLI or the Helm chart, you can connect to it. The client-side CLI creates a dedicated reusable kubeconfig context. From within a virtual cluster, users see no other virtual clusters.&lt;/p&gt;
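&lt;p&gt;For illustration, creating and connecting takes two CLI commands (the cluster and namespace names here are arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a virtual cluster in the namespace vcluster-team-a
vcluster create team-a --namespace vcluster-team-a
# Connect: kubectl now points at a dedicated kubeconfig context
vcluster connect team-a
kubectl get namespaces   # shows only the virtual cluster's namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;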

&lt;p&gt;If you need to access the host cluster resources from the virtual cluster or vice versa, vCluster uses a so-called syncer that syncs objects back and forth according to a configuration file. This way, you can set up an Ingress Controller on the host cluster and define your Ingress objects in the virtual cluster(s).&lt;/p&gt;
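&lt;p&gt;As an example of that setup, a sketch based on the current &lt;code&gt;vcluster.yaml&lt;/code&gt; format (verify against the docs for your version) syncing Ingress objects to the host cluster could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sync:
  toHost:
    ingresses:
      enabled: true   # Ingress objects created in the virtual cluster are
                      # synced to the host, where the Ingress Controller
                      # picks them up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;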

&lt;h2&gt;
  
  
  How vCluster mitigates the downsides of a giant cluster
&lt;/h2&gt;

&lt;p&gt;Let's review each downside of a giant cluster and how vCluster handles it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Larger blast radius: When using a virtual cluster, the blast radius is automatically contained inside its boundaries. If you want to be conservative, aim for a small granularity approach, such as a cluster per team and environment.&lt;/li&gt;
&lt;li&gt;Complex multi-tenancy management: Gone are multi-tenancy problems since your tenants don't see each other and are isolated inside their respective virtual clusters.&lt;/li&gt;
&lt;li&gt;Scalability limits: While the limits are still there, the chances of reaching them decrease with the number of virtual clusters. If your giant cluster had 100k services, they are now spread across all virtual clusters. Even if the distribution is uneven (and it will be), it gives you breathing room.&lt;/li&gt;
&lt;li&gt;Upgrades and maintenance risks: Upgrade and maintenance tasks are limited to the scope of a single virtual cluster. You can do them in turn, and they will only affect the virtual clusters you target. &lt;/li&gt;
&lt;li&gt;Cluster-wide objects: Finally, with virtual clusters, every team can install their version of a CRD, and its virtual cluster binds the CRD. It allows each team to be entirely independent of each other regarding the version of a CRD they use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  But I need different clusters!
&lt;/h2&gt;

&lt;p&gt;While a single giant cluster provides compelling advantages, there are contexts in which a multi-cluster approach is justified. The most common reason is geographic distribution—specific applications require clusters in multiple regions to meet compliance requirements, reduce latency, or provide disaster recovery. For example, companies operating under GDPR or financial regulations may need strict data residency enforcement, which requires region-specific clusters. Similarly, organizations with stringent security postures may enforce complete isolation between environments or business units, making separate clusters a hard requirement.&lt;/p&gt;

&lt;p&gt;However, even in these cases, vCluster remains relevant. It minimizes the number of physical clusters while still enabling workload separation at a virtual level. Instead of creating a sprawling landscape of Kubernetes clusters, teams can deploy regional virtual clusters within a single host cluster, balancing isolation and operational complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes cluster topology decisions are critical and long-lasting. While many advocate for a middle-ground approach between a single cluster and many small ones, they rarely specify the exact setup. Instead of guessing how many clusters to create, consolidating everything into a single, well-managed giant cluster makes more sense. The benefits—better resource utilization, lower operational overhead, simplified networking, centralized governance, and cost efficiency—outweigh the downsides.&lt;/p&gt;

&lt;p&gt;That said, the traditional downsides of a giant cluster, such as a larger blast radius, multi-tenancy complexities, scalability limits, upgrade challenges, and cluster-wide object constraints, are valid concerns. This is where vCluster changes the game. By using virtual clusters, you retain all the advantages of a single giant cluster while mitigating its worst drawbacks. vCluster isolates workloads, reduces operational risk, scales dynamically, simplifies upgrades, and removes conflicts over cluster-wide objects.&lt;/p&gt;

&lt;p&gt;Enhanced with vCluster, one cluster for production and one giant cluster for everything else is the best approach for long-term scalability, efficiency, and ease of operations.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>sizing</category>
      <category>opinion</category>
    </item>
    <item>
      <title>WebAssembly on Kubernetes</title>
      <dc:creator>Nicolas Fränkel</dc:creator>
      <pubDate>Thu, 06 Mar 2025 09:02:00 +0000</pubDate>
      <link>https://dev.to/loft/webassembly-on-kubernetes-5bdb</link>
      <guid>https://dev.to/loft/webassembly-on-kubernetes-5bdb</guid>
<description>&lt;p&gt;As with many innovative technologies, different people have different viewpoints on where WebAssembly fits in the technology landscape.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;WebAssembly (also called Wasm) is certainly the subject of much hype right now. But what is it? Is it the JavaScript Killer? Is it a new programming language for the web? Is it (as we like to say) the next wave of cloud compute? We’ve heard it called many things: a better eBPF, the alternative to RISC V, a competitor to Java (or Flash), a performance booster for browsers, a replacement for Docker.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://www.fermyon.com/blog/how-to-think-about-wasm" rel="noopener noreferrer"&gt;How to think about WebAssembly&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post, I'll stay away from these debates and focus solely on how to use WebAssembly on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  My approach and the use case
&lt;/h2&gt;

&lt;p&gt;Unlike regular programming languages, you don't write WebAssembly directly: you write code that generates WebAssembly. At the moment, Go and Rust are the main source languages. I know Kotlin and Python are working toward this objective. There might be other languages I'm not aware of.&lt;/p&gt;

&lt;p&gt;I've settled on Rust for this post because of my familiarity with the language. In particular, I'll keep the same code across three different architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Regular Rust-to-native code as the baseline&lt;/li&gt;
&lt;li&gt;Rust-to-WebAssembly using a WasmEdge embedded runtime&lt;/li&gt;
&lt;li&gt;Rust-to-WebAssembly using an external runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't worry; I'll explain the difference between the two last approaches later.&lt;/p&gt;

&lt;p&gt;The use case should be more advanced than Hello World to highlight the capabilities of WebAssembly. I've implemented an HTTP server mimicking a single endpoint of the excellent &lt;a href="https://httpbin.org/" rel="noopener noreferrer"&gt;httpbin&lt;/a&gt; API testing utility. The code itself is not essential as the post is not about Rust, but in case you're interested, you can find it on &lt;a href="https://github.com/ajavageek/wasm-kubernetes/blob/master/src/main.rs" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. I add a field to the response to explicitly return the underlying approach, respectively &lt;code&gt;native&lt;/code&gt;, &lt;code&gt;embed&lt;/code&gt;, or &lt;code&gt;runtime&lt;/code&gt;.&lt;/p&gt;
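&lt;p&gt;For example, querying the endpoint should return a response along these lines; the port and the exact JSON shape come from my implementation and are illustrative only:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:3000/headers
# { "headers": { ... }, "approach": "native" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;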

&lt;h2&gt;
  
  
  Baseline: regular Rust-to-native
&lt;/h2&gt;

&lt;p&gt;For the regular native compilation, I'm using a multistage Docker file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;rust:1.84-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build                                             #1&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOB&lt;/span&gt;&lt;span class="sh"&gt;                                                                #2&lt;/span&gt;
  apt-get update
  apt-get install -y musl-tools musl-dev
  rustup target add aarch64-unknown-linux-musl                           &lt;span class="c"&gt;#3&lt;/span&gt;
EOB

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /native&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; native/Cargo.toml Cargo.toml&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src src&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /native&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;RUSTFLAGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-C target-feature=+crt-static"&lt;/span&gt; cargo build &lt;span class="nt"&gt;--target&lt;/span&gt; aarch64-unknown-linux-musl &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="c"&gt;#4&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; gcr.io/distroless/static                                            #5&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /native/target/aarch64-unknown-linux-musl/release/httpbin httpbin #6&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./httpbin"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Start from a slim Rust base image, pinned to 1.84&lt;/li&gt;
&lt;li&gt;Heredocs for the win&lt;/li&gt;
&lt;li&gt;Install the necessary toolchain to cross-compile&lt;/li&gt;
&lt;li&gt;Statically compile&lt;/li&gt;
&lt;li&gt;I could potentially use &lt;code&gt;FROM scratch&lt;/code&gt;, but after reading &lt;a href="https://labs.iximiuz.com/tutorials/pitfalls-of-from-scratch-images" rel="noopener noreferrer"&gt;this&lt;/a&gt;, I prefer to use distroless&lt;/li&gt;
&lt;li&gt;Copy the executable from the previous compilation phase&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final &lt;code&gt;wasm-kubernetes:native&lt;/code&gt; image weighs 8.71M, with its base image &lt;code&gt;distroless/static&lt;/code&gt; taking 6.03M of them.&lt;/p&gt;
&lt;h2&gt;
  
  
  Adapting to WebAssembly
&lt;/h2&gt;

&lt;p&gt;The main idea behind WebAssembly is that it's secure because it can't access the host system. However, to run an HTTP server, we must open a socket and listen for incoming requests. WebAssembly can't do that on its own. We need a runtime that provides this feature and other system-dependent capabilities. That's the goal of WASI.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The WebAssembly System Interface (WASI) is a group of standards-track API specifications for software compiled to the W3C WebAssembly (Wasm) standard. WASI is designed to provide a secure standard interface for applications that can be compiled to Wasm from any language, and that may run anywhere—from browsers to clouds to embedded devices.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://wasi.dev/" rel="noopener noreferrer"&gt;Introduction to WASI&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The specification &lt;a href="https://wasi.dev/interfaces#wasi-02" rel="noopener noreferrer"&gt;v0.2&lt;/a&gt; defines the following system interfaces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clocks&lt;/li&gt;
&lt;li&gt;Random&lt;/li&gt;
&lt;li&gt;Filesystem&lt;/li&gt;
&lt;li&gt;Sockets&lt;/li&gt;
&lt;li&gt;CLI&lt;/li&gt;
&lt;li&gt;HTTP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A couple of runtimes already implement the specification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://wasmtime.dev/" rel="noopener noreferrer"&gt;Wasmtime&lt;/a&gt;, developed by the Bytecode Alliance&lt;/li&gt;
&lt;li&gt;&lt;a href="https://wasmer.io/" rel="noopener noreferrer"&gt;Wasmer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wazero.io/" rel="noopener noreferrer"&gt;Wazero&lt;/a&gt;, Go-based&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wasmedge.org/" rel="noopener noreferrer"&gt;WasmEdge&lt;/a&gt;, designed for cloud, edge computing, and AI applications&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.fermyon.com/spin" rel="noopener noreferrer"&gt;Spin&lt;/a&gt; for serverless workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had to choose without being an expert in any of these. I finally decided on WasmEdge because of its focus on the Cloud.&lt;/p&gt;

&lt;p&gt;We must intercept calls to system APIs and redirect them to the runtime. Instead of runtime interception, the Rust ecosystem provides a patch mechanism: we replace code that calls system APIs with code that calls WASI APIs. We must know which dependency calls which system API and hope a patch exists for our dependency version.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;patch&lt;/span&gt;&lt;span class="py"&gt;.crates&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;tokio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;git&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://github.com/second-state/wasi_tokio.git"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;branch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"v1.36.x"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;socket2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;git&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://github.com/second-state/socket2.git"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;branch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"v0.5.x"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;    &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dependencies&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;tokio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1.36"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"rt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"macros"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"net"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"time"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"io-util"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;     &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;axum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.8"&lt;/span&gt;
&lt;span class="n"&gt;serde&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1.0.217"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"derive"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Patch the &lt;code&gt;tokio&lt;/code&gt; and &lt;code&gt;socket2&lt;/code&gt; crates with WASI-related calls&lt;/li&gt;
&lt;li&gt;The latest &lt;code&gt;tokio&lt;/code&gt; crate is 1.43, but the latest (and only) patch targets v1.36. We can't use the latest version because there's no patch for it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We must change the Dockerfile to compile WebAssembly code instead of native code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;--platform=$BUILDPLATFORM rust:1.84-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt; bash&lt;/span&gt;
    set -ex
    apt-get update
    apt-get install -y git clang
    rustup target add wasm32-wasip1                                      &lt;span class="c"&gt;#1&lt;/span&gt;
EOT

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /wasm&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; wasm/Cargo.toml Cargo.toml&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src src&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /wasm&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;RUSTFLAGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--cfg wasmedge --cfg tokio_unstable"&lt;/span&gt; cargo build &lt;span class="nt"&gt;--target&lt;/span&gt; wasm32-wasip1 &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="c"&gt;#2-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install the WASM target&lt;/li&gt;
&lt;li&gt;Compile to WASM&lt;/li&gt;
&lt;li&gt;We must activate the &lt;code&gt;wasmedge&lt;/code&gt; flag, as well as the &lt;code&gt;tokio_unstable&lt;/code&gt; one, to successfully compile to WebAssembly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We have two options for the second build stage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use the WasmEdge runtime as a base image:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; --platform=$BUILDPLATFORM wasmedge/slim-runtime:0.13.5&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /wasm/target/wasm32-wasip1/release/httpbin.wasm /httpbin.wasm&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["wasmedge", "--dir", ".:/", "/httpbin.wasm"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;From a usage perspective, it's pretty similar to the native approach.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the WebAssembly file and make it a runtime responsibility:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; scratch&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /wasm/target/wasm32-wasip1/release/httpbin.wasm /httpbin.wasm&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["/httpbin.wasm"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This is where things get interesting.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;native&lt;/code&gt; image is slightly smaller than the &lt;code&gt;embed&lt;/code&gt; one, but the &lt;code&gt;runtime&lt;/code&gt; image is by far the leanest, since it contains only a single WebAssembly file.&lt;/p&gt;
&lt;h2&gt;
  
  
  Running the Wasm image on Docker
&lt;/h2&gt;

&lt;p&gt;Not all Docker runtimes are equal, and to run Wasm workloads, we need to look a bit more closely at what "Docker" actually means. While Docker, the company, created Docker, the product, containers have since evolved beyond Docker and now answer to open specifications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;strong&gt;Open Container Initiative&lt;/strong&gt; is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.&lt;/p&gt;

&lt;p&gt;Established in June 2015 by Docker and other leaders in the container industry, the OCI currently contains three specifications: the Runtime Specification (runtime-spec), the Image Specification (image-spec) and the Distribution Specification (distribution-spec). The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. At a high-level an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle. At this point the OCI Runtime Bundle would be run by an OCI Runtime.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;Open Container Initiative&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From now on, I'll use the proper terminology for OCI images and containers. Not all OCI runtimes are equal, and far from all of them can run Wasm workloads: OrbStack, my current OCI runtime, can't, but &lt;a href="https://docs.docker.com/desktop/features/wasm/" rel="noopener noreferrer"&gt;Docker Desktop can&lt;/a&gt;, as an &lt;em&gt;experimental&lt;/em&gt; feature. As per the documentation, we must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;containerd&lt;/code&gt; for pulling and storing images&lt;/li&gt;
&lt;li&gt;Enable Wasm&lt;/li&gt;
&lt;/ul&gt;
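&lt;p&gt;Both toggles live in Docker Desktop's settings UI. As an aside, on a plain Docker Engine the first item maps to the containerd image store, which can be enabled in &lt;code&gt;daemon.json&lt;/code&gt;; a sketch (the Wasm toggle itself is Desktop-specific):&lt;/p&gt;

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```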

&lt;p&gt;Finally, we can run the above OCI image containing the Wasm file by selecting a Wasm runtime, Wasmedge, in my case. Let's do it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-p3000&lt;/span&gt;:3000 &lt;span class="nt"&gt;--runtime&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;io.containerd.wasmedge.v1 ghcr.io/ajavageek/wasm-kubernetes:runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;io.containerd.wasmedge.v1&lt;/code&gt; identifies the current version of the Wasmedge runtime. You must be authenticated with GitHub if you want to try it out.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:3000/get&lt;span class="se"&gt;\?&lt;/span&gt;&lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar | jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The result is the same as for the native version:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"flavor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"runtime"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bar"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"accept"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost:3000"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"user-agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl/8.7.1"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/get?foo=bar"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Wasm on Docker Desktop allows you to spin up an HTTP server that behaves like a regular native image! Even better, the image size is as tiny as the WebAssembly file it contains:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repository&lt;/th&gt;
&lt;th&gt;Tag&lt;/th&gt;
&lt;th&gt;Size (MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ghcr.io/ajavageek/wasm-kubernetes&lt;/td&gt;
&lt;td&gt;runtime&lt;/td&gt;
&lt;td&gt;1.15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ghcr.io/ajavageek/wasm-kubernetes&lt;/td&gt;
&lt;td&gt;embed&lt;/td&gt;
&lt;td&gt;12.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ghcr.io/ajavageek/wasm-kubernetes&lt;/td&gt;
&lt;td&gt;native&lt;/td&gt;
&lt;td&gt;8.7&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Running the Wasm image on Kubernetes
&lt;/h2&gt;

&lt;p&gt;Now comes the fun part: your favorite cloud provider isn't using Docker Desktop. Despite this, we can still run WebAssembly workloads on Kubernetes. For this, we need to understand a bit of what happens under the hood when you run a container, whether from an OCI runtime or from Kubernetes.&lt;/p&gt;

&lt;p&gt;Both execute a container runtime process; in our case, it's &lt;code&gt;containerd&lt;/code&gt;. Yet, &lt;code&gt;containerd&lt;/code&gt; is only an orchestrator of other container processes. It detects the "flavor" of the container and calls the relevant executable: for "regular" containers, it calls &lt;code&gt;runc&lt;/code&gt; via a &lt;em&gt;shim&lt;/em&gt;. The good thing is that we can install other shims dedicated to other container types, such as Wasm. The following illustration, taken from the &lt;a href="https://wasmedge.org/docs/develop/deploy/intro/" rel="noopener noreferrer"&gt;Wasmedge website&lt;/a&gt;, summarizes the flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ezr2jr9wqi4zxdb4209.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ezr2jr9wqi4zxdb4209.png" alt="containerd Architecture" width="451" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although some &lt;a href="https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-pools" rel="noopener noreferrer"&gt;mainstream&lt;/a&gt; &lt;a href="https://www.spinkube.dev/docs/install/azure-kubernetes-service/" rel="noopener noreferrer"&gt;Cloud providers&lt;/a&gt; offer Wasm integration, none of them provide such a low-level one. I'll continue on my laptop, but Docker Desktop doesn't offer a direct integration either: it's time to be creative. For example, &lt;a href="https://minikube.sigs.k8s.io/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; is a full-fledged Kubernetes distribution that creates an intermediate Linux virtual machine within a Docker environment. We can SSH into the VM and configure it to our heart's content. Let's start by installing &lt;code&gt;minikube&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, we start &lt;code&gt;minikube&lt;/code&gt; with the &lt;code&gt;containerd&lt;/code&gt; driver and specify a profile to enable differently configured VMs. We unimaginatively call this profile &lt;code&gt;wasm&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube start &lt;span class="nt"&gt;--driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker &lt;span class="nt"&gt;--container-runtime&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;containerd &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Depending on whether you have already installed &lt;code&gt;minikube&lt;/code&gt; and whether it has already downloaded its images, starting can take a few seconds to dozens of minutes. Be patient. The output should be something akin to:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;😄  [wasm] minikube v1.35.0 on Darwin 15.1.1 (arm64)
✨  Using the docker driver based on user configuration
📌  Using Docker Desktop driver with root privileges
👍  Starting "wasm" primary control-plane node in "wasm" cluster
🚜  Pulling base image v0.0.46 ...
❗  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.46, but successfully downloaded docker.io/kicbase/stable:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a fallback image
🔥  Creating docker container (CPUs=2, Memory=12200MB) ...
📦  Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "wasm" cluster and "default" namespace by default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;At this point, our goal is to install the following on the underlying VM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wasmedge to run Wasm workloads&lt;/li&gt;
&lt;li&gt;A shim to bridge between &lt;code&gt;containerd&lt;/code&gt; and &lt;code&gt;wasmedge&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube ssh &lt;span class="nt"&gt;-p&lt;/span&gt; wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We could install Wasmedge on its own, but I found nowhere to download a prebuilt shim. In the &lt;a href="https://wasmedge.org/docs/develop/deploy/cri-runtime/containerd" rel="noopener noreferrer"&gt;next step&lt;/a&gt;, we will build both. We first need to install Rust:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--proto&lt;/span&gt; &lt;span class="s1"&gt;'=https'&lt;/span&gt; &lt;span class="nt"&gt;--tlsv1&lt;/span&gt;.2 &lt;span class="nt"&gt;-sSf&lt;/span&gt; https://sh.rustup.rs | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The script likely complains that it can't execute the downloaded binary:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot execute /tmp/tmp.NXPz8utAQx/rustup-init (likely because of mounting /tmp as noexec).
Please copy the file to a location where you can execute binaries and run ./rustup-init.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Follow the instructions:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; /tmp/tmp.NXPz8utAQx/rustup-init &lt;span class="nb"&gt;.&lt;/span&gt;
./rustup-init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Proceed with the default installation by pressing &lt;code&gt;ENTER&lt;/code&gt;. When it's finished, source the Cargo environment in your current shell.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.cargo/env"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The system is ready to build Wasmedge and the shim.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; git

git clone https://github.com/containerd/runwasi.git

&lt;span class="nb"&gt;cd &lt;/span&gt;runwasi
./scripts/setup-linux.sh

make build-wasmedge
&lt;span class="nv"&gt;INSTALL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sudo install"&lt;/span&gt; &lt;span class="nv"&gt;LN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sudo ln -sf"&lt;/span&gt; make install-wasmedge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The last step requires configuring the &lt;code&gt;containerd&lt;/code&gt; process with the shim. Insert the following snippet in the &lt;code&gt;[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]&lt;/code&gt; section of the &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;        &lt;span class="nn"&gt;[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedgev1]&lt;/span&gt;
          &lt;span class="py"&gt;runtime_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"io.containerd.wasmedge.v1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Restart &lt;code&gt;containerd&lt;/code&gt; to load the new config.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Our system is finally ready to accept WebAssembly workloads. Users can deploy a Wasmedge &lt;code&gt;Pod&lt;/code&gt; with the following manifest:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RuntimeClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wasmedge&lt;/span&gt;                                                         &lt;span class="c1"&gt;#1&lt;/span&gt;
&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wasmedgev1&lt;/span&gt;                                                      &lt;span class="c1"&gt;#2&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/ajavageek/wasm-kubernetes:runtime&lt;/span&gt;
  &lt;span class="na"&gt;runtimeClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wasmedge&lt;/span&gt;                                             &lt;span class="c1"&gt;#3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Wasmedge workloads should use this name&lt;/li&gt;
&lt;li&gt;Handler to use. It must match the last segment of the section name added in the TOML file, &lt;em&gt;i.e.&lt;/em&gt;, &lt;code&gt;wasmedgev1&lt;/code&gt; in &lt;code&gt;containerd.runtimes.wasmedgev1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Point to the runtime class name we defined just above&lt;/li&gt;
&lt;/ol&gt;
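&lt;p&gt;The mapping between the TOML section and the handler can be sketched with plain shell string handling: the handler name is simply whatever follows the last dot of the section name.&lt;/p&gt;

```shell
# The RuntimeClass handler must equal the last segment of the
# containerd runtimes section name from /etc/containerd/config.toml.
section='plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedgev1'
handler="${section##*.}"   # strip everything up to and including the last dot
echo "$handler"            # wasmedgev1
```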

&lt;p&gt;I used a single &lt;code&gt;Pod&lt;/code&gt; instead of a full-fledged &lt;code&gt;Deployment&lt;/code&gt; to keep things simple.&lt;/p&gt;

&lt;p&gt;Notice the many levels of indirection:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;pod&lt;/code&gt; refers to the &lt;code&gt;wasmedge&lt;/code&gt; runtime class name&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;wasmedge&lt;/code&gt; runtime class points to the &lt;code&gt;wasmedgev1&lt;/code&gt; handler&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;wasmedgev1&lt;/code&gt; handler in the TOML file specifies the &lt;code&gt;io.containerd.wasmedge.v1&lt;/code&gt; runtime type&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Final steps
&lt;/h2&gt;

&lt;p&gt;To compare the approaches and test our work, we can use the &lt;code&gt;minikube&lt;/code&gt; &lt;code&gt;ingress&lt;/code&gt; addon and &lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt;. The former offers a single access point for all three workloads, &lt;code&gt;native&lt;/code&gt;, &lt;code&gt;embed&lt;/code&gt;, and &lt;code&gt;runtime&lt;/code&gt;, while the latter isolates the workloads from each other, each in its own virtual cluster.&lt;/p&gt;

&lt;p&gt;Let's start by installing the addon:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube &lt;span class="nt"&gt;-p&lt;/span&gt; wasm addons &lt;span class="nb"&gt;enable &lt;/span&gt;ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It deploys an Nginx Ingress Controller in the &lt;code&gt;ingress-nginx&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡  After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We must create a dedicated virtual cluster to deploy the &lt;code&gt;Pod&lt;/code&gt; later.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; runtime vcluster/vcluster &lt;span class="nt"&gt;--namespace&lt;/span&gt; runtime &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;  &lt;span class="nt"&gt;--values&lt;/span&gt; vcluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will define the &lt;code&gt;Ingress&lt;/code&gt;, the &lt;code&gt;Service&lt;/code&gt;, and their related &lt;code&gt;Pod&lt;/code&gt; in each virtual cluster. We need vCluster to synchronize the &lt;code&gt;Ingress&lt;/code&gt; with the Ingress Controller. Here's the configuration to achieve this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;sync&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;toHost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ingresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output should be similar to:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Release "runtime" does not exist. Installing it now.
NAME: runtime
LAST DEPLOYED: Thu Jan 30 11:53:14 2025
NAMESPACE: runtime
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can amend the above manifest with the &lt;code&gt;Service&lt;/code&gt; and &lt;code&gt;Ingress&lt;/code&gt; to expose the &lt;code&gt;Pod&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;                                                        &lt;span class="c1"&gt;#1&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;                                                         &lt;span class="c1"&gt;#1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;                        &lt;span class="c1"&gt;#2&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$2&lt;/span&gt;                      &lt;span class="c1"&gt;#2&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/runtime(/|$)(.*)&lt;/span&gt;                                      &lt;span class="c1"&gt;#4&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImplementationSpecific&lt;/span&gt;                             &lt;span class="c1"&gt;#4&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runtime&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Expose the &lt;code&gt;Pod&lt;/code&gt; inside the cluster&lt;/li&gt;
&lt;li&gt;Nginx-specific annotations to handle path regular expression and rewrite it&lt;/li&gt;
&lt;li&gt;Regex path&lt;/li&gt;
&lt;/ol&gt;
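&lt;p&gt;The effect of the regex and rewrite annotations can be sketched locally; here's an approximation of the controller's behavior with &lt;code&gt;sed&lt;/code&gt;, using the same pattern as the &lt;code&gt;Ingress&lt;/code&gt;:&lt;/p&gt;

```shell
# Approximate the Nginx rewrite: a path matching /runtime(/|$)(.*)
# is rewritten to /$2, stripping the /runtime prefix.
rewrite() { printf '%s\n' "$1" | sed -E 's#^/runtime(/|$)(.*)#/\2#'; }
rewrite '/runtime/get'   # -> /get
```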

&lt;p&gt;Nginx will forward all requests starting with &lt;code&gt;/runtime&lt;/code&gt; to the &lt;code&gt;runtime&lt;/code&gt; service, removing the prefix. To apply the manifest, we first connect to the previously created virtual cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster connect runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:53:21 info Waiting for vcluster to come up...
11:53:39 done vCluster is up and running
11:53:39 info Starting background proxy container...
11:53:39 done Switched active kube context to vcluster_embed_embed_vcluster_runtime_runtime_wasm
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now apply the manifest:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; runtime.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We do the same with the &lt;code&gt;embed&lt;/code&gt; and the &lt;code&gt;native&lt;/code&gt; pods, omitting the &lt;code&gt;runtimeClassName&lt;/code&gt; since they are "regular" images.&lt;/p&gt;

&lt;p&gt;The final deployment diagram is the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa7bvvrjt7w506bbk39g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa7bvvrjt7w506bbk39g.png" alt="Final deployment diagram" width="800" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final touch is to start a tunnel to expose the services:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube &lt;span class="nt"&gt;-p&lt;/span&gt; wasm tunnel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;✅  Tunnel successfully started

📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress runtime-x-default-x-runtime requires privileged ports to be exposed: [80 443]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service runtime-x-default-x-runtime.
Password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Let's request the lightweight container that uses the Wasmedge runtime:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost/runtime/get&lt;span class="se"&gt;\?&lt;/span&gt;&lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar | jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We get the expected output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"flavor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"runtime"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"foo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bar"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"user-agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl/8.7.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-forwarded-host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-request-id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dcbdfde4715fbfc163c7c9098cbdf077"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-scheme"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-forwarded-for"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10.244.0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-forwarded-scheme"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"accept"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-real-ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10.244.0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-forwarded-proto"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x-forwarded-port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"80"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/get?foo=bar"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We should get similar results with the other approaches, with different &lt;code&gt;flavor&lt;/code&gt; values.&lt;/p&gt;
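&lt;p&gt;For instance, the same check can be repeated per flavor; a small sketch that prints the command for each one (the &lt;code&gt;/embed&lt;/code&gt; and &lt;code&gt;/native&lt;/code&gt; paths are assumptions mirroring the &lt;code&gt;/runtime/get&lt;/code&gt; route above):&lt;/p&gt;

```shell
# Print the verification command for each flavor; the /<flavor>/get paths are
# an assumption mirroring the /runtime/get route used above.
for flavor in runtime embed native; do
  printf 'curl -s localhost/%s/get?foo=bar | jq -r .flavor\n' "$flavor"
done
```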
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, I showed how to use WebAssembly on Kubernetes with the WasmEdge runtime. I created three flavors for comparison purposes: &lt;code&gt;native&lt;/code&gt;, &lt;code&gt;embed&lt;/code&gt;, and &lt;code&gt;runtime&lt;/code&gt;. The first two are "regular" Docker images, while the third contains only a single Wasm file, which makes it very lightweight and secure. However, it needs a dedicated runtime to run.&lt;/p&gt;

&lt;p&gt;Regular managed Kubernetes services don't allow configuring an additional shim, such as the WasmEdge shim. Even on my laptop, I had to be creative to make it happen: I used Minikube and put much effort into configuring its intermediate virtual machine to run Wasm workloads on Kubernetes. Yet, I managed to run all three images, each inside its own virtual cluster, exposed outside the cluster by an Nginx Ingress Controller.&lt;/p&gt;

&lt;p&gt;Now, it's up to you to decide whether the extra effort is worth the 10x reduction in image size and the improved security. I hope support improves in the future so that the pros outweigh the cons.&lt;/p&gt;

&lt;p&gt;The complete source code for this post can be found on GitHub:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ajavageek" rel="noopener noreferrer"&gt;
        ajavageek
      &lt;/a&gt; / &lt;a href="https://github.com/ajavageek/wasm-kubernetes" rel="noopener noreferrer"&gt;
        wasm-kubernetes
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="adoc"&gt;
&lt;div&gt;
&lt;div&gt;
&lt;pre&gt;helm upgrade --install runtime vcluster/vcluster --namespace runtime --create-namespace  --values vcluster.yaml
helm upgrade --install embed vcluster/vcluster --namespace embed --create-namespace  --values vcluster.yaml
helm upgrade --install native vcluster/vcluster --namespace native --create-namespace  --values vcluster.yaml&lt;/pre&gt;
&lt;/div&gt;


&lt;/div&gt;

&lt;div&gt;
&lt;div&gt;
&lt;pre&gt;vcluster connect runtime
kubectl apply -f runtime.yaml
vcluster connect embed
kubectl apply -f embed.yaml
vcluster connect native
kubectl apply -f native.yaml&lt;/pre&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ajavageek/wasm-kubernetes" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;&lt;strong&gt;Go further&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://wasmedge.org/docs/develop/deploy/intro/" rel="noopener noreferrer"&gt;Introduction to WasmEdge&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://web.archive.org/web/20240808201241/https://nigelpoulton.com/webassembly-and-containerd-how-it-works/" rel="noopener noreferrer"&gt;WebAssembly and containerd: How it works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/deislabs/containerd-wasm-shims" rel="noopener noreferrer"&gt;containerd-wasm-shims&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://wasmedge.org/docs/develop/deploy/cri-runtime/containerd/" rel="noopener noreferrer"&gt;Deploy with containerd's runwasi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webassembly</category>
      <category>kubernetes</category>
      <category>deepdive</category>
      <category>wasmedge</category>
    </item>
    <item>
<title>Technical Guide: Syncing Ingress Resources from various Virtual Clusters on GKE with vCluster</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Mon, 03 Mar 2025 13:43:35 +0000</pubDate>
      <link>https://dev.to/loft/technical-guide-syncing-ingress-resources-from-various-vcluster-on-gke-with-vcluster-3c56</link>
      <guid>https://dev.to/loft/technical-guide-syncing-ingress-resources-from-various-vcluster-on-gke-with-vcluster-3c56</guid>
      <description>&lt;p&gt;Kubernetes Ingress is the most widely used Kubernetes resource for exposing an application to the outside world. Understanding the concepts and Layer-7 load balancing may sound difficult, but with this article, it won’t be. &lt;/p&gt;

&lt;p&gt;This article uses &lt;a href="https://cloud.google.com/kubernetes-engine?hl=en" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt; as the host Kubernetes cluster, where you will install the Nginx Ingress controller and cert-manager to get TLS certificates for your web apps. With that, you can run an application with proper TLS. &lt;/p&gt;

&lt;p&gt;This article touches on how to create a virtual cluster using vCluster to reuse the host cluster ingress controller and cert-manager to create ingress. This approach allows your virtual clusters to reuse the ingress controller running on the host cluster, GKE in our case, and the cert-manager. &lt;/p&gt;

&lt;p&gt;If you are hearing about virtual clusters for the first time, then read more &lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Ensure these are installed: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud SDK (&lt;code&gt;gcloud&lt;/code&gt;)&lt;/strong&gt; – To interact with GKE (create/delete clusters). Make sure you have a project with a billing account linked and are authenticated. &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt; Install &lt;/a&gt; &lt;strong&gt;[Note: You can use the UI if you prefer]&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes CLI (&lt;code&gt;kubectl&lt;/code&gt;)&lt;/strong&gt; – To manage Kubernetes clusters. &lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="noopener noreferrer"&gt; Install &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vCluster CLI&lt;/strong&gt; – To create virtual clusters within a single Kubernetes cluster. &lt;a href="https://www.vcluster.com/docs/get-started/#deploy-vcluster" rel="noopener noreferrer"&gt; Install &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A Domain Name&lt;/strong&gt; – This is used to configure A Records, DNS, and TLS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic Kubernetes Knowledge&lt;/strong&gt; – Familiarity with &lt;strong&gt;Deployments&lt;/strong&gt;, &lt;strong&gt;Ingress&lt;/strong&gt;, and &lt;strong&gt;Services&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up GKE Cluster
&lt;/h2&gt;

&lt;p&gt;Once you have completed the prerequisites, you can proceed to the next step: creating your Kubernetes cluster. This cluster will serve as the environment where you deploy your controllers, deployments, and services, as well as your virtual clusters.&lt;/p&gt;

&lt;p&gt;With gcloud installed, you can create your cluster using the command &lt;code&gt;gcloud container clusters create&lt;/code&gt;, along with a few parameters to specify the project and location of the deployment:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create test  --project=hrittik-project --zone asia-southeast1-c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you apply the above command, your &lt;code&gt;gcloud SDK&lt;/code&gt; will begin creating the cluster. This process will take a few minutes, and once completed, a kubeconfig entry will be generated that allows you to interact with the cluster using kubectl.&lt;/p&gt;

&lt;p&gt;Successful cluster creation will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating cluster test in asia-southeast1-c... Cluster is being health-checked (Kubernetes Control Plane is healthy)...done.                                                                     
Created [https://container.googleapis.com/v1/projects/hrittik-project/zones/asia-southeast1-c/clusters/test].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-southeast1-c/test?project=hrittik-project
kubeconfig entry generated for test
NAME  LOCATION           MASTER_VERSION      MASTER_IP       MACHINE_TYPE  NODE_VERSION        NUM_NODES  STATUS
test  asia-southeast1-c  1.30.8-gke.1051000  34.124.254.219  e2-medium     1.30.8-gke.1051000  3          RUNNING

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up the Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Once your GKE cluster is set up and operational, the next step is to install an &lt;strong&gt;Ingress Controller&lt;/strong&gt; to handle the routing of external traffic to your services. In this section, we will use &lt;strong&gt;NGINX&lt;/strong&gt; as our Ingress Controller. NGINX is a popular and reliable choice for Kubernetes ingress management, renowned for its flexibility and performance.&lt;/p&gt;

&lt;p&gt;Ingress will act as a reverse proxy to route traffic outside the cluster to specific Kubernetes Services based on the Ingress Rules. &lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Admin Permissions
&lt;/h3&gt;

&lt;p&gt;To configure Ingress on GKE, the first step is to provide the user &lt;code&gt;cluster-admin&lt;/code&gt; permissions to carry out the operational tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success in this step will create a &lt;code&gt;cluster-admin-binding&lt;/code&gt; in your cluster, as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ingress Controller Deployment
&lt;/h3&gt;

&lt;p&gt;With the appropriate permissions configured, the next step is to deploy the controller, which will manage all of the routing logic. The step is as simple as running this command against your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will create resources in the &lt;code&gt;ingress-nginx&lt;/code&gt; namespace, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing Ingress Controller
&lt;/h3&gt;

&lt;p&gt;The key point to focus on is that the ingress controller is exposed as a &lt;strong&gt;LoadBalancer&lt;/strong&gt; by default, which we will explore later in the Configuring A Records section. For now, to verify the installation, check whether your pods are running by using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n ingress-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A status of &lt;code&gt;Running&lt;/code&gt; means everything is configured correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ❯ kubectl get pods -n ingress-nginx                                  
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-bh7nl       0/1     Completed   0          100s
ingress-nginx-admission-patch-bvlhf        0/1     Completed   0          100s
ingress-nginx-controller-cbb88bdbc-5dkxt   1/1     Running     0          101s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Cert-Manager on the Host Cluster
&lt;/h2&gt;

&lt;p&gt;Now that your &lt;strong&gt;Ingress Controller&lt;/strong&gt; is set up and running, the next step is to &lt;strong&gt;install Cert-Manager&lt;/strong&gt; to automate the management and issuance of TLS certificates. &lt;strong&gt;Cert-Manager&lt;/strong&gt; is a powerful Kubernetes project that simplifies obtaining and renewing SSL/TLS certificates from various certificate authorities (CAs), such as &lt;strong&gt;Let's Encrypt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This tutorial uses Cert Manager to create and store certificates as Kubernetes Secrets automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation of cert-manager
&lt;/h3&gt;

&lt;p&gt;The installation process is straightforward; simply execute the command below on your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful installation will look similar to the screenshot below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvkojuxxdjqm6fe731ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvkojuxxdjqm6fe731ua.png" alt=" " width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verifying cert-manager Installation
&lt;/h3&gt;

&lt;p&gt;Once installed, you can verify that Cert-Manager is running by checking the pods in the &lt;code&gt;cert-manager&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --namespace cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful creation will display a manager pod, a cainjector pod, and a webhook pod in that namespace, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ❯ kubectl get pods --namespace cert-manager                                                                                                

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6c7fdcbcd5-6lp94              1/1     Running   0          2m11s
cert-manager-cainjector-64d77f8498-f6p7d   1/1     Running   0          2m12s
cert-manager-webhook-68796f6795-59sqq      1/1     Running   0          2m11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a Certificate Issuer CRD
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Certificate Issuer&lt;/strong&gt; tells cert-manager how to obtain certificates. There are two primary types of issuers: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. ClusterIssuer&lt;/strong&gt;: This issuer is available throughout the entire cluster and is recommended for most cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Issuer&lt;/strong&gt;: This issuer is limited to a specific namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory 
   # Email address used for ACME registration
   email: test-hrittik@example.com # Replace with your email
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class: nginx
EOF


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we use a ClusterIssuer with &lt;strong&gt;Let's Encrypt&lt;/strong&gt; as the verification server. Don’t forget to update the email address in the YAML above before applying.&lt;/p&gt;

&lt;p&gt;Success will look similar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterissuer.cert-manager.io/letsencrypt-prod created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
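&lt;p&gt;For reference, a namespace-scoped &lt;code&gt;Issuer&lt;/code&gt; (not needed for this tutorial; names below are illustrative) uses the same &lt;code&gt;spec&lt;/code&gt; but only serves Certificates in its own namespace:&lt;/p&gt;

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer            # namespace-scoped, unlike ClusterIssuer
metadata:
  name: letsencrypt-staging      # illustrative name
  namespace: my-team             # only this namespace can use it
spec:
  acme:
    # Let's Encrypt staging endpoint, useful for testing without rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com       # replace with your email
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
```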



&lt;h3&gt;
  
  
  Verify the Cluster Issuer
&lt;/h3&gt;

&lt;p&gt;To verify successful installation, use the below command to get the  &lt;code&gt;clusterissuer&lt;/code&gt; CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get clusterissuer letsencrypt-prod

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful output will be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME               AGE
letsencrypt-prod   3m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring A Records for Domain
&lt;/h2&gt;

&lt;p&gt;Now that you have set up your &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt; and are issuing certificates with &lt;strong&gt;Cert-Manager&lt;/strong&gt;, it’s time to configure the &lt;strong&gt;A Records&lt;/strong&gt; for your domain. In Google Kubernetes Engine (GKE), A Records are used to map your domain to the load balancer's external IP.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;A Record&lt;/strong&gt; (Address Record) in DNS points a domain or subdomain to a specific &lt;strong&gt;IP address&lt;/strong&gt;. In this case, you need to direct your domain (for example, &lt;code&gt;hrittikhere.live&lt;/code&gt;) to the &lt;strong&gt;external&lt;/strong&gt; &lt;strong&gt;IP address&lt;/strong&gt; of your &lt;strong&gt;NGINX Ingress Controller.&lt;/strong&gt; This will allow external traffic to be properly routed to your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting the External-IP
&lt;/h3&gt;

&lt;p&gt;After installing the &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt;, a &lt;code&gt;LoadBalancer&lt;/code&gt; type service will be created that automatically allocates an &lt;strong&gt;external IP address&lt;/strong&gt;. To find the allocated IP, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl get svc -A | grep -E "NAME|ingress-nginx-controller"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will display a list of services, from which you should select the one that has an external IP. Keep the IP address readily available:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fergavgxhom8ovn2yilvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fergavgxhom8ovn2yilvv.png" alt=" " width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;
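&lt;p&gt;The alternation with &lt;code&gt;NAME&lt;/code&gt; in the grep pattern is just a trick to keep kubectl's header row alongside the match; a minimal offline illustration with mocked-up output:&lt;/p&gt;

```shell
# The "NAME|..." alternation keeps kubectl's header row in the filtered output:
# the header matches NAME, the service line matches the controller name.
printf '%s\n' \
  'NAMESPACE       NAME                       TYPE          EXTERNAL-IP' \
  'kube-system     kube-dns                   ClusterIP     10.96.0.10' \
  'ingress-nginx   ingress-nginx-controller   LoadBalancer  34.142.255.42' \
  | grep -E 'NAME|ingress-nginx-controller'
```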

&lt;h3&gt;
  
  
  Configuring A Name Record on Domain Provider
&lt;/h3&gt;

&lt;p&gt;The next step is to log in to your domain provider and navigate to the DNS Record section. Depending on the provider, the UI might differ, but you just need to configure four things once you click on Create New Record:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name/Host&lt;/strong&gt;: * # Wildcard to capture all Hosts&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type&lt;/strong&gt;: &lt;code&gt;A   # A record for GCP&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Value&lt;/strong&gt;: &lt;code&gt;34.142.255.42  #  External IP address of your NGINX Ingress Controller&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTL&lt;/strong&gt;: &lt;code&gt;300  #  Use the default value.&lt;/code&gt;&lt;/p&gt;
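&lt;p&gt;In zone-file notation (illustrative only; your provider's dashboard abstracts this away, and the domain and IP are the examples from this article), the four fields above amount to:&lt;/p&gt;

```plaintext
*.hrittikhere.live.  300  IN  A  34.142.255.42
```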

&lt;p&gt;On the Dashboard, this will look something similar to below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckcjvqhtg8ys1a6aj4pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckcjvqhtg8ys1a6aj4pf.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;code&gt;Add Record&lt;/code&gt; and you will be all ready for the next step! &lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Application on the Host Cluster
&lt;/h2&gt;

&lt;p&gt;Now, you have all of the moving pieces in place: a domain to serve your application, an ingress controller for load balancing, and Cert-Manager for TLS. The only missing piece is the application, so the next step is to deploy a simple game to your cluster.&lt;/p&gt;

&lt;p&gt;The following YAML deploys three things together: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt;: Exposes the game application via a Kubernetes &lt;code&gt;Service&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: Deploys the game application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt;: Defines routing rules for the Ingress Controller to route external traffic to the Service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: game-2048
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-2048
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: backend
          image: alexwhen/docker-2048
          ports:
            - name: http
              containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-2048-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: game.hrittikhere.live
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-2048
                port:
                  number: 80
  tls:
  - hosts:
    - game.hrittikhere.live
    secretName: letsencrypt-prod
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will create three objects in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/game-2048 created
deployment.apps/game-2048 created
ingress.networking.k8s.io/game-2048-ingress created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification of Ingress
&lt;/h3&gt;

&lt;p&gt;To find the ingress Host you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Code/ingress ❯ kubectl get ingress                                                                                               
NAME                CLASS    HOSTS                   ADDRESS         PORTS     AGE

game-2048-ingress   nginx    game.hrittikhere.live   34.142.255.42   80, 443   51s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you go to the following URL, you will find your application running on the host: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe7xe7f3dhv44fdarls2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe7xe7f3dhv44fdarls2.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you check the network tab, the Remote Address corresponds to the LoadBalancer IP of the Ingress Controller. In simple terms, traffic for the subdomain reaches the Ingress Controller, which routes it to the Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdqltsazpzh6ao7hwzel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdqltsazpzh6ao7hwzel.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Syncing Ingress Resources with vCluster between your Virtual Clusters
&lt;/h2&gt;

&lt;p&gt;After completing this tutorial, you will see that it’s a bit complicated to set up the backbone for creating ingress resources. Now imagine the IT headache of operating hundreds of clusters: managing all these certificates and secrets while making sure your domains are linked correctly. Sounds challenging? &lt;/p&gt;

&lt;p&gt;vCluster saves you from these and many similar problems. vCluster creates virtual clusters on top of your host cluster in specific namespaces. This lets you give each of your teams a virtual cluster with full admin privileges in seconds, instead of the hours it takes to create a normal cluster and configure all the required resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcr7s5bx8orrnffye9pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcr7s5bx8orrnffye9pd.png" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You just define a virtual cluster and which host cluster features you want available inside it. For example, to sync ingressClasses from the host cluster into the virtual cluster and sync ingresses from the virtual cluster back to the host, it’s as simple as declaring it in the vcluster.yaml file like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sync:
  fromHost:
    ingressClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, whatever application you’re running on your virtual cluster can request and run an ingress smoothly. Let’s see a demo. The first step is to create the virtual cluster using the vCluster CLI [&lt;a href="https://www.vcluster.com/docs/get-started/#deploy-vcluster" rel="noopener noreferrer"&gt;Installation Step&lt;/a&gt;]:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; vcluster create my-vcluster -f vcluster.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:35:20 done Successfully created virtual cluster my-vcluster in namespace vcluster-my-vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, make sure you’re in the &lt;strong&gt;virtual cluster context&lt;/strong&gt;, and then apply the same application again with a new subdomain path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: game-2048
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-2048
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: backend
          image: alexwhen/docker-2048
          ports:
            - name: http
              containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-2048-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: game1.hrittikhere.live
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-2048
                port:
                  number: 80
  tls:
  - hosts:
    - game1.hrittikhere.live
    secretName: letsencrypt-prod
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On success, you will see the three resources created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/game-2048 created
deployment.apps/game-2048 created
ingress.networking.k8s.io/game-2048-ingress created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the CLI, if you run &lt;code&gt;kubectl get ingress&lt;/code&gt;, you will see a new ingress resource created with the newly specified host:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvr3rn8xbojpu7r3dbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvr3rn8xbojpu7r3dbo.png" alt=" " width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you check &lt;code&gt;game1.hrittikhere.live&lt;/code&gt;, you will find the same game running there as well. With just one vCluster configuration, you can create multiple new clusters, and all of them will have ingress functioning seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmhx50mgdsys1av9688.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmhx50mgdsys1av9688.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IP is again the same as your host cluster’s, confirming that all the networking works as expected:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng6hsiuym7x4agu5j4rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng6hsiuym7x4agu5j4rm.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;With everything working, cleanup is simple. Remove the DNS record from your service provider’s UI by clicking &lt;code&gt;delete record&lt;/code&gt;, and delete the GKE cluster with the following command, substituting your own parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters delete test  --project=hrittik-project --zone asia-southeast1-c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful deletion will look something like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtsrsdof0qhrnxlbi31p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtsrsdof0qhrnxlbi31p.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that, you have cleared all the resources, including your virtual clusters. &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;With this guide, we’ve explored how to configure an NGINX Ingress Controller in a GKE (Google Kubernetes Engine) environment and how to set up TLS certificates with Cert-Manager. We’ve also seen how to configure DNS A records to point to a Kubernetes cluster’s external IP and route traffic efficiently to our services.&lt;/p&gt;

&lt;p&gt;In addition to all of this, you’ve learned how powerful vCluster can be, especially when dealing with multi-tenant Kubernetes environments. If you're managing several teams or clusters, using vCluster allows you to create isolated, lightweight Kubernetes clusters (virtual clusters) within a larger host cluster, providing the flexibility of separate clusters without needing to pay for the control plane cost of each cluster.&lt;/p&gt;

&lt;p&gt;More questions? &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;Join our Slack&lt;/a&gt; to talk to the team behind vCluster!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ingress</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>Pull request testing on Kubernetes: vCluster for isolation and costs control</title>
      <dc:creator>Nicolas Fränkel</dc:creator>
      <pubDate>Thu, 27 Feb 2025 09:02:00 +0000</pubDate>
      <link>https://dev.to/loft/pull-request-testing-on-kubernetes-vcluster-for-isolation-and-costs-control-4gh6</link>
      <guid>https://dev.to/loft/pull-request-testing-on-kubernetes-vcluster-for-isolation-and-costs-control-4gh6</guid>
      <description>&lt;p&gt;This week's post is the third and final in my series about running tests on Kubernetes for each pull request. In the &lt;a href="https://blog.frankel.ch/pr-testing-kubernetes/1/" rel="noopener noreferrer"&gt;first post&lt;/a&gt;, I described the app and how to test locally using Testcontainers and in a GitHub workflow. The &lt;a href="https://blog.frankel.ch/pr-testing-kubernetes/2/" rel="noopener noreferrer"&gt;second post&lt;/a&gt; focused on setting up the target environment and running end-to-end tests on Kubernetes.&lt;/p&gt;

&lt;p&gt;I concluded the latter by mentioning a significant quandary. Creating a dedicated cluster for each workflow considerably increases the time it takes to run. On GKE, it took between 5 and 7 minutes to spin up a new cluster. If you create a GKE instance upstream, you face two issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since the instance is always up, it raises costs. While they are reasonable, they may become a deciding factor if budgets are already tight. In any case, we can leverage the built-in Cloud autoscaler. Also, note that the costs mainly come from the workloads; the control plane costs are marginal.&lt;/li&gt;
&lt;li&gt;Worse, some changes affect the whole cluster, &lt;em&gt;e.g.&lt;/em&gt;, CRD version changes. CRDs are cluster-wide resources. In this case, we need a dedicated cluster to avoid incompatible changes. From an engineering point of view, it requires identifying which PR can run on a shared cluster and which one needs a dedicated one. Such complexity hinders the delivery speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, I'll show how to get the best of both worlds with &lt;a href="https://vcluster.com" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt;: a single cluster where each PR is tested in complete isolation from the others.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Virtual clusters are fully functional Kubernetes clusters nested inside a physical host cluster providing better isolation and flexibility to support multi-tenancy. Multiple teams can operate independently within the same physical infrastructure while minimizing conflicts, maximizing autonomy, and reducing costs.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://www.vcluster.com/docs/" rel="noopener noreferrer"&gt;What are virtual clusters?&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With virtual clusters, we can have our cake—a single physical cluster for limited costs—and eat it with fully isolated virtual clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weaving vCluster into the GitHub workflow
&lt;/h2&gt;

&lt;p&gt;Weaving vCluster into the GitHub workflow is a three-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install vCluster&lt;/li&gt;
&lt;li&gt;Create a virtual cluster&lt;/li&gt;
&lt;li&gt;Connect to the virtual cluster
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install vCluster&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;loft-sh/setup-vcluster@main&lt;/span&gt;                                    &lt;span class="c1"&gt;#1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;kubectl-install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create a vCluster&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vcluster&lt;/span&gt;                                                         &lt;span class="c1"&gt;#2&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;time vcluster create vcluster-pipeline-${{github.run_id}}&lt;/span&gt;       &lt;span class="c1"&gt;#3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Connect to the vCluster&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vcluster connect vcluster-pipeline-${{github.run_id}}&lt;/span&gt;           &lt;span class="c1"&gt;#4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install vCluster. By default, the action installs the latest available version. You can override it.&lt;/li&gt;
&lt;li&gt;Step IDs are not necessary unless you want to reference a step later; we will need this one below.&lt;/li&gt;
&lt;li&gt;Create the virtual cluster. To avoid collisions, we name it with the workflow name suffixed with the GitHub run ID&lt;/li&gt;
&lt;li&gt;Connect to the virtual cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The output is along the following lines:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Run time vcluster create vcluster-pipeline-12632713145

12:44:13 info Creating namespace vcluster-vcluster-pipeline-12632713145
12:44:13 info Create vcluster vcluster-pipeline-12632713145...
12:44:13 info execute command: helm upgrade vcluster-pipeline-12632713145 /tmp/vcluster-0.22.0.tgz-2721862840 --create-namespace --kubeconfig /tmp/3273578530 --namespace vcluster-vcluster-pipeline-12632713145 --install --repository-config='' --values /tmp/3458157332
12:44:19 done Successfully created virtual cluster vcluster-pipeline-12632713145 in namespace vcluster-vcluster-pipeline-12632713145
12:44:23 info Waiting for vcluster to come up...
12:44:35 info vcluster is waiting, because vcluster pod vcluster-pipeline-12632713145-0 has status: Init:1/3
12:45:03 done vCluster is up and running
12:45:04 info Starting background proxy container...
12:45:11 done Switched active kube context to vcluster_vcluster-pipeline-12632713145_vcluster-vcluster-pipeline-12632713145_gke_vcluster-pipeline_europe-west9_minimal-cluster
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster

real    1m2.947s
user    0m0.828s
sys 0m0.187s

&amp;gt; Run vcluster connect vcluster-pipeline-12632713145

12:45:13 done vCluster is up and running
12:45:13 info Starting background proxy container...
12:45:16 done Switched active kube context to vcluster_vcluster-pipeline-12632713145_vcluster-vcluster-pipeline-12632713145_gke_vcluster-pipeline_europe-west9_minimal-cluster
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For fairness' sake, I used the &lt;code&gt;time&lt;/code&gt; command to measure the creation time of a virtual cluster precisely. I measured the other steps by looking at the GitHub workflow log.&lt;/p&gt;

&lt;p&gt;Installing vCluster and connecting to the virtual cluster each take around one second. Creating a virtual cluster takes about one minute; creating a full-fledged GKE instance takes at least five times longer.&lt;/p&gt;
&lt;h2&gt;
  
  
  Changes to the workflow
&lt;/h2&gt;

&lt;p&gt;Here's the great news: there's absolutely no change to any of the workflow steps. We can keep using the same steps because a virtual cluster exposes the same interface as a regular Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing the PostgreSQL Helm Chart&lt;/li&gt;
&lt;li&gt;Creating the PostgreSQL connection parameters &lt;code&gt;ConfigMap&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Creating the GitHub registry &lt;code&gt;Secret&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Applying the Kustomized manifest&lt;/li&gt;
&lt;li&gt;And retrieving the external IP from the &lt;code&gt;LoadBalancer&lt;/code&gt;!&lt;/li&gt;
&lt;/ul&gt;
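&lt;p&gt;As an illustration, the last of those steps remains a plain &lt;code&gt;kubectl&lt;/code&gt; query against the virtual cluster. A sketch, where the service name &lt;code&gt;my-app&lt;/code&gt; is hypothetical:&lt;/p&gt;

```shell
# Hypothetical Service name; replace it with the one from your manifest.
# Prints the external IP once the cloud LoadBalancer has assigned it.
kubectl get service my-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```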

&lt;p&gt;If you are already using Kubernetes, and you probably are since you're reading this post, introducing vCluster into your daily work does not require any breaking changes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Cleaning up
&lt;/h2&gt;

&lt;p&gt;So far, we haven't cleaned up any objects we created. Pods for our app and PostgreSQL keep piling up in the cluster, not to mention &lt;code&gt;Service&lt;/code&gt; objects, making available ports a scarce resource. This was not an oversight: deleting each object individually would have been too much overhead. I could have deployed all objects of a workflow run into a dedicated namespace and deleted that namespace, but I've been bitten by &lt;a href="https://www.baeldung.com/ops/delete-namespace-terminating-state" rel="noopener noreferrer"&gt;namespaces stuck in the &lt;code&gt;Terminating&lt;/code&gt; state&lt;/a&gt; before.&lt;/p&gt;

&lt;p&gt;By contrast, deleting a virtual cluster is a breeze. Let's add the last step to our workflow definition:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete the vCluster&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vcluster delete vcluster-pipeline-${{github.run_id}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;There is still one issue: if a step of a GitHub workflow fails, &lt;em&gt;i.e.&lt;/em&gt;, returns a non-zero exit code, the job fails immediately, &lt;strong&gt;and GitHub skips executing subsequent steps&lt;/strong&gt;. Hence, the above cleanup won't happen if the end-to-end tests fail. Keeping the cluster's state when things go wrong might even be intentional, to help with debugging; in that case, though, you should rely on observability for this purpose instead, as you do in production. I encourage you to delete your environment in every case.&lt;/p&gt;

&lt;p&gt;GitHub provides an &lt;code&gt;if&lt;/code&gt; attribute to run a step conditionally. For example, with &lt;code&gt;if: always()&lt;/code&gt;, GitHub runs the step regardless of the success or failure of previous steps. That would be too permissive here, since we don't want to delete the virtual cluster unless it was actually created in a prior step. We should delete it only if the creation succeeded:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete the vCluster&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ !cancelled() &amp;amp;&amp;amp; steps.vcluster.conclusion == 'success' }}&lt;/span&gt;    &lt;span class="c1"&gt;#1&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vcluster delete vcluster-pipeline-${{github.run_id}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Run if the job wasn't canceled &lt;strong&gt;and&lt;/strong&gt; the &lt;code&gt;vcluster&lt;/code&gt; step (defined above) succeeded. The cancellation guard isn't strictly necessary, but it lets you keep the cluster up when you cancel a run.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The above setup allows each Pull Request to run in its own sandbox, avoiding conflicts while controlling costs. By leveraging this approach, you can simplify your workflows, reduce risks, and focus on delivering features without worrying about breaking shared environments.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This post concludes our series on testing Pull Requests on Kubernetes. In the first post, we ran unit tests with Testcontainers locally and set up the foundations of the GitHub workflow. We also leveraged GitHub Service Containers in our pipeline. In the second post, we created a GKE instance, deployed our app and its PostgreSQL database, got the &lt;code&gt;Service&lt;/code&gt; URL, and ran the end-to-end tests. In this post, we used vCluster to isolate each PR and manage the costs.&lt;/p&gt;

&lt;p&gt;While I couldn't cover every possible option, the series provides a solid foundation for starting your journey on end-to-end testing PRs on Kubernetes.&lt;/p&gt;

&lt;p&gt;The complete source code for this post can be found on GitHub:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ajavageek" rel="noopener noreferrer"&gt;
        ajavageek
      &lt;/a&gt; / &lt;a href="https://github.com/ajavageek/vcluster-pipeline" rel="noopener noreferrer"&gt;
        vcluster-pipeline
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;To go further:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://vcluster.com" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vcluster.com/docs/vcluster/deploy/environment/gke" rel="noopener noreferrer"&gt;Deploy vCluster on GKE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/setup-vcluster" rel="noopener noreferrer"&gt;GitHub Action to install the vcluster CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/78670276/google-github-actions-get-gke-credentials-failed-with-required-container-clust" rel="noopener noreferrer"&gt;google-github-actions/get-gke-credentials failed with: required 'container.clusters.get' permission(s)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/evaluate-expressions-in-workflows-and-actions#status-check-functions" rel="noopener noreferrer"&gt;GitHub workflow status check functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>testing</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Remote Development made simple with DevPod</title>
      <dc:creator>Nicolas Fränkel</dc:creator>
      <pubDate>Thu, 06 Feb 2025 09:02:00 +0000</pubDate>
      <link>https://dev.to/loft/remote-development-made-simple-with-devpod-2dan</link>
      <guid>https://dev.to/loft/remote-development-made-simple-with-devpod-2dan</guid>
      <description>&lt;p&gt;I come relatively late to the subject of Remote Development Environments (also known as Cloud Development Environments). The main reason is that I haven't worked in a development team for over six years. However, I'm now working for Loft Labs, and we have a RDE product: &lt;a href="https://devpod.sh/" rel="noopener noreferrer"&gt;DevPod&lt;/a&gt;. I wanted to understand our value proposition as I'll be at &lt;a href="https://fosdem.org/" rel="noopener noreferrer"&gt;FOSDEM&lt;/a&gt; operating the &lt;a href="https://fosdem.org/2025/news/2024-11-16-stands-announced/" rel="noopener noreferrer"&gt;DevPod booth&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;As a former developer, I vividly remember the pain of setting up each developer's development environment. At the beginning of my career, the architect had to configure my development machine painfully, so it was similar to his setup. Later on, I did the same for my team members repeatedly. The scope of possible discrepancies impacting development is virtually endless: Operating System, of course, version &lt;strong&gt;and&lt;/strong&gt; flavour of the SDKs, &lt;em&gt;e.g.&lt;/em&gt;, Java's Eclipse Temurin vs SapMachine, git hooks, etc. It was sweat, toil, and blood on every project.&lt;/p&gt;

&lt;p&gt;Over the years, I saw some interesting approaches to reproducing development environments. In the beginning, they stemmed from VMs, then from containers. I think Vagrant was the first tool that caught my attention: I attended a talk in 2012 where the speaker mentioned he used it to set up machines before his training sessions.&lt;/p&gt;

&lt;p&gt;App architectures have evolved significantly over the years, becoming more complex and sophisticated. Years ago, chances were that the only infrastructure dependency was a SQL database. In the JVM ecosystem, we were lucky to have JDBC, an API that would work across all SQL databases. All you needed to do was write standard SQL, and you could configure the database instance at runtime. With embedded databases such as &lt;a href="https://db.apache.org/derby/" rel="noopener noreferrer"&gt;Apache Derby&lt;/a&gt; and &lt;a href="https://www.h2database.com/" rel="noopener noreferrer"&gt;H2&lt;/a&gt;, you didn't need a dedicated Oracle instance for each developer.&lt;/p&gt;

&lt;p&gt;Times have changed. It's not uncommon for apps to need a SQL database, a NoSQL database, a Kafka cluster, and a few additional application services. Organizations that develop such apps are already using some container-related technology, &lt;em&gt;e.g.&lt;/em&gt;, Docker or Kubernetes, to manage this complexity.&lt;/p&gt;

&lt;p&gt;It doesn't solve the initial issue, though: how do you align the IDE, its plugins, the SDK(s), the git hooks, and everything else? You probably guessed it from the title: Remote Development Environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Containers
&lt;/h2&gt;

&lt;p&gt;In the introduction, I mentioned that RDEs are also called Cloud Development Environments. The main idea behind RDEs is to store all you can in a Cloud and share it with all developers. In addition, you want them to work across the most widespread Cloud providers and the most commonly used IDEs. When such a need appears, it's time for industry actors to gather around a standard. Microsoft initiated the Development Container standard for their VS Code Remote Development extension for this exact purpose.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A development container (or dev container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud, in a variety of supporting tools and editors.&lt;/p&gt;

&lt;p&gt;The Development Container Specification seeks to find ways to enrich existing formats with common development specific settings, tools, and configuration while still providing a simplified, un-orchestrated single container option – so that they can be used as coding environments or for continuous integration and testing. Beyond the specification's core metadata, the spec also enables developers to quickly share and reuse container setup steps through Features and Templates.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://containers.dev/" rel="noopener noreferrer"&gt;What are Development Containers?&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The configuration file is &lt;code&gt;devcontainer.json&lt;/code&gt;. You can find the schema reference &lt;a href="https://containers.dev/implementors/json_reference/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. VS Code, Visual Studio, and IntelliJ products can leverage a &lt;code&gt;devcontainer.json&lt;/code&gt; file. On the provider side, GitHub Codespaces, CodeSandbox, and DevPod support it.&lt;/p&gt;
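&lt;p&gt;As a minimal sketch (the image and command below are illustrative, not taken from a specific project), a &lt;code&gt;devcontainer.json&lt;/code&gt; can be as small as:&lt;/p&gt;

```json
{
  "name": "rust-sample",
  "image": "mcr.microsoft.com/devcontainers/rust:latest",
  "postCreateCommand": "cargo --version"
}
```

&lt;p&gt;Any tool that understands the spec can turn this file into the same container for every developer.&lt;/p&gt;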

&lt;h2&gt;
  
  
  Introducing DevPod
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://devpod.sh/" rel="noopener noreferrer"&gt;DevPod&lt;/a&gt; is a solution that leverages &lt;code&gt;devcontainer.json&lt;/code&gt;. It implements three main properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Source: No vendor lock-in. 100% free and open source built by developers for developers. &lt;/li&gt;
&lt;li&gt;Client Only: No server side setup needed. Download the desktop app or the CLI to get started. &lt;/li&gt;
&lt;li&gt;Unopinionated: Repeatable dev environment for any infra, any IDE, and any programming language. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevPod is designed to be user-friendly and straightforward, making it a breeze to use. I decided to write this post because I was impressed with the product and to get my thoughts in order.&lt;/p&gt;

&lt;p&gt;The first step is to install DevPod itself. I'm on Mac; there's a Homebrew recipe.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;devpod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, you can launch it from the CLI or the GUI. I favour GUIs, in the beginning, to help understand the available options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6586pe5jabgdw6zm2wi3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6586pe5jabgdw6zm2wi3.jpg" alt="DevPod Home screen" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DevPod offers providers: the locations where the containers run. The default is Docker. You can add additional providers, including Cloud providers and Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5fy269l186jb6za4vg5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5fy269l186jb6za4vg5.jpg" alt="Configuring a new DevPod provider" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this post, I'll keep Docker (I'm using OrbStack). Now, onto the meat. Let's go to the Workspaces menu item. If you have already created workspaces, they appear here. Since it's our first visit, we shall create one. Click on the &lt;strong&gt;Create Workspace&lt;/strong&gt; button. Let's try one of the quickstart examples, &lt;em&gt;i.e.&lt;/em&gt;, Rust. My IDE of choice is IntelliJ IDEA, but you can choose yours. Once you've selected an image, an IDE, and a provider, click &lt;strong&gt;Create Workspace&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hs6mdm8dgu2eeyfw2fi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hs6mdm8dgu2eeyfw2fi.jpg" alt="Starting a new DevPod workspace" width="800" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, DevPod will download the image and open the project running in OrbStack in IntelliJ.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qjb8gdafan4aa079f89.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qjb8gdafan4aa079f89.jpg" alt="Running IntelliJ via the JetBrains Gateway" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From now on, we can happily start working on our Rust project, confident that every team member uses the same Rust version.&lt;/p&gt;
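&lt;p&gt;That consistency comes from the Dev Container definition the workspace is built from. The Rust quickstart ships its own &lt;code&gt;devcontainer.json&lt;/code&gt;; as a rough sketch of what such a file looks like (the image name and command below are illustrative, not the quickstart's exact contents):&lt;/p&gt;

```json
{
  "name": "rust-example",
  "image": "mcr.microsoft.com/devcontainers/rust:1",
  "postCreateCommand": "cargo --version"
}
```

&lt;p&gt;Because the toolchain version is pinned in the image, every workspace created from this file, on any machine or provider, gets the same Rust.&lt;/p&gt;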

&lt;p&gt;Note that the first time you use this setup, DevPod will download the JetBrains client as well. It's a one-time download delay, though.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplrfhdh3tb6w6osh2k7i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplrfhdh3tb6w6osh2k7i.jpg" alt="Downloading the JetBrains client" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same holds for Git pre-commit hooks, for example. If you prefer to develop in another IDE, select it at launch time, and you're good. When you're done for the day, stop the container; if you're running in the cloud, it saves money. The next day, resume the container and pick up where you left off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DevPod is a nice tool to have in your toolbelt: it allows your development team(s) to share the same machine configuration without hassle. In this introductory blog post, I showed a small fraction of what you can do. I encourage you to leverage its power if you're faced with heterogeneous development environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To go further:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://containers.dev/" rel="noopener noreferrer"&gt;Development Containers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devpod.sh/" rel="noopener noreferrer"&gt;DevPod - Open Source Dev-Environments-As-Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.loft.sh/blog/comparing-coder-vs-codespaces-vs-gitpod-vs-devpod" rel="noopener noreferrer"&gt;Gitpod vs. Codespaces vs. Coder vs. DevPod: 2024 Comparison&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>development</category>
      <category>devpod</category>
      <category>remotedevelopmentenvironment</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A solution to the problem of cluster-wide CRDs</title>
      <dc:creator>Nicolas Fränkel</dc:creator>
      <pubDate>Thu, 19 Dec 2024 09:02:00 +0000</pubDate>
      <link>https://dev.to/loft/a-solution-to-the-problem-of-cluster-wide-crds-2fbc</link>
      <guid>https://dev.to/loft/a-solution-to-the-problem-of-cluster-wide-crds-2fbc</guid>
      <description>&lt;p&gt;I'm an average Reddit user, scrolling much more than reading or interacting. Sometimes, however, a post rings a giant red bell. When I stumbled upon &lt;a href="https://www.reddit.com/r/kubernetes/comments/1ga0deo/comment/lta8itb/?context=3&amp;amp;share_id=ZS15DmQexSXUjhXuqQ81z" rel="noopener noreferrer"&gt;If you could add one feature to K8s, what would it be?&lt;/a&gt;, I knew the content would be worth it. The most voted answer is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Namespace scoped CRDs &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  A short intro to CRDs
&lt;/h2&gt;

&lt;p&gt;Kubernetes comes packed with built-in objects, such as &lt;code&gt;Pod&lt;/code&gt;, &lt;code&gt;Service&lt;/code&gt;, &lt;code&gt;DaemonSet&lt;/code&gt;, etc., but you can also define your own: these are called Custom Resource Definitions (CRDs). Most of the time, CRDs are paired with a custom controller called an &lt;em&gt;operator&lt;/em&gt;. An operator subscribes to the lifecycle events of its CRD(s). When you create, update, or delete a custom resource, Kubernetes changes its status, and the operator gets notified. What the operator does next depends on the nature of the resource.&lt;/p&gt;

&lt;p&gt;For example, the &lt;a href="https://prometheus-operator.dev/docs/getting-started/introduction/" rel="noopener noreferrer"&gt;Prometheus operator&lt;/a&gt; subscribes to the lifecycles of a couple of different CRDs: &lt;code&gt;Prometheus&lt;/code&gt;, &lt;code&gt;Alertmanager&lt;/code&gt;, &lt;code&gt;ServiceMonitor&lt;/code&gt;, etc., to make operating Prometheus easier. In particular, it will create a Prometheus instance when it detects a new &lt;code&gt;Prometheus&lt;/code&gt; CR. It will configure the instance according to the CR's manifest.&lt;/p&gt;
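&lt;p&gt;As an illustration, here is a minimal &lt;code&gt;Prometheus&lt;/code&gt; CR; when the operator sees it, it provisions a matching Prometheus instance. The field names follow the &lt;code&gt;monitoring.coreos.com/v1&lt;/code&gt; API, but treat the exact values as illustrative:&lt;/p&gt;

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring            # illustrative namespace
spec:
  replicas: 2                      # the operator runs two Prometheus pods
  serviceAccountName: prometheus   # assumed to exist already
  serviceMonitorSelector: {}       # pick up every ServiceMonitor in scope
```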

&lt;h2&gt;
  
  
  The issue with cluster-wide CRDs
&lt;/h2&gt;

&lt;p&gt;CRDs have a cluster-wide scope; that is, you install a CRD for an entire cluster. Note that while the definition is cluster-wide, the CR's scope is either &lt;code&gt;Cluster&lt;/code&gt; or &lt;code&gt;Namespaced&lt;/code&gt; depending on the CRD.&lt;/p&gt;
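&lt;p&gt;To make that distinction concrete, the scope sits in the CRD manifest itself. The definition below is installed once for the whole cluster, yet the resources it defines live in namespaces; a minimal sketch with a made-up group and kind:&lt;/p&gt;

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # must match <plural>.<group>
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced           # the CRs are namespaced; the definition itself is cluster-wide
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```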

&lt;p&gt;I noticed the problem of cluster-wide CRDs when I worked with Apache APISIX, an API gateway. Routing in Kubernetes has evolved across several steps: &lt;code&gt;NodePort&lt;/code&gt;, &lt;code&gt;LoadBalancer&lt;/code&gt;, and &lt;code&gt;IngressController&lt;/code&gt;, each trying to fix the limitations of its predecessor. The latest step is the Gateway API.&lt;/p&gt;

&lt;p&gt;At the time of this writing, the Gateway API is still an add-on and not part of the Kubernetes distro. You need to install it explicitly as a CRD. The Gateway API went through several versions. If team A were a precursor and installed version &lt;code&gt;v1alpha2&lt;/code&gt;, every other team would need to use the same version because the Gateway API is a CRD. Of course, team B can try to convince team A to upgrade, but if you've been in such a situation, you know how painful it can be.&lt;/p&gt;

&lt;p&gt;I mentioned above that the magic happened via an operator. The Gateway API doesn't come with an out-of-the-box operator. Instead, &lt;a href="https://gateway-api.sigs.k8s.io/implementations/" rel="noopener noreferrer"&gt;different vendors provide their own&lt;/a&gt;. For example, Apache APISIX has one, Traefik has one, etc. Of course, they are more or less advanced. At the time, the APISIX operator only worked with version 0.5.0 of the Gateway API CRD.&lt;/p&gt;

&lt;p&gt;So now, it gets worse. Team A installed v0.5.0 to work with APISIX; team B comes later and wants to use Traefik, which fully supports the latest and greatest. Unfortunately, they can't because it would require the latest CRD.&lt;/p&gt;

&lt;p&gt;Don't get me wrong; I'm all for a lean architectural landscape that limits the number of different technologies. However, it should be a deliberate choice, not a technical limitation. The above also prevents rolling upgrades. Imagine that we decided on Apache APISIX early on. Yet, it hasn't progressed toward supporting the latest Gateway API versions. We should be able to migrate from APISIX to Traefik (or any other) team by team.&lt;/p&gt;

&lt;p&gt;The cluster-wide CRD doesn't allow it, or at least makes it very hard: we would have to find a Traefik version that handles v0.5.0, &lt;strong&gt;if there is one&lt;/strong&gt; and it's still maintained, migrate all APISIX CRs to Traefik &lt;strong&gt;at once&lt;/strong&gt;, and then proceed with the upgrade. This approach requires expensive coordination, the cost of which grows exponentially with the number of teams involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  The separate clusters approach
&lt;/h2&gt;

&lt;p&gt;The obvious solution is to have one cluster per team. If you have been operating clusters, you know this approach doesn't scale.&lt;/p&gt;

&lt;p&gt;Each cluster requires its own control plane nodes. These are purely "administrative" costs of running a cluster: they don't bring anything to the table.&lt;/p&gt;

&lt;p&gt;On top of that, every cluster needs a complete monitoring solution, including at least metrics and logging, and possibly distributed tracing. Whatever your architecture, it's again an additional burden with no business value. The same reasoning applies to every supporting feature of a cluster: authentication, authorization, etc.&lt;/p&gt;

&lt;p&gt;All in all, lots of clusters mean lots of additional operational costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  vCluster, a sensible alternative
&lt;/h2&gt;

&lt;p&gt;The ideal situation, as the initial quote of this post states, would be to have namespace-scoped CRDs. Unfortunately, it's not the path that Kubernetes chose. The next best thing would be to add a virtual cluster on top of the real one to partition it: that's the promise of &lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What are virtual clusters?&lt;/p&gt;

&lt;p&gt;Virtual clusters are a Kubernetes concept that enables isolated clusters to be run within a single physical Kubernetes cluster. Each cluster has its own API server, which makes them better isolated than namespaces and more affordable than separate Kubernetes clusters.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;vCluster isolates each virtual cluster. Hence, with a &lt;strong&gt;single control plane&lt;/strong&gt;, you can deploy a v1.0 CRD in one cluster and a v1.2 in another without trouble.&lt;/p&gt;

&lt;p&gt;Imagine two teams working with different Gateway API providers, each requiring a different CRD version. Let's create a virtual cluster for each of them with vCluster so each can work independently from the other team. I'll assume you already have the &lt;code&gt;vcluster&lt;/code&gt; CLI installed; if not, look at the &lt;a href="https://www.vcluster.com/docs/get-started/#deploy-vcluster" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, which offers several installation options depending on your platform and tastes.&lt;/p&gt;

&lt;p&gt;We can now create our virtual clusters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create teamx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;08:01:02 info Creating namespace vcluster-teamx
08:01:02 info Detected local kubernetes cluster orbstack. Will deploy vcluster with a NodePort &amp;amp; sync real nodes
08:01:02 info Chart not embedded: "open chart/vcluster-0.21.1.tgz: file does not exist", pulling from helm repository.
08:01:02 info Create vcluster teamx...
08:01:02 info execute command: helm upgrade teamx https://charts.loft.sh/charts/vcluster-0.21.1.tgz --create-namespace --kubeconfig /var/folders/kb/g075x6tx36360yvwjrb1x6yr0000gn/T/83460322 --namespace vcluster-teamx --install --repository-config='' --values /var/folders/kb/g075x6tx36360yvwjrb1x6yr0000gn/T/1777816672
08:01:03 done Successfully created virtual cluster teamx in namespace vcluster-teamx
08:01:07 info Waiting for vcluster to come up...
08:01:32 done vCluster is up and running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we didn't specify any namespace, &lt;code&gt;vcluster&lt;/code&gt; created one with the same name as the virtual cluster. If you prefer a specific namespace, use the &lt;code&gt;-n&lt;/code&gt; option, &lt;em&gt;e.g.&lt;/em&gt;, &lt;code&gt;vcluster create mycluster -n mynamespace&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note that you can customize each virtual cluster via a &lt;code&gt;values.yaml&lt;/code&gt; &lt;a href="https://www.vcluster.com/docs/vcluster/configure/vcluster-yaml/" rel="noopener noreferrer"&gt;configuration file&lt;/a&gt;. In the context of this post, we will keep the default options.&lt;/p&gt;
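&lt;p&gt;For example, a &lt;code&gt;vcluster.yaml&lt;/code&gt; can tune what gets synced between the virtual cluster and the host. The sketch below uses field paths from the v0.2x configuration schema as I recall them; verify them against the linked reference before relying on them:&lt;/p&gt;

```yaml
# vcluster.yaml — pass it via: vcluster create teamx -f vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true   # sync Ingress objects down to the host cluster
  fromHost:
    nodes:
      enabled: true   # expose the host's real nodes inside the virtual cluster
```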

&lt;p&gt;The &lt;code&gt;vcluster connect&lt;/code&gt; command connects to a virtual cluster. Here, however, we are already connected, because &lt;code&gt;vcluster create&lt;/code&gt; connects automatically.&lt;/p&gt;

&lt;p&gt;At this point, it's as if we were in a separate Kubernetes cluster. Team X can install the CRDs using the version that they require.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Team Y can do the same with their version. Because we are playing the roles of both team X and team Y, we first need to disconnect from team X's virtual cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster disconnect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the result of the operation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;08:05:29 info Successfully disconnected and switched back to the original context: orbstack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's impersonate team Y, create the virtual cluster, and install another version of the CRDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create teamy
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the second command is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/grpcroutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Version 1.2 adds a new &lt;code&gt;GRPCRoute&lt;/code&gt; resource that doesn't exist in version 1.0. Team X can now install a Gateway API provider that works with v1.0, and team Y one that works with v1.2.&lt;/p&gt;

&lt;p&gt;CRDs are cluster-wide resources, but there's no conflict since the virtual clusters behave like isolated clusters. Each team can happily use the version they need without forcing it on others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixyl0zzfsvlbx5xzkhme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixyl0zzfsvlbx5xzkhme.png" alt="Deployment of virtual clusters" width="642" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we touched on a problem with some Kubernetes objects: they are cluster-wide and lock all teams working on the same cluster into using the same version. Running a Kubernetes cluster incurs costs; managing lots of them requires mature and organized automation.&lt;/p&gt;

&lt;p&gt;vCluster allows an organization to get the best of both worlds: limit the number of clusters while preventing teams from stepping on each other's toes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To go further:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learnk8s.io/how-many-clusters" rel="noopener noreferrer"&gt;Architecting Kubernetes clusters — how many should you have?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>crd</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>vcluster Exploded in 2022</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Tue, 27 Dec 2022 10:02:53 +0000</pubDate>
      <link>https://dev.to/loft/vcluster-exploded-in-2022-2n7e</link>
      <guid>https://dev.to/loft/vcluster-exploded-in-2022-2n7e</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;Rich Burroughs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2022 was a very exciting year for vcluster. If you’re unfamiliar with vcluster, it’s an open source tool for creating and managing virtual Kubernetes clusters. Virtual clusters are lightweight and run in a namespace of an underlying host cluster but appear to the users as if they’re full-blown clusters. If you’d like more details, there’s an &lt;a href="https://www.vcluster.com/docs/what-are-virtual-clusters" rel="noopener noreferrer"&gt;extended explanation in the docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the massive growth of the project and some of the new features that showed up in 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;

&lt;p&gt;First, some numbers: vcluster now has more than 2,200 stars on GitHub, and the Docker images have been downloaded more than 26 million times from Docker Hub. While I don’t put a lot of faith in GitHub stars as a metric on its own, there’s a lot of other evidence that the project has grown a lot this year.&lt;/p&gt;

&lt;p&gt;One of the highlights for vcluster was KubeCon + CloudNativeCon North America 2022, held in Detroit. vcluster was featured in a &lt;a href="https://youtu.be/eJG7uIU9NpM" rel="noopener noreferrer"&gt;keynote by Whitney Lee and Mauricio Salatino&lt;/a&gt;, which illustrated how platform teams can better serve developers, as well as &lt;a href="https://youtu.be/p8BluR5WT5w" rel="noopener noreferrer"&gt;a talk by Joseph Sandoval and Dan Garfield&lt;/a&gt; about Adobe’s new CI/CD platform, which uses vcluster and Argo CD. And Mike Tougeron from Adobe &lt;a href="https://youtu.be/casLvZWlIDw" rel="noopener noreferrer"&gt;spoke more about their use of vcluster&lt;/a&gt; at GitOps Con North America in the buildup to KubeCon.&lt;/p&gt;

&lt;p&gt;vcluster was featured on &lt;a href="https://youtu.be/EaoxUDGpARE" rel="noopener noreferrer"&gt;VMware’s TGIK stream&lt;/a&gt;, &lt;a href="https://youtu.be/wMmUmjSB6hw" rel="noopener noreferrer"&gt;Whitney Lee’s Enlightning show&lt;/a&gt;, and Bret Fisher’s &lt;a href="https://youtu.be/FYqKQIthH6s" rel="noopener noreferrer"&gt;DevOps and Docker stream&lt;/a&gt;. We also saw written content about vcluster from the community, like this &lt;a href="https://medium.com/nerd-for-tech/multi-tenancy-in-kubernetes-using-lofts-vcluster-dee6513a7206" rel="noopener noreferrer"&gt;intro tutorial&lt;/a&gt; by Pavan Kumar, a blog post from Mauricio Salatino on &lt;a href="https://salaboy.com/2022/08/03/building-platforms-on-top-of-kubernetes-vcluster-and-crossplane/" rel="noopener noreferrer"&gt;building dev platforms with vcluster and Crossplane&lt;/a&gt;, and this super cool post from Jason Andress about &lt;a href="https://sysdig.com/blog/how-to-honeypot-vcluster-falco/" rel="noopener noreferrer"&gt;building honeypots with vcluster&lt;/a&gt;. The honeypot use case had never occurred to me. I love seeing what creative uses people come up with for vcluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Features
&lt;/h2&gt;

&lt;p&gt;There wouldn’t be so much excitement around vcluster if not for the work of the maintainers and contributors. It’s great to see so many new features being added, and they often come out of feedback from people in the community. Here’s a look at some of the things that shipped in 2022.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying Helm charts and manifests on startup:&lt;/strong&gt; This is one of my favorite features that’s been added to vcluster. It’s one thing to provision a bare cluster and another thing for that to become a useful environment. With this feature, you can apply Helm charts (public or private) or Kubernetes YAML manifests as the virtual cluster spins up for the first time. It’s super helpful. (&lt;a href="https://www.vcluster.com/docs/operator/init-manifests" rel="noopener noreferrer"&gt;Read the docs here&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distros:&lt;/strong&gt; Initially, vcluster was built on top of k3s, but after a while, people started asking if we could support other Kubernetes distributions. Now vcluster also supports k0s, EKS, and standard Kubernetes. This allows you to use a virtual cluster more like your production environment. (&lt;a href="https://www.vcluster.com/docs/operator/other-distributions" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolated mode:&lt;/strong&gt; As more people started using vcluster, we received lots of questions and feedback about how much isolation the virtual clusters had. With isolated mode, vcluster now adds some additional Kubernetes security features to the virtual clusters as they’re provisioned: a Pod Security Standard, a resource quota, a limit range, and a network policy. Isolated mode is optional and can be invoked with the &lt;code&gt;--isolated&lt;/code&gt; flag. (&lt;a href="https://www.vcluster.com/docs/operator/security" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plugins:&lt;/strong&gt; With the complexity of Kubernetes, it became clear that people would need the ability to customize and extend vcluster to fit their workflows. Enter vcluster plugins and the &lt;a href="https://github.com/loft-sh/vcluster-sdk" rel="noopener noreferrer"&gt;vcluster SDK&lt;/a&gt;. Plugins are written in Go and allow users to customize the behavior of vcluster’s syncer to do all kinds of things, like sharing resources between host and virtual clusters. (&lt;a href="https://www.vcluster.com/docs/plugins/tutorial" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pausing and resuming virtual clusters:&lt;/strong&gt; This handy feature can scale down workloads in your virtual clusters when they’re not being used. That’s done by setting the number of replicas to zero for StatefulSets and Deployments. Resuming the virtual cluster sets the replicas back to their original value, and then the scheduler spins the pods back up. This allows users to suspend the workloads while keeping configurations in the virtual cluster in place. (&lt;a href="https://www.vcluster.com/docs/operator/pausing-vcluster" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This is just a handful of the many improvements made to vcluster in 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you
&lt;/h2&gt;

&lt;p&gt;I want to thank the vcluster maintainers, the contributors, and everyone who used vcluster in 2022. As I mentioned, many great ideas for improving the project come from folks in the community through &lt;a href="https://github.com/loft-sh/vcluster/issues" rel="noopener noreferrer"&gt;GitHub issues&lt;/a&gt;, pull requests, or even feedback in &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;our community Slack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’re excited that so many people have found vcluster useful for their work. If you’ve read this far and haven’t tried vcluster yet yourself, there’s &lt;a href="https://www.vcluster.com/docs/quickstart" rel="noopener noreferrer"&gt;an easy quickstart&lt;/a&gt; that takes just a few minutes.&lt;/p&gt;

&lt;p&gt;I’m looking forward to seeing how the project grows in 2023.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Kubernetes Multitenancy: Why Namespaces aren’t Good Enough</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 21 Nov 2022 19:32:30 +0000</pubDate>
      <link>https://dev.to/loft/kubernetes-multitenancy-why-namespaces-arent-good-enough-i53</link>
      <guid>https://dev.to/loft/kubernetes-multitenancy-why-namespaces-arent-good-enough-i53</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/TheEbizWizard" rel="noopener noreferrer"&gt;Jason Bloomberg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multitenancy has long been a core capability of cloud computing, and indeed, for virtualization in general.&lt;br&gt;
With multitenancy, everybody has their own sandbox to play in, isolated from everybody else’s sandbox, even though beneath the covers, they share common infrastructure.&lt;br&gt;
Kubernetes offers its own kind of multitenancy as well, via the use of namespaces. Namespaces provide a mechanism for organizing clusters into virtual sub-clusters that serve as logically separated tenants.&lt;br&gt;
Relying upon namespaces to provide all the advantages of true multitenancy, however, is a mistake. Namespaces are for cloud native teams that don’t want to step on each other’s toes – but who are all colleagues who trust each other.&lt;br&gt;
True multitenancy, in contrast, isn’t for colleagues. It’s for strangers – where no one knows whether the owner of the next tenant over is up to no good. Kubernetes namespaces don’t provide this level of multitenancy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multitenancy: A Quick Primer
&lt;/h2&gt;

&lt;p&gt;Multitenancy is most familiar as a core part of how cloud computing operates.&lt;br&gt;
Everybody’s cloud account is its own separate world, complete in itself and separate from everybody else’s. You get your own login, configuration, services, and environments. Meanwhile, everybody else has the same experience, even though under the covers, the cloud provider runs each of these tenants on shared infrastructure.&lt;/p&gt;

&lt;p&gt;There are different flavors of multitenancy, depending upon just what infrastructure they share beneath this abstraction.&lt;/p&gt;

&lt;p&gt;IaaS tenants, aka instances or nodes, share hypervisors that abstract the underlying hardware and physical networking. Meanwhile, SaaS tenants, for example, Salesforce or ServiceNow accounts, might share database infrastructure, common services, or other application elements.&lt;/p&gt;

&lt;p&gt;Either way, each tenant is isolated from all the others. Isolation, in fact, is one of the most important characteristics of multitenancy, because it protects one tenant from the actions of another.&lt;/p&gt;

&lt;p&gt;To be effective, the infrastructure must enforce isolation at the network layer. Any network traffic from one tenant that is destined for another must leave the tenant via its public interfaces, traverse the network external to the tenants, and then enter the destination tenant through the same interface that any other external traffic would enter.&lt;/p&gt;

&lt;p&gt;Even when the infrastructure provider decides to offer a shortcut for such traffic, avoiding the hairpin to improve performance, it’s important for such shortcuts to comply with the same network security policies as any other traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Kubernetes Namespaces for Multitenancy
&lt;/h2&gt;

&lt;p&gt;Namespaces have been around for years, largely as a way to keep different developers working on the same project from inadvertently declaring identical variable or method names.&lt;/p&gt;

&lt;p&gt;Providing a scope for names is also a benefit of Kubernetes namespaces, but naming alone isn’t their entire purpose. Kubernetes namespaces also act as virtual sub-clusters within the same physical cluster.&lt;/p&gt;

&lt;p&gt;This notion of virtual clusters sounds like individual tenants in a multitenant cluster, but in the case of Kubernetes namespaces, they have markedly different properties.&lt;/p&gt;

&lt;p&gt;Kubernetes logically separates namespaces within a cluster but allows for them to communicate with each other within the cluster. By default, Kubernetes doesn’t offer any security for such interactions, although it does allow for role-based access control (RBAC) in order to limit users and processes to individual namespaces.&lt;/p&gt;

&lt;p&gt;Such RBAC, however, does not provide the network isolation that is essential to true multitenancy. Furthermore, Kubernetes doesn’t implement any privilege separation, instead delegating such control to a dedicated authorization plugin.&lt;/p&gt;
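&lt;p&gt;What network isolation namespaces do get has to be added explicitly, per namespace, via &lt;code&gt;NetworkPolicy&lt;/code&gt; objects, and only takes effect if the cluster's network plugin enforces them. A typical default-deny policy looks like this (the namespace name is illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a        # illustrative tenant namespace
spec:
  podSelector: {}          # select every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all inbound traffic is denied
```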

&lt;p&gt;Moreover, Kubernetes defines cluster roles and their associated bindings, thus empowering certain individuals to have access and control over all the namespaces within the cluster. Not only do such roles open the door for insider attacks, but they also allow for misconfigurations of the cluster roles that would leave the door open between namespaces.&lt;/p&gt;

&lt;p&gt;If cluster roles weren’t bad enough, Kubernetes also allows for privileged pods within a cluster. Depending upon how admins have configured such pods, they can access node-level capabilities on the node hosting them. For example, a privileged pod might be able to access the file system, network, or Linux process capabilities of its node.&lt;/p&gt;

&lt;p&gt;In other words, a privileged pod can impersonate the node that hosts it – regardless of what namespaces run on that cluster or how they’re configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  ‘True’ Kubernetes Multitenancy that Provides Isolation between Tenants
&lt;/h2&gt;

&lt;p&gt;In order to implement secure Kubernetes multitenancy, it’s essential to use a tool like Loft Labs to implement virtual clusters within Kubernetes clusters that act just like real clusters.&lt;/p&gt;

&lt;p&gt;With this ‘true’ multitenancy, traffic from one virtual cluster to another must go through the same access controls as any cluster-to-cluster traffic would – because fundamentally, Loft Labs handles traffic between virtual clusters just as Kubernetes would handle traffic between clusters.&lt;/p&gt;

&lt;p&gt;One of the primary benefits of this approach to Kubernetes multitenancy is that virtual clusters support namespaces just as clusters do – not necessarily for isolation (as namespace isolation is inadequate), but for the name scoping that namespaces are most familiar for.&lt;/p&gt;

&lt;p&gt;Loft Labs’ multitenancy provides other benefits that namespaces cannot, for example, the ability to spin down individual virtual clusters for better cost efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intellyx Take
&lt;/h2&gt;

&lt;p&gt;The best way to think about multitenancy options for Kubernetes is this: namespaces are for friends, while Loft Labs’ multitenancy is also for strangers.&lt;/p&gt;

&lt;p&gt;Using namespaces for multitenancy works best when securing traffic between tenants is a non-issue, say, when all the developers using the cluster are on the same team and actively collaborating. &lt;/p&gt;

&lt;p&gt;True multitenancy of the sort Loft provides, in contrast, provides virtual clusters that separate teams can use – even if those teams aren’t collaborating, or indeed, don’t know or trust each other at all.&lt;/p&gt;

&lt;p&gt;This zero-trust approach to sharing resources is fundamental to modern cloud native computing, even in situations where people are all working for the same company. &lt;/p&gt;

&lt;p&gt;Not only does such isolation add security, but it also enforces GitOps-style best practices for how multiple teams can work on the same codebase in parallel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Copyright © Intellyx LLC. Loft Labs is an Intellyx customer. Intellyx retains final editorial control of this article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Making Self-Service Clusters Ready for DevOps Adoption</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 21 Nov 2022 19:32:26 +0000</pubDate>
      <link>https://dev.to/loft/making-self-service-clusters-ready-for-devops-adoption-4m4k</link>
      <guid>https://dev.to/loft/making-self-service-clusters-ready-for-devops-adoption-4m4k</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/bluefug" rel="noopener noreferrer"&gt;Jason English&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;History is littered with cautionary tales of software delivery tools that were technically ahead of their time, yet were ultimately unsuccessful because of a lack of end user adoption.&lt;/p&gt;

&lt;p&gt;In the past, the success of developer tooling vendors depended upon the rise and fall of the major competitive platforms around them. An upstart vendor could still break through to grab market share from dominant players in a space by delivering a superior user experience (or UX) and partnering with a leader, until such time as they were acquired.&lt;/p&gt;

&lt;p&gt;A great UX generally includes an intuitive UI design based on human factors, which is especially important in consumer-facing applications. Human factors are still important in software development tooling; there, however, the UX focus is on whether the tools readily deliver value to the organization by empowering developers to efficiently deliver better software.&lt;/p&gt;

&lt;p&gt;Kubernetes (or k8s) arose from the open source foundations of Linux, containerization, and a contributed project from Google. A global community of contributors turned the enterprise space inside out, by abstracting away the details of deploying and managing infrastructure as code. &lt;/p&gt;

&lt;p&gt;Finally, development and operations teams could freely download non-proprietary tooling and orchestrate highly scalable cloud native software architecture. So what was holding early K8s adopters back from widespread use in their DevOps lifecycles?&lt;/p&gt;

&lt;h2&gt;
  
  
  The challenge: empowering developers
&lt;/h2&gt;

&lt;p&gt;A core tenet of the DevOps movement is self-service automation. Key stakeholders should be fully empowered to collaborate freely with access to the tools and resources they need. &lt;/p&gt;

&lt;p&gt;Instead of provisioning through the approval process of an IT administrative control board, DevOps encourages the establishment of an agile platform team (in smaller companies, this may be one platform manager). The platform team should provide developers with a self-service stack of approved on-demand tooling and environments, without requesting an exhaustive procurement process or ITIL review cycles.&lt;/p&gt;

&lt;p&gt;At first glance, Kubernetes, with its declarative abstraction of infrastructure, seems like a perfect fit for orchestrating these environments. But much like an early sci-fi spaceship where wires are left hanging behind the lights of control panels, many specifics of integration, data movement, networking and security were intentionally left up to the open source community to build out, rather than locking in design assumptions in these key areas.&lt;/p&gt;

&lt;p&gt;Because the creation and configuration of Kubernetes clusters comes with a unique set of difficulties, the platform team may try to reduce rework by offering a one-size-fits-all approach. Sometimes, this may not meet the needs of all developers, and may exceed the needs of other teams with excess allocation and cloud cost.&lt;/p&gt;

&lt;p&gt;You can easily tell if an organization’s DevOps initiative is off track if it simply shifts the provisioning bottleneck from IT to a platform team that is backlogged and struggling to deploy k8s clusters for the right people at the correct specifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handing over the keys
&lt;/h2&gt;

&lt;p&gt;The ability to transcend the limitations of physical networks and IP addressing is the secret weapon of Kubernetes. With configuration defined as code, teams can call for namespaces and clusters that truly fit the semantics and dimensions of the application.&lt;/p&gt;

&lt;p&gt;The inherent flexibility of k8s produces an additional set of concerns around role-based access controls (RBAC) that must be solved in order to scale without undue risk.&lt;/p&gt;

&lt;p&gt;In today’s cloudy and distributed ecosystem, engineering organizations are composed differently than the siloed Dev and Ops teams in traditional IT organizations. Various teams may need to access certain clusters or pods within as part of their developmental or operational duties on specific projects. &lt;/p&gt;

&lt;p&gt;Even with automated provisioning, a request would by default generate a cluster with one ‘front door’ key for an administrator, who may share this key among project team members. Permissioned individuals can step on each other’s work in the environment, inadvertently break the cluster, or even allow their credentials to get exposed to the outside world.&lt;/p&gt;

&lt;p&gt;To accelerate delivery without risk, least-privilege rights should be built into the provisioning system by policy and should leverage the company’s single sign-on (SSO) backend for resource access across an entire domain, rather than being manually doled out by an admin.&lt;/p&gt;

&lt;p&gt;In a self-service solution, multiple people can get their own keys with access to specific clusters and pods, or assign other team members to get them. These permissions can lean on the organization’s authorization tools of choice for access control, without requiring admins to write any custom policies to prevent inadvertent conflicts.&lt;/p&gt;
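
&lt;p&gt;As a hedged sketch of what those per-user “keys” look like at the Kubernetes level (the namespace, role, and user names here are hypothetical, and a real setup would map them from SSO groups rather than creating them by hand), a namespaced Role plus RoleBinding grants one developer edit rights only within their team’s namespace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# create a namespaced role limited to common workload resources
kubectl create role dev-edit \
  --verb=get,list,watch,create,update,delete \
  --resource=pods,deployments,services \
  --namespace=team-a

# bind it to a single user as resolved by the cluster's auth backend
kubectl create rolebinding jane-dev-edit \
  --role=dev-edit \
  --user=jane@example.com \
  --namespace=team-a
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A self-service platform generates bindings like these automatically per request, which is what removes the shared ‘front door’ key.&lt;/p&gt;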

&lt;h2&gt;
  
  
  A self-service Kubernetes storefront
&lt;/h2&gt;

&lt;p&gt;We already know the cost of not getting self-service right. Unsatisfied developers will sneak around procurement to provision their own rogue clusters, creating costly cloud sprawl and lots of lost and forgotten systems with possible vulnerabilities.&lt;/p&gt;

&lt;p&gt;As consumers, we’re acclimated to using e-commerce websites and app stores on our personal devices. At work, we can use a credit card to buy apps, plugins and tooling from marketplaces provided by a SaaS vendor or public cloud.&lt;/p&gt;

&lt;p&gt;The storefront model offers a good paradigm for self-service cluster provisioning. One vendor, Loft Labs, offers a Kubernetes control plane built upon the open source DevSpace tool for standing up stacks. An intuitive interface allows domain-level administrators to navigate automated deployments and track usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fkubernetes-self-service-with-loft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fkubernetes-self-service-with-loft.png" title="Kubernetes self-service clusters with Loft" alt="Kubernetes self-service clusters with Loft" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More importantly, developers can use their own filtered view of Loft Labs as a storefront for provisioning all available and approved K8s cluster images into new or existing namespaces. Or they can make the provisioning requests via a CLI and drill down into each cluster’s details with the kubectl prompt.&lt;/p&gt;

&lt;p&gt;The system provides guardrails for developers to provision Kubernetes clusters and namespaces in the mode they prefer, without consuming excess resources or making configuration mistakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intellyx Take
&lt;/h2&gt;

&lt;p&gt;Quite a few vendors are already offering comprehensive ‘Kubernetes-as-a-Service’ management platforms that gloss over much of the complexity of provisioning and access to clusters, when what is really needed is transparency and portability.&lt;/p&gt;

&lt;p&gt;Engineers will avoid waiting on procurement boards, and they hate writing repetitive commands, whether that is launching 100 pods at a time for autoscaling or bringing them down when they are no longer required. But they do still want to directly address kubectl for a single pod, look at the logs for that pod and analyze what is going on.&lt;/p&gt;

&lt;p&gt;The platform team’s holy grail is to provide a self-service Kubernetes storefront that works with the company’s authorization regimes to entitle the right users and allow project management, tracking and auditing, while giving experienced engineers the engineering interfaces they need. &lt;/p&gt;

&lt;p&gt;Next up in this series, we’ll be covering the challenges of multi-tenancy and cost control!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;© 2022, Intellyx, LLC. Intellyx is solely responsible for the content of this article. At the time of writing, &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft Labs&lt;/a&gt; is an Intellyx customer. Image sources: Maps, Unsplash. Screenshot, Loft Labs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Interview with KubeCon Keynote Speaker Mauricio Salatino from VMware</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Thu, 27 Oct 2022 13:16:49 +0000</pubDate>
      <link>https://dev.to/loft/interview-with-kubecon-keynote-speaker-mauricio-salatino-from-vmware-1eh7</link>
      <guid>https://dev.to/loft/interview-with-kubecon-keynote-speaker-mauricio-salatino-from-vmware-1eh7</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;Rich Burroughs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Loft Labs is excited to welcome KubeCon North America 2022 keynote speaker Mauricio “Salaboy” Salatino for an exclusive interview where we dive into the struggles facing platform engineers within the CNCF ecosystem. Mauricio and his co-speaker Whitney Lee will present a demo in their keynote focused on provisioning virtual clusters with Crossplane, vcluster, and Knative to build an internal development platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: How did you learn about vcluster?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I had heard about vcluster in the context of multi-tenancy. While delivering training for &lt;a href="https://learnk8s.io" rel="noopener noreferrer"&gt;LearnK8s&lt;/a&gt; and working for different companies, I’ve repeatedly seen teams struggling to answer a very simple question: One cluster or multiple clusters? I’ve seen teams starting simple with namespaces inside a single Kubernetes Cluster and then struggling to move to use multiple clusters when the isolation levels of namespaces are not enough. And that is precisely where I see vcluster as a better alternative because it provides, from the get-go, the separation into different Kubernetes API Servers. In today’s world, where every organization is building its internal development platform, I see tools like vcluster as critical components of these platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: In the demo for your KubeCon keynote you provisioned virtual clusters with vcluster and Crossplane. Can you explain how that works? And what’s your experience been like using those two tools together?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; The demo presented at the KubeCon keynote session uses Crossplane, vcluster, and Knative, all working together around this concept of building internal development platforms. Crossplane is used to abstract where cloud resources are created. With Crossplane, we can declaratively create Kubernetes clusters in all major cloud providers, but creating these clusters is expensive, and it is a process that takes time. This is where using vcluster can save you time and money, because, to my surprise, creating a vcluster is just installing a Helm chart into your existing cluster. We can use the Crossplane Helm Provider to create vclusters from inside a Crossplane Composition, and that is exactly what my demo is doing. But vcluster doesn’t stop there, because with vcluster plugins you can share tools between the host cluster and the virtual clusters. The demo shows how you can enable your vclusters with tools like Knative Serving (for dynamic autoscaling and advanced traffic management) without installing Knative Serving in each cluster. In a scenario where you have 10 teams all working in different vclusters and using tools like Knative Serving, you save on installing and running 9 Knative Serving installations.&lt;/p&gt;
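
&lt;p&gt;For reference, the single Helm command Mauricio alludes to looks roughly like this (the release and namespace names are placeholders, and chart values vary by vcluster version):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# install a virtual cluster as a Helm release in an existing host cluster
helm upgrade --install my-vcluster vcluster \
  --repo https://charts.loft.sh \
  --namespace team-a \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Inside a Crossplane Composition, a Crossplane Helm Provider Release resource would point at this same chart and repo, which is what makes vclusters declaratively provisionable.&lt;/p&gt;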

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: Your demo focused on provisioning environments for developers. What do you think makes vcluster a great tool for dev environments?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; vclusters are better than namespaces because they provide more isolation, and they are cheaper than full-fledged clusters. Suppose you have developers working with cluster-wide resources such as CRDs and tools that need to be installed to do their work. In that case, vclusters will give them the freedom to work against a dedicated Kubernetes API server, where they will have total freedom to do what they need. By using vcluster, you can give your development teams full access to their dedicated API servers without the need to create, maintain, and pay for full-fledged Kubernetes control planes.&lt;/p&gt;
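
&lt;p&gt;A quick sketch of that developer workflow with the vcluster CLI (cluster and namespace names are placeholders, and flags may differ slightly between versions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# create a virtual cluster inside the host namespace team-dev
vcluster create dev-1 --namespace team-dev

# point kubectl at the virtual cluster's dedicated API server
vcluster connect dev-1 --namespace team-dev

# from here, the developer has full API access, including CRDs
kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;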

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; There are so many new tools in the Kubernetes space and more arriving all the time. The CNCF landscape continues to grow. What do you look for when you evaluate new open source, cloud native tools?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I evaluate their (project) community, and in the last two years, I’ve been very interested in tools that fit into the story of building internal developer platforms, as I’ve been in that space for a long time. Most of my work in the open source space is around helping developers to be more productive in building their software. &lt;/p&gt;

&lt;p&gt;If you are evaluating open source / CNCF projects, check their maturity level, which company or companies are sponsoring the project, and how healthy the community is. Looking at which companies are active in a project’s community forums or Slack channels is a great way to validate whether other companies in your industry are trying to solve the same problems you are facing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; There’s been a lot of focus on the role of platform engineer the last few years. What do you think are the big challenges facing platform engineers today?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I’ve been writing about these topics lately on my blog; you can read more in my series The Challenges of Building Platforms on Top of Kubernetes: &lt;a href="https://salaboy.com/2022/09/29/the-challenges-of-platform-building-on-top-of-kubernetes-1-4/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://salaboy.com/2022/10/03/the-challenges-of-platform-building-on-top-of-kubernetes-2-4/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;, &lt;a href="https://salaboy.com/2022/10/17/the-challenges-of-platform-building-on-top-of-kubernetes-3-4/" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt;, Part 4&lt;/p&gt;

&lt;p&gt;But the big challenge nowadays is keeping up with everything happening in the Cloud Native space, so when evaluating tools you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be ready to pivot: look for tools that provide the right abstractions so you can pivot if things change. Crossplane is a big player in this space, but other projects are worth looking at too. If you are really into platform building, you should check out a project that I am hoping gets donated to the CNCF called &lt;a href="https://kratix.io" rel="noopener noreferrer"&gt;Kratix&lt;/a&gt;. These folks are working on providing the right abstractions for building platforms, allowing platform teams to focus on deciding which projects they want to use and how those projects will work together.&lt;/li&gt;
&lt;li&gt;Reuse instead of build: sort the problems you are trying to solve into two buckets: 1) generic problems that every company has, and 2) challenges that are specific to your company. For problems in the first bucket, make sure you don’t build an in-house solution. For problems in the second bucket, focus your search on whether a combination of existing tools can do the work, so you don’t spend time and effort reinventing the wheel just because no available tool matches your requirements 100%.&lt;/li&gt;
&lt;li&gt;Ecosystem integrations: When you look at a specific tool make sure that it integrates well with the other tools in the ecosystem. Don’t be tricked by the fact that they are all Kubernetes projects. Depending on how you want tools to work together your platform team might need to spend a considerable amount of time to make these tools work for your specific use case.&lt;/li&gt;
&lt;li&gt;Tailored developer experiences are the best way to promote your platform: you need to spend a lot of time and effort building great developer experiences so your teams have the right tools to do their work. These experiences are enabled by the tools the platform team chooses and by continually improving how application development teams interact with the platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are into building platforms, I am currently writing a book titled “&lt;a href="http://mng.bz/jjKP" rel="noopener noreferrer"&gt;Continuous Delivery for Kubernetes&lt;/a&gt;” where I cover tools like Tekton, Crossplane, vcluster, Keptn, Knative, ArgoCD, Helm, among others, to build platforms that are focused on delivering more software more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; Thanks for your time, Mauricio.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to learn more about vcluster, check out &lt;a href="http://vcluster.com" rel="noopener noreferrer"&gt;vcluster.com&lt;/a&gt; for links to the docs, the GitHub repo, and our community Slack. You can find Mauricio on Twitter at &lt;a href="https://twitter.com/salaboy" rel="noopener noreferrer"&gt;@salaboy&lt;/a&gt;, and Rich at &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;@richburroughs&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
  </channel>
</rss>
