<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: vCluster</title>
    <description>The latest articles on DEV Community by vCluster (@vcluster_89).</description>
    <link>https://dev.to/vcluster_89</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840020%2F8cd329f5-5895-4624-b224-f7f0958f1961.png</url>
      <title>DEV Community: vCluster</title>
      <link>https://dev.to/vcluster_89</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vcluster_89"/>
    <language>en</language>
    <item>
      <title>Multi-Node Local Kubernetes with vind: Pod Scheduling, Node Drains, and Affinity Rules</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://dev.to/vcluster_89/multi-node-local-kubernetes-with-vind-pod-scheduling-node-drains-and-affinity-rules-305j</link>
      <guid>https://dev.to/vcluster_89/multi-node-local-kubernetes-with-vind-pod-scheduling-node-drains-and-affinity-rules-305j</guid>
      <description>&lt;p&gt;Yesterday we got started with a &lt;a href="https://www.vcluster.com/blog/vind-getting-started-first-deployment-loadbalancer-kubernetes-docker" rel="noopener noreferrer"&gt;single-node vind cluster&lt;/a&gt;. That’s great for basic development, but if you want to test pod scheduling, node affinity, anti-affinity, topology constraints, or node drains, you need multiple nodes.&lt;/p&gt;

&lt;p&gt;With KinD, multi-node configs work but you’re still limited to local Docker containers with no external node support. vind gives you the same multi-node Docker setup, plus the option to add real cloud nodes later (we’ll cover that in Day 4).&lt;/p&gt;

&lt;p&gt;Today, let’s create a 4-node cluster and put it through its paces.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Configuration
&lt;/h2&gt;

&lt;p&gt;Create a multi-node.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;experimental&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. This tells vind to create 3 additional worker nodes alongside the control plane. Each worker runs as its own Docker container with kubelet, kube-proxy, and Flannel.&lt;/p&gt;
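Under the hood, each node is just a Docker container. The sketch below works from the container names that appear in the create log later in this post (vcluster.cp.CLUSTER for the control plane, vcluster.node.CLUSTER.NODE for workers); the `docker ps` line in the comment assumes that default naming:

```shell
# Container names as they appear in the create log; on a live host you
# could list them with: docker ps --format '{{.Names}}' | grep '^vcluster\.'
names='vcluster.cp.multi-node
vcluster.node.multi-node.worker-1
vcluster.node.multi-node.worker-2
vcluster.node.multi-node.worker-3'

# Count the worker-node containers (everything except the control plane)
echo "$names" | grep -c '^vcluster\.node\.'
```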

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2ulwezizh00rsq5zd9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2ulwezizh00rsq5zd9p.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Cluster
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create multi-node &lt;span class="nt"&gt;-f&lt;/span&gt; multi-node.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;12:57:42 info Using vCluster driver 'docker' to create your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
12:57:42 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
12:57:42 info Ensuring environment for vCluster multi-node...
12:57:43 done Created network vcluster.multi-node
12:57:47 warn Load balancer type services are not supported inside the vCluster because this command was executed with insufficient privileges. To enable load balancer type services, run this command with sudo
12:57:48 info Will connect vCluster multi-node to platform...
12:57:49 info Starting vCluster standalone multi-node
12:57:51 info Adding node worker-1 to vCluster multi-node
12:57:52 info Joining node vcluster.node.multi-node.worker-1 to vCluster multi-node...
12:58:17 info Adding node worker-2 to vCluster multi-node
12:58:17 info Joining node vcluster.node.multi-node.worker-2 to vCluster multi-node...
12:58:24 info Adding node worker-3 to vCluster multi-node
12:58:24 info Joining node vcluster.node.multi-node.worker-3 to vCluster multi-node...
12:58:31 done Successfully created virtual cluster multi-node
12:58:31 info Finding docker container vcluster.cp.multi-node...
12:58:31 info Waiting for vCluster kubeconfig to be available...
12:58:32 info Waiting for vCluster to become ready...
12:58:32 done vCluster is ready
12:58:32 done Switched active kube context to vcluster-docker_multi-node
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each node takes just a few seconds to join. Let’s verify:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME         STATUS   ROLES                  AGE    VERSION
multi-node   Ready    control-plane,master   122m   v1.35.0
worker-1     Ready    &amp;lt;none&amp;gt;                 122m   v1.35.0
worker-2     Ready    &amp;lt;none&amp;gt;                 122m   v1.35.0
worker-3     Ready    &amp;lt;none&amp;gt;                 122m   v1.35.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four nodes: one control plane and three workers, all running Kubernetes v1.35.0. Each runs as its own container with its own IP on the Docker network. This looks exactly like a real multi-node cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and Watch Pod Distribution
&lt;/h2&gt;

&lt;p&gt;Let’s deploy 6 replicas and see how Kubernetes distributes them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment web &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6
deployment.apps/web created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
web-ff44d897b-5ffqt   1/1     Running   0          8s    10.244.5.2   worker-3     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-7vm76   1/1     Running   0          8s    10.244.2.3   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-hpf9d   1/1     Running   0          8s    10.244.5.3   worker-3     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-l4c7t   1/1     Running   0          8s    10.244.4.2   worker-2     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-p276z   1/1     Running   0          8s    10.244.2.2   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-z77gm   1/1     Running   0          8s    10.244.0.4   multi-node   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the NODE column: pods are distributed across all 4 nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;worker-1: 2 pods (10.244.2.x subnet)&lt;/li&gt;
&lt;li&gt;worker-2: 1 pod (10.244.4.x subnet)&lt;/li&gt;
&lt;li&gt;worker-3: 2 pods (10.244.5.x subnet)&lt;/li&gt;
&lt;li&gt;multi-node (control plane): 1 pod (10.244.0.x subnet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each node has its own Flannel subnet. The Kubernetes scheduler is doing real scheduling across real (containerized) nodes.&lt;/p&gt;
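A quick way to tally pods per node is to pull the NODE column out of the pod listing. The pipeline below runs on the NODE values from the listing above so it is self-contained; on a live cluster you would feed it `kubectl get pods --no-headers -o wide` instead:

```shell
# NODE column values from the listing above (7th field of `kubectl get pods -o wide`)
nodes='worker-3 worker-1 worker-3 worker-2 worker-1 multi-node'

# Tally pods per node; on a live cluster:
#   kubectl get pods --no-headers -o wide | awk '{print $7}' | sort | uniq -c | sort -rn
echo "$nodes" | tr ' ' '\n' | sort | uniq -c | sort -rn
```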

&lt;h2&gt;
  
  
  Test Node Drain
&lt;/h2&gt;

&lt;p&gt;This is where multi-node really shines. Let’s drain worker-3 and watch pods get rescheduled:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl drain worker-3 &lt;span class="nt"&gt;--ignore-daemonsets&lt;/span&gt; &lt;span class="nt"&gt;--delete-emptydir-data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-fpr8n, kube-system/kube-proxy-dsg8b
evicting pod default/web-ff44d897b-hpf9d
evicting pod default/web-ff44d897b-5ffqt
pod/web-ff44d897b-5ffqt evicted
pod/web-ff44d897b-hpf9d evicted
node/worker-3 drained
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both pods on worker-3 were evicted. Where did they go?&lt;br&gt;
&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
web-ff44d897b-7vm76   1/1     Running   0          20m   10.244.2.3   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-hchpq   1/1     Running   0          19m   10.244.4.3   worker-2     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-l4c7t   1/1     Running   0          20m   10.244.4.2   worker-2     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-p276z   1/1     Running   0          20m   10.244.2.2   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-pc4cx   1/1     Running   0          19m   10.244.0.5   multi-node   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
web-ff44d897b-z77gm   1/1     Running   0          20m   10.244.0.4   multi-node   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
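To see exactly which pods are replacements, compare the pod-name sets from before and after the drain. The names below are copied from the two listings in this post:

```shell
# Pod names before and after the drain (from the listings above), written to
# scratch files so `comm` can diff the sorted sets
printf '%s\n' web-ff44d897b-5ffqt web-ff44d897b-7vm76 web-ff44d897b-hpf9d \
  web-ff44d897b-l4c7t web-ff44d897b-p276z web-ff44d897b-z77gm | sort > before.txt
printf '%s\n' web-ff44d897b-7vm76 web-ff44d897b-hchpq web-ff44d897b-l4c7t \
  web-ff44d897b-p276z web-ff44d897b-pc4cx web-ff44d897b-z77gm | sort > after.txt

# Names only in after.txt are the pods the ReplicaSet created to replace the evicted ones
comm -13 before.txt after.txt
```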



&lt;p&gt;The scheduler created new pods on worker-2 and the control plane node. Zero pods on worker-3. This is exactly how it works in production.&lt;/p&gt;
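In production you would usually guard drains with a PodDisruptionBudget so evictions can never take too many replicas down at once. This manifest is not part of the walkthrough; it is a minimal sketch you could apply before draining:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb        # hypothetical name, not from the article
spec:
  minAvailable: 4      # drain evictions wait rather than drop below 4 running pods
  selector:
    matchLabels:
      app: web         # matches the `web` deployment created above
```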

&lt;p&gt;Uncordon when you’re done:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl uncordon worker-3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node/worker-3 uncordoned
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing Node Affinity
&lt;/h2&gt;

&lt;p&gt;With multi-node, you can test real node affinity rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-only&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-only&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-only&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-role.kubernetes.io/control-plane&lt;/span&gt;
                &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DoesNotExist&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures pods run only on worker nodes, never on the control plane: something you can only test with multiple nodes. Apply it and check where the pods land:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; affinity.yaml 
deployment.apps/worker-only created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-owide&lt;/span&gt; 
worker-only-86dd84d489-6v98m   1/1     Running   0          15s   10.244.5.4   worker-3     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
worker-only-86dd84d489-hmbq4   1/1     Running   0          15s   10.244.4.4   worker-2     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
worker-only-86dd84d489-xdttw   1/1     Running   0          15s   10.244.2.4   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Spreading Pods Across Nodes
&lt;/h2&gt;

&lt;p&gt;Force pods to spread across different nodes. The manifest below uses a topology spread constraint rather than pod anti-affinity; it achieves the same even spread with simpler semantics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;spread-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;spread-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;spread-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;topologySpreadConstraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;maxSkew&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
        &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
        &lt;span class="na"&gt;whenUnsatisfiable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DoNotSchedule&lt;/span&gt;
        &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;spread-app&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With 3 replicas and 3 workers, each worker gets exactly one pod. Try doing that with a single-node cluster.&lt;br&gt;
&lt;/p&gt;
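maxSkew is the maximum allowed difference between the node with the most matching pods and the node with the fewest. A tiny sketch of that arithmetic, using the one-pod-per-worker placement from this example:

```shell
# spread-app pods per worker (one each, as in the listing below)
counts='1 1 1'
max=$(echo "$counts" | tr ' ' '\n' | sort -n | tail -n 1)
min=$(echo "$counts" | tr ' ' '\n' | sort -n | head -n 1)
# skew = max - min; scheduling is allowed while skew stays within maxSkew (1 here)
echo "skew=$((max - min))"
```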

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; antiaffinity.yaml 
deployment.apps/spread-app created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po  &lt;span class="nt"&gt;-owide&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;spread
spread-app-596d884c4d-hp585    1/1     Running   0          10s   10.244.4.5   worker-2     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
spread-app-596d884c4d-j4hcm    1/1     Running   0          10s   10.244.5.5   worker-3     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
spread-app-596d884c4d-lj7c5    1/1     Running   0          10s   10.244.2.5   worker-1     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Environment Variables for Workers
&lt;/h2&gt;

&lt;p&gt;You can pass environment variables to worker containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;experimental&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-1&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CUSTOM_VAR=value1"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-2&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CUSTOM_VAR=value2"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are Docker container environment variables, useful for differentiating nodes in testing scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster delete multi-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;16:18:36 info Using vCluster driver 'docker' to delete your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
16:18:36 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
16:18:36 info Removing vCluster container vcluster.cp.multi-node...
16:18:39 info Removing vCluster node worker-3...
16:18:40 info Removing vCluster node worker-2...
16:18:42 info Removing vCluster node worker-1...
16:18:44 info Delete virtual cluster instance p-default/multi-node in platform
16:18:44 info Deleted kube context vcluster-docker_multi-node
16:18:44 done Successfully deleted virtual cluster multi-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Tomorrow: External Nodes
&lt;/h2&gt;

&lt;p&gt;Multi-node with Docker containers is powerful, but what if you need real cloud resources? A GPU instance? A specific CPU architecture? Tomorrow, we’ll add a GCP Compute Engine instance as an external worker node to a vind cluster, all connected via VPN. That’s something KinD simply cannot do.&lt;/p&gt;

&lt;p&gt;All commands tested on macOS (Apple Silicon M1) with Docker Desktop and vCluster CLI v0.31.0.&lt;/p&gt;

</description>
      <category>vind</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>Day 2: Getting Started with vind: Your First Deployment with LoadBalancer</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Wed, 25 Mar 2026 11:00:00 +0000</pubDate>
      <link>https://dev.to/vcluster_89/day-2-getting-started-with-vind-your-first-deployment-with-loadbalancer-36b9</link>
      <guid>https://dev.to/vcluster_89/day-2-getting-started-with-vind-your-first-deployment-with-loadbalancer-36b9</guid>
      <description>&lt;p&gt;Yesterday I introduced vind and why I think it’s a better alternative to KinD. Today, let’s get hands-on. We’ll install vind, create a cluster, deploy an application, and see the built-in LoadBalancer in action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You need two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed and running&lt;/li&gt;
&lt;li&gt;vCluster CLI v0.31.0 or later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, check Docker:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client:
 Version:           28.5.1
 API version:       1.51
 Go version:        go1.24.8
 Git commit:        e180ab8
 Built:             Wed Oct  8 12:16:17 2025
 OS/Arch:           darwin/arm64
 Context:           desktop-linux

Server: Docker Desktop 4.48.0 (207573)
 Engine:
  Version:          28.5.1
  API version:      1.51 (minimum version 1.24)
  Go version:       go1.24.8
  Git commit:       f8215cc
  Built:            Wed Oct  8 12:18:25 2025
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.7.27
  GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
 runc:
  Version:          1.2.5
  GitCommit:        v1.2.5-0-g59923ef
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install the vCluster CLI (macOS)
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew install loft-sh/tap/vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or upgrade if you already have it.&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vcluster upgrade &lt;span class="nt"&gt;--version&lt;/span&gt; v0.32.1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:49:25 info Downloading version v0.32.1...
11:49:30 done Successfully updated vcluster to version v0.32.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify
&lt;/h2&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster version 0.32.1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set Docker as the Default Driver
&lt;/h2&gt;

&lt;p&gt;This is the key step that tells vCluster to use Docker instead of an existing Kubernetes cluster:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster use driver docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:50:00 done Successfully switched driver to docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You only need to do this once. It persists across sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Your First Cluster
&lt;/h2&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create blog-demo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:58:04 info Using vCluster driver 'docker' to create your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
11:58:04 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
11:58:04 info Ensuring environment for vCluster blog-demo...
11:58:05 done Created network vcluster.blog-demo
11:58:12 info Will connect vCluster blog-demo to platform...
11:58:13 info Starting vCluster standalone blog-demo
11:58:14 done Successfully created virtual cluster blog-demo
11:58:14 info Finding docker container vcluster.cp.blog-demo...
11:58:14 info Waiting for vCluster kubeconfig to be available...
11:58:17 info Waiting for vCluster to become ready...
11:58:29 done vCluster is ready
11:58:29 done Switched active kube context to vcluster-docker_blog-demo
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That took about 25 seconds. Let’s see what we got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME        STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
blog-demo   Ready    control-plane,master   30s   v1.35.0   172.23.0.2    &amp;lt;none&amp;gt;        Ubuntu 24.04.3 LTS   6.10.14-linuxkit   containerd://2.1.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A single-node cluster running Kubernetes v1.35.0 with containerd. Let’s check the namespaces:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get namespaces

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 STATUS   AGE
default              Active   108s
kube-flannel         Active   99s
kube-node-lease      Active   108s
kube-public          Active   109s
kube-system          Active   109s
local-path-storage   Active   99s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything you’d expect from a real Kubernetes cluster, including Flannel for networking and local-path-storage for persistent volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy an Application
&lt;/h2&gt;

&lt;p&gt;Let’s deploy nginx with 2 replicas and expose it via a LoadBalancer service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create deployment nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2
deployment.apps/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create Service with type:LoadBalancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl expose deployment nginx &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;LoadBalancer &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--target-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
service/nginx exposed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
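&lt;p&gt;If you prefer declarative manifests, the two imperative commands above roughly correspond to applying YAML like this (a sketch mirroring the flags used; kubectl create deployment labels pods with app: nginx):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;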



&lt;p&gt;Wait a few seconds for the pods to come up:&lt;/p&gt;

&lt;p&gt;Get Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
nginx-b6485fcbb-6w6gm   1/1     Running   0          29s   10.244.0.4   blog-demo   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-b6485fcbb-8dvvg   1/1     Running   0          29s   10.244.0.5   blog-demo   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
kubernetes   ClusterIP      10.96.0.1       &amp;lt;none&amp;gt;           443/TCP        3m31s
nginx        LoadBalancer   10.111.30.236   172.23.255.254   80:31860/TCP   37s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get Deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           69s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both pods are running and the LoadBalancer service is created; on Linux it gets an external IP automatically. On macOS, you need to run the vcluster create command with sudo to get full LoadBalancer support with HAProxy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Tip&lt;/strong&gt;: On macOS, run sudo vcluster create my-cluster to enable LoadBalancer IP assignment via HAProxy. Without sudo, you can still access services via NodePort.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl 172.23.255.254
&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html &lt;span class="o"&gt;{&lt;/span&gt; color-scheme: light dark&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
body &lt;span class="o"&gt;{&lt;/span&gt; width: 35em&lt;span class="p"&gt;;&lt;/span&gt; margin: 0 auto&lt;span class="p"&gt;;&lt;/span&gt;
font-family: Tahoma, Verdana, Arial, sans-serif&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.org/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://nginx.com/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you &lt;span class="k"&gt;for &lt;/span&gt;using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cluster Management
&lt;/h2&gt;

&lt;p&gt;vind gives you simple commands to manage your cluster lifecycle:&lt;/p&gt;

&lt;h1&gt;
  
  
  List all clusters
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcluster list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   NAME    | STATUS  | CONNECTED |  AGE
-----------+---------+-----------+--------
  blog-demo | running | True      | 102s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Disconnect from a cluster (keeps it running)
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcluster disconnect

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Reconnect later
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcluster connect blog-demo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Delete when done
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcluster delete blog-demo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s Under the Hood?
&lt;/h2&gt;

&lt;p&gt;When you create a cluster, vind sets up Docker containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ps &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s2"&gt;"table {{.Names}}&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="s2"&gt;{{.Status}}"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAMES                                 STATUS
vcluster.lb.blog-demo.nginx.default   Up 3 minutes
vcluster.cp.blog-demo                 Up 6 minutes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;vcluster.cp.blog-demo:&lt;/strong&gt; The control plane container with the Kubernetes API server, etcd, and the rest of the control plane&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;vcluster.lb.blog-demo.nginx.default:&lt;/strong&gt; The HAProxy load balancer for LoadBalancer services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can even peek inside:&lt;/p&gt;

&lt;h1&gt;
  
  
  View control plane logs
&lt;/h1&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker exec vcluster.cp.blog-demo journalctl -u vcluster --no-pager | tail -5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Mar 06 06:31:14 blog-demo vcluster[188]: 2026-03-06 06:31:14        INFO        commandwriter/commandwriter.go:128        allocated clusterIPs        {"component": "vcluster", "component": "apiserver", "location": "alloc.go:329", "service": "default/nginx", "clusterIPs": "{\"IPv4\":\"10.111.30.236\"}"}
Mar 06 06:31:14 blog-demo vcluster[188]: 2026-03-06 06:31:14        INFO        record/event.go:389        Event occurred        {"component": "vcluster", "object": {"name":"nginx","namespace":"default"}, "fieldPath": "", "kind": "Service", "apiVersion": "v1", "type": "Normal", "reason": "EnsuringLoadBalancer", "message": "Ensuring load balancer"}
Mar 06 06:31:14 blog-demo vcluster[188]: 2026-03-06 06:31:14        INFO        cloudcontrollermanager/loadbalancer_docker.go:247        Image haproxy:3.3-alpine found locally, skipping pull.        {"component": "vcluster"}
Mar 06 06:31:14 blog-demo vcluster[188]: 2026-03-06 06:31:14        INFO        cloudcontrollermanager/loadbalancer_docker.go:198        Started LoadBalancer container vcluster.lb.blog-demo.nginx.default with ip 172.23.255.254 and forward ports enabled        {"component": "vcluster"}
Mar 06 06:31:14 blog-demo vcluster[188]: 2026-03-06 06:31:14        INFO        record/event.go:389        Event occurred        {"component": "vcluster", "object": {"name":"nginx","namespace":"default"}, "fieldPath": "", "kind": "Service", "apiVersion": "v1", "type": "Normal", "reason": "EnsuredLoadBalancer", "message": "Ensured load balancer"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Check what's running inside the container
&lt;/h1&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker exec vcluster.cp.blog-demo crictl ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER           IMAGE               CREATED             STATE               NAME                     ATTEMPT             POD ID              POD                                       NAMESPACE
0ce64fdd8ec9d       2af158aaca82b       5 minutes ago       Running             nginx                    0                   d6ced28cb4fe5       nginx-b6485fcbb-8dvvg                     default
56f9406394d3e       2af158aaca82b       5 minutes ago       Running             nginx                    0                   e5b6ab67d5930       nginx-b6485fcbb-6w6gm                     default
181ce2847ff1c       b41714cf62496       7 minutes ago       Running             local-path-provisioner   0                   536bc927ec4eb       local-path-provisioner-5b9b9995f4-4d582   local-path-storage
e7ab4ae96019d       e3b7e577b2a83       7 minutes ago       Running             coredns                  0                   c765485409d89       coredns-79cf5f4c56-jh6jv                  kube-system
d1865682d318c       253e2cac1f011       7 minutes ago       Running             kube-flannel             0                   de40359cb6a81       kube-flannel-ds-5frp9                     kube-flannel
c9133c20d5937       de369f46c2ff5       7 minutes ago       Running             kube-proxy               0                   52d2a265d3d3c       kube-proxy-tvd85                          kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Custom Kubernetes Versions
&lt;/h2&gt;

&lt;p&gt;Want a specific Kubernetes version? Easy:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controlPlane.distro.k8s.version&lt;span class="o"&gt;=&lt;/span&gt;v1.32.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME         STATUS   ROLES                  AGE   VERSION
my-cluster   Ready    control-plane,master   66s   v1.32.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No need to find and pull specific node images like with KinD. Just specify the version.&lt;/p&gt;
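&lt;p&gt;If you keep cluster settings in a values file, the same key can go into vcluster.yaml (a sketch, assuming the Docker driver accepts a values file via -f the way the Helm driver does):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# vcluster.yaml -- same key as the --set flag above
controlPlane:
  distro:
    k8s:
      version: v1.32.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then create the cluster with vcluster create my-cluster -f vcluster.yaml.&lt;/p&gt;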

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vcluster delete blog-demo
info  Removing vCluster container vcluster.cp.blog-demo...
info  Deleted kube context vcluster-docker_blog-demo
&lt;span class="k"&gt;done  &lt;/span&gt;Successfully deleted virtual cluster blog-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean and fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tomorrow: Multi-Node Clusters&lt;/strong&gt;&lt;br&gt;
Today we created a single-node cluster and deployed an app. But what if you need to test pod scheduling across nodes, node affinity, or node drains?&lt;/p&gt;

&lt;p&gt;Tomorrow, we’ll create a multi-node vind cluster with 3 worker nodes and see pods distributed across them, just like a real cluster.&lt;/p&gt;

&lt;p&gt;All commands tested on macOS (Apple Silicon M1) with Docker Desktop and vCluster CLI v0.32.1.&lt;/p&gt;

&lt;p&gt;vind is open source: github.com/loft-sh/vind. If you like vind, do star the repo!&lt;/p&gt;

</description>
      <category>vind</category>
      <category>opensource</category>
      <category>docker</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>Day 1: Introduction to vind: Why I Replaced KinD with vCluster in Docker [vind]</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Tue, 24 Mar 2026 08:10:00 +0000</pubDate>
      <link>https://dev.to/vcluster_89/day-1-introduction-to-vind-why-i-replaced-kind-with-vcluster-in-docker-vind-2hgf</link>
      <guid>https://dev.to/vcluster_89/day-1-introduction-to-vind-why-i-replaced-kind-with-vcluster-in-docker-vind-2hgf</guid>
<description>&lt;p&gt;If you’ve been working with Kubernetes for a while, you’ve probably used KinD (Kubernetes in Docker). I’ve used it extensively for my local development, CI pipelines, demos, you name it. And for what it does, it works. But I’ve always felt like there was something missing.&lt;/p&gt;

&lt;p&gt;That’s why we built vind: to solve some of the shortcomings of KinD.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is vind?
&lt;/h2&gt;

&lt;p&gt;vind stands for vCluster in Docker. It’s a new way to run local Kubernetes clusters using the Docker driver. Instead of spinning up KinD clusters, you use the same vCluster CLI you might already know, but pointed at Docker as the backend.&lt;/p&gt;

&lt;p&gt;The result? A local Kubernetes cluster that runs entirely in Docker containers, just like KinD, but with a bunch of features that KinD simply does not have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Switched
&lt;/h2&gt;

&lt;p&gt;Overall KinD is great for basic use cases. But the moment you need anything beyond a simple single-node cluster, you start hitting walls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No LoadBalancer support: you need MetalLB or some workaround. With vind, LoadBalancer services work out of the box.&lt;/li&gt;
&lt;li&gt;No sleep/wake: you delete and recreate every time. With vind, you can pause a cluster and resume it later.&lt;/li&gt;
&lt;li&gt;No UI: KinD is purely CLI. vind has the vCluster Platform UI (free) for visual management.&lt;/li&gt;
&lt;li&gt;No external nodes: KinD is local-only. vind lets you join real cloud VMs (GCP, AWS) as worker nodes via a VPN backed by Tailscale.&lt;/li&gt;
&lt;li&gt;Manual image loading: KinD needs kind load docker-image every time; vind has a registry proxy that shares your Docker daemon’s image cache automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Here’s what’s happening under the hood when you create a vind cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3yric51cwbw6g91kcff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3yric51cwbw6g91kcff.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your machine runs Docker. Inside Docker, vind creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A control plane container with the Kubernetes API server, etcd, scheduler, and Flannel CNI (you can customize this too and install Cilium)&lt;/li&gt;
&lt;li&gt;Optional worker node containers, each one is a full node with kubelet and kube-proxy&lt;/li&gt;
&lt;li&gt;A Docker network for inter-node communication&lt;/li&gt;
&lt;li&gt;An HAProxy load balancer for LoadBalancer service support&lt;/li&gt;
&lt;li&gt;A registry proxy that connects to your host Docker’s containerd storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The beauty of this is that each component runs as a regular Docker container. You can inspect them, view logs, and manage them with standard Docker commands.&lt;/p&gt;
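&lt;p&gt;For instance, with a cluster named my-cluster (a hypothetical name; container names follow the vcluster.&amp;lt;role&amp;gt;.&amp;lt;cluster&amp;gt; pattern), standard Docker commands work as you’d expect:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Tail the control plane container's logs
docker logs --tail 20 vcluster.cp.my-cluster

# Inspect which Docker networks the control plane is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' vcluster.cp.my-cluster

# List the cluster's dedicated network
docker network ls --filter name=vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;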

&lt;h2&gt;
  
  
  Getting Started (Quick Preview)
&lt;/h2&gt;

&lt;p&gt;Here’s how fast you can get a cluster running:&lt;/p&gt;

&lt;h1&gt;
  
  
  Install or upgrade the vCluster CLI
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;loft-sh/tap/vcluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  or
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster upgrade &lt;span class="nt"&gt;--version&lt;/span&gt; v0.32.1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Set Docker as the driver
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster use driver docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create your first cluster
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vcluster create my-cluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. Three commands and you have a running Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;vcluster create my-cluster14:58:13 info Using vCluster driver &lt;span class="s1"&gt;'docker'&lt;/span&gt; to create your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
&lt;span class="go"&gt;14:58:13 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
14:58:13 info Ensuring environment for vCluster my-cluster...
14:58:14 done Created network vcluster.my-cluster
14:58:19 warn Load balancer type services are not supported inside the vCluster because this command was executed with insufficient privileges. To enable load balancer type services, run this command with sudo
14:58:20 info Will connect vCluster my-cluster to platform...
14:58:21 info Starting vCluster standalone my-cluster
14:58:23 done Successfully created virtual cluster my-cluster
14:58:23 info Finding docker container vcluster.cp.my-cluster...
14:58:23 info Waiting for vCluster kubeconfig to be available...
14:58:25 info Waiting for vCluster to become ready...
14:58:37 done vCluster is ready
14:58:37 done Switched active kube context to vcluster-docker_my-cluster
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  vind vs KinD: At a Glance
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyvbm7k5wbva94a3h12i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyvbm7k5wbva94a3h12i.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Coming This Week
&lt;/h2&gt;

&lt;p&gt;Over the next 6 days, I’ll walk you through everything vind can do with real commands, real outputs, and real use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Day 2: Getting Started: deploying your first app with LoadBalancer&lt;/li&gt;
&lt;li&gt;Day 3: Multi-Node Clusters: realistic multi-node setups with pod distribution&lt;/li&gt;
&lt;li&gt;Day 4: External Nodes: joining a GCP Compute instance to your local cluster&lt;/li&gt;
&lt;li&gt;Day 5: Actions with vind: using the setup-vind GitHub Action&lt;/li&gt;
&lt;li&gt;Day 6: Advanced Features: sleep/wake, registry proxy, and custom networking&lt;/li&gt;
&lt;li&gt;Day 7: The vCluster Platform UI: managing vind clusters visually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re tired of KinD’s limitations and want a local Kubernetes experience that actually feels production-like, follow along. I think you’ll be surprised at what vind can do.&lt;/p&gt;

&lt;p&gt;vind is open source: github.com/loft-sh/vind. If you like vind, do star the repo!&lt;/p&gt;

</description>
      <category>vind</category>
      <category>vcluster</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Introducing vMetal: Run Your GPU Data Center Like a Hyperscaler</title>
      <dc:creator>vCluster</dc:creator>
      <pubDate>Mon, 23 Mar 2026 11:44:15 +0000</pubDate>
      <link>https://dev.to/vcluster_89/introducing-vmetal-run-your-gpu-data-center-like-a-hyperscaler-2j49</link>
      <guid>https://dev.to/vcluster_89/introducing-vmetal-run-your-gpu-data-center-like-a-hyperscaler-2j49</guid>
      <description>&lt;p&gt;The race to build AI infrastructure is accelerating.&lt;/p&gt;

&lt;p&gt;Across the industry, organizations are deploying massive GPU clusters to power the next generation of AI applications. New Neocloud providers are emerging, enterprises are building internal AI factories, and demand for GPU infrastructure continues to surge.&lt;/p&gt;

&lt;p&gt;But while buying GPUs has become easier, operating them like a cloud platform is still incredibly difficult.&lt;/p&gt;

&lt;p&gt;Selling raw GPU infrastructure is quickly becoming a commodity. To stand out and maximize GPU utilization, providers must deliver something more: a managed platform experience similar to EC2 or EKS, where teams can spin up environments and start running workloads immediately.&lt;/p&gt;

&lt;p&gt;Building that experience requires a complex stack of infrastructure systems, from machine provisioning to cluster orchestration to tenant environments and AI platforms. Many of the end-to-end platforms designed to manage infrastructure date back nearly two decades, while newer open source tools tend to solve only individual parts of the problem. As organizations rapidly build new GPU data centers and AI factories, the pace of infrastructure deployment has outgrown the tooling available today. As a result, most organizations end up attempting to stitch together a mix of legacy platforms, open source tools, and custom automation.&lt;/p&gt;

&lt;p&gt;At vCluster Labs, we believe AI infrastructure should operate as a unified platform, not a collection of disconnected tools. Today, we’re introducing vMetal, a new bare metal provisioning and lifecycle management layer designed to help Neocloud providers and AI factories turn raw GPU hardware into programmable infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Operating GPU Infrastructure Is Hard
&lt;/h2&gt;

&lt;p&gt;Buying GPUs is only the first step. Operating them like a cloud platform is another story.&lt;/p&gt;

&lt;p&gt;Organizations building GPU infrastructure are expected to deliver an experience similar to hyperscalers, where teams can spin up environments on demand and run workloads immediately. But achieving that experience requires infrastructure capabilities across several layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bare metal provisioning and hardware lifecycle management&lt;/li&gt;
&lt;li&gt;Network orchestration across clusters and tenants&lt;/li&gt;
&lt;li&gt;Kubernetes cluster operations&lt;/li&gt;
&lt;li&gt;Tenant isolation and environment provisioning&lt;/li&gt;
&lt;li&gt;AI tooling and GPU scheduling platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most organizations attempt to build these capabilities internally or combine multiple tools to approximate them. But building a GPU cloud platform from scratch takes significant engineering effort and time. And time matters. A $10M GPU cluster generating several dollars per GPU hour can lose millions in potential revenue if platform launch is delayed by months.&lt;/p&gt;

&lt;p&gt;The challenge is not hardware. It is infrastructure automation. We built vMetal to solve that problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing vMetal: Bare Metal Provisioning for AI Infrastructure
&lt;/h2&gt;

&lt;p&gt;vMetal is a new machine management layer within the vCluster Platform that automates the lifecycle of bare metal GPU servers.&lt;/p&gt;

&lt;p&gt;It transforms physical infrastructure into programmable capacity that can be provisioned, assigned, upgraded, and repurposed through a centralized control plane. Instead of manually configuring machines or building custom provisioning pipelines, infrastructure operators can manage their entire GPU fleet through a unified system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flabhepa4kzqp6b9wy14m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flabhepa4kzqp6b9wy14m.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With vMetal, organizations can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically discover machines connected to the network&lt;/li&gt;
&lt;li&gt;Provision servers via PXE boot&lt;/li&gt;
&lt;li&gt;Manage machine lifecycle events such as upgrades or reconfiguration&lt;/li&gt;
&lt;li&gt;Assign machines directly to Kubernetes clusters or infrastructure pools&lt;/li&gt;
&lt;li&gt;Prepare machines for multi-tenant environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is a system where bare metal behaves more like cloud infrastructure. Servers become resources that can be allocated, reassigned, and managed through software workflows rather than manual operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Rack to Cluster in Minutes
&lt;/h2&gt;

&lt;p&gt;Bringing new GPU hardware online is traditionally slow and manual. Servers often require installation, configuration, and networking setup before they can even be attached to a cluster.&lt;/p&gt;

&lt;p&gt;vMetal automates this entire process. Using automated provisioning and PXE-based bootstrapping, machines can move from rack installation to cluster-ready nodes in minutes.&lt;/p&gt;

&lt;p&gt;Infrastructure operators can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Power on machines&lt;/li&gt;
&lt;li&gt;Automatically install operating systems&lt;/li&gt;
&lt;li&gt;Apply configuration and networking policies&lt;/li&gt;
&lt;li&gt;Attach nodes to Kubernetes clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All through the vCluster Platform.&lt;/p&gt;

&lt;p&gt;This dramatically reduces the time required to expand GPU capacity and allows infrastructure teams to operate clusters with the speed and flexibility expected from modern cloud environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certified Stacks: Pre-Validated AI Platforms You Can Deploy in One Command
&lt;/h2&gt;

&lt;p&gt;Provisioning infrastructure is only part of the challenge. Platform teams still need to assemble the tooling required for real AI workloads, including GPU scheduling systems, platform services, and AI development frameworks. That is where vCluster Certified Stacks come in.&lt;/p&gt;

&lt;p&gt;Certified Stacks provide tested and maintained blueprints for deploying AI-ready platforms, combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCluster tenancy configurations&lt;/li&gt;
&lt;li&gt;Kubernetes platform components&lt;/li&gt;
&lt;li&gt;GPU scheduling and workload orchestration&lt;/li&gt;
&lt;li&gt;AI tooling and development environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These stacks allow platform teams to deploy complete AI environments quickly while still retaining the flexibility to customize their infrastructure.&lt;/p&gt;

&lt;p&gt;The first Certified Stacks support a growing ecosystem of AI infrastructure platforms, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NVIDIA Run:ai for enterprise GPU orchestration&lt;/li&gt;
&lt;li&gt;SkyPilot for running and scaling AI workloads across infrastructure&lt;/li&gt;
&lt;li&gt;Ray for distributed AI applications and model training&lt;/li&gt;
&lt;li&gt;Slinky for AI platform orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn44xavt84fbiau5onf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn44xavt84fbiau5onf4.png" alt=" " width="664" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each stack is delivered as a maintained Terraform blueprint, enabling teams to go from infrastructure to a working AI platform in a repeatable and reliable way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Infrastructure Stack for AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel48423no23ccnls5aio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel48423no23ccnls5aio.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the introduction of vMetal and vCluster Certified Stacks, the vCluster ecosystem now spans the full infrastructure stack required to run AI workloads. Organizations building GPU clouds or enterprise AI platforms can deploy a layered architecture designed specifically for AI infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vMetal: Bare metal machine management that provisions and operates GPU servers.&lt;/li&gt;
&lt;li&gt;vCluster: Tenant and cluster orchestration that enables multiple teams or customers to safely share Kubernetes infrastructure.&lt;/li&gt;
&lt;li&gt;vNode: Secure runtime isolation for AI workloads running inside shared clusters.&lt;/li&gt;
&lt;li&gt;vCluster Certified Stacks: Preconfigured AI environments that combine GPU scheduling, platform services, and AI tooling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these layers create a unified system capable of running AI workloads from physical machines all the way up to production AI environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Turn Your GPU Racks Into a Cloud Platform
&lt;/h2&gt;

&lt;p&gt;Running AI at scale requires more than powerful hardware. It requires infrastructure capable of coordinating machines, clusters, tenants, and workloads across an entire platform.&lt;/p&gt;

&lt;p&gt;With the introduction of vMetal and Certified Stacks, the vCluster ecosystem now provides a unified stack for AI infrastructure: Bare Metal → Kubernetes → Tenant Environments → AI Platforms&lt;/p&gt;

&lt;p&gt;Instead of stitching together dozens of tools, platform teams can now build AI infrastructure using components designed to work together from the start.&lt;/p&gt;

&lt;p&gt;If you’re building GPU infrastructure for a Neocloud or AI factory and want to learn more, visit &lt;a href="https://www.vmetal.ai/" rel="noopener noreferrer"&gt;vMetal&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>vcluster</category>
      <category>vmetal</category>
      <category>gpu</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
