
From Zero to Hero with Loft’s Vcluster: A Comprehensive Guide to Multi-Tenancy in Kubernetes

Kubernetes is the engine driving today’s cloud-native applications, but managing multiple tenants across different environments can quickly become a challenge. That’s where Loft’s Vcluster comes in, offering a powerful solution for achieving multi-tenancy in Kubernetes with ease.

In this blog post, I’ll take you on a journey from zero to hero, showing you how to set up and manage virtual clusters (Vclusters) to isolate tenants, optimize resources, and simplify your Kubernetes management.

Understanding Multi-Tenancy and Why It’s Essential

Multi-tenancy allows multiple independent users or groups—called tenants—to share the same infrastructure while maintaining isolation. In Kubernetes, multi-tenancy is key to:

  • Efficient Resource Usage: Maximize hardware utilization by sharing resources across tenants.
  • Cost Savings: Reduce infrastructure costs by avoiding redundant clusters.
  • Simplified Management: Manage multiple tenants under a single Kubernetes cluster.

The Challenges of Traditional Kubernetes Multi-Tenancy

Imagine you're running a SaaS platform for schools, and each school needs its own Kubernetes environment. Creating a separate cluster for each school quickly becomes inefficient and costly. The challenges include:

  • High Costs: Running multiple clusters means paying for multiple control planes and node pools.
  • Redundancy: Each cluster requires its own set of components, leading to duplication.
  • Complex Management: Managing hundreds of clusters is a logistical nightmare.

Meet Vcluster: The Multi-Tenancy Game-Changer

Loft’s Vcluster allows you to create isolated virtual Kubernetes clusters within a single physical cluster. These virtual clusters have their own control planes but share the underlying resources, making them ideal for multi-tenancy.

Key Benefits of Vcluster:

  • Isolation: Each tenant has its own Kubernetes API server and control plane, ensuring strong isolation.
  • Cost Efficiency: Share node pools and infrastructure, reducing costs.
  • Simplicity: Manage multiple tenants in a single cluster without sacrificing control.

Getting Started: Setting Up Vcluster from Scratch

Let’s dive into setting up a Vcluster in a Kubernetes environment, using our educational SaaS platform example with two schools: Green Valley High and Blue Ridge Academy.

Prerequisites:

  • A running Kubernetes cluster (EKS, AKS, Kind, etc.).
  • Helm installed on your local machine.
  • Loft’s Vcluster CLI installed.

Step 1: Install Vcluster and Helm

First, ensure that Helm and Vcluster are installed. If you haven't installed them yet, use the following commands:

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Install Vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.12/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster



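To confirm both tools are available on your PATH, check their versions (exact output varies by release):

# Verify the installations
helm version
vcluster version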

Step 2: Prepare Your Kubernetes Cluster

With your Kubernetes cluster ready, the next step is to create namespaces that will represent each school.

# Create namespaces for the two schools
kubectl create ns green-valley-high
kubectl create ns blue-ridge-academy
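
You can confirm that both namespaces exist before moving on:

# List the namespaces we just created
kubectl get ns green-valley-high blue-ridge-academy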

Step 3: Deploy Vclusters for Each School

Now, let’s deploy a Vcluster for each school using Helm. We’ll start with Green Valley High, a medium-sized school with moderate resource needs. First, switch your current kubectl context to the school’s namespace:

# Switch the current context to Green Valley High's namespace
kubectl config set-context --current --namespace=green-valley-high
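
To double-check that subsequent commands will target the right namespace, you can inspect the current context:

# Print the namespace of the current context
kubectl config view --minify --output 'jsonpath={..namespace}'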

Step 4: Create the Values File

You need to create a values.yaml file with the necessary configuration. Here's a simple example:

# green-valley-values.yaml
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 1
    memory: 2Gi

# Additional configurations can be added here as needed

Save this file as green-valley-values.yaml in the directory where you’re running the Helm command.
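
Note that the exact layout of the values file depends on the version of the vcluster Helm chart you deploy. If a setting doesn’t seem to be picked up, inspect the chart’s default values to see where keys such as resources belong:

# Show the default values of the vcluster chart (keys vary between chart versions)
helm show values vcluster --repo https://charts.loft.sh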

Step 5: Run the Helm Command

After creating the values file, run the Helm command to deploy the Vcluster:

helm upgrade --install green-valley-high vcluster \
  --values green-valley-values.yaml \
  --repo https://charts.loft.sh \
  --namespace green-valley-high
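
You can also confirm that the release was created with Helm itself:

# List Helm releases in the namespace
helm list -n green-valley-high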

Step 6: Verify the Installation

If the command runs successfully, you should see output indicating that the release was installed. You can verify the installation by checking the pods in the green-valley-high namespace:

kubectl get pods -n green-valley-high

This should list the pods associated with the green-valley-high Vcluster:

NAME                                                         READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-9wckz-x-kube-system-x-green-valley-high   1/1     Running   0          33s
green-valley-high-0                                          1/1     Running   0          74s

The Helm install also prints the chart’s release notes with connection instructions:

Thank you for installing vcluster.

Your vcluster is named green-valley-high in namespace green-valley-high.

To connect to the vcluster, use vcluster CLI (https://www.vcluster.com/docs/getting-started/setup):
  $ vcluster connect green-valley-high -n green-valley-high
  $ vcluster connect green-valley-high -n green-valley-high -- kubectl get ns

Next, we’ll deploy a Vcluster for Blue Ridge Academy, a larger school that requires more computing power.

Here’s an example blue-ridge-values.yaml file tailored for Blue Ridge Academy, which requires more resources than Green Valley High.

# blue-ridge-values.yaml

# Resource limits for Blue Ridge Academy's Vcluster
resources:
  limits:
    cpu: 4           # Limit of 4 CPU cores
    memory: 8Gi      # Limit of 8Gi of memory
  requests:
    cpu: 2           # Request 2 CPU cores
    memory: 4Gi      # Request 4Gi of memory

# Persistence settings (optional)
persistence:
  enabled: true
  storageClass: "standard"   # Use the appropriate storage class
  size: 20Gi                 # Size of the persistent volume

# Additional Vcluster configurations
kubernetesVersion: "v1.24.0"  # Specify the Kubernetes version for the Vcluster

# Networking settings (optional)
network:
  domain: "blue-ridge.local"  # Custom DNS domain for the Vcluster
  serviceType: "ClusterIP"    # Service type for the Vcluster's API server

# Custom labels and annotations (optional)
metadata:
  labels:
    environment: "production"
    tenant: "blue-ridge-academy"
  annotations:
    owner: "ops-team"

# Node selector and tolerations (optional)
nodeSelector:
  node-role.kubernetes.io/vcluster-node: ""

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "vcluster"
  effect: "NoSchedule"

Key Sections Explained:

  • Resource Limits: Configures how much CPU and memory the Vcluster is allowed to use. This is higher than what might be configured for Green Valley High.
  • Persistence: This section enables persistent storage for the Vcluster, which is useful if you need to maintain state across restarts.
  • Kubernetes Version: Specifies the version of Kubernetes that the Vcluster will run. You can adjust this based on your needs.
  • Networking: Allows you to set a custom DNS domain and service type for the Vcluster.
  • Metadata: Custom labels and annotations for easier identification and management.
  • Node Selector and Tolerations: These are advanced settings to control which nodes the Vcluster can be scheduled on.

You can save this configuration in a file named blue-ridge-values.yaml and use it during your Helm deployment:

helm upgrade --install blue-ridge-academy vcluster \
  --values blue-ridge-values.yaml \
  --repo https://charts.loft.sh \
  --namespace blue-ridge-academy

Troubleshooting

If you run into an issue like the one below, where the Vcluster pod stays stuck in Pending:

kubectl get pods -n blue-ridge-academy 
NAME                   READY   STATUS    RESTARTS   AGE
blue-ridge-academy-0   0/1     Pending   0          2m53s

Describe the pod to inspect its scheduling events:

kubectl describe pod blue-ridge-academy-0 -n blue-ridge-academy

The tail end of the output shows the scheduling failure:

  Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns-custom
    Optional:  true
  kube-api-access-x6bnv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              node-role.kubernetes.io/vcluster-node=
Tolerations:                 dedicated=vcluster:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  82s   default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

The error message you’re seeing indicates that the pod blue-ridge-academy-0 is unable to be scheduled because the node(s) in your cluster do not match the node selector or affinity rules defined in your configuration.
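
To resolve this, either label (and optionally taint) a node so it matches the selector, or remove the nodeSelector and tolerations block from blue-ridge-values.yaml and upgrade the release. A rough sketch, assuming a worker node named worker-1 (substitute a real node name from kubectl get nodes):

# Option 1: label a node so it matches the Vcluster's nodeSelector
kubectl label nodes worker-1 node-role.kubernetes.io/vcluster-node=

# Optionally taint it so only workloads tolerating "dedicated=vcluster" schedule there
kubectl taint nodes worker-1 dedicated=vcluster:NoSchedule

# Option 2: delete the nodeSelector/tolerations section from blue-ridge-values.yaml
# and re-apply the release
helm upgrade --install blue-ridge-academy vcluster \
  --values blue-ridge-values.yaml \
  --repo https://charts.loft.sh \
  --namespace blue-ridge-academy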

Step 7: Connect to Your Virtual Clusters

Once the Vclusters are up and running, you can connect to them using the Vcluster CLI.

# Connect to Green Valley High's Vcluster
vcluster connect green-valley-high --namespace green-valley-high
export KUBECONFIG="./kubeconfig.yaml"

# Verify the connection by creating a namespace
kubectl create ns test-green-valley

This new namespace will only be visible in Green Valley High’s Vcluster, not in the underlying physical cluster, confirming the isolation.

Repeat the connection process for Blue Ridge Academy:

# Connect to Blue Ridge Academy's Vcluster
vcluster connect blue-ridge-academy --namespace blue-ridge-academy
export KUBECONFIG="./kubeconfig.yaml"
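
Depending on your Vcluster CLI version, vcluster connect either writes a kubeconfig.yaml file (as above) or switches your current kube-context directly. When you’re finished working inside a virtual cluster, switch back to the host cluster:

# If connect switched your kube-context, return to the host cluster
vcluster disconnect

# If you exported KUBECONFIG manually (as above), simply unset it
unset KUBECONFIG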

Step 8: Managing Resources and Scaling

Each Vcluster can be configured with resource limits specific to each school. For example, Green Valley High might be allocated moderate resources, while Blue Ridge Academy gets more to accommodate its larger student body.

In your values.yaml files, you can set these limits:

# green-valley-values.yaml
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 1
    memory: 2Gi

# blue-ridge-values.yaml
resources:
  limits:
    cpu: 4
    memory: 8Gi
  requests:
    cpu: 2
    memory: 4Gi

This setup ensures that each school can only use the resources allocated to it, preventing any one tenant from overwhelming the shared infrastructure.
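
If you want an additional guardrail at the host-cluster level, you can also apply a standard Kubernetes ResourceQuota to each school’s namespace so that everything running in it stays within a hard cap. A minimal sketch with hypothetical figures for Green Valley High:

# Cap total resource usage in Green Valley High's host namespace
kubectl apply -n green-valley-high -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: green-valley-quota
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
EOF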

Step 9: Cleanup

When you’re done with your testing or deployment, you can easily remove the virtual clusters using Helm:

# Delete Vcluster for Green Valley High
helm delete green-valley-high -n green-valley-high

# Delete Vcluster for Blue Ridge Academy
helm delete blue-ridge-academy -n blue-ridge-academy
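
Note that helm delete does not remove everything: persistent volume claims created for the Vclusters’ data and the namespaces themselves may be left behind. For a complete cleanup, remove those as well:

# Remove leftover PVCs and the tenant namespaces
kubectl delete pvc --all -n green-valley-high
kubectl delete pvc --all -n blue-ridge-academy
kubectl delete ns green-valley-high blue-ridge-academy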

Conclusion: Simplifying Multi-Tenancy with Vcluster

Congratulations! You’ve successfully set up and managed virtual clusters using Loft’s Vcluster, providing isolated Kubernetes environments for different schools within a single cluster. By using Vcluster, you’ve simplified tenant management, reduced costs, and ensured that each school has the resources it needs without sacrificing control.

As you continue to explore Vcluster, think about how it can benefit your own use cases—whether you’re managing multiple clients, teams, or environments. Vcluster offers a flexible and powerful way to implement multi-tenancy in Kubernetes.

Ready to dive deeper? Explore advanced configurations and use cases in the official Vcluster documentation and take your Kubernetes expertise to the next level!

Happy clustering!
