<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lukas Gentele</title>
    <description>The latest articles on DEV Community by Lukas Gentele (@gentele).</description>
    <link>https://dev.to/gentele</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F281956%2F2b127f9f-3fe5-4226-9348-4d2a202fda6b.jpeg</url>
      <title>DEV Community: Lukas Gentele</title>
      <link>https://dev.to/gentele</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gentele"/>
    <language>en</language>
    <item>
      <title>Ephemeral PR environment using vCluster</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 24 Feb 2025 06:12:13 +0000</pubDate>
      <link>https://dev.to/gentele/ephemeral-pr-environment-using-vcluster-3oj4</link>
      <guid>https://dev.to/gentele/ephemeral-pr-environment-using-vcluster-3oj4</guid>
      <description>&lt;p&gt;In a fast-paced development environment, having an isolated and ephemeral environment to test changes for every pull request (PR) is a game-changer. In this blog, I’ll walk you through setting up ephemeral PR environments using &lt;strong&gt;vCluster&lt;/strong&gt;, enabling seamless testing of your application in a Kubernetes environment. We'll also leverage GitHub Actions for automation, ensuring every labeled PR dynamically creates a vCluster, deploys the application, and cleans up upon merging or label removal.&lt;/p&gt;

&lt;p&gt;Let’s dive into the &lt;strong&gt;step-by-step guide&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is vCluster?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;vCluster&lt;/a&gt; is a technology that allows you to create lightweight, isolated Kubernetes clusters within a host cluster. These virtual clusters offer full Kubernetes functionality while being resource-efficient, making them ideal for scenarios like PR testing environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Ephemeral PR Environments?
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments allow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Testing pull request changes in an isolated environment&lt;/li&gt;
&lt;li&gt;  Quick validation without interfering with the main cluster&lt;/li&gt;
&lt;li&gt;  Automatic cleanup post-testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By leveraging &lt;strong&gt;vCluster&lt;/strong&gt; and &lt;strong&gt;GitHub Actions&lt;/strong&gt;, you can automate this workflow and ensure every PR gets its own dedicated environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Kubernetes cluster 
&lt;/h4&gt;

&lt;p&gt;You need a Kubernetes cluster. In this case I am using a DigitalOcean Kubernetes cluster, but any cluster should work. To keep the scenario close to production, I used a cluster that can provision Services of type &lt;code&gt;LoadBalancer&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get nodes&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
live-demo-e0is0   Ready    &amp;lt;none&amp;gt;   19d   v1.31.1
live-demo-e0is1   Ready    &amp;lt;none&amp;gt;   19d   v1.31.1
live-demo-e0isz   Ready    &amp;lt;none&amp;gt;   19d   v1.31.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the NGINX ingress controller:&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/cloud/deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get po,svc -n ingress-nginx
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-lcb85       0/1     Completed   0          19d
pod/ingress-nginx-admission-patch-xl2fk        0/1     Completed   0          19d
pod/ingress-nginx-controller-79fcc99b4-7f7ls   1/1     Running     0          19d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Getting the LoadBalancer IP for the ingress controller:&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get svc -n ingress-nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.109.28.126   209.38.160.229   80:31228/TCP,443:30435/TCP   19d
service/ingress-nginx-controller-admission   ClusterIP      10.109.15.162   &amp;lt;none&amp;gt;           443/TCP                      19d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
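&lt;p&gt;If you want the EXTERNAL-IP as a plain value (for example, to script the DNS step), you can filter the service listing. This is a minimal sketch: the sample output below is hard-coded so the snippet is self-contained, whereas against a live cluster you would pipe &lt;code&gt;kubectl get svc -n ingress-nginx&lt;/code&gt; into the same filter instead.&lt;/p&gt;

```shell
# Pick the EXTERNAL-IP (4th column) of the LoadBalancer row out of
# `kubectl get svc`-style output. The variable stands in for live output.
svc_output='NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.109.28.126   209.38.160.229   80:31228/TCP,443:30435/TCP   19d
ingress-nginx-controller-admission   ClusterIP      10.109.15.162   none             443/TCP                      19d'

# awk splits each row on whitespace; keep rows whose TYPE is LoadBalancer.
LB_IP=$(printf '%s\n' "$svc_output" | awk '$2 == "LoadBalancer" { print $4 }')
echo "$LB_IP"
```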

&lt;p&gt;Domain mapping:&lt;/p&gt;

&lt;p&gt;Our application needs dynamic ingress hosts for testing, so we add the LoadBalancer IP of the ingress controller as an A record on the domain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F5nhvb6%2Ff4i0g5so06.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F5nhvb6%2Ff4i0g5so06.webp" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect the Kubernetes cluster to the platform&lt;/p&gt;

&lt;p&gt;We will enable vCluster Pro in order to use templates and create the clusters. For simplicity, I am using my &lt;a href="https://vcluster.cloud/" rel="noopener noreferrer"&gt;vcluster.cloud&lt;/a&gt; account and creating an access key to log in; this way I don’t have to run any agent on the current cluster. You can either run &lt;code&gt;vcluster platform start&lt;/code&gt; or sign up on &lt;a href="https://vcluster.cloud/" rel="noopener noreferrer"&gt;vCluster cloud&lt;/a&gt;. Once you log in, go to &lt;a href="https://www.vcluster.com/docs/platform/administer/users-permissions/access-keys" rel="noopener noreferrer"&gt;access keys&lt;/a&gt; and create a short-lived access key for the demo (remember to delete the key after the demo for security reasons).&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster platform login https://saiyam.vcluster.cloud --access-key &amp;lt;your-access-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fvrd9nk%2Fwisz1kr8cr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fvrd9nk%2Fwisz1kr8cr.webp" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fp7zj83%2F1sxn85vh5i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fp7zj83%2F1sxn85vh5i.webp" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a template under vCluster templates in the vCluster cloud platform instance.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sync:
  fromHost:
    ingressClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
external:
  platform:
    autoSleep:
      afterInactivity: 3600  # Automatically sleep after 1 hour of inactivity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So far, we have a Kubernetes cluster with an ingress controller installed and the public IP of the NGINX controller pointed at our domain. &lt;/p&gt;

&lt;p&gt;We have also logged into the platform using an access key created on vcluster.cloud. Now let’s look at the demo application we have. &lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Application
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fjjv6zq%2Fgbqbaj9ijg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fjjv6zq%2Fgbqbaj9ijg.webp" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F6wb8j7%2F2d3o6vv14m.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F6wb8j7%2F2d3o6vv14m.webp" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scenario we are trying to achieve here involves a sample application deployed onto a Kubernetes cluster. Often, in organizations, new features or bug fixes need to be deployed and tested before being merged into the main branch. In this case, a developer raises a pull request and adds a label to test it. Based on GitHub Actions, the application is built, and then a deployment, service, and ingress Kubernetes object file are generated and pushed to a new branch. A virtual cluster is created, and the new deployment file is applied, allowing the developer to test and verify the new application deployment. &lt;/p&gt;

&lt;p&gt;Let’s see how this looks in practice. &lt;/p&gt;

&lt;p&gt;GitHub repo - &lt;a href="https://github.com/saiyam1814/vcluster-demo" rel="noopener noreferrer"&gt;https://github.com/saiyam1814/vcluster-demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application for this demo is a simple Go-based HTTP server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main
import (
    "fmt"
    "net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hellooo, World for blog!!")
}
func main() {
    http.HandleFunc("/", handler)
    fmt.Println("Starting server on :8080")
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        panic(err)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Step 1: Setting Up the Deployment Template
&lt;/h3&gt;

&lt;p&gt;The application is packaged as a Kubernetes deployment and exposed via a service and ingress. The deployment uses Jinja2 templating to inject dynamic values like the image tag and ingress host.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tmpl/deploy.j2:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: {{ image_deploy_tag }}
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-world
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  rules:
  - host: {{ ingress_tag }}
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
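&lt;p&gt;To make the templating step concrete, here is a tiny Python stand-in for what the Jinja2 rendering does in CI (illustrative only, not the actual &lt;code&gt;jinja2-action&lt;/code&gt;; the variable names match the template above, and the tag value is a made-up example):&lt;/p&gt;

```python
import re

# Values the GitHub Action would supply; the tag here is a made-up example.
variables = {
    "image_deploy_tag": "docker.io/saiyam911/vcluster-demo:sha-abc1234",
    "ingress_tag": "pr14.vcluster.tech",
}

# Two representative lines from tmpl/deploy.j2
template = "image: {{ image_deploy_tag }}\nhost: {{ ingress_tag }}"

def render(tmpl, values):
    # Substitute each {{ name }} placeholder, like a tiny subset of Jinja2.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: values[m.group(1)], tmpl)

print(render(template, variables))
```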




&lt;h3&gt;
  
  
  Step 2: Automating with GitHub Actions
&lt;/h3&gt;

&lt;p&gt;GitHub Actions handles the workflow from building the application to deploying it on a vCluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  PR Workflow
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;File: .github/workflows/build-and-deploy.yml&lt;/strong&gt; This workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Builds the application with the latest changes made by the developer using ko&lt;/li&gt;
&lt;li&gt; Pushes the container image to a Docker Hub account (credentials for which should be set as Actions secrets, as described previously)&lt;/li&gt;
&lt;li&gt; Creates a deployment manifest using Jinja2: the action replaces the ingress host and deployment image variables in the Jinja template and pushes the result to a new feature branch&lt;/li&gt;
&lt;li&gt; Creates a vCluster&lt;/li&gt;
&lt;li&gt; Deploys the application to the vCluster&lt;/li&gt;
&lt;li&gt; Exposes it via ingress for testing&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and Deploy with vCluster

on:
  pull_request:
    types: [labeled]

jobs:
  build-and-deploy:
    if: ${{ github.event.label.name == 'test' }}
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout PR Code
      - name: Checkout PR Code
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}

      # Step 2: Set up Go
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.22.5'

      # Step 3: Set up ko
      - name: Set up ko
        uses: ko-build/setup-ko@v0.6
        with:
          version: v0.14.1

      # Step 4: Log in to Docker Hub
      - name: Log in to Docker Hub
        env:
          KO_DOCKER_REPO: docker.io/saiyam911
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | ko login docker.io --username ${{ secrets.DOCKER_USERNAME }} --password-stdin

      # Step 5: Build and Push Image
      - name: Build and Push Image
        env:
          KO_DOCKER_REPO: docker.io/saiyam911/vcluster-demo
        run: |
          cd app
          export IMAGE_TAG=sha-$(git rev-parse --short HEAD)
          echo "image_deploy_tag=docker.io/saiyam911/vcluster-demo:$IMAGE_TAG" &amp;gt;&amp;gt; $GITHUB_ENV
          ko build --bare -t $IMAGE_TAG

      # Step 6: Generate Deployment Manifest
      - name: Generate Deployment Manifest
        uses: cuchi/jinja2-action@v1.1.0
        with:
          template: tmpl/deploy.j2
          output_file: deploy/deployment.yaml
          strict: true
          variables: |
            image_deploy_tag=${{ env.image_deploy_tag }}
            ingress_tag=pr${{ github.event.pull_request.number }}.vcluster.tech

      # Step 7: Install vCluster CLI
      - name: Install vCluster CLI
        uses: loft-sh/setup-vcluster@main

      # Step 8: Login to vCluster Platform
      - name: Login to vCluster Platform instance
        env:
          LOFT_URL: ${{ secrets.VCLUSTER_PLATFORM_URL }}
          ACCESS_KEY: ${{ secrets.VCLUSTER_ACCESS_KEY }}
        run: |
          vcluster platform login $LOFT_URL --access-key $ACCESS_KEY

      # Step 9: Create vCluster for the PR
      - name: Create A vCluster
        env:
          NAME: pr-${{ github.event.pull_request.number }}
        run: |
          vcluster platform create vcluster $NAME --project default --template my-template --link "Preview=http://pr${{ github.event.pull_request.number }}.vcluster.tech"

      # Step 10: Deploy to vCluster
      - name: Deploy Application to vCluster
        run: |
          kubectl apply -Rf deploy/

      # Step 11: Test Application with curl
      - name: Test Application
        run: |
          sleep 10
          curl --retry 5 --retry-delay 10 http://pr${{ github.event.pull_request.number }}.vcluster.tech
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
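&lt;p&gt;The naming convention is what ties these steps together: the image tag is derived from the short commit SHA, while the ingress host and vCluster name come from the PR number. A sketch of that derivation with hard-coded example values (in CI these come from git and the GitHub event context):&lt;/p&gt;

```shell
# Derive the names the workflow uses from a commit SHA and PR number.
COMMIT_SHA="abc1234def5678"   # in CI: git rev-parse HEAD
PR_NUMBER=14                  # in CI: github.event.pull_request.number

IMAGE_TAG="sha-${COMMIT_SHA:0:7}"   # matches sha-$(git rev-parse --short HEAD)
IMAGE="docker.io/saiyam911/vcluster-demo:${IMAGE_TAG}"
INGRESS_HOST="pr${PR_NUMBER}.vcluster.tech"
VCLUSTER_NAME="pr-${PR_NUMBER}"

echo "$IMAGE $INGRESS_HOST $VCLUSTER_NAME"
```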




&lt;h3&gt;
  
  
  Step 3: Cleanup Workflow
&lt;/h3&gt;

&lt;p&gt;Once the PR is merged or the label is removed, the ephemeral vCluster is deleted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File: .github/workflows/cleanup.yml&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Clean Up vCluster

on:
  pull_request:
    types: [closed, unlabeled]

jobs:
  cleanup:
    if: (github.event.action == 'closed' &amp;amp;&amp;amp; github.event.pull_request.merged == true) || github.event.label.name == 'test'
    runs-on: ubuntu-latest

    steps:
      # Step 1: Install vCluster CLI
      - name: Install vCluster CLI
        uses: loft-sh/setup-vcluster@main


      # Step 2: Login to vCluster Platform
      - name: Login to vCluster Platform instance
        env:
          LOFT_URL: ${{ secrets.VCLUSTER_PLATFORM_URL }}
          ACCESS_KEY: ${{ secrets.VCLUSTER_ACCESS_KEY }}
        run: |
          vcluster platform login $LOFT_URL --access-key $ACCESS_KEY


      # Step 3: Delete vCluster
      - name: Delete vCluster
        env:
          NAME: pr-${{ github.event.pull_request.number }}
        run: |
          vcluster platform delete vcluster $NAME --project default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
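&lt;p&gt;The &lt;code&gt;if:&lt;/code&gt; condition on the cleanup job is worth unpacking: it fires either when a merged PR is closed or when the &lt;code&gt;test&lt;/code&gt; label is removed. A small Python model of that predicate (illustrative only):&lt;/p&gt;

```python
def should_cleanup(action, merged=False, label=None):
    # Mirrors: (github.event.action == 'closed' and pull_request.merged)
    #          or github.event.label.name == 'test'
    return (action == "closed" and merged) or label == "test"

print(should_cleanup("closed", merged=True))       # merged PR closed: cleanup runs
print(should_cleanup("closed", merged=False))      # closed without merging: vCluster kept
print(should_cleanup("unlabeled", label="test"))   # 'test' label removed: cleanup runs
```

&lt;p&gt;Note that a PR closed without merging does not trigger cleanup under this condition; the &lt;code&gt;autoSleep&lt;/code&gt; setting in the template eventually puts such a vCluster to sleep instead.&lt;/p&gt;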




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;A developer creates a PR with the feature changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F9m8c3u%2F2bv4ocvs43.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F9m8c3u%2F2bv4ocvs43.webp" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With a small change the developer has raised a PR and now needs to add a &lt;code&gt;test&lt;/code&gt; label.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F38oejg%2Fhm28vjj1xo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F38oejg%2Fhm28vjj1xo.webp" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as the label is added, the GitHub Actions workflow kicks off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fob13sd%2Fs0g32s3ltc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fob13sd%2Fs0g32s3ltc.webp" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the vCluster Platform cloud instance, you will see the cluster being created and the application being deployed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fploxrf%2Fau96oiwjld.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fploxrf%2Fau96oiwjld.webp" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fuogfw0%2F5meid8utv4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fuogfw0%2F5meid8utv4.webp" width="800" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Action completes, and &lt;code&gt;pr14.vcluster.tech&lt;/code&gt; is created as the ingress host.&lt;/p&gt;

&lt;p&gt;The application is accessible at &lt;a href="http://pr14.vcluster.tech" rel="noopener noreferrer"&gt;http://pr14.vcluster.tech&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you can see the latest changes made by the developer are deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fqsevyv%2F611awo6bfg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fqsevyv%2F611awo6bfg.webp" width="714" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cleanup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upon PR merge or label removal, the ephemeral vCluster is automatically deleted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F9ke4t9%2Ftmgftev8at.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2F9ke4t9%2Ftmgftev8at.webp" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After merging, the cleanup action is triggered, which will clear the virtual cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ephemeral PR environments using vCluster simplify testing, reduce resource usage, and provide a seamless developer experience. By combining vCluster with GitHub Actions, you can achieve an automated and efficient workflow for testing PRs.&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://github.com/saiyam1814/vcluster-demo" rel="noopener noreferrer"&gt;demo repository&lt;/a&gt; and give it a try! 🚀&lt;/p&gt;

&lt;p&gt;Let me know your thoughts or if you face any challenges while implementing this.&lt;/p&gt;

&lt;p&gt;We will be doing this as part of our workshop happening on 20th March 2025.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.vcluster.com/event/ephemeral-pull-request-environments-using-vcluster" rel="noopener noreferrer"&gt;Register here for the workshop&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vcluster</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>Seamless Kubernetes Multi-Tenancy with vCluster and a Shared Platform Stack</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Fri, 21 Feb 2025 05:49:14 +0000</pubDate>
      <link>https://dev.to/gentele/seamless-kubernetes-multi-tenancy-with-vcluster-and-a-shared-platform-stack-1m4n</link>
      <guid>https://dev.to/gentele/seamless-kubernetes-multi-tenancy-with-vcluster-and-a-shared-platform-stack-1m4n</guid>
      <description>&lt;p&gt;Multi-tenancy in Kubernetes is not a new cooncept; it simply refers to creating isolated spaces for different users, teams, or projects. Many organizations begin by using namespaces to isolate workloads, teams, or projects. However, they soon encounter limitations, such as challenges with custom resource deployments, security, and per-tenant configurations.&lt;/p&gt;

&lt;p&gt;In short, multi-tenancy is essential for cost savings, enabling multiple tenants to share a single host cluster, preventing cluster sprawl, and reducing the maintenance efforts associated with managing multiple Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;With this in mind, the easiest and most effective way to implement multi-tenancy is by creating virtual Kubernetes clusters using vCluster. Instead of taking the "&lt;strong&gt;one cluster per tenant&lt;/strong&gt;" approach, we recommend the "&lt;strong&gt;control plane per tenant&lt;/strong&gt;" model, where each virtual cluster has its own isolated control plane.&lt;/p&gt;

&lt;p&gt;One of the key things we want to highlight in this post is the concept of a shared platform stack. Let’s first understand what that means.&lt;/p&gt;

&lt;p&gt;A major challenge in Kubernetes multi-tenancy, or Kubernetes in general, is the duplication of applications installed across multiple Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Let’s break this down with an example:&lt;/p&gt;

&lt;p&gt;Imagine three teams: A, B, and C, each needing their own Kubernetes cluster. As administrators, we create three separate Kubernetes clusters. By default, a newly created cluster only runs the essential components needed for Kubernetes itself, such as the control plane components and the cloud controller manager.&lt;/p&gt;

&lt;p&gt;Now, if all three teams need to deploy applications with HTTPS support, the typical approach is to install an Ingress Controller and cert-manager. Each team then creates Deployments, Services, Ingress, and Certificate objects. However, since these components need to be installed on every cluster separately, this results in duplicate resources.&lt;/p&gt;

&lt;p&gt;This duplication problem also exists in multi-tenancy, and the answer to it is a &lt;strong&gt;shared platform stack&lt;/strong&gt;. Ideally, we should be able to reuse resources from the host cluster instead of installing cert-manager and an Ingress Controller in every new cluster.&lt;/p&gt;

&lt;p&gt;The easiest way to solve this problem is by using virtual clusters (vClusters). With vCluster, you can define in the cluster configuration file which resources should be synced from the host cluster, allowing multiple tenants to share platform resources. This optimizes resource utilization and eliminates unnecessary duplication.&lt;/p&gt;
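&lt;p&gt;For example, a &lt;code&gt;vcluster.yaml&lt;/code&gt; fragment along these lines (a sketch; see the vCluster docs for the full schema) lets the virtual cluster reuse the host’s ingress classes while its own Ingress objects are written back to the host:&lt;/p&gt;

```yaml
# Sketch: share the host's ingress stack with a virtual cluster
sync:
  fromHost:
    ingressClasses:
      enabled: true   # reuse the IngressClass installed on the host
  toHost:
    ingresses:
      enabled: true   # Ingress objects created in the vCluster sync to the host
```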

&lt;p&gt;This concept of a shared platform stack in a multi-tenant Kubernetes environment using virtual clusters helps organizations efficiently manage resources.&lt;/p&gt;

&lt;p&gt;Now, let’s see this in action with an end-to-end example where we install cert-manager and an Nginx Ingress Controller on the host cluster and then create a vCluster that reuses these host cluster resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fbvk8tj%2Fctwdywfu9i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fbvk8tj%2Fctwdywfu9i.webp" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will have a Kubernetes cluster with some tools installed on the host cluster, as mentioned in the image above, and then use those inside virtual clusters.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive in, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A Kubernetes cluster with admin access&lt;/li&gt;
&lt;li&gt;  The latest version of the &lt;strong&gt;vCluster CLI (v0.22+)&lt;/strong&gt; installed&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;cert-manager&lt;/strong&gt; installed in the &lt;strong&gt;host&lt;/strong&gt; cluster&lt;/li&gt;
&lt;li&gt;  NGINX ingress controller installed in the &lt;strong&gt;host&lt;/strong&gt; cluster&lt;/li&gt;
&lt;li&gt;  Basic understanding of Kubernetes resources like Ingress and Services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tools to Install:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To install the vCluster CLI on a Linux system, run the commands below. On a different system, refer to the &lt;a href="https://www.vcluster.com/docs/vcluster/#deploy-vcluster" rel="noopener noreferrer"&gt;docs&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64
chmod +x vcluster-linux-amd64
sudo mv vcluster-linux-amd64 /usr/local/bin/vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;  Install cert-manager on the host cluster:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;  Install nginx ingress controller on the host cluster:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/cloud/deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  2. Setting Up vCluster
&lt;/h2&gt;

&lt;p&gt;We will configure a vCluster with cert-manager integration enabled.&lt;/p&gt;

&lt;h4&gt;
  
  
  vCluster Configuration
&lt;/h4&gt;

&lt;p&gt;Create a &lt;code&gt;vcluster.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;integrations:
  certManager:
    enabled: true
sync:
  ingresses:
    enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Enable vCluster Pro in order to use this feature. For simplicity, I am using my &lt;a href="https://vcluster.cloud/" rel="noopener noreferrer"&gt;vcluster.cloud&lt;/a&gt; account and creating an access key to log in and enable Pro features; this way I don’t have to run any agent on the current cluster. You can either run &lt;code&gt;vcluster platform start&lt;/code&gt; or sign up on &lt;a href="https://vcluster.cloud/" rel="noopener noreferrer"&gt;vCluster cloud&lt;/a&gt;. Once you log in, go to &lt;a href="https://www.vcluster.com/docs/platform/administer/users-permissions/access-keys" rel="noopener noreferrer"&gt;access keys&lt;/a&gt; and create a short-lived access key for the demo (remember to delete the key after the demo for security reasons).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Ffnbl84%2Fmn8rrls656.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Ffnbl84%2Fmn8rrls656.webp" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster platformlogin https://saiyam.vcluster.cloud --access-key &amp;lt;your-access-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fxeb80f%2F9b9tc6ihuy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fxeb80f%2F9b9tc6ihuy.webp" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the vCluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to create the vCluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create democert -f vcluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once the vCluster is created, verify it is running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster list
  
      NAME   |     NAMESPACE     | STATUS  | VERSION | CONNECTED |  AGE    
  -----------+-------------------+---------+---------+-----------+---------
    democert | vcluster-democert | Running | 0.22.1  | True      | 3h3m1s  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Switch to the vCluster context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before running the next steps, make sure your kubectl context points to the virtual cluster. &lt;code&gt;vcluster create&lt;/code&gt; switches the context automatically; you can verify it with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config current-context
vcluster_democert_vcluster-democert_do-nyc1-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
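&lt;p&gt;If you need to hop back and forth between the host and the virtual cluster, it helps to know that the CLI names contexts with the pattern &lt;code&gt;vcluster_&amp;lt;name&amp;gt;_&amp;lt;namespace&amp;gt;_&amp;lt;host-context&amp;gt;&lt;/code&gt;, as the output above shows. A minimal sketch of pulling those parts back out of the context string:&lt;/p&gt;

```shell
# Context string taken from the output above.
CTX="vcluster_democert_vcluster-democert_do-nyc1-demo"

NAME=$(echo "$CTX" | cut -d_ -f2)        # vCluster name
NAMESPACE=$(echo "$CTX" | cut -d_ -f3)   # host namespace it runs in
HOST_CTX=$(echo "$CTX" | cut -d_ -f4-)   # original host context

echo "$NAME $NAMESPACE $HOST_CTX"
```

&lt;p&gt;Run &lt;code&gt;vcluster disconnect&lt;/code&gt; whenever you want to switch back to the host cluster context.&lt;/p&gt;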




&lt;h2&gt;
  
  
  3. Configuring cert-manager Integration
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Create an Issuer
&lt;/h4&gt;

&lt;p&gt;Create an Issuer in the virtual cluster that references cert-manager in the host cluster. With the cert-manager integration, the namespaced Issuers and Certificates are synced from the virtual cluster to the host cluster.&lt;/p&gt;

&lt;p&gt;Create a file &lt;code&gt;issuer.yaml&lt;/code&gt; with the configuration below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: default
spec:
  acme:
    email: saiyam-test@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the Issuer inside the &lt;strong&gt;virtual cluster&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Command: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get issuer
NAME                  READY   AGE
letsencrypt           True    3h34m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
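&lt;p&gt;If you apply and re-apply this setup repeatedly, you can hit Let’s Encrypt’s production rate limits. A minimal sketch of a staging variant of the same Issuer, assuming only the ACME server URL and the names change (certificates from the staging CA are not browser-trusted, so use it only to test the flow):&lt;/p&gt;

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    email: saiyam-test@gmail.com
    # Let's Encrypt staging endpoint: high rate limits, untrusted CA
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-staging-account-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
```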




&lt;h2&gt;
  
  
  4. Deploying an Application with Ingress
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Deploy a Sample NGINX Application
&lt;/h4&gt;

&lt;p&gt;Create a file &lt;code&gt;app.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the file inside the &lt;strong&gt;virtual cluster&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f app.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod,svc      
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7769f8f85b-pmt2n   1/1     Running   0          3h34m
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.245.238.188   &amp;lt;none&amp;gt;        443/TCP   3h35m
service/nginx        ClusterIP   10.245.212.192   &amp;lt;none&amp;gt;        80/TCP    3h34m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  5. Configure Ingress and TLS
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Create an Ingress
&lt;/h4&gt;

&lt;p&gt;Create a file &lt;code&gt;ingress.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cert.&amp;lt;YOUR-EXTERNAL-IP&amp;gt;.nip.io
    secretName: example-cert-tls
  rules:
  - host: cert.&amp;lt;YOUR-EXTERNAL-IP&amp;gt;.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the YAML file above, the IP is the external IP of the NGINX ingress controller running inside the host cluster.&lt;/p&gt;
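&lt;p&gt;One way to fill in the placeholder is to template the manifest from the shell. A minimal sketch, assuming a sample IP (on a real cluster, fetch the external IP of the ingress controller Service, e.g. with &lt;code&gt;kubectl get svc -n ingress-nginx&lt;/code&gt; and a &lt;code&gt;jsonpath&lt;/code&gt; expression):&lt;/p&gt;

```shell
# Sample IP for illustration -- substitute the external IP of your
# ingress-nginx controller Service here.
EXTERNAL_IP="24.199.67.197"
HOST="cert.${EXTERNAL_IP}.nip.io"
echo "$HOST"

# Substitute the placeholder before applying, e.g.:
#   sed "s/<YOUR-EXTERNAL-IP>/${EXTERNAL_IP}/g" ingress.yaml | kubectl apply -f -
printf 'host: cert.<YOUR-EXTERNAL-IP>.nip.io\n' \
  | sed "s/<YOUR-EXTERNAL-IP>/${EXTERNAL_IP}/g"
```

&lt;p&gt;nip.io resolves any hostname of the form &lt;code&gt;anything.&amp;lt;IP&amp;gt;.nip.io&lt;/code&gt; to that IP, which is why no DNS setup is needed for this demo.&lt;/p&gt;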

&lt;p&gt;Apply the file inside the &lt;strong&gt;virtual cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ing
NAME              CLASS   HOSTS                       ADDRESS         PORTS     AGE
example-ingress   nginx   cert.24.199.67.197.nip.io   24.199.67.197   80, 443   3h36m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  6. Request a Certificate
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Create a Certificate Resource
&lt;/h4&gt;

&lt;p&gt;Create a file &lt;code&gt;certificate.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  dnsNames:
  - cert.&amp;lt;YOUR-EXTERNAL-IP&amp;gt;.nip.io
  issuerRef:
    name: letsencrypt
    kind: Issuer
  secretName: example-cert-tls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the file inside the &lt;strong&gt;virtual cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f certificate.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get certificate
NAME           READY   SECRET             AGE
example-cert   True    example-cert-tls   3h36m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  7. Testing the Setup
&lt;/h2&gt;

&lt;p&gt;Verify that an HTTPS request with curl works as expected.&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://cert.24.199.67.197.nip.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://cert.24.199.67.197.nip.io
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a href="http://nginx.org/"&amp;gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a href="http://nginx.com/"&amp;gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you for using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fm38cfb%2F6m2kgy48ku.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F2%2F17%2Fm38cfb%2F6m2kgy48ku.webp" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;vCluster allows you to reuse the shared platform stack, enabling resources installed in your host Kubernetes cluster to be shared with virtual clusters. The virtual clusters can then leverage these host cluster resources efficiently. We will be conducting a hands-on workshop on March 6th, where we will demonstrate this in action, and you'll have the opportunity to try it out alongside us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.vcluster.com/event/seamless-kubernetes-multi-tenancy-with-vcluster-and-a-shared-platform-stack" rel="noopener noreferrer"&gt;Register for the webinar here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join the &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;vCluster Slack&lt;/a&gt; to stay updated!&lt;/p&gt;

</description>
      <category>vcluster</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>How to Assign vCluster to Specific Nodes Using Node Selectors</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Thu, 16 Jan 2025 12:45:09 +0000</pubDate>
      <link>https://dev.to/gentele/how-to-assign-vcluster-to-specific-nodes-using-node-selectors-272a</link>
      <guid>https://dev.to/gentele/how-to-assign-vcluster-to-specific-nodes-using-node-selectors-272a</guid>
      <description>&lt;p&gt;When deploying a &lt;strong&gt;vCluster&lt;/strong&gt;, you might need to ensure it runs on specific nodes, such as GPU-enabled nodes or production-specific nodes. This can be achieved using &lt;strong&gt;node selectors&lt;/strong&gt;, which limit the scheduling of the vCluster control plane pods to nodes with specific labels.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example Configuration
&lt;/h4&gt;

&lt;p&gt;To schedule your vCluster control plane on nodes labeled &lt;code&gt;environment=GPU&lt;/code&gt;, use the following configuration in your Helm chart:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controlPlane:
  statefulSet:
    scheduling:
      nodeSelector:
        environment: GPU
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This ensures that the vCluster control plane only runs on nodes labeled with &lt;code&gt;environment=GPU&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use Node Selectors?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Resource Optimization&lt;/strong&gt;: Assign vCluster workloads to nodes with specific resources (e.g., GPUs).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Isolation&lt;/strong&gt;: Keep vCluster workloads separate from other applications.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Environment Control&lt;/strong&gt;: Deploy to specific environments, such as production or staging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Let’s see this in Action 
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Open Killercoda playground
&lt;/h3&gt;

&lt;p&gt;You can go to the &lt;a href="https://killercoda.com/playgrounds/scenario/kubernetes" rel="noopener noreferrer"&gt;Killercoda Kubernetes playground&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Install the vCluster CLI
&lt;/h3&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" &amp;amp;&amp;amp; sudo install -c -m 0755 vcluster /usr/local/bin &amp;amp;&amp;amp; rm -f vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 3: Create the &lt;code&gt;demo.yaml&lt;/code&gt; configuration file
&lt;/h3&gt;

&lt;p&gt;Command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; demo.yaml
controlPlane:
  statefulSet:
    scheduling:
      nodeSelector:
        environment: GPU
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 4: Label the controlplane node for Killercoda
&lt;/h3&gt;

&lt;p&gt;Let’s label the node.&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label node controlplane environment=GPU
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fzhpur1%2Fb41wmxryst.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fzhpur1%2Fb41wmxryst.webp" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Create vCluster
&lt;/h3&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create demo -f demo.yaml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fk7gwt2%2F4y9hbq0uyi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fk7gwt2%2F4y9hbq0uyi.webp" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s verify.&lt;/p&gt;

&lt;p&gt;Command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config use-context kubernetes-admin@kubernetes
kubectl get pods -n vcluster-demo -owide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fst3fgp%2F0hks6zr6jd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F16%2Fst3fgp%2F0hks6zr6jd.webp" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, the StatefulSet pod for the vCluster landed on the node named &lt;code&gt;controlplane&lt;/code&gt;, which is the node we labeled. This is how you assign a vCluster to specific nodes. If you want to do the same for the pods running inside the vCluster, you can follow the &lt;a href="https://www.vcluster.com/docs/vcluster/configure/vcluster-yaml/sync/from-host/nodes#sync-pseudo-nodes-with-label-selector" rel="noopener noreferrer"&gt;documentation here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>vcluster</category>
    </item>
    <item>
      <title>Chaos Engineering with Chaos Mesh and vCluster: Testing Close to Production</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Fri, 10 Jan 2025 15:30:06 +0000</pubDate>
      <link>https://dev.to/gentele/chaos-engineering-with-chaos-mesh-and-vcluster-testing-close-to-production-3ne7</link>
      <guid>https://dev.to/gentele/chaos-engineering-with-chaos-mesh-and-vcluster-testing-close-to-production-3ne7</guid>
      <description>&lt;p&gt;Pioneered &lt;a href="https://netflixtechblog.com/5-lessons-weve-learned-using-aws-1f2a28588e4c" rel="noopener noreferrer"&gt;at Netflix&lt;/a&gt; over a decade ago, chaos engineering is a term used to describe the organized and intentional introduction of failures into systems in pre-production or production to understand their outcomes better and plan effectively for these failures.&lt;/p&gt;

&lt;p&gt;In a previous article, &lt;a href="https://www.loft.sh/blog/analyzing-five-popular-chaos-engineering-platforms" rel="noopener noreferrer"&gt;we highlighted&lt;/a&gt; tools that can help you perform chaos engineering. In this article, we will go one step further and look at how vCluster can help you test as close to production as possible using Chaos Mesh.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Importance of Testing Close to Production
&lt;/h2&gt;

&lt;p&gt;Before diving deeper into the technicalities, it's essential to understand &lt;strong&gt;why&lt;/strong&gt; testing close to production is critical. Traditional pre-production environments often require setting up a separate staging area, deploying the application, and running various tests such as load testing or failover simulations.&lt;/p&gt;

&lt;p&gt;In Kubernetes environments, this typically means spinning up a separate cluster, often with limited resources to cut costs. While this setup is practical, it rarely mirrors the true production environment. As Netflix put it, &lt;em&gt;"&lt;/em&gt;&lt;a href="https://netflixtechblog.com/5-lessons-weve-learned-using-aws-1f2a28588e4c" rel="noopener noreferrer"&gt;&lt;em&gt;Learn with real scale, not toy models.&lt;/em&gt;&lt;/a&gt;&lt;em&gt;"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;vCluster&lt;/strong&gt; shines. Instead of spinning up a separate environment, vCluster lets you leverage underlying host resources. You can create virtual Kubernetes clusters within a single physical cluster. This enables you to test applications at a near-production scale without the overhead of managing a full cluster.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;operations teams&lt;/strong&gt;, this means monitoring whether Horizontal Pod Autoscaler (HPA) rules kick in as expected or validating the effectiveness of network policies. For &lt;strong&gt;developers&lt;/strong&gt;, it provides insights into application behavior under real-world scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Chaos Mesh?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://chaos-mesh.org/" rel="noopener noreferrer"&gt;Chaos Mesh&lt;/a&gt; is an end-to-end chaos engineering platform for cloud-native infrastructure and applications. It is a CNCF incubating project and aims to help both developers and SREs gain meaningful insights from their applications from their applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This tutorial assumes some familiarity with Kubernetes; additionally, you will need the following to follow along.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;Access to a Kubernetes cluster.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.vcluster.com/docs/get-started/?__hstc=107455133.ae41aa585a33651f2fcd088a0de83d2d.1727015344199.1732292807435.1732410676443.28&amp;amp;__hssc=107455133.4.1732410676443&amp;amp;__hsfp=151973015&amp;amp;_gl=1*1a8cna4*_gcl_au*ODMyMTA2MjY0LjE3MjI3NjQ4MjE.*_ga*OTY1NjA0ODg2LjE3MjI3NjQ4MTg.*_ga_4RQQZ3WGE9*MTcyMzcwNjI5OS4xNi4xLjE3MjM3MDYzMzUuMjQuMC4xNTkxNzk1NzE2" rel="noopener noreferrer"&gt;vCluster CLI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create a virtual cluster
&lt;/h2&gt;

&lt;p&gt;Begin by creating a virtual cluster. For this demonstration, let's assume the infrastructure team is starting to introduce Chaos Mesh into your pre-prod cluster. Later in this tutorial, you will use the &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough" rel="noopener noreferrer"&gt;Horizontal Pod Autoscaler (HPA)&lt;/a&gt;. Because of this, you need to explicitly allow vCluster to sync metrics from the host cluster into your virtual cluster.&lt;/p&gt;

&lt;p&gt;Create a custom vCluster config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; values.yaml
integrations:
  metricsServer:
    enabled: true
    nodes: true
    pods: true
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a virtual cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create pre-prod -n pre-prod -f values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output is similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:48:27 warn There is a newer version of vcluster: v0.21.1. Run `vcluster upgrade` to upgrade to the newest version.
11:48:27 info Creating namespace pre-prod
11:48:27 info Detected local kubernetes cluster minikube. Will deploy vcluster with a NodePort &amp;amp; sync real nodes
11:48:27 info Create vcluster pre-prod...
11:48:27 info execute command: helm upgrade pre-prod /var/folders/gw/gd4m32rs5k5chjbf761w3zrw0000gn/T/vcluster-0.20.1.tgz-1751275844 --create-namespace --kubeconfig /var/folders/gw/gd4m32rs5k5chjbf761w3zrw0000gn/T/689082436 --namespace pre-prod --install --repository-config='' --values /var/folders/gw/gd4m32rs5k5chjbf761w3zrw0000gn/T/754764801 --values values.yaml
11:48:28 done Successfully created virtual cluster pre-prod in namespace pre-prod
11:48:30 info Waiting for vcluster to come up...
11:48:50 done vCluster is up and running
11:48:50 info Starting proxy container...
11:48:52 done Switched active kube context to vcluster_pre-prod_pre-prod_minikube
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Install Chaos Mesh
&lt;/h2&gt;

&lt;p&gt;With a virtual cluster created, the next step is to install Chaos Mesh inside your virtual cluster. You can do this using the Helm chart:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add chaos-mesh https://charts.chaos-mesh.org

helm install chaos-mesh chaos-mesh/chaos-mesh -n=chaos-mesh --set chaosDaemon.runtime=docker --set chaosDaemon.socketPath=/var/run/docker.sock --version 2.7.0 --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Chaos Mesh requires access to the container runtime socket, which varies depending on your Kubernetes cluster. This tutorial was written using &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt;, where the runtime is Docker, and the socket path is &lt;code&gt;/var/run/docker.sock&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Most Kubernetes clusters use &lt;strong&gt;Containerd&lt;/strong&gt; by default. For such environments, you should use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install chaos-mesh chaos-mesh/chaos-mesh -n chaos-mesh \
  --set chaosDaemon.runtime=containerd \
  --set chaosDaemon.socketPath=/run/containerd/containerd.sock \
  --version 2.7.0 --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For clusters like K3s, MicroK8s, or those using CRI-O, refer to &lt;a href="https://chaos-mesh.org/docs/production-installation-using-helm/#step-4-install-chaos-mesh-in-different-environments" rel="noopener noreferrer"&gt;this section of the documentation.&lt;/a&gt;&lt;/p&gt;
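&lt;p&gt;The choice above boils down to mapping the node’s container runtime to the right socket path. A minimal sketch of that mapping (the sample version string is an assumption; on a real cluster, read it with &lt;code&gt;kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'&lt;/code&gt;):&lt;/p&gt;

```shell
# containerRuntimeVersion is reported as "<runtime>://<version>";
# the value below is a sample for illustration.
RUNTIME_VERSION="containerd://1.7.2"
RUNTIME="${RUNTIME_VERSION%%:*}"   # strip "://<version>"

# Common default socket paths; K3s, MicroK8s, and CRI-O differ --
# see the Chaos Mesh docs linked above for those.
case "$RUNTIME" in
  docker)     SOCKET=/var/run/docker.sock ;;
  containerd) SOCKET=/run/containerd/containerd.sock ;;
  *)          SOCKET="(see Chaos Mesh docs)" ;;
esac

echo "--set chaosDaemon.runtime=$RUNTIME --set chaosDaemon.socketPath=$SOCKET"
```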

&lt;p&gt;&lt;strong&gt;Verify your installation&lt;/strong&gt; &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get po -n chaos-mesh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output is similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                        READY   STATUS    RESTARTS        AGE
chaos-controller-manager-867f657d95-8pcx6   1/1     Running   0               13m
chaos-controller-manager-867f657d95-d4hbs   1/1     Running   0               13m
chaos-controller-manager-867f657d95-h9fwv   1/1     Running   0               13m
chaos-daemon-2rgcj                          1/1     Running   0               13m
chaos-dashboard-7c66c9f685-h9sq6            1/1     Running   0               13m
chaos-dns-server-69dd8595bf-zsm2l           1/1     Running   0               13m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Your First Chaos Experiment
&lt;/h2&gt;

&lt;p&gt;With Chaos Mesh installed, you’re almost ready to write your first experiment. Before diving in, let’s clarify some terminology. In Chaos Mesh, an &lt;strong&gt;experiment&lt;/strong&gt; refers to a test designed to introduce controlled failures into your system. Experiments are divided into two categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Kubernetes-level experiments&lt;/strong&gt;: These target resources like pods, deployments, or services within your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Physical node-level experiments&lt;/strong&gt;: These focus on the underlying infrastructure, such as CPU stress or network disruptions on the nodes themselves.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this tutorial, you’ll run two experiments—one targeting Kubernetes resources and another targeting a physical node. Each experiment will be created using a Kubernetes &lt;strong&gt;Custom Resource&lt;/strong&gt; (CR) based on Chaos Mesh’s &lt;strong&gt;Custom Resource Definitions&lt;/strong&gt; (CRDs).&lt;/p&gt;
&lt;h2&gt;
  
  
  PodChaos
&lt;/h2&gt;

&lt;p&gt;One of the easiest experiments to write is &lt;strong&gt;PodChaos&lt;/strong&gt;. This experiment randomly terminates a pod at a predefined interval, allowing you to simulate unexpected failures in your application. This can be particularly useful when testing &lt;strong&gt;consensus-based applications&lt;/strong&gt; such as databases, message brokers, or distributed storage systems.&lt;/p&gt;

&lt;p&gt;By using the PodChaos experiment in a virtual cluster, you can evaluate how well your application maintains &lt;strong&gt;quorum&lt;/strong&gt;, the minimum number of nodes required to make progress. For example, you can test if your application continues to handle reads and writes effectively when a pod is unexpectedly terminated.&lt;/p&gt;
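&lt;p&gt;As a quick illustration of the quorum rule for majority-based systems (a general formula, not specific to any one product): with &lt;code&gt;n&lt;/code&gt; replicas, quorum is &lt;code&gt;floor(n/2) + 1&lt;/code&gt;, so the system tolerates &lt;code&gt;n - quorum&lt;/code&gt; simultaneous failures.&lt;/p&gt;

```shell
# Majority quorum: floor(n/2) + 1 replicas must agree for progress.
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 3 5 7; do
  q=$(quorum "$n")
  echo "$n replicas: quorum=$q, tolerates $(( n - q )) failure(s)"
done
```

&lt;p&gt;A single-replica deployment like the &lt;code&gt;whoami&lt;/code&gt; example below has no redundancy at all, which is exactly why PodChaos makes its outages easy to observe.&lt;/p&gt;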

&lt;p&gt;Begin by creating a new deployment:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80 
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will create a new deployment called &lt;code&gt;whoami&lt;/code&gt; in the default namespace. Verify all pods are running correctly using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output is similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;whoami-76c9859cfc-lsxkm   1/1     Running             0          4s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Create a PodChaos experiment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, apply the following manifest to create a &lt;code&gt;PodChaos&lt;/code&gt; experiment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-failure-example
  namespace: default
spec:
  action: pod-failure
  mode: one
  duration: '120s'
  selector:
    labelSelectors:
      app: whoami
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The manifest above defines a &lt;code&gt;PodChaos&lt;/code&gt; experiment that runs for two minutes (120s), specified by the &lt;code&gt;duration&lt;/code&gt; field. The &lt;code&gt;mode&lt;/code&gt; is set to &lt;code&gt;one&lt;/code&gt;, which terminates one pod at a time for the duration of the experiment. Finally, using &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noopener noreferrer"&gt;selectors&lt;/a&gt;, it targets pods with the label &lt;code&gt;app=whoami&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can immediately verify the experiment is active by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adding the &lt;code&gt;-w&lt;/code&gt; flag will watch the namespace for any events, such as pod restarts. Output is similar to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2F073qdr%2Fimage.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2F073qdr%2Fimage.webp" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Slowly but surely, you should see pods being restarted, which means the experiment is active.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Test Results
&lt;/h2&gt;

&lt;p&gt;While you could verify through the CLI that the test ran, this is by no means ideal. Chaos Mesh provides a dashboard for viewing and analyzing all test results.&lt;/p&gt;

&lt;p&gt;To access the dashboard, you first need to generate credentials. The following manifest creates a ServiceAccount, a ClusterRole with the required permissions, and a ClusterRoleBinding to link the two. Apply the manifest using the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: account-cluster-manager
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-cluster-manager
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["chaos-mesh.org"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "delete", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bind-cluster-manager
subjects:
- kind: ServiceAccount
  name: account-cluster-manager
  namespace: default
roleRef:
  kind: ClusterRole
  name: role-cluster-manager
  apiGroup: rbac.authorization.k8s.io
EOF 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Generating a Token
&lt;/h3&gt;

&lt;p&gt;Once the manifest is applied, generate a token for the ServiceAccount using the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create token account-cluster-manager -n default  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should output a token for the service account you just created. Save this to a secure location and run the following command to expose the dashboard:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/chaos-dashboard -n chaos-mesh 2333:2333
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Navigating to &lt;code&gt;http://localhost:2333&lt;/code&gt; should return a page like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fyogc49%2Fimage.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fyogc49%2Fimage.webp" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the credentials you just created, and you will be logged in. &lt;/p&gt;

&lt;h3&gt;
  
  
  StressChaos
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;StressChaos&lt;/code&gt; CRD allows you to stress-test your application’s CPU or memory, simulating resource-intensive conditions. This is particularly useful for verifying metric-based scaling mechanisms, such as Kubernetes Horizontal Pod Autoscalers (HPA), to ensure they trigger under the correct conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify or Install Metrics Server
&lt;/h3&gt;

&lt;p&gt;The metrics-server component is critical for collecting resource metrics like CPU and memory usage, which are required for HPA to function. To check if it’s installed, run the following command on your host cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment metrics-server -n kube-system  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the deployment is not found on your host cluster, first disconnect from your virtual cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster disconnect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install metrics-server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups: [""]
  resources: ["nodes/metrics", "pods", "nodes"]
  verbs: ["get", "list", "watch"]

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.2
        args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-insecure-tls
        - --metric-resolution=15s
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reconnect to your virtual cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster connect pre-prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Configure Horizontal Pod Autoscaler
&lt;/h3&gt;

&lt;p&gt;Next, apply an HPA to scale the &lt;code&gt;whoami&lt;/code&gt; deployment based on memory usage. The following manifest scales the deployment between 1 and 8 replicas when average memory usage exceeds 300Mi:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: whoami-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: whoami
  minReplicas: 1
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 300Mi
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Apply the StressChaos Experiment
&lt;/h3&gt;

&lt;p&gt;Finally, create a &lt;code&gt;StressChaos&lt;/code&gt; experiment to simulate memory pressure:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: mem-stress
  namespace: default
spec:
  mode: all
  selector:
    namespaces:
    - default
  stressors:
    memory:
      workers: 3
      size: 600MiB
      options: []  # Use an empty list for options
  duration: '120s'
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The manifest above defines a &lt;strong&gt;StressChaos&lt;/strong&gt; experiment that runs for 2 minutes (&lt;code&gt;120s&lt;/code&gt;), as specified by the &lt;code&gt;duration&lt;/code&gt; field. This experiment is configured in &lt;code&gt;mode: all&lt;/code&gt;, meaning it will apply the stress test to all pods within the selected namespace(s).&lt;/p&gt;

&lt;p&gt;Using the &lt;code&gt;selector&lt;/code&gt; field, the experiment targets all pods in the &lt;code&gt;default&lt;/code&gt; namespace. The &lt;code&gt;stressors&lt;/code&gt; section specifies a &lt;strong&gt;memory stress test&lt;/strong&gt; with the following configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Workers:&lt;/strong&gt; &lt;code&gt;3&lt;/code&gt; — Three concurrent workers will be deployed to simulate the memory load.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Size:&lt;/strong&gt; &lt;code&gt;600MiB&lt;/code&gt; — Each worker will attempt to allocate 600 MiB of memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As soon as you apply the manifest, you should see the HPA kick in. Verify this using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Output is similar to:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fo9fh3a%2Fimage.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fo9fh3a%2Fimage.webp" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check on the HPA directly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get hpa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output is similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME         REFERENCE           TARGETS                   MINPODS   MAXPODS   REPLICAS   AGE
whoami-hpa   Deployment/whoami   memory: 639655936/300Mi   1         8         1          9h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, you can take a look at more detailed results of the experiments by navigating to &lt;code&gt;http://localhost:2333/#/experiments&lt;/code&gt;. Be sure to expose the dashboard first if you haven’t already:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/chaos-dashboard -n chaos-mesh 2333:2333
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fy5gwc0%2Fimage6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Fy5gwc0%2Fimage6.webp" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the &lt;strong&gt;mem-stress&lt;/strong&gt; experiment, and you will be greeted with a more detailed view of the experiment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Ffabl1o%2Fimage.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.letterdrop.co%2Fimages%2F2025%2F1%2F10%2Floft%2Ffabl1o%2Fimage.webp" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This view shows information such as the duration, the namespace, the kind of experiment that was run, the events that occurred during the experiment, and the manifest used. For the memory stress test, it displays the namespace it ran in, along with the number of workers, how much memory each worker used, and the duration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Chaos Engineering forces us to embrace failures and treat them as inevitable instead of hoping for the best outcome. With vCluster, teams can take this further and perform these tests as they would occur in production with no additional infrastructure costs.&lt;/p&gt;

&lt;p&gt;While this blog covered only two experiments, Chaos Mesh has an extensive &lt;a href="https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/" rel="noopener noreferrer"&gt;library of tests&lt;/a&gt; that you can leverage. Here are some suggestions for more chaos engineering content.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.loft.sh/blog/analyzing-five-popular-chaos-engineering-platforms" rel="noopener noreferrer"&gt;Analyzing five popular chaos engineering platforms&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://netflixtechblog.com/tagged/chaos-engineering" rel="noopener noreferrer"&gt;Netflix Engineering blog&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don’t forget to check out the &lt;a href="https://www.vcluster.com/docs" rel="noopener noreferrer"&gt;vCluster documentation&lt;/a&gt; for more details! Join our &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;Slack here.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vcluster</category>
    </item>
    <item>
      <title>vcluster Exploded in 2022</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Tue, 27 Dec 2022 10:02:53 +0000</pubDate>
      <link>https://dev.to/loft/vcluster-exploded-in-2022-2n7e</link>
      <guid>https://dev.to/loft/vcluster-exploded-in-2022-2n7e</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;Rich Burroughs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2022 was a very exciting year for vcluster. If you’re unfamiliar with vcluster, it’s an open source tool for creating and managing virtual Kubernetes clusters. Virtual clusters are lightweight and run in a namespace of an underlying host cluster but appear to the users as if they’re full-blown clusters. If you’d like more details, there’s an &lt;a href="https://www.vcluster.com/docs/what-are-virtual-clusters" rel="noopener noreferrer"&gt;extended explanation in the docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the massive growth of the project and some of the new features that showed up in 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;

&lt;p&gt;First, some numbers: vcluster now has more than 2,200 stars on GitHub, and the Docker images have been downloaded more than 26 million times from Docker Hub. While I don’t put a lot of faith in GitHub stars as a metric on its own, there’s a lot of other evidence that the project has grown a lot this year.&lt;/p&gt;

&lt;p&gt;One of the highlights for vcluster was KubeCon + CloudNativeCon 2022 North America, which was held in Detroit. vcluster was featured in a &lt;a href="https://youtu.be/eJG7uIU9NpM" rel="noopener noreferrer"&gt;keynote by Whitney Lee and Mauricio Salatino&lt;/a&gt;, which illustrated how platform teams can better serve developers, as well as &lt;a href="https://youtu.be/p8BluR5WT5w" rel="noopener noreferrer"&gt;a talk by Joseph Sandoval and Dan Garfield&lt;/a&gt; about Adobe’s new CI/CD platform, which uses vcluster and Argo CD. And Mike Tougeron from Adobe &lt;a href="https://youtu.be/casLvZWlIDw" rel="noopener noreferrer"&gt;spoke more about their use of vcluster&lt;/a&gt; at GitOps Con North America in the buildup to KubeCon.&lt;/p&gt;

&lt;p&gt;vcluster was featured on &lt;a href="https://youtu.be/EaoxUDGpARE" rel="noopener noreferrer"&gt;VMware’s TGIK stream&lt;/a&gt;, &lt;a href="https://youtu.be/wMmUmjSB6hw" rel="noopener noreferrer"&gt;Whitney Lee’s Enlightning show&lt;/a&gt;, and Bret Fisher’s &lt;a href="https://youtu.be/FYqKQIthH6s" rel="noopener noreferrer"&gt;DevOps and Docker stream&lt;/a&gt;. We also saw written content about vcluster from the community, like this &lt;a href="https://medium.com/nerd-for-tech/multi-tenancy-in-kubernetes-using-lofts-vcluster-dee6513a7206" rel="noopener noreferrer"&gt;intro tutorial&lt;/a&gt; by Pavan Kumar, a blog post from Mauricio Salatino on &lt;a href="https://salaboy.com/2022/08/03/building-platforms-on-top-of-kubernetes-vcluster-and-crossplane/" rel="noopener noreferrer"&gt;building dev platforms with vcluster and Crossplane&lt;/a&gt;, and this super cool post from Jason Andress about &lt;a href="https://sysdig.com/blog/how-to-honeypot-vcluster-falco/" rel="noopener noreferrer"&gt;building honeypots with vcluster&lt;/a&gt;. The honeypot use case had never occurred to me. I love seeing what creative uses people come up with for vcluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Features
&lt;/h2&gt;

&lt;p&gt;There wouldn’t be so much excitement around vcluster if not for the work of the maintainers and contributors. It’s great to see so many new features being added, and they often come out of feedback from people in the community. Here’s a look at some of the things that shipped in 2022.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying Helm charts and manifests on startup:&lt;/strong&gt; This is one of my favorite features that’s been added to vcluster. It’s one thing to provision a bare cluster and another thing for that to become a useful environment. With this feature, you can apply Helm charts (public or private) or Kubernetes YAML manifests as the virtual cluster spins up for the first time. It’s super helpful. (&lt;a href="https://www.vcluster.com/docs/operator/init-manifests" rel="noopener noreferrer"&gt;Read the docs here&lt;/a&gt;)&lt;/p&gt;
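
&lt;p&gt;As a rough sketch of what this looks like, the vcluster values file can carry an &lt;code&gt;init&lt;/code&gt; section; the chart, repo, and namespace below are purely illustrative, so check the linked docs for the exact syntax of your vcluster version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# values.yaml (illustrative)
init:
  # Plain Kubernetes manifests applied on first startup
  manifests: |-
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo
  # Helm charts installed on first startup
  helm:
    - chart:
        name: nginx
        repo: https://charts.bitnami.com/bitnami
      release:
        name: nginx
        namespace: demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then pass this file at creation time, e.g. &lt;code&gt;vcluster create my-vcluster -f values.yaml&lt;/code&gt;.&lt;/p&gt;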

&lt;p&gt;&lt;strong&gt;Distros:&lt;/strong&gt; Initially, vcluster was built on top of k3s, but after a while, people started asking if we could support other Kubernetes distributions. Now vcluster also supports k0s, EKS, and standard Kubernetes. This allows you to use a virtual cluster more like your production environment. (&lt;a href="https://www.vcluster.com/docs/operator/other-distributions" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;
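
&lt;p&gt;The distro is selected at creation time. For example, to create a virtual cluster backed by k0s (the cluster name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create my-vcluster --distro k0s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;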

&lt;p&gt;&lt;strong&gt;Isolated mode:&lt;/strong&gt; As more people started using vcluster, we received lots of questions and feedback about how much isolation the virtual clusters had. With isolated mode, vcluster now adds some additional Kubernetes security features to the virtual clusters as they’re provisioned: a Pod Security Standard, a resource quota, a limit range, and a network policy. Isolated mode is optional and can be invoked with the &lt;code&gt;--isolated&lt;/code&gt; flag. (&lt;a href="https://www.vcluster.com/docs/operator/security" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;
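
&lt;p&gt;For example (the cluster name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster create my-vcluster --isolated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;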

&lt;p&gt;&lt;strong&gt;Plugins:&lt;/strong&gt; With the complexity of Kubernetes, it became clear that people would need the ability to customize and extend vcluster to fit their workflows. Enter vcluster plugins and the &lt;a href="https://github.com/loft-sh/vcluster-sdk" rel="noopener noreferrer"&gt;vcluster SDK&lt;/a&gt;. Plugins are written in Go and allow users to customize the behavior of vcluster’s syncer to do all kinds of things, like sharing resources between host and virtual clusters. (&lt;a href="https://www.vcluster.com/docs/plugins/tutorial" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pausing and resuming virtual clusters:&lt;/strong&gt; This handy feature can scale down workloads in your virtual clusters when they’re not being used. That’s done by setting the number of replicas to zero for StatefulSets and Deployments. Resuming the virtual cluster sets the replicas back to their original value, and then the scheduler spins the pods back up. This allows users to suspend the workloads while keeping configurations in the virtual cluster in place. (&lt;a href="https://www.vcluster.com/docs/operator/pausing-vcluster" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;)&lt;/p&gt;
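
&lt;p&gt;Both operations are single CLI commands (&lt;code&gt;my-vcluster&lt;/code&gt; is a placeholder name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster pause my-vcluster
vcluster resume my-vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;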

&lt;p&gt;This is just a handful of the many improvements made to vcluster in 2022.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you
&lt;/h2&gt;

&lt;p&gt;I want to thank the vcluster maintainers, the contributors, and everyone who used vcluster in 2022. As I mentioned, many great ideas for improving the project come from folks in the community through &lt;a href="https://github.com/loft-sh/vcluster/issues" rel="noopener noreferrer"&gt;GitHub issues&lt;/a&gt;, pull requests, or even feedback in &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;our community Slack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’re excited that so many people have found vcluster useful for their work. If you’ve read this far and haven’t tried vcluster yet yourself, there’s &lt;a href="https://www.vcluster.com/docs/quickstart" rel="noopener noreferrer"&gt;an easy quickstart&lt;/a&gt; that takes just a few minutes.&lt;/p&gt;

&lt;p&gt;I’m looking forward to seeing how the project grows in 2023.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Kubernetes Multitenancy: Why Namespaces aren’t Good Enough</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 21 Nov 2022 19:32:30 +0000</pubDate>
      <link>https://dev.to/loft/kubernetes-multitenancy-why-namespaces-arent-good-enough-i53</link>
      <guid>https://dev.to/loft/kubernetes-multitenancy-why-namespaces-arent-good-enough-i53</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/TheEbizWizard" rel="noopener noreferrer"&gt;Jason Bloomberg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multitenancy has long been a core capability of cloud computing, and indeed, for virtualization in general.&lt;br&gt;
With multitenancy, everybody has their own sandbox to play in, isolated from everybody else’s sandbox, even though beneath the covers, they share common infrastructure.&lt;br&gt;
Kubernetes offers its own kind of multitenancy as well, via the use of namespaces. Namespaces provide a mechanism for organizing clusters into virtual sub-clusters that serve as logically separated tenants.&lt;br&gt;
Relying upon namespaces to provide all the advantages of true multitenancy, however, is a mistake. Namespaces are for cloud native teams that don’t want to step on each other’s toes – but who are all colleagues who trust each other.&lt;br&gt;
True multitenancy, in contrast, isn’t for colleagues. It’s for strangers – where no one knows whether the owner of the next tenant over is up to no good. Kubernetes namespaces don’t provide this level of multitenancy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multitenancy: A Quick Primer
&lt;/h2&gt;

&lt;p&gt;Multitenancy is most familiar as a core part of how cloud computing operates.&lt;br&gt;
Everybody’s cloud account is its own separate world, complete in itself and separate from everybody else’s. You get your own login, configuration, services, and environments. Meanwhile, everybody else has the same experience, even though under the covers, the cloud provider runs each of these tenants on shared infrastructure.&lt;/p&gt;

&lt;p&gt;There are different flavors of multitenancy, depending upon just what infrastructure they share beneath this abstraction.&lt;/p&gt;

&lt;p&gt;IaaS tenants, aka instances or nodes, share hypervisors that abstract the underlying hardware and physical networking. Meanwhile, SaaS tenants, for example, Salesforce or ServiceNow accounts, might share database infrastructure, common services, or other application elements.&lt;/p&gt;

&lt;p&gt;Either way, each tenant is isolated from all the others. Isolation, in fact, is one of the most important characteristics of multitenancy, because it protects one tenant from the actions of another.&lt;/p&gt;

&lt;p&gt;To be effective, the infrastructure must enforce isolation at the network layer. Any network traffic from one tenant that is destined for another must leave the tenant via its public interfaces, traverse the network external to the tenants, and then enter the destination tenant through the same interface that any other external traffic would enter.&lt;/p&gt;
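
&lt;p&gt;Inside a single Kubernetes cluster, this kind of boundary is typically expressed with a NetworkPolicy. As an illustrative sketch (the &lt;code&gt;tenant-a&lt;/code&gt; namespace is hypothetical), the following policy admits ingress only from pods in the same namespace, so traffic from other tenants is dropped:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-a
spec:
  podSelector: {}        # applies to every pod in tenant-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only pods in the same namespace may connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;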

&lt;p&gt;Even when the infrastructure provider decides to offer a shortcut for such traffic, avoiding the hairpin to improve performance, it’s important for such shortcuts to comply with the same network security policies as any other traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Kubernetes Namespaces for Multitenancy
&lt;/h2&gt;

&lt;p&gt;Namespaces have been around for years, largely as a way to keep different developers working on the same project from inadvertently declaring identical variable or method names.&lt;/p&gt;

&lt;p&gt;Providing a scope for names is also a benefit of Kubernetes namespaces, but naming alone isn’t their entire purpose. Kubernetes namespaces are also virtual clusters within the same physical cluster.&lt;/p&gt;

&lt;p&gt;This notion of virtual clusters sounds like individual tenants in a multitenant cluster, but in the case of Kubernetes namespaces, they have markedly different properties.&lt;/p&gt;

&lt;p&gt;Kubernetes logically separates namespaces within a cluster but allows for them to communicate with each other within the cluster. By default, Kubernetes doesn’t offer any security for such interactions, although it does allow for role-based access control (RBAC) in order to limit users and processes to individual namespaces.&lt;/p&gt;
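
&lt;p&gt;As an illustrative sketch of that RBAC (all names here are hypothetical), a namespaced Role and RoleBinding confine one user to a single namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-developer
  namespace: tenant-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-developer-binding
  namespace: tenant-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-developer
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;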

&lt;p&gt;Such RBAC, however, does not provide the network isolation that is essential to true multitenancy. Furthermore, Kubernetes doesn’t implement any privilege separation, instead delegating such control to a dedicated authorization plugin.&lt;/p&gt;

&lt;p&gt;Furthermore, Kubernetes defines cluster roles and their associated bindings, thus empowering certain individuals to have access and control over all the namespaces within the cluster. Not only do such roles open the door for insider attacks, but they also allow for misconfigurations of the cluster roles that would leave the door open between namespaces.&lt;/p&gt;

&lt;p&gt;If cluster roles weren’t bad enough, Kubernetes also allows for privileged pods within a cluster. Depending upon how admins have configured such pods, they can access any node-level capabilities for the node hosting the cluster. For example, the privileged pod might be able to access file system, network, or Linux process capabilities.&lt;/p&gt;

&lt;p&gt;In other words, a privileged pod can impersonate the node that hosts its cluster – regardless of what namespaces run on that cluster or how they’re configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  ‘True’ Kubernetes Multitenancy that Provides Isolation between Tenants
&lt;/h2&gt;

&lt;p&gt;In order to implement secure Kubernetes multitenancy, it’s essential to use a tool like Loft Labs to implement virtual clusters within Kubernetes clusters that act just like real clusters.&lt;/p&gt;

&lt;p&gt;With this ‘true’ multitenancy, traffic from one virtual cluster to another must go through the same access controls as any cluster-to-cluster traffic would – because fundamentally, Loft Labs handles traffic between virtual clusters just as Kubernetes would handle traffic between clusters.&lt;/p&gt;

&lt;p&gt;One of the primary benefits of this approach to Kubernetes multitenancy is that virtual clusters support namespaces just as clusters do – not necessarily for isolation (as namespace isolation is inadequate), but for the name scoping that namespaces are most familiar for.&lt;/p&gt;

&lt;p&gt;Loft Labs’ multitenancy provides other benefits that namespaces cannot, for example, the ability to spin down individual virtual clusters for better cost efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intellyx Take
&lt;/h2&gt;

&lt;p&gt;The best way to think about multitenancy options for Kubernetes is this: namespaces are for friends, while Loft Labs’ multitenancy is also for strangers.&lt;/p&gt;

&lt;p&gt;Using namespaces for multitenancy works best when securing traffic between tenants is a non-issue, say, when all the developers using the cluster are on the same team and actively collaborating. &lt;/p&gt;

&lt;p&gt;True multitenancy of the sort Loft provides, in contrast, provides virtual clusters that separate teams can use – even if those teams aren’t collaborating, or indeed, don’t know or trust each other at all.&lt;/p&gt;

&lt;p&gt;This zero-trust approach to sharing resources is fundamental to modern cloud native computing, even in situations where people are all working for the same company. &lt;/p&gt;

&lt;p&gt;Not only does such isolation add security, but it also enforces GitOps-style best practices for how multiple teams can work on the same codebase in parallel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Copyright © Intellyx LLC. Loft Labs is an Intellyx customer. Intellyx retains final editorial control of this article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Making Self-Service Clusters Ready for DevOps Adoption</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 21 Nov 2022 19:32:26 +0000</pubDate>
      <link>https://dev.to/loft/making-self-service-clusters-ready-for-devops-adoption-4m4k</link>
      <guid>https://dev.to/loft/making-self-service-clusters-ready-for-devops-adoption-4m4k</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/bluefug" rel="noopener noreferrer"&gt;Jason English&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;History is littered with cautionary tales of software delivery tools that were technically ahead of their time, yet were ultimately unsuccessful because of a lack of end user adoption.&lt;/p&gt;

&lt;p&gt;In the past, the success of developer tooling vendors depended upon the rise and fall of the major competitive platforms around them. An upstart vendor could still break through to grab market share from dominant players in a space by delivering a superior user experience (or UX) and partnering with a leader, until such time as they were acquired.&lt;/p&gt;

&lt;p&gt;A great UX generally includes an intuitive UI design based on human factors, which is especially important in consumer-facing applications. Human factors are still important in software development tooling; however, the UX focus is on whether the tools will readily deliver value to the organization, by empowering developers to efficiently deliver better software.&lt;/p&gt;

&lt;p&gt;Kubernetes (or k8s) arose from the open source foundations of Linux, containerization, and a contributed project from Google. A global community of contributors turned the enterprise space inside out, by abstracting away the details of deploying and managing infrastructure as code. &lt;/p&gt;

&lt;p&gt;Finally, development and operations teams could freely download non-proprietary tooling and orchestrate highly scalable cloud native software architecture. So what was holding early K8s adopters back from widespread use in their DevOps lifecycles?&lt;/p&gt;

&lt;h2&gt;
  
  
  The challenge: empowering developers
&lt;/h2&gt;

&lt;p&gt;A core tenet of the DevOps movement is self-service automation. Key stakeholders should be fully empowered to collaborate freely with access to the tools and resources they need. &lt;/p&gt;

&lt;p&gt;Instead of provisioning through the approval process of an IT administrative control board, DevOps encourages the establishment of an agile platform team (in smaller companies, this may be one platform manager). The platform team should provide developers with a self-service stack of approved on-demand tooling and environments, without requesting an exhaustive procurement process or ITIL review cycles.&lt;/p&gt;

&lt;p&gt;At first glance, Kubernetes, with its declarative abstraction of infrastructure, seems like a perfect fit for orchestrating these environments. But much like an early sci-fi spaceship where wires are left hanging behind the lights of control panels, many specifics of integration, data movement, networking and security were intentionally left up to the open source community to build out, rather than locking in design assumptions in these key areas.&lt;/p&gt;

&lt;p&gt;Because the creation and configuration of Kubernetes clusters comes with a unique set of difficulties, the platform team may try to reduce rework by offering a one-size-fits-all approach. Sometimes, this may not meet the needs of all developers, and may exceed the needs of other teams with excess allocation and cloud cost.&lt;/p&gt;

&lt;p&gt;You can easily tell if an organization’s DevOps initiative is off track if it simply shifts the provisioning bottleneck from IT to a platform team that is backlogged and struggling to deploy k8s clusters for the right people at the correct specifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handing over the keys
&lt;/h2&gt;

&lt;p&gt;The ability to transcend the limitations of physical networks and IP addressing is the secret weapon of Kubernetes. With configuration defined as code, teams can call for namespaces and clusters that truly fit the semantics and dimensions of the application.&lt;/p&gt;

&lt;p&gt;The inherent flexibility of k8s produces an additional set of concerns around role-based access controls (RBAC) that must be solved in order to scale without undue risk.&lt;/p&gt;

&lt;p&gt;In today’s cloudy and distributed ecosystem, engineering organizations are composed differently than the siloed Dev and Ops teams of traditional IT organizations. Various teams may need to access certain clusters, or pods within them, as part of their development or operational duties on specific projects. &lt;/p&gt;

&lt;p&gt;Even with automated provisioning, a request would by default generate a cluster with one ‘front door’ key for an administrator, who may share this key among project team members. Permissioned individuals can step on each other’s work in the environment, inadvertently break the cluster, or even allow their credentials to get exposed to the outside world.&lt;/p&gt;

&lt;p&gt;To accelerate delivery without risk, least-privilege rights should be built into the provisioning system by policy and should leverage the company’s single sign-on (SSO) backend for resource access across an entire domain, rather than being manually doled out by an admin. &lt;/p&gt;

&lt;p&gt;In a self-service solution, multiple people can get their own keys with access to specific clusters and pods, or assign other team members to get them. These permissions can lean on the organization’s authorization tools of choice for access control, without requiring admins to write any custom policies to prevent inadvertent conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  A self-service Kubernetes storefront
&lt;/h2&gt;

&lt;p&gt;We already know the cost of not getting self-service right. Unsatisfied developers will sneak around procurement to provision their own rogue clusters, creating costly cloud sprawl and lots of lost and forgotten systems with possible vulnerabilities.&lt;/p&gt;

&lt;p&gt;As consumers, we’re acclimated to using e-commerce websites and app stores on our personal devices. At work, we can use a credit card to buy apps, plugins and tooling from marketplaces provided by a SaaS vendor or public cloud.&lt;/p&gt;

&lt;p&gt;The storefront model offers a good paradigm for self-service cluster provisioning. One vendor, Loft Labs, offers a Kubernetes control plane built upon the open source DevSpace tool for standing up stacks. An intuitive interface allows domain-level administrators to navigate automated deployments and track usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fkubernetes-self-service-with-loft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fkubernetes-self-service-with-loft.png" title="Kubernetes self-service clusters with Loft" alt="Kubernetes self-service clusters with Loft" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More importantly, developers can use their own filtered view of Loft as a storefront for provisioning any available and approved K8s cluster images into new or existing namespaces. Or they can make provisioning requests via a CLI and drill down into each cluster’s details with the kubectl prompt.&lt;/p&gt;

&lt;p&gt;The system provides guardrails for developers to provision Kubernetes clusters and namespaces in the mode they prefer, without consuming excess resources or making configuration mistakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intellyx Take
&lt;/h2&gt;

&lt;p&gt;Quite a few vendors are already offering comprehensive ‘Kubernetes-as-a-Service’ management platforms that gloss over much of the complexity of provisioning and access to clusters, when what is really needed is transparency and portability.&lt;/p&gt;

&lt;p&gt;Engineers will avoid waiting on procurement boards, and they hate writing repetitive commands, whether that is launching 100 pods at a time for autoscaling or bringing them down when they are no longer required. But they do still want to directly address kubectl for a single pod, look at the logs for that pod and analyze what is going on.&lt;/p&gt;

&lt;p&gt;The platform team’s holy grail is to provide a self-service Kubernetes storefront that works with the company’s authorization regimes to entitle the right users and allow project management, tracking and auditing, while giving experienced engineers the engineering interfaces they need. &lt;/p&gt;

&lt;p&gt;Next up in this series, we’ll be covering the challenges of multi-tenancy and cost control!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;© 2022, Intellyx, LLC. Intellyx is solely responsible for the content of this article. At the time of writing, &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft Labs&lt;/a&gt; is an Intellyx customer. Image sources: Maps, Unsplash. Screenshot, Loft Labs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Interview with KubeCon Keynote Speaker Mauricio Salatino from VMware</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Thu, 27 Oct 2022 13:16:49 +0000</pubDate>
      <link>https://dev.to/loft/interview-with-kubecon-keynote-speaker-mauricio-salatino-from-vmware-1eh7</link>
      <guid>https://dev.to/loft/interview-with-kubecon-keynote-speaker-mauricio-salatino-from-vmware-1eh7</guid>
      <description>&lt;p&gt;By &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;Rich Burroughs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Loft Labs is excited to welcome KubeCon North America 2022 keynote speaker Mauricio “Salaboy” Salatino for an exclusive interview where we dive into the struggles facing platform engineers with the CNCF ecosystem. Mauricio and his co-speaker Whitney Lee will be presenting a demo for their keynote presentation focused on the provisioning of virtual clusters with Crossplane, vcluster, and Knative, to develop an internal development platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: How did you learn about vcluster?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I had heard about vcluster in the context of multi-tenancy. While delivering training for &lt;a href="https://learnk8s.io" rel="noopener noreferrer"&gt;LearnK8s&lt;/a&gt; and working for different companies, I’ve repeatedly seen teams struggling to answer a very simple question: One cluster or multiple clusters? I’ve seen teams starting simple with namespaces inside a single Kubernetes Cluster and then struggling to move to use multiple clusters when the isolation levels of namespaces are not enough. And that is precisely where I see vcluster as a better alternative because it provides, from the get-go, the separation into different Kubernetes API Servers. In today’s world, where every organization is building its internal development platform, I see tools like vcluster as critical components of these platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: In the demo for your KubeCon keynote you provisioned virtual clusters with vcluster and Crossplane. Can you explain how that works? And what’s your experience been like using those two tools together?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; The demo presented at the KubeCon keynote session uses Crossplane, vcluster, and Knative, all working together around this concept of building internal development platforms. Crossplane is used to abstract where cloud resources are created. With Crossplane, we can declaratively create Kubernetes clusters in all major cloud providers, but creating these clusters is expensive, and it is a process that takes time. This is where using vcluster can save you time and money. Because, to my surprise, creating a vcluster is just installing a Helm chart into your existing cluster.  We can use the Crossplane Helm Provider to create vcluster from inside a Crossplane Composition, and that is exactly what my demo is doing. But vcluster doesn’t stop there, because with vcluster Plugins you can share tools between the host cluster and the virtual clusters. The demo shows how you can enable your vclusters with tools like Knative Serving (for dynamic autoscaling and advanced traffic management) without installing Knative Serving in each cluster. In a scenario where you have 10 teams all working in different vclusters and using tools like Knative Serving you save on installing and running 9 Knative Serving installations.&lt;/p&gt;
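&lt;p&gt;&lt;em&gt;As a minimal sketch of the pattern Mauricio describes: the resource names, namespace, and chart coordinates below are illustrative assumptions, and actually applying the manifest requires a host cluster running Crossplane with the Helm provider installed.&lt;/em&gt;&lt;/p&gt;

```shell
# Sketch only: a Crossplane Helm-provider Release object that installs the
# vcluster chart into a namespace of the host cluster. All names here
# (team-a, team-a-vcluster) are hypothetical.
printf '%s\n' \
  'apiVersion: helm.crossplane.io/v1beta1' \
  'kind: Release' \
  'metadata:' \
  '  name: team-a-vcluster' \
  'spec:' \
  '  forProvider:' \
  '    chart:' \
  '      name: vcluster' \
  '      repository: https://charts.loft.sh' \
  '    namespace: team-a' \
  > vcluster-release.yaml

# Equivalent one-off install without Crossplane (requires Helm and a cluster):
#   helm upgrade --install team-a-vcluster vcluster \
#     --repo https://charts.loft.sh -n team-a --create-namespace
echo "manifest written"
```

&lt;p&gt;&lt;em&gt;Because the virtual cluster is just a chart release, teardown is equally declarative: delete the Release object and Crossplane uninstalls the chart.&lt;/em&gt;&lt;/p&gt;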

&lt;p&gt;&lt;strong&gt;Rich&lt;/strong&gt;: Your demo focused on provisioning environments for developers. What do you think makes vcluster a great tool for dev environments?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; vclusters are better than namespaces because they provide more isolation and are cheaper than fully fledged clusters. Suppose you have developers working with cluster-wide resources such as CRDs and tools that need to be installed to do their work. In that case, vclusters will give them the freedom to work against a dedicated Kubernetes API Server, where they will have total freedom to do what they need. By using vcluster you can give your development teams full access to their dedicated API servers without the need of creating, maintaining and paying for full-fledged Kubernetes Control Planes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; There are so many new tools in the Kubernetes space and more arriving all the time. The CNCF landscape continues to grow. What do you look for when you evaluate new open source, cloud native tools?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I evaluate their (project) community, and in the last two years, I’ve been very interested in tools that fit into the story of building internal developer platforms, as I’ve been in that space for a long time. Most of my work in the open source space is around helping developers to be more productive in building their software. &lt;/p&gt;

&lt;p&gt;If you are evaluating Open Source / CNCF projects, check their maturity level, who (which company or companies) are sponsoring the project, and how healthy their community is. Looking at which companies are active in their community forums or slack channels is a great validation to see if other companies in your industry are trying to solve the same problems that you are facing.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; There’s been a lot of focus on the role of platform engineer the last few years. What do you think are the big challenges facing platform engineers today?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mauricio:&lt;/strong&gt; I’ve been writing about these topics lately in my blog, you can read more about this here -&amp;gt; The challenges of building platforms on top of Kubernetes &lt;a href="https://salaboy.com/2022/09/29/the-challenges-of-platform-building-on-top-of-kubernetes-1-4/" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, &lt;a href="https://salaboy.com/2022/10/03/the-challenges-of-platform-building-on-top-of-kubernetes-2-4/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;, &lt;a href="https://salaboy.com/2022/10/17/the-challenges-of-platform-building-on-top-of-kubernetes-3-4/" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt;, Part 4&lt;/p&gt;

&lt;p&gt;But the big challenge nowadays is keeping up with everything that is happening in the Cloud Native space, so you need to look for tools with these qualities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be ready to pivot: look for tools that provide the right abstractions so you can pivot if things change. Crossplane is a big player in this space, but other projects are worth looking at too. If you are really into platform building, you should check out a project that I hope gets donated to the CNCF called &lt;a href="https://kratix.io" rel="noopener noreferrer"&gt;Kratix&lt;/a&gt;. These folks are working on providing the right abstractions for building platforms, allowing platform teams to focus on deciding which projects they want to use and how those projects will work together. &lt;/li&gt;
&lt;li&gt;Reuse instead of build: sort the problem you are trying to solve into one of two buckets: 1) a generic problem that every company has, or 2) a challenge that is specific to your company. For problems in the first bucket, make sure that you don't build an in-house solution. For problems in the second bucket, focus your search on a combination of existing tools that can do the work, so that you don't spend time and effort reinventing the wheel just because the available tools don't match your requirements 100%. &lt;/li&gt;
&lt;li&gt;Ecosystem integrations: when you look at a specific tool, make sure that it integrates well with the other tools in the ecosystem. Don't be fooled by the fact that they are all Kubernetes projects. Depending on how you want tools to work together, your platform team might need to spend a considerable amount of time making them work for your specific use case.&lt;/li&gt;
&lt;li&gt;Tailored developer experiences are the best way to promote your platform: spend time and effort building great developer experiences so that your teams have the right tools to do their work. These experiences are enabled by the tools the platform team chooses and by continuously improving how application development teams interact with the platform. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are into building platforms, I am currently writing a book titled ”&lt;a href="http://mng.bz/jjKP" rel="noopener noreferrer"&gt;Continuous Delivery for Kubernetes&lt;/a&gt;” where I cover tools like Tekton, Crossplane, vcluster, Keptn, Knative, ArgoCD, Helm, among others, to build platforms that are focused on delivering more software more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rich:&lt;/strong&gt; Thanks for your time Mauricio.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to learn more about vcluster, check our &lt;a href="http://vcluster.com" rel="noopener noreferrer"&gt;vcluster.com&lt;/a&gt; for links to the docs, the GitHub repo, and our community Slack. You can find Mauricio on Twitter at &lt;a href="https://twitter.com/salaboy" rel="noopener noreferrer"&gt;@salaboy&lt;/a&gt;, and Rich at &lt;a href="https://twitter.com/richburroughs" rel="noopener noreferrer"&gt;@richburroughs&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>3 Ways to Manage Kubernetes on AWS and How to Get Started</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Fri, 30 Sep 2022 19:58:49 +0000</pubDate>
      <link>https://dev.to/loft/3-ways-to-manage-kubernetes-on-aws-and-how-to-get-started-10ag</link>
      <guid>https://dev.to/loft/3-ways-to-manage-kubernetes-on-aws-and-how-to-get-started-10ag</guid>
      <description>&lt;p&gt;By Talha Khalid&lt;/p&gt;

&lt;p&gt;AWS is one of the most popular choices for container orchestration due to its reliability and efficiency. In this article, we will look at some of the most popular tools and ways to manage Kubernetes on AWS. &lt;/p&gt;

&lt;h2&gt;
  
  
  Containers With AWS
&lt;/h2&gt;

&lt;p&gt;There are various tools that are of great help to developers. Today, one of the most important is the container, which packages an application together with its dependencies. Because containers are lightweight, they provide a consistent software environment, and they make it easy to run applications that can be scaled and moved to any location. &lt;/p&gt;

&lt;p&gt;Containers are used to build and deploy microservices, run batch jobs, serve machine learning applications, and move previously existing apps to the cloud. &lt;/p&gt;

&lt;p&gt;AWS offers different types of container services that help in managing Kubernetes. Let’s look at some of them. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Amazon Elastic Container Service (ECS)
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Container Service, or ECS, is a fully managed container orchestration service. With Amazon ECS, you don't need to install or operate your own orchestration software, manage or &lt;a href="https://loft.sh/blog/vcluster-for-local-development/" rel="noopener noreferrer"&gt;scale clusters&lt;/a&gt;, or schedule containers onto virtual machines. &lt;/p&gt;

&lt;p&gt;Amazon ECS was designed to integrate with the entire AWS platform, which means you can count on all of its services. Furthermore, Amazon ECS is a very reliable solution that offers a &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;native AWS API&lt;/a&gt; experience for containers, much like EC2 offers for virtual machines. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fcargo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fcargo.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Amazon ECS 
&lt;/h3&gt;

&lt;p&gt;There are several reasons why using &lt;a href="https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/" rel="noopener noreferrer"&gt;Amazon ECS to manage Kubernetes&lt;/a&gt; is an excellent decision. These are some of the most relevant: &lt;/p&gt;

&lt;h4&gt;
  
  
  Run Containers Without Provisioning Servers
&lt;/h4&gt;

&lt;p&gt;Through AWS Fargate, which Amazon ECS offers, you can deploy and manage containers without the need to provision or manage servers. Also, Fargate gives you the freedom to focus on building and running applications. &lt;/p&gt;

&lt;h4&gt;
  
  
  Containerize Everything
&lt;/h4&gt;

&lt;p&gt;You can easily create all containerized applications and migrate Linux or Windows apps from on-premises environments to the cloud. Then, you can run them as Amazon ECS containerized applications. &lt;/p&gt;

&lt;h4&gt;
  
  
  Security
&lt;/h4&gt;

&lt;p&gt;Amazon ECS offers a high level of isolation. This allows you to create secure and reliable applications because Amazon ECS has its own Amazon VPC. Through this virtual private cloud, it launches its containers. This then allows the use of VPC network security groups and ACLs. &lt;/p&gt;

&lt;h4&gt;
  
  
  Performance at Scale
&lt;/h4&gt;

&lt;p&gt;Amazon ECS is built on proven technology with a solid track record of running services at scale. &lt;/p&gt;

&lt;h4&gt;
  
  
  Usability With Other AWS Services
&lt;/h4&gt;

&lt;p&gt;Amazon ECS integrates with various AWS services, such as Amazon VPC, IAM, Batch, CloudFormation, and more. &lt;/p&gt;

&lt;h2&gt;
  
  
  2. Elastic Kubernetes Service on AWS
&lt;/h2&gt;

&lt;p&gt;Amazon's Elastic Kubernetes Service, or EKS, is a managed service that makes it easy to deploy and run &lt;a href="https://aws.amazon.com/kubernetes/" rel="noopener noreferrer"&gt;Kubernetes on AWS&lt;/a&gt; without needing to be a Kubernetes expert. Amazon EKS fully manages the availability and scalability of the Kubernetes control plane for each cluster. &lt;/p&gt;

&lt;p&gt;It automatically handles cluster operations such as upgrades, scaling the control plane, and managing the persistence layer. Additionally, it detects and replaces unhealthy control plane nodes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-integrations.html" rel="noopener noreferrer"&gt;Moreover, Amazon EKS also integrates&lt;/a&gt; with a variety of AWS services to provide security and scalability for your applications. These include Elastic Load Balancing for load balancing, IAM for authentication, and AWS CloudTrail for audit logging. &lt;/p&gt;

&lt;p&gt;EKS runs up-to-date versions of Kubernetes, so you can use all of the ecosystem's plugins and tooling, making it possible to migrate any standard Kubernetes application to Amazon EKS without code modifications. &lt;/p&gt;
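&lt;p&gt;As a concrete illustration, the popular &lt;code&gt;eksctl&lt;/code&gt; CLI drives EKS cluster creation from a declarative config file. The cluster name, region, and node sizes below are placeholder assumptions, not recommendations, and actually creating the cluster requires AWS credentials:&lt;/p&gt;

```shell
# Sketch: an eksctl ClusterConfig written to disk; eksctl itself is not
# invoked here. All names and sizes are hypothetical placeholders.
printf '%s\n' \
  'apiVersion: eksctl.io/v1alpha5' \
  'kind: ClusterConfig' \
  'metadata:' \
  '  name: dev-cluster' \
  '  region: us-east-1' \
  'nodeGroups:' \
  '  - name: workers' \
  '    instanceType: t3.medium' \
  '    desiredCapacity: 2' \
  > cluster.yaml

# Then, with credentials configured:
#   eksctl create cluster -f cluster.yaml
#   eksctl delete cluster -f cluster.yaml   # clean up when finished
echo "config written"
```

&lt;p&gt;Keeping the cluster definition in a file like this makes the environment reproducible and easy to review in version control.&lt;/p&gt;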

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Feks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Feks.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Amazon EKS
&lt;/h3&gt;

&lt;p&gt;There are several benefits of using Amazon EKS. &lt;/p&gt;

&lt;h4&gt;
  
  
  High Availability Fully Managed Service
&lt;/h4&gt;

&lt;p&gt;Amazon EKS makes it easy to run highly available Kubernetes clusters by automatically running and managing three master nodes spread across three zones for each cluster. &lt;/p&gt;

&lt;h4&gt;
  
  
  Security 
&lt;/h4&gt;

&lt;p&gt;Among other things, Amazon EKS integrates IAM with Kubernetes, allowing you to register IAM entities with the native authentication system. You can also use PrivateLink to access Kubernetes masters from your Amazon VPC. &lt;/p&gt;

&lt;h4&gt;
  
  
  Kubernetes Community Tools
&lt;/h4&gt;

&lt;p&gt;As Amazon EKS typically runs the latest version of Kubernetes software, all existing features, plugins, and applications are supported. &lt;/p&gt;

&lt;h2&gt;
  
  
  3. AWS Fargate
&lt;/h2&gt;

&lt;p&gt;Fargate is a service that deploys and manages containers without the need to manage the underlying infrastructure. You don't have to provision, scale, or configure &lt;a href="https://loft.sh/blog/vcluster-use-cases-platformcon/" rel="noopener noreferrer"&gt;clusters&lt;/a&gt; for virtual machines to run containers. In other words,&lt;a href="https://aws.amazon.com/fargate/" rel="noopener noreferrer"&gt; AWS Fargate&lt;/a&gt; allows you to focus only on building and running your application without worrying about your infrastructure. &lt;/p&gt;

&lt;h3&gt;
  
  
  Available for Amazon ECS and Amazon EKS
&lt;/h3&gt;

&lt;p&gt;Amazon ECS and EKS have two modes: Fargate and EC2 launch types. &lt;/p&gt;

&lt;p&gt;In the Fargate launch type, you only need to containerize the applications, specify the CPU and memory requirements, define the IAM access policies, and launch the application. &lt;/p&gt;
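&lt;p&gt;In practice, those CPU and memory requirements live in an ECS task definition. The following is a minimal sketch with hypothetical family and image names; registering it with &lt;code&gt;aws ecs register-task-definition&lt;/code&gt; additionally requires an execution role and AWS credentials:&lt;/p&gt;

```shell
# Sketch of a Fargate-compatible ECS task definition, written to a local
# file only. Family and image names are hypothetical.
printf '%s\n' \
  '{' \
  '  "family": "my-app",' \
  '  "requiresCompatibilities": ["FARGATE"],' \
  '  "networkMode": "awsvpc",' \
  '  "cpu": "256",' \
  '  "memory": "512",' \
  '  "containerDefinitions": [' \
  '    {' \
  '      "name": "web",' \
  '      "image": "my-app:latest",' \
  '      "portMappings": [{"containerPort": 80}]' \
  '    }' \
  '  ]' \
  '}' \
  > taskdef.json

# Register with:
#   aws ecs register-task-definition --cli-input-json file://taskdef.json
echo "task definition written"
```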

&lt;p&gt;With the EC2 launch type, you get more granular, server-level control over the infrastructure that runs your application. With this launch type, you can use ECS and EKS to manage a cluster of servers and schedule the placement of containers onto them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=9k2APeq0ORM&amp;amp;ab_channel=AWSOnlineTechTalks" rel="noopener noreferrer"&gt;Running Kubernetes on AWS Fargate - AWS Online Tech Talks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both ECS and EKS monitor the CPU, memory, and all other resources of the cluster. Additionally, they find the best server for running a container. ECS and EKS handle provisioning, scaling, and patching of clusters. Also, they can decide what kind of server to use, which applications to use, and how many containers they should run in a cluster to optimize their use and decide when to add or remove servers. &lt;/p&gt;

&lt;p&gt;The EC2 launch type offers more control over clustering and a greater range of customization. In turn, this makes it possible to meet the requirements of specific applications or to comply with government regulations. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fec2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fec2.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of AWS Fargate
&lt;/h3&gt;

&lt;p&gt;Here are some of the benefits of AWS Fargate. &lt;/p&gt;

&lt;h4&gt;
  
  
  No Need to Manage Clusters
&lt;/h4&gt;

&lt;p&gt;You can focus on containers and on building and running your application. &lt;/p&gt;

&lt;h4&gt;
  
  
  Seamless Scalability
&lt;/h4&gt;

&lt;p&gt; Scaling your applications will be much easier, as you will not need to provide resources for your applications since AWS manages everything. &lt;/p&gt;

&lt;h4&gt;
  
  
  Integration With Amazon ECS and EKS 
&lt;/h4&gt;

&lt;p&gt;Fargate integrates seamlessly with Amazon ECS and, since 2019, with Amazon EKS as well. &lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Tools for Managing Kubernetes on AWS
&lt;/h2&gt;

&lt;p&gt;There are some useful tools for managing Kubernetes on AWS that are worth discussing too. &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Elastic Container Registry (ECR)
&lt;/h3&gt;

&lt;p&gt;ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy container images. &lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;Amazon ECR&lt;/a&gt; integrates with ECS, simplifying development-to-production workflows. Images are automatically hosted on a highly available and scalable architecture, giving you the freedom to deploy reliable containers for your applications. Additionally, ECR integrates with AWS Identity and Access Management, which provides resource-level controls for each repository. Amazon ECR is billed by the amount of data stored and transferred, so there are no predefined quotas or upfront commitments. &lt;/p&gt;
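&lt;p&gt;A typical push to ECR looks like the following. The account ID, region, and repository name are placeholders, and the &lt;code&gt;aws&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt; commands require valid credentials and an existing repository:&lt;/p&gt;

```shell
# Build the registry URI from its standard components (placeholder values).
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=my-app
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "${REGISTRY}/${REPO}:latest"

# Typical push flow (requires AWS credentials; shown for reference only):
#   aws ecr get-login-password --region "$REGION" | \
#     docker login --username AWS --password-stdin "$REGISTRY"
#   docker tag my-app:latest "${REGISTRY}/${REPO}:latest"
#   docker push "${REGISTRY}/${REPO}:latest"
```

&lt;p&gt;Note that the registry hostname always follows this account-and-region pattern, which makes the URI easy to construct in CI scripts.&lt;/p&gt;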

&lt;h3&gt;
  
  
  AWS CodePipeline
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt; is a continuous integration and delivery service that executes application and infrastructure updates quickly and reliably. Also, it can be &lt;a href="https://aws.amazon.com/blogs/devops/continuous-deployment-to-kubernetes-using-aws-codepipeline-aws-codecommit-aws-codebuild-amazon-ecr-and-aws-lambda/" rel="noopener noreferrer"&gt;used with Kubernetes&lt;/a&gt; to create a continuous deployment pipeline. &lt;/p&gt;

&lt;p&gt;CodePipeline compiles, tests, and deploys your code every time there is a change, following the release process models you define. It enables fast and reliable delivery of features and updates. &lt;/p&gt;

&lt;p&gt;Using CodePipeline, you can easily create an end-to-end solution using third-party plugins like GitHub or by integrating your plugins at any release stage. With AWS CodePipeline, you pay only for your use, with no up-front fees or commitments. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpipeline.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpipeline.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CloudWatch Logs
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" rel="noopener noreferrer"&gt;CloudWatch Logs&lt;/a&gt; is a feature of CloudWatch that lets you consolidate and analyze the logs produced by your Kubernetes containers. It is an essential tool for recording execution data, since containers are stateless and do not store information locally.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Words
&lt;/h2&gt;

&lt;p&gt;All in all, AWS provides several different ways to manage Kubernetes containers. The choice of which service to opt for comes down to the specific requirements of your project, your budget, the dev team's experience, and other variables.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Talha Khalid. &lt;a href="http://medium.com/@talhakhalid101" rel="noopener noreferrer"&gt;Talha&lt;/a&gt; is a full-stack developer and data scientist who loves to make the cold and hard topics exciting and easy to understand.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>How to Create and Manage Kubernetes Secrets: A Tutorial</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Fri, 30 Sep 2022 19:53:37 +0000</pubDate>
      <link>https://dev.to/loft/how-to-create-and-manage-kubernetes-secrets-a-tutorial-15le</link>
      <guid>https://dev.to/loft/how-to-create-and-manage-kubernetes-secrets-a-tutorial-15le</guid>
      <description>&lt;p&gt;By Mercy Kibet&lt;/p&gt;

&lt;p&gt;When developing applications, it's common to have sensitive information that you would not want exposed to unauthorized personnel. Unlike configuration embedded directly in a Pod specification, a Secret can be created and stored independently of its associated Pods. This eliminates the risk of exposing the data during the workflow of creating, editing, and viewing Pods. This post will show you how to create and manage Kubernetes Secrets. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Kubernetes Secrets?
&lt;/h2&gt;

&lt;p&gt;Kubernetes applications are often configured with Secrets. They're a way to insulate sensitive information from the rest of your environment, making it harder for malicious actors to find and modify it. Access to Secrets can be controlled in different ways, for example by having an administrator create them and then grant access via role-based access control (RBAC). It can also be done with a policy file, which we'll cover in this tutorial.&lt;/p&gt;
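&lt;p&gt;As a small sketch of the RBAC approach, the Role below grants read-only access to Secrets in a single namespace. The role and namespace names are hypothetical:&lt;/p&gt;

```shell
# Sketch: namespace-scoped, read-only access to Secrets. The names
# (secret-reader, dev) are placeholders; the manifest is only written
# to a local file here, not applied to a cluster.
printf '%s\n' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: Role' \
  'metadata:' \
  '  name: secret-reader' \
  '  namespace: dev' \
  'rules:' \
  '  - apiGroups: [""]' \
  '    resources: ["secrets"]' \
  '    verbs: ["get", "list"]' \
  > secret-reader.yaml

# Grant it to a user or ServiceAccount with a RoleBinding, then:
#   kubectl apply -f secret-reader.yaml
echo "role written"
```

&lt;p&gt;Scoping the Role to a namespace and to the &lt;code&gt;get&lt;/code&gt; and &lt;code&gt;list&lt;/code&gt; verbs keeps the grant close to least privilege.&lt;/p&gt;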

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecret2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecret2.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Define Kubernetes Secrets
&lt;/h2&gt;

&lt;p&gt;Like any other object in Kubernetes, a Secret is defined by a spec with an &lt;strong&gt;&lt;code&gt;apiVersion&lt;/code&gt;&lt;/strong&gt;, a &lt;strong&gt;&lt;code&gt;kind&lt;/code&gt;&lt;/strong&gt;, a name under &lt;strong&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;/strong&gt;, a &lt;strong&gt;&lt;code&gt;type&lt;/code&gt;&lt;/strong&gt;, and the data itself. Below is a short example showing the required fields. Assign values for those fields and you'll have a working Kubernetes Secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  password: [base64 encoded value]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down this spec so you can see what's happening here. The &lt;strong&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;/strong&gt; field stores the object's name (here, &lt;strong&gt;&lt;code&gt;my-secret&lt;/code&gt;&lt;/strong&gt;), plus optional labels and annotations. The name is how Pods and other objects will reference the Secret later. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;type&lt;/code&gt;&lt;/strong&gt; tells Kubernetes how to interpret the contents. &lt;strong&gt;&lt;code&gt;Opaque&lt;/code&gt;&lt;/strong&gt; is the default and means arbitrary user-defined data; there are also built-in types for specific formats, such as &lt;strong&gt;&lt;code&gt;kubernetes.io/tls&lt;/code&gt;&lt;/strong&gt; for TLS certificates and &lt;strong&gt;&lt;code&gt;kubernetes.io/dockerconfigjson&lt;/code&gt;&lt;/strong&gt; for container registry credentials. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;data&lt;/code&gt;&lt;/strong&gt; holds the sensitive values as a map of keys to base64-encoded strings. Keep in mind that base64 is an encoding, not encryption: anyone allowed to read the Secret can decode it. If you'd rather write plain-text values and have Kubernetes encode them for you, use the &lt;strong&gt;&lt;code&gt;stringData&lt;/code&gt;&lt;/strong&gt; field instead of &lt;strong&gt;&lt;code&gt;data&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecret3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecret3.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see there are a lot of options for defining Secrets in Kubernetes, but that's not necessarily a bad thing. You can get started with the defaults and extend them later across your cluster, as long as you keep managing Secrets declaratively as YAML and never write to the underlying etcd store directly, which would be a really unfortunate mistake. &lt;/p&gt;
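&lt;p&gt;Since everything under &lt;strong&gt;&lt;code&gt;data&lt;/code&gt;&lt;/strong&gt; must be base64 encoded, it helps to know how to produce and check those values from your shell (the value &lt;code&gt;admin&lt;/code&gt; is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Encode a value for the data field (printf avoids a trailing newline)
$ printf '%s' admin | base64
YWRtaW4=

# Decode it to double-check
$ printf '%s' YWRtaW4= | base64 -d
admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use &lt;strong&gt;&lt;code&gt;printf&lt;/code&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;code&gt;echo -n&lt;/code&gt;&lt;/strong&gt; rather than plain &lt;strong&gt;&lt;code&gt;echo&lt;/code&gt;&lt;/strong&gt; so that a trailing newline doesn't end up inside the stored value.&lt;/p&gt;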
&lt;h3&gt;
  
  
  Defining a Secret Under the Namespace
&lt;/h3&gt;

&lt;p&gt;Let's assume we have a Secret with sensitive data that we want to make available in our cluster. Since Secrets are namespaced objects, we'll first create a dedicated namespace to hold it. We can do that with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create ns mysecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a namespace, we can proceed to define our Secret. We'll create a &lt;strong&gt;.yaml&lt;/strong&gt; file with the content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion : v1 
kind : Secret 
metadata : 
   name : mysecretns 
data :
   mysecret : YWRtaW4= 
type : Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can instruct &lt;strong&gt;kubectl&lt;/strong&gt; to create the Secret file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f mysecret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the Secret is in place with the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get secrets NAME TYPE DATA AGE mysecretns Opaque 8 1s 
$ kubectl describe secrets/mysecretns Name: mysecretns Namespace: default Labels: &amp;lt;none&amp;gt; Annotations: &amp;lt;none&amp;gt; Data ==== mysecret: 8 byte string (binary)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now we have successfully created a Secret file containing the information we need. We can use this information to set environment variables on instances in our cluster. We could also create a deployment and access the Secret values as environment variables with a container inside the deployment. &lt;/p&gt;

&lt;p&gt;As shown above, this is a straightforward example of deploying a Secret. Secrets can be used to store all sorts of information, from database passwords to private keys. &lt;/p&gt;
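&lt;p&gt;As a concrete sketch of that consumption pattern, a pod can expose the Secret's value to a container as an environment variable via &lt;strong&gt;&lt;code&gt;secretKeyRef&lt;/code&gt;&lt;/strong&gt; (the pod and variable names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: MY_SECRET          # exposed to the container as $MY_SECRET
          valueFrom:
            secretKeyRef:
              name: mysecretns     # the Secret created above
              key: mysecret        # the key inside its data map
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Kubernetes decodes the base64 value for you, so the container sees the plain text.&lt;/p&gt;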

&lt;h2&gt;
  
  
  Managing Kubernetes Secrets
&lt;/h2&gt;

&lt;p&gt;From your terminal, you can use the &lt;strong&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/strong&gt; command and subcommands such as &lt;strong&gt;create&lt;/strong&gt;, &lt;strong&gt;delete&lt;/strong&gt;, &lt;strong&gt;describe&lt;/strong&gt;, and &lt;strong&gt;apply&lt;/strong&gt; to manage and view your Kubernetes Secrets. First, we'll look at examples of each type with their corresponding command-line interface (CLI) output. Then, we'll examine how to use them in more detail. &lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing Kubernetes Secrets
&lt;/h3&gt;

&lt;p&gt;To view Kubernetes Secrets, use &lt;strong&gt;&lt;code&gt;kubectl get secrets&lt;/code&gt;&lt;/strong&gt; to list the Secrets in your cluster. Next, use &lt;strong&gt;&lt;code&gt;kubectl describe secret &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/strong&gt; to get more information about a specific Secret (the values themselves are not printed, only their sizes). &lt;/p&gt;

&lt;p&gt;Use &lt;strong&gt;&lt;code&gt;kubectl create -f file.yaml&lt;/code&gt;&lt;/strong&gt; to add a new Secret, replacing &lt;strong&gt;&lt;code&gt;file.yaml&lt;/code&gt;&lt;/strong&gt; with the name of the file that contains your Secret definition. &lt;/p&gt;

&lt;p&gt;Use &lt;strong&gt;&lt;code&gt;kubectl delete secret &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/strong&gt; to remove a specific Secret. &lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;&lt;code&gt;kubectl apply -f filename.yaml&lt;/code&gt;&lt;/strong&gt; is used to push changes that you have made locally to the cluster, creating the Secret if it doesn't exist yet and updating it in place if it does. &lt;/p&gt;

&lt;h3&gt;
  
  
  Deleting a Kubernetes Secret Using kubectl delete
&lt;/h3&gt;

&lt;p&gt;To delete a Secret, first use &lt;strong&gt;&lt;code&gt;kubectl get secrets&lt;/code&gt;&lt;/strong&gt; to confirm it exists and to check its exact name. &lt;/p&gt;

&lt;p&gt;You delete Kubernetes Secrets using the &lt;strong&gt;&lt;code&gt;kubectl delete&lt;/code&gt;&lt;/strong&gt; command. &lt;/p&gt;

&lt;p&gt;The following example deletes a Kubernetes Secret named &lt;strong&gt;&lt;code&gt;my-secret-name-1&lt;/code&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete secret my-secret-name-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Modifying a Secret's Properties
&lt;/h3&gt;

&lt;p&gt;To modify the properties of an existing Kubernetes Secret, you can use &lt;strong&gt;&lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;&lt;code&gt;kubectl edit&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The following command opens a Kubernetes Secret named &lt;strong&gt;production-secret&lt;/strong&gt; in your default editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit secret production-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify your edit, use &lt;strong&gt;&lt;code&gt;kubectl get secret &amp;lt;name&amp;gt; -o yaml&lt;/code&gt;&lt;/strong&gt; to print the stored object and confirm that your changes were saved. &lt;/p&gt;

&lt;h2&gt;
  
  
  Encrypting Kubernetes Secrets
&lt;/h2&gt;

&lt;p&gt;You can use Kubernetes to provide sensitive data only to people who require access to it. By default, however, Secret data is stored in etcd base64 encoded but unencrypted, so anyone with access to etcd or its backups, or with API permissions to read Secret objects, can recover the plain values. To reduce this exposure, you can enable encryption at rest: the API server then encrypts Secret data with a locally held key before writing it to etcd, and that key must be kept private. &lt;/p&gt;
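&lt;p&gt;Encryption at rest is configured by handing an &lt;strong&gt;&lt;code&gt;EncryptionConfiguration&lt;/code&gt;&lt;/strong&gt; file to the API server via its &lt;strong&gt;&lt;code&gt;--encryption-provider-config&lt;/code&gt;&lt;/strong&gt; flag. A minimal sketch (the key name is arbitrary, and the key itself is a base64-encoded 32-byte value you generate yourself, for example with &lt;code&gt;head -c 32 /dev/urandom | base64&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: &amp;lt;base64-encoded 32-byte key&amp;gt;
      - identity: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;&lt;code&gt;identity&lt;/code&gt;&lt;/strong&gt; provider at the end keeps previously stored, unencrypted Secrets readable while new writes are encrypted with &lt;strong&gt;&lt;code&gt;aescbc&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;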

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fdoor.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fdoor.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Alternatives
&lt;/h2&gt;

&lt;p&gt;Although Secrets keep sensitive values out of your pod specs and container images, by default the data is only base64 encoded, not encrypted, before it is stored. There are two popular alternatives that add real encryption: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sealed Secrets:&lt;/strong&gt; A controller plus the &lt;code&gt;kubeseal&lt;/code&gt; CLI that encrypt a regular Secret into a &lt;code&gt;SealedSecret&lt;/code&gt; object. The encrypted object is safe to store in Git, and only the controller running inside the cluster can decrypt it back into a Secret.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HashiCorp Vault:&lt;/strong&gt; A dedicated secrets manager that encrypts data at rest, integrates with many authentication providers and storage backends, and enforces fine-grained access and privilege-escalation policies on who may read or decrypt each secret.&lt;/li&gt;
&lt;/ol&gt;
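&lt;p&gt;With Sealed Secrets, for example, the object you commit to Git looks roughly like this (the &lt;code&gt;encryptedData&lt;/code&gt; value is ciphertext produced by the &lt;code&gt;kubeseal&lt;/code&gt; CLI; the one below is a truncated placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    password: AgBy8hC...   # placeholder ciphertext; only the in-cluster controller can decrypt it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;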

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecrets5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fsecrets5.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes Secrets are a great way to manage sensitive information for your applications. They can be used to store things like database passwords, API keys, and other sensitive data. Keeping that information safe takes deliberate effort, though, given the number of potential attack vectors available to attackers. Luckily, there are stronger options for securing your Secrets (and their backups), including Sealed Secrets and HashiCorp Vault. If you're looking for an easy way to manage Kubernetes Secrets, check out &lt;a href="https://loft.sh/docs/secrets/#operation/readVirtualclusterLoftShV1NamespacedHelmRelease" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;. Loft provides a simple and easy-to-use web interface for managing Kubernetes Secrets. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Mercy Kibet. &lt;a href="https://hashnode.com/@eiMJay" rel="noopener noreferrer"&gt;Mercy&lt;/a&gt; is a full-stack developer with a knack for learning and writing about new and intriguing tech stacks.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Kubernetes DaemonSet: What It Is and 5 Ways to Use It</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Wed, 28 Sep 2022 14:15:19 +0000</pubDate>
      <link>https://dev.to/loft/kubernetes-daemonset-what-it-is-and-5-ways-to-use-it-fp4</link>
      <guid>https://dev.to/loft/kubernetes-daemonset-what-it-is-and-5-ways-to-use-it-fp4</guid>
      <description>&lt;p&gt;By Mercy Kibet&lt;/p&gt;

&lt;p&gt;Kubernetes DaemonSet is a powerful tool for managing workloads that must run on every node in your cluster. It ensures that a copy of a given pod is always running on each node, even as nodes are added or removed. When setting up a Kubernetes cluster, DaemonSets are how you configure the node-level services you want running everywhere. &lt;/p&gt;

&lt;p&gt;In this post, we'll explain the problem it solves and five ways to use it. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a DaemonSet in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;A DaemonSet ensures that a copy of a specified pod runs on a set of nodes: one pod per matching node (by default, every node in the cluster). &lt;/p&gt;

&lt;p&gt;Kubernetes DaemonSets are typically used for node-level agents such as log collectors, monitoring daemons, caches, and cluster storage daemons. Because the number of pods follows the number of matching nodes, a DaemonSet guarantees a consistent per-node footprint across the cluster without you managing a replica count. &lt;/p&gt;

&lt;p&gt;A DaemonSet also scales with your cluster automatically: when nodes join, the controller schedules the pod on them, and when nodes leave, those pods are garbage collected. You don't have to worry about adjusting counts as the cluster grows or shrinks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpeapod.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpeapod.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is the Difference Between DaemonSet and Deployment?
&lt;/h2&gt;

&lt;p&gt;A DaemonSet manages one copy of a pod per matching node, so the set of nodes determines how many pods run. A Deployment manages a fixed number of replicas, and the scheduler decides where to place them, influenced by labels and other scheduling functions (e.g., tolerations). &lt;/p&gt;

&lt;p&gt;With a DaemonSet you never specify a replica count; scaling happens implicitly as nodes join and leave the cluster. With a Deployment, you must set the replica count yourself (or hand that job to an autoscaler). &lt;/p&gt;

&lt;p&gt;Additionally, a DaemonSet doesn't need to know the number of nodes in your &lt;a href="https://loft.sh/blog/multi-cluster-kubernetes-part-one-defining-goals-and-responsibilities/" rel="noopener noreferrer"&gt;Kubernetes cluster&lt;/a&gt;; it reacts to node changes automatically. A Deployment, by contrast, is indifferent to particular nodes: it only cares that the requested number of replicas is running somewhere. &lt;/p&gt;
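&lt;p&gt;The difference is visible in the manifests themselves: a Deployment declares how many replicas to run, while a DaemonSet has no replica count at all. A minimal Deployment for comparison (the image and labels are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3          # a Deployment pins the pod count; a DaemonSet has no such field
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;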

&lt;h2&gt;
  
  
  Ways to Use a DaemonSet
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Running a backup job&lt;/strong&gt;—You can have a DaemonSet running on every node in your cluster responsible for running backups of your etcd, MySQL data files, and PostgreSQL data files. If one pod fails for any reason, the other backup pods in the set will take over.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;—Another use case is to install an agent such as Sysdig on each node and launch a DaemonSet to manage all of these agents in a cluster-ready state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforcing network policy&lt;/strong&gt;—If you have a multi-tenant cluster and wish to lock down each tenant to its range of IPs, you can create a DaemonSet for each tenant that ensures every node is running the pods for that tenant's range of IPs, which is defined as part of the pod spec.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker registry&lt;/strong&gt;—Running a pull-through registry mirror or cache on every node is another use case for a DaemonSet: image pulls stay fast and local, and every node is guaranteed to have the mirror available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log aggregation&lt;/strong&gt;—If you want to aggregate your container logs and ship them off to a centralized logging tool like Logz.io, you can create a DaemonSet that launches a tail-logs pod and replicates it many times as required.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating a DaemonSet: An Nginx Example
&lt;/h2&gt;

&lt;p&gt;First, we need to create a &lt;a href="https://en.wikipedia.org/wiki/Manifest_file" rel="noopener noreferrer"&gt;Manifest file&lt;/a&gt; which will contain all of the necessary configuration information for our DaemonSet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: extensions/v1beta1 
kind: DaemonSet 
metadata: 
  name: nginx-example 
spec: template: metadata : labels : app : nginx spec : nodeSelector : role : web containers : - name : nginx image : nginx ports : - containerPort : 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specifying the node selector allows you to control which nodes the pod will run on. You could spread your pods across multiple nodes for load balancing, high availability, and more. You can use the labels for pod selection and service discovery. &lt;/p&gt;

&lt;p&gt;In this example, the DaemonSet runs one nginx pod on every node labeled &lt;strong&gt;&lt;code&gt;role: web&lt;/code&gt;&lt;/strong&gt;; drop the &lt;strong&gt;&lt;code&gt;nodeSelector&lt;/code&gt;&lt;/strong&gt; to target every node in the cluster. &lt;/p&gt;

&lt;p&gt;A DaemonSet pod can also run multiple containers. For example, you could run CoreDNS (a DNS server) on every node in your cluster, or run an image-processing helper on every node so containers there get faster local processing. &lt;/p&gt;

&lt;p&gt;The “spec” section tells Kubernetes what the pod is supposed to do; the “node selector” is how you choose which machines the pod is running on. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fspec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fspec.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating and Managing Kubelet DaemonSet
&lt;/h2&gt;

&lt;p&gt;When you add a node to your cluster, an existing DaemonSet automatically schedules its pod on the new node; you don't create a new DaemonSet per node. If you want the required pods to run on only a subset of nodes, use node selectors, or create multiple DaemonSets with different selectors. Once created, DaemonSets can be managed with kubectl like any other workload, or by an operator. &lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Sysdig Ops Agent with Kubernetes DaemonSet
&lt;/h3&gt;

&lt;p&gt;These steps explain how to use Kubernetes to run a node agent (Sysdig, in this example) under a DaemonSet and how that compares with creating a deployment. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the DaemonSet object in Kubernetes from a manifest file (a complete DaemonSet spec, such as the nginx example above):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s create-daemonset.sh -H "Content-Type: application/json" -d '{"kind":"DaemonSet", "apiVersion":"v1", "metadata": {"name": "sysdig"}}' | kubectl
 --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;The following is created in your Kubernetes cluster:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get svc/sysdig-k8s-agent

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

sysdig-k8s-agent ClusterIP 10.157.194.75 &amp;lt;none&amp;gt; 80:31185/TCP,443:31186/TCP 1m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Check the pod status to see how many pods are running in a cluster-ready state on your node.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s http://10.157.194.75:31186/api/v1/namespaces/system:pod/pods | jq '.status.containerStatus'

"active" "running" "stopped-for-preview"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Use the &lt;strong&gt;kubectl get daemonsets&lt;/strong&gt; command to list all DaemonSets you've created, then check that the desired, current, and ready pod counts match for your DaemonSet (the Sysdig pod). &lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;kubectl delete&lt;/strong&gt; command to remove your DaemonSet after using it successfully. &lt;/li&gt;
&lt;li&gt;To view the pod status of your DaemonSet in a cluster-ready state, use the &lt;strong&gt;kubectl get&lt;/strong&gt; command. &lt;/li&gt;
&lt;li&gt;To launch a DaemonSet, use the command below: &lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f daemonset.yaml --namespace=kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpepper.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpepper.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use a Kubernetes DaemonSet for Log Collection
&lt;/h2&gt;

&lt;p&gt;A common DaemonSet pattern is log collection: you run a log agent on every node so that all container logs are available to your logging backend. You use it for collecting performance, usage, and error logs. &lt;/p&gt;

&lt;p&gt;The setup has a few components: a log agent pod on each node (managed by the DaemonSet), the node's log files (typically mounted from the host's &lt;strong&gt;&lt;code&gt;/var/log&lt;/code&gt;&lt;/strong&gt; directory), and a central log handler that receives and processes the entries. It all comes together without touching your applications, because the agent reads the logs the container runtime already writes to each node. &lt;/p&gt;

&lt;p&gt;Because the agent runs beside your workloads rather than inside them, it collects logs, errors, and other system statistics without interfering with your application's behavior. &lt;/p&gt;

&lt;p&gt;The handler receives logs from the node agents and does the work of processing and forwarding them to the desired destination. You can fan out to multiple destinations from a single pipeline. &lt;/p&gt;

&lt;p&gt;Agents typically ship entries to the backend over a TCP or HTTP connection; the port depends on the tool (Fluentd's forward protocol, for example, defaults to port 24224). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fdaemonset.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fdaemonset.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes DaemonSet is a great way to manage and deploy applications in a clustered environment. It's easy to use and has a wide range of features, making it an ideal choice for managing applications in a production environment. &lt;/p&gt;

&lt;p&gt;You can use DaemonSet to run a cluster storage, log collection, and node monitoring demon on each node. Now that you know how to spin up a DaemonSet, check out &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;, a platform that gives you autonomy to fully leverage DaemonSet's capabilities. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Mercy Kibet. &lt;a href="https://hashnode.com/@eiMJay" rel="noopener noreferrer"&gt;Mercy&lt;/a&gt; is a full-stack developer with a knack for learning and writing about new and intriguing tech stacks.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
    <item>
      <title>Kubernetes Service Account: What It Is and How to Use It</title>
      <dc:creator>Lukas Gentele</dc:creator>
      <pubDate>Mon, 19 Sep 2022 14:38:44 +0000</pubDate>
      <link>https://dev.to/loft/kubernetes-service-account-what-it-is-and-how-to-use-it-5238</link>
      <guid>https://dev.to/loft/kubernetes-service-account-what-it-is-and-how-to-use-it-5238</guid>
      <description>&lt;p&gt;By Dawid Ziolkowski&lt;/p&gt;

&lt;p&gt;Kubernetes provides a few authentication and authorization methods. It comes with a built-in account management solution, which can also be integrated with your own user management system, like Active Directory or LDAP. User management is one thing, but there is also a whole additional layer of non-human access. Think about CI/CD access to the cluster, pod-to-pod communication, or pod-to-Kubernetes-API authentication. For these use cases, Kubernetes offers so-called "service accounts," and in this post, you'll learn what they are and how to use them. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Kubernetes Service Accounts?
&lt;/h2&gt;

&lt;p&gt;Whenever you access your Kubernetes cluster with &lt;strong&gt;&lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/strong&gt;, you are authenticated by Kubernetes with your user account. User accounts are meant to be used by humans. But when a pod running in the cluster wants to access the Kubernetes API server, it needs to use a service account instead. Service accounts are just like user accounts but for non-humans. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do Kubernetes Service Accounts Exist?
&lt;/h2&gt;

&lt;p&gt;Why do Kubernetes Service Accounts exist? The simple answer is because pods are not humans, so it's good to have a distinction from user accounts. It's especially important for security reasons. Also, once you start using an external user management system with Kubernetes, it becomes even more important since all your users will probably follow typical &lt;a href="mailto:firstname.lastname@your-company.com"&gt;firstname.lastname@your-company.com&lt;/a&gt; usernames. &lt;/p&gt;

&lt;p&gt;But, you may wonder, why would pods inside the Kubernetes cluster need to connect to the Kubernetes API at all? Well, there are multiple use cases for it. The most common one is when you have a CI/CD pipeline agent deploying your applications to the same cluster. Many cloud-native tools also need access to your Kubernetes API to do their jobs, such as logging or monitoring applications. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fserviceaccount.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fserviceaccount.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Default Service Account
&lt;/h2&gt;

&lt;p&gt;You now know what service accounts are and why you would need one. Let's discuss how to use them. The first thing you need to know is that you have probably already used service accounts even if you never configured any. That's because Kubernetes comes with a predefined service account called "default." And by default, every created pod has that service account assigned to it. Let's validate that. I'll create a simple nginx deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create deployment nginx1 --image=nginx
deployment.apps/nginx1 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's see the details of the deployed pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx1-585f98d7bf-84rxg   1/1     Running   0          12s

$ kubectl get pod nginx1-585f98d7bf-84rxg -o yaml
apiVersion: v1
kind: Pod
metadata:
  (...)
spec:
  containers:
  - image: nginx
    (...)
  serviceAccount: default
  serviceAccountName: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpeapod.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fpeapod.jpg" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  OK, What Does It Mean?
&lt;/h3&gt;

&lt;p&gt;So, it turns out that your pods have the default service account assigned even when you don't ask for it. This is because every pod in the cluster needs to have one (and only one) service account assigned to it. What can your pod do with that service account? Well, pretty much nothing. That default service account doesn't have any permissions assigned to it. &lt;/p&gt;

&lt;p&gt;We can validate that as well. Let's get into our freshly deployed nginx pod and try to connect to a Kubernetes API from there. For that, we'll need to export a few environment variables and then use the &lt;strong&gt;curl&lt;/strong&gt; command to send an HTTP request to the Kubernetes API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Export the internal Kubernetes API server hostname
$ APISERVER=https://kubernetes.default.svc

# Export the path to ServiceAccount mount directory
$ SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount

# Read the ServiceAccount bearer token
$ TOKEN=$(cat ${SERVICEACCOUNT}/token)

# Reference the internal Kubernetes certificate authority (CA)
$ CACERT=${SERVICEACCOUNT}/ca.crt

# Make a call to the Kubernetes API with TOKEN
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/default/pods
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
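&lt;p&gt;If you call the API from code rather than with curl, it helps to recognize this denial programmatically. Here's a minimal Python sketch (the helper name &lt;strong&gt;is_rbac_denied&lt;/strong&gt; is my own, illustrative choice) that checks whether a response body is an RBAC "Forbidden" Status object like the one above:&lt;/p&gt;

```python
import json

# Shape of the Status object Kubernetes returns on an RBAC denial,
# matching the curl output shown above (trimmed for brevity).
RESPONSE_BODY = """{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \\"system:serviceaccount:default:default\\" cannot list resource \\"pods\\"",
  "reason": "Forbidden",
  "code": 403
}"""


def is_rbac_denied(body: str) -> bool:
    """Return True if the body is a Kubernetes 'Forbidden' Status object."""
    try:
        obj = json.loads(body)
    except ValueError:
        # Not JSON at all, so not a Status object.
        return False
    return (
        obj.get("kind") == "Status"
        and obj.get("reason") == "Forbidden"
        and obj.get("code") == 403
    )


print(is_rbac_denied(RESPONSE_BODY))  # True
```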



&lt;p&gt;As you can see, the default service account indeed doesn't have many permissions. It's really only there to fulfil the requirement that each pod has a service account assigned to it. &lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Your Own Service Accounts
&lt;/h3&gt;

&lt;p&gt;So, if you want your pod to actually be able to talk to the Kubernetes API and do something, you have two options. You either need to assign some permissions to the default service account, or you need to create a new service account. The first option is not recommended. In fact, you shouldn't use the default service account for anything. Let's choose the recommended option then, which is creating dedicated service accounts. It's also worth mentioning here that, just like with user access, you should create separate service accounts for separate needs. &lt;/p&gt;

&lt;p&gt;The easiest way to create a service account is by executing the &lt;strong&gt;kubectl create serviceaccount&lt;/strong&gt; command followed by a desired service account name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create serviceaccount nginx-serviceaccount
serviceaccount/nginx-serviceaccount created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just like with anything else in Kubernetes, it's worth knowing how to create one using the YAML definition file. In the case of service accounts, it's actually really simple and looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-serviceaccount
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'll save it as nginx-sa.yaml and apply that simple YAML file using &lt;strong&gt;kubectl apply -f nginx-sa.yaml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f nginx-sa.yaml
serviceaccount/nginx-serviceaccount created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see from the output above that the service account was created, but you can double-check with &lt;strong&gt;kubectl get serviceaccounts&lt;/strong&gt;, or &lt;strong&gt;kubectl get sa&lt;/strong&gt; for short.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get serviceaccounts
NAME                   SECRETS   AGE
default                1         3h14m
nginx-serviceaccount   1         72s

$ kubectl get sa
NAME                   SECRETS   AGE
default                1         3h14m
nginx-serviceaccount   1         112s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Assigning Permissions to a Service Account
&lt;/h3&gt;

&lt;p&gt;OK, you created a new service account for your pod, but by default, it won't be able to do much more than the default service account (called "default") that you saw previously. To change that, you can use the standard Kubernetes role-based access control (RBAC) mechanism. This means you can either use an existing Role (or ClusterRole) or create a new one, and then use a RoleBinding to bind that role to your new ServiceAccount. &lt;/p&gt;

&lt;p&gt;For the purpose of the demo, you can use the built-in Kubernetes ClusterRole called "view," which grants read-only access to most resources. You then need to create a RoleBinding that binds it to your service account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create rolebinding nginx-sa-readonly \
  --clusterrole=view \
  --serviceaccount=default:nginx-serviceaccount \
  --namespace=default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
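&lt;p&gt;If you prefer the declarative approach, the same binding can be written as a YAML file and applied with &lt;strong&gt;kubectl apply -f&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-sa-readonly
  namespace: default
subjects:
- kind: ServiceAccount
  name: nginx-serviceaccount
  namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;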



&lt;p&gt;Now, any pod using the new ServiceAccount should be able to view all resources in the default namespace. To validate that, you can perform the same test you did earlier, but first you need to assign that ServiceAccount to your nginx pod. &lt;/p&gt;
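&lt;p&gt;You can also verify the binding without starting a pod at all by impersonating the service account with &lt;strong&gt;kubectl auth can-i&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl auth can-i list pods --as=system:serviceaccount:default:nginx-serviceaccount
yes

$ kubectl auth can-i delete pods --as=system:serviceaccount:default:nginx-serviceaccount
no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;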

&lt;h2&gt;
  
  
  Specifying ServiceAccount For Your Pod
&lt;/h2&gt;

&lt;p&gt;As mentioned previously, if you don't specify any service account for your pod, it will be assigned the "default" service account. You just created a new service account for your needs, so you'll want to use that one instead. To do so, pass the service account name as the value of the &lt;strong&gt;serviceAccountName&lt;/strong&gt; key in the pod &lt;strong&gt;spec&lt;/strong&gt; section of your deployment definition file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      serviceAccountName: nginx-serviceaccount
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Now, after applying this definition, you can try to perform the same test as before from within the pod (remember to export the environment variables again, since the new pod is a fresh container with its own mounted token).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "52233"
  },
  "items": [
    {
      "metadata": {
        "name": "nginx1-65448895f9-5j6b6",
        "generateName": "nginx1-65448895f9-",
        "namespace": "default",
        "uid": "b09bfa93-a388-4cd9-9495-131f620613d0",
        "resourceVersion": "49536",
(...)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And as expected, now it works fine. So, if for any reason your pods need access to the Kubernetes API, you create a new ServiceAccount, assign a Role or ClusterRole to it via a RoleBinding, and then specify the ServiceAccount name in your deployment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fserviceaccount2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Floft.sh%2Fblog%2Fimages%2Fcontent%2Fserviceaccount2.png" title="image_tooltip" alt="alt_text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Service accounts are extremely useful. They provide a way for your pods to access the Kubernetes API, and many Kubernetes-native tools rely on this mechanism. Therefore, it's good to know what service accounts are and how they are used to access the Kubernetes API. &lt;/p&gt;

&lt;p&gt;However, you also need to be careful because a misconfigured service account can be a security risk. If, for example, to save time, you decide to increase the permissions of the default service account (instead of creating a new one), you'll make it possible for any pod in the cluster to access the Kubernetes API. If that access is read-only, it can lead to data exposure, and if it's read-write, the damage can be even greater. Fortunately, as you learned in this post, creating a dedicated service account is easy. &lt;/p&gt;

&lt;p&gt;If you want to learn more about &lt;a href="https://loft.sh/blog/100-kubernetes-tutorials-to-get-you-from-zero-to-hero/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, take a look at &lt;a href="https://loft.sh/blog" rel="noopener noreferrer"&gt;our other blog posts&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Dawid Ziolkowski. &lt;a href="https://medium.com/@dawid.ziolkowski" rel="noopener noreferrer"&gt;Dawid&lt;/a&gt; has 10 years of experience as a Network/System Engineer at the beginning, DevOps in between, Cloud Native Engineer recently. He’s worked for an IT outsourcing company, a research institute, telco, a hosting company, and a consultancy company, so he’s gathered a lot of knowledge from different perspectives. Nowadays he’s helping companies move to cloud and/or redesign their infrastructure for a more Cloud Native approach.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vcluster</category>
      <category>devspace</category>
      <category>loft</category>
    </item>
  </channel>
</rss>
