Michael Levan

Platform Engineering On Kubernetes Part 5: KubeVirt

There’s always been some type of divide between developers and infrastructure engineers/sysadmins. One wrote the code and the other spun up the environments for the code to run. With various movements that have occurred around cloud, DevOps, and open-source contributions, those lines got blurred. Everyone was writing code and working in the cloud.

With Kubernetes, those lines are even more blurred. Everyone is writing a little code and managing things like scaling, networks, and storage, but what about the environments that still need VMs? Organizations still need VMs and standard infrastructure regardless of the Dev and Ops lines being blurred.

This is where KubeVirt comes into play.

What is KubeVirt?

When it comes to Platform Engineering and Kubernetes, we still need to think about VMs. There are millions of Virtual Machines still running in the world and the truth is, that probably won’t stop for a very long time, if ever. Think about it - mainframes are still running.

Not every application will be containerized.

Because of that, we need a way to manage VMs in Kubernetes if Kubernetes is going to be the future platform of choice.

Enter KubeVirt.

KubeVirt is a method of, essentially, containerizing a VM. It almost feels like nested virtualization in a sense. You take an Operating System ISO or a .img file and you then run it inside of Kubernetes. You can manage it declaratively as you can with any other Pod, but it “registers” in Kubernetes as a virtual machine.

Why KubeVirt?

KubeVirt installs just like any other workload on Kubernetes - as a Pod with Operator components (CRDs and Controllers).

KubeVirt makes it possible to manage VMs on Kubernetes. You essentially import an existing VM or create a new one by copying its hard drive over to a Kubernetes Volume and then running it from that Volume.

It’s the declarative approach to dealing with VMs - much like you’d see with Ansible or Chef, except now it’s running on Kubernetes. If Kubernetes is to be the de facto standard for underlying platforms, there must be a method for managing and running workloads that aren’t just containers, because not all applications will be containerized.

An important note - you’re not CONNECTING to existing virtual machines that are still running. You’re taking workloads that would run on an existing VM and running them inside of Kubernetes.

You're running a new style of virtualization, not connecting directly to a VM that's already running.

KubeVirt Breakdown

When using KubeVirt, you have the ability to choose an imperative approach, a declarative approach, or both.

The declarative approach will use an Operator. There are CRDs and Operators that you can install, which you’ll see how to do in the hands-on section of this blog post. With the declarative approach, you’ll use the kubevirt.io/v1 API with the available custom resources. For example, one of the custom resources is the VirtualMachine resource/object.
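
If KubeVirt is already installed (which you’ll see how to do later in this post), a quick way to see which custom resources the kubevirt.io API group exposes is:

kubectl api-resources --api-group=kubevirt.io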

The imperative approach uses virtctl, the command-line utility for KubeVirt. Kubernetes is declarative by nature, so the goal is to always use the declarative approach, but there is certain functionality that will call for the CLI. For example, using the image-upload command to upload ISOs or .img files to use inside of Kubernetes as a VM is done via the CLI.
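
As a rough sketch of that imperative workflow, the command below uploads an ISO into a new DataVolume - this assumes the Containerized Data Importer (covered later in this post) is installed, its upload proxy is reachable, and the file name is just a placeholder:

virtctl image-upload dv ubuntu-iso-dv --size=10Gi --image-path=./ubuntu-22.04.iso --insecure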


To get the CLI installed, there are two approaches:

  • Krew
  • From source

Krew is a way to manage plugins for Kubernetes. For more info, check out the following link: https://krew.sigs.k8s.io/

To install virtctl via Krew, run the following:

kubectl krew install virt

ARM Devices (Like an M1 Mac)

At the time of writing this, the virtctl plugin doesn’t work with Krew if you’re on an ARM-based system, like a Mac M1. Luckily, there’s a way around that starting with v1.0 of KubeVirt by going directly to source.

First, download virtctl for ARM: https://github.com/kubevirt/kubevirt/releases/tag/v1.1.0-alpha.0

Next, run it via the terminal. For example, to start a new VM called “deployvm” with the CLI, run the following:

./virtctl-v1.1.0-alpha.0-darwin-arm64 start deployvm

You can put virtctl in your $PATH as well.
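
For example, on macOS that could look like this (assuming the binary name from the release above):

chmod +x virtctl-v1.1.0-alpha.0-darwin-arm64
sudo mv virtctl-v1.1.0-alpha.0-darwin-arm64 /usr/local/bin/virtctl
virtctl version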

Deploying A VM

In this section, you’re going to learn how to deploy an Ubuntu VM within Kubernetes using KubeVirt.

Installing KubeVirt

First, you’ll need to install KubeVirt by:

  • Specifying the version of KubeVirt you want to use, which can be the latest.
  • Installing the Operator (which ships the CRDs) from that version.
  • Installing the KubeVirt custom resource from that version.

export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)

echo $VERSION

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-operator.yaml

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt-cr.yaml

After the above is installed, run kubectl get all -n kubevirt and you should see an output similar to the one below.

kubectl get all -n kubevirt
NAME                               READY   STATUS    RESTARTS      AGE
pod/virt-api-5db54b85bd-65j59          1/1     Running   1 (22h ago)   26h
pod/virt-api-5db54b85bd-th5z8          1/1     Running   1 (22h ago)   26h
pod/virt-controller-594d556ff5-gmqdr   1/1     Running   1 (22h ago)   26h
pod/virt-controller-594d556ff5-r2vmr   1/1     Running   1 (22h ago)   26h
pod/virt-handler-ch522                 1/1     Running   0             26h
pod/virt-handler-sx7cs                 1/1     Running   0             26h
pod/virt-handler-wzdnr                 1/1     Running   0             25h
pod/virt-handler-xxkq8                 1/1     Running   0             26h
pod/virt-operator-599794b899-pmcj5     1/1     Running   0             26h
pod/virt-operator-599794b899-qkvcp     1/1     Running   0             26h

NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubevirt-operator-webhook     ClusterIP   10.0.217.253   <none>        443/TCP   26h
service/kubevirt-prometheus-metrics   ClusterIP   None           <none>        443/TCP   26h
service/virt-api                      ClusterIP   10.0.226.106   <none>        443/TCP   26h
service/virt-exportproxy              ClusterIP   10.0.206.74    <none>        443/TCP   26h
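
If you’d rather wait for the installation to finish instead of re-running kubectl get all, you can also wait on the KubeVirt custom resource to report that it’s available:

kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s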

Creating The VM

Once KubeVirt is installed, you can use the VirtualMachine object to create a new VM. For example, below is a Manifest that will create a new VM called “deployvm” with an Ubuntu image.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: deployvm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: medium
        kubevirt.io/domain: test
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: mcas/kubevirt-ubuntu-20.04:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

💡 You can also start a VM by using the virtctl start command.
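
For example, using the VM name from the Manifest above:

virtctl start deployvm
virtctl stop deployvm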

Once you run the VirtualMachine Manifest, you can use the kubectl get vms command to see it starting up.

NAME       AGE   STATUS     READY
deployvm   3s    Starting   False

After about 30 seconds, the STATUS should change to Running.

NAME       AGE   STATUS    READY
deployvm   42s   Running   True
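
Behind the scenes, a running VirtualMachine also creates a VirtualMachineInstance (VMI) object, which you can inspect as well:

kubectl get vmis
kubectl describe vmi deployvm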

Access The VM

To access the VM through what is essentially a serial console (think SSH-style access from your terminal), you can use the console command within the virtctl CLI.

For example, if your VM is called “deployvm”, use the following line:

virtctl console deployvm
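
If virtctl is installed via Krew instead of from source, the equivalent command is below. To exit the console session, the escape sequence is Ctrl + ].

kubectl virt console deployvm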

Golden Images

If you have a Sysadmin or Infrastructure Engineer background, the term “golden image” holds a special place in your heart. From a Cloud Engineering perspective, you may know it as an AMI.

It’s an image that’s designed for exactly what you need. Need an Ubuntu box with X type of apps and software? Create a clone of an image so you can use it later. Need a Windows Server 2019 box with Active Directory? Create one, make a clone of it, and keep that clone as the “golden image”.

In Kubernetes, there are two different approaches:

  • Containerized Data Importer (CDI).
  • Storing the ISO in a container image.

CDI (the Containerized Data Importer) is a way to take a .img or ISO, save it in a Volume, and use it later when creating a Virtual Machine. CDI has its own CRDs and Operator that are installed separately from KubeVirt.
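
For reference, a minimal sketch of the CDI approach looks something like the DataVolume below - this assumes the CDI Operator is installed, and the image URL is a placeholder you’d swap for a real one:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-golden-image
spec:
  source:
    http:
      url: "https://example.com/images/ubuntu-20.04.img" # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi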

Storing an ISO in a container image is another approach and seems to be a bit more straightforward at the time of writing this. Because of that, this blog will follow the “ISO in a container image” approach.

The ISO used will be Windows Server 2019.

Prepare Windows

First, download the following ISO: https://www.microsoft.com/en-US/evalcenter/evaluate-windows-server-2019?filetype=ISO

Next, create a Dockerfile that’ll allow you to use an ISO as a bootable file.

FROM scratch
ADD  17763.3650.221105-1748.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us.iso /disk/


Once complete, push the container image to Dockerhub. If you don’t have a Dockerhub account, you can sign up for free here: https://hub.docker.com/signup

When you sign up, you’ll need to log into Dockerhub on the CLI.

docker login -u your_dockerhub_username

You’ll be prompted for your password.

Once you’re logged in, build the container image, tag it, and push it to Dockerhub. For example, my Dockerhub org name is adminturneddevops, so I use that as my org name below.

docker build -t winkube:latest .
docker tag winkube:latest adminturneddevops/winkube:latest
docker push adminturneddevops/winkube:latest

VM Deployment

Now that the container image is built and stored on Dockerhub, you can use it for the VM deployment.

First, create a PVC. As mentioned above, a Volume is used to store what’s needed for the VM to run. It’s like a computer hard drive in this case.

💡 Ensure that you research what Storage Class you need for your particular environment.
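
To see which Storage Classes exist in your cluster (and which one is the default), you can run:

kubectl get storageclass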

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: winhd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: name_of_storageclass
EOF
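
You can confirm the claim was created (and, depending on the Storage Class, bound) with:

kubectl get pvc winhd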

Next, create the VM. The VM is created using the VirtualMachine object/resource from the KubeVirt API.

Ensure that on the image: line, you put your Dockerhub org name.

kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  generation: 1
  labels:
    kubevirt.io/os: windows
  name: windowsonkubernetes
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: vm1
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - cdrom:
              bus: sata
            bootOrder: 1
            name: iso
          - disk:
              bus: virtio
            name: harddrive
          - cdrom:
              bus: sata
              readonly: true
            name: virtio-drivers
        machine:
          type: q35
        resources:
          requests:
            memory: 4096M
      volumes:
      - name: harddrive
        persistentVolumeClaim:
          claimName: winhd
      - name: iso
        containerDisk:
          image: YOUR_DOCKER_HUB_ORG/winkube:latest
      - name:  virtio-drivers
        containerDisk:
          image: kubevirt/virtio-container-disk
EOF

Run kubectl get vm and, after about 2-3 minutes, you should see that the VM is running.

kubectl get vm

NAME                  AGE   STATUS    READY
windowsonkubernetes   21m   Running   True

Connection

To connect via VNC, you’ll need a VNC client such as remote-viewer. If you don’t have one installed, you can use the following link: https://www.realvnc.com/en/connect/download/viewer/macos/

If you have virtctl installed via Krew, use the following to connect via VNC:

kubectl virt vnc windowsonkubernetes

If you installed virtctl from source, use the following:

virtctl vnc windowsonkubernetes

You should now see the VNC console pop up.


Congrats! You’re successfully running a full-blown Windows VM on Kubernetes.
