In the modern era of application development, containers have emerged as the preferred choice for building and scaling applications. However, the reality of enterprise infrastructure often includes a mix of containers and traditional virtual machines (VMs). OpenShift virtualization addresses this challenge by integrating container orchestration capabilities with virtualization technology, allowing organizations to manage both containers and virtual machines on a single, unified platform. In this article, we will explore the inner workings of OpenShift virtualization and walk through a hands-on example of deploying virtual machines on the platform.
The Foundation of OpenShift Virtualization: KubeVirt
At the core of OpenShift virtualization lies KubeVirt, an open source Kubernetes add-on (a CNCF project) that integrates virtual machine management into the OpenShift platform. KubeVirt provides a powerful API that allows users to create, manage, and orchestrate virtual machines alongside containers, all within the familiar OpenShift environment.
Under the hood, OpenShift virtualization leverages the KVM hypervisor, a mature and widely-used virtualization technology. KVM is a kernel module that enables the Linux kernel to function as a hypervisor, providing a stable and efficient foundation for running virtual machines. By combining KubeVirt with KVM, OpenShift virtualization enables users to manage virtual machines using the same tools and processes they use for managing containers.
The Workflow of OpenShift Virtualization
When a user defines a VirtualMachine (VM) resource in OpenShift, the platform springs into action. The VM definition serves as a blueprint for the desired virtual machine, specifying essential details such as the VM image, allocated memory and CPU resources, storage requirements, and networking configuration.
Once the definition is submitted to the OpenShift API, the cluster validates the input and stores it as a custom resource (CR), an instance of the VirtualMachine custom resource definition (CRD) that KubeVirt installs. When the VM is started, KubeVirt creates a corresponding VirtualMachineInstance (VMI) object, which represents the running virtual machine within the OpenShift ecosystem.
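To make this concrete, here is a minimal sketch of such a VirtualMachine manifest. The name, container disk image, and resource sizes are illustrative choices, not defaults:

```yaml
# Sketch of a minimal VirtualMachine manifest; the metadata name, the
# container disk image, and the CPU/memory requests are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-example
spec:
  running: false          # create the object without starting the VM
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Applying this manifest with oc apply -f creates the VM object; starting it (for example with virtctl start fedora-example) is what triggers creation of the corresponding VMI.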
The virt-controller, a key component of KubeVirt, continuously monitors the VMI definitions. When a new VMI is detected, the virt-controller creates a regular OpenShift pod, known as the virt-launcher pod, that acts as a container for the virtual machine. This pod undergoes the standard OpenShift scheduling process to determine the most suitable node in the cluster to host the virtual machine.
Once the pod is scheduled on a node, the virt-controller updates the VMI definition with the assigned node information and hands over control to the virt-handler daemon running on that specific node. The virt-handler is responsible for managing the lifecycle of virtual machines on the node, ensuring that they are created, started, stopped, and terminated according to the desired state specified in the VMI.
Inside each pod hosting a virtual machine, the virt-launcher component configures the pod's internal resources, such as cgroups and namespaces, to provide a secure and isolated environment for the VM to operate. The virt-launcher uses an embedded instance of libvirtd, the management daemon of the libvirt virtualization library, to interact with the underlying KVM hypervisor and manage the VM's lifecycle.
By leveraging OpenShift's native scheduling, networking, and storage infrastructure, KubeVirt enables virtual machines to benefit from the same features and capabilities enjoyed by containerized workloads. This includes advanced scheduling policies, network isolation, load balancing, and high availability, ensuring that virtual machines are treated as first-class citizens within the OpenShift ecosystem.
Bridging the Gap: Containerized Virtual Machines in OpenShift
OpenShift virtualization introduces the concept of containerized virtual machines, which may seem counterintuitive at first glance. Traditionally, virtual machines and containers have been viewed as separate entities, each with its own management paradigms. However, OpenShift virtualization bridges this gap by running virtual machines within containers, enabling a unified approach to managing both workloads.
The Traditional KVM Approach
In a traditional KVM setup, virtual machines are managed directly on the host system. Each virtual machine is represented by a qemu-kvm process, which is spawned with extensive parameters defining the VM's hardware specifications. These processes interact directly with the host system's resources to create and manage the virtual machines.
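For illustration, a traditional qemu-kvm invocation might look like the following sketch. The VM name, image path, and device parameters are hypothetical, and the command is only assembled here, not executed, since running it requires a KVM-capable host:

```shell
# Illustrative sketch of a traditional qemu-kvm command line; every value
# below (name, memory, disk path, bridge) is a placeholder. In a plain
# KVM setup, one such process runs per VM directly on the host.
QEMU_CMD="qemu-kvm \
  -name rhel9-vm \
  -m 2048 \
  -smp cpus=2 \
  -drive file=/var/lib/libvirt/images/rhel9.qcow2,format=qcow2 \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0"
echo "$QEMU_CMD"
```

In practice these processes are usually spawned and supervised by libvirt rather than typed by hand, but the point stands: the VM is just a host process with a long parameter list.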
The Containerized Approach in OpenShift
OpenShift virtualization takes a different approach. Instead of running virtual machines directly on the host system, OpenShift creates a dedicated pod for each virtual machine. This pod acts as a container that encapsulates the virtual machine process.
Inside each pod, the virt-launcher component is responsible for managing the virtual machine. It utilizes libvirtd, the daemon of the libvirt virtualization management library, to interact with the underlying virtualization technology, such as KVM, on the host system. This approach allows virtual machines to be managed as native OpenShift objects, benefiting from the platform's robust scheduling, networking, and storage capabilities.
Benefits of Containerized Virtual Machines
By treating virtual machines as containerized workloads, OpenShift virtualization enables seamless integration with the platform's existing features and tools. Virtual machines can leverage OpenShift's advanced scheduling policies, ensuring optimal placement based on resource requirements, affinity rules, and other constraints. They can also benefit from OpenShift's load balancing and high availability mechanisms, enhancing the resilience and scalability of virtualized applications.
Containerized virtual machines also inherit OpenShift's software-defined networking (SDN) capabilities. They can be seamlessly integrated into the cluster's network fabric, allowing them to communicate with other workloads using standard OpenShift services and routes. Network policies, ingress, and egress rules can be applied to virtual machines, enabling fine-grained control over network traffic and enhancing security.
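As a sketch of that fine-grained control, a standard Kubernetes NetworkPolicy can target a VM's virt-launcher pod by label. The labels, namespace, and port below are assumptions for illustration, they presume the VM's pod template was labeled app: legacy-db and serves PostgreSQL:

```yaml
# Sketch: allow only frontend pods to reach a VM on TCP 5432. Assumes
# the VM's template metadata carries the label "app: legacy-db" and the
# callers carry "role: frontend"; adjust both to your own labels.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-vm
spec:
  podSelector:
    matchLabels:
      app: legacy-db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 5432
```

Because the VM runs inside a regular pod, no VM-specific firewalling construct is needed; the same policy objects that govern containers govern the VM.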
This unified approach to managing containers and virtual machines simplifies operations and reduces complexity. Administrators can use familiar OpenShift tools and workflows to manage both types of workloads, streamlining the deployment, scaling, and monitoring processes. Developers can also leverage OpenShift's CI/CD pipelines, GitOps practices, and other cloud-native paradigms to manage the lifecycle of virtualized applications.
By bridging the gap between containers and virtual machines, OpenShift virtualization enables organizations to modernize their infrastructure incrementally. Legacy applications that require virtual machines can coexist with containerized workloads, allowing for a smooth transition to a cloud-native architecture. This flexibility empowers organizations to adopt a hybrid approach, leveraging the benefits of both containers and virtual machines within a single, unified platform.
Deploying Virtual Machines in OpenShift: A Hands-On Guide
Now that we have a solid understanding of how OpenShift virtualization works, let's dive into the practical aspects of deploying virtual machines within an OpenShift environment. In this section, we will explore the different methods available for creating and managing virtual machines, including using existing templates, custom templates, and YAML definitions.
Prerequisites
Before we begin, ensure that you have the OpenShift Virtualization operator installed and configured in your OpenShift cluster. The operator can be installed from the Operator Hub, and it's crucial to select a version that matches your OpenShift cluster version. Once the operator is up and running, you will see a new "Virtualization" tab in the OpenShift console.
It's also important to note that virtual machines in OpenShift are assigned to specific projects. Users must have the necessary permissions for the target namespace to access, manage, and monitor the virtual machines within it.
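Those permissions are granted with ordinary Kubernetes RBAC. The following sketch binds the built-in edit cluster role to a user in a VM project; the user name and namespace are placeholders:

```yaml
# Sketch: give user "alice" edit rights in the "vm-project" namespace so
# she can create and manage VMs there. Both names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vm-operators
  namespace: vm-project
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```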
Setting Up Storage
OpenShift virtualization relies on the Containerized Data Importer (CDI) to manage persistent storage for virtual machines. CDI creates Persistent Volume Claims (PVCs) based on the defined specifications and retrieves the disk image to populate the underlying storage volume. To ensure smooth operations, make sure you have a default storage class set up in your cluster.
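A typical way to drive CDI is a DataVolume resource, which declares where the disk image comes from and how large the backing PVC should be. The URL, name, and size in this sketch are placeholders:

```yaml
# Sketch of a CDI DataVolume that imports a qcow2 image over HTTP into a
# newly created PVC; the URL, name, and requested size are placeholders.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-root-disk
spec:
  source:
    http:
      url: https://example.com/images/rhel9.qcow2
  storage:
    resources:
      requests:
        storage: 30Gi
```

When this object is created, CDI provisions the PVC (using the default storage class unless one is named) and runs an importer pod that downloads and writes the image into it.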
You can check the available storage classes using the command oc get sc; the default storage class is marked with "(default)" next to its name. If needed, you can change the default with the oc patch command by setting the standard storageclass.kubernetes.io/is-default-class annotation.
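The patch itself is just an annotation change. In this sketch the storage class name fast-rbd is a placeholder, and the oc command is shown commented out because it requires a live cluster:

```shell
# Sketch: mark a storage class as the cluster default. "fast-rbd" is a
# placeholder class name; the annotation is the standard Kubernetes
# default-class marker.
PATCH='{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
echo "$PATCH"
# Against a real cluster you would run:
# oc patch storageclass fast-rbd -p "$PATCH"
```

If another class is currently the default, set its annotation to "false" with the same command so that only one default remains.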
Using the virtctl Utility
The virtctl utility is a powerful tool for creating and managing virtualization manifests in OpenShift. You can download the virtctl utility from the "Overview" section in the Virtualization menu of the OpenShift console.
Once downloaded, decompress the archive and copy the virtctl binary to a directory in your PATH environment variable. Make sure to grant execute permissions to the binary using the chmod +x virtctl command.
With virtctl installed, creating virtual machine manifest files becomes a breeze. For example, the command virtctl create vm --name vm-1 prints a YAML manifest with the necessary configuration for a virtual machine named "vm-1" to standard output; redirect it to a file (virtctl create vm --name vm-1 > vm-1.yaml) to save it, or pipe it straight into oc create -f - to apply it immediately.
Customizing Virtual Machine Specifications
While the basic virtual machine manifest provides a starting point, you'll often need to customize the specifications to meet your requirements. This includes defining the operating system, disk size, and compute resources.
OpenShift virtualization offers predefined instance types that provide various combinations of CPU and memory configurations. These instance types are categorized into different series, such as CX for compute-intensive workloads, U for general-purpose applications, GN for GPU-accelerated workloads, and M for memory-intensive applications.
You can explore the available cluster-wide instance types using the command oc get vmclusterinstancetype.
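Referencing an instance type replaces hand-written CPU and memory requests in the VM spec. In this sketch, u1.medium is one of the general-purpose profiles; the VM name and container disk image are illustrative:

```yaml
# Sketch: a VirtualMachine that takes its CPU and memory sizing from the
# cluster-wide "u1.medium" instance type instead of spelling out
# resources by hand. Name and image are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-u1-medium
spec:
  running: false
  instancetype:
    kind: VirtualMachineClusterInstancetype
    name: u1.medium
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Note that a VM using an instance type must not also set explicit CPU and memory requests in its domain spec, since the two would conflict.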
Conclusion
OpenShift virtualization represents a significant advancement in the realm of application deployment and management. By seamlessly integrating traditional virtualization capabilities with the power of container orchestration, OpenShift provides a unified platform that caters to the diverse needs of modern enterprises.
Through the use of KubeVirt and the KVM hypervisor, OpenShift virtualization enables the management of virtual machines as native OpenShift objects. This approach brings the benefits of containerization, such as scalability, flexibility, and automation, to virtualized workloads. Organizations can now leverage the same tools, workflows, and best practices they use for containerized applications to manage their virtual machines.
The ability to run virtual machines within containers opens up new possibilities for application modernization and hybrid cloud deployments. Legacy applications that rely on virtual machines can be gradually migrated to OpenShift, coexisting with cloud-native workloads. This allows organizations to adopt a phased approach to modernization, minimizing disruption and risk.
As we have seen through the hands-on guidance in this article, OpenShift virtualization empowers developers and operators to efficiently deploy, manage, and scale virtual machines alongside containers. By embracing this technology, organizations can unlock the full potential of their infrastructure, enabling them to deliver applications faster, more reliably, and with greater agility.
In conclusion, OpenShift virtualization represents a significant step forward in the journey towards a truly unified and flexible application platform. As the lines between containers and virtual machines continue to blur, OpenShift is well-positioned to help organizations navigate this new landscape and achieve their digital transformation goals.