<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ganesh Viswanathan</title>
    <description>The latest articles on DEV Community by Ganesh Viswanathan (@gansvv).</description>
    <link>https://dev.to/gansvv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3684846%2F5a5c796c-4750-4d0c-846d-eb55d8940374.jpeg</url>
      <title>DEV Community: Ganesh Viswanathan</title>
      <link>https://dev.to/gansvv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gansvv"/>
    <language>en</language>
    <item>
      <title>Compute Containers</title>
      <dc:creator>Ganesh Viswanathan</dc:creator>
      <pubDate>Fri, 02 Jan 2026 19:20:11 +0000</pubDate>
      <link>https://dev.to/gansvv/compute-containers-4ika</link>
      <guid>https://dev.to/gansvv/compute-containers-4ika</guid>
      <description>&lt;p&gt;&lt;strong&gt;Kata Containers&lt;/strong&gt; is a secure container runtime that runs each container inside a lightweight virtual machine, combining the isolation of VMs with the speed and developer experience of containers. It enables both speed and isolation for running containerized workloads. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; strengthen multi-tenant isolation beyond what namespaces/cgroups alone can safely provide, especially against kernel-level escapes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt; replace the usual low-level runtime (e.g., runc) with a runtime that boots a micro-VM per sandbox and runs the container inside it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; similar workflow (docker, containerd, Kubernetes), but each workload gets its own guest kernel and userspace boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it started&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The project started on Dec 15, 2017, as a merger of two secure-container efforts: Intel’s Clear Containers and Hyper.sh’s runV, both of which used hardware virtualization to isolate containers inside lightweight virtual machines.&lt;/p&gt;

&lt;p&gt;It is part of the OpenInfra (formerly OpenStack) Foundation, positioned as a community-driven way to blend VM-grade isolation with container workflows.&lt;/p&gt;

&lt;p&gt;Default runtimes such as runc already offer strong performance and a smooth developer experience. However, in multi-tenant or untrusted workload scenarios, a simple container boundary may not provide adequate isolation since all containers share the same host kernel. Kata Containers introduces an additional layer of defense by leveraging hardware virtualization to isolate each workload. This design makes it significantly more difficult for a compromised container to escape or impact other containers and the host system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before Kata: software-only isolation between containers&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5rtw4y1tkno740dae16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5rtw4y1tkno740dae16.png" alt="Containers in Cloud" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Kata: hardware (hypervisor-driven) isolation between containers&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4uacth89wz39eg27yh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4uacth89wz39eg27yh5.png" alt="Hypervisor Based Containers" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a high level, a Kata deployment introduces:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hypervisor layer:&lt;/strong&gt; QEMU, Cloud Hypervisor, or Firecracker, using KVM to create a minimal VM per pod or container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7a0cov4kkm9r53aygnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7a0cov4kkm9r53aygnu.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk8y94w2nx5im3cqj111.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk8y94w2nx5im3cqj111.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guest assets:&lt;/strong&gt; a trimmed guest kernel and rootfs optimized for fast boot and small memory footprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kata agent:&lt;/strong&gt; runs inside the guest; receives commands from the host runtime to create namespaces, mounts, and processes for the container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runtime integration:&lt;/strong&gt; an OCI-compatible runtime that containerd/Docker/Kubelet can call instead of runc.&lt;/p&gt;
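
&lt;p&gt;As a sketch of what that integration looks like in practice, registering Kata as an additional CRI runtime in containerd could look roughly like this (the plugin section path varies across containerd versions, and the file location is the conventional default):&lt;/p&gt;

```toml
# /etc/containerd/config.toml (sketch)
# Registers a "kata" runtime handler alongside the default runc handler;
# pods can then select it via a Kubernetes RuntimeClass.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
```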

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8y4wsijg12ty655phbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8y4wsijg12ty655phbn.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Faster container creation: container rootfs/image preparation and pod sandbox (VM) creation proceed in parallel; hotplug then combines them, attaching the prepared rootfs to the sandbox to start the pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwt38zrtm9sva7ey68cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwt38zrtm9sva7ey68cx.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keeping it as small as a container: real memory does not need to be allocated up front; an nvdimm device maps the guest image as virtual memory. KSM (kernel same-page merging) further deduplicates memory across sandboxes, giving much higher density than traditional VMs.&lt;/p&gt;

&lt;p&gt;Networking: a MacVTAP device bridges the pod’s veth pair to the VM’s TAP device. Additionally, tc rules can be applied to perform traffic transforms.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y5uepjcv9wj5nkxm263.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y5uepjcv9wj5nkxm263.png" alt=" " width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71ghcvtyy43jebj0hx0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71ghcvtyy43jebj0hx0m.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Storage: volumes are supported either from a block device or via 9pfs. With virtio-blk, block-device-backed storage on the host gets performance much closer to bare-metal/native than the non-block-device path. See &lt;a href="https://dev.to/gansvv/storage-for-kata-containers-9pfs-vs-virtio-blk-f6n"&gt;https://dev.to/gansvv/storage-for-kata-containers-9pfs-vs-virtio-blk-f6n&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbit5c006wldix3s4r37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbit5c006wldix3s4r37.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CPU/Memory multi-tenancy:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvmlx3kllwzkqpz8hrvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvmlx3kllwzkqpz8hrvt.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking support for K8s multi-tenancy:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft908rqpskb7zkp62lszc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft908rqpskb7zkp62lszc.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the workload’s perspective, it is still a container; from the host’s perspective, it is a VM with a single-tenant container payload.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Firecracker doesn’t support device emulation, so GPUs don’t work with Firecracker.&lt;/p&gt;

&lt;p&gt;The outer runtime handles the lifecycle of the VM. The inner runtime is OCI-compliant and adheres to CNI/CSI/CRI, enabling Kubernetes networking and storage functionality.&lt;/p&gt;

&lt;p&gt;This fortifies applications running on the cluster.&lt;/p&gt;

&lt;p&gt;Kata has already been validated with GPUs on bare metal.&lt;/p&gt;

&lt;p&gt;Running a confidential environment requires trustworthy components: the kernel, guest image, and memory must be in a specific, known state. The workload owner can then review the attestation report and decide on actions, e.g., releasing secrets into that environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RATS (Remote ATtestation procedureS) reference architecture and attestation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A runtime class on the pod spec selects the platform: kata-qemu-gpu-snp for AMD SEV-SNP systems, a TDX class for Intel TDX systems, and a CCA class for Arm systems.&lt;/p&gt;
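
&lt;p&gt;A minimal sketch of how such a runtime class is defined and selected (the class/handler name follows the kata-qemu-gpu-snp example above; the pod name and image are hypothetical):&lt;/p&gt;

```yaml
# RuntimeClass mapping a name to the Kata handler configured on the node
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-gpu-snp
handler: kata-qemu-gpu-snp
---
# Pod opting into the confidential GPU runtime via runtimeClassName
apiVersion: v1
kind: Pod
metadata:
  name: confidential-gpu-pod   # hypothetical
spec:
  runtimeClassName: kata-qemu-gpu-snp
  containers:
  - name: workload
    image: registry.example.com/workload:latest   # placeholder
```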

&lt;p&gt;Confidential containers: container images are encrypted/signed (the host no longer sees the images), with a confidential VM for every pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminology&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GDR, GDS:&lt;/strong&gt;
Meaning: GPUDirect RDMA (GDR, Remote Direct Memory Access) and GPUDirect Storage (GDS) are NVIDIA technologies for high-performance data transfer. They enable direct paths between GPUs and network/storage devices, bypassing the CPU to slash latency and boost bandwidth, which is crucial for AI/HPC. GPUDirect RDMA connects GPUs to network interfaces (NICs) or storage adapters, while GDS links GPUs directly to local/remote storage (like NVMe-oF); both leverage PCIe for speed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold plug vs Hot plug VFIO:&lt;/strong&gt;
Context: VFIO devices are hot-plugged on a bridge by default. In a confidential compute environment, hot-plugging can compromise security. Kata supports cold-plugging of VFIO devices to a bridge-port, root-port, or switch-port.
Meaning: In VFIO (Virtual Function I/O), "cold plug" means attaching a device (like a GPU) to a VM before the VM starts, while "hot plug" means attaching it while the VM is running. Cold plug is the standard, reliable passthrough path; hot plug is a more advanced, dynamic (but potentially complex) method for live device addition/removal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Good References&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kata launch: &lt;a href="https://youtu.be/VupUc88FV9Q?si=KuL4fI7FEMkpOHbx" rel="noopener noreferrer"&gt;https://youtu.be/VupUc88FV9Q?si=KuL4fI7FEMkpOHbx&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Nvidia - Kata Containers: &lt;a href="https://youtu.be/a3HzBmPuw5g?si=xM96k3Ji4PoBaFfX" rel="noopener noreferrer"&gt;https://youtu.be/a3HzBmPuw5g?si=xM96k3Ji4PoBaFfX&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;PodSandbox and container lifecycle mgmt: &lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/#:%7E:text=Before%20starting%20a%20pod%2C%20kubelet%20calls%20RuntimeService.RunPodSandbox%20to%20create%20the%20environment" rel="noopener noreferrer"&gt;https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/#:~:text=Before%20starting%20a%20pod%2C%20kubelet%20calls%20RuntimeService.RunPodSandbox%20to%20create%20the%20environment&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/odBdXHU8W9U?si=S7OK98RWs-griBGh" rel="noopener noreferrer"&gt;https://youtu.be/odBdXHU8W9U?si=S7OK98RWs-griBGh&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kata</category>
      <category>containers</category>
    </item>
    <item>
<title>Funny: AI mixing computing terminology with mechanical engineering in one sentence</title>
      <dc:creator>Ganesh Viswanathan</dc:creator>
      <pubDate>Mon, 29 Dec 2025 20:24:21 +0000</pubDate>
      <link>https://dev.to/gansvv/funny-ai-mixing-computing-terminology-with-mechnical-engineering-in-one-sentence-28d3</link>
      <guid>https://dev.to/gansvv/funny-ai-mixing-computing-terminology-with-mechnical-engineering-in-one-sentence-28d3</guid>
      <description>&lt;p&gt;On 29 Dec 2025, Google search term:&lt;br&gt;
cold plug vs hot plug VFIO&lt;/p&gt;

&lt;p&gt;Yielded:&lt;br&gt;
In VFIO (Virtual Function I/O), "cold plug" means attaching a device (like a GPU) to a VM before the VM starts, while "hot plug" means attaching it while the VM is running, with "cold" being the standard, reliable way (passthrough) and "hot" being a more advanced, dynamic (but potentially complex) method for live device addition/removal, &lt;em&gt;requiring specific hardware/software support to avoid system instability or data loss, unlike simple spark plug heat ranges which are about engine temperature management.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcy1derfowuj9hcaz30c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcy1derfowuj9hcaz30c.png" alt=" " width="800" height="738"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>funny</category>
      <category>ai</category>
      <category>todayisearched</category>
    </item>
    <item>
      <title>Storage for Kata Containers - 9pfs vs virtio-blk</title>
      <dc:creator>Ganesh Viswanathan</dc:creator>
      <pubDate>Mon, 29 Dec 2025 18:34:34 +0000</pubDate>
      <link>https://dev.to/gansvv/storage-for-kata-containers-9pfs-vs-virtio-blk-f6n</link>
      <guid>https://dev.to/gansvv/storage-for-kata-containers-9pfs-vs-virtio-blk-f6n</guid>
      <description>&lt;p&gt;9pfs (Plan 9 File System via virtio-9p) and virtio-blk are two different methods for providing storage to virtual machines, differing primarily in their level of abstraction (file-level vs. block-level).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk29aa3hmizzkagli2pbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk29aa3hmizzkagli2pbb.png" alt="Comparison table for 9pfs vs virtio-blk" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;9pfs (virtio-9p)&lt;/h2&gt;

&lt;p&gt;9pfs exposes a specific directory on the host machine directly to the guest. It is often used for "shared folders" where the host and guest need simultaneous access to the same files.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elasticity: Storage is only consumed by actual files on the host; no need to pre-allocate a large image file.&lt;/li&gt;
&lt;li&gt;Ease of Use: Useful for development, where you want to edit code on the host and run it immediately in the guest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance: Generally slower than block devices because every file operation must go through the 9P protocol and host system calls.&lt;/li&gt;
&lt;li&gt;Compatibility: While Linux support is excellent, Windows support is limited and often requires third-party drivers.&lt;/li&gt;
&lt;li&gt;Modern Alternative: virtio-fs is the successor to 9pfs in many modern QEMU setups, offering significantly better performance and POSIX compliance. &lt;/li&gt;
&lt;/ul&gt;
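
&lt;p&gt;To make the mechanics concrete, a minimal QEMU sketch of a 9pfs share (the host path, mount tag, and guest mount point are examples; "..." stands for the rest of the VM configuration):&lt;/p&gt;

```shell
# Host: export /srv/share to the guest over virtio-9p
qemu-system-x86_64 ... \
  -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr

# Guest: mount the share by its tag
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/share
```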

&lt;h2&gt;virtio-blk&lt;/h2&gt;

&lt;p&gt;This presents a virtual raw block device (a "hard drive") to the guest. The guest OS treats it like a physical disk and manages its own filesystem (e.g., ext4, XFS) on top of it.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High Performance: Optimized for low latency and high throughput. Features like IOThread Virtqueue Mapping (introduced in QEMU 9.0) allow it to scale across multiple vCPUs efficiently.&lt;/li&gt;
&lt;li&gt;Full Feature Support: Supports TRIM/Discard (to reclaim space), bootable partitions, and standard disk management tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolation: The host cannot easily "see" or edit files inside the block device while the guest is running without risking corruption.&lt;/li&gt;
&lt;li&gt;Fixed Size: Typically requires pre-allocation or using "sparse" image formats (like QCOW2) which can have their own performance trade-offs. &lt;/li&gt;
&lt;/ul&gt;
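
&lt;p&gt;A corresponding QEMU sketch for virtio-blk (the image path and size are examples; discard=unmap passes TRIM through so the sparse image can reclaim space):&lt;/p&gt;

```shell
# Host: create a sparse qcow2 image and attach it as a virtio-blk disk
qemu-img create -f qcow2 /var/lib/vms/data.qcow2 20G
qemu-system-x86_64 ... \
  -drive file=/var/lib/vms/data.qcow2,if=virtio,format=qcow2,discard=unmap

# Guest: the disk appears as /dev/vda; format and mount it
mkfs.ext4 /dev/vda
mount /dev/vda /mnt/data
```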

&lt;h2&gt;When to use which&lt;/h2&gt;

&lt;p&gt;Use 9pfs when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to share a host directory into a guest for development, simple file exchange, or testing.&lt;/li&gt;
&lt;li&gt;You value ease of sharing over raw performance and strict isolation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use virtio-blk when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are provisioning a root disk or data disk for a VM.&lt;/li&gt;
&lt;li&gt;You care about performance, isolation, and standard disk semantics (snapshots, independent filesystems per VM).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>storage</category>
      <category>kata</category>
      <category>containers</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
