<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kiki Fachry</title>
    <description>The latest articles on DEV Community by Kiki Fachry (@kikifachry).</description>
    <link>https://dev.to/kikifachry</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2531458%2F1a7005e2-5624-4672-9c79-59d24666ca53.jpg</url>
      <title>DEV Community: Kiki Fachry</title>
      <link>https://dev.to/kikifachry</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kikifachry"/>
    <language>en</language>
    <item>
      <title>Using Kata Containers as a Container Runtime in OpenStack Zun</title>
      <dc:creator>Kiki Fachry</dc:creator>
      <pubDate>Sun, 06 Apr 2025 16:32:50 +0000</pubDate>
      <link>https://dev.to/kikifachry/using-kata-containers-as-a-container-runtime-in-openstack-zun-41l6</link>
      <guid>https://dev.to/kikifachry/using-kata-containers-as-a-container-runtime-in-openstack-zun-41l6</guid>
      <description>&lt;p&gt;As container adoption grows in cloud infrastructure, OpenStack has introduced Zun, a project designed to manage application containers natively within the OpenStack ecosystem. By default, Zun leverages container runtimes like runc, but for users seeking stronger isolation and enhanced security, integrating Kata Containers offers a compelling upgrade. With Kata, containers launched via Zun gain the security advantages of lightweight virtual machines—each with its own kernel—without giving up the flexibility and speed that make containers so attractive. In this post, we'll explore how Kata Containers can be used with Zun to provide a secure and efficient container experience within OpenStack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn2bd8d6tke0xnu5o2j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn2bd8d6tke0xnu5o2j3.png" alt="Zun Logo" width="403" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Topology
&lt;/h2&gt;

&lt;p&gt;In this case, we will deploy OpenStack using Kolla-Ansible in all-in-one mode and set Kata Containers as the container runtime for Zun. Here is the topology:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4obn3ktl2eoxc949ovs5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4obn3ktl2eoxc949ovs5.png" alt="Topology" width="604" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the topology explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eno1&lt;/code&gt; and &lt;code&gt;eno2&lt;/code&gt; will be configured as a bonded interface (802.3ad) named &lt;code&gt;bond0&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;On &lt;code&gt;bond0&lt;/code&gt; we will create a VLAN interface with ID 100 ( &lt;code&gt;bond0.100&lt;/code&gt; ) for management and for accessing the OpenStack services. &lt;strong&gt;This adapter has an IP address.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bond0&lt;/code&gt; will be configured as the external network adapter. We will use VLAN as the external network type in &lt;code&gt;ml2_conf.ini&lt;/code&gt;. &lt;strong&gt;This adapter doesn't have any IP address.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;OpenStack will be deployed using Kolla-Ansible with Docker as the container engine.&lt;/li&gt;
&lt;li&gt;Docker and containerd will need additional configuration to add &lt;code&gt;kata&lt;/code&gt; as a runtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Let's break down the prerequisites before starting the deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU with virtualization support&lt;/li&gt;
&lt;li&gt;64-bit Linux host (nested virtualization must be enabled if using a VM) with multiple network adapters. In this case, we will use Ubuntu 24.04 and the network adapters explained in the topology section&lt;/li&gt;
&lt;li&gt;Internet access&lt;/li&gt;
&lt;li&gt;Sudo user&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Disable any swap
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;swapoff -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to delete the swap entry in &lt;code&gt;/etc/fstab&lt;/code&gt; to make sure swap is not re-activated at boot.&lt;/p&gt;
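As a sketch of that edit (assuming a standard one-line swap entry; the temp path is illustrative, and you should review the result before rebooting), sed can comment it out on a working copy first:

```shell
# Work on a copy first: comment out every active swap entry, then inspect
cp /etc/fstab /tmp/fstab.new
sed -i '/^[^#].*\bswap\b/ s/^/#/' /tmp/fstab.new
diff /etc/fstab /tmp/fstab.new || true   # review what would change
# When satisfied: sudo cp /tmp/fstab.new /etc/fstab
```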

&lt;ul&gt;
&lt;li&gt;Enable &lt;code&gt;br_netfilter&lt;/code&gt; module&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Load the &lt;code&gt;br_netfilter&lt;/code&gt; kernel module&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;modprobe br_netfiter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new file under &lt;code&gt;/etc/modules-load.d/&lt;/code&gt; and add &lt;code&gt;br_netfilter&lt;/code&gt; to it to make sure the module is automatically loaded at boot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo 'br_netfilter' &amp;gt; /etc/modules-load.d/must-loaded.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Kata Containers Installation
&lt;/h3&gt;

&lt;p&gt;We will start by installing Kata Containers. In this case, we will install Kata Containers together with Docker, so we will execute the &lt;code&gt;kata-manager.sh&lt;/code&gt; script with the &lt;code&gt;-D&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you can install &lt;strong&gt;only&lt;/strong&gt; Kata Containers and install Docker separately by using the &lt;code&gt;-o&lt;/code&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -o
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also change the default hypervisor for Kata Containers from QEMU to another one, such as Firecracker or Cloud Hypervisor, with the &lt;code&gt;-S &amp;lt;hypervisor&amp;gt;&lt;/code&gt; option. For example, to use Cloud Hypervisor as the default hypervisor for Kata Containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -S clh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can follow this &lt;a href="https://dev.to/kikifachry/deploy-kata-containers-in-ubuntu-2404-17le"&gt;post&lt;/a&gt; or the official Kata Containers documentation &lt;a href="https://github.com/kata-containers/kata-containers/blob/main/utils/README.md" rel="noopener noreferrer"&gt;here&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Docker Installation (Optional)
&lt;/h3&gt;

&lt;p&gt;If you installed Kata Containers together with Docker using &lt;code&gt;kata-manager.sh&lt;/code&gt;, you can skip this step. Follow this &lt;a href="https://docs.vultr.com/how-to-install-docker-on-ubuntu-24-04" rel="noopener noreferrer"&gt;guide&lt;/a&gt; if you installed only Kata Containers without Docker in step 1.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Kolla-Ansible Preparation
&lt;/h3&gt;

&lt;p&gt;Deploying OpenStack with Kolla-Ansible is quite simple. In this case, we will use OpenStack Dalmatian (2024.2).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Python build dependencies
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install git python3-dev libffi-dev gcc libssl-dev libdbus-glib-1-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create python virtual env&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create python virtual env for Kolla&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m venv /path/to/venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Activate the virtual env
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source /path/to/venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upgrade pip
Make sure we are using the latest version of pip
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -U pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install Ansible
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install 'ansible-core&amp;gt;=2.17,&amp;lt;2.17.99'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install Kolla-Ansible &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install Kolla-Ansible and its dependencies using pip&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install git+https://opendev.org/openstack/kolla-ansible@stable/2024.2 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create Kolla directory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a directory for the Kolla configuration and make sure it is writable by your user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy preparation file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp -r /path/to/venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy inventory file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp /path/to/venv/share/kolla-ansible/ansible/inventory/all-in-one .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install Kolla dependencies
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kolla-ansible install-deps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Generate passwords
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kolla-genpwd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Edit globals.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Edit the globals.yml file and make sure Zun is enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;enable_zun: "yes"
enable_kuryr: "yes"
enable_etcd: "yes"
docker_configure_for_zun: "yes"
containerd_configure_for_zun: "yes"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also enable other OpenStack services based on your needs.&lt;/p&gt;
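Given the topology above, the interface-related settings in the same globals.yml would look something like this (the VIP address below is only a placeholder; pick a free IP on the management VLAN):

```yaml
kolla_internal_vip_address: "192.168.100.250"   # placeholder VIP on the management VLAN
network_interface: "bond0.100"                  # management / OpenStack API traffic
neutron_external_interface: "bond0"             # external VLAN trunk, no IP address
```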

&lt;ul&gt;
&lt;li&gt;Bootstrap server
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kolla-ansible bootstrap-servers -i all-in-one 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Add Kata Runtime
&lt;/h3&gt;

&lt;p&gt;After bootstrapping the server, we need some configuration on the Docker and containerd side before deploying OpenStack. Change &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; as shown below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "bridge": "none",
    "ip-forward": false,
    "iptables": false,
    "log-opts": {
        "max-file": "5",
        "max-size": "50m"
    },
    "runtimes": {
        "kata": {
            "runtimeType": "io.containerd.kata.v2",
            "options": {}
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This registers the &lt;code&gt;kata&lt;/code&gt; runtime in the Docker configuration. After that, dump the full containerd configuration into &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containerd config dump | tee /etc/containerd/config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt; to make some changes. In the &lt;code&gt;[grpc]&lt;/code&gt; section, set the &lt;code&gt;gid&lt;/code&gt; option (42463 corresponds to the &lt;code&gt;zun&lt;/code&gt; group created by Kolla-Ansible; verify the value on your host with &lt;code&gt;getent group zun&lt;/code&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
[grpc]
gid = 42463
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the configuration. Now, restart the containerd and Docker services&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart containerd docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
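Optionally, confirm that Docker picked up the new runtime (a quick sanity check, assuming the Docker CLI is available on the host):

```shell
# List the runtimes Docker knows about; "kata" should appear alongside runc
docker info --format '{{json .Runtimes}}'
```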



&lt;h3&gt;
  
  
  5. Deploy OpenStack
&lt;/h3&gt;

&lt;p&gt;After everything is completed, run prechecks before deploying OpenStack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kolla-ansible prechecks -i all-in-one
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If no errors are shown, we can deploy OpenStack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kolla-ansible deploy -i all-in-one
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until OpenStack is successfully deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Launch a Container
&lt;/h3&gt;

&lt;p&gt;Access the OpenStack Horizon dashboard and create a network, subnet, SSH keypair, and security group. We need all of these components to create a container. Go to the Container menu to begin creating a container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqk7z5x5ha3vj5bl5l2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqk7z5x5ha3vj5bl5l2z.png" alt="Container Menu" width="250" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;code&gt;Create Container&lt;/code&gt;. Then, input the information about the container. For example, we will create an nginx container as shown in the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd20ue9db03nkc2aauhxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd20ue9db03nkc2aauhxz.png" alt="Info" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, input the container specification. Don't forget to set &lt;code&gt;kata&lt;/code&gt; as the runtime, as shown in the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj85ikvlyk8qxd3fot3eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj85ikvlyk8qxd3fot3eg.png" alt="Spec Container" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in the other requirements such as the network, a volume if you need persistent storage, and any other options. Choose Create and wait until the container is created, as shown in the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxlx0c0gou8b766xk2sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxlx0c0gou8b766xk2sy.png" alt="Successfull Created Container" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Integrating Kata Containers as a runtime for OpenStack Zun adds a valuable layer of security and workload isolation to containerized environments. By leveraging lightweight virtual machines, Kata provides strong boundaries between workloads—making it ideal for multi-tenant or untrusted scenarios often found in cloud platforms. This setup allows OpenStack users to benefit from the flexibility of containers without compromising on isolation, all while maintaining compatibility with existing OpenStack services. As container technologies continue to evolve, combining Zun and Kata offers a future-proof, security-conscious approach to running containers at scale within OpenStack. &lt;/p&gt;

</description>
      <category>openstack</category>
      <category>katacontainers</category>
      <category>openinfra</category>
      <category>zun</category>
    </item>
    <item>
      <title>Deploy Kata Containers in Ubuntu 24.04</title>
      <dc:creator>Kiki Fachry</dc:creator>
      <pubDate>Sun, 06 Apr 2025 10:55:15 +0000</pubDate>
      <link>https://dev.to/kikifachry/deploy-kata-containers-in-ubuntu-2404-17le</link>
      <guid>https://dev.to/kikifachry/deploy-kata-containers-in-ubuntu-2404-17le</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/kikifachry/kata-containers-lightweight-vms-for-containers-4ine"&gt;previous&lt;/a&gt; post, we’ve explored the core differences between traditional containers and Kata Containers. In this post we will start to install kata containers. Installing Kata Containers isn’t overly complex—and once set up. It can integrate seamlessly with container runtimes like containerd or CRI-O, and even work inside Kubernetes clusters. We will post it later but in this section, I’ll walk you through how to install Kata Containers on an Ubuntu system, step by step, so you can try it out yourself and see the isolation in action.&lt;/p&gt;

&lt;p&gt;Kata Containers can be installed in three ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Official distro packages&lt;/li&gt;
&lt;li&gt;Automatic&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;kata-deploy&lt;/code&gt; (for Kubernetes clusters)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recommended way to install Kata Containers is via official distro packages. Unfortunately, Kata Containers doesn't provide Debian-based packages, so we will use the automatic method instead. With this method, we will use the &lt;code&gt;kata-manager&lt;/code&gt; script to automate the installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, we need an Ubuntu 24.04 bare-metal host or a VM with nested virtualization. Kata Containers rely on hardware virtualization to provide the strong isolation that sets them apart from traditional containers. This means each Kata container runs inside its own lightweight virtual machine. So, if you're running Kata Containers inside a virtual machine (like on a public cloud or a development VM), you'll need to enable nested virtualization—a feature that allows a VM to create and manage other VMs. Without it, the underlying hypervisor (like QEMU or Cloud Hypervisor) used by Kata won't be able to launch the isolated guest kernel, and the container runtime will fail to start Kata-based workloads. For example, if you use VMware vSphere you can enable Hardware Assisted Virtualization and IOMMU like this picture below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6qw5f8711sxq6l4i7rs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6qw5f8711sxq6l4i7rs.png" alt="Edit VM Setting" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Login to your Ubuntu host with SSH&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the package repositories and upgrade the Ubuntu system&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Reboot the host. Wait until the host is powered on again and we can SSH back in&lt;/li&gt;
&lt;li&gt;Download the kata-manager script
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://raw.githubusercontent.com/kata-containers/kata-containers/main/utils/kata-manager.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Make the script executable
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x kata-manager.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Execute the script to install Kata Containers with Containerd.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add &lt;code&gt;-h&lt;/code&gt; to see command help and customization options. For example, to install only Kata Containers we need to add the &lt;code&gt;-o&lt;/code&gt; option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -o
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, if we want to install with Docker we need to add the &lt;code&gt;-D&lt;/code&gt; option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -D
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The installation will begin. After the installation completes, check the installed components
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kata-manager.sh -l
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure Kata Containers and containerd were installed&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO: Getting version details
INFO: Kata Containers: installed version: Kata Containers containerd shim (Golang): id: "io.containerd.kata.v2", version: 3.15.0, commit: c0632f847fe706090d64951ba6b68865a416bdb4
INFO: Kata Containers: latest version: 3.15.0

INFO: containerd: installed version: containerd github.com/containerd/containerd v1.7.27 05044ec0a9a75232cad458027ca83437aae3f4da
INFO: containerd: latest version: v2.0.4

INFO: Docker (moby): installed version: &amp;lt;not installed&amp;gt;
INFO: Docker (moby): latest version: v28.0.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;After the installation is completed, we will test Kata Containers by deploying a container and verifying that the isolation works.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;uname -a&lt;/code&gt; to check the host OS kernel version
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@dev-master:~# uname -a
Linux dev-master 6.8.0-57-generic #59-Ubuntu SMP PREEMPT_DYNAMIC Sat Mar 15 17:40:59 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
root@dev-master:~# 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the host OS kernel version is 6.8.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull a container image to deploy. For example, we will use the Rocky Linux Docker image
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ctr image pull docker.io/rockylinux/rockylinux:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the Rocky Linux image with the default runtime, and run &lt;code&gt;uname -a&lt;/code&gt; to see which kernel is used
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@dev-master:~# ctr run --rm docker.io/rockylinux/rockylinux:latest rocky-defaut uname -a
Linux dev-master 6.8.0-57-generic #59-Ubuntu SMP PREEMPT_DYNAMIC Sat Mar 15 17:40:59 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
root@dev-master:~# 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From this test, we can see that the &lt;code&gt;rocky-default&lt;/code&gt; container uses the same OS kernel as the host, just like the traditional container model. Now, run the Rocky Linux image with the Kata runtime&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@dev-master:~# ctr run --runtime io.containerd.kata.v2 --rm docker.io/rockylinux/rockylinux:latest rocky-kata uname -a
Linux localhost 6.12.13 #1 SMP Thu Mar 13 11:34:50 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
root@dev-master:~# 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;When a container is run using the default runtime, it shares the host's kernel. This means if you check the kernel version from inside the container (using &lt;code&gt;uname -a&lt;/code&gt;), you'll see the same version as your host operating system. In contrast, when you run a container using Kata Containers, the process runs inside a lightweight virtual machine with its own dedicated kernel. Running &lt;code&gt;uname -a&lt;/code&gt; will return a different kernel version, typically the one shipped by Kata itself. This is a simple but powerful way to confirm that Kata is using hardware virtualization to isolate your container workloads—each one gets its own kernel, separate from the host.&lt;/p&gt;

</description>
      <category>katacontainers</category>
      <category>containers</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>Kata Containers: Lightweight VMs for Containers</title>
      <dc:creator>Kiki Fachry</dc:creator>
      <pubDate>Sun, 06 Apr 2025 07:54:26 +0000</pubDate>
      <link>https://dev.to/kikifachry/kata-containers-lightweight-vms-for-containers-4ine</link>
      <guid>https://dev.to/kikifachry/kata-containers-lightweight-vms-for-containers-4ine</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s cloud-native world, containers have become the standard unit of software delivery. They allow developers to package applications along with their dependencies into lightweight, portable units that can run reliably across different environments. This has revolutionized the way we build, ship, and run applications—from developer laptops to large-scale Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;But while containers are efficient and fast, they come with a trade-off: &lt;strong&gt;security&lt;/strong&gt;. Traditional containers share the host operating system’s kernel. That means if a container is compromised, there’s a potential risk to the entire system. For many teams running multi-tenant clusters or handling untrusted workloads, that risk isn’t acceptable.&lt;/p&gt;

&lt;p&gt;This is where Kata Containers come in—a project designed to bridge the gap between the speed of containers and the strong isolation of virtual machines. Kata Containers look and feel like containers, but under the hood, they run inside lightweight VMs using a separate kernel. This offers a level of isolation closer to traditional virtualization, without the overhead that comes with full-blown hypervisors. Whether you're managing sensitive data, running sandboxed workloads, or building a secure Kubernetes platform, Kata Containers can offer a powerful middle ground. &lt;/p&gt;

&lt;p&gt;To better understand what makes Kata Containers different, let’s take a look at a side-by-side comparison with traditional containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmrl8asfq8yj4yybnt4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmrl8asfq8yj4yybnt4w.png" alt="Container Differences" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;source : &lt;a href="https://yqintl.alicdn.com/fb67232d2a5e2e8afbc7968c9227f7dd4121bbaf.png" rel="noopener noreferrer"&gt;https://yqintl.alicdn.com/fb67232d2a5e2e8afbc7968c9227f7dd4121bbaf.png&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;On the right side of the diagram, we see how traditional containers work. Each containerized process (A, B, C) shares the same Linux kernel. Isolation is achieved using namespaces, cgroups, and additional filters like seccomp, MAC (Mandatory Access Control), and capabilities. While this method is lightweight and fast, it still means that all containers rely on the host’s kernel. If one container exploits a kernel vulnerability, it could potentially affect others or even the host.&lt;/p&gt;

&lt;p&gt;On the left side, Kata Containers take a different approach. Each containerized process runs inside its own lightweight virtual machine, with a dedicated kernel (e.g., Linux Kernel A, B, C). These VMs are powered by hardware virtualization, which provides strong, hardware-enforced isolation between workloads. From the perspective of the process inside, it’s still running in a container—but behind the scenes, it’s isolated as if it were a small, independent VM.&lt;/p&gt;

&lt;p&gt;In practice, this means Kata Containers offer much stronger security boundaries. If a container running inside a Kata VM is compromised, the attack surface is significantly reduced—it would have to break through an entire virtualized layer rather than just a namespace.&lt;/p&gt;

&lt;p&gt;The tradeoff? Slightly more overhead compared to traditional containers. But in environments where security and workload isolation are top priorities—like multi-tenant platforms, confidential workloads, or untrusted code—Kata Containers strike a compelling balance between performance and protection.&lt;/p&gt;

&lt;p&gt;Kata Containers may not be the default choice for every workload, but when security and strong isolation matter just as much as performance, they offer a powerful and elegant solution that bridges the best of both containers and virtual machines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kata-containers/kata-containers/tree/main/docs/install" rel="noopener noreferrer"&gt;https://github.com/kata-containers/kata-containers/tree/main/docs/install&lt;/a&gt;&lt;/p&gt;

</description>
      <category>containers</category>
      <category>katacontainers</category>
    </item>
  </channel>
</rss>
