David Tio

Posted on • Originally published at blog.dtio.app

KVM on Podman Networking

Quick one-liner: Two networking modes for KVM in Podman — container-native (private) and bridge (full network). Each serves different use cases.


🤔 Why This Matters

Your VMs need to talk to something — containers, the host, or the outside world. But networking isn't one-size-fits-all.

Container-native is simple, private, and requires no host setup. Your VM lives inside the container's network namespace.

Bridge networking gives you full VM behavior — visible to host, other VMs, and the network. But it needs bridge, dnsmasq, and TAP devices on the host.

This post shows both approaches.


🐳 Section 1: Container-Native Networking

The VM lives inside the container's network namespace. It's private — only visible to containers on your Podman network.

When to Use This

  • Databases that shouldn't be exposed
  • Testing environments
  • Quick experiments without host setup

The Architecture

Container-Native Architecture

Prerequisites

  • qemu:base image from Post #1
  • ~/vm directory with cloud images

Step 1: Create a Podman Network

```bash
podman network create mynet
```
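If you want to confirm the subnet Podman picked before wiring anything to it, an optional check (the `Subnets` format field follows Podman 4.x output and may differ on older versions):

```bash
podman network inspect mynet --format '{{range .Subnets}}{{.Subnet}}{{end}}'
```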

Step 2: Run the VM with Slirp

Run the VM with slirp networking and port forwarding:

```bash
podman run --rm -it \
    --name qemu-container \
    --network=mynet \
    -p 2222:22 \
    -p 6379:6379 \
    --device /dev/kvm \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-system-x86_64 \
        -enable-kvm -cpu host \
        -m 1024 \
        -drive file=/vm/noble-server-cloudimg-amd64.img,format=qcow2,if=virtio \
        -netdev user,id=net0,hostfwd=tcp::22-:22,hostfwd=tcp::6379-:6379 \
        -device virtio-net-pci,netdev=net0 \
        -nographic
```

The key flags:

  • --network=mynet — attach the container to the Podman network (typically 10.89.0.0/24)
  • -p 2222:22 -p 6379:6379 — publish container ports 22 and 6379 to the host
  • hostfwd=tcp::22-:22 — forward container port 22 (all interfaces) into the VM's port 22
  • hostfwd=tcp::6379-:6379 — forward container port 6379 (all interfaces) into the VM's port 6379
  • -netdev user — slirp user-mode networking

The VM boots and gets an IP from slirp's internal DHCP (usually 10.0.2.15).
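Once it's up, you can sanity-check the slirp network from inside the VM console. The addresses in the comments are QEMU's standard slirp defaults; the interface name depends on the machine type:

```bash
# Inside the VM:
ip -4 addr show    # guest address, usually 10.0.2.15/24
ip route           # default via 10.0.2.2, slirp's built-in gateway
```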

🐧 Step 3: Install Redis in the VM

Wait for boot and cloud-init (~20 seconds). Log in at the console and install Redis, binding it to all interfaces so it's reachable through the forwarded port:

```bash
sudo apt-get update
sudo apt-get install -y redis-server
sudo sed -i 's/bind 127.0.0.1/bind 0.0.0.0/' /etc/redis/redis.conf
sudo sed -i 's/protected-mode yes/protected-mode no/' /etc/redis/redis.conf
sudo systemctl enable --now redis-server
```
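Still inside the VM, it's worth confirming Redis is actually listening on all interfaces before moving on:

```bash
ss -tln | grep 6379    # should show 0.0.0.0:6379
redis-cli ping         # should answer PONG
```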

🧪 Step 4: Test — Host to VM

From your host, test the Redis connection:

```bash
redis-cli -h localhost -p 6379 ping
```

Or SSH to the VM:

```bash
ssh -p 2222 sysadmin@localhost
```

🧪 Step 5: Test — Container to VM

From a new terminal, test with an app-container:

```bash
podman run --rm -it \
    --network=mynet \
    docker.io/redis:latest \
    redis-cli -h qemu-container ping
```

Note: Podman DNS resolves qemu-container to its IP on mynet (e.g. 10.89.0.2) — no need to remember IPs.
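To see that resolution in action, a quick check using busybox's nslookup from the alpine image:

```bash
podman run --rm --network=mynet docker.io/library/alpine:latest \
    nslookup qemu-container
```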


🌉 Section 2: Bridge Networking

The VM acts like it's on a real network. It's visible to the host, other VMs, and the network.

When to Use This

  • VMs that need full network access
  • Hosting services accessible from the network
  • Multi-VM environments

The Architecture

Bridge Architecture

Prerequisites

  • qemu:base image from Post #1
  • ~/vm directory with cloud images
  • Root access on your host

Step 1: Host Setup

Create the bridge and TAPs on your host:

```bash
sudo ip link add kvmbr0 type bridge
sudo ip addr add 192.168.100.1/24 dev kvmbr0
sudo ip link set kvmbr0 up
```

Configure dnsmasq:

```bash
sudo tee /etc/dnsmasq.d/kvmbr0.conf << 'EOF'
interface=kvmbr0
bind-interfaces
dhcp-range=192.168.100.200,192.168.100.250,12h
dhcp-option=3,192.168.100.1
dhcp-option=6,8.8.8.8
EOF

sudo systemctl restart dnsmasq
```

Enable IP forwarding:

```bash
sudo tee /etc/sysctl.d/99-kvmbr0.conf << 'EOF'
net.ipv4.ip_forward=1
EOF

sudo sysctl --system
```
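One gap worth flagging: forwarding alone doesn't give the VMs internet access — on most hosts, outbound traffic from 192.168.100.0/24 also needs NAT. A sketch for plain iptables (adapt for nftables or firewalld setups):

```bash
# Masquerade traffic leaving the bridge subnet via any non-bridge interface
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -o kvmbr0 -j MASQUERADE
```

Remove the rule later with the same command, swapping `-A` for `-D`.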

Create TAP devices:

```bash
sudo ip tuntap add tap0 mode tap
sudo ip link set tap0 master kvmbr0
sudo ip link set tap0 up
```
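Each VM needs its own TAP device. For the multi-VM setups mentioned earlier, the same three commands can be looped to add tap1 and tap2 alongside tap0:

```bash
for i in 1 2; do
    sudo ip tuntap add "tap$i" mode tap
    sudo ip link set "tap$i" master kvmbr0
    sudo ip link set "tap$i" up
done
```

Pass the TAP name as run-vm.sh's fourth argument so each VM attaches to its own device.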

Step 2: Build the QEMU Image

```bash
mkdir -p ~/kvmnet
cd ~/kvmnet

cat > run-vm.sh << 'EOF'
#!/bin/bash
DISK=$1
CPUS=${2:-2}
MEMORY=${3:-1024}
TAP=${4:-tap0}

qemu-system-x86_64 \
    -enable-kvm -cpu host \
    -m "$MEMORY" -smp "$CPUS" -nographic \
    -drive file="$DISK",format=qcow2,if=virtio \
    -netdev tap,id=net0,ifname="$TAP",script=no,downscript=no \
    -device virtio-net-pci,netdev=net0
EOF

chmod +x run-vm.sh
```

Containerfile:

```dockerfile
FROM alpine:latest

RUN apk add --no-cache \
        qemu-system-x86_64 \
        qemu-img \
        iproute2 \
        bash

COPY run-vm.sh /run-vm.sh
RUN chmod +x /run-vm.sh

WORKDIR /vms
CMD ["/bin/bash"]
```

Build:

```bash
podman build -t qemu:base .
```

Step 3: Run a VM

```bash
podman run --rm -it \
    --name vm1 \
    --network=host \
    --device /dev/kvm \
    --device /dev/net/tun:/dev/net/tun \
    -v ~/vm:/vm:z \
    qemu:base \
    /run-vm.sh /vm/noble-server-cloudimg-amd64.img
```

Wait for the VM to boot and cloud-init (~20 seconds). Check the IP in the VM console:

```bash
ip addr show ens3
```

Note this IP — you'll use it for the tests below.
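If you'd rather not read the console, the IP can also be looked up on the host from dnsmasq's lease file. A small sketch — `print_leases` is a helper name I'm inventing here, and the lease-file path shown is the Debian/Ubuntu default (it varies by distro):

```bash
# Print "IP hostname" for every active dnsmasq lease.
# Lease lines look like: <expiry> <mac> <ip> <hostname> <client-id>
print_leases() {
    awk '{ print $3, $4 }' "$1"
}

# Debian/Ubuntu default location; check your distro's dnsmasq docs.
leases=/var/lib/misc/dnsmasq.leases
if [ -f "$leases" ]; then
    print_leases "$leases"
fi
```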

🧪 Step 4: Test — Host to VM

From your host (Redis was installed into this disk image back in Section 1, and was enabled, so it starts on boot):

```bash
redis-cli -h <your-vm-ip> ping
```

🧪 Step 5: Test — Container to VM

Launch an app-container that joins vm1's network namespace (since vm1 runs with --network=host, this is effectively the host's namespace):

```bash
podman run --rm -it \
    --name app-container \
    --network=container:vm1 \
    docker.io/redis:latest \
    redis-cli -h <your-vm-ip> ping
```

Comparison

| Aspect | Container-Native | Bridge |
| --- | --- | --- |
| Host setup | None | Bridge, dnsmasq, TAPs |
| VM visible to host | Via -p ports | Yes |
| Container-to-VM | Via Podman DNS | Via IP |
| VM network | Private (slirp) | Real (bridge) |
| Use case | Private/testing | Full networking |

🧹 Cleanup

Container-Native

```bash
podman rm -f qemu-container
podman network rm mynet
```

Bridge

```bash
podman rm -f vm1 app-container

sudo ip link delete tap0
sudo ip link delete kvmbr0
sudo rm /etc/dnsmasq.d/kvmbr0.conf
sudo systemctl restart dnsmasq
```

What You've Built

  • ✅ Container-native networking with slirp
  • ✅ Port forwarding (hostfwd) from VM to container
  • ✅ Podman DNS for container-to-VM communication
  • ✅ Host access via -p ports
  • ✅ Bridge networking with host bridge
  • ✅ TAP devices for VM networking
  • ✅ Container-to-VM via shared network
  • ✅ Host-to-VM via bridge

What's Next?

Your VMs and containers can now talk to each other. I'm not yet sure exactly what's next, but this is definitely not the end.

