TL;DR: The thing that actually forced my hand wasn't performance or security — it was a Slack message from our finance team asking why we had a $21/seat/month line item for "Docker Business" across 40 developers. Docker Desktop's licensing change (introduced back in 2022 but increasingly enforced since) put a real invoice on a tool everyone treated as free infrastructure. The good news: the runtime, the build layer, and the desktop experience all have genuinely mature replacements now, and this post covers the ones I actually installed and ran.
📖 Reading time: ~31 min
What's in this article
- Why I Started Looking Past Docker
- The Contenders I Actually Tested
- Podman: The Closest Drop-In
- nerdctl + containerd: What CI Actually Uses
- Lima on macOS: The Missing Piece
- Buildah: When You Only Need to Build Images
- Rancher Desktop: The 'It Just Works' Option
- Side-by-Side Comparison
- When to Pick What: My Actual Recommendations
- The Gotchas Nobody Warns You About
Why I Started Looking Past Docker
The thing that actually forced my hand wasn't performance or security — it was a Slack message from our finance team asking why we had a $21/seat/month line item for "Docker Business" across 40 developers. Docker Desktop's licensing change (introduced back in 2022 but increasingly enforced since) means any company with more than 250 employees or over $10M in revenue needs a paid subscription. We qualified on both counts. Suddenly a tool everyone treated as free infrastructure had a $10K/year invoice attached to it, and I had to either justify it or find alternatives.
The memory situation on my M3 MacBook Pro pushed me further. I ran top one afternoon and Docker Desktop's VM was sitting at 3.8GB RSS — doing nothing. No containers running, no builds in flight. Just idling. The com.docker.virtualization process alone was consuming more RAM than my actual application under moderate load. On a 16GB machine where Chrome, VS Code, and a local Postgres instance are all competing for memory, giving 4GB to a container daemon on standby is a real cost, not a theoretical one.
The daemon architecture started feeling like a genuine liability after we caught a CVE on a shared CI host where the Docker socket was mounted into a build container — a common pattern that's basically root escalation waiting to happen. Docker's model requires a persistent root-owned daemon (dockerd), and anything that can talk to that socket owns the host. We had it locked down, but "locked down" and "safe" aren't the same thing when you're running untrusted third-party CI jobs. The CVE wasn't in Docker itself, but the architecture made the blast radius much larger than it needed to be.
I want to be clear about what this is and isn't. Docker isn't dying — the image format is OCI-standard now, the registry ecosystem is built around it, and docker pull muscle memory is real. What I found is that the Docker CLI and the Docker runtime are separable things, and in 2026 you have genuinely mature alternatives for the runtime, the build layer, and the desktop experience individually. I've actually installed and run Podman, nerdctl with containerd, Rancher Desktop, and OrbStack — not just read their READMEs.
# Quick smell test I ran on each alternative —
# can it pull and run a basic image without root?
podman run --rm hello-world
nerdctl run --rm hello-world
# Both work without sudo on Linux.
# OrbStack on macOS: just works, no flags needed.
The honest summary: if you're a solo dev on Linux, you probably never needed Docker Desktop to begin with. If you're a team on macOS hitting the licensing threshold, OrbStack is the fastest migration path with the lowest friction. If you're running CI and the daemon model makes you nervous, Podman's rootless mode or a containerd-native setup is worth the two hours it takes to switch. None of these are theoretical — I have all three running in production or active dev environments right now.
The Contenders I Actually Tested
The thing that caught me off guard testing these wasn't the feature gaps — it was how production-ready most of them already are. I ran all of these on both an M3 MacBook Pro and an AMD64 Ubuntu 22.04 box over about three months, building real multi-service apps, not just hello-world containers.
Podman 5.x — Drop-In CLI Replacement (Mostly)
Podman 5.0 shipped with netavark as the default network stack and quadlet integration baked in, which makes it meaningfully different from the 4.x era. The daemonless architecture means no background process eating RAM when you're not actively using it, and rootless by default means your containers can't trivially escape to root on the host. I aliased docker=podman on day one and hit exactly two breaking points in three months: docker-compose (solvable, just not covered here) and a subtle difference in how --network host works on rootless setups on Linux.
# Install on Fedora/RHEL — it's native here
sudo dnf install podman
# On Ubuntu, the distro package lags well behind 5.x
# (3.4.x on 22.04, 4.x on 24.04 — fine for a quick test, not what you want long-term)
sudo apt-get install -y podman
# For current 5.x builds, add the Kubic repo: key first, then the source list
. /etc/os-release
curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/Release.key" \
  | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubic.gpg
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list
sudo apt-get update && sudo apt-get install -y podman
The gotcha nobody warns you about: rootless Podman runs as an unprivileged user, so binding to ports below 1024 fails unless you set net.ipv4.ip_unprivileged_port_start=80 in sysctl. On macOS, Podman runs inside a lightweight VM (podman machine — QEMU historically, Apple's hypervisor on Podman 5), which adds about 200-400ms to cold starts compared to native Linux.
nerdctl + containerd — What Your Kubernetes Nodes Are Actually Running
If you're deploying to Kubernetes, your workloads run on containerd, not Docker. nerdctl gives you a Docker-compatible CLI on top of containerd directly, which means you're testing against the exact same runtime your pods use. The CLI parity is genuinely impressive — nerdctl build, nerdctl compose, nerdctl run all work. The one area where it feels rough is image management tooling; the nerdctl image subcommands are less mature than Docker's.
# Install containerd + nerdctl on Ubuntu (nerdctl full bundle includes CNI plugins)
# Pin the release you've actually tested; 1.7.6 shown here, check the releases page for current
wget https://github.com/containerd/nerdctl/releases/download/v1.7.6/nerdctl-full-1.7.6-linux-amd64.tar.gz
sudo tar Cxzvvf /usr/local nerdctl-full-1.7.6-linux-amd64.tar.gz
# Enable containerd
sudo systemctl enable --now containerd
# Run rootless (requires uidmap package)
containerd-rootless-setuptool.sh install
nerdctl run --rm hello-world
Lima — The Sane VM Layer for macOS
Lima is what Rancher Desktop and Colima both use under the hood on macOS. Running it directly gives you control over which runtime lives inside the VM — you pick Podman, containerd/nerdctl, or even Docker CE if you want. A Lima VM with 4 CPUs and 8GB RAM starts in about 30-45 seconds on M3 hardware. The file sync story has improved dramatically: virtiofs (available with the vz VM type on macOS 13+) is substantially faster than the old reverse-sshfs default that made large mounted projects feel sluggish, though depending on your Lima version you may need to enable it explicitly; see the gotchas section below.
# Install and start a containerd-based Lima instance
brew install lima
limactl start --name=dev template://default # uses containerd + nerdctl
# Or explicitly choose Podman
limactl start --name=podman-vm template://podman
# Shell into it
limactl shell dev
nerdctl ps
Buildah — When You Just Need to Build Images
Buildah's pitch is narrow and honest: build OCI-compliant images without running a daemon, without even having a base image if you don't want one. I use it specifically in CI pipelines where I don't want the attack surface of a running daemon. buildah bud (aliased to buildah build in recent releases) accepts your existing Dockerfiles unchanged. The more interesting path is its native API where you construct layers in shell, which makes certain CI patterns cleaner than multi-stage Dockerfiles.
# Build from existing Dockerfile — works as-is
buildah bud -t myapp:latest .
# The native approach — build a container layer by layer without a Dockerfile
container=$(buildah from ubuntu:22.04)
buildah run $container -- apt-get update
buildah run $container -- apt-get install -y python3
buildah config --entrypoint '["python3", "-m", "http.server"]' $container
buildah commit $container myapp:latest
buildah rm $container
Rancher Desktop — The GUI Option That Doesn't Require You to Care
Rancher Desktop 1.x (currently at 1.13.x as of early 2026) is the honest answer for developers who want Docker Desktop parity without the Docker Desktop subscription. It bundles Lima, containerd or dockerd, nerdctl, kubectl, and Helm into one app with a preferences pane. You pick your runtime in the UI, it handles the VM, path configuration, and socket setup automatically. The performance on Apple Silicon is comparable to Docker Desktop for typical dev workloads. Where it loses points: the UI feels like an engineering prototype, and switching between containerd and dockerd runtimes requires a full reset that takes 2-3 minutes.
- Free for all use cases — no business tier licensing, which is the obvious reason to switch from Docker Desktop
- Kubernetes included — spins up a local k3s cluster without additional config, useful if you're validating manifests locally
- Socket compatibility — exposes /var/run/docker.sock when dockerd mode is active, so tools that hardcode that path (looking at you, Testcontainers) work without workarounds
I deliberately skipped the podman-compose vs docker-compose comparison across all of these. Every one of these tools has a compose story, and the differences — especially around networking defaults, secret handling, and extension fields — are significant enough that collapsing it into a bullet point here would be doing it a disservice. That's its own post.
Podman: The Closest Drop-In
The alias trick is real and it's not a joke: alias docker=podman in your .bashrc and roughly 90% of your muscle memory just works. I was skeptical until I ran a full week of my normal workflow — building images, running containers, pushing to registries — without hitting a single incompatibility. The other 10% will find you eventually, but it's not the daily-driver problem you'd expect.
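If you go this route, it's worth one sanity check so you always know what's actually answering your commands. A minimal sketch of the setup, nothing beyond commands already covered in this post:

# In ~/.bashrc
alias docker=podman
# One-time check: with the alias active, this prints "podman version 5.x.x",
# which is a useful tell when debugging on shared machines
docker --version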
Installing the Right Version
On Fedora or RHEL, Podman ships with the OS — you already have it. On Ubuntu, sudo apt install podman drops you on whatever the archive froze at (3.4.x on 22.04, 4.x on 24.04), which works but misses the network stack improvements in 5.x. If you want 5.x on Ubuntu, pull from the Kubic repo:
# Add the Kubic repo for Podman 5.x on Ubuntu 22.04/24.04
. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/ /" \
| sudo tee /etc/apt/sources.list.d/podman.list
curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/Release.key" \
| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/podman.gpg
sudo apt update && sudo apt install podman
The basic command parity is solid. podman run -it --rm ubuntu:24.04 bash works exactly as you'd expect — same flags, same behavior, same image pulled from Docker Hub unless you've configured a different registry default in /etc/containers/registries.conf. That file is actually worth checking early because Podman doesn't default to Docker Hub the same way Docker does, and you'll get confusing "image not found" errors until you either qualify your image names or set unqualified-search-registries = ["docker.io"].
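For reference, the relevant setting looks like this; a minimal sketch of /etc/containers/registries.conf (a per-user ~/.config/containers/registries.conf works too), so check your distro's shipped defaults before editing:

# /etc/containers/registries.conf
# Resolve unqualified names like "ubuntu:24.04" against Docker Hub
unqualified-search-registries = ["docker.io"]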
The Rootless Mode Gotcha That Bites Everyone
The place things fall apart is docker-compose files using network_mode: host in rootless mode. You'll hit permission errors the moment any service tries to bind to a port below 1024. The fix is a sysctl tweak, and you want to make it persistent, not just session-level:
# Temporary fix (current boot only)
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Permanent fix — survives reboots
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-podman-ports.conf
sudo sysctl --system
This isn't a Podman bug exactly — it's the kernel being correct about unprivileged processes and low ports. But Docker historically just ran as root so you never noticed. The security model is actually better here; you're just paying the configuration tax upfront.
Podman Pods and Systemd Integration
Podman pods are underused. If you're running a web app with a sidecar or a database that needs to share a network namespace, you don't need Compose at all:
# Create a pod that exposes port 8080 externally mapped to 80 inside
podman pod create --name myapp -p 8080:80
# Now run containers into it — they share localhost with each other
podman run -d --pod myapp --name web nginx:alpine
podman run -d --pod myapp --name app myapp:latest
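A nice side effect of pods is that they map directly onto Kubernetes objects. If you later want to promote a local setup like this, podman generate kube emits a manifest from the running pod; a sketch of the round trip:

# Export the running pod as a Kubernetes YAML manifest
podman generate kube myapp > myapp-pod.yaml
# Replay it locally (after tearing down the original pod) or hand it to a cluster
podman play kube myapp-pod.yaml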
The systemd integration is the feature that actually made me recommend Podman for Linux servers over Docker. On a machine where systemd is the init system (which is most of them), generating a proper unit file from a running container is one command:
# Generate a user-level systemd service that restarts the container on boot
podman generate systemd --new mycontainer > ~/.config/systemd/user/mycontainer.service
# Enable it
systemctl --user daemon-reload
systemctl --user enable --now mycontainer
The --new flag is important — without it the unit file references the container by ID, which breaks after you update the image. With --new it re-creates the container from scratch on each start, which is the behavior you actually want.
The Image Cache Problem Nobody Warns You About
Podman's image cache is completely separate from Docker's, stored under ~/.local/share/containers/storage/ instead of /var/lib/docker/. Your first week after switching involves pulling everything again. There's no migration tool that works reliably — docker save | podman load is the manual path if you have large images you really don't want to re-download. For most people it's just friction that clears itself after a few days. Factor this in before you switch a CI machine that caches Docker layers aggressively; you'll need to rebuild that cache from scratch.
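For the images you genuinely can't afford to re-pull, the manual path looks like this; slow for big images, but it works (the image name here is illustrative):

# One-off migration of a large image from Docker's cache into Podman's
docker save registry.example.com/myapp:latest | podman load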
nerdctl + containerd: What CI Actually Uses
The thing that caught me off guard when I first looked at this stack: most Kubernetes clusters haven't used Docker as their runtime since Kubernetes 1.24 deprecated dockershim. Your pods are already running on containerd. So when you're building images with Docker on your laptop and running them in CI with Docker, you're actually introducing a translation layer that doesn't exist in production. nerdctl cuts that out entirely.
Installation: Get the Full Bundle
Don't grab the minimal nerdctl binary — you'll spend an hour debugging missing CNI plugins. Get the full bundle which ships with BuildKit, CNI plugins, and everything you need to actually build and run containers:
# Replace 2.x.x with the actual latest from github.com/containerd/nerdctl/releases
curl -Lo nerdctl.tar.gz https://github.com/containerd/nerdctl/releases/download/v2.x.x/nerdctl-full-2.x.x-linux-amd64.tar.gz
sudo tar Cxzvvf /usr/local nerdctl.tar.gz
# Start containerd if it's not already running
sudo systemctl enable --now containerd
After that, nerdctl run -d -p 80:80 nginx:alpine just works. The CLI surface is close enough to Docker that your muscle memory carries over — same flags, same syntax for volumes, same image naming conventions. I've dropped nerdctl into CI pipelines by replacing docker with nerdctl in shell scripts and had zero breakage on straightforward build/push workflows.
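For scripts you can't easily edit, a wrapper on the runner works too. This shim is my own convenience hack, not an official nerdctl install step, and only makes sense on a machine where the real docker CLI isn't installed:

# Hypothetical shim: scripts that invoke `docker` transparently hit nerdctl
sudo tee /usr/local/bin/docker >/dev/null <<'EOF'
#!/bin/sh
exec nerdctl "$@"
EOF
sudo chmod +x /usr/local/bin/docker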
The CI Parity Argument Is Real
Here's the actual value proposition: if your CI runner and your k8s nodes are both pinned to containerd 1.7.x (or 2.x once clusters upgrade), you're using identical image unpacking, identical snapshot handling, identical OCI layer behavior. A subtle failure where an image works in Docker-based CI but breaks in production is almost always a containerd vs. Docker daemon behavior difference — storage drivers, overlay configuration, how they handle whiteout files. Eliminating that variable has saved me real debugging time. Your CI build environment and your production runtime are no longer different species pretending to be compatible.
BuildKit Without the Ceremony
BuildKit is baked in as the default builder. No DOCKER_BUILDKIT=1 prefix, no docker buildx create setup ritual. Registry-backed cache works out of the box:
# Push cache to registry, pull on next run — this actually works first try
nerdctl build \
--cache-from type=registry,ref=ghcr.io/myorg/myapp:buildcache \
--cache-to type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max \
-t ghcr.io/myorg/myapp:latest .
# mode=max caches intermediate layers, not just the final image
# This is the flag Docker users forget exists because the default (min) is nearly useless for CI
The mode=max detail matters. Docker's BuildKit defaults to min which only caches the final stage — useless for multi-stage builds where your heavy dependency install happens in stage one. With nerdctl's BuildKit setup, I set mode=max once and cache hit rates on repeated CI runs jumped noticeably.
Where It Actually Gets Painful
Rootless mode is the rough edge. If your CI runners don't have root (which is a legitimate security setup), you're running this:
containerd-rootless-setuptool.sh install
# This will fail on kernels < 5.11 without cgroup v2 delegation configured
# Check first:
cat /sys/fs/cgroup/cgroup.controllers
On Ubuntu 20.04 runners (still common in 2026 for cost reasons), cgroup v2 isn't the default and the rootless networking setup through slirp4netns adds measurable overhead compared to root mode. My honest take: if you control your CI infrastructure, run containerd as root on the runner and call it done. If you're on shared infrastructure where rootless is mandatory, budget a few hours getting the cgroup delegation right — it's a one-time fix but the error messages when it goes wrong are spectacularly unhelpful. The nerdctl info command at least shows you what's missing, which is more than dockerd gives you in similar situations.
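The delegation fix that usually unblocks rootless mode on systemd distros is a drop-in for the user manager. A sketch of the standard rootless-containers recipe; verify against your distro's docs before rolling it out fleet-wide:

# Delegate the cgroup controllers rootless containerd/Podman need
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids
EOF
sudo systemctl daemon-reload
# Log out and back in (or restart user@$(id -u).service), then re-run the setup tool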
Lima on macOS: The Missing Piece
The thing that surprised me most about Lima is how little people talk about it given how well it works. You get a real Linux VM on your Mac, Podman inside it, socket forwarding to your host terminal, and the whole setup takes about three minutes. No licensing fees, no background service eating 400MB of RAM on startup.
brew install lima
limactl start template://podman
That second command pulls down a pre-configured VM template, starts a Lima-managed VM (QEMU, or Apple's Virtualization framework if configured), and sets up Podman inside it. Once it's running, you can point your existing Docker-aware tools straight at it without changing anything else about your workflow:
export DOCKER_HOST=unix:///Users/you/.lima/podman/sock/podman.sock
docker ps # works — docker CLI talking to Podman over the forwarded socket
I'd put that export in your .zshrc immediately. The socket forwarding is clean — Compose, Testcontainers, whatever you use that speaks the Docker API just works. The gotcha is the path: it matches your actual username, so substitute you with the output of whoami or you'll spend 20 minutes debugging a "no such file" error that has nothing to do with Podman.
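A version of that export that survives being pasted into someone else's dotfiles, since the socket lives under your home directory anyway:

# Same export without the hardcoded username
export DOCKER_HOST="unix://$HOME/.lima/podman/sock/podman.sock"
docker ps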
Before you run any real build, open ~/.lima/default/lima.yaml and check the resource defaults. Depending on your Lima version and template they can be stingy enough (as little as 1 CPU and a few hundred MB of memory) that your first multi-stage Docker build will take long enough that you'll assume something is broken. Make sure these two lines give you real headroom:
# ~/.lima/default/lima.yaml
cpus: 4
memory: "4GiB"
Then limactl stop default && limactl start default to apply.

Volume mount performance is where Lima can genuinely hurt. If you're working on a Node project where node_modules has 50,000 files, directory traversal through the slower mount types (reverse-sshfs or 9p, depending on your config) is noticeably worse than what Docker Desktop gives you; switching to virtiofs helps a lot, as covered in the gotchas section below. For build-only workflows where you're not mounting source code live, you won't feel it. For a dev server watching files in a mounted volume? You will.

OrbStack handles this better: it uses a custom filesystem layer that benchmarks closer to native. OrbStack runs $8/month per developer after a free trial, and honestly if your team already pays for Datadog or similar tooling, that's a rounding error, worth it purely for the I/O improvement and the orb CLI, which feels more polished. But Lima being MIT-licensed and free matters a lot for open source projects, personal machines, or CI pipelines where you just need a Podman socket without a license conversation.
Buildah: When You Only Need to Build Images
The thing that caught me off guard with Buildah is that it's not trying to replace Docker for everything — it's laser-focused on one job: building OCI-compliant images. No runtime, no container management, no docker ps equivalent. Just build and push. That constraint is exactly what makes it useful in a CI context where running a Docker daemon is either overkill or a security liability.
Drop-in compatibility with existing Dockerfiles is real and actually works. Run this in your pipeline:
# Works with your existing Dockerfile, no changes needed
buildah bud -t myapp:latest .
# Push directly to a registry without docker login
buildah push myapp:latest docker://registry.example.com/myapp:latest
No daemon process, no Unix socket to mount, no root required. In a Kubernetes Job running as a non-root user, this just works — whereas Docker-in-Docker requires either privileged: true or socket mounting, both of which your security team will flag immediately.
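To make that concrete, here's roughly what an unprivileged build Job can look like. This is a hedged sketch using the upstream quay.io/buildah/stable image; the name, user ID, and env settings are illustrative (the source checkout volume is elided), so adapt it to your cluster's policies:

# Minimal unprivileged Buildah Job (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: buildah-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: quay.io/buildah/stable
          workingDir: /workspace
          command: ["buildah", "bud", "-t", "myapp:latest", "."]
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
          env:
            - name: BUILDAH_ISOLATION
              value: chroot   # documented mode for builds without nested namespaces
            - name: STORAGE_DRIVER
              value: vfs      # avoids needing fuse-overlayfs or /dev/fuse in the pod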
The scripted build API is where Buildah genuinely earns its place. Instead of writing a Dockerfile, you can drive the build procedurally from a shell script:
# Each step is explicit and scriptable
container=$(buildah from ubuntu:24.04)
# $container is just a string ID — pass it around, inspect it, branch it
buildah run $container -- apt-get update -y
buildah run $container -- apt-get install -y python3 python3-pip
# Copy only what you need — no .dockerignore gymnastics
buildah copy $container ./src /app/src
buildah copy $container ./requirements.txt /app/
buildah config --cmd "python3 /app/src/main.py" $container
buildah commit $container myapp:latest
# Clean up the working container
buildah rm $container
This matters for complex builds where you need conditional logic, loops over matrix configs, or dynamic layer content based on environment variables. A shell script handles that naturally. A Dockerfile makes you reach for ARG hacks and multi-stage gymnastics.
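Here's the kind of thing I mean; a sketch of a matrix build that would be awkward as a single Dockerfile (the version list and image names are illustrative):

# Build one image per Python version, no ARG gymnastics
for ver in 3.11 3.12 3.13; do
  c=$(buildah from docker.io/library/python:${ver}-slim)
  buildah copy $c ./src /app/src
  buildah config --cmd "python3 /app/src/main.py" $c
  buildah commit $c myapp:py${ver}
  buildah rm $c
done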
The security story for GitOps is genuinely strong. A Tekton or Argo Workflows pipeline step running Buildah needs zero elevated permissions — no securityContext.privileged: true, no hostPath volume for the Docker socket. Your cluster's PSP or Pod Security Admission policies stay strict, and you're not exposing the host Docker daemon to arbitrary build code. If someone injects malicious build steps, the blast radius is contained to that pod's user namespace. That's a real, auditable security property, not marketing copy.
Honest take on the trade-off: if your team is comfortable with docker build, doesn't operate inside Kubernetes-native CI, and has no compliance pressure around privileged containers, Buildah's extra ceremony buys you nothing. The scripted API requires you to think in "working containers" and remember to buildah rm them. The docs assume familiarity with OCI internals. I'd only reach for it when I have a specific reason — unprivileged Kubernetes builds, scripted layer logic, or needing to manipulate image layers post-build before committing. Otherwise docker build or Kaniko gets the job done with less friction.
Rancher Desktop: The 'It Just Works' Option
The thing that surprised me most about Rancher Desktop is how little it asks of you. Download the DMG, run the installer, pick whether you want dockerd or containerd as the runtime, and you're building images within five minutes. No account creation. No license agreement that changed last quarter. No "activate your subscription to unlock this feature." It's genuinely free and open source, maintained by SUSE, and ships with kubectl and helm bundled — tools you'd otherwise install separately anyway.
The dockerd compatibility mode is the real killer feature for anyone switching from Docker Desktop. Your muscle memory stays intact. Every docker build, docker run, and docker compose up command works exactly as before because it is Docker under the hood — just without the commercial wrapper. The thing that catches people out is that the socket path is slightly different by default, but Rancher Desktop symlinks it to /var/run/docker.sock automatically. VS Code Dev Containers picks it up without a single config change. CI scripts you've been running for two years? Untouched.
# After install, verify the runtime is active
docker context ls
# Should show rancher-desktop as current context
# Helm and kubectl come pre-bundled at specific versions
# Check what shipped with your install
helm version
# version.BuildInfo{Version:"v3.14.x", ...}
kubectl version --client
# Confirm it matches your cluster's server version
# Switch runtime in the UI: Preferences → Container Engine → dockerd or containerd
I actively recommend this to teammates who need containers locally but don't want to become container infrastructure people. Someone who needs to run a Postgres sidecar, test a docker-compose setup, or build a Node image before pushing it — Rancher Desktop handles all of that without them needing to understand what containerd even is. The cognitive overhead is as low as Docker Desktop was before the licensing changes.
The honest trade-offs: the UI is functional but not polished. There's no Docker Scout for image vulnerability scanning, no Extensions marketplace, and the dashboard won't win any design awards. If your team has invested in the Docker ecosystem specifically — Scout alerts in CI, custom Docker Extensions, deep Docker Hub integration — you'll feel those gaps. Memory usage was roughly on par with Docker Desktop in my own testing on an M2 MacBook Pro; I measured both hovering around 2–3GB under light load. If you were hoping to reclaim RAM, look at Lima or Finch instead. Rancher Desktop trades efficiency for convenience, and that's a completely fair trade for most local development use cases.
Side-by-Side Comparison
The thing that trips people up most when evaluating these tools is assuming they're all interchangeable drop-ins. They're not. The daemon vs. no-daemon distinction alone changes your threat surface, your CI setup, and how you debug socket permission errors at 11pm. Here's how they actually stack up:
| Tool | Daemon Required | Rootless Default | macOS Support | Docker CLI Compatible | Best For |
| --- | --- | --- | --- | --- | --- |
| Podman 5.x | No | Yes | Via Lima / Podman Machine | Yes (alias) | Linux devs, security-conscious teams |
| nerdctl + containerd | Yes (containerd) | Optional | Via Lima | Yes | Kubernetes-aligned CI/CD |
| Lima | N/A (VM layer) | Depends on guest | Yes (native) | With guest config | macOS users avoiding Docker Desktop |
| Buildah | No | Yes | Via Lima | Build only | Image-only CI pipelines |
| Rancher Desktop | Optional | Partial | Yes | Yes | Laptop devs who want zero config |
The "Docker CLI Compatible" column needs a caveat. Podman's alias docker=podman trick works for 90% of workflows — until you hit Compose files that use network_mode: host on macOS, or anything that assumes a background daemon is listening on /var/run/docker.sock. That socket path is baked into more tooling than you'd expect. nerdctl is actually the tightest CLI clone of the bunch; it was designed to mirror docker flags almost exactly, so if you're migrating CI scripts, that's your lowest-friction path.
Rootless-by-default is a bigger deal than it sounds on shared infrastructure. With Podman 5.x and Buildah, containers run as your UID from the first command. No sudo, no adding your CI runner to a docker group, no worrying that a misconfigured container can write to host paths it shouldn't touch. nerdctl with containerd supports rootless mode via containerd-rootless-setuptool.sh install, but it's not flipped on automatically — you have to mean it.
# Podman rootless — no setup, this just works
podman run --rm -it alpine sh
# nerdctl rootless — requires explicit install step first
containerd-rootless-setuptool.sh install
nerdctl run --rm -it alpine sh
# Confirm you're actually running rootless (UID should match your user)
podman info --format '{{.Host.Security.Rootless}}'
# true
Lima is the odd one out in this table because it's not a container runtime — it's a VM manager that hosts one. That means macOS support is genuinely native in a way the others can't match: Lima manages the VM lifecycle, port forwarding, and file mounts directly. The trade-off is an extra abstraction layer. If your Lima VM has nerdctl installed, you're piping commands through two layers before a container starts. For most local dev that latency is invisible, but in tight CI loops you'll feel it.
Buildah's "Build only" CLI compatibility is a feature, not a limitation, if your use case is strictly producing OCI images in pipelines. You can write a Containerfile (Dockerfile-compatible syntax) and push to any registry without ever spinning up a runtime. I've used this in air-gapped environments where running containers was forbidden but building images for scanning was required — nothing else in this table fits that niche as cleanly.
All five tools are free and open source. Podman, nerdctl, Buildah, and Lima live under various CNCF or Red Hat umbrellas with no paywalled features. Rancher Desktop is Apache 2.0 licensed; SUSE offers commercial support through Rancher products but the desktop app itself costs nothing. Before you commit to any of these for a team, check each project's GitHub for enterprise support options — the space changes, and what's community-only today may have a support contract available by the time you're reading this.
When to Pick What: My Actual Recommendations
After spending months switching between these tools across different projects and environments, the honest answer is: the "best" choice is almost entirely determined by your OS, your team's starting point, and your deployment target. Here are my concrete recommendations without the hand-waving.
Linux running workloads headed to Kubernetes → nerdctl + containerd, full stop. The runtime Kubernetes actually uses is containerd, so why are you shimming through Docker to talk to it? nerdctl's CLI is close enough to Docker that muscle memory transfers immediately. The setup takes maybe 20 minutes:
# Install containerd + CNI plugins + nerdctl
# Pin the release you've tested (1.7.6 shown; 2.x is current upstream)
curl -L https://github.com/containerd/nerdctl/releases/download/v1.7.6/nerdctl-full-1.7.6-linux-amd64.tar.gz | sudo tar xz -C /usr/local
sudo systemctl enable --now containerd
# Verify you can pull and run something
nerdctl run --rm hello-world
You get BuildKit baked in, no daemon overhead layered on top of containerd, and your local image behavior actually matches what happens in your cluster. The setup cost is real but it's a one-time thing — worth every minute.
macOS with Docker Desktop cost as the issue: The free path is Lima + Podman. Lima spins up a Linux VM with reasonable defaults and Podman slots into it cleanly. You lose some file I/O speed compared to Docker Desktop's VirtioFS implementation, but if you're not doing heavy volume-mounted development (e.g., just building and running containers, not live-reloading source code from a mounted directory), you won't notice. If you are doing that kind of dev work and the slowness is killing you, OrbStack at $8/month gives you file I/O that's genuinely competitive with native. I switched to OrbStack for a React project where Lima's bind mounts made hot reload unbearable. The difference was immediate.
Migrating a team from Docker with zero retraining budget → Rancher Desktop in dockerd mode. This is the pragmatic move. Rancher Desktop lets you pick dockerd as the runtime, which means every existing docker command, every docker-compose file, every script with docker build in it works unchanged on day one. Then you flip individual developers to containerd mode as they get comfortable. No big-bang migration, no one screaming that their workflow broke. The gradual path looks like:
- Roll out Rancher Desktop with dockerd to the whole team (replace Docker Desktop, zero behavior change)
- Migrate your CI pipeline to nerdctl or Buildah first — less human resistance there
- Switch developers to containerd mode one squad at a time, with a runbook for the three commands that differ
Hardened CI in a restricted environment → Buildah for builds, nerdctl for any runtime needs. Buildah's killer feature here is rootless, daemonless builds that don't need a privileged socket. In environments where your security team has locked down /var/run/docker.sock mounts in CI (and they should have), Buildah just works. It also speaks OCI natively, so you're not building Docker-proprietary image formats:
# Rootless build — no daemon, no socket, no sudo
buildah bud --layers -t my-app:latest -f ./Dockerfile .
# Push directly to a registry without docker login
buildah push my-app:latest docker://registry.example.com/my-app:latest
Fedora/RHEL/CentOS Stream → Podman, obviously. Red Hat ships Podman as the default container tool on these distros and the systemd integration is the thing that doesn't get enough credit. podman generate systemd outputs a proper unit file for any container you're running, which means you can manage containers with systemctl like any other service — restart policies, dependency ordering, journald logging — all without a daemon running in the background to babysit things. On RHEL 9+, Podman 4.x is the default and quadlets (native systemd container definitions) make this even cleaner.
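For the quadlet path, a minimal sketch (the paths follow the quadlet docs; the image name is illustrative): you drop a small .container unit in place and systemd generates the service for you.

# ~/.config/containers/systemd/myapp.container
[Container]
Image=registry.example.com/myapp:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

# Then: systemctl --user daemon-reload && systemctl --user start myapp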
If you actually need Docker-specific features — Scout vulnerability scanning, private Docker Hub repos at scale, Dev Environments for onboarding — Docker Desktop is still the right answer, just own the license cost. Docker Scout's integration with the Hub's image index is something none of the alternatives have replicated. If your security team is relying on Scout reports or you're distributing private images through Docker Hub to customers, you're not really choosing between Docker and an alternative; you're choosing between paying for Docker Desktop or rebuilding that workflow from scratch around something like Harbor + Trivy. That's a project, not a swap.
The Gotchas Nobody Warns You About
The thing that trips up almost every team switching away from Docker is that the mental model feels identical until it doesn't. Same CLI shape, same Dockerfiles, same Compose files — and then something breaks in a way that produces no error message, just wrong behavior. Here are the specific traps I've fallen into so you don't have to.
Podman Rootless and the Ownership Surprise
Run a container whose process writes files as a non-root container user (a Postgres image's uid 999, say) into a bind mount, then check what owns the file on the host:
# Inside the container the process runs as a non-root uid, e.g. 999
$ podman run --user 999 -v ./data:/data fedora touch /data/hello.txt
# Back on the host
$ ls -la ./data/
-rw-r--r-- 1 100999 100999 0 Jan 10 09:22 hello.txt
That 100999 is a subuid-mapped user — nobody you recognize. Podman uses user namespaces: the container's root maps to your own UID, and every other container UID maps into the host's subordinate UID range (defined in /etc/subuid), so uid 999 inside becomes roughly 100000 + 999 outside. This isn't a bug; it's the security model working correctly. But it will break CI pipelines that check file ownership, confuse volume cleanup scripts, and make developers think something is corrupted. The fix is to either use --userns=keep-id (maps your host UID into the container as-is) or set explicit ownership with podman unshare chown after the fact. Just know going in that bind mounts in rootless mode don't behave like Docker's.
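Both fixes in one place; the unshare trick works because inside your user namespace your own UID shows up as root:

# Option 1: map your host UID into the container unchanged
podman run --userns=keep-id -v ./data:/data fedora touch /data/hello.txt
# Option 2: reclaim ownership after the fact (0:0 here maps back to your host UID)
podman unshare chown -R 0:0 ./data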
containerd's Image Namespace Trap
If you're using containerd directly — common on Kubernetes nodes or with Lima — there are two CLI tools that talk to it: nerdctl and ctr. They don't show you the same images by default because containerd organizes images into namespaces, and each tool has a different default.
# nerdctl defaults to the "default" namespace
$ nerdctl images
REPOSITORY TAG IMAGE ID CREATED SIZE
# But kubelet uses the k8s.io namespace
$ nerdctl -n k8s.io images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.k8s.io/pause 3.9 829e9de338301 3 weeks ago 743kB
I spent an embarrassing amount of time wondering why images I'd pulled were "missing" on a node. The answer was namespaces. If you pull an image with nerdctl pull, it goes into default. Kubelet won't see it. Pull with nerdctl -n k8s.io pull instead. Same issue with ctr — its default namespace is also default, not k8s.io. The ctr tool is lower-level and has less polish, so I'd stick to nerdctl with explicit -n flags and alias them in your shell.
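The aliasing I landed on, for what it's worth:

# Make the kubelet-visible namespace one keystroke away
alias n='nerdctl'
alias nk='nerdctl -n k8s.io'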
Lima Volume Performance: You Have to Opt In to the Fast Driver
Lima's default mount strategy varies by version and VM type, and the slower options are noticeably painful for anything write-heavy (think npm install, compiled language builds, or test runners that generate lots of small files). The virtiofs driver is dramatically faster, but depending on your Lima version it may not be what you're running, so check your lima.yaml and set it explicitly:
# ~/.lima/default/lima.yaml
vmType: vz                 # virtiofs requires Apple's Virtualization framework backend
mounts:
  - location: "~"
    mountType: virtiofs    # without this you can fall back to reverse-sshfs/9p, which is painfully slow
    writable: true
The older reverse-sshfs and 9p mounts are fine for reading config files. They're miserable for anything where you're creating or modifying hundreds of files rapidly. After switching to virtiofs on an M2 MacBook Pro, a cold npm install inside a Lima VM dropped from around 90 seconds to under 20. You also need macOS 13+ and a reasonably recent Lima (0.18+ in my testing) for virtiofs to work reliably — older combos had kernel panic issues on some machines.
podman-compose Is Not docker-compose
podman-compose is maintained separately from the Podman project itself and tracks the Compose spec at its own pace. Most basic stuff works. But the edges are rough. The specific failure that got me was static IP assignment in custom networks:
# This works fine with Docker Compose
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
services:
  db:
    networks:
      backend:
        ipv4_address: 172.20.0.5
With podman-compose, this silently succeeds — no error — but the container gets a random IP instead of the one you specified. Services that relied on that static address then fail to connect, and since there's no error at startup you waste time debugging the wrong layer. The workaround is to use hostnames and internal DNS instead of static IPs, which is the right call anyway. But "silent failure" is the worst kind of incompatibility. Before committing to podman-compose in production, diff your Compose files against its supported features list — it's not exhaustive but it's honest.
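The hostname-based version of the same wiring, which behaves the same under both tools (service and variable names are illustrative):

services:
  db:
    networks: [backend]
  app:
    networks: [backend]
    environment:
      DB_HOST: db   # compose DNS resolves the service name; no static IP needed
networks:
  backend:
    driver: bridge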
Docker Contexts Will Gaslight Your Team
If someone on your team is running Docker Desktop, someone else is using Podman with a Docker-compatible socket, and your CI uses a remote Docker daemon — contexts will become the source of mysterious "it works on my machine" bugs. The default context name is just default on every tool, and when you docker context use something and forget about it, every subsequent Docker CLI call silently targets the wrong daemon.
# Name contexts after what they actually are, not "default"
$ docker context create lima-dev \
--docker "host=unix:///Users/you/.lima/default/sock/docker.sock"
$ docker context create prod-remote \
--docker "host=ssh://deploy@prod.internal"
# Make the active context visible in your shell prompt
$ docker context show
lima-dev
Add docker context show to your shell prompt the same way you'd show the active Git branch or Kubernetes context. Document the team's named contexts in the repo README with the exact create commands. The real danger is mixed-tool environments where podman system service is exposing a Docker-compatible socket — it responds to docker commands correctly enough that you won't realize you're hitting Podman until a subtle behavior difference bites you. Explicit context names are cheap insurance.
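The prompt trick is a one-liner in bash (zsh users can do the equivalent in precmd); a minimal sketch:

# Show the active Docker context before every prompt
PS1='[$(docker context show 2>/dev/null)] \w \$ '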