
amir



Running My Tiny Docker-like Runtime on macOS with Lima: Lessons, Mistakes, and a Simple Benchmark

When I started building my own tiny Docker-like runtime in Go, I had one simple assumption:

“It is written in Go, so I should be able to run it anywhere.”

That assumption was only half correct.

Yes, Go makes it easy to compile binaries for different platforms. But a container runtime is not just a Go application. A container runtime depends heavily on operating system features, especially Linux kernel features.

In my case, the project needed things like:

  • Linux namespaces
  • cgroups v2
  • mount isolation
  • chroot
  • process isolation
  • bridge networking
  • veth pairs
  • iptables/NAT

And that is where macOS becomes a problem.

macOS is not Linux. It does not provide Linux namespaces or cgroups in the same way. So even if my code compiled on macOS, the actual container runtime logic could not work directly on macOS.

This was the point where I started using Lima.

In this article, I want to share how I used Lima to run my tiny Docker-like runtime from macOS, the mistakes I made, the small design decisions I learned from, and a simple experimental benchmark at the end.

This is not a “Lima vs Docker Desktop” article.

It is more about understanding the boundary between macOS, Linux, Docker, and a custom container runtime.


What I Was Building

The project is a small Docker-like runtime written in Go.

The goal was not to replace Docker.

The goal was to understand what Docker does under the hood.

Docker gives us a very clean developer experience:

docker run alpine echo hello

But behind that simple command, many things happen:

image resolution
filesystem preparation
namespace creation
cgroup configuration
mount setup
network setup
process execution
log tracking
metadata storage
cleanup

The official Docker documentation describes Docker as an open platform for developing, shipping, and running applications:

https://docs.docker.com/get-started/docker-overview/

That high-level explanation is useful, but when you build a tiny runtime yourself, Docker becomes much less magical.

You start seeing the lower-level Linux pieces.

For example, my runtime supports commands like:

tiny-docker-go run --rootfs ./rootfs/alpine /bin/sh
tiny-docker-go ps
tiny-docker-go logs -f <container-id>
tiny-docker-go stop <container-id>

On Linux, this makes sense.

On macOS, it immediately raises a question:

Where do Linux namespaces and cgroups come from?

The answer is: they do not come from macOS.

I needed Linux.


The First Mistake: Thinking Go Portability Means Runtime Portability

My first mistake was confusing language portability with operating system feature portability.

I was thinking like this:

Go can build on macOS.
Therefore, my runtime should work on macOS.

But the correct mental model is:

Go can compile the program for macOS.
But Linux container primitives still require Linux.

A Go program can be cross-platform.

But this does not mean every syscall or kernel feature exists on every platform.

For example, when a container runtime wants to isolate a process, it may need Linux-specific features like:

CLONE_NEWUTS
CLONE_NEWPID
CLONE_NEWNS
CLONE_NEWNET
cgroup filesystem
mount operations
veth networking
iptables rules

None of these are available on macOS.

So the real problem was not the programming language.

The real problem was the kernel.

That was a very important lesson for me.


Why macOS Cannot Run This Directly

macOS uses the XNU kernel.

Linux containers depend on the Linux kernel.

This matters because containers are not virtual machines. A container is usually a regular process with a restricted view of the system.

That restricted view is created by kernel features.

For example:

PID namespace      -> gives the process its own process tree
UTS namespace      -> gives the process its own hostname
mount namespace    -> gives the process its own mount view
network namespace  -> gives the process its own network stack
cgroups            -> limit and track resource usage
chroot/rootfs      -> changes the visible filesystem root

On a Linux machine, a runtime can call these features directly.

On macOS, the features are not available in the same way.

So the architecture had to change.

Instead of this:

macOS
  -> tiny-docker-go
      -> Linux namespaces/cgroups

I needed this:

macOS
  -> Linux VM
      -> tiny-docker-go
          -> Linux namespaces/cgroups

This is where Lima became useful.


What Is Lima?

Lima is a tool that runs Linux virtual machines on macOS.

Official documentation:

https://lima-vm.io/docs/

Installation guide:

https://lima-vm.io/docs/installation/

The important thing is this:

Lima is not Docker.
Lima is not my container runtime.
Lima gives me a Linux VM.

That Linux VM gives my project access to the Linux kernel features it needs.

A simple mental model:

MacBook
  └── macOS
       └── Lima VM
            └── Linux
                 └── my tiny Docker-like runtime
                      └── container-like process

This separation helped me understand the problem much better.

Lima is the environment.

My Go runtime is the thing doing the container work.

Alpine rootfs is the container filesystem.


Why I Did Not Just Use Docker Desktop

Docker Desktop is great.

I use Docker Desktop for normal development work.

But for this project, Docker Desktop was not the cleanest learning environment.

Docker Desktop itself uses a Linux VM behind the scenes on macOS. That is how Docker can run Linux containers on macOS.

But I was not trying to simply run containers.

I was trying to build a small runtime that behaves like a container runtime.

So if I put everything behind Docker Desktop too early, I would hide some of the details I wanted to learn.

My goal was not:

How do I run an app in Docker?

My goal was:

How does a container runtime use Linux features to isolate a process?

For that goal, Lima felt cleaner.

The distinction became:

Docker Desktop:
  Great for running Docker containers and application stacks.

Lima:
  Great for getting a Linux environment on macOS and experimenting with Linux internals.

So for this project, Lima gave me a better learning path.


Installing Lima

On macOS, installing Lima with Homebrew is simple:

brew install lima

Then I created a VM for the project:

limactl start --name=tiny-docker --cpus=4 --memory=4 --disk=20

Then I entered the VM:

limactl shell tiny-docker

or sometimes simply:

lima

Inside the VM, I installed the Linux packages my runtime needed:

sudo apt update
sudo apt install -y golang-go curl tar iproute2 iptables

Each dependency had a reason:

golang-go  -> build and test the runtime
curl       -> download rootfs archives
tar        -> extract rootfs archives
iproute2   -> work with Linux networking
iptables   -> configure NAT for isolated networking

This was already a useful learning point.

A container runtime is not just one binary.

It also depends on Linux system capabilities and tools, especially if you are implementing networking.


Preparing the Root Filesystem

A container needs a filesystem.

Docker normally handles this using images and layers.

My project was simpler. I used an Alpine minirootfs.

Inside the Lima VM:

mkdir -p rootfs/alpine

ARCH=$(uname -m)

curl -L -o alpine-rootfs.tar.gz \
  "https://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/${ARCH}/alpine-minirootfs-3.23.4-${ARCH}.tar.gz"

sudo tar -xzf alpine-rootfs.tar.gz -C rootfs/alpine

Then I could run:

sudo ./tiny-docker-go run --rootfs ./rootfs/alpine /bin/sh

At this point, the rootfs became the filesystem that the process sees inside the container-like environment.

This made the concept very concrete for me.

Before this project, I mostly thought about Docker images.

After this project, I started thinking more clearly about root filesystems.

A simplified version is:

Docker image:
  A packaged filesystem with metadata and layers.

Rootfs:
  The actual filesystem view used by the container process.

My tiny runtime does not implement Docker image layers, registries, manifests, or OCI image pulling.

It simply uses an extracted root filesystem.

That is enough for learning.


Architecture: macOS as a Wrapper, Linux as the Runtime

After experimenting, I ended up with this architecture:

If host is Linux:
  run the runtime directly.

If host is macOS:
  route the command through Lima.
  execute the Linux binary inside the Lima VM.

From the user's perspective, I wanted the command to still feel simple:

tiny-docker-go run --rootfs ./rootfs/alpine /bin/sh

But internally, on macOS, it becomes closer to:

limactl shell tiny-docker sudo ./bin/tiny-docker-go-linux-amd64 \
  run \
  --rootfs ./rootfs/alpine \
  /bin/sh

So the macOS binary acts more like a dispatcher.

The actual runtime work happens in Linux.

A simplified architecture:

macOS terminal
  |
  | tiny-docker-go run --rootfs ./rootfs/alpine /bin/sh
  v
Darwin service layer
  |
  | limactl shell tiny-docker sudo Linux binary ...
  v
Lima VM
  |
  v
Linux runtime binary
  |
  v
namespaces + cgroups + chroot + networking

This design helped me keep a clean boundary.

macOS handles the developer command.

Linux handles the container primitives.


Building the Linux Binary

One mistake I made was forgetting that the binary inside Lima must match the Linux VM architecture.

For example, if the Lima VM is x86_64, I can build:

mkdir -p bin

GOOS=linux GOARCH=amd64 \
  go build -o bin/tiny-docker-go-linux-amd64 ./cmd/tiny-docker-go

But if the Lima VM is aarch64, for example on Apple Silicon, I should build:

mkdir -p bin

GOOS=linux GOARCH=arm64 \
  go build -o bin/tiny-docker-go-linux-arm64 ./cmd/tiny-docker-go

To check the VM architecture:

limactl shell tiny-docker uname -m

Possible outputs:

x86_64   -> GOARCH=amd64
aarch64  -> GOARCH=arm64

If you build the wrong architecture, you may see:

exec format error

This error is simple but confusing when you first see it.

It usually means:

The binary architecture does not match the machine trying to execute it.

This was one of those small details that reminded me how important platform boundaries are.


Mistake: Assuming Host Paths Always Exist Inside Lima

Another mistake was around file sharing.

My project existed on macOS at something like:

/Users/amir/Desktop/tiny-docker

I expected that path to always work inside Lima.

Sometimes it did.

Sometimes the mount configuration was not what I expected.

So if I ran this inside the VM:

cd /Users/amir/Desktop/tiny-docker

and got:

No such file or directory

the problem was not my Go code.

It was not the runtime.

It was simply a shared folder issue.

The lesson was:

Always verify that the path exists inside the VM, not only on the host.

Useful checks:

limactl shell tiny-docker pwd
limactl shell tiny-docker ls -la /Users
limactl shell tiny-docker ls -la /Users/amir/Desktop

If the project is not mounted, the easiest workaround is to clone the repository inside the VM:

git clone <your-repo-url>
cd tiny-docker

The cleaner long-term solution is to configure Lima mounts properly.

But the important lesson is that a VM has its own filesystem view.

Never assume the host path exists inside the guest.


Mistake: Running the Wrong Binary

Another mistake was running the macOS binary when I actually needed the Linux binary.

This is easy to do when you have files like:

./tiny-docker-go
./bin/tiny-docker-go-linux-amd64
./bin/tiny-docker-go-linux-arm64

The macOS binary can be useful as a CLI wrapper.

But the Linux binary must perform the real runtime operations.

The separation became:

macOS binary:
  command parsing
  platform detection
  Lima dispatching

Linux binary:
  namespaces
  cgroups
  chroot
  mount setup
  networking
  process lifecycle

This made the code easier to reason about.

On macOS, I do not pretend to support Linux container primitives directly.

I route the work to the Linux VM.


Mistake: Not Checking Prerequisites Early

At first, failures happened too late.

For example, I could run a command and only later discover:

limactl is not installed
Lima instance does not exist
Lima instance is not running
Linux binary is missing
Rootfs is not accessible inside Lima

This created confusing errors.

So I started adding validation before running the actual command.

Good prerequisite checks include:

Is limactl installed?
Does the Lima instance exist?
Is the Lima instance running?
Does the Linux binary exist?
Is the Linux binary accessible inside Lima?
Is the rootfs path accessible inside Lima?

This improves developer experience a lot.

Instead of a low-level error, I want an error like:

Linux binary not found at "./bin/tiny-docker-go-linux-amd64";
build it first and share it with Lima.

or:

rootfs "./rootfs/alpine" is not accessible inside Lima;
ensure the workspace is shared with the VM.

This is not the most exciting part of a runtime project.

But it is an important engineering detail.

As a senior engineer, I have learned that good error messages are part of the product.

Even if the product is just a learning project.


Example: Running a Shell

After preparing the rootfs and building the runtime, I can run:

sudo ./tiny-docker-go run --rootfs ./rootfs/alpine /bin/sh

Inside the shell:

cat /etc/os-release

Example output:

NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.23.4

Check the hostname:

hostname

Run a process:

sleep 30

From another terminal:

sudo ./tiny-docker-go ps

Example output:

ID            STATUS   PID   CREATED              COMMAND
ab12cd34ef56  running  1234  2026-05-17 12:30:45  /bin/sh

This helped me understand process tracking better.

A container runtime does not only start processes.

It also needs to track them, store metadata, collect logs, stop them, and clean up after them.


Example: Running From macOS Through Lima

From macOS, I wanted a command like this:

./tiny-docker-go run --rootfs ./rootfs/alpine /bin/echo "hello from linux"

Internally, the command is routed through Lima:

limactl shell tiny-docker sudo ./bin/tiny-docker-go-linux-amd64 \
  run \
  --rootfs ./rootfs/alpine \
  /bin/echo "hello from linux"

Expected output:

hello from linux

This gave me a nice workflow.

I could stay in my macOS terminal, but still execute the real Linux runtime inside the VM.


Example: Testing a Memory Limit

If cgroup v2 is available and the runtime supports memory limits, I can run:

sudo ./tiny-docker-go run \
  --memory 128m \
  --rootfs ./rootfs/alpine \
  /bin/sh

Inside the container-like shell:

cat /sys/fs/cgroup/memory.max

Expected output:

134217728

That is 128 MiB in bytes.

This small test made cgroups much more real for me.

Before this project, a memory limit felt like a Docker CLI option:

docker run --memory 128m alpine

After implementing a small version, I started seeing it differently:

Docker exposes a nice option.
The Linux kernel enforces the limit through cgroups.

That is a very different level of understanding.


Example: Isolated Networking

For networking, the runtime can create a basic isolated mode using Linux networking primitives.

The rough idea is:

host bridge
  |
  veth pair
  |
container network namespace

Run:

sudo ./tiny-docker-go run \
  --net isolated \
  --rootfs ./rootfs/alpine \
  /bin/sh

Inside the shell:

ip addr
ip route

Depending on the implementation, I expect to see a container-side interface and a default route.

This was one of the most interesting parts for me.

Docker networking feels simple from the outside:

docker run nginx

But underneath, there is a lot of Linux networking:

network namespaces
veth pairs
bridges
routes
iptables
NAT

Building even a small version made me appreciate how much complexity Docker hides.


What I Learned About chroot

My runtime uses chroot as a simple way to change the visible root filesystem.

For learning, this is useful.

But chroot is not the same as a full production container filesystem model.

With chroot, the process sees a different root directory:

Before:
/

After:
./rootfs/alpine becomes /

But production runtimes usually involve more advanced concepts:

pivot_root
overlay filesystems
image layers
OCI runtime spec
capability dropping
seccomp
AppArmor
SELinux
user namespaces
read-only mounts
masked paths

So I try to be careful when describing the project.

It is better to say:

This is a tiny educational runtime.
It demonstrates some basic container building blocks.
It is not a production replacement for Docker, containerd, or runc.

That honesty matters.

Learning projects are valuable, but they should not be oversold.


My Updated Mental Model of Docker

Before this project, I mostly used Docker at the command level:

docker build
docker run
docker ps
docker logs
docker stop

After building a tiny runtime, I started seeing Docker in layers:

Docker CLI
Docker daemon
containerd
runc
Linux namespaces
Linux cgroups
root filesystem
network namespace
mount namespace
process lifecycle

A command like this:

docker run alpine echo hello

looks small.

But conceptually, it involves:

resolving the image
downloading layers
preparing the root filesystem
creating namespaces
configuring cgroups
setting up mounts
configuring networking
starting the process
attaching stdio
tracking metadata
collecting exit status
cleaning up resources

My tiny runtime only implements a small part of this.

But that small part was enough to make Docker feel less like magic.


Lima vs Docker Desktop: My Practical Conclusion

I do not see Lima and Docker Desktop as direct replacements for each other in every situation.

For normal application development, Docker Desktop is usually more convenient.

It gives me:

Docker CLI
Docker Compose
image management
container lifecycle management
volume support
networking
developer-friendly tooling

But for learning Linux container internals, Lima gave me a cleaner mental model.

Lima gave me:

a Linux VM
direct access to Linux tools
a clean environment for experiments
less abstraction around Docker itself

So my conclusion is:

Use Docker Desktop when your goal is to run and ship applications.
Use Lima when your goal is to understand or control the Linux environment.

For this project, Lima was the better learning tool.


Simple Experimental Benchmark

This benchmark is not scientific.

I only wanted to understand the rough overhead of routing commands from macOS through Lima compared with running directly inside the Lima VM.

The command I tested was intentionally small:

/bin/echo hello

Because the command is tiny, the overhead of the runtime and VM boundary becomes easier to notice.


Test 1: Running Directly Inside Lima

Inside the Lima VM:

time sudo ./tiny-docker-go run --rootfs ./rootfs/alpine /bin/echo hello

Example result:

hello

real    0m0.045s
user    0m0.008s
sys     0m0.020s

Test 2: Running From macOS Through Lima

From macOS:

time ./tiny-docker-go run --rootfs ./rootfs/alpine /bin/echo hello

Internally, this routes through something like:

limactl shell tiny-docker sudo ./bin/tiny-docker-go-linux-amd64 \
  run \
  --rootfs ./rootfs/alpine \
  /bin/echo hello

Example result:

hello

real    0m0.180s
user    0m0.020s
sys     0m0.030s

Benchmark Interpretation

The direct Linux execution was faster.

The macOS-to-Lima path had extra overhead because the command crossed the VM boundary through limactl shell.

In this rough experiment:

Direct inside Lima:       ~45 ms
macOS through Lima:       ~180 ms
Extra routing overhead:   ~135 ms

For very short commands like /bin/echo, the overhead is visible.

For long-running processes, the overhead matters much less.

For example, if I run a service for 10 minutes, an extra 100-200 ms at startup is not very important.

My practical conclusion:

For a nice macOS developer experience, routing through Lima is acceptable.
For tight benchmark loops, run directly inside the VM.
For production-grade runtimes, this approach is educational, not final.

Things I Would Improve Next

There are many things I would like to improve in this project.

Some of them are runtime-related:

use pivot_root instead of only chroot
improve cgroup v2 handling
support better cleanup
add more robust metadata storage
improve log streaming
support better TTY handling
add user namespace support
drop Linux capabilities
add seccomp profiles

Some of them are macOS/Lima-related:

detect Lima architecture automatically
choose the correct Linux binary automatically
improve Lima instance setup
validate shared paths more clearly
provide a bootstrap command for macOS users
make error messages more actionable

A better macOS setup command could eventually look like this:

tiny-docker-go setup lima

And it could handle:

checking limactl
creating the Lima instance
building the Linux binary
preparing the rootfs
validating mounts
testing a hello-world container

That would make the project much easier to try.


Final Thoughts

Lima helped me understand the boundary between macOS and Linux much better.

The biggest lesson was simple:

Go can make the binary portable, but it cannot make Linux kernel features exist on macOS.

For a container runtime, the kernel matters.

My final mental model is:

macOS is my workstation.
Lima gives me a Linux VM.
The Linux VM gives me namespaces and cgroups.
My Go runtime uses those Linux features.
The rootfs gives the process its filesystem.

This project also changed how I look at Docker.

Docker is not magic.

But Docker is impressive because it hides a lot of complexity behind a simple interface.

A command like this:

docker run alpine echo hello

is easy to type.

But behind it, there are many layers of runtime, filesystem, networking, isolation, and process management.
