El Housseine Jaafari

Fixing no space left on device for Docker when /run is full (Ubuntu)

If Docker or containerd suddenly refuse to start containers with errors like:

Error response from daemon: failed to create task for container:
failed to start shim: symlink ... /run/containerd/...: no space left on device: unknown

…but df -h shows plenty of free disk on /, you’re likely hitting a full /run (tmpfs) or inode exhaustion. This guide explains why it happens and how to fix it quickly—and permanently.
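
A quick way to see both filesystems at once is to pass both mount points to df:

df -h / /run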


TL;DR (Quick Unblock)

# Give /run more room (temporary until reboot)
sudo mount -o remount,size=512M /run   # or size=1G if you have RAM to spare

# Restart services to clear stale state
sudo systemctl restart systemd-udevd
sudo systemctl restart containerd docker

If it still fails:

sudo systemctl stop docker containerd
sudo rm -rf /run/containerd/io.containerd.runtime.v2.task/*
sudo systemctl start containerd docker

Make it persistent:

echo 'tmpfs /run tmpfs defaults,size=512M 0 0' | sudo tee -a /etc/fstab
sudo mount -o remount /run

Why this happens

  • /run is a RAM-backed tmpfs used by systemd, udev, Docker/containerd shims, etc.
  • On some systems, /run defaults to a small size (e.g., ~200 MB); the check below shows what yours is set to.
  • When /run runs out of space or inodes, containerd can’t create its shim files/symlinks under /run/containerd/..., resulting in “no space left on device” even though your disk has free space.
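
You can see how big /run actually is on your host, and how full it is, with findmnt:

findmnt -o TARGET,FSTYPE,SIZE,USED,USE% /run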

How to confirm the root cause

Check Docker’s data roots and the /run mount:

docker info 2>/dev/null | sed -n 's/^ Docker Root Dir: //p'
df -hT /var/lib/docker /var/lib/containerd /run
df -i  /var/lib/docker /var/lib/containerd /run
sudo du -xhd1 /run | sort -h

What to look for:

  • /run type is tmpfs and Use% is ~100%; or
  • /run inodes are 100% used (df -i shows IUse% 100%); and
  • Large directories under /run (commonly udev, containerd, or docker).

Example output (problem case):

$ df -h /run
tmpfs   197M    197M   8.0K  100%  /run

$ df -i /run
tmpfs  251822  251816      6  100%  /run

$ sudo du -xhd1 /run | sort -h
...
3.7M  /run/containerd
193M  /run/udev
197M  /run

Step-by-step fixes

1) Temporarily grow /run (immediate relief)

sudo mount -o remount,size=512M /run      # or size=1G on bigger hosts

Then restart the relevant services:

sudo systemctl restart systemd-udevd
sudo systemctl restart containerd docker

Re-check:

df -hT /run && df -i /run

2) Clear stale containerd runtime state (if needed)

Stop services, remove leftover shim dirs, start again:

sudo systemctl stop docker containerd
sudo rm -rf /run/containerd/io.containerd.runtime.v2.task/*
sudo systemctl start containerd docker

This is safe: those directories hold runtime artifacts that containerd recreates on start.
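
To double-check what containerd still has registered (before or after clearing), the ctr CLI that ships with containerd can list it; Docker keeps its containers under the moby namespace:

sudo ctr -n moby containers ls
sudo ctr -n moby tasks ls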

3) Make the larger /run persistent (survives reboot)

Append to /etc/fstab:

echo 'tmpfs /run tmpfs defaults,size=512M 0 0' | sudo tee -a /etc/fstab
sudo mount -o remount /run

Pick a size appropriate to your RAM (e.g., 512 MB–1 GB for servers running containers).
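
A broken /etc/fstab line can bite you at the next boot, so validate the file after editing:

sudo findmnt --verify   # parses /etc/fstab and reports problems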


Related issues & checks

A) Inode exhaustion (lots of tiny files)

If df -i /run shows 100% usage:

  • The remount with a larger size also increases available inodes on tmpfs (you can also raise them explicitly; see below).
  • Restarting systemd-udevd, containerd, and docker frees stale runtime entries.
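
If inodes specifically are the bottleneck, tmpfs also accepts an explicit nr_inodes option (suffixes k/m/g work), for example:

sudo mount -o remount,size=512M,nr_inodes=1m /run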

For Docker’s persistent storage (not /run), prune if inodes are tight:

docker system df
docker system prune -f
docker image prune -a -f
docker volume prune -f
docker builder prune -a -f

B) Docker root on a different filesystem that’s full

If /var/lib/docker or /var/lib/containerd are on a small/near-full partition, move Docker’s data:

# Stop the daemons so the data directory is quiescent
sudo systemctl stop docker containerd
# Copy everything, preserving hardlinks, ACLs, and extended attributes
sudo rsync -aHAXx --info=progress2 /var/lib/docker/ /mnt/docker/
# Point Docker's data-root at the new location
printf '{\n  "data-root": "/mnt/docker"\n}\n' | sudo tee /etc/docker/daemon.json
# Keep the old directory as a backup until the move is verified
sudo mv /var/lib/docker /var/lib/docker.bak
sudo mkdir /var/lib/docker
# Optional: bind mount so tools that hardcode /var/lib/docker still work
# (note: a plain bind mount does not persist across reboots; data-root does)
sudo mount --bind /mnt/docker /var/lib/docker
sudo systemctl start containerd docker
docker info | grep "Docker Root Dir"   # should now print /mnt/docker
# If all good, clean up:
sudo rm -rf /var/lib/docker.bak

Prevention tips

  • Right-size /run: Set size=512M to size=1G in /etc/fstab for hosts that run containers or have busy udev activity.
  • Prune regularly: Set up periodic pruning for dangling images/builders/volumes on CI/build-heavy hosts.
  • Monitor /run: Watch Use% and inode use with a simple cron or systemd timer (see the sketch after this list):
  df -h /run; df -i /run
  • Avoid runaway logs/artifacts in /run: Runtime data should be short-lived; if something keeps filling /run, investigate that service.
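
As a concrete sketch of the pruning and monitoring tips above (file name, schedule, and threshold are illustrative):

# /etc/cron.d/run-housekeeping
# Log a warning to syslog when /run usage passes 90%
*/10 * * * * root df -P /run | awk 'NR==2 && $5+0 > 90 {print "WARN: /run at " $5}' | logger -t run-check
# Nightly prune of dangling Docker images/containers/networks
15 3 * * * root docker system prune -f >/dev/null 2>&1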

FAQ

Q: I have free disk space. Why “no space left on device”?
Because the error refers to the /run tmpfs, not your disk. /run can be full (or out of inodes) while / has gigabytes free.

Q: Is it safe to delete things under /run/containerd/...?
Yes—only after stopping containerd and Docker. Those are runtime artifacts and will be recreated.

Q: Can I just reboot?
Often yes, but the problem will likely return unless you increase /run or fix the service causing the pressure.


That’s it! With a right-sized /run and a quick service restart, Docker will stop tripping over “no space left on device” despite plenty of disk space.
