Amir Reza Dalir

No Docker Here: Welcome to Singularity 🔬

I just joined a new project - one that runs on a big HPC cluster. I opened the project README.md and saw something like this:

singularity exec docker://python:3.11 python train_model.py --data /shared/datasets/train

I had no idea what singularity was. 😅 So I did what felt natural - typed the Docker command instead:

docker run python:3.11 python train_model.py --data /shared/datasets/train

And the terminal replied with:

bash: docker: command not found

I messaged my project manager. His reply was short:

"We don't use Docker here. We use Singularity."

I stared at the message, thinking: "I have been using Docker for years. I know docker build, docker run, docker push like the back of my hand. And now none of that works here?"

That's how it all started.


🤔 "Why Not Docker?"

I loved Docker. Years of packaging apps in containers, deploying to production, running ML training pipelines. It was part of how I worked every day.

So I asked my project manager to install Docker on the cluster. His reply came quickly:

"We can't. There are legal reasons we can't use Docker here. Company policy. But also โ€” Singularity is a better fit for what we do."

He didn't go into all the legal details, but explained the technical side in a Google Meet session. Docker needs a background daemon running all the time - a service that sits there waiting for your commands. On a shared HPC cluster where hundreds of researchers submit jobs, that adds complexity and overhead. Singularity doesn't need any daemon - you just run it directly. No background process. And you are the same user inside the container as you are outside. No switching to root, no permission confusion.

That last part sounded too good to be true. But it was true.

| What? | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Main idea | Isolation (microservices) | Integration (HPC) |
| Background daemon? | Yes (always running) | No (daemonless) |
| Who are you inside? | root by default | Same user as host |
| Image format | Multi-layer, managed by daemon | Single .sif file |
| File access | Must mount volumes | Auto-mounts $HOME |
| Network | Isolated by default | Host network |

💡 Docker wants to isolate your app from the system. Singularity wants to integrate your app with the system.

That made sense. I started learning Singularity and writing down every Docker command I knew next to its Singularity equivalent. This article is that cheat sheet.


📦 Image Management

First thing I needed was a Python image. In Docker, I would type docker pull python:3.11. In Singularity, the command is almost the same - with one small twist:

singularity pull docker://python:3.11

See that docker:// prefix? All my Docker Hub images still work. Every image I had ever used - GPU-enabled, data science, notebook servers - still available.

But instead of layers hidden in Docker's storage, I got a single file:

ls -lh python_3.11.sif
# -rwxr-xr-x 1 dalirnet dalirnet 385M Feb 25 09:15 python_3.11.sif

That .sif file is the image. A real file in my directory. I can cp it, scp it to another node, or rsync it anywhere. Try that with Docker - you need a registry, accounts, push, pull... With Singularity, you just copy a file.

Want to delete it? rm python_3.11.sif. No docker system prune, no dangling images.

| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Pull image | docker pull ubuntu:22.04 | singularity pull docker://ubuntu:22.04 |
| Build image | docker build -t myimage . | singularity build myimage.sif myimage.def |
| List images | docker images | ls *.sif (they're just files!) |
| Inspect image | docker inspect myimage | singularity inspect myimage.sif |

▶️ Running Containers

In Docker, I would do docker run python:3.11 python script.py. In Singularity, the keyword is exec instead of run (run exists too - more on that in a second):

singularity exec python_3.11.sif python --version
| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Run a command | docker run myimage cmd | singularity exec myimage.sif cmd |
| Interactive shell | docker run -it myimage bash | singularity shell myimage.sif |
| Default command | docker run myimage | singularity run myimage.sif |
| Background | docker run -d myimage | singularity instance start myimage.sif name |

📂 File System & Volumes

This is where Singularity first surprised me. I typed singularity shell python_3.11.sif, then ls - and saw all my files. My notebooks. My training scripts. My config files. Everything from my home folder, right there.

In Docker, you see an empty filesystem unless you mount your folder with -v. In Singularity, your home directory, current working directory, /tmp, and system paths like /proc and /sys are automatically available ✅.

So my old Docker habit:

docker run -v $(pwd):/workspace -w /workspace python:3.11 python analysis.py

Becomes just:

singularity exec docker://python:3.11 python analysis.py

No -v. No -w. Your current directory is already there.

When you need folders outside your home directory, use --bind:

singularity exec \
    --bind /shared/datasets:/data \
    --bind /scratch/$USER:/scratch \
    myimage.sif python train.py --data /data/train --output /scratch/checkpoints

You can also make binds read-only: --bind /shared/datasets:/data:ro.

| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Bind mount | -v /host:/container | --bind /host:/container |
| Current dir | -v $(pwd):/app | Already there |
| Read-only | -v /host:/container:ro | --bind /host:/container:ro |
| Multiple | -v /a:/a -v /b:/b | --bind /a:/a,/b:/b |
| Working dir | -w /app | --pwd /app |

⚙️ Environment Variables

This one caught me off guard. In Docker, the container starts with a clean environment. If you need an API key inside, you pass it explicitly:

docker run -e API_KEY=abc123 myimage python train.py

Singularity does the opposite - it inherits your entire host environment by default ✅. Your $PATH, your custom variables - all available inside the container.

At first I thought this was great. Then I hit a bug where my container's Python was fighting with my host's Python paths because environment variables were leaking in. That's when I learned about --cleanenv:

# Clean environment - recommended for reproducible experiments
singularity exec --cleanenv myimage.sif python train.py

# Clean env + only the variables you need
singularity exec --cleanenv --env API_KEY=abc123 myimage.sif python train.py

# Or use an env file
singularity exec --cleanenv --env-file .env myimage.sif python train.py

My advice: always use --cleanenv for training runs. The inherited environment is handy for quick interactive work, but for anything reproducible, you want a clean slate.

| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Set variable | -e VAR=value | --env VAR=value |
| Inherit host env | No | Default |
| Env file | --env-file .env | --env-file .env |
| Clean env | Default | --cleanenv |
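To catch a leaking host environment early, I now put a small guard at the top of my training scripts. This is my own addition, not a Singularity feature - it just checks for host variables that commonly cause trouble (the variable list and function name are my own choices):

```python
import os
import sys

# Host variables that commonly interfere with a container's Python.
SUSPECT_VARS = ("PYTHONPATH", "PYTHONHOME", "VIRTUAL_ENV")

def env_leak_warnings() -> list[str]:
    # Return one warning per suspect variable that made it into this process.
    return [
        f"{name} is set ({os.environ[name]}); did you forget --cleanenv?"
        for name in SUSPECT_VARS
        if name in os.environ
    ]

if __name__ == "__main__":
    print(f"interpreter: {sys.executable}")
    for warning in env_leak_warnings():
        print(f"WARNING: {warning}", file=sys.stderr)
```

If this prints warnings inside the container, you forgot --cleanenv and the host environment is leaking in.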

🌐 Networking

In Docker, every container gets its own isolated network. Want to run a notebook server? You need port mapping:

docker run -p 8888:8888 mynotebook-image

In Singularity, there is no network isolation. The container uses the host network directly. Start a service on port 8888 inside the container, and it is on port 8888 on your machine. No mapping needed ✅:

singularity exec myimage.sif python -m notebook --port 8888

Running notebook servers, dashboards, monitoring tools - no more figuring out port mappings. Just start the service and go to localhost:port.

The downside? Two users on the same node starting something on port 8888 will conflict. But our cluster setup handles that by assigning different ports.

Docker ๐Ÿณ Singularity ๐Ÿ”ฌ
Default Isolated bridge Host network
Port mapping -p 8888:8888 Not needed
Custom network docker network create Not available
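Because the host network is shared, I stopped hardcoding ports for interactive services. A small sketch (my own helper, not part of Singularity) that asks the OS for a free port before launching anything:

```python
import socket

def find_free_port() -> int:
    # Bind to port 0 so the OS assigns an unused ephemeral port,
    # then release the socket and reuse that number for the service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    port = find_free_port()
    print(f"singularity exec myimage.sif python -m notebook --port {port}")
```

There is a small race window between releasing the port and starting the service, but in practice it beats guessing and colliding with a neighbor on the same node.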

🔄 Running Services (Instances)

Sometimes you need something running in the background - a notebook server, a database, a monitoring dashboard. In Docker, you use -d to detach. Singularity has instances:

| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Start | docker run -d --name nb myimage | singularity instance start myimage.sif nb |
| List | docker ps | singularity instance list |
| Stop | docker stop nb | singularity instance stop nb |
| Exec | docker exec nb ls | singularity exec instance://nb ls |
| Logs | docker logs nb | Check instance-specific logs |

One important difference: in Docker, the daemon keeps your containers alive even if you log out. In Singularity, instances are tied to your session. If you disconnect from the cluster, they stop. For long-running services, you need a job scheduler - which is usually how HPC clusters work anyway.


📄 Definition Files

Every Docker user knows the Dockerfile. Singularity has its own version called a definition file (.def). Different structure, same idea - a recipe for building your image.

A Dockerfile:

FROM python:3.11-slim
RUN pip install numpy pandas scikit-learn matplotlib
COPY analysis.py /app/
WORKDIR /app
CMD ["python", "analysis.py"]

The same thing as a Singularity definition file:

Bootstrap: docker
From: python:3.11-slim

%post
    pip install numpy pandas scikit-learn matplotlib

%files
    analysis.py /app/

%environment
    export LC_ALL=C

%runscript
    cd /app
    exec python analysis.py
| Dockerfile | Singularity | Purpose |
| --- | --- | --- |
| FROM | Bootstrap: docker + From: | Base image |
| RUN | %post | Build commands |
| COPY | %files | Copy files in |
| ENV | %environment | Environment vars |
| CMD | %runscript | Default command |
| ENTRYPOINT | %startscript | Instance command |
| LABEL | %labels | Metadata |
| WORKDIR | Set in %runscript | Working dir |

Most of the time, you don't need a .def file at all. If you already have a Docker image, convert it directly:

singularity build myenv.sif docker://myregistry/gpu-image:latest

I only started writing .def files when I needed a custom environment that didn't exist on Docker Hub. For everything else, docker:// was enough.

🧪 Sandbox Mode

When I need to experiment with new packages before committing to a build, I create a writable sandbox - a draft environment I can mess around in, then freeze into a clean image:

singularity build --sandbox myenv/ myenv.def   # create writable folder
singularity shell --writable myenv/             # shell in and install stuff
# pip install some-new-package
# python -c "import some_new_package"           # test it
sudo singularity build myenv.sif myenv/         # freeze when happy

In Docker, you would do docker run -it myimage bash then docker commit, but the sandbox approach feels more intentional.


🎮 GPU Access

This is where Singularity really shines.

In Docker, you need --gpus all:

docker run --gpus all gpu-image:latest python train.py

In Singularity, you add --nv:

singularity exec --nv gpu-image.sif python train.py

That's it. No nvidia-docker, no container runtime config, no Docker daemon GPU passthrough setup. Just --nv and your NVIDIA drivers are available. For AMD GPUs, it's --rocm.

Combined with a job scheduler, a typical training job looks like this:

singularity exec --nv \
    --bind /shared/datasets:/data \
    --bind /scratch/$USER:/scratch \
    gpu-image.sif \
    python train.py \
        --data /data/train \
        --output /scratch/checkpoints \
        --epochs 100 \
        --batch-size 512 \
        --gpus 4

Submit the job and check the results later.
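On our cluster the scheduler is SLURM, so "submit the job" means wrapping the command above in a batch script. A sketch of what that might look like - the partition name, module name, and image path are assumptions for illustration, so check your own cluster's docs:

```shell
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --gres=gpu:4             # request 4 GPUs, matching --gpus 4 below
#SBATCH --time=24:00:00
#SBATCH --output=train_%j.log

# Load Singularity if your cluster provides it as a module (site-specific).
module load singularity

singularity exec --nv --cleanenv \
    --bind /shared/datasets:/data \
    --bind /scratch/$USER:/scratch \
    gpu-image.sif \
    python train.py --data /data/train --output /scratch/checkpoints \
        --epochs 100 --batch-size 512 --gpus 4
```

Submit it with sbatch and read the log file when it finishes.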

| GPU | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| NVIDIA | --gpus all | --nv |
| Specific GPU | --gpus '"device=0"' | CUDA_VISIBLE_DEVICES=0 |
| AMD | --device /dev/kfd --device /dev/dri | --rocm |

📤 Registry & Sharing

In Docker, sharing an image means pushing to a registry. Both sides need accounts and permissions.

In Singularity, you can push to the Singularity Library:

singularity remote login
singularity push myimage.sif library://myname/default/myimage:v1.0

But what I actually do: just copy the file.

cp myimage.sif /shared/containers/team-env.sif

Anyone on the cluster can run singularity exec /shared/containers/team-env.sif python train.py and get the exact same environment. Same packages, same versions, same GPU libraries. No registry, no login. Just a file on a shared filesystem.

| Task | Docker 🐳 | Singularity 🔬 |
| --- | --- | --- |
| Login | docker login | singularity remote login |
| Push | docker push user/image:tag | singularity push image.sif library://user/image:tag |
| Share | Push to registry | Copy the .sif file |
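Since sharing is literally file copying, I like to verify that a multi-gigabyte .sif survived the trip. A small checksum helper - my own snippet, not a Singularity feature - that streams the file so it never has to fit in memory:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Read the file in 1 MiB chunks and fold each into the running hash,
    # so even multi-GB images use constant memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Run it on both copies; matching digests mean the transfer was clean. (sha256sum from coreutils does the same job if you prefer the shell.)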

🎼 Multi-Container Orchestration

Singularity has no built-in docker-compose alternative ⚠️. If you come from a world of docker-compose up with web servers, databases, and caches all wired together, this will feel like a step back.

But on HPC, you usually don't need it. Most workloads are single-container jobs. Your training script runs inside one environment, reads data from shared storage, and writes checkpoints to scratch space.

When you do need multiple services, you have a few options:

📜 Option 1: A simple shell script. Start each service as a Singularity instance, connect them through localhost (since they share the host network):

#!/bin/bash
singularity instance start postgres.sif db
sleep 5   # crude readiness wait; ideally poll until the DB accepts connections
singularity instance start --env DATABASE_URL=postgresql://localhost/mydb tracker.sif tracker
singularity instance list

🔧 Option 2: singularity-compose - a community tool that reads YAML files similar to docker-compose.yml. It works for simple setups, but it is not actively maintained.

| Feature | Docker Compose | Singularity Compose |
| --- | --- | --- |
| Network isolation | ✅ Full | ❌ Host only |
| Service discovery | ✅ DNS | ⚠️ Limited |
| Health checks | ✅ Built-in | ❌ Manual |
| depends_on | ✅ Full | ⚠️ Limited |
| Secrets | ✅ Built-in | ❌ Env vars |
| Production ready | ✅ Yes | ⚠️ Not actively maintained |

๐Ÿ” Troubleshooting

Here are the problems I hit and how I solved them:

๐Ÿ Host Python leaking in โ€” my $PYTHONPATH and other environment variables were being inherited by the container, causing import errors. Fix: always use --cleanenv for training runs.

โœ๏ธ "Read-only file system" errors โ€” Singularity images are read-only by default. If your script tries to write to /opt or /usr, it will fail. Fix: write to $HOME, use --bind, or use sandbox mode.

๐Ÿ”Œ Port conflicts โ€” two users starting a service on the same port will conflict since Singularity shares the host network. Fix: always pick a random port, or let your job scheduler assign one.

๐Ÿ’พ Disk space โ€” .sif files can be large (multi-GB for GPU images) and there is no layer sharing. Fix: keep shared images in /shared/containers/ instead of each person having their own copy.

| Problem | Docker fix 🐳 | Singularity fix 🔬 |
| --- | --- | --- |
| Permission denied | --user $(id -u):$(id -g) | Shouldn't happen (same user) |
| Can't write to a path | Mount a volume | Use $HOME, --bind, or sandbox |
| Port in use | Change port mapping | Pick a different port |
| Out of space | docker system prune | rm *.sif or use shared images |
| Wrong Python version | Check base image | --cleanenv to stop host env leaking in |
| Package not found | Install in Dockerfile | Install in %post or use sandbox |

💡 What I Learned

Where Singularity wins

  • No daemon - just run it
  • Single-file images - cp for instant reproducibility
  • Same user inside and outside - no permission headaches
  • Simple GPU access - just --nv
  • Works on shared HPC clusters without special privileges

Watch out for

  • Building images needs root (--fakeroot or --remote as workaround)
  • No network isolation
  • No built-in compose/orchestration
  • No layer caching - full rebuild every time
  • Host environment can leak in - always use --cleanenv

🎬 The End

That command not found on my first day scared me. I thought none of my Docker experience would transfer.

But it did 😊. Every Docker image still works - just add docker:// in front. Every concept - images, containers, mounts, environment variables - still applies. Same mental model, different tool.

Singularity taught me something unexpected: isolation is not always the goal. Docker keeps containers separate from the host. But on a shared cluster, you actually want the container to feel like part of the system. You want your files there. You want the GPU drivers to just work. You want to submit a job and not worry about daemon sockets and port mappings.

My workflow now: Docker on my MacBook for local testing. Singularity on the cluster for real training. Same images, same Dockerfiles, different last mile.

They are not competitors. They answer different questions:

  • ๐Ÿณ Docker: "How do I isolate this app?"
  • ๐Ÿ”ฌ Singularity: "How do I bring this app into the researcher's environment?"

Sometimes the second question is the right one.


If you found this useful, follow me here on dev.to and check out my GitHub.
