I spent an embarrassing amount of time staring at my terminal, watching Spark containers start and immediately die. Three different attempts, three different failure modes, all in the same afternoon. If you're setting up Spark inside Docker and your container just... vanishes, this post is for you.
The Setup
I'm building a CMS Medicare streaming pipeline — pulling hospital charge data from the CMS public API, pushing it through Kafka, processing it with Spark Structured Streaming, and landing the results in Snowflake. The whole stack runs in Docker Compose. Kafka and ZooKeeper came up without a hitch. Spark did not.
Here's what my docker-compose.yml looked like at the start:
```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  spark:
    image: bitnami/spark:3.5
    depends_on: [kafka]
    environment:
      SPARK_MODE: master
  spark-worker:
    image: bitnami/spark:3.5
    depends_on: [spark]
    environment:
      SPARK_MODE: worker
      SPARK_MASTER_URL: spark://spark:7077
```
Looked reasonable enough. It wasn't.
Attempt 1 — The Image That No Longer Exists
```
Error response from daemon: failed to resolve reference
"docker.io/bitnami/spark:3.5": not found
```
bitnami/spark:3.5 had been pulled from Docker Hub. I tried 3.5.3. Gone. Tried bitnami/spark:3. Also gone. The entire Bitnami Spark image line had been removed with no notice.
This is the first thing worth remembering before we even get to the real problem: third-party images on Docker Hub can disappear at any time. There is no deprecation warning, no migration guide. For anything that needs to be reproducible, you either pin to a verified digest or mirror the image in a private registry.
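As a sketch of what pinning looks like in a Compose file (the digest below is a placeholder, not a real one — substitute the digest your registry reports):

```yaml
# Pin by content digest instead of a mutable tag. The digest is a
# placeholder; get the real one with, e.g.:
#   docker inspect --format '{{index .RepoDigests 0}}' apache/spark:3.5.1-python3
spark:
  image: apache/spark@sha256:<digest-from-your-registry>
```

A digest reference is immutable: the publisher can delete the tag, but as long as the blob exists in a registry you control, the pull is reproducible.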
I switched to the Apache official image: apache/spark:3.5.1-python3. That one pulled fine.
Attempt 2 — Wrong Environment Variables
I updated the image name but kept the same environment variable:
```yaml
spark:
  image: apache/spark:3.5.1-python3
  environment:
    SPARK_MODE: master
```
docker-compose up -d reported all containers as "Started." But docker ps only showed two running — Kafka and ZooKeeper. The Spark containers had already exited.
The problem: SPARK_MODE is a Bitnami-specific environment variable. The Apache official image has never heard of it.
Bitnami's image ships with a custom entrypoint script that reads SPARK_MODE and decides whether to launch a master or worker. It's a convenience layer Bitnami built on top of vanilla Spark. The Apache official image has none of this. Its default entrypoint (/opt/entrypoint.sh) simply executes whatever command you pass in. If you don't pass a meaningful command, it finishes and exits.
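You can reproduce the pass-through behavior with a toy stand-in for that entrypoint (a sketch, not the actual /opt/entrypoint.sh, which also does Kubernetes-specific setup):

```shell
# Toy pass-through entrypoint: exec whatever command it is handed.
# With a short-lived command, the process finishes immediately -- in a
# container, that would be PID 1 exiting and the container dying.
cat > /tmp/passthrough.sh <<'EOF'
#!/bin/bash
exec "$@"
EOF
chmod +x /tmp/passthrough.sh
out=$(/tmp/passthrough.sh echo "command ran, then the process is gone")
echo "$out"
```

Hand it `echo` and the process lives for a millisecond; hand it nothing meaningful and you get exactly the "Started, then vanished" behavior above.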
The lesson: switching between images from different publishers is not just swapping the image: field. Different publishers package the same software with different entrypoints, different environment variables, and different directory layouts. Before you can use an image correctly, you need to understand how that specific image expects to be started.
Attempt 3 — The Real Trap: start-master.sh
Spark comes bundled with start-master.sh. That seems like the right tool:
```yaml
spark:
  image: apache/spark:3.5.1-python3
  command: /opt/spark/sbin/start-master.sh
```
Same result. "Started." No Spark container.
The container was starting. Spark Master was launching. And then everything was shutting down within a fraction of a second. To understand why, you need to know one foundational Docker rule.
The Core Rule: Docker Containers Live and Die with PID 1
Every container has a main process — specified by CMD, ENTRYPOINT, or command in your Compose file. Inside the container, this process gets PID 1. When PID 1 exits, the container exits. No exceptions.
PID 1 is running → container is running
PID 1 exits → container exits immediately
Now look at what start-master.sh actually does internally (simplified):
```bash
#!/bin/bash
nohup java -cp "$SPARK_CLASSPATH" org.apache.spark.deploy.master.Master &
echo "Master started."
exit 0
```
See that &? It puts the Spark Master process into the background. The shell script (PID 1) spawns a child Java process, prints a message, and calls exit 0. The moment it does that, Docker kills the container and everything inside it — including the Spark Master that just started.
Here's the exact timeline:
```
t=0.0s  Container starts; PID 1 = start-master.sh (bash)
t=0.1s  Bash forks a Java process (Spark Master) into the background
t=0.2s  Bash script reaches exit 0 → PID 1 terminates
t=0.2s  Docker detects PID 1 exit → tears down the container
t=0.2s  The background Java process is killed along with it
```
Spark Master was alive for about 0.2 seconds.
start-master.sh was written for bare-metal servers and VMs, where you start a background daemon and the OS keeps it alive after the startup script exits. Docker doesn't work that way. Docker is watching PID 1 and only PID 1.
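You can watch the bare-metal behavior directly with plain shell, no Docker involved: a backgrounded child outlives the shell that started it, because the OS re-parents it rather than tearing anything down (sketch; the temp file path is just for the demo):

```shell
# A parent shell backgrounds a child, records its PID, and exits.
bash -c 'sleep 3 & echo $! > /tmp/orphan_pid_demo'
child=$(cat /tmp/orphan_pid_demo)
# The parent bash is gone, but the child keeps running (re-parented to init).
kill -0 "$child" && survived=yes || survived=no
echo "child survived parent's exit: $survived"
kill "$child" 2>/dev/null   # clean up the demo process
```

This is exactly the assumption start-master.sh bakes in, and exactly the assumption Docker breaks: in a container, the teardown triggered by PID 1 exiting kills the re-parented child too.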
Why Kafka and ZooKeeper Didn't Have This Problem
Confluent's images use exec in their entrypoints:
```bash
exec kafka-server-start /etc/kafka/server.properties
```
In bash, exec replaces the current process with the specified command. The shell doesn't fork a child — it becomes Kafka. Kafka inherits PID 1, runs in the foreground, and blocks indefinitely.
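The replacement is observable from the PID: after `exec`, the new program runs under the same PID the shell had (a quick sketch):

```shell
# Print the shell's PID, then exec a new bash that prints its own PID.
# exec replaces the process image in place, so both lines show the same
# number -- nothing was forked.
pids=$(bash -c 'echo $$; exec bash -c "echo \$\$"')
before=$(echo "$pids" | head -n1)
after=$(echo "$pids" | tail -n1)
echo "before exec: $before, after exec: $after"
```

Applied to a container entrypoint, this is how the service itself ends up as PID 1.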
| Image | What PID 1 Does | Result |
|---|---|---|
| `cp-kafka` | `exec kafka-server-start` (foreground, blocking) | ✅ Container stays alive |
| `cp-zookeeper` | `exec zookeeper-server-start` (foreground, blocking) | ✅ Container stays alive |
| `apache/spark` + `start-master.sh` | Forks Java to background with `&`, script exits | ❌ Container exits immediately |
The entire difference: & versus exec.
Four Ways to Fix It
Fix A: tail -f /dev/null
```yaml
spark:
  image: apache/spark:3.5.1-python3
  command: ["tail", "-f", "/dev/null"]
  volumes:
    - ./spark-apps:/opt/spark-apps
```
tail -f /dev/null watches a file that never gets new content. PID 1 blocks forever. Submit jobs via docker exec:
```bash
docker exec my-spark-container \
  /opt/spark/bin/spark-submit \
  /opt/spark-apps/my_job.py
```
Best for: local development, one-off job submission.
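If you want to convince yourself that `tail -f /dev/null` really blocks forever, note that `timeout` has to kill it — exit status 124 means the time limit was hit, not that tail finished on its own:

```shell
# tail -f /dev/null produces no output and never exits by itself;
# timeout kills it after 1 second and reports status 124.
timeout 1 tail -f /dev/null || status=$?
echo "exit status: $status"   # 124 = killed by timeout, not a normal exit
```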
Fix B: Run the Spark Master Class Directly
```yaml
command: >
  bash -c "
  /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master
  --host spark --port 7077 --webui-port 8080
  "
```
Skips the wrapper script entirely. The Master process runs in the foreground as PID 1.
Best for: when you actually need a running Master/Worker cluster.
Fix C: Custom Entrypoint Script
```bash
#!/bin/bash
# custom-entrypoint.sh
/opt/spark/sbin/start-master.sh   # starts daemon in background
tail -f /opt/spark/logs/*         # blocks + streams logs to stdout
```
```yaml
volumes:
  - ./custom-entrypoint.sh:/opt/custom-entrypoint.sh
command: bash /opt/custom-entrypoint.sh
```
Master auto-starts, container stays alive, and you get log output via docker logs.
Best for: when you want Spark to auto-start and want logs accessible.
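A variant of the same pattern replaces the `tail` with `wait`, tying the container's lifetime to the daemon itself rather than to a log file (a sketch; `sleep` stands in for the real Spark daemon):

```shell
# Background the "daemon", then make the script block on it with wait.
# If this script were PID 1, the container would live exactly as long
# as the daemon does, and would exit when the daemon exits.
sleep 1 &                   # stand-in for the backgrounded daemon
daemon_pid=$!
wait "$daemon_pid"
result="daemon exited with status $?"
echo "$result"
```

The tradeoff: `wait` exits when the daemon dies (often what you want, so the orchestrator can restart the container), while `tail -f` on the logs keeps the container "up" even after the daemon has crashed.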
Fix D: Use a Docker-Friendly Image
jupyter/pyspark-notebook handles all of this correctly out of the box. Their entrypoints are built around exec from the start.
Best for: quick prototyping. Tradeoff: you depend on a third party to keep the image available.
Summary
- Docker containers exit when PID 1 exits. Always.
- `start-master.sh` backgrounds Spark with `&` and exits — which kills the container.
- Confluent's images use `exec`, making the service itself PID 1 and keeping the container alive.
- The fix: ensure PID 1 is a foreground process that never returns.
Three patterns to spot in any startup script:
- `command &` — background execution, PID 1 exits shortly after → container dies
- `exec command` — replaces PID 1, container lives as long as the process does → container survives
- `nohup command &` — classic daemon pattern, same problem as `&` in Docker → container dies
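A crude way to audit a startup script for those patterns before you use it as a container command (a rough grep sketch, not a parser — the sample script below is hypothetical):

```shell
# Flag lines that background a process (trailing & or nohup); any hit
# means the script probably won't keep PID 1 alive on its own.
cat > /tmp/suspect_start.sh <<'EOF'
#!/bin/bash
nohup java -cp "$CP" some.Main &
exit 0
EOF
hits=$(grep -cE '(&[[:space:]]*$|nohup)' /tmp/suspect_start.sh)
echo "suspicious lines: $hits"
```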
Docker containers are not VMs. On a VM, daemonizing a process and exiting the startup script is completely normal. In Docker, the startup script is the container. Once you internalize that, most "why does my container keep exiting" questions answer themselves.