If you know how to run Docker containers but still feel unclear about what actually controls container execution, startup behavior, image layering, and runtime customization, this is where everything starts making sense.
A Docker image is not just a package of code.
It is a complete execution blueprint.
Once you understand how Docker interprets that blueprint, you stop copying Dockerfiles from the internet and start designing them intentionally.
In this article, we’ll break down:
- Why containers stop immediately
- CMD vs ENTRYPOINT with real behavior
- How Docker layers work internally
- Why environment variables matter
- How to build reusable images
- How to publish them globally through Docker Hub
Why Containers Exit Immediately
A container runs only as long as its main process is alive.
Unlike virtual machines, containers are not designed to stay alive by default. They exist only to execute a specific process.
Example:
```shell
docker run ubuntu
docker ps -a
```
You may notice the status:

```
Exited (0)
```

Why? Because the Ubuntu image's default command is `bash`, and without an attached terminal (no `-it`), `bash` exits immediately.
That means:
No running process = container stops
This is one of the most important Docker concepts beginners miss.
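A quick way to see the rule in action (assuming a local Docker daemon; image names as in the article):

```shell
# Exits immediately: bash has no terminal attached, so it quits
docker run ubuntu

# Stays alive: -it attaches an interactive terminal to the shell
docker run -it ubuntu

# Stays alive in the background: the main process never finishes
docker run -d ubuntu sleep infinity
```

As long as `sleep infinity` is running, the container stays up; once the main process dies, the container stops.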
CMD: Defining the Default Process
CMD defines what Docker should run when the container starts.
Example Dockerfile:

```dockerfile
FROM ubuntu
CMD ["sleep", "5"]
```
Build:

```shell
docker build -t ubuntu-sleeper .
```

Run:

```shell
docker run ubuntu-sleeper
```

Docker executes:

```shell
sleep 5
```
After 5 seconds, the container exits.
CMD Can Be Overridden at Runtime
This is where CMD becomes flexible.
You can replace it when starting the container:

```shell
docker run ubuntu-sleeper sleep 10
```

Now Docker ignores the original CMD and uses:

```shell
sleep 10
```
This makes CMD useful for defaults, but not for enforcing execution.
ENTRYPOINT: Locking the Executable
ENTRYPOINT fixes the executable itself.
Example:

```dockerfile
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]
```
Now, after rebuilding the image:

```shell
docker run ubuntu-sleeper 10
```

Docker executes:

```shell
sleep 10
```
Here:
- ENTRYPOINT = fixed executable
- CMD = default argument
This combination gives both control and flexibility.
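Even a "locked" ENTRYPOINT can be swapped at runtime when you really need to, using the standard `--entrypoint` flag. A minimal sketch (the `echo` replacement is just an illustration):

```shell
# Replace the baked-in "sleep" entrypoint for this one run only
docker run --entrypoint echo ubuntu-sleeper overridden
```

Here the argument after the image name still becomes CMD, so the container runs `echo overridden` instead of `sleep`.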
CMD vs ENTRYPOINT: Real Rule
Think of it like this:
| Instruction | Controls |
|---|---|
| CMD | Default command |
| ENTRYPOINT | Fixed executable |
Best practice:
Use ENTRYPOINT when your container should always run one application.
Use CMD when arguments may change.
Turning Manual Deployment Into a Dockerfile
Without Docker, application deployment often looks like this:

```shell
apt-get update
apt-get install -y python3 python3-pip
pip3 install flask
python3 app.py
```
Docker converts that into a repeatable blueprint:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install flask
COPY app.py /opt/app.py
ENTRYPOINT ["python3", "/opt/app.py"]
```
Now deployment becomes:

```shell
docker build -t flask-app .
docker run flask-app
```
Same result.
Zero manual setup.
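The article never shows `app.py` itself, so here is a minimal sketch of what it might contain — the route and response text are assumptions, not part of the original:

```python
# app.py — a minimal sketch of the Flask app the Dockerfile copies in.
# The route and response text are assumptions; the article never shows app.py.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from inside a container!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable through Docker's port mapping
    app.run(host="0.0.0.0", port=5000)
```

Binding to `0.0.0.0` rather than `127.0.0.1` matters inside a container: otherwise the app only listens on the container's loopback interface and port mapping cannot reach it.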
Docker Layers: Why Build Speed Matters
Every Docker instruction creates a layer.
Example:

```dockerfile
FROM ubuntu
RUN apt-get update
RUN pip install flask
COPY . .
```
Each layer is cached separately.

That means if only your source code changes, Docker rebuilds just the COPY layer (and anything after it); the earlier layers come straight from cache.
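This is why instruction order matters: put the things that change least first. A hedged sketch of a cache-friendly layout (the file names are assumptions):

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y python3 python3-pip

# Dependencies change rarely: copy only the requirements file first
COPY requirements.txt /opt/requirements.txt
RUN pip3 install -r /opt/requirements.txt

# Source code changes often: keep it last so edits invalidate only this layer
COPY . /opt/app
```

With this ordering, editing source code skips the slow dependency-install layers entirely on rebuild.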
Optimize Layers for Smaller Images
Instead of:

```dockerfile
RUN apt-get update
RUN apt-get install -y python3
```

Use:

```dockerfile
RUN apt-get update && apt-get install -y python3
```
Why? Fewer layers mean:

- Smaller images
- Faster builds
- Better cache efficiency

Combining `update` and `install` in one RUN also prevents a stale cached `apt-get update` layer from breaking later installs. Production Dockerfiles depend heavily on this.
Exposing Applications to the Host
Build the image:

```shell
docker build -t my-webapp .
```

Run with port mapping:

```shell
docker run -p 5000:5000 my-webapp
```
Meaning:

| Host port | Container port |
|---|---|
| 5000 | 5000 |
Without `-p`, the application stays isolated inside the container and unreachable from the host.
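The two numbers don't have to match. A quick sketch mapping a free host port onto the container's port 5000:

```shell
# Host port 8080 forwards to the app listening on 5000 inside the container
docker run -p 8080:5000 my-webapp
```

The app is then reachable at `http://localhost:8080` on the host, while the code inside the container still binds to 5000.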
Hardcoded Configurations Create Deployment Pain
Suppose the code contains:

```python
color = "red"
```
Changing environment means editing code → rebuilding image → redeploying.
That is bad design.
Environment Variables Solve This
Instead:

```python
import os

color = os.environ.get("APP_COLOR")
```
Now runtime controls behavior.
Run:

```shell
docker run -e APP_COLOR=blue my-webapp
```

Or:

```shell
docker run -e APP_COLOR=green my-webapp
```
Same image.
Different behavior.
This is how one image supports multiple environments.
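One refinement worth sketching: `os.environ.get("APP_COLOR")` returns `None` when the variable is unset, so giving it a default keeps the app working even without `-e`. The helper name here is my own, not from the article:

```python
import os

def get_color(default="red"):
    # Read APP_COLOR at runtime; fall back to a default when it is unset
    return os.environ.get("APP_COLOR", default)
```

This way, `docker run my-webapp` still starts cleanly, and `-e APP_COLOR=blue` overrides the default when an environment needs it.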
Verify Runtime Variables Inside Container
Use:

```shell
docker inspect <container_id>
```

Look under:

```json
"Env": [
  "APP_COLOR=blue"
]
```
This helps debug runtime configuration instantly.
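For quicker checks, `docker inspect` also accepts a Go-template `--format` flag, so you can print just the environment list instead of scanning the full JSON (container id placeholder as above):

```shell
# Print only the environment variables of the container
docker inspect --format '{{.Config.Env}}' <container_id>
```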
Push Custom Images to Docker Hub
Build:

```shell
docker build -t username/my-webapp .
```

Login:

```shell
docker login
```

Push:

```shell
docker push username/my-webapp
```
Now the image is globally available. Any system can pull it with:

```shell
docker pull username/my-webapp
```
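Pushing without a tag publishes to `latest` by default. A hedged sketch of publishing an explicit version instead (the tag names are assumptions):

```shell
# Give the locally built image a versioned name, then push that tag
docker tag my-webapp username/my-webapp:1.0
docker push username/my-webapp:1.0
```

Versioned tags let consumers pin a known-good image instead of tracking a moving `latest`.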
Final Thought
Docker becomes powerful only when you understand:
- What process keeps a container alive
- Which instruction controls execution
- How layers affect build performance
- Why runtime config matters
Once these pieces connect, Docker stops feeling like commands and starts feeling like architecture.
That is where real container engineering begins.