With the rise of Docker came a new focus for engineers:
optimizing the build to reach the smallest image size possible.
A couple of options are available.
- Multi-stage builds: A `Dockerfile` can consist of multiple steps, each with a different Docker base image. Each step can copy files from any of the previous build steps. Only the last one receives a tag; the others are left untagged. This approach separates one or more build steps from a run step. On the JVM, it means that the first step compiles and packages the application with a JDK, and the second step runs it with a JRE (see the sketch after this list).
- Choosing the smallest base image: The smaller the base image, the smaller the resulting image.
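As a rough sketch of the first option, here is what a multi-stage `Dockerfile` for a hypothetical Maven project could look like; the base images and the artifact name are illustrative, not prescriptive:

```dockerfile
# Build stage: a full JDK plus Maven to compile and package the application
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn --batch-mode package

# Run stage: a plain JRE is enough to execute the packaged JAR
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy only the artifact produced by the build stage; the JDK layers are left behind
COPY --from=build /build/target/app.jar ./app.jar
CMD ["java", "-jar", "app.jar"]
```

Only the final stage ends up in the tagged image; the build stage exists solely during the build.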
In this post, I'm going to focus on the second point.
Minimal base images
Three approaches are available for base images.
Here they are:
- `FROM scratch`: You can use Docker's reserved, minimal image, `scratch`, as a starting point for building containers. Using the `scratch` "image" signals to the build process that you want the next command in the `Dockerfile` to be the first filesystem layer in your image.

  While `scratch` appears in Docker's repository on the hub, you can't pull it, run it, or tag any image with the name `scratch`. Instead, you can refer to it in your `Dockerfile`. For example, to create a minimal container using `scratch`:

  ```dockerfile
  FROM scratch
  COPY hello /
  CMD ["/hello"]
  ```

  `scratch` is the smallest possible parent image. It works well if the final image is independent of any system tool.
- Alpine: Alpine Linux is a tiny distribution based on `musl`, `BusyBox`, and `OpenRC`. It's designed to be secure and small. For example, the 3.17 Docker image is only 3.22 MB.

  On the flip side, I've already encountered issues because of Alpine's use of `musl` instead of the more widespread `glibc`. Just last week, I heard about Alpaquita Linux, which is meant to solve this exact issue. Its stream-glibc-230404 tag is 8.4 MB: twice as big as Alpine, but still very respectable compared to regular Linux distros, e.g., Red Hat's 75.41 MB.

- Distroless: Last but far from least comes Distroless.
Since this post focuses on Distroless, I'll dive into it in a dedicated section.
Distroless
I first learned about Distroless because it was the default base image in Google's Jib. Jib is a Maven plugin that builds Docker images without requiring a Docker daemon. Note that the default has since changed.
Distroless has its own GitHub project:
"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
[...]
Restricting what's in your runtime container to precisely what's necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years. It improves the signal to noise of scanners (e.g. CVE) and reduces the burden of establishing provenance to just what you need.
The statement above hints at what Distroless is and why you should use it. Just like Serverless, Distroless is a misleading term. The most important fact is that Distroless provides neither a package manager nor a shell; for this reason, Distroless images stay small.
Also, Distroless images are considered more secure: the attack surface is reduced compared to regular images, since they lack package managers and shells, which are common attack vectors. Note that some articles dispute this benefit.
Distroless images come with four standardized tags:
- `latest`
- `nonroot`: the image doesn't run as `root`, so it's more secure
- `debug`: the image contains a shell for debugging purposes
- `debug-nonroot`
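For example, here is a hypothetical `Dockerfile` using the `nonroot` tag of the Node.js Distroless image that also appears later in this post; the application file is made up for the sake of the example:

```dockerfile
# Distroless Node.js base image, running as an unprivileged user
FROM gcr.io/distroless/nodejs18-debian11:nonroot
COPY app.js /app/app.js
# There's no shell in the image: use the exec form and the absolute path to the Node.js binary
ENTRYPOINT ["/nodejs/bin/node", "/app/app.js"]
```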
Distroless debugging
I love the idea of Distroless, but it has one big issue. Something goes wrong during development, and sometimes in production, and one needs to log into the container to understand the problem. In general, one uses `docker exec` or `kubectl exec` to run a shell: it's then possible to run commands interactively from inside the running container. However, Distroless images don't offer a shell by design. Hence, one needs to run every command from outside, which doesn't make for a great developer experience.
During development, one can switch the base image to a `debug` one, rebuild, run again, and solve the problem. Yet, you must remember to roll back to the non-`debug` base image. The more issues you encounter, the higher the chance you'll eventually ship a `debug` image to production.
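To make the risk concrete, the swap is nothing more than a change of tag in the `FROM` instruction; continuing with the hypothetical Node.js image above:

```dockerfile
# Release image: no shell
# FROM gcr.io/distroless/nodejs18-debian11:nonroot

# Debug variant: same runtime plus a BusyBox shell.
# It's easy to forget to switch this line back before shipping.
FROM gcr.io/distroless/nodejs18-debian11:debug
COPY app.js /app/app.js
ENTRYPOINT ["/nodejs/bin/node", "/app/app.js"]
```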
Worse, you cannot do the same trick in production at all.
Kubernetes to the rescue
At the latest JavaLand conference, I attended a talk by my good friend Matthias Häussler. In it, he made me aware of the `kubectl debug` command, whose underlying ephemeral containers feature became stable in Kubernetes 1.25:
Ephemeral containers are useful for interactive troubleshooting when `kubectl exec` is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.

You can use the `kubectl debug` command to add ephemeral containers to a running Pod.
Let's see how it works by running a Distroless container:
```shell
kubectl run node --image=gcr.io/distroless/nodejs18-debian11:latest --command -- /nodejs/bin/node -e "while(true) { console.log('hello') }"
```
The container starts an infinite NodeJS loop. We can check the logs with the expected results:
```shell
kubectl logs node
```

```
hello
hello
hello
hello
```
Imagine that we need to check what is happening inside the container:

```shell
kubectl exec -it node -- sh
```
Because the container has no shell, the following error happens:
```
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
```
We can use `kubectl debug` magic to achieve it anyway:

```shell
kubectl debug -it \
  --image=bash \   #1
  --target=node \  #2
  node             #3
```
1. Image to attach. As we want a shell, we are using `bash`
2. Name of the container to attach to
3. Name of the pod to attach to. It's the same as `--target` here because `kubectl run` named the pod's single container after the pod
The result is precisely what we expect:
```
Targeting container "node". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-tkkdf.
If you don't see a command prompt, try pressing enter.
bash-5.2#
```
We can now use the shell to type whatever command we want:
```shell
ps
```
The result confirms that we "share" the same container:
```
PID   USER     TIME  COMMAND
    1 root     12:18 /nodejs/bin/node -e while(true) { console.log('hello') }
   27 root      0:00 bash
   33 root      0:00 ps
```
After we finish the session, we can reattach to the container by following the instructions:

```
bash-5.2# Session ended, the ephemeral container will not be restarted but may be reattached using 'kubectl attach node -c debugger-tkkdf -i -t' if it is still running
```
Conclusion
Distroless images are an exciting solution to reduce your image's size and improve its security. They achieve these advantages by providing neither a package manager nor a shell. The lack of a shell is a huge issue when one needs to debug what happens inside a container.
The new `kubectl debug` command offers a clean way to fix this issue by attaching an external container that shares the same context as the original one. Thanks again for that, Matthias!
To go further:
- Create a simple parent image using scratch
- Distroless Container Images
- "What's Inside Of a Distroless Container Image: Taking a Deeper Look"
- "In Pursuit of Better Container Images: Alpine, Distroless, Apko, Chisel, DockerSlim, oh my!"
- How To Debug Distroless And Slim Containers
- Why Distroless containers aren't the security solution you think they are
- Distroless containers for security and size?
Originally published at A Java Geek on April 16th, 2023
Top comments (2)
Out of curiosity, do you know if there's an equivalent outside Kubernetes to that `kubectl debug` command? I suppose something like [...] Is it the equivalent?
Does `kubectl debug` allow you to read the target container's filesystem somehow? Adding `--volumes-from node` to the above command would let me share the volumes, which should be enough if the container itself is started with a read-only filesystem; is there any trickery to maybe mount the target container's fs at a mountpoint in the debug container?

It's an excellent question. I asked myself the same question but was too lazy to search for the answer.

Yes, you can! When you attach, you can access the FS (`ls`). You can try it from my example.
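To expand on that: because `--target` puts the ephemeral container in the target's process namespace, the target container's root filesystem is also reachable through `/proc`. A quick sketch, based on the session above where the Node.js process has PID 1:

```shell
# From inside the ephemeral bash container:
# PID 1 is the node process of the target container (see the ps output above)
ls /proc/1/root/
# For example, browse the Node.js runtime shipped in the Distroless image
ls /proc/1/root/nodejs/bin/
```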