Docker is an extremely powerful tool in a developer's arsenal. It lets you ship applications in isolated containers that bundle all of the dependencies needed to run your application, without modifying the host OS (beyond installing Docker itself). This enables a variety of scenarios for testing, deployment, and publishing applications to the cloud by simply pulling a published image containing your application from a Docker repository. This works in most scenarios, but what happens if you pull down an image intended for an x64-based architecture while on an arm32 device like a Raspberry Pi? (Spoiler: it won't run.) Wouldn't it be nice if the repo automatically supplied the correct image for your architecture based on your host OS?
If you are new to Docker and its features and functionality, my colleague recently published a guide that will take you from beginner to master. Once you have the basics down, you should be ready to proceed with the content in this article.
In this post, we will assume that you know how to push images to a Docker repo and that you want to publish a multi-arch image that auto-resolves to the host architecture. Let's start with an overview of why we would ever want or need to do this.
Have you ever intentionally or unintentionally attempted to start a container from a Docker image intended for a foreign architecture? You may have noticed that you are met with the following error:
exec user process caused "exec format error"
This error indicates that the image in question has been built for an architecture that is not capable of running under the host OS. An example of this would be pulling down an image created for x64 architecture and attempting to run it on an arm32 platform like a Raspberry Pi. The processor architecture of the device in question, in this case a Raspberry Pi, is not capable of running x64 code. It just isn't possible.
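Before running an image, you can check which platform it was built for. A quick check using the Docker CLI's Go-template formatting (assuming the image, here `nginx:latest` as an example, has already been pulled locally):

```shell
# Print the OS and CPU architecture the local image was built for,
# e.g. "linux/amd64" or "linux/arm" (requires a running Docker daemon).
docker image inspect --format '{{.Os}}/{{.Architecture}}' nginx:latest
```

If the reported architecture does not match your host, attempting to start a container from that image is what produces the "exec format error" above.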
Image publishers have attempted to resolve this in a variety of ways. One of the more popular approaches is to publish images to multiple repositories, one per architecture. For example, the official nginx repo is hosted at https://hub.docker.com/_/nginx. The README mentions support for a variety of platforms including: amd64, arm32v6, arm32v7, arm64v8, i386, ppc64le and s390x.
You will notice that the arm32v7 repo is hosted at: https://hub.docker.com/r/arm32v7/nginx/
While the amd64 repo is hosted at: https://hub.docker.com/r/amd64/nginx/
This means that in order to pull the arm32 flavor of nginx we would issue:
docker pull arm32v7/nginx
Similarly, for the amd64 flavor, we would issue:
docker pull amd64/nginx
This is one way to solve the issue, but it requires setting up separate repos which does not allow us to resolve to our Host architecture with a one-size-fits-all command like:
docker pull nginx
Thus, if we wanted to deploy a one-size-fits-all Helm chart for nginx on Kubernetes that is capable of running on both an x64 machine and a Raspberry Pi, that wouldn't really work, as we would need to create two charts with values pointing to the appropriate image repo for each intended architecture.
You may also notice that the /arm32v7/* and /amd64/* repos require special permissions to publish to as they are both technically community organizations. As a result, you may see some publishers like Alex Ellis using their own public repos to specify the architecture in the repo name, for example: alexellis2/nginx-arm.
The key takeaway is that in these cases, the maintainers are allowing for multi-arch images by creating architecture-specific repos.
Let's look at another way that repo maintainers tackle this issue using tags. There is a fine example of this in the Docker repo for linki/chaoskube.
Notice that each image is tagged per platform and that all of the tags are hosted in a single repo.
This is closer to what we want to accomplish, but still requires pulling down a specific tag per specific architecture, so again, we would not be able to perform a one-size-fits-all installation. We would need to install the correct tag for the intended architecture.
Using a combination of architecture-specific tags and an associated Docker manifest, we can achieve one-size-fits-all architecture agnostic image pulls from our repo. Let's introduce the concept of manifests first, then we will cover how to do it.
From the Docker docs:
A Docker manifest contains information about an image, such as layers, size, and digest. The docker manifest command also gives users additional information such as the os and architecture an image was built for.
Ideally a manifest list is created from images that are identical in function for different os/arch combinations. For this reason, manifest lists are often referred to as “multi-arch images”. However, a user could create a manifest list that points to two images -- one for windows on amd64, and one for darwin on amd64.
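Under the hood, a manifest list is itself a small JSON document that maps each platform to a per-architecture image digest. A trimmed, illustrative example of the manifest list media type (digests elided) looks roughly like this:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "size": 1152,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:…",
      "size": 1152,
      "platform": { "architecture": "arm", "os": "linux" }
    }
  ]
}
```

When you run docker pull against a manifest list, the daemon matches the host's os/architecture pair against the platform entries and fetches the corresponding image.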
Docker manifest functionality currently requires experimental features to be enabled.
To accomplish this on Linux, you can navigate to ~/.docker and add

"experimental": "enabled"

to the config.json file so that it looks something like this:

{
    "experimental": "enabled"
}
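Rather than editing config.json by hand, the change can be scripted. A minimal sketch, assuming python3 is available (jq would work just as well), that preserves any existing keys in the file:

```shell
# Enable the client-side "experimental" flag without clobbering other keys
# in config.json (python3 is used here for safe JSON editing).
CONFIG="${DOCKER_CONFIG:-$HOME/.docker}/config.json"
mkdir -p "$(dirname "$CONFIG")"
[ -f "$CONFIG" ] || echo '{}' > "$CONFIG"
python3 - "$CONFIG" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg["experimental"] = "enabled"  # flag read by the `docker manifest` subcommands
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
```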
To enable on Windows, highlight the Docker icon in the system tray and select "Settings", then "Daemon" and check the box to enable "Experimental Features".
Check that experimental features are enabled by running:

docker version

and confirm that the Experimental field in the Client section is set to true.
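If you want to check this in a script, the Docker CLI's Go-template formatting can print just that field (assumes the Docker CLI is installed):

```shell
# Prints "true" when client-side experimental features are enabled.
docker version --format '{{.Client.Experimental}}'
```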
I currently host a project on Github that requires building out container images for multiple architectures. The project also employs a kubernetes helm chart for deploying these images into a cluster and needs to be one-sized-fits-all. Ideally, when I execute "docker pull toolboc/azure-iot-edge-device-container", I want it to resolve the appropriate image for the host architecture that I issue the command from. For example, if I run this command on my AMD64 Desktop, I want to pull down the latest image targeted to the x64 architecture. Similarly, if I run this command on a Raspberry Pi, I want to pull down the latest image targeted for arm32.
I will show you exactly how I accomplish this using docker manifest commands.
First, I build out all of the images for my intended platforms and tag them as follows:

toolboc/azure-iot-edge-device-container:x64-latest
toolboc/azure-iot-edge-device-container:arm32-latest
Next, I create a manifest which contains each of these images:
docker manifest create toolboc/azure-iot-edge-device-container:latest \
    toolboc/azure-iot-edge-device-container:x64-latest \
    toolboc/azure-iot-edge-device-container:arm32-latest
Now let's review the output of:
docker manifest inspect toolboc/azure-iot-edge-device-container:latest
Notice that the output has automatically picked up and annotated the architecture and os appropriately.
If we needed to specify the annotations manually we could run:
docker manifest annotate toolboc/azure-iot-edge-device-container:latest \
    toolboc/azure-iot-edge-device-container:arm32-latest --arch arm --os linux
If we want to update the images referenced in the manifest, we could rebuild and tag appropriately, then run:
docker manifest create toolboc/azure-iot-edge-device-container:latest \
    --amend toolboc/azure-iot-edge-device-container:x64-latest \
    --amend toolboc/azure-iot-edge-device-container:arm32-latest
Finally, to push the manifest to our repo, we issue:
docker manifest push toolboc/azure-iot-edge-device-container:latest
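Putting the steps together, the whole publish flow can be sketched as a short script. The repo and tag names follow the commands above; the per-architecture images are assumed to have already been built and tagged:

```shell
#!/bin/sh
set -e
REPO=toolboc/azure-iot-edge-device-container

# 1. Push the per-architecture images to the repo.
docker push "$REPO:x64-latest"
docker push "$REPO:arm32-latest"

# 2. (Re)create the manifest list pointing at both images.
docker manifest create "$REPO:latest" \
    --amend "$REPO:x64-latest" \
    --amend "$REPO:arm32-latest"

# 3. Push the manifest list so a plain `docker pull` resolves per-host.
docker manifest push "$REPO:latest"
```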
Assuming our images have been pushed to the repo, we can test by hopping onto a Raspberry Pi and AMD64 desktop and running:
docker pull toolboc/azure-iot-edge-device-container
If everything went successfully, we should be able to verify that the appropriate image was pulled down for each architecture by running:
docker inspect toolboc/azure-iot-edge-device-container
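In the inspect output, the Architecture field should match the host on each machine. To print just that field, the same Go-template trick applies:

```shell
# Expect "amd64" on the desktop and "arm" on the Raspberry Pi.
docker inspect --format '{{.Architecture}}' toolboc/azure-iot-edge-device-container
```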
We have shown that it is possible to deliver Docker images for multiple architectures with a one-size-fits-all docker pull command by taking advantage of manifests. While this mechanism is powerful, it is still considered "experimental" and can currently only be manipulated from the CLI. This is a bit unfortunate, as it would be nice to view and work with manifests from within Docker Hub itself. I expect that this feature is still under some level of construction and may prove more usable in the future. I hope this article has shown you exactly what is needed to publish multi-arch images to a single Docker repo. Please let me know if you have any feedback or suggestions in the comments!
Until next time,