Why do I need to change my build behavior?
In corporate settings, security tools often intercept each HTTPS request and re-encrypt it with their own certificate, acting as Man-in-the-Middle (MitM) proxies. While these tools can be intrusive during local development, there are known workarounds. In Docker, however, the build environment differs from your local machine, so those workarounds stop working and builds fail. An elegant solution is required to make these two tools coexist.
You may have a different scenario, but I will use this one to explain how we build our images alongside this kind of tool.
How do these tools block our Docker builds?
These tools function as Man-in-the-Middle (MitM) proxies: every request we send is intercepted and re-encrypted with the tool's certificate. This certificate is not trusted by default, so it is added to the machine's certificate store during installation.
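If you want to experiment with the setups below without the corporate tool installed, you can simulate its root CA with a throwaway self-signed certificate. This is a sketch only: the subject name and `/tmp` paths are illustrative, and it assumes `openssl` is available.

```shell
# Simulate a corporate MitM root CA with a throwaway self-signed certificate.
# Subject name and /tmp paths are illustrative only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -subj "/CN=Corporate MitM CA" \
  -keyout /tmp/rootcacert.key -out /tmp/rootcacert.pem \
  -days 1 2>/dev/null

# Confirm the PEM parses as an X.509 certificate
openssl x509 -in /tmp/rootcacert.pem -noout -subject
```

The resulting `rootcacert.pem` can stand in for the real corporate certificate when trying out the Dockerfiles in this post.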
Docker is unaware of these tools, and when building our image it performs requests as it would in a standard environment. We must adapt the build to work within this restricted environment.
- 💡 For NPM, we must set the `NODE_EXTRA_CA_CERTS` environment variable to point to the tool's certificate.
- 💡 For NuGet, we just need to add the certificate to the `/etc/ssl/certs/` folder.
- ⚠️ Each runtime may handle certificates differently.
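For the NPM case, the idea can be sketched as a small shell snippet; the certificate path below is illustrative, so adjust it to wherever your tool exports its root CA.

```shell
# Point NPM/Node at the corporate root CA only when it is present,
# so the same snippet works on and off the corporate network.
CERT=/etc/ssl/certs/securitycert.pem   # illustrative path
if [ -f "$CERT" ]; then
  export NODE_EXTRA_CA_CERTS="$CERT"
  echo "npm will additionally trust: $CERT"
else
  echo "no corporate certificate found; using the default CA bundle"
fi
```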
For this post, I will use NPM as the package manager, for simplicity.
What's wrong?
I believe a Dockerfile ought to encompass all the essential steps for constructing our application, rather than just copying a pre-built artifact from our machine. Additionally, it should be environment-agnostic, ensuring that the Dockerfile remains consistent whether the build occurs locally or on a CI system.
So, how should we address this?
When the tool was installed, the IT team offered three potential solutions; regrettably, none met our requirement (no changes to the Dockerfile).
This is our default Dockerfile for the post:

```dockerfile
# file: Dockerfile
FROM node:20 as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build # output to /app/dist

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```
Solution 1: One Dockerfile for local builds, another one for CI builds
I won't waste time detailing why it's ill-advised. Duplicating code with the intention of achieving identical results only leads to self-inflicted issues.
Solution 2: Copy the certificate and use ARG/if to condition the steps
If duplication is not advisable, could we share code?
```dockerfile
# file: Dockerfile
FROM node:20 as build
ARG BUILD_ENV=remote
COPY rootcacert.pem /etc/ssl/certs/securitycert.pem
# ENV NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
RUN <<EOC
if [ "$BUILD_ENV" = "local" ]; then
  # for NPM, we just need to set an env
  # it's here for the purposes of this post, but it belongs above
  export NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
  # run specific commands here
  # update the certificate store or run other commands required by your tools
fi
EOC
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build # output to ./dist

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```
...
Utilizing ARG and shell conditionals to decide whether a certificate is needed increases the complexity of our Dockerfile.
Why is this approach problematic?
- It complicates the Dockerfile.
- The certificate needs to be included in the git repository.
- Adding the certificate to the `.gitignore` file will cause the CI build to fail.
Locally, we must specify that we want to build in a restricted environment:

```shell
docker build -t repo/image:1.0.0 --build-arg BUILD_ENV=local .
```
Solution 3: Use a `compose.yaml` definition
The solution should be divided into two distinct parts:
- Build using tools that only require the file to be correctly positioned or accompanied by the appropriate environment variables.
- Build using tools that necessitate additional commands for managing certificates.
3.1: Build with tools that require a file or an ENV
With this solution, in the given context, the good news is that there was no need to update our Dockerfile. We only need to add a `compose.yaml` file to specify certain configurations:
```yaml
# file: compose.yaml
version: '3.7'
services:
  application:
    image: repo/image:${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      NODE_EXTRA_CA_CERTS: /etc/ssl/certs/securitycert.pem
    volumes:
      - /path/to/your/certificate:/etc/ssl/certs/securitycert.pem
```
3.2: Build with tools that require specific commands
In this case, we need to update our Dockerfile to handle the certificate:
```dockerfile
# file: Dockerfile
FROM node:20 as build
ARG BUILD_ENV=remote
ARG CERT_PATH=/etc/ssl/certs/securitycert.pem
# ENV NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
RUN <<EOC
if [ "$BUILD_ENV" = "local" ]; then
  # for NPM, we just need to set an env
  # it's here for the purposes of this post, but it belongs above
  export NODE_EXTRA_CA_CERTS=$CERT_PATH
  # run specific commands here
  # update the certificate store or run other commands required by your tools
fi
EOC
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build # output to /app/dist

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```
...
And we need to define a `compose.yaml` file to specify some configurations:
```yaml
# file: compose.yaml
version: '3.7'
services:
  application:
    image: repo/image:${TAG}
    build:
      context: .
      dockerfile: Dockerfile
      args:
        BUILD_ENV: local
        CERT_PATH: /etc/ssl/certs/securitycert.pem
    volumes:
      - /path/to/your/certificate:/etc/ssl/certs/securitycert.pem
```
In both cases, to build locally, we need to run the build with compose:

```shell
TAG=1.0.0 docker compose -f compose.yaml build
```
On our CI system, we don't change how we build:

```shell
docker build -t repo/image:1.0.0 .
```
As mentioned, this solution is suitable for a simple use case; however, for more complex setups, we will need to introduce additional complexity into our Dockerfile.
How can multi-context builds help?
We have observed that the three (or four) solutions mentioned are not satisfactory. They either add complexity or fail to work in all scenarios.
If neither duplication nor code sharing serves as a convincing solution, perhaps we could isolate the problematic component?
In object-oriented programming, the strategy pattern enables us to alter our code's behavior without modifying the code itself. Why not apply it here?
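As a loose analogy, the strategy pattern can even be expressed in shell: the calling code stays the same while the selected behavior changes. The function names and image tags below are illustrative only.

```shell
# Two interchangeable "strategies" for producing a build
build_standard()   { echo "building with node:20"; }
build_restricted() { echo "building with node:20-securitycert"; }

# The caller never changes; only the strategy name passed to it does
run_build() { "build_$1"; }

run_build standard
run_build restricted
```

Swapping the base context of a Docker build is the same move: the Dockerfile (the caller) stays untouched, and only the context it builds on changes.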
In a Dockerfile, multiple contexts exist. The most common is the default build context (the `.` at the end of the `docker build` command), but there are others, and we use them every time we write or build an image.
```dockerfile
FROM node:20 as build-stage
COPY . ./
...
COPY --from=build-stage . ./
```
- `FROM node:20 as build-stage` defines a context based on the `node:20` context, named `build-stage`
- `COPY . ./` uses the default build context to copy files into the current one
- `COPY --from=build-stage . ./` copies files from the context named `build-stage` into the current context
How can it help?
If `node:20` serves as a context, it is possible to replace this context at build time using Docker's multi-context build feature. If there is a context that contains all the requirements to build our image in a restricted environment, we should be able to substitute it for the `node:20` context.
To build in a restricted environment, we must first construct the restricted context: a new Dockerfile where we specify all the requirements, which we then build.
```dockerfile
# file: Dockerfile.securitycert
FROM node:20
COPY rootcacert.pem /etc/ssl/certs/securitycert.pem
# ENV NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
RUN <<EOC
# for NPM, we just need to set an env
# it's here for the purposes of this post, but it belongs above
export NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
# run specific commands here
# update the certificate store or run other commands required by your tools
EOC
```
```shell
docker build -t node:20-securitycert -f Dockerfile.securitycert /path/to/your/certificates/folder
```
Now we have a restricted context with all the requirements, built as a new Docker image: `node:20-securitycert`.
Without any modifications to our base Dockerfile, we can now build our application container using this image as the base:

```shell
docker build -t repo/image:1.0.0 --build-context node:20=docker-image://node:20-securitycert .
```
Docker will replace the `node:20` base we use with our newly created image at build time.
That sounds good, but it adds an extra step to build the solution, doesn't it?
Yes, we added an extra command to build an intermediate image, but our base Dockerfile remains unchanged. In our CI system, we continue to run the same command to build our image.
```shell
docker build -t repo/image:1.0.0 .
```
Indeed, the developer experience has been affected. To address this, we can use Bake.
Improve developer experience with Bake
Docker buildx bake offers a novel approach to constructing our images, leveraging parallelization and orchestration of builds.
It requires adding a configuration file called `docker-bake.hcl` and a change in how we build the image.
```hcl
// file: docker-bake.hcl
variable "_BASE_IMAGE" {
  default = "node:20"
}

variable "TAG" {
  default = "latest"
}

target "_securitycert" {
  context = "/path/to/your/certificates/folder"
  dockerfile-inline = <<EOF
FROM ${_BASE_IMAGE}
COPY rootcacert.pem /etc/ssl/certs/securitycert.pem
ENV NODE_EXTRA_CA_CERTS=/etc/ssl/certs/securitycert.pem
EOF
}

target "default" {
  context = "."
  tags = [
    "repo/image:${TAG}"
  ]
}

target "securitycert" {
  inherits = [ "default" ]
  contexts = {
    "${_BASE_IMAGE}" = "target:_securitycert"
  }
}
```
- The `_BASE_IMAGE` variable is used internally by our targets to share the value (it can be changed by the user, but I use `_` as a convention for internal usage)
- The `TAG` variable can be changed by the user to set the tag used at build time
- The `_securitycert` target defines how we build our restricted context image with an inline Dockerfile definition (we could also define it in a separate file)
- The `default` target defines how we build our image in a standard (non-restricted) environment
- The `securitycert` target defines how we build our image in a restricted environment. It inherits properties from the `default` target, so every change to the default target is replicated in the securitycert one.
In the `securitycert` target, we replace the `_BASE_IMAGE` context with the `_securitycert` target. Docker will build the `_securitycert` target before executing the final build, because the final build depends on it.
At build time, if we do not specify which target to build, bake will use the `default` target.
To build locally, we need to run:

```shell
TAG=1.0.0 docker buildx bake securitycert
```
On CI, or in a non-restrictive environment:

```shell
TAG=1.0.0 docker buildx bake
# you can also run the old one if you want
# docker build -t repo/image:1.0.0 .
```
Conclusion
Docker and Dockerfiles are potent tools for building container images with ease, yet they must remain straightforward, even in complex contexts. Adding complexity to Dockerfiles may deter developers from maintaining them, which is understandable.
Multi-context builds are effective in addressing numerous issues related to build contexts and ought to be utilized more frequently.
To enhance the developer experience and documentation, `buildx bake` appears to be a beneficial tool, providing numerous advantages, especially when used in conjunction with multi-context builds.