In my last post, I showed how to refactor the Dockerfile in your project. I didn't want to make that post too long, so I didn't touch multistage builds. In this post I'll show you how to improve the Dockerfile even more using multistage builds. Stages are not hard to use, but there are a few places where the knowledge below may help you 😃
Let's take the result of the last article:
```dockerfile
FROM php:7.4.25-fpm

WORKDIR /app

COPY --from=composer:2.1.11 /usr/bin/composer /usr/bin/composer

RUN apt update && \
    apt install -y \
        libicu-dev=67.1-7 \
        libgd-dev=2.3.0-2 \
        libonig-dev=6.9.6-1.1 \
        unzip=6.0-26 && \
    apt purge -y --auto-remove

RUN docker-php-ext-install \
        exif \
        gd \
        intl \
        mbstring \
        mysqli \
        opcache \
        pdo_mysql \
        sockets

ENV COMPOSER_ALLOW_SUPERUSER 1

COPY composer.json .
COPY composer.lock .

RUN composer install --no-dev --no-scripts

COPY . .

RUN composer dumpautoload --optimize
```
Don't keep unnecessary dependencies
Every dependency in your image is potentially a source of vulnerabilities, and you have to keep them all up to date. So it's good practice to keep only the minimum of truly required dependencies.
So let's try to get rid of Composer in our final image. Composer is required to install our backend dependencies, but it's not required at the runtime of our app. Actually, you shouldn't keep it in your final image at all, because every change it makes, like for example a `composer update`, is lost after a container restart, so its presence may even be confusing.
You can get rid of this dependency using multistage builds:
```dockerfile
FROM composer:2.1.11 AS build

WORKDIR /app

COPY composer.json .
COPY composer.lock .

RUN composer install --no-dev --no-scripts --ignore-platform-reqs

COPY . .

RUN composer dumpautoload --optimize

FROM php:7.4.25-fpm

WORKDIR /app

RUN apt update && \
    apt install -y \
        libicu-dev=67.1-7 \
        libgd-dev=2.3.0-2 \
        libonig-dev=6.9.6-1.1 \
        unzip=6.0-26 && \
    apt purge -y --auto-remove

RUN docker-php-ext-install \
        exif \
        gd \
        intl \
        mbstring \
        mysqli \
        opcache \
        pdo_mysql \
        sockets

COPY --from=build /app /app
```
So, what happened here? As you can see, I've added a new `FROM` instruction before the PHP stage. This way, we can add multiple stages to our Docker image. The `AS` keyword gives the stage an alias that we can use to refer to it. Referring to a stage is helpful if we need to copy something between stages, or use a previous stage as the base image for another stage.
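As a quick illustration, here is a minimal sketch of those two ways of referring to a stage (the image and stage names are placeholders, not part of our project):

```dockerfile
# Minimal sketch (placeholder names), not our project's Dockerfile
FROM alpine:3.15 AS deps
RUN echo "artifact" > /artifact.txt

# 1. Use a previous stage as the base image of another stage
FROM deps AS extended
RUN echo "more" >> /artifact.txt

# 2. Copy files from another stage into the current one
FROM alpine:3.15
COPY --from=deps /artifact.txt /artifact.txt
```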
In this case, I've added a build stage based on the Composer image, which installs all my dependencies and generates the autoload files of my project. Now, when you start building this image, Docker will create the build container first. When the build stage is finished, Docker will start building the next stage. For the Docker build process these stages are distinct images, so the final image keeps only the layers from the last stage of the build.
Also, as you can see, I've added the `--ignore-platform-reqs` flag. This flag lets you install your dependencies even if you don't have the required PHP extensions installed. Otherwise, Composer may abort the installation if some packages require extensions that don't exist in the base Composer image.
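To make it concrete, here is a hypothetical `composer.json` fragment (not from the project) with platform requirements that would fail inside the Composer image without the flag, since extensions like intl or gd are typically not present there:

```json
{
    "require": {
        "php": ">=7.4",
        "ext-intl": "*",
        "ext-gd": "*"
    }
}
```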
Cache
As I explained above, for `docker build` our stages are like different images. That creates a small problem in the building process that confused me the first time I tried to build this image using the cache from a previously built image. When I built this image a few times in CI/CD with the `--cache-from` flag, Docker didn't use the cache. I spent a lot of time debugging the pipeline, and I even looked for problems in my runner. The problem wasn't in my runner, and what's more, this behavior is expected.
Because the final image keeps only the layers that belong to the target stage (by default the last one), it doesn't contain the layers of any other stage. That's obvious when you think about it, because it's exactly what makes your images smaller. It just may be confusing when you're creating a CI/CD pipeline and don't have it in mind yet.
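You can see this for yourself after a build (assuming the placeholder tag `myimage:latest`):

```bash
# List the layers of the final image: only the target stage's layers
# (on top of its base image) show up, nothing from the `build` stage
docker history myimage:latest
```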
So how do we solve this problem? The goal is to keep the layers of every stage so they can be used later as a cache. We need to preserve them during the build and then push them to our registry, so we can pull them back for the next builds.
To achieve that, we can use the `--target` flag:
```bash
docker pull myimage:latest-build || true
docker pull myimage:latest || true
docker build . --target=build --cache-from=myimage:latest-build -t myimage:latest-build
docker build . --cache-from=myimage:latest-build --cache-from=myimage:latest -t myimage:latest
docker push myimage:latest-build
docker push myimage:latest
```
This way, the build will always use the cache when creating your images, if it's available of course.
The first two lines pull the images from your previous builds. The `|| true` makes the command always return exit code 0 to the shell, even if the image doesn't exist, which happens during the very first build. A non-zero return code would stop your pipeline.
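For example, wrapped in a GitLab CI job it could look like this sketch (the image name, Docker-in-Docker setup, and registry authentication are assumptions; your CI may differ):

```yaml
# Sketch of a CI job wrapping the commands above (GitLab CI syntax);
# registry authentication is assumed to be configured elsewhere
build:
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker pull myimage:latest-build || true
    - docker pull myimage:latest || true
    - docker build . --target=build --cache-from=myimage:latest-build -t myimage:latest-build
    - docker build . --cache-from=myimage:latest-build --cache-from=myimage:latest -t myimage:latest
    - docker push myimage:latest-build
    - docker push myimage:latest
```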
Image for the local environment
The next thing I had a problem with was creating a single Dockerfile for both the local and the running environment. The answer to my need was simply adding another stage to the Dockerfile.
I'm using docker-compose to create the local development environment for working on my applications. In my case, I had two Dockerfiles in my source code. The first one was PHP with all the extensions required to run the app. The second one was the runtime image, containing the same configured PHP as the first, but with my application code installed into it. The PHP with extensions was the common part of both images, so I had to make every change in two separate Dockerfiles, which is just silly.
There is a way to solve this problem using stages:
```dockerfile
FROM php:7.4.25-fpm AS base

WORKDIR /app

RUN apt update && \
    apt install -y \
        libicu-dev=67.1-7 \
        libgd-dev=2.3.0-2 \
        libonig-dev=6.9.6-1.1 \
        unzip=6.0-26 && \
    apt purge -y --auto-remove

RUN docker-php-ext-install \
        exif \
        gd \
        intl \
        mbstring \
        mysqli \
        opcache \
        pdo_mysql \
        sockets

FROM composer:2.1.11 AS build

WORKDIR /app

COPY composer.json .
COPY composer.lock .

RUN composer install --no-dev --no-scripts --ignore-platform-reqs

COPY . .

RUN composer dumpautoload --optimize

FROM base AS final

COPY --from=build /app /app
```
The first change is moving the part that prepares my base PHP image into its own stage, named `base`. The next part is the same as before. The last part is the `final` stage, created from the result of the `base` stage, which also copies our application from the `build` stage.
Now, to build this image for your running environment, like production, you need to build the `final` stage, which automatically builds all the previous stages, as in the sketch below.
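A minimal sketch of such a build (the `myimage` tag is just a placeholder):

```bash
# Build the whole image up to the `final` stage (also the default,
# since it's the last one) for the running environment
docker build . --target=final -t myimage:latest
```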
To use that Dockerfile in your docker-compose setup, you need to specify which stage should run your local environment:
```yaml
services:
  web:
    #...
    ports:
      - 8080:80
  php:
    build:
      context: .
      dockerfile: Dockerfile
      target: base
    ports:
      - "9000"
    volumes:
      - .:/app
```
And from now on, we have the same Dockerfile for the local and running environments.
That's all I wanted to show you in this article.
Have a nice day, and keep taking care of your Dockerfiles 😃
Originally posted on mateuszcholewka.com
Top comments (2)
I've learned so much from this article. Thank you.
Can we create a separate container for nginx and share the php volume? Do you recommend it?
Hi, I'm really glad that my article is helpful for you, thank you for your feedback :)
To answer your question: you can find opinions that a single process per container is an unbreakable rule and that you always have to split them.
IMO it depends on your project requirements.
Referencing this Docker docs page: docs.docker.com/config/containers/... — it's recommended to use a single process per container, but it's still OK to put two or more processes in a single one.
Personally, my default approach is to stick to the rule of one process per container. It's usually easier to configure and maintain. For example, you don't need to care about installing nginx from the package manager and configuring supervisord, and you get separate log streams from nginx and php-fpm, which is more readable and manageable.
But sometimes I need to deploy something quickly using "Docker as a service" platforms (I mean render.com, Coolify, etc.), where it's usually not possible to reuse the local docker-compose.yml, but they offer the possibility to build a container directly from a single Dockerfile. Then I configure nginx and php-fpm in a single container, to avoid any problems with differences in volume/network configuration.
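For reference, a minimal sketch of the separate-containers setup you're asking about could look like this (the nginx version, paths, and the `nginx.conf` file are assumptions; that config would need to `fastcgi_pass` requests to `php:9000`):

```yaml
# Sketch: nginx and php-fpm as separate containers sharing the code.
# Versions, paths, and the nginx.conf file are assumptions.
services:
  nginx:
    image: nginx:1.21
    ports:
      - "8080:80"
    volumes:
      - .:/app
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
  php:
    build:
      context: .
      target: base
    volumes:
      - .:/app
```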