After completing the exercises in Academind's Docker & Kubernetes course, I wanted to try my hand at containerizing a Spring-powered web application. Here is the story of how that went down...
Finding a sample project
The Spring PetClinic project is a well-known sample in the Spring community, and there are multiple forks built with different technologies. Since I wanted to build a multi-container solution with Docker Compose, I needed to find one with easily separable backend and frontend pieces.
Fortunately, the forks list contained options that covered this case exactly.
I noticed that the frontend repository already contains its own Dockerfile - I'll only have to write one for the backend and include both in a Docker Compose yaml file, along with a separate container for the database.
Containerizing the Spring Boot backend
My first step was to analyze the project and understand my options for data persistence - it turns out it's quite flexible and offers a plethora of combinations: JDBC, JPA and Spring Data JPA as persistence layers, and H2, MySQL and PostgreSQL as underlying databases. Choosing an exact combination is done through the application.properties file and its profile-specific variants. My choice was PostgreSQL with Spring Data JPA.
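As a rough sketch of what that selection looks like (the exact profile names are taken from how such projects are usually laid out, so treat them as an assumption rather than a quote from the repository):

# application.properties - activate the PostgreSQL and Spring Data JPA profiles (names assumed)
spring.profiles.active=postgresql,spring-data-jpa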
I wanted to follow a best practice called multi-stage builds: intermediate Docker images are used as build tools/environments, only the final artifact is copied into the running container, and everything else is discarded. In our case, the Maven base image fit perfectly as a build environment, from which the resulting jar could be copied into the final image (built upon the OpenJDK base image) and started from there.
The steps for our first stage are:
- Use a Maven base image
- Copy the application source code into the image
- Run the mvn package command
Since the backend uses Spring Boot 2.4.2 as its parent project and supports JDK 8, I went with OpenJDK 8-based images.
FROM maven:3.8.4-openjdk-8 AS buildstage
WORKDIR /app
COPY . .
RUN mvn package
Running mvn package generates a jar archive in the /target directory under the name spring-petclinic-rest-2.4.2.jar.
When performing a multi-stage build, Docker enables us to copy files from the previous stage and discard the rest - in this case, we only need the packaged jar, without the source code and dependencies.
FROM openjdk:8 AS runstage
COPY --from=buildstage /app/target/spring-petclinic-rest-2.4.2.jar .
To make our job easier down the line, we can find the port used by our backend and use the EXPOSE instruction to document it in the Dockerfile. In our case the port is 9966.
EXPOSE 9966
I have seen recommendations to create a separate, non-privileged user to run the application instead of root. We can do so with the adduser shell command.
RUN adduser --system --group spring
USER spring
ENTRYPOINT ["java", "-jar", "spring-petclinic-rest-2.4.2.jar"]
Putting all of these together, our final Dockerfile for the backend looks like the following:
FROM maven:3.8.4-openjdk-8 AS buildstage
WORKDIR /app
COPY . .
RUN mvn package
FROM openjdk:8 AS runstage
COPY --from=buildstage /app/target/spring-petclinic-rest-2.4.2.jar .
EXPOSE 9966
RUN adduser --system --group spring
USER spring
ENTRYPOINT ["java", "-jar", "spring-petclinic-rest-2.4.2.jar"]
There are additional modifications that need to be performed in our source code. Namely, the database host is hardcoded as localhost, so we need to change it to something that resolves to our database container. Containers in the same network (and with Docker Compose, they are in the same network by default) can resolve each other's IP addresses through the service names specified in the docker-compose.yaml file. I have chosen psqldb as the name for the database service.
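Concretely, the JDBC URL in the PostgreSQL-specific properties ends up pointing at that service name. A minimal sketch - the file name follows standard Spring Boot conventions, and the exact line in the project may differ:

# application-postgresql.properties (sketch) - point the datasource at the Compose service name
spring.datasource.url=jdbc:postgresql://psqldb:5432/petclinic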
Additionally, instead of hardcoded credentials, I preferred to use environment variables, which can easily be supplied to the container. There's a caveat here, though - supplying these values directly in the Dockerfile poses a security risk, since they end up permanently baked into the image history. We can keep them in .env files instead.
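For illustration, here is a minimal sketch of what that looks like. The ${...} placeholders are standard Spring Boot syntax for reading environment variables, and the variable names mirror the ones the official postgres image expects - whether the project uses these exact names is my assumption:

# application-postgresql.properties (sketch) - credentials come from the environment
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}

The matching env/postgres.env file would then contain values along these lines (example values only - keep real credentials out of version control):

# env/postgres.env (example values)
POSTGRES_USER=petclinic
POSTGRES_PASSWORD=changeme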
Containerizing the database
I will not be creating a custom image for the database. It is sufficient to use the official PostgreSQL image and supply the necessary environment variables for the default database and credentials. I will be specifying the default DB name ("petclinic", as retrieved from the backend source code) and the credentials (through a .env file).
Additionally, we need to enable Docker to persist database entries outside of the container filesystem. This can be done through a named volume. The default directory where PostgreSQL saves its data is /var/lib/postgresql/data.
This information must be codified in our docker-compose.yaml file. The database service definition will then look like the following:
psqldb:
  image: postgres
  environment:
    - POSTGRES_DB=petclinic
  env_file:
    - ./env/postgres.env
  volumes:
    - dbdata:/var/lib/postgresql/data
Containerizing the frontend
Since the authors of the Angular frontend already supplied a Dockerfile, I will use that instead of building my own.
The image for the frontend also uses the multi-stage build principle - first, a node base image is used to build the project. The underlying ng build command creates a directory called dist which contains the compiled app. Then, an nginx base image is used, and the dist folder is copied into the default public directory - /usr/share/nginx/html.
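For reference, a node-plus-nginx Dockerfile along those lines might look roughly like this. The image tags and the dist path are assumptions on my part, and the actual file in the repository likely differs - for instance, serving the app on port 8080 suggests a custom nginx configuration is copied in as well:

# stage 1 (sketch): build the Angular app with a node image
FROM node:14 AS buildstage
WORKDIR /app
COPY . .
RUN npm install
# runs ng build under the hood; the compiled app lands in dist/
RUN npm run build

# stage 2 (sketch): serve the compiled files with nginx
FROM nginx:alpine AS runstage
COPY --from=buildstage /app/dist /usr/share/nginx/html
EXPOSE 8080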
Assembling the docker-compose.yaml configuration
As I previously mentioned, we have a named volume for the database, and it needs to be explicitly declared in the configuration.
volumes:
  dbdata:
Next, we start listing the services. In our case, we have the following:

1) psqldb
- uses the postgres base image
- has an environment variable POSTGRES_DB with the value petclinic
- includes an .env file from which it receives DB credentials
- mounts the dbdata named volume on the default data directory for PostgreSQL

2) backend
- needs to be built from our previously written Dockerfile
- also includes the aforementioned .env file for credentials
- exposes port 9966
- depends on the psqldb service being up

3) frontend
- needs to be built from its Dockerfile
- exposes port 8080
- depends on the backend service being up
Translating all of these requirements into yaml, we get the following:
version: "3"
volumes:
  dbdata:
services:
  psqldb:
    image: postgres
    environment:
      - POSTGRES_DB=petclinic
    env_file:
      - ./env/postgres.env
    volumes:
      - dbdata:/var/lib/postgresql/data
  backend:
    build:
      context: ./spring-petclinic-rest
    env_file: ./env/postgres.env
    ports:
      - "9966:9966"
    depends_on:
      - psqldb
  frontend:
    build:
      context: ./spring-petclinic-angular
    ports:
      - "8080:8080"
    depends_on:
      - backend
And that's it. Using the docker-compose up command, we build and run our three services. Inspecting the running container for the frontend, we get its IP address and can access our PetClinic web application through port 8080.
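For completeness, the commands I mean are roughly the following - the container name is a placeholder, and the actual name can be looked up with docker ps:

# build the images and start all three services
docker-compose up --build

# look up the frontend container's IP address on the Compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <frontend-container-name>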