İbrahim Gündüz

Posted on • Originally published at Medium

Spring Boot: A Practical Guide to Building Review Environments for Feature Branches

Testing is one of the most crucial steps in the software development lifecycle. A review environment provides a dedicated space where application features can be tested, automatically or manually, to ensure they work correctly before integration.

In this article, we will focus on building a review environment on a dedicated PC or virtual machine.

Externalizing Application Configuration with Environment Variables

To ensure that each feature works correctly, the application must be fully isolated from other instances, including its external dependencies such as databases, caches, and message queues. Since we will run a separate instance of the application for each feature branch, configuration must be externalized using environment variables. This allows the application to dynamically determine which external resources it should connect to at runtime.

As we have already covered this topic in previous posts, I will not dive into the details of externalizing application configuration or providing deployment isolation for external resources such as Redis or RabbitMQ. However, if you haven’t read them yet, I strongly recommend taking a look at the following articles before continuing:

For our example project, the application will connect to PostgreSQL, Redis, and RabbitMQ, with isolation achieved through a dedicated database name, cache key prefix, and broker virtual host for each feature branch.

# application.yaml

spring:
  rabbitmq:
    host: ${RABBITMQ_HOST:localhost}
    port: ${RABBITMQ_PORT:5672}
    virtual-host: ${RABBITMQ_VHOST:dev}
    username: ${RABBITMQ_USER:guest}
    password: ${RABBITMQ_PASSWORD:guest}
  datasource:
    url: jdbc:postgresql://${POSTGRESQL_HOST:localhost}:${POSTGRESQL_PORT:5432}/${POSTGRESQL_DBNAME:dev}
    driver-class-name: org.postgresql.Driver
    username: ${POSTGRESQL_USER:postgres}
    password: ${POSTGRESQL_PASSWORD:postgres}
  data:
    redis:
      host: ${REDIS_HOST:localhost}
      port: ${REDIS_PORT:6379}
      password: ${REDIS_PASSWORD:} # empty default, since the shared Redis instance runs without a password
      database: 0
  cache:
    type: redis
    redis:
      key-prefix: ${CACHE_KEY_PREFIX:default}
      use-key-prefix: true

Containerizing the Application

In this step, we will create a Dockerfile that packages the application into a container image, enabling it to run in an isolated and reproducible environment.

The following example demonstrates a multi-stage Dockerfile, where the application is built in the first stage and then copied into a lightweight runtime image in the second stage.

ARG MAVEN_VERSION=3.9-eclipse-temurin-21
ARG JRE_VERSION=21-jre

FROM maven:${MAVEN_VERSION} AS build

ARG CI_COMMIT_REF_NAME=dev

WORKDIR /build
COPY pom.xml .
RUN mvn -B -q dependency:go-offline

COPY src ./src
RUN mvn -B -q package -DskipTests -Drevision=${CI_COMMIT_REF_NAME}

FROM eclipse-temurin:${JRE_VERSION}

WORKDIR /app
COPY --from=build /build/target/*.jar app.jar
EXPOSE 8080

ENTRYPOINT ["java","-jar","/app/app.jar"]

In the example above, the Maven project is built using the -Drevision=${CI_COMMIT_REF_NAME} parameter. This value is captured at build time and exposed through the application using the build-info feature of the spring-boot-maven-plugin, allowing the build version to be displayed in a controller response.

Since the required application configurations have already been externalized, this versioning approach is optional and can be omitted in your own project if build metadata is not required at runtime.
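
For readers who do keep it, the snippet below is a minimal sketch of such a controller. It assumes the build-info goal of the spring-boot-maven-plugin is enabled (which generates META-INF/build-info.properties, from which Spring Boot auto-configures a BuildProperties bean) and that the pom's version uses the ${revision} placeholder; the class and endpoint names are illustrative.

// Hypothetical controller that returns the version captured at build time.
import org.springframework.boot.info.BuildProperties;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VersionController {

    private final BuildProperties buildProperties;

    public VersionController(BuildProperties buildProperties) {
        this.buildProperties = buildProperties;
    }

    @GetMapping("/")
    public String version() {
        // Assuming the pom's <version> is ${revision}, this reflects the branch name passed via -Drevision.
        return "Current version is: " + buildProperties.getVersion();
    }
}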

Provisioning the Review Environment Infrastructure

With the application packaged as a container image, the next step is to provision an environment capable of running these containers. Given the operational overhead of deploying Kubernetes on a dedicated system, Docker Swarm provides a simpler and more suitable alternative for hosting isolated review environments.

The first step in provisioning the environment is installing Docker on the target machine. Since this process is well documented in the official Docker documentation, we won’t cover the installation steps here. Instead, you can refer to the official guide at the following link:

Once Docker is installed, you can initialize Docker Swarm as shown below:

$ docker swarm init

Swarm initialized: current node (pzfpqsijpyuypd4xmluzlv3u7) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2rci0c2v4bs7hh8bfpkenvmow78hcdcufs6411m2qaz90avlrt-0drqyzyeg8pghhu0tc4zjepq2 192.168.1.106:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As shown above, Docker outputs the required commands for forming a Swarm cluster and scaling it across multiple nodes. You can add additional machines to the swarm using the docker swarm join --token ... command provided during initialization.

Provisioning Shared Infrastructure Services

In real-world environments, stateful services such as databases and message brokers are typically provisioned once and shared across deployments. These services are not recreated as part of the CI/CD pipeline, as doing so would introduce unnecessary risk and complexity. Instead, only application services are managed dynamically through CI/CD.

The following example demonstrates the deployment configuration of the shared services.

# infra.yaml

services:
  pgsql:
    image: postgres:17.0
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres" ]
      interval: 2s
      timeout: 120s
      retries: 15
      start_period: 30s
    volumes:
      - postgresql_data:/var/lib/postgresql/data:Z
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 240s
    networks:
      - backend_net

  redis:
    image: redis:8.2
    ports:
      - 6379:6379
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      interval: 2s
      timeout: 120s
      retries: 15
      start_period: 30s
    volumes:
      - redis_data:/data
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 240s
    networks:
      - backend_net

  rabbitmq:
    image: rabbitmq:4.2.2-management-alpine
    ports:
      - 15672:15672
    healthcheck:
      test: [ "CMD", "rabbitmq-diagnostics", "-q", "ping" ]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 20s
    networks:
      - backend_net
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--providers.docker.swarmmode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--ping=true"
      - "--ping.entrypoint=web"
      - "--log.level=INFO"
    ports:
      - "80:80"
    healthcheck:
      test: [ "CMD", "wget", "--spider", "-q", "http://localhost/ping" ]
      interval: 5s
      timeout: 2s
      retries: 5
      start_period: 10s
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - ingress_net

volumes:
  postgresql_data:
    driver: local

  redis_data:
    driver: local

  rabbitmq_data:
    driver: local

networks:
  ingress_net:
    external: true
  backend_net:
    external: true

There are a few important points worth mentioning about this configuration. Since shared infrastructure services and feature-branch applications are deployed as separate stacks, Docker Swarm would normally create isolated default networks for each deployment. However, in this setup, application services must be able to access the shared infrastructure, and Traefik must be able to route incoming requests to feature-branch deployments.

To enable this, we define two shared overlay networks:

  • ingress_net for routing external traffic from Traefik to application services

  • backend_net for communication between application services and shared backend services

Because these networks are marked as external, they must be created manually before deploying any stacks.

$ docker network create --driver overlay ingress_net && \
  docker network create --driver overlay backend_net

ac6il04xh6p5fct8kg6h5k4dv
nxbqa4tl9urpcc94b8xm53eyv

Once the networks are created, you can deploy the shared infrastructure stack as shown below:

$ docker stack deploy -c infra.yaml infra --detach

Creating service infra_traefik
Creating service infra_pgsql
Creating service infra_redis
Creating service infra_rabbitmq

Deploying Feature-Branch Application Services

With the shared infrastructure in place, we can now deploy the application itself as isolated review environments. In this section, we deploy a feature-branch instance of the application and expose it through Traefik using hostname-based routing.

The following stack definition describes how a single feature branch is deployed as an isolated application service. Branch-specific values, such as the branch name (CI_COMMIT_REF_NAME), are injected at deployment time by the CI tool.

# app.yaml
services:
  app:
    image: my-app:${CI_COMMIT_REF_NAME}
    environment:
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
      RABBITMQ_VHOST: ${CI_COMMIT_REF_NAME}
      RABBITMQ_USER: ${CI_COMMIT_REF_NAME}_user
      RABBITMQ_PASSWORD: YourStrongPassword
      POSTGRESQL_HOST: pgsql
      POSTGRESQL_PORT: 5432
      POSTGRESQL_DBNAME: app_${CI_COMMIT_REF_NAME}
      POSTGRESQL_USER: postgres
      POSTGRESQL_PASSWORD: postgres
      REDIS_HOST: redis
      REDIS_PORT: 6379
      CACHE_KEY_PREFIX: "${CI_COMMIT_REF_NAME}:"
    networks:
      - backend_net
      - ingress_net
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=ingress_net"
        - "traefik.http.routers.app-${CI_COMMIT_REF_NAME}.rule=Host(`${CI_COMMIT_REF_NAME}.example.com`)"
        - "traefik.http.routers.app-${CI_COMMIT_REF_NAME}.entrypoints=web"
        - "traefik.http.services.app-${CI_COMMIT_REF_NAME}.loadbalancer.server.port=8080"

networks:
  ingress_net:
    external: true
  backend_net:
    external: true

CI/CD Integration

Connecting the CI Environment to Docker Swarm

To deploy services into Docker Swarm, the CI environment must be able to interact with the Swarm manager. This is typically achieved by running a CI agent, such as a GitLab Runner, on a Swarm manager node or in an environment with access to the Docker API. Once configured, the CI pipeline can build images and deploy stacks directly into the cluster using standard Docker commands.
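
As an example, the sketch below shows how such a pipeline could be laid out in GitLab CI, assuming a runner with the shell executor registered on the Swarm manager node. The stage and job names are illustrative, and provision-backends.sh is a hypothetical wrapper around the provisioning commands shown in the following sections.

# .gitlab-ci.yml (illustrative sketch)

stages:
  - package
  - provision
  - deploy

package:
  stage: package
  script:
    - docker build --build-arg CI_COMMIT_REF_NAME=${CI_COMMIT_REF_NAME} -t my-app:${CI_COMMIT_REF_NAME} .
    - docker push my-app:${CI_COMMIT_REF_NAME}

provision:
  stage: provision
  script:
    # Hypothetical helper wrapping the database and RabbitMQ provisioning commands below.
    - ./ci/provision-backends.sh

deploy:
  stage: deploy
  script:
    - docker stack deploy -c app.yaml app-${CI_COMMIT_REF_NAME} --detach
  environment:
    name: review/${CI_COMMIT_REF_NAME}
    url: http://${CI_COMMIT_REF_NAME}.example.com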

Adding a Packaging Step to the CI/CD Pipeline

To build the application image, the CI pipeline runs the docker build command and pushes the resulting image to a container registry. Make sure to consult your CI tool’s documentation to determine which environment variable is provided to represent the current branch name.

$ docker build \
  --build-arg CI_COMMIT_REF_NAME=${CI_COMMIT_REF_NAME} \
  -t my-app:${CI_COMMIT_REF_NAME} .
...
$ docker push my-app:${CI_COMMIT_REF_NAME}

Adding a Provisioning Step to the CI/CD Pipeline

Since each feature branch is deployed with isolated resources—such as a dedicated PostgreSQL database and a RabbitMQ virtual host—the CI/CD pipeline must perform a provisioning step before deploying the application service. This step prepares the required backend resources so the application can start successfully.

Creating the PostgreSQL Database

The following script retrieves the running PostgreSQL container backing the infra_pgsql service and creates a branch-specific database:

export SERVICE_ID=$(docker service ps infra_pgsql -q)
export CONTAINER_ID=$(docker inspect ${SERVICE_ID} --format '{{.Status.ContainerStatus.ContainerID}}')
docker exec ${CONTAINER_ID} createdb -U postgres app_${CI_COMMIT_REF_NAME} || true

This command is intentionally idempotent: if the database already exists, the pipeline continues without failing.

After the database is created, the CI pipeline executes database migrations against the newly created database before deploying the application. This ensures that each feature branch starts with a clean and predictable schema.
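
The exact migration command depends on the tool in use. As a minimal sketch, assuming the project declares the Flyway Maven plugin and the Swarm host publishes PostgreSQL on port 5432 (as in infra.yaml), the pipeline could run something like:

mvn -B flyway:migrate \
  -Dflyway.url=jdbc:postgresql://<swarm-host>:5432/app_${CI_COMMIT_REF_NAME} \
  -Dflyway.user=postgres \
  -Dflyway.password=postgres

Projects using Liquibase or application-managed migrations would substitute the equivalent command here.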

Creating the RabbitMQ Virtual Host and User

Similarly, RabbitMQ resources must be created inside the running RabbitMQ container. The following commands create a virtual host and a dedicated user for the feature branch:

export SERVICE_ID=$(docker service ps infra_rabbitmq -q)
export CONTAINER_ID=$(docker inspect ${SERVICE_ID} --format '{{.Status.ContainerStatus.ContainerID}}')
docker exec ${CONTAINER_ID} rabbitmqctl add_vhost ${CI_COMMIT_REF_NAME}
docker exec ${CONTAINER_ID} rabbitmqctl add_user ${CI_COMMIT_REF_NAME}_user YourStrongPassword
docker exec ${CONTAINER_ID} rabbitmqctl set_user_tags ${CI_COMMIT_REF_NAME}_user
docker exec ${CONTAINER_ID} rabbitmqctl set_permissions -p ${CI_COMMIT_REF_NAME} \
  ${CI_COMMIT_REF_NAME}_user \
  ".*" ".*" ".*"

This setup ensures that each feature branch has its own isolated messaging context without interfering with other deployments.

By provisioning branch-specific resources as part of the CI/CD pipeline, the application remains environment-agnostic, shared infrastructure services remain long-lived, and feature-branch deployments become fully automated and reproducible.

This approach closely mirrors how review environments are implemented in real-world systems.

Deploying the Feature-Branch Stack from CI

Once the application image is built and the required backend resources are provisioned, the final step in the pipeline is deploying the feature-branch stack into Docker Swarm. This is done by reusing the same stack definition and injecting branch-specific values at deployment time.

The following command deploys the application stack for the current feature branch:

$ docker stack deploy \
  -c app.yaml \
  app-${CI_COMMIT_REF_NAME} \
  --detach

Cleanup Step

Most CI tools provide a mechanism to define cleanup tasks that run when a feature branch is merged or removed. For example, GitLab CI supports this through environment settings such as auto_stop_in and on_stop jobs. Refer to the official documentation of your CI solution to identify the appropriate mechanism and configure it to execute the required cleanup commands, such as dropping the previously created database and removing branch-specific RabbitMQ resources.
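
As a rough sketch, assuming the same service and resource names used throughout this article, such a cleanup job could run the following commands:

docker stack rm app-${CI_COMMIT_REF_NAME}

export SERVICE_ID=$(docker service ps infra_pgsql -q)
export CONTAINER_ID=$(docker inspect ${SERVICE_ID} --format '{{.Status.ContainerStatus.ContainerID}}')
docker exec ${CONTAINER_ID} dropdb -U postgres --if-exists app_${CI_COMMIT_REF_NAME}

export SERVICE_ID=$(docker service ps infra_rabbitmq -q)
export CONTAINER_ID=$(docker inspect ${SERVICE_ID} --format '{{.Status.ContainerStatus.ContainerID}}')
docker exec ${CONTAINER_ID} rabbitmqctl delete_user ${CI_COMMIT_REF_NAME}_user
docker exec ${CONTAINER_ID} rabbitmqctl delete_vhost ${CI_COMMIT_REF_NAME}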

Configuring Wildcard DNS for Review Environments

To make feature-branch review environments accessible via unique hostnames, a wildcard DNS record is required. Instead of creating individual DNS records for each branch, a single wildcard record is configured to route all subdomains to the Docker Swarm instance.

For example, the following DNS record maps all subdomains of example.com to the Swarm manager node:

A *.example.com → <swarm-manager-ip>

With this configuration in place, requests to ${CI_COMMIT_REF_NAME}.example.com are forwarded to Traefik, which then routes the request to the corresponding application service based on hostname rules defined in the deployment labels.

Demo Time!

The example application included in this post exposes a simple endpoint that returns the currently deployed application version. Let’s call the root endpoint to verify that the request is correctly routed to the feature-branch deployment.

$ curl MF-443.example.com

Current version is: MF-443

If you are testing locally, you can achieve the same result by explicitly setting the Host header in the request:

$ curl -H "Host: MF-443.example.com" http://127.0.0.1

Current version is: MF-443

Conclusion

By combining containerization, externalized configuration, and declarative deployments, feature branches can be promoted to fully isolated review environments with minimal overhead. This approach enables fast feedback cycles while keeping shared infrastructure stable and manageable. Although the examples in this article use Docker Swarm, the same principles apply to other orchestration platforms and can be adapted to fit different team sizes and operational constraints.

As always, you can find the code example in the following repository.

Thanks for reading!

Credits:
