Running services locally has never been entirely painless, and it becomes genuinely challenging as soon as a service integrates with external systems.
In this blog post, we will walk through how to set up a Spring Boot application that connects to multiple systems, such as a message broker and a database, while using spring-boot-docker-compose to manage these dependencies as part of the application lifecycle.
Why & when to consider it?
Using live environments during development has its own set of pros & cons.
✅ Common technologies supported by Spring Boot / Spring Cloud, such as Kafka, MySQL, Firebase, or RabbitMQ, can and should be substituted with local containers whenever possible.
⚠️ On the flip side, mocking the behaviour of upstream services introduces complexity and should be considered carefully.
Use case
Consider a Blog Application that consumes Kafka events from a topic (blog.updates), calculates the sentiment of the given blog post, and pushes the final update to a RabbitMQ queue. For this exercise, let's pretend the publisher is also the consumer and eventually saves the processed result in an MSSQL Server database.
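The actual BlogProcessor implementation isn't shown in this post, but the sentiment step can be pictured with a minimal, self-contained sketch like the one below. The word lists and scoring logic are pure assumptions for illustration, not the sample project's code:

```java
import java.util.Set;

// Hypothetical sentiment scoring, standing in for the real BlogProcessor:
// counts positive words minus negative words in the post body.
public class SentimentSketch {
    private static final Set<String> POSITIVE = Set.of("great", "good", "love");
    private static final Set<String> NEGATIVE = Set.of("bad", "poor", "hate");

    public static int score(String text) {
        int score = 0;
        // Split on non-word characters so punctuation doesn't skew the count.
        for (String word : text.toLowerCase().split("\\W+")) {
            if (POSITIVE.contains(word)) score++;
            if (NEGATIVE.contains(word)) score--;
        }
        return score;
    }
}
```

In the real pipeline, a Kafka listener would feed each incoming event through a method like this before publishing the result to RabbitMQ.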
Setup
Let's take a look at the complete picture of the project before diving into each part.
.
├── pom.xml 👈🏻
└── src
    ├── main
    │   ├── java
    │   │   └── ch
    │   │       └── migrosonline
    │   │           └── blog
    │   │               ├── BlogApplication.java
    │   │               ├── context
    │   │               │   └── RabbitMQContext.java
    │   │               ├── kafka
    │   │               │   ├── BlogPostUpdateEventListener.java
    │   │               │   └── model
    │   │               │       └── BlogPostUpdateEvent.java
    │   │               ├── processor
    │   │               │   └── BlogProcessor.java
    │   │               ├── rabbitmq
    │   │               │   ├── model
    │   │               │   │   └── ProcessedBlogPostMessage.java
    │   │               │   ├── RabbitMQListener.java
    │   │               │   └── RabbitMQProducer.java
    │   │               └── repository
    │   │                   ├── BlogRepository.java
    │   │                   └── model
    │   │                       └── BlogEntity.java
    │   └── resources
    │       ├── application.yml 👈🏻
    │       ├── application-LOCAL.yml 👈🏻
    │       ├── compose.yml 👈🏻
    │       └── init.sql
    └── test
This guide assumes prior familiarity with some building blocks of the Spring ecosystem, such as the @SpringBootApplication main class or @Configuration classes. What is important to focus on here are the resources marked with a '👈🏻'.
pom.xml
The key dependency to add to the dependency management system is spring-boot-docker-compose. Further details about it can be found in the Spring Boot reference documentation.
<dependencies>
    ...
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-docker-compose</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>
    ...
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
spring-boot-docker-compose is primarily a development tool, so Spring Boot will not include this dependency in the repackaged application (the fat .jar). In other words, it won't be available in production builds unless explicitly specified.
When starting the application locally via an IDE plugin or the Spring Boot Maven plugin, all runtime dependencies will be available on the classpath.
./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-Dspring.profiles.active=LOCAL"
Application properties
The application.yml file provides the default, production-grade configuration. By using Spring profiles, we can later define a LOCAL profile that overrides and leverages the compose.yml definition.
application.yml
spring:
  application:
    name: blog
  datasource:
    url: <URI_TO_A_REMOTE_DATABASE>
    username: SA
    password: ${DATABASE_PASSWORD}
  docker.compose.enabled: false
  kafka:
    bootstrap-servers: <URI_TO_A_REMOTE_KAFKA_SERVER>
    consumer:
      group-id: ch.migrosonline
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties.spring.json.type.mapping: blogpost-update-event:ch.migrosonline.blog.kafka.model.BlogPostUpdateEvent
    security:
      protocol: SASL_SSL
    properties.sasl:
      mechanism: PLAIN
      jaas.config: |
        org.apache.kafka.common.security.plain.PlainLoginModule required username='kafka' password='${KAFKA_PASSWORD}'
  rabbitmq:
    host: <URI_TO_A_REMOTE_RABBITMQ_SERVER>
    port: 5672
    username: rabbitmq
    password: ${RABBIT_PASSWORD}
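The spring.json.type.mapping entry above tells Spring Kafka which local class the blogpost-update-event type header deserializes into. As a rough sketch, the event could be a simple record — the field names here are assumptions for illustration, not the sample project's actual payload:

```java
// Hypothetical shape of the event mapped from the blogpost-update-event
// type header; the real class lives in ch.migrosonline.blog.kafka.model.
public record BlogPostUpdateEvent(String id, String title, String content) { }
```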
application-LOCAL.yml
spring:
  datasource:
    url: jdbc:sqlserver://localhost;encrypt=false
  docker.compose:
    enabled: true
    file: classpath:/compose.yml
    stop:
      command: down # 'docker compose down' on application stop
      timeout: 10s
  kafka:
    bootstrap-servers: localhost:29092
    security.protocol: PLAINTEXT
  rabbitmq:
    host: localhost
  sql:
    init:
      mode: always
      schema-locations: classpath:init.sql
By default, lifecycle management (the spring.docker.compose.lifecycle-management property) runs docker compose up when the application starts and docker compose stop when it stops. These commands can be customised by redefining the start or stop command, as shown in the application-LOCAL.yml file.
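For reference, the relevant properties could be spelled out as follows. This is a sketch based on the Spring Boot property names; the values shown are the framework defaults:

```yaml
spring:
  docker:
    compose:
      lifecycle-management: start-and-stop  # or: none, start-only
      start:
        command: up    # default; 'start' is the other option
      stop:
        command: stop  # default; overriding with 'down' also removes containers
        timeout: 10s
```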
compose.yml
The compose.yml file is fully customisable and describes the services that substitute the remote systems.
In the example below, a single-node Kafka cluster (with a topic and partitions created by an init container), an MSSQL database service, and a RabbitMQ service are defined.
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    ports:
      - "29092:9094"
    environment:
      CLUSTER_ID: clusterId
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 0
      KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,EXTERNAL://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 0@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka-initializer:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      kafka-topics --bootstrap-server kafka:9092 --create --if-not-exists --topic blog.updates --partitions 2 --replication-factor 1
      kafka-topics --bootstrap-server kafka:9092 --list
      "
  mssql:
    image: mcr.microsoft.com/mssql/server:latest
    container_name: mssql
    ports:
      - "1433:1433"
    environment:
      ACCEPT_EULA: Y
      MSSQL_SA_PASSWORD: ${DATABASE_PASSWORD}
  rabbitmq:
    image: rabbitmq:latest
    container_name: rabbitmq
    ports:
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: rabbitmq
      RABBITMQ_DEFAULT_PASS: ${RABBIT_PASSWORD}
Alternative approach
Testcontainers is an alternative way to substitute external services, not only in tests but also in development mode. If you are interested in learning more, see the official Spring documentation.
Conclusion
We saw how to seamlessly wire production-grade technologies together with Docker and spring-boot-docker-compose, and why and when this setup should be considered.
The full source code can be found ➡️ here ⬅️.
Happy coding! 👨🏻‍💻👩🏼‍💻
Written by Gabriel Dinant, Staff Software Engineer @ MigrosOnline
