This post originally appeared in jaxenter.
In part one of this tutorial, we learned about the basics of Docker and Continuous Integration and Delivery. We used CI/CD to build and test a Java Spring microservice application, and the end result was a ready-to-deploy Docker image.
Docker and Java Spring Boot [Part.1: Continuous Integration]
Tomas Fernandez for Semaphore ・ Jan 22 '20
In this second (and final) part, we’ll bring Kubernetes into the picture. Kubernetes will provide scalability and zero-downtime upgrades.
Adding a profile to the application
You may recall from the first part of the tutorial that our application has a glaring flaw: the lack of data persistence—our precious data is lost across reboots. Fortunately, this is easily fixed by adding a new profile with a real database.
First, edit the Maven manifest file (pom.xml) to add a production profile inside the <profiles> … </profiles> tags:
<profile>
  <id>production</id>
  <properties>
    <maven.test.skip>true</maven.test.skip>
  </properties>
</profile>
Then, between the <dependencies> … </dependencies> tags, add the MySQL driver as a dependency:
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <scope>runtime</scope>
</dependency>
Finally, create a production-only properties file at src/main/resources/application-production.properties:
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL55Dialect
spring.datasource.url=jdbc:mysql://${DB_HOST:localhost}:${DB_PORT:3306}/${DB_NAME}
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASSWORD}
We must avoid putting secret information such as passwords in GitHub. We’ll use environment variables and decide later how we’ll pass them along.
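For a quick local smoke test of the new profile, the same variables can be exported in the shell before starting the application. This is only a sketch: the jar path and the credential values below are placeholders, not real secrets.

```shell
# Placeholder values for illustration only; real credentials come from
# your database service and must never be committed to the repository.
export SPRING_PROFILES_ACTIVE=production
export DB_HOST=127.0.0.1
export DB_PORT=3306
export DB_NAME=demodb
export DB_USER=demouser
export DB_PASSWORD=changeme

# Spring resolves ${DB_HOST:localhost} and the other placeholders from
# these variables at startup; the jar name depends on your build:
# java -jar target/semaphore-demo-java-spring.jar
```

The java command is left commented out because the exact jar name comes from your Maven build.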
Now our application is ready for prime time.
Prepping the Cloud
In this section, we’ll set up the database and Kubernetes clusters. Log in to your favorite cloud provider and create a MySQL database and a Kubernetes cluster.
Database
Create a MySQL database with a relatively recent version (i.e., 5.7 or 8.0+). You can install your own server or use a managed cloud database. For example, AWS has RDS and Aurora, and Google Cloud has Cloud SQL.
Once you have created the database service:
- Create a database called "demodb".
- Create a user called "demouser" with, at least, SELECT, INSERT, and UPDATE permissions.
- Take note of the database IP address and port.
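The steps above can be sketched in SQL. This assumes a MySQL 5.7+/8.0 server and admin access; the 'changeme' password and the '%' host wildcard are placeholders to adapt to your provider's requirements:

```sql
-- Sketch only: adjust the password and allowed host to your environment
CREATE DATABASE demodb;
CREATE USER 'demouser'@'%' IDENTIFIED BY 'changeme';
GRANT SELECT, INSERT, UPDATE ON demodb.* TO 'demouser'@'%';
FLUSH PRIVILEGES;
```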
Once that is set up, create the user tables:
CREATE TABLE `hibernate_sequence` (
  `next_val` bigint(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `users` (
  `id` bigint(20) NOT NULL,
  `created_date` datetime DEFAULT NULL,
  `email` varchar(255) DEFAULT NULL,
  `modified_date` datetime DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Kubernetes
The quickest way to get started with Kubernetes is through a managed cluster from a cloud provider (such as Elastic Kubernetes Service on AWS or Google Kubernetes Engine on Google Cloud). I’ll try to keep this tutorial vendor-agnostic so you, dear reader, have the freedom to choose whichever alternative best suits your needs.
As for cluster node sizes: this is a microservice, so its requirements are minimal. The most modest machine will suffice, and you can adjust the number of nodes to your budget. If you want rolling updates (that is, upgrades without downtime), you’ll need at least two nodes.
Working with Kubernetes
On paper, Kubernetes deployments are simple and tidy: you specify the desired final state and let the cluster manage itself. And they can be, once we understand how Kubernetes thinks about:

- Pods: a pod is a team of containers. Containers in a pod are guaranteed to run on the same machine.
- Deployments: a deployment monitors pods and manages their allocation. We can use deployments to scale the number of pods up or down and to perform rolling updates.
- Services: services are the entry points to our application. A service exposes a fixed public IP address for our end users, and it can do port mapping and load balancing.
- Labels: labels are short key-value pairs we can attach to any resource in the cluster. They are useful for organizing and cross-referencing objects in a deployment. We’ll use labels to connect the service with the pods.

Did you notice that I didn’t list containers as an item? While it is possible to start a single container in Kubernetes, it’s best to think of containers as the tires on a car: they’re only useful as parts of the whole.
Let’s start by defining the service. Create a manifest file called deployment.yml with the following contents:
# deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-java-spring-lb
spec:
  selector:
    app: semaphore-demo-java-spring-service
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
Under the spec tree, we find the service definition: a network load balancer that forwards HTTP traffic to port 8080.
Add the deployment to the same file, separated by three hyphens (---):
# deployment.yml (continued)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-java-spring-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: semaphore-demo-java-spring-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: semaphore-demo-java-spring-service
    spec:
      containers:
        - name: semaphore-demo-java-spring-service
          image: ${DOCKER_USERNAME}/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID
          imagePullPolicy: Always
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: DB_HOST
              value: "${DB_HOST}"
            - name: DB_PORT
              value: "${DB_PORT}"
            - name: DB_NAME
              value: "${DB_NAME}"
            - name: DB_USER
              value: "${DB_USER}"
            - name: DB_PASSWORD
              value: "${DB_PASSWORD}"
          readinessProbe:
            initialDelaySeconds: 60
            httpGet:
              path: /login
              port: 8080
The template.spec tree defines the containers that make up a pod. Our application has only one container, referenced by its image. Here we also pass along the environment variables.

The total number of pods is controlled with replicas.

The update policy is defined in strategy. A rolling update refreshes the pods in turns, so there is always at least one pod working. The test used to check whether a pod is ready is defined in readinessProbe.

selector, labels and matchLabels work together to connect the service and the deployment. Kubernetes looks for matching labels to combine resources.
You may have noticed that we are using special tags in the Docker image.

In part one of the tutorial, we tagged all our Docker images as latest. The problem with latest is that we lose the ability to version images; old images are overwritten on each build. If we run into difficulties with a release, there is no previous version to roll back to.

Instead of latest, it’s best to use a variable such as $SEMAPHORE_WORKFLOW_ID, which uniquely identifies the image.
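To make the tagging scheme concrete, here is a plain-shell sketch of how the image reference is composed. The two values below are placeholders standing in for what Semaphore injects in a real pipeline:

```shell
# Placeholder values; in the pipeline these come from the dockerhub
# secret and from Semaphore's built-in workflow variables.
DOCKER_USERNAME=example-user
SEMAPHORE_WORKFLOW_ID=0f5a1234-aaaa-bbbb-cccc-1234567890ab

# Every workflow produces a distinct, traceable image reference,
# so older builds remain available for rollbacks:
IMAGE="$DOCKER_USERNAME/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID"
echo "$IMAGE"
```

Because each workflow ID is unique, pushing this tag never overwrites a previous build.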
Getting Ready for Continuous Deployment
In part one, you created a secret with your Docker Hub credentials. Here, you’ll need to repeat the procedure with two more pieces of information.
Database user: a secret that contains your database username, password, and other connection details.
To add the secret:
- On the left navigation menu on Semaphore, click on Secrets next to Configuration.
- Click on Create New Secret.
- Create the secret as shown:
Kubernetes cluster: a secret with the Kubernetes connection parameters. The specific details will depend on how and where the cluster is running.
For example, if a kubeconfig file was provided, you can upload it to Semaphore by repeating the same steps you did for the database secret:
Deployment Pipeline
We’re almost done. The only thing left is to create a Deployment Pipeline to:
- Generate manifest: populate the manifest with the real environment variables.
- Make a deployment: send the desired final state to the Kubernetes cluster.
Open the Workflow Builder again by clicking on the Edit the workflow button:
Let’s call the new pipeline “Deploy to Kubernetes”. Click on the first block on the new pipeline and fill in the job details as follows:
- Secrets: select all the secrets:
  - dockerhub
  - production-db-auth
  - production-k8s-auth
- Environment Variables: add any special variables required by your cloud provider, such as DEFAULT_REGION or PROJECT_ID.
- Prologue: add a checkout command and fill in any cloud-specific login, install, or activation commands (gcloud, aws, etc.).
- Job: generate the manifest and make the deployment with the following commands:
cat deployment.yml | envsubst | tee deploy.yml
kubectl apply -f deploy.yml
Note that the substituted manifest is written to a new file (deploy.yml); piping the output back into deployment.yml would truncate the file while it is still being read.
Depending on where and how the cluster is running, you may need to adapt the code above. If you only need a kubeconfig file to connect to your cluster, great, this should be enough. Some cloud providers, however, need additional helper programs.
For instance, AWS requires the aws-iam-authenticator when connecting with the cluster. For more information, consult your cloud provider documentation.
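To make the "generate manifest" step concrete: envsubst replaces every ${VAR} reference with the value of the matching environment variable, so after the job runs, the env: section of the manifest holds literal values. For example (illustrative, made-up values):

```yaml
# Resolved fragment after envsubst (illustrative values only)
env:
  - name: DB_HOST
    value: "10.0.0.12"
  - name: DB_PORT
    value: "3306"
  - name: DB_NAME
    value: "demodb"
```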
Since we are abandoning the latest tag, we need to change the “Docker Build” pipeline: the Docker images must be tagged with the same workflow ID across all pipelines.
Scroll left to the “Build and deploy Docker container” block and replace the last two commands in the job:
docker build \
--cache-from "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest \
--build-arg ENVIRONMENT="${ENVIRONMENT}" \
-t "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest .
docker push "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest
With these two commands:
docker build \
--cache-from "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest \
--build-arg ENVIRONMENT="${ENVIRONMENT}" \
-t "$DOCKER_USERNAME"/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID .
docker push "$DOCKER_USERNAME"/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID
Your first deployment
At this point, you’re ready to do your first deployment.
Press Run the workflow and click on Start:
Allow a few minutes for the pipelines to do their work. The workflow will stop at the “Docker build” block. Click on Promote to deploy to Kubernetes:
Once the workflow is complete, Kubernetes takes over:
You can monitor the process from your cloud console or using kubectl:
$ kubectl get deployments
$ kubectl get pods
To retrieve the external service IP address, check your cloud dashboard page or use kubectl:
$ kubectl get services
That’s it. The service is running, and you now have a complete CI/CD process to deploy your application.
Wrapping Up
You’ve learned how Semaphore and Docker can work together to automate Kubernetes deployments. Feel free to fork the demo project and adapt it to your needs. Kubernetes developers are in high demand and you just did your first Kubernetes deployment, way to go!
Learn more about CI/CD for Docker and Kubernetes with our other step-by-step tutorials:
Delicious Kubernetes in 4 Steps
Tomas Fernandez for Semaphore ・ Dec 12 '19
Would you like a vendor-specific tutorial?
- Google Cloud: Kubernetes in 10 Minutes
- AWS: CI/CD to AWS Kubernetes
Interested in CI/CD and Kubernetes? We’re working on a free ebook, sign up to receive it as soon as it’s published.
Did you find the post useful? Have any questions? Leave a comment below 👀
Thanks for reading!