In this tutorial, we'll delve into containerisation concepts, focusing on Docker, and explore deploying your Spring Boot application from a previous tutorial. By the tutorial's conclusion, you'll grasp Docker and Kubernetes concepts and gain hands-on experience deploying your application within a cloud infrastructure.
This tutorial is an extension of the previous tutorial where we explained how to write advanced aggregation queries in MongoDB using the Spring Boot framework. We will use the same GitHub repository to create this tutorial's deployment files.
We'll start by learning about containers, like digital packages that hold software. Then, we'll dive into Kubernetes, a system for managing those containers. Finally, we'll use Kubernetes to set up MongoDB and our Spring application, seeing how they work together.
Prerequisites
- A Spring Boot application running on your local machine
- An Elastic Kubernetes Service (EKS) cluster deployed on AWS using eksctl
- A MongoDB Atlas account
Understanding containerisation
As a software developer, you have likely run into a situation where an application works perfectly on your local machine, yet several of its features appear broken on the client's machine. This is where the concept of containers comes in.
In simple terms, a container is a portable computing environment that holds everything an application needs to run. Packaging an application so that it can run in any environment is known as containerisation.
Containerisation is a form of virtualisation where an application, along with all its components, is packaged into a single container image. These containers operate in their isolated environment within the shared operating system, allowing for efficient and consistent deployment across different environments.
Advantages of containerising the application
Portability
The idea of “write once and run anywhere” encapsulates the essence of containers, enabling applications to seamlessly transition across diverse environments, thereby enhancing their portability and flexibility.
Efficiency
When configured properly, containers make efficient use of the available resources, and because they are isolated, they can carry out their work without interfering with one another. This allows a single host to serve many functions, so containerised applications run efficiently and effectively.
Better security
Because containers are isolated from one another, you can be confident that your applications are running in their self-contained environment. That means that even if the security of one container is compromised, other containers on the same host remain secure.
Comparing containerisation and traditional virtualisation methods
| Aspect | Containers | Virtual Machines |
|---|---|---|
| Abstraction Level | OS level virtualisation | Hardware-level virtualisation |
| Resource Overhead | Minimal | Higher |
| Isolation | Process Level | Stronger |
| Portability | Highly Portable | Less Portable |
| Deployment Speed | Fast | Slower |
| Footprint | Lightweight | Heavier |
| Startup Time | Almost instant | Longer |
| Resource Utilisation | Efficient | Less Efficient |
| Scalability | Easily Scalable | Scalable, but with resource overhead |
Understanding Docker
Docker provides a platform to develop, ship, and run containers. It separates the application from the infrastructure and makes it portable. Docker packages the application into lightweight containers that can run anywhere, without worrying about the underlying infrastructure.
Docker containers have minimal overhead compared to traditional virtual machines, as they share the host OS kernel and only include necessary dependencies. Docker facilitates DevOps practices by enabling developers to build, test, and deploy applications in a consistent and automated manner. You can read more about Docker containers and the steps to install them on your local machine from their official documentation.
Understanding Kubernetes
Kubernetes, often called K8s, is an open-source orchestration platform that automates containerised applications' deployment, scaling, and management. It abstracts away the underlying infrastructure complexity, allowing developers to focus on building and running their applications efficiently.
It simplifies the deployment and management of containerised applications at scale. Its architecture, components, and core concepts form the foundation for building resilient, scalable, and efficient cloud-native systems. Kubernetes has proven helpful in typical use cases such as microservices architectures, hybrid and multi-cloud deployments, and DevOps practices built around continuous deployment.
Kubernetes Components
The K8s environment follows a controller-worker node architecture, so two kinds of nodes manage the communication:
- The master node is responsible for controlling the cluster and making decisions for it.
- Worker nodes run the application workloads, receiving instructions from the master node and reporting status back to it.
Other components of the Kubernetes cluster are:
- Pods: the smallest deployable units, each wrapping one or more containers
- ReplicaSets: keep a stable set of identical pods running at all times
- Services: expose a set of pods behind a stable network endpoint
- Volumes: provide storage that outlives individual containers
- Namespaces: partition a cluster into isolated virtual sub-clusters
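As a concrete illustration of the smallest of these units, here is a minimal Pod manifest; the names and image below are placeholders for illustration, not part of this tutorial's deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx:1.25   # placeholder image, not used later in this tutorial
      ports:
        - containerPort: 80
```

A Pod like this is rarely created directly in production; ReplicaSets and Deployments manage pods from a template like the `template:` section you'll see later in this tutorial.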
Atlas Kubernetes Operator
Consider a use case where a Spring application running locally is connected to a database deployed on an Atlas cluster. Later, your organisation introduces the Kubernetes environment and plans to deploy all applications on cloud infrastructure.
The question then arises: how will you connect your Kubernetes application to the Atlas cluster running in a different environment? This is where the Atlas Kubernetes Operator comes into the picture.
This operator allows you to manage the Atlas resources in the Kubernetes infrastructure.
For this tutorial, we will deploy the operator on the Elastic Kubernetes Service on the AWS infrastructure.
Step 1: Deploy an EKS cluster using eksctl
eksctl create cluster \
--name MongoDB-Atlas-Kubernetes-Operator \
--version 1.29 \
--region ap-south-1 \
--nodegroup-name linux-nodes \
--node-type t2.2xlarge \
--nodes 2
Step 2: Verify the namespaces in the new cluster
kubectl get ns
Example output
NAME STATUS AGE
default Active 18h
kube-node-lease Active 18h
kube-public Active 18h
kube-system Active 18h
Export Environment Variables
export VERSION=v2.2.0
export ORG_ID=<your-organisations-id>
export PUBLIC_API_KEY=<your-public-key>
export PRIVATE_API_KEY=<your-private-key>
Apply operator:
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/$VERSION/deploy/all-in-one.yaml
Create secret:
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$ORG_ID" \
--from-literal="publicApiKey=$PUBLIC_API_KEY" \
--from-literal="privateApiKey=$PRIVATE_API_KEY" \
-n mongodb-atlas-system
Label secret:
kubectl label secret mongodb-atlas-operator-api-key \
atlas.mongodb.com/type=credentials \
-n mongodb-atlas-system
project.yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: project-ako
spec:
  name: atlas-kubernetes-operator
  projectIpAccessList:
    - cidrBlock: "0.0.0.0/0"
      comment: "Allowing access to database from everywhere (only for Demo!)"
deployment.yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
  name: my-atlas-cluster
spec:
  projectRef:
    name: project-ako
  deploymentSpec:
    clusterType: REPLICASET
    name: "cluster0"
    replicationSpecs:
      - zoneName: AP-Zone
        regionConfigs:
          - electableSpecs:
              instanceSize: M10
              nodeCount: 3
            providerName: AWS
            regionName: AP_SOUTH_1
            priority: 7
user.yaml
Before applying the user resource, create and label the secret that holds the database user's password:
kubectl create secret generic the-user-password \
--from-literal="password=<password for your user>"
kubectl label secret the-user-password atlas.mongodb.com/type=credentials
With the secret in place, define the user in user.yaml:
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: my-database-user
spec:
  roles:
    - roleName: "readWriteAnyDatabase"
      databaseName: "admin"
  projectRef:
    name: project-ako
  username: theuser
  passwordSecretRef:
    name: the-user-password
Apply:
kubectl apply -f project.yaml
kubectl apply -f deployment.yaml
kubectl apply -f user.yaml
Deploying the Spring Boot application in the cluster
Build JAR
mvn clean package
Build Docker image
docker build -t mongodb_spring_tutorial:docker_image . --load
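The build command above assumes a Dockerfile at the root of the repository. If yours does not have one yet, a minimal sketch for running the Spring Boot fat JAR could look like the following; the base image and JAR path are assumptions, not taken from the tutorial's repository, so adjust them to your project's Java version and build output:

```dockerfile
# Minimal sketch: run the fat JAR produced by 'mvn clean package'.
# Base image and paths are assumptions; adjust to your project.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```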
Push image
docker tag mongodb_spring_tutorial:docker_image <your_docker_username>/mongodb_spring_tutorial
docker push <your_docker_username>/mongodb_spring_tutorial
app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot-application
  template:
    metadata:
      labels:
        app: springboot-application
    spec:
      containers:
        - name: spring-app
          image: <your_docker_username>/mongodb_spring_tutorial
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_DATA_MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: atlas-kubernetes-operator-cluster0-theuser
                  key: connectionStringStandardSrv
            - name: SPRING_DATA_MONGODB_DATABASE
              value: sample_supplies
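The secretKeyRef above reads the connection string from the secret the Atlas operator creates, named after the project, cluster, and user (here, atlas-kubernetes-operator-cluster0-theuser). Kubernetes stores secret values base64-encoded, so if you inspect one with `kubectl get secret <name> -o yaml`, decode the value as sketched below; the encoded string here is made up purely for illustration:

```shell
# Secret values appear base64-encoded in 'kubectl get secret ... -o yaml' output.
# This encoded value is made up for illustration only:
ENCODED='bW9uZ29kYitzcnY6Ly9leGFtcGxlLm1vbmdvZGIubmV0'
echo "$ENCODED" | base64 --decode
# prints: mongodb+srv://example.mongodb.net
```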
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-app-service
spec:
  selector:
    app: springboot-application
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
Apply:
kubectl apply -f app-deployment.yaml
kubectl apply -f service.yaml
Get External IP
kubectl get svc
Or:
EXTERNAL_IP=$(kubectl get svc | grep spring-app-service | awk '{print $4}')
echo $EXTERNAL_IP
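To see what that pipeline extracts, here it is run over sample output in the shape `kubectl get svc` produces; the hostname and IPs below are made up for illustration:

```shell
# Sample output shaped like 'kubectl get svc'; all values are made up.
SAMPLE='NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP                PORT(S)          AGE
spring-app-service   LoadBalancer   10.100.23.11   a1b2c3.elb.amazonaws.com   8080:30080/TCP   5m'
# grep keeps the service's row; awk prints the 4th column, the EXTERNAL-IP.
EXTERNAL_IP=$(echo "$SAMPLE" | grep spring-app-service | awk '{print $4}')
echo "$EXTERNAL_IP"
# prints: a1b2c3.elb.amazonaws.com
```

Note that on AWS the EXTERNAL-IP column shows the load balancer's DNS hostname, and it may read `<pending>` for a minute or two while the ELB is provisioned.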
Troubleshooting
Check Pods
kubectl describe pods <pod-name> -n <namespace>
kubectl get pods -n <namespace>
Check Nodes
kubectl get nodes
Check Logs
kubectl logs -f <pod-name> -n <namespace>
Check Services
kubectl describe svc <service-name> -n <namespace>
Conclusion
Throughout this tutorial, we've covered essential aspects of modern application deployment, focusing on containerisation, Kubernetes orchestration, and MongoDB management with Atlas Kubernetes Operator.
With this knowledge, you're well-prepared to architect and manage sophisticated cloud-native applications effectively.