
bright inventions

Posted on • Originally published at brightinventions.pl

Create CI/CD pipeline in GitLab with AWS CDK, Docker, Spring Boot and Gradle

A CI/CD process is the backbone of every high-performing team. It facilitates developing, testing, and deploying an app.

Let’s define the behaviour of our CI/CD pipeline:

  • the app is built and tested with every commit pushed to the repo
  • the app is deployed to the stage environment for commits pushed to the master branch (after the successful build and test phase)
  • every pipeline on the master branch has an option to deploy the app to the production environment

Deployment will consist of:

  • building a Docker image
  • pushing the Docker image to ECR (Amazon Elastic Container Registry)
  • creating the AWS infrastructure using AWS CDK

Build and test Spring Boot app with Gradle

Setting up Spring Boot app

For this guide, I generated a Spring Boot app with Spring Initializr.

Additionally, two dependencies were added:

implementation("org.springframework.boot:spring-boot-starter-web")
implementation("org.springframework.boot:spring-boot-starter-actuator")

Running the app and requesting

curl -XGET http://localhost:8080/actuator/health


should return

{"status":"UP"}

GitLab CI configuration

Initializing GitLab CI/CD is very easy. Just create a .gitlab-ci.yml file in the root directory of the project.

# name of the Docker image, the Docker executor uses to run CI/CD jobs
image: openjdk:17-alpine

# Stages, which define when to run the jobs. For example, stages that run tests after stages that compile the code.
stages:
  - Build
  - Test

# build job in Build stage
build:
  stage: Build
  script:
    - ./gradlew bootJar
# The paths keyword determines which files to add to the job artifacts. All paths to files and directories are relative to the repository where the job was created.
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week

# test job in Test stage
test:
  stage: Test
  needs:
    - build
  script:
    - ./gradlew check

In the configuration, two stages were defined:

  • Build with job build — compiles our code and saves the resulting jar file in the job artifacts

  • Test with job test — runs all tests in the project

That simple configuration allows us to introduce a continuous integration process in our development.

To speed up our builds we can use Gradle cache. Here is a nice article on how to do it with GitLab.
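A minimal sketch of such a cache configuration (the cache key and paths here are assumptions; adjust them to your project):

```yaml
# keep Gradle's dependency cache inside the project so GitLab can cache it
variables:
  GRADLE_USER_HOME: $CI_PROJECT_DIR/.gradle

cache:
  # invalidate the cache when the Gradle wrapper version changes
  key:
    files:
      - gradle/wrapper/gradle-wrapper.properties
  paths:
    - .gradle/wrapper
    - .gradle/caches
```

With this in place, the build and test jobs reuse downloaded dependencies instead of fetching them on every run.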

Creating a Docker image and pushing it to ECR

Before building and pushing Docker images to ECR, we must first create an ECR repository (in the region that we will be using).

Amazon ECR

Second, we must provide the proper CI/CD variables from AWS in GitLab:

AWS_ACCOUNT_ID, AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY

I created a user with admin privileges and use its credentials in GitLab. This is only for the purpose of this guide and is not a recommended technique for real projects.

CI/CD Settings Variables
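For the image-push part alone, a narrower policy for the CI user might look like this (a sketch, not a complete policy; deploying with CDK additionally needs CloudFormation and related permissions, and the repository name is the one assumed in this guide):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEcrLogin",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Sid": "AllowPushPullOnRepo",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:*:*:repository/ci-cd-demo-app"
    }
  ]
}
```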

After those steps, we can define a Dockerfile and a job in our pipeline that will build and push the Docker image.

FROM openjdk:17-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

In the Dockerfile, the path to the jar file must point to the jar file generated by the build job.

image: openjdk:17-alpine

stages:
  - Build
  - Test
  - Deploy Stage

build:
  stage: Build
  script:
    - ./gradlew bootJar
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week

test:
  stage: Test
  needs:
    - build
  script:
    - ./gradlew check

docker_image:
  image: docker:stable
  stage: Deploy Stage
  needs:
    - build
    - test
  variables:
    IMAGE_NAME: ci-cd-demo-app
    TAG_LATEST: $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_NAME:latest
    TAG_COMMIT: $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_NAME:$CI_COMMIT_SHORT_SHA
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:dind
  before_script:
    - apk add --no-cache aws-cli
  script:
    - aws ecr get-login-password --region $AWS_REGION |
      docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
    - docker pull $TAG_LATEST || true
    - docker build --cache-from $TAG_LATEST -t $TAG_COMMIT -t $TAG_LATEST .
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: on_success
    - when: manual

The Deploy Stage stage was added, and with it the docker_image job. Because of the needs keyword, the job can run only after build and test, and it has access to the artifacts produced by build and test. The created image is tagged with the commit SHA value and with "latest". The rules define that the job runs automatically on the master branch but can also be triggered manually.

After running the docker_image job, an image with our app should be visible in the ECR repository.

ECR repository
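We can also double-check from the command line (a sketch; it assumes the AWS CLI is configured with the same credentials and region, and uses the repository name from this guide):

```shell
# list the image tags pushed to the ci-cd-demo-app repository
aws ecr describe-images \
  --repository-name ci-cd-demo-app \
  --region "$AWS_REGION" \
  --query 'imageDetails[].imageTags'
```

The output should contain both the short commit SHA tag and "latest".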

Creating AWS infrastructure using AWS CDK

Install the AWS CDK with npm install -g aws-cdk, then run the following commands in the project root:

mkdir infrastructure
cd infrastructure
cdk init app --language typescript

Those commands will initialize a CDK project in the infrastructure directory.

Our main class is bin/infrastructure.ts. Here we will be creating stacks. Stacks will be defined in the lib directory.

Let’s create our infrastructure stack:

import {Duration, RemovalPolicy, Stack, StackProps} from 'aws-cdk-lib';
import {Construct} from 'constructs';
import {AwsLogDriver, Cluster, ContainerImage} from "aws-cdk-lib/aws-ecs";
import {ApplicationLoadBalancedFargateService} from "aws-cdk-lib/aws-ecs-patterns";
import {LogGroup} from "aws-cdk-lib/aws-logs";
import {Vpc} from "aws-cdk-lib/aws-ec2";
import * as ecr from "aws-cdk-lib/aws-ecr";

export class InfrastructureStack extends Stack {

  private readonly TAG_COMMIT: string = process.env.TAG_COMMIT || 'latest'
  private readonly ECR_REPOSITORY_NAME: string = "ci-cd-demo-app"

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new Vpc(this, projectEnvSpecificName("VPC"), {
      natGateways: 1
    })

    const cluster = new Cluster(this, projectEnvSpecificName('Cluster'), {
      vpc: vpc
    });

    const service: ApplicationLoadBalancedFargateService = new ApplicationLoadBalancedFargateService(this, projectEnvSpecificName("application-lb-fargate-service"), {
      serviceName: projectEnvSpecificName("fargate-service"),
      cluster: cluster,
      cpu: 512,
      desiredCount: 2,
      listenerPort: 8080,
      memoryLimitMiB: 1024,
      publicLoadBalancer: true,
      taskImageOptions:  {
        containerName: projectEnvSpecificName("ecs-container"),
        image: ContainerImage.fromEcrRepository(ecrRepositoryForService(this,this.ECR_REPOSITORY_NAME), this.TAG_COMMIT),
        containerPort: 8080,
        logDriver: new AwsLogDriver({
          logGroup: new LogGroup(this, projectEnvSpecificName("log-group"), {
            logGroupName: projectEnvSpecificName("app-service"),
            removalPolicy: RemovalPolicy.DESTROY
          }),
          streamPrefix: projectEnvSpecificName(),
        })
      }
    })

    service.targetGroup.configureHealthCheck({
      path: "/actuator/health",
      port: "8080",
      healthyHttpCodes: "200"
    })

    const scalableTaskCount = service.service.autoScaleTaskCount({
      minCapacity: 2,
      maxCapacity: 4
    });

    scalableTaskCount.scaleOnCpuUtilization(projectEnvSpecificName("service-auto-scaling"), {
      targetUtilizationPercent: 50,
      scaleInCooldown: Duration.seconds(60),
      scaleOutCooldown: Duration.seconds(60),
    })
  }
}

export function ecrRepositoryForService(scope: Construct, serviceName: string) {
  return ecr.Repository.fromRepositoryName(scope, `${serviceName} repository`, serviceName)
}

const DEPLOY_ENV: DeployEnv = process.env.DEPLOY_ENV || 'test';

export enum KnownDeployEnv {
  prod = 'prod',
  stage = 'stage',
  test = 'test'
}

export type DeployEnv = KnownDeployEnv | string

export const PROJECT_NAME = "backend";

export function projectEnvSpecificName(name: string = ""): string {
  const prefix = PROJECT_NAME.replace('_', '-') + "-" + DEPLOY_ENV;
  if (name.startsWith(prefix)) {
    return name
  } else {
    return `${prefix}-${name}`
  }
}

A lot of stuff is going on here. Let’s explain a little. 😉

First, we create a VPC and an ECS cluster. Then we use ApplicationLoadBalancedFargateService. This construct sets up a Fargate service running on the ECS cluster, fronted by a public Application Load Balancer. The important thing here is passing our app image from ECR with a tag equal to the commit SHA value (the image with the commit SHA tag was pushed in the previous step).

image: ContainerImage.fromEcrRepository(ecrRepositoryForService(this,this.ECR_REPOSITORY_NAME), this.TAG_COMMIT)

We also identify every resource with projectEnvSpecificName. It helps us distinguish AWS resources across different environments and projects.
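As a quick illustration of the naming convention, here is how the helper behaves (a standalone sketch mirroring projectEnvSpecificName from the stack, with DEPLOY_ENV hard-coded to test for the example; in the stack it comes from process.env.DEPLOY_ENV):

```typescript
const PROJECT_NAME = "backend";
const DEPLOY_ENV = "test"; // hard-coded here for the example

function projectEnvSpecificName(name: string = ""): string {
  const prefix = PROJECT_NAME.replace("_", "-") + "-" + DEPLOY_ENV;
  // avoid double-prefixing names that are already environment-specific
  return name.startsWith(prefix) ? name : `${prefix}-${name}`;
}

console.log(projectEnvSpecificName("Cluster"));          // backend-test-Cluster
console.log(projectEnvSpecificName("backend-test-VPC")); // already prefixed, unchanged
```

Passing an already-prefixed name is therefore safe, which keeps the stack code simple.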

In bin/infrastructure.ts we define the creation of the infrastructure stack and the entry point of the CDK app:

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import {InfrastructureStack, projectEnvSpecificName} from '../lib/infrastructure-stack';

async function main() {
    const app = new cdk.App();

    new InfrastructureStack(app, projectEnvSpecificName("app-service"))
}

main().catch(er => {
    console.log(er)
    process.exit(1)
})


Our infrastructure is set up! Now we need to automate our deployment in GitLab CI/CD.

# install necessary dependencies and run our CDK, are used both by deploy_stage and deploy_prod
.deploy:
  image: node:18-alpine
  variables:
# setting TAG_COMMIT variable for CDK
    TAG_COMMIT: $CI_COMMIT_SHORT_SHA
  before_script:
    - cd infrastructure
    - node -v
  script:
    - npm i
    - AWS_REGION=${AWS_REGION:-eu-west-1} DEPLOY_ENV=${DEPLOY_ENV:-test} npm run cdk -- deploy --all --require-approval ${APPROVAL_LEVEL:-never}

deploy_stage:
  stage: Deploy Stage
  extends:
    - .deploy
  variables:
# setting DEPLOY_ENV variable for CDK
    DEPLOY_ENV: stage
  environment:
    name: stage
# job can be run only after success of jobs: test, docker
  needs:
    - test
    - docker_image
# commits on the master branch are automatically deployed to the stage
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: on_success
    - when: manual

deploy_prod:
  stage: Deploy Prod
# job can be run only on master branch
  only:
    - master
  extends:
    - .deploy
  variables:
# setting DEPLOY_ENV variable for CDK
    DEPLOY_ENV: prod
# job can be run only after success of jobs: test, docker, deploy_stage
  needs:
    - test
    - docker_image
    - deploy_stage
  environment:
    name: prod
# manual deployment
  when: manual

# prevent running additional pipelines in merge requests
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always

We have defined the .deploy configuration and the deploy_stage and deploy_prod jobs. Remember to add Deploy Prod to the stages list at the top of the file, as every stage referenced by a job must be declared there.

After the success of the deploy_stage job, we should be able to hit the /actuator/health endpoint. The URL of the service is displayed in the job log and is also available in CloudFormation in the stack outputs.

deploy_stage job log

CloudFormation backend-stage-app-service outputs

curl -XGET http://backe-backe-xrbo7418s6uv-1422097338.eu-central-1.elb.amazonaws.com:8080/actuator/health

{"status":"UP"}

Remember to destroy unused stacks and remove unused images from ECR to reduce your AWS costs!
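A sketch of tearing down an environment with the same npm script used in the deploy job (run from the infrastructure directory; DEPLOY_ENV selects which environment's stacks to destroy):

```shell
cd infrastructure
# destroy every stack of the stage environment; cdk asks for confirmation
DEPLOY_ENV=stage npm run cdk -- destroy --all
```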

Congratulations!

You have created a pipeline for your Spring Boot app with basic AWS infrastructure for two environments!

