abrarmoiz

CI/CD pipelines: An application developer's perspective

This blog presents an application developer's view of CI/CD pipelines, with an example of an AKS build and deployment using GitHub Actions.

A simplified view of the CI pipeline helps developers relate to it, makes the pipeline more readable, and lets them fix issues in it quickly. A CI pipeline can be thought of as an automated list of the steps any developer would follow to get their code built and pushed into an artifact repository.

Consider a situation where there is no DevOps team around, or you have just moved into a DevOps team from a development background.

Below is a quick view of the steps a developer would take and their matching GitHub Actions automation steps. (Similar actions can be found in any other CI/CD tool, such as Azure DevOps.)


1. Code Build Phase

In this first step, our developer wants to build the code and save the build output in a safe location. Traditionally, depending on the language, the output could be a WAR, JAR, ZIP, or DLL file, and it would be saved in a versioned format.

Since in this case we deal with a containerized output, i.e. a container image, the developer needs to save the image in an image registry such as Docker Hub, ACR, or ECR.

First the developer needs a machine to run the build on; it can be a Windows or Linux machine, or a VM:

runs-on: ubuntu-latest

Next the developer would want to checkout the code

- name: Code checkout
  uses: actions/checkout@v3
  with:
    ref: ${{ github.head_ref }}
    fetch-depth: 0


github.head_ref is a GitHub Actions context variable holding the source branch of the pull request that triggered the workflow.

The developer now needs to authenticate to the container registry, in our case Azure Container Registry (ACR), where the code in the form of container images will reside:

- name: Login to Azure Container Registry
  uses: docker/login-action@v2
  with:
    registry: ${{ secrets.ACR_URL }}
    username: ${{ secrets.ACR_USERNAME }}
    password: ${{ secrets.ACR_PASSWORD }}


Since we are dealing with ACR credentials, they are better stored as secrets.

Refer to the GitHub documentation for using secrets.

The developer would then run a build, i.e. a docker build, and push the created Docker image into the aforementioned container registry:

- name: Build docker image and push to registry
  uses: docker/build-push-action@v4
  with:
    context: ${{ inputs.PROJECT_PATH }}
    file: ${{ inputs.DOCKERFILE_PATH }}
    push: true
    build-args: |
      build_id=${{ steps.vars.outputs.sha_short }}
    tags: ${{ secrets.ACR_URL }}/${{ steps.vars.outputs.project_name }}:${{ steps.vars.outputs.sha_short }}
    cache-from: type=registry,ref=${{ secrets.ACR_URL }}/${{ steps.vars.outputs.project_name }}:latest
    cache-to: type=inline



In this action, steps is a context internal to GitHub Actions used to refer to the outputs of previously defined steps (here, a step with id: vars).
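The build-and-push step above references a step with id: vars that is not shown. A minimal sketch of such a step, assuming the short commit SHA is used as the image tag and the repository name as the image name (both assumptions for this sketch), could be:

```yaml
- name: Set build variables
  id: vars
  run: |
    # short commit SHA, exposed as steps.vars.outputs.sha_short
    echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
    # repository name without the owner, exposed as steps.vars.outputs.project_name
    echo "project_name=${GITHUB_REPOSITORY##*/}" >> "$GITHUB_OUTPUT"
```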

With the above list of steps you have got a basic CI pipeline running. Always remember:
CI = checking out code, then building/compiling it with its dependencies.

Basic container build

We have an artifact at this stage. A responsible developer, however, would want to run the test cases to ensure a new PR or commit doesn't break existing functionality.


2. Test Phase

The developer again needs a machine to run the tests; it can be a Windows or Linux machine, or a VM:

runs-on: ubuntu-latest

The developer would again want to check out the code:

- name: Code checkout
  uses: actions/checkout@v3
  with:
    ref: ${{ github.head_ref }}
    fetch-depth: 0

The developer now needs to set up the local environment to run the tests and pick the test suite to execute:

- name: Execute unit tests
  if: inputs.TESTCASE_INPUT != ''
  working-directory: ${{ inputs.TESTCASES_FOLDER }}/${{ inputs.TESTCASE_INPUT }}


In this step, inputs refers to the variables passed into the GitHub Actions workflow.

Set up the language environment and install the dependencies:
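Assuming a Python project (as the following steps suggest), the runtime version can be pinned with the official setup action before installing anything; the version number here is only an example:

```yaml
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: "3.11"   # example version, pick what your project needs
```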

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt

Now execute the tests:

- name: Run unit tests
  run: |
    pip install pytest pytest-cov
    pytest tests.py --doctest-modules --junitxml=junit/test-results.xml --cov=com --cov-report=xml --cov-report=html


At this stage we realize why the team lead and the architects have been after us to increase code coverage. Remember, good code coverage lets you catch errors early; we have heard that many times before.
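If the team wants the pipeline itself to enforce that coverage, pytest-cov can fail the job below a threshold; a sketch, where the 80% figure is an arbitrary assumption:

```yaml
- name: Enforce coverage threshold
  run: |
    # fail this step (and the job) if total coverage drops below 80%
    pytest tests.py --cov=com --cov-fail-under=80
```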

Python test setup

3. Code Deploy Phase

Finally, the developer needs to run this application in the cloud or on premises. In our case it's going to be an AKS cluster on Azure. The developer pulls the image pushed into the registry and runs it on the compute layer. The compute layer could be an Azure App Service or a VM; here we consider an AKS cluster.

Assumption: the AKS cluster and the database the application relies on are already set up; we deal only with delivery of the application code.

The developer again needs a VM or machine to deploy the code from:

runs-on: ubuntu-latest

Log in to the cloud environment, in our case Azure:

- name: Azure Login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

The developer needs to identify the right AKS cluster where the application (in the form of a Docker image) will run. This is done by setting the AKS context, identifying the resource group, subscription, and AKS cluster name:

- name: Set AKS Context
  uses: azure/aks-set-context@v3
  with:
    resource-group: ${{ secrets.RESOURCE_GROUP }}
    cluster-name: ${{ secrets.CLUSTER_NAME }}

Authenticate to the ACR container registry:

- name: Login to Azure Container Registry
  uses: docker/login-action@v2
  with:
    registry: ${{ secrets.ACR_URL }}
    username: ${{ secrets.ACR_USERNAME }}
    password: ${{ secrets.ACR_PASSWORD }}

Finally, refresh the AKS deployment with the latest version of the container image.

This involves installing the kubectl tool, configuring kubeconfig from the secrets, and running the kubectl apply commands:


- name: Set up Kubectl
  uses: azure/k8s-set-context@v1
  with:
    kubeconfig: ${{ secrets.KUBECONFIG }}
- name: Deploy
  run: |
    kubectl apply -f kubernetes/deployment.yaml
    kubectl apply -f kubernetes/service.yaml


The above step could also be accomplished using helm charts.
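A Helm-based deploy step might look like the sketch below, where the chart path ./charts/app and release name myapp are hypothetical:

```yaml
- name: Deploy with Helm
  run: |
    # upgrade the release if it exists, install it otherwise
    helm upgrade --install myapp ./charts/app \
      --set image.repository=${{ secrets.ACR_URL }}/myapp \
      --set image.tag=${{ github.sha }}
```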

Deployment in AKS

The entire CI/CD pipeline can therefore be viewed, at a basic level, as a set of smaller jobs that automate a developer's day-to-day tasks: building code, running unit tests, and deploying to a cloud server instead of running on their laptop.

Each of the main sections can be considered a GitHub Actions job, and every developer activity we went through is a step. Different CI/CD implementations may call each of these by a different name; a job, step, or trigger can be referred to and represented with different syntax in different tools. Refer to this link for a comparison of different CI/CD tools.

However, our pipeline is not production ready! The major differences between these steps and a CI/CD pipeline fit for a production environment are:

a. Optimizing steps:

  1. Remove duplicate steps between jobs, e.g. the code checkout in the build and test jobs can be reduced to one.
  2. Add dependencies between stages to ensure one stage completes before the next starts, e.g. deploy cannot start until build is completed.
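In GitHub Actions, these inter-job dependencies are expressed with `needs`; a minimal sketch with placeholder steps:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build steps from phase 1 go here"
  test:
    needs: build          # waits for build to complete
    runs-on: ubuntu-latest
    steps:
      - run: echo "test steps from phase 2 go here"
  deploy:
    needs: test           # deploy cannot start until test succeeds
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy steps from phase 3 go here"
```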


b. Adding security scanners to the CI/CD pipeline:

Various measures are implemented to ensure the final output of the CI pipeline is free from vulnerabilities. This is done by scanning the code and the code artifact with a few additional steps in the CI pipeline; it is a quality control applied to the CI pipeline. For this, we can add standard GitHub Actions (or any CI/CD tool's equivalents) that address the following:

  1. Static Application Security Testing (SAST), e.g. SonarQube, SonarCloud
  2. Software Composition Analysis (SCA), e.g. Snyk
  3. Dynamic Application Security Testing (DAST), e.g. OWASP ZAP

c. Event-based execution:

The execution of the entire pipeline needs to be tied to an event; it could be a PR approval, a commit to a branch, a tag creation, etc.
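In GitHub Actions this glue is the `on:` block at the top of the workflow file; for example, to run on pull requests and pushes to main (the branch name is an assumption):

```yaml
on:
  pull_request:
    branches: [ main ]   # run on PRs targeting main
  push:
    branches: [ main ]   # run on commits merged to main
```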

d. Continuous delivery first, then continuous deployment:

In the real world, the deploy stage first evolves into human-monitored continuous delivery, and only if the processes mature enough does it move on to continuous deployment, something DevOps teams need to ponder over.

Below is an overall view of our jobs in the form of a CI/CD pipeline after implementing all these CI/CD best practices.

Overall view
