Luis Parraguez for AWS Community Builders


Accelerating the Implementation of DevOps Culture in Your Organization with Amazon CodeCatalyst

Good morning everyone!

DevOps culture has become increasingly popular among software development organizations as it fosters a collaborative approach between development and operations teams, enabling the continuous delivery of high-quality software. However, effectively implementing the DevOps culture can be a complex challenge.

This is where the Amazon CodeCatalyst service can play a relevant role. In this post, we’ll explore how Amazon CodeCatalyst can help organizations accelerate and improve the implementation of DevOps culture by providing a complete collaboration and automation platform.

Amazon CodeCatalyst Overview

Amazon CodeCatalyst is an Amazon Web Services (AWS) service that provides an integrated platform to help teams collaborate, automate processes, and adopt DevOps culture best practices. It offers capabilities for code versioning, continuous integration and continuous delivery (CI/CD) pipeline management, issue tracking, configuration management, and more.

  • Enhanced collaboration: One of the keys to the success of the DevOps culture is effective collaboration between development and operations teams. Amazon CodeCatalyst offers advanced features to promote collaboration, such as centralized code repositories, integration with communication tools (such as Slack), and code review capabilities. These features allow teams to work together efficiently, share knowledge, and collaborate on projects with ease.

  • Process automation: Automating processes is key to accelerating the implementation of the DevOps culture. Amazon CodeCatalyst provides comprehensive capabilities for automation, enabling you to create CI/CD pipelines to automate software build, test, and deployment. This reduces reliance on time-consuming manual processes, increasing the efficiency and speed of software delivery.

  • Configuration management: Configuration management is an essential part of the DevOps culture. Amazon CodeCatalyst provides capabilities to efficiently manage the configuration of infrastructure and application environments. It supports the use of popular tools, such as AWS CloudFormation and Terraform, to provision and manage infrastructure resources as code. This ensures infrastructure consistency and traceability and simplifies the management of environments at different stages of the software lifecycle.

  • Issue tracking: Issue tracking and effective project management are crucial to the successful implementation of the DevOps culture. Amazon CodeCatalyst provides built-in capabilities for issue tracking, allowing teams to record, prioritize, and track issues and development tasks. This improves visibility and collaboration around issues, making them easier to resolve quickly and efficiently.

  • Security and compliance: Security and compliance are important considerations when implementing the DevOps culture. Amazon CodeCatalyst provides capabilities for access control, security monitoring, and integration with other security-focused AWS services, such as AWS Identity and Access Management (IAM) and AWS CloudTrail. This ensures that organizations can implement appropriate security practices and meet regulatory requirements.

Sharing Lessons Learned Using Amazon CodeCatalyst

I would like to share with you lessons learned from applying Amazon CodeCatalyst in a project that combined Multi-Cloud and DevOps requirements. The final objective of this project was the implementation of a Static Serverless Website in a Multi-Cloud architecture, with the following main requirements:

  • Implement the static website using Serverless storage resources in AWS (S3), Azure (Blob Storage) and OCI (Buckets).
  • Create centralized repositories for the infrastructure as code (IaC) and the static website code, integrated with continuous integration and continuous delivery (CI/CD) pipelines.

  • Centralize automated provisioning, configuration, and management of infrastructure across multiple Clouds using Amazon CodeCatalyst and Terraform.

  • Centralize the management and automation of the static website's CI/CD pipelines across multiple Clouds using Amazon CodeCatalyst.

  • Communicate with the AWS Cloud through native Amazon CodeCatalyst resources.

  • Communicate with Azure and OCI using CLIs (Command Line Interfaces), leveraging compute resources from Amazon CodeCatalyst.

We can see the solution architecture applied in this project in the following diagram:

[Figure: solution architecture diagram]

Let’s now look at the key steps followed in this project:

Step 1 — Preparing the Infrastructure as Code repository

As a first step in using the features of Amazon CodeCatalyst, we must create a “Space” (in our project called “TerraformCodeCatalystLPG”). During the creation of the “Space” we need to specify the AWS account that will be used for billing and for creating the resources in the AWS Cloud through the authorization of IAM roles.

Once the “Space” has been created, we can create a “Project” (in our project called “TerraformCodeCatalyst”):

[Figure: “Project” creation in Amazon CodeCatalyst]

Within the “Project” we find the two groups of functionalities required to meet the project's requirements: “Code” and “CI/CD”.

In the figure below we can see these options on the left, including the detail of the features related to “Code”:

  • Source Repositories: Functionality that allows the creation and version control of code repositories;

  • Pull Requests: Functionality that allows the management of code update requests in the repositories, including support for approval/disapproval and application of updates (Merge);

  • Dev Environments: Functionality that allows the creation of pre-configured development environments that we can use to work with the code of our infrastructure and/or applications.

[Figure: “Code” features in the project navigation]

In the project, we tested the creation of Dev Environments and found options with support for AWS Cloud9 (running in a web browser) and Visual Studio Code, as well as other options with support for JetBrains IDEs. For this project we chose to use a local Visual Studio Code environment in order to reuse previously prepared Terraform code.

Using the Source Repositories feature, we created our first repository (“bootstrapping-terraform-automation-for-amazon-codecatalyst”) to store the Terraform code used for the provisioning of our infrastructure.

Within this repository we first created a folder (“_bootstrap”) to store the code of the base infrastructure required for Terraform to operate with an S3 backend in AWS. This base infrastructure requires the creation of (1) an S3 bucket to store the Terraform state file (terraform.tfstate) that tracks provisioned resources, (2) a DynamoDB table to control concurrent access to the terraform.tfstate file in the case of parallel executions, and (3) the IAM roles and policies required to connect Amazon CodeCatalyst to the AWS account where the resources will be created: one IAM role for the main branch with permission to create resources and another IAM role for the pull request branch with read-only permission.
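
For reference, here is a minimal sketch of what the S3 backend configuration could look like once the “_bootstrap” resources exist. The bucket, key, and table names below are placeholders, not the values used in the project:

# Sketch of the Terraform S3 backend configuration (placeholder names)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # S3 bucket created by the _bootstrap code
    key            = "bootstrap/terraform.tfstate" # object key for the state file
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"        # DynamoDB table used for state locking
    encrypt        = true
  }
}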

Step 2 — Creation of CI/CD workflows to update the Infrastructure as Code

Once the base infrastructure resources have been created, we are ready to create the CI/CD workflows that manage updates to the infrastructure as code of our application. To do this, we must first associate the two previously created IAM roles with Amazon CodeCatalyst so that we can use them in workflows.

As you can see in the figure below, we now use the “CI/CD | Workflows” functionality, selecting the code repository, to create three workflows in the main branch:

  • “TerraformPRBranch”: Workflow that manages the evaluation of updates requested through Pull Requests from a branch. This workflow installs Terraform on an EC2 virtual machine and executes the terraform init, validate, and plan commands to validate the updates made to the infrastructure code;

  • “TerraformMainBranch”: Workflow that manages the automatic application of the approved updates in the code of the main branch of our repository. In a similar way, this workflow executes the terraform init, validate, plan, and apply commands to apply the updates made to the infrastructure code;

  • “TerraformMainBranch_Destroy”: Workflow that manages the removal of the infrastructure created through the main branch code. This workflow is configured to run manually and executes the terraform init and destroy commands to eliminate the provisioned resources.

[Figure: the three workflows in the “CI/CD | Workflows” view]

As an example, we can see below the YAML code of the “TerraformMainBranch” workflow:

# Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
Name: TerraformMainBranch
SchemaVersion: "1.0"

# Here we are defining the trigger for this workflow: a push to the main branch. If no trigger is included, the workflow can only be executed manually
Triggers:
  - Type: Push
    Branches:
      - main

# Here we are defining the actions that will be executed for this workflow
Actions:
  Terraform-Main-Branch-Apply:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: Main-Branch-Infrastructure
          Name: "XXXXXXXXXXXX"
      Name: TerraformBootstrap
    Configuration: 
      Steps:
        - Run: export TF_VERSION=1.5.2 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        - Run: terraform plan -no-color -input=false
        - Run: terraform apply -auto-approve -no-color -input=false
    Compute:
      Type: EC2
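
For comparison, a hedged sketch of the companion “TerraformPRBranch” workflow is shown below. It follows the same structure but is triggered by pull requests targeting “main”, uses the read-only IAM role created during bootstrap, and stops at terraform plan (no apply). The role name and the trigger events shown here are assumptions based on the main-branch workflow above:

# Hedged sketch of the pull-request workflow (role and connection names are assumptions)
Name: TerraformPRBranch
SchemaVersion: "1.0"

# Trigger on pull requests that target the main branch
Triggers:
  - Type: PULLREQUEST
    Branches:
      - main
    Events:
      - OPEN
      - REVISION

Actions:
  Terraform-PR-Branch-Plan:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: PR-Branch-Infrastructure   # read-only role created in the _bootstrap step
          Name: "XXXXXXXXXXXX"             # account connection name (redacted)
      Name: TerraformBootstrap
    Configuration:
      Steps:
        - Run: export TF_VERSION=1.5.2 && wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip && rm terraform.zip && mv terraform /usr/bin/terraform && chmod +x /usr/bin/terraform
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        - Run: terraform plan -no-color -input=false
    Compute:
      Type: EC2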

Step 3 — Execution of CI/CD workflows to update the Infrastructure as Code

Next, we created a Branch (“test-pr-workflow”) that was used to validate the updates to the Terraform code of our infrastructure.

The Terraform files of the application were organized into groups: the first focused on connecting to AWS, Azure, and OCI (multicloud_provider.tf and multicloud_variables.tf), and another three for provisioning the storage resources in each Cloud (for example, aws_storage.tf and aws_variables.tf). To provision this infrastructure we also used the previously created S3 backend, but storing the terraform.tfstate file under a different key in the bucket.
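
To illustrate, a simplified sketch of what multicloud_provider.tf could contain is shown below. The variable names and the backend key are illustrative assumptions; in the project, the credential values were kept outside the repository:

# Illustrative sketch of multicloud_provider.tf (variable names are assumptions)
terraform {
  required_providers {
    aws     = { source = "hashicorp/aws" }
    azurerm = { source = "hashicorp/azurerm" }
    oci     = { source = "oracle/oci" }
  }

  # Same S3 backend as before, but with a different key for this state file
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "static-website/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = var.aws_region
}

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
}

provider "oci" {
  tenancy_ocid = var.oci_tenancy_ocid
  user_ocid    = var.oci_user_ocid
  fingerprint  = var.oci_fingerprint
  private_key  = var.oci_private_key
  region       = var.oci_region
}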

Using Visual Studio Code Insiders, we synchronized the Terraform files of the infrastructure with our repository in Amazon CodeCatalyst using the “test-pr-workflow” branch.

Having updated the files in the “test-pr-workflow” branch, we created a Pull Request to start the “TerraformPRBranch” workflow for this branch. In the figure below you can see the data for the creation of a Pull Request, including the source branch “test-pr-workflow” and the target branch “main”, as well as the specification of mandatory and optional reviewers of the requested changes, which is how we applied collaboration within the DevOps team.

[Figure: Pull Request creation, showing source branch, target branch, and reviewers]

The creation of the Pull Request triggered the “TerraformPRBranch” workflow on the “test-pr-workflow” branch. After its completion, we were able to verify through the logs of the terraform plan command that the updates to the infrastructure could be applied successfully, and, that being the case, we authorized the merge of the updates into the “main” branch.

By authorizing the merge, the “TerraformMainBranch” workflow was initiated, and with it the infrastructure updates defined by the Terraform code were carried out:

[Figure: “TerraformMainBranch” workflow execution]

This demonstrated a full CI/CD cycle of automating infrastructure updates using Amazon CodeCatalyst!!

Step 4 — Preparation of the application code repository (Serverless Static Website in Multi-Cloud architecture)

Similar to Step 1, using the Source Repositories functionality we created the repository for the code of our application (“static-website-repo”), containing the files required for the website:

[Figure: contents of the “static-website-repo” repository]

Step 5 — Creation of CI/CD workflows to update the application

Following the same procedure as before, we created the workflows that store the updates of the static website files in the buckets of each Cloud. Each workflow was configured to sequentially update the three specified environments (Testing, Homologation, and Production), advancing to the next environment only if the update of the previous environment succeeded, as in the sketch below.
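
As a sketch of how this sequencing can be expressed in the workflow YAML, each action can declare a dependency on the previous one so that it only runs when the previous environment has been updated successfully (action names and the elided configuration below are illustrative):

# Sketch of chaining the three environments inside one workflow (illustrative names)
Actions:
  Upload_Testing:
    Identifier: aws/s3-publish@v1.0.5
    # ... configuration for the Testing environment ...
  Upload_Homologation:
    Identifier: aws/s3-publish@v1.0.5
    DependsOn:
      - Upload_Testing        # runs only if the Testing upload succeeds
    # ... configuration for the Homologation environment ...
  Upload_Production:
    Identifier: aws/s3-publish@v1.0.5
    DependsOn:
      - Upload_Homologation   # runs only if the Homologation upload succeeds
    # ... configuration for the Production environment ...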

Let’s now look at the highlights of the preparation of each workflow:

“Upload_to_AWS_S3” Workflow — AWS S3 Bucket Storage

As you can see below in the visual representation of this workflow, we configured the code update (push) event on the “main” branch of the repository as an automatic trigger. This trigger starts the workflow, which consists of three actions of type “aws/s3-publish@v1.0.5”. This action is a native Amazon CodeCatalyst feature that allows you to upload files to an S3 bucket by running commands from an EC2 virtual machine:

[Figure: visual representation of the “Upload_to_AWS_S3” workflow]
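
A hedged sketch of one of these three actions is shown below (a fragment that would sit under the workflow's “Actions:” section). The action name, source path, bucket name, environment name, and configuration field names are assumptions for illustration, not the project's real values:

# Sketch of a single aws/s3-publish action (names and values are assumptions)
  Upload_to_S3_Testing:
    Identifier: aws/s3-publish@v1.0.5
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: Main-Branch-Infrastructure
          Name: "XXXXXXXXXXXX"                        # account connection name (redacted)
      Name: StaticWebsiteTesting
    Compute:
      Type: EC2
    Configuration:
      SourcePath: ./website                           # folder in the repository with the static site files
      DestinationBucketName: my-static-website-testing-bucket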

“Upload_to_Azure_Blob” Workflow — Azure Blob Storage

As you can see below in the visual representation of this workflow, we also configured the code update event on the “main” branch of the repository as an automatic trigger. This trigger starts the workflow, which is also composed of three actions.

According to the project requirements, communication with Azure was performed using the Azure CLI (Command Line Interface). To enable this, the technical alternative was to apply Amazon CodeCatalyst's ability to create the processing environment from a custom container image. The custom image was the containerized version of the Azure CLI (image: mcr.microsoft.com/azure-cli). Authentication for CLI use was performed by leveraging the Amazon CodeCatalyst Secrets functionality to keep the access information secure.

[Figure: visual representation of the “Upload_to_Azure_Blob” workflow]
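
A hedged sketch of one of these actions is shown below (again as a fragment of the workflow's “Actions:” section). The Container settings, secret names, storage account name, and source folder are assumptions; the CLI calls mirror standard Azure CLI usage for uploading files to the “$web” container of a static website:

# Sketch of an Azure upload action using the containerized Azure CLI (assumed settings)
  Upload_to_Azure_Testing:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      # Assumption: the action's steps run inside the public Azure CLI image
      Container:
        Registry: DockerHub
        Image: mcr.microsoft.com/azure-cli
      Steps:
        # Authenticate with a service principal whose credentials are stored as CodeCatalyst secrets
        - Run: az login --service-principal -u ${Secrets.azure_client_id} -p ${Secrets.azure_client_secret} --tenant ${Secrets.azure_tenant_id}
        # Upload the website files to the $web container used by Azure Blob static websites
        - Run: az storage blob upload-batch --account-name mystaticwebtesting --destination '$web' --source ./website --overwrite
    Compute:
      Type: EC2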

“Upload_to_OCI_Bucket” Workflow — OCI Buckets Storage

As you can see below in the visual representation of this workflow, we also configured the code update event on the “main” branch of the repository as an automatic trigger. This trigger starts the workflow, which is also composed of three actions.

According to the requirements of the project, the communication with OCI was also carried out using the OCI CLI (Command Line Interface). To enable this, the technical alternative applied was to run the containerized version of the OCI CLI on the EC2 virtual machine (image: ghcr.io/oracle/oci-cli:latest) using Docker commands in the version already available in the EC2 processing environment. Authentication for CLI use was also performed by leveraging the Amazon CodeCatalyst Secrets functionality to keep the access information secure.

[Figure: visual representation of the “Upload_to_OCI_Bucket” workflow]
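
A hedged sketch of one of these actions is shown below (as a fragment of the workflow's “Actions:” section). The secret names, the way the OCI CLI configuration is reconstructed, and the bucket name are assumptions; the bulk-upload command mirrors standard OCI CLI usage:

# Sketch of an OCI upload action running the containerized OCI CLI with Docker (assumed settings)
  Upload_to_OCI_Testing:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # Recreate the OCI CLI configuration and API key from CodeCatalyst secrets (assumed secret names)
        - Run: mkdir -p oci-config && echo "${Secrets.oci_config}" > oci-config/config && echo "${Secrets.oci_api_key}" > oci-config/oci_api_key.pem
        # Run the containerized OCI CLI to upload the website files to the target bucket
        - Run: docker run --rm -v "$(pwd)/oci-config:/oracle/.oci" -v "$(pwd)/website:/website" ghcr.io/oracle/oci-cli:latest os object bulk-upload --bucket-name static-website-testing --src-dir /website --overwrite
    Compute:
      Type: EC2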

Step 6 — Execution of CI/CD workflows to update the application code

The application code was updated by applying the same methodology used for the infrastructure as code repository (creating a branch for Pull Requests). After this process was carried out and the merge was authorized, a Push event was generated on the “main” branch, which triggered the three workflows above and started the update of the buckets in AWS, Azure, and OCI. The images in the sequence below show the result of the execution of the workflows:

[Figures: execution results of the “Upload_to_AWS_S3”, “Upload_to_Azure_Blob”, and “Upload_to_OCI_Bucket” workflows]

And as a result of the processing we had our Static Serverless Website in Multi-Cloud architecture up and running on AWS, Azure and OCI powered by an end-to-end DevOps Process supported by Amazon CodeCatalyst!!

[Figure: the static website up and running on AWS, Azure, and OCI]

Conclusion

Efficient implementation of DevOps culture is critical to the success of software development organizations. Amazon CodeCatalyst provides a comprehensive platform to accelerate and enhance this implementation.

With advanced collaboration, process automation, configuration management, and issue tracking capabilities, Amazon CodeCatalyst enables teams to collaborate more efficiently, improve speed of delivery, and ensure software quality. By adopting Amazon CodeCatalyst, organizations can drive their DevOps journey quickly and efficiently, leveraging the benefits of an agile, collaborative approach to software development.

And, as we saw in the project presented above, Amazon CodeCatalyst also has the necessary resources to work with solutions in Multi-Cloud architecture in partnership with Terraform and Docker. I suggest you experiment with the solution too!!

We are available to support you in this process and in the continuity of your Cloud Journey!

Let’s meet again in our next post!
