Adeolu Oyinlola

How I implement DevOps culture in a webapp - Part 1

In this post, I'm going to show you a real-world solution for setting up an environment that uses DevOps technologies and practices to deploy apps and cloud services/cloud infrastructure to AWS.
I will break this series into two parts, and this is the first one. I will drop the link to the second part at the end of this post.

Full Disclosure - As a DevOps intern, I used the ruralx.africa landing page source code and its initial DevOps practices and technologies, with permission.

Why DevOps?
Imagine you're currently working in an organization that is very monolithic: a ton of bare metal, virtualization, manual application deployments, and old-school practices based on the current team's knowledge of IT.

You're brought into the company and team to make things more modern so the organization can not only succeed but stay ahead of its competition. Management now understands the complexity that comes with staying ahead of the competition and knows that it is necessary; otherwise, the organization will fall behind.

Solution?
The solution in this post is a real-world example of how to sprinkle a little DevOps on existing source code. DevOps is a set of practices that combines software development and IT operations; it aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

Prerequisites
I will be using the following technologies and platforms to set up a DevOps environment.

  • AWS
    AWS will be used to host the application, cloud infrastructure, and any other services we may need to ensure the ruralx app is deployed properly.

  • AWS CLI
    The AWS CLI is a way for you to interact with all AWS services at a programmatic level using the terminal.

  • GitHub
    To store the application and infrastructure/automation code

  • Node.js
    Node.js will be used for the webapp (it is written in JavaScript).

  • Docker
    Create a Docker image
    Store the Docker image in AWS ECR

  • Terraform
    Create an S3 bucket to store Terraform State files
    Create an AWS ECR repository with Terraform
    Create an EKS cluster

  • Kubernetes
    To run the Docker image that's created for the containerized ruralx app. Kubernetes, in this case, EKS, will be used to orchestrate the container.

  • CI/CD
    Use GitHub Actions to create an EKS cluster

  • Automated testing
    Testing Terraform code with Checkov

  • Prometheus
    An open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.

In this first part, I will show you how I apply DevOps practices to continuous development using AWS, Docker, and Terraform. In the second part, I will show you how I apply continuous integration and continuous deployment using GitHub Actions, Kubernetes, and Prometheus.
Let's get to it.

1.0 AWS
Install AWS CLI
The steps to install the AWS CLI are simple and straightforward; follow each step in the AWS documentation. Note that installing basic tools like the AWS CLI, a text editor, Docker, and others is outside the scope of this post, as it would make this post unnecessarily long, so I will assume you have all the prerequisite knowledge mentioned above.
For the AWS CLI documentation, here is the link.

Configure Credentials To Access AWS At The Programmatic Level
The purpose is to configure IAM credentials on your local computer so that you can access AWS at a programmatic level (SDKs, CLI, Terraform, etc.).

  • Open the AWS Management Console and go to IAM. Sign in to the organization account; from the username dropdown at the top-right corner, click Security Credentials.
  • Create a new user or use an existing AWS user. It is highly recommended to create a new user for this access, particularly for an organization project where each team has distinct access to particular assets of the organization, controlled by the root user.

Click Users from the IAM menu bar at the top left.

  • Give the user programmatic access
    From the Security Credentials dashboard, choose the Access Keys dropdown, then click Create New Access Key.

  • Copy the access key and secret key
    It is important to note that once you exit the dialog box, you cannot access that secret key again, so copy it or download and save it. You can always create a new one if it is lost.

Configure The AWS CLI

  • Open a terminal window
    Test whether the AWS CLI is properly installed by running this command:
    aws --version
    If the output is different from the example below, you need to re-install to avoid further complications.

  • Run the configure command
    To configure, run aws configure
    Fill in the Access Key ID and Secret Access Key.
    If you are not sure of the default region name, leave it as it is by pressing Enter.
    For the output format, I advise JSON; press Enter.
    Yes, it's that simple: we are done configuring the AWS CLI.
    However, if you need additional info on installing and configuring the AWS CLI, I recommend this link.
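
Under the hood, aws configure writes two plain-text files in your home directory. This is roughly what they end up looking like (the key values below are placeholders, not real credentials):

```
# ~/.aws/credentials (written by `aws configure`)
[default]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Every tool used later in this series (Terraform, kubectl via the AWS provider chain, the CLI itself) reads the same default profile from these files.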

2.0 VPC - Virtual Private Cloud
Here, I create the cluster VPC with public and private subnets:

  • Open the AWS CloudFormation console by typing CloudFormation inside the services search box.
  • Choose Create stack.
  • Select Template is ready and Amazon S3 URL, then paste the following URL into the text box and choose Next: https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

  • On the Specify Stack Details page, fill out the following and choose Next:
  • Stack name: Choose a stack name for the AWS CloudFormation stack. For example, I call it ruralx-vpc.
  • VpcBlock: Choose a CIDR range for the VPC. Each worker node, pod, and load balancer that I deploy is assigned an IP address from this block. The default value provides enough IP addresses for most implementations, but if it doesn't, I can change it. For more information, see VPC and subnet sizing in the Amazon VPC User Guide. I can also add additional CIDR blocks to the VPC once it's created.
  • PublicSubnet01Block: Specify a CIDR block for public subnet 1. The default value provides enough IP addresses for most implementations.
  • PublicSubnet02Block: Specify a CIDR block for public subnet 2. The default value provides enough IP addresses for most implementations.
  • PrivateSubnet01Block: Specify a CIDR block for private subnet 1. The default value provides enough IP addresses for most implementations.
  • PrivateSubnet02Block: Specify a CIDR block for private subnet 2. The default value provides enough IP addresses for most implementations.
  • (Optional) On the Options page, tag your stack resources. Choose Next.
  • On the Review page, choose Create.
  • When my stack is created, select it in the console and choose Outputs.
  • Record the Security Groups value for the security group that was created. When I add nodes to my cluster, I must specify the ID of the security group. The security group is applied to the elastic network interfaces that are created by Amazon EKS in my subnets that allows the control plane to communicate with my nodes. These network interfaces have Amazon EKS in their description.
  • Record the VpcId for the VPC that was created. I need this when I launch my node group template.
  • Record the SubnetIds for the subnets that were created and whether I created them as public or private subnets. When I add nodes to my cluster, I must specify the IDs of the subnets that I want to launch the nodes into.
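
If you would rather keep this step in code instead of clicking through the console, the same stack can be created with Terraform's aws_cloudformation_stack resource. A minimal sketch, assuming the stack name ruralx-vpc used above:

```hcl
# Sketch: create the same EKS VPC CloudFormation stack from Terraform.
resource "aws_cloudformation_stack" "ruralx_vpc" {
  name         = "ruralx-vpc"
  template_url = "https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml"
}
```

The stack's outputs (SecurityGroups, VpcId, SubnetIds) are then available on the resource's outputs attribute instead of values you copy from the console.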

3.0 Terraform
The purpose of the Terraform section is to create all of the AWS cloud services I'll need from an environment/infrastructure perspective to run the ruralx application.

Create S3 Bucket To Store TFSTATE Files
Here, I will create an S3 bucket that will be used to store Terraform state files.
I created a main.tf file inside a directory and wrote this code:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-ruralx"
  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

The Terraform main.tf will do these:

  1. Create the S3 bucket in the us-east-1 region
  2. Enable versioning on the bucket
  3. Use AES256 encryption
  • Create the bucket by running the following:

terraform init - To initialize the working directory and pull down the provider
terraform plan - To go through a "check" and confirm the configurations are valid
terraform apply - To create the resource
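
One caveat, in case terraform plan complains: the inline versioning and server_side_encryption_configuration blocks above are the AWS provider v3 syntax. On provider v4 and later they moved into standalone resources, roughly like this:

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-ruralx"
}

# Versioning and encryption as separate resources (AWS provider v4+).
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```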

Create an Elastic Container Registry
The idea is to create a repository to store the Docker image that I created for the ruralx app.
Then, I add this to the main.tf:

terraform {
  backend "s3" {
    bucket = "terraform-state-ruralx"
    key    = "ecr-terraform.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_ecr_repository" "ruralx-ecr-repo" {
  name                 = var.ruralx-ecr
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

Also, create a variables.tf file inside the directory, basically to hold the repository name as a variable. I then write this code:

variable "ruralx-ecr" {
  type        = string
  default     = "ruralx"
  description = "ECR repo to store a Docker image"
}

Note that this addition to main.tf will do this:

  1. Use a Terraform backend to store the .tfstate in the S3 bucket.
  2. Use the aws_ecr_repository Terraform resource to create a new repository.
  • Create the repository by running the following:

terraform init - To initialize the working directory and pull down the provider
terraform plan - To go through a "check" and confirm the configurations are valid
terraform apply - To create the resource
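
Optionally, an output makes the new repository's URL easy to grab for the docker tag and docker push steps later (the output name here is my own choice):

```hcl
output "ecr_repository_url" {
  description = "Registry URL for tagging and pushing the Docker image"
  value       = aws_ecr_repository.ruralx-ecr-repo.repository_url
}
```

After terraform apply, the URL is printed at the end of the run and can also be read back at any time with terraform output.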

Create An EKS Cluster and IAM Role/Policy
I am going to create the EKS cluster here, along with the IAM role and policies it needs, by writing the following code:

terraform {
  backend "s3" {
    bucket = "terraform-state-ruralx"
    key    = "eks-terraform-workernodes.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}


# IAM Role for EKS to have access to the appropriate resources
resource "aws_iam_role" "eks-iam-role" {
  name = "ruralx-eks-iam-role"

  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

}

## Attach the IAM policy to the IAM role
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-iam-role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly-EKS" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks-iam-role.name
}

## Create the EKS cluster
resource "aws_eks_cluster" "ruralx-eks" {
  name = "ruralx-cluster"
  role_arn = aws_iam_role.eks-iam-role.arn

  vpc_config {
    subnet_ids = [var.subnet_id_1, var.subnet_id_2]
  }

  depends_on = [
    aws_iam_role.eks-iam-role,
  ]
}

## Worker Nodes
resource "aws_iam_role" "workernodes" {
  name = "eks-node-group-example"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "EC2InstanceProfileForImageBuilderECRContainerBuilds" {
  policy_arn = "arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.workernodes.name
}

resource "aws_eks_node_group" "worker-node-group" {
  cluster_name    = aws_eks_cluster.ruralx-eks.name
  node_group_name = "ruralx-workernodes"
  node_role_arn   = aws_iam_role.workernodes.arn
  subnet_ids      = [var.subnet_id_1, var.subnet_id_2]
  instance_types = ["t3.xlarge"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    #aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}

Also, add these subnet variables to variables.tf, using the SubnetIds recorded from the VPC stack outputs (replace the defaults below with your own):

variable "subnet_id_1" {
  type = string
  default = "subnet-05f2a43d392934382"
}

variable "subnet_id_2" {
  type = string
  default = "subnet-0086dd63b54f54812"
}

  • Create the EKS cluster by running the following:

terraform init - To initialize the working directory and configure the new backend
terraform plan - To go through a "check" and confirm the configurations are valid
terraform apply - To create the resource
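
Optionally, a couple of outputs surface the values you'll need when pointing kubectl at the new cluster in part two (the output names are my own choice):

```hcl
output "eks_cluster_name" {
  value = aws_eks_cluster.ruralx-eks.name
}

output "eks_cluster_endpoint" {
  value = aws_eks_cluster.ruralx-eks.endpoint
}
```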

If you want to get more familiar with Terraform, you can start here: click this link.

4.0 Docker
Here, we plan for the application requirements, then code, build, and unit test the application.
I will assume all these phases have been taken care of, as the DevOps team is not solely responsible for them; it works with the larger development and operations teams in the organization.

Create The Docker Image for the ruralx app
The purpose of this is to create a Docker image from the app's source code that the organization has, containerize the app, and store the image inside a container repository. Here, I'll use an AWS ECR repository.

  1. Inside the app source code's root directory, create a Dockerfile.
  2. Open the Dockerfile, then define the instructions. These depend on the base image and the app itself; you can use this as an example:
# syntax=docker/dockerfile:1
FROM node:10.16.1-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
CMD [ "npm", "start" ]
  3. To create the Docker image, run the following command: docker build -t ruralx:latest . The -t flag sets the tag (the name) of the Docker image, and the . tells the Docker CLI that the Dockerfile is in the current directory.
  4. After the Docker image is created, run the following command to confirm the Docker image is on your machine: docker image ls
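
A small tip, assuming a standard Node.js layout: a .dockerignore file next to the Dockerfile keeps node_modules and other local clutter out of the build context, which makes docker build faster and the image smaller:

```
# .dockerignore
node_modules
npm-debug.log
.git
```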

Run The Docker Image Locally
Now that the Docker image is created, you can run the container locally just to confirm it works and doesn't crash.

  1. To run the Docker container, run the following command: docker run -tid ruralx
  2. To confirm the Docker container is running, run the following command: docker container ls

Log Into AWS ECR Repository
The idea is to push the image to ECR.

  • Log in to ECR with the AWS CLI (the registry you log in to must match the one you push to)
    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 1XX9-6XXX-6XX7.dkr.ecr.us-east-1.amazonaws.com

  • Tag the Docker image
    docker tag ruralx 1XX9-6XXX-6XX7.dkr.ecr.us-east-1.amazonaws.com/ruralx

  • Push the Docker image to ECR
    docker push 1XX9-6XXX-6XX7.dkr.ecr.us-east-1.amazonaws.com/ruralx

In addition to the Docker documentation, click here for another useful resource from Nelson (YouTube channel: Amigoscode).

To learn more, catch up with me in the second part!
Read it here >>>

Thanks for reading! Happy Cloud Computing!

I would love to connect with you on LinkedIn.
