Jones Ndzenyuy

Deploy an Amazon-Clone App on EKS with Kustomize and ArgoCD with GitOps best practices

In this tutorial, we will deploy an Amazon-clone app on EKS following DevOps best practices. The app is React based; we'll provision the infrastructure (EKS and an OIDC role) with CloudFormation, run the CI/CD pipeline with GitHub Actions, and use ArgoCD with Kustomize to deploy the Kubernetes manifest files.

Architecture

First, the cloud engineer deploys the CloudFormation template to provision the infrastructure on AWS. Once the infrastructure is configured, with ArgoCD pointing to Repo 2 (the manifest files) and Prometheus and Grafana set up for monitoring, developers can push an app update at any time through Repo 1 (the app code). When a developer pushes code, GitHub Actions triggers the pipeline to run checks: code quality in SonarCloud, and dependency and vulnerability analysis with OWASP. If these checks pass, the image is built and Trivy scans it for known vulnerabilities. If both the code and the Docker image pass all checks, GitHub Actions, which assumes an OIDC role with permission to push images to ECR, pushes the image, then clones the second repository, updates the kustomization.yml file with the new image version, and pushes the change. ArgoCD picks up the change and updates the cluster.
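
The repository's original workflow is removed in step 1 below (rm -rf .github), so you will add your own. Here is a condensed, hypothetical sketch of a workflow implementing these stages; the action versions, tagging scheme, and job layout are assumptions, the OWASP dependency check and the Repo 2 update job are omitted for brevity, and the secret names match those created later in the tutorial.

# .github/workflows/pipeline.yml -- condensed sketch; details are assumptions
name: build-and-deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # allows the job to assume the OIDC role
  contents: read
jobs:
  ci-cd:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: SonarCloud scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      - name: Assume the OIDC role
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_GITHUB_ROLE }}
          aws-region: us-east-1
      - name: Build the image
        run: docker build -t 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:${{ github.sha }} .
      - name: Scan the image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:${{ github.sha }}
          exit-code: "1"   # fail the pipeline on findings
      - name: Push the image to ECR
        run: |
          aws ecr get-login-password --region us-east-1 \
            | docker login --username AWS --password-stdin 997450571655.dkr.ecr.us-east-1.amazonaws.com
          docker push 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:${{ github.sha }}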

Step by Step Guide

1. Clone the App repository

Clone the code repository and initialise it to be hosted on your own repository by running the following commands in the terminal (make sure VS Code is installed on your machine):

git clone https://github.com/Ndzenyuy/Amazon-FE.git
cd Amazon-FE
rm -rf .git .github
git init
code .

These commands clone the repository, switch into it, remove the original git history so you can push to your own repository, reinitialize git, and open VS Code.

2. Create and deploy the Cloudformation Stack

On your local machine, create a file named infrastructure.yml and paste the following code:

AWSTemplateFormatVersion: "2010-09-09"
Description: Create an Amazon EKS Cluster with a managed node group.

Parameters:
  ClusterName:
    Type: String
    Default: Amazon-clone

  ClusterVersion:
    Type: String
    Default: "1.31"

  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: "VPC for the EKS cluster"

  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: "Subnets (private/public) for the worker nodes and control plane"

  NodeInstanceType:
    Type: String
    Default: t3.medium

  DesiredCapacity:
    Type: Number
    Default: 2

  GitHubOrg:
    Description: Name of GitHub Username (case sensitive)
    Type: String
    Default: "Ndzenyuy"

  RepositoryName:
    Description: Name of GitHub Repository (case sensitive)
    Type: String
    Default: "Amazon-FE"

  OIDCProviderArn:
    Description: ARN of the GitHub OIDC Provider (Leave blank to create one)
    Type: String
    Default: "arn:aws:iam::997450571655:oidc-provider/token.actions.githubusercontent.com"

  OIDCAudience:
    Description: Audience supplied to configure-aws-credentials.
    Type: String
    Default: "sts.amazonaws.com"

Conditions:
  CreateOIDCProvider: !Equals
    - !Ref OIDCProviderArn
    - ""

Resources:
  # IAM Role for EKS Cluster
  EKSClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: eks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

  # EKS Cluster
  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref ClusterName
      Version: !Ref ClusterVersion
      RoleArn: !GetAtt EKSClusterRole.Arn
      ResourcesVpcConfig:
        SubnetIds: !Ref SubnetIds
        EndpointPrivateAccess: false
        EndpointPublicAccess: true

  # IAM Role for Node Group
  NodeInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

  # Managed Node Group
  NodeGroup:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: !Ref EKSCluster
      NodeRole: !GetAtt NodeInstanceRole.Arn
      Subnets: !Ref SubnetIds
      ScalingConfig:
        DesiredSize: !Ref DesiredCapacity
        MaxSize: 4
        MinSize: 1
      InstanceTypes:
        - !Ref NodeInstanceType
      AmiType: AL2_x86_64
      NodegroupName: !Sub "${ClusterName}-nodegroup"
      DiskSize: 20

  EKSPublicAccessSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt EKSCluster.ClusterSecurityGroupId
      IpProtocol: tcp
      FromPort: 80
      ToPort: 80
      CidrIp: 0.0.0.0/0

  GithubOidc:
    Type: AWS::IAM::OIDCProvider
    Condition: CreateOIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - !Ref OIDCAudience
      ThumbprintList:
        - "74f3a68f16524f15424927704c9506f55a9316bd" # Replace with actual thumbprint

  Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !If
                - CreateOIDCProvider
                - !Ref GithubOidc
                - !Ref OIDCProviderArn
            Condition:
              StringEquals:
                token.actions.githubusercontent.com:aud: !Ref OIDCAudience
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub repo:${GitHubOrg}/${RepositoryName}:*
      Policies:
        - PolicyName: AllowECRAndEKSAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetDownloadUrlForLayer
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:DescribeCluster
                  - eks:ListClusters
                Resource: "*"

              - Effect: Allow
                Action:
                  - sts:GetCallerIdentity
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:Describe*
                  - eks:List*
                  - eks:Update*
                  - eks:AccessKubernetesApi
                Resource: "*"

Outputs:
  ClusterName:
    Value: !Ref EKSCluster
    Description: Name of the EKS Cluster

  ClusterRoleArn:
    Value: !GetAtt EKSClusterRole.Arn

  RoleArn:
    Description: IAM Role ARN for GitHub Actions
    Value: !GetAtt Role.Arn

  NodeGroupRoleArn:
    Value: !GetAtt NodeInstanceRole.Arn

On the console, search for CloudFormation, then click Create stack -> With new resources, and enter the following parameters:

  • Prepare template: Choose an existing template
  • Specify template: Upload a template file
  • Upload a template file: choose the infrastructure.yml file
  • Click Next
  • Stack name: amazon-clone
  • Under Parameters, leave the defaults, but set SubnetIds to the first two subnets and VpcId to the default VPC
  • Next
  • Scroll down and check: I acknowledge that AWS CloudFormation might create IAM resources
  • Next
  • Submit. This creates the infrastructure required to run the project (a CLI alternative is sketched below the list). While waiting for stack creation to complete, go to SonarCloud.
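
The same stack can be deployed from the CLI; a sketch with placeholder VPC and subnet IDs (depending on your CLI version, commas in list parameters may need the backslash escape shown):

aws cloudformation deploy \
  --template-file infrastructure.yml \
  --stack-name amazon-clone \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
      VpcId=vpc-0123456789abcdef0 \
      SubnetIds=subnet-0aaa11bb\,subnet-0ccc22dd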

- Create ECR Repository

On the AWS console, create a repository named "amazon-clone". This must match the repository the pipeline will push images to.
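
If you prefer the CLI, the same repository can be created with:

aws ecr create-repository --repository-name amazon-clone --region us-east-1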

3. Create a project in Sonarcloud

Create a sonarcloud organization

  • Click on the account icon at the top right of the screen
  • Click My Organizations, then Create an organization
  • Choose Create one manually
  • Give it the name "amazon-clone"
  • Select the free plan -> Create Organization

Copy the organization name as it will be needed shortly

  • Click Analyze new project
  • Display name: "amazon-clone-project"
  • Project key: "amazon-clone-project" (copy the project key, as it will be used later)
  • Project visibility: Public
  • Next. The new code for this project will be based on: Previous version
  • Choose your analysis method: With GitHub Actions
  • Copy the SONAR_TOKEN to a safe location, as it will be needed shortly

4. Prepare Secrets

- AWS_GITHUB_ROLE: Return to CloudFormation and select the Outputs tab, then copy the value of RoleArn. This is the OIDC role ARN that GitHub Actions will assume.

- DEPLOY_KEY_PRIVATE: We need an SSH key pair to authenticate against Repo 2, which holds the manifest files ArgoCD will be using. In the terminal (Ubuntu):

ssh-keygen

Press Enter at each prompt until the key pair is generated, then run:

cd
cat .ssh/id_rsa

This prints the private key of the generated key pair; copy it and save it as the DEPLOY_KEY_PRIVATE secret.

Now create a second repository on GitHub named Amazon-clone-infra. We have to add the generated public key to this repository's deploy keys so that the pipeline can authenticate and push code to it. Back in the terminal, run:

cat .ssh/id_rsa.pub

Copy the public key to GitHub repo 2, under Settings -> Deploy keys -> Add deploy key:

  • Title: deploy-infra
  • Key: paste the content of the public key, check "Allow write access" (the pipeline pushes to this repository), and save

5. Store Secrets in the App Repository

In VS Code, initialize git, create a remote repository (make it public), and push the code to it. In the GitHub repository (Amazon-FE) we now store the secrets our GitHub Actions workflow needs to deploy the app to Kubernetes. Go to Settings -> Secrets and variables -> Actions -> New repository secret, and create the following secrets one after the other:

  • SONAR_TOKEN
  • AWS_GITHUB_ROLE
  • DEPLOY_KEY_PRIVATE
  • SONAR_HOST_URL
  • SONAR_ORGANIZATION
  • SONAR_PROJECT_KEY
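
If you have the GitHub CLI installed, the same secrets can be set from the terminal; a sketch with placeholder values (for SonarCloud, SONAR_HOST_URL is https://sonarcloud.io):

gh secret set SONAR_TOKEN --repo <your-user>/Amazon-FE --body "<token copied from SonarCloud>"
gh secret set AWS_GITHUB_ROLE --repo <your-user>/Amazon-FE --body "<RoleArn from the stack outputs>"
gh secret set DEPLOY_KEY_PRIVATE --repo <your-user>/Amazon-FE < ~/.ssh/id_rsa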

After creating, we should have something like:

6. Configure ArgoCD

Back on your terminal, follow these steps:

  • Update kubeconfig
aws eks update-kubeconfig --name Amazon-clone --region us-east-1
  • Install argoCD on the cluster
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  • Verify the installation
kubectl get all -n argocd
  • Expose argoCD through a LoadBalancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
  • Export the argoCD DNS name
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
  • Print the argoCD DNS name
echo $ARGOCD_SERVER
  • Export the argoCD admin password
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
  • Print the argoCD admin password
echo $ARGO_PWD

Open the DNS name printed by echo $ARGOCD_SERVER in a browser. The username is admin and the password is the output of echo $ARGO_PWD. When you are successfully signed in, the display will look like:
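
If you prefer the terminal, the same login works through the argocd CLI, assuming it is installed (--insecure skips the self-signed certificate check):

argocd login $ARGOCD_SERVER --username admin --password $ARGO_PWD --insecure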

Now we need the Kubernetes manifest files in a repository that ArgoCD will monitor for changes. Clone the empty repo 2 to your local machine; in it we will put the manifests. Create a folder named kubernetes containing the files deployment.yml, service.yml and kustomization.yml, and paste in the following content:

  • deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazon-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: amazon-app
  template:
    metadata:
      labels:
        app: amazon-app
    spec:
      containers:
        - name: amazon-app
          image: 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:latest
          ports:
            - containerPort: 3000
  • service.yml
apiVersion: v1
kind: Service
metadata:
  name: amazon-app-service
spec:
  selector:
    app: amazon-app
  ports:
    - protocol: TCP
      port: 80           # External port to expose the service
      targetPort: 3000   # Port on the container
  type: LoadBalancer
  • kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yml
  - service.yml

namePrefix: kustomize-

commonLabels:
  app: amazon-clone
  version: v1

images:
  - name: 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone # must match the image reference in deployment.yml
    newTag: latest

replicas:
  - name: amazon-app
    count: 4
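
Before committing, you can preview the manifests kustomize will render (kubectl has kustomize built in):

kubectl kustomize kubernetes/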

Commit and push the code to Github.

In the browser tab where ArgoCD is open, create a new application with the following fields:

  • Application name: amazon-clone
  • Project name: default
  • Sync policy: Automatic
  • Repository URL: the URL of Repo 2 (Amazon-clone-infra)
  • Revision: HEAD
  • Path: kubernetes
  • Cluster URL: select https://kubernetes.default.svc from the dropdown
  • Namespace: default

Leave the rest at their defaults and click Create. Within a moment, ArgoCD will create the pods to match the kustomize manifests.
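
The same application can also be created declaratively instead of through the UI; a sketch of the equivalent Application manifest, with a placeholder for your Repo 2 URL:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: amazon-clone
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:<your-user>/Amazon-clone-infra.git  # placeholder, use your Repo 2 URL
    targetRevision: HEAD
    path: kubernetes
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}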

Now run

kubectl get svc -n default

This shows the endpoint of the load balancer; copy and paste it into the browser to access the application:

In repo 1, open the VERSION file, change the version to 0.0.2, then commit and push to trigger a pipeline run. Go to GitHub -> Actions; you will see the pipeline running automatically and finishing successfully, like below:
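
Under the hood, the final pipeline job clones Repo 2 and bumps the image tag in kustomization.yml; a minimal sketch of that step, assuming the kustomize CLI is installed (the repository URL and tag are illustrative):

git clone git@github.com:<your-user>/Amazon-clone-infra.git
cd Amazon-clone-infra/kubernetes
kustomize edit set image 997450571655.dkr.ecr.us-east-1.amazonaws.com/amazon-clone:0.0.2
git commit -am "Bump image to 0.0.2"
git push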

If you check the ArgoCD web app, you will see that it automatically detects the change and updates the pods with the new image version.
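
You can also confirm the rollout from the CLI (the deployment name carries the kustomize- namePrefix from the kustomization above):

kubectl rollout status deployment/kustomize-amazon-app -n default
kubectl get pods -n default -o jsonpath='{.items[*].spec.containers[*].image}'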

7. Configure Monitoring

Once the cluster is running, it is important to know how its resources behave, which requires proper monitoring of key performance metrics. The steps:

  • Add Prometheus community Helm Repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update  
  • Create namespaces
kubectl create namespace prometheus
kubectl create namespace grafana
  • Install Prometheus
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --set server.service.type=LoadBalancer \
  --set server.persistentVolume.enabled=false
  • Install Grafana
helm repo add grafana https://grafana.github.io/helm-charts   
helm repo update     
helm install grafana grafana/grafana \
  --namespace grafana \
  --set service.type=LoadBalancer \
  --set adminPassword='SuperSecret123!'

After 2-3 minutes, run:

kubectl get svc -n prometheus
kubectl get svc -n grafana
  • Edit Security Group in AWS console
    Go to EC2 > Load Balancers and locate the ELB whose name matches the EXTERNAL-IP. On the ELB's description tab, copy the security group ID, then go to Security Groups in EC2, edit the inbound rules, and add port 80 open to 0.0.0.0/0 (or use the CLI, as sketched below).
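
The same rule can be added from the CLI (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0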

  • Add Prometheus Data source to Grafana

- Log in to the Grafana UI
- Go to Settings -> Data sources -> Add data source
- Choose Prometheus
- Enter the URL: http://prometheus-server.prometheus.svc.cluster.local
- Click "Save & test"
  • Import Dashboards
- Go to + -> Import
- Enter the ID: 6417 (Kubernetes cluster monitoring)
- Click Import and select Prometheus as the data source

This loads the Grafana dashboard for Kubernetes cluster monitoring, like the one below. Congratulations on finishing the project!
