Jones Ndzenyuy

Deploy Netflix Clone App on EKS with CloudFormation and GitHub Actions

In this project, we will deploy a Netflix clone on an EKS cluster. We will deploy the EKS cluster using CloudFormation and use GitHub Actions for the CI/CD pipeline.

Project Architecture

Project architecture

Step by Step Guide

1. Get the TMDB API Key

  • Open a web browser and navigate to the TMDB (The Movie Database) website.
  • Click on "Login" and create an account.
  • Once logged in, go to your profile and select "Settings."
  • Click on "API" from the left-side panel.
  • Create a new API key by clicking "Create" and accepting the terms and conditions.
  • Provide the required basic details and click "Submit."
  • You will receive your TMDB API key.
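
The pipeline stores this key as the TMDB_V3_API_KEY secret. If, as in the upstream Netflix clone project, the Dockerfile expects the key as a build argument, it can be passed at image build time; a minimal local sketch (the build-arg name is an assumption based on that project):

# Hypothetical local build; assumes the Dockerfile declares "ARG TMDB_V3_API_KEY"
docker build --build-arg TMDB_V3_API_KEY=<your api key> -t netflix-clone:local .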

2. Clone the Code

Clone the code repository and initialise it so it can be hosted on your own repository. Run the following commands in the terminal (make sure you have VS Code installed on your machine):

git clone https://github.com/Ndzenyuy/NetFlix-Clone-app.git
cd NetFlix-Clone-app
rm -rf .git .github
git init
code .

3. Create Sonar Cloud Project

Create a SonarCloud organization:

  • Click on the account icon at the top right of the screen
  • Go to My Organizations and click "Create organization"
  • Choose "Create one manually"
  • Give it the name "netflix-clone"
  • Select the free plan
  • Click "Create Organization"
  • Copy the organization name, as it will be needed shortly

Analyse a new project:

  • Display name: give it "netflix-clone-project"
  • Project visibility: Public
  • Next
  • The new code for this project will be based on: Previous version
  • Choose your analysis method: With GitHub Actions
  • Copy the SONAR_TOKEN to a safe location, as it will be needed shortly

Create the GitHub Actions workflow and test the project scan with the Sonar scanner:

  • In the root folder, create a folder named .github, a subfolder workflows, and inside it a file named "deployment.yml". Paste the following code into it:
name: Netflix Clone with GitHub Actions
on:
  push:
    branches:
      - "main"
      - "dev"
  pull_request:
    branches:
      - "main"

  workflow_dispatch:

env:
  AWS_REGION: us-east-1

jobs:
  Testing:
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Setup SonarQube
        uses: warchant/setup-sonar-scanner@v7

      - name: SonarQube Scan
        run: sonar-scanner
          -Dsonar.host.url=${{ secrets.SONAR_HOST_URL }}
          -Dsonar.login=${{ secrets.SONAR_TOKEN }}
          -Dsonar.organization=${{ secrets.SONAR_ORGANIZATION }}
          -Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}
          -Dsonar.sources=src/
          -Dsonar.junit.reportsPath=target/surefire-reports/
          -Dsonar.jacoco.reportsPath=target/jacoco.exec
          -Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml
          -Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/


Now commit the code and publish the branch to a GitHub repository. The workflow will fail because the secrets are not yet stored. On GitHub, open the repository settings and add these secrets to the project:

TMDB_V3_API_KEY: <your api key for TMDB>
SONAR_TOKEN: <your sonar token>
IMAGE_NAME: netflix-clone-image
SONAR_ORGANIZATION: <your sonar organization>
SONAR_PROJECT_KEY: <your project key>
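
If you prefer the command line, the same secrets can be added with the GitHub CLI instead of the web UI; a minimal sketch, assuming gh is installed and authenticated against your repository:

gh secret set TMDB_V3_API_KEY --body "<your api key for TMDB>"
gh secret set SONAR_TOKEN --body "<your sonar token>"
gh secret set IMAGE_NAME --body "netflix-clone-image"
gh secret set SONAR_ORGANIZATION --body "<your sonar organization>"
gh secret set SONAR_PROJECT_KEY --body "<your project key>"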

Now push the code to GitHub once more to trigger a build. The build should succeed this time.

Open SonarCloud and open the organization for this project; you should see the successful scan results.

4. Create an OIDC Role for our workflow

We are going to deploy a CloudFormation template that creates an IAM role allowing GitHub Actions to deploy to our AWS account without exposing access keys.
For the parameters, you will need your GitHub repository name and your GitHub username. Create a YAML file named OIDC.yml and paste the following code:

Parameters:
  GitHubOrg:
    Description: Name of GitHub Username (case sensitive)
    Type: String
    Default: "Ndzenyuy"

  RepositoryName:
    Description: Name of GitHub Repository (case sensitive)
    Type: String
    Default: "NetFlix-Clone-app"

  OIDCProviderArn:
    Description: ARN of the GitHub OIDC Provider (Leave blank to create one)
    Type: String
    Default: "arn:aws:iam::997450571655:oidc-provider/token.actions.githubusercontent.com"

  OIDCAudience:
    Description: Audience supplied to configure-aws-credentials.
    Type: String
    Default: "sts.amazonaws.com"

Conditions:
  CreateOIDCProvider: !Equals
    - !Ref OIDCProviderArn
    - ""

Resources:
  GithubOidc:
    Type: AWS::IAM::OIDCProvider
    Condition: CreateOIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - !Ref OIDCAudience
      ThumbprintList:
        - "74f3a68f16524f15424927704c9506f55a9316bd" # Replace with actual thumbprint

  Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !If
                - CreateOIDCProvider
                - !Ref GithubOidc
                - !Ref OIDCProviderArn
            Condition:
              StringEquals:
                token.actions.githubusercontent.com:aud: !Ref OIDCAudience
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub repo:${GitHubOrg}/${RepositoryName}:*
      Policies:
        - PolicyName: AllowECRAndEKSAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetDownloadUrlForLayer
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:DescribeCluster
                  - eks:ListClusters
                Resource: "*"

              - Effect: Allow
                Action:
                  - sts:GetCallerIdentity
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:Describe*
                  - eks:List*
                  - eks:Update*
                  - eks:AccessKubernetesApi
                Resource: "*"

Outputs:
  RoleArn:
    Description: IAM Role ARN for GitHub Actions
    Value: !GetAtt Role.Arn


Go to CloudFormation and create a stack with new resources (alternatively, deploy the template from the CLI, as sketched after this list):

  • Create stack
  • Choose an existing template
  • Upload a template file
  • Choose file and locate where the CloudFormation template is stored on your local machine
  • Stack name: github-oidc
  • GitHubOrg: your GitHub username
  • OIDCAudience: sts.amazonaws.com
  • OIDCProviderArn: leave blank to create a new OIDC provider
  • RepositoryName: give your repository name
  • Acknowledge -> Next and Submit
  • After the stack finishes creating, copy the role ARN, as it will be stored in GitHub secrets (under the name "AWS_GITHUB_ROLE").
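
As an alternative to the console, the same stack can be deployed from the terminal; a minimal sketch, assuming your AWS credentials are configured for the target account:

aws cloudformation deploy \
  --template-file OIDC.yml \
  --stack-name github-oidc \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides GitHubOrg=<your GitHub username> RepositoryName=<your repository name>

# Print the role ARN from the stack outputs
aws cloudformation describe-stacks --stack-name github-oidc \
  --query "Stacks[0].Outputs[?OutputKey=='RoleArn'].OutputValue" --output text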

5. Add OWASP and Trivy File Scans

OWASP provides tools for scanning projects and containers to identify vulnerabilities and security risks. OWASP Dependency Check scans the project's dependencies and generates an XML report of components with known vulnerabilities. Trivy performs filesystem and container scanning: it identifies known vulnerable packages in the source tree and in the built image on every commit and pull request, and the reports uploaded as workflow artifacts alert the maintainers when something is found. By adding these two tools to our workflow, we can be sure every known vulnerability in the project is surfaced and mitigated.

The code quality check job should be as follows:

Code-Quality-checks:
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "16"

      - name: Set up JDK 17
        uses: actions/setup-java@v4
        with:
          java-version: "17"
          distribution: "temurin"

      - name: Setup SonarQube
        uses: warchant/setup-sonar-scanner@v7

      - name: SonarQube Scan
        run: sonar-scanner
          -Dsonar.host.url=${{ secrets.SONAR_HOST_URL }}
          -Dsonar.login=${{ secrets.SONAR_TOKEN }}
          -Dsonar.organization=${{ secrets.SONAR_ORGANIZATION }}
          -Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}
          -Dsonar.sources=src/
          -Dsonar.junit.reportsPath=target/surefire-reports/
          -Dsonar.jacoco.reportsPath=target/jacoco.exec
          -Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml
          -Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/

      - name: Install Dependencies
        run: npm install

      - name: Download OWASP Dependency Check
        run: |
          curl -L -o dependency-check.zip https://github.com/jeremylong/DependencyCheck/releases/download/v8.4.0/dependency-check-8.4.0-release.zip
          unzip dependency-check.zip -d dependency-check-dir
          ls dependency-check-dir
          chmod +x dependency-check-dir/dependency-check/bin/dependency-check.sh

      - name: Run OWASP Dependency Check
        run: |
          dependency-check-dir/dependency-check/bin/dependency-check.sh \
            --project "Netflix" \
            --scan . \
            --format "XML" \
            --disableYarnAudit \
            --disableNodeAudit \
            --disableAssembly \
            --exclude node_modules

      - name: Upload OWASP Report
        uses: actions/upload-artifact@v4
        with:
          name: dependency-check-report
          path: "**/dependency-check-report.xml"

      - name: Trivy FS Scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: "fs"
          scan-ref: "."
          output: "trivyfs.txt"
          severity: "HIGH,CRITICAL"
          exit-code: "0"   # Do not fail build
          format: "table"

      - name: Upload Trivy FS Scan Report
        uses: actions/upload-artifact@v4
        with:
          name: trivy-fs-scan
          path: trivyfs.txt

6. Build and Push to ECR

After testing our code, we now have to build the image and push it to ECR. Create a private ECR repository named netflix-clone, then update your workflow code with:

name: Netflix Clone with GitHub Actions
on:
  push:
    branches:
      - "main"
      - "dev"
  pull_request:
    branches:
      - "main"

  workflow_dispatch:

permissions:
  id-token: write
  contents: read

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: netflix-clone

jobs:
  Code-Quality-checks:
    runs-on: ubuntu-latest
    steps:
      - name: Code checkout
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "16"

      - name: Set up JDK 17
        uses: actions/setup-java@v4
        with:
          java-version: "17"
          distribution: "temurin"

      - name: Setup SonarQube
        uses: warchant/setup-sonar-scanner@v7

      - name: SonarQube Scan
        run: sonar-scanner
          -Dsonar.host.url=${{ secrets.SONAR_HOST_URL }}
          -Dsonar.login=${{ secrets.SONAR_TOKEN }}
          -Dsonar.organization=${{ secrets.SONAR_ORGANIZATION }}
          -Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}
          -Dsonar.sources=src/
          -Dsonar.junit.reportsPath=target/surefire-reports/
          -Dsonar.jacoco.reportsPath=target/jacoco.exec
          -Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml
          -Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/

      - name: Install Dependencies
        run: npm install

      - name: Download OWASP Dependency Check
        run: |
          curl -L -o dependency-check.zip https://github.com/jeremylong/DependencyCheck/releases/download/v8.4.0/dependency-check-8.4.0-release.zip
          unzip dependency-check.zip -d dependency-check-dir
          ls dependency-check-dir
          chmod +x dependency-check-dir/dependency-check/bin/dependency-check.sh

      - name: Run OWASP Dependency Check
        run: |
          dependency-check-dir/dependency-check/bin/dependency-check.sh \
            --project "Netflix" \
            --scan . \
            --format "XML" \
            --disableYarnAudit \
            --disableNodeAudit \
            --disableAssembly \
            --exclude node_modules

      - name: Upload OWASP Report
        uses: actions/upload-artifact@v4
        with:
          name: dependency-check-report
          path: "**/dependency-check-report.xml"

      - name: Trivy FS Scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: "fs"
          scan-ref: "."
          output: "trivyfs.txt"
          severity: "HIGH,CRITICAL"
          exit-code: "0" # Do not fail build
          format: "table"

      - name: Upload Trivy FS Scan Report
        uses: actions/upload-artifact@v4
        with:
          name: trivy-fs-scan
          path: trivyfs.txt

  build-and-push:
    needs: Code-Quality-checks
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_GITHUB_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Get version from VERSION file
        id: get-version
        run: |
          if [ -f VERSION ]; then
            VERSION=$(cat VERSION)
          else
            echo "VERSION file not found. Exiting..."
            exit 1
          fi
          echo "Current version: $VERSION"
          echo version=$VERSION >> $GITHUB_OUTPUT

      - name: Build Docker image
        run: |
          docker build -t ${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }} .

      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }}
          format: "table"
          severity: CRITICAL,HIGH
          ignore-unfixed: true
          exit-code: 1 # fail if vulnerabilities are found

      - name: Tag and Push Docker image
        run: |
          docker tag ${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }} \
            ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }}

          docker tag ${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }} \
            ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest

          docker push ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ steps.get-version.outputs.version }}
          docker push ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest

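Note that the build-and-push job reads the image tag from a VERSION file at the repository root and exits if the file is missing, so create one before pushing; for example:

echo "1.0.0" > VERSION
git add VERSION && git commit -m "Add VERSION file"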

7. Deploy a Kubernetes Cluster

Now go to CloudFormation and create a stack. This stack will create the GitHub OIDC IAM role and a Kubernetes (EKS) cluster. Create a file named cloudInfra.yml with the following content:

AWSTemplateFormatVersion: "2010-09-09"
Description: Create an Amazon EKS Cluster with a managed node group.

Parameters:
  ClusterName:
    Type: String
    Default: Netflix-clone

  ClusterVersion:
    Type: String
    Default: "1.31"

  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: "VPC for the EKS cluster"

  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: "Subnets (private/public) for the worker nodes and control plane"

  NodeInstanceType:
    Type: String
    Default: t3.medium

  DesiredCapacity:
    Type: Number
    Default: 2

  GitHubOrg:
    Description: Name of GitHub Username (case sensitive)
    Type: String
    Default: "Ndzenyuy"

  RepositoryName:
    Description: Name of GitHub Repository (case sensitive)
    Type: String
    Default: "NetFlix-Clone-app"

  OIDCProviderArn:
    Description: ARN of the GitHub OIDC Provider (Leave blank to create one)
    Type: String
    Default: "arn:aws:iam::997450571655:oidc-provider/token.actions.githubusercontent.com"

  OIDCAudience:
    Description: Audience supplied to configure-aws-credentials.
    Type: String
    Default: "sts.amazonaws.com"

Conditions:
  CreateOIDCProvider: !Equals
    - !Ref OIDCProviderArn
    - ""

Resources:
  # IAM Role for EKS Cluster
  EKSClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: eks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

  # EKS Cluster
  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref ClusterName
      Version: !Ref ClusterVersion
      RoleArn: !GetAtt EKSClusterRole.Arn
      ResourcesVpcConfig:
        SubnetIds: !Ref SubnetIds
        EndpointPrivateAccess: false
        EndpointPublicAccess: true

  # IAM Role for Node Group
  NodeInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

  # Managed Node Group
  NodeGroup:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: !Ref EKSCluster
      NodeRole: !GetAtt NodeInstanceRole.Arn
      Subnets: !Ref SubnetIds
      ScalingConfig:
        DesiredSize: !Ref DesiredCapacity
        MaxSize: 4
        MinSize: 1
      InstanceTypes:
        - !Ref NodeInstanceType
      AmiType: AL2_x86_64
      NodegroupName: !Sub "${ClusterName}-nodegroup"
      DiskSize: 20

  EKSPublicAccessSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt EKSCluster.ClusterSecurityGroupId
      IpProtocol: tcp
      FromPort: 80
      ToPort: 80
      CidrIp: 0.0.0.0/0
  PrometheusIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt EKSCluster.ClusterSecurityGroupId
      IpProtocol: tcp
      FromPort: 9090
      ToPort: 9090
      CidrIp: 0.0.0.0/0

  GrafanaIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt EKSCluster.ClusterSecurityGroupId
      IpProtocol: tcp
      FromPort: 3000
      ToPort: 3000
      CidrIp: 0.0.0.0/0

  GithubOidc:
    Type: AWS::IAM::OIDCProvider
    Condition: CreateOIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - !Ref OIDCAudience
      ThumbprintList:
        - "74f3a68f16524f15424927704c9506f55a9316bd" # Replace with actual thumbprint

  Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !If
                - CreateOIDCProvider
                - !Ref GithubOidc
                - !Ref OIDCProviderArn
            Condition:
              StringEquals:
                token.actions.githubusercontent.com:aud: !Ref OIDCAudience
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub repo:${GitHubOrg}/${RepositoryName}:*
      Policies:
        - PolicyName: AllowECRAndEKSAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetDownloadUrlForLayer
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:DescribeCluster
                  - eks:ListClusters
                Resource: "*"

              - Effect: Allow
                Action:
                  - sts:GetCallerIdentity
                Resource: "*"

              - Effect: Allow
                Action:
                  - eks:Describe*
                  - eks:List*
                  - eks:Update*
                  - eks:AccessKubernetesApi
                Resource: "*"

Outputs:
  ClusterName:
    Value: !Ref EKSCluster
    Description: Name of the EKS Cluster

  ClusterRoleArn:
    Value: !GetAtt EKSClusterRole.Arn

  RoleArn:
    Description: IAM Role ARN for GitHub Actions
    Value: !GetAtt Role.Arn

  NodeGroupRoleArn:
    Value: !GetAtt NodeInstanceRole.Arn


Wait a while for the EKS cluster to be created; it can take about 5-10 minutes.
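
You can poll the cluster state from the terminal instead of the console; a quick check, assuming the cluster name and region from the template (it prints CREATING until the cluster is ready, then ACTIVE):

aws eks describe-cluster --name Netflix-clone --region us-east-1 --query "cluster.status" --output text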

Copy the OIDC role ARN from the stack outputs and create a secret for the application repository (NetFlix-Clone-app) in GitHub under the name AWS_GITHUB_ROLE.

Back on your terminal, follow these steps:

  • Update kubeconfig
aws eks update-kubeconfig --name Netflix-clone --region us-east-1
  • Install ArgoCD on the cluster
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  • Verify the installation
kubectl get all -n argocd
  • Expose ArgoCD
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
  • Export the ArgoCD DNS
export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
  • Get the ArgoCD DNS
echo $ARGOCD_SERVER
  • Export the ArgoCD password
export ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
  • Get the ArgoCD password
echo $ARGO_PWD

Copy the ArgoCD DNS printed by echo $ARGOCD_SERVER and open it in a browser. The username is admin and the password is the output of echo $ARGO_PWD. Once you have signed in successfully, the ArgoCD dashboard is displayed.
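
If you prefer the CLI, you can also log in with the argocd client using the same values; a minimal sketch, assuming the argocd CLI is installed locally:

argocd login $ARGOCD_SERVER --username admin --password $ARGO_PWD --insecure
argocd app list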

Now we need the manifest files (Kubernetes files) in a repository that ArgoCD will monitor for changes.

So, in a new folder, let's create a repository (netflix-clone-manifest) and initialize it with the following commands (make sure it is not a subfolder of the app repository folder):

mkdir netflix-clone-manifest
cd netflix-clone-manifest
touch deployment.yml service.yml Chart.yaml values.yml

Enter the following code in each of the YAML files:

  1. deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: {{ .Release.Name }}-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
  2. service.yml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Release.Name }}-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: {{ .Release.Name }}-app
  3. Chart.yaml
apiVersion: v2
name: netflix-clone
description: A Helm chart for the Netflix clone app
version: 0.1.0
appVersion: "1.0.0"
  4. values.yml
replicaCount: 2
image:
  repository: <your image repository URI>
  tag: latest
resources: {}
nodeSelector: {}
tolerations: []
affinity: []

Now install Helm on your local machine and run the command:

helm create netflix-clone

It will create a new folder called netflix-clone. Open this folder and delete all the files inside the charts subfolder, then paste in the files Chart.yaml and values.yml. Open the templates folder and delete all files except _helpers.tpl, then paste in the files deployment.yml and service.yml. Your folder should have the following structure:

netflix-clone-manifest/
└── netflix-clone/
    ├── Chart.yaml
    ├── values.yml
    └── templates/
        ├── deployment.yml
        ├── service.yml
        └── _helpers.tpl
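
Before pushing, it can be worth rendering the chart locally to catch templating mistakes; a minimal sketch, run from the repository root:

helm lint ./netflix-clone --values ./netflix-clone/values.yml
helm template netflix-clone ./netflix-clone --values ./netflix-clone/values.yml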

Now return to the root folder of this repo and push the code to a GitHub repository.

Configure ArgoCD to monitor this Repository

Open ArgoCD in the web browser, click on "Create an application", and fill in the fields:

Application name: netflix-clone
Project name: default
repository url: <your repo url>
Revision: HEAD
path: netflix-clone
Cluster url: select the dropdown
namespace: default
Values file: values.yml

Create the app. After creation, ArgoCD will sync with the repository and create the required pods.
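
The same application can also be defined declaratively instead of through the UI; a minimal sketch of an Application manifest with values taken from the form above (the automated sync policy is an optional assumption):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix-clone
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <your repo url>
    targetRevision: HEAD
    path: netflix-clone
    helm:
      valueFiles:
        - values.yml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Applying this file with kubectl apply -n argocd -f <file> has the same effect as creating the app in the UI.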

Now run

kubectl get svc -n default

This shows the endpoint of the load balancer; copy and paste it into the browser to access the application.

8. Monitoring with Prometheus and Grafana

When the cluster is running, it is important to know how its resources behave. This is only possible when proper monitoring is in place to track key performance metrics. The steps are:

  • Add Prometheus community Helm Repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update  
  • Create namespaces
kubectl create namespace prometheus
kubectl create namespace grafana
  • Install Prometheus
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --set server.service.type=LoadBalancer \
  --set server.persistentVolume.enabled=false
  • Install Grafana
helm repo add grafana https://grafana.github.io/helm-charts   
helm repo update     
helm install grafana grafana/grafana \
  --namespace grafana \
  --set service.type=LoadBalancer \
  --set adminPassword='SuperSecret123!'

After 2-3 minutes, run:

kubectl get svc -n prometheus
kubectl get svc -n grafana
  • Edit the security group in the AWS console
    Go to EC2 > Load Balancers and locate the ELB whose name matches the EXTERNAL-IP. In the ELB's description tab, copy the security group ID, then go to Security Groups in EC2, edit the inbound rules, and add port 80 open to 0.0.0.0/0.
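
The same inbound rule can be added from the CLI once you have the security group ID; a minimal sketch (the group ID below is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id <sg-id> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0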

  • Add Prometheus Data source to Grafana

- Log in to the Grafana UI
- Go to Settings -> Data sources -> Add data source
- Choose Prometheus
- Enter the URL: http://prometheus-server.prometheus.svc.cluster.local
- Click "Save & test"
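
Alternatively, the Prometheus data source can be provisioned as code through the Grafana Helm chart's datasources value; a minimal sketch of a values snippet (structure assumed from the chart's documented layout), passed with an extra -f file on helm install or helm upgrade:

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true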
  • Import Dashboards
- Go to + -> Import
- Enter the ID: 6417 (Kubernetes cluster monitoring)
- Click Import and select Prometheus as the data source

It will load the Grafana dashboard for Kubernetes cluster monitoring. Congratulations on finishing the project!
