Ogonna Nnamani
The Kubernetes Resume Challenge Part 2

Click here for Part 1.

Step 10: Autoscale Your Application
Task: Automate scaling based on CPU usage to handle unpredictable traffic spikes.

Implement HPA: Create a Horizontal Pod Autoscaler targeting 50% CPU utilization, with a minimum of 2 and a maximum of 10 pods.

Apply HPA: Execute kubectl autoscale deployment ecom-web --cpu-percent=50 --min=2 --max=10.

Simulate Load: Use a tool like Apache Bench to generate traffic and increase CPU load.

Implementation
To implement this, instead of using the autoscale command I created an hpa.yaml file that defines the metrics that trigger autoscaling.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: ecomm-hpa
  labels:
    app: ecomm-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecomm-app-deployment
  minReplicas: 2 
  maxReplicas: 10  
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Defining the HPA does not mean our deployment will automatically scale. I encountered an error: no matter how much load testing I did with Apache Bench, the pods did not scale.

HPA did not scale

As we can see, when we apply the HPA to the deployment it is unable to track the current utilization of the pods, hence the "unknown" status. After some research, the fix turned out to be that the resources section under the container spec of the deployment file was not defined. Here is an example of how it is defined in a deployment file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample-app
        image: your-registry/sample-app:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "300m"

Explanation:
resources: Specifies the computational resources (CPU, memory, etc.) needed by the pods.

requests: Defines the minimum amount of resources required by the pods to run.

cpu: "300m": Sets the CPU request to 300 milliCPU (300m), which represents 0.3 of a CPU core. This indicates the minimum amount of CPU that each pod in the deployment requires to function properly.

After applying the updated file, to monitor this in real time and test quickly, I lowered the HPA average utilization target from 50% to 10% and ran

kubectl get hpa -w

and we get the following output:

HPA now scales
It finally tracked the CPU utilization, and our replicas scaled from 3 to 5.

Load Testing Tools
The guidelines suggested Apache Bench, which I found really interesting. The command below sends simulated requests to the Kubernetes endpoint.

ab -n 1000 -c 10 http://<endpoint_url or IP>/
  • The ab command is used for benchmarking HTTP server performance. Here is what this command does:
  • -n 1000: The number of requests to perform. Here it is set to 1000, meaning Apache Bench (ab) will send 1000 requests to the server.
  • -c 10: The number of requests to perform concurrently. Here it is set to 10, meaning Apache Bench will keep 10 requests in flight at a time.
  • http://<endpoint_url or IP>/: The URL or IP address of the server to benchmark.

I also discovered another approach specifically for Kubernetes endpoints: running a throwaway load-generator pod with kubectl. With a single command, the simulated requests begin.

kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://website-service; done"
  • -i --tty: Allocates an interactive terminal for the command to run.
  • load-generator: The name of the pod.
  • --rm: Removes the pod after it terminates.
  • --image=busybox: Specifies the Docker image to use for the pod.
  • --restart=Never: Indicates that the pod should not be restarted automatically if it fails.
  • /bin/sh -c "while sleep 0.01; do wget -q -O- http://website-service; done": The command to run inside the pod, which continuously sends HTTP requests to the specified endpoint - (http://website-service) with a delay of 0.01 seconds between requests.

Step 11: Implement Liveness and Readiness Probes
Task: Add liveness and readiness probes to website-deployment.yaml, targeting an endpoint in your application that confirms its operational status.

Implementation
This is another step where I reached out to a friend (a PHP developer) to configure health checks on both the website and the database. Hitting the /app-healthcheck endpoint returned "App is running", and the database check behaved the same way.
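As a rough sketch, the probe configuration in website-deployment.yaml could look like the following; the container port and the /app-healthcheck path are assumptions based on my setup:

```yaml
# Probes added under the web container in website-deployment.yaml.
# Port 80 and the /app-healthcheck path are assumed from the description above.
livenessProbe:
  httpGet:
    path: /app-healthcheck
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /app-healthcheck
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```

The liveness probe restarts a container whose check keeps failing, while the readiness probe only removes the pod from service endpoints until the check passes again.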

Step 12: Utilize ConfigMaps and Secrets
Task: Securely manage the database connection string and feature toggles without hardcoding them in the application.
Create Secret and ConfigMap: For sensitive data like DB credentials, use a Secret. For non-sensitive data like feature toggles, use a ConfigMap.

Implementation
Similar to the feature-toggle-config, I generated a configmap.yaml file that stores all the environment variables initially hardcoded in the deployment file. The configs now live in the ConfigMap and are referenced in the deployment file, while a website-secret.yaml and a mariadb-secret.yaml handle the DB credentials of both resources.

apiVersion: v1
kind: ConfigMap
metadata:
  name: website-configmap
data:
  DB_NAME: "ecomdb"
  DB_HOST: "mariadb-service"
  DB_USER: "ecomdb-user"

Above is an example configmap.yaml file. Note that the password does not belong here; sensitive values go in the Secret.

apiVersion: v1
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM= 
kind: Secret
metadata:
  name: db-secret
type: Opaque

An example of a secret.yaml file.
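For reference, the base64 value in the Secret is simply the encoded sample password, which the shell can produce directly:

```shell
# Base64-encode the sample password; the output matches the DB_PASSWORD value above
echo -n 'password123' | base64
```

Alternatively, kubectl create secret generic db-secret --from-literal=DB_PASSWORD='password123' creates the same Secret and handles the encoding for you.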
Now we reference these in our deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample-app
        image: your-registry/sample-app:latest
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: website-configmap
              key: DB_HOST
        - name: DB_USER
          valueFrom:
            configMapKeyRef:
              name: website-configmap
              key: DB_USER
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: DB_PASSWORD
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: website-configmap
              key: DB_NAME
        - name: FEATURE_DARK_MODE
          valueFrom:
            configMapKeyRef:
              name: feature-toggle-config
              key: FEATURE_DARK_MODE

This references the ConfigMap and Secret files defined above.

Extra credit:
Package Everything in Helm
Task: Utilize Helm to package your application, making deployment and management on Kubernetes clusters more efficient and scalable.

Implementation
Helm charts make deploying and managing applications on Kubernetes clusters very efficient. By utilizing a values.yaml file, we can define the values for our various manifests, thereby making our chart generic and highly reusable. Below is a sample deployment template and a values.yaml handling the values of specific configurations.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ .Release.Name }}-container
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        ports:
        - containerPort: {{ .Values.containerPort }}
        resources:
{{ toYaml .Values.resources | indent 10 }}

Sample Deployment.yaml file

replicaCount: 3
image:
  repository: your-registry/sample-app
  tag: latest
containerPort: 8080
resources:
  requests:
    cpu: "300m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"

Sample Values.yaml file

The deployment.yaml file is a Helm template file. It uses Go templating syntax to inject values from the values.yaml file.
The values.yaml file defines configurable values for the Helm chart, such as the number of replicas, the Docker image repository and tag, container port, and resource requests and limits.

Running the helm install command deploys our website to a Kubernetes cluster.
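Assuming the chart lives in a local directory (the chart path and release name here are illustrative), the deployment boils down to one command:

```shell
# Install the chart, or upgrade it in place if the release already exists
helm upgrade --install ecomm-app ./ecomm-chart -f ./ecomm-chart/values.yaml
```

Using upgrade --install instead of plain install makes the command idempotent, which is handy once the same command runs in a pipeline.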

Implement Persistent Storage
Task: Ensure data persistence for the MariaDB database across pod restarts and redeployments.

Create a PVC: Define a PersistentVolumeClaim for MariaDB storage needs.

Implementation

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi

Above is a sample pvc.yaml file that defines a PersistentVolumeClaim. The underlying volume is decoupled from pod restarts, ensuring that data persists as long as the claim remains bound.
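The claim only takes effect once the MariaDB deployment mounts it. A sketch of the relevant pod spec fragment follows; the container name and image tag are assumptions, and /var/lib/mysql is MariaDB's default data directory:

```yaml
# Fragment of the MariaDB deployment's pod spec referencing the PVC above
containers:
- name: mariadb
  image: mariadb:latest
  volumeMounts:
  - name: mariadb-storage
    mountPath: /var/lib/mysql
volumes:
- name: mariadb-storage
  persistentVolumeClaim:
    claimName: mariadb-pvc
```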

Implement Basic CI/CD Pipeline
Task: Automate the build and deployment process using GitHub Actions.
GitHub Actions Workflow: Create a .github/workflows/deploy.yml file to build the Docker image, push it to Docker Hub, and update the Kubernetes deployment upon push to the main branch

Implementation
To automate this deployment, I created a CI/CD workflow file that does the following:

  • Uses the azure/setup-kubectl action to install kubectl for the pipeline.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v3
    - name: Install kubectl
      uses: azure/setup-kubectl@v2.0
      with:
        version: 'v1.27.0' # default is latest stable
      id: install
  • Authenticates to the AWS account using an access key ID and secret access key stored as GitHub secrets.
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1
  • Since the image is hosted in Amazon ECR Public, we log in, build, and finally push to the ECR repository.
    - name: Login to Amazon ECR Public
      id: login-ecr-public
      uses: aws-actions/amazon-ecr-login@v2
      with:
        registry-type: public

    - name: Build, tag, and push docker image to Amazon ECR Public
      env:
        REGISTRY: ${{ secrets.ECR_REGISTRY }}
        REPOSITORY: kubernetes-resume-challenge-repo
        IMAGE_TAG: latest
      run: |
        docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
        docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG

The command below updates the kubeconfig file with the credentials and endpoint information needed to connect to the Amazon EKS cluster named kubernetes-cluster. This allows subsequent steps in the pipeline to interact with the Kubernetes cluster.

- name: Update kube config
  run: aws eks update-kubeconfig --name kubernetes-cluster

The command below moves into the project's Helm chart directory and runs the helm uninstall and helm install commands. Because this pipeline runs repeatedly, the first command removes the release if it already exists, and the re-install then picks up any new changes made to the project.


    - name: Deploy helm chart to EKS
      run: |
        helm uninstall helm-app -n helm || true  # don't fail on the first run, when no release exists yet
        cd kubernetes/helm-app
        helm install helm-app . -f values.yaml -n helm
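One piece the snippets above leave out is the trigger. Per the task, the workflow runs on pushes to the main branch, which is declared at the top of .github/workflows/deploy.yml roughly like this (the workflow name is illustrative):

```yaml
# Top of deploy.yml: run the pipeline whenever main receives a push
name: Build and Deploy
on:
  push:
    branches:
      - main
```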

Conclusion
Ladies and gentlemen, we have come to the end of this wonderful project. I didn't realize how much went into it until I had to document and relive the roller-coaster moments. My knowledge of Kubernetes and the cloud in general has been stretched as a result.

I hope you enjoyed going through this with me, and maybe it encourages you to try it yourself and have some fun while at it, because I sure did!

To connect with me
LinkedIn
GitHub
Twitter
