DEV Community

Cover image for Checking the Capability of ChatGPT for DevOps Automation Solution Design
Saravanan Gnanaguru
Saravanan Gnanaguru

Posted on

Checking the Capability of ChatGPT for DevOps Automation Solution Design

Table of Contents

Initial Words

Lately, we are hearing more about the capabilities of ChatGPT. So I wanted to take a stab at checking ChatGPT's capability to understand its usefulness in the life of a Cloud DevOps Engineer for designing automation solutions.
I would need to agree that the responses generated by ChatGPT were good so I thought of writing about the experience.
The views shared here are personal and wanted to show the understanding of How To Use ChatGPT with better prompting.

Quick Take Away

This is like Disclaimer for using the responses generated by chatGPT - Because one thing for sure is, ChatGPT can definitely be used to guide people on how to do things, but it is entirely the responsibility of end user to have at least the basic/intermediate knowledge to understand the response and customize it further for the user's need and use it for implementation.
We need to keep in mind, currently the responses might be more general and CANNOT be used as a copy paste solution, and as a user we need to further modify it for our own use case or scenario.

Let us Jump into See what I Tried

I've started the interaction with the below Prompt. Since I'm currently designing a 3-Tier architecture solution for my personal project, so i've started my prompt query related to that,

The Initial Query Prompt

I want to setup an deployment automation pipeline for a 3 tier application developed using react ui, golang and redis db. 
I want to deploy this in AWS EKS using kubernetes. Use tools like github action and terraform where ever possible
Enter fullscreen mode Exit fullscreen mode

Query 1

So, ChatGPT responded to the query, but I was not satisfied with the initial responses and asked ChatGPT to re-generate the response.
Finally the below response got my interest and has covered major CICD solution aspects I was looking for in the response.

So response goes like this,

Great! Here are the high-level steps to set up your deployment automation pipeline for your 3 tier application using React UI, Golang, and Redis DB in AWS EKS using Kubernetes, while utilizing Github Actions and Terraform:

Version Control: Ensure that your code is under version control, and stored in a repository such as GitHub.
Dockerize your application: Build docker images for each of the components of your 3-tier application (React UI, Golang, and Redis DB). Ensure that the images are pushed to a container registry, such as Docker Hub or Amazon ECR.
Infrastructure-as-code: Define your infrastructure as code using Terraform. This includes creating the AWS resources such as VPC, subnets, security groups, EKS cluster, and worker nodes.

Create Kubernetes manifests: Create Kubernetes manifests for your application components. These include deployment, service, and ingress manifests for the React UI and Golang components, and a statefulset manifest for Redis DB. Use best practices such as resource requests and limits, health checks, and liveness probes.

Set up Continuous Integration (CI): Use Github Actions to set up a CI pipeline that builds and tests your application every time you push new code. Ensure that you have defined appropriate unit tests and integration tests for your application components.

Set up Continuous Deployment (CD): Configure your CD pipeline to deploy your application to your EKS cluster whenever there are new changes in your code repository. Use Kubernetes tools like kubectl or Helm to deploy your application.
Monitor and scale: Monitor your application performance, and scale your application horizontally or vertically as necessary.

Here's a possible breakdown of these steps:
Step 1: Version Control

Create a repository in GitHub to store your application code
Ensure that each component (React UI, Golang, and Redis DB) has its own folder in the repository

Step 2: Dockerize your application

Create a Dockerfile for each component
Build the Docker images for each component
Push the Docker images to a container registry

Step 3: Infrastructure-as-code

Define the infrastructure as code using Terraform
Create the AWS resources such as VPC, subnets, security groups, EKS cluster, and worker nodes

Step 4: Create Kubernetes manifests

Create Kubernetes manifests for your application components
Use deployment, service, and ingress manifests for the React UI and Golang components
Use a statefulset manifest for the Redis DB component
Define appropriate resource requests and limits, health checks, and liveness probes

Step 5: Set up Continuous Integration (CI)

Create a GitHub Action workflow that builds and tests your application components
Use appropriate unit tests and integration tests for your application components
Ensure that the workflow runs every time you push new code to your repository

Step 6: Set up Continuous Deployment (CD)

Configure your CD pipeline to deploy your application to your EKS cluster
Use Kubernetes tools like kubectl or Helm to deploy your application
Ensure that the CD pipeline runs every time there are new changes in your repository

Step 7: Monitor and scale

Monitor your application performance using tools like Prometheus and Grafana
Use Kubernetes autoscaling to scale your application horizontally or vertically as necessary

This is just a high-level overview, and there may be additional steps or details that are specific to your application and infrastructure requirements.
Enter fullscreen mode Exit fullscreen mode

This is mind blowing, isn't it?

Continuing the Query Prompts

Query Prompt 2

Then I've continued my chat query prompt, by picking one particular solution pointer from the response provided by ChatGPT for the earlier query.
I've asked,

Explain how to scale application horizontally or vertically 
Enter fullscreen mode Exit fullscreen mode

Like I did for the previous query, I was not satisfied with the initial responses and asked ChatGPT to re-generate responses.
Finally, I got the one response, which was using the context of tools and technologies from the initial query,

Sure! Here's an explanation on how to scale your application horizontally or vertically as necessary:

Scaling your application refers to adding or removing resources, such as CPU, memory, or instances, to meet the demand of the workload. Scaling can be done in two ways: horizontal scaling and vertical scaling.

Horizontal Scaling:
Horizontal scaling, also known as scaling out, refers to adding more instances of the same type to your application to handle more requests. This approach increases the overall capacity of the application by distributing the workload across multiple instances. To horizontally scale your application, you need to add more nodes to your Kubernetes cluster. You can do this by adding more worker nodes to your EKS cluster using the AWS console, CLI, or Terraform. Once you have added more worker nodes, Kubernetes will automatically distribute the workload across the nodes using the service discovery mechanism. You can also use Kubernetes horizontal pod autoscaler (HPA) to scale your application based on resource usage or custom metrics.

Vertical Scaling:
Vertical scaling, also known as scaling up, refers to increasing the size of the instance or node to handle more requests. This approach increases the overall capacity of the application by adding more CPU, memory, or other resources to the existing nodes. To vertically scale your application, you need to modify the resources requests and limits in your Kubernetes manifests. For example, you can increase the CPU or memory requests and limits for your application pods to allocate more resources to the application.

Here are the steps to scale your application horizontally or vertically:

Monitor your application performance: Monitor your application performance using tools like Prometheus and Grafana. Keep an eye on the resource usage and performance metrics to identify when scaling is needed.

Determine the scaling approach: Decide whether you need to scale your application horizontally or vertically based on the workload and resource usage.

Horizontal scaling: If you need to horizontally scale your application, add more worker nodes to your EKS cluster using the AWS console, CLI, or Terraform. You can also use Kubernetes HPA to scale your application based on resource usage or custom metrics.

Vertical scaling: If you need to vertically scale your application, modify the resources requests and limits in your Kubernetes manifests. You can increase the CPU or memory requests and limits for your application pods to allocate more resources to the application.

Test the scaling: Test the scaling by increasing the workload and checking if the application can handle the increased traffic. Monitor the performance metrics to ensure that the scaling is effective.

Repeat as necessary: Keep monitoring the application performance and adjust the scaling as necessary to meet the demands of the workload.

That's a brief explanation on how to scale your application horizontally or vertically in AWS EKS using Kubernetes.
Enter fullscreen mode Exit fullscreen mode

Query Prompt 3

Then I've continued asking for examples, and here is my next query prompt

Show me examples for horizontal and vertical auto scaling
Enter fullscreen mode Exit fullscreen mode

Query 3

And here is the response that I find more relevant,

Sure, here are examples of horizontal and vertical auto-scaling using Kubernetes:

Horizontal Autoscaling:
Horizontal autoscaling adds or removes instances of an application to handle more requests. In Kubernetes, this can be achieved using the Horizontal Pod Autoscaler (HPA) resource.

For example, if your application is experiencing an increase in traffic and needs to handle more requests, you can configure the HPA to automatically add more pods to your deployment based on CPU utilization or other metrics. Similarly, if the traffic decreases, the HPA can automatically remove pods to save resources.

Here's an example HPA configuration in Kubernetes that scales the number of pods in a deployment based on CPU utilization:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50


In this example, the HPA scales the number of pods in the "my-app" deployment between 1 and 10 based on CPU utilization, with a target average utilization of 50%.

Vertical Autoscaling:
Vertical autoscaling increases or decreases the resources allocated to an instance of an application to handle more requests. In Kubernetes, this can be achieved using the Vertical Pod Autoscaler (VPA) resource.

For example, if your application is experiencing an increase in traffic and needs to handle more requests, you can configure the VPA to automatically increase the CPU and memory resources allocated to each pod in your deployment. Similarly, if the traffic decreases, the VPA can automatically decrease the resources to save costs.

Here's an example VPA configuration in Kubernetes that scales the CPU and memory resources allocated to each pod in a deployment based on performance metrics:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 50m
        memory: 256Mi
      maxAllowed:
        cpu: 500m
        memory: 1Gi


In this example, the VPA scales the CPU and memory resources allocated to each pod in the "my-app" deployment between 50 milliCPU and 500 milliCPU, and between 256 MiB and 1 GiB of memory, based on performance metrics.

I hope these examples help illustrate the differences between horizontal and vertical autoscaling in Kubernetes!
Enter fullscreen mode Exit fullscreen mode

Query 3-2

Final Query Prompt

Just concluded the auto scaling configuration interaction, with the below query prompt, as I've asked it to expand it with an metrics server config,

Expand the above configurations with metrics server configuration
Enter fullscreen mode Exit fullscreen mode

So here is the response that was linked to previous response of Auto scaling configuration,

Sure, here's an example of how to configure the Kubernetes Metrics Server along with the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) configurations we provided earlier.

The Metrics Server is a Kubernetes add-on that provides resource utilization metrics for pods and nodes, which are used by the HPA and VPA to determine when and how to scale an application.

First, you'll need to install the Metrics Server into your Kubernetes cluster. You can do this by running the following command:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml


Once the Metrics Server is installed, you can configure your HPA and VPA to use it for metrics by adding the following annotation to your deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    autoscaling.alpha.kubernetes.io/metrics-provider: "metrics-server"


With this annotation in place, the HPA and VPA will use the Metrics Server to obtain CPU and memory utilization metrics for your pods and nodes.

Here's an updated example of the HPA configuration we provided earlier, with the addition of the Metrics Server configuration:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 80


In this example, we've added a second metric to the HPA configuration that targets memory utilization at 80%.

Here's an updated example of the VPA configuration we provided earlier, with the addition of the Metrics Server configuration:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 50m
        memory: 256Mi
      maxAllowed:
        cpu: 500m
        memory: 1Gi
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80


In this example, we've added two metrics to the VPA configuration that target CPU and memory utilization, with target average utilization of 50% and 80% respectively.

I hope this helps you configure the Metrics Server, HPA, and VPA for your application in Kubernetes!
Enter fullscreen mode Exit fullscreen mode

So this is it!!!

ChatGPT Technical Description

It is developed by OpenAI and currently uses GPT3.5 (as on Apr-9-2023) algorithm for generating responses. Hope this will be improved further with the introduction of GPT4

Conclusion

As you can see, the ChatGPT can be used to explain each and every solution pointer that was generated in the initial response, and definitely can be used to create a full blown solution with tailor made customisation required for each use-case we are trying to resolve.
In this example blog, I tried to converse with ChatGPT on the aspects of CICD automation and Tried to dig more into auto scaling configurations.

If you have not tried ChatGPT yet, then you can sign up here

As we can see the future lies on the emergence of such AI tools (like ChatGPT, GitHub Co-pilot and many handful AI tools), we can utilise the AI knowledge for the benefit of improving the productivity and may go with the co-existence of each other.

Follow my work and share your thoughts,

Top comments (0)