<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AzeematRaji</title>
    <description>The latest articles on DEV Community by AzeematRaji (@azeemah).</description>
    <link>https://dev.to/azeemah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2001031%2Fbdb58135-879e-4b12-a394-0bf211f67686.jpg</url>
      <title>DEV Community: AzeematRaji</title>
      <link>https://dev.to/azeemah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/azeemah"/>
    <language>en</language>
    <item>
      <title>Create Your Own Helm Charts: Reusable, Scalable, and Production-Ready</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Fri, 19 Sep 2025 19:58:16 +0000</pubDate>
      <link>https://dev.to/azeemah/create-your-own-helm-charts-reusable-scalable-and-production-ready-4ed7</link>
      <guid>https://dev.to/azeemah/create-your-own-helm-charts-reusable-scalable-and-production-ready-4ed7</guid>
      <description>&lt;p&gt;If you’ve ever deployed microservices on Kubernetes, you know the pain: endless YAML files, copy-pasting configs, and maintaining slightly different manifests for every service. It works… until you need to scale. That’s where Helm comes in. By creating your own charts, you can transform messy manifests into reusable building blocks that scale seamlessly as your app grows.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how I built a shopping application with multiple microservices, deployed on AWS EKS with Terraform, but more importantly, how I made the whole setup modular with custom Helm charts.&lt;/p&gt;

&lt;h2&gt;
  
  
  As a DevOps engineer, here’s what you should know before deploying a microservices app:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Which microservices you need to deploy&lt;/li&gt;
&lt;li&gt;Which services talk to each other&lt;/li&gt;
&lt;li&gt;How they communicate&lt;/li&gt;
&lt;li&gt;Which database(s) they use&lt;/li&gt;
&lt;li&gt;Which ports each service runs on&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Write Your Own Helm Charts?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Reusability - no duplicate YAML&lt;/li&gt;
&lt;li&gt;Scalability - add new services easily&lt;/li&gt;
&lt;li&gt;Flexibility - use values.yaml for environment-specific configs&lt;/li&gt;
&lt;li&gt;Consistency - Redis, ingress, microservices share structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of a one-off deployment, this approach gave me a framework to grow the application without headaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving in, you should have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic Kubernetes knowledge – deployments, services, and ingress&lt;/li&gt;
&lt;li&gt;Basic Terraform knowledge – how to provision infrastructure (init, apply, destroy)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Infrastructure First: Terraform + EKS
&lt;/h3&gt;

&lt;p&gt;I used Terraform to provision the foundation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A VPC with subnets&lt;/li&gt;
&lt;li&gt;An EKS cluster&lt;/li&gt;
&lt;li&gt;IAM roles and networking
With this, spinning up or tearing down the cluster is just:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;infra
terraform init
terraform apply &lt;span class="nt"&gt;--auto-approve&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;--auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Building Helm Charts for Each Component
&lt;/h3&gt;

&lt;p&gt;Instead of dumping all microservices into one manifest, I built a chart per component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;manifests/
├── charts/
│   ├── ingress/       
│   ├── redis/         
│   └── shopping-ms/   
├── values/
├── helmfile.yaml
└── issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created a chart each for the Redis database, the ingress, and the microservice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, I created a chart for my shopping microservice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create shopping-ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will generate a directory structure like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;shopping-ms/
├── charts/
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── _helpers.tpl
├── values.yaml
├── Chart.yaml
└── .helmignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, the templates can be edited with placeholders to fit the microservice, while values.yaml carries the default values for the chart.&lt;/p&gt;

&lt;p&gt;Going two levels up from shopping-ms/ into manifests/, the values/ directory can hold a separate values file for each microservice; the values in these files override the defaults in each chart’s values.yaml.&lt;/p&gt;
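
&lt;p&gt;As an illustrative sketch (the image name and port are hypothetical), a templated deployment excerpt and a per-service override file might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# charts/shopping-ms/templates/deployment.yaml (excerpt, sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "shopping-ms.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}

# values/shopping-ms.yaml (overrides the chart defaults; values are examples)
replicaCount: 2
image:
  repository: myrepo/shopping-ms   # hypothetical image
  tag: "1.0.0"
service:
  port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;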

&lt;h3&gt;
  
  
  Securing Ingress with TLS
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install the NGINX Ingress Controller in the cluster; it is responsible for creating the load balancer and actually handling the traffic. The Deployment and Service in the ingress chart essentially define the rules the controller follows to know how to handle that traffic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I installed the nginx-ingress controller and cert-manager directly from their Helm repos to minimize overhead. I’ll get to cert-manager in a bit, hang on!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add Helm repo for ingress-nginx&lt;/span&gt;
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

&lt;span class="c"&gt;# Install ingress-nginx into its own namespace&lt;/span&gt;
kubectl create namespace ingress-nginx
helm &lt;span class="nb"&gt;install &lt;/span&gt;ingress-nginx ingress-nginx/ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.publishService.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Verify Ingress Controller&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx

&lt;span class="c"&gt;# Check the LoadBalancer service&lt;/span&gt;

kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;You should have an existing domain name (or register one); point it to the load balancer’s DNS name with a CNAME record.&lt;/li&gt;
&lt;li&gt;Install cert-manager in the cluster.
This handles the automatic creation of certificates and their renewal before they expire. However, a ClusterIssuer or Issuer is needed to tell cert-manager which CA (certificate authority) to use. The annotation &lt;code&gt;cert-manager.io/cluster-issuer: letsencrypt-prod&lt;/code&gt; must also be defined in the ingress.yaml so the ingress can request a certificate successfully.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace cert-manager

helm repo add jetstack https://charts.jetstack.io
helm repo update

helm &lt;span class="nb"&gt;install &lt;/span&gt;cert-manager jetstack/cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# This installs cert-manager and the CRDs (otherwise issuers won’t work, which is needed for TLS).&lt;/span&gt;
&lt;span class="c"&gt;# Verify Installation&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Apply the issuer.yaml so cert-manager is ready to issue certificates when they are requested.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
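
&lt;p&gt;For reference, a minimal Let’s Encrypt ClusterIssuer might look like the sketch below (the email is a placeholder, and the name must match the annotation used in the ingress):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# issuer.yaml (sketch)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder, use your own
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;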



&lt;p&gt;&lt;strong&gt;Key notes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingress controller is cluster-wide, not namespace-specific&lt;/li&gt;
&lt;li&gt;cert-manager is also a cluster-wide resource&lt;/li&gt;
&lt;li&gt;Use ClusterIssuer instead of Issuer for certificates that apply across namespaces&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deploying with Helmfile
&lt;/h3&gt;

&lt;p&gt;Managing each chart individually quickly becomes painful. That’s why I used Helmfile: a single command applies or destroys all charts at once.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helmfile apply
helmfile destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
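
&lt;p&gt;A minimal helmfile.yaml for this layout might look like the following sketch (release names and paths are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# helmfile.yaml (sketch)
releases:
  - name: redis
    chart: ./charts/redis
    values:
      - ./values/redis.yaml
  - name: shopping-ms
    chart: ./charts/shopping-ms
    values:
      - ./values/shopping-ms.yaml
  - name: ingress
    chart: ./charts/ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;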



&lt;h3&gt;
  
  
  Access the application
&lt;/h3&gt;

&lt;p&gt;Once deployed, the shopping app was live at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://&amp;lt;your-domain&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Troubleshooting Tips
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;HTTPS not working? Ensure cert-manager is installed before applying your issuer&lt;/li&gt;
&lt;li&gt;Hit Let’s Encrypt rate limits? Use the staging issuer first&lt;/li&gt;
&lt;li&gt;Services not exposed? Double-check ingress annotations in your Helm chart&lt;/li&gt;
&lt;li&gt;New to Kubernetes? Start with a single deployment file, then turn it into a Helm chart.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Start with charts early, don’t wait until you have YAML sprawl.&lt;/li&gt;
&lt;li&gt;Helmfile is a lifesaver, one command to apply/destroy everything.&lt;/li&gt;
&lt;li&gt;Charts are portable, I can now drop Redis or ingress into any new cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Wrap up
&lt;/h3&gt;

&lt;p&gt;This project showed me how writing reusable Helm charts makes Kubernetes deployments scalable, consistent, and future-proof. Instead of juggling manifests, I now have modular building blocks I can apply anywhere.&lt;/p&gt;

&lt;p&gt;Next, I’m planning to extend these charts for monitoring (Prometheus + Grafana) so the setup can evolve into a full production-ready stack.&lt;/p&gt;

&lt;p&gt;Full codebase here: &lt;a href="https://github.com/AzeematRaji/helm-charts-for-microservices" rel="noopener noreferrer"&gt;Github repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>microservices</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Deploying MongoDB on Amazon EKS with Terraform, Helm &amp; Ingress: A Story of Stateful Apps on Kubernetes</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Sat, 16 Aug 2025 22:49:52 +0000</pubDate>
      <link>https://dev.to/azeemah/deploying-mongodb-on-amazon-eks-with-terraform-helm-ingress-a-story-of-stateful-apps-on-3ghm</link>
      <guid>https://dev.to/azeemah/deploying-mongodb-on-amazon-eks-with-terraform-helm-ingress-a-story-of-stateful-apps-on-3ghm</guid>
      <description>&lt;p&gt;Have you ever wondered how stateful applications like databases fit into the world of Kubernetes? Or maybe you’ve been curious about what happens when a database pod is replicated, where does the data actually live?&lt;/p&gt;

&lt;p&gt;I had these same questions. Stateless apps were easy for me to understand: spin up a pod, scale it up and down, and if one dies, Kubernetes simply replaces it. But with a database? Things get trickier. Data can’t just disappear when a pod restarts.&lt;/p&gt;

&lt;p&gt;That curiosity led me into this project: deploying MongoDB on Amazon EKS. Not only MongoDB, but also Mongo Express, a web-based admin interface to interact with the database. And I wanted to automate it end-to-end using Terraform, Helm, and Kubernetes manifests.&lt;/p&gt;

&lt;p&gt;Here’s how the journey unfolded:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Stateful vs Stateless Apps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateless apps&lt;/strong&gt; (like an API or frontend) don’t care if pods are destroyed and recreated — the state is stored elsewhere (e.g., S3, DynamoDB).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateful apps&lt;/strong&gt; (like MongoDB, PostgreSQL, MySQL) are different. They need persistent storage that survives pod restarts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means when you run a database in Kubernetes, you need to think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How data is stored (Persistent Volumes)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How pods are managed (StatefulSets)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How storage integrates with your cloud provider (AWS EBS in my case)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s the stack I used to make this work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terraform – Provisioned the VPC, subnets, and Amazon EKS cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS EBS CSI Driver – Enabled dynamic provisioning of EBS volumes for persistent storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm – Deployed MongoDB as a StatefulSet with customized values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mongo Express – A simple web-based UI to interact with MongoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NGINX Ingress Controller – Exposed Mongo Express via an AWS Load Balancer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architecture Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s what the setup looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="" class="article-body-image-wrapper"&gt;&lt;img alt="Architecture diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Provision Infrastructure with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform took care of the networking and cluster setup. At a high level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"
  # Public &amp;amp; private subnets defined here...
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.2"
  cluster_name    = "mongo-eks-cluster"
  cluster_version = "1.30"
  # Managed node groups defined here...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once applied, I had a working VPC + EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Install AWS EBS CSI Driver&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.22"
kubectl get pods -n kube-system -l app=ebs-csi-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first command pulls the stable release manifest and applies all the necessary resources (controller, DaemonSet, RBAC, etc.); the second verifies the installation.&lt;/p&gt;

&lt;p&gt;Without it, claiming a persistent volume is like asking for storage that doesn’t exist: the cluster cannot provision an EBS volume without a provisioner installed, which is exactly what the EBS CSI driver provides.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Attach policy to the Node IAM role&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy \
  --role-name &amp;lt;NodeInstanceRoleName&amp;gt; \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command gives EKS worker nodes permission to create and manage EBS volumes so MongoDB (or any pod) can use persistent storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Install MongoDB via Helm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before installing MongoDB, set up mongodb-values.yaml with the required replica set and persistent volume settings. In case you’re wondering what a values.yaml file is: it’s the configuration file that overrides a chart’s default configuration when you install an app with Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-mongodb oci://registry-1.docker.io/bitnamicharts/mongodb \
  -f mongodb-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
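
&lt;p&gt;For illustration, a mongodb-values.yaml enabling a replica set with persistent storage might look like this sketch against the Bitnami chart (sizes, counts, and the password are example values only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mongodb-values.yaml (sketch, Bitnami chart values)
architecture: replicaset
replicaCount: 3
persistence:
  enabled: true
  size: 8Gi
  storageClass: gp2        # example EBS storage class
auth:
  rootPassword: changeme   # example only; use a secret manager in production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;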



&lt;p&gt;&lt;strong&gt;Step 5: Create a service for Mongodb deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Installing MongoDB on the cluster creates a headless service (with clusterIP: None), which is useful for internal communication between MongoDB pods, for example when they need to discover each other in a replica set. However, this is not suitable for client applications like Mongo Express, which need a stable endpoint to connect to MongoDB.&lt;/p&gt;

&lt;p&gt;In a StatefulSet, one replica, the primary pod, handles &lt;strong&gt;reads and writes&lt;/strong&gt;, while the secondary pods serve &lt;strong&gt;reads&lt;/strong&gt; only to reduce the load on the primary. If the headless address of a secondary pod is used in the ConfigMap, for example, the UI might be accessible but writes would fail, limiting the app’s functionality. It’s important to connect to the database as a whole rather than to an individual pod.&lt;/p&gt;

&lt;p&gt;To make MongoDB accessible to other applications inside the cluster, we need a ClusterIP service. This service provides a single stable DNS name and load balances connections across the MongoDB pods. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f mongodb-svc.yaml&lt;/code&gt;&lt;/p&gt;
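
&lt;p&gt;A mongodb-svc.yaml along these lines gives a stable ClusterIP endpoint (the selector labels are an assumption and must match how the chart labels its pods):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mongodb-svc.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: mongodb   # must match the chart's pod labels
  ports:
    - port: 27017
      targetPort: 27017
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;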

&lt;p&gt;Check all the pods and services to confirm they are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod
kubectl get svc
kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Deploy Secret and Configmap Volume&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are needed for Mongo Express to connect to MongoDB; they carry some of the values we defined in the values.yaml.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
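
&lt;p&gt;As a sketch, the Secret holds the credentials (base64-encoded) and the ConfigMap holds the database endpoint; all names and values below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# secret.yaml (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=   # base64 for "username"
  mongo-root-password: cGFzc3dvcmQ=   # base64 for "password"
---
# configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service       # the ClusterIP service name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;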



&lt;p&gt;&lt;strong&gt;Step 7: Deploy Mongo Express&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mongo Express is the UI for the database; it is deployed with a standard Kubernetes Deployment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f mongoexpress.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Install Nginx ingress controller via helm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The NGINX ingress controller automatically spins up a cloud load balancer (in my case, an AWS load balancer) and, under the hood, routes traffic from the load balancer according to the ingress rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install ingress-nginx oci://ghcr.io/nginxinc/charts/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9: Deploy Ingress rule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This file contains the rules the ingress controller applies, basically how traffic should be routed. It is exposed to everyone instead of being tied to a domain name because this is a testing environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f ingress.yaml&lt;/code&gt;&lt;/p&gt;
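
&lt;p&gt;A minimal ingress rule routing all traffic to the Mongo Express service might look like this sketch (the service name and port are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongoexpress-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mongoexpress-service   # assumed service name
                port:
                  number: 8081               # Mongo Express default port
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;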

&lt;p&gt;&lt;strong&gt;Step 10: Accessing the Mongo Express UI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The address shown for the ingress is accessible in the browser; you can log in and store data in the database using the default Mongo Express credentials: admin / pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus&lt;/strong&gt;: try scaling the replica set down after saving data in the database, confirm no pod is running, then scale it back up; you will find your data attached to the database again. That’s the power of persistent volumes in the cloud: a pod or node restarting doesn’t mean storage is lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations of Self-Hosted&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While self-hosting gives control and flexibility, it also introduces challenges such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup and restore processes.&lt;/li&gt;
&lt;li&gt;Security patching and updates.&lt;/li&gt;
&lt;li&gt;Monitoring and scaling for production workloads.&lt;/li&gt;
&lt;li&gt;High availability and failover management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Linode vs AWS Storage Provisioning&lt;/strong&gt; – Linode’s Kubernetes engine comes with a default storage provisioner, but AWS EKS requires installing and configuring the &lt;strong&gt;EBS CSI Driver&lt;/strong&gt; with proper IAM permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;StatefulSet Basics&lt;/strong&gt; – Understanding PVC binding, PV lifecycle, and how pods reconnect to the same volumes after restarts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress Integration&lt;/strong&gt; – AWS LB + NGINX Ingress can seamlessly route external traffic to internal services&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;mongodb-values.yaml, Terraform lock files and state files, and .tfvars should all be listed in .gitignore&lt;/li&gt;
&lt;li&gt;The EBS CSI driver installation, IAM policy attachment, and MongoDB Helm deployment can all be done with Terraform, which helps keep versions consistent&lt;/li&gt;
&lt;li&gt;Secrets can be managed with a third-party tool such as AWS Secrets Manager and referenced from secret.yaml instead of being exposed&lt;/li&gt;
&lt;li&gt;The application can be accessed via a domain name, configured in the ingress rule&lt;/li&gt;
&lt;li&gt;TLS for secure connections using cert-manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;To take this setup closer to production readiness, the following can be implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated Backups – configure scheduled snapshots or use tools like Velero.&lt;/li&gt;
&lt;li&gt;Monitoring &amp;amp; Alerting – integrate with Prometheus + Grafana for insights.&lt;/li&gt;
&lt;li&gt;Maintenance – apply updates and patches regularly.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling – configure replica sets for fault tolerance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migration to Managed Services – consider AWS DocumentDB for reduced operational overhead.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find all the code snippets in &lt;a href="https://github.com/AzeematRaji/mongo-db-eks" rel="noopener noreferrer"&gt;my github repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>ingress</category>
      <category>aws</category>
    </item>
    <item>
      <title>From Scripts to Cloud: My Hands-On Guide to ML + DevOps</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Mon, 21 Apr 2025 17:09:12 +0000</pubDate>
      <link>https://dev.to/azeemah/from-scripts-to-cloud-my-hands-on-guide-to-ml-devops-2kbg</link>
      <guid>https://dev.to/azeemah/from-scripts-to-cloud-my-hands-on-guide-to-ml-devops-2kbg</guid>
      <description>&lt;p&gt;As a DevOps engineer, I'm used to automating backend systems, deploying apps, and scaling infrastructure. But when I trained my first machine learning model, I found myself asking:&lt;br&gt;
How do I bring the same level of automation and structure here? &lt;/p&gt;

&lt;p&gt;I also started wondering, as AI continues to evolve, what’s the fate of a Cloud/DevOps engineer? Or better still, what role will we play in this new landscape?&lt;/p&gt;

&lt;p&gt;This blog post is my attempt to explore and answer those questions by integrating AI and DevOps. Think of it as a beginner-friendly hands-on project guide to building a bridge between the two worlds.&lt;/p&gt;
&lt;h3&gt;
  
  
  What I Built
&lt;/h3&gt;

&lt;p&gt;To put these questions into practice, I built a simple yet meaningful project:&lt;br&gt;
A machine learning model that predicts epitope regions in a protein sequence.&lt;br&gt;
An epitope is the part of an antigen that an antibody binds to, helping the immune system recognize and fight infections. This makes epitope prediction really important in vaccine and drug development.&lt;/p&gt;

&lt;p&gt;To bring DevOps into the mix, I wrapped the model in a FastAPI backend, containerized it with Docker, and set up automated deployment to an EC2 instance using GitHub Actions.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;This ML model helps researchers, especially in the low resource environments by accelerating their work, cutting costs, and reducing the time spent running multiple experiments to identify which part of a protein is epitopic.&lt;/p&gt;

&lt;p&gt;By integrating DevOps practices, this model is accessible even to researchers without a strong data science or technical background. All they need to do is input their protein sequences into the cloud-hosted web UI and the prediction is returned instantly.&lt;/p&gt;

&lt;p&gt;This doesn’t just speed up research, it also widens access and promotes inclusion in the global scientific space.&lt;/p&gt;
&lt;h3&gt;
  
  
  Tools Used &amp;amp; Installed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.8 and above&lt;/li&gt;
&lt;li&gt;FastAPI&lt;/li&gt;
&lt;li&gt;Conda&lt;/li&gt;
&lt;li&gt;Git&lt;/li&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;AWS EC2&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Prerequisite Knowledge
&lt;/h3&gt;

&lt;p&gt;To get the most out of this blog, it helps to be familiar with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux command-line basics&lt;/li&gt;
&lt;li&gt;Python programming&lt;/li&gt;
&lt;li&gt;Basic machine learning concepts&lt;/li&gt;
&lt;li&gt;Version control with Git&lt;/li&gt;
&lt;li&gt;Fundamentals of Docker&lt;/li&gt;
&lt;li&gt;Understanding of APIs (e.g FastAPI)&lt;/li&gt;
&lt;li&gt;Basic CI/CD workflows&lt;/li&gt;
&lt;li&gt;Conda (for managing Python environments)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Project Overview
&lt;/h3&gt;

&lt;p&gt;The project is structured to separate concerns clearly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Scripts: For retrieving, preprocessing, and training the model&lt;/li&gt;
&lt;li&gt;Notebook: Used for exploratory data analysis and visualization&lt;/li&gt;
&lt;li&gt;Model Artifacts: Trained model saved for inference&lt;/li&gt;
&lt;li&gt;API Backend: A FastAPI app that serves predictions&lt;/li&gt;
&lt;li&gt;Dockerfile: Containerizes the app for deployment&lt;/li&gt;
&lt;li&gt;CI/CD Workflow: GitHub Actions automates the build and deployment&lt;/li&gt;
&lt;li&gt;Cloud Infrastructure: The app is deployed to an EC2 instance on AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full codebase and directory structure can be found in my &lt;a href="https://github.com/AzeematRaji/epitope-ml-model" rel="noopener noreferrer"&gt;github&lt;/a&gt; repository&lt;/p&gt;
&lt;h3&gt;
  
  
  Build With Me
&lt;/h3&gt;

&lt;p&gt;Let’s walk through how I brought everything together, from training the ML model to deploying it in the cloud.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up the environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I created a Conda environment and installed all necessary dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conda create -n epitope python=3.10
conda activate epitope
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To build with me, clone the repository
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/AzeematRaji/epitope-ml-model.git
cd epitope-ml-model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Training the Machine Learning Model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I retrieved and preprocessed the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python scripts/retrieve.py
python scripts/preprocess.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I trained and saved the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python scripts/train.py
joblib.dump(model, "../models/epitope_model.joblib")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Creating the FastAPI Backend
I built a simple API using FastAPI to serve the trained model. To test it locally:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Test access via API from the Browser:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;localhost:8000&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing a Dockerfile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I containerized the app using Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t epitope-api .
docker run -p 8000:8000 epitope-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This made it easy to run in any environment.&lt;/p&gt;

&lt;p&gt;Test access via API from the Browser:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;localhost:8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that we have confirmed the app works locally, let’s move it to the cloud.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Setting Up the EC2 Instance&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch an EC2 Instance on AWS&lt;/li&gt;
&lt;li&gt;Choose an appropriate instance type (e.g., t3.medium because of data size).&lt;/li&gt;
&lt;li&gt;Configure the security group to allow inbound traffic on port 8000 from 0.0.0.0/0 (for testing only).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Creating the GitHub Actions workflow&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;To automate deployment, I created a GitHub Actions workflow &lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt; that is triggered on every push to the main branch. It:&lt;br&gt;
    &lt;em&gt;- Containerizes the application,&lt;/em&gt;&lt;br&gt;
    &lt;em&gt;- Pushes it to DockerHub,&lt;/em&gt;&lt;br&gt;
    &lt;em&gt;- Deploys it on your EC2 instance.&lt;/em&gt;&lt;/p&gt;
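
&lt;p&gt;The steps above could be sketched in a deploy.yml like this; the image name is a placeholder, and the SSH step uses the community appleboy/ssh-action (the version pin and remote username are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/deploy.yml (sketch)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker build -t ${{ secrets.DOCKER_USERNAME }}/epitope-api:latest .
          docker push ${{ secrets.DOCKER_USERNAME }}/epitope-api:latest
      - name: Deploy on EC2 over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.EC2_PUBLIC_IP }}
          username: ubuntu                 # assumed AMI default user
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            docker pull ${{ secrets.DOCKER_USERNAME }}/epitope-api:latest
            docker stop epitope-api || true
            docker rm epitope-api || true
            docker run -d --name epitope-api -p 8000:8000 ${{ secrets.DOCKER_USERNAME }}/epitope-api:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;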

&lt;ul&gt;
&lt;li&gt;GitHub Secrets Configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the GitHub repository, the following secrets need to be configured to ensure the pipeline works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DOCKER_USERNAME&lt;/strong&gt;: Docker Hub username&lt;br&gt;
&lt;strong&gt;DOCKER_PASSWORD&lt;/strong&gt;: Docker Hub password or access token&lt;br&gt;
&lt;strong&gt;EC2_SSH_KEY&lt;/strong&gt;: The private SSH key (.pem file) used to connect to the EC2 instance&lt;br&gt;
&lt;strong&gt;EC2_PUBLIC_IP&lt;/strong&gt;: The public IP address of the EC2 instance&lt;/p&gt;

&lt;p&gt;This process eliminates the need to manually deploy the application every time there is a code change, streamlining the workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Final testing and accessibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Access the FastAPI via the browser:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;ec2-public-ip&amp;gt;:8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This displays a UI prompt to input a protein sequence and predict whether it is an epitope or non-epitope.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lessons Learnt
&lt;/h3&gt;

&lt;p&gt;After evaluating the model, I observed that it tends to favor the prediction of non-epitope sequences over epitope sequences. This bias is largely due to the highly imbalanced dataset used during training.&lt;/p&gt;

&lt;p&gt;This highlighted two important takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data quality matters: An imbalanced dataset can significantly affect a model's performance, especially in critical applications like drug discovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There's room for improvement: Future iterations could involve experimenting with different featurizers, applying sampling techniques (e.g., SMOTE), or training on more balanced or larger datasets to improve accuracy and fairness in predictions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Raw Reflections &amp;amp; What’s Ahead
&lt;/h3&gt;

&lt;p&gt;This project was just the beginning of my exploration into the intersection of AI and DevOps. While I kept things simple here, there’s so much more to integrate, from using Kubernetes to manage and scale more complex ML workloads, to Terraform for automating infrastructure provisioning in a more structured and repeatable way.&lt;/p&gt;

&lt;p&gt;These tools and ideas will be explored in future projects and blog posts, where I’ll continue building and documenting how DevOps can enhance the accessibility and reliability of AI applications.&lt;/p&gt;

&lt;p&gt;If you found this insightful, feel free to follow my journey on &lt;a href="https://github.com/AzeematRaji" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; where I share more projects like this.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>opensource</category>
      <category>aws</category>
    </item>
    <item>
      <title>Very detailed! Thank you</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Mon, 03 Mar 2025 10:57:49 +0000</pubDate>
      <link>https://dev.to/azeemah/very-detailed-thank-you-2gnh</link>
      <guid>https://dev.to/azeemah/very-detailed-thank-you-2gnh</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/yutee_okon" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F236068%2F57ed3836-856f-4eac-a5c3-5e507858bdfe.jpg" alt="yutee_okon"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/yutee_okon/docker-to-the-rescue-deploying-react-and-fastapi-app-with-monitoring-1i79" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Docker to the Rescue: Deploying React And FastAPI App With Monitoring&lt;/h2&gt;
      &lt;h3&gt;Utibe ・ Nov 26 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#prometheus&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#loki&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>docker</category>
      <category>prometheus</category>
      <category>loki</category>
    </item>
    <item>
      <title>Building Smarter CI/CD: Automating Kubernetes Deployments with GitHub Actions</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Thu, 02 Jan 2025 08:54:02 +0000</pubDate>
      <link>https://dev.to/azeemah/building-smarter-cicd-automating-kubernetes-deployments-with-github-actions-57n1</link>
      <guid>https://dev.to/azeemah/building-smarter-cicd-automating-kubernetes-deployments-with-github-actions-57n1</guid>
      <description>&lt;h4&gt;
  
  
  Overview
&lt;/h4&gt;

&lt;p&gt;This project focuses on implementing automated Continuous Integration (CI) and Continuous Deployment (CD) workflows for both the frontend and backend applications of a web platform using GitHub Actions. The goal is to enhance development efficiency, enforce code quality standards, and streamline deployment to a Kubernetes cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deliverables
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Frontend CI/CD Pipelines:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Automate testing, linting, building, and deployment of the frontend application.&lt;/li&gt;
&lt;li&gt;Ensure only code changes in the frontend application trigger workflows.&lt;/li&gt;
&lt;li&gt;Implement conditional job execution to build and deploy only if linting and tests pass.&lt;/li&gt;
&lt;li&gt;Tag Docker images with the Git SHA for traceability.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Backend CI/CD Pipelines:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Establish a similar automated workflow for backend application development.&lt;/li&gt;
&lt;li&gt;Enforce code quality checks and test validation before building the application.&lt;/li&gt;
&lt;li&gt;Ensure Kubernetes deployments use the latest tagged Docker image based on the commit SHA.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS account and CLI configured.&lt;/li&gt;
&lt;li&gt;Terraform installed&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AzeematRaji/Movie-app-CI-CD-pipeline/tree/main/starter/frontend" rel="noopener noreferrer"&gt;Frontend application code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AzeematRaji/Movie-app-CI-CD-pipeline/tree/main/starter/backend" rel="noopener noreferrer"&gt;Backend application code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Basic knowledge of Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Steps to achieve this
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploying a cluster on EKS service in AWS using terraform&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;terraform/&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;containing a &lt;code&gt;main.tf&lt;/code&gt; that will create:&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "vpc" {
  tags = {
    "Name" = "udacity"
  }
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Create an internet gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
}

# Create a public subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1${var.public_az}"
  map_public_ip_on_launch = true
  tags = {
    Name = "udacity-public"
  }
}

# Create public route table
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public"
  }
}

# Associate the route table
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public.id
}

# Create a private subnet
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.vpc.id
  availability_zone = "us-east-1${var.private_az}"
  cidr_block        = "10.0.2.0/24"
  tags = {
    Name = "udacity-private"
  }
}

# Create private route table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "private"
  }
}

# Associate private route table
resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private.id
}

# Create EKS endpoint for private access
resource "aws_vpc_endpoint" "eks" {
  count               = var.enable_private == true ? 1 : 0 # only enable when private
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.eks"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

# Create EC2 endpoint for private access
resource "aws_vpc_endpoint" "ec2" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ec2"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr-dkr-endpoint" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ecr.dkr"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr-api-endpoint" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ecr.api"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EKS resources&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an EKS cluster
resource "aws_eks_cluster" "main" {
  name     = "cluster"
  version  = var.k8s_version
  role_arn = aws_iam_role.eks_cluster.arn
  vpc_config {
    subnet_ids              = [aws_subnet.private_subnet.id, aws_subnet.public_subnet.id]
    endpoint_public_access  = var.enable_private == true ? false : true
    endpoint_private_access = true
  }
  depends_on = [aws_iam_role_policy_attachment.eks_cluster, aws_iam_role_policy_attachment.eks_service]
}


# Create an IAM role for the EKS cluster
resource "aws_iam_role" "eks_cluster" {
  name = "eks_cluster_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      }
    ]
  })
}

# Attach policies to the EKS cluster IAM role
resource "aws_iam_role_policy_attachment" "eks_cluster" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

resource "aws_iam_role_policy_attachment" "eks_service" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.eks_cluster.name
}



# EKS Node Group

# Track latest release for the given k8s version
data "aws_ssm_parameter" "eks_ami_release_version" {
  name = "/aws/service/eks/optimized-ami/${aws_eks_cluster.main.version}/amazon-linux-2/recommended/release_version"
}

resource "aws_eks_node_group" "main" {
  node_group_name = "udacity"
  cluster_name    = aws_eks_cluster.main.name
  version         = aws_eks_cluster.main.version
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = [var.enable_private == true ? aws_subnet.private_subnet.id : aws_subnet.public_subnet.id]
  release_version = nonsensitive(data.aws_ssm_parameter.eks_ami_release_version.value)
  instance_types  = ["t3.small"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }


  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.node_group_policy,
    aws_iam_role_policy_attachment.cni_policy,
    aws_iam_role_policy_attachment.ecr_policy,
  ]

  lifecycle {
    ignore_changes = [scaling_config.0.desired_size]
  }
}

// IAM Configuration
resource "aws_iam_role" "node_group" {
  name               = "udacity-node-group"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "node_group_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "cni_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ecr_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ECR repositories&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ECR Repositories

resource "aws_ecr_repository" "frontend" {
  name                 = "frontend"
  image_tag_mutability = "MUTABLE"
  force_delete         = true

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_ecr_repository" "backend" {
  name                 = "backend"
  image_tag_mutability = "MUTABLE"
  force_delete         = true

  image_scanning_configuration {
    scan_on_push = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitHub Actions user&lt;/strong&gt; for interacting with AWS from the pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Github Action role

resource "aws_iam_user" "github_action_user" {
  name = "github-action-user"
}

resource "aws_iam_user_policy" "github_action_user_permission" {
  user   = aws_iam_user.github_action_user.name
  policy = data.aws_iam_policy_document.github_policy.json
}

data "aws_iam_policy_document" "github_policy" {
  statement {
    effect    = "Allow"
    actions   = ["ecr:*", "eks:*", "ec2:*", "iam:GetUser"]
    resources = ["*"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "frontend_ecr" {
  value = aws_ecr_repository.frontend.repository_url
}
output "backend_ecr" {
  value = aws_ecr_repository.backend.repository_url
}

output "cluster_name" {
  value = aws_eks_cluster.main.name
}

output "cluster_version" {
  value = aws_eks_cluster.main.version
}

output "github_action_user_arn" {
  value = aws_iam_user.github_action_user.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "k8s_version" {
  default = "1.25"
}

variable "enable_private" {
  default = false
}

variable "public_az" {
  type        = string
  description = "Change this to a letter a-f only if you encounter an error during setup"
  default     = "a"
}

variable "private_az" {
  type        = string
  description = "Change this to a letter a-f only if you encounter an error during setup"
  default     = "b"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;providers.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.55.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;run &lt;code&gt;terraform init&lt;/code&gt; to initialize the working directory &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt; to validate before applying&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform apply --auto-approve&lt;/code&gt; to apply without prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will provision our infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;init.sh&lt;/code&gt;, which adds the github-action-user IAM user ARN to the cluster's aws-auth configuration, allowing that user to execute kubectl commands against the cluster.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e -o pipefail

echo "Fetching IAM github-action-user ARN"
userarn=$(aws iam get-user --user-name github-action-user | jq -r .User.Arn)

# Download tool for manipulating aws-auth
echo "Downloading tool..."
curl -X GET -L https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.2/aws-iam-authenticator_0.6.2_linux_amd64 -o aws-iam-authenticator
chmod +x aws-iam-authenticator

echo "Updating permissions"
./aws-iam-authenticator add user --userarn="${userarn}" --username=github-action-role --groups=system:masters --kubeconfig="$HOME"/.kube/config --prompt=false

echo "Cleaning up"
rm aws-iam-authenticator
echo "Done!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script downloads the aws-iam-authenticator tool, adds the IAM user ARN to the cluster's authentication configuration, removes the tool, and prints a Done status.&lt;/p&gt;
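&lt;p&gt;For reference, the entry the tool writes into the &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap looks roughly like this (the account ID is a placeholder):&lt;/p&gt;

```yaml
# Excerpt of the aws-auth ConfigMap (kube-system namespace) after running init.sh
mapUsers: |
  - userarn: arn:aws:iam::123456789012:user/github-action-user
    username: github-action-role
    groups:
      - system:masters
```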

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Generate AWS access keys for Github Action user&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to AWS console, navigate to the IAM service &lt;/li&gt;
&lt;li&gt;Under users, the &lt;code&gt;github-action-user&lt;/code&gt; user account should be there&lt;/li&gt;
&lt;li&gt;Click the account and go to &lt;code&gt;Security Credentials&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Under &lt;code&gt;Access keys&lt;/code&gt; select &lt;code&gt;Create access key&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Application running outside AWS&lt;/code&gt; and click &lt;code&gt;Next&lt;/code&gt;, then &lt;code&gt;Create access key&lt;/code&gt; to finish creating the keys&lt;/li&gt;
&lt;li&gt;Finally, make sure to copy these keys for storing in Github Secrets&lt;/li&gt;
&lt;li&gt;In the GitHub repository settings, open the &lt;code&gt;Secrets&lt;/code&gt; tab, then &lt;code&gt;Actions&lt;/code&gt;, and paste these keys into the repo secrets&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Backend CI/CD pipeline&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Backend CD

on:
  push:
    branches:
      - main
    paths:
      - 'starter/backend/**'
  workflow_dispatch:

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pipenv dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pipenv
          key: ${{ runner.os }}-pipenv-${{ hashFiles('starter/backend/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-

      - name: Install dependencies
        run: pip install pipenv &amp;amp;&amp;amp; pipenv install --dev
        working-directory: starter/backend

      - name: Run lint
        run: pipenv run lint
        working-directory: starter/backend

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          cache: 'pipenv'

      - name: Install pipenv
        run: curl https://raw.githubusercontent.com/pypa/pipenv/master/get-pipenv.py | python
      - run: pipenv install
        working-directory: starter/backend

      - name: Run tests
        run: pipenv run test
        working-directory: starter/backend

  build-and-push:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: 
      - lint
      - test
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          cache: 'pipenv'

      - name: Install pipenv
        run: curl https://raw.githubusercontent.com/pypa/pipenv/master/get-pipenv.py | python
      - run: pipenv install
        working-directory: starter/backend

      - name: Build Docker image
        run: |
          docker build \
            --tag mp-backend:latest \
            .
        working-directory: starter/backend

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Tag and Push Docker Image
        run: |
          docker tag mp-backend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}

  deploy:
    name: Deploy to EKS
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'

      - name: Update kubeconfig
        run: |
          aws eks --region us-east-1 update-kubeconfig --name cluster

      - name: Set image in Kubernetes manifests using Kustomize
        run: |
          kustomize edit set image backend=${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}
        working-directory: starter/backend/k8s

      - name: Deploy to EKS using Kustomize
        run: |
          kustomize build | kubectl apply -f -
        working-directory: starter/backend/k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
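&lt;p&gt;The &lt;code&gt;Tag and Push Docker Image&lt;/code&gt; step composes the image URI from the account ID, region, repository, and commit SHA. A quick sketch of the pattern, with placeholder values standing in for the real secrets and workflow context:&lt;/p&gt;

```shell
# Placeholder values standing in for the GitHub secrets/context
AWS_ACCOUNT_ID=123456789012   # secrets.AWS_ACCOUNT_ID
AWS_REGION=us-east-1
REPO=backend
GITHUB_SHA=3f9d2c1            # github.sha (the full commit SHA in the real workflow)

# This is the exact URI shape used by docker tag / docker push above
IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO}:${GITHUB_SHA}"
echo "$IMAGE"   # 123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:3f9d2c1
```

Because the tag is the commit SHA, any image running in the cluster can be traced back to the exact commit that produced it.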



&lt;ul&gt;
&lt;li&gt;This runs lint and test jobs in parallel.&lt;/li&gt;
&lt;li&gt;lint checks out the code, sets up Python, installs dependencies using pipenv, and then runs a linting tool to ensure the code adheres to style guidelines&lt;/li&gt;
&lt;li&gt;test runs the test suite to ensure the application functions correctly&lt;/li&gt;
&lt;li&gt;Proceeds to build docker image and push to the backend repo created in ECR.&lt;/li&gt;
&lt;li&gt;Deploy the app code to the cluster using the manifest files in &lt;code&gt;starter/backend/k8s/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Kustomize manages and overlays YAML files, enabling reusable and environment-specific configurations.&lt;/li&gt;
&lt;/ul&gt;
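&lt;p&gt;For context, &lt;code&gt;kustomize edit set image&lt;/code&gt; works by rewriting the &lt;code&gt;images&lt;/code&gt; block of the &lt;code&gt;kustomization.yaml&lt;/code&gt; in the k8s directory. A hypothetical sketch of that file — the resource names and values are illustrative, not the project's actual manifests:&lt;/p&gt;

```yaml
# k8s/kustomization.yaml — hypothetical sketch
resources:
  - deployment.yaml
  - service.yaml

# `kustomize edit set image backend=<ecr-url>:<sha>` updates this block,
# so `kustomize build` emits manifests pointing at the SHA-tagged image
images:
  - name: backend
    newName: 123456789012.dkr.ecr.us-east-1.amazonaws.com/backend
    newTag: 3f9d2c1
```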

&lt;p&gt;Before pushing this code to the GitHub repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add &lt;code&gt;AWS_ACCOUNT_ID&lt;/code&gt; to the GitHub secrets &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After pushing the code, the workflow should be triggered and run successfully;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run &lt;code&gt;kubectl get svc&lt;/code&gt; to get the URL of the backend service &lt;/li&gt;
&lt;li&gt;&lt;p&gt;add it as &lt;code&gt;BACKEND_URL&lt;/code&gt; to the GitHub repo secrets; the frontend app needs it to connect to the backend API&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Frontend CI/CD pipeline&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Frontend CD

on:
  push:
    branches:
      - main
    paths:
      - 'starter/frontend/**'
  workflow_dispatch:

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Run lint
        run: npm run lint
        working-directory: starter/frontend

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Run tests
        run: CI=true npm test
        working-directory: starter/frontend

  build-and-push:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: 
      - lint
      - test
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Build Docker image
        run: |
          docker build \
            --build-arg REACT_APP_MOVIE_API_URL=${{ secrets.BACKEND_URL }} \
            --tag mp-frontend:latest \
            .
        working-directory: starter/frontend

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Tag and Push Docker Image
        run: |
          docker tag mp-frontend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}

  deploy:
    name: Deploy to EKS
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'

      - name: Update kubeconfig
        run: |
          aws eks --region us-east-1 update-kubeconfig --name cluster

      - name: Set image in Kubernetes manifests using Kustomize
        run: |
          kustomize edit set image frontend=${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}
        working-directory: starter/frontend/k8s

      - name: Deploy to EKS using Kustomize
        run: |
          kustomize build | kubectl apply -f -
        working-directory: starter/frontend/k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This mirrors what the backend CI/CD pipeline does.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It references the &lt;code&gt;BACKEND_URL&lt;/code&gt; secret to access the backend API endpoint.&lt;/li&gt;
&lt;li&gt;This workflow should be triggered on push&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;kubectl get svc&lt;/code&gt;, open the frontend service URL in a browser, and the application should be live and displaying the data correctly&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best practices
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Created a &lt;code&gt;github-action-user&lt;/code&gt; for enhanced security:&lt;br&gt;
This dedicated user is granted only the permissions the workflow needs, reducing potential risks (the wildcard actions in the policy above could be narrowed further for production).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Utilized &lt;code&gt;terraform plan&lt;/code&gt; for validation:&lt;br&gt;
Ensures configurations are error-free before applying changes, allowing for easy rollback in case of failures.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;This project demonstrates a modern DevOps approach to managing web application development, emphasizing automation, efficiency, and scalability.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster release cycles with automated workflows.&lt;/li&gt;
&lt;li&gt;Enhanced code quality through linting and test validation.&lt;/li&gt;
&lt;li&gt;Improved traceability and reliability with SHA-tagged Docker images.&lt;/li&gt;
&lt;li&gt;Simplified deployment process to Kubernetes, reducing manual errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/AzeematRaji" rel="noopener noreferrer"&gt;Checkout my Github&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Deploying APIs as Microservice with Kubernetes, Docker, and AWS</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Wed, 01 Jan 2025 01:14:07 +0000</pubDate>
      <link>https://dev.to/azeemah/deploying-apis-as-microservice-with-kubernetes-docker-and-aws-1a79</link>
      <guid>https://dev.to/azeemah/deploying-apis-as-microservice-with-kubernetes-docker-and-aws-1a79</guid>
      <description>&lt;h4&gt;
  
  
  Overview
&lt;/h4&gt;

&lt;p&gt;This project demonstrates how to deploy a set of APIs as a microservice using Kubernetes, Docker, and AWS. The service is designed to manage application operations, leveraging PostgreSQL for efficient data storage and Flask to create the web API. Here’s a breakdown of each key component:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes: Used for orchestrating containerized microservices. This allows us to deploy, scale, and manage the APIs in production.&lt;/li&gt;
&lt;li&gt;Docker: Containers are used to package the microservices (Flask app and PostgreSQL) along with their dependencies, making the deployment process seamless.&lt;/li&gt;
&lt;li&gt;AWS CodeBuild: Automates the build process of Docker images, pushing them to Elastic Container Registry (ECR).&lt;/li&gt;
&lt;li&gt;PostgreSQL: Acts as the database for storing all user data, including usage statistics for the application.&lt;/li&gt;
&lt;li&gt;CloudWatch: Monitors application performance and logs, ensuring that any issues can be diagnosed quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Prerequisites:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS account and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;configure AWS CLI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes, &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;install kubectl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AzeematRaji/api-microservice-kubernetes-ci-cd/tree/main/analytics" rel="noopener noreferrer"&gt;Application code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AzeematRaji/api-microservice-kubernetes-ci-cd/tree/main/db" rel="noopener noreferrer"&gt;Database configuration files&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step by step instructions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ensure AWS CLI is configured&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;aws sts get-caller-identity&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The necessary IAM permissions are required to create a cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://eksctl.io/installation/" rel="noopener noreferrer"&gt;Install eksctl&lt;/a&gt; and use it to create an EKS cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;eksctl create cluster --name my-cluster --region us-east-1 --nodegroup-name my-nodes --node-type t3.small --nodes 1 --nodes-min 1 --nodes-max 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--name my-cluster&lt;/code&gt;: Cluster name.&lt;br&gt;
&lt;code&gt;--region us-east-1&lt;/code&gt;: Region for the cluster.&lt;br&gt;
&lt;code&gt;--node-type t3.small&lt;/code&gt;: EC2 instance type for worker nodes.&lt;br&gt;
&lt;code&gt;--nodes 1, --nodes-min 1, --nodes-max 2&lt;/code&gt;: Auto-scaling configuration for the number of worker nodes (1 to 2).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the kubeconfig &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;aws eks --region us-east-1 update-kubeconfig --name my-cluster&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This allows access to the cluster locally using kubectl.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify connection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl config current-context&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we need to configure the database for our application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a file &lt;code&gt;pv.yaml&lt;/code&gt;  to define the Persistent Volume (PV), which will be used to store data.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  hostPath:
    path: "/mnt/data"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;storageClassName: gp2&lt;/code&gt;: Must match the &lt;code&gt;storageClassName&lt;/code&gt; in the PVC so the claim can bind to this volume. Note that because this PV uses &lt;code&gt;hostPath&lt;/code&gt;, the data actually lives on the worker node's local disk; to use a real AWS EBS (Elastic Block Store) volume, you would let the &lt;code&gt;gp2&lt;/code&gt; storage class dynamically provision one instead.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create &lt;code&gt;pvc.yaml&lt;/code&gt; to define the Persistent Volume Claim (PVC), which is used to bind the persistent volume to the application.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-pvc
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It can be referenced in the database's deployment to mount the storage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a file &lt;code&gt;postgresql-deployment.yaml&lt;/code&gt; for Postgres Deployment
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: postgres:latest
        env:
        - name: POSTGRES_DB
          value: mydatabase
        - name: POSTGRES_USER
          value: myuser
        - name: POSTGRES_PASSWORD
          value: mypassword
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgresql-storage
      volumes:
      - name: postgresql-storage
        persistentVolumeClaim:     #PVC referenced
          claimName: postgresql-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
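
&lt;p&gt;A quick note: hard-coding &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt; in the manifest is fine for a demo, but the value can also be pulled from a Kubernetes Secret, the same pattern the application Deployment uses later in this post. A minimal sketch of what the container's &lt;code&gt;env&lt;/code&gt; entry could look like (it assumes a Secret named &lt;code&gt;mysecret&lt;/code&gt; with a &lt;code&gt;password&lt;/code&gt; key already exists in the namespace):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret     # assumed Secret name, must exist in the namespace
              key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;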


&lt;ul&gt;
&lt;li&gt;Apply YAML configurations in the following order
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f postgresql-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Verify Database deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To open bash into the pod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl exec -it &amp;lt;postgresql-name&amp;gt; -- bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once inside the pod, you can run &lt;code&gt;psql -U myuser -d mydatabase&lt;/code&gt; to have access to the &lt;code&gt;mydatabase&lt;/code&gt; database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a YAML file, &lt;code&gt;postgresql-service.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Service needs to be created so the Deployment is exposed to other pods in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: postgresql-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This targets pods on port 5432, PostgreSQL's default port. Other pods in the cluster can now reach the database at &lt;code&gt;postgresql-service:5432&lt;/code&gt; via cluster DNS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;kubectl apply -f postgresql-service.yaml&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify the service &lt;code&gt;kubectl get svc&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;psql is a lightweight PostgreSQL client that lets you connect to your Postgres server; it must be installed locally&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt update
apt install postgresql postgresql-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the seed files in &lt;a href="https://github.com/AzeematRaji/api-microservice-kubernetes-ci-cd/tree/main/db" rel="noopener noreferrer"&gt;db/&lt;/a&gt; to create the tables and populate them with data. The commands below connect to &lt;code&gt;127.0.0.1:5433&lt;/code&gt;, so first forward the in-cluster service to that local port, e.g. &lt;code&gt;kubectl port-forward svc/postgresql-service 5433:5432&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export DB_PASSWORD=mypassword
PGPASSWORD="$DB_PASSWORD" psql --host 127.0.0.1 -U myuser -d mydatabase -p 5433 &amp;lt; &amp;lt;FILE_NAME.sql&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the database is populated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;PGPASSWORD="$DB_PASSWORD" psql --host 127.0.0.1 -U myuser -d mydatabase -p 5433&lt;/code&gt; to open psql terminal&lt;/p&gt;

&lt;p&gt;Run the query &lt;code&gt;select * from users;&lt;/code&gt; to ensure the tables are not empty.&lt;/p&gt;

&lt;h6&gt;
  
  
  Setting up Continuous Integration with CodeBuild
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an Amazon ECR repository in the AWS console by navigating to the ECR service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;code&gt;buildspec.yml&lt;/code&gt; file in the root directory of the repository&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging into ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Starting build at `date`
      - echo Building the Docker image...          
      - docker build -t $IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER -f analytics/Dockerfile .
      - docker tag $IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER     
  post_build:
    commands:
      - echo Completed build at `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_BUILD_NUMBER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This template is what CodeBuild uses to build the Docker image and push it to the ECR repository. The placeholders in this file need to be set as environment variables in the CodeBuild project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create an Amazon CodeBuild project;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;navigate to the CodeBuild service and click Create project&lt;/li&gt;
&lt;li&gt;enter the project name&lt;/li&gt;
&lt;li&gt;select GitHub as the source code provider&lt;/li&gt;
&lt;li&gt;authorize AWS to access the project's GitHub repository and to run on every push to the repository&lt;/li&gt;
&lt;li&gt;configure the environment&lt;/li&gt;
&lt;li&gt;check the Privileged box so Docker builds can run inside CodeBuild&lt;/li&gt;
&lt;li&gt;select "New service role" if no existing service role is available, but &lt;em&gt;note:&lt;/em&gt; ECR permissions must be added to it&lt;/li&gt;
&lt;li&gt;set environment variables for the placeholders in the &lt;code&gt;buildspec.yml&lt;/code&gt;, such as &lt;code&gt;$AWS_DEFAULT_REGION&lt;/code&gt;, &lt;code&gt;$AWS_ACCOUNT_ID&lt;/code&gt;, and &lt;code&gt;$IMAGE_REPO_NAME&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;specify the buildspec file (&lt;code&gt;buildspec.yml&lt;/code&gt;; ensure it is in the root of your source repository)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Modify the newly created CodeBuild service role by adding an inline policy&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
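
&lt;p&gt;The &lt;code&gt;ecr:*&lt;/code&gt; on all resources policy above is the quickest way to unblock the build, but it is broader than necessary. A tighter sketch scoped to a single repository could look like the following (the repository ARN is a placeholder you would substitute with your own account ID and repository name; &lt;code&gt;ecr:GetAuthorizationToken&lt;/code&gt; must remain on all resources):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage"
            ],
            "Resource": "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPO_NAME"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;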



&lt;ul&gt;
&lt;li&gt;Push the &lt;code&gt;buildspec.yml&lt;/code&gt; to the GitHub repository; CodeBuild should be triggered and the build should complete successfully&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhbcxhb1bjsixgcwx0db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhbcxhb1bjsixgcwx0db.png" alt="CodeBuild" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the image in ECR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F438qcpic0an6aq89vq2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F438qcpic0an6aq89vq2l.png" alt="ECR" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy the Application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a &lt;code&gt;configmap.yaml&lt;/code&gt; for the external configuration of the app (in this case the database values), together with a Secret that stores sensitive environment variables such as &lt;code&gt;DB_PASSWORD&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-service
data:
  DB_NAME: "mydatabase"
  DB_USER: "myuser"
  DB_HOST: "10.100.30.162"
  DB_PORT: "5432"
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: "bXlwYXNzd29yZA=="
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;kubectl apply -f configmap.yaml&lt;/code&gt;. Note that &lt;code&gt;DB_HOST&lt;/code&gt; is hardcoded to the service's ClusterIP here; using the service DNS name &lt;code&gt;postgresql-service&lt;/code&gt; is more robust, since the ClusterIP changes if the Service is recreated.&lt;/p&gt;
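
&lt;p&gt;The &lt;code&gt;password&lt;/code&gt; value in the Secret must be base64-encoded; &lt;code&gt;bXlwYXNzd29yZA==&lt;/code&gt; above is simply the encoding of &lt;code&gt;mypassword&lt;/code&gt;. You can produce and verify such values from the shell:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Encode a value for use in a Secret (printf avoids a trailing newline)
printf '%s' mypassword | base64
# bXlwYXNzd29yZA==

# Decode it back to confirm
printf '%s' bXlwYXNzd29yZA== | base64 -d
# mypassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;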

&lt;ul&gt;
&lt;li&gt;Create &lt;code&gt;coworking.yaml&lt;/code&gt; for the Service and Deployment of the application. The Docker image is the URI of the image we pushed to ECR, and the ConfigMap and Secret are referenced as well.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: coworking
spec:
  type: LoadBalancer
  selector:
    service: coworking
  ports:
  - name: "5153"
    protocol: TCP
    port: 5153
    targetPort: 5153
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coworking
  labels:
    name: coworking
spec:
  replicas: 1
  selector:
    matchLabels:
      service: coworking
  template:
    metadata:
      labels:
        service: coworking
    spec:
      containers:
      - name: coworking
        image: 739244444072.dkr.ecr.us-east-1.amazonaws.com/api-service-img-redo:16
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /health_check
            port: 5153
          initialDelaySeconds: 5
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: "/readiness_check"
            port: 5153
          initialDelaySeconds: 5
          timeoutSeconds: 5
        envFrom:
        - configMapRef:
            name: postgresql-service
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      restartPolicy: Always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
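
&lt;p&gt;One production-readiness gap worth noting: the container defines no resource requests or limits, so a misbehaving pod could starve its neighbours on the node. A hedged sketch of what could be added under the container spec (the numbers are illustrative, not tuned for this app):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        resources:
          requests:
            cpu: 100m        # illustrative values, tune for your workload
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;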



&lt;p&gt;Run &lt;code&gt;kubectl apply -f coworking.yaml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnrpoon8lm5ialwppu79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnrpoon8lm5ialwppu79.png" alt="pods description" width="800" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get svc&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dniyrc59xshemvo3v0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dniyrc59xshemvo3v0n.png" alt="svc" width="800" height="121"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;kubectl describe svc &amp;lt;DATABASE_SERVICE_NAME&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7l84hvaykf8qsma1b9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7l84hvaykf8qsma1b9u.png" alt="svc-description" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe deployment &amp;lt;SERVICE_NAME&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxncaeyyqmc5eme3wuc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxncaeyyqmc5eme3wuc6.png" alt="deployment description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application is running successfully.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Monitoring Container Insights logs for the application with CloudWatch&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;navigate to the CloudWatch service in the console&lt;/li&gt;
&lt;li&gt;go to the Logs menu, then Log groups; the cluster appears there&lt;/li&gt;
&lt;li&gt;now go to the terminal to update the EKS node group IAM role
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy \
--role-name my-worker-node-role \
--policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the name of the node group IAM role, go to your cluster, then Compute, and click on the active node group to find its IAM role; use that name in place of &lt;code&gt;my-worker-node-role&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run another command to install the CloudWatch Observability add-on for the EKS cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;aws eks create-addon --addon-name amazon-cloudwatch-observability --cluster-name my-cluster&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;check the log groups in the console again; &lt;code&gt;aws/containerinsights/my-cluster-name/application&lt;/code&gt; should now be there. Click on one of the log streams to see the logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i21ul1rpxd2ds66qqv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i21ul1rpxd2ds66qqv4.png" alt="cloudwatch" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These logs show the health of the application.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes, Docker, and AWS enable scalable and reliable microservice deployments.&lt;/li&gt;
&lt;li&gt;AWS CodeBuild automates building Docker images and pushing them to ECR on every commit.&lt;/li&gt;
&lt;li&gt;PostgreSQL ensures effective data storage, while CloudWatch provides effective monitoring.&lt;/li&gt;
&lt;li&gt;This setup creates a maintainable system ready for production.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Deploying an Application Using CloudFormation with CDN Integration</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Mon, 30 Dec 2024 16:44:27 +0000</pubDate>
      <link>https://dev.to/azeemah/deploying-an-application-using-cloudformation-with-cdn-integration-36pe</link>
      <guid>https://dev.to/azeemah/deploying-an-application-using-cloudformation-with-cdn-integration-36pe</guid>
      <description>&lt;h4&gt;
  
  
  Introduction:
&lt;/h4&gt;

&lt;p&gt;This project automates the deployment of the application on AWS using Infrastructure as Code (IaC) with AWS CloudFormation. The infrastructure is divided into separate stacks: one for the network setup and one for the application deployment. The goal is to provision, configure, and tear down the necessary infrastructure with ease. It also integrates a Content Delivery Network (CDN) to enhance application performance and ensure global availability.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5041jn69mophttjua8p.png" rel="noopener noreferrer"&gt;Infrastructure diagram&lt;/a&gt;
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Project Overview
&lt;/h4&gt;

&lt;p&gt;This project uses CloudFormation to provision the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with public and private subnets&lt;/li&gt;
&lt;li&gt;Load Balancer to handle HTTP/HTTPS traffic&lt;/li&gt;
&lt;li&gt;EC2 instances running the application&lt;/li&gt;
&lt;li&gt;S3 for static content storage&lt;/li&gt;
&lt;li&gt;CloudFront for content delivery&lt;/li&gt;
&lt;li&gt;IAM roles and policies for security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deployment is divided into two independent stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Stack: Managed by the network team, this stack provisions VPCs, subnets, and security groups.&lt;/li&gt;
&lt;li&gt;Application Stack: Responsible for provisioning the application components, including EC2 instances, load balancers, S3, and CloudFront.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Prerequisites:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;AWSCLI &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Install awscli&lt;/a&gt; and configured&lt;/li&gt;
&lt;li&gt;Basic knowledge of CloudFormation and its template structure.&lt;/li&gt;
&lt;li&gt;CloudFormation templates for both the Network and Application stacks.&lt;/li&gt;
&lt;li&gt;Scripts to create, update, deploy and delete the stacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Spin up instructions
&lt;/h4&gt;

&lt;p&gt;To spin up your infrastructure using the provided scripts, follow the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;script to create stack:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Usage: ./create.sh &amp;lt;stack-name&amp;gt; &amp;lt;template-file&amp;gt; &amp;lt;parameters-file&amp;gt;

aws cloudformation create-stack --stack-name $1 \
    --template-body file://$2   \
    --parameters file://$3  \
    --capabilities "CAPABILITY_NAMED_IAM"  \
    --region=us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the companion scripts to &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/delete.sh" rel="noopener noreferrer"&gt;delete&lt;/a&gt;, &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/deploy.sh" rel="noopener noreferrer"&gt;deploy&lt;/a&gt;, or &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/update.sh" rel="noopener noreferrer"&gt;update&lt;/a&gt; a stack; the purpose of the scripts is to avoid retyping the multi-line commands every time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make the scripts executable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;chmod +x &amp;lt;scripts.sh&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the create.sh script to create a new CloudFormation stack. Create the network stack first, since the application stack depends on its resources (such as the VPC and subnets).&lt;/li&gt;
&lt;li&gt;The script requires three parameters (StackName, &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/network.yml" rel="noopener noreferrer"&gt;TemplateFile&lt;/a&gt;, and &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/network-parameters.json" rel="noopener noreferrer"&gt;ParameterFile&lt;/a&gt;), passed in the order shown in the script's usage comment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;./create.sh StackName TemplateFile ParameterFile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this case: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;./create.sh networkStack network.yml network-parameters.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This script creates a new stack with the provided parameters.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update an existing stack: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;./update.sh StackName TemplateFile ParameterFile&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;This script will update the stack with new configurations based on the changes in the parameter or template file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy a stack:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;./deploy.sh StackName TemplateFile ParameterFile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The deploy.sh script checks whether the stack already exists. If it does, the script updates the stack; if it doesn't, it creates a new one. This is very useful for automation, such as in a CI/CD pipeline.&lt;/p&gt;
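
&lt;p&gt;The create-or-update logic described above can be sketched as follows. This is a minimal illustration that assumes the same argument order and region as create.sh; the repository's actual deploy.sh may differ:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Sketch: create the stack if it does not exist, otherwise update it
# Usage: ./deploy.sh &amp;lt;stack-name&amp;gt; &amp;lt;template-file&amp;gt; &amp;lt;parameters-file&amp;gt;

if aws cloudformation describe-stacks --stack-name $1 --region us-east-1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1; then
    aws cloudformation update-stack --stack-name $1 \
        --template-body file://$2   \
        --parameters file://$3  \
        --capabilities "CAPABILITY_NAMED_IAM"  \
        --region=us-east-1
else
    aws cloudformation create-stack --stack-name $1 \
        --template-body file://$2   \
        --parameters file://$3  \
        --capabilities "CAPABILITY_NAMED_IAM"  \
        --region=us-east-1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;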

&lt;ul&gt;
&lt;li&gt;Create the stack for the application as well, using its &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/udagram.yml" rel="noopener noreferrer"&gt;TemplateFile&lt;/a&gt; and &lt;a href="https://github.com/AzeematRaji/HighAvail-Website-using-cloudformation/blob/main/starter/udagram-parameters.json" rel="noopener noreferrer"&gt;ParameterFile&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;./create.sh udagramStack udagram.yml udagram-parameters.json&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm your application is live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudFrontURL: &lt;a href="https://d1gjuuten5htu8.cloudfront.net" rel="noopener noreferrer"&gt;https://d1gjuuten5htu8.cloudfront.net&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;WebAppLBDNS: &lt;a href="http://udagra-WebAp-mMkpfHQBWeXO-1923467063.us-east-1.elb.amazonaws.com" rel="noopener noreferrer"&gt;http://udagra-WebAp-mMkpfHQBWeXO-1923467063.us-east-1.elb.amazonaws.com&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To tear down the deployed resources, follow these instructions:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;./delete.sh StackName&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Use the delete.sh script to delete a stack. The script only requires one parameter: the name of the stack you wish to delete.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudFormation streamlines resource creation and reduces manual errors.&lt;/li&gt;
&lt;li&gt;It improves scalability and makes it easy to replicate the setup in different environments.&lt;/li&gt;
&lt;li&gt;CDN integration improves performance and ensures low latency worldwide.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloudcomputing</category>
      <category>networking</category>
    </item>
    <item>
      <title>How to Host a Static Website on AWS S3 with CloudFront CDN</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Mon, 30 Dec 2024 14:30:22 +0000</pubDate>
      <link>https://dev.to/azeemah/how-to-host-a-static-website-on-aws-s3-with-cloudfront-cdn-29d0</link>
      <guid>https://dev.to/azeemah/how-to-host-a-static-website-on-aws-s3-with-cloudfront-cdn-29d0</guid>
      <description>&lt;h4&gt;
  
  
  Objective:
&lt;/h4&gt;

&lt;p&gt;To host a static website on AWS S3 and optimize its performance and security using CloudFront as a Content Delivery Network (CDN), ensuring fast and secure global content delivery.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisite:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Active AWS account for setting up services.&lt;/li&gt;
&lt;li&gt;Pre-built static website content (e.g., HTML, CSS, JavaScript files).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 1: Create an S3 Bucket
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the AWS Management Console and navigate to the S3 Service.&lt;/li&gt;
&lt;li&gt;Create a bucket with a globally unique name.&lt;/li&gt;
&lt;li&gt;Disable "Block all public access" so the website content can be made publicly accessible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5c7mmwyf2d33kn81dpct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5c7mmwyf2d33kn81dpct.png" alt="s3-bucket" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Upload files and folder to the bucket
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Click upload on the newly created bucket&lt;/li&gt;
&lt;li&gt;Upload the .html files individually, then upload the remaining assets (CSS, JavaScript, images) as folders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcan2rpajczn2i6v9w9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcan2rpajczn2i6v9w9h.png" alt="upload files &amp;amp; folders" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Configure the bucket to allow static web hosting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open the Properties tab of the bucket, scroll to Static website hosting, and click Edit&lt;/li&gt;
&lt;li&gt;Choose "Host a static website" as the hosting option.&lt;/li&gt;
&lt;li&gt;Specify the index and error documents&lt;/li&gt;
&lt;li&gt;Enter the name of your main file (e.g., index.html) in the Index Document field.&lt;/li&gt;
&lt;li&gt;Optionally, specify an error file (e.g., 404.html) for handling invalid URLs.&lt;/li&gt;
&lt;li&gt;After saving, copy the Website Endpoint; this URL will be the address of your website.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1r8yynlvyk0w474phn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1r8yynlvyk0w474phn2.png" alt="configure bucket" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Create a bucket policy that makes the bucket contents publicly accessible
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Click the Permissions tab in the bucket&lt;/li&gt;
&lt;li&gt;Edit the bucket policy with the following, replacing &lt;code&gt;your-bucket-name&lt;/code&gt; with your bucket's name
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
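&lt;p&gt;If you prefer to script this step, the policy document can be generated programmatically; a minimal sketch (the bucket name &lt;code&gt;my-bucket&lt;/code&gt; is a placeholder):&lt;/p&gt;

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Build the S3 bucket policy that allows public read access to all objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(public_read_policy("my-bucket"))
```

&lt;p&gt;The resulting JSON can be pasted into the console policy editor or applied with &lt;code&gt;aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json&lt;/code&gt;.&lt;/p&gt;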



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztqemxyzakjx48f6xr50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztqemxyzakjx48f6xr50.png" alt="bucket policy" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save and open the website endpoint URL from Step 3 in a browser to ensure the website is accessible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3sqp3r3y9i16qeez6kc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3sqp3r3y9i16qeez6kc.png" alt="website" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Set Up a CloudFront Distribution
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the CloudFront service and create a distribution.&lt;/li&gt;
&lt;li&gt;Configure the settings: choose the S3 website endpoint as the origin domain and enter index.html as the default root object.&lt;/li&gt;
&lt;/ul&gt;
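&lt;p&gt;The same distribution can be created from the AWS CLI; a minimal sketch (the origin domain is a placeholder and should be replaced with your bucket's endpoint):&lt;/p&gt;

```shell
# Create a distribution fronting the bucket (origin domain is a placeholder)
aws cloudfront create-distribution \
    --origin-domain-name my-bucket.s3.amazonaws.com \
    --default-root-object index.html
```

&lt;p&gt;Note that a distribution takes several minutes to deploy before its domain name starts serving content.&lt;/p&gt;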

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8qj36ckfc88hf6ievfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8qj36ckfc88hf6ievfi.png" alt="cloudfront" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Copy the distribution's domain name and open it in a browser to confirm access to your website
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14bvetpk0eaf1x3kc7th.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14bvetpk0eaf1x3kc7th.png" alt="website" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Benefits of Using CloudFront:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speeds up content delivery with edge locations.&lt;/li&gt;
&lt;li&gt;Provides an SSL certificate for secure connections.&lt;/li&gt;
&lt;li&gt;Offers caching to reduce latency and S3 bucket costs.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Streamlining CI/CD with GitHub Actions: Provision and Deploy Infrastructure Seamlessly Using Terraform</title>
      <dc:creator>AzeematRaji</dc:creator>
      <pubDate>Thu, 17 Oct 2024 10:56:13 +0000</pubDate>
      <link>https://dev.to/azeemah/streamlining-cicd-with-github-actions-provision-and-deploy-infrastructure-seamlessly-using-terraform-54j</link>
      <guid>https://dev.to/azeemah/streamlining-cicd-with-github-actions-provision-and-deploy-infrastructure-seamlessly-using-terraform-54j</guid>
      <description>&lt;p&gt;&lt;strong&gt;Terraform:&lt;/strong&gt;&lt;br&gt;
Terraform is an open-source Infrastructure as Code (IaC) tool used to automate and manage cloud infrastructure. It allows you to define infrastructure resources (like servers, databases, and networks) in declarative configuration files and then provision them consistently across different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes:&lt;/strong&gt;&lt;br&gt;
Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It helps ensure high availability, scalability, and efficient resource utilization across clusters of machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions:&lt;/strong&gt;&lt;br&gt;
GitHub Actions is a CI/CD platform that allows you to automate workflows directly within your GitHub repository. It enables tasks like building, testing, and deploying code whenever specific events occur (e.g., push or pull requests). You can define workflows using YAML files to handle continuous integration and deployment pipelines.&lt;/p&gt;

&lt;p&gt;These tools work well together to streamline infrastructure management and application deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Objectives&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Improve deployment scalability and reliability: Use Kubernetes to ensure that applications can scale automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Streamline CI/CD pipeline: Create a seamless CI/CD pipeline using GitHub Actions to automate infrastructure and application deployments efficiently, reducing manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Terraform to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure a fully automated, consistent, reproducible, and reliable infrastructure deployment across various environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leverage its unified workflow and lifecycle management features for easy updates and scaling.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;li&gt;IAM user with administrator access or the necessary permissions&lt;/li&gt;
&lt;li&gt;Terraform &lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;Install terraform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWSCLI &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Install awscli&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-step guide&lt;/strong&gt;&lt;br&gt;
Let's run the code manually before moving to the CI/CD workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure your AWS credentials so that the AWS CLI can authenticate and interact with your AWS account
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create a terraform directory and cd into it
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform
cd terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create terraform configuration files &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;providers.tf&lt;/code&gt; &lt;em&gt;contains all the providers needed for this project&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }

    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.30.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt; &lt;em&gt;defines input variables&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable region {}
variable cluster_name {}
variable vpc_name {}
variable vpc_cidr_block {}
variable private_subnet_cidr_blocks {}
variable public_subnet_cidr_blocks {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt; &lt;em&gt;contains:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Modules for VPC and EKS Cluster: creates a Virtual Private Cloud (VPC) and an Amazon EKS cluster, allowing for simpler and more concise code.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Kubernetes Provider: defines the Kubernetes provider for Terraform to deploy Kubernetes resources on the created cluster, all within a single .tf file, this ensures unified workflow and full life cycle management&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Configure the Kubernetes Provider: the exec plugin is used in the provider block to handle authentication, because the AWS authentication token expires every 15 minutes and must be refreshed.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Filter out local zones, which are not currently supported 
# with managed node groups

data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.13.0"

  name = var.vpc_name

  cidr = var.vpc_cidr_block
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = var.private_subnet_cidr_blocks
  public_subnets  = var.public_subnet_cidr_blocks

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # This enables automatic public IP assignment for instances in public subnets
  map_public_ip_on_launch = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.2"

  cluster_name    = var.cluster_name
  cluster_version = "1.29"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true


  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnets

  # Additional security group rules
  node_security_group_additional_rules = {
    allow_all_traffic = {
      description                  = "Allow traffic from the internet"
      protocol                     = "-1"  # Allow all protocols
      from_port                    = 0
      to_port                      = 65535  # Allow all ports
      cidr_blocks                  = ["0.0.0.0/0"]  # Allow from anywhere
      type                         = "ingress"
    }
  }  


  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"

  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t2.micro"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }

    two = {
      name = "node-group-2"

      instance_types = ["t2.micro"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }
  }
}



# Retrieve EKS cluster information and ensure data source waits for cluster to be created

data "aws_eks_cluster" "myApp-cluster" {
  name = module.eks.cluster_name
  depends_on = [module.eks]
}

data "aws_eks_cluster_auth" "myApp-cluster" {
  name = module.eks.cluster_name
}

#Kubernetes provider for Terraform to connect with AWS EKS Cluster

provider "kubernetes" {

  host                   = data.aws_eks_cluster.myApp-cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.myApp-cluster.certificate_authority[0].data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    command     = "aws"
  }
}

#Kubernetes resources in Terraform

resource "kubernetes_namespace" "terraform-k8s" {

  metadata {
    name = "terraform-k8s"
  }
}

resource "kubernetes_deployment" "nginx" {

  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.terraform-k8s.metadata[0].name

  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.21.6"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}



resource "kubernetes_service" "nginx" {

  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.terraform-k8s.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.nginx.spec[0].template[0].metadata[0].labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt; &lt;em&gt;defines output values that are displayed after the deployment completes&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}

output "nginx_load_balancer_hostname" {
  description = "Load balancer hostname to access the app from a browser (AWS load balancers expose a hostname rather than an IP)"
  value       = kubernetes_service.nginx.status[0].load_balancer[0].ingress[0].hostname
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;terraform.tfvars&lt;/code&gt; &lt;em&gt;file assigns values to the input variables defined in &lt;code&gt;variables.tf&lt;/code&gt;. This file contains sensitive information, such as AWS credentials or other configuration settings, and should not be exposed publicly, such as by pushing it to GitHub. Instead, it's best practice to store it as a secret variable in your repository's settings to ensure that sensitive information is kept secure.&lt;/em&gt;&lt;/p&gt;
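&lt;p&gt;As an illustration, a hypothetical &lt;code&gt;terraform.tfvars&lt;/code&gt; might look like this (all values are placeholders; adjust them to your environment):&lt;/p&gt;

```hcl
region                     = "us-east-1"
cluster_name               = "myApp-cluster"
vpc_name                   = "myApp-vpc"
vpc_cidr_block             = "10.0.0.0/16"
private_subnet_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet_cidr_blocks  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
```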

&lt;ul&gt;
&lt;li&gt;Run the Terraform commands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt; &lt;em&gt;initializes the working directory, downloading the required providers and modules.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt; &lt;em&gt;plan the changes to be added or removed, essentially a preview of what &lt;code&gt;terraform apply&lt;/code&gt; will do, allowing you to review and confirm&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply --auto-approve&lt;/code&gt; &lt;em&gt;applies the changes without prompting for confirmation&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm the cluster is created and the application is properly deployed; this can be done in two ways:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;AWS console: check that the cluster is running and the nodes are healthy, then find the load balancer's IP or DNS name and open it in a browser; the nginx page should be displayed&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncck8o68bcv0ft5lxn6u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncck8o68bcv0ft5lxn6u.jpg" alt="nginx running" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;using kubectl: &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl" rel="noopener noreferrer"&gt;install kubectl&lt;/a&gt;, configure it to talk to the cluster with &lt;code&gt;aws eks --region us-east-1 update-kubeconfig --name &amp;lt;cluster_name&amp;gt;&lt;/code&gt;, then run kubectl commands such as &lt;code&gt;kubectl get all -n &amp;lt;namespace&amp;gt;&lt;/code&gt; to inspect the deployment, service, and pods&lt;/em&gt;&lt;/p&gt;
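&lt;p&gt;Put together, the verification commands look like this (the cluster name, region, and namespace are placeholders matching the configuration above):&lt;/p&gt;

```shell
# Point kubectl at the newly created cluster
aws eks --region us-east-1 update-kubeconfig --name myApp-cluster

# Confirm the nodes joined and the workloads are running
kubectl get nodes
kubectl get all -n terraform-k8s
```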

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;terraform destroy --auto-approve&lt;/code&gt;; it is safe to destroy once you have confirmed the configuration files work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CI/CD workflow: to run this fully automated, create a &lt;code&gt;.github/workflows&lt;/code&gt; directory with two YAML files in it, one to execute &lt;code&gt;terraform apply&lt;/code&gt; and the other to destroy the infrastructure when done.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;terraform.yaml&lt;/code&gt; &lt;em&gt;&lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository" rel="noopener noreferrer"&gt;save&lt;/a&gt; your access key, secret key, and region as repository secrets&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is a basic workflow to help you deploy nginx on EKS using terraform

name: Terraform EKS Deployment

# Controls when the workflow will run
on:
  # Triggers the workflow on push events but only for the "main" branch
  push:
    branches: [ "main" ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains multiple jobs terraform,
  terraform:
    name: Deploy EKS Cluster
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Step 1: Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout Code
        uses: actions/checkout@v4

      # Step 2: Setup AWS Credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 3: Setup Terraform
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.6

      # Step 4: Terraform Init
      - name: Terraform Init
        run: terraform init

      # Step 5: Terraform Plan
      - name: Terraform Plan
        run: terraform plan

      # Step 6: Terraform Apply
      - name: Terraform Apply
        id: apply
        run: terraform apply -auto-approve

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;terraform-destroy.yml&lt;/code&gt; &lt;em&gt;needs access to the .tfstate file to properly delete the resources you created, such as the cluster. Instead of pushing your .tfstate to a repository (which is not secure), store it remotely; one recommended option is an S3 bucket as the Terraform remote backend.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To do this, you can create a &lt;code&gt;backend.tf&lt;/code&gt; file that specifies the S3 bucket as the remote backend for storing the .tfstate file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "terraform-statefile-backup-storage"
    key    = "eks-cluster/terraform.tfstate"
    region = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;terraform-destroy.yml&lt;/code&gt; &lt;em&gt;set to run manually instead of running on push&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform Destroy EKS Cluster

on:
  workflow_dispatch: # Manually triggered workflow

jobs:
  terraform:
    name: Destroy EKS Cluster
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.6

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan Destroy
        run: terraform plan -destroy

      - name: Terraform Destroy
        run: terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Push to GitHub, and the &lt;code&gt;terraform.yaml&lt;/code&gt; workflow will run on push&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswruuhof3ehyp8jxzu4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswruuhof3ehyp8jxzu4c.png" alt="successful terraform workflow" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm by accessing your load balancer's IP or DNS name in a browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncck8o68bcv0ft5lxn6u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncck8o68bcv0ft5lxn6u.jpg" alt="nginx running" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manually run the workflow of &lt;code&gt;terraform-destroy.yml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc712ft36jkwk598rqzqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc712ft36jkwk598rqzqu.png" alt="successful terraform-destroy workflow" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Time-Saving: Automating the deployment process saves time and effort, making it easier to manage infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistency: Automation leads to more reliable deployments, reducing mistakes and ensuring everything works as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: Automated workflows can easily grow with your project, allowing for faster updates without losing quality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Better Teamwork: Integrating tools like Terraform and Kubernetes with GitHub Actions helps team members collaborate more effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility: A well-defined CI/CD pipeline allows teams to quickly adjust to changes, improving overall project speed and adaptability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/AzeematRaji/deploy-nginx-on-EKS-using-github-actions" rel="noopener noreferrer"&gt;Check out my Github&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
