<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Oyewole</title>
    <description>The latest articles on DEV Community by David Oyewole (@david_oyewole).</description>
    <link>https://dev.to/david_oyewole</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1502452%2Fe7e8e5d1-b46e-40f9-a7c2-1f2e63d514cc.jpg</url>
      <title>DEV Community: David Oyewole</title>
      <link>https://dev.to/david_oyewole</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/david_oyewole"/>
    <language>en</language>
    <item>
      <title>Deploying a 3-tier Micro-services Platform on AWS EKS</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Wed, 15 Oct 2025 20:00:51 +0000</pubDate>
      <link>https://dev.to/david_oyewole/building-a-production-grade-micro-services-platform-on-aws-eks-cj7</link>
      <guid>https://dev.to/david_oyewole/building-a-production-grade-micro-services-platform-on-aws-eks-cj7</guid>
      <description>&lt;p&gt;&lt;em&gt;by David Oyewole, Cloud &amp;amp; DevOps Engineer&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Every DevOps engineer eventually faces that “big one”: the project that brings together all the skills we’ve learned across infrastructure, automation, cloud, and observability.&lt;/p&gt;

&lt;p&gt;For me, that project was &lt;strong&gt;deploying the Sock Shop microservices application&lt;/strong&gt; on &lt;strong&gt;Amazon EKS&lt;/strong&gt; using &lt;strong&gt;Terraform&lt;/strong&gt;, &lt;strong&gt;Helm&lt;/strong&gt;, &lt;strong&gt;AWS Load Balancer Controller&lt;/strong&gt;, &lt;strong&gt;Route53&lt;/strong&gt;, &lt;strong&gt;ACM&lt;/strong&gt;, &lt;strong&gt;ExternalDNS&lt;/strong&gt;, &lt;strong&gt;Prometheus&lt;/strong&gt;, and &lt;strong&gt;Grafana&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;What started as an academic capstone evolved into a &lt;strong&gt;fully automated, production-ready microservices deployment&lt;/strong&gt; — complete with HTTPS, DNS automation, and end-to-end monitoring.  &lt;/p&gt;

&lt;p&gt;This article walks through my design, implementation, challenges, and lessons learned from building this platform from scratch.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Vision
&lt;/h2&gt;

&lt;p&gt;I wanted to prove I could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision and manage &lt;strong&gt;AWS infrastructure&lt;/strong&gt; as code using Terraform
&lt;/li&gt;
&lt;li&gt;Deploy &lt;strong&gt;Kubernetes workloads&lt;/strong&gt; with &lt;strong&gt;Helm charts&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Configure &lt;strong&gt;secure ingress with HTTPS and DNS automation&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;observability&lt;/strong&gt; through Prometheus and Grafana
&lt;/li&gt;
&lt;li&gt;Do it all &lt;strong&gt;without manual steps&lt;/strong&gt;, in a reproducible, production-grade setup
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, to go beyond “it works” and build something that would scale, self-document, and represent &lt;strong&gt;how DevOps is done in the real world&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fh1844k3axecjeear0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fh1844k3axecjeear0s.png" alt="Setup Architecture" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the heart of my setup is &lt;strong&gt;AWS EKS&lt;/strong&gt;, orchestrating dozens of containers across two main namespaces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sock-shop&lt;/code&gt; — the microservices application (frontend + internal services)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;monitoring&lt;/code&gt; — the observability stack (Prometheus + Grafana)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Cloud Stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Terraform&lt;/td&gt;
&lt;td&gt;Provisions VPC, EKS, subnets, IAM roles, ACM, Route53&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ingress&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AWS ALB Ingress Controller&lt;/td&gt;
&lt;td&gt;Manages ingress traffic via ALB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DNS &amp;amp; SSL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Route53 + ACM + ExternalDNS&lt;/td&gt;
&lt;td&gt;Automated DNS record creation and TLS certificates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Helm&lt;/td&gt;
&lt;td&gt;Deploys Sock Shop microservices and monitoring stack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prometheus + Grafana&lt;/td&gt;
&lt;td&gt;Cluster and app observability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;IRSA&lt;/td&gt;
&lt;td&gt;Fine-grained IAM for ALB and DNS controllers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Traffic flows like this:&lt;br&gt;
User → Route53 (DNS) → ALB (HTTPS termination) → Ingress → Frontend Service → Internal ClusterIP Services&lt;/p&gt;


&lt;h2&gt;
  
  
  Step 1: Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;I started by writing &lt;strong&gt;Terraform modules&lt;/strong&gt; to create the networking and compute foundation.&lt;/p&gt;

&lt;p&gt;Terraform handled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;VPC&lt;/strong&gt; with public and private subnets
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS cluster&lt;/strong&gt; and managed node groups
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM roles&lt;/strong&gt; for both controllers (&lt;code&gt;external-dns&lt;/code&gt; and &lt;code&gt;aws-load-balancer-controller&lt;/code&gt;) using &lt;strong&gt;IRSA&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Route53 hosted zone&lt;/strong&gt; for my subdomain (&lt;code&gt;sock.blessedc.org&lt;/code&gt;)
&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;ACM wildcard certificate&lt;/strong&gt; for HTTPS termination
&lt;/li&gt;
&lt;/ul&gt;
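
&lt;p&gt;As an illustration, the layout above can be sketched with the public &lt;code&gt;terraform-aws-modules&lt;/code&gt; registry modules (names, CIDRs, and variables here are illustrative, not my exact code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;module "vpc" {
  source          = "terraform-aws-modules/vpc/aws"
  name            = "sock-shop-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
}

module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "sock-shop"
  vpc_id       = module.vpc.vpc_id
  subnet_ids   = module.vpc.private_subnets
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
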

&lt;p&gt;Once the code was ready, I ran&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In minutes, AWS spun up a fully functional Kubernetes environment with all IAM and networking policies correctly set.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Deploying Microservices with Helm
&lt;/h2&gt;

&lt;p&gt;With infrastructure live, I used Helm to deploy the Sock Shop microservices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add sock-shop https://microservices-demo.github.io/helm
helm &lt;span class="nb"&gt;install &lt;/span&gt;sockshop sock-shop/sock-shop &lt;span class="nt"&gt;-n&lt;/span&gt; sock-shop &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Helm made it easy to templatize everything and redeploy consistently after every test or rebuild.&lt;/p&gt;

&lt;p&gt;I configured only the frontend service to be public (via Ingress), while keeping all other services (catalogue, user, orders, shipping, etc.) internal-only with ClusterIP.&lt;/p&gt;

&lt;p&gt;This separation kept the architecture both secure and realistic, mimicking real-world microservice deployments.&lt;/p&gt;
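
&lt;p&gt;As an illustration of that split, an internal service stays unexposed simply by defaulting to ClusterIP and having no Ingress rule point at it (service and port names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: catalogue
  namespace: sock-shop
spec:
  type: ClusterIP   # internal-only; reachable from inside the cluster
  selector:
    app: catalogue
  ports:
    - port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
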

&lt;h2&gt;
  
  
  Step 3: HTTPS &amp;amp; DNS Automation
&lt;/h2&gt;

&lt;p&gt;Manual certificate management and DNS updates are the enemies of scalability.&lt;/p&gt;

&lt;p&gt;So I automated both using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Certificate Manager (ACM) — provisioned via Terraform with a wildcard certificate (*.blessedc.org)&lt;/li&gt;
&lt;li&gt;ExternalDNS — dynamically updates Route53 A-records based on Ingress resources&lt;/li&gt;
&lt;li&gt;AWS ALB Ingress Controller — automatically creates the load balancer and attaches the certificate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single Ingress.yaml file for the frontend handled all of this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alb&lt;/span&gt;
  &lt;span class="na"&gt;alb.ingress.kubernetes.io/scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;internet-facing&lt;/span&gt;
  &lt;span class="na"&gt;alb.ingress.kubernetes.io/target-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip&lt;/span&gt;
  &lt;span class="na"&gt;alb.ingress.kubernetes.io/certificate-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;acm-arn&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In less than five minutes, a fully functional HTTPS endpoint appeared:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://sock.blessedc.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And yes, ExternalDNS took care of the Route53 record automatically. I installed it with Helm by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/&lt;/span&gt;
&lt;span class="s"&gt;helm repo update&lt;/span&gt;

&lt;span class="s"&gt;helm install external-dns external-dns/external-dns \&lt;/span&gt;
&lt;span class="s"&gt;--namespace external-dns \&lt;/span&gt;
&lt;span class="s"&gt;--create-namespace \&lt;/span&gt;
&lt;span class="s"&gt;--set provider=aws \&lt;/span&gt;
&lt;span class="s"&gt;--set registry=txt \&lt;/span&gt;
&lt;span class="s"&gt;--set txtOwnerId=sockshop \&lt;/span&gt;
&lt;span class="s"&gt;--set domainFilters={sock.blessedc.org} \&lt;/span&gt;
&lt;span class="s"&gt;--set aws.zoneType=public \&lt;/span&gt;
&lt;span class="s"&gt;--set serviceAccount.create=true \&lt;/span&gt;
&lt;span class="s"&gt;--set serviceAccount.name=external-dns \&lt;/span&gt;
&lt;span class="s"&gt;--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::&amp;lt;IAM-ID&amp;gt;:role/eks-externaldns-role&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Monitoring with Prometheus and Grafana
&lt;/h2&gt;

&lt;p&gt;No production cluster is complete without observability.&lt;/p&gt;

&lt;p&gt;I deployed a complete monitoring stack using the kube-prometheus-stack Helm chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm &lt;span class="nb"&gt;install &lt;/span&gt;monitoring prometheus-community/kube-prometheus-stack &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; grafana-ingress.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; prometheus-ingress.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deployed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus — scraping cluster and application metrics&lt;/li&gt;
&lt;li&gt;Grafana — for dashboard visualization&lt;/li&gt;
&lt;li&gt;Alertmanager — ready for future integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I created the grafana-ingress.yaml and prometheus-ingress.yaml files before executing those commands; this created the grafana and prometheus subdomains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grafana.sock.blessedc.org
prometheus.sock.blessedc.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and exposed them via Ingress, secured by the same ACM certificate.&lt;/p&gt;
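
&lt;p&gt;A trimmed sketch of such an ingress file (the Grafana service name follows the kube-prometheus-stack release naming, and the ACM ARN placeholder is left unfilled):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: &amp;lt;acm-arn&amp;gt;
spec:
  ingressClassName: alb
  rules:
    - host: grafana.sock.blessedc.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-grafana   # release "monitoring" + chart name
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
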

&lt;h3&gt;
  
  
  Custom Grafana Dashboard
&lt;/h3&gt;

&lt;p&gt;I built a custom “Sock Shop Microservices Overview” dashboard showing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pod CPU &amp;amp; Memory usage per microservice&lt;/li&gt;
&lt;li&gt;Pod restart count&lt;/li&gt;
&lt;li&gt;Node resource utilization&lt;/li&gt;
&lt;li&gt;Cluster-wide health&lt;/li&gt;
&lt;li&gt;Request and error rates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was a beautiful moment seeing real-time metrics flow across a system I built entirely from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw1u7iydcceq5jvo3x62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw1u7iydcceq5jvo3x62.png" alt="Grafana Dashboard" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Troubleshooting and Challenges
&lt;/h2&gt;

&lt;p&gt;No project is complete without a few scars.  &lt;/p&gt;

&lt;p&gt;Here are some of the issues I hit — and how I fixed them:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Issue&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Root Cause&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Fix&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AccessDenied for Route53&lt;/td&gt;
&lt;td&gt;Missing IAM permissions for ExternalDNS&lt;/td&gt;
&lt;td&gt;Attached &lt;code&gt;route53:*&lt;/code&gt; policy to &lt;code&gt;eks-externaldns-role&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALB not creating&lt;/td&gt;
&lt;td&gt;Missing &lt;code&gt;ingressClassName&lt;/code&gt; and IAM for ALB controller&lt;/td&gt;
&lt;td&gt;Updated Ingress annotations and IRSA role&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ingress stuck without address&lt;/td&gt;
&lt;td&gt;ACM ARN mismatch&lt;/td&gt;
&lt;td&gt;Ensured the certificate region matched the cluster region (&lt;code&gt;us-east-1&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hosted Zone not deleting&lt;/td&gt;
&lt;td&gt;Route53 still had A/TXT records&lt;/td&gt;
&lt;td&gt;Cleaned up manually before running &lt;code&gt;terraform destroy&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grafana unreachable&lt;/td&gt;
&lt;td&gt;Ingress hostname typo&lt;/td&gt;
&lt;td&gt;Fixed host rule to &lt;code&gt;grafana.sock.blessedc.org&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each issue taught me something about how AWS and Kubernetes actually communicate under the hood; that’s where true learning happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Highlights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Used IRSA (IAM Roles for Service Accounts) for both aws-load-balancer-controller and external-dns&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensured all backend services stayed internal (ClusterIP)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enabled HTTPS-only ingress (redirecting all HTTP to HTTPS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolated monitoring stack in a separate namespace&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
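
&lt;p&gt;The HTTPS-only behaviour above comes down to two annotations on the Ingress (a sketch; check the AWS Load Balancer Controller docs for the exact syntax supported by your controller version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;annotations:
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'   # redirect all HTTP to HTTPS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
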

&lt;p&gt;This aligns with cloud-native security best practices for production workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;After deployment, I had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A fully automated and reproducible AWS EKS cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Publicly available frontend accessible via HTTPS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DNS records auto-managed by Kubernetes (no manual updates)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Centralized monitoring for all microservices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A clean teardown/rebuild process via Terraform in minutes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Networking is everything. Understanding subnets, routes, and public vs private traffic is key before touching EKS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ALB Ingress Controller is incredibly powerful but requires precise IAM permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automation = Reliability. The fewer manual steps, the fewer production mistakes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring early helps debug faster and ensures confidence in the platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IRSA changed the way I think about security in Kubernetes: no more over-permissioned nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;This project made me appreciate the power of Infrastructure as Code and declarative configuration.&lt;br&gt;
It taught me to think like a systems designer, not just an implementer: to build platforms that can scale, heal, and redeploy themselves.&lt;/p&gt;

&lt;p&gt;Today, my sock.blessedc.org setup stands as more than a capstone;&lt;br&gt;
it’s my personal proof that DevOps is about automation, resilience, and continuous learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;This project demonstrates how to design and deploy a secure, scalable, observable microservices architecture on AWS, using Terraform for infrastructure, Helm for Kubernetes deployments, and modern DevOps best practices for automation, monitoring, and reliability.&lt;/p&gt;

&lt;p&gt;👨‍💻 Author&lt;/p&gt;

&lt;p&gt;David Oyewole&lt;br&gt;
Cloud &amp;amp; DevOps Engineer&lt;br&gt;
&lt;a href="//www.linkedin.com/in/david-oyewole-o"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/oyewoledavid" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>microservices</category>
    </item>
    <item>
      <title>🚀 Building a Fully Automated Cloud Monitoring &amp; Logging Infrastructure on AWS</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Fri, 25 Jul 2025 09:15:21 +0000</pubDate>
      <link>https://dev.to/david_oyewole/building-a-fully-automated-cloud-monitoring-logging-infrastructure-on-aws-ani</link>
      <guid>https://dev.to/david_oyewole/building-a-fully-automated-cloud-monitoring-logging-infrastructure-on-aws-ani</guid>
      <description>&lt;p&gt;In today’s cloud-native world, observability is more than just a luxury, it's a necessity. Whether you’re a startup deploying fast or an enterprise managing hundreds of services, &lt;strong&gt;monitoring and logging infrastructure&lt;/strong&gt; forms the backbone of system reliability and performance.&lt;/p&gt;

&lt;p&gt;This article documents how I designed and automated a &lt;strong&gt;real-world cloud monitoring and logging infrastructure&lt;/strong&gt; using &lt;strong&gt;Terraform&lt;/strong&gt;, &lt;strong&gt;Ansible&lt;/strong&gt;, &lt;strong&gt;Prometheus&lt;/strong&gt;, &lt;strong&gt;Grafana&lt;/strong&gt;, &lt;strong&gt;Loki&lt;/strong&gt;, &lt;strong&gt;Fluent Bit&lt;/strong&gt;, and &lt;strong&gt;Alertmanager&lt;/strong&gt;, all deployed on &lt;strong&gt;AWS EC2&lt;/strong&gt;, with logs stored remotely in &lt;strong&gt;S3&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 Why I Built This Project
&lt;/h2&gt;

&lt;p&gt;I’ve worked on cloud deployments before, but I wanted something beyond the usual hello-world stack. I wanted a setup that would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mirror what’s used in real production environments&lt;/li&gt;
&lt;li&gt;Be modular and repeatable (IAC-first mindset)&lt;/li&gt;
&lt;li&gt;Handle &lt;strong&gt;metrics&lt;/strong&gt;, &lt;strong&gt;logs&lt;/strong&gt;, and &lt;strong&gt;alerts&lt;/strong&gt; in a centralized and automated way&lt;/li&gt;
&lt;li&gt;Include &lt;strong&gt;secure public access&lt;/strong&gt; to dashboards and endpoints via HTTPS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project was not only an exercise in learning and problem-solving, but also a step toward building production-ready DevOps workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌍 Real-World Use Case
&lt;/h2&gt;

&lt;p&gt;Imagine you're managing infrastructure for a SaaS product with multiple microservices. Here’s how this stack supports you:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concern&lt;/th&gt;
&lt;th&gt;Solution Provided&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Detect server failure&lt;/td&gt;
&lt;td&gt;Prometheus + Alertmanager alerts to Slack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monitor system metrics&lt;/td&gt;
&lt;td&gt;Node Exporter + Grafana dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Track app errors in logs&lt;/td&gt;
&lt;td&gt;Fluent Bit → Loki → Grafana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Long-term log retention&lt;/td&gt;
&lt;td&gt;Loki stores chunks in S3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Secure access to tools&lt;/td&gt;
&lt;td&gt;Nginx reverse proxy + SSL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Infrastructure scalability&lt;/td&gt;
&lt;td&gt;Terraform + modular Ansible roles&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  ⚙️ Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The project provisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Two EC2 instances&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring Server (Prometheus, Grafana, Loki, Alertmanager, Nginx)&lt;/li&gt;
&lt;li&gt;Web Server (Fluent Bit, Node Exporter, and static web content)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Bucket&lt;/strong&gt; for storing Loki logs&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Slack Integration&lt;/strong&gt; via Alertmanager&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;All services are containerless and run as systemd services for simplicity and transparency.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    [Web Server]                      [Monitoring Server]
    ┌──────────────┐                   ┌────────────────────────────────┐
    │ Node Exporter│→───────────────→│  Prometheus               │
    │ Fluent Bit  │━━━━━━━━━━━┐          │                           │
    └───────────────└────────────→ │  Loki                     │
                                    │   ↳ S3 storage backend     │
                                    │  Grafana + Alertmanager    │
                                    └────────────────────────────┘
                                             ↓
                                       Slack Notifications
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧠 Automation Strategy
&lt;/h2&gt;

&lt;p&gt;From the beginning, the goal was to automate everything. I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt;: To provision EC2, security groups, IAM roles, and the S3 bucket&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ansible&lt;/strong&gt;: To install, configure, and start all services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A shell script&lt;/strong&gt;: To orchestrate the full deployment and handle DNS updates via Namecheap's dynamic DNS API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything, from spinning up servers to pushing logs into Grafana, happens with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deploys Terraform infra, fetches the EC2 IP, updates DNS, waits for propagation, generates Ansible variables, and runs the full Ansible playbook.&lt;/p&gt;




&lt;h2&gt;
  
  
  💥 Challenges Faced
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;DNS &amp;amp; SSL Certbot Errors&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s Encrypt (via Certbot) expects Nginx to be running before it can verify domain ownership, but Nginx can’t start unless the SSL certs exist. Classic chicken-and-egg.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
I implemented a &lt;strong&gt;temporary HTTP-only Nginx config&lt;/strong&gt; just to obtain the certs, then replaced it with the full reverse proxy config.&lt;/p&gt;
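
&lt;p&gt;A minimal version of that temporary HTTP-only server block (the domain and webroot path are assumptions; the only job here is serving the ACME challenge so Certbot can verify ownership):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;server {
    listen 80;
    server_name monitoring.yourdomain.com;

    # serve Certbot's webroot challenge files over plain HTTP
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
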


&lt;h3&gt;
  
  
  2. &lt;strong&gt;Terraform IP Sync with Ansible&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After &lt;code&gt;terraform apply&lt;/code&gt;, the private/public IPs of EC2 instances change, but Ansible needs those IPs to connect and configure properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
I wrote a &lt;strong&gt;Python script to auto-generate a YAML file&lt;/strong&gt; (&lt;code&gt;terraform_outputs.yml&lt;/code&gt;) from &lt;code&gt;terraform output -json&lt;/code&gt;, which Ansible uses directly.&lt;/p&gt;
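
&lt;p&gt;A stripped-down sketch of that conversion (my assumptions: every Terraform output is a flat scalar, and the function name is hypothetical, so plain &lt;code&gt;key: value&lt;/code&gt; lines are valid YAML without needing a YAML library):&lt;/p&gt;

```python
import json

def terraform_outputs_to_yaml(raw_json: str) -> str:
    """Render `terraform output -json` as a flat YAML vars file.

    Sketch under assumptions: every output value is a simple scalar,
    so "key: value" lines are emitted directly.
    """
    outputs = json.loads(raw_json)
    lines = []
    for name, meta in sorted(outputs.items()):
        # terraform wraps each output as {"value": ..., "type": ..., "sensitive": ...}
        lines.append(f"{name}: {meta['value']}")
    return "\n".join(lines) + "\n"
```

&lt;p&gt;Piping &lt;code&gt;terraform output -json&lt;/code&gt; into this and redirecting the result to &lt;code&gt;terraform_outputs.yml&lt;/code&gt; gives Ansible a ready-made vars file.&lt;/p&gt;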


&lt;h3&gt;
  
  
  3. &lt;strong&gt;Loki Failing to Flush Logs to S3&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I saw this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PutObjectInput.Bucket: minimum field size of 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turns out, even though the AWS credentials were set, &lt;strong&gt;the bucket name was missing&lt;/strong&gt; in the config due to a YAML templating issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Always double-check templated variables and file paths in production systems.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. &lt;strong&gt;Slack Webhook Not Working&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Alertmanager showed the alert as firing, but no message came to Slack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cause:&lt;/strong&gt; The webhook URL wasn’t rendering correctly in the Alertmanager config due to a misconfigured variable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Injected the correct URL into Ansible vars and validated it using &lt;code&gt;cat /etc/alertmanager/alertmanager.yml | grep api_url&lt;/code&gt;.&lt;/p&gt;
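
&lt;p&gt;For reference, the relevant Alertmanager fragment looks roughly like this (the receiver name and channel are placeholders; the Jinja variable is rendered by the Ansible template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: "{{ slack_webhook_url }}"   # must render to the real webhook URL
        channel: "#alerts"
        send_resolved: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
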




&lt;h2&gt;
  
  
  📊 Dashboards in Action
&lt;/h2&gt;

&lt;p&gt;Once everything was set up, Grafana became the single-pane-of-glass for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU, memory, disk metrics from Prometheus + Node Exporter&lt;/li&gt;
&lt;li&gt;Log streams from Fluent Bit + Loki, searchable by label&lt;/li&gt;
&lt;li&gt;Alert status overview with firing/resolved alerts&lt;/li&gt;
&lt;li&gt;S3 bucket verification for long-term log persistence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Access it securely at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://monitoring.yourdomain.com/grafana/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  📣 Slack Alerts in Action
&lt;/h2&gt;

&lt;p&gt;Prometheus + Alertmanager monitor system health, and notify me on Slack when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A server goes down (NodeDown)&lt;/li&gt;
&lt;li&gt;Fluent Bit or Loki stop sending logs&lt;/li&gt;
&lt;li&gt;Memory or CPU usage exceeds thresholds&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  💡 Key Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Observability must be baked in from the start&lt;/strong&gt;, not retrofitted.&lt;/li&gt;
&lt;li&gt;SSL and reverse proxy automation can be tricky — understand the Certbot flow.&lt;/li&gt;
&lt;li&gt;Always validate your YAML configs on target machines.&lt;/li&gt;
&lt;li&gt;The power of infrastructure-as-code is not just provisioning — but &lt;strong&gt;repeatability&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ✅ What’s Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Add autoscaling for web nodes and dynamic target discovery in Prometheus&lt;/li&gt;
&lt;li&gt;[ ] Automate dashboard import in Grafana via JSON API&lt;/li&gt;
&lt;li&gt;[ ] Add multi-user RBAC auth for Grafana&lt;/li&gt;
&lt;li&gt;[ ] Store Prometheus TSDB snapshots in S3&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📁 GitHub Repository
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Full codebase is available here:&lt;br&gt;
&lt;a href="https://github.com/oyewoledavid/cloud-monitoring-logging-infra" rel="noopener noreferrer"&gt;🔗 GitHub - Cloud Monitoring &amp;amp; Logging Infrastructure&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🙌 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project solidified my understanding of &lt;strong&gt;real-world observability&lt;/strong&gt;. It goes beyond tutorials and walks into what a DevOps engineer would actually implement in a production environment.&lt;/p&gt;

&lt;p&gt;Everything, from provisioning and configuration to metrics, logs, dashboards, alerts, HTTPS, and log storage, is done automatically with one script.&lt;/p&gt;

&lt;p&gt;If you're building or learning DevOps, I strongly recommend doing a project like this. It forces you to troubleshoot across multiple layers: infrastructure, OS, services, and automation.&lt;/p&gt;

&lt;p&gt;Let me know what you think, or if you’d like a walkthrough video or workshop based on this project!&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Fri, 25 Jul 2025 08:06:05 +0000</pubDate>
      <link>https://dev.to/david_oyewole/-4o1j</link>
      <guid>https://dev.to/david_oyewole/-4o1j</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143" class="crayons-story__hidden-navigation-link"&gt;From Scratch to Kubernetes: My Full-Stack DevOps Project on a Local Machine&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/david_oyewole" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1502452%2Fe7e8e5d1-b46e-40f9-a7c2-1f2e63d514cc.jpg" alt="david_oyewole profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/david_oyewole" class="crayons-story__secondary fw-medium m:hidden"&gt;
              David Oyewole
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                David Oyewole
                
              
              &lt;div id="story-author-preview-content-2587251" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/david_oyewole" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1502452%2Fe7e8e5d1-b46e-40f9-a7c2-1f2e63d514cc.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;David Oyewole&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jun 12 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143" id="article-link-2587251"&gt;
          From Scratch to Kubernetes: My Full-Stack DevOps Project on a Local Machine
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/devops"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;devops&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/kubernetes"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;kubernetes&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/docker"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;docker&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/monitoring"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;monitoring&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;1&lt;span class="hidden s:inline"&gt; reaction&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              2&lt;span class="hidden s:inline"&gt; comments&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            3 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>From Scratch to Kubernetes: My Full-Stack DevOps Project on a Local Machine</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Thu, 12 Jun 2025 13:12:54 +0000</pubDate>
      <link>https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143</link>
      <guid>https://dev.to/david_oyewole/from-scratch-to-kubernetes-my-full-stack-devops-project-on-a-local-machine-1143</guid>
      <description>&lt;h2&gt;
  
  
  🚀 Description
&lt;/h2&gt;

&lt;p&gt;A hands-on DevOps showcase: containerization, Kubernetes with Helm, CI/CD using GitHub Actions, and observability with Prometheus, all running locally.&lt;/p&gt;

&lt;p&gt;In this article, I walk through a DevOps project I recently completed: a fully containerized full-stack web application deployed on a local Kubernetes cluster. This project was designed not just to build a functional app, but to demonstrate my DevOps skills end-to-end: containerization, orchestration, CI/CD, and observability.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Project Overview
&lt;/h2&gt;

&lt;p&gt;🔧 &lt;strong&gt;Project Goal:&lt;/strong&gt;&lt;br&gt;
The goal was to create a simple yet realistic web app and apply modern DevOps practices around it. Here's what the application does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;React frontend&lt;/strong&gt; collects user information.&lt;/li&gt;
&lt;li&gt;The data is sent to a &lt;strong&gt;Flask backend&lt;/strong&gt;, which checks if the user already exists.&lt;/li&gt;
&lt;li&gt;If not, it adds the user to a &lt;strong&gt;PostgreSQL&lt;/strong&gt; database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; is used to cache the user data for faster reads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt; acts as a reverse proxy to route traffic efficiently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple in logic, but powerful as a DevOps exercise.&lt;/p&gt;
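&lt;p&gt;As a rough sketch of that logic (plain dictionaries stand in for PostgreSQL and Redis here; all names are illustrative, not the project's actual code):&lt;/p&gt;

```python
# In-memory stand-ins for the PostgreSQL users table and the Redis cache.
users_db = {}
cache = {}

def register_user(email, full_name):
    """Add the user if they don't already exist; warm the cache on insert."""
    if email in users_db:
        return "exists"
    users_db[email] = {"email": email, "full_name": full_name}
    cache[email] = users_db[email]   # cache the record for fast reads
    return "created"

print(register_user("jane@example.com", "Jane Doe"))   # created
print(register_user("jane@example.com", "Jane Doe"))   # exists
```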




&lt;h2&gt;
  
  
  📦 Containerization with Docker
&lt;/h2&gt;

&lt;p&gt;🐳 &lt;strong&gt;Consistent builds everywhere.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Each component of the stack (React front-end, Flask back-end, PostgreSQL, Redis, and Nginx) was packaged into its own Docker container. I wrote Dockerfiles for the front-end, back-end, PostgreSQL, and Nginx, while Redis ran from its official image.&lt;/p&gt;

&lt;p&gt;I pushed all my custom images to &lt;strong&gt;Docker Hub&lt;/strong&gt;, making them easy to deploy anywhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  ☸️ Kubernetes Orchestration with Helm
&lt;/h2&gt;

&lt;p&gt;🧭 &lt;strong&gt;Template-driven deployments.&lt;/strong&gt; &lt;br&gt;
I chose &lt;strong&gt;Minikube&lt;/strong&gt; as my local Kubernetes distribution because it’s lightweight and easy to set up.&lt;/p&gt;

&lt;p&gt;To manage deployments efficiently, I used &lt;strong&gt;Helm&lt;/strong&gt; charts. Helm made it easier to define and template Kubernetes manifests for each component. It allowed me to quickly spin up or update the application by adjusting just a few values.&lt;/p&gt;

&lt;p&gt;Key Kubernetes resources used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployments for the front-end, back-end, and Nginx&lt;/li&gt;
&lt;li&gt;StatefulSet for PostgreSQL&lt;/li&gt;
&lt;li&gt;Persistent Volumes for database storage&lt;/li&gt;
&lt;li&gt;ConfigMaps and Secrets for environment variables&lt;/li&gt;
&lt;li&gt;Ingress configured with Nginx for routing&lt;/li&gt;
&lt;/ul&gt;
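&lt;p&gt;As an illustration of how ConfigMaps and Secrets feed environment variables into a Deployment, here is a minimal, hypothetical fragment (names and images are placeholders, not the project's actual manifests):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: flask-backend
          image: youruser/flask-backend:latest   # placeholder image
          envFrom:
            - configMapRef:
                name: backend-config    # non-secret settings
            - secretRef:
                name: backend-secrets   # DB credentials, API keys
```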




&lt;h2&gt;
  
  
  🌐 Reverse Proxy with NGINX
&lt;/h2&gt;

&lt;p&gt;🛣️ &lt;strong&gt;Traffic routing made simple.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
NGINX was configured as a reverse proxy to route external traffic efficiently to the back-end services and enforce separation of concerns.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ Caching with Redis
&lt;/h2&gt;

&lt;p&gt;🚀 &lt;strong&gt;Faster response times.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Redis was integrated as an in-memory store to cache user data, helping reduce database load and improve response times.&lt;/p&gt;
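&lt;p&gt;The read path follows the classic cache-aside pattern, sketched here with dictionaries standing in for Redis and PostgreSQL (names are illustrative):&lt;/p&gt;

```python
# Dictionaries stand in for the real Redis cache and PostgreSQL table.
db = {"jane@example.com": {"full_name": "Jane Doe"}}
cache = {}

def get_user(email):
    """Serve from the cache when possible, otherwise fall back to the db."""
    if email in cache:
        return cache[email], "cache"
    record = db.get(email)
    if record is not None:
        cache[email] = record          # populate the cache for next time
    return record, "db"

print(get_user("jane@example.com"))    # first read hits the db
print(get_user("jane@example.com"))    # second read is served from the cache
```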




&lt;h2&gt;
  
  
  🐘 PostgreSQL for Persistent Storage
&lt;/h2&gt;

&lt;p&gt;🗄️ &lt;strong&gt;Reliable relational storage.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
PostgreSQL was used as the main relational database, storing all user information reliably with persistent volumes on Kubernetes.&lt;/p&gt;




&lt;h2&gt;
  
  
  📈 Observability with Prometheus, Grafana  &amp;amp; Alertmanager
&lt;/h2&gt;

&lt;p&gt;📊 &lt;strong&gt;Visualizing performance.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Monitoring is essential for any production-grade system. I deployed the &lt;strong&gt;Prometheus stack&lt;/strong&gt;, including &lt;strong&gt;Grafana&lt;/strong&gt; and &lt;strong&gt;Alertmanager&lt;/strong&gt; inside the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus scraped metrics from pods and the Kubernetes API.&lt;/li&gt;
&lt;li&gt;Grafana provided dashboards to visualize system performance (CPU, memory, request latency).&lt;/li&gt;
&lt;li&gt;I configured &lt;strong&gt;Alertmanager&lt;/strong&gt; to send alerts to a Slack channel when thresholds were breached (e.g., high memory usage or pod crashes).&lt;/li&gt;
&lt;/ul&gt;
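&lt;p&gt;For a sense of what the alerting rules look like, here is one illustrative Prometheus rule (the threshold and labels are hypothetical, not the exact rules from this project):&lt;/p&gt;

```yaml
groups:
  - name: pod-health
    rules:
      - alert: PodHighMemory
        expr: container_memory_working_set_bytes / container_spec_memory_limit_bytes > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod is using over 90% of its memory limit"
```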




&lt;h2&gt;
  
  
  🔁 CI/CD with GitHub Actions and Self-Hosted Runner
&lt;/h2&gt;

&lt;p&gt;🤖 &lt;strong&gt;Automated delivery pipeline.&lt;/strong&gt;&lt;br&gt;
To complete the DevOps lifecycle, I set up a full CI/CD pipeline using &lt;strong&gt;GitHub Actions&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD Workflow:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Lint and test&lt;/strong&gt; the code (for front-end/back-end).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Docker images&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push to Docker Hub&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trigger a deployment&lt;/strong&gt; to my local Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since the cluster was running locally, I used a &lt;strong&gt;self-hosted GitHub Actions runner&lt;/strong&gt; installed on my laptop. This allowed the pipeline to build and deploy directly to my local Kubernetes environment, with no cloud required.&lt;/p&gt;
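&lt;p&gt;A stripped-down version of such a workflow might look like this (image names, chart path, and the release name are placeholders, not the project's actual pipeline):&lt;/p&gt;

```yaml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: self-hosted        # the runner installed on the laptop
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t youruser/backend:${{ github.sha }} ./backend
          docker push youruser/backend:${{ github.sha }}
      - name: Deploy with Helm
        run: |
          helm upgrade --install myapp ./chart \
            --set backend.image.tag=${{ github.sha }}
```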




&lt;h2&gt;
  
  
  ⚠️ Challenges &amp;amp; Lessons Learned
&lt;/h2&gt;

&lt;p&gt;🧩 &lt;strong&gt;Solving real-world problems.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingress and Service Networking:&lt;/strong&gt; Getting Nginx ingress to work with service routing locally took some tweaking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Management:&lt;/strong&gt; Managing database credentials and API keys securely in Kubernetes taught me the importance of Secrets and RBAC.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Limits:&lt;/strong&gt; Without setting proper resource requests/limits, some pods would get evicted under memory pressure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Integration:&lt;/strong&gt; Setting up a self-hosted GitHub runner took some trial and error but was a great learning experience.&lt;/li&gt;
&lt;/ul&gt;
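&lt;p&gt;The eviction lesson boils down to setting requests and limits on every container; a typical (hypothetical) fragment looks like:&lt;/p&gt;

```yaml
resources:
  requests:
    cpu: 100m        # what the scheduler reserves for the pod
    memory: 128Mi
  limits:
    cpu: 500m        # hard ceilings; exceeding the memory limit gets the pod killed
    memory: 256Mi
```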




&lt;h2&gt;
  
  
  ✅ Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project helped me strengthen my understanding of the entire DevOps lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From writing Dockerfiles to managing deployments with Helm&lt;/li&gt;
&lt;li&gt;From local Kubernetes orchestration to building CI/CD pipelines&lt;/li&gt;
&lt;li&gt;From setting up observability tools to configuring real-time alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was a rewarding experience that sharpened my technical skills and gave me a deeper appreciation for system reliability and automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  📎 Want to See the Code?
&lt;/h2&gt;

&lt;p&gt;Feel free to check out the project repository on GitHub:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://github.com/oyewoledavid/full-stack-kubernetes" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions or feedback, I’d love to hear from you in the comments!&lt;/p&gt;




</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>How I Built a Bulk Email Sender with Python and Brevo API</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Thu, 12 Jun 2025 12:12:18 +0000</pubDate>
      <link>https://dev.to/david_oyewole/how-i-built-a-bulk-email-sender-with-python-and-brevo-api-1p72</link>
      <guid>https://dev.to/david_oyewole/how-i-built-a-bulk-email-sender-with-python-and-brevo-api-1p72</guid>
      <description>&lt;p&gt;Sending personalized bulk emails programmatically is something I’ve always wanted to do, especially for community engagement and event updates. Recently, I built a simple but powerful bulk email sender in Python using the Brevo SMTP service. In this post, I’ll walk you through everything from setting up Brevo, to reading recipients from a CSV file, sending beautiful HTML emails with personalized content, and tracking email delivery status.&lt;/p&gt;

&lt;h2&gt;🧰 Tools &amp;amp; Technologies Used&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Python 3

Pandas – for reading and updating the CSV

smtplib – to send emails via SMTP

email.mime – for formatting the emails in HTML

Brevo SMTP – the email delivery service (free plan supported!)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;📝 Step 1: Preparing the Email List&lt;/h2&gt;

&lt;p&gt;I had already collected user responses via Google Forms. From the Google Sheet, I manually downloaded a CSV that included columns like Full Name, Email address, and sent_status. The sent_status column helps prevent resending to the same recipients.&lt;/p&gt;

&lt;p&gt;Here’s a sample structure of the CSV:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Full Name&lt;/th&gt;
&lt;th&gt;Email address&lt;/th&gt;
&lt;th&gt;sent_status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Jane Doe&lt;/td&gt;
&lt;td&gt;&lt;a href="//mailto:jane@example.com"&gt;jane@example.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;John Smith&lt;/td&gt;
&lt;td&gt;&lt;a href="//mailto:johnsmith@domain.com"&gt;johnsmith@domain.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
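&lt;p&gt;The sent_status bookkeeping can be sketched like this (using a small in-memory DataFrame instead of the real CSV):&lt;/p&gt;

```python
import pandas as pd

# Tiny stand-in for the exported Google Forms CSV.
df = pd.DataFrame({
    "Full Name": ["Jane Doe", "John Smith"],
    "Email address": ["jane@example.com", "johnsmith@domain.com"],
    "sent_status": ["sent", ""],
})

# Only rows not yet marked "sent" should receive an email.
pending = df[df["sent_status"].str.strip().str.lower() != "sent"]
print(pending["Email address"].tolist())   # ['johnsmith@domain.com']
```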

&lt;h2&gt;✉️ Step 2: Setting Up Brevo SMTP&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I signed up at Brevo.

From the dashboard, I went to SMTP &amp;amp; API section.

Copied my SMTP credentials and took note of the login username (something like abc123@smtp-brevo.com).

I also authenticated my sending domain (e.g., kingdavid.me) so my messages wouldn’t get marked as spam.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you don’t have a domain, Brevo still lets you test with limited functionality—but domain authentication is recommended for proper deliverability.&lt;/p&gt;

&lt;h2&gt;🧑‍💻 Step 3: Writing the Python Script&lt;/h2&gt;

&lt;p&gt;Here’s what the script does:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Loads the CSV using Pandas.

Loops through each row and checks if the email was already sent.

Creates a personalized HTML email for each recipient.

Sends the email via Brevo SMTP.

Updates the sent_status in the CSV after each email is sent.

Sleeps between sends to avoid being rate-limited.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sib_api_v3_sdk&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;email.mime.multipart&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MIMEMultipart&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;email.mime.text&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MIMEText&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sib_api_v3_sdk.rest&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ApiException&lt;/span&gt;  &lt;span class="c1"&gt;# Import ApiException
&lt;/span&gt;
&lt;span class="c1"&gt;# Initialize Brevo API client
&lt;/span&gt;&lt;span class="n"&gt;configuration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sib_api_v3_sdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Configuration&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;api-key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your actual Brevo API key
&lt;/span&gt;
&lt;span class="n"&gt;api_instance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sib_api_v3_sdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;TransactionalEmailsApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sib_api_v3_sdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ApiClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;# Define sender email as a dictionary (Ensure it's a verified sender email)
&lt;/span&gt;&lt;span class="n"&gt;sender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;John Doe&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your name
&lt;/span&gt;    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Johndoe@email.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with a valid sender email that is verified
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# CSV file containing email addresses and names
&lt;/span&gt;&lt;span class="n"&gt;CSV_FILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Cleaned Data.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;DELAY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# seconds between emails
&lt;/span&gt;
&lt;span class="n"&gt;SUBJECT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WEB3 LITERACY WITH BLESSED&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;HTML_BODY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;style&amp;gt;
      body {{
        font-family: Arial, sans-serif;
        background-color: #f4f4f4;
        margin: 0;
        padding: 0;
      }}
      .container {{
        background-color: #ffffff;
        max-width: 600px;
        margin: 30px auto;
        padding: 20px;
        border-radius: 10px;
        box-shadow: 0 2px 5px rgba(0,0,0,0.1);
      }}
      .header {{
        text-align: center;
        padding-bottom: 20px;
      }}
      .logo {{
        width: 120px;
      }}
      .body-content {{
        font-size: 20px;
        color: #333333;
        line-height: 1.6;
      }}
      .footer {{
        text-align: center;
        font-size: 12px;
        color: #999999;
        padding-top: 20px;
      }}
    &amp;lt;/style&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;div class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;container&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
      &amp;lt;div class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;header&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
        &amp;lt;img src=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://drive.google.com/uc?export=view&amp;amp;id=1cdWkRN1S8JvGye-f55a3aWfkE2pr9d1w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; alt=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Logo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; /&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body-content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
        &amp;lt;p&amp;gt;Hi &amp;lt;strong&amp;gt; {name} &amp;lt;/strong&amp;gt;,&amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;Thank you for taking the bold step to begin your journey into the world of Web3 and blockchain technology. I’m 
        truly excited to have you onboard, and I look forward to walking this path with you over the next few weeks. 
        We’ll kick things off tomorrow with a pre-class session, where you’ll get an overview of what to expect and how to 
        prepare. The full course officially begins on the 22nd of this month and will run for 10 days, taking place every 
        Wednesday, Thursday, Friday, and Saturday. To ensure a smooth and interactive learning experience, we’ve created four
          private Telegram groups, with each group holding between 200 to 400 participants to avoid overcrowding and make 
          sure everyone gets the attention they need. You can join your assigned group using the unique link in this email. 
          Please note that this is a personalized link created just for you. It must not be shared or forwarded. 
          Each link is tied to a specific participant, and sharing it may lead to identification and removal from the class. 
          The full curriculum will be shared with you soon. I’m looking forward to a powerful three weeks of Web3 literacy, 
          growth, and connection. I wish you great success as you step into the future of technology and digital opportunity! 
          Let’s do this! &amp;lt;/p&amp;gt;
          &amp;lt;a href=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https:******&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&amp;lt;strong&amp;gt;Join The Group Here&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
        &amp;lt;p&amp;gt;Warm regards,&amp;lt;br&amp;gt;&amp;lt;strong&amp;gt;Blessed&amp;lt;/strong&amp;gt; &amp;lt;br&amp;gt;&amp;lt;strong&amp;gt;Host, Web3 Literacy Season1&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;div class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;footer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
        &amp;lt;p&amp;gt;This is an automated message — please don’t reply directly.&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="c1"&gt;# Load the CSV file with email recipients
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CSV_FILE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Add a column for email sending status
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;

&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Loop through each email
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iterrows&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;

    &lt;span class="n"&gt;recipient_email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Email address&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Full Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;there&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;HTML_BODY&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Prepare the email
&lt;/span&gt;    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;send_smtp_email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sib_api_v3_sdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SendSmtpEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;to&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;recipient_email&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
            &lt;span class="n"&gt;subject&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SUBJECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;html_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;html&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;sender&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sender&lt;/span&gt;  &lt;span class="c1"&gt;# Corrected sender format
&lt;/span&gt;        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Send the email using Brevo's API
&lt;/span&gt;        &lt;span class="n"&gt;api_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_transac_email&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;send_smtp_email&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;✅ Sent to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;recipient_email&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;at&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;ApiException&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;❌ Failed to send to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;recipient_email&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;at&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sent_status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DELAY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Save updated status to the CSV file
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CSV_FILE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📸 Adding an Image to Your Email&lt;/p&gt;

&lt;p&gt;To include an image in your HTML email, use a public image URL (e.g., uploaded to Imgur or your own domain). Embedding local images isn’t reliable across email clients.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;img&lt;/span&gt; &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://yourdomain.com/logo.png"&lt;/span&gt; &lt;span class="na"&gt;alt=&lt;/span&gt;&lt;span class="s"&gt;"Web3 Logo"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📊 Tracking Status&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;sent_status&lt;/code&gt; column is updated automatically in your CSV file, making the script idempotent. You can rerun the script, and it will skip the rows already marked as sent.&lt;/p&gt;

&lt;p&gt;🧠 Lessons Learned&lt;/p&gt;
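&lt;p&gt;The skip logic can be sketched with the standard &lt;code&gt;csv&lt;/code&gt; module (a simplified stand-in for the pandas version above; the column name mirrors the script):&lt;/p&gt;

```python
import csv

def pending_rows(path):
    # Load all rows and keep only the ones not yet marked "sent",
    # so a rerun picks up exactly where the last run stopped.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [r for r in rows if r.get("sent_status") != "sent"]
```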

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Validating your sending domain helps prevent email delivery issues.

Use try/except to catch failed deliveries and record them.

Avoid embedding base64 images—host your assets publicly.

Always add delays to avoid getting rate-limited or blacklisted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
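&lt;p&gt;The last two lessons can be combined into a small helper that retries a delivery with a pause between attempts (the &lt;code&gt;send_fn&lt;/code&gt; argument is a stand-in for the Brevo call above):&lt;/p&gt;

```python
import time

def send_with_retry(send_fn, recipient, attempts=3, delay=1.0):
    # Try the send a few times; pause between attempts so we stay
    # under the provider's rate limits, and record the final failure.
    for attempt in range(1, attempts + 1):
        try:
            send_fn(recipient)
            return "sent"
        except Exception as e:
            if attempt == attempts:
                return f"failed: {e}"
            time.sleep(delay)
```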

&lt;p&gt;💡 Final Thoughts&lt;/p&gt;

&lt;p&gt;This project taught me a lot about email infrastructure and how easy it is to build something practical with just a few lines of Python. If you’re looking to build a newsletter or outreach tool without a full-blown ESP, this is a solid starting point.&lt;/p&gt;

&lt;p&gt;Feel free to fork it and tweak to your needs. Happy hacking!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why I Switched from WSL to Ubuntu for DevOps: A Personal Journey</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Thu, 12 Jun 2025 12:10:40 +0000</pubDate>
      <link>https://dev.to/david_oyewole/why-i-switched-from-wsl-to-ubuntu-for-devops-a-personal-journey-44dp</link>
      <guid>https://dev.to/david_oyewole/why-i-switched-from-wsl-to-ubuntu-for-devops-a-personal-journey-44dp</guid>
      <description>&lt;h3&gt;
  
  
  👋 Introduction
&lt;/h3&gt;

&lt;p&gt;For a long time, I ran my development and DevOps tooling inside &lt;strong&gt;Windows Subsystem for Linux (WSL)&lt;/strong&gt;. It felt like the best of both worlds: I could run Linux commands and tools while still using my Windows desktop apps. I even configured Docker, Minikube, MicroK8s, and a bunch of other tools to work inside WSL. But as my DevOps workflow grew more complex, especially around Kubernetes, CI/CD, and observability, I kept running into annoying limitations.&lt;/p&gt;

&lt;p&gt;So I made a decision: I wiped the whole Windows OS, WSL included, and installed &lt;strong&gt;Ubuntu&lt;/strong&gt; as my main OS. The result? Everything just works better. Faster. Cleaner. More reliable.&lt;/p&gt;

&lt;p&gt;This post walks you through why I switched, what changed, and why I’m never looking back.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧠 WSL vs Native Ubuntu: A DevOps-Focused Comparison
&lt;/h3&gt;

&lt;p&gt;Let’s get straight to the core differences I noticed after making the switch:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Ubuntu (Native)&lt;/th&gt;
&lt;th&gt;WSL (Windows)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full, native networking&lt;/td&gt;
&lt;td&gt;Isolated VM networking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Systemd Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Full &lt;code&gt;systemctl&lt;/code&gt; and timers&lt;/td&gt;
&lt;td&gt;❌ WSL1 lacks it, WSL2 partial&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docker Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native Docker daemon&lt;/td&gt;
&lt;td&gt;Needs Docker Desktop&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runs smoothly (MicroK8s/Minikube)&lt;/td&gt;
&lt;td&gt;Requires tunneling &amp;amp; workarounds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GUI Apps&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GNOME, Postman, VS Code, etc.&lt;/td&gt;
&lt;td&gt;WSLg is decent, but glitchy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File Access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native ext4 filesystem&lt;/td&gt;
&lt;td&gt;Slower across &lt;code&gt;/mnt/c/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hardware Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Direct access to USB, GPU, etc.&lt;/td&gt;
&lt;td&gt;Limited or indirect&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;WSL is amazing for lightweight scripting and simple setups. But when you need production-grade workflows or want to simulate real infrastructure, it becomes a struggle.&lt;/p&gt;




&lt;h3&gt;
  
  
  🚀 The Moment That Blew My Mind
&lt;/h3&gt;

&lt;p&gt;I deployed a sample app on Kubernetes with a &lt;code&gt;NodePort&lt;/code&gt; service, and without doing &lt;strong&gt;any port forwarding&lt;/strong&gt;, I opened my browser and hit &lt;code&gt;http://localhost:30000&lt;/code&gt;. It worked.&lt;/p&gt;

&lt;p&gt;Coming from WSL, where I had to constantly run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/my-service 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to get anything into the browser, this felt like &lt;strong&gt;pure magic&lt;/strong&gt;. No hacks. No tunnels. No separate IP guessing.&lt;/p&gt;

&lt;p&gt;That was the first moment I realized just how much friction I had been tolerating under WSL.&lt;/p&gt;
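&lt;p&gt;For reference, the kind of Service behind that moment can be sketched as a minimal manifest (names and ports here are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: NodePort
  selector:
    app: sample-app
  ports:
    - port: 80          # cluster-internal port
      targetPort: 5000  # container port
      nodePort: 30000   # reachable at http://localhost:30000 on a native install
```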




&lt;h3&gt;
  
  
  🛠️ What I Love About Ubuntu as a DevOps Engineer
&lt;/h3&gt;

&lt;p&gt;Since making the switch, here’s how my workflow has improved:&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Full Docker &amp;amp; Kubernetes Support
&lt;/h4&gt;

&lt;p&gt;I now run &lt;strong&gt;MicroK8s&lt;/strong&gt; natively with all the bells and whistles: Helm, Prometheus, Grafana, Ingress, and more. No need for Docker Desktop or weird socket tricks; everything is integrated and lightweight.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ systemd for Automation
&lt;/h4&gt;

&lt;p&gt;I use &lt;code&gt;systemd&lt;/code&gt; to run background agents like Jenkins, Prometheus Node Exporter, and even test CI runners. I can enable them with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;my-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Something I always missed in WSL.&lt;/p&gt;
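&lt;p&gt;A minimal unit for such an agent might look like this (paths and names are illustrative), saved as &lt;code&gt;/etc/systemd/system/my-service.service&lt;/code&gt;:&lt;/p&gt;

```ini
[Unit]
Description=Example background agent
After=network.target

[Service]
ExecStart=/usr/local/bin/my-agent
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;After saving it, run &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt; and &lt;code&gt;sudo systemctl enable --now my-service&lt;/code&gt; to start it now and at every boot.&lt;/p&gt;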

&lt;h4&gt;
  
  
  ✅ Real Networking
&lt;/h4&gt;

&lt;p&gt;I use &lt;code&gt;nmap&lt;/code&gt;, &lt;code&gt;tcpdump&lt;/code&gt;, &lt;code&gt;iptables&lt;/code&gt;, and even set up firewall rules with &lt;code&gt;ufw&lt;/code&gt;. These tools either don't work or require complex setup in WSL.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Better Resource Utilization
&lt;/h4&gt;

&lt;p&gt;I can control my containers using real cgroups, assign memory limits, and monitor system metrics accurately, with no Docker Desktop abstractions in between.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Native File I/O Performance
&lt;/h4&gt;

&lt;p&gt;Building containers and running tools like Ansible or Terraform is significantly faster. No slow-downs when writing files.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ GUI Just Works
&lt;/h4&gt;

&lt;p&gt;I’ve installed &lt;strong&gt;Postman&lt;/strong&gt;, &lt;strong&gt;DBeaver&lt;/strong&gt;, and &lt;strong&gt;VS Code&lt;/strong&gt;, and everything runs like a native app, because it is one. WSLg is improving, but it still feels a bit clunky.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧪 Real-World Use Case
&lt;/h3&gt;

&lt;p&gt;I recently built a full-stack app with Flask, React, PostgreSQL, and NGINX, all containerized and deployed on MicroK8s. In Ubuntu:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I deployed with Helm&lt;/li&gt;
&lt;li&gt;Monitored with Prometheus + Grafana&lt;/li&gt;
&lt;li&gt;Logged with the ELK stack&lt;/li&gt;
&lt;li&gt;Used VS Code + Terminal in split screen&lt;/li&gt;
&lt;li&gt;Browsed the app without needing a single &lt;code&gt;port-forward&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This would have been possible in WSL, but not without hours of configuration, tunneling, and frustration.&lt;/p&gt;




&lt;h3&gt;
  
  
  🤔 Should You Switch?
&lt;/h3&gt;

&lt;p&gt;If you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Work heavily with Docker or Kubernetes&lt;/li&gt;
&lt;li&gt;Build or test cloud-native apps&lt;/li&gt;
&lt;li&gt;Want to use tools like Ansible, Terraform, Helm&lt;/li&gt;
&lt;li&gt;Need fast builds, native services, or background agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then &lt;strong&gt;yes&lt;/strong&gt;, switching to Ubuntu is absolutely worth it.&lt;/p&gt;

&lt;p&gt;But if you're doing light scripting, occasional CLI work, or you rely heavily on Windows apps, WSL might still be the better fit for you.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧵 Final Thoughts
&lt;/h3&gt;

&lt;p&gt;WSL was a great bridge between worlds, but for serious DevOps engineering, native Ubuntu is where the real power lies. I’ve spent less time fighting the environment and more time shipping features, building infrastructure, and learning tools that behave exactly how they do in production.&lt;/p&gt;

&lt;p&gt;If you've been feeling the same friction I was, consider making the leap.&lt;/p&gt;

&lt;p&gt;Let me know your experience: have you switched? Still thinking about it? Let’s chat in the comments! 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me for more DevOps tips, real-world experiences, and Kubernetes content.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Flask Deployment with GitHub Actions and Docker</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Tue, 01 Apr 2025 09:18:11 +0000</pubDate>
      <link>https://dev.to/david_oyewole/automating-flask-deployment-with-github-actions-and-docker-4j1a</link>
      <guid>https://dev.to/david_oyewole/automating-flask-deployment-with-github-actions-and-docker-4j1a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Continuous Integration and Continuous Deployment (&lt;strong&gt;CI/CD&lt;/strong&gt;) are essential for modern software development. In this tutorial, I will walk you through setting up a &lt;strong&gt;CI/CD pipeline&lt;/strong&gt; for a Flask application using &lt;strong&gt;GitHub Actions&lt;/strong&gt; and &lt;strong&gt;Docker&lt;/strong&gt;. By the end of this guide, you will have an automated workflow that builds, tests, and pushes a Docker image to Docker Hub.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python 3.x&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pip&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repository&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Hub Account&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS EC2 Instance&lt;/strong&gt; with Docker installed
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Access&lt;/strong&gt; to the EC2 instance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Setting Up the Flask Application
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a simple Flask application.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a project directory:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;flask-cicd &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;flask-cicd
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a virtual environment and activate it:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate  &lt;span class="c"&gt;# On macOS/Linux&lt;/span&gt;
venv&lt;span class="se"&gt;\S&lt;/span&gt;cripts&lt;span class="se"&gt;\a&lt;/span&gt;ctivate  &lt;span class="c"&gt;# On Windows&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Flask:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an &lt;code&gt;app.py&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;home&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, Flask CI/CD!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;code&gt;requirements.txt&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
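&lt;p&gt;The CI pipeline described below runs unit tests with pytest, so it helps to have at least one. A minimal &lt;code&gt;test_app.py&lt;/code&gt; using Flask’s built-in test client could look like this (the app is inlined here so the snippet is self-contained; in the real project you would import it with &lt;code&gt;from app import app&lt;/code&gt;):&lt;/p&gt;

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello, Flask CI/CD!"

def test_home():
    # Flask's test client exercises routes without starting a server
    client = app.test_client()
    response = client.get("/")
    assert response.status_code == 200
    assert b"Hello, Flask CI/CD!" in response.data
```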

&lt;h2&gt;
  
  
  Step 2: Dockerizing the Flask Application
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;Dockerfile&lt;/code&gt; in the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use official Python image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.10&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt requirements.txt&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 5000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "app.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test the Docker container locally, build and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; flask-cicd &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 5000:5000 flask-cicd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Setting Up GitHub Actions for CI/CD
&lt;/h2&gt;

&lt;p&gt;Now, we will create a &lt;strong&gt;GitHub Actions workflow&lt;/strong&gt; to automate building and pushing our Docker image to Docker Hub.&lt;/p&gt;

&lt;p&gt;🔄 &lt;strong&gt;CI/CD Pipeline (GitHub Actions)&lt;/strong&gt;&lt;br&gt;
✅ CI — Continuous Integration&lt;br&gt;
GitHub Actions automatically runs the following on each push to main:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone repository&lt;/li&gt;
&lt;li&gt;Set up Python and install dependencies&lt;/li&gt;
&lt;li&gt;Run unit tests&lt;/li&gt;
&lt;li&gt;Build Docker image&lt;/li&gt;
&lt;li&gt;Push image to Docker Hub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🚀 CD — Continuous Deployment to EC2&lt;br&gt;
After a successful Docker push, a second GitHub Actions job connects to your EC2 server via SSH and:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs into Docker (if private image)&lt;/li&gt;
&lt;li&gt;Pulls the latest image from Docker Hub&lt;/li&gt;
&lt;li&gt;Stops and removes the old container&lt;/li&gt;
&lt;li&gt;Runs the updated container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This enables automatic deployment of your latest code to a live server!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Inside your project, create &lt;code&gt;.github/workflows/main.yml&lt;/code&gt; and add the following:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This workflow will install Python dependencies, run tests and lint with a single version of Python&lt;/span&gt;
&lt;span class="c1"&gt;# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python&lt;/span&gt;

&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Python application&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Python &lt;/span&gt;&lt;span class="m"&gt;3.10&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v3&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.10"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;python -m pip install --upgrade pip&lt;/span&gt;
        &lt;span class="s"&gt;pip install flake8 pytest&lt;/span&gt;
        &lt;span class="s"&gt;if [ -f requirements.txt ]; then pip install -r requirements.txt; fi&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lint with flake8&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;# stop the build if there are Python syntax errors or undefined names&lt;/span&gt;
        &lt;span class="s"&gt;flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics&lt;/span&gt;
        &lt;span class="s"&gt;# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide&lt;/span&gt;
        &lt;span class="s"&gt;flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Log in to Docker Hub&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v3&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKER_USERNAME }}&lt;/span&gt;
        &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKER_PASSWORD }}&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Docker Buildx&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v3&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Push Docker Image&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v5&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
        &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;  &lt;span class="c1"&gt;# Ensure this path is correct&lt;/span&gt;
        &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKER_USERNAME }}/practice:latest&lt;/span&gt;

  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy on EC2 via SSH&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appleboy/ssh-action@v1.0.0&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.EC2_HOST }}&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.EC2_USER }}&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.EC2_SSH_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}&lt;/span&gt;
            &lt;span class="s"&gt;docker pull ${{ secrets.DOCKER_USERNAME}}/practice:latest&lt;/span&gt;

            &lt;span class="s"&gt;docker stop flask-app || true&lt;/span&gt;
            &lt;span class="s"&gt;docker rm flask-app || true&lt;/span&gt;

            &lt;span class="s"&gt;docker run -d \&lt;/span&gt;
              &lt;span class="s"&gt;--name flask-app \&lt;/span&gt;
              &lt;span class="s"&gt;-p 5000:5000 \&lt;/span&gt;
              &lt;span class="s"&gt;${{ secrets.DOCKER_USERNAME}}/practice:latest&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once pushed, GitHub Actions will automatically run the workflow. You can check the status under the &lt;strong&gt;Actions&lt;/strong&gt; tab in your repository.&lt;/p&gt;

&lt;p&gt;🔐 &lt;strong&gt;GitHub Secrets Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Remember to add the appropriate secrets to your repository: go to &lt;strong&gt;Settings &amp;gt; Secrets and variables &amp;gt; Actions&lt;/strong&gt; and add the following with the correct values:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Secret Name&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DOCKER_USERNAME&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Docker Hub username&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DOCKER_PASSWORD&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Docker Hub password/token&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;EC2_HOST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Public IP or DNS of your EC2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;EC2_USER&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;SSH username (e.g., &lt;code&gt;ubuntu&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;EC2_SSH_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Private SSH key for EC2 access&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! 🎉 You have successfully set up a &lt;strong&gt;CI/CD pipeline&lt;/strong&gt; for your Flask application using &lt;strong&gt;GitHub Actions and Docker&lt;/strong&gt;. Now, every time you push code, your application will be tested, built, and deployed automatically.&lt;/p&gt;

&lt;p&gt;Feel free to share your thoughts and improvements in the comments below! 🚀&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Windows Subsystem for Linux (WSL)</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Sat, 29 Mar 2025 19:36:17 +0000</pubDate>
      <link>https://dev.to/david_oyewole/windows-subsystem-for-linux-wsl-2ef2</link>
      <guid>https://dev.to/david_oyewole/windows-subsystem-for-linux-wsl-2ef2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In my early days of learning DevOps, one of the biggest challenges I faced was setting up a Linux environment on my laptop. Since I was already comfortable with Windows, I found it difficult to make a complete switch to Linux, mainly because I still needed access to certain Windows applications. &lt;/p&gt;

&lt;p&gt;I initially tried dual-booting, but it felt cumbersome—I wanted both operating systems to work side by side without rebooting. Then I experimented with virtual machines (VMs), which worked but drained my laptop’s battery too quickly.&lt;/p&gt;

&lt;p&gt;During my search for a better solution, I discovered Windows Subsystem for Linux (WSL), which turned out to be the perfect answer to my problem. Since then, I haven’t looked back—WSL allows me to run a Linux environment directly within Windows, seamlessly integrating both systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is WSL?
&lt;/h3&gt;

&lt;p&gt;Windows Subsystem for Linux (WSL) is a powerful compatibility layer that enables users to run a full Linux environment natively on Windows. Unlike traditional virtual machines or dual-boot setups, WSL provides a lightweight and efficient way to access Linux tools and utilities directly from Windows.&lt;/p&gt;

&lt;p&gt;WSL is particularly useful for developers, system administrators, and DevOps engineers who require Linux-based tools while maintaining a Windows workflow. This guide will cover everything you need to know about WSL, from installation to advanced usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use WSL?
&lt;/h2&gt;

&lt;p&gt;WSL offers several advantages over traditional virtualization and dual-boot setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Integration&lt;/strong&gt;: Run Linux commands and applications alongside Windows programs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low Resource Usage&lt;/strong&gt;: No need for additional system resources to run a separate virtual machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native Performance&lt;/strong&gt;: WSL 2 provides full system call compatibility and improved performance with a real Linux kernel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Developer Experience&lt;/strong&gt;: Use Linux-native tools, package managers, and programming languages without leaving Windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Docker Support&lt;/strong&gt;: WSL 2 significantly improves Docker’s performance and integration with Windows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing WSL on Windows
&lt;/h2&gt;

&lt;p&gt;To install WSL on your Windows machine, follow these steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Ensure that your system is running Windows 10 version 2004 or higher (Build 19041 or later), or Windows 11.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Enable WSL
&lt;/h3&gt;

&lt;p&gt;Open PowerShell as Administrator and execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs WSL together with the default Linux distribution (Ubuntu), which can be changed later. A restart may be required to complete the installation.&lt;/p&gt;

&lt;p&gt;If you already have WSL installed, update its components (including the Linux kernel used by WSL 2) with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--update&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Verify Installed Distributions
&lt;/h3&gt;

&lt;p&gt;Check the available and installed distributions with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--list&lt;/span&gt; &lt;span class="nt"&gt;--verbose&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will display:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed Linux distributions&lt;/li&gt;
&lt;li&gt;The WSL version (1 or 2) each distribution is using&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Set WSL 2 as Default
&lt;/h3&gt;

&lt;p&gt;To take advantage of WSL 2’s features, set it as the default version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--set-default-version&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Install Additional Linux Distributions
&lt;/h3&gt;

&lt;p&gt;To install another Linux distribution, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &amp;lt;Distribution Name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;Distribution Name&amp;gt;&lt;/code&gt; with your preferred Linux distribution.&lt;/p&gt;

&lt;p&gt;To see a list of available distributions, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--list&lt;/span&gt; &lt;span class="nt"&gt;--online&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to upgrade a specific distribution to WSL 2, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--set-version&lt;/span&gt; &amp;lt;DistroName&amp;gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;DistroName&amp;gt;&lt;/code&gt; with the name of your installed Linux distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using WSL
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Launching WSL
&lt;/h3&gt;

&lt;p&gt;To start WSL, simply open a terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first time you launch a distribution, you'll need to create a user account and password.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pro Tip: Use Windows Terminal for a Better Experience
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Windows Terminal supports multiple command-line environments (PowerShell, Command Prompt, Azure CLI, Git Bash, etc.).&lt;/li&gt;
&lt;li&gt;It allows you to open multiple tabs and split panes for improved multitasking.&lt;/li&gt;
&lt;li&gt;You can fully customize your terminal with color schemes, fonts, background images, and custom keyboard shortcuts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also launch your Linux distribution directly from the Windows Start menu by typing its name, such as "Ubuntu".&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Linux Commands in Windows
&lt;/h3&gt;

&lt;p&gt;After launching WSL, you can execute Linux commands just like in a standard Linux environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /home
&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/os-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing Linux Packages
&lt;/h3&gt;

&lt;p&gt;Each WSL distribution has its own package manager. For Ubuntu-based distributions, use &lt;code&gt;apt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;git curl vim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Fedora-based distributions, use &lt;code&gt;dnf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install &lt;/span&gt;nano wget
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
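&lt;p&gt;Whichever package manager your distribution uses, you can first check which tools are already present. A small portable sketch using &lt;code&gt;command -v&lt;/code&gt; (the tool names are just examples):&lt;/p&gt;

```shell
# Report which of the example tools are already installed (POSIX shell)
for tool in git curl vim; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool is installed"
  else
    echo "$tool is missing"
  fi
done
```

&lt;p&gt;Only the missing tools then need to be passed to &lt;code&gt;apt install&lt;/code&gt; or &lt;code&gt;dnf install&lt;/code&gt;.&lt;/p&gt;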



&lt;h3&gt;
  
  
  Accessing Windows Files from WSL
&lt;/h3&gt;

&lt;p&gt;WSL allows seamless file sharing between Windows and Linux. To access Windows files from WSL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /mnt/c
&lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This maps the Windows C: drive to &lt;code&gt;/mnt/c&lt;/code&gt;, enabling file access between the two environments.&lt;/p&gt;
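&lt;p&gt;Per-distribution behaviour, including how Windows drives are mounted, can be adjusted in &lt;code&gt;/etc/wsl.conf&lt;/code&gt; inside the distribution. A minimal sketch (the options shown are examples, not required settings):&lt;/p&gt;

```ini
# /etc/wsl.conf — per-distribution settings (example values)
[automount]
root = /mnt/           # where Windows drives are mounted
options = "metadata"   # store Linux permission metadata on Windows files

[interop]
enabled = true             # allow launching Windows executables from WSL
appendWindowsPath = true   # include the Windows PATH inside WSL
```

&lt;p&gt;Restart the distribution (for example with &lt;code&gt;wsl --shutdown&lt;/code&gt;) for the changes to take effect.&lt;/p&gt;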

&lt;h2&gt;
  
  
  WSL 1 vs WSL 2: Key Differences
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;WSL 1&lt;/th&gt;
&lt;th&gt;WSL 2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;System Call Compatibility&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File I/O Performance&lt;/td&gt;
&lt;td&gt;Faster for Windows-to-Linux operations&lt;/td&gt;
&lt;td&gt;Faster for Linux-to-Linux operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses a Linux Kernel&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Runs in a Lightweight VM&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Support&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Fully Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Recommendation:&lt;/strong&gt; Use WSL 2 for better performance and full Linux kernel support, especially if you plan to run Docker or complex Linux workloads.&lt;/p&gt;
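&lt;p&gt;Because WSL 2 runs its Linux kernel inside a lightweight virtual machine, you can cap the resources that VM may consume with a &lt;code&gt;.wslconfig&lt;/code&gt; file in your Windows user profile folder. A minimal sketch (the limits below are example values; tune them to your machine):&lt;/p&gt;

```ini
# %UserProfile%\.wslconfig — global settings for the WSL 2 VM (example values)
[wsl2]
memory=4GB     # cap the RAM available to WSL 2
processors=2   # cap the number of virtual CPUs
swap=2GB       # size of the swap file
```

&lt;p&gt;Run &lt;code&gt;wsl --shutdown&lt;/code&gt; after editing so the new limits apply on the next launch.&lt;/p&gt;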

&lt;h2&gt;
  
  
  Advanced WSL Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using WSL with Docker
&lt;/h3&gt;

&lt;p&gt;WSL 2 greatly enhances Docker’s performance. To integrate Docker with WSL:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Docker Desktop.&lt;/li&gt;
&lt;li&gt;Enable WSL 2 support in Docker settings.&lt;/li&gt;
&lt;li&gt;Run the following command to verify installation:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web Development&lt;/strong&gt;: Run Node.js, Python, and other web frameworks natively in Linux.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps &amp;amp; Cloud Engineering&lt;/strong&gt;: Use WSL for Kubernetes, Terraform, Ansible, and CI/CD workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Penetration Testing&lt;/strong&gt;: Install Kali Linux on WSL for ethical hacking and security research.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Machine Learning &amp;amp; Data Science&lt;/strong&gt;: Run Jupyter notebooks and ML libraries with native Linux support.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue: "WSL Not Recognized"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Ensure the WSL feature is enabled by running the following in an elevated PowerShell (for WSL 2, the Virtual Machine Platform feature must also be enabled), then restart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue: "No Internet Access in WSL"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Shut down WSL and reset the Windows network stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--shutdown&lt;/span&gt;
netsh winsock reset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then restart WSL and check network connectivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue: "Docker Not Working with WSL 2"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Ensure that WSL integration is enabled in Docker Desktop settings under "Resources &amp;gt; WSL Integration."&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;WSL is a powerful tool that bridges Windows and Linux, providing developers and IT professionals with the best of both worlds. By following this guide, you should now have a solid understanding of how to install, configure, and use WSL effectively. Happy coding!&lt;/p&gt;

&lt;h3&gt;
  
  
  Further Reading
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/windows/wsl/" rel="noopener noreferrer"&gt;Official Microsoft WSL Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/microsoft/WSL" rel="noopener noreferrer"&gt;WSL GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have you used WSL? Share your thoughts in the comments!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Mon, 24 Mar 2025 15:05:10 +0000</pubDate>
      <link>https://dev.to/david_oyewole/-146i</link>
      <guid>https://dev.to/david_oyewole/-146i</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/david_oyewole" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1502452%2Fe7e8e5d1-b46e-40f9-a7c2-1f2e63d514cc.jpg" alt="david_oyewole"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/david_oyewole/mastering-kubernetes-persistent-volumes-a-simplified-beginner-guide-5e1l" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Mastering Kubernetes Persistent Volumes: A Simplified Beginner Guide&lt;/h2&gt;
      &lt;h3&gt;David Oyewole ・ Mar 21&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudcomputing&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cloudcomputing</category>
      <category>docker</category>
    </item>
    <item>
      <title>Mastering Kubernetes Persistent Volumes: A Simplified Beginner Guide</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Fri, 21 Mar 2025 12:15:17 +0000</pubDate>
      <link>https://dev.to/david_oyewole/mastering-kubernetes-persistent-volumes-a-simplified-beginner-guide-5e1l</link>
      <guid>https://dev.to/david_oyewole/mastering-kubernetes-persistent-volumes-a-simplified-beginner-guide-5e1l</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a traditional containerized environment, storage is ephemeral—meaning that when a pod is restarted, all its data is lost. This happens because container file systems are temporary and tied to the pod's lifecycle. This is a major challenge for stateful applications like databases and file storage systems that hold important data and information.&lt;/p&gt;

&lt;p&gt;Kubernetes solves this issue by providing &lt;strong&gt;Persistent Volumes (PV)&lt;/strong&gt; and &lt;strong&gt;Persistent Volume Claims (PVC)&lt;/strong&gt;, which keep data intact even when pods restart or move between nodes. In this article, I will guide you through how Persistent Volumes work, how to configure them, and best practices for managing persistent storage in Kubernetes, with a complete practical guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve3l423cho7ai4lz4evd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve3l423cho7ai4lz4evd.jpg" alt="Diagram" width="676" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Persistent Volumes in Kubernetes&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Do We Need Persistent Volumes?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By default, Kubernetes pods use ephemeral storage, which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a pod restarts or gets rescheduled to another node, its data is lost.&lt;/li&gt;
&lt;li&gt;Stateful applications (e.g., databases like MySQL, MongoDB) require persistent storage.&lt;/li&gt;
&lt;li&gt;Persistent Volumes provide a way to separate storage from pods, allowing data to persist beyond the lifecycle of a pod.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is a Persistent Volume (PV) and a Persistent Volume Claim (PVC)?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume (PV):&lt;/strong&gt; A storage resource in the cluster, provisioned by administrators or dynamically created using Storage Classes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume Claim (PVC):&lt;/strong&gt; A request from a pod to use a Persistent Volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;StorageClass:&lt;/strong&gt; Defines how storage is provisioned dynamically, whether by a cloud provider or a local provisioner.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Persistent Volumes Work&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Administrator creates a Persistent Volume (PV)&lt;/strong&gt; with a specified storage size and access mode.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A pod requests storage by creating a Persistent Volume Claim (PVC).&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes binds the PVC to an available PV.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The pod mounts the Persistent Volume and uses it for storage.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Persistent Volume Provisioning: Static vs Dynamic&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Static Provisioning: The cluster administrator manually creates a Persistent Volume (PV) before any pod requests storage.&lt;/p&gt;

&lt;p&gt;Dynamic Provisioning: Kubernetes automatically provisions storage when a pod creates a Persistent Volume Claim (PVC) using a StorageClass.&lt;/p&gt;
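&lt;p&gt;As an illustration of dynamic provisioning, a StorageClass for AWS EBS might look like the sketch below (the name and parameters are example values; on Minikube, a default &lt;code&gt;standard&lt;/code&gt; StorageClass is provided instead):&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                      # hypothetical name
provisioner: kubernetes.io/aws-ebs   # AWS EBS provisioner
parameters:
  type: gp2                          # EBS volume type
reclaimPolicy: Delete                # delete the volume when its PVC is deleted
```

&lt;p&gt;A PVC that references &lt;code&gt;storageClassName: ebs-gp2&lt;/code&gt; would then have a volume provisioned for it automatically.&lt;/p&gt;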

&lt;h2&gt;
  
  
  &lt;strong&gt;Creating a Persistent Volume and PVC (Hands-on Guide)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this practical guide, I will be using MongoDB and Mongo Express (a UI for MongoDB) to demonstrate how to use Persistent Volumes. This example is tested on a Minikube cluster, so ensure your Minikube is running.&lt;/p&gt;

&lt;p&gt;Since we will be using local storage to persist data, we need to create a storage path in Minikube. This storage path will be used to create our Persistent Volume (PV).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ssh
sudo mkdir -p /data/mongodb
sudo chmod 777 /data/mongodb
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the directory and grant the necessary permissions. Now, we can proceed to create the &lt;strong&gt;PV&lt;/strong&gt;, &lt;strong&gt;PVC&lt;/strong&gt;, and deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Create a Persistent Volume (PV)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/data/mongodb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure that the &lt;code&gt;hostPath&lt;/code&gt; matches the directory created in your Minikube setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Create a Persistent Volume Claim (PVC)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure the &lt;code&gt;storageClassName&lt;/code&gt;, &lt;code&gt;accessModes&lt;/code&gt;, and &lt;code&gt;storage&lt;/code&gt; values match those defined in the PV.&lt;/p&gt;

&lt;p&gt;After applying these configurations, Kubernetes will bind the PVC to an available PV, allowing the pod to use persistent storage.&lt;br&gt;
Run &lt;code&gt;kubectl get pv&lt;/code&gt; and &lt;code&gt;kubectl get pvc&lt;/code&gt; after applying the configuration to confirm the binding. The status should look like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagwhsqvxmzdnbu97n7fi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagwhsqvxmzdnbu97n7fi.jpg" alt="Bind" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Deploy MongoDB with Persistent Volume&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For a real-world use case, let’s deploy &lt;strong&gt;MongoDB&lt;/strong&gt; with a Persistent Volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      volumes:
      - name: mongodb-storage
        persistentVolumeClaim:
          claimName: mongodb-pvc
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
          resources:
            limits:
              cpu: "500m"
              memory: "256Mi"
            requests:
              cpu: "250m"
              memory: "128Mi"
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your MongoDB deployment file should look like this, but take note of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;volumes&lt;/code&gt; section, the &lt;code&gt;claimName&lt;/code&gt; must match the name of your PVC.&lt;/li&gt;
&lt;li&gt;In the &lt;code&gt;volumeMounts&lt;/code&gt; section, the &lt;code&gt;mountPath&lt;/code&gt; of &lt;code&gt;/data/db&lt;/code&gt; is MongoDB's default data directory.&lt;/li&gt;
&lt;li&gt;The environment variables (&lt;code&gt;MONGO_INITDB_ROOT_USERNAME&lt;/code&gt; and &lt;code&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/code&gt;) are stored in a Kubernetes Secret for security; I will cover Secrets in detail in my next article.&lt;/li&gt;
&lt;/ul&gt;
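&lt;p&gt;For reference, here is a minimal sketch of what the &lt;code&gt;mongo-secret&lt;/code&gt; and &lt;code&gt;mongo-config&lt;/code&gt; objects referenced above might look like. The credential values are placeholders (&lt;code&gt;admin&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; encoded with &lt;code&gt;echo -n '...' | base64&lt;/code&gt;), not real credentials:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  username: YWRtaW4=       # base64 of "admin" (placeholder)
  password: cGFzc3dvcmQ=   # base64 of "password" (placeholder)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  database_url: mongo-service   # name of the MongoDB ClusterIP service
```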

&lt;p&gt;Now apply the configuration with &lt;code&gt;kubectl apply -f &amp;lt;filename&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Deploy Mongo Express as a UI for MongoDB&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now let's deploy Mongo Express:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongoexpress-deployment
  labels:
    app: mongoexpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongoexpress
  template:
    metadata:
      labels:
        app: mongoexpress
    spec:
      containers:
        - name: mongoexpress
          image: mongo-express
          ports:
            - containerPort: 8081
          resources:
            limits:
              cpu: "500m"
              memory: "256Mi"
            requests:
              cpu: "250m"
              memory: "128Mi"
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: database_url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what your Mongo Express deployment should look like. The environment variables come from a &lt;code&gt;Secret&lt;/code&gt; and a &lt;code&gt;ConfigMap&lt;/code&gt;, as mentioned earlier.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Configure services for communication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For internal and external communication, we need to configure services. MongoDB will use a ClusterIP service because it needs no external access, while Mongo Express will use a NodePort service to expose it externally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
  type: ClusterIP


---
apiVersion: v1
kind: Service
metadata:
  name: mongoexpress-service
spec:
  selector:
    app: mongoexpress
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
  type: NodePort
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;kubectl get svc&lt;/code&gt; to check your services; the output should look like the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36xufdeselwjare7iv75.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36xufdeselwjare7iv75.jpg" alt="Services" width="685" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6: Minikube Service&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Next, we will use Minikube to expose the Mongo Express service, making it accessible from the browser. This will enable us to interact with MongoDB through the UI. Run the command:&lt;br&gt;
&lt;code&gt;minikube service mongoexpress-service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can now access Mongo Express from the given IP address. The web page should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe826mzjc0u8apkybpn2c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe826mzjc0u8apkybpn2c.jpg" alt="minikube service" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To test this, create databases and collections from the Mongo Express UI. Restart the pods, and you will still have the data intact. This demonstrates how Persistent Volumes work.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Common Issues &amp;amp; Troubleshooting&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PVC stuck in "Pending" state?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;kubectl describe pvc mongodb-pvc&lt;/code&gt; to check events.&lt;/li&gt;
&lt;li&gt;Ensure that a matching PV is available.&lt;/li&gt;
&lt;li&gt;Check if the storageClassName matches between the PV and PVC.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Pod cannot write to storage?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the permissions on the host path are correct.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Pod stuck in "ContainerCreating"?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that the Persistent Volume is properly mounted.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;kubectl describe pod &amp;lt;pod-name&amp;gt;&lt;/code&gt; for detailed logs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;PV and PVC not bound?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check &lt;code&gt;kubectl get pv&lt;/code&gt; and &lt;code&gt;kubectl get pvc&lt;/code&gt; for status.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Data loss after pod restart?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that the volume is mounted correctly.&lt;/li&gt;
&lt;li&gt;Check the reclaim policy (&lt;code&gt;Retain&lt;/code&gt;, &lt;code&gt;Delete&lt;/code&gt;, &lt;code&gt;Recycle&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practices for Using Persistent Volumes&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;Storage Classes&lt;/strong&gt; for dynamic provisioning when working with cloud storage providers.&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;right access mode&lt;/strong&gt; (ReadWriteOnce, ReadWriteMany) based on the application's needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set the appropriate reclaim policy&lt;/strong&gt; (&lt;code&gt;Retain&lt;/code&gt;, &lt;code&gt;Delete&lt;/code&gt;, &lt;code&gt;Recycle&lt;/code&gt;) depending on data retention needs.&lt;/li&gt;
&lt;li&gt;Monitor storage usage using &lt;strong&gt;Kubernetes metrics and alerts&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Persistent Volumes are essential for running stateful applications in Kubernetes. By using PV, PVC, and Storage Classes, you can ensure that your application data persists beyond pod restarts and rescheduling.&lt;/p&gt;

&lt;p&gt;Now that you understand how Persistent Volumes work, try deploying them in your own Kubernetes cluster and see how they enhance data persistence! 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cloudcomputing</category>
      <category>docker</category>
    </item>
    <item>
      <title>How To Set Up An EC2 Instance on AWS</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Sat, 18 May 2024 15:28:48 +0000</pubDate>
      <link>https://dev.to/david_oyewole/how-to-set-up-an-ec2-instance-on-aws-k2c</link>
      <guid>https://dev.to/david_oyewole/how-to-set-up-an-ec2-instance-on-aws-k2c</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;WHAT IS EC2&lt;/strong&gt;&lt;br&gt;
Amazon Elastic Compute Cloud (Amazon EC2) is a core service within the Amazon Web Services (AWS) suite of products. It functions as a scalable and highly secure virtual server, enabling users to host applications without upfront hardware investments. EC2 is convenient to operate, as instances can be launched and managed from the AWS dashboard. The service encompasses a wide array of computing capabilities and grants users the flexibility to customize specifications such as CPU, RAM, storage, and more. EC2 significantly streamlines the development process by facilitating easy scalability and seamless integration with other services, making it a highly valuable tool for developers and businesses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WHY USE EC2&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Easy to scale up and down.&lt;/li&gt;
&lt;li&gt;Remote Access to your assets from anywhere in the world.&lt;/li&gt;
&lt;li&gt;Very Secure.&lt;/li&gt;
&lt;li&gt;Pay-as-you-go pricing, i.e. you only pay for what you use.&lt;/li&gt;
&lt;li&gt;Easy to set up as no hardware is required.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  SETTING UP AN EC2 INSTANCE
&lt;/h2&gt;

&lt;p&gt;Let's take a step-by-step look at how to set up an EC2 instance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Log in to your AWS account or create an account &lt;a href="https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome%3FhashArgs%3D%2523%26isauthcode%3Dtrue%26nc2%3Dh_ct%26src%3Dheader-signin%26state%3DhashArgsFromTB_eu-north-1_9fb81d06e959ff9c&amp;amp;client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fcanvas&amp;amp;forceMobileApp=0&amp;amp;code_challenge=snfNMqkvUZ2G0-vL_jcNR2eA2NLzd-E-IDeErxM_Xlk&amp;amp;code_challenge_method=SHA-256" rel="noopener noreferrer"&gt;Here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Click on Services at the top left corner, or click EC2 directly as shown in the image below. If you click Services, you will need to click EC2 on the next screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8as6bqqapvolrf7x51l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8as6bqqapvolrf7x51l.jpg" alt="AWS Console" width="672" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Click the &lt;em&gt;Launch instance&lt;/em&gt; button as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqrmzl36bkwu2sgq23hn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqrmzl36bkwu2sgq23hn.jpg" alt="AWS Console" width="678" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Configure the instance; examples are given below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name the instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdcg2ud2pustis5dzp82.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdcg2ud2pustis5dzp82.jpg" alt="AWS Console" width="760" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the operating system and the AMI (Amazon Machine Image) you want. Take note of, and take advantage of, the AMIs eligible for the free tier if your account is less than a year old. Also select the architecture you want.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesfepm5bfl0ksw69861.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesfepm5bfl0ksw69861.jpg" alt="AWS Console" width="763" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the instance type you want, again taking note of the free-tier-eligible options.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9joif63jdvplx15n6cu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9joif63jdvplx15n6cu.jpg" alt="AWS Console" width="785" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new key pair, or select an existing one if you already have one. The key pair lets you connect securely to the instance over SSH. Click the &lt;em&gt;Create new key pair&lt;/em&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia3azqae4fl00ycw5gyj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia3azqae4fl00ycw5gyj.jpg" alt="AWS Console" width="798" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input a name, select the key type and key format, then click the &lt;em&gt;Create key pair&lt;/em&gt; button shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqs5l1d4r8t35mo784dy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqs5l1d4r8t35mo784dy.jpg" alt="AWS Console" width="640" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure the network settings: select an existing security group, or create a new one and specify its rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegwjf74q87plh0hnsy1x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegwjf74q87plh0hnsy1x.jpg" alt="AWS Console" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure the storage you want, or keep the default.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2w6pxzwjnd6fimhzgzu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2w6pxzwjnd6fimhzgzu.jpg" alt="AWS Console" width="789" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;em&gt;Launch Instance&lt;/em&gt; button at the bottom left corner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpck7u6au32h1gheongw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpck7u6au32h1gheongw.jpg" alt="AWS Console" width="403" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; The next screen will take a moment and then show a success message. Click the instance ID in the message to see the state of your instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrtxtnb32bl2re4sxcn2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrtxtnb32bl2re4sxcn2.jpg" alt="AWS Console" width="764" height="215"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; Wait a few moments for the instance to initialize, using the reload button shown below to refresh. When the status check turns green, your instance is up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbavnc4panpw52x4q7jk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbavnc4panpw52x4q7jk.jpg" alt="AWS Console" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Click the instance ID on the screen above; this takes you to the instance summary shown below. Click the &lt;em&gt;Connect&lt;/em&gt; button to open a terminal session in the browser, or connect with any SSH client using the key pair we created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c7qsumnryd8gm2y3bh5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c7qsumnryd8gm2y3bh5.jpg" alt="AWS Console" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;
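&lt;p&gt;For readers who prefer the terminal, the console flow above can be approximated with the AWS CLI. This is only a sketch, not part of the walkthrough: the AMI ID, key pair name, security group ID, and instance ID below are placeholders you would replace with your own values, and the commands require configured AWS credentials.&lt;/p&gt;

```shell
# Sketch of the console steps using the AWS CLI v2. All IDs are placeholders.

# Create a key pair and save the private key locally
aws ec2 create-key-pair \
    --key-name my-key-pair \
    --query 'KeyMaterial' --output text > my-key-pair.pem
chmod 400 my-key-pair.pem

# Launch a free-tier-eligible instance with a Name tag
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-instance}]'

# Wait for status checks to pass, then look up the public IP to SSH in
aws ec2 wait instance-status-ok --instance-ids i-xxxxxxxx
aws ec2 describe-instances --instance-ids i-xxxxxxxx \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
```

&lt;p&gt;Since these commands create billable resources, run them only after confirming the instance type is free-tier eligible for your account.&lt;/p&gt;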

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;This article provides a step-by-step guide that simplifies launching an EC2 instance. By following the instructions, you will not only be able to launch an instance but also configure it to your specific needs and requirements.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Automating LAMP Stack Deployment Using a Bash Script on an Ubuntu Server</title>
      <dc:creator>David Oyewole</dc:creator>
      <pubDate>Fri, 17 May 2024 06:12:18 +0000</pubDate>
      <link>https://dev.to/david_oyewole/automating-lamp-stack-deployment-using-a-bash-script-on-an-ubuntu-server-3ko7</link>
      <guid>https://dev.to/david_oyewole/automating-lamp-stack-deployment-using-a-bash-script-on-an-ubuntu-server-3ko7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Laravel is a popular free and open-source PHP-based web framework used for building high-end web applications. It is renowned for its expressive and elegant syntax.&lt;/p&gt;

&lt;p&gt;In this article, we will deploy a Laravel app on a LAMP stack. LAMP stands for Linux, Apache, MySQL, and PHP. We will automate this deployment with a Bash script. This will not only speed up the deployment but also reduce the errors that might occur in manual deployment, while also ensuring consistency in the deployment process. Let’s go through the process step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Basic knowledge of Linux and terminal commands&lt;/li&gt;
&lt;li&gt;An accessible Ubuntu server instance.&lt;/li&gt;
&lt;li&gt;A Laravel application in a GitHub repository.&lt;/li&gt;
&lt;li&gt;SSH Access into the Ubuntu server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Laravel LAMP Automation Process&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Create a bash script file; I’ll name mine “lamp.sh”.&lt;br&gt;
&lt;code&gt;touch lamp.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Give the file executable permission.&lt;br&gt;
&lt;code&gt;chmod +x lamp.sh&lt;/code&gt;&lt;br&gt;
I will be using vim as my editor; you can use nano or any other editor of your choice to edit the script.&lt;br&gt;
&lt;code&gt;vim lamp.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Now we can start writing our bash script. We will divide our tasks into functions, which is good for modularity and reusability. First, we declare the shebang:&lt;br&gt;
&lt;code&gt;#!/bin/bash&lt;/code&gt;&lt;/p&gt;
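&lt;p&gt;A preamble often added right after the shebang (not part of the original script) makes the script fail fast instead of continuing after a broken step:&lt;/p&gt;

```shell
#!/bin/bash
# Optional hardening: exit on the first failing command (-e), treat unset
# variables as errors (-u), and fail a pipeline if any command in it fails.
set -euo pipefail

echo "strict mode enabled"
```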

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Let’s write a function to update the package index, and a function to install each component of the LAMP stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update_repository() {
    sudo apt update
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install Apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_apache() {
    sudo apt -y install apache2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install MySQL&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_mysql() {
    sudo apt -y install mysql-server mysql-client
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install PHP and the other PHP extensions required for the deployment. Note that I installed PHP 8.2, so all the extensions must be of the same version. I will also install zip and unzip, which are not PHP extensions but will be needed later by Composer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_php() {
    sudo apt install software-properties-common --yes
    sudo add-apt-repository -y ppa:ondrej/php
    sudo apt update
    sudo apt -y install php8.2 php8.2-curl php8.2-dom php8.2-mbstring php8.2-xml php8.2-mysql zip unzip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Enable URL rewriting and Restart Apache&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;enable_url_rewriting() {
    sudo a2enmod rewrite
    sudo systemctl restart apache2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Write a function to install and set up Composer; note that we will do this in the /usr/bin directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_composer() {
    cd /usr/bin
    curl -sS https://getcomposer.org/installer | sudo php -q
    if [ ! -f "composer" ]; then
        sudo mv composer.phar composer
    fi
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Write a function to clone the Laravel app from GitHub; we will use the official repository at https://github.com/laravel/laravel. Note that this function also changes the ownership of the /var/www directory to the current user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clone_laravel_repo() {
    sudo chown -R $USER:$USER /var/www
    cd /var/www
    if [ ! -d "laravel" ]; then
       git clone https://github.com/laravel/laravel.git
    fi
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Now we will install the project’s dependencies with Composer in the laravel directory we just cloned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_composer_in_project() {
    cd /var/www/laravel
    composer update --no-interaction
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Write a function to configure the .env file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build_env_file() {
    cd /var/www/laravel
    if [ ! -f ".env" ]; then
        cp .env.example .env
    fi
    sudo php artisan key:generate
    sudo chown -R www-data storage
    sudo chown -R www-data bootstrap/cache

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; Write a function to create a new virtual host and use it as the default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;create_apache_config() {
    sudo bash -c 'cat &amp;gt; /etc/apache2/sites-available/laravel.conf &amp;lt;&amp;lt;EOF
&amp;lt;VirtualHost *:80&amp;gt;
    ServerAdmin webmaster@localhost
    ServerName localhost
    ServerAlias localhost
    DocumentRoot /var/www/laravel/public

    &amp;lt;Directory /var/www/laravel/public&amp;gt;
        Options -Indexes +FollowSymLinks
        AllowOverride All
        Require all granted
    &amp;lt;/Directory&amp;gt;

    ErrorLog ${APACHE_LOG_DIR}/laravel-error.log
    CustomLog ${APACHE_LOG_DIR}/laravel-access.log combined
&amp;lt;/VirtualHost&amp;gt;
EOF'

  cd ~
    sudo a2dissite 000-default.conf
    sudo a2ensite laravel.conf
    sudo systemctl restart apache2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 11:&lt;/strong&gt; Now we can write a function to configure MySQL, i.e. create a database and a user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;create_database_and_user() {
    sudo systemctl start mysql
    sudo mysql -uroot -e "CREATE DATABASE IF NOT EXISTS laravel;"
    sudo mysql -uroot -e "CREATE USER IF NOT EXISTS 'vagrant'@'localhost' IDENTIFIED BY '1805';"
    sudo mysql -uroot -e "GRANT ALL PRIVILEGES ON laravel.* TO 'vagrant'@'localhost';"

    cd /var/www/laravel
    grep -q '^DB_CONNECTION=' .env &amp;amp;&amp;amp; sed -i 's/^DB_CONNECTION=.*/DB_CONNECTION=mysql/' .env || echo "DB_CONNECTION=mysql" &amp;gt;&amp;gt; .env
    grep -q '^DB_HOST=' .env &amp;amp;&amp;amp; sed -i 's/^DB_HOST=.*/DB_HOST=localhost/' .env || echo "DB_HOST=localhost" &amp;gt;&amp;gt; .env
    grep -q '^DB_PORT=' .env &amp;amp;&amp;amp; sed -i 's/^DB_PORT=.*/DB_PORT=3306/' .env || echo "DB_PORT=3306" &amp;gt;&amp;gt; .env
    grep -q '^DB_DATABASE=' .env &amp;amp;&amp;amp; sed -i 's/^DB_DATABASE=.*/DB_DATABASE=laravel/' .env || echo "DB_DATABASE=laravel" &amp;gt;&amp;gt; .env
    grep -q '^DB_USERNAME=' .env &amp;amp;&amp;amp; sed -i 's/^DB_USERNAME=.*/DB_USERNAME=vagrant/' .env || echo "DB_USERNAME=vagrant" &amp;gt;&amp;gt; .env
    grep -q '^DB_PASSWORD=' .env &amp;amp;&amp;amp; sed -i 's/^DB_PASSWORD=.*/DB_PASSWORD=1805/' .env || echo "DB_PASSWORD=1805" &amp;gt;&amp;gt; .env

    sudo php artisan storage:link
    sudo php artisan migrate --force
    sudo php artisan db:seed --force
    sudo systemctl restart apache2

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; My Database name is &lt;em&gt;laravel&lt;/em&gt;, my Username is &lt;em&gt;vagrant&lt;/em&gt; and my Password is &lt;em&gt;1805&lt;/em&gt;, you can use values of your choice.&lt;/p&gt;
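&lt;p&gt;The grep-and-echo pattern used in &lt;em&gt;create_database_and_user&lt;/em&gt; can be factored into a small helper so that each key is updated in place if present and appended otherwise. This is a sketch; &lt;em&gt;set_env_var&lt;/em&gt; is a hypothetical name, not part of the original script.&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical helper: set KEY=VALUE in a .env-style file, replacing an
# existing KEY line if present, appending the line otherwise.
set_env_var() {
    key="$1"; value="$2"; file="$3"
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=${value}|" "$file"
    else
        echo "${key}=${value}" >> "$file"
    fi
}

# Demo against a temporary file
env_file=$(mktemp)
echo "DB_CONNECTION=sqlite" > "$env_file"
set_env_var DB_CONNECTION mysql "$env_file"   # replaces the existing line
set_env_var DB_DATABASE laravel "$env_file"   # appends a new line
cat "$env_file"   # prints DB_CONNECTION=mysql then DB_DATABASE=laravel
rm -f "$env_file"
```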

&lt;p&gt;&lt;strong&gt;Step 12:&lt;/strong&gt; Finally, we call all our functions, since functions don’t execute until they are called.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update_repository
install_apache
install_mysql
install_php
enable_url_rewriting
install_composer
clone_laravel_repo
install_composer_in_project
build_env_file
create_apache_config
create_database_and_user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
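&lt;p&gt;The call sequence above can also be wrapped in a small runner that announces each step before executing it, which makes failures easier to locate in the output. This is a sketch, not part of the original script; the stub functions exist only for demonstration.&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical runner: print a marker, then invoke the named function.
run_step() {
    echo "==> $1"
    "$1"
}

# Stand-in stubs; in the real script you would list update_repository,
# install_apache, and the other functions instead.
demo_update()  { echo "apt update (stub)"; }
demo_install() { echo "apache install (stub)"; }

for step in demo_update demo_install; do
    run_step "$step"
done
```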



&lt;ul&gt;
&lt;li&gt;Save the script and exit&lt;/li&gt;
&lt;li&gt;Execute the script
&lt;code&gt;./lamp.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Load your server’s IP address or your domain in your browser, and you should see the output below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m34q5rl0tjmv4dztqvz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m34q5rl0tjmv4dztqvz.jpg" alt="Output of deploying a laravel project" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By following the steps in this guide, you can streamline the deployment of the LAMP (Linux, Apache, MySQL, PHP) stack with a bash script. While the process can be carried out manually, automation is faster, more consistent, and less error-prone. Furthermore, the script can be reused for future deployments, saving time and effort.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ubuntu</category>
      <category>bash</category>
      <category>laravel</category>
    </item>
  </channel>
</rss>
