<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Akingbade Omosebi</title>
    <description>The latest articles on DEV Community by Akingbade Omosebi (@akingbade_omosebi).</description>
    <link>https://dev.to/akingbade_omosebi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3091358%2Fd8f30793-53c1-4cea-8411-cb93ac1c6fcd.jpg</url>
      <title>DEV Community: Akingbade Omosebi</title>
      <link>https://dev.to/akingbade_omosebi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akingbade_omosebi"/>
    <language>en</language>
    <item>
      <title>Opsfolio - From Interview Task to Production: Building a Security-First DevSecOps Platform</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Tue, 25 Nov 2025 13:09:48 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/opsfolio-from-interview-task-to-production-building-a-security-first-devsecops-platform-2i9g</link>
      <guid>https://dev.to/akingbade_omosebi/opsfolio-from-interview-task-to-production-building-a-security-first-devsecops-platform-2i9g</guid>
      <description>&lt;h2&gt;
  
  
  From Interview Task to Production: Building a Security-First DevSecOps Platform
&lt;/h2&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Assignment:&lt;/strong&gt; Deploy a simple app to local Kubernetes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Built:&lt;/strong&gt; A production-grade DevSecOps platform with 6-layer security scanning, FinOps cost tracking, GitOps automation, and complete observability&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqgir6a840l9d0vsvf4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqgir6a840l9d0vsvf4q.png" alt=" " width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; "Opsfolio" - A hands-on demonstration of how I approach real-world infrastructure challenges&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://github.com/AkingbadeOmosebi/Opsfolio-Interview-App" rel="noopener noreferrer"&gt;View the complete repository&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Challenge&lt;/li&gt;
&lt;li&gt;The Approach&lt;/li&gt;
&lt;li&gt;Security Architecture&lt;/li&gt;
&lt;li&gt;FinOps: Cost Intelligence&lt;/li&gt;
&lt;li&gt;Automation &amp;amp; GitOps&lt;/li&gt;
&lt;li&gt;Technical Implementation&lt;/li&gt;
&lt;li&gt;Results &amp;amp; Metrics&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;The interview assignment was straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Set up a local Kubernetes cluster (kind/minikube/k3s)&lt;/li&gt;
&lt;li&gt;✅ Create a Dockerfile&lt;/li&gt;
&lt;li&gt;✅ Deploy an application&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Bonus:&lt;/strong&gt; IaC, GitOps, semantic versioning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple enough. But I asked myself a different question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"What would this look like if I built it for production?"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That question changed everything.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Approach
&lt;/h2&gt;

&lt;p&gt;Instead of building the minimum viable solution, I treated this like a real-world production system with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security-first mindset&lt;/strong&gt;: Multiple scanning layers, zero static credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost awareness&lt;/strong&gt;: FinOps integration for visibility before deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full automation&lt;/strong&gt;: From commit to production with zero manual steps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: Complete monitoring and alerting stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Enterprise-grade docs for every component&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Security Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🛡️ Multi-Layer Defense Strategy
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Layer 1: CI/CD Security Pipeline (6 Scanners)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. GitLeaks&lt;/strong&gt; - Secret Detection&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitLeaks Scan&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gitleaks/gitleaks-action@v2&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scans entire Git history for exposed secrets (API keys, passwords, tokens).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. SonarCloud&lt;/strong&gt; - Static Application Security Testing (SAST)&lt;/p&gt;

&lt;p&gt;Achieved: &lt;strong&gt;A Security Rating&lt;/strong&gt;&lt;br&gt;
Detects code vulnerabilities, security hotspots, code smells&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Snyk Open Source&lt;/strong&gt; - Software Composition Analysis (SCA)&lt;/p&gt;

&lt;p&gt;Scans dependencies for known vulnerabilities&lt;br&gt;
Severity threshold: CRITICAL&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Snyk Code - SAST&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Static code analysis for security issues&lt;br&gt;
Severity threshold: HIGH&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Trivy - Container Security&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Trivy scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'interview-app:latest'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'  # Blocks pipeline on findings

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shift-left approach: Scans images BEFORE pushing to registry&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. TFSec - Infrastructure as Code Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scans Terraform for misconfigurations&lt;br&gt;
Posts findings directly to PRs&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. MegaLinter - Code Quality &amp;amp; Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-language linting&lt;br&gt;
Auto-fixes via pull requests&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 2: Container Hardening
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;securityContext:
  ## Pod-level security
  runAsNonRoot: true
  runAsUser: 101
  fsGroup: 101

containers:
  - name: interview-app
    securityContext:
      # Container-level security
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL  # Drop all Linux capabilities

    resources:
      # DoS prevention
      limits:
        memory: "512Mi"
        cpu: "500m"
      requests:
        memory: "128Mi"
        cpu: "100m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dockerfile Security:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal Alpine base
FROM nginx:1.29.3-alpine

# OS updates
RUN apk update &amp;amp;&amp;amp; apk upgrade

# Non-root user
USER 101

# Non-privileged port
EXPOSE 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Layer 3: Network Security
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;args:
  - "http"
  - "interview-app-service:8080"
  - "--authtoken"
  - "$(NGROK_AUTH_TOKEN)"
  - "--auth"
  - "myuser:mypassword"        # Basic auth
  - "--allow-cidr"
  - "192.168.0.0/16"            # IP allowlist
  - "--deny-cidr"
  - "5.142.0.0/16"              # IP denylist
  - "--rate-limit"
  - "20:60s"                     # DDoS protection

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ TLS encryption (Ngrok)&lt;/li&gt;
&lt;li&gt;✅ HTTP Basic Authentication&lt;/li&gt;
&lt;li&gt;✅ IP-based access control (CIDR filtering)&lt;/li&gt;
&lt;li&gt;✅ Rate limiting (20 requests/60 seconds)&lt;/li&gt;
&lt;li&gt;✅ Traffic inspection UI&lt;/li&gt;
&lt;/ul&gt;
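&lt;p&gt;The CIDR allow/deny behavior above can be sketched with Python's standard ipaddress module. This is a simplified model of the filtering Ngrok performs, not its actual implementation:&lt;/p&gt;

```python
import ipaddress

# Mirrors the --allow-cidr / --deny-cidr flags from the Ngrok args above
ALLOW = [ipaddress.ip_network("192.168.0.0/16")]
DENY = [ipaddress.ip_network("5.142.0.0/16")]

def is_allowed(client_ip: str) -> bool:
    """Deny rules win first; otherwise the IP must match an allow rule."""
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in DENY):
        return False
    return any(ip in net for net in ALLOW)

print(is_allowed("192.168.1.50"))  # True: inside the allowlist
print(is_allowed("5.142.10.1"))    # False: explicitly denied
print(is_allowed("8.8.8.8"))       # False: not in any allowed range
```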

&lt;h3&gt;
  
  
  Layer 4: Secrets Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Zero Static Credentials Strategy&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AWS OIDC Authentication&lt;/strong&gt; (No long-lived keys)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
    aws-region: us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Bitnami SealedSecrets&lt;/strong&gt; (Encrypted at rest)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: grafana-admin-secret
  namespace: monitoring
spec:
  encryptedData:
    admin-user: AgBxgB9cmAMxkypRMT5b5N7T...
    admin-password: AgB6DRmxAXK6Ot1c9Pn7XnZr...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Cloud&lt;/strong&gt; (Encrypted State Backend)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Results&lt;/strong&gt;: 100% encrypted secrets, zero plaintext anywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  FinOps: Cost Intelligence
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;💰 Infracost Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem:&lt;/strong&gt; Teams deploy infrastructure, then get surprised by AWS bills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution:&lt;/strong&gt; Cost visibility BEFORE deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Generate Infracost JSON
  run: |
    infracost breakdown \
      --path . \
      --format json \
      --out-file infracost.json

- name: Post Infracost Comment on PR
  uses: infracost/actions/comment@v1
  with:
    path: infracost.json
    behavior: update

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example PR Comment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;💰 Monthly Cost Estimate

| Resource | Monthly Cost | Change |
|----------|--------------|--------|
| aws_eks_cluster.main | $73.00 | +$73.00 |
| aws_eks_node_group (t3.small) | $15.00 | +$15.00 |
| aws_nat_gateway (x2) | $65.00 | +$65.00 |
| **Total** | **$153.00** | **+$153.00** |

💡 Cost Optimization Opportunities:
- Use spot instances for node group (save ~70%)
- Single NAT gateway for non-production (save $32.50/mo)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Business Impact:&lt;/strong&gt; Infrastructure decisions become data-driven, not guesswork.&lt;/p&gt;
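&lt;p&gt;The numbers in that example comment are easy to sanity-check. A quick sketch, using the illustrative prices from the table above rather than live AWS pricing:&lt;/p&gt;

```python
# Illustrative monthly prices from the example PR comment above
eks_control_plane = 73.00
node_group = 15.00
nat_gateways = 65.00          # two gateways combined

total = eks_control_plane + node_group + nat_gateways
print(total)                  # 153.0, matching the table's total

# Dropping one of the two NAT gateways halves that line item
nat_savings = nat_gateways / 2
print(nat_savings)            # 32.5, the "save $32.50/mo" suggestion
```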

&lt;h2&gt;
  
  
  Automation &amp;amp; GitOps
&lt;/h2&gt;

&lt;p&gt;🚀 &lt;strong&gt;Complete Release Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic Release Configuration:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// .releaserc.js
module.exports = {
  branches: ['main'],
  plugins: [
    '@semantic-release/commit-analyzer',
    '@semantic-release/release-notes-generator',
    ['@semantic-release/changelog', { 
      changelogFile: 'CHANGELOG.md' 
    }],
    ['@semantic-release/exec', {
      prepareCmd: 'echo ${nextRelease.version} &amp;gt; VERSION.txt'
    }],
    ['@semantic-release/git', {
      assets: ['CHANGELOG.md', 'VERSION.txt']
    }],
    '@semantic-release/github'
  ]
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit Convention:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;feat: add new feature        # → Minor version bump (1.0.0 → 1.1.0)
fix: resolve bug             # → Patch version bump (1.0.0 → 1.0.1)
BREAKING CHANGE: ...         # → Major version bump (1.0.0 → 2.0.0)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
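&lt;p&gt;The mapping from commit message to version bump can be sketched in a few lines of Python. This is a simplified model of what the commit-analyzer plugin does, not its actual implementation:&lt;/p&gt;

```python
def bump(version: str, commit_message: str) -> str:
    """Return the next semantic version implied by a conventional commit."""
    major, minor, patch = (int(p) for p in version.split("."))
    if "BREAKING CHANGE" in commit_message:
        return f"{major + 1}.0.0"
    if commit_message.startswith("feat"):
        return f"{major}.{minor + 1}.0"
    if commit_message.startswith("fix"):
        return f"{major}.{minor}.{patch + 1}"
    return version  # chore:, docs:, etc. trigger no release

print(bump("1.0.0", "feat: add new feature"))  # 1.1.0
print(bump("1.0.0", "fix: resolve bug"))       # 1.0.1
```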



&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; Automated versioning, changelog generation, and GitHub releases.&lt;/p&gt;

&lt;h2&gt;
  
  
  ArgoCD GitOps
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: interview-app
  namespace: argocd
spec:
  syncPolicy:
    automated:
      prune: true      # Remove deleted resources
      selfHeal: true   # Auto-correct drift
  source:
    repoURL: https://github.com/AkingbadeOmosebi/Opsfolio-Interview-App
    path: k8s
    targetRevision: HEAD

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Continuous deployment from Git&lt;/li&gt;
&lt;li&gt;✅ Self-healing (automatic drift correction)&lt;/li&gt;
&lt;li&gt;✅ Image Updater with semver constraints&lt;/li&gt;
&lt;li&gt;✅ Complete audit trail&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Dual Environment Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Environment (K3s)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install K3s
curl -sfL https://get.k3s.io | sh -

# Deploy application
kubectl apply -f k8s/

# Deploy monitoring
kubectl apply -f k8s/monitoring/prometheus-app.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Cost-free validation and prototyping&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Environment (AWS EKS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform Infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~&amp;gt; 20.0"

  cluster_name    = "opsfolio-cluster"
  cluster_version = "1.31"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    main = {
      min_size     = 1
      max_size     = 3
      desired_size = 2
      instance_types = ["t3.small"]
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Security Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Private VPC subnets for nodes&lt;/li&gt;
&lt;li&gt;IAM to Kubernetes RBAC mapping&lt;/li&gt;
&lt;li&gt;TFSec scanning before deployment&lt;/li&gt;
&lt;li&gt;OIDC authentication (no static keys)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results &amp;amp; Metrics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📊 Measurable Outcomes
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SonarCloud Security Rating&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Critical Vulnerabilities&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0 (Snyk + Trivy)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Secrets Encrypted&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Static Credentials&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0 (Full OIDC)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CI/CD Security Scanners&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Automated Releases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes (Semantic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost Visibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pre-deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What This Project Demonstrates&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Defense-in-Depth Security&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Not one tool, but 6 scanning layers&lt;/li&gt;
&lt;li&gt;Container hardening at multiple levels&lt;/li&gt;
&lt;li&gt;Network-level access controls&lt;/li&gt;
&lt;li&gt;Zero static credentials anywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;FinOps as Code&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Cost estimation before deployment&lt;/li&gt;
&lt;li&gt;Optimization recommendations in PRs&lt;/li&gt;
&lt;li&gt;Data-driven infrastructure decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Complete Automation&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Semantic versioning&lt;/li&gt;
&lt;li&gt;Auto-generated changelogs&lt;/li&gt;
&lt;li&gt;GitOps continuous deployment&lt;/li&gt;
&lt;li&gt;Self-healing infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Production Thinking&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Not "does it work?" but "is it production-ready?"&lt;/li&gt;
&lt;li&gt;Observability from day one&lt;/li&gt;
&lt;li&gt;Documented for team scalability&lt;/li&gt;
&lt;li&gt;Cost-conscious engineering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Difference&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Junior approach: Meet the requirements&lt;br&gt;
Senior approach: Understand WHY production systems need security, observability, cost controls, and automation—then implement them&lt;/p&gt;

&lt;p&gt;This project is my answer to: "&lt;em&gt;How do you build production-ready infrastructure?&lt;/em&gt;"&lt;/p&gt;

&lt;h2&gt;
  
  
  Explore the Repository
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/AkingbadeOmosebi/Opsfolio-Interview-App" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's Inside:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete source code&lt;/li&gt;
&lt;li&gt;Architecture diagrams (visual + ASCII)&lt;/li&gt;
&lt;li&gt;Step-by-step implementation guides&lt;/li&gt;
&lt;li&gt;Component deep-dives&lt;/li&gt;
&lt;li&gt;CI/CD workflows&lt;/li&gt;
&lt;li&gt;Kubernetes manifests&lt;/li&gt;
&lt;li&gt;Terraform infrastructure code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Local Setup Guide&lt;/li&gt;
&lt;li&gt;Cloud Infrastructure Setup&lt;/li&gt;
&lt;li&gt;CI/CD Workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Let's Connect
&lt;/h2&gt;

&lt;p&gt;Found this helpful? Questions about the implementation?&lt;/p&gt;

&lt;p&gt;⭐ Star the repo&lt;br&gt;
💬 Open an issue for questions&lt;br&gt;
🔄 Share with your network&lt;/p&gt;

&lt;p&gt;What production practices do you prioritize in your infrastructure? Drop a comment below!&lt;/p&gt;

&lt;p&gt;#devops #kubernetes #devsecops #aws #finops #terraform #security #gitops #cicd #infrastructure&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>aws</category>
    </item>
    <item>
      <title>🏁 From Code to Cloud: My DevOps + DevSecOps Journey (Part 4/4 - The Reflection and Possible Routes)</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Fri, 29 Aug 2025 11:42:37 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5ehd</link>
      <guid>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5ehd</guid>
      <description>&lt;p&gt;&lt;strong&gt;Part 4 – Lessons Learned and Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5f8p"&gt;Part 3&lt;/a&gt; I explained how I used Terraform and Terraform Cloud to provision Azure infrastructure while keeping security at the core.&lt;/p&gt;

&lt;p&gt;Now, it’s time to wrap up this series with some real talk: what went wrong, what went right, and what I learned along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚠️ The Struggles
&lt;/h2&gt;

&lt;p&gt;Let’s be honest! This wasn’t smooth sailing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SonarCloud kept failing&lt;/strong&gt; → My app was just HTML/CSS/JS, so SonarCloud gave me 0% test coverage. At first, that meant failed pipelines, which was frustrating.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker “latest” confusion&lt;/strong&gt; → I thought every push would create a new image version in ECR. Wrong. Without unique tags, I couldn’t track versions or roll back easily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trivy misconfigurations&lt;/strong&gt; → My first vulnerability scans failed because I didn’t reference the full ECR image path. Rookie mistake, but it cost me time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret management headaches&lt;/strong&gt; → Hardcoding secrets was a no-go. Finding the right balance between Terraform Cloud, GitHub Actions, and AWS credentials took trial and error.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every failure made me slow down, rethink, and fix things the right way.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ The Wins
&lt;/h2&gt;

&lt;p&gt;Despite the bumps, here’s what I walked away with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A fully automated CI/CD pipeline&lt;/strong&gt; for my portfolio app&lt;/li&gt;
&lt;li&gt;Cross-cloud integration: &lt;strong&gt;AWS ECR → Azure Container Apps&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Security-first approach with &lt;strong&gt;SonarCloud, TFSEC, and Trivy&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Infrastructure fully managed as code with &lt;strong&gt;Terraform&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Secrets handled safely using &lt;strong&gt;Terraform Cloud sensitive variables&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app itself? Just a simple portfolio.&lt;br&gt;
But the pipeline? &lt;strong&gt;Enterprise-grade.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As of writing this post, the deployment was 100% functional: triggered by GitHub Actions and provisioned by Terraform Cloud.&lt;/p&gt;

&lt;p&gt;Final screenshots:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97w7me4rurf8mqms98yu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97w7me4rurf8mqms98yu.png" alt=" " width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyovcbl2agfc8neyoxh9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyovcbl2agfc8neyoxh9f.png" alt=" " width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsvlmjlbvtct852vem56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsvlmjlbvtct852vem56.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzdx7p5bvi8hb9fxafcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzdx7p5bvi8hb9fxafcf.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv37hp0y8nb70des84qqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv37hp0y8nb70des84qqv.png" alt=" " width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before destroying the project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl75l1kqw8j3uyis26a15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl75l1kqw8j3uyis26a15.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything has an end. This isn’t the total end, though; it’s a milestone, and with it the beginning of a newer, more sophisticated challenge. More is coming!&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Key Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Looking back, here are the big takeaways from this journey:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Even a static app can teach you real DevOps.&lt;/strong&gt;&lt;br&gt;
It’s not about the complexity of the code — it’s about how you build, ship, and secure it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Always version your Docker images.&lt;/strong&gt;&lt;br&gt;
Using Git commit SHAs as tags solved so many headaches and gave me rollback safety.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security isn’t an afterthought.&lt;/strong&gt;&lt;br&gt;
TFSEC and Trivy forced me to think like a DevSecOps engineer. Better to fix issues now than explain them later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secrets should never live in repos.&lt;/strong&gt;&lt;br&gt;
Terraform Cloud’s sensitive variables saved me from bad practices and kept everything professional.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation is part of DevOps.&lt;/strong&gt;&lt;br&gt;
This Dev.to series itself is proof. If you can’t explain what you built, it’s almost like it doesn’t exist.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
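&lt;p&gt;The image-tagging lesson above is simple to sketch: derive an immutable tag from the commit SHA instead of reusing "latest". The registry name and SHA here are placeholders for illustration only:&lt;/p&gt;

```python
def image_ref(repo: str, commit_sha: str) -> str:
    """Build an immutable image reference from a short commit SHA."""
    return f"{repo}:{commit_sha[:7]}"

# hypothetical ECR repo and commit SHA, for illustration only
sha = "a1b2c3d4e5f6a7b8c9d0"
print(image_ref("123456789012.dkr.ecr.eu-west-1.amazonaws.com/portfolio", sha))
# 123456789012.dkr.ecr.eu-west-1.amazonaws.com/portfolio:a1b2c3d
```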

&lt;h2&gt;
  
  
  🚀 What’s Next?
&lt;/h2&gt;

&lt;p&gt;This project was just the beginning. If I were to extend it, I’d:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add monitoring &amp;amp; observability (Prometheus, Grafana, or Azure Monitor)&lt;/li&gt;
&lt;li&gt;Deploy to multiple environments (staging + prod) with approval gates&lt;/li&gt;
&lt;li&gt;Write basic tests for my JavaScript to make SonarCloud happier&lt;/li&gt;
&lt;li&gt;Add a rollback strategy in the pipeline (in case a deploy fails)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of those would push this portfolio pipeline even closer to what real-world production systems look like.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I started this journey wanting “just a portfolio site.”&lt;br&gt;
But I ended up building something much more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;CI/CD pipeline&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;multi-cloud deployment&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;DevSecOps showcase&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest lesson?&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;You don’t need a big app to prove your skills. You just need discipline, automation, and security woven into your process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This wasn’t just about HTML, CSS, and JS.&lt;br&gt;
It was about showing that I can think, build, and operate like a DevOps engineer.&lt;/p&gt;

&lt;p&gt;And that’s the story behind my portfolio.&lt;/p&gt;

&lt;p&gt;Feel free to check out my &lt;a href="https://github.com/AkingbadeOmosebi/my-portfolio-azure-container-apps" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

&lt;p&gt;Feel free to like, leave a comment and share.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>aws</category>
      <category>azure</category>
    </item>
    <item>
      <title>🌍 From Code to Cloud: My DevOps + DevSecOps Journey. (Part 3/4 - The Execution)</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Fri, 29 Aug 2025 11:34:42 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5f8p</link>
      <guid>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5f8p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Part 3 – Terraform and Secure Cloud Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-18ca"&gt;Part 2&lt;/a&gt;, I showed how my GitHub Actions pipeline automated builds, scans, and deployments for my portfolio app.&lt;/p&gt;

&lt;p&gt;Now it’s time to talk about infrastructure — the piece that made everything real in the cloud. &lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Why Terraform?
&lt;/h2&gt;

&lt;p&gt;I could have clicked buttons in the Azure portal and spun up resources manually. I’ve done that before, and it would have defeated the whole purpose here. I wanted something different, a new challenge, without straying from my main goal.&lt;/p&gt;

&lt;p&gt;The goal of this project was to showcase DevOps discipline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repeatable deployments&lt;/strong&gt; (no “but it works on my machine”)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version-controlled infrastructure&lt;/strong&gt; (IaC mindset)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security baked in&lt;/strong&gt; (no leaking secrets in YAML files)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why I chose Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Infrastructure
&lt;/h2&gt;

&lt;p&gt;Here’s what I needed for my app to live in the cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Container App → the service to run my Dockerized portfolio&lt;/li&gt;
&lt;li&gt;Resource Group &amp;amp; Networking → to organize and isolate resources&lt;/li&gt;
&lt;li&gt;Terraform Cloud → remote state storage &amp;amp; secure variable management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice something? I didn’t use Azure Container Registry (ACR).&lt;br&gt;
Instead, I built my images in GitHub Actions and pushed them to AWS ECR.&lt;br&gt;
Why? Because it let me integrate &lt;strong&gt;AWS + Azure in one pipeline&lt;/strong&gt;, showcasing a multi-cloud DevSecOps project and a valuable real-world skill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xtzyh0gp5ob1gtfcn1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xtzyh0gp5ob1gtfcn1n.png" alt="Provisioned Resource Group and Container App" width="800" height="221"&gt;&lt;/a&gt;&lt;br&gt;
Provisioned Resource Group and Container App&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24pbw8culjtoa3ifh87b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24pbw8culjtoa3ifh87b.png" alt="Terraform Cloud Run Sequence Triggered by GITHUB Actions to provision resources. Status: Successful!" width="800" height="397"&gt;&lt;/a&gt;&lt;br&gt;
Terraform Cloud run sequence triggered by GitHub Actions to provision resources. Status: successful!&lt;/p&gt;
&lt;h2&gt;
  
  
  💰 Cost Estimation in Terraform Cloud
&lt;/h2&gt;

&lt;p&gt;But did you also pause to notice something?&lt;/p&gt;

&lt;p&gt;One underrated feature of Terraform Cloud is &lt;strong&gt;Cost Estimation&lt;/strong&gt;. Every time a terraform plan runs in TFC, it doesn’t just show what resources will change — it also estimates the monthly cloud bill of those changes.&lt;/p&gt;

&lt;p&gt;For example, this simple VM resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_linux_virtual_machine" "myvm" {
  name                = "vm1"
  size                = "Standard_B2s"
  admin_username      = "adminuser"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  network_interface_ids = [
    azurerm_network_interface.myvm_nic.id,
  ]
  # trimmed for brevity: os_disk, source_image_reference, and admin auth omitted
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Might give me an output like:&lt;br&gt;
&lt;code&gt;Cost estimate: +$24.50/month&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is huge because it helps avoid surprise cloud bills and keeps infra spending predictable. On bigger teams, you can even enforce policies (e.g., block applies if cost &amp;gt; $100/month).&lt;/p&gt;

&lt;p&gt;👉 Lesson learned: Don’t just plan for resources, plan for costs. Infrastructure as Code is also Finance as Code.&lt;/p&gt;
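In Terraform Cloud, such guardrails are typically written as Sentinel policies. A minimal sketch of a cost cap (the $100/month limit and the exact numbers are illustrative, not from my actual setup):

```
import "tfrun"
import "decimal"

# Illustrative cap: reject plans that add more than $100/month
limit = decimal.new(100)

main = rule when tfrun.cost_estimate is not null {
    decimal.new(tfrun.cost_estimate.delta_monthly_cost).less_than(limit)
}
```

Attach the policy to a workspace with hard-mandatory enforcement and any plan whose estimated monthly delta exceeds the limit is blocked before apply.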
&lt;h2&gt;
  
  
  🔑 Secret Management with Terraform Cloud
&lt;/h2&gt;

&lt;p&gt;One of the trickiest parts was handling secrets.&lt;/p&gt;

&lt;p&gt;Terraform needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My ECR authentication token (from AWS)&lt;/li&gt;
&lt;li&gt;My Azure credentials (to deploy resources)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I refused to hardcode them anywhere.&lt;/p&gt;

&lt;p&gt;👉 My solution: store them as sensitive variables inside Terraform Cloud.&lt;br&gt;
That way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They were encrypted&lt;/li&gt;
&lt;li&gt;Not visible in logs&lt;/li&gt;
&lt;li&gt;Automatically injected into Terraform runs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfk6tt2v3ru930wnifap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfk6tt2v3ru930wnifap.png" alt="Terraform ENV Variables configured and set for Authentication to ECS &amp;amp; Deployment to Azure" width="800" height="395"&gt;&lt;/a&gt;&lt;br&gt;
Terraform env variables configured and set for authentication to ECR &amp;amp; deployment to Azure&lt;/p&gt;

&lt;p&gt;This gave me a &lt;strong&gt;secure, enterprise-style workflow&lt;/strong&gt; without having to build a full secret management system.&lt;/p&gt;
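On the Terraform side, those Terraform Cloud variables arrive as ordinary input variables; declaring them as sensitive keeps them masked in plan and apply output. A trimmed sketch (variable names here are illustrative):

```hcl
variable "ecr_auth_token" {
  description = "Short-lived auth token for pulling images from AWS ECR"
  type        = string
  sensitive   = true # masked in plan/apply output and run logs
}

variable "azure_client_secret" {
  description = "Service principal secret used by the azurerm provider"
  type        = string
  sensitive   = true
}
```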

&lt;p&gt;You can get the ECR auth token from AWS and feed it straight to Docker with the following command (region and account ID are placeholders):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  🛡️ Security Checks with TFSEC
&lt;/h2&gt;

&lt;p&gt;Terraform is code — which means it can also have vulnerabilities.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Misconfigured networking rules&lt;/li&gt;
&lt;li&gt;Exposed storage accounts&lt;/li&gt;
&lt;li&gt;Overly permissive IAM roles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To catch these, I integrated &lt;strong&gt;TFSEC&lt;/strong&gt; into my pipeline.&lt;br&gt;
Every time Terraform code ran, TFSEC checked it against security best practices.&lt;/p&gt;

&lt;p&gt;This meant my IaC wasn’t just functional — it was &lt;strong&gt;hardened&lt;/strong&gt;.&lt;/p&gt;
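Wiring TFSEC into GitHub Actions takes only a few lines. A minimal workflow sketch (the pinned action version is illustrative; use whatever is current):

```yaml
name: tfsec

on:
  push:
    branches: [main]

jobs:
  tfsec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Scans all Terraform files in the repo against security best practices
      - name: Run tfsec
        uses: aquasecurity/tfsec-action@v1.0.3
```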

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo61dg2wjc8ud58a95m9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo61dg2wjc8ud58a95m9.png" alt="TFSEC Pipeline successfully configured and active" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  📜 A Simplified Terraform Snippet
&lt;/h2&gt;

&lt;p&gt;Here’s a safe example (trimmed down for clarity) of how I deployed my Azure Container App:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "portfolio-rg"
  location = "West Europe"
}

resource "azurerm_container_app" "portfolio" {
  name                         = "portfolio-app"
  resource_group_name          = azurerm_resource_group.rg.name
  container_app_environment_id = azurerm_container_app_environment.env.id
  revision_mode                = "Single" # required by the azurerm provider

  template {
    container {
      name   = "portfolio"
      image  = "ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/portfolio:${var.image_tag}"
      cpu    = 0.5
      memory = "1.0Gi"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notice the image is pulled from AWS ECR — not ACR.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;This cross-cloud integration was a deliberate choice to highlight flexibility.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ The Result
&lt;/h2&gt;

&lt;p&gt;When everything was wired up, here’s what I got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;Dockerized portfolio app&lt;/strong&gt; running on Azure Container Apps&lt;/li&gt;
&lt;li&gt;Fully managed via &lt;strong&gt;Terraform IaC&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Secrets stored securely in &lt;strong&gt;Terraform Cloud&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;IaC continuously scanned with &lt;strong&gt;TFSEC&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Container image scanned with &lt;strong&gt;Trivy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm354ttn59r10huy8ve35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm354ttn59r10huy8ve35.png" alt="Docker Build and Trivy Scanner" width="800" height="324"&gt;&lt;/a&gt;&lt;br&gt;
Docker build with Trivy Image Scanner.&lt;/p&gt;

&lt;p&gt;This setup wasn’t about having the fanciest app.&lt;br&gt;
It was about &lt;strong&gt;proving I could deploy apps securely, consistently, and across multiple clouds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k8i9k9k7pt7l2cr2hxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k8i9k9k7pt7l2cr2hxs.png" alt="Final Application up and running from the Container App Live. Fully automated to make deployment changes based upon new image build" width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
Final Application up and running from the Container App Live. Fully automated to make deployment changes based upon new image build&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Why This Matters
&lt;/h2&gt;

&lt;p&gt;Employers don’t just want someone who can write code.&lt;br&gt;
They want engineers who can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy infrastructure securely&lt;/li&gt;
&lt;li&gt;Work across multi-cloud environments&lt;/li&gt;
&lt;li&gt;Use IaC for repeatability and control&lt;/li&gt;
&lt;li&gt;Integrate security scanning into DevOps (aka DevSecOps)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s what this part of the project showed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Next Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Part 4, I’ll share the biggest lessons learned along this journey: the frustrations, the wins, and how I’d improve this pipeline even further.&lt;/p&gt;

&lt;p&gt;Stay tuned! Because that’s where it all comes together.&lt;/p&gt;

&lt;p&gt;Click here to view the final part. &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5ehd"&gt;Part 4&lt;/a&gt; ➡️&lt;/p&gt;

</description>
    </item>
    <item>
      <title>⚙️ From Code to Cloud: My DevOps + DevSecOps Journey (Part 2/4 - The Automation Obstacles)</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Fri, 29 Aug 2025 11:03:10 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-18ca</link>
      <guid>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-18ca</guid>
      <description>&lt;p&gt;&lt;strong&gt;Part 2 – Automating the Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-part-1-the-vision-3hoe"&gt;Part 1&lt;/a&gt; of this series, I explained how I wanted my personal portfolio to be more than just a few HTML files sitting on GitHub Pages.&lt;/p&gt;

&lt;p&gt;I wanted it to &lt;strong&gt;behave like a production-grade application&lt;/strong&gt;: built, scanned, and deployed automatically with security woven into the process.&lt;/p&gt;

&lt;p&gt;This is the part where things got interesting — the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ The App Itself
&lt;/h2&gt;

&lt;p&gt;Let’s keep it real: my app is not a full-stack system.&lt;/p&gt;

&lt;p&gt;It’s a portfolio website — plain and simple.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML for structure&lt;/li&gt;
&lt;li&gt;CSS for styling&lt;/li&gt;
&lt;li&gt;JavaScript for a little interactivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it.&lt;/p&gt;

&lt;p&gt;But what made it special was how I treated it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I Dockerized it using an NGINX base image to serve static files.&lt;/li&gt;
&lt;li&gt;I pushed that image to AWS Elastic Container Registry (ECR).&lt;/li&gt;
&lt;li&gt;Then, I deployed it to Azure Container Apps using Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even though the app was static, the pipeline was dynamic.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔄 The Pipeline Workflow
&lt;/h2&gt;

&lt;p&gt;Here’s how the GitHub Actions workflow was designed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checkout code from GitHub&lt;/li&gt;
&lt;li&gt;Build Docker image (NGINX serving my HTML/CSS/JS)&lt;/li&gt;
&lt;li&gt;Tag and push the image to AWS ECR&lt;/li&gt;
&lt;li&gt;Run security scans:

&lt;ul&gt;
&lt;li&gt;Trivy → scan Docker image for vulnerabilities&lt;/li&gt;
&lt;li&gt;TFSEC → scan Terraform code&lt;/li&gt;
&lt;li&gt;SonarCloud → check code quality&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Deploy with Terraform to Azure Container Apps&lt;/li&gt;
&lt;/ol&gt;
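Condensed into a workflow file, the build-and-push part of those steps looked roughly like this (account ID and region are placeholders):

```yaml
name: portfolio-ci

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: docker build -t my-portfolio:${{ github.sha }} .
      - name: Push image to AWS ECR
        run: |
          aws ecr get-login-password --region eu-west-1 \
            | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com
          docker tag my-portfolio:${{ github.sha }} \
            ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/my-portfolio:${{ github.sha }}
          docker push ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/my-portfolio:${{ github.sha }}
```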

&lt;h2&gt;
  
  
  ⚠️ The Roadblocks
&lt;/h2&gt;

&lt;p&gt;It wasn’t smooth sailing at first.&lt;br&gt;
Here are some of the biggest challenges I hit:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SonarCloud Failures&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Initially it was a configuration issue, until I understood it clearly and added the project key, token, and other required values to the repo’s environment secrets &amp;amp; variables.&lt;/li&gt;
&lt;li&gt;Then next; SonarCloud kept failing the Quality Gate.&lt;/li&gt;
&lt;li&gt;Why? Because my project had no backend logic — just HTML, CSS, and JS.&lt;/li&gt;
&lt;li&gt;That meant 0% test coverage (and SonarCloud hates that).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 My fix: I kept SonarCloud in the pipeline, but marked it so failures didn’t block the build.&lt;br&gt;
That way, I still got visibility on quality checks, but my deployments weren’t stopped.&lt;/p&gt;
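In GitHub Actions terms, that is a one-line change: mark the scan step with continue-on-error so a failed Quality Gate is reported but never blocks the deploy (sketch):

```yaml
      - name: SonarCloud Scan
        uses: sonarsource/sonarcloud-github-action@master
        continue-on-error: true # surface quality results without failing the build
```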

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk3n9asq84pg3wmyu82y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk3n9asq84pg3wmyu82y.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SonarCloud failed due to 0% test coverage, since my app doesn’t have any actual complex logic. Nonetheless, it doesn’t change the fact that the SonarCloud scanner was integrated, and it showcases my potential.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here is a simple SonarCloud workflow you can get your hands dirty with, to get started on your next project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: "sonar_cloud_scan_github_actions"
on:
  workflow_dispatch:

jobs:
  DemoSonarCloudSCan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
            fetch-depth: 0
      - name: SonarCloud Scan
        uses: sonarsource/sonarcloud-github-action@master
        env:
            GITHUB_TOKEN: ${{ secrets.GIT_TOKEN }}
            SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        with:
          args: &amp;gt;
              -Dsonar.organization=rekhugopal
              -Dsonar.projectKey=SonarCloudCodeAnalyisis
              -Dsonar.python.coverage.reportPaths=coverage.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is a manually triggered workflow; to run it automatically on every push and whenever a PR is opened, change the trigger to something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: "sonar_cloud_scan_github_actions"

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, make sure to set the appropriate keys, variables, and tokens as GitHub Actions secrets or environment variables.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Docker “latest” Tag Problem&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;At first, I tagged my image as latest.&lt;/li&gt;
&lt;li&gt;The issue? If nothing in the image changed, ECR wouldn’t show a new version.&lt;/li&gt;
&lt;li&gt;It felt like my push was “skipped,” even though the workflow ran.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 My fix: I switched to unique tags using Git commit SHA.&lt;br&gt;
Here is my code block for that from my actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t my-portfolio:${{ github.sha }} .
docker push my-portfolio:${{ github.sha }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, every commit created a brand new image version in ECR.&lt;br&gt;
No confusion, easy rollback.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Trivy Misconfiguration&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;My first Trivy scan failed with an invalid image reference error.&lt;/li&gt;
&lt;li&gt;I had forgotten to include the repository name before the image hash.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mx3e6gybly621w6mu2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mx3e6gybly621w6mu2q.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 My fix: I updated the scan step to reference the full ECR image path.&lt;br&gt;
Once fixed, Trivy scanned the image and reported vulnerabilities clearly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It was at this point that I made each workflow modular and separate, so I could isolate issues without messing up the other workflows or pipelines. Segmenting it made me focus like a laser, and it worked like a charm!&lt;/em&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Terraform Secrets&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Terraform needed my ECR token and Azure credentials.&lt;/li&gt;
&lt;li&gt;Storing them in plain text was not an option.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 My fix: I stored them securely in Terraform Cloud as sensitive variables. That way, I avoided exposing secrets in GitHub Actions or in my repo.&lt;/p&gt;

&lt;p&gt;The rest of the fixes came down to passing in the right env values without human errors such as stray spaces, wrong casing, or extra characters, so the pipeline could pick up the right secret envs, since I did not want to hardcode any values or secrets (see screenshot below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyqhw78567jkhqtf072q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyqhw78567jkhqtf072q.png" alt=" " width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  ✅ The Working Pipeline
&lt;/h2&gt;

&lt;p&gt;After fixing those issues, my pipeline looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Commit to GitHub
       ↓
 GitHub Actions kicks off
       ↓
 Build Docker image (NGINX + portfolio files)
       ↓
 Push image to AWS ECR (tagged with commit SHA)
       ↓
 Run scans (SonarCloud, TFSEC, Trivy)
       ↓
 Deploy with Terraform to Azure Container Apps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the best part? Every push = automatic build + scan + deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3c6l9dds561d2u2ilgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3c6l9dds561d2u2ilgo.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My ECR repo images: with each change, commit, and push, a new image is built, tagged, and scanned before being deployed.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Why This Matters
&lt;/h2&gt;

&lt;p&gt;Even though my app is just static HTML/CSS/JS, the pipeline is enterprise-grade.&lt;br&gt;
It shows that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I can build CI/CD pipelines from scratch&lt;/li&gt;
&lt;li&gt;I can integrate security tools (DevSecOps mindset)&lt;/li&gt;
&lt;li&gt;I can work across AWS + Azure&lt;/li&gt;
&lt;li&gt;I can solve real-world problems like failing quality gates, versioning issues, and secret management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the stuff hiring managers love to see — not just “I can write code,” but “I can run a secure DevOps workflow.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Next Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Part 3, I’ll walk through the Terraform setup and how I provisioned Azure Container Apps with IaC.&lt;/p&gt;

&lt;p&gt;Spoiler: Terraform Cloud made my life much easier, but it came with its own surprises.&lt;/p&gt;

&lt;p&gt;Stay tuned.&lt;/p&gt;

&lt;p&gt;Click here to view &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-5f8p"&gt;part 3&lt;/a&gt; ➡️&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🎬From Code to Cloud: My DevOps + DevSecOps Journey (Part 1/4 – The Vision)</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Fri, 29 Aug 2025 10:28:06 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-part-1-the-vision-3hoe</link>
      <guid>https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-part-1-the-vision-3hoe</guid>
      <description>&lt;p&gt;A few weeks ago, I decided I didn’t just want another static portfolio site.&lt;br&gt;
I wanted something that tells a story of my skills in action: automation, cloud, DevOps, and security all woven together.&lt;/p&gt;

&lt;p&gt;Instead of just pushing some HTML to GitHub Pages, I asked myself:&lt;/p&gt;

&lt;p&gt;👉 &lt;em&gt;“What if my portfolio itself became a DevOps project?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That’s how this whole journey began.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;💡The Idea&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I wanted a simple app — nothing fancy. Just:&lt;/p&gt;

&lt;p&gt;Frontend: HTML + CSS + a bit of JavaScript&lt;/p&gt;

&lt;p&gt;Containerized: Docker with NGINX serving my files&lt;/p&gt;

&lt;p&gt;But the real magic wouldn’t be in the code.&lt;br&gt;
The magic would be in how it’s &lt;strong&gt;built, tested, scanned, and deployed&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;🛠️The Tech Stack I Chose&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s what I pulled together for the project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Version Control: GitHub&lt;/li&gt;
&lt;li&gt;CI/CD: GitHub Actions&lt;/li&gt;
&lt;li&gt;Image Registry: AWS Elastic Container Registry (ECR)&lt;/li&gt;
&lt;li&gt;Infrastructure: Terraform (state managed in Terraform Cloud)&lt;/li&gt;
&lt;li&gt;Deployment: Azure Container Apps&lt;/li&gt;
&lt;li&gt;Security &amp;amp; Quality:

&lt;ul&gt;
&lt;li&gt;SonarCloud (code quality &amp;amp; coverage)&lt;/li&gt;
&lt;li&gt;TFSEC (Terraform security scanning)&lt;/li&gt;
&lt;li&gt;Trivy (container vulnerability scanning)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That stack might sound like a lot for a portfolio, but that was the point.&lt;br&gt;
I wanted to treat my personal project as if it were production software.&lt;/p&gt;
&lt;h2&gt;
  
  
  🎯 The Goal
&lt;/h2&gt;

&lt;p&gt;At the end of the day, here’s what I wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fully automated pipeline where every git push would:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Build my Docker image&lt;/li&gt;
&lt;li&gt;Push it to AWS ECR&lt;/li&gt;
&lt;li&gt;Run security scans&lt;/li&gt;
&lt;li&gt;Deploy the container to Azure by triggering a Terraform Cloud run&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;All infrastructure managed as code with Terraform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Quality gates in place to enforce DevSecOps practices&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Basically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commit → Build → Scan → Deploy → Done&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  ⚡ The Challenges I Knew Were Coming
&lt;/h2&gt;

&lt;p&gt;I wasn’t naive. I knew I’d hit roadblocks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failing pipelines when SonarCloud complained about coverage&lt;/li&gt;
&lt;li&gt;Docker images not updating in ECR when tagged latest&lt;/li&gt;
&lt;li&gt;Terraform secrets and tokens needing secure handling&lt;/li&gt;
&lt;li&gt;Security scanners flagging issues I’d have to fix&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But that’s the beauty of it: this wasn’t just a coding exercise, it was a learning journey.&lt;/p&gt;
&lt;h2&gt;
  
  
  🗺️ The Big Picture
&lt;/h2&gt;

&lt;p&gt;Here's the high-level architecture of what I set out to build, all automated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   GitHub Repo
       |
       v
 GitHub Actions CI/CD
       |
       v
Docker Image Build
       |
       v
 Push to AWS ECR
       |
       v
Security Scans (SonarCloud, Trivy, TFSEC)
       |
       v
 Deploy via Terraform Cloud Run 
       |
       v
 Azure Container Apps
       |
       v
 Live App Running
       |
       v
Manage Terraform State-file Securely on Terraform Cloud.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every single step has its own story, which I'll break down in this series.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✨ Why This Matters
&lt;/h2&gt;

&lt;p&gt;For me, this wasn’t about just having a portfolio.&lt;br&gt;
It was about proving to myself (and to future employers) that I can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build pipelines from scratch&lt;/li&gt;
&lt;li&gt;Integrate security into DevOps (a true DevSecOps mindset)&lt;/li&gt;
&lt;li&gt;Manage multi-cloud setups (AWS + Azure in one project)&lt;/li&gt;
&lt;li&gt;Solve real-world CI/CD issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is Part 1 of the series. In Part 2, I’ll dive into the pipeline itself: the YAML, the pain, the failures, and the fixes that brought it to life.&lt;/p&gt;

&lt;p&gt;Stay tuned. 🚀&lt;/p&gt;

&lt;p&gt;Click here to view &lt;a href="https://dev.to/akingbade_omosebi/from-code-to-cloud-my-devops-devsecops-journey-18ca"&gt;Part 2&lt;/a&gt; ➡️&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I built My Own Cloud-Native Monitoring App, From Flask to ECR using Boto3, and EKS using Terraform, then implemented ArgoCD.</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Tue, 22 Jul 2025 18:45:02 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/how-i-built-my-own-cloud-native-monitoring-app-from-flask-to-ecr-using-boto3-and-eks-using-1ahl</link>
      <guid>https://dev.to/akingbade_omosebi/how-i-built-my-own-cloud-native-monitoring-app-from-flask-to-ecr-using-boto3-and-eks-using-1ahl</guid>
      <description>&lt;p&gt;Tech bros,&lt;br&gt;
We meet again!&lt;/p&gt;

&lt;p&gt;I’ve just got my hands dirty lately with something fun and lowkey production-ish: I built 🔧 a simple Python system monitoring web app, containerized it with Docker, shipped it to an AWS ECR repo provisioned using Python's Boto3, spun up an EKS cluster with Terraform, deployed it with kubectl, and wired up ArgoCD for full-on GitOps operations.&lt;/p&gt;

&lt;p&gt;Whole thing got me feeling like a software engineer 🧑‍💻👷🏿‍♂️👷🏿‍♂️ now, sike! Because I am one now! 🙂‍↕️🙂‍↕️&lt;/p&gt;

&lt;p&gt;Yeah, that’s right!! My local laptop is basically my mini DevOps playground right now. 🤣🤣&lt;/p&gt;

&lt;p&gt;Let's dive into this with phases.&lt;/p&gt;

&lt;p&gt;Here is an architecture diagram I crafted in draw.io; use it as a flow reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0zqj0ewiiwb37ip9fe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0zqj0ewiiwb37ip9fe6.png" alt="Architectural diagram" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 1 : My Python Flask Monitoring App
&lt;/h2&gt;

&lt;p&gt;So, let’s start at the very beginning.&lt;br&gt;
I wrote a Python web app that uses psutil to grab CPU load, memory usage, disk usage, all that good system info, and displays it with Flask, wrapped in some plain HTML + CSS.&lt;br&gt;
Lightweight, simple, but does the job. ✅✅&lt;/p&gt;
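The core of such an app fits in a handful of lines. A minimal sketch with a JSON endpoint (my actual app renders an HTML page; the route and field names here are illustrative):

```python
import psutil
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/metrics")
def metrics():
    # Gather basic system stats with psutil
    return jsonify({
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    })

# Serve with: flask --app app run --host=0.0.0.0 --port=5000
```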
&lt;h2&gt;
  
  
  Phase 2: Containerize Everything
&lt;/h2&gt;

&lt;p&gt;Next step: spun up Docker Desktop on my local machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnnu5wpd520jvx3ddtea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnnu5wpd520jvx3ddtea.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whipped up a Dockerfile with the following config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-slim-buster

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV FLASK_RUN_HOST=0.0.0.0

EXPOSE 5000

CMD ["flask", "run"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj94ibzrpscn0cpgulmiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj94ibzrpscn0cpgulmiz.png" alt=" " width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built the image, spun up the container locally aaand boom!! My app was alive on localhost:5000.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv796wcc3g2x7zx2stct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv796wcc3g2x7zx2stct.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That moment had me smiling like a goat 🐐🐐 cuz I'm the goat! (See what I did there? 😏)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb8t2jg2tyk105fuv2ng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb8t2jg2tyk105fuv2ng.png" alt=" " width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Pushed the image to AWS ECR
&lt;/h2&gt;

&lt;p&gt;Next up, I didn’t want this image sitting only on my laptop. So I had to put it somewhere, and on a whim I Googled other ways to provision resources, stumbled on boto3, and decided to give it a try: I used boto3 in Python to build out an ECR repository in my AWS account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3            ## i got it from here, you can also leearn  more from here https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecr/client/create_repository.html

ecr_client = boto3.client('ecr')

repository_name = "my-cloud-app-repo"
response = ecr_client.create_repository(repositoryName=repository_name)

repository_uri = response['repository']['repositoryUri']
print(repository_uri)

# I just tried it out to get an idea of how it works. I still prefer my Terraform approach, as this requires a lot of API calls, which may delay or throw me off the plan.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Logged in with the CLI, tagged my image, pushed it up to ECR — now my container lives in the cloud where it belongs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker push&lt;/code&gt; and &lt;code&gt;aws ecr get-login-password&lt;/code&gt; were my witnesses.&lt;/p&gt;
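&lt;p&gt;For the curious, the push flow can be sketched from the Python side too. This is a minimal sketch: the account ID below is a placeholder, not my real one, and the live AWS/Docker calls are left as comments since they need real credentials:&lt;/p&gt;

```python
# Build the full ECR image URI the way the tag-and-push commands expect it.
# All values here are hypothetical placeholders.
region = "eu-central-1"
repo = "my-cloud-app-repo"
account_id = "123456789012"

registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
image_uri = f"{registry}/{repo}:latest"

# With credentials configured, the real flow is:
#   aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <registry>
#   docker tag my-cloud-app:latest <image_uri>
#   docker push <image_uri>
print(image_uri)
```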

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuobx0g85lgxiflcjuych.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuobx0g85lgxiflcjuych.png" alt=" " width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far: Python ==&amp;gt; Docker ==&amp;gt; ECR. ✅✅ &lt;/p&gt;

&lt;p&gt;Stay with me now!! Don't go anywhere, keep scrolling!!! 😠&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Next up: Part 2! Terraform, EKS and Deployments!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💡 Why I’m doing this:&lt;br&gt;
I’m not just playing around!! I actually want recruiters, hiring managers, or whoever’s reading to see that I know how to build something end-to-end.&lt;br&gt;
I understand how code moves from my IDE ➡️ to a container ➡️ to a registry ➡️ to Kubernetes ➡️ and gets managed with GitOps.&lt;/p&gt;

&lt;p&gt;If you’re gonna call yourself a Cloud/DevOps engineer, you can talk the talk, but you better walk the talk. So I’m walking it, one cluster at a time. (Admit it, I'm sleek with it)😏😏&lt;/p&gt;
&lt;h2&gt;
  
  
  Part 2: Terraform + EKS: Bringing My non-existent Cluster to Life
&lt;/h2&gt;

&lt;p&gt;Alright, once my ECR image was chilling safely in AWS, it was time to spin up the big boys' playground: an EKS cluster, the big gun of container orchestration.&lt;/p&gt;

&lt;p&gt;No point clicking around in the AWS console, that’s rookie moves.&lt;br&gt;
I went full Infrastructure-as-Code with Terraform, because real engineers automate repeatable pain, not just deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The big boy tools ⚔️⚔️&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wrote my &lt;code&gt;main.tf&lt;/code&gt; with two key modules:&lt;/p&gt;

&lt;p&gt;The VPC module → pulled from the official &lt;code&gt;terraform-aws-modules/vpc&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC MODULE, you can get it from thee official VPC config

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Name = "eks-vpc"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
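&lt;p&gt;As a quick sanity check, the subnet plan above can be verified with nothing but Python's standard library: every subnet should sit inside the VPC CIDR, and none of them should overlap.&lt;/p&gt;

```python
import ipaddress
from itertools import combinations

# The VPC and subnet CIDRs from the module config above.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network(c) for c in [
    "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24",        # private
    "10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24",  # public
]]

assert all(s.subnet_of(vpc) for s in subnets)                       # all inside the VPC
assert not any(a.overlaps(b) for a, b in combinations(subnets, 2))  # no collisions
print("subnet layout checks out")
```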



&lt;p&gt;The EKS module → &lt;code&gt;terraform-aws-modules/eks&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EKS MODULE: likewise this one too, it pulls from its official EKS config

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.13.0"

  cluster_name                   = var.cluster_name
  cluster_version                = var.cluster_version
  cluster_endpoint_public_access = true

  # Link to VPC module output
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      # v20 of the module expects *_size keys for managed node groups
      min_size     = var.min_capacity
      max_size     = var.max_capacity
      desired_size = var.desired_capacity

      instance_types = var.node_instance_types
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My VPC config gave me 3 AZs, private &amp;amp; public subnets, NAT Gateway; you know, the usual building blocks to keep traffic flowing but tight.&lt;/p&gt;

&lt;p&gt;Then the EKS module did the heavy lifting:&lt;br&gt;
It spun up the control plane, worker nodes, IAM roles, the whole shebang.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxcg5ydnfi55enuiy0hr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxcg5ydnfi55enuiy0hr.png" alt=" " width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;RBAC Headaches 🥀 &amp;amp; IAM Headbutts 💔&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Here’s where the "fun" started &amp;gt;&amp;gt; RBAC.&lt;br&gt;
As you can see, I’m an IAM user (Ak_DevOps), and Kubernetes does not care if your IAM user has AdminAccess in AWS. Cold and ruthless, it didn't even care that I hadn't eaten while working on this project. 💔💔&lt;br&gt;
K8s RBAC is its own beast. 🧛🧛&lt;/p&gt;

&lt;p&gt;So there I was, cluster up, kubectl get nodes… Access Denied.&lt;br&gt;
Can’t list nodes. Can’t touch aws-auth. Nothing. 😫😥&lt;/p&gt;

&lt;p&gt;The Fix? The Right AccessEntry!&lt;br&gt;
First, I tried to wire up &lt;code&gt;system:masters&lt;/code&gt;.&lt;br&gt;
AWS EKS: “Nah bro, &lt;code&gt;system&lt;/code&gt;: prefixes are off-limits.”&lt;br&gt;
Cool cool cool.&lt;/p&gt;

&lt;p&gt;So I fixed the Terraform config: instead of shoehorning &lt;code&gt;system:masters&lt;/code&gt;, I used an AWS-provided cluster policy, &lt;code&gt;AmazonEKSClusterAdminPolicy&lt;/code&gt;, and associated it properly in &lt;code&gt;access_entries&lt;/code&gt;.&lt;/p&gt;
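&lt;p&gt;For anyone hitting the same wall, the shape of that fix inside the EKS module looks roughly like this. This is a sketch, not my exact repo: the account ID and entry name are placeholders, and the block syntax is for v20 of the module, so check the module docs for your version:&lt;/p&gt;

```hcl
  # Map my IAM user into the cluster with admin rights (values are placeholders)
  access_entries = {
    ak_devops = {
      principal_arn = "arn:aws:iam::123456789012:user/Ak_DevOps"

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
```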

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyve5epan5zqxclxkdvd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyve5epan5zqxclxkdvd.png" alt=" " width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A couple &lt;code&gt;terraform apply&lt;/code&gt; runs later, my user got mapped, RBAC was finally happy, &lt;code&gt;kubectl get nodes&lt;/code&gt; → works. We were good!! 🤝🤝&lt;/p&gt;

&lt;p&gt;Sometimes it’s not about who you are, but what policy ARN you carry. 😶‍🌫️😶‍🌫️Lesson learned. &lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 4: Deploy My App!
&lt;/h2&gt;

&lt;p&gt;With the cluster breathing, it was time to launch my container in its new home.&lt;/p&gt;

&lt;p&gt;✅ I wrote &lt;code&gt;deployment.yaml&lt;/code&gt; → pointed it to my ECR image.&lt;br&gt;
✅ I wrote &lt;code&gt;service.yaml&lt;/code&gt; → exposed it as a &lt;code&gt;LoadBalancer&lt;/code&gt; on AWS.&lt;br&gt;
✅ Ran &lt;code&gt;kubectl apply -f&lt;/code&gt; → watched pods spin up, nodes pull my image, service get a public IP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-monitoring-app
  labels:
    app: python-monitoring-app
spec:
  replicas: 2  # Running 2 copies so it’s reliable and handles load
  selector:
    matchLabels:
      app: python-monitoring-app
  template:
    metadata:
      labels:
        app: python-monitoring-app
    spec:
      containers:
      - name: monitoring-app
        image: 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my-cloud-app-repo:latest  # My ECR image URL
        ports:
        - containerPort: 5000  # The port my Flask app listens on inside the container
        env:
        - name: FLASK_RUN_HOST
          value: "0.0.0.0"  # Make flask listen on all interfaces, not just localhost
        resources:
          requests:
            memory: "128Mi"  # Minimum resources reserved for this container
            cpu: "100m"
          limits:
            memory: "256Mi"  # Max resources the container can use, so it doesn't hog cluster memory and slow the system down
            cpu: "200m"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and my service.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: python-monitoring-service
spec:
  type: LoadBalancer
  selector:
    app: python-monitoring-app
  ports:
  - protocol: TCP
    port: 80       # external port
    targetPort: 5000  # container port

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51kntt1yza6uarn8ilsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51kntt1yza6uarn8ilsy.png" alt=" " width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Boom! My Flask app, alive on the internet, powered by EKS.&lt;br&gt;
One tiny Python script, now scaling in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p9mm8acty21wuro24pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p9mm8acty21wuro24pc.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Upcoming: Part 3!! ArgoCD — GitOps or Go Home!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So far:&lt;/p&gt;

&lt;p&gt;✅ Python Flask app&lt;/p&gt;

&lt;p&gt;✅ Docker + ECR&lt;/p&gt;

&lt;p&gt;✅ Terraform VPC + EKS&lt;/p&gt;

&lt;p&gt;✅ Deployed via kubectl&lt;/p&gt;

&lt;p&gt;Next up: ArgoCD.&lt;br&gt;
How I turned this into a proper GitOps pipeline!! Fully automated: push code ➡️ watch the cluster sync ➡️ self-healing infra.&lt;/p&gt;

&lt;p&gt;And oh, the Dex server meltdown on &lt;code&gt;t3.small&lt;/code&gt;?&lt;br&gt;
Yeah… the late-night detective story deserves its own spotlight.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 3!! GitOps the Smart Way: ArgoCD + the Dex Saga
&lt;/h2&gt;

&lt;p&gt;So my Python Flask app was up on EKS. Cool.&lt;br&gt;
But let’s be real! &lt;em&gt;kubectl apply&lt;/em&gt; &lt;strong&gt;manually?&lt;/strong&gt; Nah.&lt;br&gt;
This is 2025, not 2015😑.&lt;/p&gt;

&lt;p&gt;I wanted proper GitOps! So I needed ArgoCD to watch my Git repo like a hawk and sync changes automatically.&lt;br&gt;
Push code → Argo picks it → deploys → cluster stays true to Git🤞🤞.&lt;br&gt;
Clean, declarative, bulletproof.&lt;/p&gt;

&lt;p&gt;Talk about ArgoCD &lt;strong&gt;Loyalty&lt;/strong&gt; to Git!! &lt;em&gt;Not sure you can relate&lt;/em&gt; 🤣🤣.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Installing ArgoCD&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I spun up a new &lt;code&gt;argocd&lt;/code&gt; namespace:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create namespace argocd&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Then installed ArgoCD with the official manifests:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6i5vpki0gq1xyiokek6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6i5vpki0gq1xyiokek6.png" alt=" " width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All good, riiiight?&lt;/p&gt;

&lt;p&gt;So far so good.&lt;br&gt;
Pods popped up: Application Controller, Repo Server, API Server…&lt;br&gt;
But then came Dex!! ArgoCD’s SSO engine. I call it "The Humbler" because, boy, was I humbled! 😭😭&lt;/p&gt;

&lt;h3&gt;
  
  
  Dex: The Tiny Pod that Broke my Night🤬🤬
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8r9883p91geivaw06ib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8r9883p91geivaw06ib.png" alt=" " width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;argocd-dex-server&lt;/code&gt; kept dying. &lt;code&gt;CrashLoopBackOff&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;Logs? Useless. Google? Meh!!&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;AWS forums?&lt;/strong&gt; All dead ends!!&lt;br&gt;
&lt;strong&gt;Reddit?&lt;/strong&gt; I found exactly one person who'd hit the issue a year ago, but he was never answered and it was never resolved; I guess he quit halfway. Totally excusable! I started to crash out. I've come too far to fail😭😭&lt;/p&gt;

&lt;p&gt;I doubled my nodes: &lt;code&gt;desired_capacity = 3&lt;/code&gt;.&lt;br&gt;
No dice.&lt;br&gt;
&lt;code&gt;kubectl describe&lt;/code&gt; → &lt;code&gt;OOMKilled&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then I did what DevOps folks do when the docs fail: I took my phone and dialed up Claude (👀).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Solution? Bigger Instances.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude said:&lt;/p&gt;

&lt;p&gt;“Your &lt;code&gt;t3.small&lt;/code&gt; nodes don’t have enough RAM. Dex needs more headroom.”&lt;/p&gt;

&lt;p&gt;Fair.&lt;/p&gt;

&lt;p&gt;So I tweaked my Terraform:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;instance_types = ["t3.medium"]&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Re-applied. Watched the nodes drain and come back bigger.&lt;br&gt;
Dex? Back to life instantly😮‍💨😮‍💨.&lt;/p&gt;

&lt;p&gt;Sometimes more RAM fixes everything 😮‍💨😮‍💨. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkodgb4u22dak9x9ajhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkodgb4u22dak9x9ajhd.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Expose ArgoCD the smart way&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, ArgoCD’s &lt;code&gt;argocd-server&lt;/code&gt; is a ClusterIP — internal only.&lt;br&gt;
So I patched it to a LoadBalancer:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl edit svc argocd-server -n argocd&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Changed &lt;code&gt;type: ClusterIP&lt;/code&gt; → &lt;code&gt;type: LoadBalancer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A shiny new ELB spun up — now my ArgoCD UI was live!!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekobpw80riuddmuklqhv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekobpw80riuddmuklqhv.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Admin Login&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Got the admin password:&lt;br&gt;
&lt;code&gt;kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floglg2wrjt2j941ccm2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floglg2wrjt2j941ccm2f.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logged in as &lt;code&gt;admin&lt;/code&gt;. Changed the password.&lt;br&gt;
Safe and sound!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5es29p8f5sw4x1au6lwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5es29p8f5sw4x1au6lwg.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to GitHub! The GitOps Cycle or Loop, or whatever you want to call it.
&lt;/h2&gt;

&lt;p&gt;I didn’t want to click through the UI to add the repo, because that would reduce the little aura I had left after the Dex issue 🥲🥲, so I did it the proper way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrote an &lt;code&gt;argo-app.yaml&lt;/code&gt; config that points to my GitHub repo:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This defines my ArgoCD app: pulls manifests from my GitHub repo, syncs to my EKS cluster, you can also do it console or manual approach too.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-cloud-monitoring-app   # This is the name of my ArgoCD app
  namespace: argocd               # Must match ArgoCD namespace we created earlier
spec:
  project: default

  source:
    repoURL: 'https://github.com/AkingbadeOmosebi/my-cloud-monitoring-app'  # My GitHub repo
    targetRevision: HEAD              # Branch to track (HEAD = default branch)
    path: manifests               # Path to my k8s manifests (deployment and service.yaml) folder inside repo

  destination:
    server: 'https://kubernetes.default.svc'  # EKS cluster endpoint inside ArgoCD
    namespace: default

  syncPolicy:           # This is where it syncs every deployment
    automated:
      prune: true       # Remove old resources if they're not in Git anymore
      selfHeal: true    # Revert drift automatically

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Applied it:&lt;br&gt;
&lt;code&gt;kubectl apply -f argo-app.yaml&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Watched ArgoCD pick up my &lt;code&gt;deployment.yaml&lt;/code&gt; &amp;amp; &lt;code&gt;service.yaml&lt;/code&gt; → deploy my app → match desired state.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5z85e7y2fv8r5ynvq4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5z85e7y2fv8r5ynvq4m.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36wjdr5yb0by0eft87j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36wjdr5yb0by0eft87j7.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pvxzoix7bqw3218uqek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pvxzoix7bqw3218uqek.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I could feel a different sensation, it was Aura. Aura was rising from everywhere. 😎😎&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moment of Truth: Scaling!!
&lt;/h2&gt;

&lt;p&gt;I tested it live:&lt;/p&gt;

&lt;p&gt;Pushed an update to replicas: 2 → Argo synced.&lt;/p&gt;

&lt;p&gt;Changed it to 4 → Argo synced.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvjx8umbipe4n044j4ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvjx8umbipe4n044j4ww.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pushed 6 → Argo synced and spun up 6 pods, just like that.&lt;/p&gt;

&lt;p&gt;This is how CD should feel: Hands off, Git is the source of truth, Argo enforces it. &lt;br&gt;
Such a beautiful relationship🥹🥹. I wish mine was like that too 🥲🥲, couldn't even wait for me to scale up 💔🥀. &lt;/p&gt;

&lt;p&gt;Anyways, it is what it is!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Final Thoughts&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So yeah, I built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Python Flask app&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Containerized with Docker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pushed to ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployed to EKS (Infra-as-Code with Terraform)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Managed through ArgoCD&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...and turned it all into a real Continuous Deployment pipeline, straight from my public Git repo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Big lessons for recruiters, tech leads and tech enthusiasts reading this:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I don’t just build, I debug.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I don’t fear YAML, I automate with it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I know how infra works from IAM quirks to cluster IPs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when things break at 2AM? I figure it out, document it so the next version is better, and sleep at 3:30AM 😃😃.&lt;/p&gt;

&lt;p&gt;If you read this far, just pause and imagine. Imagine what I can do for your team when it’s not just me in the dark with Dex. 🔥&lt;/p&gt;

&lt;h2&gt;
  
  
  Outro or Perhaps What’s Next?
&lt;/h2&gt;

&lt;p&gt;This wasn’t just another “Hello World on Kubernetes”.&lt;/p&gt;

&lt;p&gt;I built, broke, fixed, tuned, automated, scaled, then wrapped it all in GitOps so it runs itself.&lt;br&gt;
And I made mistakes on purpose (well… some😅) so I could really understand what’s happening under the hood 🌚. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vuglw91lzny0bm9k3l6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vuglw91lzny0bm9k3l6.png" alt=" " width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next up?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Templating my manifests with Helm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adding Ingress with cert-manager and SSL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maybe wiring up Prometheus &amp;amp; Grafana to watch my app’s real CPU + RAM usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And of course, more Terraform modules to make this repeatable for any project.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Why Does This Matter?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’m not here to memorize commands; I understand why they break.&lt;/p&gt;

&lt;p&gt;I’m not scared of IAM, K8s RBAC, or AWS networking.&lt;/p&gt;

&lt;p&gt;I automate the boring stuff so I can focus on shipping value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s Connect 💬
&lt;/h2&gt;

&lt;p&gt;I’m open to DevOps, Platform Engineering, SRE or Cloud Native roles where:&lt;br&gt;
✅ Cloud + K8s + Terraform are the daily bread&lt;br&gt;
✅ GitOps, automation &amp;amp; CI/CD aren’t just buzzwords&lt;br&gt;
✅ And people actually share what they learn&lt;/p&gt;

&lt;p&gt;If this sounds like your kind of crew, let’s talk.&lt;br&gt;
Or just drop a comment to geek out about EKS, ArgoCD, or your weirdest &lt;code&gt;CrashLoopBackOff&lt;/code&gt; story. I love hearing them all.&lt;/p&gt;

&lt;p&gt;📌 Check the full project repo: &lt;a href="https://github.com/AkingbadeOmosebi/my-cloud-monitoring-app/tree/main" rel="noopener noreferrer"&gt;my-cloud-monitoring-app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 Connect with me on &lt;a href="https://www.linkedin.com/in/aomosebi/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Deploying a Fully Functional Multi-AZ WordPress App on AWS ECS + RDS with Terraform &amp; Spacelift.</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Thu, 17 Jul 2025 17:32:59 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/deploying-a-fully-functional-multi-az-wordpress-app-on-aws-ecs-rds-with-terraform-spacelift-1e99</link>
      <guid>https://dev.to/akingbade_omosebi/deploying-a-fully-functional-multi-az-wordpress-app-on-aws-ecs-rds-with-terraform-spacelift-1e99</guid>
      <description>&lt;p&gt;Hey everyone! I’m Akingbade Omosebi, and I like turning ideas into real, production-grade level infrastructure.&lt;/p&gt;

&lt;p&gt;This post breaks down exactly how I deployed a WordPress app on AWS ECS, using RDS for storage, an ALB, a multi-AZ VPC, and full CI/CD via Spacelift.&lt;/p&gt;

&lt;p&gt;It’s practical, minimal fluff, and everything here was built, tested, and verified; you’ll see my real console screenshots to prove it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you’ll see here
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How I split my VPC into Public &amp;amp; Private Subnets across multiple AZs.&lt;/li&gt;
&lt;li&gt;How ECS, ALB, and RDS fit together.&lt;/li&gt;
&lt;li&gt;Why security groups matter, and how I designed them.&lt;/li&gt;
&lt;li&gt;How the Terraform files are split, no monolith .tf mess.&lt;/li&gt;
&lt;li&gt;How I ran it first locally, then automated it on Spacelift with secrets.&lt;/li&gt;
&lt;li&gt;Architecture diagram + real deployment screenshots.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What’s my goal?
&lt;/h2&gt;

&lt;p&gt;A WordPress app that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs in multiple Availability Zones.&lt;/li&gt;
&lt;li&gt;Gets traffic through an Application Load Balancer.&lt;/li&gt;
&lt;li&gt;Stores all posts/users in a MySQL RDS database in Private Subnets.&lt;/li&gt;
&lt;li&gt;Fully version-controlled and deployed through Spacelift.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why multi-AZ?&lt;/strong&gt;&lt;br&gt;
If one AZ goes down, ECS and RDS keep the site alive.&lt;/p&gt;
&lt;h2&gt;
  
  
  Overall Architecture.
&lt;/h2&gt;

&lt;p&gt;Here's my &lt;strong&gt;high-level&lt;/strong&gt; architectural diagram, which I created on draw.io:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flre496vw24hx11e80jem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flre496vw24hx11e80jem.png" alt=" " width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnets&lt;/strong&gt; hold the ALB + ECS Tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnets&lt;/strong&gt; hold the RDS DB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IGW&lt;/strong&gt; lets Public Subnets connect out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups&lt;/strong&gt; lock down who talks to who.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  VPC, Subnets &amp;amp; Internet Gateway
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;VPC: &lt;code&gt;10.0.0.0/16&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Public Subnets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10.0.1.0/24&lt;/code&gt; in &lt;code&gt;eu-central-1a&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;10.0.2.0/24&lt;/code&gt; in &lt;code&gt;eu-central-1b&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Private Subnets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10.0.3.0/24&lt;/code&gt; in &lt;code&gt;eu-central-1a&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;10.0.4.0/24&lt;/code&gt; in &lt;code&gt;eu-central-1b&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Why separate?&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Public Subnets have an IGW for inbound HTTP. Private Subnets stay internal; RDS has no direct Internet pathway.&lt;/p&gt;

&lt;p&gt;Here is my Terraform resource block for my VPC; I liken it to my whole network playground:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- VPC --------
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # This means: our entire VPC gets a lot of IPs, 65,536 for a /16
  tags = {
    Name = "${var.project_name}-vpc"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the resource block for one of my subnets, which are like little fenced yards within my VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- Public Subnets --------

# Subnet 2 in eu-central-1b
resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24" # Next block, 256 IPs
  availability_zone       = "eu-central-1b"
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.project_name}-public-2"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is my Internet Gateway, which lets traffic in and out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- Internet Gateway --------
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "${var.project_name}-igw"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the rest of the VPC configuration, check out my &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;WordPress GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Groups
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ALB SG&lt;/strong&gt;: Inbound 80 from anywhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS SG&lt;/strong&gt;: Inbound only from ALB SG.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS SG&lt;/strong&gt;: Inbound on 3306 only from ECS SG.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means the following, just as I illustrated in the diagram:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic comes in via ALB.&lt;/li&gt;
&lt;li&gt;ALB talks to only ECS.&lt;/li&gt;
&lt;li&gt;Only ECS talks to RDS.&lt;/li&gt;
&lt;li&gt;Nothing else has direct DB access; the DB should always stay private.&lt;/li&gt;
&lt;/ul&gt;
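&lt;p&gt;The full security group definitions live in my repo, but here is a minimal sketch of the ECS one, showing how the "only from the ALB SG" rule is expressed. The resource names match the ones referenced elsewhere in this post; treat the exact ports and the wide-open egress rule as illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- ECS Security Group (sketch) --------
# Only traffic from the ALB security group may reach the ECS tasks
resource "aws_security_group" "ecs_sg" {
  name   = "${var.project_name}-ecs-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_sg.id] # Source is the ALB SG, not a CIDR
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # All outbound allowed (RDS on 3306, image pulls, etc.)
    cidr_blocks = ["0.0.0.0/0"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Referencing a security group as the ingress source (instead of an IP range) is what makes the "ALB talks only to ECS, only ECS talks to RDS" chain work.&lt;/p&gt;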

&lt;h2&gt;
  
  
  ECS Fargate Cluster &amp;amp; Service
&lt;/h2&gt;

&lt;p&gt;One Cluster, multi-AZ.&lt;/p&gt;

&lt;p&gt;One Service, desired count = 2 Tasks. (I want two Tasks up, like replicas in k8s.)&lt;/p&gt;

&lt;p&gt;The Task Definition tells ECS how to run the official WordPress image, which is mapped to port 80. You can pull images from the ECR Public Gallery or from Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi3obbwa9uj34ritsc0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi3obbwa9uj34ritsc0w.png" alt="Cluster" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wfekiyow12okdoz7dzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wfekiyow12okdoz7dzh.png" alt="Clustre Dashboard" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejmzw8dihrn0j4kf8tbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejmzw8dihrn0j4kf8tbq.png" alt="Task Definition" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4tv3sb6sku5lxc31p82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4tv3sb6sku5lxc31p82.png" alt="Task Definition Dashboard" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the Task Definition code block from my ECS configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- ECS Task Definition --------
# This is the 'recipe' for your WordPress container
resource "aws_ecs_task_definition" "wordpress" {
  family                   = "${var.project_name}-task"
  network_mode             = "awsvpc"    # Needed for Fargate
  requires_compatibilities = ["FARGATE"] # We use Fargate, no EC2 to manage
  cpu                      = 512         # 0.5 vCPU
  memory                   = 1024        # 1 GB
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "wordpress"
      image     = "wordpress:latest" # Official WordPress image from Docker Hub, you can also put ECR public WordPress image here
      essential = true

      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]

      environment = [
        {
          name  = "WORDPRESS_DB_HOST"
          value = aws_db_instance.wordpress.address
        },
        {
          name  = "WORDPRESS_DB_USER"
          value = var.db_username
        },
        {
          name  = "WORDPRESS_DB_PASSWORD"
          value = var.db_password
        },
        {
          name  = "WORDPRESS_DB_NAME"
          value = var.db_name
        }
      ]
    }
  ])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is my ECS Service code block as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- ECS Service --------
# Keeps tasks alive &amp;amp; hooks them to ALB
resource "aws_ecs_service" "wordpress" {
  name            = "${var.project_name}-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.wordpress.arn
  launch_type     = "FARGATE"
  desired_count   = 2 # 2 containers for high availability; for production, run at least 2

  network_configuration {
    subnets = [
      aws_subnet.public_1.id,
      aws_subnet.public_2.id
    ]
    security_groups  = [aws_security_group.ecs_sg.id]
    assign_public_ip = true # Needed since we’re in public subnets
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.main.arn
    container_name   = "wordpress"
    container_port   = 80
  }

  depends_on = [aws_lb_listener.http] # Make sure ALB listener exists first
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the rest of the configuration, check out my &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;WordPress GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My ALB &amp;amp; Target Group
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;My ALB spans both Public Subnets.&lt;/li&gt;
&lt;li&gt;My target group uses the ip target type (not instance), which awsvpc network mode requires.&lt;/li&gt;
&lt;li&gt;Listener forwards HTTP requests to the ECS Tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flue487avq7vk7kfb2s3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flue487avq7vk7kfb2s3u.png" alt=" " width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xzu7dz4r4q7puq9lec3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xzu7dz4r4q7puq9lec3.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The resource map for my Target Group, showing that its relevant connections are in place and active.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pizfdvacy7cby1shr5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pizfdvacy7cby1shr5p.png" alt="Resource Map" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboard shows everything is healthy: based on the success codes, both replicas are in great shape.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg42323s85iyac43xewd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg42323s85iyac43xewd.png" alt="Target Group" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Health checks by the ALB: it sends a request every 30 seconds and accepts response codes in the 200-399 range; otherwise it marks the target unhealthy and routes traffic to the other provisioned replica.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ovwddred9zltka136zc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ovwddred9zltka136zc.png" alt=" " width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some code blocks taken from my ALB.tf file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- ALB --------
resource "aws_lb" "main" {
  name               = "${var.project_name}-alb"
  internal           = false # false = internet-facing
  load_balancer_type = "application"

  security_groups = [aws_security_group.alb_sg.id]
  subnets = [
    aws_subnet.public_1.id,
    aws_subnet.public_2.id
  ]

  tags = {
    Name = "${var.project_name}-alb"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My Target Group code block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; # -------- ALB Target Group --------
# This is like the guest list — who can receive traffic from the ALB
resource "aws_lb_target_group" "main" {
  name     = "${var.project_name}-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
  target_type = "ip"


  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200-399"
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  tags = {
    Name = "${var.project_name}-tg"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And from my listener code block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- ALB Listener --------
# This listens on port 80 and forwards traffic to our target group (ECS tasks)
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.main.arn
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the rest of the files, check out my GitHub Repo Link &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;WordPress GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My RDS: The DB Layer
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;MySQL DB.&lt;/li&gt;
&lt;li&gt;Multi-AZ standby.&lt;/li&gt;
&lt;li&gt;Lives in Private Subnets only.&lt;/li&gt;
&lt;li&gt;ECS connects using the private DNS endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok0gmx2vhq6f3ppsi0du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok0gmx2vhq6f3ppsi0du.png" alt=" " width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m3csr5y6x6lk8mu0ixh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m3csr5y6x6lk8mu0ixh.png" alt="RDS Dashboard Overview" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6rlnkisr5a1o7sq6lpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6rlnkisr5a1o7sq6lpz.png" alt="RDS Dashboard Configuration Tab Overview" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some code blocks from my rds.tf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- RDS MySQL Instance --------
resource "aws_db_instance" "wordpress" {
  identifier             = "${var.project_name}-db" # Unique name for my db
  allocated_storage    = 20              # 20 GB storage for DB
  storage_type         = "gp2"           # General Purpose SSD
  engine               = "mysql"         # DB engine
  engine_version       = "8.0"           # MySQL version
  instance_class       = "db.t3.micro"   # Smallest cheap instance for demo
  db_name              = var.db_name     # DB name from variables.tf
  username             = var.db_username # Master user
  password             = var.db_password # Master password
  db_subnet_group_name = aws_db_subnet_group.main.name

  # Attach RDS SG to control who can connect (only ECS)
  vpc_security_group_ids = [aws_security_group.rds_sg.id]

  # Don't keep a final snapshot when destroying; I'll only do this for dev stages
  skip_final_snapshot = true

  backup_retention_period = 7 # Keep daily backups for 7 days; increase or decrease as needed (as the Console instructs)

  # Tag for clarity, in case of confusion
  tags = {
    Name = "${var.project_name}-rds"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
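&lt;p&gt;Two details from the same layer are worth calling out: the subnet group referenced by &lt;code&gt;db_subnet_group_name&lt;/code&gt; above is what pins RDS to the private subnets, and Multi-AZ is a single flag on the instance. Here is a sketch; I'm assuming private subnet resource names &lt;code&gt;private_1&lt;/code&gt; and &lt;code&gt;private_2&lt;/code&gt; mirroring the public ones, so check the repo for the exact names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- DB Subnet Group (sketch) --------
# Referenced by db_subnet_group_name above; keeps RDS in the private subnets
resource "aws_db_subnet_group" "main" {
  name = "${var.project_name}-db-subnet-group"
  subnet_ids = [
    aws_subnet.private_1.id, # assumed resource names, mirroring public_1/public_2
    aws_subnet.private_2.id
  ]
}

# Inside aws_db_instance.wordpress, one attribute enables the standby:
#   multi_az = true # Provisions a synchronous standby in another AZ

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;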



&lt;p&gt;To view the rest of the files, check out my GitHub Repo Link &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;WordPress GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Variables &amp;amp; Secrets
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DB password is &lt;code&gt;sensitive = true&lt;/code&gt; in Terraform.&lt;/li&gt;
&lt;li&gt;I passed it to Spacelift as an Environment Variable.&lt;/li&gt;
&lt;li&gt;Never hardcode sensitive credentials in .tf files.&lt;/li&gt;
&lt;li&gt;Outputs don’t print sensitive values.&lt;/li&gt;
&lt;li&gt;Always test that secrets are functional before automating them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see in my repo, there's no secret or sensitive data in my Terraform code, yet it works well. I'll show you later how I passed my secrets and sensitive data to Spacelift without pushing them to the repo.&lt;/p&gt;
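&lt;p&gt;For completeness, here is roughly what the sensitive variable declaration looks like in variables.tf (a sketch; the description text is mine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -------- variables.tf (sketch) --------
variable "db_password" {
  description = "Master password for the RDS instance"
  type        = string
  sensitive   = true # Terraform redacts this value in plan/apply output
  # No default on purpose: the value arrives from Spacelift at runtime
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;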

&lt;h2&gt;
  
  
  Local Runs (Optional, but good practice)
&lt;/h2&gt;

&lt;p&gt;Additionally, I sometimes run my code locally, directly with Terraform, especially when I want to test a module without messing up my repo. &lt;/p&gt;

&lt;p&gt;Here are some Terraform commands I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform fmt
terraform validate
terraform plan
terraform apply    # optionally add -auto-approve, use only if you're sure
terraform destroy  # optionally add -auto-approve, use only if you're sure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CI/CD via Spacelift
&lt;/h2&gt;

&lt;p&gt;Why Spacelift?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It automates plan + apply on every successful commit and push to my repo.&lt;/li&gt;
&lt;li&gt;It stores secrets securely.&lt;/li&gt;
&lt;li&gt;It keeps infra history in version control.&lt;/li&gt;
&lt;li&gt;It lets me roll back if needed.&lt;/li&gt;
&lt;li&gt;It builds secure, collaborative pipelines for your workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzi98vh414bavnc7y0yr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzi98vh414bavnc7y0yr.png" alt="Awaiting my confirmation as instructed" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7inwbtshtk0vt55ckoiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7inwbtshtk0vt55ckoiq.png" alt="Deployment complete" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use contexts or environment variables within stacks to store secrets.&lt;br&gt;
Spacelift maps each TF_VAR-prefixed environment variable to the matching Terraform variable and passes in its value.&lt;/p&gt;
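&lt;p&gt;The naming convention does the work: an environment variable named TF_VAR_&amp;lt;name&amp;gt; feeds the Terraform variable &amp;lt;name&amp;gt;. A quick local illustration (the value here is a placeholder, never a real secret):&lt;/p&gt;

```shell
# Spacelift (and plain Terraform) read TF_VAR_<name> into variable "<name>".
# db_password matches the variable consumed by the task definition above.
export TF_VAR_db_password='placeholder-not-a-real-secret'

# Confirm the variable is visible in the environment Terraform would see:
env | grep '^TF_VAR_db_password'
# prints: TF_VAR_db_password=placeholder-not-a-real-secret
```

&lt;p&gt;Spacelift performs this export for you when the variable is set on a context or stack; locally, the same convention works for testing.&lt;/p&gt;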

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79dwtnl4pnfypnhjxn55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79dwtnl4pnfypnhjxn55.png" alt="Context already created and parameters/secrets passed and secured" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapww6za55bywzmt30ftm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapww6za55bywzmt30ftm.png" alt="New stack creation and application of Context" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does traffic flow through my setup?
&lt;/h2&gt;

&lt;p&gt;It's pretty easy and simple! &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Step 1: The user hits the ALB DNS name (or a custom domain, if one is bought and assigned via Route 53) and is routed by the ALB to a healthy ECS Task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 2: The Task container runs Apache + PHP per the WordPress image's configuration and talks to RDS over port 3306.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 3: If one AZ fails its health check (or fails for any other reason), my ECS Tasks still serve traffic from the other AZ, and the DB fails over too.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, my ECS Tasks are healthy, live, and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nde89pmrk663oheaykm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nde89pmrk663oheaykm.png" alt="ECS Task" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both replicas are spun up and running; the minimum is set to two, so if one faults or fails, another is there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7v85alqcuivemaccwe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7v85alqcuivemaccwe6.png" alt="2/2" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the tasks are up and healthy, we access the site via the ALB DNS name. At production level, teams use Route 53 or other DNS services, but for this project the ALB DNS is fine; further development will come with time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feehufo8shf03pf3rdgju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feehufo8shf03pf3rdgju.png" alt="ALB DNS" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, my WordPress site is up and live, waiting for me to set it up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falhncj32l5rc900c0eac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falhncj32l5rc900c0eac.png" alt=" " width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Account creation and setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvsv77ppl2snok4iajon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvsv77ppl2snok4iajon.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Login portal&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4z880f4csx50vef056e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4z880f4csx50vef056e.png" alt=" " width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Blog up and running, left to be customized by a designer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyk39nw577u7rj0d8017.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyk39nw577u7rj0d8017.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What did I learn?
&lt;/h2&gt;

&lt;p&gt;I learned a lot, made a couple of mistakes, and ran into some errors that naturally frustrated me at first. For example, I wondered why my RDS was not Multi-AZ until I reread the documentation on the Terraform Registry, made an isolated attempt that worked, and then compared the two and adopted the changes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unplanned personal error&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I also encountered a situation where I did not pay attention to a small detail. I wrongly configured my TF_VAR, blindly re-entering my database instance name as the actual name in my environment variable; the two values differed by just a single "-" hyphen. Cue crashing out 😭😭 and getting more coffee ☕☕ as nothing worked and nothing made sense, until I spotted my silly mistake in an inspected .json file from my RDS and my Task Definition, which showed the environment variable values did not match.&lt;/p&gt;

&lt;p&gt;This stopped my ECS task from functioning and made my ALB mark the replicas as unhealthy (understandably, since WordPress couldn't authenticate with the username).&lt;/p&gt;

&lt;p&gt;It was funny but also interesting to see how the smallest and simplest but unexpected errors can throw anyone off, while we naturally focus on the larger complex tasks.😂😂&lt;/p&gt;

&lt;p&gt;I had to research a little as I worked, to make sense of what I was doing.&lt;/p&gt;

&lt;p&gt;But these are normal human errors, and I learned to pay more attention and cross-check little details. 😎😎&lt;/p&gt;

&lt;p&gt;Other things I learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why Task Definitions are just blueprints.&lt;/li&gt;
&lt;li&gt;Why awsvpc mode needs ip Target Groups.&lt;/li&gt;
&lt;li&gt;Why Security Groups must stay tight.&lt;/li&gt;
&lt;li&gt;Why multi-AZ is worth the extra cost.&lt;/li&gt;
&lt;li&gt;Why real CI/CD beats &lt;code&gt;terraform apply&lt;/code&gt; from a laptop.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final thoughts.
&lt;/h2&gt;

&lt;p&gt;This project is 100% built from scratch, tested, deployed, and running in my own AWS account.&lt;/p&gt;

&lt;p&gt;If you’re learning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break big .tf files into small ones. Embrace modularity!&lt;/li&gt;
&lt;li&gt;Draw your architecture diagram; it helps you explain the setup to anyone and reminds you of what you're looking at.&lt;/li&gt;
&lt;li&gt;Run local first, then automate.&lt;/li&gt;
&lt;li&gt;Keep your secrets separate always.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next up!&lt;/p&gt;

&lt;p&gt;In the future, I might add more features, such as Route 53 and a custom domain name. But who knows? &lt;br&gt;
That's why you gotta stay tuned!! 😉&lt;/p&gt;

&lt;p&gt;If you want to see the rest of my actual infra code block or configuration files (terraform files), then check out my &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;WordPress GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tried my best to keep it simple, calm, clear, and direct while figuring out my tone. If you enjoyed this read, don't forget to drop a comment here.&lt;/p&gt;

&lt;p&gt;Connect on &lt;a href="https://www.linkedin.com/in/aomosebi/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://github.com/AkingbadeOmosebi/rds-ecs-wordpress-terraform" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stay curious, keep building!&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;h1&gt;
  
  
  #AWS #Terraform #ECS #RDS #CI/CD #Spacelift
&lt;/h1&gt;

</description>
      <category>ecs</category>
      <category>terraform</category>
      <category>devops</category>
      <category>rds</category>
    </item>
    <item>
      <title>🚀 How I Built, &amp; Deployed My Portfolio Site With Docker, AWS ECR, ECS-FARGATE, Terraform &amp; Spacelift.</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Wed, 09 Jul 2025 11:54:47 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/how-i-built-deployed-my-portfolio-site-with-docker-aws-ecr-ecs-fargate-terraform--34ce</link>
      <guid>https://dev.to/akingbade_omosebi/how-i-built-deployed-my-portfolio-site-with-docker-aws-ecr-ecs-fargate-terraform--34ce</guid>
      <description>&lt;p&gt;Hey folks,&lt;br&gt;
So as you know, I've been playing with Spacelift, and honestly, I'm starting to enjoy it even more than the traditional approach of just handling and deploying infrastructure. That's why I want to share a fun project of mine that I worked on today. &lt;/p&gt;

&lt;p&gt;For this project, I decided to play both roles by myself: Frontend Software Developer and DevOps Engineer. I built a portfolio site with HTML, CSS, and JavaScript and turned it into a full-on AWS Cloud DevOps project: I containerized it with Docker, pushed it to AWS Elastic Container Registry (ECR), and deployed it with AWS Elastic Container Service (ECS), all managed with Terraform and Spacelift.&lt;/p&gt;

&lt;p&gt;Before you jump in, here is the architecture diagram explaining the logical flow of the project. You can always refer back to it in case you get lost in the concepts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssse4arzg50w55h4sroq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssse4arzg50w55h4sroq.png" alt="Architecture Diagram" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I Did This&lt;/strong&gt;&lt;br&gt;
Originally, I just wanted a simple portfolio site, you know, something to show off my projects and have a personal space online. But I didn't want to stop at pushing HTML, CSS, and JavaScript to a random static host. I thought: &lt;strong&gt;&lt;em&gt;Why not make this an opportunity to show real DevOps skills too?&lt;/em&gt;&lt;/strong&gt; Makes sense, right?&lt;/p&gt;

&lt;p&gt;So, I took it up a notch — wrapped the site in a Docker container, pushed it to AWS ECR, deployed it on ECS, wrote my whole infra with Terraform, then wired it all up with Spacelift to handle the deployments automatically whenever I push changes to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-requisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you dive into something similar (which I encourage you to do), here are some basics I suggest you have ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A basic front-end project (HTML, CSS, JavaScript). I started from a template, built my frontend code on top of it, and then added some extra touches of my own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker installed locally. You'll need it to build the image for the app you've developed, tag it, and test it in a container locally first.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS account. I used my AWS free-tier account, and you can use yours too if you're eligible. You'll need it for creating ECR repos, ECS clusters, IAM roles, VPCs, and all the other essentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform installed. I used Terraform for this project, but you can use CloudFormation or another tool if you prefer; you just need some IaC tool to define your infrastructure as code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Spacelift account connected to your GitHub repo (or any IaC CI/CD tool you prefer); you can register for a free trial.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic Git knowledge. After I had finished my work and everything was going well, my brain got overexcited from this project and I accidentally typed "git init" instead of "terraform init", which really messed up my nearly perfect workflow between my local machine and the remote repo. That's where my Git skills came in handy.&lt;br&gt;
I promise, you'll use git pull, rebase, and probably force push more than once.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Phase 1: Building the Portfolio Website
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Building the Site&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nothing too fancy here, but this is where my frontend developer skills came into play: I crafted a nice portfolio using HTML, CSS, and JavaScript. I found a clean web template, stripped it down, rewrote parts, tossed in some new sections, and added my own images (&lt;strong&gt;&lt;em&gt;yes, I even took some quick professional portrait shots of myself just for this,&lt;/em&gt;&lt;/strong&gt; talk about dedication to the craft 😌😌). I made sure it was responsive, light, and modern-looking.&lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 2: Dockerizing the App
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Meet our friends, Docker Desktop and Docker Engine; without them this project wouldn't be possible&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I spun up Docker Desktop to get Docker Engine running. If you're on Windows, you'll need WSL running to do that. If you don't have Docker Desktop yet, you can get it from the official website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd61q1pgcd7v5bzvljjhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd61q1pgcd7v5bzvljjhd.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containerizing with Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, I wrote a simple Dockerfile. The goal was to serve the static site with NGINX inside a container. My Dockerfile was basically something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# I will be using the Nginx alpine image as the base image
FROM nginx:alpine

# to copy the contents of the current directory to the /usr/share/nginx/html directory in the container.
COPY . /usr/share/nginx/html

# i will expose port 80 from Nginx. This is the port that Nginx will listen on inside the container.
EXPOSE 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't know how to build Docker images, here is a guide from Docker's official website:&lt;br&gt;
&lt;a href="https://docs.docker.com/get-started/introduction/build-and-push-first-image/" rel="noopener noreferrer"&gt;Build and push your first image - docker.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also added a &lt;strong&gt;&lt;em&gt;.dockerignore&lt;/em&gt;&lt;/strong&gt; file to make sure I didn't accidentally send unnecessary stuff to the build context (like .git folders, local configs, etc.). I had initially built my image on the nginx:latest base, and it came out a bulky 960MB, so I stepped back, switched to nginx:alpine, and listed the files to ignore so the build wouldn't pull in unnecessary files. The image came out at 82MB.&lt;/p&gt;

&lt;p&gt;Here is the block of code from my &lt;strong&gt;&lt;em&gt;.dockerignore&lt;/em&gt;&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ignore Git metadata
.git

# Ignore GitHub workflows and config
.github

# Ignore Terraform infrastructure code
terraform

# Ignore documentation and config files not needed in container
README.md
Dockerfile
.dockerignore
workflows

# Optional: ignore logs and env files if any
*.log
*.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I ran docker build and docker run locally to make sure it actually worked. Seeing my site pop up locally inside a container felt satisfying.&lt;/p&gt;

&lt;p&gt;To build, use one of the following commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build --tag &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt; .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt; .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build --tag name:latest .   # the trailing dot is the build context and is extremely important; make sure you're inside your project directory.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy40ef4thixyrczjs9f8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy40ef4thixyrczjs9f8j.png" alt="Docker Build" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output of &lt;code&gt;docker images&lt;/code&gt; in Git Bash:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;my_portfolio                                                   latest    5481289d0f89   8 hours ago    82.2MB&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
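&lt;p&gt;To actually see the site, I ran the image as a container. A minimal sketch (the container name and local port here are just examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run the image in the background, mapping local port 8080 to the container's port 80
docker run -d --name portfolio_test -p 8080:80 my_portfolio:latest

# Open http://localhost:8080 in your browser, then clean up when done
docker stop portfolio_test
docker rm portfolio_test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;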

&lt;p&gt;Docker host running locally:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f5nlt8vwdjheailfi2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f5nlt8vwdjheailfi2n.png" alt="Local Host Docker App" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While that was cute, it wasn't the main objective or end goal. The true goal was to make it run somewhere more reachable, such as AWS Elastic Container Service or Elastic Kubernetes Service. So I pressed on.&lt;/p&gt;

&lt;p&gt;Before I could run any of these services on AWS, I needed some supporting services in place: a VPC, security groups, IAM roles, and ECR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC (with Subnets &amp;amp; IGW)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# My VPC network
resource "aws_vpc" "my_vpc" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name        = "my-vpc"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My 3 subnets
resource "aws_subnet" "subnet-1" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a" # my zone a subnet

  tags = {
    Name        = "subnet-1a"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

resource "aws_subnet" "subnet-2" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "eu-central-1b" # my zone b subnet

  tags = {
    Name        = "subnet-1b"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

resource "aws_subnet" "subnet-3" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "eu-central-1c" # my zone c subnet

  tags = {
    Name        = "subnet-1c"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My route table, so that my subnets can at least access the internet
resource "aws_route_table" "my_route_table" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my_igw.id
  }

  tags = {
    Name        = "my-route-table"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My subnet associations to the route table, selecting the subnets I want to have internet access
resource "aws_route_table_association" "subnet_associations" {
  count          = 3
  subnet_id      = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id, aws_subnet.subnet-3.id][count.index]
  route_table_id = aws_route_table.my_route_table.id
}

# My internet gateway
resource "aws_internet_gateway" "my_igw" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name        = "my-igw"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Security Group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# My Security group mainly for my ECS tasks
resource "aws_security_group" "ecs_tasks_sg" {
  name        = "ecs-tasks-security-group"
  description = "Security group for ECS tasks"
  vpc_id      = aws_vpc.my_vpc.id

  # SSH access (just in case I need it for debugging)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Normally, restrict to your IP for security, but open for now
    description = "SSH access"
  }

  # Allow HTTP traffic (port 80)
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP access"
  }

  # Allow HTTPS traffic (port 443)
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS access"
  }

  # Allow My_portfolio application port
  ingress {
    from_port   = 5000
    to_port     = 5000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Application port"
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "All outbound traffic"
  }

  tags = {
    Name        = "ecs-tasks-sg"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ECR&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# My ECR repository for my_portfolio project
resource "aws_ecr_repository" "my_portfolio" {
  name                 = "my_portfolio"
  image_tag_mutability = "MUTABLE" # or "IMMUTABLE" based on your requirement
  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name        = "my-portfolio-ecr"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5661yz4te8zc3bxam3jj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5661yz4te8zc3bxam3jj.png" alt="ECR" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv89fugt21mwv36hgxikd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv89fugt21mwv36hgxikd.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Pushing to ECR
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pushing my Local Docker Image to AWS ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With my image ready, I created an ECR repository in AWS using Terraform and Spacelift. Then I logged in to ECR from my terminal, tagged my image, and pushed it up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91u8py7z0rgp7joum68m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91u8py7z0rgp7joum68m.png" alt="ECR Image Push" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This part went pretty smoothly — but if you’re new to ECR, make sure you don’t forget to authenticate your Docker client using the aws ecr get-login-password command. If you skip this, your push will fail and the error messages aren’t always the friendliest.&lt;/p&gt;
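&lt;p&gt;For reference, the authenticate, tag, and push flow looks roughly like this. A minimal sketch (the account id and region match the image URL used later in this post; swap in your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate the Docker client against your ECR registry
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 194722436853.dkr.ecr.eu-central-1.amazonaws.com

# Tag the local image with the full repository URL
docker tag my_portfolio:latest 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest

# Push it up
docker push 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;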

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtt7p9c31rzykl75d932.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtt7p9c31rzykl75d932.png" alt=" " width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it was deployed, I had 3 images, of which I only needed one, as you can see below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34lmwgwqx85sw7qy0hdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34lmwgwqx85sw7qy0hdf.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix that, I added a lifecycle policy resource with two rule blocks, one for tagged images and another for untagged images, ensuring there was a maximum of 1 image at a time. I then deployed it via Terraform and Spacelift. Here is the code for that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecr_lifecycle_policy" "my_portfolio_ecr_policy" {
  repository = aws_ecr_repository.my_portfolio.name

  policy = &amp;lt;&amp;lt;EOF
{
    "rules": [
         {
      "rulePriority": 1,
      "description": "Expire untagged images",
      "selection": {
        "tagStatus": "untagged",
        "countType": "imageCountMoreThan",
        "countNumber": 1
      },
      "action": {
        "type": "expire"
      }
    },
        {
            "rulePriority": 2,
            "description": "Keep only 1 image max",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 1
            },
            "action": {
                "type": "expire"
            }
        }
    ]
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0imwnwnzpc3f5286hjer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0imwnwnzpc3f5286hjer.png" alt="Image Policy" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 4: Deploying to ECS with Terraform
&lt;/h2&gt;

&lt;p&gt;After I was done with the whole ECR and Docker part, here's where the fun (and a bit of chaos) started. I wrote my Terraform files to create an ECS cluster, a service, and a task definition.&lt;/p&gt;
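&lt;p&gt;The cluster itself is the simplest part. Here is a minimal sketch of what the cluster resource can look like (the exact block isn't shown in this post, so treat this as an assumption consistent with the &lt;code&gt;aws_ecs_cluster.portfolio_cluster&lt;/code&gt; reference used in the service later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# A bare-bones ECS cluster for the Fargate tasks to run in
resource "aws_ecs_cluster" "portfolio_cluster" {
  name = "portfolio-cluster"

  tags = {
    Name        = "portfolio-cluster"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;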

&lt;p&gt;Here is Spacelift doing the Heavy Terraform Lifting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8byn06t1bf28w7tmx7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8byn06t1bf28w7tmx7y.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the Cluster Created!:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml2g098wtrtq4v6p9psn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml2g098wtrtq4v6p9psn.png" alt="Cluster" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also needed to provision an IAM role and IAM Role Policy Attachment for the ECS. Here is Spacelift deploying it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqftlj7q7og618fe6scrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqftlj7q7og618fe6scrc.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is its code block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# So, i need two things, aws_iam_role and aws_iam_role_policy_attachment for my ECS task execution role.

resource "aws_iam_role" "ecs_task_execution_role" {
  name = "ecs-task-execution-role"
  assume_role_policy = jsonencode({    # Terraform's "jsonencode" function converts a Terraform expression result to valid JSON syntax. you can get moore of this templates on terraform registry docs site like i did here.
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"       # You must specific the service type!!! This is the service that will assume this role, in my case, it is the ECS tasks. Read more on it.
        }
      },
    ]
  })

  tags = {
    name = "ecs-task-execution-role"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and for policy Attachment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is my Policy Attachment role, you can find this within the registry docs and read more about it then modify its attributes and apply it here as i did.

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I needed to create an ECS task definition so our cluster would know what to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfti7jdhrwt4s9calccm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfti7jdhrwt4s9calccm.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv23znjqhsj1bv2byksvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv23znjqhsj1bv2byksvw.png" alt=" " width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdetqhujzizu4lg0tl09z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdetqhujzizu4lg0tl09z.png" alt=" " width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the code block for the task definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# now its time for taask definiinnition, you can find this basic example on terraform registry docs as i did, then read and modify it as you need it.

resource "aws_ecs_task_definition" "portfolio_task" {
  family                   = "portfolio-task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "portfolio-container"
      image     = "194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest"
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])

  tags = {
    Name = "portfolio-task"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, to complete my ECS setup, I needed a service resource that pulls my image from ECR and runs it within the cluster, so I created it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3u8ndiux6w03wm6ru1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3u8ndiux6w03wm6ru1h.png" alt="Spacelift Provisioning Services Resource" width="800" height="396"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Alright, so i am almost done, and here is where i add my service for my ecs and the amount i want running at all times.

resource "aws_ecs_service" "portfolio_service" {
  name            = "portfolio-service"
  cluster         = aws_ecs_cluster.portfolio_cluster.id
  task_definition = aws_ecs_task_definition.portfolio_task.arn
  launch_type     = "FARGATE"
  desired_count   = 1       # I like to think of this as replicas with self-healing in Kubernetes. Set it to the minimum number of tasks you want up and running at all times.

  network_configuration {
    subnets         = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id]
    security_groups = [aws_security_group.ecs_tasks_sg.id]
    assign_public_ip = true
  }

  tags = {
    Name = "portfolio-service"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, so good. Clean, neatly written code, carefully deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where Things Went Sideways&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everything was working really well, until I noticed my cluster wasn't actually running anything: it showed "0/1" tasks running for about 5 minutes, and that didn't spell anything good for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iuk55le755l7pijbj8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iuk55le755l7pijbj8p.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 5: Troubleshooting: When Things Broke
&lt;/h2&gt;

&lt;p&gt;Part of my duty as an engineer is to be able to repair broken things, or anything that goes wrong within my project. So I did what any DevOps or technical engineer would do: I went into troubleshooting mode and looked into the logs to see what the issue was. Luckily, I found the log and dug my way down to the root cause: a miswritten line of code, courtesy of overly happy fingers. Fixing what was broken was definitely rewarding; it boosted my confidence and gave me extra morale.&lt;/p&gt;
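&lt;p&gt;If you hit the same "0/1 tasks running" situation, the stopped task's reason is usually the fastest clue. A minimal sketch using the AWS CLI (the cluster name here is an assumption; substitute your own cluster, task ARN, and region):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List recently stopped tasks in the cluster
aws ecs list-tasks --cluster portfolio-cluster --desired-status STOPPED --region eu-central-1

# Describe one of them and read its stoppedReason field
aws ecs describe-tasks --cluster portfolio-cluster --tasks TASK_ARN --region eu-central-1 --query "tasks[0].stoppedReason"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;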

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf371v528g7updqmt4bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf371v528g7updqmt4bu.png" alt="Error Log" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what was the offending code?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The broken block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;container_definitions = jsonencode([
  {
    name      = "my_portfolio"
    image     = "my_portfolio:latest"  # &amp;lt;-- this is wrong!
    ...
  }
])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was caused by me carelessly writing just the image name and tag, rather than paying attention to the docs, which specify that this field must be the full image URL.&lt;/p&gt;

&lt;p&gt;So what was the correct block of code?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;container_definitions = jsonencode([
  {
    name      = "my_portfolio"
    image     = "194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest"
    ...
  }
])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also find it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxrsb680aj1xhkvuknc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxrsb680aj1xhkvuknc2.png" alt=" " width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 6: Lessons I Learned
&lt;/h2&gt;

&lt;p&gt;Always take your time, and double-check the documentation and the code in your Terraform ecs.tf file and every other file. Even a single mistake like this, or a port mismatch, can break the whole deployment. Thankfully, I found it &amp;gt; tweaked it &amp;gt; pushed it &amp;gt; Spacelift picked it up as usual and did the heavy lifting for me, and after a few seconds, boom!!! My portfolio was running from my own ECS cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8exsjsbj7a3q98sp82q6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8exsjsbj7a3q98sp82q6.png" alt=" " width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it was healthy as well...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhyyono9vfuoeu6wfoqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhyyono9vfuoeu6wfoqu.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But I wanted to access my app from the internet. Normally, it's advised to put an Application Load Balancer (ALB) in front, but for a small project like mine I went with a public IP, with the right ports open in the security group. The public IP was automatically assigned via the network configuration underneath the service block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network_configuration {
    subnets          = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id]
    security_groups  = [aws_security_group.ecs_tasks_sg.id]
    assign_public_ip = true
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbds8ub3yvlt0lpvbjwlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbds8ub3yvlt0lpvbjwlc.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And when I clicked the public IP address, my application was live and running. No longer on localhost but on a public IP, accessible to anyone across the globe. I shared it with a friend far away, and he confirmed it from his mobile as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztoi2vhtbkehnggab819.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztoi2vhtbkehnggab819.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automating with Spacelift CI/CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Writing Terraform is nice, but I wanted it to run automatically whenever I push changes to GitHub. Enter Spacelift, which I've been using for a while now; I bet you're tired of seeing me post about it.&lt;br&gt;
I connected my repo, set up a stack, wired up the permissions, and boom! Now every commit and push runs a plan and applies the infra config if approved.&lt;/p&gt;

&lt;p&gt;It felt good to see my commits trigger an actual pipeline that deploys my container and updates my infra with no extra manual steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screenshots or It Didn’t Happen&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because I won’t keep the cluster running forever (it costs money!), I took a bunch of screenshots as proof: the live site, my ECS cluster, task definitions, ECR repo, and my Spacelift runs. I added them to my repo’s README for anyone curious.&lt;/p&gt;

&lt;p&gt;Tip: Always do this for demo or learning projects. Trust me, you don’t want to pay AWS bills just so people can “see” your container is live forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Few Things to Watch Out For&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Be careful with your IAM permissions. ECS won’t run your tasks if you don’t have the right execution role.&lt;/p&gt;
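
&lt;p&gt;As a rough sketch, the execution role wiring in Terraform looks something like this (resource and role names here are placeholders, not the exact ones from my repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Role that the ECS agent assumes to pull images and write logs
resource "aws_iam_role" "ecs_task_execution" {
  name = "ecs-task-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# AWS-managed policy covering ECR pulls and CloudWatch Logs
resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The role's ARN then goes into the task definition's execution_role_arn. Without it, tasks fail before your container even starts.&lt;/p&gt;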

&lt;p&gt;Double-check your .dockerignore. You don’t want to push your .git or local config files to your container.&lt;/p&gt;

&lt;p&gt;Use lifecycle policies on ECR to clean up old images automatically — your storage bill will thank you.&lt;/p&gt;
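
&lt;p&gt;For example, a minimal lifecycle policy in Terraform could look like this (the repository reference and image count are illustrative, not my exact values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecr_lifecycle_policy" "cleanup" {
  repository = aws_ecr_repository.portfolio.name  # placeholder repo resource

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire all but the 10 most recent images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;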

&lt;p&gt;If you run into git messes (I did!), don’t panic. Stash, reset, pull with rebase, force push, but be careful!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started this as a simple portfolio site but turned it into a practical showcase of cloud and DevOps skills. I learned (and re-learned) so many small but real lessons along the way.&lt;/p&gt;

&lt;p&gt;If you’re a developer wanting to break into DevOps or cloud engineering, I highly recommend picking a small project like this and taking it through the whole pipeline. From local dev to a live container in the cloud, all automated.&lt;/p&gt;

&lt;p&gt;I did not include everything here as it is already long.&lt;br&gt;
However, If you want to see my repo, check it out here: &lt;a href="https://github.com/AkingbadeOmosebi/my_porfolio/tree/main" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;br&gt;
I hope this inspires someone to try the same. &lt;/p&gt;

&lt;p&gt;Thanks for reading! 🚀✨&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>docker</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Enforcing Cloud Guardrails with Spacelift Policies: My Hands-On Test with Rego, Terraform, and AWS</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Mon, 07 Jul 2025 00:11:02 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/enforcing-cloud-guardrails-with-spacelift-policies-my-hands-on-test-with-rego-terraform-and-aws-5efj</link>
      <guid>https://dev.to/akingbade_omosebi/enforcing-cloud-guardrails-with-spacelift-policies-my-hands-on-test-with-rego-terraform-and-aws-5efj</guid>
      <description>&lt;p&gt;After getting comfortable using Spacelift to automate my AWS infrastructure with Terraform, I wanted to push myself a bit further, so I decided to dig into something that often gets overlooked: Policies.&lt;/p&gt;

&lt;p&gt;In real-world organizations, you rarely have total freedom to spin up any resource you want, any way you want. Teams need guardrails: ways to make sure people stick to the right instance types, permissions, regions, and cost limits. That’s where Spacelift’s policy engine comes in. And honestly? I struggled with it a lot as part of my failure journey, but it paid off in a very satisfying way. It’s pretty interesting once you get your hands dirty.&lt;/p&gt;

&lt;p&gt;Before we dive in, let me drop an architectural diagram so you can see the concept up front (like a spoiler).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq8x0pg1beqf6iilwrwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq8x0pg1beqf6iilwrwz.png" alt="Architectural diagram" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with Policies: Templates &amp;amp; Manual Rego&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;To get my head around it, I started with Spacelift’s template policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagqyvopgfoq0qo1pbpji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagqyvopgfoq0qo1pbpji.png" alt="Policy Templates Selection" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;They provide some great examples to learn from. But I didn’t stop there. I wanted to see if I could write my own Rego policies from scratch too.&lt;/p&gt;

&lt;p&gt;My experiment?&lt;br&gt;
 Allow only t2.nano EC2 instances ✅&lt;br&gt;
 Prohibit t3.micro instances ❌&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should any of these matter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You may ask why. In my opinion, this is what matters to businesses, and it shows where I can fit in.&lt;/p&gt;

&lt;p&gt;In a real company setup, you might have policies that say:&lt;/p&gt;

&lt;p&gt;“Developers can only launch small test instances, not production-grade ones.”&lt;br&gt;
Or:&lt;br&gt;
“No one is allowed to use certain instance families because they’re too expensive.”&lt;/p&gt;

&lt;p&gt;So I wanted to see exactly how you can enforce rules like that before someone clicks "Apply", and to make it clear why something is blocked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Wrote It (and Broke It)&lt;/strong&gt;&lt;br&gt;
Writing Rego for the first time is honestly a bit of a brain twist. I made plenty of mistakes! Syntax errors, logic that didn’t match what I meant, rules that blocked the wrong thing. Take this block of code I wrote, which seems nice and logical but is flawed somewhere.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# secondary_only_t2_nano_instances - plan

package spacelift

deny[message] {
  rc := input.resource_changes[_]

  rc.type == "aws_instance"
  rc.change.actions[_] == "create"

  instance_type := rc.change.after.instance_type

  instance_type != "t2.nano"

  message := sprintf("Only t2.nano instances are allowed. Found disallowed type '%v' at %v", [instance_type, rc.address])
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or this one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deny_t3_micro_instance - plan

package spacelift

deny[message] {
  some i
  rc := input.resource_changes[i]

  rc.type == "aws_instance"
  rc.change.actions[_] == "create"

  instance_type := rc.change.after.instance_type

  instance_type == "t3.micro"

  message := sprintf("Provisioning t3.micro instances is not allowed: %v", [rc.address])
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or was it when I failed trying to combine both rules into one policy? It checked out logically and looked good on paper, but still failed somehow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package spacelift

# Block if creating anything not t2.nano
deny[message] {
  rc := input.resource_changes[_]
  rc.type == "aws_instance"
  rc.change.actions[_] == "create"
  instance_type := rc.change.after.instance_type
  instance_type != "t2.nano"
  message := sprintf("Only t2.nano instances are allowed (create). Found disallowed type '%v' at %v", [instance_type, rc.address])
}

# Block if updating anything not t2.nano
deny[message] {
  rc := input.resource_changes[_]
  rc.type == "aws_instance"
  rc.change.actions[_] == "update"
  instance_type := rc.change.after.instance_type
  instance_type != "t2.nano"
  message := sprintf("Only t2.nano instances are allowed (update). Found disallowed type '%v' at %v", [instance_type, rc.address])
} 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the policies I created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqx7a2867jh0co0lw9ch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqx7a2867jh0co0lw9ch.png" alt="Policies" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Regardless, these were failures. I deliberately kept them rather than deleting them, so I could return and learn from my mistakes.&lt;/p&gt;

&lt;p&gt;That’s part of why I’m documenting this: the mistakes helped me actually learn how it works.&lt;/p&gt;

&lt;p&gt;But guess what?! When I finally got my policy working, Spacelift did exactly what it should: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For7gbfx5ebovg6s2vayf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For7gbfx5ebovg6s2vayf.png" alt="Working Policy" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I tried to provision a t3.micro instance, the policy failed the plan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41sypk4a7idgjp0k9rz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41sypk4a7idgjp0k9rz2.png" alt="Policy deny enforcded" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But instead of the usual boring “Policy was denied” message, I set a custom message: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok9mnttctqal13ynksv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok9mnttctqal13ynksv0.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“&lt;strong&gt;&lt;em&gt;Selected policy denied by Admin, only t2.nano instances are permitted or allowed.&lt;/em&gt;&lt;/strong&gt;”&lt;/p&gt;

&lt;p&gt;That little touch alone makes a huge difference, especially when teams grow and you need clear feedback instead of cryptic errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Putting It All Together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Of course, I tested my policy by actually provisioning an EC2 instance of type t2.nano through my Terraform code and Spacelift stack. It passed the policy check, applied successfully, and spun up the instance exactly as expected.&lt;/p&gt;
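
&lt;p&gt;For context, the kind of resource that passes this check is nothing more exotic than a block like the following (the AMI ID and names are placeholders, not my actual values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "policy_test" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.nano"               # the only type the plan policy allows

  tags = {
    Name = "spacelift-policy-test"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Change instance_type to anything else and the plan policy denies the run before it can apply.&lt;/p&gt;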

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8pc25ofv4m5n70os8zm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8pc25ofv4m5n70os8zm.png" alt="t2.nano running" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also learned how policies are attached to stacks in Spacelift:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For2795digkm1a617783u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For2795digkm1a617783u.png" alt="Attaching a policy of my choice" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can choose whether your policy runs on plan events, push events, or others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flikxramlxwossvbt0f3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flikxramlxwossvbt0f3h.png" alt="Plan Policies " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You attach the policy to the specific stack you want to enforce it on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And when you don’t need it, you can detach it. Super easy!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lessons &amp;amp; Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project and learning taught me a lot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Rego is powerful! But you’ll probably break it a few times before you get it right. That’s normal!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clear policy messages make a big difference for teams. Explain why something failed, don’t just throw an error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Spacelift policies are a practical way for organizations to balance freedom and control. Terraform doesn’t care what you write, but your company and its budget probably do.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’re getting into Terraform automation, don’t skip over policies. They’re not just “enterprise stuff”, they’re how you scale infrastructure safely. And like most things, you learn best by writing one, breaking it, and fixing it yourself.&lt;/p&gt;

&lt;p&gt;👀 If you missed it, you may want to read my first Spacelift + Terraform story here:&lt;br&gt;
&lt;a href="https://dev.to/akingbade_omosebi/provisioning-aws-resources-with-terraform-through-spaceliftio-2n9n"&gt;Provisioning AWS Resources with Terraform through Spacelift.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AkingbadeOmosebi/AWS-Terraform-Spacelift.io" rel="noopener noreferrer"&gt;GitHub Repo link for my spacelift project.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next up, I’ll be showing you less coding and more of the chilled, laid-back side of Spacelift, so you don’t get nervous and run away from Rego.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>spacelift</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Provisioning AWS resources with Terraform through Spacelift.io</title>
      <dc:creator>Akingbade Omosebi</dc:creator>
      <pubDate>Fri, 27 Jun 2025 15:18:35 +0000</pubDate>
      <link>https://dev.to/akingbade_omosebi/provisioning-aws-resources-with-terraform-through-spaceliftio-2n9n</link>
      <guid>https://dev.to/akingbade_omosebi/provisioning-aws-resources-with-terraform-through-spaceliftio-2n9n</guid>
      <description>&lt;p&gt;When I first started working with Terraform, one of my biggest concerns was how to manage my infrastructure as code in a clean, automated, and secure way—without spending all my time running terraform apply manually or worrying about who has access to what. I researched and did some digging, I found different interesting options, but that’s when I also stumbled upon Spacelift.&lt;/p&gt;

&lt;p&gt;Spacelift is basically a modern CI/CD platform built specifically for Infrastructure as Code. Think of it as a bridge between your version control system (like GitHub) and your cloud resources—helping you automate Terraform workflows, manage policies, and handle approvals, all in one place.&lt;/p&gt;

&lt;p&gt;In this write-up, I want to share a quick story about how I used Spacelift to provision a VPC on AWS from a Terraform configuration sitting in my GitHub repo. I’ll walk you through what I did, why I did it this way, and some small lessons I picked up along the way. If you’re curious about how to get Terraform and Spacelift working together in the real world, I hope this helps you get started.&lt;/p&gt;

&lt;p&gt;Before I jump into it, let me share with you the architecture diagram I designed with Draw.io:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenrt5l9ad9v4l0xxr4c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenrt5l9ad9v4l0xxr4c0.png" alt=" " width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I Picked Spacelift for This&lt;/strong&gt;&lt;br&gt;
Before this, I was managing my Terraform runs manually on my local machine. It worked, but it didn’t feel sustainable — especially when you think about team collaboration, state file security, and approvals for production changes.&lt;/p&gt;

&lt;p&gt;Spacelift stood out to me because it plugs right into my GitHub repo, listens for changes, and takes care of running terraform plan and terraform apply in a safe, controlled way. Plus, it lets me see exactly what’s going to change before I hit “approve.” That extra visibility really helps when you’re touching cloud resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Wanted to Build&lt;/strong&gt;&lt;br&gt;
For this little experiment, I kept it simple: a custom VPC on AWS. I wanted to have my own network, subnets, internet gateway — the basics that any modern cloud setup needs. Nothing too fancy, but enough to see Spacelift and Terraform working together end to end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Set It Up&lt;/strong&gt;&lt;br&gt;
Here’s the high-level flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Write the Terraform Code&lt;br&gt;
I wrote a main.tf that defines my VPC, subnets, internet gateway, and route tables. I pushed this to a new GitHub repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect GitHub to Spacelift&lt;br&gt;
I created a new Spacelift stack, pointed it at my repo, and connected it to my AWS account using IAM credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run and Approve&lt;br&gt;
Spacelift detected my Terraform config, ran a plan, and showed me the changes. After reviewing, I gave it approval/confirmation and Spacelift applied the configuration to my AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it — my VPC was live, provisioned automatically from my GitHub code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is the Terraform Code I Used&lt;/strong&gt;&lt;br&gt;
I kept my Terraform configuration as simple and clean as possible — just enough to spin up a VPC with a few subnets. Here’s what it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf

resource "aws_vpc" "my_vpc" { 
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "my-simple-vpc"
  }
}

# Subnet 1
resource "aws_subnet" "subnet1" {
  vpc_id     = aws_vpc.my_vpc.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "subnet-1"
  }
}

# Subnet 2
resource "aws_subnet" "subnet2" {
  vpc_id     = aws_vpc.my_vpc.id
  cidr_block = "10.0.2.0/24"

  tags = {
    Name = "subnet-2"
  }
}

# Subnet 3
resource "aws_subnet" "subnet3" {
  vpc_id     = aws_vpc.my_vpc.id
  cidr_block = "10.0.3.0/24"

  tags = {
    Name = "subnet-3"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
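
&lt;p&gt;The write-up above also mentions an internet gateway and route tables; they’re not in the snippet, but here is a minimal sketch of what they could look like (the resource names below are illustrative, not necessarily what’s in the repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# igw.tf (illustrative sketch)

# Internet gateway attached to the VPC
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "my-simple-igw"
  }
}

# Public route table sending all outbound traffic through the IGW
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public-rt"
  }
}

# Associate subnet-1 with the public route table
resource "aws_route_table_association" "subnet1_public" {
  subnet_id      = aws_subnet.subnet1.id
  route_table_id = aws_route_table.public.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;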



&lt;p&gt;And here’s my provider block, so Terraform knows which cloud and region to talk to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "6.0.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What This Code Does (Quick Breakdown)&lt;/strong&gt;&lt;br&gt;
VPC: The first block creates a VPC with a /16 CIDR block (10.0.0.0/16), which gives me 65,536 private IP addresses to carve out subnets from later.&lt;/p&gt;

&lt;p&gt;Subnets: I added three subnets inside that VPC, each with a /24 CIDR block. Each subnet has a simple Name tag so it’s easier to recognize in the AWS console.&lt;/p&gt;

&lt;p&gt;Provider: Finally, the provider config makes sure Terraform uses the AWS provider and targets the eu-central-1 region (Frankfurt, in my case).&lt;/p&gt;
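
&lt;p&gt;To make the CIDR math concrete, here’s how those blocks lay out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10.0.0.0/16  =  10.0.0.0 - 10.0.255.255   (65,536 addresses)
10.0.1.0/24  =  10.0.1.0 - 10.0.1.255     (256 addresses, 251 usable; AWS reserves 5 per subnet)
10.0.2.0/24  =  10.0.2.0 - 10.0.2.255
10.0.3.0/24  =  10.0.3.0 - 10.0.3.255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;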

&lt;p&gt;All of this Terraform code can be derived from the official Terraform Registry. Here is the registry page I used: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;AWS Terraform Registry&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small Tips I Picked Up&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Version Pinning&lt;/strong&gt;: I pinned the AWS provider version (6.0.0). It’s a good habit — this helps avoid surprises when a new provider version has breaking changes.&lt;br&gt;
&lt;strong&gt;Tags&lt;/strong&gt;: Adding clear tags saves you headaches later, especially when you have multiple VPCs or subnets.&lt;br&gt;
&lt;strong&gt;State Management&lt;/strong&gt;: Since Spacelift handles remote state for me, I didn’t have to mess around with setting up an S3 bucket and DynamoDB table for locking — Spacelift does this behind the scenes.&lt;/p&gt;
&lt;/blockquote&gt;
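
&lt;p&gt;On version pinning: an exact pin like 6.0.0 locks everything down, while a pessimistic constraint is a common middle ground that accepts compatible updates. A quick sketch of both forms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exact pin - always uses exactly this provider version
version = "6.0.0"

# Pessimistic constraint - allows any 6.x release, blocks 7.0 and above
version = "~&gt; 6.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;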

&lt;p&gt;&lt;strong&gt;What’s Next?&lt;/strong&gt;&lt;br&gt;
After pushing this code to my GitHub repo, Spacelift automatically detected the changes, ran terraform plan, and let me preview what would happen. When I hit Approve, Spacelift ran terraform apply and my VPC popped up in AWS — no manual runs from my laptop.&lt;/p&gt;

&lt;p&gt;Here is what the GitHub repo I used looks like.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z9q9horjrbtyjsz1q8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z9q9horjrbtyjsz1q8g.png" alt=" " width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the link to the GitHub repo I worked with, in case you want to access it. &lt;a href="https://github.com/AkingbadeOmosebi/AWS-Terraform-Spacelift.io" rel="noopener noreferrer"&gt;My GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is my AWS VPC page, still empty, before Spacelift provisioned its resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs5wheozm8muo34m9nom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs5wheozm8muo34m9nom.png" alt=" " width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is Spacelift running the Apply phase of the deployment cycle, right after the Init and Plan phases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimgqz6nsrc5plsl6ueqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimgqz6nsrc5plsl6ueqq.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is Spacelift once it had completed the Apply phase and finished its run cycle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsf7kolgomhcmhtbtkqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsf7kolgomhcmhtbtkqb.png" alt=" " width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is that formerly empty AWS VPC page, now showing the provisioned resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v76f6vfgo9cnqamjzjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v76f6vfgo9cnqamjzjk.png" alt=" " width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deploy your code to your cloud provider, for example AWS, you need to have an account, preferably an IAM user with least-privilege permissions (a good security practice).&lt;/p&gt;

&lt;p&gt;You need to "create" and "generate" a security access credentials from your IAM user, and download the CSV file, within it you will have your credentials.&lt;/p&gt;

&lt;p&gt;The two credentials Spacelift needs to work with AWS are the "Access Key ID" and the "Secret Access Key".&lt;/p&gt;

&lt;p&gt;Once generated, the credentials would take this form if dropped directly into a provider block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region     = "us-west-2"
  access_key = "sample-my-access-key" # Your Key
  secret_key = "sample-my-secret-key" # Your Key
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
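
&lt;p&gt;A quick caveat here: hardcoding keys in your .tf files is risky, because anything committed to Git can leak. A safer pattern is to keep the provider block key-free and let the AWS provider pick the credentials up from environment variables configured on the stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# provider.tf - no keys in the code; the AWS provider reads
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the
# environment variables set on the Spacelift stack
provider "aws" {
  region = "eu-central-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;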



&lt;p&gt;You need to input them into your Spacelift stack’s settings to configure and authenticate it. &lt;/p&gt;

&lt;p&gt;Here is a simple direction to follow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp00r12j4kr6yj1ia9mti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp00r12j4kr6yj1ia9mti.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, here’s an extra story from when I broke my configuration: I came back at a different hour, forgot to re-review my code from scratch, and accidentally pushed an unfinished config to my repo. (This is how Spacelift saved me.)&lt;/p&gt;

&lt;p&gt;Right after getting the initial setup working, I decided to tweak something in my Terraform code. I made a quick change, pushed it to the GitHub repo… and boom: Spacelift flagged an error and deliberately let the run fail.&lt;/p&gt;

&lt;p&gt;Turns out, I had pushed an invalid configuration by mistake. But here’s the cool part:&lt;br&gt;
Spacelift picked it up immediately, ran the terraform plan, and failed the run without applying anything. That moment was a good reminder of why I’m using this tool in the first place — it acts like a guardrail.&lt;/p&gt;

&lt;p&gt;Instead of letting broken code touch my AWS environment, Spacelift stopped the pipeline, showed me exactly where the config was wrong, and kept my infrastructure safe. Once I fixed the mistake, committed the changes, and pushed again, everything went through cleanly.&lt;/p&gt;

&lt;p&gt;Here’s a quick snapshot from that failed run (see below ⬇️). I left it in on purpose — because real DevOps isn’t perfect, and that’s okay. What matters is catching things early.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nyb6g7kwxchaaftlhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81nyb6g7kwxchaaftlhi.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m7m55pzunhg4440wzf0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m7m55pzunhg4440wzf0.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I resolved the config issue, which is why you can see the run finish successfully above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;: So, what did I learn?&lt;/p&gt;

&lt;p&gt;This small project may seem basic — just a VPC and some subnets — but for anyone just starting out, or wondering how or where to start, it serves as a solid intro to combining Terraform with Spacelift in a real-world workflow.&lt;/p&gt;

&lt;p&gt;Personally, it showed me how powerful automation can be when paired with the right tools and a good version control setup.&lt;/p&gt;

&lt;p&gt;Here’s what stood out the most:&lt;/p&gt;

&lt;p&gt;🛡️ &lt;strong&gt;Safety First&lt;/strong&gt;: Spacelift acted like a second pair of eyes. It caught my broken code and stopped it from being applied. That alone is worth the setup.&lt;/p&gt;

&lt;p&gt;🧠 &lt;strong&gt;Git-Driven Infra&lt;/strong&gt;: Everything I did was tracked in Git. No more guessing what changed or why — just clean commits and transparent change history.&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Confidence Boost&lt;/strong&gt;: Watching Spacelift plan and apply my infrastructure automatically made me feel more confident about scaling future projects.&lt;/p&gt;

&lt;p&gt;I’m only just getting started with IaC automation, but this experiment gave me the motivation to go deeper. &lt;/p&gt;

&lt;p&gt;I have also used Spacelift to provision resources on Azure, and now I am looking forward to GCP.&lt;/p&gt;

&lt;p&gt;Here is Spacelift managing Azure Resources. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3xvsgmoumi36w18mzub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3xvsgmoumi36w18mzub.png" alt="Azure Slack" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the Spacelift run lifecycle for the Azure resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv0c93xc86lnv14ntsaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv0c93xc86lnv14ntsaf.png" alt="Azure Spacelift" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in my AWS repo, I’ll be looking into adding network ACLs, route tables, maybe even EC2 instances — all managed through the same Spacelift pipeline.&lt;/p&gt;

&lt;p&gt;If you’re exploring Terraform and want a smoother CI/CD experience for your infrastructure, I genuinely recommend giving Spacelift a shot. It’s like Terraform on autopilot — but with controls.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading through, feel free to practice with Spacelift.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>spacelift</category>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
