<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sagark</title>
    <description>The latest articles on DEV Community by sagark (@sagark4578).</description>
    <link>https://dev.to/sagark4578</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1106266%2F71d4b76d-8051-4b2c-a00b-cab0c1af2bbc.jpeg</url>
      <title>DEV Community: sagark</title>
      <link>https://dev.to/sagark4578</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sagark4578"/>
    <language>en</language>
    <item>
      <title>📢 SOX Compliance Isn’t Just for Auditors — It Starts with DevOps</title>
      <dc:creator>sagark</dc:creator>
      <pubDate>Tue, 15 Jul 2025 06:51:58 +0000</pubDate>
      <link>https://dev.to/sagark4578/sox-compliance-isnt-just-for-auditors-it-starts-with-devops-2f46</link>
      <guid>https://dev.to/sagark4578/sox-compliance-isnt-just-for-auditors-it-starts-with-devops-2f46</guid>
      <description>&lt;h2&gt;
  
  
  👋 Hey, I'm a DevOps Engineer at a SaaS Company
&lt;/h2&gt;

&lt;p&gt;We build a B2B financial analytics product. It processes customer billing data and integrates with ERPs like NetSuite and SAP.&lt;/p&gt;

&lt;p&gt;One day, our CTO dropped a bombshell:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"We’re going public in 18 months. Time to get SOX compliant."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My first thought?&lt;br&gt;
&lt;em&gt;"Wait… isn’t that for finance and auditors?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Nope. Turns out, SOX isn’t just a legal checkbox; it’s a mandate that touches &lt;strong&gt;how we write, deploy, and manage code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let me walk you through &lt;strong&gt;how I learned that and what we did.&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  🔍 What Even Is SOX?
&lt;/h2&gt;

&lt;p&gt;SOX (Sarbanes-Oxley Act) is a U.S. law that protects shareholders from fraud.&lt;br&gt;
It forces companies to ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data accuracy&lt;/li&gt;
&lt;li&gt;Strong internal controls&lt;/li&gt;
&lt;li&gt;Proper access management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For DevOps? It means:&lt;/p&gt;

&lt;p&gt;🔐&lt;strong&gt;“Only the right people can touch the right systems at the right time — and everything must be logged.”&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  🎯 Where DevOps Fits Into SOX
&lt;/h2&gt;

&lt;p&gt;Turns out, &lt;strong&gt;a huge chunk of SOX requirements land squarely on DevOps&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;DevOps Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code changes must be reviewed&lt;/td&gt;
&lt;td&gt;PR workflows &amp;amp; branch protection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No cowboy deploys&lt;/td&gt;
&lt;td&gt;CI/CD approvals, GitOps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No one-man root access&lt;/td&gt;
&lt;td&gt;IAM, SSO, RBAC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logs must be available for audits&lt;/td&gt;
&lt;td&gt;Centralized logging &amp;amp; retention&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DR plans must exist&lt;/td&gt;
&lt;td&gt;Infra backup &amp;amp; testing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;
  
  
  🛠️ Here’s What We Did — Step by Step
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. 🔐 Locked Down Access
&lt;/h3&gt;

&lt;p&gt;We started with access.&lt;br&gt;
No more "I just need prod access for a minute."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Okta SSO&lt;/strong&gt; across GitHub, Jenkins, AWS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role-based access&lt;/strong&gt; (devs can’t touch prod)&lt;/li&gt;
&lt;li&gt;Access expires automatically unless renewed&lt;/li&gt;
&lt;li&gt;Logs pushed to &lt;strong&gt;Datadog&lt;/strong&gt; for traceability&lt;/li&gt;
&lt;/ul&gt;
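
&lt;p&gt;As a rough sketch (the role name and attached policy are illustrative, not our actual config), time-bound, SSO-only prod access in Terraform might look like this:&lt;/p&gt;

```hcl
# Illustrative only: a read-only prod role that can only be assumed
# through the Okta SAML identity provider, with sessions capped at 1 hour.
resource "aws_iam_role" "prod_readonly" {
  name                 = "prod-readonly" # hypothetical name
  max_session_duration = 3600            # access expires with the session
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_saml_provider.okta.arn } # assumed defined elsewhere
      Action    = "sts:AssumeRoleWithSAML"
      Condition = { StringEquals = { "SAML:aud" = "https://signin.aws.amazon.com/saml" } }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "prod_readonly" {
  role       = aws_iam_role.prod_readonly.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```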

&lt;p&gt;✅ Result: No shadow access. Everything is tracked.&lt;/p&gt;


&lt;h3&gt;
  
  
  2. 🧾 Introduced Git Discipline
&lt;/h3&gt;

&lt;p&gt;We couldn’t allow code to sneak into production anymore.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All changes go through &lt;strong&gt;Pull Requests&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2 reviewers required&lt;/strong&gt; on financial modules&lt;/li&gt;
&lt;li&gt;PRs must be tied to Jira tasks&lt;/li&gt;
&lt;li&gt;No force pushes, no direct &lt;code&gt;main&lt;/code&gt; commits&lt;/li&gt;
&lt;/ul&gt;
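
&lt;p&gt;With the GitHub Terraform provider, the branch rules above can be codified roughly like this (the repository name is a placeholder):&lt;/p&gt;

```hcl
# Sketch of branch protection matching the rules above; adjust to taste.
resource "github_branch_protection" "main" {
  repository_id = "billing-service" # hypothetical repo
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = 2 # two reviewers on financial modules
  }

  require_signed_commits = true # GPG-signed commits only
  allows_force_pushes    = false
  allows_deletions       = false
}
```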

&lt;p&gt;We even enforced &lt;strong&gt;GPG commit signing&lt;/strong&gt;.&lt;br&gt;
We treat Git like the Bible during audits.&lt;/p&gt;


&lt;h3&gt;
  
  
  3. 🚦 Gate the Pipeline
&lt;/h3&gt;

&lt;p&gt;Deploying to production now looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PR Approved → Merged to Main → GitHub Actions → Manual Approver → ArgoCD Sync → Production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Environments&lt;/strong&gt; require human approval before deploy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ArgoCD&lt;/strong&gt; ensures only what’s in Git gets deployed&lt;/li&gt;
&lt;li&gt;Every deployment is logged with SHA, author, and timestamp&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We literally can't bypass this process — and that's the point.&lt;/p&gt;
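
&lt;p&gt;A minimal GitHub Actions workflow for this gate might look like the sketch below (the app name is a placeholder); the "production" environment is configured in GitHub to require a human approver, so the job blocks until someone signs off:&lt;/p&gt;

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production # gate: requires manual approval in GitHub
    steps:
      - uses: actions/checkout@v4
      - name: Trigger Argo CD sync
        run: argocd app sync billing-app # hypothetical app name
```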




&lt;h3&gt;
  
  
  4. 📦 Used Terraform for Everything
&lt;/h3&gt;

&lt;p&gt;Infrastructure was our next target.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every subnet, RDS, role, or bucket is in &lt;strong&gt;Terraform&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;PR-based changes with &lt;strong&gt;peer review&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;We run &lt;strong&gt;tfsec&lt;/strong&gt; to catch misconfigurations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform plan&lt;/code&gt; logs get archived in S3&lt;/li&gt;
&lt;/ul&gt;
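
&lt;p&gt;The scan-and-archive step can be sketched as a CI job like this (the bucket name is a placeholder):&lt;/p&gt;

```yaml
plan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: tfsec . # fail the build on misconfigurations
    - run: terraform plan -no-color | tee plan.txt
    - run: aws s3 cp plan.txt s3://audit-evidence/plans/${{ github.sha }}.txt
```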

&lt;p&gt;SOX loves this. Auditors love this. We love this.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. 🔐 Secured Our Secrets
&lt;/h3&gt;

&lt;p&gt;We ditched &lt;code&gt;.env&lt;/code&gt; files for good.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switched to &lt;strong&gt;HashiCorp Vault / AWS Secrets Manager&lt;/strong&gt; + OIDC from GitHub Actions&lt;/li&gt;
&lt;li&gt;Secrets rotate automatically&lt;/li&gt;
&lt;li&gt;Access scoped per environment&lt;/li&gt;
&lt;li&gt;Nothing ever hits disk or logs&lt;/li&gt;
&lt;/ul&gt;
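
&lt;p&gt;The OIDC trust between GitHub Actions and AWS can be sketched like this (the account and repository names are placeholders):&lt;/p&gt;

```hcl
# Illustrative: GitHub Actions assumes this role via OIDC, so no
# long-lived AWS keys are stored as repository secrets.
resource "aws_iam_role" "gha_secrets" {
  name = "gha-secrets-read" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn } # assumed defined elsewhere
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringLike = { "token.actions.githubusercontent.com:sub" = "repo:acme/billing-service:*" }
      }
    }]
  })
}
```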

&lt;p&gt;Secrets are no longer "tribal knowledge." They’re managed.&lt;/p&gt;




&lt;h3&gt;
  
  
  6. 📜 Built an Audit Trail
&lt;/h3&gt;

&lt;p&gt;We feed &lt;strong&gt;everything&lt;/strong&gt; into our logging stack (Datadog) and S3:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudTrail logs&lt;/li&gt;
&lt;li&gt;GitHub audit logs&lt;/li&gt;
&lt;li&gt;Vault/SM access logs&lt;/li&gt;
&lt;li&gt;ArgoCD sync events&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Our auditors can search "who touched X system on Y day" and find an exact answer — instantly.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  7. 💾 Ran Recovery Drills
&lt;/h3&gt;

&lt;p&gt;We scheduled quarterly DR tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snapshots of prod databases&lt;/li&gt;
&lt;li&gt;Full restoration to staging&lt;/li&gt;
&lt;li&gt;Compare actual vs. expected values&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We even simulate outages.&lt;br&gt;
Auditors want to see you don’t just &lt;em&gt;have&lt;/em&gt; a plan — you’ve &lt;em&gt;used&lt;/em&gt; it.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤯 What Surprised Me
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SOX doesn’t stop innovation&lt;/strong&gt; — it sharpens it.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Most of what SOX requires are just &lt;strong&gt;good DevOps practices&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immutable infra&lt;/li&gt;
&lt;li&gt;Git-based workflows&lt;/li&gt;
&lt;li&gt;Principle of least privilege&lt;/li&gt;
&lt;li&gt;Automated logging&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;We just had to formalize and enforce them.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ My Personal SOX DevOps Checklist
&lt;/h2&gt;

&lt;p&gt;🔒 SSO + RBAC across all tools&lt;br&gt;
📜 All infra in Terraform, all changes in PRs&lt;br&gt;
🧾 Deployment via GitOps only, no manual pushes&lt;br&gt;
🔐 Vault for secrets, rotated and scoped&lt;br&gt;
📦 All logs centralized + retained for multiple years&lt;br&gt;
💾 Disaster recovery tested and documented&lt;/p&gt;




&lt;h2&gt;
  
  
  💬 Final Thought
&lt;/h2&gt;

&lt;p&gt;If your company is headed toward IPO, or you work on systems that touch finance — start implementing SOX-aligned DevOps today.&lt;/p&gt;

&lt;p&gt;Not just for compliance — but for &lt;strong&gt;clarity, security, and control&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The more you automate for compliance, the more time you earn for engineering.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🙌 Let’s Talk
&lt;/h2&gt;

&lt;p&gt;Have you built a compliant pipeline?&lt;br&gt;
Need a Terraform/GitHub starter for SOX?&lt;br&gt;
Want a checklist for your infra team?&lt;/p&gt;

&lt;p&gt;Let’s chat — I’d love to compare notes.&lt;/p&gt;

&lt;p&gt;🔨🤖 Built it. Secured it. Audited it.&lt;/p&gt;




</description>
      <category>terraform</category>
      <category>devops</category>
      <category>sox</category>
      <category>aws</category>
    </item>
    <item>
      <title>✨ Deploying MongoDB Atlas on AWS EKS Using Terraform Like a Pro💡with automated secrets and dynamic environments</title>
      <dc:creator>sagark</dc:creator>
      <pubDate>Wed, 16 Apr 2025 09:47:35 +0000</pubDate>
      <link>https://dev.to/sagark4578/deploying-mongodb-atlas-on-aws-eks-using-terraform-like-a-prowith-automated-secrets-and-dynamic-1dho</link>
      <guid>https://dev.to/sagark4578/deploying-mongodb-atlas-on-aws-eks-using-terraform-like-a-prowith-automated-secrets-and-dynamic-1dho</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;👋 Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Managing &lt;strong&gt;database infrastructure&lt;/strong&gt; for &lt;strong&gt;multiple environments (dev, staging, prod)&lt;/strong&gt; can be a headache — especially when it comes to security, secrets, and scaling.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how we automated the provisioning of &lt;strong&gt;MongoDB Atlas&lt;/strong&gt; on top of &lt;strong&gt;Amazon EKS&lt;/strong&gt; using &lt;strong&gt;Terraform&lt;/strong&gt;, complete with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role-based access&lt;/li&gt;
&lt;li&gt;Secure secret handling (via AWS Secrets Manager)&lt;/li&gt;
&lt;li&gt;Dynamic environment setup (dev, stg, prod)&lt;/li&gt;
&lt;li&gt;IP whitelisting based on your VPC setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform Project Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a high-level view of the files in our infrastructure setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── data.tf
├── main.tf
├── provider.tf
├── secrets.tf
└── variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧱 main.tf – MongoDB Cluster + Project Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "mongodbatlas_roles_org_id" "mongodbatlas_roles_org_id" {}

resource "mongodbatlas_project" "mongodbatlas_project" {
  name                                             = "Mongo-${var.environment}"
  org_id                                           = data.mongodbatlas_roles_org_id.mongodbatlas_roles_org_id.org_id
  is_collect_database_specifics_statistics_enabled = true
  is_data_explorer_enabled                         = true
  is_extended_storage_sizes_enabled                = true
  is_performance_advisor_enabled                   = true
  is_realtime_performance_panel_enabled            = true
  is_schema_advisor_enabled                        = true
}

resource "mongodbatlas_cluster" "mongodbatlas_cluster" {
  project_id                     = mongodbatlas_project.mongodbatlas_project.id
  name                           = "${var.environment}-cluster"
  cluster_type                   = "REPLICASET"
  # M10 is a dedicated tier, so the provider is AWS directly
  # ("TENANT" is only valid for the shared M0/M2/M5 tiers).
  provider_name                  = "AWS"
  provider_region_name           = upper(replace(var.aws_region, "-", "_"))
  provider_instance_size_name    = "M10"
  auto_scaling_disk_gb_enabled   = false
  termination_protection_enabled = false
  mongo_db_major_version         = "7.0"
}

resource "mongodbatlas_project_ip_access_list" "mongodbatlas_project_ip_access_list" {
  project_id = mongodbatlas_project.mongodbatlas_project.id
  cidr_block = "${data.terraform_remote_state.vpc.outputs.nat_elastic_ip}/32"
  comment    = "CIDR block for AWS ${var.environment}"
}

resource "mongodbatlas_database_user" "mongodbatlas_database_user" {
  username           = var.environment
  password           = random_password.password.result
  project_id         = mongodbatlas_project.mongodbatlas_project.id
  auth_database_name = "admin"

  roles {
    role_name     = "readWriteAnyDatabase"
    database_name = "admin"
  }

  scopes {
    name = mongodbatlas_cluster.mongodbatlas_cluster.name
    type = "CLUSTER"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🧾 In this setup, we automate the creation of a fully managed &lt;strong&gt;MongoDB Atlas environment&lt;/strong&gt; using &lt;strong&gt;Terraform for each stage of our application&lt;/strong&gt;. The Terraform script provisions the following resources:&lt;/p&gt;

&lt;p&gt;A MongoDB Atlas Project inside the organization's Atlas account.&lt;/p&gt;

&lt;p&gt;A MongoDB Cluster with a &lt;strong&gt;REPLICASET&lt;/strong&gt; architecture, running MongoDB &lt;strong&gt;version 7.0&lt;/strong&gt;, hosted on AWS with an &lt;strong&gt;M10 instance&lt;/strong&gt; size—ideal for development workloads.&lt;/p&gt;

&lt;p&gt;A Project IP Access List that &lt;strong&gt;whitelists the NAT Gateway IP&lt;/strong&gt; of the AWS VPC to ensure secure and controlled access to the cluster from within our infrastructure.&lt;/p&gt;

&lt;p&gt;A Database User named after the environment, with &lt;strong&gt;readWriteAnyDatabase&lt;/strong&gt; permissions scoped to the cluster, enabling it to interact with any database within that cluster securely.&lt;/p&gt;

&lt;p&gt;This setup is part of our &lt;strong&gt;Infrastructure-as-Code (IaC)&lt;/strong&gt; strategy to maintain consistency, improve security, and reduce manual operations during the lifecycle of our environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 secrets.tf – MongoDB Secrets in AWS Secrets Manager
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_secretsmanager_secret" "aws_secretsmanager_secret" {
  name = "mongo"
}

data "aws_secretsmanager_secret_version" "aws_secretsmanager_secret_version" {
  secret_id = data.aws_secretsmanager_secret.aws_secretsmanager_secret.id
}

resource "aws_secretsmanager_secret" "mongo" {
  name                    = "/${var.environment}/mongo"
  recovery_window_in_days = 0
}

resource "aws_secretsmanager_secret_version" "mongo" {
  secret_id     = aws_secretsmanager_secret.mongo.id
  secret_string = &amp;lt;&amp;lt;EOF
   {
    "MONGO_URL" : "mongodb+srv://${mongodbatlas_database_user.mongodbatlas_database_user.username}:${mongodbatlas_database_user.mongodbatlas_database_user.password}@${trimprefix(mongodbatlas_cluster.mongodbatlas_cluster.connection_strings[0].standard_srv, "mongodb+srv://")}/${var.environment}?authSource=${mongodbatlas_database_user.mongodbatlas_database_user.auth_database_name}"
   }
EOF
}

data "aws_secretsmanager_secret" "aws_secretsmanager_secret_beta" {
  name = "/beta/mongo"
}

data "aws_secretsmanager_secret_version" "aws_secretsmanager_secret_version_beta" {
  secret_id = data.aws_secretsmanager_secret.aws_secretsmanager_secret_beta.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This secret is then pulled securely by our app pods using Kubernetes External Secrets or the AWS Secrets CSI driver.&lt;/p&gt;
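
&lt;p&gt;For example, with the External Secrets Operator installed and a ClusterSecretStore named "aws" already configured (both assumptions, not shown here), an ExternalSecret that mirrors the Secrets Manager entry into a Kubernetes Secret might look like:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mongo
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: mongo # the Kubernetes Secret the pods mount
  data:
    - secretKey: MONGO_URL
      remoteRef:
        key: /dev/mongo # the per-environment Secrets Manager path created above
        property: MONGO_URL
```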

&lt;h2&gt;
  
  
  ⚙️ provider.tf – AWS &amp;amp; MongoDB Atlas Providers
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

provider "mongodbatlas" {
  # Credentials can also be supplied via the MONGODB_ATLAS_PUBLIC_KEY /
  # MONGODB_ATLAS_PRIVATE_KEY environment variables instead of variables.
  public_key  = var.mongodb_atlas_public_key
  private_key = var.mongodb_atlas_private_key
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this provider.tf setup, you can automate both cloud infrastructure and MongoDB Atlas resources in your development workflow. Using Terraform helps you stay consistent, reduce manual errors, and scale your deployments with ease.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔑 data.tf – Account Info + Random Password
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_caller_identity" "caller" {}

resource "random_password" "password" {
  length      = 25
  upper       = true
  special     = false
  min_numeric = 5
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform generates a secure password for the MongoDB user that we store in Secrets Manager along with the URI.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Final output.tf File
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "mongodb_endpoint" {
  value     = "mongodb+srv://${mongodbatlas_database_user.mongodbatlas_database_user.username}:${mongodbatlas_database_user.mongodbatlas_database_user.password}@${trimprefix(mongodbatlas_cluster.mongodbatlas_cluster.connection_strings[0].standard_srv, "mongodb+srv://")}/${var.environment}?authSource=${mongodbatlas_database_user.mongodbatlas_database_user.auth_database_name}"
  sensitive = true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;📝 Description&lt;/strong&gt;&lt;br&gt;
This output block dynamically generates the MongoDB connection string for the current environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Conclusion:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A Scalable, Secure, and Dev-Friendly MongoDB Setup on EKS&lt;/strong&gt;&lt;br&gt;
With this Terraform-driven approach, we’ve not only provisioned MongoDB Atlas clusters directly from our infrastructure code, but also implemented environment-specific behavior, secure secret management via AWS Secrets Manager, and seamless integration with our EKS-based platform.&lt;/p&gt;

&lt;p&gt;This setup ensures:&lt;/p&gt;

&lt;p&gt;🔐 &lt;strong&gt;Security-first practices for handling sensitive data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🔁 &lt;strong&gt;Reusability and scalability across environments (dev, stg, prod)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;Automation of cloud-native MongoDB provisioning with zero manual steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;☁️ &lt;strong&gt;Tight integration between MongoDB Atlas, AWS, and Kubernetes (EKS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Whether you're spinning up a new feature branch or deploying production-grade infrastructure, this solution gives your team a solid and secure foundation to build, test, and scale apps faster.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>terraform</category>
      <category>eks</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>sagark</dc:creator>
      <pubDate>Tue, 18 Mar 2025 07:17:56 +0000</pubDate>
      <link>https://dev.to/sagark4578/-3i9e</link>
      <guid>https://dev.to/sagark4578/-3i9e</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/sagark4578" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1106266%2F71d4b76d-8051-4b2c-a00b-cab0c1af2bbc.jpeg" alt="sagark4578"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/sagark4578/deploying-aws-eks-with-alb-using-terraform-1lnn" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Deploying AWS EKS with ALB Using Terraform&lt;/h2&gt;
      &lt;h3&gt;sagark ・ Mar 11&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Set Up AWS EFS with Persistent Volumes &amp; PVCs Using Terraform</title>
      <dc:creator>sagark</dc:creator>
      <pubDate>Tue, 18 Mar 2025 07:16:16 +0000</pubDate>
      <link>https://dev.to/sagark4578/how-to-set-up-aws-efs-with-persistent-volumes-pvcs-using-terraform-5ffn</link>
      <guid>https://dev.to/sagark4578/how-to-set-up-aws-efs-with-persistent-volumes-pvcs-using-terraform-5ffn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
When deploying applications on AWS, managing persistent storage is crucial for stateful workloads. Amazon Elastic File System (EFS) provides scalable, shared file storage that integrates seamlessly with AWS services like Lambda, Kubernetes (EKS), and EC2.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll use Terraform to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an AWS EFS file system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure security groups for secure access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up EFS mount targets for private subnets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define access points for applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate with Kubernetes Persistent Volumes (PV) and Persistent Volume Claims (PVC)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Output essential EFS parameters for further integrations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this tutorial, you'll have a fully functional EFS setup, ready to be used in a Kubernetes cluster or other AWS environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an AWS Security Group for EFS&lt;/strong&gt;&lt;br&gt;
The security group defines access rules for EFS connections from private subnets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "efs_sg" {
  name   = "efs-app-${var.environment}-sg"
  vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id

  ingress {
    description = "Allow NFS access from Lambda &amp;amp; VPC subnets"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = concat(
      data.terraform_remote_state.vpc.outputs.private_subnets_cidr,
      data.terraform_remote_state.vpc.outputs.db_subnets_cidr
    )
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensures secure access to EFS from private subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outbound connections are unrestricted for seamless communication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create AWS EFS File System&lt;/strong&gt;&lt;br&gt;
This block provisions an encrypted EFS instance with lifecycle policies to optimize costs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_efs_file_system" "aws_efs_file_system" {
  creation_token = "${var.environment}-efs-app"
  encrypted      = true

  lifecycle_policy {
    transition_to_ia = "AFTER_14_DAYS"
  }

  lifecycle_policy {
    transition_to_primary_storage_class = "AFTER_1_ACCESS"
  }

  tags = {
    Name = "${var.environment}-efs-app"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses lifecycle policies to reduce storage costs.&lt;/li&gt;
&lt;li&gt;Encryption enabled for data security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create EFS Mount Targets&lt;/strong&gt;&lt;br&gt;
Mount targets enable EC2 instances, Kubernetes pods, and Lambda functions to access EFS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_efs_mount_target" "efs_mt" {
  count           = length(data.terraform_remote_state.vpc.outputs.db_subnets)
  file_system_id  = aws_efs_file_system.aws_efs_file_system.id
  subnet_id       = data.terraform_remote_state.vpc.outputs.db_subnets[count.index]
  security_groups = [aws_security_group.efs_sg.id]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates mount targets in each private subnet.&lt;/li&gt;
&lt;li&gt;Ensures secure communication via the security group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create an EFS Access Point&lt;/strong&gt;&lt;br&gt;
Access points define specific access rules for applications using EFS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_efs_access_point" "aws_efs_access_point" {
  file_system_id = aws_efs_file_system.aws_efs_file_system.id
  posix_user {
    gid = 1000
    uid = 1000
  }
  root_directory {
    path = "/app"
    creation_info {
      owner_gid   = 1000
      owner_uid   = 1000
      permissions = "777"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defines POSIX permissions for secure file access.&lt;/li&gt;
&lt;li&gt;Ensures controlled multi-user access to the file system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Kubernetes Integration with PV &amp;amp; PVC&lt;/strong&gt;&lt;br&gt;
To use EFS as a Persistent Volume (PV) in Kubernetes, define a PersistentVolume (PV) and a PersistentVolumeClaim (PVC) using the kubectl_manifest resource.&lt;/p&gt;

&lt;p&gt;Persistent Volume (PV)&lt;br&gt;
Create a file (for example, pv-pvc.tf) with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "persistent_volume" {
  yaml_body = &amp;lt;&amp;lt;YAML
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${var.name}
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: efs.csi.aws.com
    volumeHandle: ${data.terraform_remote_state.efs.outputs.efs_id}::${data.terraform_remote_state.efs.outputs.efs_access_point_id}
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem
YAML
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Persistent Volume Claim (PVC)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "persistent_volume_claims" {
  depends_on = [kubectl_manifest.persistent_volume]
  yaml_body  = &amp;lt;&amp;lt;YAML
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${var.name}-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
YAML
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "efs_dns_name" {
  value = aws_efs_file_system.aws_efs_file_system.dns_name
}

output "efs_id" {
  value = aws_efs_file_system.aws_efs_file_system.id
}

output "efs_access_point_id" {
  value = aws_efs_access_point.aws_efs_access_point.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The output blocks expose essential EFS parameters, which can be referenced by other Terraform modules or Kubernetes manifests.&lt;/li&gt;
&lt;li&gt;The Persistent Volume (PV) uses the AWS EFS CSI driver to connect to your EFS file system.&lt;/li&gt;
&lt;li&gt;The Persistent Volume Claim (PVC) lets Kubernetes applications request storage dynamically.&lt;/li&gt;
&lt;/ul&gt;
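
&lt;p&gt;To put the claim to use, a pod mounts it like the minimal sketch below (the image and names are placeholders; claimName must match the PVC created above):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /app # same path as the EFS access point root directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-claim # hypothetical; use the PVC name defined above
```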

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With this complete setup, you now have a scalable, secure, and high-availability persistent storage solution using AWS EFS and Terraform. This guide not only provisions the necessary AWS infrastructure but also integrates with Kubernetes to manage persistent storage using PVs and PVCs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps&lt;/strong&gt;&lt;br&gt;
Experiment with dynamic scaling by integrating AWS Auto Scaling or EFS Intelligent-Tiering.&lt;br&gt;
Continue exploring Terraform modules to further streamline your infrastructure as code.&lt;br&gt;
📌 Have questions or suggestions? Drop a comment below! 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Deploying AWS EKS with ALB Using Terraform</title>
      <dc:creator>sagark</dc:creator>
      <pubDate>Tue, 11 Mar 2025 07:14:50 +0000</pubDate>
      <link>https://dev.to/sagark4578/deploying-aws-eks-with-alb-using-terraform-1lnn</link>
      <guid>https://dev.to/sagark4578/deploying-aws-eks-with-alb-using-terraform-1lnn</guid>
      <description>&lt;p&gt;🔥 Introduction&lt;br&gt;
In cloud-native architectures, Amazon EKS (Elastic Kubernetes Service) is a powerful way to manage containerized applications at scale. When combined with AWS ALB (Application Load Balancer), it ensures seamless traffic management, automatic scaling, and security.&lt;/p&gt;

&lt;p&gt;In this guide, I'll walk you through setting up an EKS cluster and deploying an ALB Ingress Controller using Terraform.&lt;/p&gt;

&lt;p&gt;📌 Why Use ALB with EKS?&lt;br&gt;
Amazon's Application Load Balancer (ALB) integrates well with Kubernetes to:&lt;br&gt;
✅ Distribute traffic efficiently across multiple pods&lt;br&gt;
✅ Enable SSL/TLS termination for security&lt;br&gt;
✅ Support path-based &amp;amp; host-based routing&lt;br&gt;
✅ Improve scalability with auto-healing features&lt;/p&gt;

&lt;p&gt;🛠️ Tech Stack&lt;br&gt;
Terraform – Infrastructure as Code&lt;br&gt;
AWS EKS – Kubernetes Cluster&lt;br&gt;
AWS ALB – Load Balancer&lt;br&gt;
IAM Roles &amp;amp; Policies – Secure Access&lt;br&gt;
Helm – Package Manager for Kubernetes&lt;/p&gt;

&lt;p&gt;🚀 Step 1: Setting Up AWS EKS with Terraform&lt;br&gt;
First, define the EKS cluster in Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_cluster" "eks" {
  name     = "my-eks-cluster"
  role_arn = aws_iam_role.eks_role.arn

  vpc_config {
    subnet_ids = [aws_subnet.public_1.id, aws_subnet.public_2.id]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create IAM Role for the ALB Controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "alb_controller" {
  name = "alb-controller-role"
  # The controller runs as a Kubernetes service account, so the role is
  # assumed through the cluster's OIDC provider (IRSA), not by an AWS service.
  # The aws_iam_openid_connect_provider.eks resource is assumed to be
  # defined alongside the cluster.
  assume_role_policy = jsonencode({
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.eks.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚀 Step 2: Deploy ALB Ingress Controller&lt;br&gt;
Once EKS is up and running, install the ALB Ingress Controller using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add eks https://aws.github.io/eks-charts

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=alb-controller \
  -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚀 Step 3: Define Ingress Rules&lt;br&gt;
Now, create an Ingress resource to route traffic via ALB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ingress.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Verify Deployment&lt;br&gt;
Check if the ALB is created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ingress -A

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Head over to AWS Console → EC2 → Load Balancers and verify the ALB instance.&lt;/p&gt;

&lt;p&gt;🎯 Conclusion&lt;br&gt;
With Terraform and Helm, deploying EKS with ALB is now streamlined and automated. This setup ensures a highly scalable, secure, and manageable cloud-native architecture.&lt;/p&gt;

&lt;p&gt;If you found this guide helpful, drop a comment or share your experience! 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
