<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Atul Vishwakarma</title>
    <description>The latest articles on DEV Community by Atul Vishwakarma (@vatul16).</description>
    <link>https://dev.to/vatul16</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F530617%2F03a2ee31-0c13-4e03-81db-2ffd7ee145a8.JPG</url>
      <title>DEV Community: Atul Vishwakarma</title>
      <link>https://dev.to/vatul16</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vatul16"/>
    <language>en</language>
    <item>
      <title>Building a Production-Ready 2-Tier Architecture on AWS with Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Fri, 17 Apr 2026 09:16:42 +0000</pubDate>
      <link>https://dev.to/vatul16/building-a-production-ready-2-tier-architecture-on-aws-with-terraform-3m8f</link>
      <guid>https://dev.to/vatul16/building-a-production-ready-2-tier-architecture-on-aws-with-terraform-3m8f</guid>
      <description>&lt;h2&gt;
  
  
  From Theory to Real-World Infrastructure 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 22 was a major milestone where I moved beyond individual resources and built a &lt;strong&gt;production-style 2-tier architecture&lt;/strong&gt; using Terraform.&lt;/p&gt;

&lt;p&gt;This project was all about combining networking, security, compute, and database layers into a cohesive and secure system — just like real-world applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ The Goal: Build a Secure 2-Tier Architecture
&lt;/h2&gt;

&lt;p&gt;The objective of this project was to design and deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web Tier (Public Layer)&lt;/strong&gt; → An EC2 instance running a Flask application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Tier (Private Layer)&lt;/strong&gt; → MySQL RDS instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These two layers communicate securely while maintaining strict isolation from public access.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔐 Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Web Tier (Public Subnet)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance running Flask app&lt;/li&gt;
&lt;li&gt;Accessible via Internet Gateway&lt;/li&gt;
&lt;li&gt;Uses User Data for automated setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Database Tier (Private Subnet)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Amazon RDS (MySQL)&lt;/li&gt;
&lt;li&gt;No public access&lt;/li&gt;
&lt;li&gt;Only accessible from Web Tier via Security Groups&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚙️ Core Infrastructure Components
&lt;/h2&gt;

&lt;p&gt;To support this architecture, I provisioned:&lt;/p&gt;

&lt;h3&gt;
  
  
  🌐 Networking
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Custom VPC&lt;/li&gt;
&lt;li&gt;Public &amp;amp; Private subnets&lt;/li&gt;
&lt;li&gt;Internet Gateway (for web tier)&lt;/li&gt;
&lt;li&gt;NAT Gateway (for private subnet outbound access)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔒 Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Security Groups with strict rules&lt;/li&gt;
&lt;li&gt;Principle of Least Privilege enforced&lt;/li&gt;
&lt;/ul&gt;
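
&lt;p&gt;A minimal sketch of the security group pairing (names, CIDRs, and resource references are illustrative, not my exact code): the database tier accepts MySQL traffic only from the web tier's security group, never from the internet.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Web tier SG: allow HTTP from the internet
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# DB tier SG: allow MySQL only from the web SG
resource "aws_security_group" "db" {
  name   = "db-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.web.id]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Referencing the web security group (instead of a CIDR range) is what keeps the rule tight: only traffic originating from instances in that group can reach the database port.&lt;/p&gt;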

&lt;h3&gt;
  
  
  🔑 Secrets Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Secrets Manager for DB credentials&lt;/li&gt;
&lt;li&gt;No hardcoded sensitive data&lt;/li&gt;
&lt;/ul&gt;
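
&lt;p&gt;Roughly, the wiring can look like this (the secret name and username are placeholders, not my actual values):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Generate a password instead of hardcoding one
resource "random_password" "db" {
  length  = 16
  special = false
}

# Store the credentials in Secrets Manager
resource "aws_secretsmanager_secret" "db" {
  name = "two-tier/db-credentials"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "admin"
    password = random_password.db.result
  })
}
&lt;/code&gt;&lt;/pre&gt;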

&lt;h3&gt;
  
  
  🖥️ Compute
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance with Flask app&lt;/li&gt;
&lt;li&gt;Automated setup via User Data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🗄️ Database
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RDS MySQL instance in private subnet&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧩 Modular Terraform Architecture
&lt;/h2&gt;

&lt;p&gt;One of the biggest highlights of Day 22 was applying &lt;strong&gt;Terraform Modules in a real project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of one large configuration file, I structured my code into reusable modules:&lt;/p&gt;

&lt;h3&gt;
  
  
  Module Structure:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Module&lt;/strong&gt; → Networking setup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Group Module&lt;/strong&gt; → Access control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Module&lt;/strong&gt; → Database provisioning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Module&lt;/strong&gt; → Credential management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Module&lt;/strong&gt; → Web server deployment&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔄 Data Flow Between Modules
&lt;/h2&gt;

&lt;p&gt;Terraform modules don’t communicate directly — so I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; → to expose values&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; → to pass inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VPC module outputs subnet IDs&lt;/li&gt;
&lt;li&gt;Root module passes them to EC2 &amp;amp; RDS modules&lt;/li&gt;
&lt;li&gt;Secrets module outputs credentials used by EC2&lt;/li&gt;
&lt;/ul&gt;
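
&lt;p&gt;Sketched in code (file layout and names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# modules/vpc/outputs.tf - expose values other layers need
output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}

# modules/rds/variables.tf - declare what this module expects
variable "subnet_ids" {
  type = list(string)
}

# Root main.tf - only the root module connects the two
module "vpc" {
  source = "./modules/vpc"
}

module "rds" {
  source     = "./modules/rds"
  subnet_ids = module.vpc.private_subnet_ids
}
&lt;/code&gt;&lt;/pre&gt;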

&lt;p&gt;This pattern ensures:&lt;/p&gt;

&lt;p&gt;✔️ Loose coupling&lt;br&gt;
✔️ High reusability&lt;br&gt;
✔️ Clean architecture&lt;/p&gt;




&lt;h2&gt;
  
  
  🧪 Testing the Application
&lt;/h2&gt;

&lt;p&gt;After deployment, I validated the setup by hitting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/health&lt;/code&gt; → Application health check&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/db/info&lt;/code&gt; → Database connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seeing the Flask app successfully connect to the RDS instance was a huge win! 🎉&lt;/p&gt;




&lt;h2&gt;
  
  
  🔐 Security Best Practices Implemented
&lt;/h2&gt;

&lt;p&gt;This project reinforced critical security principles:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✔️ Private Database
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RDS placed in private subnet&lt;/li&gt;
&lt;li&gt;No direct internet exposure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✔️ Controlled Access
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Only EC2 security group can access DB&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✔️ Secrets Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Credentials stored in AWS Secrets Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✔️ Least Privilege
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Minimal permissions across components&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  💡 Key Learnings from Day 22
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Modules are essential for scalable Terraform projects&lt;/li&gt;
&lt;li&gt;Networking design is critical for security&lt;/li&gt;
&lt;li&gt;Secrets should never be hardcoded&lt;/li&gt;
&lt;li&gt;Private subnets protect sensitive resources&lt;/li&gt;
&lt;li&gt;Testing validates real-world readiness&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧠 Why This Project Matters
&lt;/h2&gt;

&lt;p&gt;This was not just another Terraform lab — it was a &lt;strong&gt;real-world architecture simulation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It taught me how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design secure systems&lt;/li&gt;
&lt;li&gt;Structure modular IaC&lt;/li&gt;
&lt;li&gt;Connect application &amp;amp; database layers&lt;/li&gt;
&lt;li&gt;Apply DevOps best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the kind of project that bridges the gap between learning and real engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 What’s Next?
&lt;/h2&gt;

&lt;p&gt;With only 8 days remaining, I’m excited to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD integration&lt;/li&gt;
&lt;li&gt;Advanced security practices&lt;/li&gt;
&lt;li&gt;Multi-environment deployments&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 22 was one of the most practical and rewarding days of this challenge.&lt;/p&gt;

&lt;p&gt;It reinforced that:&lt;/p&gt;

&lt;p&gt;👉 Good infrastructure is not just functional — it’s secure, modular, and scalable.&lt;/p&gt;

&lt;p&gt;If you're learning Terraform, I highly recommend building projects like this to truly understand how cloud systems work.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>cloud</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Mastering Cloud Policy &amp; Governance with Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Fri, 17 Apr 2026 05:56:46 +0000</pubDate>
      <link>https://dev.to/vatul16/mastering-cloud-policy-governance-with-terraform-cl4</link>
      <guid>https://dev.to/vatul16/mastering-cloud-policy-governance-with-terraform-cl4</guid>
      <description>&lt;h2&gt;
  
  
  Building Secure &amp;amp; Compliant Cloud Infrastructure with IaC 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 21 marked a major shift in perspective — from simply provisioning infrastructure to &lt;strong&gt;governing and securing it at scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Today’s focus was on &lt;strong&gt;AWS Policy and Governance using Terraform&lt;/strong&gt;, and it was one of the most practical and impactful lessons so far.&lt;/p&gt;

&lt;p&gt;Because in real-world cloud environments, success isn’t just about deploying resources — it’s about ensuring they are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure 🔐&lt;/li&gt;
&lt;li&gt;Compliant 📋&lt;/li&gt;
&lt;li&gt;Auditable 🔍&lt;/li&gt;
&lt;li&gt;Consistent ⚙️&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why Policy &amp;amp; Governance Matter
&lt;/h2&gt;

&lt;p&gt;When infrastructure grows across teams, regions, and environments, manual management becomes:&lt;/p&gt;

&lt;p&gt;❌ Error-prone&lt;br&gt;
❌ Inconsistent&lt;br&gt;
❌ Difficult to audit&lt;br&gt;
❌ A major security risk&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt; combined with governance tools becomes critical.&lt;/p&gt;

&lt;p&gt;👉 Terraform allows us to &lt;strong&gt;codify guardrails&lt;/strong&gt;, ensuring that every deployment automatically follows best practices.&lt;/p&gt;




&lt;h2&gt;
  
  
  Core Concepts I Explored
&lt;/h2&gt;

&lt;p&gt;Today’s lab focused on three essential pillars of cloud governance:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Preventive Controls with IAM Policies 🔐
&lt;/h3&gt;

&lt;p&gt;IAM acts as the &lt;strong&gt;first line of defense&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of reacting to issues, we can prevent them entirely by defining strict policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Implemented:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Denied S3 bucket deletion without MFA&lt;/li&gt;
&lt;li&gt;Enforced encryption in transit (HTTPS-only requests)&lt;/li&gt;
&lt;li&gt;Restricted unsafe operations based on conditions&lt;/li&gt;
&lt;/ul&gt;
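
&lt;p&gt;A simplified version of such a guardrail policy might look like this (the statements are illustrative, not my exact policy):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_iam_policy" "s3_guardrails" {
  name = "s3-guardrails"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyBucketDeletionWithoutMFA"
        Effect   = "Deny"
        Action   = "s3:DeleteBucket"
        Resource = "*"
        Condition = {
          BoolIfExists = { "aws:MultiFactorAuthPresent" = "false" }
        }
      },
      {
        Sid      = "DenyInsecureTransport"
        Effect   = "Deny"
        Action   = "s3:*"
        Resource = "*"
        Condition = {
          Bool = { "aws:SecureTransport" = "false" }
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;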

&lt;h3&gt;
  
  
  Why It Matters:
&lt;/h3&gt;

&lt;p&gt;✔️ Stops misconfigurations before they happen&lt;br&gt;
✔️ Enforces least privilege&lt;br&gt;
✔️ Protects critical infrastructure&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Continuous Monitoring with AWS Config 📊
&lt;/h3&gt;

&lt;p&gt;IAM prevents bad actions — but what about changes over time?&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;AWS Config&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Built:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enabled AWS Config recorder&lt;/li&gt;
&lt;li&gt;Configured managed rules&lt;/li&gt;
&lt;li&gt;Monitored compliance continuously&lt;/li&gt;
&lt;/ul&gt;
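
&lt;p&gt;As a rough sketch (the IAM role for Config is assumed to exist elsewhere in the configuration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_config_configuration_recorder" "main" {
  name     = "main"
  role_arn = aws_iam_role.config.arn

  recording_group {
    all_supported = true
  }
}

# AWS-managed rule: flag any unencrypted EBS volumes
resource "aws_config_config_rule" "ebs_encrypted" {
  name = "encrypted-volumes"

  source {
    owner             = "AWS"
    source_identifier = "ENCRYPTED_VOLUMES"
  }

  depends_on = [aws_config_configuration_recorder.main]
}
&lt;/code&gt;&lt;/pre&gt;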

&lt;h3&gt;
  
  
  Example Checks:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Unencrypted EBS volumes&lt;/li&gt;
&lt;li&gt;Missing resource tags&lt;/li&gt;
&lt;li&gt;Non-compliant S3 buckets&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why It Matters:
&lt;/h3&gt;

&lt;p&gt;✔️ Detects drift in infrastructure&lt;br&gt;
✔️ Ensures continuous compliance&lt;br&gt;
✔️ Provides audit visibility&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Secure Logging &amp;amp; Audit Trails 🪵
&lt;/h3&gt;

&lt;p&gt;Governance is incomplete without proper logging.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Implemented:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Centralized S3 bucket for logs&lt;/li&gt;
&lt;li&gt;Enabled versioning&lt;/li&gt;
&lt;li&gt;Enforced encryption&lt;/li&gt;
&lt;li&gt;Restricted public access&lt;/li&gt;
&lt;/ul&gt;
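
&lt;p&gt;A condensed sketch of this bucket hardening (the bucket name is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_s3_bucket" "logs" {
  bucket = "example-central-logs"
}

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "logs" {
  bucket                  = aws_s3_bucket.logs.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;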

&lt;h3&gt;
  
  
  Why It Matters:
&lt;/h3&gt;

&lt;p&gt;✔️ Enables audits &amp;amp; investigations&lt;br&gt;
✔️ Preserves historical data&lt;br&gt;
✔️ Strengthens security posture&lt;/p&gt;




&lt;h2&gt;
  
  
  Hands-On Implementation Highlights ⚙️
&lt;/h2&gt;

&lt;p&gt;Today’s project involved building governance controls using Terraform:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✔️ AWS Config Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Config recorder automation&lt;/li&gt;
&lt;li&gt;Managed rule definitions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✔️ Tagging Enforcement
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Standardized tags across all resources&lt;/li&gt;
&lt;li&gt;Improved cost tracking &amp;amp; ownership&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✔️ IAM Guardrails
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Attached policies to roles&lt;/li&gt;
&lt;li&gt;Controlled access behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made the entire infrastructure:&lt;/p&gt;

&lt;p&gt;👉 Self-governing&lt;br&gt;
👉 Consistent&lt;br&gt;
👉 Production-ready&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Challenge: IAM Policy Evaluation 🧠
&lt;/h2&gt;

&lt;p&gt;One of the most valuable learnings today was understanding &lt;strong&gt;how IAM policies are evaluated&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not just about writing policies — it’s about understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit Deny vs Allow&lt;/li&gt;
&lt;li&gt;Policy precedence&lt;/li&gt;
&lt;li&gt;Conditional logic behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Insight:
&lt;/h3&gt;

&lt;p&gt;👉 &lt;strong&gt;An explicit deny always overrides an allow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This concept is critical when designing secure systems.&lt;/p&gt;
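
&lt;p&gt;For example, with a policy fragment like the following, an identity can perform every S3 action except deleting objects, because the explicit deny always wins over the broad allow:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;policy = jsonencode({
  Version = "2012-10-17"
  Statement = [
    {
      Effect   = "Allow"   # broad allow ...
      Action   = "s3:*"
      Resource = "*"
    },
    {
      Effect   = "Deny"    # ... but this explicit deny still wins
      Action   = "s3:DeleteObject"
      Resource = "*"
    }
  ]
})
&lt;/code&gt;&lt;/pre&gt;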




&lt;h2&gt;
  
  
  Why This Matters in Real Organizations 🏢
&lt;/h2&gt;

&lt;p&gt;In enterprise environments, governance ensures:&lt;/p&gt;

&lt;p&gt;✔️ Compliance with regulations&lt;br&gt;
✔️ Security at scale&lt;br&gt;
✔️ Standardized deployments&lt;br&gt;
✔️ Reduced human error&lt;/p&gt;

&lt;p&gt;Without governance, cloud infrastructure quickly becomes chaotic.&lt;/p&gt;

&lt;p&gt;With Terraform + AWS Config + IAM → you get &lt;strong&gt;automated compliance&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways from Day 21 💡
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Terraform can enforce governance, not just provisioning&lt;/li&gt;
&lt;li&gt;IAM policies act as preventive controls&lt;/li&gt;
&lt;li&gt;AWS Config enables continuous monitoring&lt;/li&gt;
&lt;li&gt;Logging is critical for auditing&lt;/li&gt;
&lt;li&gt;Understanding policy evaluation is essential&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;As I move forward in this journey, I’m excited to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Policy as Code (OPA, Sentinel)&lt;/li&gt;
&lt;li&gt;Advanced compliance automation&lt;/li&gt;
&lt;li&gt;Security frameworks integration&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 21 was a turning point.&lt;/p&gt;

&lt;p&gt;It changed my mindset from:&lt;/p&gt;

&lt;p&gt;➡️ “How do I deploy infrastructure?”&lt;br&gt;
➡️ To “How do I secure and govern infrastructure at scale?”&lt;/p&gt;

&lt;p&gt;That’s the real difference between writing Terraform and &lt;strong&gt;engineering cloud systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform, don’t skip governance — it’s what makes your infrastructure production-ready.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Mastering Custom Terraform Modules for Scalable AWS Infrastructure</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Thu, 16 Apr 2026 10:48:58 +0000</pubDate>
      <link>https://dev.to/vatul16/mastering-custom-terraform-modules-for-scalable-aws-infrastructure-5cop</link>
      <guid>https://dev.to/vatul16/mastering-custom-terraform-modules-for-scalable-aws-infrastructure-5cop</guid>
      <description>&lt;h2&gt;
  
  
  Building Production-Ready Infrastructure with Reusable Terraform Modules 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 20 was a major milestone in my Infrastructure as Code journey.&lt;/p&gt;

&lt;p&gt;Today’s focus was on one of the most important Terraform concepts for real-world DevOps: &lt;strong&gt;Custom Terraform Modules&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Until now, most of my learning revolved around creating and managing AWS resources directly. But Day 20 introduced the concept that truly separates beginner Terraform projects from production-grade cloud systems:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Modularity and reusability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project gave me hands-on experience building an &lt;strong&gt;Amazon EKS cluster&lt;/strong&gt; using a modular Terraform architecture — and it completely changed how I think about writing infrastructure code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Terraform Modules Matter
&lt;/h2&gt;

&lt;p&gt;When starting with Terraform, it’s common to place everything in a single &lt;code&gt;main.tf&lt;/code&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Subnets&lt;/li&gt;
&lt;li&gt;Security groups&lt;/li&gt;
&lt;li&gt;IAM roles&lt;/li&gt;
&lt;li&gt;EKS cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This works for simple labs.&lt;/p&gt;

&lt;p&gt;But in real-world environments, this quickly becomes:&lt;/p&gt;

&lt;p&gt;❌ Hard to manage&lt;br&gt;
❌ Difficult to debug&lt;br&gt;
❌ Impossible to scale&lt;br&gt;
❌ Error-prone for teams&lt;/p&gt;

&lt;p&gt;Terraform modules solve this problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Modules:
&lt;/h3&gt;

&lt;p&gt;✅ Code reusability&lt;br&gt;
✅ Better readability&lt;br&gt;
✅ Easier collaboration&lt;br&gt;
✅ Standardized deployments&lt;br&gt;
✅ Simplified maintenance&lt;/p&gt;

&lt;p&gt;Modules help transform Terraform from a scripting tool into a proper infrastructure framework.&lt;/p&gt;




&lt;h2&gt;
  
  
  Types of Terraform Modules I Explored
&lt;/h2&gt;

&lt;p&gt;Today, I explored the three main types of modules:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Public Modules
&lt;/h3&gt;

&lt;p&gt;Modules published in the public Terraform Registry by maintainers such as HashiCorp and the community.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS VPC module&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Partner Modules
&lt;/h3&gt;

&lt;p&gt;Modules published by HashiCorp technology partners and verified by HashiCorp.&lt;/p&gt;

&lt;p&gt;Useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise integrations&lt;/li&gt;
&lt;li&gt;Verified architectures&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Custom Modules (Main Focus Today)
&lt;/h3&gt;

&lt;p&gt;Custom modules are built by your own team to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Match internal standards&lt;/li&gt;
&lt;li&gt;Enforce security controls&lt;/li&gt;
&lt;li&gt;Improve reusability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was the core focus of Day 20.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Goal: Build an EKS Cluster with Custom Modules 🎯
&lt;/h2&gt;

&lt;p&gt;For today’s hands-on project, I built an &lt;strong&gt;Amazon EKS (Elastic Kubernetes Service) cluster&lt;/strong&gt; using modular Terraform code.&lt;/p&gt;

&lt;p&gt;Instead of writing one large configuration file, I split the infrastructure into reusable child modules:&lt;/p&gt;

&lt;h3&gt;
  
  
  Module Breakdown:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Module&lt;/strong&gt; → Networking, subnets, routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Module&lt;/strong&gt; → Roles, policies, permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS Module&lt;/strong&gt; → Cluster and worker node setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach felt much closer to how real production systems are designed.&lt;/p&gt;
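
&lt;p&gt;On disk, this kind of project typically looks something like the layout below (illustrative; my exact tree may differ):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;.
├── main.tf          # root module: calls the child modules
├── variables.tf
├── outputs.tf
└── modules/
    ├── vpc/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── iam/
    └── eks/
&lt;/code&gt;&lt;/pre&gt;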




&lt;h2&gt;
  
  
  Understanding Root Module vs Child Modules 🧠
&lt;/h2&gt;

&lt;p&gt;One of the biggest learnings today was understanding how Terraform structures module relationships.&lt;/p&gt;

&lt;h3&gt;
  
  
  Root Module
&lt;/h3&gt;

&lt;p&gt;The root module acts as the main entry point.&lt;/p&gt;

&lt;p&gt;Responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calling child modules&lt;/li&gt;
&lt;li&gt;Passing variables&lt;/li&gt;
&lt;li&gt;Managing overall orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Child Modules
&lt;/h3&gt;

&lt;p&gt;Child modules are reusable building blocks.&lt;/p&gt;

&lt;p&gt;Each child module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles one responsibility&lt;/li&gt;
&lt;li&gt;Has its own variables&lt;/li&gt;
&lt;li&gt;Exposes outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;This separation improves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Team ownership&lt;/li&gt;
&lt;li&gt;Maintainability&lt;/li&gt;
&lt;li&gt;Debugging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was a huge mindset shift for me.&lt;/p&gt;




&lt;h2&gt;
  
  
  Passing Data Between Modules 🔄
&lt;/h2&gt;

&lt;p&gt;Today’s most important concept was learning how to pass information between modules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Challenge:
&lt;/h3&gt;

&lt;p&gt;Terraform child modules cannot directly communicate with each other.&lt;/p&gt;

&lt;p&gt;So if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC module creates subnets&lt;/li&gt;
&lt;li&gt;IAM module creates roles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The EKS module still needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subnet IDs&lt;/li&gt;
&lt;li&gt;IAM Role ARN&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Solution:
&lt;/h3&gt;

&lt;p&gt;Using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;outputs.tf&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Flow:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VPC module outputs subnet IDs&lt;/li&gt;
&lt;li&gt;Root module receives outputs&lt;/li&gt;
&lt;li&gt;Root passes them into EKS module&lt;/li&gt;
&lt;/ul&gt;
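
&lt;p&gt;In file terms, the pattern looks roughly like this (paths, resource names, and variable names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# modules/vpc/outputs.tf - expose what other layers need
output "subnet_ids" {
  value = aws_subnet.private[*].id
}

# modules/iam/outputs.tf
output "cluster_role_arn" {
  value = aws_iam_role.eks_cluster.arn
}

# modules/eks/variables.tf - declare what this module expects
variable "subnet_ids" {
  type = list(string)
}

variable "cluster_role_arn" {
  type = string
}

# Root main.tf - only the root connects the modules
module "eks" {
  source           = "./modules/eks"
  subnet_ids       = module.vpc.subnet_ids
  cluster_role_arn = module.iam.cluster_role_arn
}
&lt;/code&gt;&lt;/pre&gt;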

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;This teaches clean architecture principles:&lt;/p&gt;

&lt;p&gt;👉 Modules should stay independent, but data should flow intentionally.&lt;/p&gt;

&lt;p&gt;This was the biggest technical takeaway from Day 20.&lt;/p&gt;




&lt;h2&gt;
  
  
  Encapsulation = Cleaner Infrastructure 🧩
&lt;/h2&gt;

&lt;p&gt;Another major lesson was the power of encapsulation.&lt;/p&gt;

&lt;p&gt;Instead of exposing all resource complexity in the root module, I wrapped the following inside child modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM roles&lt;/li&gt;
&lt;li&gt;Node groups&lt;/li&gt;
&lt;li&gt;Networking logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Result:
&lt;/h3&gt;

&lt;p&gt;My root configuration became:&lt;/p&gt;

&lt;p&gt;✔️ Cleaner&lt;br&gt;
✔️ Easier to understand&lt;br&gt;
✔️ Faster to extend&lt;/p&gt;

&lt;p&gt;This makes future projects much easier to maintain.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters in Real Organizations 🏢
&lt;/h2&gt;

&lt;p&gt;Custom modules are essential for enterprise DevOps because they help enforce standards.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure default VPC rules&lt;/li&gt;
&lt;li&gt;Required resource tags&lt;/li&gt;
&lt;li&gt;Approved IAM policies&lt;/li&gt;
&lt;li&gt;Logging standards&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits for Teams:
&lt;/h3&gt;

&lt;p&gt;✔️ Consistency across environments&lt;br&gt;
✔️ Better compliance&lt;br&gt;
✔️ Faster deployments&lt;br&gt;
✔️ Reduced human error&lt;/p&gt;

&lt;p&gt;Today made me realize that writing reusable Terraform is as important as writing correct Terraform.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings from Day 20 💡
&lt;/h2&gt;

&lt;p&gt;Today’s biggest takeaways:&lt;/p&gt;

&lt;p&gt;✔️ Terraform modules improve scalability&lt;br&gt;
✔️ Root / child architecture is powerful&lt;br&gt;
✔️ Variables &amp;amp; outputs are the backbone of modularity&lt;br&gt;
✔️ Encapsulation makes code production-ready&lt;br&gt;
✔️ Reusability saves time and reduces errors&lt;/p&gt;

&lt;p&gt;This felt like a major step toward thinking like a professional cloud engineer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Advice for Beginners
&lt;/h2&gt;

&lt;p&gt;If you’re learning Terraform:&lt;/p&gt;

&lt;p&gt;Start small.&lt;/p&gt;

&lt;p&gt;Try creating modules for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;EC2 instance&lt;/li&gt;
&lt;li&gt;Security groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before moving to advanced services like EKS.&lt;/p&gt;

&lt;p&gt;The sooner you understand modules, the easier Terraform becomes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;Looking ahead, I’m excited to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Module versioning&lt;/li&gt;
&lt;li&gt;Remote module sources&lt;/li&gt;
&lt;li&gt;CI/CD integration&lt;/li&gt;
&lt;li&gt;Multi-environment deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Still 10 days to go — excited to keep building.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 20 was one of the most impactful milestones in this Terraform challenge.&lt;/p&gt;

&lt;p&gt;It taught me that great Infrastructure as Code is not just about provisioning resources — it’s about building systems that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reusable&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Maintainable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Custom Terraform modules are a game-changer for anyone serious about cloud engineering.&lt;/p&gt;

&lt;p&gt;If you’re on your own Terraform journey, I highly recommend spending time mastering modules — they unlock the real power of IaC.&lt;/p&gt;

&lt;p&gt;How are you using Terraform modules in your projects? I’d love to hear your best practices.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building a Serverless Image Processing Pipeline with Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Wed, 15 Apr 2026 10:02:43 +0000</pubDate>
      <link>https://dev.to/vatul16/building-a-serverless-image-processing-pipeline-with-terraform-i6g</link>
      <guid>https://dev.to/vatul16/building-a-serverless-image-processing-pipeline-with-terraform-i6g</guid>
      <description>&lt;h2&gt;
  
  
  Automating Image Workflows with AWS Lambda, S3, and Terraform 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 18 was one of the most exciting and practical projects so far.&lt;/p&gt;

&lt;p&gt;Today, I moved beyond basic infrastructure provisioning and built a &lt;strong&gt;fully automated serverless image processing pipeline&lt;/strong&gt; using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS S3&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;IAM Roles &amp;amp; Policies&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project was a major step toward understanding how real-world event-driven cloud systems are designed and automated.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Project Goal 🎯
&lt;/h2&gt;

&lt;p&gt;The goal was simple but powerful:&lt;/p&gt;

&lt;p&gt;👉 Whenever an image is uploaded to a source S3 bucket, AWS should automatically process it and store multiple optimized versions in a destination bucket.&lt;/p&gt;

&lt;h3&gt;
  
  
  Output Variants Included:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Compressed versions&lt;/li&gt;
&lt;li&gt;Different file formats&lt;/li&gt;
&lt;li&gt;Thumbnails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This type of architecture is highly relevant for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Media platforms&lt;/li&gt;
&lt;li&gt;E-commerce websites&lt;/li&gt;
&lt;li&gt;User profile image optimization&lt;/li&gt;
&lt;li&gt;Content delivery systems&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why Serverless Architecture Matters
&lt;/h2&gt;

&lt;p&gt;Traditional image processing systems often require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dedicated servers&lt;/li&gt;
&lt;li&gt;Manual scaling&lt;/li&gt;
&lt;li&gt;Ongoing maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher cost&lt;/li&gt;
&lt;li&gt;Operational complexity&lt;/li&gt;
&lt;li&gt;Slower deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Serverless changes everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Serverless:
&lt;/h3&gt;

&lt;p&gt;✅ Event-driven execution&lt;br&gt;
✅ No server management&lt;br&gt;
✅ Auto scaling&lt;br&gt;
✅ Pay only for usage&lt;br&gt;
✅ Faster deployment cycles&lt;/p&gt;

&lt;p&gt;This project showed me exactly why serverless is such a powerful cloud pattern.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Overview 🏗️
&lt;/h2&gt;

&lt;p&gt;The architecture for today’s project looked like this:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step-by-Step Flow:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;User uploads image to Source S3 bucket&lt;/li&gt;
&lt;li&gt;S3 event notification triggers Lambda&lt;/li&gt;
&lt;li&gt;Lambda processes image using Pillow library&lt;/li&gt;
&lt;li&gt;Lambda generates multiple variants&lt;/li&gt;
&lt;li&gt;Processed images are uploaded to Destination bucket&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This entire workflow runs automatically without any manual intervention.&lt;/p&gt;




&lt;h2&gt;
  
  
  Terraform Resources Used ⚙️
&lt;/h2&gt;

&lt;p&gt;This project was a great demonstration of how Terraform can automate complex serverless stacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Resources Provisioned:
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. S3 Buckets 📦
&lt;/h3&gt;

&lt;p&gt;Terraform created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source bucket (incoming uploads)&lt;/li&gt;
&lt;li&gt;Destination bucket (processed images)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bucket setup included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event notification configuration&lt;/li&gt;
&lt;li&gt;Permissions for Lambda access&lt;/li&gt;
&lt;/ul&gt;
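
&lt;p&gt;The S3-to-Lambda trigger is the heart of the pipeline. In Terraform it can be sketched like this (resource names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Let S3 invoke the function
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.source.arn
}

# Fire the function on every new object in the source bucket
resource "aws_s3_bucket_notification" "uploads" {
  bucket = aws_s3_bucket.source.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;depends_on&lt;/code&gt; matters here: the invoke permission must exist before S3 will accept the notification configuration.&lt;/p&gt;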




&lt;h3&gt;
  
  
  2. AWS Lambda Function 🧠
&lt;/h3&gt;

&lt;p&gt;The core logic was implemented inside a Lambda function.&lt;/p&gt;

&lt;p&gt;Responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read uploaded image&lt;/li&gt;
&lt;li&gt;Process and compress variants&lt;/li&gt;
&lt;li&gt;Generate thumbnail&lt;/li&gt;
&lt;li&gt;Upload outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Lambda?
&lt;/h3&gt;

&lt;p&gt;Lambda is ideal because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No server maintenance&lt;/li&gt;
&lt;li&gt;Automatic scaling&lt;/li&gt;
&lt;li&gt;Fast event response&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. IAM Roles &amp;amp; Least Privilege Security 🔐
&lt;/h3&gt;

&lt;p&gt;A key learning today was implementing secure IAM access.&lt;/p&gt;

&lt;p&gt;Terraform provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda execution role&lt;/li&gt;
&lt;li&gt;Scoped IAM policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Permissions included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;s3:GetObject&lt;/code&gt; from source bucket&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;s3:PutObject&lt;/code&gt; to destination bucket&lt;/li&gt;
&lt;li&gt;CloudWatch logs access&lt;/li&gt;
&lt;/ul&gt;
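
&lt;p&gt;A scoped policy along these lines captures the idea (resource and role names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_iam_role_policy" "lambda_s3" {
  name = "image-pipeline-s3"
  role = aws_iam_role.lambda_exec.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Read only from the source bucket
        Effect   = "Allow"
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.source.arn}/*"
      },
      {
        # Write only to the destination bucket
        Effect   = "Allow"
        Action   = "s3:PutObject"
        Resource = "${aws_s3_bucket.destination.arn}/*"
      },
      {
        # CloudWatch logging
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "*"
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;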

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;Security in automation is critical.&lt;/p&gt;

&lt;p&gt;This reinforced:&lt;/p&gt;

&lt;p&gt;👉 Always grant only the permissions required.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. CloudWatch Logging 📊
&lt;/h3&gt;

&lt;p&gt;Terraform also provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log groups&lt;/li&gt;
&lt;li&gt;Monitoring support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helped with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging failures&lt;/li&gt;
&lt;li&gt;Monitoring execution&lt;/li&gt;
&lt;li&gt;Troubleshooting events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudWatch visibility was extremely useful during testing.&lt;/p&gt;
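&lt;p&gt;Managing the log group in Terraform, rather than letting Lambda create it implicitly, also lets you control retention. A small sketch, with the function reference assumed:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_cloudwatch_log_group" "lambda" {
  # Lambda writes to /aws/lambda/&amp;lt;function_name&amp;gt; by default
  name              = "/aws/lambda/${aws_lambda_function.resize.function_name}"
  retention_in_days = 14   # keep demo logs short-lived to limit cost
}
&lt;/code&gt;&lt;/pre&gt;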




&lt;h2&gt;
  
  
  Biggest Challenge: Dependency Management 🐳
&lt;/h2&gt;

&lt;p&gt;One of the most valuable lessons today came from troubleshooting Python dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem:
&lt;/h3&gt;

&lt;p&gt;Libraries with compiled native extensions, such as Pillow, often fail when packaged locally because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local machine OS differs from AWS Lambda runtime&lt;/li&gt;
&lt;li&gt;Binary dependencies break&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Solution:
&lt;/h3&gt;

&lt;p&gt;I used Docker to build Lambda Layers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Docker Helped:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Matched the Amazon Linux environment Lambda runs on&lt;/li&gt;
&lt;li&gt;Prevented runtime compatibility issues&lt;/li&gt;
&lt;li&gt;Ensured consistent builds&lt;/li&gt;
&lt;/ul&gt;
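&lt;p&gt;One way to wire such a layer into Terraform, assuming the ZIP was produced inside an Amazon Linux container beforehand (image tag and paths are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Built beforehand inside an Amazon Linux build container, e.g.:
#   docker run --rm -v "$PWD":/var/task \
#     public.ecr.aws/sam/build-python3.12 \
#     pip install pillow -t python/
#   zip -r pillow_layer.zip python/
resource "aws_lambda_layer_version" "pillow" {
  layer_name          = "pillow"
  filename            = "pillow_layer.zip"
  compatible_runtimes = ["python3.12"]
}

# Then attach it to the function with:
#   layers = [aws_lambda_layer_version.pillow.arn]
&lt;/code&gt;&lt;/pre&gt;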

&lt;h3&gt;
  
  
  Key Lesson:
&lt;/h3&gt;

&lt;p&gt;👉 “It works on my machine” is not enough in cloud engineering.&lt;/p&gt;

&lt;p&gt;Environment consistency matters.&lt;/p&gt;

&lt;p&gt;This was one of the most important practical takeaways so far.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings from Day 18 💡
&lt;/h2&gt;

&lt;p&gt;Today’s project taught me:&lt;/p&gt;

&lt;p&gt;✔️ How event-driven serverless systems work&lt;br&gt;
✔️ How S3 triggers Lambda in real workflows&lt;br&gt;
✔️ Why least privilege IAM matters&lt;br&gt;
✔️ How Terraform simplifies complex deployments&lt;br&gt;
✔️ Why dependency packaging is critical&lt;/p&gt;

&lt;p&gt;This was one of the clearest examples yet of Terraform enabling repeatable, scalable cloud systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Terraform Made This Easier
&lt;/h2&gt;

&lt;p&gt;Without Terraform, setting this up manually would involve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating buckets&lt;/li&gt;
&lt;li&gt;Uploading Lambda code&lt;/li&gt;
&lt;li&gt;Configuring IAM roles&lt;/li&gt;
&lt;li&gt;Configuring event triggers&lt;/li&gt;
&lt;li&gt;Setting up logging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This would be:&lt;/p&gt;

&lt;p&gt;❌ Time-consuming&lt;br&gt;
❌ Error-prone&lt;br&gt;
❌ Hard to reproduce&lt;/p&gt;

&lt;p&gt;Terraform made the entire setup:&lt;/p&gt;

&lt;p&gt;✅ Repeatable&lt;br&gt;
✅ Version-controlled&lt;br&gt;
✅ Easy to destroy / recreate&lt;/p&gt;

&lt;p&gt;This is exactly why IaC is such a game changer.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;Future improvements I’d like to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding image watermarking&lt;/li&gt;
&lt;li&gt;Using Step Functions for complex workflows&lt;/li&gt;
&lt;li&gt;Integrating API Gateway for direct uploads&lt;/li&gt;
&lt;li&gt;Setting lifecycle rules on processed images&lt;/li&gt;
&lt;li&gt;Adding alerts for failed Lambda runs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Excited to keep building.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 18 was one of the most practical and rewarding milestones in this challenge so far.&lt;/p&gt;

&lt;p&gt;This project showed me how Terraform can be used not just to provision resources, but to automate intelligent business workflows.&lt;/p&gt;

&lt;p&gt;Serverless architectures are efficient, scalable, and increasingly relevant in modern cloud systems — and building one from scratch was a huge confidence boost.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform, I highly recommend exploring serverless projects like this. They teach infrastructure, automation, debugging, and cloud design all at once.&lt;/p&gt;

&lt;p&gt;Have you built event-driven serverless workflows with Terraform? I’d love to hear your experiences.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Mastering Blue-Green Deployments with Terraform &amp; AWS Elastic Beanstalk</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Tue, 14 Apr 2026 10:33:51 +0000</pubDate>
      <link>https://dev.to/vatul16/mastering-blue-green-deployments-with-terraform-aws-elastic-beanstalk-3gpm</link>
      <guid>https://dev.to/vatul16/mastering-blue-green-deployments-with-terraform-aws-elastic-beanstalk-3gpm</guid>
      <description>&lt;h2&gt;
  
  
  Achieving Zero-Downtime Deployments with Infrastructure as Code 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 17 was one of the most exciting milestones so far: implementing a &lt;strong&gt;Blue-Green Deployment strategy using Terraform and AWS Elastic Beanstalk&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This project took my Terraform journey beyond provisioning infrastructure into the world of &lt;strong&gt;release engineering, deployment safety, and production reliability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Until now, I had focused on building resources and automating cloud workflows. But today’s challenge taught me something even more valuable:&lt;/p&gt;

&lt;p&gt;👉 How to deploy application updates without downtime and with instant rollback capability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Blue-Green Deployment Matters
&lt;/h2&gt;

&lt;p&gt;In production systems, every deployment introduces risk.&lt;/p&gt;

&lt;p&gt;Common challenges with traditional deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Temporary downtime during updates&lt;/li&gt;
&lt;li&gt;Risk of failed releases affecting users&lt;/li&gt;
&lt;li&gt;Difficult rollbacks&lt;/li&gt;
&lt;li&gt;Limited testing in production-like conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where &lt;strong&gt;Blue-Green Deployment&lt;/strong&gt; becomes a game-changer.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Blue-Green Deployment?
&lt;/h3&gt;

&lt;p&gt;Blue-Green is a deployment strategy where you maintain &lt;strong&gt;two identical production environments&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Blue Environment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Current live production version&lt;/li&gt;
&lt;li&gt;Serving real users&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🟢 Green Environment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;New version deployment&lt;/li&gt;
&lt;li&gt;Used for testing and validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once Green is validated:&lt;/p&gt;

&lt;p&gt;👉 Traffic is switched from Blue to Green instantly.&lt;/p&gt;

&lt;p&gt;If something breaks:&lt;/p&gt;

&lt;p&gt;👉 Rollback is as simple as switching traffic back.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Goal 🎯
&lt;/h2&gt;

&lt;p&gt;The goal for today’s project was to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy two versions of an application (v1 and v2)&lt;/li&gt;
&lt;li&gt;Host them in separate Elastic Beanstalk environments&lt;/li&gt;
&lt;li&gt;Manage infrastructure using Terraform&lt;/li&gt;
&lt;li&gt;Perform a zero-downtime DNS cutover&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project was a perfect mix of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform provisioning&lt;/li&gt;
&lt;li&gt;AWS application hosting&lt;/li&gt;
&lt;li&gt;Deployment strategy&lt;/li&gt;
&lt;li&gt;Risk mitigation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture Overview 🏗️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key AWS Components Used
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Amazon S3 for Application Packaging 📦
&lt;/h3&gt;

&lt;p&gt;I first packaged the following as ZIP archives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application version v1.0&lt;/li&gt;
&lt;li&gt;Application version v2.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were uploaded to an S3 bucket, which served as the source for Elastic Beanstalk deployments.&lt;/p&gt;
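&lt;p&gt;In Terraform terms, each artifact becomes an S3 object plus a registered application version. A sketch with illustrative names and paths:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_s3_object" "v1" {
  bucket = aws_s3_bucket.artifacts.id   # hypothetical artifact bucket
  key    = "app-v1.zip"
  source = "${path.module}/build/app-v1.zip"
}

# Register the uploaded ZIP as a deployable Elastic Beanstalk version
resource "aws_elastic_beanstalk_application_version" "v1" {
  name        = "v1-0"
  application = aws_elastic_beanstalk_application.app.name
  bucket      = aws_s3_bucket.artifacts.id
  key         = aws_s3_object.v1.key
}
&lt;/code&gt;&lt;/pre&gt;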

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;S3 acts as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Central artifact storage&lt;/li&gt;
&lt;li&gt;Version-controlled release source&lt;/li&gt;
&lt;li&gt;Secure deployment package repository&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. AWS Elastic Beanstalk Environments 🌱
&lt;/h3&gt;

&lt;p&gt;Terraform was used to provision:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Beanstalk Application&lt;/li&gt;
&lt;li&gt;Blue environment (live)&lt;/li&gt;
&lt;li&gt;Green environment (candidate release)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Elastic Beanstalk simplified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 provisioning&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;Auto scaling&lt;/li&gt;
&lt;li&gt;Health monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allowed me to focus more on deployment logic than server management.&lt;/p&gt;
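&lt;p&gt;Sketched in Terraform, Blue and Green are two environments of the same application that differ only in name and version label (the solution stack string below is a placeholder; use one currently supported by AWS):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_elastic_beanstalk_application" "app" {
  name = "bluegreen-demo"   # hypothetical name
}

resource "aws_elastic_beanstalk_environment" "blue" {
  name                = "demo-blue"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = "64bit Amazon Linux 2023 v4.3.0 running Python 3.11"
  version_label       = aws_elastic_beanstalk_application_version.v1.name
}

resource "aws_elastic_beanstalk_environment" "green" {
  name                = "demo-green"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = "64bit Amazon Linux 2023 v4.3.0 running Python 3.11"
  version_label       = aws_elastic_beanstalk_application_version.v2.name
}
&lt;/code&gt;&lt;/pre&gt;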




&lt;h3&gt;
  
  
  3. IAM Roles &amp;amp; Instance Profiles 🔐
&lt;/h3&gt;

&lt;p&gt;To make the environments work securely, Terraform also provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 IAM roles&lt;/li&gt;
&lt;li&gt;Instance profiles&lt;/li&gt;
&lt;li&gt;Required permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 access for artifacts&lt;/li&gt;
&lt;li&gt;Elastic Beanstalk environment operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A good reminder that security is always part of automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Terraform in Action ⚙️
&lt;/h2&gt;

&lt;p&gt;This project helped me apply Terraform in a real deployment workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Terraform Concepts Used:
&lt;/h3&gt;

&lt;p&gt;✔️ Resource provisioning for Elastic Beanstalk&lt;br&gt;
✔️ S3 object uploads for artifacts&lt;br&gt;
✔️ IAM role automation&lt;br&gt;
✔️ Environment lifecycle management&lt;br&gt;
✔️ Infrastructure consistency between Blue &amp;amp; Green&lt;/p&gt;

&lt;h3&gt;
  
  
  Biggest Benefit
&lt;/h3&gt;

&lt;p&gt;Because both environments were managed as code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuration stayed consistent&lt;/li&gt;
&lt;li&gt;Drift was minimized&lt;/li&gt;
&lt;li&gt;Deployment became repeatable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly why IaC matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Best Part: Zero-Downtime DNS Swap 🔄
&lt;/h2&gt;

&lt;p&gt;The highlight of today’s project was performing the actual traffic cutover.&lt;/p&gt;

&lt;p&gt;Once Green was deployed and tested:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I validated the new version&lt;/li&gt;
&lt;li&gt;Confirmed health checks&lt;/li&gt;
&lt;li&gt;Swapped the CNAME / DNS routing&lt;/li&gt;
&lt;/ul&gt;
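&lt;p&gt;The swap itself is a single Elastic Beanstalk API call. One way to drive it, wrapped in a &lt;code&gt;null_resource&lt;/code&gt; purely for illustration (it can equally be run straight from a shell; environment names are placeholders and the AWS CLI must be available locally):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "null_resource" "cname_swap" {
  # Re-run on demand, e.g. terraform apply -replace=null_resource.cname_swap
  provisioner "local-exec" {
    command = "aws elasticbeanstalk swap-environment-cnames --source-environment-name demo-blue --destination-environment-name demo-green"
  }
}
&lt;/code&gt;&lt;/pre&gt;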

&lt;h3&gt;
  
  
  Result:
&lt;/h3&gt;

&lt;p&gt;✅ Users experienced zero downtime&lt;br&gt;
✅ Traffic shifted instantly&lt;br&gt;
✅ No service interruption&lt;/p&gt;

&lt;p&gt;This was one of the most satisfying hands-on moments in this challenge so far.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings from Day 17 💡
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Downtime Can Be Avoided
&lt;/h3&gt;

&lt;p&gt;Production deployments don’t have to impact users.&lt;/p&gt;

&lt;p&gt;Blue-Green strategies provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safer releases&lt;/li&gt;
&lt;li&gt;Better customer experience&lt;/li&gt;
&lt;li&gt;Lower operational risk&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Rollbacks Should Be Simple
&lt;/h3&gt;

&lt;p&gt;One of the strongest lessons:&lt;/p&gt;

&lt;p&gt;👉 Good systems are not just deployable — they are recoverable.&lt;/p&gt;

&lt;p&gt;If Green fails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch traffic back to Blue&lt;/li&gt;
&lt;li&gt;Restore production quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This kind of safety net is essential in real systems.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Testing Matters
&lt;/h3&gt;

&lt;p&gt;Green environments allow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smoke tests&lt;/li&gt;
&lt;li&gt;Health checks&lt;/li&gt;
&lt;li&gt;Validation before release&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces bad deployments significantly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real-World Extensions 🔥
&lt;/h2&gt;

&lt;p&gt;To make this production-grade, future improvements could include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route 53 weighted routing&lt;/li&gt;
&lt;li&gt;Automated DNS cutover via Terraform / CLI&lt;/li&gt;
&lt;li&gt;CI/CD integration with GitHub Actions / Jenkins&lt;/li&gt;
&lt;li&gt;Canary deployments&lt;/li&gt;
&lt;li&gt;Monitoring with CloudWatch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are areas I’m excited to explore next.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 17 was a major mindset shift.&lt;/p&gt;

&lt;p&gt;This project showed me that DevOps is not just about creating infrastructure — it’s about designing systems that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reliable&lt;/li&gt;
&lt;li&gt;Recoverable&lt;/li&gt;
&lt;li&gt;Safe to change&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Blue-Green deployment is one of the clearest examples of how cloud engineering directly improves user experience.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform or AWS, I highly recommend trying a project like this. It teaches both technical depth and deployment discipline.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Scaling IAM User Management with Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Tue, 14 Apr 2026 09:16:23 +0000</pubDate>
      <link>https://dev.to/vatul16/scaling-iam-user-management-with-terraform-gc5</link>
      <guid>https://dev.to/vatul16/scaling-iam-user-management-with-terraform-gc5</guid>
      <description>&lt;h2&gt;
  
  
  Automating User Onboarding and Access Control at Scale 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 16 shifted from infrastructure provisioning into something just as critical in real-world DevOps: &lt;strong&gt;identity and access management (IAM) at scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Today’s hands-on project focused on automating &lt;strong&gt;AWS IAM user creation, login setup, tagging, and group assignment&lt;/strong&gt; using Terraform.&lt;/p&gt;

&lt;p&gt;This was a powerful reminder that Infrastructure as Code is not only about deploying servers and networks — it’s also about standardizing how people securely interact with cloud systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Problem: Manual IAM Doesn't Scale
&lt;/h2&gt;

&lt;p&gt;In many organizations, onboarding users manually through the AWS Console leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human errors&lt;/li&gt;
&lt;li&gt;Inconsistent naming&lt;/li&gt;
&lt;li&gt;Delayed access provisioning&lt;/li&gt;
&lt;li&gt;Poor auditability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As teams grow, this process becomes inefficient and risky.&lt;/p&gt;

&lt;p&gt;Terraform solves this by making IAM onboarding:&lt;/p&gt;

&lt;p&gt;✅ Repeatable&lt;br&gt;
✅ Scalable&lt;br&gt;
✅ Auditable&lt;br&gt;
✅ Secure-by-design&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Goal 🎯
&lt;/h2&gt;

&lt;p&gt;The goal for today’s project was simple:&lt;/p&gt;

&lt;p&gt;👉 Automatically provision multiple IAM users from a CSV file and manage access dynamically.&lt;/p&gt;

&lt;p&gt;This included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bulk user creation&lt;/li&gt;
&lt;li&gt;Naming standardization&lt;/li&gt;
&lt;li&gt;Metadata tagging&lt;/li&gt;
&lt;li&gt;Login profile setup&lt;/li&gt;
&lt;li&gt;Group assignment based on role/department&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture &amp;amp; Workflow ⚙️
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. CSV Data Parsing with &lt;code&gt;csvdecode()&lt;/code&gt; 📄
&lt;/h2&gt;

&lt;p&gt;The first step was handling structured user input.&lt;/p&gt;

&lt;p&gt;I created a CSV file containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First name&lt;/li&gt;
&lt;li&gt;Last name&lt;/li&gt;
&lt;li&gt;Department&lt;/li&gt;
&lt;li&gt;Role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Terraform’s built-in &lt;code&gt;csvdecode()&lt;/code&gt; function, I converted the CSV into a list of maps that Terraform could iterate over.&lt;/p&gt;
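&lt;p&gt;The parsing step is a one-liner in a &lt;code&gt;locals&lt;/code&gt; block (file path and column names are assumptions matching the CSV described above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;locals {
  # users.csv header: first_name,last_name,department,role
  # csvdecode() turns the file into a list of maps, one per row
  users = csvdecode(file("${path.module}/users.csv"))
}
&lt;/code&gt;&lt;/pre&gt;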

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;This approach makes onboarding easy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Just update the CSV&lt;/li&gt;
&lt;li&gt;Terraform handles the rest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Perfect for HR / DevOps collaboration.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Bulk User Provisioning with &lt;code&gt;for_each&lt;/code&gt; 🔁
&lt;/h2&gt;

&lt;p&gt;Instead of manually creating IAM users one by one, I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;for_each&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Dynamic resource blocks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allowed Terraform to create multiple users in a single apply.&lt;/p&gt;
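&lt;p&gt;A hedged sketch of the pattern, assuming the &lt;code&gt;local.users&lt;/code&gt; list produced by &lt;code&gt;csvdecode()&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_iam_user" "this" {
  # Key by a unique, stable identifier so adding or removing a
  # CSV row only touches that one user
  for_each = { for u in local.users : "${u.first_name}.${u.last_name}" =&gt; u }

  name = lower(each.key)
  tags = {
    Department = each.value.department
    Role       = each.value.role
  }
}
&lt;/code&gt;&lt;/pre&gt;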

&lt;h3&gt;
  
  
  Benefits:
&lt;/h3&gt;

&lt;p&gt;✔️ No duplicate code&lt;br&gt;
✔️ Faster onboarding&lt;br&gt;
✔️ Easier scaling&lt;/p&gt;

&lt;p&gt;This is exactly where Terraform shines.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Dynamic Naming &amp;amp; Standardized Tags 🏷️
&lt;/h2&gt;

&lt;p&gt;To enforce consistency, I used Terraform functions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;lower()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;substr()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Michael Scott → &lt;code&gt;mscott&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
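&lt;p&gt;That transformation can be expressed directly with those two functions (variable names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;locals {
  # "Michael" + "Scott" becomes "mscott":
  # first initial + last name, lowercased
  username = lower("${substr(var.first_name, 0, 1)}${var.last_name}")
}
&lt;/code&gt;&lt;/pre&gt;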

&lt;p&gt;I also added tags such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Department&lt;/li&gt;
&lt;li&gt;Role&lt;/li&gt;
&lt;li&gt;Owner&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Tags Matter
&lt;/h3&gt;

&lt;p&gt;Tags improve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost visibility&lt;/li&gt;
&lt;li&gt;Auditing&lt;/li&gt;
&lt;li&gt;Access control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was a great exercise in combining automation with governance.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Secure Login Profiles 🔐
&lt;/h2&gt;

&lt;p&gt;To make the users immediately usable, I provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;aws_iam_user_login_profile&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;password_reset_required = true&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures users must reset passwords on first login.&lt;/p&gt;
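&lt;p&gt;A minimal sketch, assuming a map of users created elsewhere with &lt;code&gt;for_each&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_iam_user_login_profile" "this" {
  for_each = aws_iam_user.this   # hypothetical user map

  user                    = each.value.name
  password_reset_required = true

  # Without a pgp_key, the generated password lands in Terraform
  # state; acceptable for a demo, not for production.
}
&lt;/code&gt;&lt;/pre&gt;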

&lt;h3&gt;
  
  
  Security Lesson
&lt;/h3&gt;

&lt;p&gt;While outputs were used for learning/demo purposes, this reinforced an important point:&lt;/p&gt;

&lt;p&gt;👉 Sensitive credentials should never be exposed carelessly.&lt;/p&gt;

&lt;p&gt;In production, this should be paired with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Secrets Manager&lt;/li&gt;
&lt;li&gt;HashiCorp Vault&lt;/li&gt;
&lt;li&gt;Secure password delivery workflows&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  5. Dynamic Group Assignment Based on Role 🧠
&lt;/h2&gt;

&lt;p&gt;One of the most exciting parts of today’s project was automating IAM group membership.&lt;/p&gt;

&lt;p&gt;Instead of manually assigning users to groups:&lt;/p&gt;

&lt;p&gt;I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;for_each&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Conditional expressions&lt;/li&gt;
&lt;li&gt;Tag-based logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Users tagged as &lt;code&gt;manager&lt;/code&gt; → Manager group&lt;/li&gt;
&lt;li&gt;Finance users → Finance access group&lt;/li&gt;
&lt;/ul&gt;
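&lt;p&gt;One way to sketch that tag-driven assignment (the user map and group names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource "aws_iam_user_group_membership" "this" {
  for_each = aws_iam_user.this   # hypothetical map of provisioned users

  user = each.value.name

  # Managers join a managers group on top of their department group
  groups = (
    each.value.tags["Role"] == "manager"
    ? ["managers", lower(each.value.tags["Department"])]
    : [lower(each.value.tags["Department"])]
  )
}
&lt;/code&gt;&lt;/pre&gt;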

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;This makes onboarding smarter by:&lt;/p&gt;

&lt;p&gt;✔️ Reducing manual work&lt;br&gt;
✔️ Enforcing policy automatically&lt;br&gt;
✔️ Improving consistency&lt;/p&gt;

&lt;p&gt;This felt like true Infrastructure as Code in action.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways 💡
&lt;/h2&gt;

&lt;p&gt;Day 16 taught me that DevOps is not just about infrastructure resources — it’s also about people, permissions, and secure workflows.&lt;/p&gt;

&lt;p&gt;Today’s biggest lessons:&lt;/p&gt;

&lt;p&gt;✔️ IAM automation improves speed and consistency&lt;br&gt;
✔️ Terraform can simplify complex onboarding workflows&lt;br&gt;
✔️ Security must always be part of automation design&lt;br&gt;
✔️ Dynamic logic makes systems scalable&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;To make this production-ready, my next steps would include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applying least-privilege IAM policies&lt;/li&gt;
&lt;li&gt;Enabling MFA for all users&lt;/li&gt;
&lt;li&gt;Integrating with AWS SSO / IAM Identity Center&lt;/li&gt;
&lt;li&gt;Adding secure secret distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Excited to keep building and improving.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 16 was one of the most practical projects so far because it connected Terraform directly to real-world operational workflows.&lt;/p&gt;

&lt;p&gt;Automating IAM user management showed me how Infrastructure as Code can improve not just systems, but also team productivity and security posture.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform, don’t stop at servers and networks — explore IAM automation too. It’s one of the most valuable skills in cloud engineering.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Mastering AWS VPC Peering with Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:17:23 +0000</pubDate>
      <link>https://dev.to/vatul16/mastering-aws-vpc-peering-with-terraform-2dhp</link>
      <guid>https://dev.to/vatul16/mastering-aws-vpc-peering-with-terraform-2dhp</guid>
      <description>&lt;h2&gt;
  
  
  Connecting Networks Across Regions with Infrastructure as Code 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 15 marked a major milestone: reaching the halfway point and stepping into one of the most practical areas of cloud engineering — &lt;strong&gt;AWS networking with VPC Peering&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Until now, much of my Terraform journey focused on provisioning resources and learning Terraform concepts. But today’s project shifted from deploying isolated services to solving a real-world networking challenge: &lt;strong&gt;securely connecting two private AWS VPCs across different regions using Terraform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This project was a strong reminder that Infrastructure as Code is not just about automation — it’s about understanding architecture, networking principles, and system design.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why VPC Peering Matters
&lt;/h2&gt;

&lt;p&gt;In real-world cloud environments, applications and services often live in different VPCs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate environments (Dev / Staging / Prod)&lt;/li&gt;
&lt;li&gt;Multi-account setups&lt;/li&gt;
&lt;li&gt;Regional redundancy architectures&lt;/li&gt;
&lt;li&gt;Shared services and private APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To allow these isolated networks to communicate securely over private IP addresses, AWS provides &lt;strong&gt;VPC Peering&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What VPC Peering Does
&lt;/h3&gt;

&lt;p&gt;VPC Peering creates a &lt;strong&gt;private connection between two VPCs&lt;/strong&gt;, enabling resources in each network to talk to each other as if they were on the same local network.&lt;/p&gt;

&lt;p&gt;Benefits include:&lt;/p&gt;

&lt;p&gt;✅ Private communication (no internet exposure)&lt;br&gt;
✅ Low latency&lt;br&gt;
✅ Better security&lt;br&gt;
✅ Simplified architecture&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Overview 🏗️
&lt;/h2&gt;

&lt;p&gt;For this mini-project, I provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VPC 1 in US East (N. Virginia)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VPC 2 in US West (Oregon)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;EC2 instances in both VPCs&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VPC Peering connection between them&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Route table updates for bidirectional traffic&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;p&gt;👉 Allow private communication between instances in both regions.&lt;/p&gt;

&lt;p&gt;And yes — successfully testing cross-region private connectivity felt amazing. 🎯&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings from Day 15
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Multi-Region Terraform with Provider Aliases 🌍
&lt;/h2&gt;

&lt;p&gt;One of the biggest learnings today was managing resources across multiple AWS regions in a single Terraform project.&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;provider aliases&lt;/strong&gt;, I configured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;aws.primary&lt;/code&gt; → US East 1&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws.secondary&lt;/code&gt; → US West 2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made it possible to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy resources cleanly in separate regions&lt;/li&gt;
&lt;li&gt;Keep code structured and maintainable&lt;/li&gt;
&lt;li&gt;Avoid duplicated configurations&lt;/li&gt;
&lt;/ul&gt;
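&lt;p&gt;The alias pattern looks roughly like this (CIDR blocks are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "us-west-2"
}

resource "aws_vpc" "east" {
  provider   = aws.primary
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "west" {
  provider   = aws.secondary
  cidr_block = "10.1.0.0/16"   # must not overlap with the peer VPC
}
&lt;/code&gt;&lt;/pre&gt;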

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;In enterprise cloud environments, multi-region deployments are common for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disaster recovery&lt;/li&gt;
&lt;li&gt;High availability&lt;/li&gt;
&lt;li&gt;Geo-distributed applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding provider aliases is essential for scalable Terraform projects.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. CIDR Planning is Critical 🧠
&lt;/h2&gt;

&lt;p&gt;One of the first rules of VPC Peering:&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;CIDR ranges must NOT overlap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If both VPCs share the same IP range:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route tables conflict&lt;/li&gt;
&lt;li&gt;Traffic breaks&lt;/li&gt;
&lt;li&gt;Connectivity fails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reinforced an important lesson:&lt;/p&gt;

&lt;p&gt;👉 Good networking starts with good IP planning.&lt;/p&gt;

&lt;p&gt;Even with Terraform, automation cannot fix poor architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. VPC Peering is NOT Transitive ⚠️
&lt;/h2&gt;

&lt;p&gt;A major cloud networking concept I learned:&lt;/p&gt;

&lt;p&gt;If:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC A ↔ VPC B&lt;/li&gt;
&lt;li&gt;VPC B ↔ VPC C&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That does &lt;strong&gt;NOT&lt;/strong&gt; mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC A can talk to VPC C&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is called &lt;strong&gt;non-transitive peering&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;For larger enterprise setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC Peering can become hard to manage&lt;/li&gt;
&lt;li&gt;Mesh architectures get complex quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where services like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Transit Gateway&lt;/li&gt;
&lt;li&gt;Hub-and-spoke networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;become important.&lt;/p&gt;

&lt;p&gt;Understanding this limitation was one of today’s biggest takeaways.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Route Tables Make the Magic Happen 🛣️
&lt;/h2&gt;

&lt;p&gt;Creating the peering connection is only half the setup.&lt;/p&gt;

&lt;p&gt;The real work is in routing.&lt;/p&gt;

&lt;p&gt;For both VPCs, I had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update route tables&lt;/li&gt;
&lt;li&gt;Add peer VPC CIDR blocks&lt;/li&gt;
&lt;li&gt;Point traffic to the peering connection ID&lt;/li&gt;
&lt;/ul&gt;
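&lt;p&gt;These steps can be sketched as follows, assuming the two aliased providers and VPCs from earlier and the default route tables (a real setup would likely use dedicated route tables):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Requester side (us-east-1); cross-region, so peer_region is required
resource "aws_vpc_peering_connection" "east_west" {
  provider    = aws.primary
  vpc_id      = aws_vpc.east.id
  peer_vpc_id = aws_vpc.west.id
  peer_region = "us-west-2"
}

# Accepter side (us-west-2)
resource "aws_vpc_peering_connection_accepter" "west" {
  provider                  = aws.secondary
  vpc_peering_connection_id = aws_vpc_peering_connection.east_west.id
  auto_accept               = true
}

# Each side needs a route pointing the peer's CIDR at the connection
resource "aws_route" "east_to_west" {
  provider                  = aws.primary
  route_table_id            = aws_vpc.east.main_route_table_id
  destination_cidr_block    = "10.1.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.east_west.id
}

resource "aws_route" "west_to_east" {
  provider                  = aws.secondary
  route_table_id            = aws_vpc.west.main_route_table_id
  destination_cidr_block    = "10.0.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.east_west.id
}
&lt;/code&gt;&lt;/pre&gt;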

&lt;p&gt;Without correct routes:&lt;/p&gt;

&lt;p&gt;Terraform may succeed… but traffic will still fail.&lt;/p&gt;

&lt;p&gt;This taught me a practical lesson:&lt;/p&gt;

&lt;p&gt;👉 Cloud networking is as much about traffic flow as it is about resource creation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hands-On Testing &amp;amp; Validation 🔍
&lt;/h2&gt;

&lt;p&gt;The most satisfying part of today’s project was testing connectivity.&lt;/p&gt;

&lt;p&gt;After setup, I verified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ping between instances&lt;/li&gt;
&lt;li&gt;Curl requests to private IPs&lt;/li&gt;
&lt;li&gt;Cross-region communication over private network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seeing an EC2 instance in Oregon reach a web server in Virginia privately was an exciting milestone.&lt;/p&gt;

&lt;p&gt;This was one of the clearest demonstrations so far of how powerful Terraform + AWS can be together.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways 💡
&lt;/h2&gt;

&lt;p&gt;Day 15 taught me more than just Terraform syntax.&lt;/p&gt;

&lt;p&gt;It reinforced:&lt;/p&gt;

&lt;p&gt;✔️ Cloud networking fundamentals matter&lt;br&gt;
✔️ Architecture decisions impact scalability&lt;br&gt;
✔️ Debugging builds confidence&lt;br&gt;
✔️ Terraform is powerful when paired with strong design principles&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;Reaching the halfway point feels great, but there’s still a lot ahead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security hardening&lt;/li&gt;
&lt;li&gt;Modules &amp;amp; reusable architecture&lt;/li&gt;
&lt;li&gt;Advanced state management&lt;/li&gt;
&lt;li&gt;Production deployment workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Excited for the second half of this challenge.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 15 was one of the most practical and rewarding days of my Terraform journey so far.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform, don’t stop at basic resource creation. Start exploring networking, debugging routes, and understanding how systems communicate.&lt;/p&gt;

&lt;p&gt;That’s where real cloud engineering begins.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Hosting a Static Website on AWS with S3 and CloudFront using Terraform</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Sat, 11 Apr 2026 17:16:39 +0000</pubDate>
      <link>https://dev.to/vatul16/hosting-a-static-website-on-aws-with-s3-and-cloudfront-using-terraform-174b</link>
      <guid>https://dev.to/vatul16/hosting-a-static-website-on-aws-with-s3-and-cloudfront-using-terraform-174b</guid>
      <description>&lt;h2&gt;
  
  
  From Terraform Basics to Real-World Deployment 🚀
&lt;/h2&gt;

&lt;p&gt;As part of my &lt;strong&gt;30 Days of AWS Terraform challenge&lt;/strong&gt;, Day 14 marked a major milestone in my learning journey: deploying a &lt;strong&gt;secure, scalable static website on AWS using Terraform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This project brought together everything I’ve learned so far—Terraform resources, variables, loops, functions, data sources, and debugging—to build something practical and production-relevant.&lt;/p&gt;

&lt;p&gt;Instead of just creating isolated AWS resources, I built a complete architecture for static website hosting using &lt;strong&gt;Amazon S3 + CloudFront&lt;/strong&gt;, fully managed through Infrastructure as Code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Not Just Host Directly on S3?
&lt;/h2&gt;

&lt;p&gt;Hosting a static website directly from an S3 bucket is simple, but it has limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public bucket exposure increases security risks&lt;/li&gt;
&lt;li&gt;No built-in global caching&lt;/li&gt;
&lt;li&gt;Slower delivery for international users&lt;/li&gt;
&lt;li&gt;Harder to scale professionally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve this, I implemented a better architecture:&lt;/p&gt;

&lt;p&gt;✅ Private S3 bucket for secure storage&lt;br&gt;
✅ CloudFront CDN for global delivery&lt;br&gt;
✅ Origin Access Control (OAC) for secure access&lt;br&gt;
✅ Terraform automation for repeatability&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Overview 🏗️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key AWS Components Used
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Private S3 Bucket
&lt;/h3&gt;

&lt;p&gt;I created an S3 bucket to store all website assets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML files&lt;/li&gt;
&lt;li&gt;CSS files&lt;/li&gt;
&lt;li&gt;JavaScript files&lt;/li&gt;
&lt;li&gt;Images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Important: &lt;strong&gt;Public access was fully blocked&lt;/strong&gt; to ensure security.&lt;/p&gt;
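
&lt;p&gt;A minimal Terraform sketch of this step (the bucket name and the resource label &lt;code&gt;site&lt;/code&gt; are illustrative, not from the actual project):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Private bucket for the website assets (name is illustrative)
resource "aws_s3_bucket" "site" {
  bucket = "my-static-site-demo-bucket"
}

# Block every form of public access to the bucket
resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;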

&lt;h3&gt;
  
  
  2. Origin Access Control (OAC)
&lt;/h3&gt;

&lt;p&gt;Instead of using the older Origin Access Identity (OAI) method, I configured &lt;strong&gt;Origin Access Control (OAC)&lt;/strong&gt;, which securely allows CloudFront to fetch content from the private bucket.&lt;/p&gt;
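
&lt;p&gt;The OAC resource itself is short; a sketch (the &lt;code&gt;name&lt;/code&gt; value is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "site-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always" # sign every request to the origin
  signing_protocol                  = "sigv4"
}
&lt;/code&gt;&lt;/pre&gt;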

&lt;h3&gt;
  
  
  3. Bucket Policy
&lt;/h3&gt;

&lt;p&gt;Using Terraform, I wrote a bucket policy that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows only CloudFront to access bucket objects&lt;/li&gt;
&lt;li&gt;Prevents direct public access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This follows AWS security best practices.&lt;/p&gt;
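
&lt;p&gt;A sketch of such a policy in Terraform (the resource labels &lt;code&gt;aws_s3_bucket.site&lt;/code&gt; and &lt;code&gt;aws_cloudfront_distribution.cdn&lt;/code&gt; are assumed names for this example):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;data "aws_iam_policy_document" "site" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.site.arn}/*"]

    # Only the CloudFront service principal may read objects...
    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    # ...and only on behalf of this specific distribution
    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.cdn.arn]
    }
  }
}

resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = data.aws_iam_policy_document.site.json
}
&lt;/code&gt;&lt;/pre&gt;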

&lt;h3&gt;
  
  
  4. CloudFront Distribution
&lt;/h3&gt;

&lt;p&gt;CloudFront was used as the CDN layer to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cache files globally&lt;/li&gt;
&lt;li&gt;Improve load times&lt;/li&gt;
&lt;li&gt;Reduce latency&lt;/li&gt;
&lt;li&gt;Add edge security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made the project feel much closer to real-world production hosting.&lt;/p&gt;
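
&lt;p&gt;A trimmed-down sketch of the distribution wiring S3 and the OAC together (resource labels are illustrative; the &lt;code&gt;cache_policy_id&lt;/code&gt; shown is the AWS-managed &lt;em&gt;CachingOptimized&lt;/em&gt; policy):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_cloudfront_distribution" "cdn" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name              = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id                = "s3-site"
    origin_access_control_id = aws_cloudfront_origin_access_control.site.id
  }

  default_cache_behavior {
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = "658327ea-f89d-4fab-a63d-7e88639e58f6" # CachingOptimized
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
&lt;/code&gt;&lt;/pre&gt;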




&lt;h2&gt;
  
  
  Terraform Concepts Applied ⚙️
&lt;/h2&gt;

&lt;p&gt;This project helped me apply several advanced Terraform concepts:&lt;/p&gt;

&lt;h3&gt;
  
  
  🔹 for_each + fileset
&lt;/h3&gt;

&lt;p&gt;One of the coolest parts was automating file uploads.&lt;/p&gt;

&lt;p&gt;Instead of uploading files manually, I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fileset()&lt;/code&gt; to scan all files in the project folder&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;for_each&lt;/code&gt; to upload them dynamically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made the setup scalable and clean.&lt;/p&gt;
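
&lt;p&gt;The pattern looks roughly like this (the &lt;code&gt;site/&lt;/code&gt; directory name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_s3_object" "assets" {
  # One object per file found under site/, recursively
  for_each = fileset("${path.module}/site", "**")

  bucket = aws_s3_bucket.site.id
  key    = each.value
  source = "${path.module}/site/${each.value}"

  # Re-upload whenever the file content changes
  etag = filemd5("${path.module}/site/${each.value}")
}
&lt;/code&gt;&lt;/pre&gt;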

&lt;h3&gt;
  
  
  🔹 MIME Type Handling
&lt;/h3&gt;

&lt;p&gt;Different file types need proper content types.&lt;/p&gt;

&lt;p&gt;Using Terraform functions like &lt;code&gt;lookup()&lt;/code&gt;, I dynamically mapped:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.html&lt;/code&gt; → text/html&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.css&lt;/code&gt; → text/css&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.js&lt;/code&gt; → application/javascript&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensured browsers rendered the site correctly.&lt;/p&gt;
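
&lt;p&gt;One way to express that mapping (a sketch; the map can be extended with any extensions your site uses):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight hcl"&gt;&lt;code&gt;locals {
  mime_types = {
    ".html" = "text/html"
    ".css"  = "text/css"
    ".js"   = "application/javascript"
    ".png"  = "image/png"
    ".jpg"  = "image/jpeg"
  }
}

# Inside the aws_s3_object resource:
#   content_type = try(
#     lookup(local.mime_types, regex("\\.[^.]+$", each.value), "application/octet-stream"),
#     "application/octet-stream" # regex() errors on files with no extension; try() catches that
#   )
&lt;/code&gt;&lt;/pre&gt;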

&lt;h3&gt;
  
  
  🔹 Modern AWS Provider Practices
&lt;/h3&gt;

&lt;p&gt;While working on the project, I faced deprecation issues.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Older resource: &lt;code&gt;aws_s3_bucket_object&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Updated resource: &lt;code&gt;aws_s3_object&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Debugging these issues taught me an important lesson:&lt;/p&gt;

&lt;p&gt;👉 Always refer to the latest Terraform provider documentation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Challenges &amp;amp; Learnings 💡
&lt;/h2&gt;

&lt;p&gt;This project was not just about deployment—it was about troubleshooting and problem-solving.&lt;/p&gt;

&lt;p&gt;Some valuable lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing correct S3 bucket policies is critical&lt;/li&gt;
&lt;li&gt;Small syntax issues can break deployments&lt;/li&gt;
&lt;li&gt;AWS provider updates matter a lot&lt;/li&gt;
&lt;li&gt;Debugging Terraform errors improves confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One big takeaway:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Errors are not setbacks—they are part of the learning process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every issue I solved gave me a better understanding of AWS and Terraform internals.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next? 🔥
&lt;/h2&gt;

&lt;p&gt;This setup is already solid, but to make it production-ready, the next logical steps are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Future Enhancements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add a custom domain using Route 53&lt;/li&gt;
&lt;li&gt;Configure HTTPS with AWS ACM&lt;/li&gt;
&lt;li&gt;Build CI/CD pipelines for automatic deployment&lt;/li&gt;
&lt;li&gt;Add cache invalidation workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are areas I’m excited to explore next.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 14 was one of the most rewarding milestones in this challenge so far.&lt;/p&gt;

&lt;p&gt;This project showed me how Infrastructure as Code can be used not just to create resources, but to design secure, scalable, and professional cloud systems.&lt;/p&gt;

&lt;p&gt;If you’re learning Terraform, I highly recommend trying a project like this. It ties together so many foundational concepts and gives you hands-on experience with real-world architecture.&lt;/p&gt;

&lt;p&gt;I’d love to hear your thoughts:&lt;br&gt;
Have you hosted static websites using Terraform? Any best practices or lessons you’ve learned?&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Setting Up the ELK Stack on AWS EC2 for Log Monitoring</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Sat, 30 Aug 2025 12:26:46 +0000</pubDate>
      <link>https://dev.to/vatul16/setting-up-the-elk-stack-on-aws-ec2-for-log-monitoring-1eme</link>
      <guid>https://dev.to/vatul16/setting-up-the-elk-stack-on-aws-ec2-for-log-monitoring-1eme</guid>
      <description>&lt;p&gt;Welcome to this comprehensive guide on setting up the ELK Stack (Elasticsearch, Logstash, Kibana) on AWS EC2 instances. If you're managing applications in the cloud, especially Java-based ones, efficient log monitoring is crucial for debugging, performance analysis, and security. The ELK Stack, combined with Filebeat, provides a powerful, open-source solution for collecting, processing, visualizing, and analyzing logs in real-time.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll walk through a step-by-step setup using Ubuntu-based EC2 instances. This tutorial is based on a practical example involving a Java application, but the principles apply broadly. We'll cover everything from infrastructure provisioning to creating dashboards in Kibana. By the end, you'll have a fully functional log monitoring system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with access to EC2.&lt;/li&gt;
&lt;li&gt;Basic knowledge of SSH, Linux commands, and AWS networking (e.g., security groups).&lt;/li&gt;
&lt;li&gt;Three EC2 instances: one for the ELK server (Elasticsearch, Logstash, Kibana; use at least a t3.medium, since Elasticsearch alone needs more memory than a t3.micro provides), one for the client machine (a t3.micro is fine; it hosts the app and Filebeat), and an optional web server for testing.&lt;/li&gt;
&lt;li&gt;Ensure ports like 9200 (Elasticsearch), 5044 (Logstash), and 5601 (Kibana) are open in your security groups.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  1. Overview of the ELK Stack
&lt;/h2&gt;

&lt;p&gt;The ELK Stack is a collection of tools from Elastic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Elasticsearch:&lt;/strong&gt; A search and analytics engine that stores and indexes your logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logstash:&lt;/strong&gt; A data processing pipeline that ingests, transforms, and sends logs to Elasticsearch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kibana:&lt;/strong&gt; A web interface for visualizing and querying your logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filebeat:&lt;/strong&gt; A lightweight shipper that forwards logs from your application servers to Logstash.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup allows you to centralize logs from distributed systems like EC2, parse them into structured data, and gain insights through dashboards.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Infrastructure Setup
&lt;/h2&gt;

&lt;p&gt;Launch three Ubuntu EC2 instances:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ELK Server:&lt;/strong&gt; Hosts the core ELK components. Assign a public IP for Kibana access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Machine:&lt;/strong&gt; Runs your Java app and Filebeat. Use the ELK server's private IP for communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Server (Optional):&lt;/strong&gt; For simulating additional log sources.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Connect via SSH to each instance as the &lt;code&gt;ubuntu&lt;/code&gt; user. Update packages on all machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  3. Step-by-Step Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Install and Configure Elasticsearch (on ELK Server)
&lt;/h3&gt;

&lt;p&gt;Elasticsearch is the backbone for storing logs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install Java (required for Elasticsearch and Logstash):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-17-jre-headless &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Elasticsearch:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget &lt;span class="nt"&gt;-qO&lt;/span&gt; - https://artifacts.elastic.co/GPG-KEY-elasticsearch | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://artifacts.elastic.co/packages/7.x/apt stable main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/elastic-7.x.list

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;elasticsearch &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure Elasticsearch: Edit &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/elasticsearch/elasticsearch.yml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Add or modify:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;network.host: 0.0.0.0
cluster.name: my-cluster
node.name: node-1
discovery.type: single-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start and enable the service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start elasticsearch
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;elasticsearch
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status elasticsearch

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify it's running:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; GET &lt;span class="s2"&gt;"http://localhost:9200/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p7u3kmhf90sn3h47azt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p7u3kmhf90sn3h47azt.png" alt=" " width="668" height="334"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 2: Install and Configure Logstash (on ELK Server)
&lt;/h3&gt;

&lt;p&gt;Logstash processes incoming logs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install Logstash:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;logstash &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure Logstash: Edit &lt;code&gt;/etc/logstash/conf.d/logstash.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/logstash/conf.d/logstash.conf
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;input &lt;span class="o"&gt;{&lt;/span&gt;
  beats &lt;span class="o"&gt;{&lt;/span&gt;
    port &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; 5044
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

filter &lt;span class="o"&gt;{&lt;/span&gt;
  grok &lt;span class="o"&gt;{&lt;/span&gt;
    match &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"message"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"%{TIMESTAMP_ISO8601:log_timestamp} %{LOGLEVEL:log_level} %{GREEDYDATA:log_message}"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

output &lt;span class="o"&gt;{&lt;/span&gt;
  elasticsearch &lt;span class="o"&gt;{&lt;/span&gt;
    hosts &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:9200"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
    index &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"logs-%{+YYYY.MM.dd}"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
  stdout &lt;span class="o"&gt;{&lt;/span&gt; codec &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; rubydebug &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This config accepts logs from Beats (like Filebeat), parses them using Grok, and stores them in Elasticsearch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhzgfbb1he13igtibeg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhzgfbb1he13igtibeg6.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;
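
&lt;p&gt;For example, given a log line in the shape the Grok pattern expects, you would see structured fields like this (the line itself is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;2025-08-30 12:26:46 INFO Application started successfully

# Parsed by the Grok filter into:
#   log_timestamp =&amp;gt; "2025-08-30 12:26:46"
#   log_level     =&amp;gt; "INFO"
#   log_message   =&amp;gt; "Application started successfully"
&lt;/code&gt;&lt;/pre&gt;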
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start and enable the service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start logstash
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;logstash
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Allow traffic on port 5044:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 5044/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 3: Install and Configure Kibana (on ELK Server)
&lt;/h3&gt;

&lt;p&gt;Kibana provides the UI for log analysis.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install Kibana:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;kibana &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure Kibana: Edit &lt;code&gt;/etc/kibana/kibana.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/kibana/kibana.yml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Modify:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;server.host: &lt;span class="s2"&gt;"0.0.0.0"&lt;/span&gt;
elasticsearch.hosts: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:9200"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start and enable the service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start kibana
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;kibana
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Allow traffic on port 5601:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 5601/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access the Kibana dashboard: Open a browser and navigate to &lt;code&gt;http://&amp;lt;ELK_Server_Public_IP&amp;gt;:5601&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexut3851x4bxpc9eg87x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexut3851x4bxpc9eg87x.png" alt=" " width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 4: Install and Configure Filebeat (on Client Machine)
&lt;/h3&gt;

&lt;p&gt;Filebeat ships logs from your app to Logstash.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install Filebeat:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget &lt;span class="nt"&gt;-qO&lt;/span&gt; - https://artifacts.elastic.co/GPG-KEY-elasticsearch | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://artifacts.elastic.co/packages/7.x/apt stable main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/elastic-7.x.list

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update 

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;filebeat &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure Filebeat: Edit &lt;code&gt;/etc/filebeat/filebeat.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/filebeat/filebeat.yml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Modify:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;filebeat.inputs:
  - &lt;span class="nb"&gt;type&lt;/span&gt;: log
    enabled: &lt;span class="nb"&gt;true
    &lt;/span&gt;paths:
      - /home/ubuntu/JavaApp/target/app.log

output.logstash:
  hosts: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;ELK_Server_Private_IP&amp;gt;:5044"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start and enable the service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start filebeat
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;filebeat
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status filebeat
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify Filebeat:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;filebeat &lt;span class="nb"&gt;test &lt;/span&gt;output
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foueyfkxj55ov783mkya2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foueyfkxj55ov783mkya2.png" alt=" " width="624" height="210"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 5: Deploy a Java Application and Generate Logs (on Client Machine)
&lt;/h3&gt;

&lt;p&gt;To test, we'll use a sample Java app.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install Java if needed:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-17-jre-headless &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download and run a sample app:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-sample-simple/1.4.2.RELEASE/spring-boot-sample-simple-1.4.2.RELEASE.jar &lt;span class="nt"&gt;-O&lt;/span&gt; app.jar
&lt;span class="nb"&gt;nohup &lt;/span&gt;java &lt;span class="nt"&gt;-jar&lt;/span&gt; app.jar &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /home/ubuntu/JavaApp/target/app.log 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Generate test logs:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Test log entry &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /home/ubuntu/JavaApp/target/app.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 6: View and Analyze Logs in Kibana
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In Kibana, go to &lt;strong&gt;Discover&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;logs-*&lt;/code&gt; index pattern.&lt;/li&gt;
&lt;li&gt;Search for logs from your app, e.g., &lt;code&gt;log.file.path: "/home/ubuntu/JavaApp/target/app.log"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;View parsed fields like &lt;code&gt;log_timestamp&lt;/code&gt;, &lt;code&gt;log_level&lt;/code&gt;, and &lt;code&gt;log_message&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5a7of7oru14o81u8w5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5a7of7oru14o81u8w5j.png" alt=" " width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create visualizations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pie Chart: For log level distribution.&lt;/li&gt;
&lt;li&gt;Line Chart: For logs over time.&lt;/li&gt;
&lt;li&gt;Data Table: For a structured log view.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build a dashboard:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Dashboard&lt;/strong&gt; &amp;gt; &lt;strong&gt;Create Dashboard&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add your visualizations.&lt;/li&gt;
&lt;li&gt;Save as "Java Application Log Monitoring".&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've set up the ELK Stack on AWS EC2, integrated Filebeat for log shipping, and created a dashboard for real-time monitoring. This setup scales well—add more Filebeat instances for multi-app environments. For production, consider security enhancements like SSL, authentication, and backups.&lt;/p&gt;

&lt;p&gt;If you encounter issues, check service logs with &lt;code&gt;journalctl&lt;/code&gt; or Elastic's documentation. Happy logging!&lt;/p&gt;




&lt;h2&gt;
  
  
  ☕ Support My Work
&lt;/h2&gt;

&lt;p&gt;If you found this guide helpful, consider &lt;a href="https://buymeacoffee.com/vatul16" rel="noopener noreferrer"&gt;buying me a coffee&lt;/a&gt; to support my work!&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Common Data Loss Scenarios &amp; Solutions in Prisma Schema Changes</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Tue, 07 Jan 2025 16:18:35 +0000</pubDate>
      <link>https://dev.to/vatul16/common-data-loss-scenarios-solutions-in-prisma-schema-changes-52id</link>
      <guid>https://dev.to/vatul16/common-data-loss-scenarios-solutions-in-prisma-schema-changes-52id</guid>
      <description>&lt;h3&gt;
  
  
  Common Data Loss Scenarios &amp;amp; Solutions in Prisma Schema Changes
&lt;/h3&gt;

&lt;p&gt;When evolving a database schema using Prisma, care must be taken to ensure data integrity and avoid loss. Below, we explore common data loss scenarios and provide step-by-step solutions to address them effectively.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. &lt;strong&gt;Enum to String Conversion&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Converting an enum column to a string type can result in data inconsistencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;payedBy Payment_By @default(NONE)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;payedBy String?&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add a new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"Orders"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"payedBy_new"&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy data:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"Orders"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="nv"&gt;"payedBy_new"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"payedBy"&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drop old column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"Orders"&lt;/span&gt; &lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"payedBy"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rename the new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"Orders"&lt;/span&gt; &lt;span class="k"&gt;RENAME&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"payedBy_new"&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="nv"&gt;"payedBy"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
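
&lt;p&gt;Since PostgreSQL supports transactional DDL, the four steps above can be run as a single atomic migration, so a failure at any step rolls everything back (a sketch for this scenario):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt;BEGIN;
ALTER TABLE "Orders" ADD COLUMN "payedBy_new" TEXT;
UPDATE "Orders" SET "payedBy_new" = "payedBy"::text;
ALTER TABLE "Orders" DROP COLUMN "payedBy";
ALTER TABLE "Orders" RENAME COLUMN "payedBy_new" TO "payedBy";
COMMIT;
&lt;/code&gt;&lt;/pre&gt;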




&lt;h3&gt;
  
  
  2. &lt;strong&gt;Changing Column Type (e.g., Int to Decimal)&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Directly altering a column type can cause data loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;amount Int&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;amount Decimal&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add a new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"amount_new"&lt;/span&gt; &lt;span class="nb"&gt;DECIMAL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy data:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="nv"&gt;"amount_new"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;DECIMAL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drop old column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rename the new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;RENAME&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"amount_new"&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="nv"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  3. &lt;strong&gt;Making a Nullable Column Non-Nullable&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Enforcing a non-null constraint without handling existing &lt;code&gt;NULL&lt;/code&gt; values can break queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;email String?&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;email String&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Populate &lt;code&gt;NULL&lt;/code&gt; values with placeholders:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="nv"&gt;"email"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'placeholder@email.com'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nv"&gt;"email"&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alter the column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"email"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
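&lt;p&gt;As a sketch, both steps can be run inside a single transaction so that a failure partway through leaves the column unchanged (the table, column, and placeholder value are examples):&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt;BEGIN;
UPDATE "TableName" SET "email" = 'placeholder@email.com' WHERE "email" IS NULL;
ALTER TABLE "TableName" ALTER COLUMN "email" SET NOT NULL;
COMMIT;
&lt;/code&gt;&lt;/pre&gt;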




&lt;h3&gt;
  
  
  4. &lt;strong&gt;Changing JSON Structure&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Renaming or reshaping keys inside a JSON column does not touch existing rows, so old and new data shapes end up mixed in the same column.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;{oldField: "value"}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;{newField: "value"}&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add a temporary column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"metadata_new"&lt;/span&gt; &lt;span class="n"&gt;JSONB&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Transform the data:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="nv"&gt;"metadata_new"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jsonb_build_object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="s1"&gt;'newField'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="k"&gt;CASE&lt;/span&gt; &lt;span class="k"&gt;WHEN&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="s1"&gt;'oldField'&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;THEN&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="s1"&gt;'oldField'&lt;/span&gt; &lt;span class="k"&gt;ELSE&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;END&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drop the old column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rename the new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;RENAME&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"metadata_new"&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="nv"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
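&lt;p&gt;Before dropping the old column in step 3, it is worth sanity-checking that every populated &lt;code&gt;metadata&lt;/code&gt; value was transformed; a count of 0 means nothing was missed (names are placeholders):&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt;SELECT COUNT(*) FROM "TableName"
WHERE "metadata" IS NOT NULL AND "metadata_new" IS NULL;
&lt;/code&gt;&lt;/pre&gt;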




&lt;h3&gt;
  
  
  5. &lt;strong&gt;Array Type Changes&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Converting an array type (e.g., &lt;code&gt;String[]&lt;/code&gt; to &lt;code&gt;Int[]&lt;/code&gt;) can cause errors if the data types don’t align.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;tags String[]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;tags Int[]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add a new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"tags_new"&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Convert data:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="nv"&gt;"tags_new"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;NULLIF&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="nb"&gt;INTEGER&lt;/span&gt;
   &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="k"&gt;unnest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;
   &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;~&lt;/span&gt; &lt;span class="s1"&gt;'^[0-9]+$'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drop the old column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rename the new column:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;RENAME&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="nv"&gt;"tags_new"&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="nv"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
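&lt;p&gt;Because the conversion silently drops non-numeric entries, a quick check before step 3 (while both columns still exist) shows which rows lost values; no rows returned means every tag converted cleanly:&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt;SELECT id, "tags", "tags_new" FROM "TableName"
WHERE "tags" IS NOT NULL
  AND cardinality("tags") &amp;lt;&amp;gt; cardinality("tags_new");
&lt;/code&gt;&lt;/pre&gt;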




&lt;h3&gt;
  
  
  6. &lt;strong&gt;Adding/Removing Unique Constraints&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Issue:
&lt;/h4&gt;

&lt;p&gt;Adding a unique constraint without addressing duplicate values can cause migration failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: &lt;code&gt;email String&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After: &lt;code&gt;email String @unique&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Identify duplicates:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="k"&gt;HAVING&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Handle duplicates:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;duplicates&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ROW_NUMBER&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="n"&gt;OVER&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;row_num&lt;/span&gt;
   &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s1"&gt;'_'&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;row_num&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;duplicates&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;row_num&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the constraint:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"TableName"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="n"&gt;email_unique&lt;/span&gt; &lt;span class="k"&gt;UNIQUE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
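&lt;p&gt;On a large, busy table, adding the constraint directly holds a lock while the table is scanned for violations. As an alternative sketch, PostgreSQL can build a unique index without blocking writes and then attach it as the constraint (note that &lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt; cannot run inside a transaction; the index name is a placeholder):&lt;/p&gt;

&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE UNIQUE INDEX CONCURRENTLY email_unique_idx ON "TableName" (email);
ALTER TABLE "TableName" ADD CONSTRAINT email_unique UNIQUE USING INDEX email_unique_idx;
&lt;/code&gt;&lt;/pre&gt;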




&lt;h3&gt;
  
  
  Best Practices for Safe Schema Changes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Backup First:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump &lt;span class="nt"&gt;-U&lt;/span&gt; username &lt;span class="nt"&gt;-d&lt;/span&gt; database_name &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Test in Development:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a development database.&lt;/li&gt;
&lt;li&gt;Restore the backup.&lt;/li&gt;
&lt;li&gt;Test migrations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Transactions:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- Migration steps&lt;/span&gt;
&lt;span class="k"&gt;COMMIT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Implement Rollback Plans:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save original data in a backup table.&lt;/li&gt;
&lt;li&gt;Roll back if necessary.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
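&lt;p&gt;These practices can be combined into a small wrapper: back up first, then apply the migration file atomically with &lt;code&gt;psql --single-transaction&lt;/code&gt;, so a failing statement rolls the whole migration back (the username, database, and file names below are placeholders):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump -U username -d database_name &amp;gt; backup.sql
psql -U username -d database_name --single-transaction -f migration.sql
&lt;/code&gt;&lt;/pre&gt;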

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Schema changes in Prisma require meticulous planning and execution. By following these structured solutions and best practices, you can ensure safe migrations while maintaining data integrity.&lt;/p&gt;




&lt;p&gt;If you found this article helpful, consider &lt;a href="https://buymeacoffee.com/vatul16" rel="noopener noreferrer"&gt;buying me a coffee&lt;/a&gt; to support my work!&lt;/p&gt;

</description>
      <category>prisma</category>
      <category>sql</category>
      <category>postgres</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Creating and Connecting to a Virtual Machine in a Custom Virtual Network using Azure Portal</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Wed, 24 Apr 2024 13:54:44 +0000</pubDate>
      <link>https://dev.to/vatul16/creating-and-connecting-to-a-virtual-machine-in-a-custom-virtual-network-using-azure-portal-blk</link>
      <guid>https://dev.to/vatul16/creating-and-connecting-to-a-virtual-machine-in-a-custom-virtual-network-using-azure-portal-blk</guid>
      <description>&lt;h2&gt;
  
  
  Step 01: Sign in to the Azure Portal
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://azure.microsoft.com/en-in/get-started/azure-portal" rel="noopener noreferrer"&gt;Go to Microsoft Azure Portal.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sign in to your Azure account.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step 02: Create a Virtual Network
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In the Azure Portal, click on "Create a resource" in the upper-left corner.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupzvhl6zc9hqe1lphh2u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupzvhl6zc9hqe1lphh2u.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Search for "Virtual network" and select it from the results.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiylnp0pjlbpv7ls579ve.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiylnp0pjlbpv7ls579ve.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Click on "Create" to start creating a virtual network.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthk6a3pkb3o9xkvcgb72.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthk6a3pkb3o9xkvcgb72.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fill in the required information such as name, address space, and subnet details.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7jw17urakiyardl5d0g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7jw17urakiyardl5d0g.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqryhssin9cjdykqb8aj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqryhssin9cjdykqb8aj.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Once configured, click on "Review + create" and then click on "Create" to create the virtual network.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futdoy7km9ne33890w32b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futdoy7km9ne33890w32b.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln7jykus1zb90koj6q9w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln7jykus1zb90koj6q9w.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
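&lt;p&gt;If you prefer the command line, roughly the same virtual network can be created with the Azure CLI (the resource group, names, and address ranges below are example values):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;az group create --name myResourceGroup --location eastus
az network vnet create \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name default \
  --subnet-prefixes 10.0.0.0/24
&lt;/code&gt;&lt;/pre&gt;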




&lt;h2&gt;
  
  
  Step 03: Create a Virtual Machine
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In the Azure Portal, click on "Create a resource" again.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvr0isx7wnp78ire2y3g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvr0isx7wnp78ire2y3g.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Search for "Virtual machine" and select it from the results.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q9k91yg969d2ke69ev9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q9k91yg969d2ke69ev9.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Click on "Create" to start creating a virtual machine.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb8xjipwe02qtlviypja.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb8xjipwe02qtlviypja.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fill in the required information like resource group, virtual machine name, region, image, size, etc.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi7wwmffss84pbkx12dz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi7wwmffss84pbkx12dz.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j2wqruiyrl56d79ndnu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j2wqruiyrl56d79ndnu.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwby9sbd6dqpeymptn7yi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwby9sbd6dqpeymptn7yi.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu4mj0tbrp1fb3ppvzde.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu4mj0tbrp1fb3ppvzde.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Under the "Networking" section, select the virtual network you created in step 2 and configure other networking settings as needed.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkvte8fxssaej8zj567r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkvte8fxssaej8zj567r.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure other settings like disks, management, tags, etc., according to your requirements.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Once configured, click on "Review + create" and then click on "Create" to create the virtual machine.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegqe2fik1eogzs8kt2me.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegqe2fik1eogzs8kt2me.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn07qa5xcp499poo3dh0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn07qa5xcp499poo3dh0.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
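&lt;p&gt;A roughly equivalent VM creation with the Azure CLI, attached to the subnet from the previous step (the image, names, and credentials below are example values):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2022Datacenter \
  --vnet-name myVNet \
  --subnet default \
  --admin-username azureuser \
  --admin-password 'YourP@ssw0rd123!'
&lt;/code&gt;&lt;/pre&gt;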




&lt;h2&gt;
  
  
  Step 04: Connect to the Virtual Machine using RDP
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Once the virtual machine is created, navigate to the virtual machine resource in the Azure Portal.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6isg5ijg1e3q2b2rh7h3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6isg5ijg1e3q2b2rh7h3.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In the virtual machine's overview page, click on "Connect" at the top.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9evc5ab4qrbwggr0x4n7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9evc5ab4qrbwggr0x4n7.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Download the RDP file and open it.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6sxklt80l7ad876so81.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6sxklt80l7ad876so81.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enter the username and password for the virtual machine (these are the credentials you specified during VM creation).&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91m90fp4rsphoio4wc4f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91m90fp4rsphoio4wc4f.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Click "Connect" to initiate the RDP session.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy25msq0asyuwvl7bmhgx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy25msq0asyuwvl7bmhgx.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step 05: Verify Connection
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;After connecting via RDP, you should be able to access the virtual machine's desktop and interact with it as if you were physically sitting in front of it.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22qfawna6xk0iwrchyw7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22qfawna6xk0iwrchyw7.jpg" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
      <category>cloudskills</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Install Hadoop on Ubuntu</title>
      <dc:creator>Atul Vishwakarma</dc:creator>
      <pubDate>Sat, 04 Nov 2023 18:58:55 +0000</pubDate>
      <link>https://dev.to/vatul16/install-hadoop-on-ubuntu-3igb</link>
      <guid>https://dev.to/vatul16/install-hadoop-on-ubuntu-3igb</guid>
      <description>&lt;p&gt;Apache Hadoop is a collection of utilities that allows you to manage the processing of large datasets across clusters of computers.&lt;/p&gt;

&lt;p&gt;It is also tolerant to hardware failures: if one of the nodes in your cluster crashes, Hadoop can recover the data from replicas stored on other nodes.&lt;/p&gt;

&lt;p&gt;And in this guide, I will walk you through the installation process.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How to install Hadoop on Ubuntu&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Installing Hadoop involves three main steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing Java and configuring environment variables&lt;/li&gt;
&lt;li&gt;Creating a dedicated user and configuring SSH&lt;/li&gt;
&lt;li&gt;Installing and configuring Hadoop&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;So let's start with the first step:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Installing Java on Ubuntu&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To install Java on Ubuntu, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install default-jdk default-jre -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the installation, check the Java version on your system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjngfdqz2758pmavloxk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjngfdqz2758pmavloxk.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Create a user for Hadoop and configure SSH&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First, create a new user named &lt;code&gt;hadoop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo adduser hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To grant superuser privileges to the new user, add it to the &lt;code&gt;sudo&lt;/code&gt; group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG sudo hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, switch to the user &lt;code&gt;hadoop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su - hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the OpenSSH server and client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install openssh-server openssh-client -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, use the following command to generate private and public keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, it will ask you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where to save the key (hit enter to save it inside your home directory)&lt;/li&gt;
&lt;li&gt;Whether to create a passphrase for the keys (leave blank for no passphrase)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nz8p4aa783an4yst980.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nz8p4aa783an4yst980.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, add the public key to &lt;code&gt;authorized_keys&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the chmod command to change the file permissions of &lt;code&gt;authorized_keys&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod 640 ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, verify the SSH configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you did not set a passphrase, simply type &lt;code&gt;yes&lt;/code&gt; and hit Enter when asked to confirm the connection. If you added a passphrase for the keys, you will be asked to enter it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20d96xlbd1lh67j6f2mf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20d96xlbd1lh67j6f2mf.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
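&lt;p&gt;The key setup above can also be scripted non-interactively. Here is a minimal sketch; the &lt;code&gt;setup_ssh_keys&lt;/code&gt; helper and its directory argument are my own, for illustration (by default it targets &lt;code&gt;~/.ssh&lt;/code&gt;):&lt;/p&gt;

```shell
# Non-interactive version of the Step 2 key setup (a sketch).
# The optional directory argument is illustrative; default is ~/.ssh.
setup_ssh_keys() {
  dir="${1:-$HOME/.ssh}"
  mkdir -p "$dir"
  chmod 700 "$dir"
  # -N "" creates the key without a passphrase; skipped if a key exists
  [ -f "$dir/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$dir/id_rsa"
  cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
  chmod 600 "$dir/authorized_keys"
}
```

&lt;p&gt;Run &lt;code&gt;setup_ssh_keys&lt;/code&gt; and then verify with &lt;code&gt;ssh localhost&lt;/code&gt; as shown above.&lt;/p&gt;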




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Download and install Apache Hadoop on Ubuntu&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you created a dedicated user for Hadoop, first log in as the &lt;code&gt;hadoop&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su - hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, visit the download page for Apache Hadoop and copy the link for the most recent stable release.&lt;/p&gt;

&lt;p&gt;At the time of writing, the latest stable release is &lt;code&gt;3.3.6&lt;/code&gt;, so I will use the wget command to download it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://downloads.apache.org/hadoop/common/stable/hadoop-3.3.6.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you are done with the download, extract the file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xvzf hadoop-3.3.6.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, move the extracted directory to &lt;code&gt;/usr/local/hadoop&lt;/code&gt; using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv hadoop-3.3.6 /usr/local/hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a directory to store logs using the mkdir command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /usr/local/hadoop/logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, change the ownership of &lt;code&gt;/usr/local/hadoop&lt;/code&gt; to the &lt;code&gt;hadoop&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R hadoop:hadoop /usr/local/hadoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
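&lt;p&gt;Before extracting, it is also worth verifying the tarball's integrity. Apache publishes a &lt;code&gt;.sha512&lt;/code&gt; file next to each release; the &lt;code&gt;verify_tarball&lt;/code&gt; helper below is my own, illustrative sketch that checks the archive against it:&lt;/p&gt;

```shell
# Verify a downloaded archive against its published .sha512 file.
# Assumes the checksum file sits next to the tarball, e.g.
# hadoop-3.3.6.tar.gz.sha512 fetched from the same Apache mirror.
verify_tarball() {
  tarball="$1"
  sha512sum -c "${tarball}.sha512"
}
```

&lt;p&gt;For example, run &lt;code&gt;verify_tarball hadoop-3.3.6.tar.gz&lt;/code&gt; after downloading both files.&lt;/p&gt;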






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Configure Hadoop on Ubuntu&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here, I will walk you through the configuration of the Hadoop environment variables.&lt;/p&gt;

&lt;p&gt;First, open the &lt;code&gt;.bashrc&lt;/code&gt; file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Jump to the end of the file in the nano text editor by pressing &lt;code&gt;Alt + /&lt;/code&gt; and paste the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export HADOOP_HOME=/usr/local/hadoop

export HADOOP_INSTALL=$HADOOP_HOME

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbudlru594i9ea61qt3gg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbudlru594i9ea61qt3gg.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save changes and exit from the nano text editor.&lt;/p&gt;

&lt;p&gt;To enable the changes, source the &lt;code&gt;.bashrc&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
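&lt;p&gt;A quick sanity check confirms that the variables took effect and that the Hadoop binaries are on your PATH. The &lt;code&gt;hadoop_env_ok&lt;/code&gt; helper below is my own, for illustration:&lt;/p&gt;

```shell
# Returns success only when HADOOP_HOME is set, exists on disk,
# and its bin directory has been added to PATH.
hadoop_env_ok() {
  [ -n "$HADOOP_HOME" ] || return 1
  [ -d "$HADOOP_HOME" ] || return 1
  case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) return 0 ;;
    *) return 1 ;;
  esac
}
```

&lt;p&gt;Run &lt;code&gt;hadoop_env_ok; echo $?&lt;/code&gt; after sourcing &lt;code&gt;.bashrc&lt;/code&gt; — it should print &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;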






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: Configure java environment variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To use Hadoop, you need to enable its core components, which include YARN, HDFS, MapReduce, and Hadoop-related project settings.&lt;/p&gt;

&lt;p&gt;To do that, you have to define Java environment variables in the &lt;code&gt;hadoop-env.sh&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Edit the hadoop-env.sh file&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;First, open the &lt;code&gt;hadoop-env.sh&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Press &lt;code&gt;Alt + /&lt;/code&gt; to jump to the end of the file and paste the following lines to add the path of the Java installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

export HADOOP_CLASSPATH+=" $HADOOP_HOME/lib/*.jar"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1grd0s4m4lsbw6yot88q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1grd0s4m4lsbw6yot88q.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save changes and exit from the text editor.&lt;/p&gt;
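&lt;p&gt;Rather than hard-coding &lt;code&gt;java-11-openjdk-amd64&lt;/code&gt;, you can derive the directory from the &lt;code&gt;java&lt;/code&gt; binary on your PATH. The &lt;code&gt;jdk_home_from_java&lt;/code&gt; helper below is my own, illustrative:&lt;/p&gt;

```shell
# Strip the trailing /bin/java from a resolved java path to get JAVA_HOME.
jdk_home_from_java() {
  echo "$1" | sed 's:/bin/java$::'
}
# On a live system, feed it the resolved symlink target:
#   jdk_home_from_java "$(readlink -f "$(which java)")"
```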

&lt;p&gt;Next, change your current working directory to &lt;code&gt;/usr/local/hadoop/lib&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /usr/local/hadoop/lib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, download the javax activation file (note that the JCenter repository has been sunset, so if this mirror is unavailable, fetch &lt;code&gt;javax.activation-api-1.2.0.jar&lt;/code&gt; from Maven Central instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo wget https://jcenter.bintray.com/javax/activation/javax.activation-api/1.2.0/javax.activation-api-1.2.0.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, check the Hadoop version in Ubuntu:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hadoop version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcirkkn3q6t8vdlfxlha.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcirkkn3q6t8vdlfxlha.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will have to edit the &lt;code&gt;core-site.xml&lt;/code&gt; file to specify the URL for the name node.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Edit the &lt;code&gt;core-site.xml&lt;/code&gt; file&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;First, open the &lt;code&gt;core-site.xml&lt;/code&gt; file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano $HADOOP_HOME/etc/hadoop/core-site.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And add the following lines in between &lt;code&gt;&amp;lt;configuration&amp;gt; ... &amp;lt;/configuration&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;property&amp;gt;

     &amp;lt;name&amp;gt;fs.default.name&amp;lt;/name&amp;gt;

          &amp;lt;value&amp;gt;hdfs://0.0.0.0:9000&amp;lt;/value&amp;gt;

          &amp;lt;description&amp;gt;The default file system URI&amp;lt;/description&amp;gt;

&amp;lt;/property&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpff73b7e1yjdyz8eyr8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpff73b7e1yjdyz8eyr8.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the changes and exit from the text editor.&lt;/p&gt;

&lt;p&gt;Next, create directories to store node metadata and block data using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /home/hadoop/hdfs/{namenode,datanode}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And change the ownership of the created directory to the &lt;code&gt;hadoop&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R hadoop:hadoop /home/hadoop/hdfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
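&lt;p&gt;The remaining configuration files all take the same &lt;code&gt;&amp;lt;property&amp;gt;&lt;/code&gt; shape. If you prefer generating these snippets from a script instead of typing them in nano, a tiny helper can emit a property block; &lt;code&gt;hprop&lt;/code&gt; is my own, illustrative:&lt;/p&gt;

```shell
# Emit one Hadoop <property> block for use in core-site.xml,
# hdfs-site.xml, mapred-site.xml, or yarn-site.xml.
hprop() {
  printf '  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n' "$1" "$2"
}
# Example: hprop dfs.replication 3
```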



&lt;h3&gt;
  
  
  &lt;strong&gt;Edit the &lt;code&gt;hdfs-site.xml&lt;/code&gt; configuration file&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By configuring the &lt;code&gt;hdfs-site.xml&lt;/code&gt; file, you define the replication factor and the locations for storing node metadata (including the fsimage file) and block data.&lt;/p&gt;

&lt;p&gt;So first open the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano $HADOOP_HOME/etc/hadoop/hdfs-site.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And paste the following line in between &lt;code&gt;&amp;lt;configuration&amp;gt; ... &amp;lt;/configuration&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;property&amp;gt;

     &amp;lt;name&amp;gt;dfs.replication&amp;lt;/name&amp;gt;

     &amp;lt;value&amp;gt;3&amp;lt;/value&amp;gt;

&amp;lt;/property&amp;gt;

&amp;lt;property&amp;gt;

     &amp;lt;name&amp;gt;dfs.name.dir&amp;lt;/name&amp;gt;

     &amp;lt;value&amp;gt;file:///home/hadoop/hdfs/namenode&amp;lt;/value&amp;gt;

&amp;lt;/property&amp;gt;

&amp;lt;property&amp;gt;

     &amp;lt;name&amp;gt;dfs.data.dir&amp;lt;/name&amp;gt;

     &amp;lt;value&amp;gt;file:///home/hadoop/hdfs/datanode&amp;lt;/value&amp;gt;

&amp;lt;/property&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6rkviih4g18078kubva.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6rkviih4g18078kubva.jpg" alt=" " width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save changes and exit from the &lt;code&gt;hdfs-site.xml&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Edit the &lt;code&gt;mapred-site.xml&lt;/code&gt; file&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By editing the &lt;code&gt;mapred-site.xml&lt;/code&gt; file, you can define the MapReduce values.&lt;/p&gt;

&lt;p&gt;To do that, first, open the configuration file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano $HADOOP_HOME/etc/hadoop/mapred-site.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And paste the following line in between &lt;code&gt;&amp;lt;configuration&amp;gt; ... &amp;lt;/configuration&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;property&amp;gt;

      &amp;lt;name&amp;gt;mapreduce.framework.name&amp;lt;/name&amp;gt;

      &amp;lt;value&amp;gt;yarn&amp;lt;/value&amp;gt;

   &amp;lt;/property&amp;gt;

&amp;lt;property&amp;gt;
        &amp;lt;name&amp;gt;yarn.app.mapreduce.am.env&amp;lt;/name&amp;gt;
        &amp;lt;value&amp;gt;HADOOP_MAPRED_HOME=${HADOOP_HOME}&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    &amp;lt;property&amp;gt;
        &amp;lt;name&amp;gt;mapreduce.map.env&amp;lt;/name&amp;gt;
        &amp;lt;value&amp;gt;HADOOP_MAPRED_HOME=${HADOOP_HOME}&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
    &amp;lt;property&amp;gt;
        &amp;lt;name&amp;gt;mapreduce.reduce.env&amp;lt;/name&amp;gt;
        &amp;lt;value&amp;gt;HADOOP_MAPRED_HOME=${HADOOP_HOME}&amp;lt;/value&amp;gt;
    &amp;lt;/property&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb9dzxk858hyk2acs5fq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb9dzxk858hyk2acs5fq.jpg" alt=" " width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and exit from the nano text editor.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Edit the &lt;code&gt;yarn-site.xml&lt;/code&gt; file&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is the last configuration file that needs to be edited to use the Hadoop service.&lt;/p&gt;

&lt;p&gt;The purpose of editing this file is to define the YARN settings.&lt;/p&gt;

&lt;p&gt;First, open the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano $HADOOP_HOME/etc/hadoop/yarn-site.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the following in between &lt;code&gt;&amp;lt;configuration&amp;gt; ... &amp;lt;/configuration&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;property&amp;gt;

      &amp;lt;name&amp;gt;yarn.nodemanager.aux-services&amp;lt;/name&amp;gt;

      &amp;lt;value&amp;gt;mapreduce_shuffle&amp;lt;/value&amp;gt;

   &amp;lt;/property&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyqbqqa5mqstzyw2cxkr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyqbqqa5mqstzyw2cxkr.jpg" alt=" " width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save changes and exit from the config file.&lt;/p&gt;

&lt;p&gt;Finally, use the following command to validate the Hadoop configuration and to format the HDFS NameNode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hdfs namenode -format
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdktrmu77pd5g4ximj67j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdktrmu77pd5g4ximj67j.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 6: Start the Hadoop cluster&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To start the Hadoop cluster, you have to start the daemons you configured earlier.&lt;/p&gt;

&lt;p&gt;Let's begin by starting the NameNode and DataNode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;start-dfs.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvocnhtsn62cmw7z64str.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvocnhtsn62cmw7z64str.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, start the NodeManager and ResourceManager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;start-yarn.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferzmpo8a4xxf69f00wnv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferzmpo8a4xxf69f00wnv.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify whether the services are running as intended, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw27m3b8fvq7i8ol8viqh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw27m3b8fvq7i8ol8viqh.jpg" alt=" " width="786" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
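&lt;p&gt;On a healthy single-node setup, &lt;code&gt;jps&lt;/code&gt; should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (plus Jps itself). A small helper makes that check scriptable; &lt;code&gt;check_daemons&lt;/code&gt; is my own, illustrative:&lt;/p&gt;

```shell
# Check a captured jps output for the five expected Hadoop daemons.
check_daemons() {
  out="$1"
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    if ! echo "$out" | grep -qw "$d"; then
      echo "missing: $d"
      return 1
    fi
  done
  echo "all daemons running"
}
# On a live system: check_daemons "$(jps)"
```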




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 7: Access the Hadoop web interface&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To access the Hadoop web interface, you need to know your server's IP and append port number &lt;code&gt;9870&lt;/code&gt; to it in your address bar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://server-IP:9870
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My IP is &lt;code&gt;10.0.2.15&lt;/code&gt; so I will be entering the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://10.0.2.15:98705
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0c21sbh1tu8zgoe2kau.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0c21sbh1tu8zgoe2kau.jpg" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;
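&lt;p&gt;You can also probe the interface from the shell before opening a browser. The &lt;code&gt;namenode_url&lt;/code&gt; helper below is my own, illustrative; port &lt;code&gt;9870&lt;/code&gt; is the NameNode web UI default in Hadoop 3.x:&lt;/p&gt;

```shell
# Build the NameNode web UI address from a server IP.
namenode_url() {
  printf 'http://%s:9870\n' "$1"
}
namenode_url 10.0.2.15
```

&lt;p&gt;Passing the result to &lt;code&gt;curl -s -o /dev/null -w '%{http_code}'&lt;/code&gt; should print &lt;code&gt;200&lt;/code&gt; once the UI is up.&lt;/p&gt;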

&lt;p&gt;And there you have it!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Wrapping Up&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While the tutorial was certainly lengthy, the major part was simply copying and pasting commands into your terminal.&lt;/p&gt;

&lt;p&gt;I hope you will find this guide helpful.&lt;/p&gt;

&lt;p&gt;If you encounter any errors while following this guide, let me know in the comments and I will do my best to help you find a solution.&lt;/p&gt;

</description>
      <category>bigdata</category>
      <category>hadoop</category>
      <category>ubuntu</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
