<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nicholas Osi</title>
    <description>The latest articles on DEV Community by Nicholas Osi (@aidudo).</description>
    <link>https://dev.to/aidudo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2450264%2Ffa63e79c-3c03-408d-afd3-fe1c1c16dd9c.jpg</url>
      <title>DEV Community: Nicholas Osi</title>
      <link>https://dev.to/aidudo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aidudo"/>
    <language>en</language>
    <item>
      <title>Deploying NGINX as an Ingress Controller in Kubernetes: A Comprehensive Step-by-Step Guide with Architectural Diagram</title>
      <dc:creator>Nicholas Osi</dc:creator>
      <pubDate>Fri, 22 Nov 2024 16:22:40 +0000</pubDate>
      <link>https://dev.to/aidudo/deploying-nginx-as-an-ingress-controller-in-kubernetes-a-comprehensive-step-by-step-guide-with-dah</link>
      <guid>https://dev.to/aidudo/deploying-nginx-as-an-ingress-controller-in-kubernetes-a-comprehensive-step-by-step-guide-with-dah</guid>
      <description>&lt;p&gt;Using NGINX as an Ingress Controller in Kubernetes: A Step-by-Step Guide&lt;/p&gt;

&lt;p&gt;In modern cloud-native applications, managing external access to services within a Kubernetes cluster is crucial. NGINX is a popular choice for an Ingress Controller due to its performance, flexibility, and rich feature set. This technical article provides a comprehensive, step-by-step guide on how to deploy and configure NGINX as an Ingress Controller in a Kubernetes environment. Additionally, we'll outline an architectural diagram to help visualize the setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ingress in Kubernetes manages external access to services within a cluster, typically HTTP and HTTPS. An Ingress Controller is responsible for fulfilling the Ingress resource's rules, usually by configuring a load balancer or proxy server. NGINX is widely adopted as an Ingress Controller due to its robust feature set, scalability, and community support.&lt;/p&gt;

&lt;p&gt;This guide will walk you through deploying NGINX as an Ingress Controller in a Kubernetes cluster, configuring DNS, deploying a sample application, and setting up Ingress resources to manage traffic routing.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;p&gt;Before you begin, ensure you have the following:&lt;/p&gt;

&lt;p&gt;Kubernetes Cluster: A running Kubernetes cluster. You can set one up locally using &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt; or use managed services like &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/a&gt;, or &lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noopener noreferrer"&gt;Azure Kubernetes Service (AKS)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;kubectl: Kubernetes command-line tool installed and configured to communicate with your cluster. &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Install kubectl&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Helm (Optional but Recommended): Package manager for Kubernetes. It simplifies the installation of applications and services on Kubernetes. &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Install Helm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Domain Name: A domain or subdomain that you can configure DNS records for. This will be used to route traffic to your Ingress Controller.&lt;/p&gt;

&lt;p&gt;Architecture Overview&lt;/p&gt;

&lt;p&gt;Before diving into the setup, it's helpful to understand the overall architecture of NGINX as an Ingress Controller within Kubernetes.&lt;/p&gt;

&lt;p&gt;Architectural Diagram Description&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Client: A user accessing your application via a web browser or API client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DNS: Resolves the domain name to the external IP address of the NGINX Ingress Controller.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NGINX Ingress Controller:&lt;br&gt;
Load Balancer: Exposes the Ingress Controller to the internet.&lt;br&gt;
NGINX Pods: Handle incoming HTTP/HTTPS requests, terminate SSL/TLS if configured, and route traffic based on Ingress rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes Services:&lt;br&gt;
Backend Services: Expose your applications (e.g., web servers, APIs) within the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Applications: Deployed within Kubernetes pods, managed by deployments or stateful sets.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 1: Setting Up Kubernetes Cluster&lt;/p&gt;

&lt;p&gt;If you don't have a Kubernetes cluster set up yet, follow the steps below. For this guide, we'll assume you're using &lt;strong&gt;Minikube&lt;/strong&gt; for a local setup. For production environments, consider using managed services like GKE, EKS, or AKS.&lt;/p&gt;

&lt;p&gt;Installing Minikube&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install Minikube: Follow the official &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;Minikube installation guide&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start Minikube:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;minikube start --driver=docker&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
   Ensure you have Docker installed as the driver.

3. Verify the Cluster:


   kubectl cluster-info


   You should see output indicating that the Kubernetes control plane and services are running.

 Step 2: Install NGINX Ingress Controller

There are multiple ways to install the NGINX Ingress Controller in Kubernetes. Using Helm is the most straightforward method.

Using Helm to Install NGINX Ingress Controller

1. Add the NGINX Ingress Helm Repository:

   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

   helm repo update

2. Create a Namespace for Ingress:

   kubectl create namespace ingress-nginx

3. Install the Ingress Controller:

   helm install nginx-ingress ingress-nginx/ingress-nginx \
     --namespace ingress-nginx \
     --set controller.publishService.enabled=true

   *The `controller.publishService.enabled=true` flag ensures that the external IP is published correctly.*

4. Verify the Installation:

   kubectl get pods -n ingress-nginx

   You should see pods with names starting with `nginx-ingress-ingress-nginx-controller` in the `Running` state.

5. Retrieve the External IP:

   kubectl get svc -n ingress-nginx

   Look for the `EXTERNAL-IP` of the `nginx-ingress-ingress-nginx-controller` service. For Minikube, you might need to use `minikube service` to access the service.

   minikube service nginx-ingress-ingress-nginx-controller -n ingress-nginx

   This command will open the service in your default web browser.
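
Before moving on, you can also wait until the controller pods actually report Ready; a sketch using `kubectl wait` with the chart's standard labels (adjust the namespace if yours differs):

```shell
# Block for up to two minutes until the ingress controller pod is Ready.
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```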

Alternative: Manual Deployment

If you prefer not to use Helm, you can deploy the NGINX Ingress Controller using Kubernetes manifests.

1. Apply the Mandatory YAML:

   kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml


2. Verify the Deployment:

   kubectl get pods -n ingress-nginx

 Step 3: Configure DNS

To route traffic to your Ingress Controller, you'll need to configure DNS records pointing your domain or subdomain to the Ingress Controller's external IP.

For Minikube Users

Minikube doesn't provide a real external IP. Instead, you can modify your `/etc/hosts` file to map a domain to Minikube's IP.

1. Get Minikube IP

   minikube ip

   Suppose the IP is `192.168.99.100`.

2. Edit `/etc/hosts`:

   Add the following line:

   192.168.99.100   example.com

   Replace `example.com` with your desired domain.

For Cloud Providers (GKE, EKS, AKS)

1. Obtain the External IP:


   kubectl get svc -n ingress-nginx

   Locate the `EXTERNAL-IP` of the ingress controller service.

2. Update DNS Records:

   - Log in to your DNS provider's management console.
   - Create an `A` record for your domain pointing to the Ingress Controller's external IP.

   *Example:*

   | Type | Name    | Value         |
   |------|---------|---------------|
   | A    | @       | 203.0.113.10  |
   | A    | www     | 203.0.113.10  |


Step 4: Deploy a Sample Application

To demonstrate the Ingress Controller, deploy a simple web application. We'll use NGINX as the backend service.

 Create a Deployment and Service

1. Create a YAML file named `app-deployment.yaml`:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: demo-app
     labels:
       app: demo
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: demo
     template:
       metadata:
         labels:
           app: demo
       spec:
         containers:
         - name: demo-container
           image: nginx:latest
           ports:
           - containerPort: 80
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: demo-service
   spec:
     type: ClusterIP
     selector:
       app: demo
     ports:
       - port: 80
         targetPort: 80

2. Apply the Deployment

   kubectl apply -f app-deployment.yaml

3. Verify the Deployment:

   kubectl get deployments
   kubectl get pods
   kubectl get svc

   Ensure that the `demo-app` deployment is running with 2 replicas and that the `demo-service` is available.

Step 5: Create Ingress Resources

Now, configure the Ingress resource to route traffic from the Ingress Controller to your backend service based on the request's host and path.

Create an Ingress YAML File

1. Create a file named `demo-ingress.yaml`

   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: demo-ingress
     annotations:
       nginx.ingress.kubernetes.io/rewrite-target: /
   spec:
     ingressClassName: nginx
     rules:
     - host: example.com
       http:
         paths:
         - path: /
           pathType: Prefix
           backend:
             service:
               name: demo-service
               port:
                 number: 80

   Replace `example.com` with your domain name.

2. Apply the Ingress Resource:

   kubectl apply -f demo-ingress.yaml

3. Verify the Ingress:

   kubectl get ingress

   You should see the `demo-ingress` with an address corresponding to the Ingress Controller's external IP.
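
If you later enable HTTPS (see the Best Practices section), the same resource can carry a `tls` block. A sketch, assuming cert-manager is installed and a ClusterIssuer named `letsencrypt-prod` exists (hypothetical name):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical issuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: demo-tls          # cert-manager writes the certificate here
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```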

Step 6: Verify the Setup

Ensure that the Ingress Controller is correctly routing traffic to your application.

Access the Application

1. Open a Web Browser:

   Navigate to `http://example.com` (replace with your domain).

2. Expected Result:

   You should see the default NGINX welcome page served by the `demo-service`.

  Troubleshooting Tips

DNS Propagation: It might take some time for DNS changes to propagate. Use tools like [dig](https://www.tecmint.com/useful-dig-command-examples/) or [nslookup](https://www.nslookup.io/) to verify DNS records.

  dig example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
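&lt;p&gt;Before DNS propagates, you can also exercise the Ingress directly by sending the expected Host header to the controller's external IP. A minimal sketch; the IP and domain below are placeholders for your own values:&lt;/p&gt;

```python
# Build a request that targets the ingress controller's IP directly while
# presenting the virtual host the Ingress rule matches on. Handy before
# DNS records have propagated. The IP and host below are illustrative only.
import urllib.request

def ingress_request(ingress_ip, host, path="/"):
    return urllib.request.Request(
        f"http://{ingress_ip}{path}",
        headers={"Host": host},  # NGINX routes on this header, not the URL
    )

req = ingress_request("203.0.113.10", "example.com")
# urllib.request.urlopen(req) would then fetch the page through the controller.
```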



&lt;p&gt;Ingress Controller Logs:&lt;/p&gt;

&lt;p&gt;kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx&lt;/p&gt;

&lt;p&gt;Check Ingress Rules:&lt;/p&gt;

&lt;p&gt;kubectl describe ingress demo-ingress&lt;/p&gt;

&lt;p&gt;Best Practices&lt;/p&gt;

&lt;p&gt;To ensure a robust and secure Ingress setup, consider the following best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable SSL/TLS:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Secure your application by configuring HTTPS. You can obtain certificates using &lt;a href="https://cert-manager.io/" rel="noopener noreferrer"&gt;Cert-Manager&lt;/a&gt; and Let's Encrypt.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Use Annotations Wisely:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;NGINX Ingress Controller supports various annotations for customization, such as rate limiting, whitelisting, and custom error pages.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Monitor Ingress Traffic:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Implement monitoring and logging to track traffic patterns, performance, and security threats. Tools like &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; and &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; are beneficial.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Implement RBAC:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Control access to Kubernetes resources by configuring Role-Based Access Control (RBAC).&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Regularly Update NGINX Ingress Controller:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Keep the Ingress Controller up-to-date to benefit from the latest features and security patches.&lt;/p&gt;
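
&lt;p&gt;As a sketch, assuming the Helm release name &lt;code&gt;nginx-ingress&lt;/code&gt; used earlier in this guide, an in-place upgrade looks like:&lt;/p&gt;

```shell
# Refresh the local chart index, then upgrade the existing release,
# keeping the values it was installed with.
helm repo update
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values
```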

&lt;ol start="6"&gt;
&lt;li&gt;Scale Appropriately:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ensure the Ingress Controller can handle your traffic load by configuring horizontal pod autoscaling if necessary.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Deploying NGINX as an Ingress Controller in Kubernetes provides a powerful and flexible way to manage external access to your applications. By following this step-by-step guide, you can set up a robust Ingress architecture that efficiently routes traffic, supports SSL/TLS, and scales with your application's needs.&lt;/p&gt;

&lt;p&gt;Remember to adhere to best practices to maintain security, performance, and reliability. As your infrastructure grows, consider exploring advanced features of NGINX Ingress Controller and integrating additional tools to enhance your Kubernetes environment.&lt;/p&gt;

&lt;p&gt;Additional Resources&lt;/p&gt;

&lt;p&gt;Kubernetes Official Documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NGINX Ingress Controller GitHub Repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes/ingress-nginx" rel="noopener noreferrer"&gt;kubernetes/ingress-nginx&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Helm Charts for Ingress NGINX:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="noopener noreferrer"&gt;ingress-nginx Helm Chart&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cert-Manager for SSL/TLS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cert-manager.io/" rel="noopener noreferrer"&gt;cert-manager&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Monitoring with Prometheus and Grafana:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; | &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes Best Practices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="noopener noreferrer"&gt;Kubernetes Best Practices&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By leveraging NGINX as your Ingress Controller, you gain granular control over traffic management, enhanced security features, and the scalability needed for modern applications. Happy deploying!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Azure VM Deployment with Terraform and Ansible in Azure DevOps Pipelines</title>
      <dc:creator>Nicholas Osi</dc:creator>
      <pubDate>Mon, 18 Nov 2024 16:25:16 +0000</pubDate>
      <link>https://dev.to/aidudo/automating-azure-vm-deployment-with-terraform-and-ansible-in-azure-devops-pipelines-4cph</link>
      <guid>https://dev.to/aidudo/automating-azure-vm-deployment-with-terraform-and-ansible-in-azure-devops-pipelines-4cph</guid>
      <description>&lt;p&gt;Deploying virtual machines (VMs) in Azure using a combination of Terraform for infrastructure provisioning and Ansible for configuration management is a powerful and efficient approach. By integrating these tools within an Azure DevOps pipeline, you can achieve automated, repeatable, and scalable deployments. Below is a comprehensive, step-by-step guide to setting up this automation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before you begin, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Account: Access to Azure with permissions to create resources.&lt;/li&gt;
&lt;li&gt;Azure DevOps Account: Access to Azure DevOps Services with permissions to create projects and pipelines.&lt;/li&gt;
&lt;li&gt;Git Repository: A repository in Azure Repos or any other Git service where your Terraform and Ansible code will reside.&lt;/li&gt;
&lt;li&gt;Terraform Installed Locally: For initial testing and development.&lt;/li&gt;
&lt;li&gt;Ansible Installed Locally: For initial testing and development.&lt;/li&gt;
&lt;li&gt;Basic Knowledge: Familiarity with Terraform, Ansible, and Azure DevOps pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Repository Setup&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Organize your codebase by separating Terraform and Ansible configurations, typically in different directories within the same repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Configuration
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a Directory&lt;/strong&gt;: Create a &lt;code&gt;terraform&lt;/code&gt; directory in your repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define &lt;code&gt;main.tf&lt;/code&gt;&lt;/strong&gt;: This file will contain the Terraform configuration for deploying Azure VMs. Here’s a basic example:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "myResourceGroup"
  location = "East US"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "myVNet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "subnet" {
  name                 = "mySubnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_interface" "nic" {
  name                = "myNIC"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "vm" {
  name                  = "myVM"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.nic.id]
  vm_size               = "Standard_DS1_v2"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myOSDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "myVM"
    admin_username = "azureuser"
    admin_password = "P@ssw0rd1234!" # Consider using SSH keys for better security
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

output "private_ip" {
  # This configuration attaches no public IP; add an azurerm_public_ip if needed.
  value = azurerm_network_interface.nic.private_ip_address
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
3. **Variables and Outputs**: Customize variables and outputs as needed. For security, sensitive variables should be managed via Azure DevOps variables or Azure Key Vault.

### **b. Ansible Playbook**

1. **Create a Directory**: Create an `ansible` directory in your repository.

2. **Define `inventory.ini`**: This file will list the VMs to configure. Since Terraform outputs the VM IPs, you can use Terraform to generate the inventory or use dynamic inventory scripts.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
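&lt;p&gt;One lightweight way to bridge the two tools is a small script that turns &lt;code&gt;terraform output -json&lt;/code&gt; into an inventory file. A sketch; the output name is passed in because it depends on your Terraform configuration:&lt;/p&gt;

```python
# Render a minimal Ansible INI inventory from `terraform output -json`.
# The payload below is fabricated for illustration; feed in the real JSON
# captured from your Terraform stage.
import json

def inventory_from_tf_outputs(tf_json, ip_output, user="azureuser"):
    outputs = json.loads(tf_json)
    ip = outputs[ip_output]["value"]
    return f"[azure_vms]\nmyVM ansible_host={ip} ansible_user={user}\n"

sample = '{"vm_ip": {"value": "203.0.113.10", "type": "string", "sensitive": false}}'
print(inventory_from_tf_outputs(sample, "vm_ip"))
```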

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[azure_vms]
myVM ansible_host=&amp;lt;VM_PUBLIC_IP&amp;gt; ansible_user=azureuser ansible_ssh_pass=P@ssw0rd1234!
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
*Note: For enhanced security, consider using SSH keys instead of passwords.*

3. **Define `playbook.yml`**: This playbook will perform configuration tasks on the deployed VMs. Here’s an example that installs Nginx:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Configure Azure VMs
  hosts: azure_vms
  become: yes

  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started
        enabled: true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
4. **Ansible Configuration**: Optionally, add an `ansible.cfg` file to define inventory paths, roles paths, etc.

---

## **3. Azure DevOps Pipeline Configuration**

Set up the Azure DevOps pipeline to execute Terraform and Ansible steps sequentially.

### **a. Service Connections**

1. **Azure Service Connection**:
   - Navigate to your Azure DevOps project.
   - Go to **Project Settings** &amp;gt; **Service connections**.
   - Click **New service connection** &amp;gt; **Azure Resource Manager**.
   - Choose the appropriate authentication method (e.g., via Azure CLI, Service Principal).
   - Name the connection (e.g., `AzureServiceConnection`).

2. **SSH Service Connection for Ansible** (if using SSH keys):
   - If you’re using SSH keys for Ansible, store the private key as a secret variable or use Azure DevOps’ secure files.

### **b. Pipeline YAML Definition**

Create a YAML pipeline that defines the stages for Terraform and Ansible.

1. **Create a New Pipeline**:
   - Navigate to **Pipelines** &amp;gt; **New Pipeline**.
   - Select your repository.
   - Choose **Starter pipeline** or **Existing YAML file**.

2. **Define the Pipeline**: Below is an example YAML pipeline integrating Terraform and Ansible.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
- main

variables:
  # Terraform variables
  TF_VAR_location: "East US"
  # Add other Terraform variables as needed

stages:
- stage: Terraform
  displayName: "Terraform: Infrastructure Provisioning"
  jobs:
  - job: Terraform
    displayName: "Terraform Apply"
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - checkout: self

    - task: TerraformInstaller@0
      displayName: "Install Terraform"
      inputs:
        terraformVersion: '1.3.0' # Specify desired Terraform version

    - task: AzureCLI@2
      displayName: "Azure Login"
      inputs:
        azureSubscription: 'AzureServiceConnection'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          echo "Logging into Azure..."

    - script: |
        cd terraform
        terraform init
      displayName: "Terraform Init"

    - script: |
        cd terraform
        terraform plan -out=tfplan
      displayName: "Terraform Plan"

    - script: |
        cd terraform
        terraform apply -auto-approve tfplan
      displayName: "Terraform Apply"

    - task: PublishPipelineArtifact@1
      displayName: "Publish Terraform Outputs"
      inputs:
        targetPath: 'terraform'
        artifact: 'terraform'

- stage: Ansible
  displayName: "Ansible: Configuration Management"
  dependsOn: Terraform
  jobs:
  - job: Ansible
    displayName: "Run Ansible Playbook"
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - download: current
      artifact: terraform
      displayName: "Download Terraform Artifact"

    - task: UsePythonVersion@0
      inputs:
        versionSpec: '3.x'
        addToPath: true
      displayName: "Use Python"

    - script: |
        python -m pip install --upgrade pip
        pip install ansible
      displayName: "Install Ansible"

    - script: |
        cd ansible
        ansible-playbook -i inventory.ini playbook.yml
      displayName: "Run Ansible Playbook"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


*Note*: This is a simplified example. Depending on your specific requirements, you may need to adjust the pipeline, especially for handling variables, SSH keys, and dynamic inventories.

---

## **4. Pipeline Steps Breakdown**

Let’s delve deeper into each step of the pipeline to understand their purpose and configuration.

### **Stage 1: Terraform**

1. **Checkout Code**:
   - Retrieves the latest code from the repository.

2. **Install Terraform**:
   - Uses the `TerraformInstaller` task to install the specified Terraform version.

3. **Azure Login**:
   - Authenticates to Azure using the service connection. This is necessary for Terraform to interact with Azure resources.

4. **Terraform Init**:
   - Initializes the Terraform working directory, downloads necessary providers, and sets up the backend.

5. **Terraform Plan**:
   - Generates and displays an execution plan, showing what actions Terraform will take.

6. **Terraform Apply**:
   - Applies the planned changes to Azure, creating or updating resources as defined.

7. **Publish Terraform Outputs**:
   - Publishes Terraform outputs as pipeline artifacts. This can be used in subsequent stages or jobs.

### **Stage 2: Ansible**

1. **Download Terraform Artifact**:
   - Retrieves the outputs from the Terraform stage, which may include VM IP addresses needed for Ansible inventory.

2. **Use Python**:
   - Ensures that Python is available, as Ansible is a Python-based tool.

3. **Install Ansible**:
   - Installs Ansible via `pip`.

4. **Run Ansible Playbook**:
   - Executes the Ansible playbook to configure the deployed VMs.

---

## **5. Testing and Validation**

After setting up the pipeline, it’s crucial to test and validate each component to ensure successful deployments.

1. **Validate Terraform Configuration**:
   - Run `terraform validate` locally to check for syntax errors.
   - Use `terraform plan` to ensure that the infrastructure changes are as expected.

2. **Validate Ansible Playbook**:
   - Run the playbook locally against test VMs to ensure it performs the desired configurations without errors.

3. **Run the Pipeline**:
   - Commit and push changes to trigger the pipeline.
   - Monitor the pipeline execution in Azure DevOps for any failures or issues.

4. **Verify Deployment in Azure**:
   - Check the Azure Portal to confirm that the resources (e.g., VMs) have been created as defined.
   - Ensure that Ansible has successfully configured the VMs (e.g., Nginx is installed and running).

5. **Implement Logging and Monitoring**:
   - Integrate logging mechanisms to capture Terraform and Ansible logs.
   - Set up alerts for pipeline failures or deployment issues.

---

## **6. Best Practices and Tips**

- **State Management**:
  - Use remote state backends for Terraform, such as an Azure Storage Account with state locking to prevent conflicts.

- **Secrets Management**:
  - Store sensitive information (e.g., passwords, SSH keys) securely using Azure DevOps Pipeline variables, Azure Key Vault, or other secret management solutions.

- **Modular Code**:
  - Structure Terraform and Ansible code into modules and roles for reusability and better organization.

- **Idempotency**:
  - Ensure that Terraform and Ansible configurations are idempotent, meaning they can run multiple times without causing unintended changes.

- **Error Handling**:
  - Implement error handling and notifications within the pipeline to promptly address failures.

- **Version Control**:
  - Use version control best practices, such as branching strategies and pull requests, to manage changes to your infrastructure and configuration code.

- **Dynamic Inventory for Ansible**:
  - Consider using Terraform outputs to dynamically generate the Ansible inventory, enhancing flexibility and scalability.

- **Use SSH Keys**:
  - Prefer SSH key-based authentication over passwords for enhanced security in Ansible.

---
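
The state-management recommendation above can be sketched in HCL; the resource group, storage account, and container names are placeholders you would create once, out of band:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"       # placeholder
    storage_account_name = "tfstatestore"     # placeholder; must be globally unique
    container_name       = "tfstate"
    key                  = "azure-vm.terraform.tfstate"
  }
}
```

The azurerm backend takes a lease on the state blob during operations, which provides the locking mentioned above.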

## **7. Conclusion**

By integrating Terraform and Ansible within an Azure DevOps pipeline, you establish a robust automation workflow for deploying and configuring Azure VMs. Terraform handles the infrastructure provisioning, ensuring consistent and repeatable deployments, while Ansible manages the configuration of those resources, applying the desired state efficiently.

This setup not only accelerates deployment times but also enhances reliability and scalability, allowing your infrastructure to grow seamlessly with your application’s needs. Adhering to best practices in code organization, security, and pipeline management further ensures that your deployments remain maintainable and secure over time.

Feel free to customize and expand upon this foundation to suit your specific project requirements and organizational standards.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Jenkins Pipeline Essentials: Deploying Applications to Kubernetes with Downtime Considerations</title>
      <dc:creator>Nicholas Osi</dc:creator>
      <pubDate>Mon, 18 Nov 2024 16:14:14 +0000</pubDate>
      <link>https://dev.to/aidudo/jenkins-pipeline-essentials-deploying-applications-to-kubernetes-with-downtime-considerations-24hh</link>
      <guid>https://dev.to/aidudo/jenkins-pipeline-essentials-deploying-applications-to-kubernetes-with-downtime-considerations-24hh</guid>
      <description>&lt;p&gt;Deploying applications to Kubernetes using Jenkins pipelines is a robust way to automate your CI/CD processes. This guide will walk you through setting up a Jenkins pipeline to deploy an application to a Kubernetes cluster, handling scenarios that may involve downtime during deployment. Whether you’re deploying updates that require downtime or managing releases in a controlled manner, this step-by-step approach will help you achieve a smooth deployment process.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before you begin, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Jenkins Server&lt;/strong&gt;: A running Jenkins instance. You can set this up on-premises or use a cloud-based Jenkins service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Cluster&lt;/strong&gt;: Access to a Kubernetes cluster where the application will be deployed. This can be a managed service like Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon EKS, or a self-managed cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Registry&lt;/strong&gt;: Access to a Docker registry (e.g., Docker Hub, Azure Container Registry, AWS ECR) to store your Docker images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Code Repository&lt;/strong&gt;: A Git repository (e.g., GitHub, GitLab, Bitbucket) containing your application code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl Configured&lt;/strong&gt;: &lt;code&gt;kubectl&lt;/code&gt; installed and configured to interact with your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic Knowledge&lt;/strong&gt;: Familiarity with Jenkins, Docker, Kubernetes, and CI/CD concepts.&lt;/li&gt;
&lt;/ul&gt;
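&lt;p&gt;A quick way to sanity-check the CLI prerequisites is a small shell loop (a sketch; extend the tool list to match your environment):&lt;/p&gt;

```shell
# Preflight check for the CLI tools this guide relies on.
# Prints one line per tool; "MISSING" marks anything not on PATH.
for tool in git docker kubectl; do
  if command -v "$tool" >/dev/null; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```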


&lt;h2&gt;
  
  
  &lt;strong&gt;2. Repository Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Organize your code repository to include your application code, Dockerfile, and Kubernetes manifests. Here’s how to structure your repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-app/
├── src/
│ └── … (your application source code)
├── Dockerfile
├── k8s/
│ ├── deployment.yaml
│ └── service.yaml
├── Jenkinsfile
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
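&lt;p&gt;If you are starting from scratch, the layout above can be scaffolded in one go (a sketch using the example &lt;code&gt;my-app&lt;/code&gt; name):&lt;/p&gt;

```shell
# Create the skeleton of the repository layout shown above.
mkdir -p my-app/src my-app/k8s
touch my-app/Dockerfile my-app/Jenkinsfile my-app/README.md
touch my-app/k8s/deployment.yaml my-app/k8s/service.yaml
find my-app -type f | sort
```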



&lt;h3&gt;
  
  
  &lt;strong&gt;Application Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ensure your application code is well-organized within the &lt;code&gt;src/&lt;/code&gt; directory. This example assumes a simple web application, but the steps apply to various types of applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Dockerfile&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;Dockerfile&lt;/code&gt; at the root of your repository to containerize your application. Here’s an example for a Node.js application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use an official Node.js runtime as the base image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:14-alpine&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy package.json and package-lock.json&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Copy the rest of the application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose the application port&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="c"&gt;# Define the command to run the application&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“npm”, “start”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Adjust the Dockerfile according to your application’s requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Kubernetes Manifests&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create Kubernetes manifests to define the desired state of your application in the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;deployment.yaml&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: your-docker-registry/my-app:latest
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#### **service.yaml**

apiVersion: v1
kind: Service
metadata:
name: my-app-service
spec:
type: LoadBalancer
selector:
app: my-app
ports:
— protocol: TCP
port: 80
targetPort: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Replace &lt;code&gt;your-docker-registry&lt;/code&gt; with your actual Docker registry path.&lt;/p&gt;
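&lt;p&gt;Rather than deploying the mutable &lt;code&gt;:latest&lt;/code&gt; tag, many pipelines stamp the build-specific tag into the manifest at deploy time. A minimal sketch of that substitution (the manifest line is inlined here for illustration; in the pipeline you would run the same &lt;code&gt;sed&lt;/code&gt; against &lt;code&gt;k8s/deployment.yaml&lt;/code&gt;):&lt;/p&gt;

```shell
# Substitute a build-specific image tag for ":latest" before kubectl apply.
TAG="42"   # e.g. ${BUILD_NUMBER} in Jenkins
echo "        image: your-docker-registry/my-app:latest" \
  | sed "s|my-app:latest|my-app:${TAG}|"
```

&lt;p&gt;This prints the line with &lt;code&gt;my-app:42&lt;/code&gt;, so each deployment is pinned to a traceable image.&lt;/p&gt;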


&lt;h2&gt;
  
  
  &lt;strong&gt;3. Jenkins Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Install Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you haven’t set up Jenkins yet, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Download Jenkins&lt;/strong&gt;: Visit the &lt;a href="https://www.jenkins.io/download/" rel="noopener noreferrer"&gt;official Jenkins website&lt;/a&gt; and download the appropriate installer for your operating system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Jenkins&lt;/strong&gt;: Follow the installation instructions for your platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start Jenkins&lt;/strong&gt;: Ensure Jenkins is running. By default, it runs on &lt;code&gt;http://localhost:8080&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unlock Jenkins&lt;/strong&gt;: On first launch, Jenkins will prompt you to unlock it using an initial admin password. Follow the instructions provided.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Suggested Plugins&lt;/strong&gt;: Jenkins will offer to install suggested plugins. It’s recommended to proceed with this option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create Admin User&lt;/strong&gt;: Set up your first admin user as prompted.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Install Necessary Plugins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To integrate Jenkins with Kubernetes and Docker, install the following plugins:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Plugin&lt;/strong&gt;: Allows Jenkins to interact with Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Pipeline Plugin&lt;/strong&gt;: Enables building and pushing Docker images within Jenkins pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Plugin&lt;/strong&gt;: Facilitates cloning Git repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Credentials Binding Plugin&lt;/strong&gt;: Manages sensitive data like passwords and SSH keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pipeline Plugin&lt;/strong&gt;: Provides the foundational pipeline capabilities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;To install plugins:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;code&gt;Manage Jenkins&lt;/code&gt; &amp;gt; &lt;code&gt;Manage Plugins&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Go to the &lt;code&gt;Available&lt;/code&gt; tab.&lt;/li&gt;
&lt;li&gt;Search for each plugin by name.&lt;/li&gt;
&lt;li&gt;Select the checkbox next to the plugin and click &lt;code&gt;Install without restart&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Configure Credentials&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Securely store credentials required for accessing Docker registries and the Kubernetes cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Docker Registry Credentials&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;code&gt;Manage Jenkins&lt;/code&gt; &amp;gt; &lt;code&gt;Manage Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select the appropriate domain (e.g., &lt;code&gt;Global&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Add Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;Username with password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Enter your Docker registry username and password.&lt;/li&gt;
&lt;li&gt;Assign an ID (e.g., &lt;code&gt;docker-registry-credentials&lt;/code&gt;) and a description.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;OK&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Kubernetes Cluster Credentials&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Depending on your Kubernetes cluster setup, you might use a &lt;code&gt;kubeconfig&lt;/code&gt; file or a service account token.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using kubeconfig:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;code&gt;Manage Jenkins&lt;/code&gt; &amp;gt; &lt;code&gt;Manage Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Add Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;Secret file&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Upload your &lt;code&gt;kubeconfig&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Assign an ID (e.g., &lt;code&gt;kubeconfig-file&lt;/code&gt;) and a description.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;OK&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Using Service Account Token:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Kubernetes service account with the necessary permissions.&lt;/li&gt;
&lt;li&gt;Obtain the token.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;code&gt;Manage Jenkins&lt;/code&gt; &amp;gt; &lt;code&gt;Manage Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Add Credentials&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;Secret text&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Paste the service account token.&lt;/li&gt;
&lt;li&gt;Assign an ID (e.g., &lt;code&gt;k8s-service-account-token&lt;/code&gt;) and a description.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;OK&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
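&lt;p&gt;For reference: on clusters that provision token Secrets, the token is stored base64-encoded in the Secret's &lt;code&gt;token&lt;/code&gt; field, and the value you paste into Jenkins must be the decoded form. The decode step looks like this (sample value, not a real token):&lt;/p&gt;

```shell
# Decoding the "token" field of a service-account Secret.
ENCODED="bXktc2FtcGxlLXRva2Vu"   # base64 for "my-sample-token"
echo "$ENCODED" | base64 -d
```

&lt;p&gt;On a live cluster this is typically combined with &lt;code&gt;kubectl get secret &amp;lt;secret-name&amp;gt; -o jsonpath='{.data.token}'&lt;/code&gt;.&lt;/p&gt;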

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Ensure that the credentials have the necessary permissions to deploy applications to your Kubernetes cluster.&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;4. Creating the Jenkins Pipeline&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pipeline Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Jenkins pipeline will perform the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Checkout Code&lt;/strong&gt;: Clone the repository containing the application code, Dockerfile, and Kubernetes manifests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Docker Image&lt;/strong&gt;: Build the Docker image of the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push Docker Image&lt;/strong&gt;: Push the built image to the Docker registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy to Kubernetes&lt;/strong&gt;: Apply the Kubernetes manifests to deploy the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handle Downtime&lt;/strong&gt;: Manage application downtime during deployment, if necessary.&lt;/li&gt;
&lt;/ol&gt;
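&lt;p&gt;The image naming used throughout these stages is plain string composition; in shell terms it amounts to the following (illustrative values; &lt;code&gt;BUILD_NUMBER&lt;/code&gt; is injected by Jenkins):&lt;/p&gt;

```shell
# How the full image name is composed from the environment variables.
DOCKER_REGISTRY="docker.io/username"
IMAGE_NAME="my-app"
BUILD_NUMBER="17"                  # supplied by Jenkins at build time
IMAGE_FULL_NAME="${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER}"
echo "$IMAGE_FULL_NAME"            # docker.io/username/my-app:17
```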

&lt;h3&gt;
  
  
  &lt;strong&gt;Jenkinsfile Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;Jenkinsfile&lt;/code&gt; in the root of your repository to define the pipeline. Here’s a sample &lt;code&gt;Jenkinsfile&lt;/code&gt; with comments explaining each section:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any

    environment {
        // Docker Registry Variables
        DOCKER_REGISTRY = 'your-docker-registry' // e.g., docker.io/username
        IMAGE_NAME = 'my-app'
        IMAGE_TAG = "${env.BUILD_NUMBER}"
        IMAGE_FULL_NAME = "${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"

        // Kubernetes Variables
        KUBECONFIG_CREDENTIAL_ID = 'kubeconfig-file' // ID of the kubeconfig secret
        KUBE_NAMESPACE = 'default' // Replace with your namespace
    }

    stages {
        stage('Checkout') {
            steps {
                // Clone the repository
                git branch: 'main', url: 'https://github.com/your-repo/my-app.git'
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    // Build the Docker image
                    docker.build("${IMAGE_FULL_NAME}")
                }
            }
        }

        stage('Push Docker Image') {
            steps {
                script {
                    // Push the Docker image to the registry
                    docker.withRegistry('https://your-docker-registry', 'docker-registry-credentials') {
                        docker.image("${IMAGE_FULL_NAME}").push()
                    }
                }
            }
        }

        stage('Deploy to Kubernetes') {
            steps {
                script {
                    // Use the kubeconfig file for kubectl commands
                    withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIAL_ID}", variable: 'KUBECONFIG')]) {
                        sh '''
                            # Optional: Scale down the deployment to handle downtime
                            kubectl scale deployment my-app-deployment --replicas=0 -n ${KUBE_NAMESPACE}

                            # Apply the Kubernetes manifests
                            kubectl apply -f k8s/deployment.yaml -n ${KUBE_NAMESPACE}
                            kubectl apply -f k8s/service.yaml -n ${KUBE_NAMESPACE}

                            # Optional: Scale the deployment back up
                            kubectl scale deployment my-app-deployment --replicas=3 -n ${KUBE_NAMESPACE}
                        '''
                    }
                }
            }
        }
    }

    post {
        success {
            echo 'Deployment succeeded!'
        }
        failure {
            echo 'Deployment failed.'
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
**Explanation of the Jenkinsfile:**

- **Environment Variables**: Define variables for the Docker registry, image name, tag, and Kubernetes configurations.
- **Stages**:
— **Checkout**: Clones the Git repository.
— **Build Docker Image**: Builds the Docker image using the `Dockerfile`.
— **Push Docker Image**: Pushes the built image to the specified Docker registry.
— **Deploy to Kubernetes**: Uses `kubectl` to apply the Kubernetes manifests. To handle downtime, the deployment is scaled down to zero replicas before applying the new configuration and then scaled back up.

**Note:** Replace placeholders like `your-docker-registry`, `https://github.com/your-repo/my-app.git`, and credential IDs with your actual values.

— -

## **5. Handling Downtime During Deployment**

While Kubernetes supports zero-downtime deployments through strategies like rolling updates, there are scenarios where you might need to intentionally introduce downtime (e.g., for database migrations or significant changes). This section explains how to manage such scenarios within the Jenkins pipeline.

### **Strategy Overview**

1. **Scale Down**: Reduce the number of replicas to zero to stop serving traffic.
2. **Deploy Updates**: Apply the new Kubernetes manifests (e.g., updated Deployment).
3. **Scale Up**: Increase the number of replicas to resume serving traffic.

**Pros:**

- Ensures that users are not served a partially updated application.
- Useful for critical updates that require the application to be fully stopped during deployment.

**Cons:**

- Results in application downtime, affecting user experience.
- Not suitable for high-availability applications where uptime is critical.

### **Implementing Downtime in Jenkins Pipeline**

The `Deploy to Kubernetes` stage in the Jenkinsfile handles downtime by scaling the deployment down before applying updates and scaling it back up afterward.

**Detailed Steps:**

1. **Scale Down Deployment**

kubectl scale deployment my-app-deployment — replicas=0 -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Sets the number of replicas to zero, effectively stopping all running pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Apply Kubernetes Manifests&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f k8s/deployment.yaml -n default
kubectl apply -f k8s/service.yaml -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- Applies the updated Deployment and Service configurations.

3. **Scale Up Deployment**

kubectl scale deployment my-app-deployment — replicas=3 -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restores the desired number of replicas, starting new pods with the updated configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Jenkinsfile with Downtime Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Deploy to Kubernetes') {
    steps {
        script {
            // Use the kubeconfig file for kubectl commands
            withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIAL_ID}", variable: 'KUBECONFIG')]) {
                sh '''
                    echo "Scaling down the deployment to zero replicas for downtime..."
                    kubectl scale deployment my-app-deployment --replicas=0 -n ${KUBE_NAMESPACE}

                    echo "Applying Kubernetes manifests..."
                    kubectl apply -f k8s/deployment.yaml -n ${KUBE_NAMESPACE}
                    kubectl apply -f k8s/service.yaml -n ${KUBE_NAMESPACE}

                    echo "Scaling up the deployment to desired replicas..."
                    kubectl scale deployment my-app-deployment --replicas=3 -n ${KUBE_NAMESPACE}
                '''
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
**Alternative Approach: Using Kubernetes Rolling Updates**

If downtime is not acceptable, consider using Kubernetes’ rolling update strategy, which updates pods incrementally without taking the entire application offline.

**Modify `deployment.yaml` for Rolling Updates:**

spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
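&lt;p&gt;With these settings, the pod count during an update stays within a predictable band; the arithmetic for the example values is:&lt;/p&gt;

```shell
# Bounds on pod count during a rolling update with the values above.
REPLICAS=3
MAX_UNAVAILABLE=1
MAX_SURGE=1
echo "min ready pods during update: $((REPLICAS - MAX_UNAVAILABLE))"   # 2
echo "max total pods during update: $((REPLICAS + MAX_SURGE))"         # 4
```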



&lt;p&gt;&lt;strong&gt;Adjust Jenkinsfile Deployment Stage:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Remove scaling steps and allow Kubernetes to handle updates seamlessly.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Deploy to Kubernetes') {
    steps {
        script {
            withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIAL_ID}", variable: 'KUBECONFIG')]) {
                sh '''
                    echo "Applying Kubernetes manifests with rolling update..."
                    kubectl apply -f k8s/deployment.yaml -n ${KUBE_NAMESPACE}
                    kubectl apply -f k8s/service.yaml -n ${KUBE_NAMESPACE}
                '''
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
**Recommendation:** Use rolling updates for zero-downtime deployments. Reserve downtime handling strategies for cases where they are absolutely necessary.

— -

## **6. Running and Validating the Pipeline**

Once the Jenkins pipeline is configured, follow these steps to run and validate the deployment.

### **Create a New Jenkins Pipeline Job**

1. **Navigate to Jenkins Dashboard**: Open your Jenkins instance in the browser.

2. **Create a New Item**:

- Click on `New Item`.
— Enter a name for your pipeline (e.g., `Deploy-My-App`).
— Select `Pipeline` as the project type.
— Click `OK`.

3. **Configure the Pipeline**:

- **Description**: Optionally, add a description.
— **Pipeline**:
— **Definition**: Choose `Pipeline script from SCM`.
— **SCM**: Select `Git`.
— **Repository URL**: Enter your Git repository URL (e.g., `https://github.com/your-repo/my-app.git`).
— **Credentials**: Select or add credentials if your repository is private.
— **Branches to Build**: Specify the branch (e.g., `main`).
— **Script Path**: Enter `Jenkinsfile` if it’s in the root directory.

4. **Save**: Click `Save` to create the pipeline job.

### **Trigger the Pipeline**

1. **Manual Trigger**:

- Navigate to the pipeline job.
— Click `Build Now` to start the pipeline manually.

2. **Automatic Trigger**:

- Ensure that your repository is set to trigger Jenkins builds on commits (using webhooks).
— Push changes to the repository to trigger the pipeline automatically.

### **Monitor the Pipeline**

1. **View Build Progress**:

- Click on the running build to see real-time logs.
— Monitor each stage (`Checkout`, `Build Docker Image`, `Push Docker Image`, `Deploy to Kubernetes`).

2. **Handle Failures**:

- If any stage fails, Jenkins will mark the build as `FAILED`.
— Review the console output to identify and fix issues.

### **Validate Deployment in Kubernetes**

1. **Check Pods**:

kubectl get pods -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Ensure that the pods are running and match the desired replica count.&lt;/li&gt;
&lt;/ul&gt;
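&lt;p&gt;This check can be scripted. The sketch below counts &lt;code&gt;Running&lt;/code&gt; pods from sample output; on a real cluster you would pipe &lt;code&gt;kubectl get pods -n default&lt;/code&gt; into the same &lt;code&gt;grep&lt;/code&gt;:&lt;/p&gt;

```shell
# Count running pods against the desired replica count (sample data inlined).
SAMPLE="my-app-7d4b9c6-abc   1/1   Running
my-app-7d4b9c6-def   1/1   Running
my-app-7d4b9c6-ghi   1/1   Running"
RUNNING=$(echo "$SAMPLE" | grep -c "Running")
echo "running pods: $RUNNING (expected: 3)"
```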

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Verify Service&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Obtain the service’s external IP:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- Access the application via the external IP to verify it’s working as expected.

3. **Check Application Logs**:

kubectl logs &amp;lt;pod-name&amp;gt; -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Review application logs for any runtime issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;7. Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To ensure a reliable and maintainable CI/CD pipeline, consider the following best practices:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Use Version Control for Manifests&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Store all Kubernetes manifests in version control alongside your application code.&lt;/li&gt;
&lt;li&gt;This ensures that infrastructure changes are tracked and can be audited.&lt;/li&gt;
&lt;/ul&gt;
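&lt;p&gt;A small illustration of the audit trail this gives you (repository name and committer identity below are examples):&lt;/p&gt;

```shell
# Keeping manifests next to the code gives an auditable history of
# infrastructure changes.
git init -q demo-repo
cd demo-repo
mkdir -p k8s
echo "replicas: 3" > k8s/deployment.yaml
git add k8s/deployment.yaml
git -c user.email=ci@example.com -c user.name=ci commit -qm "k8s: set replicas to 3"
git log --oneline -- k8s/   # every manifest change shows up here
```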

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Implement Infrastructure as Code (IaC)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use tools like Terraform or Helm to manage Kubernetes resources.&lt;/li&gt;
&lt;li&gt;This promotes consistency and reusability across environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Secure Credentials&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Avoid hardcoding sensitive information.&lt;/li&gt;
&lt;li&gt;Use Jenkins credentials and Kubernetes secrets to manage sensitive data securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Enable Pipeline as Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Define your pipeline using a &lt;code&gt;Jenkinsfile&lt;/code&gt; stored in the repository.&lt;/li&gt;
&lt;li&gt;This allows versioning and easier collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Implement Rollback Mechanisms&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Configure your pipeline to handle failures gracefully.&lt;/li&gt;
&lt;li&gt;Use Kubernetes deployment strategies to rollback in case of deployment failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. Monitor and Log Deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Integrate monitoring tools (e.g., Prometheus, Grafana) to observe application performance.&lt;/li&gt;
&lt;li&gt;Centralize logs using tools like ELK Stack or Fluentd for easier troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;7. Optimize Docker Images&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use multi-stage builds to minimize image size.&lt;/li&gt;
&lt;li&gt;Regularly scan images for vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;8. Test Before Deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Incorporate automated testing (unit, integration, end-to-end) in your pipeline.&lt;/li&gt;
&lt;li&gt;Validate that the application works as expected before deploying to production.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;9. Handle Downtime Carefully&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prefer zero-downtime deployment strategies like rolling updates.&lt;/li&gt;
&lt;li&gt;If downtime is necessary, communicate it clearly to stakeholders and plan deployments during maintenance windows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;10. Document the Pipeline&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Maintain clear documentation of your pipeline stages, configurations, and dependencies.&lt;/li&gt;
&lt;li&gt;This aids in onboarding new team members and troubleshooting issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;8. Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Automating application deployments to Kubernetes using Jenkins pipelines streamlines your CI/CD processes, enhances consistency, and reduces the potential for human error. By following this step-by-step guide, you can set up a robust pipeline that builds, tests, and deploys your applications efficiently. Handling downtime during deployments is crucial for maintaining application availability and user satisfaction. While Kubernetes offers advanced deployment strategies to minimize or eliminate downtime, understanding when and how to manage downtime ensures that your deployments align with your application’s operational requirements.&lt;/p&gt;

&lt;p&gt;Adhering to best practices around security, version control, testing, and monitoring further strengthens your deployment pipeline, enabling you to deliver high-quality applications reliably and consistently.&lt;/p&gt;

&lt;p&gt;Feel free to customize and expand upon this foundation to suit your specific project needs and organizational standards.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Production-Ready Terraform Module for Seamless Disaster Recovery: Primary and Secondary Clusters with Zero Downtime</title>
      <dc:creator>Nicholas Osi</dc:creator>
      <pubDate>Mon, 18 Nov 2024 15:59:30 +0000</pubDate>
      <link>https://dev.to/aidudo/production-ready-terraform-module-for-seamless-disaster-recovery-primary-and-secondary-clusters-5an4</link>
      <guid>https://dev.to/aidudo/production-ready-terraform-module-for-seamless-disaster-recovery-primary-and-secondary-clusters-5an4</guid>
      <description>&lt;p&gt;Creating a production-ready Terraform module for setting up a Disaster Recovery (DR) environment with primary and secondary clusters without downtime involves several components. This comprehensive guide provides you with a ready-to-use Terraform template that you can literally copy and deploy in your environment. The template is designed for AWS, but it can be adapted for other cloud providers with minimal changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; While this template is designed to be as plug-and-play as possible, it’s crucial to review and understand each component to ensure it aligns with your specific requirements and compliance standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before deploying the Terraform template, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Installed&lt;/strong&gt;: Version 1.0 or later. (&lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;https://learn.hashicorp.com/tutorials/terraform/install-cli&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Account&lt;/strong&gt;: With necessary permissions to create resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI Configured&lt;/strong&gt;: For authentication. (&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Installed&lt;/strong&gt;: To clone the repository (optional).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;2. Directory Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Organize your Terraform code for maintainability and scalability. Here’s the recommended structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-dr-setup/
├── main.tf
├── variables.tf
├── outputs.tf
├── backend.tf
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── compute/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── database/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── s3_replication/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── route53_failover/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;3. Terraform Configuration Files&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Below are the detailed configurations for each component.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.1 Provider and Backend Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;backend.tf&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.0"

  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "dr-setup/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock-table"
    encrypt        = true
  }
}

provider "aws" {
  region = var.primary_region
}

provider "aws" {
  alias  = "secondary"
  region = var.secondary_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Uses AWS S3 for storing the Terraform state and DynamoDB for state locking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Providers&lt;/strong&gt;: Defines two AWS providers for the primary and secondary regions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.2 Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "primary_region" {
  description = "Primary AWS region"
  type        = string
  default     = "us-east-1"
}

variable "secondary_region" {
  description = "Secondary AWS region for DR"
  type        = string
  default     = "us-west-2"
}

variable "vpc_cidr_primary" {
  description = "CIDR block for primary VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "vpc_cidr_secondary" {
  description = "CIDR block for secondary VPC"
  type        = string
  default     = "10.1.0.0/16"
}

variable "app_ami" {
  description = "AMI ID for application servers"
  type        = string
  default     = "ami-0c55b159cbfafe1f0" # Example AMI
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.medium"
}

variable "db_engine" {
  description = "Database engine"
  type        = string
  default     = "postgres"
}

variable "db_username" {
  description = "Database admin username"
  type        = string
}

variable "db_password" {
  description = "Database admin password"
  type        = string
  sensitive   = true
}

variable "s3_primary_bucket" {
  description = "Primary S3 bucket name"
  type        = string
  default     = "my-app-primary-bucket"
}

variable "s3_secondary_bucket" {
  description = "Secondary S3 bucket name"
  type        = string
  default     = "my-app-secondary-bucket"
}

variable "domain_name" {
  description = "Domain name for Route 53"
  type        = string
  default     = "example.com"
}

variable "hosted_zone_id" {
  description = "Route 53 Hosted Zone ID"
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Defines all necessary variables with default values where applicable. Sensitive variables like &lt;code&gt;db_password&lt;/code&gt; are marked accordingly.&lt;/p&gt;
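&lt;p&gt;Non-default variables can be supplied through a &lt;code&gt;terraform.tfvars&lt;/code&gt; file instead of command-line flags. A minimal sketch with placeholder values (the password and zone ID below are illustrative assumptions, not real values):&lt;/p&gt;

```hcl
# terraform.tfvars -- placeholder values; replace with your own
db_username    = "admin"
db_password    = "change-me"     # prefer TF_VAR_db_password or a secrets manager
hosted_zone_id = "Z1234567890"   # your Route 53 hosted zone ID
domain_name    = "example.com"
```

Keeping secrets out of shell history this way also makes CI/CD runs simpler, though a dedicated secrets manager is still preferable for `db_password`.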

&lt;p&gt;3.3 Networking Module&lt;/p&gt;

&lt;p&gt;Path: &lt;code&gt;modules/networking/main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;data "aws_availability_zones" "available" {&lt;br&gt;
state = "available"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_vpc" "this" {&lt;br&gt;
cidr_block = var.vpc_cidr&lt;br&gt;
enable_dns_support = true&lt;br&gt;
enable_dns_hostnames = true&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-vpc"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_subnet" "public" {&lt;br&gt;
count = 2&lt;br&gt;
vpc_id = aws_vpc.this.id&lt;br&gt;
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)&lt;br&gt;
availability_zone = element(data.aws_availability_zones.available.names, count.index)&lt;br&gt;
map_public_ip_on_launch = true&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-public-subnet-${count.index + 1}"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_internet_gateway" "this" {&lt;br&gt;
vpc_id = aws_vpc.this.id&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-igw"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_route_table" "public" {&lt;br&gt;
vpc_id = aws_vpc.this.id&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-public-rt"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_route" "internet_access" {&lt;br&gt;
route_table_id = aws_route_table.public.id&lt;br&gt;
destination_cidr_block = "0.0.0.0/0"&lt;br&gt;
gateway_id = aws_internet_gateway.this.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_route_table_association" "public" {&lt;br&gt;
count = 2&lt;br&gt;
subnet_id = aws_subnet.public[count.index].id&lt;br&gt;
route_table_id = aws_route_table.public.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/networking/variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;variable "vpc_cidr" {&lt;br&gt;
description = "CIDR block for the VPC"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "name" {&lt;br&gt;
description = "Name prefix for resources"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/networking/outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "vpc_id" {&lt;br&gt;
description = "VPC ID"&lt;br&gt;
value = aws_vpc.this.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "public_subnets" {&lt;br&gt;
description = "List of public subnet IDs"&lt;br&gt;
value = aws_subnet.public[*].id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Sets up a VPC with two public subnets, an Internet Gateway, and associated route tables. This setup is replicated in both primary and secondary regions.&lt;/p&gt;
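&lt;p&gt;The &lt;code&gt;cidrsubnet(var.vpc_cidr, 8, count.index)&lt;/code&gt; call above carves /24 subnets out of the /16 VPC block by adding 8 bits to the prefix and selecting the count.index-th child network. Terraform does this natively; the same arithmetic can be sketched in Python purely for illustration:&lt;/p&gt;

```python
import ipaddress

# cidrsubnet("10.0.0.0/16", 8, index): 16 + 8 = 24-bit prefix,
# then pick the index-th /24 inside the parent block.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(subnets[0])  # 10.0.0.0/24  (count.index = 0)
print(subnets[1])  # 10.0.1.0/24  (count.index = 1)
```

With two subnets and `count = 2`, the module consumes only the first two of the 256 possible /24 blocks, leaving plenty of address space for private subnets later.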

&lt;p&gt;3.4 Compute Module&lt;/p&gt;

&lt;p&gt;Path: &lt;code&gt;modules/compute/main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;resource "aws_security_group" "app_sg" {&lt;br&gt;
vpc_id = var.vpc_id&lt;br&gt;
ingress {&lt;br&gt;
from_port = 80&lt;br&gt;
to_port = 80&lt;br&gt;
protocol = "tcp"&lt;br&gt;
cidr_blocks = ["0.0.0.0/0"]&lt;br&gt;
}&lt;br&gt;
ingress {&lt;br&gt;
from_port = 22&lt;br&gt;
to_port = 22&lt;br&gt;
protocol = "tcp"&lt;br&gt;
cidr_blocks = ["0.0.0.0/0"]&lt;br&gt;
}&lt;br&gt;
egress {&lt;br&gt;
from_port = 0&lt;br&gt;
to_port = 0&lt;br&gt;
protocol = "-1"&lt;br&gt;
cidr_blocks = ["0.0.0.0/0"]&lt;br&gt;
}&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-sg"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "app" {&lt;br&gt;
count = var.instance_count&lt;br&gt;
ami = var.app_ami&lt;br&gt;
instance_type = var.instance_type&lt;br&gt;
subnet_id = element(var.subnet_ids, count.index)&lt;br&gt;
vpc_security_group_ids = [aws_security_group.app_sg.id] # use group IDs, not names, inside a VPC&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
Name = "${var.name}-app-${count.index + 1}"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/compute/variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;variable "vpc_id" {&lt;br&gt;
description = "VPC ID"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "subnet_ids" {&lt;br&gt;
description = "List of subnet IDs"&lt;br&gt;
type = list(string)&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "app_ami" {&lt;br&gt;
description = "AMI ID for the application servers"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "instance_type" {&lt;br&gt;
description = "EC2 instance type"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "instance_count" {&lt;br&gt;
description = "Number of EC2 instances"&lt;br&gt;
type = number&lt;br&gt;
default = 2&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "name" {&lt;br&gt;
description = "Name prefix for resources"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/compute/outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "app_instance_ids" {&lt;br&gt;
description = "List of application EC2 instance IDs"&lt;br&gt;
value = aws_instance.app[*].id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "app_security_group_id" {&lt;br&gt;
description = "Application security group ID (referenced by the database module)"&lt;br&gt;
value = aws_security_group.app_sg.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Deploys EC2 instances with a security group allowing HTTP and SSH access. The number of instances and other parameters are configurable.&lt;/p&gt;

&lt;p&gt;3.5 Database Module&lt;/p&gt;

&lt;p&gt;Path: &lt;code&gt;modules/database/main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;resource "aws_db_subnet_group" "this" {&lt;br&gt;
name = "${var.name}-db-subnet-group"&lt;br&gt;
subnet_ids = var.subnet_ids&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-db-subnet-group"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_db_instance" "this" {&lt;br&gt;
identifier = var.db_identifier&lt;br&gt;
engine = var.db_engine&lt;br&gt;
instance_class = var.db_instance_class&lt;br&gt;
allocated_storage = 100&lt;br&gt;
storage_type = "gp2"&lt;br&gt;
engine_version = "13.3"&lt;br&gt;
db_name = var.db_name&lt;br&gt;
username = var.db_username&lt;br&gt;
password = var.db_password&lt;br&gt;
db_subnet_group_name = aws_db_subnet_group.this.name&lt;br&gt;
vpc_security_group_ids = [var.sg_id]&lt;br&gt;
multi_az = var.multi_az&lt;br&gt;
publicly_accessible = false&lt;br&gt;
skip_final_snapshot = true&lt;br&gt;
backup_retention_period = 7&lt;br&gt;
tags = {&lt;br&gt;
Name = "${var.name}-db"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;# Replication for DR&lt;br&gt;
replicate_source_db = var.replicate_source_db&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/database/variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;variable "subnet_ids" {&lt;br&gt;
description = "List of subnet IDs"&lt;br&gt;
type = list(string)&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "sg_id" {&lt;br&gt;
description = "Security Group ID for the database"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_engine" {&lt;br&gt;
description = "Database engine"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_instance_class" {&lt;br&gt;
description = "Database instance class"&lt;br&gt;
type = string&lt;br&gt;
default = "db.t3.medium"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_identifier" {&lt;br&gt;
description = "Database identifier"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_name" {&lt;br&gt;
description = "Database name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_username" {&lt;br&gt;
description = "Database admin username"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "db_password" {&lt;br&gt;
description = "Database admin password"&lt;br&gt;
type = string&lt;br&gt;
sensitive = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "multi_az" {&lt;br&gt;
description = "Enable Multi-AZ deployment"&lt;br&gt;
type = bool&lt;br&gt;
default = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "name" {&lt;br&gt;
description = "Name prefix for resources"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "replicate_source_db" {&lt;br&gt;
description = "ARN of the source DB instance for replication"&lt;br&gt;
type = string&lt;br&gt;
default = null&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/database/outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "db_instance_endpoint" {&lt;br&gt;
description = "Database instance endpoint"&lt;br&gt;
value = aws_db_instance.this.endpoint&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "db_instance_id" {&lt;br&gt;
description = "Database instance ID"&lt;br&gt;
value = aws_db_instance.this.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "db_instance_arn" {&lt;br&gt;
description = "Database instance ARN (needed when the replica lives in another region)"&lt;br&gt;
value = aws_db_instance.this.arn&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Creates an RDS PostgreSQL instance with Multi-AZ for high availability. In the secondary region, it sets up the database as a read replica by specifying the &lt;code&gt;replicate_source_db&lt;/code&gt;.&lt;/p&gt;
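&lt;p&gt;One caveat: for a cross-region read replica, &lt;code&gt;replicate_source_db&lt;/code&gt; must be the source instance's ARN, not its identifier. A hedged sketch of how the secondary invocation might be wired (the &lt;code&gt;db_instance_arn&lt;/code&gt; output name is an assumption; add such an output to the database module if you adopt this):&lt;/p&gt;

```hcl
# Sketch: cross-region replica wiring. Assumes the database module
# exposes a db_instance_arn output; same-region replicas may use the ID.
module "database_secondary" {
  source    = "./modules/database"
  providers = { aws = aws.secondary }
  # ...other arguments as in the root module...
  replicate_source_db = module.database_primary.db_instance_arn # ARN, not ID
}
```

Passing the ARN lets the AWS provider detect that the source lives in a different region and create the replica accordingly.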

&lt;p&gt;3.6 S3 Bucket Replication Module&lt;br&gt;
Path: &lt;code&gt;modules/s3_replication/main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket" "source" {&lt;br&gt;
bucket = var.source_bucket&lt;br&gt;
acl = "private"&lt;/p&gt;

&lt;p&gt;versioning {&lt;br&gt;
enabled = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;replication_configuration {&lt;br&gt;
role = aws_iam_role.replication_role.arn&lt;/p&gt;

&lt;p&gt;rules {&lt;br&gt;
id = "replicate-all"&lt;br&gt;
status = "Enabled"&lt;/p&gt;

&lt;p&gt;filter {&lt;br&gt;
prefix = ""&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;destination {&lt;br&gt;
bucket = "arn:aws:s3:::${var.destination_bucket}"&lt;br&gt;
storage_class = "STANDARD"&lt;br&gt;
}&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
Name = var.source_bucket&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket" "destination" {&lt;br&gt;
provider = aws.secondary&lt;br&gt;
bucket = var.destination_bucket&lt;br&gt;
acl = "private"&lt;/p&gt;

&lt;p&gt;versioning {&lt;br&gt;
enabled = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
Name = var.destination_bucket&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_iam_role" "replication_role" {&lt;br&gt;
name = "${var.name}-s3-replication-role"&lt;/p&gt;

&lt;p&gt;assume_role_policy = jsonencode({&lt;br&gt;
Version = "2012-10-17"&lt;br&gt;
Statement = [{&lt;br&gt;
Action = "sts:AssumeRole"&lt;br&gt;
Effect = "Allow"&lt;br&gt;
Principal = {&lt;br&gt;
Service = "s3.amazonaws.com"&lt;br&gt;
}&lt;br&gt;
}]&lt;br&gt;
})&lt;/p&gt;

&lt;p&gt;# NOTE: verify this policy exists in your account; S3 replication typically&lt;br&gt;
# requires a custom policy granting s3:GetObjectVersion*, s3:ListBucket,&lt;br&gt;
# and s3:Replicate* on the source and destination buckets.&lt;br&gt;
managed_policy_arns = [&lt;br&gt;
"arn:aws:iam::aws:policy/service-role/AmazonS3ReplicationServiceRole"&lt;br&gt;
]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/s3_replication/variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;variable "source_bucket" {&lt;br&gt;
description = "Source S3 bucket name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "destination_bucket" {&lt;br&gt;
description = "Destination S3 bucket name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "name" {&lt;br&gt;
description = "Name prefix for resources"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/s3_replication/outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "source_bucket_id" {&lt;br&gt;
description = "Source S3 bucket ID"&lt;br&gt;
value = aws_s3_bucket.source.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "destination_bucket_id" {&lt;br&gt;
description = "Destination S3 bucket ID"&lt;br&gt;
value = aws_s3_bucket.destination.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Sets up S3 bucket replication from the primary to the secondary region. It creates both source and destination buckets with versioning enabled and configures replication rules.&lt;/p&gt;
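&lt;p&gt;Note that the inline &lt;code&gt;versioning&lt;/code&gt; and &lt;code&gt;replication_configuration&lt;/code&gt; blocks used above were deprecated in version 4 of the AWS provider in favour of standalone resources. A sketch of the newer form, reusing the bucket and role names from this module:&lt;/p&gt;

```hcl
# Newer (AWS provider v4+) form: versioning and replication as standalone
# resources instead of inline blocks on aws_s3_bucket.
resource "aws_s3_bucket_versioning" "source" {
  bucket = aws_s3_bucket.source.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_replication_configuration" "this" {
  # Versioning must be enabled before replication can be configured.
  depends_on = [aws_s3_bucket_versioning.source]
  role       = aws_iam_role.replication_role.arn
  bucket     = aws_s3_bucket.source.id

  rule {
    id     = "replicate-all"
    status = "Enabled"
    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"
    }
  }
}
```

If you adopt this form, remove the inline blocks from the bucket resources to avoid the two representations fighting over the same configuration.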

&lt;p&gt;3.7 Route 53 Failover Configuration&lt;/p&gt;

&lt;p&gt;Path: &lt;code&gt;modules/route53_failover/main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;resource "aws_route53_health_check" "primary_health" {&lt;br&gt;
fqdn = var.primary_fqdn&lt;br&gt;
type = "HTTP"&lt;br&gt;
resource_path = "/health"&lt;br&gt;
failure_threshold = 3&lt;br&gt;
request_interval = 30&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_route53_record" "primary" {&lt;br&gt;
zone_id = var.zone_id&lt;br&gt;
name = var.record_name&lt;br&gt;
type = "A"&lt;/p&gt;

&lt;p&gt;set_identifier = "primary"&lt;/p&gt;

&lt;p&gt;alias {&lt;br&gt;
name = var.primary_elb_dns&lt;br&gt;
zone_id = var.primary_elb_zone_id&lt;br&gt;
evaluate_target_health = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;health_check_id = aws_route53_health_check.primary_health.id&lt;/p&gt;

&lt;p&gt;failover_routing_policy {&lt;br&gt;
type = "PRIMARY"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_route53_record" "secondary" {&lt;br&gt;
zone_id = var.zone_id&lt;br&gt;
name = var.record_name&lt;br&gt;
type = "A"&lt;/p&gt;

&lt;p&gt;set_identifier = "secondary"&lt;/p&gt;

&lt;p&gt;alias {&lt;br&gt;
name = var.secondary_elb_dns&lt;br&gt;
zone_id = var.secondary_elb_zone_id&lt;br&gt;
evaluate_target_health = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;failover_routing_policy {&lt;br&gt;
type = "SECONDARY"&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/route53_failover/variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;variable "zone_id" {&lt;br&gt;
description = "Route 53 Hosted Zone ID"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "record_name" {&lt;br&gt;
description = "DNS record name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "primary_fqdn" {&lt;br&gt;
description = "Primary application FQDN for health checks"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "primary_elb_dns" {&lt;br&gt;
description = "Primary ELB DNS name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "primary_elb_zone_id" {&lt;br&gt;
description = "Primary ELB Hosted Zone ID"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "secondary_elb_dns" {&lt;br&gt;
description = "Secondary ELB DNS name"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "secondary_elb_zone_id" {&lt;br&gt;
description = "Secondary ELB Hosted Zone ID"&lt;br&gt;
type = string&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules/route53_failover/outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "primary_health_check_id" {&lt;br&gt;
description = "Primary health check ID"&lt;br&gt;
value = aws_route53_health_check.primary_health.id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Configures Route 53 DNS failover with health checks. If the primary ELB fails the health check, traffic is routed to the secondary ELB.&lt;/p&gt;
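&lt;p&gt;The failover behaviour boils down to: answer DNS queries with the primary alias while its health check passes, otherwise answer with the secondary. A minimal Python sketch of that decision, purely illustrative since Route 53 performs it internally (the ELB hostnames are placeholder assumptions):&lt;/p&gt;

```python
def resolve(primary_healthy,
            primary="primary-elb.example.com",
            secondary="secondary-elb.example.com"):
    """Pick the endpoint the way a PRIMARY/SECONDARY failover policy would."""
    return primary if primary_healthy else secondary

print(resolve(True))   # primary-elb.example.com
print(resolve(False))  # secondary-elb.example.com
```

In practice there is propagation delay: clients keep the old answer until their cached record's TTL expires, so failover is fast but not instantaneous.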

&lt;p&gt;3.8 Outputs&lt;br&gt;
&lt;code&gt;outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;output "primary_vpc_id" {&lt;br&gt;
description = "Primary VPC ID"&lt;br&gt;
value = module.networking_primary.vpc_id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "secondary_vpc_id" {&lt;br&gt;
description = "Secondary VPC ID"&lt;br&gt;
value = module.networking_secondary.vpc_id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "primary_app_instances" {&lt;br&gt;
description = "Primary application EC2 instances"&lt;br&gt;
value = module.compute_primary.app_instance_ids&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "secondary_app_instances" {&lt;br&gt;
description = "Secondary application EC2 instances"&lt;br&gt;
value = module.compute_secondary.app_instance_ids&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "primary_db_endpoint" {&lt;br&gt;
description = "Primary DB Endpoint"&lt;br&gt;
value = module.database_primary.db_instance_endpoint&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "secondary_db_endpoint" {&lt;br&gt;
description = "Secondary DB Endpoint"&lt;br&gt;
value = module.database_secondary.db_instance_endpoint&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "s3_primary_bucket" {&lt;br&gt;
description = "Primary S3 Bucket"&lt;br&gt;
value = module.s3_replication_primary.source_bucket_id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;output "s3_secondary_bucket" {&lt;br&gt;
description = "Secondary S3 Bucket"&lt;br&gt;
value = module.s3_replication_primary.destination_bucket_id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Explanation:&lt;br&gt;
Exports essential information about the deployed resources, such as VPC IDs, EC2 instance IDs, database endpoints, and S3 bucket IDs.&lt;/p&gt;

&lt;p&gt;4. Deploying the Terraform Template&lt;/p&gt;

&lt;p&gt;Follow these steps to deploy the DR setup using the provided Terraform template.&lt;/p&gt;

&lt;p&gt;4.1 Clone the Repository&lt;/p&gt;

&lt;p&gt;git clone &lt;a href="https://github.com/your-repo/terraform-dr-setup.git" rel="noopener noreferrer"&gt;https://github.com/your-repo/terraform-dr-setup.git&lt;/a&gt;&lt;br&gt;
cd terraform-dr-setup&lt;/p&gt;

&lt;p&gt;Note: Replace &lt;code&gt;https://github.com/your-repo/terraform-dr-setup.git&lt;/code&gt; with your actual repository URL if applicable.&lt;/p&gt;

&lt;p&gt;4.2 Initialize Terraform&lt;/p&gt;

&lt;p&gt;Initialize the Terraform working directory, download plugins, and configure the backend.&lt;/p&gt;

&lt;p&gt;terraform init&lt;/p&gt;

&lt;p&gt;4.3 Review the Plan&lt;/p&gt;

&lt;p&gt;Generate and review the execution plan to ensure resources are created as expected.&lt;/p&gt;

&lt;p&gt;terraform plan -var="db_username=admin" -var="db_password=yourpassword" -var="hosted_zone_id=Z1234567890"&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;yourpassword&lt;/code&gt; with a secure password and &lt;code&gt;Z1234567890&lt;/code&gt; with your actual Route 53 Hosted Zone ID.&lt;/p&gt;

&lt;p&gt;4.4 Apply the Configuration&lt;/p&gt;

&lt;p&gt;Apply the Terraform configuration to create the resources.&lt;/p&gt;

&lt;p&gt;terraform apply -var="db_username=admin" -var="db_password=yourpassword" -var="hosted_zone_id=Z1234567890" -auto-approve&lt;/p&gt;

&lt;p&gt;Warning: The &lt;code&gt;-auto-approve&lt;/code&gt; flag skips the confirmation prompt. Remove it if you prefer manual approval.&lt;/p&gt;

&lt;p&gt;5. Testing the DR Setup&lt;/p&gt;

&lt;p&gt;After deployment, it's essential to test the DR setup to ensure failover works seamlessly.&lt;/p&gt;

&lt;p&gt;5.1 Verify Resource Creation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPCs: Ensure both primary and secondary VPCs are created.&lt;/li&gt;
&lt;li&gt;EC2 Instances: Check that EC2 instances are running in both regions.&lt;/li&gt;
&lt;li&gt;RDS Instances: Confirm that the secondary RDS instance is a read replica.&lt;/li&gt;
&lt;li&gt;S3 Buckets: Verify that replication is configured between primary and secondary buckets.&lt;/li&gt;
&lt;li&gt;Route 53: Ensure DNS records are set up with failover policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5.2 Simulate Failover&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Primary application down: stop or terminate the primary EC2 instances or the ELB.&lt;/li&gt;
&lt;li&gt;Health check failure: confirm that Route 53 detects the outage via its health checks.&lt;/li&gt;
&lt;li&gt;Traffic routing: verify that traffic is routed to the secondary ELB with minimal interruption.&lt;/li&gt;
&lt;li&gt;Data consistency: check that data in the secondary database and S3 bucket is up to date.&lt;/li&gt;
&lt;/ol&gt;
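&lt;p&gt;The traffic-routing check can be automated by polling the application until responses come from the secondary region. A sketch with an injectable &lt;code&gt;fetch&lt;/code&gt; callable so the logic can be exercised without network access; how your app reports its region (a response header, a /health payload) is an assumption you must adapt:&lt;/p&gt;

```python
import time

def wait_for_failover(fetch, expected_region="secondary", attempts=30, interval=10):
    """Poll fetch() until it reports the expected region.

    fetch is a callable returning the region string currently serving
    traffic (e.g. parsed from a /health endpoint payload).
    Returns the number of polls made before failover was observed.
    """
    for attempt in range(attempts):
        if fetch() == expected_region:
            return attempt
        time.sleep(interval)
    raise TimeoutError("failover to " + expected_region + " not observed")

# Usage with a stub that fails over after two polls:
responses = iter(["primary", "primary", "secondary"])
print(wait_for_failover(lambda: next(responses), interval=0))  # 2
```

In a real drill, `fetch` would issue an HTTPS request to `app.example.com` and the interval should comfortably exceed the DNS record's TTL.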

&lt;p&gt;5.3 Restore Primary Services&lt;/p&gt;

&lt;p&gt;Once testing is complete, restore the primary services and ensure Route 53 redirects traffic back if primary services are healthy.&lt;/p&gt;

&lt;p&gt;6. Maintenance and Best Practices&lt;/p&gt;

&lt;p&gt;To ensure the DR setup remains robust and secure, follow these best practices:&lt;/p&gt;

&lt;p&gt;6.1 Regular Updates&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform: Keep Terraform updated to the latest version.&lt;/li&gt;
&lt;li&gt;AWS Services: Monitor and apply updates to AWS services and configurations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6.2 Monitoring and Alerts&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement monitoring using AWS CloudWatch or other monitoring tools.&lt;/li&gt;
&lt;li&gt;Set up alerts for critical events, such as failovers or resource failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6.3 Security Management&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Regularly rotate database passwords and access keys.&lt;/li&gt;
&lt;li&gt;Implement IAM best practices, granting least privilege.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6.4 Cost Management&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor AWS costs to avoid unexpected charges.&lt;/li&gt;
&lt;li&gt;Utilize AWS Cost Explorer and budgeting tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6.5 Documentation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain up-to-date documentation of the infrastructure and DR procedures.&lt;/li&gt;
&lt;li&gt;Document any changes made to the Terraform configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;7. Complete Terraform Code&lt;/p&gt;

&lt;p&gt;For your convenience, here's the complete Terraform code structured as described above. You can &lt;strong&gt;copy and use&lt;/strong&gt; it directly, ensuring you adjust variables like &lt;code&gt;db_username&lt;/code&gt;, &lt;code&gt;db_password&lt;/code&gt;, and &lt;code&gt;hosted_zone_id&lt;/code&gt; as needed.&lt;/p&gt;

&lt;p&gt;7.1 Root Module Files&lt;/p&gt;

&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;module "networking_primary" {&lt;br&gt;
source = "./modules/networking"&lt;br&gt;
vpc_cidr = var.vpc_cidr_primary&lt;br&gt;
name = "primary"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "networking_secondary" {&lt;br&gt;
source = "./modules/networking"&lt;br&gt;
providers = { aws = aws.secondary }&lt;br&gt;
vpc_cidr = var.vpc_cidr_secondary&lt;br&gt;
name = "secondary"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "compute_primary" {&lt;br&gt;
source = "./modules/compute"&lt;br&gt;
vpc_id = module.networking_primary.vpc_id&lt;br&gt;
subnet_ids = module.networking_primary.public_subnets&lt;br&gt;
app_ami = var.app_ami&lt;br&gt;
instance_type = var.instance_type&lt;br&gt;
instance_count = 2&lt;br&gt;
name = "primary"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "compute_secondary" {&lt;br&gt;
source = "./modules/compute"&lt;br&gt;
providers = { aws = aws.secondary }&lt;br&gt;
vpc_id = module.networking_secondary.vpc_id&lt;br&gt;
subnet_ids = module.networking_secondary.public_subnets&lt;br&gt;
app_ami = var.app_ami&lt;br&gt;
instance_type = var.instance_type&lt;br&gt;
instance_count = 2&lt;br&gt;
name = "secondary"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "database_primary" {&lt;br&gt;
source = "./modules/database"&lt;br&gt;
subnet_ids = module.networking_primary.public_subnets&lt;br&gt;
sg_id = module.compute_primary.app_security_group_id&lt;br&gt;
db_engine = var.db_engine&lt;br&gt;
db_instance_class = "db.t3.medium"&lt;br&gt;
db_identifier = "primary-db"&lt;br&gt;
db_name = "appdb"&lt;br&gt;
db_username = var.db_username&lt;br&gt;
db_password = var.db_password&lt;br&gt;
multi_az = true&lt;br&gt;
name = "primary"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "database_secondary" {&lt;br&gt;
source = "./modules/database"&lt;br&gt;
providers = { aws = aws.secondary }&lt;br&gt;
subnet_ids = module.networking_secondary.public_subnets&lt;br&gt;
sg_id = module.compute_secondary.app_security_group_id&lt;br&gt;
db_engine = var.db_engine&lt;br&gt;
db_instance_class = "db.t3.medium"&lt;br&gt;
db_identifier = "secondary-db"&lt;br&gt;
db_name = "appdb"&lt;br&gt;
db_username = var.db_username&lt;br&gt;
db_password = var.db_password&lt;br&gt;
multi_az = true&lt;br&gt;
name = "secondary"&lt;br&gt;
replicate_source_db = module.database_primary.db_instance_id # NOTE: cross-region replicas require the source instance ARN&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "s3_replication_primary" {&lt;br&gt;
source = "./modules/s3_replication"&lt;br&gt;
source_bucket = var.s3_primary_bucket&lt;br&gt;
destination_bucket = var.s3_secondary_bucket&lt;br&gt;
name = "s3-replication"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;module "route53_failover" {&lt;br&gt;
source = "./modules/route53_failover"&lt;br&gt;
zone_id = var.hosted_zone_id&lt;br&gt;
record_name = "app.${var.domain_name}"&lt;br&gt;
primary_fqdn = "app.primary.${var.domain_name}"&lt;br&gt;
primary_elb_dns = module.compute_primary.app_elb_dns&lt;br&gt;
primary_elb_zone_id = module.compute_primary.app_elb_zone_id&lt;br&gt;
secondary_elb_dns = module.compute_secondary.app_elb_dns&lt;br&gt;
secondary_elb_zone_id = module.compute_secondary.app_elb_zone_id&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;[As defined earlier]&lt;/p&gt;

&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;[As defined earlier]&lt;/p&gt;

&lt;p&gt;&lt;code&gt;backend.tf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;[As defined earlier]&lt;/p&gt;

&lt;p&gt;7.2 Modules&lt;br&gt;
For brevity, only key components of each module are shown. Ensure each module (&lt;code&gt;networking&lt;/code&gt;, &lt;code&gt;compute&lt;/code&gt;, &lt;code&gt;database&lt;/code&gt;, &lt;code&gt;s3_replication&lt;/code&gt;, &lt;code&gt;route53_failover&lt;/code&gt;) contains the respective &lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;variables.tf&lt;/code&gt;, and &lt;code&gt;outputs.tf&lt;/code&gt; as outlined in sections 3.3 to 3.7. Two wiring details to verify before applying: the compute module must provision a load balancer and export the &lt;code&gt;app_elb_dns&lt;/code&gt; and &lt;code&gt;app_elb_zone_id&lt;/code&gt; values that the &lt;code&gt;route53_failover&lt;/code&gt; module consumes, and the &lt;code&gt;s3_replication&lt;/code&gt; module must declare the &lt;code&gt;aws.secondary&lt;/code&gt; provider alias via &lt;code&gt;configuration_aliases&lt;/code&gt; (and receive it through a &lt;code&gt;providers&lt;/code&gt; block), since its destination bucket uses &lt;code&gt;provider = aws.secondary&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
This Terraform module template provides a solid foundation for setting up a Disaster Recovery environment with primary and secondary clusters on AWS. By following this guide, you can deploy a resilient infrastructure designed to fail over automatically with minimal disruption.&lt;/p&gt;

&lt;p&gt;Next Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Customize Variables: Adjust variables like &lt;code&gt;db_username&lt;/code&gt;, &lt;code&gt;db_password&lt;/code&gt;, and &lt;code&gt;hosted_zone_id&lt;/code&gt; to match your environment.&lt;/li&gt;
&lt;li&gt;Secure Secrets: Consider using Terraform’s &lt;a href="https://www.terraform.io/language/values/variables#sensitive-variables" rel="noopener noreferrer"&gt;Sensitive Variables&lt;/a&gt; or integrating with secret management tools like &lt;a href="https://aws.amazon.com/secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; or &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;HashiCorp Vault&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Enhance Security: Implement additional security measures such as restricting SSH access, enabling encryption for data at rest and in transit, and configuring IAM roles with least privilege.&lt;/li&gt;
&lt;li&gt;Automate Deployments: Integrate this Terraform setup into your CI/CD pipelines for automated deployments and updates.&lt;/li&gt;
&lt;li&gt;Continuous Monitoring: Set up comprehensive monitoring and alerting to proactively manage the health of your infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By leveraging Terraform’s infrastructure as code capabilities, you can maintain consistency, reproducibility, and scalability in your Disaster Recovery strategy, ensuring high availability and business continuity.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
