<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deeksha Gunde</title>
    <description>The latest articles on DEV Community by Deeksha Gunde (@deekshagunde).</description>
    <link>https://dev.to/deekshagunde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1735883%2F2db3578a-f14a-44d3-aaab-17b99363c8d2.jpg</url>
      <title>DEV Community: Deeksha Gunde</title>
      <link>https://dev.to/deekshagunde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deekshagunde"/>
    <language>en</language>
    <item>
      <title>Cross-Project Dependencies Handling with DBT in AWS MWAA</title>
      <dc:creator>Deeksha Gunde</dc:creator>
      <pubDate>Mon, 04 Nov 2024 19:26:53 +0000</pubDate>
      <link>https://dev.to/deekshagunde/cross-project-dependencies-handling-with-dbt-in-aws-mwaa-cn1</link>
      <guid>https://dev.to/deekshagunde/cross-project-dependencies-handling-with-dbt-in-aws-mwaa-cn1</guid>
      <description>&lt;p&gt;Managing data transformations and dependencies across multiple projects in data engineering can be tricky. As more organizations move toward modularizing data pipelines, one of the major challenges faced by data engineers is how to handle dependencies between various DBT (data build tool) projects. In this blog, we will dive into how to tackle cross-project dependencies in DBT while orchestrating workflows using AWS Managed Workflows for Apache Airflow (MWAA).&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of DBT and Cross-Project Dependencies
&lt;/h2&gt;

&lt;p&gt;DBT (Data Build Tool) allows you to define models in SQL, and those models are organized in projects. However, as the complexity of data pipelines grows, it's common for different DBT projects to have dependencies on models in other projects. Managing such dependencies in production environments is critical for smooth data pipeline execution.&lt;/p&gt;

&lt;p&gt;I wanted to build on top of the multi-repo strategy provided by &lt;a class="mentioned-user" href="https://dev.to/elliott_cordo"&gt;@elliott_cordo&lt;/a&gt; in &lt;a href="https://dev.to/elliott_cordo/avoiding-the-dbt-monolith-7ep"&gt;Avoiding the DBT Monolith&lt;/a&gt; by extending the solution to run in an MWAA environment.&lt;/p&gt;

&lt;p&gt;In complex data pipeline environments, managing cross-project dependencies, especially for tools like dbt (Data Build Tool), can pose unique challenges. If you're running Apache Airflow on AWS Managed Workflows for Apache Airflow (MWAA) and integrating private Git repositories, securing access to these repositories during DAG execution often requires creative problem-solving.&lt;/p&gt;

&lt;p&gt;I encountered a specific issue where the &lt;code&gt;dbt deps&lt;/code&gt; command failed to clone a private Git repository in an MWAA environment due to issues with the SSH key stored in AWS Secrets Manager. This post explains how I attempted to resolve the problem with the PythonVirtualenvOperator and ultimately transitioned to packaging the parent dbt project inside a plugins.zip file. This method resolved the dependency issues and let me securely reference models from other dbt projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up AWS MWAA with Terraform
&lt;/h2&gt;

&lt;p&gt;To run DBT projects in the cloud, AWS Managed Workflows for Apache Airflow (MWAA) provides an easy-to-manage solution. For this project, I used Terraform to create MWAA environments, manage Airflow DAGs, and provision other necessary resources. Here’s a snippet of how we set up the MWAA environment using Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "mwaa" {
  source = "aws-ia/mwaa/aws"
  name            = local.resource_name
  airflow_version = var.mwaa_airflow_version
  environment_class = var.environment_class
  create_s3_bucket  = false
  source_bucket_arn = module.bucket.s3_bucket_arn
  dag_s3_path       = "dags"
  iam_role_name     = local.resource_name
  requirements_s3_path   = "mwaa/requirements.txt"
  startup_script_s3_path = "mwaa/startup.sh"

  min_workers = var.min_workers
  max_workers = var.max_workers
  vpc_id      = try(var.vpc_id, module.vpc.vpc_id)
  private_subnet_ids = try(var.private_subnet_ids, module.vpc.private_subnets)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code provisions the MWAA environment and links it to an S3 bucket that houses the Airflow DAGs and Python dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  One way to manage DBT Project Dependencies in Airflow
&lt;/h2&gt;

&lt;p&gt;When orchestrating DBT projects using Airflow in MWAA, one of the primary concerns is making sure models from one DBT project can reference those from another. Airflow’s DAGs (Directed Acyclic Graphs) are designed to handle such dependencies, ensuring that one DBT project can finish its transformations before another one starts.&lt;/p&gt;

&lt;p&gt;In this scenario, we can use the &lt;strong&gt;PythonVirtualenvOperator&lt;/strong&gt; in Airflow, which allows us to create isolated virtual environments to run Python code, including DBT commands. We use this operator because DBT has difficulties resolving dependency issues with the standard Airflow constraints file, and modifying this file is complex and potentially unstable for long-term use. Refer to the &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html#constraints-files" rel="noopener noreferrer"&gt;Airflow constraint file documentation&lt;/a&gt;   for more details.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;PythonVirtualenvOperator&lt;/code&gt; allows us to install additional libraries and dependencies specific to each task, without interfering with the main Airflow environment or requiring permanent changes to Airflow’s global constraints. This flexibility is essential because DBT projects often require specific versions of libraries that conflict with the dependencies needed by other parts of Airflow, and modifying Airflow’s constraints file directly to accommodate them could cause future instability and is generally discouraged. By isolating DBT’s dependencies within a virtual environment, we avoid compatibility issues and ensure a stable execution environment for DBT commands such as &lt;code&gt;dbt deps&lt;/code&gt; and &lt;code&gt;dbt run&lt;/code&gt;, without risking conflicts with Airflow’s core dependencies. &lt;/p&gt;

&lt;p&gt;As part of a DAG in Airflow, I needed to clone a private dbt project from a Git repository. This project contained key models and macros that needed to be referenced in a child dbt project. To handle secure access to this repository, I stored the SSH private key in &lt;em&gt;AWS Secrets Manager&lt;/em&gt; and used the &lt;em&gt;PythonVirtualenvOperator&lt;/em&gt; to run the &lt;code&gt;dbt deps&lt;/code&gt; command inside the virtual environment during DAG execution. This step installs the parent projects into &lt;code&gt;dbt_packages&lt;/code&gt;, making their models easily accessible and referenceable from the child DBT project’s models.&lt;/p&gt;

&lt;p&gt;Here’s a simplified structure of the DAG I was working with that manages cross-project dependencies using &lt;em&gt;PythonVirtualenvOperator&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from airflow import DAG
from airflow.operators.python import PythonVirtualenvOperator
from airflow.utils.dates import days_ago
from datetime import datetime
def dbt_run(selected_folder=None):
    import os
    os.system(f"dbt deps")
    os.system(f"dbt run --models {selected_folder}")
with DAG('dbt_cross_project', 
          start_date=days_ago(1), 
          schedule_interval='@daily') as dag:
    run_dbt_project = PythonVirtualenvOperator(
        task_id='run_dbt_project',
        python_callable=dbt_run,
        op_kwargs={'selected_folder': 'dbt_poc_child'},
        requirements=['dbt'],
        system_site_packages=False
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the corresponding packages.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packages:
  - package: dbt-labs/dbt_external_tables
    version: 0.8.7
  - package: dbt-labs/audit_helper
    version: 0.9.0

  # Replace below package references with your parent DBT projects
  - git: 'git@github.com:d-gunde/dbt_poc.git'
    revision: main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The idea was that by invoking the &lt;code&gt;dbt deps&lt;/code&gt; command within the virtual environment, I could securely clone the parent dbt project from the private repository using the SSH key stored in AWS Secrets Manager.&lt;/p&gt;
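The key-handling step I attempted can be sketched as below. This is a minimal, hypothetical sketch (the Secrets Manager fetch itself, e.g. via boto3's `get_secret_value`, is elided, and `ssh_env_for_key` is an illustrative helper name): it writes the private key to a temp file with the 0600 permissions ssh requires and points git at it through `GIT_SSH_COMMAND` before invoking `dbt deps`.

```python
import os
import stat
import subprocess
import tempfile

def ssh_env_for_key(private_key_pem):
    """Build an env dict that makes git use the given SSH private key.

    The key text would come from AWS Secrets Manager (elided here).
    """
    key_file = tempfile.NamedTemporaryFile(mode="w", suffix=".pem", delete=False)
    key_file.write(private_key_pem)
    key_file.close()
    # ssh refuses keys readable by group/others, so force 0600
    os.chmod(key_file.name, stat.S_IRUSR | stat.S_IWUSR)

    env = dict(os.environ)
    env["GIT_SSH_COMMAND"] = (
        "ssh -i " + key_file.name + " -o StrictHostKeyChecking=accept-new"
    )
    return env

def dbt_deps(project_dir, env):
    # Clones the git packages in packages.yml, including the private parent project
    return subprocess.call(["dbt", "deps"], cwd=project_dir, env=env)
```

In principle the DAG task would call `dbt_deps(project_dir, ssh_env_for_key(fetched_key))` before `dbt run`; as described next, this is the part that failed in MWAA.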

&lt;p&gt;However, during DAG execution, I encountered the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Permission denied (publickey).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Despite configuring the SSH key in Secrets Manager, the Airflow worker couldn’t access the private repository. Upon investigating the issue, it turned out to be related to how the libcrypto library and SSH keys are handled in the MWAA environment, which was preventing proper authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pivot - Leveraging the Custom Plugins Feature of Apache Airflow
&lt;/h2&gt;

&lt;p&gt;After multiple attempts to resolve the issue with PythonVirtualenvOperator, it became clear that managing SSH keys dynamically during DAG execution wasn't the most reliable approach in MWAA. So, I opted for an alternative solution: zipping the parent dbt project and deploying it as part of plugins.zip in the Airflow environment.&lt;/p&gt;

&lt;p&gt;I leveraged and modified the custom plugins feature of Apache Airflow, which allows users to extend Airflow’s core functionality by adding custom operators, sensors, hooks, and other components. By packaging these extensions into a plugins.zip file and uploading it to Airflow, you can introduce new behaviors or integrate external systems into your workflows. However, instead of adding a custom operator or sensor, I added the parent DBT projects as plugins, which gives the following advantages: &lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;&lt;em&gt;No need for SSH keys&lt;/em&gt;&lt;/strong&gt;: By pre-packaging the parent dbt project inside plugins.zip, I eliminated the need for SSH key management and authentication during the DAG run.&lt;/p&gt;

&lt;p&gt;b. &lt;strong&gt;&lt;em&gt;Local availability&lt;/em&gt;&lt;/strong&gt;: The parent dbt project is now always available locally in the Airflow environment, ensuring that all models and macros are accessible without additional network calls.&lt;/p&gt;

&lt;p&gt;c. &lt;strong&gt;&lt;em&gt;Scalability&lt;/em&gt;&lt;/strong&gt;: This approach scales well for environments with multiple dbt projects and can be easily automated with Terraform and other CI/CD tools.&lt;/p&gt;
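The packaging step itself is simple. A minimal sketch (the directory layout is an illustrative stand-in for a real checkout of the parent project; the upload command is shown as a comment):

```python
import os
import shutil

# Illustrative stand-in for a checkout of the parent dbt project
os.makedirs("dbt_poc/models", exist_ok=True)
with open("dbt_poc/dbt_project.yml", "w") as f:
    f.write("name: dbt_poc\nversion: '1.0.0'\nconfig-version: 2\n")

# Package the project as plugins.zip; MWAA extracts this archive
# under /usr/local/airflow/plugins/ on every worker
shutil.make_archive("plugins", "zip", root_dir=".", base_dir="dbt_poc")

# Then upload it next to your other MWAA artifacts, e.g.:
#   aws s3 cp plugins.zip s3://YOUR_MWAA_BUCKET/mwaa/plugins.zip
```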

&lt;p&gt;It’s crucial to ensure that cross-project DBT dependencies are accounted for in the plugins and requirements files. Modify the MWAA provisioning code as below to add the plugins.zip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "mwaa" {
  source = "aws-ia/mwaa/aws"
  name            = local.resource_name
  airflow_version = var.mwaa_airflow_version
  environment_class = var.environment_class
  create_s3_bucket  = false
  source_bucket_arn = module.bucket.s3_bucket_arn
  dag_s3_path       = "dags"
  iam_role_name     = local.resource_name
  plugins_s3_path        = "mwaa/plugins.zip"
  requirements_s3_path   = "mwaa/requirements.txt"
  startup_script_s3_path = "mwaa/startup.sh"

  min_workers = var.min_workers
  max_workers = var.max_workers
  vpc_id      = try(var.vpc_id, module.vpc.vpc_id)
  private_subnet_ids = try(var.private_subnet_ids, module.vpc.private_subnets)
}
resource "aws_s3_object" "mwaa_plugins" {
  bucket = var.mwaa_s3_bucket
  key    = "mwaa/plugins.zip" # must match plugins_s3_path above
  source = "path/to/plugins.zip"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the parent dbt project now part of the Airflow environment, I updated the &lt;code&gt;packages.yml&lt;/code&gt; file in the child dbt project to reference the parent project locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packages:
  - package: dbt-labs/dbt_external_tables
    version: 0.8.7
  - package: dbt-labs/audit_helper
    version: 0.9.0
  - local: /usr/local/airflow/plugins/dbt_poc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using the &lt;code&gt;local&lt;/code&gt; keyword, dbt can access the parent project’s models and macros from the Airflow plugins directory during DAG execution.&lt;/p&gt;
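Once the parent project is installed as a package, its models can be referenced from the child project with dbt's two-argument `ref`. A hypothetical child model (`parent_model` is an illustrative model name in the `dbt_poc` package):

```sql
-- models/staging/stg_from_parent.sql in the child project
select *
from {{ ref('dbt_poc', 'parent_model') }}
```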

&lt;h2&gt;
  
  
  Handling Common Errors and Solutions
&lt;/h2&gt;

&lt;p&gt;Working with DBT in Airflow is generally smooth, but cross-project dependencies can lead to some common issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Invalid Identifiers&lt;/em&gt;&lt;/strong&gt;: One issue you may encounter is an SQL compilation error due to an invalid identifier when referencing models from another project. This happens when DBT can't find the model you're referencing. To fix this, ensure that the correct path and project configurations are in place within your DBT models.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;SSH Key Issues&lt;/em&gt;&lt;/strong&gt;: If your DBT project references a private git repository, you may run into permission errors like &lt;code&gt;Permission denied (publickey)&lt;/code&gt; when trying to clone the repository. In my case, I stored the SSH private key in AWS Secrets Manager and retrieved it at runtime during the Airflow DAG execution. However, this approach did not seem to work.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Handling cross-project dependencies in DBT with AWS MWAA might seem complicated, but with the right setup and orchestration, it becomes manageable. By using tools like Terraform for provisioning, PythonVirtualenvOperator for isolated execution, and AWS MWAA environment for managing workflows, you can build scalable, reliable data pipelines that handle dependencies across DBT projects efficiently. While effective, this approach can add complexity to CI/CD pipelines due to the need for customized environment configurations and dependency management within MWAA.&lt;/p&gt;

&lt;p&gt;An alternative to consider is the &lt;a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html" rel="noopener noreferrer"&gt;&lt;strong&gt;KubernetesPodOperator&lt;/strong&gt;&lt;/a&gt;, which executes tasks in isolated Kubernetes pods, potentially simplifying dependency handling and CI/CD processes. Weigh the pros and cons of both operators carefully before fully committing to the &lt;strong&gt;PythonVirtualenvOperator&lt;/strong&gt; approach and investing time in the setup.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>mwaa</category>
      <category>dbt</category>
    </item>
    <item>
      <title>Automating Secure VPN Setup on AWS with Terraform</title>
      <dc:creator>Deeksha Gunde</dc:creator>
      <pubDate>Wed, 21 Aug 2024 16:54:40 +0000</pubDate>
      <link>https://dev.to/deekshagunde/automating-secure-vpn-setup-on-aws-with-terraform-1lp3</link>
      <guid>https://dev.to/deekshagunde/automating-secure-vpn-setup-on-aws-with-terraform-1lp3</guid>
      <description>&lt;p&gt;As an AWS best practice, you want to build security into your solution at every level, and that includes networking.   In the majority of cases this means keeping your resources in a private subnet, only accessible through VPN or your corporate network.   &lt;/p&gt;

&lt;p&gt;I recently ran into this challenge while working with Managed Workflows for Apache Airflow (MWAA), a game-changer for orchestrating data workflows. Being security conscious, we wanted to set our MWAA environment to private access. This is where a VPN (Virtual Private Network) setup becomes invaluable, ensuring that your team can access the MWAA environment safely and efficiently. &lt;/p&gt;

&lt;p&gt;In this post, we'll explore the manual steps to set up a VPN client endpoint and then see how Terraform can automate and streamline this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Server and Client certificate uploaded to ACM&lt;/li&gt;
&lt;li&gt;Terraform (or Terragrunt)&lt;/li&gt;
&lt;li&gt;MWAA environment with private access&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We assume you already have certificates generated and uploaded to ACM (AWS Certificate Manager). If not, you can follow the steps to generate and add server and client certificates to ACM in this tutorial: &lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/mutual.html" rel="noopener noreferrer"&gt;Authentication&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Once you have generated the certificates, upload them to ACM using the AWS CLI commands below.&lt;br&gt;
Server certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws acm import-certificate --certificate fileb://server.crt --private-key fileb://server.key --certificate-chain fileb://ca.crt –region &amp;lt;your_region&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Client certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws acm import-certificate --certificate fileb://client1.domain.tld.crt --private-key fileb://client1.domain.tld.key --certificate-chain fileb://ca.crt –region &amp;lt;your_region&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup of VPN access, the hard way
&lt;/h2&gt;

&lt;p&gt;You can set up VPN connection access to your private MWAA environment manually by following this tutorial - &lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html" rel="noopener noreferrer"&gt;AWS Client VPN&lt;/a&gt;. As you can see, many resources need to be created and configured, and the whole process is long and tedious. &lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform to the Rescue
&lt;/h2&gt;

&lt;p&gt;While these manual steps are straightforward, they can be time-consuming and error-prone, especially when you need to repeat them across multiple environments. Terraform can automate this entire process, ensuring consistency and saving time.&lt;/p&gt;

&lt;p&gt;Terraform, an open-source infrastructure-as-code (IaC) tool, allows you to define and provision your AWS infrastructure using a simple, declarative configuration language. You can create a Terraform script that handles creating and configuring the AWS resources for the Client VPN setup, all triggered by a single command from your CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  THE Terraform script
&lt;/h2&gt;

&lt;p&gt;Below we provide a simple way of structuring your Terraform script (&lt;code&gt;main.tf&lt;/code&gt;) to achieve this.&lt;br&gt;
Let’s break down the script and see how each part contributes to the setup.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Set up AWS provider
&lt;/h3&gt;

&lt;p&gt;Tell Terraform to use AWS as the provider and specify the AWS region where the resources will be created. &lt;code&gt;var.aws_region&lt;/code&gt; is a Terraform variable defined to make it easy to switch regions as needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = var.aws_region
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Cloudformation stack for Client VPN endpoint
&lt;/h3&gt;

&lt;p&gt;Here, we’re using a CloudFormation stack to set up the VPN client endpoint. This stack includes all the necessary resources and configurations, such as security group IDs, VPC ID, and subnet ID. By using CloudFormation, we can manage the resources as a single unit, making updates and maintenance straightforward. We leverage the Terraform resource &lt;code&gt;aws_cloudformation_stack&lt;/code&gt; to deploy it in the specified AWS region.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudformation_stack" "vpn_client_stack" {
  name = "VPNClientStack"

  template_body = &amp;lt;&amp;lt;-TEMPLATE
  AWSTemplateFormatVersion: '2010-09-09'
  Description: AWS VPN Client Setup

  Parameters:
    SecurityGroupId:
      Type: String
      Description: The ID of the existing security group to use
    VpcId:
      Type: String
      Description: The ID of the VPC where the VPN will be created
    SubnetId:
      Type: String
      Description: The ID of the subnet to associate with the VPN

  Resources:
    VpnClientEndpoint:
      Type: AWS::EC2::ClientVpnEndpoint
      Properties: 
        AuthenticationOptions:
          - Type: certificate-authentication
            MutualAuthentication:
              ClientRootCertificateChainArn: "${data.aws_acm_certificate.client.arn}"
        ClientCidrBlock: "${var.client_cidr}"  
        ConnectionLogOptions:
          Enabled: false
        Description: "Client VPN Endpoint"
        ServerCertificateArn: "${data.aws_acm_certificate.server.arn}"
        VpcId: !Ref VpcId
        VpnPort: 443
        TransportProtocol: udp
        TagSpecifications:
          - ResourceType: "client-vpn-endpoint"
            Tags:
            - Key: Name
              Value: Test-Client-VPN
        SplitTunnel: true
        SecurityGroupIds:
          - Ref: SecurityGroupId

  Outputs:
    VpnClientEndpointId:
      Description: "VPN Client Endpoint ID"
      Value: !Ref VpnClientEndpoint
  TEMPLATE

  parameters = {
    SecurityGroupId = var.security_group_id
    VpcId           = var.vpc_id
    SubnetId        = var.private_subnet_ids[0]
  }

  depends_on = [
    data.aws_acm_certificate.server,
    data.aws_acm_certificate.client
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet assumes that you have already uploaded the certificates to ACM, and fetches the client and server certificate ARNs via &lt;code&gt;data.aws_acm_certificate.server&lt;/code&gt; and &lt;code&gt;data.aws_acm_certificate.client&lt;/code&gt;.&lt;/p&gt;
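Those data sources are not shown in the stack snippet; a sketch of how they might be declared (the domain filters are assumptions; match them to however your imported certificates are identified in ACM):

```hcl
# Look up the ARNs of the certificates imported to ACM earlier.
# The domain values below are assumptions - use the domains from
# your own server and client certificates.
data "aws_acm_certificate" "server" {
  domain   = "server"
  statuses = ["ISSUED"]
}

data "aws_acm_certificate" "client" {
  domain   = "client1.domain.tld"
  statuses = ["ISSUED"]
}
```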

&lt;h3&gt;
  
  
  3. Associating subnet to VPN endpoint
&lt;/h3&gt;

&lt;p&gt;This resource associates the VPN client endpoint with a specified subnet, defining the network range that can be accessed via the VPN. If you are setting up the VPN endpoint for a private resource (in our case, an MWAA environment), make sure the private subnet you pass here is the one your resource is in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ec2_client_vpn_network_association" "client_vpn_association_private" {
  client_vpn_endpoint_id = aws_cloudformation_stack.vpn_client_stack.outputs["VpnClientEndpointId"]
  subnet_id              = var.private_subnet_ids[0]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Adding authorization rule
&lt;/h3&gt;

&lt;p&gt;Authorization rules determine who can access the VPN. In this script, all traffic within the VPC CIDR block is authorized, and access is granted to all user groups. Make sure to provide the VPC CIDR block associated with the MWAA environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ec2_client_vpn_authorization_rule" "authorization_rule" {
  client_vpn_endpoint_id = aws_cloudformation_stack.vpn_client_stack.outputs["VpnClientEndpointId"]

  target_network_cidr  = var.vpc_cidr
  authorize_all_groups = true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Steps to run the script
&lt;/h2&gt;

&lt;p&gt;We have discussed the core content of the Terraform script. Feel free to add any missing pieces, such as variables, data resources, and output blocks, and package them neatly in separate files in a project folder or within the same script (see the sections below for sample references).&lt;br&gt;
Once you have everything ready and in place, you are ready to test your script! &lt;/p&gt;
&lt;h3&gt;
  
  
  1. Initialize Your Terraform Workspace
&lt;/h3&gt;

&lt;p&gt;Navigate to the directory containing your &lt;code&gt;main.tf&lt;/code&gt; file and run the following command to initialize your Terraform workspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Configure Your Variables
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;variables.tf&lt;/code&gt; file to define the variables used in your script, such as &lt;code&gt;aws_region&lt;/code&gt;, &lt;code&gt;security_group_id&lt;/code&gt;, &lt;code&gt;vpc_id&lt;/code&gt;, &lt;code&gt;private_subnet_ids&lt;/code&gt;, and &lt;code&gt;client_cidr&lt;/code&gt;. Follow the sample below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "The AWS region to deploy the resources"
  type        = string
}

variable "security_group_id" {
  description = "The ID of the security group"
  type        = string
}

variable "vpc_id" {
  description = "The ID of the VPC"
  type        = string
}

variable "private_subnet_ids" {
  description = "List of private subnet IDs"
  type        = list(string)
}

variable "client_cidr" {
  description = "The CIDR block for the VPN client"
  type        = string
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now populate the variable values by creating a &lt;code&gt;terraform.tfvars&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_region        = "us-east-1"
security_group_id = "&amp;lt;security_group_id&amp;gt;"
vpc_id            = "&amp;lt;vpc_id&amp;gt;"
private_subnet_ids = ["&amp;lt;private_subnet_id&amp;gt;"]
client_cidr       = "10.0.0.0/16" # Adjust CIDR block as needed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Plan and apply
&lt;/h3&gt;

&lt;p&gt;Run the following commands in the command prompt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review the output of the &lt;code&gt;plan&lt;/code&gt; command to verify the configuration; the &lt;code&gt;apply&lt;/code&gt; command will then create the resources on AWS. &lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the setup
&lt;/h2&gt;

&lt;p&gt;Now that you have deployed the script that creates the Client VPN endpoint, first verify that the endpoint was created successfully.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Login to your AWS account, navigate to the VPC dashboard, and select "Client VPN Endpoints", you should see the client endpoint "Test-Client-VPN".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the configuration file of the client VPN endpoint by clicking on "Download Client Configuration".&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In the downloaded configuration (.ovpn) file, add the block below (paste it above &lt;code&gt;reneg-sec 0&lt;/code&gt;):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;cert&amp;gt;
Contents of client certificate (.crt) file
&amp;lt;/cert&amp;gt;

&amp;lt;key&amp;gt;
Contents of private key (.key) file
&amp;lt;/key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the contents of your client certificate and client private key files into their respective tags in the block above. Save and close the file.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
Connect to the VPN using the AWS VPN Client app.
You can download it from here: &lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-user/connect-aws-client-vpn-connect.html" rel="noopener noreferrer"&gt;AWS Provided Client Connect&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the AWS VPN client application on your machine &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select File -&amp;gt; Manage Profiles&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnijeutfl1jamti2w7rxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnijeutfl1jamti2w7rxg.png" alt="manage profiles" width="538" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Add Profile, give a name to your connection, select the configuration file you downloaded from AWS Client VPN, and click Add Profile.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn7qpio23upikmyhm2bb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn7qpio23upikmyhm2bb.png" alt="Add profile" width="777" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select your profile name from the drop-down and click Connect; this establishes the connection through the VPN.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fude2mlqzq98lb6zx0pwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fude2mlqzq98lb6zx0pwo.png" alt="Test connection" width="525" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning up! Deleting the client VPN endpoint
&lt;/h2&gt;

&lt;p&gt;To delete the Client VPN setup and all the resources associated with it, simply run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is as simple as that. You can now securely set up a client VPN connection to your AWS resources.&lt;/p&gt;

&lt;p&gt;Happy Coding!! :)&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
