<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Katherine Lin</title>
    <description>The latest articles on DEV Community by Katherine Lin (@katherine_lin_f690f55bbf7).</description>
    <link>https://dev.to/katherine_lin_f690f55bbf7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1674651%2F41f00dc8-bbdb-4575-b8cb-6bc1a6dc694e.jpg</url>
      <title>DEV Community: Katherine Lin</title>
      <link>https://dev.to/katherine_lin_f690f55bbf7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/katherine_lin_f690f55bbf7"/>
    <language>en</language>
    <item>
      <title>Terraform with Terragrunt to Reduce Duplicate Definitions</title>
      <dc:creator>Katherine Lin</dc:creator>
      <pubDate>Tue, 08 Oct 2024 03:04:08 +0000</pubDate>
      <link>https://dev.to/katherine_lin_f690f55bbf7/terraform-with-terragrunt-to-reduce-duplicate-definitions-2aci</link>
      <guid>https://dev.to/katherine_lin_f690f55bbf7/terraform-with-terragrunt-to-reduce-duplicate-definitions-2aci</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) is a critical component in modern cloud deployments, and &lt;strong&gt;Terraform&lt;/strong&gt; has become one of the most popular tools for defining cloud resources. But when you manage complex infrastructure spread across multiple environments, the problem of duplicate code often arises. That’s where &lt;strong&gt;Terragrunt&lt;/strong&gt; comes into play.&lt;/p&gt;

&lt;p&gt;In this article, I’ll explain how &lt;strong&gt;Terragrunt&lt;/strong&gt; can help simplify your Terraform workflow, avoid redundancy, and enable reusable cloud infrastructure by deploying resources in multiple modules.&lt;/p&gt;

&lt;p&gt;Let's dive in! 🏗️&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Terraform?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. It allows users to define cloud resources declaratively using HCL (HashiCorp Configuration Language) and automates the provisioning process across cloud providers like AWS, GCP, and Azure.&lt;/p&gt;

&lt;p&gt;While Terraform is extremely powerful, managing different environments like &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;production&lt;/code&gt; can lead to repetitive code as you define similar resources across multiple modules or environments. This is where &lt;strong&gt;Terragrunt&lt;/strong&gt; steps in.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Terragrunt?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Terragrunt&lt;/strong&gt; is a thin wrapper around Terraform that provides extra tools for working with multiple Terraform modules. It simplifies infrastructure deployment by handling remote state configuration and dependencies between modules, and it reduces duplicate code through reusable configurations.&lt;/p&gt;

&lt;p&gt;In simple terms, Terragrunt helps you DRY (Don't Repeat Yourself) your Terraform code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Use Terragrunt with Terraform?
&lt;/h2&gt;

&lt;p&gt;As your infrastructure grows, you often find yourself writing the same Terraform code over and over for each environment or module. For example, you might define an S3 bucket in &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt; environments. This leads to code duplication, which can become hard to maintain.&lt;/p&gt;

&lt;p&gt;Here’s why you should consider &lt;strong&gt;Terragrunt&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DRY Principle&lt;/strong&gt;: Eliminate duplicate definitions by defining reusable Terraform configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Infrastructure&lt;/strong&gt;: Keep your environment-specific configurations separate from your infrastructure definitions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy Multi-Environment Support&lt;/strong&gt;: Deploy the same infrastructure across multiple environments with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote State Management&lt;/strong&gt;: Automatically configure remote state and locking.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Applying Terraform Code with Terragrunt 🛠️
&lt;/h2&gt;

&lt;p&gt;Let’s go through an example where we use Terragrunt to reduce duplicate Terraform code by deploying cloud resources across multiple environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Organize Your Terraform Modules
&lt;/h3&gt;

&lt;p&gt;First, you need to structure your Terraform code into reusable &lt;strong&gt;modules&lt;/strong&gt;. For example, let’s say you want to create an S3 bucket and an RDS database for your application. Here’s how you can structure the Terraform code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;└── terraform-modules/
    ├── s3/
    │   └── main.tf
    ├── rds/
    │   └── main.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside each module (s3/main.tf and rds/main.tf), you define the infrastructure resources, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# s3/main.tf&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# rds/main.tf&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_db_instance"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;identifier&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db_name&lt;/span&gt;
  &lt;span class="nx"&gt;engine&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mysql"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_class&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"db.t3.micro"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
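&lt;p&gt;Since these modules reference &lt;code&gt;var.bucket_name&lt;/code&gt; and &lt;code&gt;var.db_name&lt;/code&gt;, each module also needs the corresponding variable declarations. Here's a minimal sketch (note that a real &lt;code&gt;aws_db_instance&lt;/code&gt; additionally requires arguments such as allocated storage and credentials):&lt;/p&gt;

```hcl
# s3/variables.tf
variable "bucket_name" {
  description = "Name of the S3 bucket to create"
  type        = string
}

# rds/variables.tf
variable "db_name" {
  description = "Identifier for the RDS instance"
  type        = string
}
```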



&lt;h3&gt;
  
  
  Step 2: Write Terragrunt Configuration
&lt;/h3&gt;

&lt;p&gt;Now that you have your reusable Terraform modules, you can use Terragrunt to apply these modules across multiple environments like dev, staging, and production.&lt;/p&gt;

&lt;p&gt;Create the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;└── live/
    ├── dev/
    │   └── terragrunt.hcl
    ├── staging/
    │   └── terragrunt.hcl
    ├── prod/
    │   └── terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In each terragrunt.hcl file, you reference the Terraform module and define any environment-specific variables. For example, here’s the configuration for the dev environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# live/dev/terragrunt.hcl&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../../terraform-modules/s3"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;inputs&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-app-dev-bucket"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the staging environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# live/staging/terragrunt.hcl&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../../terraform-modules/s3"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;inputs&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-app-staging-bucket"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, you reuse the same Terraform module and only change the inputs (like the bucket_name) for each environment. Terragrunt automatically ensures that the module is applied correctly for each environment.&lt;/p&gt;
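&lt;p&gt;With this layout, you deploy a single environment by running Terragrunt from its directory, or deploy everything at once from the parent folder (the &lt;code&gt;run-all&lt;/code&gt; command applies to recent Terragrunt releases; older versions used &lt;code&gt;apply-all&lt;/code&gt;):&lt;/p&gt;

```shell
# Apply a single environment
cd live/dev
terragrunt apply

# Apply all environments under live/ in one pass
cd live
terragrunt run-all apply
```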

&lt;h3&gt;
  
  
  Step 3: Manage Remote State
&lt;/h3&gt;

&lt;p&gt;Terragrunt makes it easy to manage remote state for Terraform. You can define a common backend configuration in a root terragrunt.hcl file, which all environments inherit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# live/terragrunt.hcl&lt;/span&gt;
&lt;span class="nx"&gt;remote_state&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt;
  &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-terraform-state-bucket"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${path_relative_to_include()}/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-locks"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, all environments (dev, staging, prod) will use this same S3 bucket to store their Terraform state, making it easy to manage and share the state between different environments.&lt;/p&gt;
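&lt;p&gt;One detail worth noting: for the environments to inherit the root configuration, each child &lt;code&gt;terragrunt.hcl&lt;/code&gt; also needs an &lt;code&gt;include&lt;/code&gt; block pointing at the parent file:&lt;/p&gt;

```hcl
# live/dev/terragrunt.hcl (in addition to the terraform and inputs blocks)
include {
  path = find_in_parent_folders()
}
```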

&lt;h2&gt;
  
  
  The Benefits of Using Terragrunt
&lt;/h2&gt;

&lt;p&gt;By applying Terragrunt in your Terraform workflow, you unlock several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Duplication&lt;/strong&gt;: Instead of copying and pasting similar Terraform code across environments, you create reusable modules and simply adjust inputs for each environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy Environment Management&lt;/strong&gt;: Spin up new environments with just a few changes in the terragrunt.hcl files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized State Management&lt;/strong&gt;: With Terragrunt, managing remote state is as simple as defining it once in the root configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clear Separation of Concerns&lt;/strong&gt;: Separate your infrastructure logic (modules) from environment-specific configurations (Terragrunt configs).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;By using &lt;strong&gt;Terragrunt&lt;/strong&gt; with &lt;strong&gt;Terraform&lt;/strong&gt;, you can significantly reduce the complexity and repetition in your infrastructure code. This approach scales well as your infrastructure grows, making it easier to manage different environments and resources.&lt;/p&gt;

&lt;p&gt;If you’re working with multi-environment setups or struggling with redundant Terraform code, Terragrunt is an excellent tool to streamline your workflow.&lt;/p&gt;

&lt;p&gt;Feel free to leave any questions in the comments or share your experiences using Terraform and Terragrunt. Happy coding! 🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Related Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terragrunt Documentation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform by HashiCorp&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gruntwork's Guide to Terragrunt&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Release Automation through Parallel Codefresh Pipelines</title>
      <dc:creator>Katherine Lin</dc:creator>
      <pubDate>Tue, 08 Oct 2024 02:49:00 +0000</pubDate>
      <link>https://dev.to/katherine_lin_f690f55bbf7/release-automation-through-parallel-codefresh-pipelines-2h8g</link>
      <guid>https://dev.to/katherine_lin_f690f55bbf7/release-automation-through-parallel-codefresh-pipelines-2h8g</guid>
      <description>&lt;p&gt;In this post, we’ll explore how to improve your CI/CD workflow by promoting release automation using &lt;strong&gt;parallel Codefresh pipelines&lt;/strong&gt;. This process includes defining custom &lt;strong&gt;variables&lt;/strong&gt;, integrating &lt;strong&gt;approval stages&lt;/strong&gt;, and running tasks in &lt;strong&gt;infradev&lt;/strong&gt; and &lt;strong&gt;staging pipelines&lt;/strong&gt; in parallel. With Codefresh’s flexibility, we can drastically reduce deployment time while maintaining a high level of control.&lt;/p&gt;

&lt;p&gt;By the end of this guide, you will have a solid foundation to create scalable, parallel pipelines for your projects.&lt;/p&gt;

&lt;p&gt;Let’s dive in! 💻✨&lt;/p&gt;




&lt;h2&gt;
  
  
  What We’ll Cover
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is Codefresh?&lt;/li&gt;
&lt;li&gt;Why Parallel Pipelines?&lt;/li&gt;
&lt;li&gt;Setting Up the Infradev Pipeline&lt;/li&gt;
&lt;li&gt;Configuring Staging Pipeline&lt;/li&gt;
&lt;li&gt;Using Variables in Pipelines&lt;/li&gt;
&lt;li&gt;Adding Stage Approvals&lt;/li&gt;
&lt;li&gt;Putting It All Together&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What is Codefresh?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Codefresh&lt;/strong&gt; is a modern CI/CD tool built for cloud-native applications. It supports container-based pipelines with a highly scalable and flexible architecture.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll leverage Codefresh’s parallel pipeline capability to run an &lt;strong&gt;Infradev pipeline&lt;/strong&gt; and a &lt;strong&gt;Staging pipeline&lt;/strong&gt; simultaneously, configure custom &lt;strong&gt;variables&lt;/strong&gt; for flexibility, and include &lt;strong&gt;approval stages&lt;/strong&gt; to ensure that releases are thoroughly reviewed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Parallel Pipelines?
&lt;/h2&gt;

&lt;p&gt;Parallel pipelines help reduce bottlenecks in your release process by allowing different stages of your pipeline to run at the same time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster Feedback Loops&lt;/strong&gt;: Parallel pipelines decrease overall build time, providing quicker feedback on deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage Isolation&lt;/strong&gt;: Run Infradev and Staging pipelines in isolation to ensure environment-specific configurations without causing interference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Approvals&lt;/strong&gt;: Control which stages need manual intervention and which can proceed automatically, offering a balance between automation and manual oversight.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Setting Up the Infradev Pipeline
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Infradev&lt;/strong&gt; pipeline is designed to deploy infrastructure components or perform infrastructure validation.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Define Pipeline Stages
&lt;/h3&gt;

&lt;p&gt;In Codefresh, you’ll need to define multiple stages for different tasks such as infrastructure provisioning, validation, and security checks.&lt;/p&gt;

&lt;p&gt;Here’s a simple YAML configuration to define the Infradev pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1.0'&lt;/span&gt;
&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Provision&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Infrastructure"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Scans"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Validate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Resources"&lt;/span&gt;
&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provision_infra&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Provisioning&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Infrastructure"&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hashicorp/terraform'&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform init&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform apply&lt;/span&gt;
  &lt;span class="na"&gt;run_security_scan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Running&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Scan"&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;aquasec/trivy'&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;trivy filesystem --severity HIGH --exit-code 1 .&lt;/span&gt;
  &lt;span class="na"&gt;validate_resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Validating&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Resources"&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;python:3.8'&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python validate_resources.py&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each stage performs a key function, ensuring that the infrastructure is provisioned, security-compliant, and ready for further operations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Configuring Staging Pipeline
&lt;/h2&gt;

&lt;p&gt;While Infradev handles infrastructure, the &lt;strong&gt;Staging pipeline&lt;/strong&gt; focuses on deploying the application to a staging environment for final validation before production.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Staging Pipeline Definition
&lt;/h3&gt;

&lt;p&gt;Here's an example of a Staging pipeline that runs tests, deploys the application, and performs end-to-end tests:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: '1.0'
stages:
  - "Run Unit Tests"
  - "Deploy to Staging"
  - "Run End-to-End Tests"
steps:
  run_tests:
    title: "Running Unit Tests"
    image: 'node:14'
    commands:
      - npm install
      - npm run test
  deploy_staging:
    title: "Deploying to Staging"
    image: 'alpine/kubectl'
    commands:
      - kubectl apply -f deployment.yaml
  run_e2e_tests:
    title: "Running E2E Tests"
    image: 'cypress/included:8.3.1'
    commands:
      - npm run cypress:run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both pipelines can be configured to run in parallel, cutting down deployment time significantly.&lt;/p&gt;
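&lt;p&gt;One way to wire this up is a small parent pipeline with a &lt;code&gt;parallel&lt;/code&gt; step that triggers both child pipelines via the &lt;code&gt;codefresh-run&lt;/code&gt; step from the Codefresh step marketplace. The sketch below is illustrative, and the pipeline IDs are placeholders:&lt;/p&gt;

```yaml
version: '1.0'
steps:
  run_pipelines:
    type: parallel
    steps:
      trigger_infradev:
        title: "Trigger Infradev Pipeline"
        type: codefresh-run
        arguments:
          PIPELINE_ID: my-project/infradev
      trigger_staging:
        title: "Trigger Staging Pipeline"
        type: codefresh-run
        arguments:
          PIPELINE_ID: my-project/staging
```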

&lt;h2&gt;
  
  
  Using Variables in Pipelines
&lt;/h2&gt;

&lt;p&gt;Variables in Codefresh pipelines provide flexibility and reusability. You can define variables at different levels – pipeline, stage, or globally – and pass them between steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Defining Variables
&lt;/h3&gt;

&lt;p&gt;In our pipelines, we’ll use variables to handle configuration, environment names, or credentials dynamically.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;variables:
  ENVIRONMENT: "staging"
  APP_NAME: "my-app"
  IMAGE_TAG: "v1.0.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By defining variables, we can avoid hardcoding values and easily change configurations when needed, such as switching from staging to production.&lt;/p&gt;
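&lt;p&gt;Steps reference these variables with Codefresh's &lt;code&gt;${{VAR}}&lt;/code&gt; interpolation syntax. For example, a deployment step might consume them like this (the deployment name is a placeholder):&lt;/p&gt;

```yaml
deploy_app:
  title: "Deploying ${{APP_NAME}} to ${{ENVIRONMENT}}"
  image: 'alpine/kubectl'
  commands:
    - kubectl set image deployment/${{APP_NAME}} app=${{APP_NAME}}:${{IMAGE_TAG}}
```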

&lt;h2&gt;
  
  
  Adding Stage Approvals
&lt;/h2&gt;

&lt;p&gt;Codefresh allows you to set &lt;strong&gt;manual approval&lt;/strong&gt; gates between stages to ensure that critical steps get human oversight before proceeding to the next stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Adding Approval Gates
&lt;/h3&gt;

&lt;p&gt;To add an approval step between the staging deployment and the production pipeline, include the manual-approval step in your pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;steps:
  manual_approval:
    type: "manual"
    title: "Approval Required"
    description: "Approve deployment to Production"
    on_success:
      next_step: deploy_production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this step in place, you can pause the pipeline at critical points, requiring a team member to manually review and approve the deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Now that we’ve set up both pipelines, defined variables, and added approval stages, here’s how it all fits together in Codefresh:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parallel Pipelines&lt;/strong&gt;: Run the &lt;strong&gt;Infradev&lt;/strong&gt; and &lt;strong&gt;Staging&lt;/strong&gt; pipelines simultaneously to improve efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variables&lt;/strong&gt;: Use configurable variables to keep the pipelines dynamic and reusable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Approval Stages&lt;/strong&gt;: Add approval gates at key points to ensure that releases are carefully reviewed before moving forward.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This configuration offers a highly flexible and efficient workflow for deploying infrastructure and applications with proper safeguards in place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;By implementing parallel pipelines, custom variables, and manual approvals in &lt;strong&gt;Codefresh&lt;/strong&gt;, you can automate your CI/CD process while maintaining control over critical deployments. This setup ensures high availability and efficient workflow across environments like infradev and staging.&lt;/p&gt;

&lt;p&gt;If you have any questions or need further clarification, feel free to ask in the comments! 🚀&lt;/p&gt;

&lt;h3&gt;
  
  
  Related Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Codefresh Documentation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform Documentation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes Documentation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Configure a Kubernetes Cluster with Control Plane, Worker Nodes, and Load Balancer with Ingress Controller 🚀</title>
      <dc:creator>Katherine Lin</dc:creator>
      <pubDate>Mon, 07 Oct 2024 12:04:13 +0000</pubDate>
      <link>https://dev.to/katherine_lin_f690f55bbf7/configure-a-kubernetes-cluster-with-control-plane-worker-nodes-and-load-balancer-with-ingress-controller-2obl</link>
      <guid>https://dev.to/katherine_lin_f690f55bbf7/configure-a-kubernetes-cluster-with-control-plane-worker-nodes-and-load-balancer-with-ingress-controller-2obl</guid>
      <description>&lt;p&gt;Kubernetes is a powerful container orchestration system that simplifies the deployment, scaling, and management of containerized applications. Setting up a Kubernetes cluster involves configuring several key components: the control plane, worker nodes, and a load balancer with an ingress controller. In this guide, I’ll walk you through the step-by-step process of setting up a Kubernetes cluster with these essential parts.&lt;/p&gt;

&lt;p&gt;Let’s dive right into it! 🚀&lt;/p&gt;




&lt;h3&gt;
  
  
  Table of Contents
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Setting Up the Control Plane&lt;/li&gt;
&lt;li&gt;Adding Worker Nodes&lt;/li&gt;
&lt;li&gt;Configuring Load Balancer&lt;/li&gt;
&lt;li&gt;Setting Up the Ingress Controller&lt;/li&gt;
&lt;li&gt;Final Thoughts&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes clusters are composed of two main parts: the control plane and worker nodes. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane&lt;/strong&gt;: Manages the cluster and makes decisions about scheduling, scaling, and maintaining the overall state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker Nodes&lt;/strong&gt;: Run the actual applications, with containers managed by the control plane.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancer and Ingress Controller&lt;/strong&gt;: Handle external traffic, distributing requests across the worker nodes, ensuring that services are accessible and highly available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, I’ll help you configure a basic Kubernetes cluster using a control plane, worker nodes, and a load balancer with an ingress controller.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up the Control Plane
&lt;/h2&gt;

&lt;p&gt;The control plane is the brain of the Kubernetes cluster, responsible for managing the cluster's lifecycle. It consists of several key components, such as the API server, etcd, scheduler, and controller manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Initialize Kubernetes Control Plane
&lt;/h3&gt;

&lt;p&gt;First, make sure your machine has &lt;code&gt;kubeadm&lt;/code&gt;, &lt;code&gt;kubelet&lt;/code&gt;, and &lt;code&gt;kubectl&lt;/code&gt; installed. Once installed, initialize the control plane by running the following command on your master node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--pod-network-cidr&lt;/code&gt; specifies the range of IP addresses for the pod network.&lt;br&gt;&lt;br&gt;
After the initialization completes, the command will output instructions on how to set up &lt;code&gt;kubectl&lt;/code&gt; for the master node.&lt;/p&gt;
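&lt;p&gt;Keep in mind that nodes stay &lt;code&gt;NotReady&lt;/code&gt; until a pod network add-on is installed. The &lt;code&gt;192.168.0.0/16&lt;/code&gt; CIDR used above matches Calico's default, which can be installed with a command along these lines (the version pinned in the URL is illustrative):&lt;/p&gt;

```shell
# Install the Calico CNI so pods can communicate and nodes become Ready
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
```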
&lt;h3&gt;
  
  
  Step 2: Configure kubectl Access
&lt;/h3&gt;

&lt;p&gt;To allow the master node to use &lt;code&gt;kubectl&lt;/code&gt;, copy the kubeconfig file to your home directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, your control plane is up and running!&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Worker Nodes
&lt;/h2&gt;

&lt;p&gt;Worker nodes are responsible for running the actual applications inside containers. You’ll need to join these worker nodes to the control plane to form a full Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Join Worker Nodes to the Cluster
&lt;/h3&gt;

&lt;p&gt;After initializing the control plane, &lt;code&gt;kubeadm&lt;/code&gt; will output a join command. Run this on each of your worker nodes to add them to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm &lt;span class="nb"&gt;join&lt;/span&gt; &amp;lt;control-plane-ip&amp;gt;:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;token&amp;gt; &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;&lt;span class="nb"&gt;hash&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace &lt;code&gt;&amp;lt;control-plane-ip&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;token&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;hash&amp;gt;&lt;/code&gt; with the actual values output by the &lt;code&gt;kubeadm init&lt;/code&gt; command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Verify Nodes are Connected
&lt;/h3&gt;

&lt;p&gt;Once your nodes are added, verify that they are connected to the cluster by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see all worker nodes listed as "Ready" in the output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Load Balancer
&lt;/h2&gt;

&lt;p&gt;To ensure high availability and balanced distribution of incoming traffic, you need to set up a load balancer. This example will use &lt;strong&gt;MetalLB&lt;/strong&gt;, a load balancer specifically designed for bare metal Kubernetes clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Install MetalLB
&lt;/h3&gt;

&lt;p&gt;First, install MetalLB by applying its manifests and creating the memberlist secret that the v0.9.x speakers use to secure their communication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, configure a Layer 2 mode IP range for MetalLB. Create a ConfigMap for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;address-pools:&lt;/span&gt;
    &lt;span class="s"&gt;- name: default&lt;/span&gt;
      &lt;span class="s"&gt;protocol: layer2&lt;/span&gt;
      &lt;span class="s"&gt;addresses:&lt;/span&gt;
      &lt;span class="s"&gt;- 192.168.1.240-192.168.1.250&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This IP range should be from the same network as your Kubernetes nodes, but outside the range of IPs used by your DHCP.&lt;/p&gt;
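&lt;p&gt;Assuming the ConfigMap above is saved as &lt;code&gt;metallb-config.yaml&lt;/code&gt; (the filename is illustrative), apply it and confirm the MetalLB pods come up:&lt;/p&gt;

```shell
# Apply the address pool configuration
kubectl apply -f metallb-config.yaml

# The controller and one speaker per node should be Running
kubectl get pods -n metallb-system
```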

&lt;h3&gt;
  
  
  Step 6: Install NGINX Ingress Controller
&lt;/h3&gt;

&lt;p&gt;To install the NGINX ingress controller, apply the following manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an ingress controller behind a &lt;code&gt;LoadBalancer&lt;/code&gt; Service. MetalLB assigns it an external IP from the address pool configured above, and the controller then routes HTTP/HTTPS traffic according to your ingress rules.&lt;/p&gt;
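&lt;p&gt;You can confirm that an external address was assigned to the controller's Service; &lt;code&gt;ingress-nginx-controller&lt;/code&gt; is the Service name used by the upstream manifest:&lt;/p&gt;

```shell
# EXTERNAL-IP should show an address from the MetalLB pool
kubectl get svc -n ingress-nginx ingress-nginx-controller
```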

&lt;h3&gt;
  
  
  Step 7: Create an Ingress Resource
&lt;/h3&gt;

&lt;p&gt;Next, define an ingress resource for your services. Here's a sample ingress definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure the DNS record for your host (here &lt;code&gt;example.com&lt;/code&gt;) points to the external IP that the load balancer assigned to the ingress controller.&lt;/p&gt;
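&lt;p&gt;Before DNS propagates, you can exercise the ingress rule directly by setting the &lt;code&gt;Host&lt;/code&gt; header; the IP below is a placeholder from the MetalLB pool configured earlier:&lt;/p&gt;

```shell
# Route the request to the ingress controller as if DNS already resolved
curl -H "Host: example.com" http://192.168.1.240/
```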




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;By following these steps, you've set up a complete Kubernetes cluster with a control plane, worker nodes, a load balancer, and an ingress controller! Kubernetes clusters can be highly complex, but understanding how the components work together simplifies the management of containerized applications.&lt;/p&gt;

&lt;p&gt;Feel free to ask questions or leave feedback below. Happy clustering! ✨&lt;/p&gt;




&lt;h3&gt;
  
  
  Related Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/home/" rel="noopener noreferrer"&gt;Kubernetes Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://metallb.universe.tf/" rel="noopener noreferrer"&gt;MetalLB Load Balancer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;NGINX Ingress Controller&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Enhancing Secure Connection Speed with PrivateLink</title>
      <dc:creator>Katherine Lin</dc:creator>
      <pubDate>Fri, 04 Oct 2024 12:44:22 +0000</pubDate>
      <link>https://dev.to/katherine_lin_f690f55bbf7/enhancing-secure-connection-speed-with-privatelink-services-in-aws-3lif</link>
      <guid>https://dev.to/katherine_lin_f690f55bbf7/enhancing-secure-connection-speed-with-privatelink-services-in-aws-3lif</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s cloud-native world, ensuring secure and efficient communication between services is paramount. Amazon Web Services (AWS) provides a robust solution with &lt;strong&gt;AWS PrivateLink&lt;/strong&gt;, which lets you access services hosted on AWS privately, without public IPs. In this article, we'll walk through connecting PrivateLink endpoints to a Network Load Balancer (NLB) so that traffic stays on whitelisted ports over fast, private connections.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create NLBs and manage VPC endpoints.&lt;/li&gt;
&lt;li&gt;Basic understanding of AWS services, particularly VPC, NLB, and PrivateLink.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding the Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is AWS PrivateLink?
&lt;/h3&gt;

&lt;p&gt;AWS PrivateLink simplifies the security of data shared with applications by enabling private connectivity between VPCs, AWS services, and on-premises networks. PrivateLink provides private IP addresses to these services, ensuring that data doesn't traverse the public internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Network Load Balancer (NLB)?
&lt;/h3&gt;

&lt;p&gt;A Network Load Balancer is designed to handle millions of requests per second while maintaining ultra-low latencies. It operates at the connection level (Layer 4) and is ideal for TCP traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide to Adding PrivateLink Services to NLB
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create a VPC Endpoint for the PrivateLink Service
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Login to AWS Management Console&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Navigate to the VPC Dashboard&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the left sidebar, click on &lt;strong&gt;Endpoints&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Create Endpoint&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Service category&lt;/strong&gt;. You can choose from the following options:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Services&lt;/strong&gt;: Access AWS services privately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Marketplace&lt;/strong&gt;: Access partner services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom&lt;/strong&gt;: For your custom services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Choose the service you want to connect to and select the &lt;strong&gt;VPC&lt;/strong&gt; where your NLB is located.&lt;/li&gt;
&lt;li&gt;Specify the &lt;strong&gt;subnets&lt;/strong&gt; where the endpoint will be created.&lt;/li&gt;
&lt;/ol&gt;
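&lt;p&gt;The same endpoint can be created with the AWS CLI; all IDs and the service name below are placeholders for your own values:&lt;/p&gt;

```shell
# Create an interface endpoint for the chosen PrivateLink service
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```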

&lt;h3&gt;
  
  
  Step 2: Configure Security Group for the VPC Endpoint
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;Security Groups&lt;/strong&gt; associated with your VPC endpoint.&lt;/li&gt;
&lt;li&gt;Add an inbound rule to allow traffic from your NLB on the necessary ports.&lt;/li&gt;
&lt;li&gt;Ensure the source is set to your NLB’s security group for whitelisting purposes.&lt;/li&gt;
&lt;/ol&gt;
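&lt;p&gt;As a sketch, the equivalent CLI call to whitelist traffic from the NLB's security group looks like this (the group IDs and port are placeholders):&lt;/p&gt;

```shell
# Allow inbound TCP 443 only from the NLB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --source-group sg-0fedcba9876543210
```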

&lt;h3&gt;
  
  
  Step 3: Create or Modify an Existing Network Load Balancer
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;EC2 Dashboard&lt;/strong&gt; and click on &lt;strong&gt;Load Balancers&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Create Load Balancer&lt;/strong&gt; and choose &lt;strong&gt;Network Load Balancer&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Configure the basic settings, such as name and scheme.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Listeners&lt;/strong&gt;, set the appropriate protocol and port (e.g., TCP on port 80).&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Target Groups&lt;/strong&gt; section, select &lt;strong&gt;Create a new target group&lt;/strong&gt; or choose an existing one.&lt;/li&gt;
&lt;/ol&gt;
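&lt;p&gt;For reference, the same NLB and an IP-type target group can also be created from the CLI; the names and IDs are placeholders:&lt;/p&gt;

```shell
# Internal NLB in the subnets that should receive traffic
aws elbv2 create-load-balancer \
  --name my-privatelink-nlb \
  --type network \
  --scheme internal \
  --subnets subnet-0123456789abcdef0

# Target group that accepts IP addresses as targets
aws elbv2 create-target-group \
  --name my-endpoint-targets \
  --protocol TCP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip
```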

&lt;h3&gt;
  
  
  Step 4: Register Targets
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;For the target group, choose the &lt;strong&gt;IP addresses&lt;/strong&gt; target type and select &lt;strong&gt;Register targets&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Register the private IP addresses of the network interfaces belonging to the VPC endpoint created in Step 1 (NLB target groups accept instances or IPs, not endpoints directly).&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Add to registered&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
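&lt;p&gt;With the CLI, registration takes the endpoint network interfaces' private IP addresses (the ARN and IPs below are placeholders):&lt;/p&gt;

```shell
# Register each endpoint ENI's private IP with the target group
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-endpoint-targets/0123456789abcdef \
  --targets Id=10.0.1.25 Id=10.0.2.25
```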

&lt;h3&gt;
  
  
  Step 5: Test the Configuration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Once everything is set up, you can test the configuration.&lt;/li&gt;
&lt;li&gt;Use tools like &lt;code&gt;curl&lt;/code&gt; or Postman to send requests to your NLB.&lt;/li&gt;
&lt;li&gt;Ensure that the connections are established over the private link.&lt;/li&gt;
&lt;/ol&gt;
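&lt;p&gt;A simple connectivity check from a host inside the VPC; the DNS name below is a placeholder for your NLB's actual DNS name:&lt;/p&gt;

```shell
# -v shows the connection handshake so you can confirm the private path
curl -v http://my-privatelink-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com/
```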

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By integrating AWS PrivateLink services with your Network Load Balancer, you enhance the security and speed of your connections while maintaining control over your network traffic. This setup not only optimizes performance but also protects sensitive data by keeping it off the public internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html" rel="noopener noreferrer"&gt;AWS PrivateLink Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html" rel="noopener noreferrer"&gt;Network Load Balancer Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>privatelink</category>
      <category>nlb</category>
    </item>
  </channel>
</rss>
