<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hai Nguyen</title>
    <description>The latest articles on DEV Community by Hai Nguyen (@haintkit).</description>
    <link>https://dev.to/haintkit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F853575%2Ffd4d3d17-476f-4b00-9bba-831a8af473a5.png</url>
      <title>DEV Community: Hai Nguyen</title>
      <link>https://dev.to/haintkit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/haintkit"/>
    <language>en</language>
    <item>
      <title>How to automatically create S3 lifecycles using AWS CLI and AWS SDK</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Sun, 29 Sep 2024 15:58:45 +0000</pubDate>
      <link>https://dev.to/haintkit/how-to-automatically-create-s3-lifecycles-using-aws-cli-and-aws-sdk-3kib</link>
      <guid>https://dev.to/haintkit/how-to-automatically-create-s3-lifecycles-using-aws-cli-and-aws-sdk-3kib</guid>
      <description>&lt;p&gt;Co-author: Le Hai Dang &lt;a class="mentioned-user" href="https://dev.to/natsu08122"&gt;@natsu08122&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;br&gt;
S3 Lifecycle is a feature in Amazon S3 that automatically manages the storage of objects throughout their lifecycle. Its main purpose is to help AWS users optimize storage costs and automate data management. There are two main actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Transition: Move objects between storage classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expiration: Delete objects after a specified time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;S3 Lifecycle supports these storage classes: Standard, Intelligent-Tiering, Standard-IA (Infrequent Access), One Zone-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, and Glacier Deep Archive. One important thing to note: &lt;strong&gt;a bucket policy cannot block S3 Lifecycle rules&lt;/strong&gt;. &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html" rel="noopener noreferrer"&gt;Here&lt;/a&gt; are the official AWS docs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cz8sexwvlcoksfe6cvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cz8sexwvlcoksfe6cvb.png" alt="S3lifecyclenotice"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Problem statement&lt;/strong&gt;&lt;br&gt;
Suppose we have a number of S3 buckets with the folder structure below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b6b518lvea4humajc4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b6b518lvea4humajc4a.png" alt="S3bucketfolderstructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The retention directory of an object is defined by its expiration time. So, we need a solution to make a appropriate lifecycle of all S3 buckets in the corresponding folders. By utilizing S3 lifecycle management feature, we will create the rules using two main information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prefix: /&lt;/li&gt;
&lt;li&gt;Object tag: "isFile:True" (meaning every object should be tagged with "isFile:True")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several ways to do this: the AWS Management Console, IaC, the CDK, the AWS CLI, or an AWS SDK. This blog covers using the AWS CLI and an AWS SDK to accomplish the task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Solutions&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;3.1. Using AWS CLI&lt;/u&gt;&lt;br&gt;
Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI version 2&lt;/li&gt;
&lt;li&gt;AWS credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create an S3 lifecycle rule, we use the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;However, there is one point to pay attention to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The command replaces the bucket’s entire lifecycle configuration. In other words, all existing rules are deleted and only the rules in the supplied file are created when the command runs.&lt;/p&gt;
&lt;/blockquote&gt;
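&lt;p&gt;If you need to keep the existing rules, one option is to read the current configuration, merge in the new rule, and write the result back. A minimal sketch in Python (the boto3 calls are indicated in comments; the bucket name is hypothetical):&lt;/p&gt;

```python
# Merge a new lifecycle rule into an existing rule list without losing the old
# rules. Rules are matched by "ID"; a rule with the same ID is replaced.
def merge_lifecycle_rules(existing_rules, new_rule):
    merged = [r for r in existing_rules if r.get("ID") != new_rule.get("ID")]
    merged.append(new_rule)
    return merged

# The read-merge-write flow with boto3 would then be (note that the get call
# raises an error when the bucket has no lifecycle configuration yet, so that
# case needs handling):
#   s3 = boto3.client("s3")
#   current = s3.get_bucket_lifecycle_configuration(Bucket="my-bucket")["Rules"]
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket",
#       LifecycleConfiguration={"Rules": merge_lifecycle_rules(current, new_rule)},
#   )
```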

&lt;p&gt;This is an example config.json file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Rules": [
      {
          "Expiration": {
              "Days": 7,
          },
          "Filter": {
              "And":
                  {
                  "Prefix": "7days/",
                  "Tags": [{
                      "Key": "isFile",
                      "Value": "true"
                  }],
              }
          },
          "ID": "7days-retention-policy",
          "Status": "Enabled",
      },
  ],
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: to combine multiple conditions in the "Filter" block, wrap them in "And"; otherwise the call fails with a "MalformedXML" error.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;3.2. Using AWS SDK&lt;/u&gt;&lt;br&gt;
Prerequisites: the latest version of the AWS SAM CLI&lt;/p&gt;

&lt;p&gt;How to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Step 1: Clone &lt;a href="https://github.com/donglu1000tu/s3_lifecycle" rel="noopener noreferrer"&gt;this code&lt;/a&gt; into your environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 2: Follow the README.md in that repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 3: Run the following command to set up the AWS environment:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the deployment completes, the S3 lifecycle rule generation code runs on AWS Lambda.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mj4e6stlt69kk8jro44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mj4e6stlt69kk8jro44.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This blog showed how to automatically create S3 lifecycle rules using the AWS CLI and an AWS SDK. I hope it is useful for you. Thank you for reading, and feel free to leave a comment!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mitigating disruption during Amazon EKS cluster upgrade with blue/green deployment</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Thu, 27 Jun 2024 04:43:13 +0000</pubDate>
      <link>https://dev.to/haintkit/mitigating-disruption-during-amazon-eks-cluster-upgrade-with-bluegreen-deployments-5co</link>
      <guid>https://dev.to/haintkit/mitigating-disruption-during-amazon-eks-cluster-upgrade-with-bluegreen-deployments-5co</guid>
      <description>&lt;p&gt;Co-author &lt;a class="mentioned-user" href="https://dev.to/coangha21"&gt;@coangha21&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table of Contents&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In-place and blue/green upgrade strategies&lt;/li&gt;
&lt;li&gt;Upgrade cluster process

&lt;ul&gt;
&lt;li&gt;Prerequisite&lt;/li&gt;
&lt;li&gt;Update manifests&lt;/li&gt;
&lt;li&gt;Bootstrap new cluster&lt;/li&gt;
&lt;li&gt;Re-deploy add-ons and third-party tools with compatible version&lt;/li&gt;
&lt;li&gt;Re-deploy workloads&lt;/li&gt;
&lt;li&gt;Verify workloads&lt;/li&gt;
&lt;li&gt;DNS switchover&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Stateful workloads migration&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Upgrading your Amazon EKS cluster version is necessary for security, performance optimization, new features, and long-term support. Amazon EKS now offers an extended support plan for older Kubernetes versions, which comes at a significant additional cost. An upgrade is never an easy game and can feel like a business continuity nightmare, so some may feel tempted to postpone the inevitable. In this blog, we will walk you through our upgrade process using the blue/green deployment strategy.&lt;/p&gt;

&lt;p&gt;We’ll demonstrate this on an EKS cluster with EC2 instances as worker nodes; the same strategy also applies to Fargate. We'll leverage the popular AWS Retail Store sample application to demonstrate the steps. For the code, head over to the &lt;a href="https://github.com/aws-containers/retail-store-sample-app" rel="noopener noreferrer"&gt;AWS repository&lt;/a&gt;. By the end of this blog, you'll have a clear understanding of what an EKS upgrade entails and how to navigate it with confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-Place vs. Blue/Green upgrade strategies&lt;/strong&gt;&lt;br&gt;
Upgrading a cluster is a balance between cost and risk. There are two widely used strategies: in-place and blue/green upgrades.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In-Place Upgrades:&lt;/strong&gt; Simpler and more cost-effective. This strategy modifies your existing cluster directly. While this minimizes resource usage, it carries the risk of downtime and limits upgrades to one version at a time. Additionally, rolling back requires extra steps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blue/Green Upgrades:&lt;/strong&gt; This strategy prioritizes zero downtime by creating a brand new, upgraded cluster (the "green" environment) alongside the existing one (the "blue" environment). Here, you can migrate workloads individually, enabling upgrades across multiple versions. However, blue/green deployment requires managing two clusters simultaneously, which can be costly and strain regional resource capacity. Additionally, API endpoints and authentication methods change, requiring updates to tools like kubectl and CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The in-place upgrade method is ideal for cost-sensitive scenarios where downtime is less critical or where the two versions don’t have breaking changes. For situations demanding high availability or the ability to jump multiple versions, the blue/green strategy provides a safer solution but is also more resource-intensive and costly. Thoroughly consider your specific needs, resource constraints, and infrastructure cost to determine the most suitable upgrade method for your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upgrade cluster process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Prerequisite&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explore your cluster&lt;/strong&gt;: Before diving into your cluster upgrade, system inventory is a mandatory step in order to have insight of what is running in your cluster. Note down your cluster version, add-on versions, and the number of services and applications running. This intel helps you choose the right upgrade strategy, identify potential compatibility issues, and plan a smooth migration for all your workloads. It's like gathering intel before a mission - the more you know, the smoother the upgrade!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e9fxxiopxmcujevp6m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e9fxxiopxmcujevp6m8.png" alt="The current cluster’s version is 1.24 and it is running on extended support.n"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;The current cluster’s version is 1.24 and it is running on extended support&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf37ij2uhncrb00c9h7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf37ij2uhncrb00c9h7x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;Currently four add-ons are running.&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnnp0bwoyg9t8ovs9isd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnnp0bwoyg9t8ovs9isd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;The cluster is using EC2 instances as worker nodes &lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5aq7ot96x5171xh9rbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5aq7ot96x5171xh9rbt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Karpenter add-on for node autoscaling.&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt9mc6jtxccx7svvxnts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt9mc6jtxccx7svvxnts.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Around 12 services found &lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g2do6hucl313vilwfqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g2do6hucl313vilwfqu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; The application UI &lt;/center&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Assess the impact of the version upgrade&lt;/strong&gt;: Thoroughly review the release notes for the EKS and Kubernetes versions you want to upgrade to in order to fully grasp important information such as breaking changes and deprecated APIs. For instance, to upgrade to EKS 1.29, I would read the following documents:

&lt;ul&gt;
&lt;li&gt;EKS release notes: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions-standard.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions-standard.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes change log: &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md" rel="noopener noreferrer"&gt;https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes new version release notes: &lt;a href="https://kubernetes.io/blog/2023/12/13/kubernetes-v1-29-release/" rel="noopener noreferrer"&gt;https://kubernetes.io/blog/2023/12/13/kubernetes-v1-29-release/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Backup EKS cluster&lt;/strong&gt; (Optional)&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Review and address deprecated APIs&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Kubernetes may deprecate some APIs in a new version, so we need to identify and fix any usage of deprecated APIs within our workloads to ensure compatibility with the new EKS version.&lt;/li&gt;
&lt;li&gt;It’s worth reading &lt;a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/" rel="noopener noreferrer"&gt;this deprecation policy&lt;/a&gt; to understand how Kubernetes deprecates APIs.&lt;/li&gt;
&lt;li&gt;Several tools can find the API deprecations in our clusters. One of them is “&lt;a href="https://github.com/doitintl/kube-no-trouble" rel="noopener noreferrer"&gt;kube-no-trouble&lt;/a&gt;”, aka kubent. At the time of writing, kubent’s latest ruleset targets 1.29. We ran kubent with a target version of 1.29 and got the result below; as you can see, kubent lists the deprecated APIs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsc61grpguijbrt6bnzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsc61grpguijbrt6bnzq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Deprecated APIs found by kubent &lt;/center&gt;
 

&lt;p&gt;&lt;strong&gt;2. Update manifests&lt;/strong&gt;&lt;br&gt;
Once we have the deprecated APIs in hand, the next step is to update those API versions, either manually or with tools such as “kubectl convert”, depending on the number of deprecated APIs. We recommend updating the API versions manually to avoid any unforeseen errors. For example, based on the kubent result above, we see that our HPA apiVersion will be removed in version 1.26. Below are the original HPA manifest in the current EKS cluster v1.24 and the updated manifest for the new version, respectively:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp43yp7p8a4fnugj41yy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp43yp7p8a4fnugj41yy0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Old version &lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26njvqpiygfx6mwt6dyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26njvqpiygfx6mwt6dyd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; New version &lt;/center&gt;
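&lt;p&gt;In text form, the change amounts to an apiVersion bump (a minimal sketch; the metadata and spec fields are omitted):&lt;/p&gt;

```yaml
# Before (EKS 1.24): autoscaling/v2beta2 is removed in Kubernetes 1.26
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
---
# After (EKS 1.29): autoscaling/v2, stable since Kubernetes 1.23
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
```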

&lt;p&gt;&lt;strong&gt;3. Bootstrap new cluster&lt;/strong&gt;&lt;br&gt;
There are some typical options for a new Amazon EKS cluster deployment with your desired Kubernetes version such as AWS Management Console, &lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt; tool, or Terraform. In this blog, we have deployed a new cluster, namely "green-eks", using version v1.29 and EC2 worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bfapoyhhhdyraieu9o5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bfapoyhhhdyraieu9o5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; New EKS cluster &lt;/center&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuecemurz5et7rj5e1cf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuecemurz5et7rj5e1cf3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;EC2 worker nodes&lt;/center&gt;

&lt;p&gt;&lt;strong&gt;4. Re-deploy add-ons and third-party tools with compatible version&lt;/strong&gt;&lt;br&gt;
Once the "green-eks" cluster is ready, we re-deploy the required custom add-ons and third-party tools. It's crucial to ensure the versions of those add-ons and third-party tools are compatible with the new cluster. For instance, &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html" rel="noopener noreferrer"&gt;this document&lt;/a&gt; shows the suggested version of the Amazon VPC CNI add-on for each cluster version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswpgwjsrsgqsm9kqf1f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswpgwjsrsgqsm9kqf1f7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;EKS add-ons in the new cluster&lt;/center&gt;

&lt;p&gt;&lt;strong&gt;5. Re-deploy workloads&lt;/strong&gt;&lt;br&gt;
Now that the foundation is laid, we can begin redeploying our workloads to the new "green-eks" cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl45qjczec1v4s081kjk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl45qjczec1v4s081kjk1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;Application deployment in new cluster&lt;/center&gt;

&lt;p&gt;&lt;strong&gt;6. Verify workloads&lt;/strong&gt;&lt;br&gt;
Once our workloads are deployed successfully in the "green-eks" cluster, it's verification time! The specific tests you run will depend on your application development process. You might opt for smoke tests, integration tests, manual tests, or even a simple UI check as we did in this blog for demo purposes. The key is to ensure everything functions as intended in the new environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnf3ct29dgr7e8wi5awl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnf3ct29dgr7e8wi5awl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Application in new cluster&lt;/center&gt;

&lt;p&gt;We also checked the EKS add-ons’ operation. For example, Karpenter worked well, scaling nodes as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxljupqsvqtfrv8wppx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxljupqsvqtfrv8wppx2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt; Karpenter deployment logs &lt;/center&gt;

&lt;p&gt;&lt;strong&gt;7. DNS Switchover&lt;/strong&gt;&lt;br&gt;
When the application is ready to serve client requests, the final step is to switch traffic over to the "green-eks" cluster. We achieved this by updating our DNS records in a DNS management service such as Amazon Route 53 or any other DNS provider. Amazon Route 53 provides a weighted routing policy, so we can initially direct a small percentage of users to the new cluster. This allows us to perform a staged rollout and verify that everything functions smoothly before migrating all traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r7k25xom32q0h7u7uby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r7k25xom32q0h7u7uby.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;center&gt; Weighted routing policy (&lt;a href="https://aws.amazon.com/blogs/containers/blue-green-or-canary-amazon-eks-clusters-migration-for-stateless-argocd-workloads/" rel="noopener noreferrer"&gt;source&lt;/a&gt;)    &lt;/center&gt; 
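&lt;p&gt;As a sketch of how such weighted records can be built with an SDK (the record and endpoint names are hypothetical; the boto3 call is indicated in a comment):&lt;/p&gt;

```python
# Build a Route 53 change that upserts one weighted CNAME record.
# Two records with the same name but different SetIdentifier values split
# traffic in proportion to their Weight (e.g. 90 for blue, 10 for green).
def weighted_record_change(name, set_identifier, weight, target, ttl=60):
    if weight not in range(256):
        # Route 53 weights are integers from 0 to 255.
        raise ValueError("Route 53 weights must be integers from 0 to 255")
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_identifier,
            "Weight": weight,
            "TTL": ttl,
            "ResourceRecords": [{"Value": target}],
        },
    }

if __name__ == "__main__":
    # Hypothetical endpoints: send 10% of traffic to the new "green" cluster.
    changes = [
        weighted_record_change("shop.example.com", "blue", 90, "blue-lb.example.com"),
        weighted_record_change("shop.example.com", "green", 10, "green-lb.example.com"),
    ]
    # With boto3, these would be applied via:
    #   route53.change_resource_record_sets(
    #       HostedZoneId=zone_id, ChangeBatch={"Changes": changes})
```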

&lt;p&gt;&lt;strong&gt;Stateful workloads migration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During workload deployments to new Kubernetes clusters, specific considerations arise for stateful workloads. These workloads, such as Solr databases or monitoring stacks like Prometheus and Grafana, require data persistence and careful migration strategies. One proven and reliable migration approach for ensuring data integrity is the backup and restore method. We shared our experience with Solr database migration between EKS clusters in a previous &lt;a href="https://dev.to/haintkit/how-to-migrate-apache-solr-from-the-existing-cluster-to-amazon-eks-3b3l"&gt;blog&lt;/a&gt;, which serves as a reference guide for migrating your stateful workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By leveraging the Blue/Green deployment strategy, we've successfully navigated our EKS upgrade with minimal disruption. This approach offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Downtime:&lt;/strong&gt; Since you maintain a fully functional "blue" cluster while deploying the upgrade on "green," user traffic experiences minimal interruption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phased Rollout:&lt;/strong&gt; A weighted routing policy with Amazon Route 53 allows for a staged rollout, letting you test the new cluster with a small percentage of users before migrating all traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback:&lt;/strong&gt; If any issues arise in the new environment, you can easily switch traffic back to the "blue" cluster with minimum overhead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This blog provides a high-level guideline for the EKS upgrade process using blue/green deployment to mitigate system disruption. Remember to tailor the specific steps to your application and infrastructure. With well-prepared planning and execution, blue/green deployment can make your EKS upgrade a breeze!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>upgrade</category>
    </item>
    <item>
      <title>What's Gateway API and how to deploy on AWS?</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Mon, 15 Jan 2024 09:15:44 +0000</pubDate>
      <link>https://dev.to/haintkit/whats-gateway-api-and-how-to-deploy-on-aws-3ma1</link>
      <guid>https://dev.to/haintkit/whats-gateway-api-and-how-to-deploy-on-aws-3ma1</guid>
      <description>&lt;p&gt;Co-author: &lt;a class="mentioned-user" href="https://dev.to/coangha21"&gt;@coangha21&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Gateway API has recently stood out as a promising project that will change the way we manage traffic in Kubernetes. It aims to be the next generation of APIs for Ingress, load balancing, and service mesh functionality. In today's blog, we will discuss what Gateway API is and what it offers, and then we will get our hands dirty to gain a better understanding of it. Let’s get started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gateway API overview&lt;/strong&gt;&lt;br&gt;
The Gateway API is a recently graduated (version 1.0 in October 2023) official Kubernetes project that aims to revolutionize L4 and L7 traffic routing within Kubernetes. The goal is to simplify and standardize the way ingress and load balancing are configured and managed, addressing limitations of existing solutions like Ingress and Service APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cv3xwtuanmaiexbc2wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cv3xwtuanmaiexbc2wd.png" alt="Gateway API logo" width="760" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Gateway API logo above speaks for itself: it illustrates the dual purpose of this API, enabling routing for both north-south (ingress) and east-west (mesh) traffic with the same configuration.&lt;/p&gt;

&lt;p&gt;Now, let’s take a look at some of the key features that Gateway API offers:&lt;br&gt;
&lt;em&gt;&lt;strong&gt;1. Extensible and Role-oriented:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unlike the single-purpose Ingress controller, Gateway API is designed with flexibility and specialization in mind.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It offers various resource types like Gateway, GatewayClass, HTTPRoute, GRPCRoute, and Policy that work together to define specific roles and capabilities for different networking tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This allows for building sophisticated networking configurations with greater control and clarity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaew00ny3k3w0j7dpwpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaew00ny3k3w0j7dpwpn.png" alt="Gateway API is aiming for RBAC" width="760" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2. Advanced Traffic Routing:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Gateway API goes beyond simple load balancing and provides powerful routing capabilities based on HTTP routing rules, path matching, headers, and even gRPC service names.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This facilitates setting up complex traffic destinations, traffic splitting, and A/B testing scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
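
&lt;p&gt;For example, traffic splitting for a canary or A/B test can be expressed directly in an HTTPRoute through weighted backends (service names, ports, and weights below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: ab-test
spec:
  parentRefs:
    - name: shared-gateway    # the Gateway this route attaches to
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: app-v1
          port: 8080
          weight: 90          # ~90% of matching traffic
        - name: app-v2
          port: 8080
          weight: 10          # ~10% canary traffic

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;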

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr7c0k8fvvrqztt62ffv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr7c0k8fvvrqztt62ffv.png" alt="Gateway API supports advance routing" width="760" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;3. Protocol-Aware and Scalable:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The API supports both L4 (TCP/UDP) and L7 (HTTP/gRPC) protocols, offering a unified platform for all your networking needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additionally, it's designed for scalability and performance to handle large workloads and complex network topologies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;4. Community-Driven and Evolving:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Gateway API is a community-driven project under the Kubernetes SIG Network, actively maintained and constantly evolving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New features and capabilities are being added regularly, making it a future-proof solution for your Kubernetes networking needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7j0gexuguc5yykzcu6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7j0gexuguc5yykzcu6l.png" alt="Kubernetes SIGs" width="224" height="224"&gt;&lt;/a&gt;&lt;br&gt;
From my point of view, Gateway API represents a significant leap forward in Kubernetes service networking. Its dynamic capabilities, flexible routing, and robust policy tools empower developers and operators to manage external traffic with greater control, precision, and agility. If you have time, try it yourself; it will be worth it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the differences between Gateway API and Ingress?&lt;/strong&gt;&lt;br&gt;
While both Gateway API and Ingress manage traffic routing in Kubernetes, there are several key differences between them. Let’s go through some of them:&lt;br&gt;
&lt;strong&gt;Functionality:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt;: Primarily focused on exposing HTTP applications with a straightforward, declarative syntax.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway API&lt;/strong&gt;: A more general API for proxying traffic, supporting various protocols like HTTP, gRPC, and even different backend targets like buckets or functions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt;: Limited configuration options with heavy reliance on annotations for advanced features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway API&lt;/strong&gt;: More fine-grained control with dedicated objects for defining routes, listeners, and backends, promoting cleaner configuration and extensibility.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
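
&lt;p&gt;To make the annotation point concrete, here is a rough sketch of the same prefix rewrite expressed both ways (the names are illustrative, and the annotation shown is specific to the NGINX Ingress controller):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Ingress: behavior hidden in a controller-specific annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
---
# Gateway API: the same rewrite as a first-class, portable field
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo
spec:
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: demo
          port: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;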

&lt;p&gt;&lt;strong&gt;Protocol Support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt;: Only supports HTTP and HTTPS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway API&lt;/strong&gt;: Supports multiple protocols beyond HTTP, like gRPC and WebSockets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwubfdv1f8pve1jxz1bxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwubfdv1f8pve1jxz1bxv.png" alt="Ingress support gRPC protocol" width="600" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ingress:&lt;/strong&gt; Can become complex to scale, often requiring external load balancers or intricate configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway API:&lt;/strong&gt; Designed with scalability in mind, easily integrating with various data plane implementations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ingress:&lt;/strong&gt; Limited built-in security features, primarily relying on annotations for authentication and authorization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway API:&lt;/strong&gt; Supports extensions for implementing enhanced security features like authentication and authorization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynb30534qo87901qj5ck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynb30534qo87901qj5ck.png" alt="Ingress vs Gateway API" width="760" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other Differences:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability:&lt;/strong&gt; Gateway API configurations are more portable across data planes due to its separation of concerns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Management:&lt;/strong&gt; Gateway API allows for better cluster operator control with dedicated objects for managing various components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maturity:&lt;/strong&gt; Ingress is a stable, GA (General Availability) API, while Gateway API is still under development but rapidly gaining traction.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, Ingress is a basic but mature solution for exposing simple HTTP applications in Kubernetes. Gateway API is a more powerful and flexible API that caters to diverse use cases, supports broader protocols, and scales more efficiently. It offers greater control and extensibility at the cost of slightly increased complexity.&lt;/p&gt;

&lt;p&gt;"Which one should I choose?", it depends on your use case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For Ingress: If you need a simple solution for exposing an HTTP application and don't require advanced features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Gateway API: If you need flexibility for various protocols, backends, or require extensibility for security or advanced routing features.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Please keep in mind that Gateway API is not meant to replace Ingress entirely, but rather to provide a more comprehensive and future-proof option for complex traffic routing needs in Kubernetes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to deploy Gateway API on AWS EKS&lt;/strong&gt;&lt;br&gt;
Finally, this is probably the part you have been waiting for. Let’s deploy a Gateway API on our AWS EKS cluster. I will only show the high-level steps; for the deployment manifests, please refer to &lt;a href="https://github.com/haicasgox/demo-gatewayapi.git"&gt;this repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Architecture demo: we have two services (user and post). The VPC Lattice and Gateway API diagram below provides a mapping overview.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf7xy8nlws4625zjnv2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf7xy8nlws4625zjnv2o.png" alt="[VPC Lattice and Gateway API](https://www.gateway-api-controller.eks.aws.dev/concepts/overview/)" width="761" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see in the picture that the Gateway API is composed of three main components: GatewayClass (controller), Gateway, and HTTPRoute/GRPCRoute, each of which maps to a VPC Lattice object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;br&gt;
The AWS Gateway API controller (GatewayClass) integrates VPC Lattice with the Kubernetes Gateway API. When installed in your cluster, the controller watches for the creation of Gateway API resources such as Gateways and Routes, and provisions the corresponding Amazon VPC Lattice objects. This enables users to configure VPC Lattice service networks using Kubernetes APIs, without needing to write custom code or manage sidecar proxies. The AWS Gateway API Controller is an open-source project fully supported by the AWS team.&lt;/p&gt;

&lt;p&gt;Now let’s go through step by step to set this up on our EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step by step guide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create GatewayClass:&lt;/strong&gt;&lt;br&gt;
First, we need to create a GatewayClass (Gateway API controller); we will be using the AWS Gateway API controller. Before you create the GatewayClass, you need to set up the following two things:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure security groups so that all Pods communicating with VPC Lattice allow traffic from the VPC Lattice managed prefix lists.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create IRSA for Gateway API Controller.&lt;br&gt;
For those steps, please refer to &lt;a href="https://www.gateway-api-controller.eks.aws.dev/guides/deploy/#using-eks-cluster"&gt;this link&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After all of that is done, we will create our first GatewayClass. You can find all manifests used in this demo &lt;a href="https://github.com/haicasgox/demo-gatewayapi.git"&gt;here&lt;/a&gt;.&lt;/p&gt;
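
&lt;p&gt;A minimal GatewayClass for this controller looks roughly like the sketch below; the controllerName shown is the one documented for the AWS Gateway API Controller at the time of writing, so please verify it against the linked repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice
spec:
  # Tells Kubernetes which controller implements this class
  controllerName: application-networking.k8s.aws/gateway-api-controller

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;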

&lt;p&gt;The outcome should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbiy2or64xxnpfjwwj0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbiy2or64xxnpfjwwj0j.png" alt="Image description" width="760" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service networks (Gateway):&lt;/strong&gt;
Next, we will create a Gateway. Gateway describes how traffic can be translated to Services within the cluster (through Load Balancer, in-cluster proxy, external hardware, etc.). In AWS, Gateway points to a &lt;a href="https://docs.aws.amazon.com/vpc-lattice/latest/ug/service-networks.html"&gt;VPC Lattice service network&lt;/a&gt;. Services associated with the service network can be authorized for discovery, connectivity, accessibility, and observability.&lt;/li&gt;
&lt;/ul&gt;
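
&lt;p&gt;A minimal Gateway sketch, assuming the GatewayClass created earlier is named amazon-vpc-lattice (the Gateway name is illustrative; it becomes the VPC Lattice service network name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-service-network   # maps to a VPC Lattice service network
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;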

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwnughlmvw3xcbvo8rqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwnughlmvw3xcbvo8rqx.png" alt="Image description" width="760" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Services and HTTPRoute:&lt;/strong&gt;
Finally, we will define Services and Routes using K8s object Service and HTTPRoute to start routing traffic between services.&lt;/li&gt;
&lt;/ul&gt;
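
&lt;p&gt;As a sketch, the HTTPRoute for the user service could look like this (the port and gateway name are assumptions; see the linked repository for the exact manifests shown in the screenshots below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: user
spec:
  parentRefs:
    - name: demo-service-network   # the VPC Lattice-backed Gateway
      sectionName: http
  rules:
    - backendRefs:
        - name: user               # Kubernetes Service for the user app
          kind: Service
          port: 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;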

&lt;p&gt;Service: User&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmmkdrp16zf0fba5skyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmmkdrp16zf0fba5skyp.png" alt="Service: User" width="760" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Service: Post&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa64plcbhouzde7xp981x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa64plcbhouzde7xp981x.png" alt="Service: Post" width="761" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Target groups for the two services:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2so8kvawrwat8yoyngy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2so8kvawrwat8yoyngy.png" alt="Target groups for 02 services post and user" width="761" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt;
Now let’s check whether service “post” can call service “user” via its VPC Lattice domain name, and vice versa.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wsy2nz57wrczsid1t23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wsy2nz57wrczsid1t23.png" alt="Service post calls service user via DNS provided by AWS Lattice" width="760" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb079a8gcy5cobf4xxmsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb079a8gcy5cobf4xxmsn.png" alt="Service user calls service post via DNS provided by AWS Lattice" width="760" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It worked!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Even though Gateway API is new and still maturing, it is already showing a lot of potential. With more features and improvements coming, we can expect it to become the standard API for Ingress, load balancing, and service mesh functionality.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://gateway-api.sigs.k8s.io/"&gt;Introduction - Kubernetes Gateway API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gateway-api-controller.eks.aws.dev/"&gt;AWS Gateway API Controller &lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>gatewayapi</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>vpclattice</category>
    </item>
    <item>
      <title>How to migrate Apache Solr from the existing cluster to Amazon EKS</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Thu, 31 Aug 2023 03:24:37 +0000</pubDate>
      <link>https://dev.to/haintkit/how-to-migrate-apache-solr-from-the-existing-cluster-to-amazon-eks-3b3l</link>
      <guid>https://dev.to/haintkit/how-to-migrate-apache-solr-from-the-existing-cluster-to-amazon-eks-3b3l</guid>
      <description>&lt;p&gt;Co-author: &lt;a class="mentioned-user" href="https://dev.to/coangha21"&gt;@coangha21&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Solr is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases.&lt;/p&gt;

&lt;p&gt;Solr runs as a standalone full-text search server. It uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it usable from most popular programming languages. Solr's external configuration allows it to be tailored to many types of applications without Java coding, and it has a plugin architecture to support more advanced customization.&lt;/p&gt;

&lt;p&gt;In this article, I'll walk through the process of migrating Solr from a Kubernetes cluster to Amazon EKS (Elastic Kubernetes Service) using the backup-and-restore method. Please note that, depending on your Solr version, system requirements, circumstances, etc., you may need extra setup on EFS to ensure your Kubernetes cluster can access the network file system on AWS. This is not required if your Solr version can use S3 as a backup repository. Please refer to the links below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://apache.github.io/solr-operator/docs/solr-backup/#s3-backup-repositories" rel="noopener noreferrer"&gt;Solr Operator documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html" rel="noopener noreferrer"&gt;Working with Amazon EFS access points - EFS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://repost.aws/questions/QUmW_DqR8gSeK8yI86EXaEgA/is-it-possible-to-make-efs-publicly-accessible" rel="noopener noreferrer"&gt;Is it possible to make EFS publicly accessible?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For demonstration purposes, I'll migrate Solr from one EKS cluster to another within the same region. However, you can apply this migration method to any Kubernetes cluster running on any platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you begin, make sure you have the following available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account and required permission to create resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform or AWS CLI, kubectl and helm installed on your machine&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the steps we will follow in this article:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create target EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Install Solr using Helm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Setup Solr backup storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Create backup from origin cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Restore Solr to EKS using backup.&lt;/p&gt;

&lt;p&gt;Let's go into the details!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create target EKS cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are many ways to create a cluster, such as using &lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt;. In my case, I will use the Terraform EKS module because it is easy to reuse and comprehend.&lt;/p&gt;

&lt;p&gt;This is my Terraform code template for creating the cluster. You can copy and run it as-is, or customize it to your desired configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

provider "aws" {
  region = "ap-southeast-1"
  default_tags {
    tags = {
      environment = "Dev"
    }
  }
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

provider "kubectl" {
  apply_retry_count      = 10
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false
  token                  = data.aws_eks_cluster_auth.this.token
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

data "aws_availability_zones" "available" {} 

locals {
  region = "ap-southeast-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~&amp;gt; 19.12"

  ## EKS Cluster Config
  cluster_name       = "solr-demo"
  cluster_version    = "1.25"

  ## VPC Config
  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets

  # EKS Cluster Network Config
  cluster_endpoint_private_access      = true
  cluster_endpoint_public_access       = true

  ## EKS Worker
  eks_managed_node_groups  = {
    "solr-nodegroup" = {
      node_group_name    = "solr_managed_node_group"
      # launch_template_os = "amazonlinux2eks"
      public_ip          = false
      pre_userdata       = &amp;lt;&amp;lt;-EOF
          yum install -y amazon-ssm-agent
          systemctl enable amazon-ssm-agent &amp;amp;&amp;amp; systemctl start amazon-ssm-agent
        EOF
      desired_size       = 2
      ami_type           = "AL2_x86_64"
      capacity_type      = "ON_DEMAND"
      instance_types     = ["t3.medium"]
      disk_size          = 30
    }
  }
}

module "eks_blueprints_addons_common" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~&amp;gt; 1.3.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  create_delay_dependencies = [for ng in module.eks.eks_managed_node_groups: ng.node_group_arn]

  eks_addons = {
    aws-ebs-csi-driver = {
      service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
    }
    vpc-cni = {
      service_account_role_arn = module.aws_node_irsa.iam_role_arn
    }
    coredns = {
    }
    kube-proxy = {
    }
  }
  enable_aws_efs_csi_driver = true
}

## Resource for VPC CNI Addon
module "aws_node_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~&amp;gt; 5.20"

  role_name_prefix = "${module.eks.cluster_name}-aws-node-"

  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }
}

module "ebs_csi_driver_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~&amp;gt; 5.20"

  role_name_prefix = "${module.eks.cluster_name}-ebs-csi-driver-"

  attach_ebs_csi_policy = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~&amp;gt; 5.0"

  name = "solr-demo-subnet"
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The following AWS resources will be created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;VPC with private and public subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS cluster along with a node group (t3.medium x 02) and EKS add-ons (aws-ebs-csi-driver, vpc-cni, coredns, kube-proxy).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check AWS resources in AWS management console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawo8n0vwwykkcw3g6if5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawo8n0vwwykkcw3g6if5.png" alt="EKS Cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9fim7cbdfz3ykpcll5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9fim7cbdfz3ykpcll5v.png" alt="EKS Adds-on"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Solr using Helm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next step is to install Solr using Helm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

### Add the Apache Solr Helm repository
helm repo add apache-solr https://solr.apache.org/charts
helm repo update

### Install the Solr and ZooKeeper CRDs
kubectl create -f https://solr.apache.org/operator/downloads/crds/&amp;lt;version&amp;gt;/all-with-dependencies.yaml

### Install the Solr operator (includes the ZooKeeper operator)
helm install solr-operator apache-solr/solr-operator --version &amp;lt;version&amp;gt;

### Install Solr and ZooKeeper
helm install solr apache-solr/solr -n solr --create-namespace --version &amp;lt;version&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &amp;lt;version&amp;gt; with your chart version, or with the chart version that contains your desired Solr version.&lt;/p&gt;

&lt;p&gt;Next, run these commands to get the admin password and access the Solr UI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

### Get solr password
kubectl get secret solrcloud-security-bootstrap -n solr -o jsonpath='{.data.admin}' | base64 --decode
### Port forward Solr UI
kubectl port-forward service/solrcloud-common 3000:80 -n solr


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now open your browser and type &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;, the result should be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsamu5ic5ffwcp2izn6b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsamu5ic5ffwcp2izn6b7.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Then log in using the admin password we retrieved above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup Solr backup storage&lt;/strong&gt;&lt;br&gt;
After the Solr installation is done, it is time to set up Solr backup storage. At the time of writing this post, AWS supports two backup storage types: EFS and S3. Depending on your Solr version and system requirements, you can choose either of them. In this demo, I’ll use EFS as backup storage since this storage type is compatible with most Solr versions. For more information, please visit this &lt;a href="https://apache.github.io/solr-operator/docs/solr-backup/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To set up EFS as Solr’s backup storage, you need to create an EFS file system in AWS. This Terraform code template will create the EFS resource:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

module "efs" {
  source  = "terraform-aws-modules/efs/aws"
  version = "1.2.0"

  # File system
  name           = "solr-backup-storage"

  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"

  # Mount targets / security group
  mount_targets = {
    "ap-southeast-1a" = {
      subnet_id = module.vpc.private_subnets[0]
    }
    "ap-southeast-1b" = {
      subnet_id = module.vpc.private_subnets[1]
    }
    "ap-southeast-1c" = {
      subnet_id = module.vpc.private_subnets[2]
    }
  }

  deny_nonsecure_transport = false

  security_group_description = "EFS security group"
  security_group_vpc_id      = module.vpc.vpc_id
  security_group_rules       = {
    "private-subnet" = {
      cidr_blocks = module.vpc.private_subnets
    }
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
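
&lt;p&gt;The solr-efs-pvc.yaml referenced below is not included in this post; a minimal sketch using static provisioning with the EFS CSI driver could look like this (capacity and namespace are assumptions to adjust for your environment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: v1
kind: PersistentVolume
metadata:
  name: solr-efs-pv
spec:
  capacity:
    storage: 50Gi            # EFS ignores this value, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""       # empty class for static binding
  csi:
    driver: efs.csi.aws.com
    volumeHandle: &amp;lt;EFS-id&amp;gt;   # e.g. fs-1234567890abcdef
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: solr-efs-claim
  namespace: solr
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;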

&lt;p&gt;In solr-efs-pvc.yaml, replace the EFS ID with the EFS resource ID (e.g. fs-1234567890abcdef) you created in the previous step, and then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f solr-efs-pvc.yaml -n solr


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;EFS is now ready to use in your cluster. Next, you need to upgrade Solr to use EFS as its backup storage. First, create a values.yaml file as below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

backupRepositories:
  - name: "solr-backup"
    volume:
      source: # Required
        persistentVolumeClaim:
          claimName: "solr-efs-claim"
      directory: "solr-backup/" # Optional


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note that you will need to do this for both origin and target cluster.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Second, roll it out using Helm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm upgrade --install solr -f values.yaml apache-solr/solr -n solr --version &amp;lt;version&amp;gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, verify that the EFS claim name appears in your Solr StatefulSet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl describe statefulset/dica-solrcloud -n solr | grep solr-efs-claim
    ClaimName:  solr-efs-claim


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Create backup from source cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To restore Solr to the new cluster, you first need a backup in hand. Back up a collection through the Solr Collections API with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

curl --user admin:&amp;lt;password&amp;gt; "https://&amp;lt;origin-solr-endpoint&amp;gt;/solr/admin/collections?action=BACKUP&amp;amp;name=&amp;lt;backup-name&amp;gt;&amp;amp;collection=&amp;lt;collection-name&amp;gt;&amp;amp;location=file:///var/solr/data/backup-restore/solr-backup&amp;amp;repository=solr-backup"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you have more than one collection, repeat the process for each one. Replace password with the admin password, origin-solr-endpoint with your origin Solr endpoint, and choose backup-name and collection-name as you like. Note that the URL is quoted so the shell does not interpret the ampersands.&lt;/p&gt;
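&lt;p&gt;When several collections need backing up, the repeated curl calls can be scripted. Below is a hedged Python sketch that only builds the request URLs; the endpoint and collection names are placeholders, and you would still send each URL with curl or an HTTP client using the admin credentials:&lt;/p&gt;

```python
from urllib.parse import urlencode

def backup_url(endpoint, collection, backup_name):
    # Location and repository match the values used in the curl command above.
    params = {
        "action": "BACKUP",
        "name": backup_name,
        "collection": collection,
        "location": "file:///var/solr/data/backup-restore/solr-backup",
        "repository": "solr-backup",
    }
    return "https://" + endpoint + "/solr/admin/collections?" + urlencode(params)

# Build one BACKUP request per collection (names are illustrative).
for collection in ["products", "articles"]:
    print(backup_url("origin-solr-endpoint", collection, collection + "-backup"))
```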

&lt;p&gt;You can check the backup progress by opening a shell in a pod and inspecting the directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp8lvh9jcf20dq237g5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp8lvh9jcf20dq237g5w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restore Solr to target EKS cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since the origin and target Solr clusters use the same AWS EFS directory as backup storage, as set up in the previous steps, you only need to invoke the restore API on the target cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

curl --user admin:&amp;lt;password&amp;gt; "http://localhost:3000/solr/admin/collections?action=RESTORE&amp;amp;name=&amp;lt;backup-name&amp;gt;&amp;amp;location=/var/solr/data/backup-restore/solr-backup&amp;amp;collection=&amp;lt;collection-name&amp;gt;"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since I configured port-forwarding, I only need to replace the Solr endpoint with localhost:3000. Finally, open the Solr UI; you should see that the collection has been restored successfully to your new EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa1sd0y1l1ztw3qhn4b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa1sd0y1l1ztw3qhn4b0.png" alt="Restored collection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, you can set up autoscaling, ingress, security, and other resources for Solr in the new EKS cluster, and connect your application to it.&lt;/p&gt;

&lt;p&gt;Should you need any further information regarding Solr’s backup and restore API, please visit this &lt;a href="https://solr.apache.org/guide/6_6/making-and-restoring-backups.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Solr is widely used in both enterprises and SMBs. Depending on your system requirements and circumstances, migrating Solr to Amazon EKS may call for different setups and approaches. I hope this post has given you useful information about Solr migration using backup and restore. Comments are welcome. Thank you for reading!&lt;/p&gt;

&lt;p&gt;Thanks to co-author &lt;a class="mentioned-user" href="https://dev.to/coangha21"&gt;@coangha21&lt;/a&gt; for your effort on this post!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>solr</category>
      <category>migration</category>
    </item>
    <item>
      <title>Automatically delete S3 buckets in all AWS regions</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Fri, 23 Sep 2022 10:06:15 +0000</pubDate>
      <link>https://dev.to/haintkit/automatically-delete-s3-buckets-in-all-aws-regions-2dbk</link>
      <guid>https://dev.to/haintkit/automatically-delete-s3-buckets-in-all-aws-regions-2dbk</guid>
      <description>&lt;p&gt;In this blog, I will demonstrate how to automatically delete S3 buckets in all AWS regions based on a pre-defined schedule using &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html"&gt;EventBridge&lt;/a&gt; and &lt;a href="https://aws.amazon.com/lambda/?nc1=h_ls"&gt;Lambda&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Problem statement&lt;/strong&gt;&lt;br&gt;
We have many S3 buckets across several AWS regions in our account. Our task is to delete all of them on a pre-defined schedule, such as once per week or once per month. Deleting them manually through the AWS Management Console would be a tedious, time-consuming job, so we need an automated approach that saves us the effort. Fortunately, AWS provides services that help us do exactly that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Solution&lt;/strong&gt;&lt;br&gt;
We will use two services to automatically delete all existing S3 buckets in all AWS regions on a schedule. The workflow is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5wyL1iFe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9bra4uu5njothsmt5p5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5wyL1iFe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9bra4uu5njothsmt5p5l.png" alt="Image description" width="880" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge&lt;/strong&gt;: A serverless event bus that lets you receive, filter, transform, route, and deliver events. EventBridge can be considered a rule-driven router: you define event patterns based on the actual content of events to decide which targets receive each event passing through the bus. A target can be Lambda, &lt;a href="https://aws.amazon.com/sns/?nc1=h_ls&amp;amp;whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc"&gt;AWS SNS&lt;/a&gt;, &lt;a href="https://aws.amazon.com/sqs/?nc1=h_ls"&gt;AWS SQS&lt;/a&gt;, etc.
An EventBridge rule that runs on a schedule, using a rate or cron expression, can trigger a Lambda function to do a specific job. The following shows a rule that runs every 30 days using a rate expression.&lt;/li&gt;
&lt;/ul&gt;
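&lt;p&gt;As a rough illustration, the scheduled rule shown above could also be created with boto3. The rule name and schedule below are illustrative, and lambda_function_arn is a placeholder:&lt;/p&gt;

```python
def schedule_rule_kwargs(name="delete-s3-buckets", every_days=30):
    # rate() is one of the two EventBridge schedule expression forms;
    # the other is cron().
    return {
        "Name": name,
        "ScheduleExpression": "rate({} days)".format(every_days),
        "State": "ENABLED",
    }

# events = boto3.client("events")
# events.put_rule(**schedule_rule_kwargs())
# events.put_targets(Rule="delete-s3-buckets",
#                    Targets=[{"Id": "1", "Arn": lambda_function_arn}])
```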

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ht_IunF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ql0brn4k42rwwh0wz7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ht_IunF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ql0brn4k42rwwh0wz7p.png" alt="Image description" width="880" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lambda&lt;/strong&gt;: A serverless compute service that runs custom code to delete all S3 buckets in all regions. It is triggered periodically by the EventBridge rule above.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To delete an S3 bucket, all resources inside it, such as objects and access points, must be deleted first. For multi-region access points, all requests to create or maintain them are routed to us-west-2 (the Oregon region), so the Lambda function that deletes the S3 buckets should be deployed in us-west-2.&lt;br&gt;
Refer to &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManagingMultiRegionAccessPoints.html"&gt;this&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda deployment:&lt;/strong&gt;&lt;br&gt;
My colleague and I wrote a Lambda function using Python to do the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;List all bucket names in all regions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete all multi-region access points.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the remaining resources (bucket policies, objects) and then permanently delete the S3 buckets, except any bucket whose name contains the string "cloudtrail".&lt;br&gt;
Note: buckets whose names contain "cloudtrail" store the CloudTrail logs for our security audit, so they must not be deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
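&lt;p&gt;The full function lives in the linked repository; as a hedged sketch, the "cloudtrail" exclusion from the last step boils down to a simple filter (the boto3 calls are shown as comments only, and the function name is illustrative):&lt;/p&gt;

```python
def buckets_to_delete(bucket_names, protected_substring="cloudtrail"):
    # Skip audit-log buckets whose names contain the protected substring.
    return [name for name in bucket_names if protected_substring not in name]

# s3 = boto3.client("s3")
# names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
# for name in buckets_to_delete(names):
#     ...  # empty the bucket first, then s3.delete_bucket(Bucket=name)
```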

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6enCJppY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zpgeqajxrzsynwkqcdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6enCJppY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zpgeqajxrzsynwkqcdg.png" alt="Image description" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z3K89YsY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqghn1k8gom7ie3yy6k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z3K89YsY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqghn1k8gom7ie3yy6k0.png" alt="Image description" width="880" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--csDTUHZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed57sxoz79hkqa6vokx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--csDTUHZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed57sxoz79hkqa6vokx0.png" alt="Image description" width="880" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For source code reference, you can refer to &lt;a href="https://github.com/haicasgox/delete-s3-bucket"&gt;this repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, I showed a method to automatically delete all existing S3 buckets in all AWS regions on a specific schedule. Thanks &lt;a class="mentioned-user" href="https://dev.to/natsu08122"&gt;@natsu08122&lt;/a&gt; for your source code contribution. I hope this blog is helpful for you. Should you have any questions, feel free to leave a comment. Thank you for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>s3</category>
    </item>
    <item>
      <title>AWS VPN site-to-site troubleshooting</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Thu, 22 Sep 2022 08:39:11 +0000</pubDate>
      <link>https://dev.to/haintkit/aws-vpn-site-to-site-troubleshooting-3g1o</link>
      <guid>https://dev.to/haintkit/aws-vpn-site-to-site-troubleshooting-3g1o</guid>
      <description>&lt;p&gt;In this blog, I will show you how to troubleshoot a VPN site-to-site connection between AWS and other side.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Problem&lt;/strong&gt;&lt;br&gt;
Our customer wants to continuously back up their data from &lt;a href="https://aws.amazon.com/rds/aurora/"&gt;AWS Aurora MySQL&lt;/a&gt; to a local cloud provider over a private connection, so an IPsec site-to-site VPN is required.&lt;br&gt;
On the AWS side, the Virtual Private Gateway (VGW) provides dual tunnels to the Customer Gateway (CGW) for high availability. If there is a device failure within AWS, the VPN connection automatically fails over to the second tunnel so that the connection is not interrupted. Meanwhile, the local cloud provider's border router supports only a single tunnel. The picture below shows the scenario:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mxwXu3VP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6r9ik46ms1if9lncx1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mxwXu3VP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6r9ik46ms1if9lncx1e.png" alt="Image description" width="880" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From time to time, AWS performs routine maintenance on the VPN connection, such as tunnel endpoint replacement, which might briefly disable the tunnel. Refer to &lt;a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/your-cgw.html"&gt;this&lt;/a&gt; for more information. This interrupts the data replication process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tP2Jo0EP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9iay20ea1yk7121odel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tP2Jo0EP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c9iay20ea1yk7121odel.png" alt="Image description" width="880" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case, the VPN connection was configured with the default parameters. This caused the tunnel to go down and fail to recover automatically, even after the endpoint replacement finished. We can check the VPN connection status metric in &lt;a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/monitoring-cloudwatch-vpn.html"&gt;CloudWatch&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u-h9dn9D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucro0z1rwfxwopu0vep7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u-h9dn9D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucro0z1rwfxwopu0vep7.png" alt="Image description" width="880" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Solution&lt;/strong&gt;&lt;br&gt;
After investigation, I found that the issue was caused by the deletion of the IKE SA (Internet Key Exchange Security Association) between the VGW and CGW. IKE is the protocol underlying IPsec tunneling that provides a secure VPN communication channel between peer VPN devices and defines negotiation and authentication for IPsec security associations (SAs) in a protected manner. So, if the IKE SA is deleted, the IPsec VPN connection goes down.&lt;/p&gt;

&lt;p&gt;When AWS performs the endpoint replacement, the VPN connection is interrupted. The IKE SA is kept for a specific time, defined by the Dead Peer Detection (DPD) timeout parameter on both the VGW and CGW. After the DPD timeout occurs, the VGW or CGW sends an IKE SA deletion request to the other side, and the IKE SA is then deleted.&lt;/p&gt;

&lt;p&gt;As soon as the endpoint replacement finishes, either the CGW or the VGW should initiate IKE negotiation to restart the VPN tunnel. If both of them just keep waiting for the other, the tunnel stays down forever. Unfortunately, that was my case.&lt;/p&gt;

&lt;p&gt;Because I could not change the configuration on the CGW, which belongs to the local cloud provider, I changed the settings on the AWS side. There are three VPN tunnel options we should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DPD timeout&lt;/strong&gt;: 30 seconds by default. We can increase this value to cover the endpoint replacement time, but we do not know exactly how long the replacement takes, and be aware that a larger value also delays failover to the second tunnel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DPD timeout action&lt;/strong&gt;: The default value is "Clear", which ends the IKE session and clears the routes. I changed it to "Restart", which restarts the IKE session when a DPD timeout occurs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Startup action&lt;/strong&gt;: "Add" is the default value, which requires the CGW to initiate IKE negotiation to bring the tunnel up. In my case, it should be "Start", so that AWS proactively initiates the IKE negotiation instead of the CGW.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
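&lt;p&gt;For reference, the same two changes can be applied through the EC2 ModifyVpnTunnelOptions API. The sketch below only assembles the request parameters; the connection ID and tunnel IP are placeholders, and the commented boto3 call shows how you would submit them:&lt;/p&gt;

```python
def tunnel_options_update(vpn_connection_id, tunnel_outside_ip):
    return {
        "VpnConnectionId": vpn_connection_id,
        "VpnTunnelOutsideIpAddress": tunnel_outside_ip,
        "TunnelOptions": {
            # Restart the IKE session on DPD timeout instead of clearing routes.
            "DPDTimeoutAction": "restart",
            # Have AWS initiate the IKE negotiation instead of waiting for the CGW.
            "StartupAction": "start",
        },
    }

# ec2 = boto3.client("ec2")
# ec2.modify_vpn_tunnel_options(**tunnel_options_update("vpn-id", "tunnel-ip"))
```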

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L7bJUPQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45u5uzsc67adctplr6dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L7bJUPQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45u5uzsc67adctplr6dg.png" alt="Image description" width="880" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--e_KZXyty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3z0zc6jsbm6fyogu59w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--e_KZXyty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3z0zc6jsbm6fyogu59w.png" alt="Image description" width="880" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information, you can refer to &lt;a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Conclusion&lt;/strong&gt;&lt;br&gt;
In this blog, I showed how to resolve an AWS site-to-site VPN connection issue. Hopefully it will be helpful for you. If you have any questions, do not hesitate to leave a comment. Thank you for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpn</category>
    </item>
    <item>
      <title>Case Study: How to replicate database from AWS to outside?</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Sat, 17 Sep 2022 14:43:42 +0000</pubDate>
      <link>https://dev.to/haintkit/case-study-how-to-replicate-database-from-aws-to-outside-3obc</link>
      <guid>https://dev.to/haintkit/case-study-how-to-replicate-database-from-aws-to-outside-3obc</guid>
      <description>&lt;p&gt;In this post, I will explain how to use AWS Database Migration Service (DMS) to replicate the data from AWS to outside.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The requirement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my country, Vietnam, every company must store client information, such as personal data, locally for security reasons. Please check &lt;a href="https://caa.gov.vn/van-ban/53-2022-nd-cp-28435.htm"&gt;this link&lt;/a&gt; for more information. Under this government regulation, it does not matter whether you are a local or an international company: your clients' personal data must be stored in Vietnam. If your application runs on AWS or another public cloud that currently has no region in Vietnam, how can your company comply with the regulation?&lt;/p&gt;

&lt;p&gt;In this blog, I will show an approach that lets you keep leveraging AWS services while still complying with the government regulation when your company enters the Vietnamese market.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Solution&lt;/strong&gt;&lt;br&gt;
Let's say you are running your application on top of AWS infrastructure, with your client data stored in an RDS instance. To meet the regulation, you have to store the client data both in AWS and in the local environment, so you rent a VM from a local provider and replicate the data from AWS to that VM.&lt;/p&gt;

&lt;p&gt;The below diagram demonstrates the solution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBraYKXE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/56c0w3g1c1s0tldf2al0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBraYKXE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/56c0w3g1c1s0tldf2al0.png" alt="Image description" width="880" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The components are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/rds/?nc1=h_ls"&gt;AWS RDS for MySQL&lt;/a&gt;: storing the production database including the client's personal data.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/vi/dms/"&gt;AWS Database Migration Server (DMS&lt;/a&gt;): a managed service to help migrate commercial and open-source database to AWS quickly and securely. In this case, I used DMS to replicate continuously the data from a source (RDS instance) to a destination (VM in local cloud provider).&lt;/li&gt;
&lt;li&gt;Virtual Private Gateway and firewall: establish an IPsec site-to-site VPN between AWS and the local cloud environment, providing a secure communication channel.&lt;/li&gt;
&lt;li&gt;Virtual machine: runs a MySQL engine to store the data replicated from AWS RDS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's go to the details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a) AWS RDS instance:&lt;/strong&gt; Assume I have an RDS for MySQL instance that stores the private client data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LD4gKdva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq88ghwn8528klue1ppe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LD4gKdva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq88ghwn8528klue1ppe.png" alt="Image description" width="880" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b) Virtual machine at the local provider&lt;/strong&gt;:&lt;br&gt;
A virtual machine running the MySQL engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yyHms6_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ix4snheqrk7s3p7zwuj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yyHms6_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ix4snheqrk7s3p7zwuj9.png" alt="Image description" width="880" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c) DMS:&lt;/strong&gt;&lt;br&gt;
For step-by-step DMS configuration and best practices, refer to these links: &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html"&gt;getting started&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html"&gt;best practices&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Create a replication instance that replicates the data from RDS to the VM according to the pre-defined migration task.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BHDlTTaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgml03suicgnyky5gjjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BHDlTTaN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgml03suicgnyky5gjjo.png" alt="Image description" width="880" height="222"&gt;&lt;/a&gt;&lt;br&gt;
Note: in a production environment, a Multi-AZ replication instance is highly recommended for high availability. To achieve high-performance replication, use a compute-optimized or memory-optimized instance type rather than a burstable one; I use a burstable instance here for demonstration only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create endpoints for the source (RDS instance) and the destination (virtual machine). Run the connection tests to make sure the replication instance can reach both endpoints.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k5ZDlE8T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7s592fnp9ekq171h19r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k5ZDlE8T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7s592fnp9ekq171h19r.png" alt="Image description" width="880" height="173"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PcZWePs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81eduwdxddwl00uibf2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PcZWePs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81eduwdxddwl00uibf2k.png" alt="Image description" width="880" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create the migration task.&lt;br&gt;
There is a bunch of parameters you should consider configuring; the right values depend on your requirements and the state of your source database. Refer to &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html"&gt;this&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cFE_rqst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjljl1vptnzz6efmgy3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cFE_rqst--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjljl1vptnzz6efmgy3d.png" alt="Image description" width="880" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the migration task, validation can be enabled if you want DMS to compare the data between source and destination. Keep in mind that validation makes the replication task take longer. Alternatively, you can compare the source and destination data manually: the number of databases, the number of tables, the size of each database, and so on.&lt;/p&gt;
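&lt;p&gt;A manual comparison like this can be scripted. The hedged Python sketch below diffs per-table row counts that you would collect from each side, for example with a query against information_schema.TABLES; the function name and sample data are illustrative:&lt;/p&gt;

```python
def diff_row_counts(source, destination):
    # Return {table: (source_count, destination_count)} for every mismatch,
    # including tables that exist on only one side (count 0 on the other).
    mismatches = {}
    for table in sorted(set(source) | set(destination)):
        src, dst = source.get(table, 0), destination.get(table, 0)
        if src != dst:
            mismatches[table] = (src, dst)
    return mismatches

print(diff_row_counts({"users": 100, "orders": 50},
                      {"users": 100, "orders": 49}))
```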

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wxSwDWhw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kziug73aoo7q9ocrbu0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wxSwDWhw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kziug73aoo7q9ocrbu0v.png" alt="Image description" width="880" height="806"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A migration task in the Running status is replicating the data according to the pre-defined parameters. For other statuses, refer to this &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7M4HAsso--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yq13f3kfjvrb5idf2q7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7M4HAsso--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yq13f3kfjvrb5idf2q7o.png" alt="Image description" width="880" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To monitor the migration task, you can check CloudWatch Logs and the table statistics, which report the state of your tables during replication. The possible table states are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Table does not exist: AWS DMS can't find the table on the source endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Before load: The full load process is enabled, but it hasn't started yet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full load: The full load process is in progress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Table completed: Full load is completed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Table cancelled: Loading of the table is canceled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Table error: An error occurred when loading the table.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following shows that the full load is completed, meaning all tables were replicated to the destination (VM).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TC4tM4se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrf51ucvo1rhckn8vvm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TC4tM4se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrf51ucvo1rhckn8vvm6.png" alt="Image description" width="880" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other columns in the table statistics, such as Inserts, Deletes, Updates, and DDLs, show the number of those statements replicated during the change data capture (CDC) phase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Conclusion&lt;/strong&gt;&lt;br&gt;
In this blog, I demonstrated how to use AWS Database Migration Service (DMS) to replicate data from a source (an RDS instance) to a destination (a VM at a local provider) in order to comply with the government regulation. Should you have any questions, feel free to leave a comment. Thank you for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dms</category>
      <category>rds</category>
    </item>
    <item>
      <title>CloudFront with JWT authentication</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Thu, 25 Aug 2022 04:10:00 +0000</pubDate>
      <link>https://dev.to/haintkit/cloudfront-with-jwt-authentication-46dh</link>
      <guid>https://dev.to/haintkit/cloudfront-with-jwt-authentication-46dh</guid>
      <description>&lt;p&gt;In this post, I will explain how CloudFront provides JWT authentication, which I applied successfully in the project. Hopefully, it will be helpful for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The requirement:&lt;/strong&gt;&lt;br&gt;
My customer provides an LMS (Learning Management System) for language training on AWS. Their clients buy training courses and learn from recorded videos (Video on Demand). The videos are stored in an S3 bucket, which serves as the origin. After a client logs in to the web portal successfully, they receive a &lt;a href="https://jwt.io/" rel="noopener noreferrer"&gt;JWT&lt;/a&gt; (JSON Web Token) from the web app and then use the token to fetch videos from the origin via CloudFront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Solution:&lt;/strong&gt;&lt;br&gt;
CloudFront authenticates the client's JWT using a CloudFront Function and then takes one of the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the token is valid and has not expired, the CloudFront distribution checks its cache. On a cache hit, the CloudFront edge delivers the video to the client; otherwise, the distribution requests the video from the origin (S3 bucket).&lt;/li&gt;
&lt;li&gt;If the token is invalid, expired, or missing from the client request, the CloudFront edge sends a response with error code 401 (Unauthorized) to the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagram below shows the solution concept:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkyoucr0zs9qtei6vj37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkyoucr0zs9qtei6vj37.png" alt="CloudFront with S3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The components are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudFront: a CloudFront Function runs the JWT authentication code. The S3 bucket that stores the training videos is an origin of the CloudFront distribution, with an &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html" rel="noopener noreferrer"&gt;Origin Access Identity (OAI)&lt;/a&gt; restricting access to the bucket so that it is reachable only through CloudFront.&lt;/li&gt;
&lt;li&gt;S3 bucket: stores the training videos. The bucket policy should be configured to allow only the CloudFront distribution to access the bucket.&lt;/li&gt;
&lt;/ul&gt;
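&lt;p&gt;For reference, a bucket policy that grants read access only to the distribution's OAI looks roughly like this (the OAI ID and bucket name below are placeholders for your own values):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-video-bucket/*"
    }
  ]
}
```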

&lt;p&gt;&lt;strong&gt;3. Introduction to CloudFront and CloudFront Functions:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3.1. CloudFront:&lt;/strong&gt;&lt;br&gt;
CloudFront is AWS's CDN (Content Delivery Network); it securely delivers content with low latency and high transfer speeds. It consists of a global network of CloudFront edge locations distributed across the globe and uses AWS Shield Standard by default to defend against DDoS attacks at no additional charge. Please check the links below for more information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/?nc1=h_ls" rel="noopener noreferrer"&gt;Official CloudFront document &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/cloudfront/features/?whats-new-cloudfront.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cloudfront.sort-order=desc#Global_Edge_Network" rel="noopener noreferrer"&gt;CloudFront infrastructure information&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.2. CloudFront Functions:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/" rel="noopener noreferrer"&gt;CloudFront Functions&lt;/a&gt; are ideal for lightweight computation tasks on web requests. Some popular use cases are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP header manipulation&lt;/strong&gt;: View, add, modify, or delete any of the request or response headers. For example, add HTTP Strict Transport Security (HSTS) headers to your response or copy the client IP address into a new HTTP header (like True-Client-IP) to forward this IP to the origin with the request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;URL rewrites and redirects&lt;/strong&gt;: Generate a response from within CloudFront Functions to redirect requests to a different URL. For example, redirect a non-authenticated user from a restricted page to a paywall. You could also use URL rewrites for A/B testing a website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache key manipulations and normalization&lt;/strong&gt;: Transform HTTP request attributes (URL, headers, cookies, query strings) to construct the &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/understanding-the-cache-key.html" rel="noopener noreferrer"&gt;CloudFront cache key&lt;/a&gt; that is used for determining cache hits on future requests. By transforming the request attributes, you can normalize multiple requests to a single cache key, leading to an improved cache hit ratio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access authorization&lt;/strong&gt;: Implement access control and authorization for the content delivered through CloudFront by creating and validating user-generated tokens, such as HMAC tokens or JSON web tokens (JWT), to allow or deny requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the time of writing, CloudFront Functions only supports a JavaScript runtime. For more information, kindly refer to the &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I referred to &lt;a href="https://github.com/aws-samples/amazon-cloudfront-functions/tree/main/verify-jwt" rel="noopener noreferrer"&gt;this AWS GitHub repository&lt;/a&gt; for the JWT authentication source code. You can also write the JWT authentication logic yourself. It's up to you!&lt;/p&gt;

&lt;p&gt;Here are some configurations in CloudFront:&lt;/p&gt;

&lt;p&gt;a) The CloudFront Function after being published to the live stage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pz0xzo9zu0d0pbeu8ff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pz0xzo9zu0d0pbeu8ff.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CloudFront Function, written in JavaScript, authenticates the client request using the JWT string parameter. It decodes the JWT value using a pre-defined key (in my case, an AWS secret key). If the JWT is validated successfully by the CloudFront Function and has not expired, the client request is served. Otherwise, the response message “401 - Unauthorized” is sent to the client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszd36axsuzz95hewbise.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszd36axsuzz95hewbise.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A CloudFront Function can also be published to the development stage for testing. Please remember that only a function published to the live stage can be associated with a CloudFront distribution.&lt;/p&gt;

&lt;p&gt;b) In my case, the client request contains the JWT value in a query string parameter named jwt. The CloudFront cache policy should be configured with a cache key setting like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0wgo8639zdp62kxn7we.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0wgo8639zdp62kxn7we.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;The query string value may be different in your case. It depends on the parameter name defined in your backend code. In some cases, the parameter string is "token=xxxyyyzzz", so the query string value should be "token".&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;c) After creating the CloudFront Function, associate it with a CloudFront behavior so that the function is put to use:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7ziptjzl8w23569q9qq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7ziptjzl8w23569q9qq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; CloudFront Functions can be associated with the Viewer request and Viewer response events. For Origin request and Origin response, only &lt;a href="https://aws.amazon.com/lambda/edge/" rel="noopener noreferrer"&gt;Lambda@Edge&lt;/a&gt; can be used.&lt;/p&gt;

&lt;p&gt;After the above steps were done, I tested the function with the Postman application before applying it to the project. A video was uploaded to the S3 bucket (origin). An API request with a valid JWT string parameter passed CloudFront's authentication and received a response from CloudFront; otherwise, the response message “401 - Unauthorized” was sent back. Here are the results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi7ibhz423io54gv66m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi7ibhz423io54gv66m4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The API request with a valid JWT parameter string. The response code was 200 OK.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eue0gf1q22z6ovbmj18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eue0gf1q22z6ovbmj18.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The API request without a JWT parameter string. The response code was 401 Unauthorized.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I demonstrated how CloudFront can authenticate a JWT from the client request to serve videos stored in an S3 bucket. The solution may be helpful for anyone who has a VoD application.&lt;/p&gt;

&lt;p&gt;P.S.: The AWS documentation also provides a tutorial for hosting on-demand streaming video with S3, CloudFront, and Route 53. If you are interested in this scenario, kindly refer to &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-cloudfront-route53-video-streaming.html" rel="noopener noreferrer"&gt;this link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for reading! I look forward to hearing your comments!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudfront</category>
      <category>s3</category>
      <category>jwt</category>
    </item>
    <item>
      <title>Case Study - AWS Application Migration Service (MGN)</title>
      <dc:creator>Hai Nguyen</dc:creator>
      <pubDate>Tue, 23 Aug 2022 11:19:00 +0000</pubDate>
      <link>https://dev.to/haintkit/case-study-aws-application-migration-service-mgn-239e</link>
      <guid>https://dev.to/haintkit/case-study-aws-application-migration-service-mgn-239e</guid>
      <description>&lt;p&gt;In this post, I will demonstrate how AWS Application Migration Service (MGN) helped me out from the issue which I got in the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. My issue&lt;/strong&gt;&lt;br&gt;
As you may know, in order to launch an instance with certain kinds of OS (e.g. CentOS, SUSE, RedHat), an AMI subscription from AWS Marketplace is required. I subscribed to the CentOS 7 Updates HVM AMI published by AWS and then launched an EC2 instance with this OS. A lot of applications and middleware were installed on the instance, and an AMI was created for further use.&lt;/p&gt;

&lt;p&gt;A few weeks later, I needed to share the AMI (including the applications and middleware) with a new AWS account. The customer tried to restore the instance using the shared AMI, but it failed because the CentOS 7 Updates HVM image published by AWS was no longer available to new customers at that time. &lt;br&gt;
&lt;em&gt;(Note: at the time of writing, the CentOS 7 Updates HVM image by AWS is available again for new customers)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjai4k3ef6qgiksoj5gg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjai4k3ef6qgiksoj5gg6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, how could the customer restore the instance when its base image was unavailable? The only way was to create a new instance from an image that was still available in AWS Marketplace. But installing the applications and middleware from scratch would be a nightmare for our customer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's time for AWS MGN to come into action! Let's get started.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Introduction to AWS MGN&lt;/strong&gt;&lt;br&gt;
Previously, CloudEndure (&lt;a href="https://www.cloudendure.com/" rel="noopener noreferrer"&gt;https://www.cloudendure.com/&lt;/a&gt;) and AWS Server Migration Service (AWS SMS) were the migration tools used to replicate servers from on-premises and other clouds to AWS. In 2019, AWS acquired CloudEndure and continued developing the product. In March 2022, AWS introduced a powerful new migration tool, AWS Application Migration Service (AWS MGN), which is now highly recommended as the primary service for lift-and-shift migrations to AWS. Customers are encouraged to use AWS MGN for further migrations.&lt;/p&gt;

&lt;p&gt;AWS MGN enables customers to migrate their applications to AWS with minimal downtime and without having to make any changes to the applications or the source servers.&lt;/p&gt;

&lt;p&gt;Refer to the official link for more information: &lt;a href="https://aws.amazon.com/application-migration-service/?nc1=h_ls" rel="noopener noreferrer"&gt;https://aws.amazon.com/application-migration-service/?nc1=h_ls&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. How I used MGN to replicate the data:&lt;/strong&gt;&lt;br&gt;
AWS MGN replicates all applications, middleware, and configuration files from the old server to the new one. It saved my customer from their nightmare. The diagram below shows my scenario:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lea3ysg22i6ga1d3kq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lea3ysg22i6ga1d3kq5.png" alt="AWS MGN"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The components are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Source server:&lt;/strong&gt; The server the applications and middleware were running on. The AWS Replication Agent has to be installed on this server; it communicates with the AWS MGN service endpoint via TCP port 443, with the replication server via TCP port 1500 for data replication, and with S3 via a VPC gateway endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replication server:&lt;/strong&gt; A server in the Staging Area that continuously replicates data from the source server. It connects to the MGN service endpoint via TCP port 443 and to S3 via a VPC gateway endpoint. The replicated data is stored in EBS volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Destination server&lt;/strong&gt;: When data replication finishes, I perform a cutover to launch a new server based on the pre-defined launch template. The EBS volumes that store the replicated data are attached to the new server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let's go step by step in the AWS Management Console:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Go to AWS Application Migration Service --&amp;gt; Add source server:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenwluqm6mqdtl4ayx3jo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenwluqm6mqdtl4ayx3jo.png" alt="Add Servers"&gt;&lt;/a&gt;&lt;br&gt;
Install the replication agent on the source server. The agent needs permissions to communicate with the AWS MGN service, so IAM credentials (an access key and secret key) are required. Follow this guideline to create the required IAM credentials: &lt;a href="https://docs.aws.amazon.com/mgn/latest/ug/credentials.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/mgn/latest/ug/credentials.html&lt;/a&gt;&lt;br&gt;
or click on "Create IAM user":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp1x7x8ybmv53slv3hap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp1x7x8ybmv53slv3hap.png" alt="AWS Replication Agent installation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Download the "aws-replication-installer-init.py" file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wget -O ./aws-replication-installer-init.py https://aws-application-migration-service-ap-southeast-1.s3.ap-southeast-1.amazonaws.com/latest/linux/aws-replication-installer-init.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Run the installer using python3:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo python3 aws-replication-installer-init.py --region ap-southeast-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Follow the CLI prompts, entering the IAM credentials above and the disks to be replicated:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk0fyrr6er8tv9ntxdbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk0fyrr6er8tv9ntxdbx.png" alt="AWS replication agent installation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the agent is installed successfully, the output should look like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91myp8b95s1irr7fmr6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91myp8b95s1irr7fmr6b.png" alt="AWS replication agent installation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Check the source server status in the AWS MGN console:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl01nexdzl12eyfxxot7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl01nexdzl12eyfxxot7d.png" alt="Data replication status"&gt;&lt;/a&gt;&lt;br&gt;
It shows that the initial sync between the source server and the replication instance has finished, and the replication server is creating an EBS snapshot. Just wait for the migration lifecycle status to become Ready.&lt;/p&gt;

&lt;p&gt;Meanwhile, I had to configure the EC2 launch template, which defines the instance specifications of the destination server. Click on the source server, go to the "Launch settings" tab, and then click "Modify" under "EC2 Launch Template":&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47p3to5zeoreusj3te4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47p3to5zeoreusj3te4c.png" alt="Source server"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm8pnsrt4tbg9qm3sfot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm8pnsrt4tbg9qm3sfot.png" alt="EC2 Launch Template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that you must set the latest version of the launch template as the default in order for AWS MGN to recognize it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi6c5d3a399qxo0g242a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi6c5d3a399qxo0g242a.png" alt="Default launch template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Check the migration lifecycle of the source server and make sure the status is ready. It should be "Ready for testing", like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjokho2erhnc6meyo42f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjokho2erhnc6meyo42f.png" alt="migration lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; I ran "Launch test instances" to check the replication status:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tt9pospfp7rxfavvtjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tt9pospfp7rxfavvtjn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see one m4.large instance in the initializing state. It is the AWS MGN conversion server, which converts the disks to boot and run on AWS. In particular, it makes bootloader changes, injects hypervisor drivers, and installs cloud tools. Refer to the document: &lt;a href="https://docs.aws.amazon.com/mgn/latest/ug/AWS-Related-FAQ.html#What-Conversion-Server-Do" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/mgn/latest/ug/AWS-Related-FAQ.html#What-Conversion-Server-Do&lt;/a&gt;&lt;br&gt;
The instance is terminated as soon as the conversion job finishes.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysiz3p9ez4zq0aljqo6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysiz3p9ez4zq0aljqo6b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Make sure the Alerts status is "Launched", which means the test/cutover EC2 instance launched successfully. Next, mark the server as "Ready for cutover":&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxhq9yjioai1upp7hme5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxhq9yjioai1upp7hme5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Launch the cutover instance and wait for it to be running:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8858wsblxq1srwt9z3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8858wsblxq1srwt9z3q.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
You can check the cutover job status here:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvk5kutfuk1gmqzf88li.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvk5kutfuk1gmqzf88li.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hb4ndnq7grtxn906tna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hb4ndnq7grtxn906tna.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Make sure the cutover finishes with a successfully launched cutover EC2 instance, like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb3gyp17nx1ds4p68t1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb3gyp17nx1ds4p68t1p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; I finalized the cutover by clicking on "Finalize cutover":&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftao5s852da9awt1qmp4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftao5s852da9awt1qmp4o.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu206tmsogcph9wtxf10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu206tmsogcph9wtxf10.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5ixakwznmzcxiza4b30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5ixakwznmzcxiza4b30.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I connected to the destination server (the cutover instance) to verify the data. It was amazing! All the data was replicated to the cutover instance, identical to the source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, I walked through my real case study of AWS Application Migration Service (MGN), which successfully replicated the data (applications and middleware) from the source server to a new one. It saved my customer a lot of effort compared to installing everything from scratch.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mgn</category>
      <category>ec2</category>
      <category>marketplace</category>
    </item>
  </channel>
</rss>
