<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Edward Allen Mercado</title>
    <description>The latest articles on DEV Community by Edward Allen Mercado (@edwardmercado).</description>
    <link>https://dev.to/edwardmercado</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F426494%2F047b6c85-bdff-42ae-9113-0515d53d6a16.jpg</url>
      <title>DEV Community: Edward Allen Mercado</title>
      <link>https://dev.to/edwardmercado</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/edwardmercado"/>
    <language>en</language>
    <item>
      <title>From Detection to Resolution: A Closed-Loop System for Managing AWS CloudFormation Drift</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Wed, 03 Dec 2025 03:50:47 +0000</pubDate>
      <link>https://dev.to/edwardmercado/from-detection-to-resolution-a-closed-loop-system-for-managing-aws-cloudformation-drift-4a9l</link>
      <guid>https://dev.to/edwardmercado/from-detection-to-resolution-a-closed-loop-system-for-managing-aws-cloudformation-drift-4a9l</guid>
      <description>&lt;p&gt;As cloud estates grow, maintaining the integrity of Infrastructure as Code (IaC) is a critical challenge. AWS CloudFormation provides the blueprint for our infrastructure, but the reality of day-to-day operations—manual hotfixes, temporary changes, and urgent interventions—inevitably leads to configuration drift. Detecting this drift is only half the battle. The real challenge, especially when managing hundreds of stacks, is prioritizing what to fix and cutting through the noise.&lt;/p&gt;

&lt;p&gt;What if you could move beyond simple alerts and build a closed-loop system that not only detects drift but allows your team to manage, acknowledge, and prioritize it, all from within your primary communication tools?&lt;/p&gt;

&lt;p&gt;This post details the architecture for just such a solution: an intelligent, interactive drift management system built on serverless AWS services.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution: An Interactive Drift Management Tool
&lt;/h3&gt;

&lt;p&gt;Instead of just another notification system that adds to alert fatigue, this solution creates an interactive workflow. It delivers actionable alerts that empower engineers to make decisions directly from Slack. By allowing teams to formally "Acknowledge" or "Ignore" a detected drift, the system brings order to the chaos, creating a clear audit trail and allowing teams to focus on what matters most.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architectural Blueprint: A Closed-Loop System
&lt;/h3&gt;

&lt;p&gt;This solution moves beyond simple notifications and creates a full, closed-loop system for managing configuration drift at scale. It’s built on a foundation of event-driven, serverless components that provide not just information, but control.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Trigger (AWS Config):&lt;/strong&gt; The process begins with the AWS Config service. Using a built-in rule named &lt;code&gt;cloudformation-stack-drift-detection-check&lt;/code&gt;, it continuously monitors your CloudFormation stacks. The moment a stack’s actual configuration deviates from its template, AWS Config flags it as &lt;code&gt;NON_COMPLIANT&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Router (Amazon EventBridge):&lt;/strong&gt; This &lt;code&gt;NON_COMPLIANT&lt;/code&gt; status is published as an event. An Amazon EventBridge rule is set up to specifically listen for these events from AWS Config. Upon catching one, it immediately forwards the event payload to our first AWS Lambda function for processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Notifier (AWS Lambda):&lt;/strong&gt; This first Lambda function acts as the initial alert mechanism. Triggered by the EventBridge event, it performs two key actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It first inspects the drifted stack to confirm it contains the &lt;code&gt;MONITOR_DRIFT&lt;/code&gt; tag with a value of &lt;code&gt;true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  If the tag is present, it constructs a rich notification—complete with "Acknowledge" and "Ignore" buttons—and sends it to a designated Slack channel, providing the team with immediate visibility and a direct call to action.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The State Manager (AWS Lambda, API Gateway &amp;amp; DynamoDB):&lt;/strong&gt; This is where the system becomes truly powerful. A second, distinct workflow handles the interactive state management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  An AWS Lambda function is responsible for persisting the details of drifted stacks into an Amazon DynamoDB table, creating a centralized source of truth.&lt;/li&gt;
&lt;li&gt;  When an engineer clicks "Acknowledge" or "Ignore" in the Slack message, the action is sent to an Amazon API Gateway endpoint.&lt;/li&gt;
&lt;li&gt;  This API Gateway call invokes our state manager Lambda, which then updates the corresponding stack's status in the DynamoDB table. This allows the team to manage priorities, reduce alert noise by ignoring known drifts, and maintain a clear audit trail.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
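&lt;p&gt;The notifier's gating logic can be sketched in a few lines of Python. This is an illustrative sketch, not the actual Lambda code: names such as &lt;code&gt;should_notify&lt;/code&gt; and &lt;code&gt;build_slack_alert&lt;/code&gt; are assumptions, and a real handler would fetch the stack's tags via the CloudFormation API before applying this check.&lt;/p&gt;

```python
# Hypothetical sketch of the notifier Lambda's tag gate. The tag name and
# Slack payload shape follow the article; function names are illustrative.

MONITOR_TAG = "MONITOR_DRIFT"

def should_notify(stack_tags):
    """Return True only if the stack opted in to drift monitoring."""
    return any(
        t.get("Key") == MONITOR_TAG and t.get("Value") == "true"
        for t in stack_tags
    )

def build_slack_alert(stack_name, account, region):
    """Assemble a minimal interactive Slack payload for the drift alert."""
    return {
        "channel": "your-drift-alerts-channel",
        "text": f"*Drift Detected in Stack: {stack_name}*",
        "attachments": [{
            "callback_id": "drift_action_callback",
            "fields": [
                {"title": "Account", "value": account, "short": True},
                {"title": "Region", "value": region, "short": True},
            ],
            "actions": [
                {"name": "acknowledge", "text": "Acknowledge", "type": "button", "value": "acknowledged"},
                {"name": "ignore", "text": "Ignore", "type": "button", "value": "ignored"},
            ],
        }],
    }

# Only tagged stacks produce an alert; everything else stays silent.
tags = [{"Key": "MONITOR_DRIFT", "Value": "true"}]
if should_notify(tags):
    alert = build_slack_alert("YourStackName", "123456789012", "us-east-1")
```

&lt;p&gt;Keeping the tag check first means untagged stacks never reach Slack, which is what keeps alert noise down at scale.&lt;/p&gt;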

&lt;h3&gt;
  
  
  Putting It Into Practice
&lt;/h3&gt;

&lt;p&gt;Enrolling a stack into this management system is incredibly simple. To enable drift detection and interactive alerts for any CloudFormation stack, you only need to perform one action:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add the tag &lt;code&gt;MONITOR_DRIFT&lt;/code&gt; with a value of &lt;code&gt;true&lt;/code&gt; to the stack.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once tagged, the stack is automatically picked up by the system. Any future drift will trigger the interactive notification in Slack, allowing your team to begin managing it immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Behind the Code: An Interactive Slack Message
&lt;/h3&gt;

&lt;p&gt;The key to this workflow is the interactive Slack message. Here’s a simplified look at how the JSON payload for a message with action buttons is constructed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// A simplified look at an interactive Slack message payload&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;slackMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-drift-alerts-channel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`*Drift Detected in Stack: YourStackName*`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;attachments&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;A drift from the expected template has been detected. Please review and choose an action.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are unable to choose an action.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;callback_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;drift_action_callback&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#F35B5B&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;attachment_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;default&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Account&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;123456789012&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;short&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Region&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;short&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;acknowledge&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acknowledge&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;acknowledged&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;primary&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                &lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ignore&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Ignore&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ignored&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This snippet illustrates how action buttons are added to a Slack message, enabling the interactive workflow.&lt;/em&gt;&lt;/p&gt;
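&lt;p&gt;On the receiving side, the state-manager Lambda parses the button click relayed by API Gateway and records the decision. Here is a hedged Python sketch: the payload key &lt;code&gt;stack_name&lt;/code&gt; and the plain dict standing in for the DynamoDB table are assumptions, and a real implementation would call &lt;code&gt;update_item&lt;/code&gt; via boto3.&lt;/p&gt;

```python
# Illustrative sketch of the state-manager Lambda, not the author's code.
# A dict stands in for the DynamoDB table to keep the logic testable.
import json
import time

VALID_ACTIONS = {"acknowledged", "ignored"}

def handle_slack_action(body, table):
    """Parse the Slack interaction payload and record the chosen status."""
    payload = json.loads(body)
    action = payload["actions"][0]["value"]           # "acknowledged" or "ignored"
    stack = payload.get("stack_name", "unknown")      # assumed to be embedded in the payload
    if action not in VALID_ACTIONS:
        return {"statusCode": 400, "body": "unsupported action"}
    # With boto3 this would be table.update_item(...) on the drift table.
    table[stack] = {"status": action, "updated_at": int(time.time())}
    return {"statusCode": 200, "body": f"Stack {stack} marked as {action}"}
```

&lt;p&gt;Because every click lands in the table with a timestamp, the same records double as the audit trail the article describes.&lt;/p&gt;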

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Effective infrastructure management at scale requires moving beyond passive detection to active resolution. By creating a closed-loop, interactive system, you empower your engineers to manage CloudFormation drift efficiently, directly from the tools they use every day. This architecture not only provides a robust audit trail and reduces alert fatigue but also fosters a more organized and prioritized approach to maintaining infrastructure integrity. It’s a powerful pattern for transforming a persistent operational challenge into a streamlined, manageable process.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>serverless</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Migrating Kubernetes volume contents to another</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Sat, 01 Feb 2025 06:54:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-kubernetes-volume-contents-to-another-1e04</link>
      <guid>https://dev.to/aws-builders/migrating-kubernetes-volume-contents-to-another-1e04</guid>
      <description>&lt;p&gt;When managing Kubernetes workloads, you may encounter scenarios where you must migrate data from one persistent volume to another. This can happen due to storage class changes, resizing constraints, cloud provider migrations, or performance optimizations. Ensuring a smooth transition while maintaining data integrity is crucial.&lt;/p&gt;

&lt;p&gt;In my experience, migrating persistent volume (PV) data in Kubernetes can be complex, especially when dealing with large datasets or ensuring minimal downtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PV-Migrate&lt;/strong&gt; is an open-source tool designed to simplify this process by providing a reliable, automated way to transfer data between two persistent volumes, whether they live in the same namespace, different namespaces, or even different clusters. &lt;/p&gt;

&lt;p&gt;It leverages &lt;strong&gt;rsync&lt;/strong&gt; over Kubernetes jobs to efficiently copy data while preserving permissions, file structures, and symbolic links. It works seamlessly across different storage classes, allowing users to migrate data without needing manual intervention or external backup tools.&lt;/p&gt;

&lt;p&gt;This guide will explore the strategies to migrate Kubernetes &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; from one Amazon EKS (Elastic Kubernetes Service) cluster to another using PV-Migrate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;There are various installation methods for different operating systems. You can follow your relevant installation method here:&lt;br&gt;
&lt;a href="https://github.com/utkuozdemir/pv-migrate/blob/master/INSTALL.md" rel="noopener noreferrer"&gt;https://github.com/utkuozdemir/pv-migrate/blob/master/INSTALL.md&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;Once pv-migrate is installed, we can explore the command to start our migration. &lt;/p&gt;

&lt;p&gt;Here's what you need to know about the command, its usage, and its flags.&lt;br&gt;
&lt;a href="https://github.com/utkuozdemir/pv-migrate/blob/master/USAGE.md#usage" rel="noopener noreferrer"&gt;https://github.com/utkuozdemir/pv-migrate/blob/master/USAGE.md#usage&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Notable Flags
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--source&lt;/code&gt; and &lt;code&gt;--dest&lt;/code&gt; - Specify which &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; we're copying from and which one we're copying to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--source-kubeconfig&lt;/code&gt; and &lt;code&gt;--dest-kubeconfig&lt;/code&gt; - Specify the kubeconfig files of the clusters we're working with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--source-context&lt;/code&gt; and &lt;code&gt;--dest-context&lt;/code&gt; - Specify the kubeconfig context for each cluster; these go hand in hand with the flags above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--source-namespace&lt;/code&gt; and &lt;code&gt;--dest-namespace&lt;/code&gt; - Specify the namespaces where the source and destination &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; reside.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--helm-set&lt;/code&gt; - Pass extra rsync arguments and other &lt;code&gt;pv-migrate&lt;/code&gt; Helm configuration values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--strategies&lt;/code&gt; - Specify the order of strategies to attempt during the migration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Strategies
&lt;/h3&gt;

&lt;p&gt;PV-Migrate offers a variety of strategies for migrating your volume contents. During the migration it tries each strategy in order until one succeeds or all of them fail. The exception is &lt;code&gt;local&lt;/code&gt;, which is experimental at the moment and not attempted by default. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mnt2&lt;/code&gt; (&lt;strong&gt;Mount both&lt;/strong&gt;) - Mounts both PVCs in a single pod and runs a regular rsync, without using SSH or the network. Only applicable if source and destination PVCs are in the same namespace and both can be mounted from a single pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;svc&lt;/code&gt; (&lt;strong&gt;Service&lt;/strong&gt;) - Runs &lt;code&gt;rsync+ssh&lt;/code&gt; over a Kubernetes Service (ClusterIP). Only applicable when source and destination PVCs are in the same Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lbsvc&lt;/code&gt; (&lt;strong&gt;Load Balancer Service&lt;/strong&gt;) - Runs &lt;code&gt;rsync+ssh&lt;/code&gt; over a Kubernetes Service of type LoadBalancer. Always applicable (will fail if LoadBalancer IP is not assigned for a long period).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;local&lt;/code&gt; (&lt;strong&gt;Local Transfer&lt;/strong&gt;) - Runs sshd on both source and destination, then uses a combination of &lt;code&gt;kubectl&lt;/code&gt; port-forward logic and an SSH reverse proxy to tunnel all the traffic over the client device (the device which runs pv-migrate, e.g. your laptop). Requires ssh command to be available on the client device.&lt;br&gt;
Note that this strategy is experimental (and not enabled by default), potentially can put heavy load on both apiservers and is not as resilient as others. It is recommended for small amounts of data and/or when the only access to both clusters seems to be through kubectl (e.g. for air-gapped clusters, on jump hosts etc.).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/utkuozdemir/pv-migrate/blob/master/USAGE.md#strategies" rel="noopener noreferrer"&gt;https://github.com/utkuozdemir/pv-migrate/blob/master/USAGE.md#strategies&lt;/a&gt;&lt;/p&gt;
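&lt;p&gt;The fallback behaviour described above can be pictured as a simple loop: try each requested strategy in order and stop at the first one that succeeds. This Python sketch is illustrative only; the executor callables are stand-ins for pv-migrate's real strategy implementations.&lt;/p&gt;

```python
# Illustrative sketch of pv-migrate's strategy fallback: strategies are
# attempted in the requested order until one succeeds. Strategy names are
# real; the executor functions here are stand-ins.

def run_migration(strategies, executors):
    """executors maps a strategy name to a callable returning True on success."""
    for name in strategies:
        runner = executors.get(name)
        if runner is None:
            continue  # unknown strategy, skip it
        if runner():
            return name  # first successful strategy wins
    raise RuntimeError("all strategies failed")

# In a cross-cluster migration, mnt2 and svc are not applicable,
# so lbsvc is the one that ends up doing the work.
executors = {
    "mnt2": lambda: False,   # requires both PVCs mountable from one pod
    "svc": lambda: False,    # requires both PVCs in the same cluster
    "lbsvc": lambda: True,   # works across clusters via LoadBalancer
}
winner = run_migration(["mnt2", "svc", "lbsvc"], executors)
```

&lt;p&gt;This is why ordering matters in &lt;code&gt;--strategies&lt;/code&gt;: putting the most likely candidate first avoids wasted attempts.&lt;/p&gt;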
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;In our scenario, our EKS clusters live in different accounts and different VPCs, so there are additional steps to configure network accessibility compared to working inside a single cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configure EKS Cluster Config file
&lt;/h3&gt;

&lt;p&gt;You can generate or update your Kubernetes Config file using the below commands.&lt;/p&gt;

&lt;p&gt;Don't forget to modify the values of each flag to match your source and destination clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;region-code&amp;gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;eks-cluster-name&amp;gt; &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt; ./&amp;lt;kube-config-file-name&amp;gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;aws-config-profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;region-code&amp;gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;eks-cluster-name&amp;gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &amp;lt;aws-config-profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whether you generated or updated your kube-config file, you can either open the file or use the command below to find the contexts for your source and destination clusters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config get-contexts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Take note of the values of your source and destination &lt;code&gt;kube-config&lt;/code&gt; files and contexts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Networking
&lt;/h3&gt;

&lt;p&gt;As previously stated, our EKS clusters live in different accounts and different VPCs.&lt;/p&gt;

&lt;p&gt;In this scenario, we have a couple of options for configuring our networks: either the communication goes over the public internet, or we keep it private by leveraging the AWS backbone network.&lt;/p&gt;

&lt;p&gt;In our case, we chose the latter; our configuration is below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC Peering&lt;/strong&gt;&lt;br&gt;
We've established VPC Peering between our VPCs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route Table&lt;/strong&gt;&lt;br&gt;
We've modified the route tables of the relevant subnets, adding routes that direct traffic between the VPCs using their CIDRs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Groups&lt;/strong&gt;&lt;br&gt;
We've modified the control plane security group of each EKS cluster to allow communication with the other.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configure Permissions
&lt;/h3&gt;

&lt;p&gt;As our EKS clusters live in different accounts, we need to allow cross-account access between them. &lt;/p&gt;

&lt;p&gt;This simply means configuring either the &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap or EKS IAM access entries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using&lt;/strong&gt; &lt;code&gt;aws-auth&lt;/code&gt;&lt;br&gt;
Using &lt;code&gt;aws-auth&lt;/code&gt;, you can create Kubernetes Roles and RoleBindings to map the IAM users or roles you are using in this migration. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This option will be deprecated soon.&lt;/p&gt;

&lt;p&gt;Here's the example &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111122223333:role/my-role
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - eks-console-dashboard-full-access-group
      rolearn: arn:aws:iam::111122223333:role/my-console-viewer-role
      username: my-console-viewer-role
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::111122223333:user/admin
      username: admin
    - groups:
      - eks-console-dashboard-restricted-access-group
      userarn: arn:aws:iam::444455556666:user/my-user
      username: my-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details, see the AWS documentation &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using EKS IAM access entries&lt;/strong&gt;&lt;br&gt;
This is mostly similar to the former, but it can be managed from outside your EKS cluster. &lt;/p&gt;

&lt;p&gt;Fundamentally, an EKS access entry associates a set of Kubernetes permissions with an IAM identity, such as an IAM role.&lt;/p&gt;

&lt;p&gt;Considering that &lt;code&gt;aws-auth&lt;/code&gt; will be deprecated soon, it's better to use access entries.&lt;/p&gt;

&lt;p&gt;For more details, see the AWS documentation &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Migration
&lt;/h2&gt;

&lt;p&gt;Once the above steps are complete, we can start the migration. In the previous steps we gathered all the values needed; we just need to substitute them into the command below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Before starting the migration, consider stopping or scaling down the relevant Kubernetes resources to zero. This prevents new data from being written during the migration, ensuring data consistency and avoiding potential conflicts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pv-migrate &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--strategies&lt;/span&gt; &lt;span class="s2"&gt;"lbsvc"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--lbsvc-timeout&lt;/span&gt; 10m0s &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-timeout&lt;/span&gt; 10m0s &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt; rsync.extraArgs&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--ignore-times --checksum"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt; rsync.maxRetries&lt;span class="o"&gt;=&lt;/span&gt;20 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt; rsync.retryPeriodSeconds&lt;span class="o"&gt;=&lt;/span&gt;60 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-level&lt;/span&gt; &lt;span class="s2"&gt;"DEBUG"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-kubeconfig&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;source-kube-config-file&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-context&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;source-context&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-namespace&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;source-namespace&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;source-persistent-volume-claim-name&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-kubeconfig&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;destination-kube-config-file&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-context&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;destination-context&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-namespace&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;destination-namespace&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;destination-persistent-volume-claim-name&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dest-delete-extraneous-files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the command above, we've prioritized the &lt;code&gt;lbsvc&lt;/code&gt; strategy. As stated in the Strategies section, this implementation leverages a Kubernetes Service of type &lt;code&gt;LoadBalancer&lt;/code&gt;, which on AWS creates an AWS Load Balancer and performs the migration over it. &lt;/p&gt;

&lt;p&gt;However, this setup does not work as-is; we need to override the default values of &lt;code&gt;pv-migrate&lt;/code&gt; and tailor them to how the AWS Load Balancer interacts with the EKS cluster. &lt;/p&gt;

&lt;p&gt;Specifically, AWS takes time to create the Load Balancer and assign it an IP; sometimes Load Balancer creation takes up to 10 minutes to complete. Hence we've added a couple of &lt;code&gt;--helm-set&lt;/code&gt; flags.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--helm-set rsync.maxRetries=20&lt;/code&gt; and &lt;code&gt;--helm-set rsync.retryPeriodSeconds=60&lt;/code&gt; - These flags give EKS sufficient time to wait for AWS to assign an address to the Load Balancer. Since this process can take a while, the rsync job may initially be unable to reach the FQDN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;rsync.extraArgs="--ignore-times --checksum"&lt;/code&gt; - This ensures a reliable sync by verifying all file contents, even when timestamps match.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
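&lt;p&gt;The effect of these two retry settings can be pictured as a polling loop: 20 retries spaced 60 seconds apart give a 20-minute window, comfortably covering a 10-minute Load Balancer provisioning delay. The following is an assumption-laden Python sketch of that idea, not pv-migrate's internals; &lt;code&gt;probe&lt;/code&gt; and &lt;code&gt;sleep&lt;/code&gt; are stand-ins.&lt;/p&gt;

```python
# Hedged sketch: the rsync job keeps probing until the LoadBalancer
# hostname becomes reachable, retrying up to max_retries times with a
# fixed period between attempts.

def wait_until_reachable(probe, max_retries=20, period_seconds=60, sleep=None):
    """Retry probe() up to max_retries times; return the attempt index on success, -1 on failure."""
    sleep = sleep or (lambda seconds: None)  # injectable for testing
    for attempt in range(max_retries):
        if probe():
            return attempt
        sleep(period_seconds)
    return -1

# Total wait budget: 20 retries * 60 s = 1200 s, i.e. a 20-minute window.
budget_seconds = 20 * 60
```

&lt;p&gt;With the defaults pv-migrate ships, the job may give up before AWS finishes provisioning; raising the retry budget as above is what keeps the migration alive through that delay.&lt;/p&gt;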

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;PV-Migrate&lt;/strong&gt; is a powerful tool for migrating Persistent Volume Claims (PVCs) between EKS clusters across different AWS accounts. By leveraging rsync over Kubernetes jobs, it ensures efficient and reliable data transfer while preserving file integrity and permissions. Unlike manual methods or snapshot-based approaches, pv-migrate simplifies the migration process without requiring downtime or complex configurations.&lt;/p&gt;

&lt;p&gt;With its ability to handle cross-cluster and cross-account migrations, it's an excellent choice for EKS administrators looking for a seamless, automated way to transfer persistent data between environments.&lt;/p&gt;

&lt;p&gt;Before we embark further into our cloud journey, I invite you to stay connected with me on social media platforms. Follow along on &lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Let's continue this exploration together and build a thriving community of cloud enthusiasts. Join me on this exciting adventure!&lt;/p&gt;

</description>
      <category>eks</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>linux</category>
    </item>
    <item>
      <title>Unveiling the Kubernetes Resume Challenge: A Quest for Professional Growth - Extra Steps</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Sun, 03 Mar 2024 04:46:41 +0000</pubDate>
      <link>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-extra-steps-38pl</link>
      <guid>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-extra-steps-38pl</guid>
      <description>&lt;p&gt;Welcome back, fellow adventurers! In this new installment of my blog, we're diving deeper into the Cloud Resume Challenge, exploring the additional steps beyond the core requirements. &lt;/p&gt;

&lt;p&gt;Having successfully completed the initial challenge, I'm eager to share with you the next phase of my journey. These extra steps promise to further enrich our understanding of cloud technologies and push the boundaries of our skills.&lt;/p&gt;

&lt;p&gt;Join me as we embark on this exciting continuation!&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Extra Step 1: Package Everything in Helm
&lt;/h3&gt;

&lt;p&gt;I'm not very familiar with Helm; fortunately, Helm offers extensive and understandable &lt;a href="https://helm.sh/docs/"&gt;documentation&lt;/a&gt;, making it an invaluable resource for Kubernetes resource management.&lt;/p&gt;

&lt;p&gt;In this step, I realized the efficiency and convenience Helm brings to the table compared to manually creating Kubernetes resource definition files. While creating &lt;code&gt;.yaml&lt;/code&gt; files was essential for laying the foundation of Kubernetes resource creation, Helm allows us to recycle these files and manage them more gracefully.&lt;/p&gt;

&lt;p&gt;One of the standout features of Helm is its ability to group Kubernetes resources as needed and handle their management seamlessly. By utilizing the &lt;code&gt;values.yaml&lt;/code&gt; file, we can define dynamic data for each Kubernetes resource, enhancing flexibility and convenience.&lt;/p&gt;
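&lt;p&gt;As a small illustration, a &lt;code&gt;values.yaml&lt;/code&gt; could expose the image tag and replica count so the templates stay generic; the keys and values below are hypothetical, not taken from the actual chart:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# values.yaml (hypothetical keys for illustration)
image:
  repository: &amp;lt;your_docker_repo&amp;gt;
  tag: v1
replicaCount: 2

# referenced in a template, e.g. templates/deployment.yaml:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
#   replicas: {{ .Values.replicaCount }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;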

&lt;p&gt;To dive into the world of Helm, you can start by creating, packaging, and deploying your own Helm chart using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm create deis-workflow
helm package deis-workflow
helm install deis-workflow ./deis-workflow-0.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;These commands, straight from the Helm documentation, provide a solid starting point for exploring Helm and its capabilities.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Extra Step 2: Implement Persistent Storage
&lt;/h3&gt;

&lt;p&gt;Throughout the Cloud Resume Challenge, I encountered scenarios where modifying the database required either recreating the &lt;code&gt;Deployment&lt;/code&gt; or experiencing database &lt;code&gt;Pod&lt;/code&gt; restarts. In both cases, all previously applied configurations in the database would be lost, essentially resetting it to a blank slate.&lt;/p&gt;

&lt;p&gt;To address these scenarios, I realized the importance of implementing persistent storage for our database. With the assistance of Kubernetes resources such as &lt;code&gt;PersistentVolume&lt;/code&gt; and &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;, we can ensure that the data in our database remains persistent, regardless of &lt;code&gt;Deployment&lt;/code&gt; recreation or &lt;code&gt;Pod&lt;/code&gt; restarts.&lt;/p&gt;

&lt;p&gt;The outcome of this step is significant: the lifecycle of the database becomes separated from the storage itself, ensuring that our data will be retained (per the &lt;code&gt;Retain&lt;/code&gt; reclaim policy) and remain accessible even amid infrastructure changes or failures.&lt;/p&gt;
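&lt;p&gt;A minimal sketch of such a claim, assuming the cluster has a default StorageClass (the name and size are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# In the database Deployment's Pod spec, the claim is mounted as a volume:
#   volumes:
#     - name: db-data
#       persistentVolumeClaim:
#         claimName: db-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;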
&lt;h3&gt;
  
  
  Extra Step 3: Implement Basic CI/CD Pipeline
&lt;/h3&gt;

&lt;p&gt;In this phase, we'll streamline the build and deployment process of our resources, extending beyond just our Docker Image and Helm Charts. To achieve this, we'll leverage GitHub Actions, a powerful automation tool provided by GitHub.&lt;/p&gt;

&lt;p&gt;Here are some of the GitHub Marketplace Actions that I've utilized to accomplish these tasks. I hope you find them as useful as I did. It's worth noting that there are several ways to achieve these steps, whether by using different GitHub Marketplace Actions or running your own custom commands.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/docker"&gt;
        docker
      &lt;/a&gt; / &lt;a href="https://github.com/docker/login-action"&gt;
        login-action
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      GitHub Action to login against a Docker registry
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a href="https://github.com/docker/login-action/releases/latest"&gt;&lt;img src="https://camo.githubusercontent.com/ca9fa1f18bb3e3e0ed7cb14c7b819594e30e4b32461a2151824e6e8917e2057d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f72656c656173652f646f636b65722f6c6f67696e2d616374696f6e2e7376673f7374796c653d666c61742d737175617265" alt="GitHub release"&gt;&lt;/a&gt;
&lt;a href="https://github.com/marketplace/actions/docker-login"&gt;&lt;img src="https://camo.githubusercontent.com/b06277126c93824a7aad561e47f4813708e4ef75979005b76ac176f723dca320/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6d61726b6574706c6163652d646f636b65722d2d6c6f67696e2d626c75653f6c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="GitHub marketplace"&gt;&lt;/a&gt;
&lt;a href="https://github.com/docker/login-action/actions?workflow=ci"&gt;&lt;img src="https://camo.githubusercontent.com/399202c892eaaf9ba907061a9bd8fd9774c3dc99608668bc8e9b21c21f7752f0/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f646f636b65722f6c6f67696e2d616374696f6e2f63692e796d6c3f6272616e63683d6d6173746572266c6162656c3d6369266c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="CI workflow"&gt;&lt;/a&gt;
&lt;a href="https://github.com/docker/login-action/actions?workflow=test"&gt;&lt;img src="https://camo.githubusercontent.com/3fab809e957fbb12bb90b2f377046b921a77ec218c5c6cd4afa18cced5119772/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f646f636b65722f6c6f67696e2d616374696f6e2f746573742e796d6c3f6272616e63683d6d6173746572266c6162656c3d74657374266c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="Test workflow"&gt;&lt;/a&gt;
&lt;a href="https://codecov.io/gh/docker/login-action" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/ab4b4fc69a2cf9e7f437d54beb05a9c019a419e1ed96c224980db603bd4cb306/68747470733a2f2f696d672e736869656c64732e696f2f636f6465636f762f632f6769746875622f646f636b65722f6c6f67696e2d616374696f6e3f6c6f676f3d636f6465636f76267374796c653d666c61742d737175617265" alt="Codecov"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;About&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;GitHub Action to login against a Docker registry.&lt;/p&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/docker/login-action.github/docker-login.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WTJc6IRd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/docker/login-action.github/docker-login.png" alt="Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/login-action#usage"&gt;Usage&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#docker-hub"&gt;Docker Hub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#github-container-registry"&gt;GitHub Container Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#gitlab"&gt;GitLab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#azure-container-registry-acr"&gt;Azure Container Registry (ACR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#google-container-registry-gcr"&gt;Google Container Registry (GCR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#google-artifact-registry-gar"&gt;Google Artifact Registry (GAR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#aws-elastic-container-registry-ecr"&gt;AWS Elastic Container Registry (ECR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#aws-public-elastic-container-registry-ecr"&gt;AWS Public Elastic Container Registry (ECR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#oci-oracle-cloud-infrastructure-registry-ocir"&gt;OCI Oracle Cloud Infrastructure Registry (OCIR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#quayio"&gt;Quay.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/login-action#customizing"&gt;Customizing&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#inputs"&gt;inputs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/login-action#keep-up-to-date-with-github-dependabot"&gt;Keep up-to-date with GitHub Dependabot&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Usage&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Docker Hub&lt;/h3&gt;
&lt;/div&gt;

&lt;p&gt;When authenticating to &lt;a href="https://hub.docker.com" rel="nofollow"&gt;Docker Hub&lt;/a&gt; with GitHub Actions,
use a &lt;a href="https://docs.docker.com/docker-hub/access-tokens/" rel="nofollow"&gt;personal access token&lt;/a&gt;.
Don't use your account password.&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;ci&lt;/span&gt;

&lt;span class="pl-ent"&gt;on&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;push&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;branches&lt;/span&gt;: &lt;span class="pl-s"&gt;main&lt;/span&gt;

&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;login&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;runs-on&lt;/span&gt;: &lt;span class="pl-s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="pl-ent"&gt;steps&lt;/span&gt;:
      -
        &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Login to Docker Hub&lt;/span&gt;
        &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;docker/login-action@v3&lt;/span&gt;
        &lt;span class="pl-ent"&gt;with&lt;/span&gt;:
          &lt;span class="pl-ent"&gt;username&lt;/span&gt;: &lt;span class="pl-s"&gt;${{ secrets.DOCKERHUB_USERNAME }}&lt;/span&gt;
          &lt;span class="pl-ent"&gt;password&lt;/span&gt;: &lt;span class="pl-s"&gt;${{ secrets.DOCKERHUB_TOKEN }}&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;GitHub Container Registry&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;To authenticate to the &lt;a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry"&gt;GitHub Container Registry&lt;/a&gt;,
use the &lt;a href="https://docs.github.com/en/actions/reference/authentication-in-a-workflow"&gt;&lt;code&gt;GITHUB_TOKEN&lt;/code&gt;&lt;/a&gt;
secret.&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;ci&lt;/span&gt;
&lt;span class="pl-ent"&gt;on&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;push&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;branches&lt;/span&gt;: &lt;span class="pl-s"&gt;main&lt;/span&gt;

&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;login&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;runs-on&lt;/span&gt;: &lt;span class="pl-s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="pl-ent"&gt;steps&lt;/span&gt;:&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/docker/login-action"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/docker"&gt;
        docker
      &lt;/a&gt; / &lt;a href="https://github.com/docker/setup-buildx-action"&gt;
        setup-buildx-action
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      GitHub Action to set up Docker Buildx
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a href="https://github.com/docker/setup-buildx-action/releases/latest"&gt;&lt;img src="https://camo.githubusercontent.com/5b18f11b76263fc7448ec9e8769d1f0881183b5b56680c9b5032ce26dcd17ae6/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f72656c656173652f646f636b65722f73657475702d6275696c64782d616374696f6e2e7376673f7374796c653d666c61742d737175617265" alt="GitHub release"&gt;&lt;/a&gt;
&lt;a href="https://github.com/marketplace/actions/docker-setup-buildx"&gt;&lt;img src="https://camo.githubusercontent.com/0a1afd6da70dceac80a5ea1b1a18cf1d4b0b4a4390d719272a3e44aa6722a095/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6d61726b6574706c6163652d646f636b65722d2d73657475702d2d6275696c64782d626c75653f6c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="GitHub marketplace"&gt;&lt;/a&gt;
&lt;a href="https://github.com/docker/setup-buildx-action/actions?workflow=ci"&gt;&lt;img src="https://camo.githubusercontent.com/72dde81b5f2341fb9ca10482c757c0a65e3647256c624b7961b6b1005e8ffcf2/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f646f636b65722f73657475702d6275696c64782d616374696f6e2f63692e796d6c3f6272616e63683d6d6173746572266c6162656c3d6369266c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="CI workflow"&gt;&lt;/a&gt;
&lt;a href="https://github.com/docker/setup-buildx-action/actions?workflow=test"&gt;&lt;img src="https://camo.githubusercontent.com/c44c8b22f7f697a2f7a076beae4dc84dfa3a41cf7854727f5041228250a00ac7/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f646f636b65722f73657475702d6275696c64782d616374696f6e2f746573742e796d6c3f6272616e63683d6d6173746572266c6162656c3d74657374266c6f676f3d676974687562267374796c653d666c61742d737175617265" alt="Test workflow"&gt;&lt;/a&gt;
&lt;a href="https://codecov.io/gh/docker/setup-buildx-action" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/179e5917a6da47a83b721a4074aa1624162030ccea67e137126d969f0455ec88/68747470733a2f2f696d672e736869656c64732e696f2f636f6465636f762f632f6769746875622f646f636b65722f73657475702d6275696c64782d616374696f6e3f6c6f676f3d636f6465636f76267374796c653d666c61742d737175617265" alt="Codecov"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;About&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;GitHub Action to set up Docker &lt;a href="https://github.com/docker/buildx"&gt;Buildx&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This action will create and boot a builder that can be used in the following
steps of your workflow if you're using Buildx or the &lt;a href="https://github.com/docker/build-push-action/"&gt;&lt;code&gt;build-push&lt;/code&gt; action&lt;/a&gt;.
By default, the &lt;a href="https://docs.docker.com/build/building/drivers/docker-container/" rel="nofollow"&gt;&lt;code&gt;docker-container&lt;/code&gt; driver&lt;/a&gt;
will be used to be able to build multi-platform images and export cache using
a &lt;a href="https://github.com/moby/buildkit"&gt;BuildKit&lt;/a&gt; container.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/docker/setup-buildx-action.github/setup-buildx-action.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MsHB4Veg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/docker/setup-buildx-action.github/setup-buildx-action.png" alt="Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#usage"&gt;Usage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#configuring-your-builder"&gt;Configuring your builder&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/setup-buildx-action#customizing"&gt;Customizing&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#inputs"&gt;inputs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#outputs"&gt;outputs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#environment-variables"&gt;environment variables&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/docker/setup-buildx-action#notes"&gt;Notes&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#nodes-output"&gt;&lt;code&gt;nodes&lt;/code&gt; output&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/setup-buildx-action#contributing"&gt;Contributing&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Usage&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="highlight highlight-source-yaml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;ci&lt;/span&gt;

&lt;span class="pl-ent"&gt;on&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;push&lt;/span&gt;:

&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;:
  &lt;span class="pl-ent"&gt;buildx&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;runs-on&lt;/span&gt;: &lt;span class="pl-s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="pl-ent"&gt;steps&lt;/span&gt;:
      -
        &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Checkout&lt;/span&gt;
        &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;actions/checkout@v4&lt;/span&gt;
      -
        &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Add support for more platforms with QEMU (optional)&lt;/span&gt;
        &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; https://github.com/docker/setup-qemu-action&lt;/span&gt;
        &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Set up QEMU&lt;/span&gt;
        &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;docker/setup-qemu-action@v3&lt;/span&gt;
      -
        &lt;span class="pl-ent"&gt;name&lt;/span&gt;: &lt;span class="pl-s"&gt;Set up Docker Buildx&lt;/span&gt;
        &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;docker/setup-buildx-action@v3&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Configuring your builder&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/build/ci/github-actions/configure-builder/#version-pinning" rel="nofollow"&gt;Version pinning&lt;/a&gt;: Pin to a specific Buildx or BuildKit version&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/build/ci/github-actions/configure-builder/#buildkit-container-logs" rel="nofollow"&gt;BuildKit container logs&lt;/a&gt;: Enable BuildKit container logs for debugging…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/docker/setup-buildx-action"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Azure"&gt;
        Azure
      &lt;/a&gt; / &lt;a href="https://github.com/Azure/setup-helm"&gt;
        setup-helm
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Github Action for installing Helm
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Setup Helm&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Install a specific version of helm binary on the runner.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Example&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Acceptable values are latest or any semantic version string like v3.5.0. Use this action in a workflow to define which version of helm will be used. v2+ of this action only supports Helm 3.&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;- &lt;span class="pl-ent"&gt;uses&lt;/span&gt;: &lt;span class="pl-s"&gt;azure/setup-helm@v4.1.0&lt;/span&gt;
  &lt;span class="pl-ent"&gt;with&lt;/span&gt;:
     &lt;span class="pl-ent"&gt;version&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&amp;lt;version&amp;gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; default is latest (stable)&lt;/span&gt;
  &lt;span class="pl-ent"&gt;id&lt;/span&gt;: &lt;span class="pl-s"&gt;install&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-alert markdown-alert-note"&gt;
&lt;p class="markdown-alert-title"&gt;Note&lt;/p&gt;
&lt;p&gt;If something goes wrong with fetching the latest version the action will use the hardcoded default stable version (currently v3.13.3). If you rely on a certain version higher than the default, you should explicitly use that version instead of latest.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;The cached helm binary path is prepended to the PATH environment variable as well as stored in the helm-path output variable.
Refer to the action metadata file for details about all the inputs &lt;a href="https://github.com/Azure/setup-helm/blob/master/action.yml"&gt;https://github.com/Azure/setup-helm/blob/master/action.yml&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Contributing&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;This project welcomes contributions and suggestions. Most contributions require you…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Azure/setup-helm"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/bitovi"&gt;
        bitovi
      &lt;/a&gt; / &lt;a href="https://github.com/bitovi/github-actions-deploy-eks-helm"&gt;
        github-actions-deploy-eks-helm
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Deploy Helm charts to AWS EKS cluster&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;bitovi/github-actions-deploy-eks-helm&lt;/code&gt; deploys helm charts to an EKS Cluster&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Action Summary&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;This action deploys Helm charts to an EKS cluster, allowing ECR/OCI as sources, and handling plugin installation, using &lt;a href="https://github.com/alpine-docker/k8s"&gt;this awesome Docker image&lt;/a&gt; as base.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If your EKS cluster administrative access is in a private network, you will need to use a self hosted runner in that network to use this action.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you would like to deploy a backend app/service, check out our other actions:&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/marketplace/actions/deploy-docker-to-aws-ec2"&gt;Deploy Docker to EC2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Deploys a repo with a Dockerized application to a virtual machine (EC2) on AWS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/marketplace/actions/deploy-react-to-github-pages"&gt;Deploy React to GitHub Pages&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Builds and deploys a React application to GitHub Pages.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/marketplace/actions/deploy-static-site-to-aws-s3-cdn-r53"&gt;Deploy static site to AWS (S3/CDN/R53)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Hosts a static site in AWS S3 with CloudFront&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;br&gt;
&lt;p&gt;&lt;strong&gt;And more!&lt;/strong&gt;, check our &lt;a href="https://github.com/marketplace?category=&amp;amp;type=actions&amp;amp;verification=&amp;amp;query=bitovi"&gt;list of actions in the GitHub marketplace&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Need help or have questions?&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;This…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/bitovi/github-actions-deploy-eks-helm"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
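&lt;p&gt;Stitched together, a minimal workflow using these actions could look like the sketch below. The secrets, repository, region, cluster, and chart names are placeholders, the action version tags are simply the major versions current at the time of writing, and your own pipeline may wire these steps differently:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: &amp;lt;your_docker_repo&amp;gt;:${{ github.sha }}
      - name: Deploy Helm chart to EKS
        uses: bitovi/github-actions-deploy-eks-helm@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: &amp;lt;your_aws_region&amp;gt;
          cluster-name: &amp;lt;your_eks_cluster&amp;gt;
          chart-path: ./chart
          name: &amp;lt;your_release_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;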


&lt;p&gt;By automating our build and deployment workflows, we can ensure faster and more consistent releases, ultimately enhancing the efficiency and reliability of our development pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And with that, I conclude this challenge and reflect on the invaluable experiences gained throughout this journey. As I navigated each step, from setting up the infrastructure to fine-tuning the deployment, I encountered various obstacles and triumphs that deepened my understanding of cloud technologies.&lt;/p&gt;

&lt;p&gt;Moving forward, I carry with me the lessons learned and insights gained from this experience. I'm excited to continue exploring new avenues in Kubernetes, Containerization, Cloud Computing and further honing my skills.&lt;/p&gt;

&lt;p&gt;I trust that you've found this article helpful in some capacity. It's been a pleasure documenting my journey through the Cloud Resume Challenge and sharing insights and learnings along the way.&lt;/p&gt;

&lt;p&gt;If you have any feedback, questions, or suggestions for future topics, I'd love to hear from you. Feel free to reach out at &lt;a href="https://twitter.com/edwardmercado_"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/"&gt;LinkedIn&lt;/a&gt; and let's continue the conversation. Here's to more learning and growth ahead!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>php</category>
    </item>
    <item>
      <title>Unveiling the Kubernetes Resume Challenge: A Quest for Professional Growth - Part 2</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Sun, 03 Mar 2024 03:51:20 +0000</pubDate>
      <link>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-part-2-2pmd</link>
      <guid>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-part-2-2pmd</guid>
      <description>&lt;p&gt;As we embark on the second leg of our journey, it's time to take another step towards conquering the Kubernetes Resume Challenge! &lt;/p&gt;

&lt;p&gt;In Part 1, we laid the groundwork, setting up our Kubernetes cluster, configuring services, and preparing our database, covering Steps 1 to 5. &lt;/p&gt;

&lt;p&gt;Please join me again as we navigate through the challenges and triumphs that lie ahead, inching closer to completing this quest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 6: Implement Configuration Management
&lt;/h3&gt;

&lt;p&gt;In this step, our objective is to create a &lt;code&gt;ConfigMap&lt;/code&gt; with the data of &lt;code&gt;FEATURE_DARK_MODE&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;. Subsequently, we'll need to adjust our &lt;code&gt;app/&lt;/code&gt; code to accommodate this configuration by adapting to the value of the &lt;code&gt;Environment Variable&lt;/code&gt;. Finally, we'll modify our website &lt;code&gt;Deployment&lt;/code&gt; to include the &lt;code&gt;ConfigMap&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We'll implement the Dark Mode feature by creating a separate &lt;code&gt;.css&lt;/code&gt; file specifically for this mode. Our approach involves verifying that the &lt;code&gt;Environment Variable&lt;/code&gt; &lt;code&gt;FEATURE_DARK_MODE&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt;. Once confirmed, we'll render the dark mode style by linking the appropriate &lt;code&gt;.css&lt;/code&gt; file in our application.&lt;/p&gt;
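&lt;p&gt;The two pieces involved can be sketched as follows (the resource names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-toggle-config
data:
  FEATURE_DARK_MODE: "true"
---
# In the website Deployment's container spec, the value is surfaced
# as an environment variable:
#   env:
#     - name: FEATURE_DARK_MODE
#       valueFrom:
#         configMapKeyRef:
#           name: feature-toggle-config
#           key: FEATURE_DARK_MODE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;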

&lt;h3&gt;
  
  
  Step 7: Scale Your Application
&lt;/h3&gt;

&lt;p&gt;In this phase, we'll evaluate the scalability of our application. Can it gracefully handle increased traffic without manual intervention? &lt;/p&gt;

&lt;p&gt;To simulate a scale-up scenario, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deployment/&amp;lt;website_deployment_name&amp;gt; --replicas=6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command increases the number of &lt;code&gt;Pods&lt;/code&gt; in your &lt;code&gt;Deployment&lt;/code&gt; to 6. You can monitor the growing number of &lt;code&gt;Pods&lt;/code&gt; by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get po -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the scaling operation is complete, verify that the newly created &lt;code&gt;Pods&lt;/code&gt; are in the &lt;code&gt;Running&lt;/code&gt; state, then navigate to our website endpoint and confirm that it behaves as expected, without any errors, with the increased number of replicas.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8: Perform a Rolling Update
&lt;/h3&gt;

&lt;p&gt;In this phase, we'll enhance our website by adding a promotional banner to the &lt;code&gt;body&lt;/code&gt; of our web pages as part of our marketing campaign. To accomplish this, we'll need to modify the code in the &lt;code&gt;app/&lt;/code&gt; directory accordingly. Once the changes are made, we'll rebuild and push our Docker image with the new tag &lt;code&gt;v2&lt;/code&gt; to our Docker Hub repository.&lt;/p&gt;

&lt;p&gt;After updating the Docker image, we'll need to ensure that our website &lt;code&gt;Deployment&lt;/code&gt; is using the latest version. We can achieve this by either deleting and recreating the &lt;code&gt;Deployment&lt;/code&gt; or by executing the &lt;code&gt;set image&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image deployment/&amp;lt;web_deployment_name&amp;gt; &amp;lt;container_name&amp;gt;=&amp;lt;your_docker_repo&amp;gt;:&amp;lt;new_tag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my experience, there have been cases where changes made to the Docker image are not immediately reflected, even when referring to the correct tags. This can occur when we repeatedly push changes to the same Docker tag, such as &lt;code&gt;v1&lt;/code&gt;, without incrementing it. To mitigate this issue, I recommend using the &lt;code&gt;sha&lt;/code&gt; digest of the Docker image instead, as it uniquely identifies the exact image you pushed to your Docker repository. You can find this &lt;code&gt;sha&lt;/code&gt; in your repository or in the command-line output every time you push the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;v1: digest: sha256:eba9e3bb273a1b62fae7fe6de36956760b06daaa7d8b9e0b70cb054d13822623 size: 5113
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ft1lj7mg2fmzu6xw5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ft1lj7mg2fmzu6xw5g.png" alt="SHA Digest in Docker Hub" width="583" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9: Roll Back a Deployment
&lt;/h3&gt;

&lt;p&gt;Uh-oh! It seems that the banner we recently deployed has introduced a bug to our website. To rectify this issue, we'll need to roll back the deployment to a previous state.&lt;/p&gt;

&lt;p&gt;You can accomplish this by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout undo deployment/&amp;lt;website_deployment_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the rollout completes successfully, verify if the website returns to its previous state, without the banner, ensuring that the bug has been resolved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10: Autoscale Your Application
&lt;/h3&gt;

&lt;p&gt;Now that we've observed how our website behaves under increased traffic and during rollbacks, it's time to implement autoscaling to ensure optimal performance under varying workloads. To achieve this, we'll utilize the &lt;code&gt;Horizontal Pod Autoscaler&lt;/code&gt; resource.&lt;/p&gt;

&lt;p&gt;Simply execute the following command to implement autoscaling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl autoscale deployment &amp;lt;website_deployment_name&amp;gt; --cpu-percent=50 --min=2 --max=10.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
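&lt;p&gt;The same autoscaler can also be defined declaratively as a &lt;code&gt;HorizontalPodAutoscaler&lt;/code&gt; manifest. A sketch assuming a &lt;code&gt;Deployment&lt;/code&gt; named &lt;code&gt;ecom-web&lt;/code&gt; (note that the autoscaler needs the Metrics Server to be available in the cluster to read CPU utilization):&lt;br&gt;
&lt;/p&gt;

```yaml
# Equivalent HPA manifest; the Deployment name ecom-web is an assumption
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecom-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecom-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```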



&lt;p&gt;To verify that the &lt;code&gt;Horizontal Pod Autoscaler&lt;/code&gt; is functioning as expected, we can use tools like Apache Bench to generate traffic to our endpoint. First, install Apache Bench, then generate load against your website endpoint using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ab -n 100 -c 10 &amp;lt;website_endpoint&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can monitor the behavior of the &lt;code&gt;Horizontal Pod Autoscaler&lt;/code&gt; and your &lt;code&gt;Pods&lt;/code&gt; by executing the following commands in separate command line tabs or windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get hpa -w
kubectl get pod -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to observe how the autoscaler adjusts the number of Pods based on the generated load, ensuring optimal resource utilization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 11: Implement Liveness and Readiness Probes
&lt;/h3&gt;

&lt;p&gt;In this phase, we'll enhance the reliability of our website by adding liveness and readiness probes. The readiness probe verifies that a &lt;code&gt;Pod&lt;/code&gt; is ready to serve traffic before it receives requests, while the liveness probe lets Kubernetes restart the container if the application stops responding during its lifecycle.&lt;/p&gt;

&lt;p&gt;To achieve this, we'll modify the &lt;code&gt;Deployment&lt;/code&gt; definition to include these probes and then recreate our &lt;code&gt;Deployment&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In my case, I've used dedicated paths served by separate &lt;code&gt;php&lt;/code&gt; files to verify functionality, namely &lt;code&gt;/db_healthcheck.php&lt;/code&gt; and &lt;code&gt;/healthcheck.php&lt;/code&gt;.&lt;/p&gt;
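&lt;p&gt;A minimal sketch of how those endpoints can be wired into the website container spec. The port and timings are assumptions, as is which probe uses which endpoint; only the paths come from my setup:&lt;br&gt;
&lt;/p&gt;

```yaml
# Fragment of the website container spec; port, timings, and the
# probe-to-endpoint mapping are assumptions for illustration
livenessProbe:
  httpGet:
    path: /healthcheck.php
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /db_healthcheck.php
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```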

&lt;p&gt;If you'd like to try this on my website endpoint, simply append the mentioned path to the URL.&lt;/p&gt;

&lt;p&gt;This implementation ensures that our website is always responsive and maintains its availability, contributing to a seamless user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 12: Utilize ConfigMaps and Secrets
&lt;/h3&gt;

&lt;p&gt;In this phase, we revisit the implementation of our database and website to ensure secure management of database connection strings and feature toggles without hardcoding them in the application.&lt;/p&gt;

&lt;p&gt;As previously mentioned, we've already stored these database connection strings and configurations either in an &lt;code&gt;Environment Variable&lt;/code&gt; or a &lt;code&gt;ConfigMap&lt;/code&gt;, requiring only minor adjustments.&lt;/p&gt;

&lt;p&gt;To enhance security, we'll ensure that sensitive data is stored using &lt;code&gt;Secrets&lt;/code&gt;, while non-sensitive information can remain in a &lt;code&gt;ConfigMap&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We'll then modify the &lt;code&gt;Deployment&lt;/code&gt; definition to include both &lt;code&gt;ConfigMap&lt;/code&gt; and &lt;code&gt;Secrets&lt;/code&gt;, and subsequently recreate our &lt;code&gt;Deployment&lt;/code&gt;.&lt;/p&gt;
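&lt;p&gt;As a sketch, a &lt;code&gt;Secret&lt;/code&gt; for the database password and the corresponding references in the container spec could look like this. All names here are hypothetical, and the password is a placeholder:&lt;br&gt;
&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ecom-db-secret        # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: changeme       # placeholder; never commit real passwords
---
# Fragment of the website container spec mixing Secret and ConfigMap
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: ecom-db-secret
      key: DB_PASSWORD
envFrom:
- configMapRef:
    name: ecom-config         # hypothetical ConfigMap for non-sensitive settings
```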

&lt;p&gt;This approach ensures that our application's sensitive information remains protected, contributing to a more robust and secure deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there we have it, we've successfully completed the challenge! I hope you found value in following along with my blog. &lt;/p&gt;

&lt;p&gt;I'm immensely grateful for the opportunity to tackle this challenge, as it not only tested my technical skills but also fostered personal growth and resilience. The satisfaction of overcoming each hurdle and witnessing the evolution of my project fills me with a sense of accomplishment and pride.&lt;/p&gt;

&lt;p&gt;For me, this journey doesn't end here. I've also taken on the extra steps mentioned in the challenge, so stay tuned if you're interested in continuing to explore this challenge further with me!&lt;/p&gt;

&lt;p&gt;Before we embark further into our next journey, I invite you to stay connected with me on social media platforms. Follow along on &lt;a href="https://twitter.com/edwardmercado_"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>php</category>
    </item>
    <item>
      <title>Unveiling the Kubernetes Resume Challenge: A Quest for Professional Growth - Part 1</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Sat, 02 Mar 2024 17:38:16 +0000</pubDate>
      <link>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-part-1-nf5</link>
      <guid>https://dev.to/edwardmercado/unveiling-the-kubernetes-resume-challenge-a-quest-for-professional-growth-part-1-nf5</guid>
      <description>&lt;p&gt;As March unfolds its chapter in 2024, I was thrilled when I saw this LinkedIn post by &lt;a class="mentioned-user" href="https://dev.to/forrestbrazeal"&gt;@forrestbrazeal&lt;/a&gt;.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.linkedin.com/posts/forrestbrazeal_take-on-the-kubernetes-resume-challenge-activity-7167902297430159361-mIqi/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.licdn.com%2Fdms%2Fimage%2Fsync%2Fv2%2FD5627AQF9TFRBSKP30w%2Farticleshare-shrink_800%2Farticleshare-shrink_800%2F0%2F1711870041208%3Fe%3D2147483647%26v%3Dbeta%26t%3DH3XSpAsqga_RlJHSDrIZs7HnyaiFo0XtKdsbfViiMcM" height="auto" class="m-0"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.linkedin.com/posts/forrestbrazeal_take-on-the-kubernetes-resume-challenge-activity-7167902297430159361-mIqi/" rel="noopener noreferrer" class="c-link"&gt;
          Forrest Brazeal on LinkedIn: Take on the Kubernetes Resume Challenge
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Just one week left until the Kubernetes Resume Challenge kicks off on March 4th!

I&amp;amp;#39;ve heard from several experienced engineers who are excited about jumping…
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.licdn.com%2Faero-v1%2Fsc%2Fh%2Fal2o9zrvru7aqj8e1x2rzsrca"&gt;
        linkedin.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;And from that post, I knew it would be a tale of exploration, learning, and growth in the vast expanse of cloud computing, a realm where innovation meets opportunity.&lt;/p&gt;

&lt;p&gt;Join me as I unveil the beginning of this adventure, a journey fueled by passion, curiosity, and the desire to elevate my skills to new heights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;We have to deploy an e-commerce website. This is a modern web application that poses challenges around scalability, consistency, and availability. To address these, we've opted for a solution that harnesses containerization managed by Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In this challenge, certain prerequisites are necessary, outlined in detail &lt;a href="https://cloudresumechallenge.dev/docs/extensions/kubernetes-challenge/#prerequisites" rel="noopener noreferrer"&gt;here&lt;/a&gt;. However, among these, the most crucial for me is gaining familiarity with the Application Source Code, accessible &lt;a href="https://github.com/kodekloudhub/learning-app-ecommerce" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;Before commencing each step, it's crucial to fulfill all necessary requirements as progression to subsequent steps may be impeded otherwise. However, Step 1 might be an exception to this rule.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Certification
&lt;/h3&gt;

&lt;p&gt;I'm fortunate to have this certification already secured, yet it's been a while since I last delved into Kubernetes. This challenge serves as a refresher, ensuring I'm up to speed with this essential technology.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.credly.com/badges/30f35ca8-24b8-49c2-92cb-5624e3278bd3/public_url" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.credly.com%2Fimages%2F8b8ed108-e77d-4396-ac59-2504583b9d54%2Flinkedin_thumb_cka_from_cncfsite__281_29.png" height="auto" class="m-0"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.credly.com/badges/30f35ca8-24b8-49c2-92cb-5624e3278bd3/public_url" rel="noopener noreferrer" class="c-link"&gt;
          CKA: Certified Kubernetes Administrator - Credly
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Earners of this designation demonstrated the skills, knowledge and competencies to perform the responsibilities of a Kubernetes Administrator. Earners demonstrated proficiency in Application Lifecycle Management, Installation, Configuration &amp;amp; Validation, Core Concepts, Networking, Scheduling, Security, Cluster Maintenance, Logging / Monitoring, Storage, and Troubleshooting
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
        credly.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Step 2: Containerize Your E-Commerce Website and Database
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Web Application Containerization
&lt;/h4&gt;

&lt;p&gt;In this step, our task is to craft our own Docker image starting from the base image of &lt;code&gt;php:7.4-apache&lt;/code&gt; and configure the essential components. Fortunately, the provided hints are thorough, guiding us through the process. Let's proceed by crafting a Dockerfile to translate these hints into commands.&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;Dockerfile&lt;/code&gt; will then be built into a Docker image, which we'll push to our &lt;a href="https://hub.docker.com/repositories/edwardallen" rel="noopener noreferrer"&gt;Docker Hub account&lt;/a&gt;. You can use these commands to achieve this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build Docker Image
docker build -t "&amp;lt;docker_username&amp;gt;/&amp;lt;repositry_name&amp;gt;:&amp;lt;tag&amp;gt;" . 

# Push to Docker Hub
docker push "&amp;lt;docker_username&amp;gt;/&amp;lt;repositry_name&amp;gt;:&amp;lt;tag&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Database Containerization
&lt;/h4&gt;

&lt;p&gt;For our Database component, we won't need to create a custom Docker image; instead, we'll simply pull an image from the Public DockerHub.&lt;/p&gt;

&lt;p&gt;Notice that there is a &lt;code&gt;db-load-script.sql&lt;/code&gt; script, it's essential to understand its functionality before proceeding confidently. Let's delve into its purpose to ensure we're well-prepared for the next steps.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE ecomdb;
CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;

INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 3: Set Up Kubernetes on a Public Cloud Provider
&lt;/h3&gt;

&lt;p&gt;We're now at the stage where we must set up our Kubernetes cluster. For this, I've opted for AWS (EKS). I've taken the initiative to create the necessary resources, starting from the AWS VPC components and extending up to the EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reminder:&lt;/strong&gt; In the EKS cluster setup, ensure that you have permission to assume the IAM role with which you'll create the cluster, as you will not be able to access the cluster without it, unless you configure the &lt;code&gt;authentication_mode&lt;/code&gt; to &lt;code&gt;API_AND_CONFIG_MAP&lt;/code&gt; and &lt;code&gt;cluster_endpoint_public_access&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Please be patient as the creation of the EKS cluster may take between 10 to 20 minutes.&lt;/p&gt;

&lt;p&gt;Once the cluster is successfully created, you'll want to verify your access by ensuring that you can execute &lt;code&gt;kubectl&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;To connect to the cluster, add a new cluster context to your &lt;code&gt;kubeconfig&lt;/code&gt; file. You can achieve this by using the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region &amp;lt;aws_region&amp;gt; --name &amp;lt;cluster_name&amp;gt; --profile &amp;lt;profile_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you're utilizing an &lt;code&gt;EKSClusterCreatorRole&lt;/code&gt; IAM Role, you can assume the role and execute the aforementioned command. An effective tool for this purpose is &lt;strong&gt;aws-vault&lt;/strong&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/99designs" rel="noopener noreferrer"&gt;
        99designs
      &lt;/a&gt; / &lt;a href="https://github.com/99designs/aws-vault" rel="noopener noreferrer"&gt;
        aws-vault
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A vault for securely storing and accessing AWS credentials in development environments
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AWS Vault&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://github.com/99designs/aws-vault/releases" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2fc629180cf3f6175bc45f4f96e974ce1f42687839ac584e961db0aa0fd79c17/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f646f776e6c6f6164732f393964657369676e732f6177732d7661756c742f746f74616c2e737667" alt="Downloads"&gt;&lt;/a&gt;
&lt;a href="https://github.com/99designs/aws-vault/actions" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/99designs/aws-vault/workflows/Continuous%20Integration/badge.svg" alt="Continuous Integration"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;AWS Vault is a tool to securely store and access AWS credentials in a development environment.&lt;/p&gt;
&lt;p&gt;AWS Vault stores IAM credentials in your operating system's secure keystore and then generates temporary credentials from those to expose to your shell and applications. It's designed to be complementary to the AWS CLI tools, and is aware of your &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files" rel="nofollow noopener noreferrer"&gt;profiles and configuration in &lt;code&gt;~/.aws/config&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Check out the &lt;a href="https://99designs.com.au/tech-blog/blog/2015/10/26/aws-vault/" rel="nofollow noopener noreferrer"&gt;announcement blog post&lt;/a&gt; for more details.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Installing&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;You can install AWS Vault:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;by downloading the &lt;a href="https://github.com/99designs/aws-vault/releases/latest" rel="noopener noreferrer"&gt;latest release&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;on macOS with &lt;a href="https://formulae.brew.sh/cask/aws-vault" rel="nofollow noopener noreferrer"&gt;Homebrew Cask&lt;/a&gt;: &lt;code&gt;brew install --cask aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on macOS with &lt;a href="https://ports.macports.org/port/aws-vault/summary" rel="nofollow noopener noreferrer"&gt;MacPorts&lt;/a&gt;: &lt;code&gt;port install aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on Windows with &lt;a href="https://chocolatey.org/packages/aws-vault" rel="nofollow noopener noreferrer"&gt;Chocolatey&lt;/a&gt;: &lt;code&gt;choco install aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on Windows with &lt;a href="https://scoop.sh/" rel="nofollow noopener noreferrer"&gt;Scoop&lt;/a&gt;: &lt;code&gt;scoop install aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on Linux with &lt;a href="https://formulae.brew.sh/formula/aws-vault" rel="nofollow noopener noreferrer"&gt;Homebrew on Linux&lt;/a&gt;: &lt;code&gt;brew install aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on &lt;a href="https://www.archlinux.org/packages/community/x86_64/aws-vault/" rel="nofollow noopener noreferrer"&gt;Arch Linux&lt;/a&gt;: &lt;code&gt;pacman -S aws-vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;on &lt;a href="https://github.com/gentoo/guru/tree/master/app-admin/aws-vault" rel="noopener noreferrer"&gt;Gentoo Linux&lt;/a&gt;: &lt;code&gt;emerge --ask app-admin/aws-vault&lt;/code&gt; (&lt;a href="https://wiki.gentoo.org/wiki/Project:GURU/Information_for_End_Users" rel="nofollow noopener noreferrer"&gt;enable Guru first&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;on &lt;a href="https://www.freshports.org/security/aws-vault/" rel="nofollow noopener noreferrer"&gt;FreeBSD&lt;/a&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/99designs/aws-vault" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Step 4: Deploy Your Website to Kubernetes
&lt;/h3&gt;

&lt;p&gt;In this step, I've generated a Kubernetes definition file to instantiate a Kubernetes &lt;code&gt;Deployment&lt;/code&gt; resource. This deployment utilizes the Docker image we previously pushed to our Docker Hub repository.&lt;/p&gt;

&lt;p&gt;In this step, we've also set up another &lt;code&gt;Deployment&lt;/code&gt; resource to host our &lt;code&gt;mariadb&lt;/code&gt; image. However, configuring this resource involves additional steps, such as specifying the &lt;code&gt;ROOT PASSWORD&lt;/code&gt; for the database and setting up the &lt;code&gt;db-load-script.sql&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To set the &lt;code&gt;ROOT PASSWORD&lt;/code&gt;, you can define your desired value as an &lt;code&gt;Environment Variable&lt;/code&gt; named &lt;code&gt;MYSQL_ROOT_PASSWORD&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As for the &lt;code&gt;db-load-script.sql&lt;/code&gt;, we've created a &lt;code&gt;ConfigMap&lt;/code&gt; Kubernetes resource to store its data. &lt;/p&gt;
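&lt;p&gt;A sketch of how that &lt;code&gt;ConfigMap&lt;/code&gt; can be mounted into the database container so the script runs at first startup. The resource names here are hypothetical; the official &lt;code&gt;mariadb&lt;/code&gt; image executes scripts placed in &lt;code&gt;/docker-entrypoint-initdb.d&lt;/code&gt; when the data directory is initialized:&lt;br&gt;
&lt;/p&gt;

```yaml
# Fragment of the database Deployment's Pod spec; names are hypothetical
spec:
  containers:
  - name: ecom-db
    image: mariadb
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme           # placeholder; prefer a Secret in practice
    volumeMounts:
    - name: db-init
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: db-init
    configMap:
      name: db-load-script      # ConfigMap holding db-load-script.sql
```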

&lt;p&gt;A useful trick to streamline this process is employing &lt;code&gt;kubectl&lt;/code&gt; commands to automatically generate the Kubernetes definition file. For instance, if you wish to create a Deployment definition file, you can execute the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deploy --image=busybox sample --dry-run=client -o yaml &amp;gt; sample.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will generate a &lt;code&gt;sample.yaml&lt;/code&gt; file and below are the contents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sample
  name: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sample
    spec:
      containers:
      - image: busybox
        name: busybox
        resources: {}
status: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you want to get the properties of an existing resource, you can do this command instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy &amp;lt;existing_deployment_name&amp;gt; -o yaml &amp;gt; sample.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; These commands are not limited to the &lt;code&gt;Deployment&lt;/code&gt; resource; you can also use them with other resources such as &lt;code&gt;Pod&lt;/code&gt;, &lt;code&gt;Service&lt;/code&gt;, etc.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure the Database
&lt;/h4&gt;

&lt;p&gt;In line with best practices, it's recommended to deploy the database before the website. This allows for thorough testing of the database configuration and the creation of a new &lt;code&gt;Database User&lt;/code&gt;, as it's considered best practice to avoid using &lt;code&gt;root&lt;/code&gt; for day-to-day tasks.&lt;/p&gt;

&lt;p&gt;Once the database &lt;code&gt;Deployment&lt;/code&gt; is created, you can remotely connect to the &lt;code&gt;Pod&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once connected, log in to the database.&lt;/p&gt;

&lt;p&gt;Check if there are &lt;code&gt;Database&lt;/code&gt; created, specifically, the &lt;code&gt;ecomdb&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--------------------+
| Database           |
+--------------------+
| ecomdb             |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check whether data has been inserted into the &lt;code&gt;ecomdb&lt;/code&gt; database.&lt;/p&gt;

&lt;p&gt;Check whether the &lt;code&gt;products&lt;/code&gt; table has been created and contains the expected rows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All of these predefined data are created by the &lt;code&gt;db-load-script.sql&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once everything is verified, create a Database user.&lt;/p&gt;
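&lt;p&gt;A minimal sketch of the statements involved; the username, password, and host pattern are all hypothetical, and you may want to narrow the grants further:&lt;br&gt;
&lt;/p&gt;

```sql
-- Hypothetical user and password; scope grants to the ecomdb database only
CREATE USER 'ecomuser'@'%' IDENTIFIED BY 'changeme';
GRANT SELECT, INSERT, UPDATE, DELETE ON ecomdb.* TO 'ecomuser'@'%';
FLUSH PRIVILEGES;
```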

&lt;p&gt;Take note of the username and password given to this user, as the website &lt;code&gt;Pod&lt;/code&gt; will use these credentials to connect to the database.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure the Website
&lt;/h4&gt;

&lt;p&gt;With our database preparations complete, our website now possesses the requisite variables for authenticating with the database.&lt;/p&gt;

&lt;p&gt;However, before proceeding, we need to make adjustments in the &lt;code&gt;app/&lt;/code&gt; &lt;code&gt;index.php&lt;/code&gt; file to ensure our PHP application can fetch the database connection strings that we will provide via &lt;code&gt;Environment Variables&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;These &lt;code&gt;Environment Variables&lt;/code&gt; will then be defined in our &lt;code&gt;Deployment&lt;/code&gt; definition file. &lt;/p&gt;
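&lt;p&gt;A sketch of how those variables can be declared in the website container spec. The variable names, Service name, and values are assumptions for illustration, not the challenge's exact names:&lt;br&gt;
&lt;/p&gt;

```yaml
# Fragment of the website container spec; all names are assumptions
env:
- name: DB_HOST
  value: ecom-db-service   # hypothetical Service name for the database
- name: DB_NAME
  value: ecomdb
- name: DB_USER
  value: ecomuser          # the non-root user created in the database step
- name: DB_PASSWORD
  value: changeme          # placeholder; Step 12 moves this into a Secret
```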

&lt;p&gt;This step is undeniably laborious, yet pivotal in ensuring the functionality of our web application. I'm grateful for the sparse hints offered in the challenge steps, as they motivated me to explore diverse strategies to surmount this obstacle. Completing it makes the rest of the journey even more exhilarating!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Expose Your Website
&lt;/h3&gt;

&lt;p&gt;Now, it's time to set up a Kubernetes &lt;code&gt;Service&lt;/code&gt; to make our &lt;code&gt;Deployment&lt;/code&gt; accessible. We'll opt for a &lt;code&gt;LoadBalancer&lt;/code&gt; type &lt;code&gt;Service&lt;/code&gt;, which will generate an AWS Load Balancer to expose our Web Application beyond the confines of our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;It's crucial to ensure the &lt;code&gt;selector&lt;/code&gt; section in your definition file contains the correct values. Below is a sample Service definition file for your reference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: &amp;lt;name_of_service&amp;gt;
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    &amp;lt;label_key_of_webapp_pod&amp;gt;: &amp;lt;label_value_of_webapp_pod&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though it's not mentioned in the step, I think it's also beneficial to create another &lt;code&gt;Service&lt;/code&gt; for our database &lt;code&gt;Deployment&lt;/code&gt; resource so it can be reached in a consistent manner using the &lt;code&gt;Service&lt;/code&gt; endpoint. &lt;/p&gt;
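&lt;p&gt;A sketch of such a database &lt;code&gt;Service&lt;/code&gt;, kept internal to the cluster as a &lt;code&gt;ClusterIP&lt;/code&gt; type; the name and selector labels are hypothetical and must match your database &lt;code&gt;Pod&lt;/code&gt; labels:&lt;br&gt;
&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ecom-db-service      # hypothetical name
spec:
  type: ClusterIP            # internal only; no external load balancer needed
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: ecom-db             # must match the database Pod labels
```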

&lt;p&gt;Once the website &lt;code&gt;Service&lt;/code&gt; is established, it generates a DNS endpoint through which you can access the web application. You can find this endpoint either by retrieving the details of the &lt;code&gt;Service&lt;/code&gt; or within the AWS Management Console as a Load Balancer.&lt;/p&gt;

&lt;p&gt;For instance, you can access my website at &lt;a href="http://a5df0e7e514b2432182bbc918d391c96-2143269091.us-east-1.elb.amazonaws.com/" rel="noopener noreferrer"&gt;this link&lt;/a&gt; as an example of such an endpoint.&lt;/p&gt;

&lt;p&gt;You might encounter several issues here, specifically regarding the authentication of our web application to our database, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR 1045 (28000): Access denied for user 'username'@ 'localhost' (using password: NO)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In such cases, it's crucial to exercise extra caution and ensure that you're providing the correct credentials as configured in the previous step.&lt;/p&gt;

&lt;p&gt;I'm concerned that the length of this blog post might make it feel tedious. However, despite being only halfway through our journey, each step we take brings us closer to completing this challenge.&lt;/p&gt;

&lt;p&gt;Before we embark further into our cloud journey, I invite you to stay connected with me on social media platforms. Follow along on &lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Let's continue this exploration together and build a thriving community of cloud enthusiasts. Join me on this exciting adventure!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>php</category>
    </item>
    <item>
      <title>AWS Client VPN Implementation for Private Networks</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Thu, 25 Jan 2024 13:50:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-client-vpn-implementation-for-private-networks-16oe</link>
      <guid>https://dev.to/aws-builders/aws-client-vpn-implementation-for-private-networks-16oe</guid>
      <description>&lt;p&gt;In the dynamic landscape of cloud computing, seamless and secure connectivity is paramount for organizations managing private networks. As businesses harness the power of AWS (Amazon Web Services), establishing a robust and secure communication channel becomes essential to access resources in a private network from virtually anywhere. Enter AWS Client VPN, a versatile solution that bridges the gap between remote users and private networks, ensuring a streamlined and secure connection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Imagine a complex network landscape where the need for a seamless and secure solution is paramount. Picture a scenario where all traffic must flow seamlessly from a dedicated Network or Shared Services AWS account to various private networks or Virtual Private Clouds (VPCs) residing in different AWS accounts, aptly named workload accounts.&lt;/p&gt;

&lt;p&gt;Each workload account boasts its own set of computing resources, necessitating strict isolation from one another. In this intricate web of connectivity, only the Network or Shared Services account should wield the power to communicate with each workload account. The catch? Workload accounts must remain oblivious to one another's existence, ensuring airtight segregation.&lt;/p&gt;

&lt;p&gt;This connectivity conundrum doesn't stop there – these workload accounts might find themselves dispersed across different AWS regions, adding an extra layer of complexity to the puzzle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn46bv3pu1tna5s0hgcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn46bv3pu1tna5s0hgcb.png" alt="The Architecture" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Now that we've laid eyes on the architectural diagram, it's time to roll up our sleeves and dive into the nitty-gritty of setting up the AWS services. This section is your backstage pass to the configuration wizardry that makes the entire system click.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Organizations (Optional)
&lt;/h3&gt;

&lt;p&gt;Let's create an AWS Organization. Ideally, in this scenario, you will create another account, called the Management Account, in which you will create this AWS Organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html" rel="noopener noreferrer"&gt;Creating and Configuring an Organization&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then invite the Shared Services or Network account and each of the workload accounts to this organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html" rel="noopener noreferrer"&gt;Inviting an Account To Your Organization&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Transit Gateway
&lt;/h3&gt;

&lt;p&gt;We need to create a Transit Gateway in the Network or Shared Services Account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html#step-create-tgw" rel="noopener noreferrer"&gt;Creating the Transit Gateway&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure to disable &lt;em&gt;Auto accept shared attachments&lt;/em&gt;. This is a best practice: when we later share this Transit Gateway with the other accounts using Resource Access Manager, each account will need to explicitly accept the shared resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2666xu7n1q4u52ys08y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2666xu7n1q4u52ys08y.png" alt="Transit Gateway" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Access Manager (RAM)
&lt;/h3&gt;

&lt;p&gt;We will use Resource Access Manager to share the Transit Gateway with each workload account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://repost.aws/knowledge-center/transit-gateway-sharing" rel="noopener noreferrer"&gt;Transit Gateway Resource Sharing&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Transit Gateway Attachments
&lt;/h3&gt;

&lt;p&gt;Now that the Transit Gateway has been seamlessly shared with the workload accounts, the next step is to establish the vital connections through attachments. Attachments play a pivotal role in associating the Transit Gateway with specific subnets within each Virtual Private Cloud (VPC).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html#create-vpc-attachment" rel="noopener noreferrer"&gt;Create a Transit Gateway Attachment to a VPC&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your architecture spans multiple workload accounts, repeat the process for each one. This ensures a comprehensive and interconnected network across all relevant accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; It is recommended to create a separate subnet in which to place your attachment.&lt;/p&gt;

&lt;p&gt;Once finished, all of the created attachments will be shown when you go back to the Transit Gateway Attachments console in the Shared Services or Network account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09719gfkupn2jmi7zhc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09719gfkupn2jmi7zhc0.png" alt="TGW Attachments" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  VPC Route Tables
&lt;/h3&gt;

&lt;p&gt;We now need to configure the routing in the VPC Route Table of each involved subnet in each AWS account, adding routes that forward the relevant traffic to the Transit Gateway.&lt;/p&gt;

&lt;p&gt;We need to modify the VPC Route Table for each account involved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/WorkWithRouteTables.html#AddRemoveRoutes" rel="noopener noreferrer"&gt;Add or Remove Routes from a Route Table&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvicgow4hjc0hzs68rorz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvicgow4hjc0hzs68rorz.png" alt="VPC RT" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, we've used the &lt;em&gt;10.0.0.0/8&lt;/em&gt; CIDR Block to cover the IP address ranges of all of the accounts. &lt;/p&gt;
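&lt;p&gt;In code, this route is a single &lt;code&gt;create_route&lt;/code&gt; call per route table, pointing the summary CIDR at the Transit Gateway (a sketch with placeholder IDs):&lt;/p&gt;

```python
def tgw_route_params(route_table_id, tgw_id, cidr="10.0.0.0/8"):
    """Build the request for EC2 create_route, forwarding the summary
    CIDR that covers all accounts to the Transit Gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": cidr,
        "TransitGatewayId": tgw_id,
    }

# import boto3
# boto3.client("ec2").create_route(**tgw_route_params("rtb-0abc", "tgw-0def"))
```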

&lt;p&gt;If you're worried this might confuse the routing, note that we can still steer traffic precisely later using the Transit Gateway Route Tables, and the &lt;em&gt;10.0.0.0/8&lt;/em&gt; CIDR block is spread across all of the accounts without overlap.&lt;/p&gt;


&lt;h3&gt;
  
  
  Transit Gateway Route Tables
&lt;/h3&gt;

&lt;p&gt;Now that the traffic is pointing to the Transit Gateway, we need to create the Transit Gateway Route Tables to manage the routing at the Transit Gateway level.&lt;/p&gt;

&lt;p&gt;Go back to your Shared Services or Network Account and create a Transit Gateway Route Table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html#create-tgw-route-table" rel="noopener noreferrer"&gt;Create Transit Gateway Route Table&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We need to create a Transit Gateway Route Table for each of the accounts involved.&lt;/p&gt;

&lt;p&gt;It should look like the below, where all of the Route Tables are created inside the Shared Services or Network Account only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk150f7u2g6andu2nlj38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk150f7u2g6andu2nlj38.png" alt="TGW RT" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Transit Gateway Route Table &lt;em&gt;Associations&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Now that the Transit Gateway Route Tables are created, we need to associate them with the Transit Gateway Attachments we created in the previous step.&lt;/p&gt;

&lt;p&gt;To put it simply, the Transit Gateway Route Table created for the Shared Services or Network account will be associated with the Transit Gateway Attachment we created for that account. The same applies to each workload account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html#associate-tgw-route-table" rel="noopener noreferrer"&gt;Associate a Transit Gateway Route Table&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Transit Gateway Route Table &lt;em&gt;Propagations&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Now that we've associated the Transit Gateway Route Tables with their respective attachments, we need to create the Transit Gateway Route Table &lt;em&gt;Propagations&lt;/em&gt; so that routes advertised to the Transit Gateway are dynamically added to the Transit Gateway Route Table &lt;em&gt;Routes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Dynamic propagation is especially beneficial for the Shared Services or Network Account, as it needs to learn the routes of every workload account in order to communicate with each of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Services or Network Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Select the Transit Gateway Route Table of the Shared Services or Network Account, go to the &lt;em&gt;Propagations&lt;/em&gt; tab, then &lt;em&gt;Create Propagation&lt;/em&gt; for each of the workload accounts' Transit Gateway Attachments. It will look like the below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3e4j8vb26wnwt8vnv7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3e4j8vb26wnwt8vnv7m.png" alt="TGW Propagations - SS" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The advertised routes will then appear in the &lt;em&gt;Routes&lt;/em&gt; tab, shown as &lt;em&gt;Propagated&lt;/em&gt; in the &lt;em&gt;Route Type&lt;/em&gt; column.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i70grt94wbv6akua4l3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i70grt94wbv6akua4l3.png" alt="TGW Routes - SS" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workload Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We also need to create &lt;em&gt;Propagations&lt;/em&gt; for each workload account, but only towards the Shared Services or Network Account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcgwlrxophjhxcjr0gj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcgwlrxophjhxcjr0gj2.png" alt="TGW Propagation - W" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The routes will also be added dynamically but only the routes of the Shared Services or Network Account. &lt;/p&gt;
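&lt;p&gt;A small sketch of the workload-side propagations: each workload route table propagates only from the Shared Services attachment, which is what keeps the workload accounts unaware of one another (IDs are placeholders):&lt;/p&gt;

```python
def propagation_params(tgw_route_table_id, attachment_id):
    """Build the request for EC2 enable_transit_gateway_route_table_propagation."""
    return {
        "TransitGatewayRouteTableId": tgw_route_table_id,
        "TransitGatewayAttachmentId": attachment_id,
    }

def workload_propagations(workload_rtb_ids, shared_services_attachment_id):
    """Each workload route table propagates only from the Shared Services
    attachment, preserving isolation between workload accounts."""
    return [propagation_params(rtb, shared_services_attachment_id)
            for rtb in workload_rtb_ids]

# import boto3
# ec2 = boto3.client("ec2")
# for params in workload_propagations(["tgw-rtb-1", "tgw-rtb-2"], "tgw-attach-ss"):
#     ec2.enable_transit_gateway_route_table_propagation(**params)
```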

&lt;h3&gt;
  
  
  Client VPN
&lt;/h3&gt;

&lt;p&gt;The Transit Gateway section is now finished; we next need to create the Client VPN.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, we will use mutual authentication for the Client VPN.&lt;/p&gt;

&lt;p&gt;During its creation, we need the below details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Client IPv4 CIDR&lt;br&gt;
This just needs to be a non-overlapping CIDR block from the CIDR blocks we’ve used earlier for the VPCs. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server Certificate ARN&lt;br&gt;
You can create this certificate using easyrsa; follow this &lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/mutual.html" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to generate a server certificate and upload it to AWS Certificate Manager. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC ID&lt;br&gt;
This needs to be the VPC ID to where you associated the Transit Gateway using the Transit Gateway Attachments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Group IDs&lt;br&gt;
The important thing is that the Security Group you associate has an outbound rule open to all traffic.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
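&lt;p&gt;Since an overlapping client CIDR is an easy mistake to make, a quick check with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module can confirm a candidate range is safe before you create the endpoint (the candidate CIDRs below are just examples):&lt;/p&gt;

```python
import ipaddress

def overlaps_any(client_cidr, vpc_cidrs):
    """Return True if the proposed client CIDR overlaps any VPC CIDR."""
    client = ipaddress.ip_network(client_cidr)
    return any(client.overlaps(ipaddress.ip_network(c)) for c in vpc_cidrs)

# Illustrative checks against the 10.0.0.0/8 space used by the VPCs:
print(overlaps_any("10.100.0.0/16", ["10.0.0.0/8"]))   # True: would clash
print(overlaps_any("172.16.0.0/22", ["10.0.0.0/8"]))   # False: safe to use
```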

&lt;p&gt;Once all the relevant configurations are filled, we can now create the Client VPN endpoint. &lt;/p&gt;

&lt;h4&gt;
  
  
  Client VPN &lt;em&gt;Target Network Associations&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;When creating the Client VPN, we only specified the VPC where the Client VPN will be hosted. Using the Target network associations, you need to associate the same VPC and the subnets where you want the Client VPN endpoints created. Select the VPC subnets we configured earlier that are associated with the relevant VPC Route Tables.&lt;/p&gt;

&lt;p&gt;You can only create one association per VPC subnet, and it's fine to create an association for each relevant VPC subnet.&lt;/p&gt;

&lt;h4&gt;
  
  
  Client VPN &lt;em&gt;Authorization Rules&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;We need to create rules to grant clients access to networks. Here you can specify the CIDR block and you can granularly grant access to either &lt;em&gt;Allow access to all users&lt;/em&gt; or &lt;em&gt;Allow access to users in a specific access group&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;For testing purposes, we’ve simply authorized the 0.0.0.0/0 CIDR block, allowing access to all networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e5ml785quyga8lvuwt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e5ml785quyga8lvuwt8.png" alt="CVPN - AR" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Client VPN &lt;em&gt;Route Table&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;We need to create routes at the Client VPN level for each CIDR block destination. Here you specify the CIDR block and select a Subnet ID (this refers back to the VPC subnet selected when we created the Target network associations). If you created multiple Target network associations for multiple VPC subnets, you will then need to create multiple Client VPN routes, which is the best practice.&lt;/p&gt;

&lt;p&gt;In this case, we’ve only created one Target network association, and we use it to route to multiple VPC CIDRs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qnv5z4ttc7gip7j3kcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qnv5z4ttc7gip7j3kcs.png" alt="CVPN - RT" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we’ve finished setting up the Client VPN and its network connectivity to all the accounts. We can now download the client configuration and set up the AWS VPN Client on our machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-working-endpoint-export.html#export-client-config-file" rel="noopener noreferrer"&gt;Download AWS VPN Client Configuration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-user/connect-aws-client-vpn-connect.html" rel="noopener noreferrer"&gt;Configure AWS VPN Client Configuration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Follow for the next part of this article to learn how to extend this solution to another AWS region&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For any queries, you can reach me at:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How Cloud Bridge uses Cloudamize to accelerate client workload migration to AWS</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Thu, 16 Mar 2023 09:02:57 +0000</pubDate>
      <link>https://dev.to/edwardmercado/how-cloud-bridge-uses-cloudamize-to-accelerate-client-workload-migration-to-aws-j6b</link>
      <guid>https://dev.to/edwardmercado/how-cloud-bridge-uses-cloudamize-to-accelerate-client-workload-migration-to-aws-j6b</guid>
      <description>&lt;p&gt;Nowadays, cloud technology is one of the biggest IT trends. This trend offers a lot of benefits, mostly in terms of cost savings, which is why a lot of enterprises are encouraged to use it.  &lt;/p&gt;

&lt;p&gt;When it comes to transforming and migrating various workloads to the cloud, enterprises have a longer road ahead of them than their smaller SME counterparts. The same question is at the top of many large corporations' list of concerns: how can we migrate workloads quickly and efficiently to meet business objectives? &lt;/p&gt;

&lt;p&gt;Migrating current on-premises infrastructure requires a significant amount of planning and analysis to achieve the client’s desired migration results. The size and complexity of the initial migration can be complicated for many enterprises. This process can be time-consuming and there’s a risk of exceeding predicted timeframes if that complexity is initially underestimated. &lt;/p&gt;

&lt;p&gt;That’s why the initial discovery and planning stages of a migration are so vital.  &lt;/p&gt;

&lt;p&gt;Cloudamize is an analytics platform that produces highly accurate diagnostics that significantly improve the speed, ease, and accuracy of migrating to AWS. These diagnostics will help you to make the right migration decision across your infrastructure migration and will help when building the optimal operating model for your organization.  &lt;/p&gt;

&lt;p&gt;This article describes how Cloud Bridge utilized the Cloudamize tool to migrate client workloads at pace. &lt;/p&gt;

&lt;p&gt;Cloud Bridge has designed a series of methodologies that build on the core migration logic of the Cloudamize tool. These agendas include the analysis of the platform, gathering of the relevant data, and evaluation of the deliverables.  &lt;/p&gt;

&lt;p&gt;Cloud Bridge is an AWS Advanced Consulting Partner with over 100 years of combined AWS expertise across technical teams. Cloud Bridge guides customers from all industries on their cloud journey, through our tailored professional management and cost-optimization services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform Overview
&lt;/h2&gt;

&lt;p&gt;For the first agenda, let us analyze the hardware or the workloads that are in scope for the migration.&lt;br&gt;&lt;br&gt;
The Cloudamize software needs to be installed during this process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation Methods
&lt;/h3&gt;

&lt;p&gt;Cloudamize can be installed on different Linux flavors and Windows Server OSes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Agent-based
&lt;/h4&gt;

&lt;p&gt;From a variety of Linux flavors and Windows Server OSes, the Cloudamize Agent collects performance measurements, application dependencies, and SQL data. We recommend using this installation method as this collects more information on the servers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Agentless
&lt;/h4&gt;

&lt;p&gt;The Cloudamize Agentless Data Collector (ADC) gathers performance data and application requirements from each detected server. Although the agent-based approach is still preferable, the agentless method can now also be used to discover SQL workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature Comparison between Agent-based vs Agentless&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopmfe2obv9tiftrtnn88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopmfe2obv9tiftrtnn88.png" alt="Agent-based vs Agentless"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How Cloudamize Analytics Engine works
&lt;/h3&gt;

&lt;p&gt;Once the Cloudamize software is installed on a server, the Data Analytics Engine will begin collecting high-level information about the machine’s CPU, RAM, running processes, interdependencies, firewall rules, SQL license editions/versions, and operating system. This data-gathering process takes two weeks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0owli3j1wo988p2ukiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0owli3j1wo988p2ukiw.png" alt="Cloudamize Analytics Engine "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Collection
&lt;/h2&gt;

&lt;p&gt;After two weeks of collecting the data from the servers, Cloudamize will generate an overview of the current state of the infrastructure. Using the gathered data, the tool will recommend several optimization options around the Storage, Compute, Database, Network, etc.&lt;/p&gt;

&lt;p&gt;The flow of the data from the servers to Cloudamize varies based on the installation method you have chosen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent-based Data flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz142l9haxiso8nmd21ft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz142l9haxiso8nmd21ft.png" alt="Agent-based Data flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Agentless Data flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg4u6ypw8aeuqz7noz62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg4u6ypw8aeuqz7noz62.png" alt="Agentless Data flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary Overview
&lt;/h3&gt;

&lt;p&gt;Based on the discovered data, Cloudamize will present design decks showing the mappings of your workload or hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware Mapping&lt;/strong&gt;&lt;br&gt;
This is a like-to-like mapping of system configurations to an equivalent AWS instance and storage size. This mapping is based on system hardware specifications (e.g., number of CPUs, CPU speed, assigned memory, disk size, etc.). Total Cost of Ownership (TCO) is estimated based on this configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workload Mapping&lt;/strong&gt;&lt;br&gt;
This takes system configurations into account and incorporates actual workload and usage characteristics. That data is then projected to an AWS environment. Mapping of instance sizes, storage, and network demand is provided and the TCO is estimated based on the suggested configuration.&lt;br&gt;
Cloud Bridge examines the generated design decks from Cloudamize then generates its own design based on the requirements from the clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deliverable Review
&lt;/h2&gt;

&lt;p&gt;Using the information gathered from the previous steps, Cloud Bridge presents the Optimization and License Assessment (OLA) analysis decks and distinctive designs which highlight the options that the client will benefit from. &lt;/p&gt;

&lt;p&gt;Based on the recommended optimizations, the data will be projected to an AWS environment. Mapping of instance sizes, storage, and network demand is provided and the TCO is estimated based on the suggested configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go or No-Go
&lt;/h2&gt;

&lt;p&gt;This is a crucial step for the client. After presenting several migration options, Cloud Bridge will let the client decide whether or not to migrate.&lt;/p&gt;

&lt;p&gt;Once the client has confirmed that they want to continue the migration, Cloud Bridge will then take the initiative to plan the migration by producing an architectural design that is situated around a series of wave migrations that will act as the blueprint during the migration. &lt;/p&gt;

&lt;h2&gt;
  
  
  Customer Success Story – UK Pub chain
&lt;/h2&gt;

&lt;p&gt;Cloud Bridge serves up an end-to-end AWS cloud migration program to a pub company.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;When the organisation chose to migrate its on-premises environment to the cloud, it sought a partner to assist it in becoming acquainted with cloud technologies, selecting a cloud service provider, completing the whole migration, and managing the new cloud infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;The company wanted to see a zero-change migration and Cloud Bridge delivered.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Bridge scoped the migration and used the Cloudamize tool to perform TCO analysis and apply its recommended optimizations.&lt;/li&gt;
&lt;li&gt;Provided architectural designs that helped the client understand their infrastructure in AWS.&lt;/li&gt;
&lt;li&gt;Deployed AWS resources using automated provisioning based on an infrastructure-as-code approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Outcome
&lt;/h3&gt;

&lt;p&gt;Cloud Bridge migrated their infrastructure ahead of schedule. This bought more time for the pub company to market their services.&lt;/p&gt;

&lt;p&gt;The pub chain reaped the following benefits by moving to the cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved security, stability, and reliability of its technology foundation.&lt;/li&gt;
&lt;li&gt;Long-term cost savings by eliminating on-premises infrastructure management costs.&lt;/li&gt;
&lt;li&gt;Improved application performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Our Professional Services experts at Cloud Bridge wrap governance and procedure around an AWS migration. They ensure that workloads are migrated into AWS in the correct sequence based on business goals and objectives by using transparent project management.&lt;/p&gt;

&lt;p&gt;To learn more about how Cloud Bridge can assist with your business challenges related to digital transformation, migration, and application modernization, visit the &lt;a href="https://www.cloud-bridge.co.uk/" rel="noopener noreferrer"&gt;Cloud Bridge website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>migration</category>
      <category>cloud</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
    <item>
      <title>Sync Git Repository from Github to AWS CodeCommit with Terraform</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Wed, 17 Aug 2022 08:14:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/sync-git-repository-from-github-to-aws-codecommit-with-terraform-4iap</link>
      <guid>https://dev.to/aws-builders/sync-git-repository-from-github-to-aws-codecommit-with-terraform-4iap</guid>
      <description>&lt;h2&gt;
  
  
  What is a Git Repository?
&lt;/h2&gt;

&lt;p&gt;A Git repository is a data structure that a version control system (VCS) uses to store metadata for a set of files and directories. It contains a collection of files and a history of the changes made to those files.&lt;/p&gt;

&lt;p&gt;This Git repository can then be stored as a remote repository on a code hosting service like GitHub, BitBucket, etc.&lt;/p&gt;

&lt;p&gt;The advantage of storing your Git repository on a code hosting service like GitHub is that it promotes collaboration: it becomes a common repository that all team members use to exchange their changes to the files.&lt;/p&gt;

&lt;p&gt;Having a central code repository is an essential part of development, so it needs to be planned well.&lt;/p&gt;

&lt;p&gt;In our case, we have our private GitHub organization where we store our project code, and we have a requirement to synchronize it to another repository in AWS CodeCommit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;At first, we need to generate several tokens and an SSH Key. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token" rel="noopener noreferrer"&gt;Generate a GitHub Token&lt;/a&gt; - provide workflow, organization, create and delete repository permissions&lt;/li&gt;
&lt;li&gt;Clone this &lt;a href="https://github.com/edwardmercado/mirror-repo-from-github-to-codecommit" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.gitlab.com/ee/ssh/#generate-an-ssh-key-pair" rel="noopener noreferrer"&gt;Generate an SSH Key&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;terraform.tfvars&lt;/code&gt; file with the variables below and replace each placeholder with the appropriate value (see the descriptions).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;github_token          = "PUT YOUR GITHUB PERSONAL ACCESS TOKEN HERE"

aws_access_key_id     = "PUT AWS ACCESS KEY ID HERE"
aws_secret_access_key = "PUT AWS SECRET ACCESS KEY HERE"

github_repository_name     = "Github Repository Name e.g. 'samplegithubrepo'"
codecommit_repository_name = "CodeCommit Repository Name e.g. 'samplecodecommitrepo'"

ssh_private_key_path  = "PUT THE PATH WHERE SSH PRIVATE KEY IS STORED e.g. ~/.ssh/id_rsa"
ssh_public_key_path   = "PUT THE PATH WHERE SSH PUBLIC KEY IS STORED e.g. ~/.ssh/id_rsa.pub"

aws_region            = "PUT YOUR AWS REGION OF CHOICE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Please take note that the values inside your &lt;code&gt;terraform.tfvars&lt;/code&gt; file must not be pushed to the GitHub repository. To prevent this, the repository's &lt;code&gt;.gitignore&lt;/code&gt; file already includes &lt;code&gt;*.tfvars&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;After doing the prerequisites above, you can start deploying the solution.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Check the resources to be deployed and type &lt;em&gt;"yes"&lt;/em&gt; to deploy the resources.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Here are some errors we encountered while testing this solution.&lt;/p&gt;
&lt;h3&gt;
  
  
  Workflow Failed
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Run pixta-dev/repository-mirroring-action@v1
&amp;gt; fatal: no path specified; see 'git help pull' for valid url syntax
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Resolution:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the created GitHub Repository, Go to Settings &amp;gt; Secrets &amp;gt; Update &lt;code&gt;CODECOMMIT_SSH_PRIVATE_KEY&lt;/code&gt; with your private SSH key. &lt;/li&gt;
&lt;li&gt;Rerun workflow.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;All parts of the GitHub repository are mirrored, i.e. branches, commits, etc.&lt;/li&gt;
&lt;li&gt;Set up CodePipeline to use CodeCommit as its source.&lt;/li&gt;
&lt;li&gt;Pushing to GitHub automatically mirrors the changes to CodeCommit, which in turn triggers a pipeline rerun.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Github Repository
&lt;/h2&gt;

&lt;p&gt;See the Github repository here.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/edwardmercado" rel="noopener noreferrer"&gt;
        edwardmercado
      &lt;/a&gt; / &lt;a href="https://github.com/edwardmercado/mirror-repo-from-github-to-codecommit" rel="noopener noreferrer"&gt;
        mirror-repo-from-github-to-codecommit
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;
  &lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/f7a26c2737a293e4f48de93118c6e9bc120ca3116d94635a246e6cf3a05b2804/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f3631302f312a5562426e416f62666e4635797848594b4343373058672e706e67"&gt;&lt;img src="https://camo.githubusercontent.com/f7a26c2737a293e4f48de93118c6e9bc120ca3116d94635a246e6cf3a05b2804/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f3631302f312a5562426e416f62666e4635797848594b4343373058672e706e67" alt="Sublime's custom image"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Mirror Github Repository to CodeCommit&lt;/h1&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Prerequisites:&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;AWS with appropriate credentials&lt;/li&gt;
&lt;li&gt;SSH Key Pair (public and private)&lt;/li&gt;
&lt;li&gt;GitHub Token&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Getting Started&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token" rel="noopener noreferrer"&gt;Generate a GitHub Token&lt;/a&gt; - provide (workflow, create and delete repository permissions)&lt;/li&gt;
&lt;li&gt;Clone this repository.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.gitlab.com/ee/ssh/#generate-an-ssh-key-pair" rel="nofollow noopener noreferrer"&gt;Generate an SSH Key&lt;/a&gt; - keep the default name if possible as it points to &lt;code&gt;~/.ssh/id_rsa&lt;/code&gt; and &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Substitute values below, create a &lt;code&gt;terraform.tfvars&lt;/code&gt; file and paste the below values.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;github_token = "PUT YOUR GITHUB PERSONAL ACCESS TOKEN HERE"
aws_access_key_id     = "PUT AWS SECRET ACCESS KEY ID HERE"
aws_secret_access_key = "PUT AWS SECRET ACCESS KEY HERE"
repository_name = "PUT YOUR REPOSITORY NAME HERE (SAME ON BOTH GITHUB &amp;amp; CODECOMMIT)"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Troubleshooting&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Workflow Failed&lt;/h3&gt;

&lt;/div&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;&amp;gt; Run pixta-dev/repository-mirroring-action@v1
&amp;gt; fatal: no path specified; see 'git help pull' for valid url syntax
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Resolution:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Open the created GitHub Repository, Go to Settings &amp;gt; Secrets &amp;gt; Update &lt;code&gt;CODECOMMIT_SSH_PRIVATE_KEY&lt;/code&gt; with your private SSH…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/edwardmercado/mirror-repo-from-github-to-codecommit" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For any queries, you can reach me at:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Continuous Delivery is NOT Continuous Deployment</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Thu, 07 Oct 2021 06:56:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/continuous-delivery-is-not-continuous-deployment-2kke</link>
      <guid>https://dev.to/aws-builders/continuous-delivery-is-not-continuous-deployment-2kke</guid>
<description>&lt;p&gt;In DevOps methodologies, &lt;strong&gt;Continuous Delivery&lt;/strong&gt; and &lt;strong&gt;Continuous Deployment&lt;/strong&gt; are terms we often take for granted and treat as interchangeable, but they are not the same. &lt;/p&gt;

&lt;p&gt;Let's first discuss the well-known DevOps term &lt;strong&gt;CI/CD - Continuous Integration and Continuous Delivery/Deployment&lt;/strong&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Continuous Integration and Continuous Delivery/Deployment?
&lt;/h2&gt;

&lt;p&gt;CI/CD is considered one of the best practices for DevOps teams to implement. It is a method for frequently delivering applications to customers by introducing automation into the stages of application development. &lt;/p&gt;

&lt;p&gt;It also complements the &lt;a href="https://en.wikipedia.org/wiki/Agile_software_development" rel="noopener noreferrer"&gt;Agile Methodology&lt;/a&gt;, as it enables developers to focus on code quality and on meeting business requirements. &lt;/p&gt;

&lt;p&gt;Now, let's break down these terms into chunks. &lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Integration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yvpwx3uzww64oy34nlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yvpwx3uzww64oy34nlc.png" alt="Continuous Integration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Continuous Integration (CI)&lt;/em&gt; is a practice where developers frequently merge the changes to the main repository (such as Github, AWS CodeCommit, etc.), after which automated builds and tests are run. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;CI&lt;/em&gt; most often refers to the build or integration stage of application development. Successful CI means new code changes to an app are regularly built, tested, and merged to a shared repository. &lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Delivery and Deployment
&lt;/h3&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi975tstpadaibox3digr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi975tstpadaibox3digr.png" alt="DevOps"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
When the code changes have been built and tested, the &lt;em&gt;Continuous Delivery and Deployment (CD)&lt;/em&gt; stage prepares them for production release. In simple terms, it is an extension of the &lt;em&gt;Continuous Integration&lt;/em&gt; stage: after the build stage completes, all code changes are deployed to a testing environment, a production environment, or both. &lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Delivery is not Continuous Deployment
&lt;/h2&gt;



&lt;h3&gt;
  
  
  Continuous Delivery
&lt;/h3&gt;

&lt;p&gt;We tend to misinterpret &lt;em&gt;Continuous Delivery&lt;/em&gt; to mean that after the code has been built and tested in the &lt;em&gt;Continuous Integration&lt;/em&gt; stage, every change is immediately applied to the destination environment (QA, PROD, etc.), but that is not the case. &lt;/p&gt;

&lt;p&gt;The point of &lt;em&gt;Continuous Delivery&lt;/em&gt; is to ensure that every change is &lt;em&gt;ready&lt;/em&gt; to deploy to the destination environment; the final release involves reviews or manual approvals, often from non-technical team members, to control the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Continuous Delivery
&lt;/h3&gt;

&lt;p&gt;Involving a non-technical team in the process reduces the burden on the development team, freeing them to continue working on subsequent application improvements. &lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Deployment
&lt;/h3&gt;

&lt;p&gt;On the other hand, &lt;em&gt;Continuous Deployment&lt;/em&gt; goes a step beyond &lt;em&gt;Continuous Delivery&lt;/em&gt;: every change is deployed to the destination environment without manual intervention. The process is completely automated, and only a failed verification step will prevent a change from reaching the environment. &lt;/p&gt;

&lt;p&gt;You can achieve &lt;em&gt;Continuous Deployment&lt;/em&gt; when your pipeline is mature enough that the teams involved are confident in the automation applied inside it. &lt;/p&gt;
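&lt;p&gt;To make the distinction concrete, here is a small Python sketch (the function and stage names are made up for illustration): the pipeline stages are identical, and the only difference is whether a manual approval gates the production release.&lt;/p&gt;

```python
def run_pipeline(change, tests_pass, approve=None, continuous_deployment=False):
    """Return the stage a change reaches.

    Continuous Delivery: every change that passes verification is *ready*
    for production, but a manual approval gates the final release.
    Continuous Deployment: the same pipeline with the gate removed; only
    a failed verification step stops the release.
    """
    if not tests_pass:
        return "stopped: verification failed"  # CI verification failed
    if continuous_deployment or (approve is not None and approve()):
        return "deployed to production"
    return "ready for release (awaiting approval)"
```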

&lt;h3&gt;
  
  
  Benefits of Continuous Deployment
&lt;/h3&gt;

&lt;p&gt;With all of the stages automated, a developer's change can go live within minutes of being written. You can deliver to customers quicker and start iterating based on their feedback, and it's easier to release changes to apps in small pieces rather than all at once. &lt;/p&gt;

&lt;p&gt;Building a pipeline based on your business needs can be difficult; having these terms clear makes the planning much easier. &lt;/p&gt;

&lt;p&gt;Remember that &lt;em&gt;DevOps is a journey, not the destination&lt;/em&gt;. Feedback on the pipeline should be continuously collected, and metrics still need to be in place to monitor its critical parts. &lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Hey! You can reach me at&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>The 6 R's Of Cloud Migration</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Sun, 13 Jun 2021 08:07:41 +0000</pubDate>
      <link>https://dev.to/awscommunity-asean/the-6-r-s-of-cloud-migration-2p17</link>
      <guid>https://dev.to/awscommunity-asean/the-6-r-s-of-cloud-migration-2p17</guid>
      <description>&lt;p&gt;Nowadays, a lot of companies are investing towards the migration of their on-premises applications towards the cloud.&lt;/p&gt;

&lt;p&gt;In this article, we will learn about the &lt;strong&gt;6 R's&lt;/strong&gt; that will guide your cloud migration journey.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;6 R's&lt;/strong&gt; of Cloud Migration are a set of strategies for migrating workloads to the cloud; by understanding the pros and cons of each, you'll be able to plan which &lt;strong&gt;R&lt;/strong&gt; is appropriate for your application. &lt;/p&gt;

&lt;h2&gt;
  
  
  Re-host
&lt;/h2&gt;

&lt;p&gt;Also known as &lt;em&gt;lift and shift&lt;/em&gt;. &lt;br&gt;
Migrate your application as is. This is the easiest path to get your on-premises application migrated to the cloud. Using this strategy, you &lt;em&gt;copy&lt;/em&gt; your application infrastructure to your cloud provider. &lt;/p&gt;

&lt;p&gt;You can use tools such as &lt;a href="https://www.cloudendure.com/" rel="noopener noreferrer"&gt;AWS CloudEndure&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/vm-import/" rel="noopener noreferrer"&gt;VM Import/Export&lt;/a&gt; to automate this strategy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzz7i5e2ih78hwz7jgz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzz7i5e2ih78hwz7jgz0.png" alt="Rehost"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Reduced management overhead, as your cloud provider manages the physical infrastructure where your application is hosted, also known as &lt;em&gt;Infrastructure as a Service&lt;/em&gt; (IaaS).&lt;/li&gt;
&lt;li&gt;Easier to optimize: once your application is deployed with your cloud provider, it can be gradually transformed to fully adopt the benefits of the cloud.&lt;/li&gt;
&lt;li&gt;Still offers cost savings, as your physical infrastructure is managed for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Does not take full advantage of the cloud.&lt;/li&gt;
&lt;li&gt;It can delay improvements you could be making.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;This fits well if you want to migrate your application without code or infrastructure changes: you implement the same thing you ran on-premises. It is also a good choice if you're new to the cloud and want to try things out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Re-platform
&lt;/h2&gt;

&lt;p&gt;Also known as &lt;em&gt;lift, tinker, and shift&lt;/em&gt;. &lt;br&gt;
Similar to re-hosting, but it gradually takes advantage of cloud offerings without changing the core infrastructure of your application.&lt;/p&gt;

&lt;p&gt;Think of this strategy as a safe middle ground, for example moving your database from &lt;em&gt;Infrastructure as a Service&lt;/em&gt; to a &lt;em&gt;Database as a Service&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscwfaqwk5r9vohewc7xi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscwfaqwk5r9vohewc7xi.png" alt="Replatform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Reduced management overhead - better than rehosting.&lt;/li&gt;
&lt;li&gt;Increased resiliency.&lt;/li&gt;
&lt;li&gt;Reduced cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No real negatives, as you are simply allowing your cloud provider to manage more parts of your infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Examples
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;Amazon Relational Database Service&lt;/a&gt; instead of self-managing a database instance on &lt;a href="https://aws.amazon.com/ec2/?ec2-whats-new.sort-by=item.additionalFields.postDateTime&amp;amp;ec2-whats-new.sort-order=desc" rel="noopener noreferrer"&gt;Elastic Compute Cloud&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;This suits your migration if you want to gradually adopt cloud functionalities such as auto-scaling and managed services without committing to a large migration effort; doing so achieves more benefits than re-hosting alone can offer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Re-factor or Re-architect
&lt;/h2&gt;

&lt;p&gt;Review the architecture of the application and adopt &lt;em&gt;cloud-native&lt;/em&gt; architectures and products, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Oriented or Microservices&lt;/li&gt;
&lt;li&gt;Serverless Architecture &lt;/li&gt;
&lt;li&gt;Event-Driven Architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This strategy offers the best long-term benefits, but it comes at a stiff price and is a time-consuming process. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nc8lim9xrrrcrcy8vzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nc8lim9xrrrcrcy8vzk.png" alt="Refactor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Takes full advantage of the cloud.&lt;/li&gt;
&lt;li&gt;Produces a much more scalable, highly available, and fault-tolerant infrastructure.&lt;/li&gt;
&lt;li&gt;Cost is aligned with usage: a &lt;em&gt;pay-as-you-go&lt;/em&gt; model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Initially, it is expensive and time-consuming.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;This strategy is for those who really understand the full advantages of the cloud and want to make the most of it. It requires you to drastically modify your application's core infrastructure to suit the cloud-native model; although this entails a lot of work, it will produce the most value for your business in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Re-purchase
&lt;/h2&gt;

&lt;p&gt;Move away from managing applications installed on-premises and consume a &lt;em&gt;Software as a Service&lt;/em&gt; (SaaS) model instead. Many common applications nowadays are offered as SaaS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzxz1wsgymxuvaod436e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzxz1wsgymxuvaod436e.png" alt="Re-purchase"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Move from a Customer Relationship Management (CRM) to Salesforce.com, from Microsoft Exchange to Microsoft 365, an HR system to Workday, or a content management system (CMS) to Drupal. &lt;/p&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;This is for applications that already exist as SaaS offerings you can subscribe to based on your needs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Retire
&lt;/h2&gt;

&lt;p&gt;In other words: &lt;em&gt;if you don't need the application, switch it off&lt;/em&gt;. Remove applications that are no longer needed and no longer produce value for you or your business. These applications are often left running for no reason.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyhvrdca9c3401oz6qwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyhvrdca9c3401oz6qwx.png" alt="Retire"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Often provides 10% - 20% cost savings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;You'll know after your initial cloud migration assessment: based on the data you gathered, ask &lt;em&gt;"Does this application still benefit me?"&lt;/em&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Retain
&lt;/h2&gt;

&lt;p&gt;Also known as &lt;em&gt;re-visit&lt;/em&gt; or &lt;em&gt;do nothing, for now&lt;/em&gt;. &lt;br&gt;
Commonly, the applications that fall into this category are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Old applications that still see some usage but are not worth the move.&lt;/li&gt;
&lt;li&gt;Complex applications that need to be left until later.&lt;/li&gt;
&lt;li&gt;Business-critical applications that are too risky to move right now.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ssrmij0x5dmte5o905y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ssrmij0x5dmte5o905y.png" alt="Retain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  When to choose this strategy?
&lt;/h3&gt;

&lt;p&gt;This strategy applies once most of your applications are deployed and working properly in the cloud; at that point you look back and start planning the migration of the applications that fall into this category.&lt;/p&gt;

&lt;p&gt;Using these &lt;strong&gt;6 R's&lt;/strong&gt;, you'll be able to produce a table that maps each on-premises application to the &lt;strong&gt;R&lt;/strong&gt; that fits it. This detailed per-application assessment will serve as your guidebook when it's time to migrate to the cloud.&lt;/p&gt;
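&lt;p&gt;The assessment table described above can be sketched in a few lines of Python (the applications, strategies, and rationales here are hypothetical examples, not recommendations):&lt;/p&gt;

```python
# Hypothetical per-application assessment: each entry maps an app to the R
# chosen during the migration assessment, with a one-line rationale.
assessment = [
    {"app": "legacy-crm",   "strategy": "Re-purchase", "why": "SaaS equivalent exists"},
    {"app": "web-frontend", "strategy": "Re-host",     "why": "no code changes needed"},
    {"app": "orders-db",    "strategy": "Re-platform", "why": "move to a managed database"},
    {"app": "old-reports",  "strategy": "Retire",      "why": "no longer produces value"},
]

def apps_for(strategy, table):
    """List the applications assigned to a given migration strategy."""
    return [row["app"] for row in table if row["strategy"] == strategy]
```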

&lt;p&gt;&lt;strong&gt;You can reach me at&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>architecture</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>How to extract emails from Gmail using Python and AWS</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Tue, 29 Dec 2020 15:46:20 +0000</pubDate>
      <link>https://dev.to/edwardmercado/how-to-extract-emails-from-gmail-using-python-and-aws-2hp9</link>
      <guid>https://dev.to/edwardmercado/how-to-extract-emails-from-gmail-using-python-and-aws-2hp9</guid>
      <description>&lt;p&gt;Let's build an email extractor application in AWS! &lt;/p&gt;

&lt;p&gt;Our goal is to extract emails from our Google Mail and store the data (e.g. Email ID, Subject, Date, etc.) in a CSV file for further processing. This application will be executed every 12 hours. &lt;/p&gt;

&lt;h1&gt;
  
  
  Setup our Google Mail Account
&lt;/h1&gt;

&lt;p&gt;Let's secure access to our Google Mail account. First, we need to generate an &lt;em&gt;App Password&lt;/em&gt; for our application; we will then use this generated key to log in instead of our usual Google Mail password.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Lambda function
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Authentication
&lt;/h3&gt;

&lt;p&gt;We'll use the &lt;strong&gt;imaplib&lt;/strong&gt; library to handle the connection and authentication to our Google Mail account. Using imaplib's &lt;code&gt;IMAP4_SSL&lt;/code&gt;, we can establish an SSL connection to the Gmail IMAP endpoint &lt;code&gt;imap.gmail.com&lt;/code&gt;, then pass our email address and the generated App Password to log in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;imap = imaplib.IMAP4_SSL('imap.gmail.com')

try:
    imap.login(email_username, email_pwd)
except Exception as e:
    print(f"Unable to login due to {e}")
else:
    print("Login successfully")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Extract
&lt;/h3&gt;

&lt;p&gt;After authenticating, we can extract the email data. For this, I select the &lt;em&gt;Inbox&lt;/em&gt;, retrieve all the email IDs, and iterate through them one by one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;imap.select('inbox')
data = imap.search(None, 'ALL')
mail_ids = data[1]
id_list = mail_ids[0].split()   
first_email_id = int(id_list[0])
latest_email_id = int(id_list[-1])

for email_ids in range(latest_email_id, first_email_id, -1):
    raw_data = imap.fetch(str(email_ids), '(RFC822)' )

    for response_part in raw_data:
        arr = response_part[0]
        if isinstance(arr, tuple):

            msg = email.message_from_string(str(arr[1],'utf-8'))
            email_subject = msg['subject']
            email_from = msg['from']
            email_date = msg['Date']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we split the raw mail IDs into a list so we can easily iterate through them. Gmail message IDs are incremental numbers: the lowest number is the oldest mail and the highest number is the most recent mail in your mailbox.&lt;/p&gt;

&lt;h3&gt;
  
  
  Store
&lt;/h3&gt;

&lt;p&gt;At this point we're able to extract the needed details from our emails, so we're ready to store them. For this, we'll use the &lt;strong&gt;csv&lt;/strong&gt; module from the &lt;strong&gt;Python&lt;/strong&gt; standard library to easily manage our data inside a &lt;code&gt;.csv&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with open('/tmp/&amp;lt;your_file&amp;gt;.csv', 'a', newline='') as file:
        writer = csv.writer(file)
        writer.writerow([email_ids, email_from, email_subject, email_date])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Transformation
&lt;/h3&gt;

&lt;p&gt;Our lambda function has a &lt;code&gt;/tmp&lt;/code&gt; directory for temporary storage of our data. Let's use this as a storage for our extracted data and transform the data using the power of &lt;strong&gt;Pandas&lt;/strong&gt; Library before uploading it to S3.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: &lt;code&gt;/tmp&lt;/code&gt; provides &lt;code&gt;512MB&lt;/code&gt; of storage in our Lambda function. This directory is not wiped after each invocation; it is preserved for roughly 30 minutes in anticipation of subsequent invocations.&lt;/p&gt;
&lt;/blockquote&gt;
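&lt;p&gt;As a rough illustration of that transformation step (the column names and clean-up choices here are assumptions, not the exact code used):&lt;/p&gt;

```python
import pandas as pd

def transform(path):
    """Minimal sketch: load the raw CSV written by the extractor,
    normalize the dates, drop duplicate IDs, and sort oldest-first."""
    df = pd.read_csv(path, names=['id', 'from', 'subject', 'date'])
    df['date'] = pd.to_datetime(df['date'], errors='coerce', utc=True)
    df = df.drop_duplicates(subset='id').sort_values('id')
    df.to_csv(path, index=False, header=False)
    return df
```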

&lt;h3&gt;
  
  
  Upload
&lt;/h3&gt;

&lt;p&gt;Let's now upload our &lt;code&gt;.csv&lt;/code&gt; file to our &lt;strong&gt;S3 Bucket&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try:
    response = s3_client.upload_file("/tmp/&amp;lt;your_file&amp;gt;.csv", s3_bucket, "&amp;lt;file_name_in_s3&amp;gt;")
except Exception as e:
    print(f"Unable to upload to s3, ERROR: {e}")
else:
    print("Uploaded file to s3")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Control Flow
&lt;/h3&gt;

&lt;p&gt;Let's add a control flow to determine whether we're inserting a whole new data initially or just need to append the most recent data on our &lt;code&gt;.csv&lt;/code&gt; file because after all, this process is scheduled to execute every 12 hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If our .csv file does not exists in our AWS S3 Bucket&lt;/strong&gt; then we'll extract all of the existing emails in our account. We can use the boto3 library to interact with AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If it does exist&lt;/strong&gt;, we can just append the new emails to our data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What if no new email was received?&lt;/strong&gt; We compare the &lt;code&gt;latest_email_id&lt;/code&gt; from Gmail with the &lt;code&gt;latest_email_id&lt;/code&gt; stored in our &lt;code&gt;.csv&lt;/code&gt; file. These same variables can also be substituted into our &lt;strong&gt;Extract&lt;/strong&gt; code to control the &lt;strong&gt;range&lt;/strong&gt; of email data that we extract and append to the &lt;code&gt;.csv&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try:
    s3_resource.Object(s3_bucket, 'emails.csv').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # The object does not exist yet, so do the initial load.
        print("CSV file not found.. Initial Insert")
        # create csv
        create_csv()
        # insert initial values
        initial_insert(imap, latest_email_id, first_email_id)
        # store to s3
    else:
        # Any other error (e.g. permissions) should not be swallowed.
        raise
else:
    # The file exists, so compare ids and append only the new emails.
    if latest_email_id != last_email_id_from_csv:
        # append data
        update_insert(imap, last_email_id_from_csv, latest_email_id)
    else:
        # don't append anything
        print("Nothing is inserted")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You can now view your &lt;code&gt;.csv&lt;/code&gt; file on your &lt;strong&gt;S3 Bucket&lt;/strong&gt;. &lt;/p&gt;

&lt;h1&gt;
  
  
  Schedule Actions
&lt;/h1&gt;

&lt;p&gt;Let's leverage &lt;strong&gt;Amazon EventBridge&lt;/strong&gt; and configure it to invoke our Lambda function every 12 hours. &lt;/p&gt;
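&lt;p&gt;As a rough sketch of what that looks like with boto3 (the rule and target names here are assumptions), EventBridge takes a &lt;code&gt;rate&lt;/code&gt; schedule expression and a target pointing at the Lambda function:&lt;/p&gt;

```python
# Rule name is an illustrative assumption -- adjust to your stack.
RULE_NAME = "email-extractor-every-12h"

def schedule_rule_params(rate_hours):
    # EventBridge rate expressions use singular "hour" for 1, plural otherwise.
    unit = "hour" if rate_hours == 1 else "hours"
    return {
        "Name": RULE_NAME,
        "ScheduleExpression": f"rate({rate_hours} {unit})",
        "State": "ENABLED",
    }

def schedule_lambda(lambda_arn):
    import boto3  # available by default in the Lambda runtime
    events = boto3.client("events")
    events.put_rule(**schedule_rule_params(12))
    # Point the rule at our extractor function.
    events.put_targets(
        Rule=RULE_NAME,
        Targets=[{"Id": "email-extractor", "Arn": lambda_arn}],
    )
```

&lt;p&gt;You'd also need to grant EventBridge permission to invoke the function (via the Lambda &lt;code&gt;add_permission&lt;/code&gt; API); a CloudFormation template can wire all of this up declaratively instead.&lt;/p&gt;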

&lt;h1&gt;
  
  
  Deployment and Security
&lt;/h1&gt;

&lt;p&gt;I created all of the resources using an &lt;strong&gt;AWS CloudFormation&lt;/strong&gt; template and used its &lt;strong&gt;Parameters&lt;/strong&gt; section to provide the &lt;em&gt;email&lt;/em&gt; and &lt;em&gt;app-generated password&lt;/em&gt;, which are stored in &lt;strong&gt;AWS SSM Parameter Store&lt;/strong&gt;.&lt;/p&gt;
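&lt;p&gt;At runtime, the Lambda function can read those values back from Parameter Store. A minimal sketch (the parameter names are assumptions; match them to whatever your template creates):&lt;/p&gt;

```python
def get_ssm_parameter(name):
    import boto3  # available by default in the Lambda runtime
    ssm = boto3.client("ssm")
    # WithDecryption is required to read SecureString parameters in plain text.
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]

# Hypothetical parameter names for illustration:
# email_address = get_ssm_parameter("/email-extractor/email")
# app_password = get_ssm_parameter("/email-extractor/app-password")
```

&lt;p&gt;Storing the credentials as &lt;code&gt;SecureString&lt;/code&gt; parameters keeps them out of the function's environment variables and source code.&lt;/p&gt;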

&lt;p&gt;Overall, here's the diagram of our application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zt8fWGri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rrifoq9ja1zpmlp5xxfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zt8fWGri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rrifoq9ja1zpmlp5xxfg.png" alt="Final Diagram"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;If you happen to encounter any issues, just hit me up in the comments.&lt;/p&gt;

&lt;p&gt;You'll find the full implementation and codebase in &lt;a href="https://github.com/edwardmercado/Event-Drive-Email-Extractor-in-AWS"&gt;my repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can reach me at&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>python</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Build a modern Web Application</title>
      <dc:creator>Edward Allen Mercado</dc:creator>
      <pubDate>Thu, 17 Dec 2020 07:14:41 +0000</pubDate>
      <link>https://dev.to/edwardmercado/build-a-modern-web-application-3o4f</link>
      <guid>https://dev.to/edwardmercado/build-a-modern-web-application-3o4f</guid>
      <description>&lt;h2&gt;
  
  
  What makes your web application &lt;em&gt;modern&lt;/em&gt;?
&lt;/h2&gt;

&lt;p&gt;There are many aspects to consider before you can call a web application modern. For me, the most important ones are that the application can dynamically alter its own content without loading a new document and can handle both large and intermittent shifts in traffic to meet demand. &lt;/p&gt;

&lt;p&gt;Modern applications utilize the cloud to be highly available and scalable; they isolate business logic, optimize for reuse and iteration, and remove administrative overhead wherever possible. For example, the AWS cloud has many services that let you focus on writing your code while infrastructure maintenance tasks are automated for you.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Static Website
&lt;/h1&gt;

&lt;p&gt;I said earlier that the most important aspect of a web application is being &lt;em&gt;dynamic&lt;/em&gt; (don't worry, we'll get there), but we also can't deny that there are always parts of a website that are &lt;em&gt;fixed&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The best and cheapest services for serving our static content are &lt;strong&gt;AWS CloudFront&lt;/strong&gt; and &lt;strong&gt;AWS S3&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Let's create an &lt;strong&gt;AWS S3 bucket&lt;/strong&gt; and upload all of our static web content (e.g. HTML, CSS, JS, media, etc.). I'll configure our &lt;strong&gt;CloudFront distribution&lt;/strong&gt; to deliver this content from Edge Locations around the world. &lt;/p&gt;

&lt;p&gt;For security, I'll create a &lt;strong&gt;CloudFront Origin Access Identity (OAI)&lt;/strong&gt; and an &lt;strong&gt;S3 Bucket Policy&lt;/strong&gt; stating that &lt;strong&gt;only&lt;/strong&gt; this identity has read access to our S3 Bucket. I'll use &lt;strong&gt;AWS Certificate Manager&lt;/strong&gt; to provision a certificate for our website so we can deliver our content via HTTPS. &lt;/p&gt;
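&lt;p&gt;For reference, here's a sketch of that bucket policy built in Python (the bucket name and OAI id are placeholders, not real values):&lt;/p&gt;

```python
import json

def oai_read_policy(bucket_name, oai_id):
    # Grants s3:GetObject on the bucket's objects to the CloudFront OAI only.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": (
                        "arn:aws:iam::cloudfront:user/"
                        f"CloudFront Origin Access Identity {oai_id}"
                    )
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# Placeholder bucket and OAI id for illustration.
policy_json = json.dumps(oai_read_policy("my-static-site-bucket", "E1EXAMPLE"))
```

&lt;p&gt;The resulting JSON can be applied with the S3 &lt;code&gt;put_bucket_policy&lt;/code&gt; API or through your infrastructure template.&lt;/p&gt;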

&lt;p&gt;Additionally, you can register a domain in &lt;strong&gt;AWS Route53&lt;/strong&gt; for the &lt;strong&gt;FQDN (Fully Qualified Domain Name)&lt;/strong&gt; of our website and create a record that will point to our CloudFront distribution. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy3g3csc99kqdhrbzjxri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy3g3csc99kqdhrbzjxri.png" alt="Create a Static Website"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Build a Dynamic Website
&lt;/h1&gt;

&lt;p&gt;Here I will create a &lt;strong&gt;Flask application&lt;/strong&gt; running in a container behind a &lt;strong&gt;Network Load Balancer&lt;/strong&gt;. This will make our frontend website more interactive and, yes, you read it right, &lt;em&gt;dynamic&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I'll use &lt;strong&gt;AWS Elastic Container Service (ECS)&lt;/strong&gt; with the &lt;strong&gt;Fargate&lt;/strong&gt; launch type so I can deploy containers without having to manage any servers. &lt;/p&gt;

&lt;p&gt;Build a &lt;strong&gt;docker image&lt;/strong&gt; from the Dockerfile with our application dependencies and push the image to &lt;strong&gt;AWS Elastic Container Registry (ECR)&lt;/strong&gt;; you can troubleshoot your docker image by running it locally first.&lt;/p&gt;

&lt;p&gt;After the docker image is pushed to ECR, let's create an ECS Cluster, Service, and Task Definition so we can control where (which subnets) our containers run and set the resources and configuration they require.&lt;/p&gt;
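&lt;p&gt;A Fargate task definition might look like the following sketch (the family, container name, port, and CPU/memory values are illustrative assumptions):&lt;/p&gt;

```python
def fargate_task_definition(image_uri):
    # Values here (names, CPU/memory, port) are illustrative assumptions.
    return {
        "family": "mysfits-flask",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # required for Fargate tasks
        "cpu": "256",
        "memory": "512",
        "containerDefinitions": [
            {
                "name": "flask-service",
                "image": image_uri,
                "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
                "essential": True,
            }
        ],
    }
```

&lt;p&gt;In practice you'd also attach an execution role so the task can pull the image from ECR and write logs, then register it through the ECS &lt;code&gt;register_task_definition&lt;/code&gt; API or your template.&lt;/p&gt;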

&lt;p&gt;Let's then create a Network Load Balancer and configure its listener to forward traffic through our &lt;strong&gt;Target group&lt;/strong&gt; to our containers. &lt;/p&gt;

&lt;p&gt;Here we will create an &lt;strong&gt;AWS API Gateway&lt;/strong&gt; that will proxy traffic to the internal Load Balancer. To make this work, we will provision a &lt;strong&gt;VPC Link&lt;/strong&gt; so the API Gateway can reach the Load Balancer inside our &lt;strong&gt;VPC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F46bkhzkwkav1f71sy8jz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F46bkhzkwkav1f71sy8jz.png" alt="Build a Dynamic Website"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Build a CI/CD Pipeline
&lt;/h1&gt;

&lt;p&gt;Let's integrate &lt;strong&gt;Continuous Integration/Continuous Deployment (CI/CD)&lt;/strong&gt; into our application so that every change we make is automatically built and deployed as a new docker image. It's a good practice that increases development speed; we won't have to go through the same manual steps every time we want to change the application.&lt;/p&gt;

&lt;p&gt;First, let's create an &lt;strong&gt;AWS CodeCommit Repository&lt;/strong&gt; where we can store our code, then an artifacts bucket that will store all our CI/CD artifacts for every build in our pipeline.&lt;/p&gt;

&lt;p&gt;Let's continue with the service that does most of the work in our pipeline: &lt;strong&gt;CodeBuild&lt;/strong&gt;. It provisions a build server using the configuration we provide and executes the steps required to build our docker image and push every new version to ECR.&lt;/p&gt;

&lt;p&gt;Finally, let's arrange our pipeline to build automatically whenever a code change is pushed to our CodeCommit repository, then configure it to deliver the newly built image from our CodeBuild project to ECR; all of this is orchestrated using &lt;strong&gt;CodePipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At this point, if you have any issues with your builds, check the &lt;strong&gt;Identity and Access Management (IAM) Roles&lt;/strong&gt; you granted to each service and also check the &lt;strong&gt;CloudWatch Logs&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff71ngpphicrbsf0kv5wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff71ngpphicrbsf0kv5wy.png" alt="Build a CI/CD Pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Store Data
&lt;/h1&gt;

&lt;p&gt;We will create a &lt;strong&gt;DynamoDB Table&lt;/strong&gt; that will store our data.&lt;/p&gt;

&lt;p&gt;While we're creating the table, let's also create &lt;strong&gt;Secondary Indexes&lt;/strong&gt; so we can filter items efficiently.&lt;/p&gt;
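&lt;p&gt;As an illustration (the table, key, and index names below are assumptions for this sketch), a table definition with a Global Secondary Index might look like:&lt;/p&gt;

```python
def table_spec():
    # Names and throughput values are illustrative assumptions.
    return {
        "TableName": "MysfitsTable",
        "AttributeDefinitions": [
            {"AttributeName": "MysfitId", "AttributeType": "S"},
            {"AttributeName": "GoodEvil", "AttributeType": "S"},
        ],
        "KeySchema": [{"AttributeName": "MysfitId", "KeyType": "HASH"}],
        "GlobalSecondaryIndexes": [
            {
                # Lets us query items by the GoodEvil attribute directly.
                "IndexName": "GoodEvilIndex",
                "KeySchema": [
                    {"AttributeName": "GoodEvil", "KeyType": "HASH"},
                    {"AttributeName": "MysfitId", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        ],
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
        },
    }
```

&lt;p&gt;This spec can be passed to the DynamoDB &lt;code&gt;create_table&lt;/code&gt; API; the GSI lets us query by an attribute other than the primary key without scanning the whole table.&lt;/p&gt;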

&lt;p&gt;Again, we will create a VPC Endpoint so our containers can communicate with DynamoDB without traversing the public internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6x3e8yktybx7fktvj9kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6x3e8yktybx7fktvj9kw.png" alt="Store Data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  User Registration
&lt;/h1&gt;

&lt;p&gt;User Registration will help us control access to our website's features. Obviously, authenticated users will have more features than unauthenticated users.&lt;/p&gt;

&lt;p&gt;Let's create an &lt;strong&gt;AWS Cognito&lt;/strong&gt; User Pool; using this service, we can require our users to be authenticated before they can do anything that may affect our database. &lt;/p&gt;

&lt;p&gt;We will set up Cognito to require users to verify their email address before they can complete their registration.&lt;/p&gt;

&lt;p&gt;Again, we will set up an API Gateway that will be used to authorize actions for our authenticated users. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2gg74wok7a9p6makj3lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2gg74wok7a9p6makj3lu.png" alt="User Registration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Capture User Behaviors
&lt;/h1&gt;

&lt;p&gt;By implementing this, we can understand the actions our users perform on our website (e.g. clicks). This will help us design the website more effectively so we can provide a better user experience in the future. &lt;/p&gt;

&lt;p&gt;To help us gain insights into user behavior, let's use &lt;strong&gt;AWS Kinesis Data Firehose&lt;/strong&gt;, which can ingest data and deliver it to several storage destinations (e.g. S3, Elasticsearch, Redshift); here we'll store the ingested data in S3. &lt;/p&gt;

&lt;p&gt;We will again use API Gateway to abstract the requests made to Kinesis Firehose.&lt;/p&gt;

&lt;p&gt;While the website interactions are being ingested, we will use &lt;strong&gt;AWS Lambda&lt;/strong&gt; to process these records further.&lt;/p&gt;
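&lt;p&gt;A Firehose transformation Lambda receives batches of base64-encoded records and must return each one with a status. Here's a minimal sketch (the payload fields are assumptions about what the frontend sends):&lt;/p&gt;

```python
import base64
import json

def lambda_handler(event, context):
    # Firehose delivers each record's data base64-encoded.
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Illustrative enrichment: keep only the fields we care about.
        click = {
            "userId": payload.get("userId"),
            "element": payload.get("element"),
        }
        # Re-encode; the trailing newline keeps S3 objects line-delimited.
        data = base64.b64encode((json.dumps(click) + "\n").encode()).decode()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": data,
        })
    return {"records": output}
```

&lt;p&gt;Records marked &lt;code&gt;Ok&lt;/code&gt; continue to the S3 destination; a record could instead be marked &lt;code&gt;Dropped&lt;/code&gt; to filter it out of the stream entirely.&lt;/p&gt;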

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3stdkygm53cs9tw167w8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3stdkygm53cs9tw167w8.png" alt="Capture User Behaviors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Problems encountered along the way
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Docker Pull Rate Limits&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made my build fail multiple times, and when I checked my CloudWatch Logs, I saw this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I found out this was caused by the Docker pull rate limit announced by Docker, Inc., which took effect on November 2, 2020.&lt;/p&gt;

&lt;p&gt;For more information about this, you can check &lt;a href="https://www.docker.com/blog/what-you-need-to-know-about-upcoming-docker-hub-rate-limiting/" rel="noopener noreferrer"&gt;this blog&lt;/a&gt; from Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://jrklein.com/2020/11/04/aws-codebuild-failed-due-to-docker-pull-rate-limit-solution-update-buildspec-yml-file/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is the solution that I used.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Insufficient space on my build&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I updated my Dockerfile to a specific Linux distribution version, and when I tried to build my image, it threw this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Docker error : no space left on device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;docker system prune --all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Overall, here's the diagram with all the related microservices integrated with each other. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1qrh8kfodfkbu50w3p47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1qrh8kfodfkbu50w3p47.png" alt="Final Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's my final output: &lt;a href="https://mythicalmysfits.edwardallen.de/" rel="noopener noreferrer"&gt;Mythical Mysfits&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Does it look familiar? Yes! It's a workshop made by &lt;strong&gt;Amazon Web Services (AWS)&lt;/strong&gt;. You can find detailed information &lt;a href="https://aws.amazon.com/getting-started/hands-on/build-modern-app-fargate-lambda-dynamodb-python/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I added steps and integrated services to make the application more efficient and highly available. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can reach me at&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/edwardmercado"&gt;Dev.to&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/edwardmercado_" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/edwardallenmercado-677b69139" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
