<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jekobokidou</title>
    <description>The latest articles on DEV Community by jekobokidou (@jekobokidou).</description>
    <link>https://dev.to/jekobokidou</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F633961%2F9e2ab464-dbaa-40af-9f6d-fd067c0a5a1c.jpeg</url>
      <title>DEV Community: jekobokidou</title>
      <link>https://dev.to/jekobokidou</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jekobokidou"/>
    <language>en</language>
    <item>
      <title>Deploying Multi-Provider Site-to-Site VPNs: Connecting AWS with Azure, GCP, and Beyond</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Thu, 09 Oct 2025 20:01:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-multi-provider-site-to-site-vpns-connecting-aws-with-azure-gcp-and-beyond-50a5</link>
      <guid>https://dev.to/aws-builders/deploying-multi-provider-site-to-site-vpns-connecting-aws-with-azure-gcp-and-beyond-50a5</guid>
      <description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In today's cloud ecosystem, businesses rarely rely on a single provider. Multi-cloud adoption has become the standard, creating a critical need to securely interconnect different environments. This article explores strategies and best practices for deploying site-to-site VPNs between AWS and multiple cloud providers.&lt;/p&gt;

&lt;h2&gt;Multi-VPN Architecture: Why and How&lt;/h2&gt;

&lt;h3&gt;Benefits of Connected Multi-Cloud&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resilience&lt;/strong&gt;: Avoid Single Point of Failure (SPOF)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Leverage each provider's strengths&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: Distribute data according to regulatory requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Reduce latency for end users&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;AWS VPN Deployment with Azure&lt;/h2&gt;

&lt;h3&gt;Azure Configuration&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Creating a Gateway Subnet&lt;/span&gt;
az network vnet subnet create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-group&lt;/span&gt; MyResourceGroup &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--vnet-name&lt;/span&gt; MyVnet &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; GatewaySubnet &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--address-prefixes&lt;/span&gt; 10.0.1.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
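&lt;p&gt;The gateway subnet alone is not enough on the Azure side: you also need a VPN gateway, a local network gateway that represents the AWS end, and a connection tying them together. A minimal Terraform sketch using the azurerm provider is shown below; the resource names, region, addresses, and shared key are illustrative assumptions, and the public IP and subnet resources are assumed to be defined elsewhere, in the same style as the AWS snippets in this article.&lt;/p&gt;

```hcl
# Hypothetical sketch: Azure-side resources that complement the gateway
# subnet created above. Names, region, IPs, and key are placeholders.
resource "azurerm_virtual_network_gateway" "vpn" {
  name                = "MyVpnGateway"
  location            = "westeurope"
  resource_group_name = "MyResourceGroup"
  type                = "Vpn"
  vpn_type            = "RouteBased"
  sku                 = "VpnGw1"

  ip_configuration {
    public_ip_address_id = azurerm_public_ip.vpn.id
    subnet_id            = azurerm_subnet.gateway.id
  }
}

# The AWS end of the tunnel, as seen from Azure
resource "azurerm_local_network_gateway" "aws" {
  name                = "AwsTunnel1"
  location            = "westeurope"
  resource_group_name = "MyResourceGroup"
  gateway_address     = "34.203.215.127" # AWS tunnel outside IP (placeholder)
  address_space       = ["10.0.0.0/16"]  # AWS VPC CIDR (placeholder)
}

resource "azurerm_virtual_network_gateway_connection" "aws" {
  name                       = "AzureToAws"
  location                   = "westeurope"
  resource_group_name        = "MyResourceGroup"
  type                       = "IPsec"
  virtual_network_gateway_id = azurerm_virtual_network_gateway.vpn.id
  local_network_gateway_id   = azurerm_local_network_gateway.aws.id
  shared_key                 = var.tunnel1_preshared_key
}
```

&lt;p&gt;The shared key must match the pre-shared key AWS generates (or that you set) for the corresponding tunnel.&lt;/p&gt;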



&lt;h3&gt;AWS Configuration&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_customer_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"azure"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bgp_asn&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;65000&lt;/span&gt;
  &lt;span class="nx"&gt;ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"52.143.72.189"&lt;/span&gt; &lt;span class="c1"&gt;# Azure VPN Gateway Public IP&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ipsec.1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpn_connection"&lt;/span&gt; &lt;span class="s2"&gt;"azure"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpn_gateway_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpn_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;customer_gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_customer_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azure&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ipsec.1"&lt;/span&gt;
  &lt;span class="nx"&gt;static_routes_only&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
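&lt;p&gt;Because &lt;code&gt;static_routes_only&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt; above, AWS also has to be told which prefixes live behind the Azure gateway. A short sketch, assuming 10.1.0.0/16 as the Azure VNet CIDR and a private route table defined elsewhere:&lt;/p&gt;

```hcl
# Static route toward the Azure VNet over the VPN connection defined above.
# 10.1.0.0/16 stands in for the actual Azure VNet CIDR.
resource "aws_vpn_connection_route" "azure" {
  destination_cidr_block = "10.1.0.0/16"
  vpn_connection_id      = aws_vpn_connection.azure.id
}

# Send VPC traffic for that prefix to the virtual private gateway
resource "aws_route" "to_azure" {
  route_table_id         = aws_route_table.private.id # placeholder
  destination_cidr_block = "10.1.0.0/16"
  gateway_id             = aws_vpn_gateway.main.id
}
```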



&lt;h2&gt;Connecting AWS with Google Cloud Platform&lt;/h2&gt;

&lt;h3&gt;GCP Cloud VPN Setup&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute vpn-gateways create aws-gateway &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default

gcloud compute external-vpn-gateways create aws-external &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--interfaces&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;34.203.215.127 &lt;span class="c"&gt;# AWS VPN Public IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
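&lt;p&gt;To finish the GCP side, the HA VPN gateway needs a Cloud Router and at least one tunnel toward the AWS external gateway. A hedged Terraform continuation follows; it assumes the two gateways created by the gcloud commands above are also managed in Terraform under the names shown, and the secret is a placeholder.&lt;/p&gt;

```hcl
# Hypothetical continuation: Cloud Router + one tunnel toward AWS.
# Gateway resource names and the shared secret are assumptions.
resource "google_compute_router" "aws" {
  name    = "aws-router"
  region  = "us-central1"
  network = "default"

  bgp {
    asn = 64514 # the ASN AWS is configured to expect for GCP
  }
}

resource "google_compute_vpn_tunnel" "aws_tunnel1" {
  name                            = "aws-tunnel-1"
  region                          = "us-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.aws_gateway.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.aws_external.id
  peer_external_gateway_interface = 0
  shared_secret                   = var.tunnel1_preshared_key
  router                          = google_compute_router.aws.id
  ike_version                     = 2
}
```

&lt;p&gt;For a production HA VPN you would create a second tunnel on interface 1 and BGP sessions on the router; this sketch shows only the first tunnel.&lt;/p&gt;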



&lt;h3&gt;AWS Side Configuration&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_customer_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"gcp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bgp_asn&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;64514&lt;/span&gt;
  &lt;span class="nx"&gt;ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google_compute_address&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpn_external&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ipsec.1"&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"GCP"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Production"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Interconnecting AWS Accounts&lt;/h2&gt;

&lt;h3&gt;Via AWS Transit Gateway&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Central account (Hub)&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_transit_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Multi-account TGW"&lt;/span&gt;
  &lt;span class="nx"&gt;auto_accept_shared_attachments&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"enable"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Spoke account&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_transit_gateway_vpc_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"spoke1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_ids&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;transit_gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ec2_transit_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
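&lt;p&gt;A spoke account can only attach its VPC once the hub's Transit Gateway has been shared with it, which is what &lt;code&gt;auto_accept_shared_attachments&lt;/code&gt; then accepts automatically. One way to do the sharing, sketched with AWS RAM; the spoke account ID is a placeholder.&lt;/p&gt;

```hcl
# Share the hub Transit Gateway with a spoke account via AWS RAM.
resource "aws_ram_resource_share" "tgw" {
  name                      = "tgw-share"
  allow_external_principals = false # restrict to the same AWS Organization
}

resource "aws_ram_resource_association" "tgw" {
  resource_arn       = aws_ec2_transit_gateway.main.arn
  resource_share_arn = aws_ram_resource_share.tgw.arn
}

resource "aws_ram_principal_association" "spoke1" {
  principal          = "111122223333" # spoke account ID (placeholder)
  resource_share_arn = aws_ram_resource_share.tgw.arn
}
```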



&lt;h2&gt;Centralized Configuration Management&lt;/h2&gt;

&lt;h3&gt;Automation with Terraform&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Reusable module for different providers&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"site_to_site_vpn"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/vpn-connection"&lt;/span&gt;

  &lt;span class="nx"&gt;for_each&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cloud_providers&lt;/span&gt;

  &lt;span class="nx"&gt;provider_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;
  &lt;span class="nx"&gt;customer_gateway_ip&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gateway_ip&lt;/span&gt;
  &lt;span class="nx"&gt;bgp_asn&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bgp_asn&lt;/span&gt;
  &lt;span class="nx"&gt;routes&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;routes&lt;/span&gt;
  &lt;span class="nx"&gt;vpn_gateway_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpn_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
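&lt;p&gt;For the &lt;code&gt;for_each&lt;/code&gt; above to work, &lt;code&gt;var.cloud_providers&lt;/code&gt; needs a consistent shape, a map of objects with one entry per provider. Something like the following; the GCP address and the route CIDRs are illustrative assumptions.&lt;/p&gt;

```hcl
# One entry per peer cloud; keys become the module instance names.
variable "cloud_providers" {
  type = map(object({
    gateway_ip = string
    bgp_asn    = number
    routes     = list(string)
  }))

  default = {
    azure = {
      gateway_ip = "52.143.72.189"
      bgp_asn    = 65000
      routes     = ["10.1.0.0/16"] # placeholder CIDR
    }
    gcp = {
      gateway_ip = "35.242.10.77" # placeholder address
      bgp_asn    = 64514
      routes     = ["10.2.0.0/16"] # placeholder CIDR
    }
  }
}
```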



&lt;h3&gt;Unified Monitoring&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# CloudWatch dashboard for multi-VPN monitoring&lt;/span&gt;
aws cloudwatch put-dashboard &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dashboard-name&lt;/span&gt; &lt;span class="s2"&gt;"Multi-Cloud-VPN-Monitoring"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dashboard-body&lt;/span&gt; file://vpn-dashboard.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Security Best Practices&lt;/h2&gt;

&lt;h3&gt;Encryption and Authentication&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use IKEv2 with AES-256-GCM&lt;/li&gt;
&lt;li&gt;Implement Perfect Forward Secrecy (PFS)&lt;/li&gt;
&lt;li&gt;Rotate pre-shared keys regularly&lt;/li&gt;
&lt;/ul&gt;
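&lt;p&gt;On AWS Site-to-Site VPN, these recommendations map to the tunnel options of &lt;code&gt;aws_vpn_connection&lt;/code&gt;. A sketch restricting negotiation for tunnel 1 (the same options exist with a &lt;code&gt;tunnel2_&lt;/code&gt; prefix); the DH group choice is one reasonable option, not the only one:&lt;/p&gt;

```hcl
resource "aws_vpn_connection" "hardened" {
  vpn_gateway_id      = aws_vpn_gateway.main.id
  customer_gateway_id = aws_customer_gateway.azure.id
  type                = "ipsec.1"

  # Restrict negotiation to IKEv2 with AES-256-GCM; PFS comes from
  # proposing a DH group in phase 2 as well.
  tunnel1_ike_versions                 = ["ikev2"]
  tunnel1_phase1_encryption_algorithms = ["AES256-GCM-16"]
  tunnel1_phase2_encryption_algorithms = ["AES256-GCM-16"]
  tunnel1_phase1_dh_group_numbers      = [20]
  tunnel1_phase2_dh_group_numbers      = [20]
}
```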

&lt;h3&gt;Network Segmentation&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Provider-specific ACLs&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_network_acl"&lt;/span&gt; &lt;span class="s2"&gt;"azure_traffic"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;rule_no&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.1.0.0/16"&lt;/span&gt; &lt;span class="c1"&gt;# Azure range&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Dynamic Route Management&lt;/h2&gt;

&lt;h3&gt;BGP Configuration&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpn_connection"&lt;/span&gt; &lt;span class="s2"&gt;"bgp_enabled"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpn_gateway_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpn_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;customer_gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_customer_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azure&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ipsec.1"&lt;/span&gt;

  &lt;span class="c1"&gt;# BGP activation for dynamic route exchange&lt;/span&gt;
  &lt;span class="nx"&gt;tunnel1_inside_cidr&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"169.254.10.0/30"&lt;/span&gt;
  &lt;span class="nx"&gt;tunnel2_inside_cidr&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"169.254.20.0/30"&lt;/span&gt;
  &lt;span class="nx"&gt;tunnel1_bgp_asn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"65000"&lt;/span&gt;
  &lt;span class="nx"&gt;tunnel2_bgp_asn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"65000"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
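&lt;p&gt;Routes learned over BGP still have to be propagated into the VPC route tables, or they never show up there. A minimal sketch; the route table reference is an assumption:&lt;/p&gt;

```hcl
# Propagate BGP-learned routes from the virtual private gateway
# into a VPC route table (route table name is a placeholder).
resource "aws_vpn_gateway_route_propagation" "bgp" {
  vpn_gateway_id = aws_vpn_gateway.main.id
  route_table_id = aws_route_table.private.id
}
```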



&lt;h2&gt;Troubleshooting Common Issues&lt;/h2&gt;

&lt;h3&gt;Connectivity Diagnostics&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# VPN tunnel verification&lt;/span&gt;
aws ec2 describe-vpn-connections &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--vpn-connection-ids&lt;/span&gt; vpn-12345678

&lt;span class="c"&gt;# Metric monitoring&lt;/span&gt;
aws cloudwatch get-metric-statistics &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; AWS/VPN &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--metric-name&lt;/span&gt; TunnelState &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dimensions&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VpnId,Value&lt;span class="o"&gt;=&lt;/span&gt;vpn-12345678
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Failure Management&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Implement redundant tunnels&lt;/li&gt;
&lt;li&gt;Monitor health metrics proactively&lt;/li&gt;
&lt;li&gt;Automate failover&lt;/li&gt;
&lt;/ul&gt;
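&lt;p&gt;Proactive monitoring and automated failover can start from something as small as a CloudWatch alarm on the &lt;code&gt;TunnelState&lt;/code&gt; metric. A sketch; the VPN ID reuses the placeholder from the diagnostics commands above, and the SNS topic is an assumption:&lt;/p&gt;

```hcl
# Alert when a VPN tunnel goes down; TunnelState is 1 (up) or 0 (down).
resource "aws_cloudwatch_metric_alarm" "tunnel_down" {
  alarm_name          = "vpn-tunnel-down"
  namespace           = "AWS/VPN"
  metric_name         = "TunnelState"
  statistic           = "Minimum"
  period              = 60
  evaluation_periods  = 3
  threshold           = 1
  comparison_operator = "LessThanThreshold"

  dimensions = {
    VpnId = "vpn-12345678" # placeholder VPN connection ID
  }

  alarm_actions = [aws_sns_topic.network_alerts.arn] # placeholder topic
}
```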

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Deploying multi-provider site-to-site VPNs requires careful planning but offers significant flexibility and resilience. By standardizing configurations, automating deployments, and implementing centralized monitoring, organizations can create robust and scalable hybrid cloud networks.&lt;/p&gt;

&lt;p&gt;The multi-cloud approach is not just a technical trend but a business strategy that, when properly implemented, can provide a lasting competitive advantage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpn</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Check this if you plan to take SAA-C03 soon</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Wed, 09 Apr 2025 21:19:55 +0000</pubDate>
      <link>https://dev.to/jekobokidou/check-this-if-you-plan-to-take-scaa-c03-soon-take-a-look-at-this-article-5f7o</link>
      <guid>https://dev.to/jekobokidou/check-this-if-you-plan-to-take-scaa-c03-soon-take-a-look-at-this-article-5f7o</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn" class="crayons-story__hidden-navigation-link"&gt;The 190 things you need to know to be an AWS Certified Solutions Architect – Associate&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/aws-builders"&gt;
            &lt;img alt="AWS Community Builders  logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2794%2F88da75b6-aadd-4ea1-8083-ae2dfca8be94.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/jekobokidou" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F633961%2F9e2ab464-dbaa-40af-9f6d-fd067c0a5a1c.jpeg" alt="jekobokidou profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/jekobokidou" class="crayons-story__secondary fw-medium m:hidden"&gt;
              jekobokidou
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                jekobokidou
                
              
              &lt;div id="story-author-preview-content-1181508" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/jekobokidou" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F633961%2F9e2ab464-dbaa-40af-9f6d-fd067c0a5a1c.jpeg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;jekobokidou&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/aws-builders" class="crayons-story__secondary fw-medium"&gt;AWS Community Builders &lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Mar 17 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn" id="article-link-1181508"&gt;
          The 190 things you need to know to be an AWS Certified Solutions Architect – Associate
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/aws"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;aws&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/saa03"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;saa03&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/certification"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;certification&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/fire-f60e7a582391810302117f987b22a8ef04a2fe0df7e3258a5f49332df1cec71e.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/raised-hands-74b2099fd66a39f2d7eed9305ee0f4553df0eb7b4f11b01b6b1b499973048fe5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;24&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              7&lt;span class="hidden s:inline"&gt; comments&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            38 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>aws</category>
      <category>saa03</category>
      <category>certification</category>
    </item>
    <item>
      <title>The 190 things you need to know to be an AWS Certified Solutions Architect – Associate</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Mon, 17 Mar 2025 02:23:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn</link>
      <guid>https://dev.to/aws-builders/the-190-things-you-need-to-know-to-be-an-aws-certified-solutions-architect-associate-epn</guid>
      <description>&lt;p&gt;I passed the AWS Certified Solutions Architect – Associate certification and I want to share with you the 190 things you need to know for this exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;001&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
On termination of an EC2 instance, the default behavior is to also delete the attached EBS root volume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;002&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Amazon ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used.&lt;br&gt;
Amazon ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;003&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
The minimum storage duration is 30 days before you can transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA or Amazon S3 Standard-IA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;004&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
With VPC sharing, the account that owns the VPC (the owner) shares one or more subnets with other accounts (participants) that belong to the same organization in AWS Organizations. The owner account cannot share the VPC itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;005&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
With AWS Global Accelerator, you can shift traffic gradually or all at once between the blue and the green environment (and vice versa) without being subject to DNS caching on client devices and internet resolvers; traffic-dial and endpoint-weight changes take effect within seconds. AWS Global Accelerator does not rely on DNS: it uses static anycast IP addresses that dynamically direct traffic to the new backend endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;006&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
To establish a private connection between your virtual private cloud (VPC) and the Amazon EFS API, you can create an interface VPC endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;007&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
You can send data over a Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;008&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Using task scheduling in AWS DataSync, you can periodically execute a transfer task from an on-premises storage system to Amazon EFS over Direct Connect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;009&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS Direct Connect provides three types of virtual interfaces: public, private, and transit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public virtual interface&lt;/strong&gt;: To connect to AWS resources that are reachable by a public IP address, such as an Amazon Simple Storage Service (Amazon S3) bucket or AWS public endpoints, use a public virtual interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private virtual interface&lt;/strong&gt;: To connect to your resources hosted in an Amazon Virtual Private Cloud (Amazon VPC) using their private IP addresses, use a private virtual interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transit virtual interface&lt;/strong&gt;: To connect to your resources hosted in an Amazon VPC (using their private IP addresses) through a transit gateway, use a transit virtual interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;010&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
With Amazon RDS Custom for Oracle, you can access and customize your database server host and operating system, for example by applying special patches and changing the database software settings to support third-party applications that require privileged access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;011&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Only a launch template (not a launch configuration) lets an Auto Scaling group provision capacity across multiple instance types, using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;012&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
All the dependencies of a Lambda function are also packaged into the single Lambda deployment package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;013&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
By default, AWS Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once an AWS Lambda function is VPC-enabled, it will need a route through a Network Address Translation gateway (NAT gateway) in a public subnet to access public resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;014&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
If you intend to reuse code in more than one AWS Lambda function, you should consider creating an AWS Lambda Layer for the reusable code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;015&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Dedicated Hosts enable you to use your existing server-bound software licenses like Windows Server and address corporate compliance and regulatory requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;016&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Use multipart uploads for faster file uploads into the destination Amazon S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;017&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into the destination S3 bucket.&lt;br&gt;
Amazon S3 Transfer Acceleration (S3TA) can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;018&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon S3 delivers strong read-after-write consistency automatically, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.&lt;br&gt;
After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;019&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams, so that data stored in an S3 bucket is streamed to Kinesis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;020&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 establishes federal standards protecting sensitive health information from disclosure without a patient's consent. Both Amazon ElastiCache for Redis and Amazon ElastiCache for Memcached are HIPAA Eligible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;021&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Any database engine level upgrade for an Amazon RDS database instance with Multi-AZ deployment triggers both the primary and standby database instances to be upgraded at the same time. This causes downtime until the upgrade is complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;022&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
AWS Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS.&lt;br&gt;
&lt;em&gt;Effective November 12, 2024, AWS has discontinued end-of-life AWS Snow devices, specifically the Snowcone HDD, Snowcone SSD, Snowball Edge Storage optimized 80TB, Snowball Edge Compute optimized with 52 vCPUs, and the Snowball Edge Compute optimized with GPU.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;023&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
The data stored on an AWS Snowball Edge device can be copied into an Amazon S3 bucket and later transitioned into Amazon S3 Glacier via a lifecycle policy. You can't copy data from AWS Snowball Edge devices directly into Amazon S3 Glacier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;024&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon S3. Use Amazon Macie to identify any sensitive data stored on Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;025&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;026&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can leverage an AWS Config managed rule to check if any ACM certificates in your account are marked for expiration within the specified number of days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;027&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;028&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;029&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;030&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
When you create an encrypted Amazon EBS volume and attach it to a supported instance type, data stored at rest on the volume, data moving between the volume and the instance, snapshots created from the volume and volumes created from those snapshots are all encrypted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;031&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
AWS Cost Explorer helps you identify under-utilized Amazon EC2 instances that may be downsized on an instance-by-instance basis within the same instance family, and also understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;032&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;033&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;034&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
By default, FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching.&lt;/p&gt;
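&lt;p&gt;The batching arithmetic behind those default quotas can be sketched as:&lt;/p&gt;

```python
def fifo_peak_messages_per_second(batch_size):
    """Default FIFO quota: 300 API operations per second, and each
    send/receive/delete operation can batch up to 10 messages."""
    OPS_PER_SECOND = 300
    MAX_BATCH = 10
    return OPS_PER_SECOND * min(batch_size, MAX_BATCH)

# Without batching: 300 msg/s. With full batches of 10: 3,000 msg/s.
```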

&lt;p&gt;&lt;strong&gt;035&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
The name of a FIFO queue must end with the .fifo suffix. The suffix counts towards the 80-character queue name limit.&lt;br&gt;
You can't convert an existing standard queue into a FIFO queue.&lt;/p&gt;
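&lt;p&gt;A tiny validator for the two naming rules above (the mandatory suffix and the 80-character limit; other character restrictions are omitted for brevity):&lt;/p&gt;

```python
def is_valid_fifo_queue_name(name):
    # The .fifo suffix is required and counts toward the 80-character limit.
    if len(name) > 80:
        return False
    return name.endswith(".fifo")

# "orders.fifo" is a valid FIFO queue name; "orders" is not.
```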

&lt;p&gt;&lt;strong&gt;036&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS Global Accelerator is designed to optimize traffic latency and utilize the AWS global network. AWS Global Accelerator works at the network layer (Layer 4 – TCP/UDP), which makes it an excellent choice for applications that use UDP.&lt;br&gt;
Without AWS Global Accelerator: It can take many networks to reach the application. Paths to and from the application may differ. Each hop impacts performance and can introduce risks.&lt;br&gt;
With AWS Global Accelerator: Adding AWS Global Accelerator removes these inefficiencies. It leverages the Global AWS Network, resulting in improved performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;037&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening. It also contains agents you approve for logging, security, performance monitoring, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;038&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance. You can use Amazon EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;039&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
The aws s3 sync command uses the CopyObject API to copy objects between Amazon S3 buckets. The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren't in the target bucket. If the operation fails, you can run the sync command again without duplicating previously copied objects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;040&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon S3 Batch Replication provides you a way to replicate objects that existed before a replication configuration was in place, objects that have previously been replicated, and objects that have failed replication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;041&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;042&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
It is not possible to modify a launch configuration once it is created. The correct option is to create a new launch configuration to use the correct instance type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;043&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;044&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
When you launch an Amazon EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;045&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Distributing static content through Amazon S3 allows you to offload most of the network usage to Amazon S3 and free up the applications running on Amazon ECS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;046&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Each Aurora DB cluster can have up to 15 Aurora Replicas in addition to the primary DB instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;047&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
You use the reader endpoint for read-only connections for your Aurora cluster. This endpoint uses a load-balancing mechanism to help your cluster handle a query-intensive workload. The reader endpoint is the endpoint that you supply to applications that do reporting or other read-only operations on the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;048&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
An AWS transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;049&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
A NAT instance or a NAT Gateway can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;050&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
A NAT instance can be used as a bastion server. Security groups can be associated with a NAT instance, and a NAT instance supports port forwarding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;051&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Amazon DynamoDB Accelerator (DAX) is used to natively cache Amazon DynamoDB reads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;052&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
You can use Amazon CloudFront to improve application performance to serve static content from Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;053&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
You can use Amazon CloudWatch Alarms to send an email via Amazon SNS whenever any of the Amazon EC2 instances breaches a certain threshold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;054&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
AWS customers can access Amazon Simple Queue Service (Amazon SQS) from their Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints, without using public IPs, and without needing to traverse the public internet. VPC endpoints for Amazon SQS are powered by AWS PrivateLink, a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;055&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. The load balancer rewrites the destination IP address from the data packet before forwarding it to the target instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;056&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;057&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;058&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
For heterogeneous database migrations, first use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;059&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Create a virtual private gateway (VGW) on the AWS side of the VPN and a Customer Gateway on the on-premises side of the VPN.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;060&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
The main pricing parameter while using the AWS Direct Connect connection is the Data Transfer Out (DTO) from AWS to the on-premises data center. DTO refers to the cumulative network traffic that is sent through AWS Direct Connect to destinations outside of AWS. This is charged per gigabyte (GB), and unlike capacity measurements, DTO refers to the amount of data transferred, not the speed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;061&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group. Therefore, deploying 15 Amazon EC2 instances in a single spread placement group requires 3 Availability Zones.&lt;/p&gt;
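&lt;p&gt;The AZ count is just a ceiling division against the seven-instance quota:&lt;/p&gt;

```python
import math

MAX_SPREAD_INSTANCES_PER_AZ = 7  # quota for spread placement groups

def min_azs_for_spread_group(instance_count):
    """Minimum Availability Zones needed to host a spread placement group."""
    return math.ceil(instance_count / MAX_SPREAD_INSTANCES_PER_AZ)

# 15 instances need ceil(15 / 7) = 3 Availability Zones.
```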

&lt;p&gt;&lt;strong&gt;062&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
AWS Lambda can be combined with DynamoDB to run code for virtually any type of application or backend service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;063&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
If you have multiple AWS Site-to-Site VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. This enables your remote sites to communicate with each other, and not just with the VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;064&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Using IAM roles, it is possible to access cross-account resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;065&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
A user pool is a user directory in Amazon Cognito. You can leverage Amazon Cognito User Pools to either provide built-in user management or integrate with external identity providers, such as Facebook, Twitter, Google+, and Amazon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;066&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can use Secure Socket Layer / Transport Layer Security (SSL/TLS) connections to encrypt data in transit. Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. For MySQL, you launch the MySQL client using the --ssl_ca parameter to reference the public key to encrypt connections. Using SSL, you can encrypt a PostgreSQL connection between your applications and your PostgreSQL DB instances. You can also force all connections to your PostgreSQL DB instance to use SSL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;067&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
A permissions boundary can be used to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;068&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;069&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;070&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;071&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Kinesis data streams can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;072&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
By default, scripts entered as user data are executed with root user privileges. By default, user data runs only during the boot cycle when you first launch an instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;073&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance. The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;074&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
For example, if you are building a social feed into your application, you can use Neptune to provide results that prioritize showing your users the latest updates from their family, from friends whose updates they ‘Like,’ and from friends who live close to them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;075&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
There are two types of VPC endpoints: Interface Endpoints and Gateway Endpoints. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;Interface Endpoint&lt;/strong&gt; is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Gateway Endpoint&lt;/strong&gt; is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and Amazon DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;076&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Kinesis Data Streams Enhanced fan-out allows developers to scale up the number of stream consumers (applications reading data from a stream in real-time) by offering each stream consumer its own 2MB/second pipe of read throughput per shard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;077&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
AWS recommends that you use AWS CloudTrail for logging bucket and object-level actions for your Amazon S3 resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;078&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Simple Notification Service (SNS) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. It facilitates the delivery of messages or notifications to subscribing endpoints or clients, including mobile devices, email addresses, and SQS queues. A connection between Amazon Kinesis Data Streams (KDS) and SNS can be established using EventBridge Pipes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;079&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;080&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
For a Lambda function to be able to access an S3 bucket, create an IAM role for the AWS Lambda function that grants access to the Amazon S3 bucket. Set the IAM role as the AWS Lambda function's execution role. Make sure that the bucket policy also grants access to the AWS Lambda function's execution role.&lt;/p&gt;
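&lt;p&gt;A sketch of what such a bucket policy might look like, built as a Python dictionary. The account ID, role name, and bucket name are all hypothetical placeholders, and a real policy would list whichever S3 actions the function needs:&lt;/p&gt;

```python
import json

# Hypothetical names: account 123456789012, role my-lambda-role, bucket my-bucket.
ROLE_ARN = "arn:aws:iam::123456789012:role/my-lambda-role"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLambdaExecutionRole",
            "Effect": "Allow",
            # Grant access to the function's execution role, not to the function itself.
            "Principal": {"AWS": ROLE_ARN},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}

policy_json = json.dumps(bucket_policy)  # the string you would attach to the bucket
```

&lt;p&gt;Both sides must agree: the execution role's identity policy allows the S3 actions, and the bucket policy names the role as a principal.&lt;/p&gt;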

&lt;p&gt;&lt;strong&gt;081&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
You can use placement groups to influence the placement of a group of interdependent EC2 instances to meet the needs of your workload.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster&lt;/strong&gt; – packs instances close together inside an Availability Zone (AZ). This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition&lt;/strong&gt; – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spread&lt;/strong&gt; – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;082&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon API Gateway&lt;/strong&gt; - To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account. In the token bucket algorithm, the burst is the maximum bucket size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Simple Queue Service (SQS)&lt;/strong&gt; is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Kinesis&lt;/strong&gt; - Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.&lt;/li&gt;
&lt;/ul&gt;
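&lt;p&gt;The token bucket algorithm that API Gateway's throttling is described in terms of can be sketched in a few lines. This is a conceptual toy (time is advanced manually), not API Gateway's actual implementation:&lt;/p&gt;

```python
class TokenBucket:
    """Minimal token bucket: a steady-state refill rate plus a burst,
    where the burst is the maximum bucket size."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.burst = burst          # maximum bucket size
        self.tokens = float(burst)  # bucket starts full

    def advance(self, seconds):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def allow(self):
        # Each request costs one token; reject when the bucket is empty.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bucket allowing 10 req/s steady state with a burst of 5:
# the first 5 back-to-back requests pass, the 6th is throttled
# until enough time elapses for a token to refill.
bucket = TokenBucket(rate_per_sec=10, burst=5)
```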

&lt;p&gt;&lt;strong&gt;083&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Kinesis Agent cannot write to an Amazon Kinesis Data Firehose delivery stream whose source is already set to an Amazon Kinesis data stream. When a Kinesis data stream is configured as the source of a Firehose delivery stream, Firehose’s PutRecord and PutRecordBatch operations are disabled, so Kinesis Agent cannot write to the delivery stream directly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;084&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. The default behavior can be changed to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.&lt;/p&gt;
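&lt;p&gt;A sketch of the block device mapping you would pass when launching the instance (the device name &lt;code&gt;/dev/xvda&lt;/code&gt; is a typical root device for Amazon Linux AMIs; check your AMI's actual root device name):&lt;/p&gt;

```python
# Block device mapping that keeps the root EBS volume after the instance
# terminates, by overriding the default DeleteOnTermination=True.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {"DeleteOnTermination": False},
    }
]
```

&lt;p&gt;This structure is what RunInstances (or &lt;code&gt;aws ec2 run-instances --block-device-mappings&lt;/code&gt;) expects for the override.&lt;/p&gt;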

&lt;p&gt;&lt;strong&gt;085&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created. However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. So this is the correct option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;086&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. The Distributed File System Replication (DFSR) service is a multi-master replication engine that is used to keep folders synchronized on multiple servers. Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a single folder structure up to hundreds of PB in size.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;087&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You cannot convert an existing KMS single-Region key to a KMS multi-Region key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;088&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
EFS Max I/O performance mode is used to scale to higher levels of aggregate throughput and operations per second. This scaling is done with a tradeoff of slightly higher latencies for file metadata operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;089&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
EFS General Purpose performance mode is ideal for latency-sensitive use cases, like web serving environments, content management systems, home directories, and general file serving. If you don't choose a performance mode when you create your file system, Amazon EFS selects the General Purpose mode for you by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;090&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
With Amazon RDS Read Replicas there are data transfer charges for replicating data across AWS Regions. You are not charged for the data transfer incurred in replicating data between your source DB instance and read replica within the same AWS Region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;091&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;092&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules resources with RAM. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;093&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon EC2 Auto Scaling does not immediately terminate instances with an Impaired status. Instead, Amazon EC2 Auto Scaling waits a few minutes for the instance to recover. Amazon EC2 Auto Scaling might also delay or not terminate instances that fail to report data for status checks. This usually happens when there is insufficient data for the status check metrics in Amazon CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;094&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
By default, an Amazon S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;095&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Termination priority of EC2 instances in an Auto Scaling group - per the default termination policy, priority is first given to any instance whose termination aligns with the allocation strategy for On-Demand vs. Spot Instances. Next, Amazon EC2 Auto Scaling considers the instance with the oldest launch template, unless an instance uses a launch configuration, in which case that instance is terminated first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;096&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.&lt;/p&gt;
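
&lt;p&gt;As a sketch, a cross-account read grant via a bucket policy can be built like this (the bucket name and account ID are hypothetical placeholders):&lt;/p&gt;

```python
import json

# Minimal sketch of a bucket policy granting another AWS account read access
# to every object in a bucket. Bucket name and account ID are placeholders.
def cross_account_read_policy(bucket, account_id):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::" + account_id + ":root"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::" + bucket + "/*",
        }],
    }

policy = cross_account_read_policy("example-bucket", "111122223333")
print(json.dumps(policy, indent=2))
```

&lt;p&gt;The resulting JSON document would then be attached to the bucket itself (for example via the PutBucketPolicy API).&lt;/p&gt;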

&lt;p&gt;&lt;strong&gt;097&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;098&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;099&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
To migrate accounts from one organization to another, you must have root or IAM access to both the member account and the management (formerly master) account. Here are the steps to follow: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remove the member account from the old organization &lt;/li&gt;
&lt;li&gt;Send an invite to the member account from the new Organization &lt;/li&gt;
&lt;li&gt;Accept the invite to the new organization from the member account&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;100&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
s3:ListBucket applies to buckets, so the ARN takes the form "Resource":"arn:aws:s3:::mybucket", with no trailing /*. s3:GetObject applies to objects within the bucket, so the ARN takes the form "Resource":"arn:aws:s3:::mybucket/*", where the trailing /* matches all objects within the bucket mybucket.&lt;/p&gt;
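
&lt;p&gt;A small sketch of this distinction, pairing each action with the resource ARN form it expects (the bucket name mybucket comes from the note above; the helper itself is illustrative):&lt;/p&gt;

```python
# Bucket-level actions target the bucket ARN; object-level actions target
# the objects inside it, so their ARN needs the trailing /* wildcard.
def s3_resource_arn(action, bucket):
    bucket_level_actions = {"s3:ListBucket", "s3:GetBucketLocation"}
    if action in bucket_level_actions:
        return "arn:aws:s3:::" + bucket
    return "arn:aws:s3:::" + bucket + "/*"

print(s3_resource_arn("s3:ListBucket", "mybucket"))  # arn:aws:s3:::mybucket
print(s3_resource_arn("s3:GetObject", "mybucket"))   # arn:aws:s3:::mybucket/*
```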

&lt;p&gt;&lt;strong&gt;101&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
An IAM permissions boundary can be applied only to IAM users or roles, not to IAM groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;102&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
To allow multiple consumers to read data from an SQS FIFO queue in parallel, use the message group feature by setting the message group ID ("MessageGroupId") attribute on each message.&lt;/p&gt;
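
&lt;p&gt;A sketch of the send parameters for a FIFO queue (queue URL and IDs are hypothetical; the shape mirrors the SQS SendMessage API, e.g. via boto3):&lt;/p&gt;

```python
# Messages sharing a MessageGroupId are delivered in order within the group;
# distinct group IDs let multiple consumers process messages in parallel.
def fifo_send_params(queue_url, body, group_id, dedup_id):
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,          # ordering/parallelism boundary
        "MessageDeduplicationId": dedup_id,  # needed unless content-based dedup is on
    }

params = fifo_send_params(
    "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",
    "order-created", "customer-42", "msg-0001")
```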

&lt;p&gt;&lt;strong&gt;103&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
When you enable automatic key rotation for a KMS key, AWS KMS generates new cryptographic material for the KMS key every year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;104&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at a hardware level, even if those accounts are linked to a single-payer account. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;105&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Partition placement group – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;106&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can share the AWS Key Management Service (AWS KMS) key that was used to encrypt the snapshot with any accounts that you want to be able to access the snapshot. You share an AWS KMS key with another AWS account by adding that account to the AWS KMS key policy.&lt;/p&gt;
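
&lt;p&gt;A minimal sketch of the key policy statement that grants a second account use of the key (the account ID is a placeholder and the action list is illustrative, not exhaustive):&lt;/p&gt;

```python
# Statement added to the KMS key policy so another account can use the key,
# e.g. to copy or decrypt a shared encrypted snapshot.
def kms_share_statement(external_account_id):
    return {
        "Sid": "AllowUseOfTheKeyByExternalAccount",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::" + external_account_id + ":root"},
        "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
        "Resource": "*",
    }

stmt = kms_share_statement("444455556666")
```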

&lt;p&gt;&lt;strong&gt;107&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
A centralized Shared Services VPC hosts common services like databases, authentication servers (AD), monitoring, and proxies. All other VPCs can access it through AWS Transit Gateway, simplifying connectivity and network management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;108&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. The Elastic IP address cannot be changed after you associate it with the NAT Gateway. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;109&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, leaderboard, and Q&amp;amp;A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in the cache.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;110&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
An Internet Gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;111&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Provisioned IOPS SSD (io1) is backed by solid-state drives (SSDs) and is a high-performance Amazon EBS storage option designed for critical, I/O intensive database and application workloads, as well as throughput-intensive database workloads. io1 is designed to deliver a consistent baseline performance of up to 50 IOPS/GB to a maximum of 64,000 IOPS and provide up to 1,000 MB/s of throughput per volume.&lt;/p&gt;
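
&lt;p&gt;The provisioning math above can be sketched as:&lt;/p&gt;

```python
# io1: up to 50 IOPS per GiB of volume size, capped at 64,000 IOPS per volume.
def max_io1_iops(size_gib):
    return min(50 * size_gib, 64000)

print(max_io1_iops(100))   # 5000
print(max_io1_iops(2000))  # the 50 IOPS/GiB ratio hits the 64,000 per-volume cap
```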

&lt;p&gt;&lt;strong&gt;112&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
When you use Amazon CloudFront with an Amazon S3 bucket as the origin, configure an origin access identity (OAI) and associate it with the Amazon CloudFront distribution. Set up the permissions in the Amazon S3 bucket policy so that only the OAI can read the objects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;113&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon CloudFront provides two ways to send authenticated requests to an Amazon S3 origin: origin access control (OAC) and origin access identity (OAI). &lt;br&gt;
AWS recommends using OAC because it supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All Amazon S3 buckets in all AWS Regions, including opt-in Regions launched after December 2022&lt;/li&gt;
&lt;li&gt;Amazon S3 server-side encryption with AWS KMS (SSE-KMS)&lt;/li&gt;
&lt;li&gt;Dynamic requests (PUT and DELETE) to Amazon S3&lt;/li&gt;
&lt;/ul&gt;
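
&lt;p&gt;With OAC, the bucket policy grants read access to the CloudFront service principal, scoped to a single distribution by its ARN. A sketch (bucket name, account ID, and distribution ID are placeholders):&lt;/p&gt;

```python
# Minimal sketch of the S3 bucket policy paired with a CloudFront OAC.
def oac_bucket_policy(bucket, account_id, distribution_id):
    dist_arn = ("arn:aws:cloudfront::" + account_id
                + ":distribution/" + distribution_id)
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::" + bucket + "/*",
            # Only requests signed on behalf of this distribution are allowed.
            "Condition": {"StringEquals": {"AWS:SourceArn": dist_arn}},
        }],
    }

oac_policy = oac_bucket_policy("my-origin-bucket", "111122223333", "EDFDVBD6EXAMPLE")
```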

&lt;p&gt;&lt;strong&gt;114&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Kinesis Data Streams is recommended when you need the ability to consume records in the same order a few hours later.&lt;/p&gt;

&lt;p&gt;For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis Data Streams stores data for up to 365 days (24 hours by default), you can easily run the audit application up to 365 days behind the billing application.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;115&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CNAME&lt;/strong&gt; records can be used to map one domain name to another. Keep in mind, though, that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;, newproduct.example.com, and so on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alias&lt;/strong&gt; records let you route traffic to selected AWS resources, such as Amazon CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record. Third-party websites do not qualify as alias targets, because alias records can only point at supported AWS resources and records you control; an alias record cannot be used to map one domain name to an arbitrary external domain.&lt;/li&gt;
&lt;/ul&gt;
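
&lt;p&gt;As a sketch, the two record shapes look like this in a Route 53 ChangeBatch (domain names and the distribution hostname are placeholders; Z2FDTNDATAQYW2 is the fixed hosted-zone ID that CloudFront alias targets use):&lt;/p&gt;

```python
# A CNAME works for a subdomain but is invalid at the zone apex.
cname_record = {
    "Name": "www.example.com",
    "Type": "CNAME",
    "TTL": 300,
    "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}],
}

# At the apex (example.com), only an alias record can point at CloudFront.
alias_record = {
    "Name": "example.com",
    "Type": "A",
    "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's alias hosted-zone ID
        "DNSName": "d111111abcdef8.cloudfront.net",
        "EvaluateTargetHealth": False,
    },
}
```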

&lt;p&gt;&lt;strong&gt;116&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable service control policy (SCP), the user or role can't perform that action.&lt;br&gt;
Service control policy (SCP) affects all users and roles in the member accounts, including root user of the member accounts.&lt;br&gt;
Service control policies (SCPs) do not affect service-linked roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;117&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
You can change the tenancy of an instance from dedicated to host.&lt;br&gt;
You can change the tenancy of an instance from host to dedicated.&lt;/p&gt;

&lt;p&gt;Each Amazon EC2 instance that you launch into a VPC has a tenancy attribute. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;118&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
The nodes for your load balancer distribute requests from clients to registered targets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;When cross-zone load balancing is enabled&lt;/strong&gt;, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When cross-zone load balancing is disabled&lt;/strong&gt;, each load balancer node distributes traffic only across the registered targets in its Availability Zone.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;119&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Amazon SQS provides short polling and long polling to receive messages from a queue. By default, queues use short polling. With short polling, Amazon SQS sends the response right away, even if the query found no messages. With long polling, Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. Amazon SQS sends an empty response only if the polling wait time expires.&lt;/p&gt;
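
&lt;p&gt;A sketch of the receive parameters for both modes (the queue URL is a placeholder; the shape mirrors the SQS ReceiveMessage API, e.g. via boto3):&lt;/p&gt;

```python
# WaitTimeSeconds=0 means short polling (respond immediately, even if empty);
# values up to 20 enable long polling (wait for at least one message).
def receive_params(queue_url, wait_seconds):
    assert wait_seconds in range(0, 21)  # SQS allows 0 through 20 seconds
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": wait_seconds,
    }

short_poll = receive_params("https://sqs.us-east-1.amazonaws.com/111122223333/jobs", 0)
long_poll = receive_params("https://sqs.us-east-1.amazonaws.com/111122223333/jobs", 20)
```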

&lt;p&gt;&lt;strong&gt;120&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. The service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With &lt;strong&gt;cached volumes&lt;/strong&gt;, the AWS Volume Gateway stores the full volume in its Amazon S3 service bucket, and just the recently accessed data is retained in the gateway’s local cache for low-latency access.&lt;/li&gt;
&lt;li&gt;With &lt;strong&gt;stored volumes&lt;/strong&gt;, your entire data volume is available locally in the gateway, for fast read access. Volume Gateway also maintains an asynchronous copy of your stored volume in the service’s Amazon S3 bucket. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;121&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can copy an Amazon Machine Image (AMI) across AWS Regions&lt;br&gt;
You can share an Amazon Machine Image (AMI) with another AWS account&lt;br&gt;
Copying an Amazon Machine Image (AMI) backed by an encrypted snapshot cannot result in an unencrypted target snapshot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;122&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
If your instance fails a system status check, you can use Amazon CloudWatch alarm actions to automatically recover it. The recover option is available for over 90% of deployed customer Amazon EC2 instances. The Amazon CloudWatch recovery option works only for system check failures, not for instance status check failures. Also, if you terminate your instance, then it can't be recovered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;123&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
To ensure that Elastic Load Balancing stops sending requests to instances that are de-registering or unhealthy while keeping the existing connections open, use connection draining. This enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy. The maximum timeout value can be set between 1 and 3,600 seconds (the default is 300 seconds). When the maximum time limit is reached, the load balancer forcibly closes connections to the de-registering instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;124&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
When you create a Launch Template, the default value for the instance tenancy is shared and the instance tenancy is controlled by the tenancy attribute of the VPC. If you set the Launch Template Tenancy to shared (default) and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy. If you set the Launch Template Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;125&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. It enhances the performance of inter-instance communication that is critical for scaling HPC and machine learning applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;126&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon DynamoDB enables you to back up your table data continuously by using point-in-time recovery (PITR). When you enable PITR, DynamoDB backs up your table data automatically with per-second granularity so that you can restore to any given second in the preceding 35 days.&lt;/p&gt;

&lt;p&gt;PITR helps protect you against accidental writes and deletes. For example, if a test script writes accidentally to a production DynamoDB table or someone mistakenly issues a "DeleteItem" call, PITR has you covered.&lt;/p&gt;
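
&lt;p&gt;Enabling PITR is a single setting per table. A sketch of the parameters as accepted by the DynamoDB UpdateContinuousBackups API, e.g. via boto3 (the table name is a placeholder):&lt;/p&gt;

```python
# Once enabled, restores can target any second within the preceding 35 days.
pitr_params = {
    "TableName": "orders",
    "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
}
```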

&lt;p&gt;&lt;strong&gt;127&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect.&lt;/p&gt;

&lt;p&gt;AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command-line tools. It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon CloudWatch, and AWS CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;128&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;129&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;130&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
To resolve DNS queries for any resources in the on-premises network from the AWS VPC, you can create an outbound endpoint on Amazon Route 53 Resolver and then Amazon Route 53 Resolver can conditionally forward queries to resolvers on the on-premises network via this endpoint.&lt;br&gt;
To resolve any DNS queries for resources in the AWS VPC from the on-premises network, you can create an inbound endpoint on Amazon Route 53 Resolver and then DNS resolvers on the on-premises network can forward DNS queries to Amazon Route 53 Resolver via this endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;131&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
If your Spot Instance request is active and has an associated running Spot Instance, or your Spot Instance request is disabled and has an associated stopped Spot Instance, canceling the request does not terminate the instance; you must terminate the running Spot Instance manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;132&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.&lt;br&gt;
If your instance has a public IPv4 address, it retains the public IPv4 address after recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;133&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can use a Network Address Translation gateway (NAT gateway) to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside.&lt;br&gt;
You must also specify an Elastic IP address to associate with the NAT gateway when you create it. The Elastic IP address cannot be changed after you associate it with the NAT Gateway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;134&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS Directory Service for Microsoft Active Directory (aka AWS Managed Microsoft AD) is powered by an actual Microsoft Windows Server Active Directory (AD), managed by AWS. With AWS Managed Microsoft AD, you can run directory-aware workloads in the AWS Cloud such as SQL Server-based applications. You can also configure a trust relationship between AWS Managed Microsoft AD in the AWS Cloud and your existing on-premises Microsoft Active Directory, providing users and groups with access to resources in either domain, using single sign-on (SSO).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;135&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Simple Queue Service (SQS) delay queues let you postpone the delivery of new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.&lt;/p&gt;
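
&lt;p&gt;A sketch of validating and building the queue attribute (the bounds come from the note above; SQS expects attribute values as strings):&lt;/p&gt;

```python
# DelaySeconds: 0 (default) through 900 seconds (15 minutes).
def delay_queue_attributes(delay_seconds):
    assert delay_seconds in range(0, 901)
    return {"DelaySeconds": str(delay_seconds)}

attrs = delay_queue_attributes(900)  # maximum delay: 15 minutes
```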

&lt;p&gt;&lt;strong&gt;136&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions), where the database will automatically start-up, shut down, and scale capacity up or down based on your application's needs. It enables you to run your database in the cloud without managing any database instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;137&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
An Amazon DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;138&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Network Load Balancers expose a fixed (static) IP address per Availability Zone, therefore allowing your application to be reached predictably at those IPs, while letting you scale the application behind the Network Load Balancer using an Auto Scaling group (ASG).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;139&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon CloudFront can route to multiple origins based on the content type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;140&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Amazon ElastiCache is an ideal front-end for data stores such as Amazon RDS, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;141&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables: On-demand and Provisioned (default, free-tier eligible)&lt;br&gt;
Amazon DynamoDB on-demand is a flexible billing option capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers pay-per-request pricing for read and write requests so that you pay only for what you use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;142&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
With an Auto Scaling group, you can control when it adds instances (referred to as scaling out) or removes instances (referred to as scaling in) from your network architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;143&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can authenticate to your database instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a database instance. Instead, you use an authentication token.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;144&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon S3 Object Lock is an Amazon S3 feature that allows you to store objects using a write once, read many (WORM) model. You can use WORM protection for scenarios where it is imperative that data is not changed or deleted after it has been written.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;145&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to requests by looking up the endpoint response in the cache instead of calling your endpoint. The default TTL value for API caching is 300 seconds, the maximum is 3,600 seconds, and TTL=0 disables caching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;146&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
VPN connection is a secure connection between your on-premises equipment and your VPCs. Each VPN connection has two VPN tunnels which you can use for high availability. A VPN tunnel is an encrypted link where data can pass from the customer network to or from AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;147&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
With AWS Transit Gateway, you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection. AWS Transit Gateway also enables you to scale the IPsec VPN throughput with equal cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default maximum limit of 1.25 Gbps. You also must enable the dynamic routing option on your transit gateway to be able to take advantage of ECMP for scalability.&lt;/p&gt;
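
&lt;p&gt;The scaling effect of ECMP is simple multiplication; a back-of-the-envelope sketch:&lt;/p&gt;

```python
# Each VPN tunnel tops out at about 1.25 Gbps, so with ECMP the aggregate
# throughput scales with the number of tunnels actually carrying traffic.
def ecmp_max_throughput_gbps(tunnel_count):
    return 1.25 * tunnel_count

print(ecmp_max_throughput_gbps(1))  # 1.25
print(ecmp_max_throughput_gbps(4))  # 5.0 (two VPN connections, four tunnels)
```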

&lt;p&gt;&lt;strong&gt;148&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backup and Restore&lt;/strong&gt; - In most traditional environments, data is backed up to tape and sent off-site regularly. If you use this method, it can take a long time to restore your system in the event of a disruption or disaster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilot Light&lt;/strong&gt; - The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warm Standby&lt;/strong&gt; - The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi Site&lt;/strong&gt; - A multi-site solution runs in AWS as well as on your existing on-site infrastructure, in an active-active configuration. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;149&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You can host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. To use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;150&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Geoproximity routing&lt;/strong&gt; (route based on the locations of users and resources) - Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. For example, redirect users to the nearest data center or AWS Region.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geolocation routing&lt;/strong&gt; (choosing the content based on users location) - Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an Elastic Load Balancing (ELB) load balancer in the Frankfurt region.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;151&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications and/or AWS services. Amazon EventBridge is the only event-based service that integrates directly with third-party SaaS partners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;152&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Trust policy - Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role. An IAM role is both an identity and a resource that supports resource-based policies. For this reason, you must attach both a trust policy and an identity-based policy to an IAM role. The IAM service supports only one type of resource-based policy called a role trust policy, which is attached to an IAM role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;153&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.&lt;/p&gt;

&lt;p&gt;Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. Multi-AZ means the URL is the same, the failover is automated, and the CNAME will automatically be updated to point to the standby database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;154&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Only a standard Amazon SQS queue is allowed as an Amazon S3 event notification destination; a FIFO SQS queue is not supported.&lt;/p&gt;
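
&lt;p&gt;A minimal sketch of that constraint in practice, using the notification payload shape S3 accepts (boto3 style). The queue ARN is a placeholder; the guard simply documents that a FIFO queue (.fifo suffix) is not a valid destination.&lt;/p&gt;

```python
# Build the S3 bucket notification payload for a standard SQS queue.
# The ARN below is a placeholder; no AWS call is made here.

def s3_to_sqs_notification(queue_arn, events=("s3:ObjectCreated:*",)):
    # S3 cannot deliver event notifications to a FIFO queue directly.
    assert not queue_arn.endswith(".fifo"), "S3 events need a standard queue"
    return {
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": list(events)},
        ]
    }

config = s3_to_sqs_notification("arn:aws:sqs:us-east-1:123456789012:my-queue")
```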

&lt;p&gt;&lt;strong&gt;155&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Redis has purpose-built commands for working with real-time geospatial data at scale. You can perform operations like finding the distance between two elements (for example people or places) and finding all elements within a given distance of a point.&lt;/p&gt;
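
&lt;p&gt;To make the geospatial idea concrete without a Redis server, here is a standalone sketch of the great-circle (haversine) distance that commands such as GEODIST compute. The coordinates are illustrative (latitude, longitude) pairs in degrees.&lt;/p&gt;

```python
import math

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

paris, london = (48.8566, 2.3522), (51.5074, -0.1278)
distance = haversine_km(paris, london)  # roughly 340 km
```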

&lt;p&gt;&lt;strong&gt;156&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
If the Auto Scaling group (ASG) is using EC2 as the health check type and the Application Load Balancer (ALB) is using its in-built health check, there may be a situation where the ALB health check fails because the health check pings fail to receive a response from the instance. At the same time, the ASG health check can come back as successful because it is based on the EC2 health check. Therefore, in this scenario, the ALB will remove the instance from its inventory; however, the Auto Scaling group will fail to provide a replacement instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;157&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer.&lt;/p&gt;
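
&lt;p&gt;As a sketch, cross-zone load balancing is toggled through a load balancer attribute (the payload shape ELBv2's modify_load_balancer_attributes accepts). The ARN below is a placeholder; this builds the request body only.&lt;/p&gt;

```python
# Build the attribute payload to enable cross-zone load balancing on a
# Network Load Balancer (it is disabled there by default). Placeholder ARN.

def enable_cross_zone(load_balancer_arn):
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Attributes": [
            {"Key": "load_balancing.cross_zone.enabled", "Value": "true"},
        ],
    }

params = enable_cross_zone(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/net/my-nlb/0123456789abcdef")
```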

&lt;p&gt;&lt;strong&gt;158&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Using Amazon Route 53 DNS Failover, you can run your primary application simultaneously in multiple AWS regions around the world and fail over across regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;159&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Set up Amazon Route 53 active-passive type of failover routing policy when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;160&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
If your application endpoint has a failure or availability issue, AWS Global Accelerator will automatically redirect your new connections to a healthy endpoint within seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;161&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Depending on your Region, your Amazon S3 website endpoints follow one of these two formats.&lt;br&gt;
s3-website dot (.) Region ‐ &lt;em&gt;&lt;a href="http://bucket-name.s3-website.Region.amazonaws.com" rel="noopener noreferrer"&gt;http://bucket-name.s3-website.Region.amazonaws.com&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
s3-website dash (-) Region ‐ &lt;em&gt;&lt;a href="http://bucket-name.s3-website-Region.amazonaws.com" rel="noopener noreferrer"&gt;http://bucket-name.s3-website-Region.amazonaws.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
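
&lt;p&gt;The two endpoint formats differ only in the separator before the Region. A small helper (bucket and Region names are placeholders) makes the pattern explicit:&lt;/p&gt;

```python
# Build an S3 static website endpoint in either documented format:
# dot style  -> http://bucket.s3-website.Region.amazonaws.com
# dash style -> http://bucket.s3-website-Region.amazonaws.com

def s3_website_endpoint(bucket, region, dash_style=False):
    sep = "-" if dash_style else "."
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"

dot = s3_website_endpoint("my-bucket", "eu-west-1")
dash = s3_website_endpoint("my-bucket", "us-east-1", dash_style=True)
```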

&lt;p&gt;&lt;strong&gt;162&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon S3 object metadata, which can be included with the object, is not encrypted while being stored on Amazon S3. Therefore, AWS recommends that customers not place sensitive information in Amazon S3 metadata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;163&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
An IAM user with full administrator access can perform almost all AWS tasks except a few tasks designated only for the root account user. Some of the AWS tasks that only a root account user can do are as follows: change the account name, root password, or root email address; change the AWS support plan; close the AWS account; enable MFA Delete on an S3 bucket; create a CloudFront key pair; register for GovCloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;164&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Auto Scaling group scheduled action - The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect, and the new minimum, maximum, and desired sizes for the scaling action.&lt;/p&gt;
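
&lt;p&gt;A sketch of what such a scheduled action looks like as a request payload (the shape boto3's put_scheduled_update_group_action expects). The group and action names are placeholders, and no AWS call is made.&lt;/p&gt;

```python
import datetime

# Build a scheduled scaling action: at StartTime, the ASG adopts the new
# minimum, maximum, and desired sizes. All names are illustrative.

def scheduled_action(group, name, start_time, min_size, max_size, desired):
    return {
        "AutoScalingGroupName": group,
        "ScheduledActionName": name,
        "StartTime": start_time,
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }

action = scheduled_action(
    "web-asg", "friday-evening-scale-up",
    datetime.datetime(2025, 1, 10, 18, 0),
    min_size=4, max_size=12, desired=8)
```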

&lt;p&gt;&lt;strong&gt;165&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
When a host needs to send many records per second (RPS) to Amazon Kinesis, simply calling the basic PutRecord API action in a loop is inadequate. To reduce overhead and increase throughput, the application must batch records and implement parallel HTTP requests. This will increase the efficiency overall and ensure you are optimally using the shards.&lt;/p&gt;
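
&lt;p&gt;The batching side of this advice can be sketched with a plain chunking helper: group records into batches sized for a single PutRecords call (the API accepts at most 500 records per call), then issue the calls in parallel. This is a local sketch only; the batch size and record values are illustrative.&lt;/p&gt;

```python
# Split a record stream into chunks suitable for one PutRecords call each,
# instead of calling PutRecord once per record in a loop.

def batch_records(records, batch_size=500):
    """Return records grouped into lists of at most batch_size items."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

batches = batch_records(list(range(1200)))
# 3 batches of 500, 500, and 200 records
```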

&lt;p&gt;&lt;strong&gt;166&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Amazon Simple Queue Service (Amazon SQS) temporary queues help you save development time and deployment costs when using common message patterns such as request-response. You can use the Temporary Queue Client to create high-throughput, cost-effective, application-managed temporary queues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;167&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon RDS applies operating system updates by performing maintenance on the standby, then promoting the standby to primary and finally performing maintenance on the old primary, which becomes the new standby.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;168&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
AWS Lambda functions time out after 15 minutes, and are not usually meant for long-running jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;169&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon EBS Multi-Attach is supported exclusively on Provisioned IOPS SSD volumes (io1 or io2).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;170&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
AWS recommends using Snowmobile to migrate large datasets of 10PB or more in a single location. For datasets less than 10PB or distributed in multiple locations, you should use Snowball.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;171&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
If the instance is already running, you can set the DeleteOnTermination attribute to False using the command line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;172&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
IOPS cannot be directly increased on a gp2 volume without increasing its size, which is not possible due to the question's constraints.&lt;br&gt;
An io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;173&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Use Amazon CloudFront signed cookies - Amazon CloudFront signed cookies allow you to control who can access your content when you don't want to change your current URLs or when you want to provide access to multiple restricted files, for example, all of the files in the subscribers' area of a website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;174&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon FSx for NetApp ONTAP is a storage service that allows customers to launch and run fully managed ONTAP file systems in the cloud. ONTAP is NetApp’s file system technology that provides a widely adopted set of data access and data management capabilities. Amongst the Amazon FSx family, FSx for ONTAP is the only file system that supports access by Windows, Mac, and Linux-based Amazon EC2 instances within the same AWS region using both SMB and NFS protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;175&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
A Spot fleet is a collection, or fleet, of Spot Instances, and optionally On-Demand Instances. The Spot fleet attempts to launch the number of Spot Instances and On-Demand Instances to meet the target capacity that you specified in the Spot fleet request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;176&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Using AWS Firewall Manager, you can centrally configure AWS WAF rules, AWS Shield Advanced protection, Amazon Virtual Private Cloud (VPC) security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall rules across accounts and resources in your organization. It does not support Network ACLs as of today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;177&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose internet access. To create a highly available or an Availability Zone independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.&lt;/p&gt;
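
&lt;p&gt;The AZ-independent routing described above boils down to a mapping: each private subnet's default route should point at the NAT gateway in its own Availability Zone. A local sketch of that mapping (subnet and NAT gateway identifiers are placeholders):&lt;/p&gt;

```python
# Map each private subnet to the NAT gateway in the same AZ, so an AZ
# outage only affects resources in that AZ. Identifiers are illustrative.

def az_local_routes(private_subnets, nat_gateways):
    """Return {subnet_id: nat_gateway_id} pairing resources by AZ."""
    gateway_by_az = {gw["az"]: gw["id"] for gw in nat_gateways}
    return {s["id"]: gateway_by_az[s["az"]] for s in private_subnets}

routes = az_local_routes(
    [{"id": "subnet-a", "az": "us-east-1a"},
     {"id": "subnet-b", "az": "us-east-1b"}],
    [{"id": "nat-a", "az": "us-east-1a"},
     {"id": "nat-b", "az": "us-east-1b"}])
```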

&lt;p&gt;&lt;strong&gt;178&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
Amazon ElastiCache for Memcached supports multithreading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;179&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Create a public Network Load Balancer that links to Amazon EC2 instances that are bastion hosts managed by an Auto Scaling group. A Network Load Balancer supports TCP traffic, which allows you to connect to the Amazon EC2 bastion hosts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;180&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;181&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon GuardDuty continuously monitors for malicious or unauthorized behavior to help protect your AWS resources, including your AWS accounts and access keys. Amazon GuardDuty identifies any unusual or unauthorized activity, like cryptocurrency mining or infrastructure deployments in a region that has never been used. Powered by threat intelligence and machine learning, GuardDuty is continuously evolving to help you protect your AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;182&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
Amazon FSx file storage is accessible from Windows, Linux, and macOS compute instances and devices running on AWS or on-premises. Thousands of compute instances and devices can access a file system concurrently. Amazon FSx for Windows File Server supports Microsoft Active Directory (AD) integration so the same user permissions and access credentials can be used to access the files on FSx Windows File Server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;183&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
You control which Amazon EC2 instances can access your Amazon EFS file system by using VPC security group rules and AWS Identity and Access Management (IAM) policies. Use VPC security groups to control the network traffic to and from your file system. Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions, and you may use Amazon EFS Access Points to manage application access. Control access to files and directories with POSIX-compliant user and group-level permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;184&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Storage class analysis only provides recommendations for the Standard to Standard-IA storage classes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;185&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
When you use server-side encryption with Amazon S3 managed keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, Amazon S3 encrypts the key itself with a root key that it rotates regularly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;186&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Cost-Optimized Architectures&lt;/strong&gt;&lt;br&gt;
Spot Instance - The request type (one-time or persistent) determines whether the request is opened again when Amazon EC2 interrupts a Spot Instance or if you stop a Spot Instance. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;187&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
In Amazon ECS, when an Application Load Balancer uses dynamic port mapping, you can run multiple tasks from a single service on the same container instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;188&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Resilient Architectures&lt;/strong&gt;&lt;br&gt;
If an organization is using messaging with existing applications and wants to move the messaging service to the cloud quickly and easily, AWS recommends Amazon MQ for such a use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;189&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design High-Performing Architectures&lt;/strong&gt;&lt;br&gt;
Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion. You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput versus a single whole-object request. Fetching smaller ranges of a large object also allows your application to improve retry times when requests are interrupted.&lt;/p&gt;
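
&lt;p&gt;The parallel-fetch pattern needs one Range header per part. A small helper derives those header values from the object size; the sizes below are illustrative, and issuing the actual GET requests is left out.&lt;/p&gt;

```python
# Compute the HTTP Range header values needed to fetch an object of
# object_size bytes in part_size chunks over concurrent connections.

def range_headers(object_size, part_size):
    """Return Range header values covering the whole object."""
    headers = []
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        headers.append(f"bytes={start}-{end}")
    return headers

parts = range_headers(1000, 400)
# ['bytes=0-399', 'bytes=400-799', 'bytes=800-999']
```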

&lt;p&gt;&lt;strong&gt;190&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Design Secure Architectures&lt;/strong&gt;&lt;br&gt;
To encrypt an object at the time of upload, you add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-S3 or SSE-KMS (SSE-C instead uses its own customer-provided key headers).&lt;/p&gt;
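
&lt;p&gt;A sketch of the upload headers for the two modes that use that header; the KMS key ARN is a placeholder, and the SSE-C customer-key headers are deliberately out of scope here.&lt;/p&gt;

```python
# Build the server-side encryption request headers for an S3 upload.
# SSE-S3 uses AES256; SSE-KMS uses aws:kms plus an optional key ID.
# SSE-C uses separate customer-key headers, not covered by this sketch.

def sse_headers(mode, kms_key_id=None):
    if mode == "SSE-S3":
        return {"x-amz-server-side-encryption": "AES256"}
    if mode == "SSE-KMS":
        headers = {"x-amz-server-side-encryption": "aws:kms"}
        if kms_key_id:
            headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
        return headers
    raise ValueError(f"unsupported mode: {mode}")

headers = sse_headers("SSE-KMS", "arn:aws:kms:us-east-1:123456789012:key/abcd")
```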

</description>
      <category>aws</category>
      <category>saa03</category>
      <category>certification</category>
    </item>
    <item>
      <title>My Service Mesh journey with Terraform on AWS Cloud - Part 2</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Mon, 01 Jul 2024 16:08:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-2-58fd</link>
      <guid>https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-2-58fd</guid>
<description>&lt;p&gt;As we already stated in &lt;a href="https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-1-3hee"&gt;Part 1&lt;/a&gt;, we are living in a world where microservice architectures are becoming increasingly complex. Companies are constantly seeking solutions to enhance the resilience, security, and visibility of their distributed applications. This is where AWS App Mesh comes into play, offering enhanced observability, granular traffic control, and improved security. This solution proposal shows how App Mesh can transform the way we design and operate modern applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;First of all, you need an AWS Route 53 domain. The one used here is &lt;strong&gt;skyscaledev.com&lt;/strong&gt;; it's mine, so you'll need to use your own.&lt;/p&gt;

&lt;p&gt;Also make sure you have the following tools installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Kubectl&lt;/li&gt;
&lt;li&gt;eks-node-viewer&lt;/li&gt;
&lt;li&gt;Postman&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following products are also used :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Keycloak&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solution Architecture
&lt;/h2&gt;

&lt;p&gt;This is the Solution Architecture we are going to deploy.&lt;/p&gt;

&lt;p&gt;The following AWS Services are used :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon EKS&lt;/li&gt;
&lt;li&gt;Amazon EC2&lt;/li&gt;
&lt;li&gt;Amazon API Gateway&lt;/li&gt;
&lt;li&gt;Amazon Cognito&lt;/li&gt;
&lt;li&gt;AWS Cloudfront&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;AWS CloudMap&lt;/li&gt;
&lt;li&gt;AWS App Mesh&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch&lt;/li&gt;
&lt;li&gt;AWS X-Ray&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fagoralabs%2Fappmesh-demo-aws%2Fmain%2Fimages%2Fsolutions-APPMESH.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fagoralabs%2Fappmesh-demo-aws%2Fmain%2Fimages%2Fsolutions-APPMESH.png" alt="Solution Architecture App Mesh" width="800" height="1400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  App Mesh demo infrastructure resources creation step by step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 0 : Clone the Git repository containing the Terraform scripts
&lt;/h3&gt;

&lt;p&gt;Just hit the following command :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/agoralabs/appmesh-demo-aws.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should have the following directory structure :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── 01-vpc
├── 02-k8scluster
├── 03-karpenter
├── 04-mesh
├── 05-apigateway
├── 06-csivolume
├── 07-k8smanifest-pvolume
├── 08-meshexposer-keycloak
├── 09-meshservice-postgre-keycloak
├── 10-meshservice-keycloak
├── 11-kurler-keycloak-realm
├── 12-fedusers
├── 13-fedclient-spa
├── 14-apiauthorizer
├── 15-atedge
├── 16-apigwfront
├── 17-meshcfexposer-spa
├── 18-meshexposer-spa
├── 19-meshservice-spa
├── 20-meshservice-postgre-api
├── 21-meshexposer-api
├── 22-meshservice-api
├── 23-fedclient-api
├── images
├── modules
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Folders &lt;strong&gt;01-&lt;/strong&gt; to &lt;strong&gt;23-&lt;/strong&gt; each contain the following files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── apply.sh
├── destroy.sh
├── main.tf
├── output.tf
├── _provider.tf
└── _variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each step, you can just cd inside the &lt;strong&gt;XX-&lt;/strong&gt; folder and run the well-known &lt;strong&gt;terraform&lt;/strong&gt; commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But I recommend using the provided shell script &lt;strong&gt;./apply.sh&lt;/strong&gt; instead.&lt;/p&gt;

&lt;p&gt;To create the infrastructure elements just cd inside the folder and use apply.sh :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;./destroy.sh&lt;/strong&gt; shell script is used to destroy the created resources when you are done.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 : Create a VPC
&lt;/h3&gt;

&lt;p&gt;With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.&lt;/p&gt;

&lt;p&gt;To create a brand new VPC for this App Mesh showcase, just do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;01-vpc&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run the &lt;strong&gt;apply.sh&lt;/strong&gt; script: a VPC named &lt;strong&gt;k8s-mesh-staging-vpc&lt;/strong&gt; should be created, and you should have the following result:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The main Terraform section used here is the following:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name                             = "${local.vpc_name}"
  cidr                             = var.ENV_APP_GL_VPC_CIDR
  azs                              = split(",", var.ENV_APP_GL_AWS_AZS)
  public_subnets                   = ["${var.ENV_APP_GL_VPC_CIDR_SUBNET1}","${var.ENV_APP_GL_VPC_CIDR_SUBNET2}"]
  private_subnets                  = local.private_subnets_cidrs
  enable_nat_gateway               = local.enable_nat_gateway
  single_nat_gateway               = local.single_nat_gateway
  public_subnet_names              = local.public_subnets_names
  private_subnet_names             = local.private_subnets_names
  map_public_ip_on_launch          = true
  enable_dns_support               = true
  enable_dns_hostnames             = true

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "owned"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1 
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "owned"
  }
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terraform module &lt;strong&gt;terraform-aws-modules/vpc/aws&lt;/strong&gt; is used here to create :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a VPC with 2 public subnets and 2 private subnets.&lt;/li&gt;
&lt;li&gt;2 NAT Gateways for external traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you take a look at the AWS VPC Service Web Console, you should see this :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64aik7zcjggajoqb7n7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64aik7zcjggajoqb7n7i.png" alt="App Mesh VPC" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 : Create a Kubernetes EKS cluster
&lt;/h3&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (Amazon EKS) is a fully-managed, certified Kubernetes conformant service that simplifies the process of building, securing, operating, and maintaining Kubernetes clusters on AWS.&lt;/p&gt;

&lt;p&gt;Since we need a Kubernetes cluster to showcase App Mesh, just do the following to create a brand new EKS Kubernetes cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;02-k8scluster&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run the &lt;strong&gt;apply.sh&lt;/strong&gt; script: an EKS cluster named &lt;strong&gt;k8s-mesh-staging-vpc&lt;/strong&gt; should be created, and you should have the following result:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
Apply complete! Resources: 60 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important Terraform section used here is the following:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  cluster_name    = local.eks_cluster_name
  cluster_version = var.cluster_version
  enable_irsa     = true
  vpc_id                         = var.global_vpc_id
  subnet_ids                     = local.subnet_ids
  cluster_endpoint_public_access = true
  cluster_enabled_log_types = []
  enable_cluster_creator_admin_permissions = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name = "${local.eks_cluster_name}-n1"

      instance_types = ["${var.node_group_instance_type}"]

      min_size     = var.node_group_min_size
      max_size     = var.node_group_max_size
      desired_size = var.node_group_desired_size
      subnet_ids = [local.subnet_ids[0]]
      iam_role_additional_policies = {
        AmazonEC2FullAccess = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
        AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
      }

      labels = {
        "karpenter.sh/disruption" = "NoSchedule"
      }
    }

  }
...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terraform module &lt;strong&gt;terraform-aws-modules/eks/aws&lt;/strong&gt; is used here to create :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an Amazon EKS Kubernetes cluster with 2 Amazon EC2 nodes.&lt;/li&gt;
&lt;li&gt;Additional policies with permissions to manage EBS volumes and EC2 instances&lt;/li&gt;
&lt;li&gt;Labels are also added to keep those nodes out of the control of the Karpenter node scheduler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you take a look at the Amazon EKS Web Console, you should see this :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5cm6kcyolubr15iiiov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5cm6kcyolubr15iiiov.png" alt="App Mesh EKS cluster" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should also see the two nodes created for this EKS Cluster if you use the EKS Node viewer tool :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eks-node-viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv6afo71rhffbk25wit0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv6afo71rhffbk25wit0.png" alt="App Mesh EKS cluster nodes" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 : Create a Karpenter Kubernetes cluster nodes manager (OPTIONAL SECTION)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;[!CAUTION]&lt;br&gt;
YOU CAN SKIP THIS SECTION.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Karpenter is a Kubernetes node manager. Karpenter automatically launches just the right compute resources to handle your cluster's applications. It is designed to let you take full advantage of the cloud with fast and simple compute provisioning for Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Even though Karpenter is not mandatory for our showcase, if you want to try it, just do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;03-karpenter&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run the &lt;strong&gt;apply.sh&lt;/strong&gt; script: Karpenter should be installed in your cluster, and you should have the following result:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important Terraform section used here is the following:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "karpenter_node_pool" {
  yaml_body = &amp;lt;&amp;lt;-YAML
    apiVersion: karpenter.sh/v1beta1
    kind: NodePool
    metadata:
      name: default
    spec:
      disruption:
        consolidationPolicy: ${var.consolidation_policy}
        expireAfter: ${var.expire_after}
      limits:
        cpu: "${var.cpu_limits}"
        memory: "${var.mem_limits}"
      template:
        metadata:
          labels:
            #cluster-name: ${local.cluster_name}
            type : karpenter
        spec:
          nodeClassRef:
            name: default
          requirements:
            - key: "karpenter.k8s.aws/instance-category"
              operator: In
              values: ${local.instance_category}
            - key: kubernetes.io/arch
              operator: In
              values: ${local.architecture}
            - key: karpenter.sh/capacity-type
              operator: In
              values: ${local.capacity_type}
            - key: kubernetes.io/os
              operator: In
              values: ${local.os}
            - key: node.kubernetes.io/instance-type
              operator: In
              values: ${local.instance_type}

  YAML

  depends_on = [
    kubectl_manifest.karpenter_node_class
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that the Karpenter Custom Resource Definitions (CRDs) are added in the Kubernetes cluster:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd | grep karpenter

ec2nodeclasses.karpenter.k8s.aws             2024-06-24T12:33:01Z
nodeclaims.karpenter.sh                      2024-06-24T12:33:01Z
nodepools.karpenter.sh                       2024-06-24T12:33:02Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that the Karpenter Controller pods are created in the Kubernetes cluster:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n karpenter

NAME                         READY   STATUS    RESTARTS   AGE
karpenter-676bb4f846-4jkmt   1/1     Running   0          90m
karpenter-676bb4f846-mcz9k   1/1     Running   0          90m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that a node is provisioned by Karpenter by running a script that deploys a demo application:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./try.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
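&lt;p&gt;The contents of &lt;strong&gt;try.sh&lt;/strong&gt; are not shown here, but as an illustration, a minimal deployment that forces Karpenter to provision a new node might look like the following sketch — the name, replica count and image are hypothetical, not taken from this project:&lt;/p&gt;

```yaml
# Hypothetical demo deployment: the aggregate CPU requests exceed the free
# capacity of the existing nodes, so Karpenter provisions an extra node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          # pause container: does nothing, but reserves the requested CPU
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1"
```

&lt;p&gt;Scaling such a deployment up and back down is also a quick way to watch the consolidation policy of the NodePool in action.&lt;/p&gt;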



&lt;p&gt;With the EKS Node Viewer tool, take a look at the nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eks-node-viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8mctvlo3bvlk3er64g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8mctvlo3bvlk3er64g1.png" alt="App Mesh EKS cluster Karpenter nodes" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 : Create AppMesh Controller and AppMesh Gateway
&lt;/h3&gt;

&lt;p&gt;In this step, we will create the AWS App Mesh Controller for K8s. It is a controller that helps manage App Mesh resources for a Kubernetes cluster and injects sidecars into Kubernetes Pods. The controller watches custom resources for changes and reflects those changes into the App Mesh API. It maintains the custom resources (CRDs): meshes, virtualnodes, virtualrouters, virtualservices, virtualgateways and gatewayroutes. The custom resources map to App Mesh API objects.&lt;/p&gt;

&lt;p&gt;The Terraform script used here will also create an App Mesh Gateway. It is a virtual gateway that allows resources outside of your mesh to communicate with resources inside of your mesh. The virtual gateway represents an Envoy proxy running in a Kubernetes service, or on an Amazon EC2 instance. Unlike a virtual node, which represents Envoy running alongside an application, a virtual gateway represents Envoy deployed on its own.&lt;/p&gt;
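&lt;p&gt;To give an idea of how external traffic entering through the virtual gateway reaches a service, here is a minimal sketch of a &lt;strong&gt;GatewayRoute&lt;/strong&gt; (one of the CRDs maintained by the controller). The route name, prefix and target virtual service are hypothetical, not resources from this project:&lt;/p&gt;

```yaml
# Hypothetical GatewayRoute: matches a URL prefix on the virtual gateway
# and forwards matching traffic to a virtual service inside the mesh.
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: demo-route            # hypothetical
  namespace: gateway
spec:
  httpRoute:
    match:
      prefix: "/demo"         # hypothetical prefix
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: demo-service  # hypothetical virtual service
```

&lt;p&gt;Gateway Routes for the actual services are created in the later deployment steps.&lt;/p&gt;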

&lt;p&gt;Just do the following : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;04-mesh&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Mesh named &lt;strong&gt;k8s-mesh-staging&lt;/strong&gt; should be created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should also see the following :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
Apply complete! Resources: 25 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "mesh" {
    for_each  = data.kubectl_file_documents.mesh.manifests
    yaml_body = each.value

    depends_on = [ 
      helm_release.appmesh_controller
    ]
}

resource "kubectl_manifest" "virtual_gateway" {
    for_each  = data.kubectl_file_documents.virtual_gateway.manifests
    yaml_body = each.value

    depends_on = [ 
      helm_release.appmesh_gateway
    ]
}

resource "aws_service_discovery_http_namespace" "service_discovery" {
  name        = "${local.service_discovery_name}"
  description = "Service Discovery for App Mesh ${local.service_discovery_name}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those instructions are used to apply the following manifests :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Mesh&lt;/strong&gt; : A service mesh is a logical boundary for network traffic between the services that reside within it. When creating a Mesh, you must add a namespace selector. If the namespace selector is empty, it selects all namespaces. To restrict the namespaces, use a label to associate App Mesh resources to the created mesh.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: k8s-mesh-staging
spec:
  egressFilter:
    type: DROP_ALL
  namespaceSelector:
    matchLabels:
        mesh: k8s-mesh-staging
  tracing:
    provider:
      xray:
        daemonEndpoint: 127.0.0.1:2000
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkl0fr2cbvfut9vxrxcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkl0fr2cbvfut9vxrxcs.png" alt="AppMesh namespace selector" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;VirtualGateway&lt;/strong&gt; : A virtual gateway allows resources that are outside of your mesh to communicate to resources that are inside of your mesh. The virtual gateway represents an Envoy proxy running in a Kubernetes service. When creating a Virtual Gateway, you must add a namespace selector with a label to identify the list of namespaces with which to associate Gateway Routes to the created Virtual Gateway.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: appmesh-gateway
  namespace: gateway
spec:
  namespaceSelector:
    matchLabels:
      mesh: k8s-mesh-staging
      appmesh.k8s.aws/sidecarInjectorWebhook: enabled
  podSelector:
    matchLabels:
      app.kubernetes.io/name: appmesh-gateway
  listeners:
    - portMapping:
        port: 8088
        protocol: http
  logging:
    accessLog:
      file:
        path: "/dev/stdout"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you take a look at the AWS App Mesh Web Console, you should see this :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsw1nzfnad692v6d2dhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsw1nzfnad692v6d2dhb.png" alt="App Mesh" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The script should also create the following resources : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pods for App Mesh Controller&lt;/li&gt;
&lt;li&gt;Pods for App Mesh Gateway&lt;/li&gt;
&lt;li&gt;Pods for AWS XRay daemon for Tracing&lt;/li&gt;
&lt;li&gt;Pods for Amazon CloudWatch daemon for logging&lt;/li&gt;
&lt;li&gt;Pods for Fluentd daemon for log aggregation to CloudWatch&lt;/li&gt;
&lt;li&gt;A Service Discovery Namespace in AWS Cloud Map
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --all-namespaces --namespace=appmesh-system,gateway,aws-observability

NAMESPACE           NAME                                  READY   STATUS    RESTARTS   AGE
appmesh-system      appmesh-controller-57d947c9bc-ltmml   1/1     Running   0          21m
aws-observability   cloudwatch-agent-f7lsx                1/1     Running   0          20m
aws-observability   cloudwatch-agent-lqhks                1/1     Running   0          20m
aws-observability   fluentd-cloudwatch-hlq24              1/1     Running   0          20m
aws-observability   fluentd-cloudwatch-hpjzp              1/1     Running   0          20m
aws-observability   xray-daemon-ncfj8                     1/1     Running   0          20m
aws-observability   xray-daemon-nxqnt                     1/1     Running   0          20m
gateway             appmesh-gateway-78dbc94897-vb5f4      2/2     Running   0          20m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script should also create a Network Load Balancer :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get svc -n gateway

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE
appmesh-gateway   LoadBalancer   172.20.82.97   a7d0b077d231e4713a90dbb62382168b-15706dbc33b16c0c.elb.us-west-2.amazonaws.com   80:30205/TCP   26m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo2ckvxf7trq6hivhmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfo2ckvxf7trq6hivhmn.png" alt="App Mesh Gateway Network Load Balancer" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Other Mesh resources&lt;/strong&gt; : After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.&lt;/p&gt;
&lt;/blockquote&gt;
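&lt;p&gt;For reference, a virtual service backed by a virtual router could be sketched as follows — the names, namespace and weight are hypothetical, not taken from this project:&lt;/p&gt;

```yaml
# Hypothetical VirtualRouter: routes all HTTP traffic to one virtual node.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: demo-router           # hypothetical
  namespace: demo
spec:
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  routes:
    - name: route-all
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: demo-node   # hypothetical virtual node
              weight: 100
---
# Hypothetical VirtualService exposing the router under a DNS-style name.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: demo-service          # hypothetical
  namespace: demo
spec:
  awsName: demo-service.demo.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: demo-router
```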

&lt;p&gt;Those resources will be created in the following steps : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 9&lt;/strong&gt; : deploy a PostgreSQL database for Keycloak&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 10&lt;/strong&gt; : deploy a Keycloak Identity Provider instance connected to the database created in Step 9&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 19&lt;/strong&gt; : deploy an Angular Single Page Application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 20&lt;/strong&gt; : deploy a PostgreSQL database for a Spring Boot API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 22&lt;/strong&gt; : deploy a Spring Boot API connected to the database created in Step 20&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kjdhoz5wqnhzwdab1ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kjdhoz5wqnhzwdab1ys.png" alt="Mesh resources" width="800" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Mesh Observability&lt;/strong&gt; : In this step we also added the creation of XRay, Fluentd and CloudWatch DaemonSets for observability inside the Mesh. You can find them in the following manifest file &lt;strong&gt;files/4-mesh.yaml&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;XRay DaemonSet&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: xray-daemon
  namespace: aws-observability
spec:
  selector:
    matchLabels:
      name: xray-daemon
  template:
    metadata:
      labels:
        name: xray-daemon
    spec:
      containers:
        - name: xray-daemon
          image: amazon/aws-xray-daemon
          ports:
            - containerPort: 2000
              protocol: UDP
          env:
            - name: AWS_REGION
              value: us-west-2
          resources:
            limits:
              memory: 256Mi
              cpu: 200m
            requests:
              memory: 128Mi
              cpu: 100m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CloudWatch DaemonSet&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloudwatch-agent
  namespace: aws-observability
spec:
  selector:
    matchLabels:
      name: cloudwatch-agent
  template:
    metadata:
      labels:
        name: cloudwatch-agent
    spec:
      containers:
        - name: cloudwatch-agent
          image: amazon/cloudwatch-agent:latest
          imagePullPolicy: Always
          ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fluentd DaemonSet&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-cloudwatch
  namespace: aws-observability
  labels:
    k8s-app: fluentd-cloudwatch
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-cloudwatch
  template:
    metadata:
      labels:
        k8s-app: fluentd-cloudwatch
      annotations:
        configHash: 8915de4cf9c3551a8dc74c0137a3e83569d28c71044b0359c2578d2e0461825
    spec:
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5 : Create an HTTP API Gateway
&lt;/h3&gt;

&lt;p&gt;Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. An HTTP API is therefore well suited to expose services deployed in your Mesh.&lt;/p&gt;

&lt;p&gt;Just do the following to create an HTTP API Gateway:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;05-apigateway&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : an API Gateway named &lt;strong&gt;k8s-mesh-staging-api-gw&lt;/strong&gt; should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.apigw.aws_apigatewayv2_vpc_link.api_gw: Creating...
module.apigw.aws_apigatewayv2_api.api_gw: Creating...
module.apigw.aws_apigatewayv2_api.api_gw: Creation complete after 2s [id=0b0l2fr08e]
module.apigw.aws_cloudwatch_log_group.api_gw: Creating...
module.apigw.aws_cloudwatch_log_group.api_gw: Creation complete after 1s [id=/aws/api_gw/k8s-mesh-staging-api-gw]
module.apigw.aws_apigatewayv2_stage.api_gw: Creating...
module.apigw.aws_apigatewayv2_stage.api_gw: Creation complete after 1s [id=$default]
module.apigw.aws_apigatewayv2_vpc_link.api_gw: Still creating... [2m0s elapsed]
module.apigw.aws_apigatewayv2_vpc_link.api_gw: Creation complete after 2m9s [id=j7svx3]
module.apigw.aws_apigatewayv2_integration.api_gw: Creating...
module.apigw.aws_apigatewayv2_integration.api_gw: Creation complete after 1s [id=jwautpk]
module.apigw.aws_apigatewayv2_route.options_route: Creating...
module.apigw.aws_apigatewayv2_route.api_gw: Creating...
module.apigw.aws_apigatewayv2_route.api_gw: Creation complete after 0s [id=n51kti4]
module.apigw.aws_apigatewayv2_route.options_route: Creation complete after 0s [id=psxri8s]

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important sections in the terraform script are the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_apigatewayv2_integration" "api_gw" {
  api_id           = aws_apigatewayv2_api.api_gw.id
  integration_type = "HTTP_PROXY"
  connection_id    = aws_apigatewayv2_vpc_link.api_gw.id
  connection_type  = "VPC_LINK"
  description      = "Integration with Network Load Balancer"
  integration_method = "ANY"
  integration_uri  = "${data.aws_lb_listener.nlb.arn}"
  payload_format_version = "1.0"
}

resource "aws_apigatewayv2_route" "api_gw" {
  api_id    = aws_apigatewayv2_api.api_gw.id
  route_key = "ANY /{proxy+}"
  target    = "integrations/${aws_apigatewayv2_integration.api_gw.id}"
  authorization_type = "NONE"
}

resource "aws_apigatewayv2_route" "options_route" {
  api_id    = aws_apigatewayv2_api.api_gw.id
  route_key = "OPTIONS /{proxy+}"
  target    = "integrations/${aws_apigatewayv2_integration.api_gw.id}"
  authorization_type = "NONE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Terraform resources &lt;strong&gt;aws_apigatewayv2_integration&lt;/strong&gt; and &lt;strong&gt;aws_apigatewayv2_route&lt;/strong&gt; are used here to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an API Gateway integrated with the network load balancer created in the previous step&lt;/li&gt;
&lt;li&gt;an &lt;strong&gt;"ANY /{proxy+}"&lt;/strong&gt; route to forward requests to the Network Load Balancer; this route will later be protected by a Lambda Authorizer&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;an &lt;strong&gt;"OPTIONS /{proxy+}"&lt;/strong&gt; route with no authorization, since many browsers send an HTTP OPTIONS preflight request before other requests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that the HTTP API Gateway created is integrated with the Network Load Balancer created in the previous step.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq2qn3a6e2ko9l8mr0wn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq2qn3a6e2ko9l8mr0wn.png" alt="API Gateway" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6 : Create an EBS volume and install the CSI Driver on Kubernetes
&lt;/h3&gt;

&lt;p&gt;We need a persistent volume in the Kubernetes cluster; this volume will be used to persist data when needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;06-csivolume&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : an EBS Volume named &lt;strong&gt;terraform-ebs-volume&lt;/strong&gt; should be created and the EBS CSI Driver should be installed in your Kubernetes cluster.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 5 to add, 0 to change, 0 to destroy.
module.ebscsi.aws_iam_policy.ebs_csi_driver_policy: Creating...
module.ebscsi.aws_iam_role.ebs_csi_driver_role: Creating...
module.ebscsi.aws_iam_policy.ebs_csi_driver_policy: Creation complete after 1s [id=arn:aws:iam::041292242005:policy/EBS_CSI_Driver_Policy]
module.ebscsi.aws_iam_role.ebs_csi_driver_role: Creation complete after 1s [id=EBS_CSI_Driver_Role]
module.ebscsi.aws_ebs_volume.aws_volume: Creating...
module.ebscsi.aws_iam_role_policy_attachment.attach_ebs_csi_driver_policy: Creating...
module.ebscsi.aws_eks_addon.this: Creating...
module.ebscsi.aws_iam_role_policy_attachment.attach_ebs_csi_driver_policy: Creation complete after 0s [id=EBS_CSI_Driver_Role-20240626204824296100000003]
module.ebscsi.aws_ebs_volume.aws_volume: Still creating... [10s elapsed]
module.ebscsi.aws_eks_addon.this: Still creating... [10s elapsed]
module.ebscsi.aws_ebs_volume.aws_volume: Creation complete after 11s [id=vol-01ae2bb82c2704d3f]
module.ebscsi.aws_eks_addon.this: Still creating... [20s elapsed]
module.ebscsi.aws_eks_addon.this: Creation complete after 26s [id=k8s-mesh-staging:aws-ebs-csi-driver]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important sections of the Terraform script are :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_addon" "this" {

  cluster_name = data.aws_eks_cluster.eks.name
  addon_name   = "aws-ebs-csi-driver"

  addon_version               = data.aws_eks_addon_version.this.version
  configuration_values        = null
  preserve                    = true
  resolve_conflicts_on_create = "OVERWRITE"
  resolve_conflicts_on_update = "OVERWRITE"
  service_account_role_arn    = null

  depends_on = [
    aws_iam_role.ebs_csi_driver_role
  ]

}

resource "aws_ebs_volume" "aws_volume" {
  availability_zone = "${var.aws_az}"
  size              = 20
  tags = {
    Name = "${var.eks_cluster_name}-ebs"
  }
  depends_on = [
    aws_iam_role.ebs_csi_driver_role
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the EKS console, check the added Amazon EBS CSI Driver addon&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l0l2ewierbjejrecg6m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l0l2ewierbjejrecg6m.png" alt="EBS CSI Volume Addon" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the EC2 console, check the created EBS volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbr3shbnwub1y9k383zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbr3shbnwub1y9k383zo.png" alt="EBS CSI Volume" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7 : Create a Persistent Volume
&lt;/h3&gt;

&lt;p&gt;With the EBS CSI add-on installed and the EBS volume created, we can now create the Persistent Volume:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;07-k8smanifest-pvolume&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update the manifest in &lt;em&gt;files/7-ebs-csi-driver-pv.yaml&lt;/em&gt; with the correct volume ID&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "resource" {
    for_each  = data.kubectl_file_documents.docs.manifests
    yaml_body = each.value
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is used to create a PV using the following manifest :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-mesh-staging-ebs-pv
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "gp2"
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: "vol-0117b1f7cd479bb5f"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Persistent Volume named &lt;strong&gt;k8s-mesh-staging-ebs-pv&lt;/strong&gt; should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.k8smanifest.kubectl_manifest.resource["/api/v1/persistentvolumes/k8s-mesh-staging-ebs-pv"]: Creating...
module.k8smanifest.kubectl_manifest.resource["/api/v1/persistentvolumes/k8s-mesh-staging-ebs-pv"]: Creation complete after 3s [id=/api/v1/persistentvolumes/k8s-mesh-staging-ebs-pv]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the creation of the Persistent Volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pv

NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
k8s-mesh-staging-ebs-pv   20Gi       RWO            Retain           Available           gp2            &amp;lt;unset&amp;gt;                          2m36s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8 : Create a DNS record for Keycloak identity provider
&lt;/h3&gt;

&lt;p&gt;Keycloak will be used as the Federated Identity Provider in this demo.&lt;br&gt;
Let's start by creating a user-friendly DNS record for Keycloak.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;08-meshexposer-keycloak&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update &lt;strong&gt;terraform.tfvars&lt;/strong&gt; to specify a Route53 Hosted Zone.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a DNS record &lt;strong&gt;keycloak-demo1-prod.example.com&lt;/strong&gt; should be created in your hosted zone.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.exposer.aws_apigatewayv2_domain_name.api_gw: Creating...
module.exposer.aws_apigatewayv2_domain_name.api_gw: Creation complete after 4s [id=keycloak-demo1-prod.example.com]
module.exposer.aws_apigatewayv2_api_mapping.s1_mapping: Creating...
module.exposer.aws_route53_record.dnsapi: Creating...
module.exposer.aws_apigatewayv2_api_mapping.s1_mapping: Creation complete after 0s [id=68eu9j]
module.exposer.aws_route53_record.dnsapi: Still creating... [40s elapsed]
module.exposer.aws_route53_record.dnsapi: Creation complete after 46s [id=Z0017173R6DN4LL9QIY3_keycloak-demo1-prod_CNAME]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_apigatewayv2_domain_name" "api_gw" {
  domain_name = "${var.dns_record_name}.${var.dns_domain}"
  domain_name_configuration {
    certificate_arn = data.aws_acm_certificate.acm_cert.arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }
}

resource "aws_route53_record" "dnsapi" {
  zone_id = data.aws_route53_zone.dns_zone.zone_id
  name    = "${var.dns_record_name}"
  type    = "CNAME"
  records = [local.api_gw_endpoint]
  ttl     = 300
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that the created DNS Record value is the API Gateway Endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhse7a5q1k8i3q07gzl1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhse7a5q1k8i3q07gzl1u.png" alt="DNS Record Keycloak" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9 : Deploy a PostgreSQL database for Keycloak inside the App Mesh
&lt;/h3&gt;

&lt;p&gt;Keycloak can use a PostgreSQL instance as persistent storage.&lt;br&gt;
To create a PostgreSQL instance for Keycloak, just do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;09-meshservice-postgre-keycloak&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a PostgreSQL pod should be created.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.appmeshservice.kubectl_manifest.resource["/apis/apps/v1/namespaces/postgre/deployments/postgre"]: Creation complete after 1m0s [id=/apis/apps/v1/namespaces/postgre/deployments/postgre]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "resource" {
    for_each  = data.kubectl_file_documents.docs.manifests
    yaml_body = each.value
}

data "aws_service_discovery_http_namespace" "service_discovery" {
  name = "${local.service_discovery_name}"
}

resource "aws_service_discovery_service" "service" {
  name         = "${var.service_name}"
  namespace_id = data.aws_service_discovery_http_namespace.service_discovery.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These sections are used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apply the manifest documents for the PostgreSQL instance,&lt;/li&gt;
&lt;li&gt;and register the PostgreSQL service in the AWS Cloud Map namespace created in Step 4.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an overview of the manifest files :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Namespace&lt;/strong&gt; : App Mesh uses namespace and/or pod annotations to determine whether pods in a namespace will be marked for sidecar injection. To achieve this, add the &lt;strong&gt;appmesh.k8s.aws/sidecarInjectorWebhook: enabled&lt;/strong&gt; label so that the sidecar is injected by default into pods in this namespace.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Namespace
metadata:
  name: postgre
  labels:
    mesh: k8s-mesh-staging
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;ServiceAccount&lt;/strong&gt; : A service account provides an identity for processes that run in a Pod, and maps to a ServiceAccount object.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgre
  namespace: postgre
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::041292242005:role/k8s-mesh-staging-eks-postgre

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Deployment&lt;/strong&gt; : You create a Deployment to manage your pods easily. This is your application.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgre
  namespace: postgre
  labels:
    app: postgre
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgre
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        appmesh.k8s.aws/mesh: k8s-mesh-staging
        appmesh.k8s.aws/virtualNode: postgre
      labels:
        app: postgre
    spec:
      serviceAccountName: postgre
      containers:
        - name: postgre
          image: postgres:15.6-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - keycloak
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-claim0
              subPath: db-files
          envFrom:
          - configMapRef: 
              name: postgre
      restartPolicy: Always
      volumes:
        - name: postgres-claim0
          persistentVolumeClaim:
            claimName: postgres-claim0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;PersistentVolumeClaim&lt;/strong&gt; : For persistent storage requirements, we have already provisioned a Persistent Volume in step 6. To bind a pod to a PV, the pod must contain a volume mount and a Persistent Volume Claim (PVC).&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: postgres-claim0
  name: postgres-claim0
  namespace: postgre
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
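&lt;p&gt;Once applied, the claim should bind to the volume provisioned in step 6. A quick check (assuming your kubectl context points at the cluster):&lt;/p&gt;

```shell
# Confirm the PVC bound to the Persistent Volume from step 6
kubectl get pvc -n postgre
# The STATUS column for postgres-claim0 should read "Bound"
```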



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Service&lt;/strong&gt; : Use a Service to expose your application that is running as one or more Pods in your cluster.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: postgre
  namespace: postgre
spec:
  ports:
    - port: 5432
      protocol: TCP
  selector:
    app: postgre
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;VirtualNode&lt;/strong&gt; : In App Mesh, a virtual node acts as a logical pointer to a Kubernetes deployment via a service. The &lt;em&gt;serviceDiscovery&lt;/em&gt; attribute indicates that the service will be discovered via AWS Cloud Map. To have Envoy access logs sent to CloudWatch Logs, be sure to configure the log path to be /dev/stdout in each virtual node.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: postgre
  namespace: postgre
spec:
  podSelector:
    matchLabels:
      app: postgre
  listeners:
    - portMapping:
        port: 5432
        protocol: tcp
  serviceDiscovery:
    awsCloudMap:
      serviceName: postgre
      namespaceName: k8s-mesh-staging
  logging:
    accessLog:
      file:
        path: "/dev/stdout"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;VirtualService&lt;/strong&gt; : A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its virtualServiceName, and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: postgre
  namespace: postgre
spec:
  provider:
    virtualNode:
      virtualNodeRef:
        name: postgre
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the created PostgreSQL pod :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n postgre
NAME                       READY   STATUS    RESTARTS   AGE
postgre-58fbbd958d-w8zhh   3/3     Running   0          10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the Discovered service in AWS Cloud Map Service Console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu17fbzzj86mtrzznln7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu17fbzzj86mtrzznln7r.png" alt="Postgre Keycloak Discovered service" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10 : Deploy Keycloak inside the App Mesh
&lt;/h3&gt;

&lt;p&gt;Keycloak is an open source identity and access management solution. It is easy to set up and supports standards like OpenID Connect, which will be useful for us in the App Mesh showcase.&lt;/p&gt;

&lt;p&gt;To deploy your Keycloak instance, just do the following : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;10-meshservice-keycloak&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Keycloak pod should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.appmeshservice.kubectl_manifest.resource["/apis/apps/v1/namespaces/keycloak-demo1-prod/deployments/keycloak-demo1-prod"]: Creation complete after 5m6s [id=/apis/apps/v1/namespaces/keycloak-demo1-prod/deployments/keycloak-demo1-prod]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubectl_manifest" "resource" {
    for_each  = data.kubectl_file_documents.docs.manifests
    yaml_body = each.value
}

data "aws_service_discovery_http_namespace" "service_discovery" {
  name = "${local.service_discovery_name}"
}

resource "aws_service_discovery_service" "service" {
  name         = "${var.service_name}"
  namespace_id = data.aws_service_discovery_http_namespace.service_discovery.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These sections are used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apply the manifest documents for the Keycloak instance,&lt;/li&gt;
&lt;li&gt;and register the Keycloak service in the AWS Cloud Map namespace created in step 4.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an overview of the manifest files :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;Namespace, ServiceAccount, Deployment, Service, VirtualNode and VirtualService&lt;/strong&gt; are already covered in step 9.  &lt;/p&gt;

&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;VirtualRouter&lt;/strong&gt; : Virtual routers handle traffic for one or more virtual services within your mesh. After you create a virtual router, you can create and associate routes for your virtual router that direct incoming requests to different virtual nodes.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0g90e463d5lhprq43ir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0g90e463d5lhprq43ir.png" alt="Virtual Router concept" width="538" height="365"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: keycloak-demo1-prod
  namespace: keycloak-demo1-prod
spec:
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  routes:
    - name: keycloak-demo1-prod
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: keycloak-demo1-prod
              weight: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;GatewayRoute&lt;/strong&gt; : A gateway route is attached to a virtual gateway and routes traffic to an existing virtual service. If a route matches a request, it can distribute traffic to a target virtual service.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: keycloak-demo1-prod
  namespace: keycloak-demo1-prod
spec:
  httpRoute:
    match:
      prefix: "/"
      hostname:
        exact: keycloak-demo1-prod.skyscaledev.com
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: keycloak-demo1-prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the created Keycloak pod :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n keycloak-demo1-prod
NAME                                   READY   STATUS    RESTARTS   AGE
keycloak-demo1-prod-7857d7d59d-7qbfw   3/3     Running   0          11m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the Discovered service in AWS Cloud Map Service Console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxina2pgnnbvd8n7fiapg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxina2pgnnbvd8n7fiapg.png" alt="Keycloak Discovered service" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the deployed Keycloak instance by opening your browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6pdo8f36cdjnlgoxoro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6pdo8f36cdjnlgoxoro.png" alt="Keycloak login" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 11 : Create a realm in Keycloak
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;[!NOTE]&lt;br&gt;
&lt;strong&gt;realm&lt;/strong&gt; : A realm is a space where you manage objects, including users, applications, roles, and groups. A user belongs to and logs into a realm. One Keycloak deployment can define, store, and manage as many realms as there is space for in the database.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To create a realm to store our users : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;11-kurler-keycloak-realm&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Keycloak Realm should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.kurl.null_resource.kurl_command: Creation complete after 11s [id=9073997509073050065]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "kurl_command" {

  triggers = {
    always_run = "${timestamp()}"
    input_config_file = "${var.input_config_file}"
  }

  provisioner "local-exec" {
    when = create
    command = "chmod +x ${path.module}/files/kurl.sh &amp;amp;&amp;amp; input_command=CREATE input_config_file=${var.input_config_file} ${path.module}/files/kurl.sh"
  }

  provisioner "local-exec" {
    when = destroy
    command = "chmod +x ${path.module}/files/kurl.sh &amp;amp;&amp;amp; input_command=DELETE input_config_file=${self.triggers.input_config_file} ${path.module}/files/kurl.sh"
  }

  lifecycle {
    create_before_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform script uses a local-exec provisioner to execute curl commands in order to create a realm with the configuration defined in the &lt;strong&gt;files/11_kurler_keycloak_realm_config.json&lt;/strong&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "realm": "rcognito",
    "enabled": true,
    "requiredCredentials": [
        "password"
    ],
    "users": [
        {
        "username": "alice",
        "firstName": "Alice",
        "lastName": "Liddel",
        "email": "alice@keycloak.org",
        "enabled": true,
        "credentials": [
            {
            "type": "password",
            "value": "alice"
            }
        ],
        "realmRoles": [
            "user", "offline_access"
        ],
        "clientRoles": {
            "account": [ "manage-account" ]
            }
        },
        {
        "username": "jdoe",
        "firstName": "jdoe",
        "lastName": "jdoe",
        "email": "jdoe@keycloak.org",
        "enabled": true,
        "credentials": [
            {
            "type": "password",
            "value": "jdoe"
            }
        ],
        "realmRoles": [
            "user",
            "user_premium"
        ]
        },
        {
        "username": "service-account-authz-servlet",
        "enabled": true,
        "serviceAccountClientId": "authz-servlet",
        "clientRoles": {
            "authz-servlet" : ["uma_protection"]
        }
        },
        {
            "username" : "admin",
            "enabled": true,
            "email" : "test@admin.org",
            "firstName": "Admin",
            "lastName": "Test",
            "credentials" : [
            { "type" : "password",
                "value" : "admin" }
            ],
            "realmRoles": [ "user","admin" ],
            "clientRoles": {
            "realm-management": [ "realm-admin" ],
            "account": [ "manage-account" ]
            }
        }
    ],
    "roles": {
        "realm": [
        {
            "name": "user",
            "description": "User privileges"
        },
        {
            "name": "user_premium",
            "description": "User Premium privileges"
        },
            {
            "name": "admin",
            "description": "Administrator privileges"
            }
        ]
    },
    "clients": [
        {
        "clientId": "authz-servlet",
        "enabled": true,
        "baseUrl": "https://keycloak-api-prod.skyscaledev.com/authz-servlet",
        "adminUrl": "https://keycloak-api-prod.skyscaledev.com/authz-servlet",
        "bearerOnly": false,
        "redirectUris": [
            "https://keycloak-api-prod.skyscaledev.com/authz-servlet/*",
            "http://127.0.0.1:8080/authz-servlet/*"
        ],
        "secret": "secret",
        "authorizationServicesEnabled": true,
        "directAccessGrantsEnabled": true,
        "authorizationSettings": {
            "resources": [
            {
                "name": "Protected Resource",
                "uri": "/*",
                "type": "http://servlet-authz/protected/resource",
                "scopes": [
                {
                    "name": "urn:servlet-authz:protected:resource:access"
                }
                ]
            },
            {
                "name": "Premium Resource",
                "uri": "/protected/premium/*",
                "type": "urn:servlet-authz:protected:resource",
                "scopes": [
                {
                    "name": "urn:servlet-authz:protected:premium:access"
                }
                ]
            }
            ],
            "policies": [
            {
                "name": "Any User Policy",
                "description": "Defines that any user can do something",
                "type": "role",
                "logic": "POSITIVE",
                "decisionStrategy": "UNANIMOUS",
                "config": {
                "roles": "[{\"id\":\"user\"}]"
                }
            },
            {
                "name": "Only Premium User Policy",
                "description": "Defines that only premium users can do something",
                "type": "role",
                "logic": "POSITIVE",
                "decisionStrategy": "UNANIMOUS",
                "config": {
                "roles": "[{\"id\":\"user_premium\"}]"
                }
            },
            {
                "name": "All Users Policy",
                "description": "Defines that all users can do something",
                "type": "aggregate",
                "logic": "POSITIVE",
                "decisionStrategy": "AFFIRMATIVE",
                "config": {
                "applyPolicies": "[\"Any User Policy\",\"Only Premium User Policy\"]"
                }
            },
            {
                "name": "Premium Resource Permission",
                "description": "A policy that defines access to premium resources",
                "type": "resource",
                "logic": "POSITIVE",
                "decisionStrategy": "UNANIMOUS",
                "config": {
                "resources": "[\"Premium Resource\"]",
                "applyPolicies": "[\"Only Premium User Policy\"]"
                }
            },
            {
                "name": "Protected Resource Permission",
                "description": "A policy that defines access to any protected resource",
                "type": "resource",
                "logic": "POSITIVE",
                "decisionStrategy": "UNANIMOUS",
                "config": {
                "resources": "[\"Protected Resource\"]",
                "applyPolicies": "[\"All Users Policy\"]"
                }
            }
            ],
            "scopes": [
            {
                "name": "urn:servlet-authz:protected:admin:access"
            },
            {
                "name": "urn:servlet-authz:protected:resource:access"
            },
            {
                "name": "urn:servlet-authz:protected:premium:access"
            },
            {
                "name": "urn:servlet-authz:page:main:actionForPremiumUser"
            },
            {
                "name": "urn:servlet-authz:page:main:actionForAdmin"
            },
            {
                "name": "urn:servlet-authz:page:main:actionForUser"
            }
            ]
        }
        },
        {
        "clientId": "spa",
        "enabled": true,
        "publicClient": true,
        "directAccessGrantsEnabled": true,
        "redirectUris": [ "https://service-spa.skyscaledev.com/*" ]
        },
        {
            "clientId": "rcognitoclient",
            "name": "rcognitoclient",
            "adminUrl": "https://keycloak-demo1-prod.skyscaledev.com/realms/rcognito",
            "alwaysDisplayInConsole": false,
            "access": {
                "view": true,
                "configure": true,
                "manage": true
            },
            "attributes": {},
            "authenticationFlowBindingOverrides" : {},
            "authorizationServicesEnabled": true,
            "bearerOnly": false,
            "directAccessGrantsEnabled": true,
            "enabled": true,
            "protocol": "openid-connect",
            "description": "Client OIDC pour application KaiaC",

            "rootUrl": "${authBaseUrl}",
            "baseUrl": "/realms/rcognito/account/",
            "surrogateAuthRequired": false,
            "clientAuthenticatorType": "client-secret",
            "defaultRoles": [
                "manage-account",
                "view-profile"
            ],
            "redirectUris": [
                "https://kaiac.auth.us-west-2.amazoncognito.com/oauth2/idpresponse", "https://service-spa.skyscaledev.com/*"
            ],
            "webOrigins": [],
            "notBefore": 0,
            "consentRequired": false,
            "standardFlowEnabled": true,
            "implicitFlowEnabled": false,
            "serviceAccountsEnabled": true,
            "publicClient": false,
            "frontchannelLogout": false,
            "fullScopeAllowed": false,
            "nodeReRegistrationTimeout": 0,
            "defaultClientScopes": [
                "web-origins",
                "role_list",
                "profile",
                "roles",
                "email"
            ],
            "optionalClientScopes": [
                "address",
                "phone",
                "offline_access",
                "microprofile-jwt"
            ]
        }
    ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
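&lt;p&gt;The curl commands issued by &lt;strong&gt;kurl.sh&lt;/strong&gt; presumably follow the Keycloak Admin REST API along these lines (a hedged sketch, not the script's actual contents; the host is the instance from step 10 and the admin credentials are placeholders passed via environment variables):&lt;/p&gt;

```shell
# 1) Obtain an admin access token from the master realm (requires jq)
TOKEN=$(curl -s -X POST \
  "https://keycloak-demo1-prod.skyscaledev.com/realms/master/protocol/openid-connect/token" \
  -d "client_id=admin-cli" -d "grant_type=password" \
  -d "username=$KEYCLOAK_ADMIN_USER" -d "password=$KEYCLOAK_ADMIN_PASSWORD" \
  | jq -r .access_token)

# 2) Create the realm from the JSON configuration file
curl -s -X POST \
  "https://keycloak-demo1-prod.skyscaledev.com/admin/realms" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data @files/11_kurler_keycloak_realm_config.json
```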



&lt;ul&gt;
&lt;li&gt;Verify the realm in Keycloak admin console : &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgb0g109ls5ru7ezwn97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgb0g109ls5ru7ezwn97.png" alt="Keycloak realm" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 12 : Create a Cognito user pool to federate Keycloak identities
&lt;/h3&gt;

&lt;p&gt;Our demo app users will federate through a third-party identity provider (IdP), which is the Keycloak instance we deployed in step 10. The user pool manages the overhead of handling the tokens that are returned from the Keycloak OpenID Connect (OIDC) IdP. With the built-in hosted web UI, Amazon Cognito provides token handling and management for authenticated users from all IdPs. This way, your backend systems can standardize on one set of user pool tokens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyrl60dpehzlaz9nu4uu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyrl60dpehzlaz9nu4uu.png" alt="How federated sign-in works in Amazon Cognito user pools" width="782" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a Cognito user pool to federate Keycloak identities :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;12-fedusers&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Cognito User Pool should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.cogusrpool.aws_cognito_identity_provider.keycloak_oidc: Creation complete after 1s [id=us-west-2_BPVSBRdNl:keycloak]
module.cogusrpool.aws_cognito_user_pool_domain.user_pool_domain: Creation complete after 2s [id=kaiac]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cognito_user_pool" "user_pool" {
  name = "${var.user_pool_name}" 
  username_configuration {
    case_sensitive = false
  }
}

resource "aws_cognito_user_pool_domain" "user_pool_domain" {
  domain           = "${var.cognito_domain_name}"
  user_pool_id     = aws_cognito_user_pool.user_pool.id
}

resource "aws_cognito_identity_provider" "keycloak_oidc" {
  user_pool_id                 = aws_cognito_user_pool.user_pool.id
  provider_name                = "${var.user_pool_provider_name}"
  provider_type                = "${var.user_pool_provider_type}"
  provider_details             = {
    client_id                 = "${var.user_pool_provider_client_id}"
    client_secret             = "${data.external.client_secret.result.client_secret}"
    attributes_request_method = "${var.user_pool_provider_attributes_request_method}"
    oidc_issuer               = "${var.user_pool_provider_issuer_url}"
    authorize_scopes          = "${var.user_pool_authorize_scopes}"

    token_url            = "${var.user_pool_provider_issuer_url}/protocol/openid-connect/token" 
    attributes_url         = "${var.user_pool_provider_issuer_url}/protocol/openid-connect/userinfo" 
    authorize_url    = "${var.user_pool_provider_issuer_url}/protocol/openid-connect/auth" 
    #end_session_endpoint      = "${var.user_pool_provider_issuer_url}/protocol/openid-connect/logout" 
    jwks_uri                  = "${var.user_pool_provider_issuer_url}/protocol/openid-connect/certs"
  }

  attribute_mapping = local.attribute_mapping

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the created Cognito User Pool in Amazon Cognito console : &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiie7b1f9z5rpmfumvgho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiie7b1f9z5rpmfumvgho.png" alt="Cognito Keycloak User Pool" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that the Cognito User Pool federates the previous Keycloak Identity provider : &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleqbxs3itbn1i3hcscin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleqbxs3itbn1i3hcscin.png" alt="Cognito Federated Keycloak User Pool" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 13 : Create a Cognito user pool client to integrate a Single Page Application (OAuth2 implicit flow)
&lt;/h3&gt;

&lt;p&gt;In our showcase, users will access an Angular single page application if they authenticate successfully through Cognito.&lt;/p&gt;

&lt;p&gt;A user pool app client is a configuration within a user pool that interacts with one mobile or web application that authenticates with Amazon Cognito. When you create an app client in Amazon Cognito, you can pre-populate options based on the standard OAuth flow types.&lt;/p&gt;

&lt;p&gt;For our single-page application we need the standard &lt;strong&gt;OAuth2 implicit grant flow&lt;/strong&gt;. The implicit grant delivers an access token and an ID token, but not a refresh token, to your user's browser session directly from the Authorize endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9sqx3czyu2dv5yz34zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9sqx3czyu2dv5yz34zj.png" alt="Navigation Flow" width="800" height="694"&gt;&lt;/a&gt;&lt;/p&gt;
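&lt;p&gt;Concretely, with the implicit grant the browser is sent to the Hosted UI authorize endpoint with &lt;strong&gt;response_type=token&lt;/strong&gt;, and the tokens come back in the redirect URL fragment. A sketch using the &lt;strong&gt;kaiac&lt;/strong&gt; Cognito domain from step 12 (the client ID is a placeholder):&lt;/p&gt;

```shell
# Browser navigation to the Cognito Hosted UI authorize endpoint
# (not a curl call; query parameters shown one per line for readability):
#   https://kaiac.auth.us-west-2.amazoncognito.com/oauth2/authorize
#     response_type = token
#     client_id     = YOUR_APP_CLIENT_ID   (placeholder)
#     redirect_uri  = https://service-spa.skyscaledev.com/
#     scope         = openid email
# On success the SPA receives the tokens directly in the URL fragment:
#   https://service-spa.skyscaledev.com/#access_token=...
```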

&lt;p&gt;To create a Cognito user pool client to integrate a single-page application that supports the OAuth2 implicit flow : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;13-fedclient-spa&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Cognito User Pool Client should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.cogpoolclient.aws_cognito_user_pool_client.user_pool_client: Creating...
module.cogpoolclient.aws_cognito_user_pool_client.user_pool_client: Creation complete after 1s [id=4ivne28uad3dp6uncttem7sf20]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;[!TIP]&lt;br&gt;
The most important section of the Terraform script is the following :&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cognito_user_pool_client" "user_pool_client" {
  name                     = "${var.user_pool_app_client_name}"
  user_pool_id             = local.user_pool_id
  generate_secret          = local.generate_secret
  allowed_oauth_flows      = local.oauth_flows
  allowed_oauth_scopes     = local.all_scopes
  allowed_oauth_flows_user_pool_client = true
  callback_urls    = local.callback_urls
  logout_urls      = local.logout_urls
  supported_identity_providers        = ["${var.user_pool_provider_name}"] 

  refresh_token_validity = var.user_pool_oauth_refresh_token_validity
  access_token_validity = var.user_pool_oauth_access_token_validity
  id_token_validity        = var.user_pool_oauth_id_token_validity

  token_validity_units {
    access_token  = "minutes"
    id_token      = "minutes"
    refresh_token = "days"
  }

  depends_on = [ aws_cognito_resource_server.resource_server ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the created Cognito User Pool client : &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furhgorgwiw5b9jvwtni3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furhgorgwiw5b9jvwtni3.png" alt="Cognito User Pool client" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the client OAuth grant types (flows) : &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9iz8721x4wpsxyhltmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9iz8721x4wpsxyhltmb.png" alt="Cognito client OAuth grant types" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 14 : Create a Lambda Authorizer and attach it to the API Gateway
&lt;/h3&gt;

&lt;p&gt;A Lambda authorizer is used to control access to your API. When a client makes a request to one of your API's methods, API Gateway calls your Lambda authorizer. The Lambda authorizer takes the caller's identity as input and returns an IAM policy as output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ibt6r3oc5pgcpr7qmcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ibt6r3oc5pgcpr7qmcx.png" alt="Lambda Authorizer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
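&lt;p&gt;As an illustration, here is a minimal sketch (not the article's actual authorizer code) of what such a REQUEST authorizer handler can look like on the Node.js runtime. The JWT shape check below stands in for the real signature verification against the user pool's &lt;strong&gt;JWKS_ENDPOINT&lt;/strong&gt;.&lt;/p&gt;

```javascript
// Minimal sketch of a REQUEST Lambda authorizer returning an IAM policy.
// A real authorizer would verify the JWT signature against the JWKS
// endpoint of the user pool; here a simple shape check stands in for it.
function buildPolicy(principalId, effect, resource) {
  return {
    principalId: principalId,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        { Action: "execute-api:Invoke", Effect: effect, Resource: resource },
      ],
    },
  };
}

function handler(event) {
  const headers = event.headers || {};
  const auth = headers.authorization || "";
  const token = auth.startsWith("Bearer ") ? auth.slice(7) : "";
  // A JWT has three dot-separated base64url segments.
  const looksLikeJwt = token.split(".").length === 3;
  const resource = event.routeArn || "*";
  return buildPolicy(
    looksLikeJwt ? "user" : "anonymous",
    looksLikeJwt ? "Allow" : "Deny",
    resource
  );
}
```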

&lt;p&gt;We created an API Gateway in step 5, and the route &lt;strong&gt;ANY /{proxy+}&lt;/strong&gt; was created without any access control mechanism. This is what we will add with our Lambda.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;14-apiauthorizer&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update the &lt;strong&gt;JWKS_ENDPOINT&lt;/strong&gt; value in &lt;strong&gt;files/14_authorizer_real_token_env_vars.json&lt;/strong&gt; with the &lt;strong&gt;Token signing key URL&lt;/strong&gt; of the Cognito User Pool.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Lambda function named &lt;strong&gt;authorizer&lt;/strong&gt; should be created and also attached to the API Gateway.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.authorizer.null_resource.attach_authorizer (local-exec): {
module.authorizer.null_resource.attach_authorizer (local-exec):     "ApiKeyRequired": false,
module.authorizer.null_resource.attach_authorizer (local-exec):     "AuthorizationType": "CUSTOM",
module.authorizer.null_resource.attach_authorizer (local-exec):     "AuthorizerId": "frnyjk",
module.authorizer.null_resource.attach_authorizer (local-exec):     "RouteId": "1gxncjc",
module.authorizer.null_resource.attach_authorizer (local-exec):     "RouteKey": "ANY /{proxy+}",
module.authorizer.null_resource.attach_authorizer (local-exec):     "Target": "integrations/g06nq5l"
module.authorizer.null_resource.attach_authorizer (local-exec): }
module.authorizer.null_resource.attach_authorizer: Creation complete after 3s [id=8645727514132363023]

Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; The most important section of the Terraform script is the following:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "authorizer" {
  function_name = "${var.authorizer_name}"

  runtime = "${var.authorizer_runtime}"
  handler = "${var.authorizer_name}.handler"
  timeout = var.authorizer_timeout

  role = aws_iam_role.iam_for_lambda.arn

  filename      = "${data.archive_file.lambda_archive.output_path}"

  environment {
    variables = local.env_vars
  }

  depends_on = [ null_resource.create_file ]

}

resource "aws_apigatewayv2_authorizer" "api_gw" {
  api_id   = "${local.api_id}"
  authorizer_type = "REQUEST"
  authorizer_uri  = "arn:aws:apigateway:${var.region}:lambda:path/2015-03-31/functions/${aws_lambda_function.authorizer.arn}/invocations"
  name            = "${var.authorizer_name}"
  authorizer_payload_format_version = "2.0"

  depends_on = [
    aws_lambda_permission.allow_apigateway
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform script deploys our Lambda function and attaches it to the main route (&lt;strong&gt;ANY /{proxy+}&lt;/strong&gt;) of our API Gateway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the API Gateway and the attached Lambda Authorizer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1oyujb1carfcvzqnxm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1oyujb1carfcvzqnxm9.png" alt="API Gateway and the attached Lambda Authorizer" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the Lambda Authorizer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjefx55intn6dzsbp1fpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjefx55intn6dzsbp1fpm.png" alt="Lambda Authorizer" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flct7f35pqb8fohcgq58d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flct7f35pqb8fohcgq58d.png" alt="Lambda Authorizer Environment variables" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 15 : Create a Lambda@Edge function to redirect to Cognito if unauthorized access
&lt;/h3&gt;

&lt;p&gt;Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions to customize the content that Amazon CloudFront delivers. You author Node.js or Python functions in a single AWS Region, US East (N. Virginia).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1qv4ulurvzd30kperth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1qv4ulurvzd30kperth.png" alt="Lambda at Edge" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;
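&lt;p&gt;As a sketch, a viewer-request handler of this kind can pass authenticated requests through and redirect everyone else to the Cognito hosted UI (implicit flow). The domain, client id and redirect URI below are placeholders, not the article's real values.&lt;/p&gt;

```javascript
// Sketch of a viewer-request Lambda@Edge handler: pass authenticated
// requests through, redirect everyone else to the Cognito hosted UI.
// Domain, client id and redirect URI are illustrative placeholders.
const COGNITO_DOMAIN = "kaiac.auth.us-west-2.amazoncognito.com";
const CLIENT_ID = "example-client-id";
const REDIRECT_URI = "https://front-service-spa.example.com/login";

function loginUrl() {
  const params = new URLSearchParams({
    response_type: "token",
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    scope: "openid email",
  });
  return "https://" + COGNITO_DOMAIN + "/oauth2/authorize?" + params.toString();
}

function handler(event) {
  const request = event.Records[0].cf.request;
  const cookies = request.headers.cookie || [];
  const authenticated = cookies.some((c) => c.value.indexOf("id_token=") !== -1);
  if (authenticated) {
    return request; // let CloudFront forward the request to the origin
  }
  // Otherwise short-circuit with a 302 redirect to the Cognito login page.
  return {
    status: "302",
    statusDescription: "Found",
    headers: { location: [{ key: "Location", value: loginUrl() }] },
  };
}
```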

&lt;p&gt;To create a Lambda@Edge function that redirects unauthorized users to the Cognito login page :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;15-atedge&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update the file &lt;strong&gt;files/14_authorizer_real_token_env_vars.json&lt;/strong&gt; 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JWKS_ENDPOINT&lt;/strong&gt; with the &lt;strong&gt;Token signing key URL&lt;/strong&gt; of the Cognito User Pool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CLIENT_ID&lt;/strong&gt; with Client ID of the Cognito User Pool client created for SPA.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Lambda function named &lt;strong&gt;lamda_edge&lt;/strong&gt; should be created in &lt;strong&gt;us-east-1&lt;/strong&gt; region.
&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.lambda_edge.aws_lambda_function.lambda_edge: Creating...
module.lambda_edge.aws_lambda_function.lambda_edge: Still creating... [10s elapsed]
module.lambda_edge.aws_lambda_function.lambda_edge: Creation complete after 13s [id=lamda_edge]

Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; The most important section of the Terraform script is the following:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "lambda_edge" {
  function_name = "${var.lambda_edge_name}"

  runtime = "${var.lambda_edge_runtime}"
  handler = "${var.lambda_edge_name}.handler"
  timeout = var.lambda_edge_timeout

  role = aws_iam_role.iam_for_lambda.arn
  filename      = "${data.archive_file.lambda_archive.output_path}"
  publish = true
  provider = aws.us_east_1
  depends_on = [ null_resource.create_file ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that our Lambda@Edge function is created, we can create a CloudFront distribution and attach the function to it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 16 : Create a Cloudfront distribution with the API Gateway as origin and the Lambda@Edge function attached to the Viewer-Request
&lt;/h3&gt;

&lt;p&gt;When a user enters the Angular SPA, we need a mechanism to check whether or not the user is authenticated. With Amazon CloudFront, you can write your own code to customize how your CloudFront distributions process HTTP requests and responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcc0fszl660yrq2cnht8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcc0fszl660yrq2cnht8.png" alt="Cloudfront Edge overview" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we will create a Cloudfront distribution to take advantage of these HTTP behaviour customization features by adding a Lambda@Edge function that will be invoked on each &lt;strong&gt;Viewer-request&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;16-apigwfront&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a Cloudfront distribution with the API Gateway as origin should be created, with the Lambda@Edge function attached to the Viewer-Request.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.cloudfront.aws_cloudfront_distribution.distribution: Still creating... [6m10s elapsed]
module.cloudfront.aws_cloudfront_distribution.distribution: Creation complete after 6m12s [id=EHJSCY1ZDZ8CD]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; The most important section of the Terraform script is the following:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudfront_distribution" "distribution" {
  comment = "Distribution for ${var.app_namespace} ${var.app_name} ${var.app_env}"
  origin {
    domain_name = "${local.api_gw_endpoint}"
    origin_id   = "${local.origin_id}"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_ssl_protocols   = ["TLSv1.1", "TLSv1.2"]
      origin_protocol_policy = "https-only"
    }
  }

  enabled             = true
  default_cache_behavior {
    allowed_methods        = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "${local.origin_id}"
    forwarded_values {
      query_string = true
      cookies {
        forward = "all"
      }

      headers = ["*"]
    }
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400

    lambda_function_association {
      event_type   = "viewer-request"
      lambda_arn   = "${data.aws_lambda_function.lambda_edge.qualified_arn}"
      include_body = false
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

    # SSL certificate for the service.
    viewer_certificate {
        cloudfront_default_certificate = false
        acm_certificate_arn = data.aws_acm_certificate.acm_cert.arn
        ssl_support_method = "sni-only"
        minimum_protocol_version = "TLSv1.2_2021"
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k3rj5txugfns7l54593.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k3rj5txugfns7l54593.png" alt="Cloudfront Lambda at Edge" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 17 : Create a DNS record for the Cloudfront distribution
&lt;/h3&gt;

&lt;p&gt;We need a DNS record to expose our cloudfront distribution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;17-meshcfexposer-spa&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update &lt;strong&gt;terraform.tfvars&lt;/strong&gt; to specify a Route53 Hosted Zone and the Cloudfront Distribution ID.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a DNS record &lt;strong&gt;front-service-spa.example.com&lt;/strong&gt; should be created in your hosted zone.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.cfexposer.null_resource.alias (local-exec): Alias front-service-spa.skyscaledev.com added to CloudFront distribution EHJSCY1ZDZ8CD
module.cfexposer.null_resource.alias: Creation complete after 4s [id=4441363320886135252]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 18 : Create a DNS record for the SPA
&lt;/h3&gt;

&lt;p&gt;We need a DNS record to expose our Angular SPA.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;18-meshexposer-spa&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Update &lt;strong&gt;terraform.tfvars&lt;/strong&gt; to specify a Route53 Hosted Zone.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a DNS record &lt;strong&gt;service-spa.example.com&lt;/strong&gt; should be created in your hosted zone.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 3 to add, 0 to change, 0 to destroy.
module.exposer.aws_apigatewayv2_domain_name.api_gw: Creating...
module.exposer.aws_apigatewayv2_domain_name.api_gw: Creation complete after 1s [id=service-spa.skyscaledev.com]
module.exposer.aws_apigatewayv2_api_mapping.s1_mapping: Creating...
module.exposer.aws_route53_record.dnsapi: Creating...
module.exposer.aws_apigatewayv2_api_mapping.s1_mapping: Creation complete after 1s [id=c7mn30]
module.exposer.aws_route53_record.dnsapi: Still creating... [10s elapsed]
module.exposer.aws_route53_record.dnsapi: Still creating... [20s elapsed]
module.exposer.aws_route53_record.dnsapi: Still creating... [30s elapsed]
module.exposer.aws_route53_record.dnsapi: Still creating... [40s elapsed]
module.exposer.aws_route53_record.dnsapi: Creation complete after 45s [id=Z0017173R6DN4LL9QIY3_service-spa_CNAME]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 19 : Deploy the SPA inside the App Mesh
&lt;/h3&gt;

&lt;p&gt;The Angular Single page application we deploy is contained in the image &lt;strong&gt;041292242005.dkr.ecr.us-west-2.amazonaws.com/k8s:spa_staging&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can find the source code in the following github repository : &lt;a href="https://github.com/agoralabs/demo-kaiac-cognito-spa.git" rel="noopener noreferrer"&gt;https://github.com/agoralabs/demo-kaiac-cognito-spa.git&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The image contains simple Angular code that fetches the JWT tokens received after the OAuth2 implicit flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fetchAuthInfoFromURL(): void {
    const params = new URLSearchParams(window.location.hash.substring(1));
    const accessToken = params.get('access_token');
    const idToken = params.get('id_token');
    this.accessToken = accessToken;
    this.idToken = idToken;

    if (idToken) {
        const [header, payload, signature] = idToken.split('.');

        const decodedPayload = JSON.parse(atob(payload));
        if (decodedPayload) {
        const userId = decodedPayload.sub;
        const username = decodedPayload.username;
        const email = decodedPayload.email;

        this.email = email;
        this.username = username;
        this.userId = userId;

        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
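&lt;p&gt;The payload decoding above can be exercised outside the browser. Here is a standalone Node.js sketch (using a fake, unsigned token built for illustration only) that shows the base64url decoding step:&lt;/p&gt;

```javascript
// Standalone sketch of the payload decoding done in fetchAuthInfoFromURL,
// runnable in Node.js (the browser would use atob instead of Buffer).
// Note that JWT segments are base64url-encoded, not plain base64.
function decodeJwtPayload(jwt) {
  const payload = jwt.split(".")[1];
  const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(base64, "base64").toString("utf8"));
}

// Build a fake, unsigned token for illustration only.
const claims = { sub: "1234", username: "marie", email: "marie@example.com" };
const fakePayload = Buffer.from(JSON.stringify(claims)).toString("base64url");
const fakeJwt = "eyJhbGciOiJub25lIn0." + fakePayload + ".";

console.log(decodeJwtPayload(fakeJwt).email); // marie@example.com
```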



&lt;p&gt;To deploy your SPA, do the following : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;19-meshservice-spa&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;In the file &lt;strong&gt;files/19-appmesh-service-spa.yaml&lt;/strong&gt;, update ENV_APP_GL_USER_POOL_ID and ENV_APP_GL_USER_POOL_CLIENT_ID in the &lt;em&gt;service-spa&lt;/em&gt; ConfigMap
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-spa
  namespace: service-spa
data:
  ENV_APP_KC_URL: "https://keycloak-demo1-prod.skyscaledev.com/"
  ENV_APP_BE_LOCAL_PORT: "8083"
  ENV_APP_BE_URL: "https://service-api.skyscaledev.com/"
  ENV_APP_GL_IDENTITY_POOL_NAME: "keycloak-identity-pool"
  ENV_APP_GL_AWS_REGION: "us-west-2"
  ENV_APP_GL_USER_POOL_ID: "us-west-2_BPVSBRdNl"
  ENV_APP_GL_USER_POOL_CLIENT_ID: "4ivne28uad3dp6uncttem7sf20"
  ENV_APP_GL_OAUTH_DOMAIN: "kaiac.auth.us-west-2.amazoncognito.com"
  ENV_APP_GL_OAUTH_REDIRECT_LOGIN: "https://service-spa.skyscaledev.com/login"
  ENV_APP_GL_OAUTH_REDIRECT_LOGOUT: "https://service-spa.skyscaledev.com/login"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : The spa should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.appmeshservice.kubectl_manifest.resource["/apis/apps/v1/namespaces/service-spa/deployments/service-spa"]: Creation complete after 1m42s [id=/apis/apps/v1/namespaces/service-spa/deployments/service-spa]

Apply complete! Resources: 13 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n service-spa
NAME                           READY   STATUS    RESTARTS   AGE
service-spa-7b4884cd4f-pzpjl   3/3     Running   0          2m49s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Navigate to &lt;a href="https://front-service-spa.example.com/" rel="noopener noreferrer"&gt;https://front-service-spa.example.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ltkmbu8j2zuamq1g7zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ltkmbu8j2zuamq1g7zd.png" alt="Cognito Redirection" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg7olkaknje0lgn4w9nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg7olkaknje0lgn4w9nt.png" alt="Cognito Keycloak login" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25a8j6id70d90fvmefdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25a8j6id70d90fvmefdi.png" alt="Keycloak Redirection to SPA" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 20 : Deploy a PostgreSQL database for the API inside the App Mesh
&lt;/h3&gt;

&lt;p&gt;We also need to deploy a Java SpringBoot API for our demo. To be realistic, we need a PostgreSQL database for our SpringBoot application. To achieve this, do the following : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;20-meshservice-postgre-api&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a PostgreSQL pod should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.appmeshservice.kubectl_manifest.resource["/apis/apps/v1/namespaces/postgreapi/deployments/postgreapi"]: Creation complete after 50s [id=/apis/apps/v1/namespaces/postgreapi/deployments/postgreapi]

Apply complete! Resources: 12 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the created PostgreSQL pod :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n postgreapi
NAME                          READY   STATUS    RESTARTS   AGE
postgreapi-754bd8b77d-7lhv2   3/3     Running   0          49s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 21 : Create a DNS record for the API
&lt;/h3&gt;

&lt;p&gt;Our SpringBoot API should be accessible at &lt;strong&gt;&lt;a href="https://service-api.skyscaledev.com/" rel="noopener noreferrer"&gt;https://service-api.skyscaledev.com/&lt;/a&gt;&lt;/strong&gt;. So we need to create a DNS record.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;strong&gt;21-meshexposer-api&lt;/strong&gt; directory.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : The DNS record should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exposer.aws_route53_record.dnsapi: Creation complete after 50s [id=Z0017173R6DN4LL9QIY3_service-api_CNAME]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 22 : Deploy the API inside the App Mesh
&lt;/h3&gt;

&lt;p&gt;We will now deploy our Java SpringBoot API using the following Docker image : &lt;strong&gt;041292242005.dkr.ecr.us-west-2.amazonaws.com/springbootapi:24.0.1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can also find the source code in the following github repository : &lt;a href="https://github.com/agoralabs/demo-kaiac-cognito-springboot-api.git" rel="noopener noreferrer"&gt;https://github.com/agoralabs/demo-kaiac-cognito-springboot-api.git&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The following step will deploy your SpringBoot API inside your Service Mesh.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;22-meshservice-api&lt;/strong&gt; folder.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.appmeshservice.kubectl_manifest.resource["/apis/apps/v1/namespaces/service-api/deployments/service-api"]: Creation complete after 1m20s [id=/apis/apps/v1/namespaces/service-api/deployments/service-api]

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 23 : Create a Cognito user pool client to integrate an API (OAuth2 client_credentials flow)
&lt;/h3&gt;

&lt;p&gt;We need to insert some data in the SpringBoot API PostgreSQL database. To do that, we will use the SpringBoot API POST &lt;a href="https://service-api.skyscaledev.com/employee/v1/" rel="noopener noreferrer"&gt;https://service-api.skyscaledev.com/employee/v1/&lt;/a&gt; endpoint via Postman. To avoid an authorization error, we will add a specific Cognito client with a &lt;strong&gt;standard client_credentials grant flow&lt;/strong&gt;. Client credentials is an authorization-only grant suitable for machine-to-machine access. To receive a client credentials grant, bypass the Authorize endpoint and send a request directly to the Token endpoint. Your app client must have a client secret and support client credentials grants only. In response to a successful request, the authorization server returns an access token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uurzb71nxfj6snplhif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8uurzb71nxfj6snplhif.png" alt="Cognito Client Credentials overview" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
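&lt;p&gt;The token request can be sketched as follows; the domain, client id, client secret and scope below are placeholders, not the article's real values:&lt;/p&gt;

```javascript
// Sketch of the client_credentials request sent straight to the Cognito
// token endpoint (the Authorize endpoint is bypassed). The domain,
// client id/secret and scope are illustrative placeholders.
function buildTokenRequest(domain, clientId, clientSecret, scope) {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    scope: scope,
  }).toString();
  // The client authenticates with HTTP Basic auth: base64(clientId:clientSecret)
  const basic = Buffer.from(clientId + ":" + clientSecret).toString("base64");
  return {
    url: "https://" + domain + "/oauth2/token",
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization: "Basic " + basic,
    },
    body: body,
  };
}

const req = buildTokenRequest(
  "kaiac.auth.us-west-2.amazoncognito.com",
  "my-client-id",
  "my-client-secret",
  "springbootapi/employee.write"
);
// The request could then be sent with fetch(req.url, req) or from Postman.
```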

&lt;p&gt;To create the client_credentials Cognito client :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd to &lt;strong&gt;23-fedclient-api&lt;/strong&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;apply.sh&lt;/strong&gt; script : a userpool client named &lt;strong&gt;springbootapi&lt;/strong&gt; should be created.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./apply.sh

...
module.cogpoolclient.aws_cognito_user_pool_client.user_pool_client: Creation complete after 0s [id=3gcoov4vlfqhmuimqildosljr6]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We first retrieve a token using client credentials grant with Postman :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furdxrcli13gr6ss1gg3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furdxrcli13gr6ss1gg3i.png" alt="Retrieve token via client credentials grant" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now use our SpringBoot API in Postman with the following payload :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "id": 1,
    "first_name": "Marie",
    "last_name": "POPPINS",
    "age": "20",
    "designation": "Developer",
    "phone_number": "0624873333",
    "joined_on": "2024-06-02",
    "address": "3 allée louise bourgeois Clamart",
    "date_of_birth": "2018-04-30"

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ruvolspojsfs16ly09b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ruvolspojsfs16ly09b.png" alt="POST Add Datas using token" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally we can use our Angular Single Page Application to call the SpringBoot API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pfb8tkh28ogujpgco13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pfb8tkh28ogujpgco13.png" alt="Call API from SPA" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;
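&lt;p&gt;In the SPA, the call boils down to attaching the access token as a Bearer header; the helper below is a hypothetical illustration, not the article's actual code:&lt;/p&gt;

```javascript
// Hypothetical helper showing how the SPA attaches the access token when
// calling the API; the endpoint path matches the article, the token value
// is a placeholder.
function authHeaders(accessToken) {
  return {
    Authorization: "Bearer " + accessToken,
    "Content-Type": "application/json",
  };
}

// In the browser the call would look like:
// fetch("https://service-api.skyscaledev.com/employee/v1/", {
//   headers: authHeaders(accessToken),
// });
```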

&lt;h3&gt;
  
  
  Step 24 : Observability
&lt;/h3&gt;

&lt;h4&gt;
  
  
  CloudWatch Logs
&lt;/h4&gt;

&lt;p&gt;In Step 4 Fluentd is set up as a DaemonSet to send logs to CloudWatch Logs. Fluentd creates the following log groups if they don't already exist : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/aws/containerinsights/Cluster_Name/application&lt;/strong&gt; : All log files in /var/log/containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/aws/containerinsights/Cluster_Name/host&lt;/strong&gt; : Logs from /var/log/dmesg, /var/log/secure, and /var/log/messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/aws/containerinsights/Cluster_Name/dataplane&lt;/strong&gt; : The logs in /var/log/journal for kubelet.service, kubeproxy.service, and docker.service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In CloudWatch you can also observe API Gateway logs, Lambda Authorizer logs and Lambda@Edge logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrstmpcq0foeau0zf82x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrstmpcq0foeau0zf82x.png" alt="API Gateway Logs" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw7nn1zja0tprajdj2yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw7nn1zja0tprajdj2yx.png" alt="API Gateway Logs Lambda Authorizer logs" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  X-Ray Tracing
&lt;/h4&gt;

&lt;p&gt;In Step 4, X-Ray tracing is enabled in the App Mesh Controller configuration by including &lt;strong&gt;--set tracing.enabled=true&lt;/strong&gt; and &lt;strong&gt;--set tracing.provider=x-ray&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 04-mesh/files/4-appmesh-controller.yaml
tracing:
  # tracing.enabled: `true` if Envoy should be configured tracing
  enabled: true
  # tracing.provider: can be x-ray, jaeger or datadog
  provider: x-ray
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;X-Ray Traces&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmttfpvh6vorv4pax3to.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmttfpvh6vorv4pax3to.png" alt="X-Ray Traces" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;X-Ray Map&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38iwcogh7797ynalyyd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38iwcogh7797ynalyyd3.png" alt="X-Ray Traces Map" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean up
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;cd into each folder,&lt;/li&gt;
&lt;li&gt;Run the &lt;strong&gt;./destroy.sh&lt;/strong&gt; shell script.&lt;/li&gt;
&lt;/ul&gt;
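&lt;p&gt;The two steps above can be wrapped in a small loop. This is a hypothetical sketch, not a script from the repository; it assumes the numbered folder layout used in this series (e.g. 04-mesh) and tears the stacks down in reverse creation order:&lt;/p&gt;

```shell
# Hypothetical teardown loop — assumes numbered folders like 01-..., 04-mesh.
# Destroys in reverse creation order so dependencies are removed last.
destroy_all() {
  for dir in $(ls -d [0-9][0-9]-*/ 2>/dev/null | sort -r); do
    ( cd "$dir" || exit 1; ./destroy.sh )
  done
}
```

&lt;p&gt;Running the folders in reverse creation order avoids deleting resources that later folders still depend on.&lt;/p&gt;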

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;App Mesh, being a managed service, reduces the complexity and overhead of managing the service mesh. There are many other App Mesh features that we didn't cover in depth, such as Traffic Shifting, Request Timeouts, Circuit Breaking, Retries, and mTLS.&lt;/p&gt;

&lt;p&gt;Given the number of steps we went through, this shift may look way too complex. But from my point of view, the heavy lifting is done only once: after everything is set up correctly, developers will find it easy to dive into a world full of &lt;strong&gt;sidecars&lt;/strong&gt;. As you can see in the following image, each line represents a pod we created in the Kubernetes cluster, and each small green square represents a sidecar container. Beside each application you deploy, an Envoy container, an X-Ray container, and a Fluentd container are added as sidecars to give you maximum observability into what is going on inside your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs1bxo7gjhk6tgp6608t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs1bxo7gjhk6tgp6608t.png" alt="App Mesh Pods and containers and sidecars" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This demo can also be easily launched using the KaiaC tool: &lt;a href="https://www.kaiac.io/solutions/appmesh" rel="noopener noreferrer"&gt;App Mesh with KaiaC&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AppMesh - Service Mesh &amp;amp; Beyond&lt;/strong&gt; : &lt;a href="https://tech.forums.softwareag.com/t/appmesh-service-mesh-beyond/" rel="noopener noreferrer"&gt;https://tech.forums.softwareag.com/t/appmesh-service-mesh-beyond/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh: Hosted Service Mesh Control Plane for Envoy Proxy&lt;/strong&gt; : &lt;a href="https://www.infoq.com/news/2019/01/aws-app-mesh/" rel="noopener noreferrer"&gt;https://www.infoq.com/news/2019/01/aws-app-mesh/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Istio service mesh&lt;/strong&gt; : &lt;a href="https://istio.io/latest/about/service-mesh/" rel="noopener noreferrer"&gt;https://istio.io/latest/about/service-mesh/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh ingress and route enhancements&lt;/strong&gt; : &lt;a href="https://aws.amazon.com/blogs/containers/app-mesh-ingress-route-enhancements/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/app-mesh-ingress-route-enhancements/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to use OAuth 2.0 in Amazon Cognito: Learn about the different OAuth 2.0 grants&lt;/strong&gt;: &lt;a href="https://aws.amazon.com/blogs/security/how-to-use-oauth-2-0-in-amazon-cognito-learn-about-the-different-oauth-2-0-grants/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/how-to-use-oauth-2-0-in-amazon-cognito-learn-about-the-different-oauth-2-0-grants/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh — Deep Dive&lt;/strong&gt; : &lt;a href="https://medium.com/@iyer.hareesh/aws-app-mesh-deep-dive-60c9ad227c9d" rel="noopener noreferrer"&gt;https://medium.com/@iyer.hareesh/aws-app-mesh-deep-dive-60c9ad227c9d&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Circuit breaking&lt;/strong&gt; : &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#arch-overview-circuit-break" rel="noopener noreferrer"&gt;https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#arch-overview-circuit-break&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Envoy defaults set by App Mesh&lt;/strong&gt; : &lt;a href="https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-defaults.html#default-circuit-breaker" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-defaults.html#default-circuit-breaker&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monolithic vs Microservices Architecture&lt;/strong&gt; : &lt;a href="https://www.geeksforgeeks.org/monolithic-vs-microservices-architecture/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/monolithic-vs-microservices-architecture/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>appmesh</category>
      <category>servicemesh</category>
      <category>aws</category>
      <category>terraform</category>
    </item>
    <item>
      <title>My Service Mesh journey with Terraform on AWS Cloud - Part 1</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Mon, 01 Jul 2024 14:25:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-1-3hee</link>
      <guid>https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-1-3hee</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of modern applications and cloud native architectures, the need for efficient, scalable, and secure communication between services is paramount.&lt;br&gt;
If you still have a doubt for your own organization, just take a look at your workloads and if you are deploying more and more services and observing thoses services is a bit challenging, for sure your organization probably need a service Mesh.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs87r5lbvtdbomjhlpbpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs87r5lbvtdbomjhlpbpp.png" alt="Microservices Before and After" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My purpose is to showcase the capabilities of the service mesh concept on the Amazon Web Services (AWS) cloud with Terraform. AWS App Mesh is AWS's implementation of the mesh concept, and its primary purpose is to allow developers to focus on innovation rather than infrastructure. But before diving into Terraform code, let's explore some core knowledge for a better understanding of the service mesh concept.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does an organization need a service mesh?
&lt;/h2&gt;

&lt;p&gt;A monolithic architecture is a traditional approach to designing software where an entire application is built as a single, indivisible unit. In this architecture, all the different components of the application, such as the user interface, business logic, and data access layer, are tightly integrated and deployed together.&lt;br&gt;
As a monolithic application grows, it becomes more complex and harder to manage.&lt;br&gt;
This complexity can make it difficult for developers to understand how different parts of the application interact, leading to longer development times and increased risk of errors.&lt;/p&gt;

&lt;p&gt;In modern application architecture, you can build applications as a collection of small, independently deployable microservices. Different teams may build individual microservices and choose their coding languages and tools. However, the microservices must communicate for the application code to work correctly.&lt;/p&gt;

&lt;p&gt;Application performance depends on the speed and resiliency of communication between services. Developers must monitor and optimize the application across services, but it’s hard to gain visibility due to the system's distributed nature. As applications scale, it becomes even more complex to manage communications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2vg1qulaknvqm687d0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2vg1qulaknvqm687d0z.png" alt="Monolith Microservices and Service Mesh" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two main drivers of service mesh adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service-level observability&lt;/strong&gt; : As more workloads and services are deployed, developers find it challenging to understand how everything works together. For example, service teams want to know what their downstream and upstream dependencies are. They want greater visibility into how services and workloads communicate at the application layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service-level control&lt;/strong&gt; : Administrators want to control which services talk to one another and what actions they perform. They want fine-grained control and governance over the behavior, policies, and interactions of services within a microservices architecture. Enforcing security policies is essential for regulatory compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those drivers lead to a service mesh architecture as a response. In fact, a service mesh provides a centralized, dedicated infrastructure layer that handles the intricacies of service-to-service communication within a distributed application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the benefits of a service mesh?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service discovery&lt;/strong&gt; : Service meshes provide automated service discovery, which reduces the operational load of managing service endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load balancing&lt;/strong&gt; : Service meshes use various algorithms to distribute requests across multiple service instances intelligently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic management&lt;/strong&gt; : Service meshes offer advanced traffic management features, which provide fine-grained control over request routing and traffic behavior. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does a service mesh work?
&lt;/h2&gt;

&lt;p&gt;A service mesh removes the logic governing service-to-service communication from individual services and abstracts communication to its own infrastructure layer. It uses several network proxies to route and track communication between services.&lt;/p&gt;

&lt;p&gt;A proxy acts as an intermediary gateway between your organization’s network and the microservice. All traffic to and from the service is routed through the proxy server. Individual proxies are sometimes called &lt;em&gt;sidecars&lt;/em&gt;, because they run separately but are logically next to each service. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd830ke7thebzz8dmzid5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd830ke7thebzz8dmzid5.png" alt="Service Mesh works" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all for this first part, and as you can see, it is just about service mesh "theory". In &lt;a href="https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-2-58fd"&gt;Part 2&lt;/a&gt; we will get our hands dirty and go deep into an AWS service mesh demo.&lt;br&gt;
Stay focused and click here to try this exciting &lt;a href="https://dev.to/aws-builders/my-service-mesh-journey-with-terraform-on-aws-cloud-part-2-58fd"&gt;Part 2&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AppMesh - Service Mesh &amp;amp; Beyond&lt;/strong&gt; : &lt;a href="https://tech.forums.softwareag.com/t/appmesh-service-mesh-beyond/" rel="noopener noreferrer"&gt;https://tech.forums.softwareag.com/t/appmesh-service-mesh-beyond/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh: Hosted Service Mesh Control Plane for Envoy Proxy&lt;/strong&gt; : &lt;a href="https://www.infoq.com/news/2019/01/aws-app-mesh/" rel="noopener noreferrer"&gt;https://www.infoq.com/news/2019/01/aws-app-mesh/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Istio service mesh&lt;/strong&gt; : &lt;a href="https://istio.io/latest/about/service-mesh/" rel="noopener noreferrer"&gt;https://istio.io/latest/about/service-mesh/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh ingress and route enhancements&lt;/strong&gt; : &lt;a href="https://aws.amazon.com/blogs/containers/app-mesh-ingress-route-enhancements/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/app-mesh-ingress-route-enhancements/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to use OAuth 2.0 in Amazon Cognito: Learn about the different OAuth 2.0 grants&lt;/strong&gt;: &lt;a href="https://aws.amazon.com/blogs/security/how-to-use-oauth-2-0-in-amazon-cognito-learn-about-the-different-oauth-2-0-grants/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/how-to-use-oauth-2-0-in-amazon-cognito-learn-about-the-different-oauth-2-0-grants/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS App Mesh — Deep Dive&lt;/strong&gt; : &lt;a href="https://medium.com/@iyer.hareesh/aws-app-mesh-deep-dive-60c9ad227c9d" rel="noopener noreferrer"&gt;https://medium.com/@iyer.hareesh/aws-app-mesh-deep-dive-60c9ad227c9d&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Circuit breaking&lt;/strong&gt; : &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#arch-overview-circuit-break" rel="noopener noreferrer"&gt;https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#arch-overview-circuit-break&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Envoy defaults set by App Mesh&lt;/strong&gt; : &lt;a href="https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-defaults.html#default-circuit-breaker" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-defaults.html#default-circuit-breaker&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monolithic vs Microservices Architecture&lt;/strong&gt; : &lt;a href="https://www.geeksforgeeks.org/monolithic-vs-microservices-architecture/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/monolithic-vs-microservices-architecture/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>appmesh</category>
      <category>servicemesh</category>
      <category>aws</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Infrastructure As Code on AWS made simple</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Tue, 05 Mar 2024 21:56:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/infrastructure-as-code-on-aws-made-simpler-3gm</link>
      <guid>https://dev.to/aws-builders/infrastructure-as-code-on-aws-made-simpler-3gm</guid>
      <description>&lt;p&gt;This post will introduce you into KaiaC, a tool made to simplify Infrastructure As Code. KaiaC is built on top of Terraform and allow you to create cloud resources just with few key values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3w5qwkun0veir2sefvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3w5qwkun0veir2sefvv.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features of KaiaC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Key Value&lt;/strong&gt; : Infrastructure is described using a simple configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform features&lt;/strong&gt; : KaiaC mostly uses Terraform in the background, so you inherit all the powerful Terraform features like execution plans. For more information on Terraform, refer to the Terraform website.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let's create an EC2 t2.micro with KaiaC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 1 : Create a KvBook&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A KvBook is simply the file that contains your key/value configuration information. You have to create a KvBook to tell KaiaC which type of resources you need.&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kaiac create vmonly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KvBook /root/kvbooks/vmonly-3Bpkcs9rPV1a.cfg created!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 2 : Edit your KvBook&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nano /root/kvbooks/vmonly-3Bpkcs9rPV1a.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file will look like this; if it is OK for you, leave it as is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GL_KAIAC_MODULE="vmonly"
GL_NAMESPACE="vmonly001"
GL_NAME="demo1"
GL_STAGE="staging"
BE_AMI_NAME="ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919"
BE_EC2_TYPE="t2.micro"
BE_EC2_VOLUME_SIZE="8"
BE_PATH_EC2_PUBKEY="~/.ssh/id_rsa.pub"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
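&lt;p&gt;Since a KvBook is plain KEY="value" shell syntax, it can also be sourced and sanity-checked before you register it. A minimal sketch; load_kvbook is a hypothetical helper of mine, not a KaiaC command:&lt;/p&gt;

```shell
# Minimal sketch — load_kvbook is a hypothetical helper, not part of KaiaC.
# It sources a KvBook and checks that a few expected keys are set before
# the file is registered.
load_kvbook() {
  . "$1"
  for key in GL_KAIAC_MODULE GL_NAMESPACE GL_STAGE BE_EC2_TYPE; do
    eval "val=\${$key:-}"
    if [ -z "$val" ]; then
      echo "missing key: $key"
      return 1
    fi
  done
}
```

&lt;p&gt;Because each line is a plain shell assignment, the same file can be both read by KaiaC and sourced by your own validation scripts.&lt;/p&gt;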



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 3 : Register your KvBook&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kaiac register /root/kvbooks/vmonly-3Bpkcs9rPV1a.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 4 : Check your KvBook&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kaiac plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 5 : Apply your KvBook&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kaiac apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 6 : Check your resources&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Connect to your &lt;a href="https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#Home:" rel="noopener noreferrer"&gt;AWS EC2 Console&lt;/a&gt;.&lt;br&gt;
You should see your newly created EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 7 : Destroy your resources&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Hit the following command!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kaiac destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 8 : Check your resources again&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Connect to your &lt;a href="https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#Home:" rel="noopener noreferrer"&gt;AWS EC2 Console&lt;/a&gt;.&lt;br&gt;
You should see that your EC2 instance has been terminated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where can you find KaiaC ?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;KaiaC is distributed as a Debian package and installation instructions can be found here &lt;a href="https://www.kaiac.io/install" rel="noopener noreferrer"&gt;https://www.kaiac.io/install&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you find this post interesting, please leave a comment.&lt;/p&gt;

</description>
      <category>kaiac</category>
      <category>iac</category>
      <category>aws</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Enable encryption on existing RDS instance</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Mon, 27 Feb 2023 19:44:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/enable-encryption-on-existing-rds-instance-2gad</link>
      <guid>https://dev.to/aws-builders/enable-encryption-on-existing-rds-instance-2gad</guid>
      <description>&lt;p&gt;&lt;strong&gt;Do you know that it is not possible to enable encryption for an Amazon RDS database after it is created?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fov9w3sd97nfxcxuiz7l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fov9w3sd97nfxcxuiz7l8.png" alt="Unencrypted AWS RDS Instances" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Encryption is essential for corporate data. You must therefore ensure that encryption is enabled for all your Amazon RDS databases at creation time.&lt;/p&gt;

&lt;p&gt;Imagine the case where a new unencrypted database joins your organization, following a merger for example. You will have to catch up.&lt;/p&gt;

&lt;p&gt;Don't panic! The solution is quite simple, even when your database is large and heavily used.&lt;/p&gt;

&lt;p&gt;Basically, you will have to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create a snapshot of your database&lt;/strong&gt;;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpdiec709y9wyodsu0ko.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpdiec709y9wyodsu0ko.jpg" alt="Create a snapshot of your database" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Create an encrypted copy of your snapshot&lt;/strong&gt;;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg863nsad8dghsxejnfy.png" alt="Create an encrypted copy of your snapshot" width="800" height="722"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt1sa77syje5nm32on0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt1sa77syje5nm32on0f.png" alt="Create an encrypted copy of your snapshot" width="773" height="1001"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;Restore your encrypted snapshot in a new database&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jay7ep21ulvt8ibddft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jay7ep21ulvt8ibddft.png" alt="Restore your encrypted snapshot" width="360" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your Amazon RDS database is now encrypted!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w146kexbg9uh5qeepn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w146kexbg9uh5qeepn5.png" alt="Encrypted Amazon RDS database" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
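&lt;p&gt;The three console steps above also map to three AWS CLI calls. In this sketch the identifiers and the KMS key are placeholders to replace with your own, and the commands are printed rather than executed so you can review them first:&lt;/p&gt;

```shell
# Sketch with placeholder identifiers — replace them with your own.
# The commands are printed, not executed, so you can review them first.
DB_ID="mydb"
SNAP_ID="${DB_ID}-snap"
ENC_SNAP_ID="${DB_ID}-snap-encrypted"
KMS_KEY="alias/aws/rds"

print_encryption_plan() {
  # 1. Snapshot the unencrypted database
  echo "aws rds create-db-snapshot --db-instance-identifier $DB_ID --db-snapshot-identifier $SNAP_ID"
  # 2. Copy the snapshot with encryption enabled via a KMS key
  echo "aws rds copy-db-snapshot --source-db-snapshot-identifier $SNAP_ID --target-db-snapshot-identifier $ENC_SNAP_ID --kms-key-id $KMS_KEY"
  # 3. Restore the encrypted snapshot into a new database
  echo "aws rds restore-db-instance-from-db-snapshot --db-instance-identifier ${DB_ID}-encrypted --db-snapshot-identifier $ENC_SNAP_ID"
}

print_encryption_plan
```

&lt;p&gt;Note that the restore creates a new database: you still need to switch your application over to it and delete the old unencrypted instance afterwards.&lt;/p&gt;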

&lt;p&gt;For more details, see &lt;a href="https://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/UserGuide/Overview.Encryption.html" rel="noopener noreferrer"&gt;Amazon RDS Encryption&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>rds</category>
      <category>encryption</category>
      <category>aws</category>
      <category>awscommunitybuilders</category>
    </item>
    <item>
      <title>Sexiest way to manage your AWS resources</title>
      <dc:creator>jekobokidou</dc:creator>
      <pubDate>Tue, 14 Feb 2023 23:49:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/sexiest-way-to-manage-your-aws-resources-3nc9</link>
      <guid>https://dev.to/aws-builders/sexiest-way-to-manage-your-aws-resources-3nc9</guid>
      <description>&lt;p&gt;A friend has been editing a SaaS solution for a few years on AWS Cloud. Step by step, his SaaS solution is starting to take hold, his client portfolio is growing, he has to hire more developers. The code that used to be modified by a single person will now be modified by several, so far its git repository is only composed of two branches “develop” for its developments and “main” for what goes into production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cqeutwkvrmq5cwe28pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cqeutwkvrmq5cwe28pl.png" alt="Suitable git flow for one single developer" width="509" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This situation causes him great anxiety; he wonders whether he will be able to scale up. So he asks me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;How can I manage the development of features in parallel?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How can I minimize the risks of regression?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to carry out production releases with zero downtime?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4umuvsyvj62k0z7jy2ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4umuvsyvj62k0z7jy2ef.png" alt="The Gitflow like for bigger developers teams" width="690" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before answering those questions, I wanted to reassure him, because being on the AWS cloud is a great starting point. AWS arguably has the most complete cloud ecosystem, and any SaaS application should feel safe there.&lt;br&gt;
So I told him to close his eyes and imagine a solution able to dynamically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create a "feature" branch from "develop"&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create a "feature" dedicated environment on the AWS cloud&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create a deployment pipeline on the "feature" environment and which is triggered at each commit on the "feature" branch&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create a DNS record that allows easy testing of the "feature" branch&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create a load balancer and deploy an SSL certificate for the "feature" environment&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Deploy in "Staging" with each validation of "merge request" carried out on the "develop" branch&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Create an image of the "staging" environment for a deployment in production&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9govxsu0exx27myay2wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9govxsu0exx27myay2wi.png" alt="Automating environment management" width="787" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I told him to open his eyes, because it can easily be implemented on the AWS cloud, let’s deal with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create a “feature” branch from “develop”?
&lt;/h2&gt;

&lt;p&gt;Terraform is your friend: the code below gives you a Terraform module that lets you manage any git repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqimzb5nij6wf9gs0ioe9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqimzb5nij6wf9gs0ioe9.png" alt="Terraform code snippet to create or delete a git branch" width="800" height="719"&gt;&lt;/a&gt;&lt;/p&gt;
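&lt;p&gt;For reference, the branch operation that the module automates is just two git commands. A sketch wrapped in a small helper; the function name is mine, not from the module:&lt;/p&gt;

```shell
# Hypothetical helper (not from the Terraform module): cut a feature
# branch off develop, mirroring what the automation does.
new_feature_branch() {
  branch="feature/$1"
  git checkout develop
  git checkout -b "$branch"
  # Push once a remote is configured:
  # git push -u origin "$branch"
}
```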

&lt;h2&gt;
  
  
  How do I create a feature-dedicated environment on the AWS Cloud?
&lt;/h2&gt;

&lt;p&gt;Terraform is once again your friend if you need to interact with AWS. All deployment models can be automated with Terraform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 servers&lt;/li&gt;
&lt;li&gt;Docker containers on ECS&lt;/li&gt;
&lt;li&gt;Docker containers on Kubernetes&lt;/li&gt;
&lt;li&gt;Lambda functions&lt;/li&gt;
&lt;li&gt;RDS databases&lt;/li&gt;
&lt;li&gt;A combination of all these workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu474imwhaq02jq938c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu474imwhaq02jq938c1.png" alt="Terrform code snippet to create an EC2 instance" width="800" height="979"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufwb2izkfo0zs38htngj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufwb2izkfo0zs38htngj.png" alt="Terrform code snippet to create an AWS Fargate service for Docker containers" width="800" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create a deployment pipeline on the “feature” environment and which is triggered on commit on the “feature” branch?
&lt;/h2&gt;

&lt;p&gt;Here again, Terraform lets you automate the creation of a pipeline with the tool of your choice. And since you're on AWS, you can use Terraform to build a pipeline with AWS CodePipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvwxpxpqwvk0k0acsn11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvwxpxpqwvk0k0acsn11.png" alt="Terrform code snippet to create an AWS CodePipeline pipeline" width="800" height="1076"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create a DNS record that makes it easy to test the "feature" branch?
&lt;/h2&gt;

&lt;p&gt;We can definitely do everything with Terraform, because creating a DNS record on Amazon Route 53 is child's play.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8wl7qkamrlgpbyumrnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8wl7qkamrlgpbyumrnb.png" alt="Terraform code snippet to create a DNS record in a Route 53 hosted zone" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create a Load Balancer and deploy an SSL certificate to it on the “feature” environment?
&lt;/h2&gt;

&lt;p&gt;Terraform is once again the solution. All you need is to provide the ARN of your certificate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii9kcw1vl7u0wf36p3eq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii9kcw1vl7u0wf36p3eq.png" alt="Terraform code snippet to create an AWS ALB with an SSL certificate attached on it" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to automatically deploy each time a merge request is validated?
&lt;/h2&gt;

&lt;p&gt;The validation of a "merge request" corresponds to a commit on the destination branch. With AWS CodePipeline and Terraform, automatic triggering of a pipeline execution is done by configuring a connection at the Source stage of your pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumbpr1kov2ycv90lfl9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumbpr1kov2ycv90lfl9d.png" alt="Using a codestar connection to trigger pipeline execution" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create an image of the staging environment for a production deployment?
&lt;/h2&gt;

&lt;p&gt;With Terraform, creating an AMI image of an instance is easy; creating a Docker image is even easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filboojaro47ay2s9qbm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filboojaro47ay2s9qbm8.png" alt="Terraform code snippet to create an AMI" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So what's the sexiest way to manage your AWS resources?
&lt;/h2&gt;

&lt;p&gt;All this code gives you scripts to manage your resources, create new environments, and streamline and enhance your developers' work.&lt;br&gt;
To go further, you can even develop a small user-friendly application that controls the execution of your scripts, letting you manage a complex infrastructure with a single click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dhvhkz56v34hvdd6op1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dhvhkz56v34hvdd6op1.png" alt="Managing IaC scripts" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your entire architecture could then look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kpar1sag4jsu51dl8s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kpar1sag4jsu51dl8s5.png" alt="Solution architecture including a IaC tool to manage AWS cloud resources" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform is great
&lt;/h2&gt;

&lt;p&gt;IaC tools are the perfect answer for managing Cloud resources. You can still have fun using the AWS Web Console, but tools like Terraform will make you fast, Usain Bolt fast, and will improve the maintainability of your AWS Cloud infrastructure.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>codepipeline</category>
      <category>iac</category>
    </item>
  </channel>
</rss>
