<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nowsath</title>
    <description>The latest articles on DEV Community by Nowsath (@nowsathk).</description>
    <link>https://dev.to/nowsathk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1205240%2Ff48b91b6-d0c8-4c36-871d-7a3ab63cc8bd.jpg</url>
      <title>DEV Community: Nowsath</title>
      <link>https://dev.to/nowsathk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nowsathk"/>
    <language>en</language>
    <item>
      <title>Deploying a Node.js E-Commerce API with Amazon ECS Express Mode: A Revolution in Container Deployment</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sun, 18 Jan 2026 13:55:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-a-nodejs-e-commerce-api-with-amazon-ecs-express-mode-a-revolution-in-container-2bd4</link>
      <guid>https://dev.to/aws-builders/deploying-a-nodejs-e-commerce-api-with-amazon-ecs-express-mode-a-revolution-in-container-2bd4</guid>
      <description>&lt;p&gt;The traditional process of deploying containerized applications to AWS has long been a multi-step journey involving manual configuration of load balancers, auto-scaling groups, VPCs, security groups, and domain mappings. For developers who simply want to get their applications running in production, this complexity can be frustrating and time-consuming.&lt;/p&gt;

&lt;p&gt;Enter Amazon ECS Express Mode, announced at AWS re:Invent 2025. This game-changing feature transforms container deployment from a complex, multi-hour configuration task into a single command that handles everything automatically. In this article, we'll walk through deploying a real-world e-commerce product API using ECS Express Mode, demonstrating just how simple cloud deployment has become.&lt;/p&gt;

&lt;h2&gt;The Use Case: Product Catalog API&lt;/h2&gt;

&lt;p&gt;Let's build and deploy a realistic product catalog API for an e-commerce platform. This API will handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product listing and search&lt;/li&gt;
&lt;li&gt;Product details retrieval&lt;/li&gt;
&lt;li&gt;Inventory management&lt;/li&gt;
&lt;li&gt;High-traffic scenarios (typical for e-commerce during sales events)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly the kind of application that benefits from containerization, auto-scaling, and load balancing: features that traditionally required extensive AWS configuration.&lt;/p&gt;

&lt;h3&gt;Step 1: Building the Application&lt;/h3&gt;

&lt;p&gt;First, let's create our Node.js application. Here's a simplified but functional product API:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;app.js:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="c1"&gt;// Mock product database&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Wireless Headphones&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;89.99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;150&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Smart Watch&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;249.99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;75&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Laptop Stand&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;45.50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;USB-C Hub&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;39.99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="c1"&gt;// Health check endpoint (important for load balancer)&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/health&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;healthy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Get all products&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/products&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;total&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Get product by ID&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/products/:id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nf"&gt;parseInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Product not found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Update inventory&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/products/:id/inventory&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;product&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nf"&gt;parseInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Product not found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Product API running on port &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;package.json:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"product-api"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"app.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node app.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"express"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^4.18.2"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Step 2: Containerizing the Application&lt;/h3&gt;

&lt;p&gt;Create a Dockerfile to package our application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:18-alpine&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--production&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; app.js ./&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
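&lt;p&gt;Before deploying, it's worth a quick sanity check that the image builds and serves traffic locally (this assumes Docker is running and the files above are in the current directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the image and run it locally&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; product-api .
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;--name&lt;/span&gt; product-api-test product-api

&lt;span class="c"&gt;# Hit the same health check the load balancer will use&lt;/span&gt;
curl http://localhost:3000/health

&lt;span class="c"&gt;# Clean up&lt;/span&gt;
docker rm &lt;span class="nt"&gt;-f&lt;/span&gt; product-api-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;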



&lt;h3&gt;Step 3: The Magic - Deploying with ECS Express Mode&lt;/h3&gt;

&lt;p&gt;Here's where the revolution happens. Instead of spending hours configuring AWS resources, we deploy with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs express deploy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; product-api &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; product-api:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--port&lt;/span&gt; 3000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--domain&lt;/span&gt; api.mystore.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auto-scale-min&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auto-scale-max&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--health-check-path&lt;/span&gt; /health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. One command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Happens Behind the Scenes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you run this single command, ECS Express Mode automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creates an ECR repository and pushes your Docker image&lt;/li&gt;
&lt;li&gt;Provisions a VPC with public and private subnets across multiple availability zones&lt;/li&gt;
&lt;li&gt;Sets up an Application Load Balancer with health checks pointing to /health&lt;/li&gt;
&lt;li&gt;Configures security groups allowing traffic on port 3000 from the load balancer&lt;/li&gt;
&lt;li&gt;Creates an ECS cluster with Fargate capacity&lt;/li&gt;
&lt;li&gt;Deploys your containers with the specified minimum of 2 instances&lt;/li&gt;
&lt;li&gt;Configures auto-scaling to handle traffic spikes (up to 10 instances)&lt;/li&gt;
&lt;li&gt;Maps your custom domain (api.mystore.com) with SSL/TLS certificate provisioning&lt;/li&gt;
&lt;li&gt;Sets up CloudWatch logging for monitoring and debugging&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of this happens automatically, typically completing in under 5 minutes.&lt;/p&gt;
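&lt;p&gt;If you're curious about what was actually created, the standard AWS CLI can list the underlying resources (the cluster name below is a placeholder; Express Mode chooses resource names for you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inspect the resources behind the deployment&lt;/span&gt;
aws ecs list-clusters
aws ecs describe-services &lt;span class="nt"&gt;--cluster&lt;/span&gt; your-cluster-name &lt;span class="nt"&gt;--services&lt;/span&gt; product-api
aws elbv2 describe-load-balancers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;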

&lt;h3&gt;Step 4: Verifying the Deployment&lt;/h3&gt;

&lt;p&gt;Once deployment completes, you can immediately test your API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get all products&lt;/span&gt;
curl https://api.mystore.com/api/products

&lt;span class="c"&gt;# Get specific product&lt;/span&gt;
curl https://api.mystore.com/api/products/1

&lt;span class="c"&gt;# Update inventory&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; PATCH https://api.mystore.com/api/products/1/inventory &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"stock": 125}'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Real-World Benefits&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Time Savings&lt;/strong&gt;&lt;br&gt;
Traditional ECS deployment: 2-4 hours of configuration&lt;br&gt;
ECS Express Mode deployment: 5 minutes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt;&lt;br&gt;
The auto-scaling configuration means you only pay for what you need. During low-traffic periods, you run 2 containers. During a flash sale, the system automatically scales to 10 containers and back down when traffic subsides.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Production-Ready from Day One&lt;/strong&gt;&lt;br&gt;
You get enterprise-grade features without manual configuration: high availability across multiple availability zones, automatic health checks and container replacement, SSL/TLS encryption, distributed load balancing, and comprehensive monitoring and logging.&lt;/p&gt;
&lt;h3&gt;Monitoring and Management&lt;/h3&gt;

&lt;p&gt;View your deployment status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs express status &lt;span class="nt"&gt;--name&lt;/span&gt; product-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs express logs &lt;span class="nt"&gt;--name&lt;/span&gt; product-api &lt;span class="nt"&gt;--follow&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the deployment with a new version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs express deploy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; product-api &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; product-api:v2.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--update&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;When to Use ECS Express Mode&lt;/h3&gt;

&lt;p&gt;ECS Express Mode is ideal when you want to deploy containerized applications quickly without infrastructure complexity, need production-grade features (load balancing, auto-scaling) without manual setup, are prototyping or building MVPs, or are migrating from a simpler platform like Heroku to AWS. In short, it suits teams that want to focus on application code, not infrastructure.&lt;/p&gt;

&lt;p&gt;It might not be the best fit if you need highly customized networking configurations, have a complex multi-service architecture that requires a service mesh, or need to integrate with existing custom VPC configurations.&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;Amazon ECS Express Mode represents a fundamental shift in how we deploy containerized applications to AWS. By abstracting away infrastructure complexity while maintaining enterprise-grade capabilities, it enables developers to focus on what truly matters: building great applications.&lt;/p&gt;

&lt;p&gt;Our product API deployment demonstrates that you can go from code to production-ready, auto-scaling, load-balanced infrastructure with a single command. This is the future of cloud deployment: simple, powerful, and developer-friendly.&lt;/p&gt;

&lt;p&gt;Whether you're a startup looking to move fast or an enterprise team wanting to reduce operational overhead, ECS Express Mode offers a compelling path to modern containerized deployments on AWS.&lt;/p&gt;




&lt;h3&gt;Try It Yourself&lt;/h3&gt;

&lt;p&gt;Ready to deploy your own application with ECS Express Mode? Here are the prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI version 2.15 or later&lt;/li&gt;
&lt;li&gt;Docker installed locally&lt;/li&gt;
&lt;li&gt;An AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;A container image ready to deploy&lt;/li&gt;
&lt;/ul&gt;
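<p>&lt;p&gt;If you prefer to stage the image in Amazon ECR yourself rather than letting the deploy step push it, the standard workflow looks like this (the account ID and region below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a repository and authenticate Docker to ECR&lt;/span&gt;
aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; product-api
aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; 123456789012.dkr.ecr.us-east-1.amazonaws.com

&lt;span class="c"&gt;# Tag and push the image&lt;/span&gt;
docker tag product-api:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/product-api:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/product-api:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>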

&lt;p&gt;Start with the simple command shown in this article and experience the future of container deployment today.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>expressmode</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>Security Considerations for Multi-Cluster Cloud Architecture (HA EKS with Databases)</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sun, 26 Oct 2025 14:54:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/security-considerations-for-multi-cluster-cloud-architecture-ha-eks-with-databases-1edp</link>
      <guid>https://dev.to/aws-builders/security-considerations-for-multi-cluster-cloud-architecture-ha-eks-with-databases-1edp</guid>
      <description>&lt;p&gt;Running a highly available multi-cluster EKS architecture brings powerful benefits—zero downtime, disaster recovery, and global scalability. But it also multiplies your security challenges.&lt;/p&gt;

&lt;p&gt;Securing a single EKS cluster is already complex. Add multiple clusters across regions, databases with sensitive data, and cross-cluster communication, and the attack surface grows significantly. One misconfigured security group or exposed secret can compromise your entire infrastructure.&lt;/p&gt;

&lt;p&gt;This guide covers essential security considerations for multi-cluster architectures: network isolation, encryption, IAM management, secrets handling, and incident response. We'll focus on practical measures that protect your infrastructure without sacrificing performance or availability.&lt;/p&gt;

&lt;p&gt;Let's build a secure, highly available system.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;1. Network Security &amp;amp; Isolation&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;VPC Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate VPCs per cluster or use shared VPC with isolated subnets&lt;/li&gt;
&lt;li&gt;Private subnets for EKS nodes and databases (no direct internet access)&lt;/li&gt;
&lt;li&gt;Public subnets only for load balancers and NAT gateways&lt;/li&gt;
&lt;li&gt;Implement VPC peering or AWS Transit Gateway for inter-cluster communication&lt;/li&gt;
&lt;li&gt;Use separate VPCs per environment (dev, staging, production)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network Segmentation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Production VPC-1 (Region A)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public Subnets: ALB only&lt;/li&gt;
&lt;li&gt;Private Subnets: EKS Nodes&lt;/li&gt;
&lt;li&gt;Database Subnets: RDS/Aurora (isolated)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Production VPC-2 (Region B)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public Subnets: ALB only&lt;/li&gt;
&lt;li&gt;Private Subnets: EKS Nodes&lt;/li&gt;
&lt;li&gt;Database Subnets: RDS/Aurora (isolated)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Principle of least privilege: only allow necessary ports&lt;/li&gt;
&lt;li&gt;Database security groups: Only allow traffic from EKS node security groups&lt;/li&gt;
&lt;li&gt;EKS control plane: Restrict API access to specific CIDR ranges&lt;/li&gt;
&lt;li&gt;No 0.0.0.0/0 rules except for outbound NAT traffic&lt;/li&gt;
&lt;li&gt;Document and regularly audit security group rules&lt;/li&gt;
&lt;/ul&gt;
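&lt;p&gt;As a concrete example, a PostgreSQL ingress rule that admits only the EKS node security group can be added with the AWS CLI (both group IDs below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Allow port 5432 only from the EKS node security group&lt;/span&gt;
aws ec2 authorize-security-group-ingress &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-id&lt;/span&gt; sg-0db0000000000000a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--protocol&lt;/span&gt; tcp &lt;span class="nt"&gt;--port&lt;/span&gt; 5432 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-group&lt;/span&gt; sg-0ec0000000000000b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;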

&lt;p&gt;&lt;strong&gt;Network Policies&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deny-all-default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Egress&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow-app-to-db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;egress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;2. Identity &amp;amp; Access Management (IAM)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cluster Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS IAM authentication for cluster access via EKS access entries (or the legacy aws-auth ConfigMap)&lt;/li&gt;
&lt;li&gt;Never use permanent credentials in pods or containers&lt;/li&gt;
&lt;li&gt;Implement IAM Roles for Service Accounts (IRSA) for pod-level permissions&lt;/li&gt;
&lt;li&gt;Use AWS SSO/IAM Identity Center for human access&lt;/li&gt;
&lt;li&gt;Separate IAM roles for different teams/applications&lt;/li&gt;
&lt;li&gt;Enable MFA for all human users&lt;/li&gt;
&lt;/ul&gt;
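
&lt;p&gt;As a sketch, an aws-auth mapRoles entry could map a team's IAM role to a read-only Kubernetes group (the role ARN and group name here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN and group; bind the group via RBAC
    - rolearn: arn:aws:iam::ACCOUNT:role/dev-team-role
      username: dev-team:{{SessionName}}
      groups:
      - readonly-group
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;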

&lt;p&gt;&lt;strong&gt;IRSA (IAM Roles for Service Accounts)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-sa&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/role-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::ACCOUNT:role/app-role&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-sa&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Kubernetes RBAC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role-Based Access Control (RBAC) for fine-grained permissions&lt;/li&gt;
&lt;li&gt;Namespace isolation - separate namespaces per team/application&lt;/li&gt;
&lt;li&gt;Principle of least privilege - minimal permissions needed&lt;/li&gt;
&lt;li&gt;ClusterRoles for cluster-wide resources (use sparingly)&lt;/li&gt;
&lt;li&gt;Roles for namespace-scoped resources&lt;/li&gt;
&lt;li&gt;Regular RBAC audits
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;developer&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-prod&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apps"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deployments"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;services"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="c1"&gt;# Read-only access, no delete/update&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
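
&lt;p&gt;The Role above grants nothing until it is bound to a subject. A minimal RoleBinding might attach it to a group mapped through IAM (the group name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: app-prod
subjects:
# "dev-team" is a placeholder group mapped via IAM
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;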



&lt;p&gt;&lt;strong&gt;Database Access&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM Database Authentication for RDS/Aurora (where possible)&lt;/li&gt;
&lt;li&gt;Avoid hardcoded credentials - use Secrets Manager or Parameter Store&lt;/li&gt;
&lt;li&gt;Rotate credentials regularly (automated rotation)&lt;/li&gt;
&lt;li&gt;Separate database users per application/service&lt;/li&gt;
&lt;li&gt;Read-only replicas for non-critical workloads&lt;/li&gt;
&lt;/ul&gt;
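
&lt;p&gt;With IAM database authentication enabled, a workload can request a short-lived token instead of a stored password. A sketch with the AWS CLI — the hostname, username, and region are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Token is valid for 15 minutes; no password to rotate or leak
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.cluster-xyz.us-east-1.rds.amazonaws.com \
  --port 5432 \
  --username app_user \
  --region us-east-1)

PGPASSWORD="$TOKEN" psql "host=mydb.cluster-xyz.us-east-1.rds.amazonaws.com \
  port=5432 user=app_user dbname=appdb sslmode=verify-full"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;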




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Secrets Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Never Store Secrets in Code or ConfigMaps&lt;/strong&gt;&lt;br&gt;
❌ Bad: Secrets in environment variables or ConfigMaps&lt;br&gt;
✅ Good: External secrets management&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Secrets Manager / Parameter Store&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use External Secrets Operator or Secrets Store CSI Driver&lt;/li&gt;
&lt;li&gt;Automatic rotation enabled&lt;/li&gt;
&lt;li&gt;Encryption at rest with KMS&lt;/li&gt;
&lt;li&gt;Audit access via CloudTrail
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-credentials&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
  &lt;span class="na"&gt;secretStoreRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-secrets-manager&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-secret&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
    &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod/db/password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
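
&lt;p&gt;The secretStoreRef above assumes a store named aws-secrets-manager already exists. A minimal SecretStore authenticating through the pod's IRSA service account might look like this (the region and service account name are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          # Service account annotated with an IRSA role ARN
          serviceAccountRef:
            name: app-sa
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;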


&lt;p&gt;&lt;strong&gt;Alternative: HashiCorp Vault&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized secrets management across clusters&lt;/li&gt;
&lt;li&gt;Dynamic secrets generation&lt;/li&gt;
&lt;li&gt;Lease-based credentials&lt;/li&gt;
&lt;li&gt;Fine-grained access policies&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;4. Encryption&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Data at Rest&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS etcd encryption using AWS KMS&lt;/li&gt;
&lt;li&gt;EBS volumes encrypted (gp3 with KMS)&lt;/li&gt;
&lt;li&gt;RDS/Aurora encryption enabled with KMS&lt;/li&gt;
&lt;li&gt;S3 encryption (SSE-S3 or SSE-KMS)&lt;/li&gt;
&lt;li&gt;Use customer-managed KMS keys for compliance requirements&lt;/li&gt;
&lt;li&gt;Separate KMS keys per environment/cluster&lt;/li&gt;
&lt;/ul&gt;
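
&lt;p&gt;Encrypted gp3 volumes can be made the default for persistent volume claims with a StorageClass on the EBS CSI driver; the KMS key ARN below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
  # Placeholder customer-managed key
  kmsKeyId: arn:aws:kms:us-east-1:ACCOUNT:key/KEY-ID
volumeBindingMode: WaitForFirstConsumer
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;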

&lt;p&gt;&lt;strong&gt;Data in Transit&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;TLS/SSL everywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ALB → Pods (via Ingress with TLS)&lt;/li&gt;
&lt;li&gt;Pod → Pod (service mesh or mTLS)&lt;/li&gt;
&lt;li&gt;Application → Database (SSL/TLS enforced)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Certificate management with AWS Certificate Manager or cert-manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;mTLS with service mesh (Istio, Linkerd, AWS App Mesh)&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secure-ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/certificate-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:acm:...&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/ssl-policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ELBSecurityPolicy-TLS-1-2-2017-01&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app.example.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;5. Pod Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pod Security Standards&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enforce restricted pod security standards&lt;/li&gt;
&lt;li&gt;No privileged containers unless absolutely necessary&lt;/li&gt;
&lt;li&gt;Read-only root filesystem where possible&lt;/li&gt;
&lt;li&gt;Non-root users for containers&lt;/li&gt;
&lt;li&gt;Drop all capabilities and add only required ones
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secure-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runAsNonRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;runAsUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;
    &lt;span class="na"&gt;fsGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000&lt;/span&gt;
    &lt;span class="na"&gt;seccompProfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RuntimeDefault&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp:latest&lt;/span&gt;
    &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;allowPrivilegeEscalation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;readOnlyRootFilesystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;drop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ALL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Image Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scan images for vulnerabilities (Amazon ECR scanning, Trivy, Snyk)&lt;/li&gt;
&lt;li&gt;Use minimal base images (distroless, Alpine)&lt;/li&gt;
&lt;li&gt;Pin image versions - never use :latest&lt;/li&gt;
&lt;li&gt;Sign and verify images (Sigstore/Cosign, Notary)&lt;/li&gt;
&lt;li&gt;Private container registry (Amazon ECR with VPC endpoints)&lt;/li&gt;
&lt;li&gt;Image pull secrets for private registries&lt;/li&gt;
&lt;/ul&gt;
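
&lt;p&gt;Signing and verifying an image with Cosign is a short sketch — the key pair and image reference are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# One-time: create a signing key pair (cosign.key / cosign.pub)
cosign generate-key-pair

# Sign the pinned image in ECR (placeholder account and tag)
cosign sign --key cosign.key ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/myapp:1.4.2

# Verify before (or at) deploy time
cosign verify --key cosign.pub ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/myapp:1.4.2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;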

&lt;p&gt;&lt;strong&gt;Admission Controllers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OPA/Gatekeeper or Kyverno for policy enforcement&lt;/li&gt;
&lt;li&gt;Enforce security policies:
&lt;ul&gt;
&lt;li&gt;No privileged pods&lt;/li&gt;
&lt;li&gt;Required resource limits&lt;/li&gt;
&lt;li&gt;Approved registries only&lt;/li&gt;
&lt;li&gt;Required security contexts&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disallow-privileged&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;validationFailureAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enforce&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check-privileged&lt;/span&gt;
    &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kinds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
    &lt;span class="na"&gt;validate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Privileged&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;containers&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;are&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;not&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;allowed"&lt;/span&gt;
      &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;6. Multi-Cluster Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cluster Isolation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate clusters for different security zones (DMZ, internal, data)&lt;/li&gt;
&lt;li&gt;Separate clusters per environment (never share prod and non-prod)&lt;/li&gt;
&lt;li&gt;Separate AWS accounts per environment (AWS Organizations)&lt;/li&gt;
&lt;li&gt;Service Control Policies (SCPs) to restrict actions at account level&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cross-Cluster Communication&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service mesh for secure cross-cluster communication (Istio multi-cluster)&lt;/li&gt;
&lt;li&gt;VPC peering or Transit Gateway with strict security groups&lt;/li&gt;
&lt;li&gt;mTLS for service-to-service authentication&lt;/li&gt;
&lt;li&gt;API Gateway or Internal Load Balancer as entry points&lt;/li&gt;
&lt;li&gt;Zero-trust networking - verify every request&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DNS Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Private Route53 hosted zones for internal services&lt;/li&gt;
&lt;li&gt;DNSSEC where applicable&lt;/li&gt;
&lt;li&gt;Avoid DNS-based service discovery across clusters (security risk)&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;7. Database Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RDS/Aurora Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-AZ deployment for availability&lt;/li&gt;
&lt;li&gt;Private subnets only - no public access&lt;/li&gt;
&lt;li&gt;Encryption at rest (KMS) and in transit (SSL/TLS enforced)&lt;/li&gt;
&lt;li&gt;Automated backups with encryption&lt;/li&gt;
&lt;li&gt;Point-in-time recovery enabled&lt;/li&gt;
&lt;li&gt;Enhanced monitoring enabled&lt;/li&gt;
&lt;li&gt;Performance Insights with encryption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Connection Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RDS Proxy for connection pooling and IAM authentication&lt;/li&gt;
&lt;li&gt;SSL/TLS enforcement on database side&lt;/li&gt;
&lt;li&gt;Certificate validation on client side&lt;/li&gt;
&lt;li&gt;No hardcoded connection strings
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Database connection with SSL&lt;/span&gt;
&lt;span class="na"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgresql://user@host:5432/db?sslmode=require"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Database Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate database users per service&lt;/li&gt;
&lt;li&gt;Minimal privileges (SELECT only for read-only services)&lt;/li&gt;
&lt;li&gt;No superuser access from applications&lt;/li&gt;
&lt;li&gt;Parameter groups to enforce security settings&lt;/li&gt;
&lt;li&gt;Audit logging enabled (PostgreSQL pgaudit, MySQL audit log)&lt;/li&gt;
&lt;/ul&gt;
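
&lt;p&gt;A least-privilege PostgreSQL user for a read-only service could be sketched as follows (all names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Read-only role for a reporting service (placeholder names)
CREATE ROLE reporting_ro LOGIN;
GRANT CONNECT ON DATABASE appdb TO reporting_ro;
GRANT USAGE ON SCHEMA public TO reporting_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_ro;
-- Cover tables created in the future as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO reporting_ro;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;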

&lt;p&gt;&lt;strong&gt;Database Activity Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Database Activity Streams for real-time monitoring&lt;/li&gt;
&lt;li&gt;Alert on suspicious queries or access patterns&lt;/li&gt;
&lt;li&gt;Log all DDL and privilege changes&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;8. Logging &amp;amp; Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Comprehensive Logging&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS Control Plane logs to CloudWatch (API, audit, authenticator)&lt;/li&gt;
&lt;li&gt;Application logs via FluentBit/Fluentd to centralized location&lt;/li&gt;
&lt;li&gt;Database logs (query logs, error logs, slow query logs)&lt;/li&gt;
&lt;li&gt;VPC Flow Logs for network traffic analysis&lt;/li&gt;
&lt;li&gt;CloudTrail for all API calls&lt;/li&gt;
&lt;li&gt;Immutable logs - prevent tampering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon GuardDuty - threat detection&lt;/li&gt;
&lt;li&gt;AWS Security Hub - centralized security findings&lt;/li&gt;
&lt;li&gt;Amazon Detective - security investigation&lt;/li&gt;
&lt;li&gt;Falco - runtime security monitoring in Kubernetes&lt;/li&gt;
&lt;li&gt;Prometheus + Grafana for metrics and alerting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Audit Logging&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Enable EKS audit logs&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;audit-policy&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;policy.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;apiVersion: audit.k8s.io/v1&lt;/span&gt;
    &lt;span class="s"&gt;kind: Policy&lt;/span&gt;
    &lt;span class="s"&gt;rules:&lt;/span&gt;
    &lt;span class="s"&gt;- level: Metadata&lt;/span&gt;
      &lt;span class="s"&gt;omitStages:&lt;/span&gt;
      &lt;span class="s"&gt;- RequestReceived&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
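
&lt;p&gt;Because the EKS control plane is managed, its audit logging is switched on through cluster logging settings rather than a mounted audit policy. A sketch with the AWS CLI — the cluster name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Ship API, audit, and authenticator logs to CloudWatch Logs
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;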



&lt;p&gt;&lt;strong&gt;Critical Alerts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failed authentication attempts&lt;/li&gt;
&lt;li&gt;Privileged container creation&lt;/li&gt;
&lt;li&gt;Security group changes&lt;/li&gt;
&lt;li&gt;Database connection failures&lt;/li&gt;
&lt;li&gt;Unusual API calls&lt;/li&gt;
&lt;li&gt;Resource exhaustion&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;9. Compliance &amp;amp; Governance&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Compliance Frameworks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Config - track configuration changes&lt;/li&gt;
&lt;li&gt;AWS Audit Manager - compliance reporting&lt;/li&gt;
&lt;li&gt;CIS Kubernetes Benchmark - security hardening&lt;/li&gt;
&lt;li&gt;PCI-DSS, HIPAA, SOC 2 compliance where required&lt;/li&gt;
&lt;li&gt;Regular penetration testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Policy as Code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Organizations with SCPs&lt;/li&gt;
&lt;li&gt;CloudFormation/Terraform for infrastructure&lt;/li&gt;
&lt;li&gt;GitOps for cluster configuration (ArgoCD/FluxCD)&lt;/li&gt;
&lt;li&gt;OPA/Kyverno for admission control&lt;/li&gt;
&lt;li&gt;Version control and peer review for all changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tagging Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mandatory tags: Environment, Owner, Project, CostCenter&lt;/li&gt;
&lt;li&gt;Enforce tagging via AWS Config rules&lt;/li&gt;
&lt;li&gt;Use tags for resource-level IAM policies&lt;/li&gt;
&lt;li&gt;Cost allocation and chargeback&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;10. Disaster Recovery &amp;amp; Backup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Backup Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated RDS snapshots (daily, 7-30 day retention)&lt;/li&gt;
&lt;li&gt;Cross-region snapshot copies for DR&lt;/li&gt;
&lt;li&gt;EBS snapshots for persistent volumes&lt;/li&gt;
&lt;li&gt;etcd backups (Velero for cluster backups)&lt;/li&gt;
&lt;li&gt;GitOps - cluster configuration in Git&lt;/li&gt;
&lt;/ul&gt;
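
&lt;p&gt;Scheduled cluster backups with Velero can be a one-liner; the schedule and retention values below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Daily backup at 02:00 UTC, retained for 30 days (720h)
velero schedule create daily-backup --schedule "0 2 * * *" --ttl 720h
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;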

&lt;p&gt;&lt;strong&gt;Disaster Recovery&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-region setup for critical applications&lt;/li&gt;
&lt;li&gt;RTO/RPO requirements documented&lt;/li&gt;
&lt;li&gt;Failover procedures tested regularly&lt;/li&gt;
&lt;li&gt;Regular DR drills (quarterly minimum)&lt;/li&gt;
&lt;li&gt;Automated failover where possible (Route53 health checks)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;11. Supply Chain Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Container Supply Chain&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify base images from trusted sources&lt;/li&gt;
&lt;li&gt;SBOM (Software Bill of Materials) for dependencies&lt;/li&gt;
&lt;li&gt;Vulnerability scanning in CI/CD pipeline&lt;/li&gt;
&lt;li&gt;Sign images (Cosign/Notary)&lt;/li&gt;
&lt;li&gt;Admission controller to verify signatures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dependency Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependabot or Renovate for automated updates&lt;/li&gt;
&lt;li&gt;Regular security patching&lt;/li&gt;
&lt;li&gt;Monitor CVEs for used software&lt;/li&gt;
&lt;li&gt;Minimal dependencies principle&lt;/li&gt;
&lt;/ul&gt;
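
&lt;p&gt;A minimal .github/dependabot.yml covering container base images and CI workflows might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: 2
updates:
  # Base image updates for Dockerfiles at the repo root
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
  # Keep GitHub Actions pinned and current
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;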




&lt;h2&gt;
  
  
  &lt;strong&gt;12. Incident Response&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incident response plan documented&lt;/li&gt;
&lt;li&gt;Runbooks for common scenarios&lt;/li&gt;
&lt;li&gt;On-call rotation defined&lt;/li&gt;
&lt;li&gt;Communication channels established&lt;/li&gt;
&lt;li&gt;Post-mortem process defined&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Detection &amp;amp; Response&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated alerting for security events&lt;/li&gt;
&lt;li&gt;Isolate compromised pods/nodes immediately&lt;/li&gt;
&lt;li&gt;Forensics capability (preserve logs and state)&lt;/li&gt;
&lt;li&gt;Contact AWS Support for suspected breaches&lt;/li&gt;
&lt;li&gt;Notify stakeholders per incident severity&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;13. API Gateway &amp;amp; Service Mesh&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;API Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS API Gateway or Kong/Envoy for API management&lt;/li&gt;
&lt;li&gt;Rate limiting to prevent abuse&lt;/li&gt;
&lt;li&gt;API keys or OAuth2 for authentication&lt;/li&gt;
&lt;li&gt;WAF (Web Application Firewall) rules&lt;/li&gt;
&lt;li&gt;DDoS protection via AWS Shield&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Service Mesh Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mTLS for all service-to-service communication&lt;/li&gt;
&lt;li&gt;Zero-trust networking model&lt;/li&gt;
&lt;li&gt;Fine-grained authorization policies&lt;/li&gt;
&lt;li&gt;Observability and traffic monitoring&lt;/li&gt;
&lt;li&gt;Circuit breaking and fault injection&lt;/li&gt;
&lt;/ul&gt;
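
&lt;p&gt;With Istio, for example, mesh-wide mTLS enforcement is a small resource — applying it in the root namespace (istio-system by default) affects every workload in the mesh:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    # Reject any plaintext service-to-service traffic
    mode: STRICT
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;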




&lt;h2&gt;
  
  
  &lt;strong&gt;14. Update &amp;amp; Patch Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;EKS Updates&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Regular cluster updates (Kubernetes version support: ~14 months)&lt;/li&gt;
&lt;li&gt;Test in non-prod first&lt;/li&gt;
&lt;li&gt;Blue-green cluster upgrades for zero downtime&lt;/li&gt;
&lt;li&gt;Node group rolling updates&lt;/li&gt;
&lt;/ul&gt;
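
&lt;p&gt;An in-place control plane upgrade with eksctl is a short sketch — the cluster name, node group name, and target version are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Upgrade the control plane one minor version at a time
eksctl upgrade cluster --name my-cluster --version 1.30 --approve

# Then roll the managed node groups to match
eksctl upgrade nodegroup --cluster my-cluster --name ng-1 --kubernetes-version 1.30
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;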

&lt;p&gt;&lt;strong&gt;Security Patching&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated node updates (EKS managed node groups)&lt;/li&gt;
&lt;li&gt;Bottlerocket OS for minimal attack surface&lt;/li&gt;
&lt;li&gt;Container image updates (rebuild regularly)&lt;/li&gt;
&lt;li&gt;Database patching during maintenance windows&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Security Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Private subnets for EKS nodes and databases&lt;/li&gt;
&lt;li&gt;Security groups with least privilege&lt;/li&gt;
&lt;li&gt;Network policies enforced&lt;/li&gt;
&lt;li&gt;IRSA configured for all pods needing AWS access&lt;/li&gt;
&lt;li&gt;No hardcoded credentials anywhere&lt;/li&gt;
&lt;li&gt;Secrets Manager/Parameter Store with rotation&lt;/li&gt;
&lt;li&gt;All encryption at rest enabled (etcd, EBS, RDS, S3)&lt;/li&gt;
&lt;li&gt;TLS/SSL enforced everywhere&lt;/li&gt;
&lt;li&gt;Pod security standards enforced (restricted)&lt;/li&gt;
&lt;li&gt;Image scanning in CI/CD&lt;/li&gt;
&lt;li&gt;Admission controllers (OPA/Kyverno) configured&lt;/li&gt;
&lt;li&gt;GuardDuty and Security Hub enabled&lt;/li&gt;
&lt;li&gt;Comprehensive logging to CloudWatch&lt;/li&gt;
&lt;li&gt;CloudTrail enabled in all regions&lt;/li&gt;
&lt;li&gt;VPC Flow Logs enabled&lt;/li&gt;
&lt;li&gt;Regular backups with cross-region copies&lt;/li&gt;
&lt;li&gt;Multi-factor authentication enforced&lt;/li&gt;
&lt;li&gt;RBAC properly configured&lt;/li&gt;
&lt;li&gt;Regular security audits scheduled&lt;/li&gt;
&lt;li&gt;Incident response plan documented&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
Security in multi-cluster architectures requires a defense-in-depth approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network isolation at every layer&lt;/li&gt;
&lt;li&gt;Zero-trust model - verify everything&lt;/li&gt;
&lt;li&gt;Encryption everywhere (at rest and in transit)&lt;/li&gt;
&lt;li&gt;Least privilege access for humans and workloads&lt;/li&gt;
&lt;li&gt;Continuous monitoring and alerting&lt;/li&gt;
&lt;li&gt;Regular audits and compliance checks&lt;/li&gt;
&lt;li&gt;Automation for consistency and reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security is not a one-time setup but an ongoing process requiring continuous improvement and vigilance.&lt;/p&gt;

</description>
      <category>security</category>
      <category>eks</category>
      <category>rds</category>
      <category>aws</category>
    </item>
    <item>
      <title>Migrating from ECS to EKS - Key Considerations</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Mon, 06 Oct 2025 04:06:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-from-ecs-to-eks-key-considerations-14j3</link>
      <guid>https://dev.to/aws-builders/migrating-from-ecs-to-eks-key-considerations-14j3</guid>
      <description>&lt;p&gt;Thinking about moving your containers from Amazon ECS to Amazon EKS? You're not alone. Many teams are making this shift to leverage Kubernetes flexibility and industry standard tooling.&lt;/p&gt;

&lt;p&gt;While both ECS and EKS run containers on AWS, the migration isn't straightforward. ECS task definitions need to become Kubernetes manifests. Your IAM roles need restructuring. Your deployment pipelines need updates. And your team needs to learn Kubernetes.&lt;/p&gt;

&lt;p&gt;This guide covers everything you need to consider: from converting your applications and configuring networking, to setting up monitoring and training your team. We'll focus on practical considerations that will help you plan and execute a smooth migration.&lt;/p&gt;

&lt;p&gt;Whether you're moving one service or your entire platform, understanding these key areas will save you time and headaches.&lt;/p&gt;

&lt;p&gt;Let's walk through the key considerations.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Architecture &amp;amp; Workload Analysis&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Assess Current ECS Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Task Definitions → Need conversion to Kubernetes Deployments/StatefulSets&lt;/li&gt;
&lt;li&gt;Service types (Fargate vs EC2) → Node groups or Fargate on EKS&lt;/li&gt;
&lt;li&gt;Task placement strategies → Pod affinity/anti-affinity rules&lt;/li&gt;
&lt;li&gt;Service discovery (Cloud Map, ALB) → Kubernetes Services/Ingress&lt;/li&gt;
&lt;li&gt;Auto-scaling policies → HPA (Horizontal Pod Autoscaler)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Application Compatibility:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review applications for ECS-specific dependencies&lt;/li&gt;
&lt;li&gt;Check for AWS SDK calls that use ECS metadata endpoints&lt;/li&gt;
&lt;li&gt;Identify hardcoded ECS configurations in application code&lt;/li&gt;
&lt;li&gt;Assess container health check configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;2. Container &amp;amp; Image Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Container Images:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS and EKS both use Docker/OCI images - no changes needed&lt;/li&gt;
&lt;li&gt;Ensure images are in Amazon ECR (or accessible registry)&lt;/li&gt;
&lt;li&gt;Review image tagging strategy&lt;/li&gt;
&lt;li&gt;Verify image security scanning is enabled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Startup Commands &amp;amp; Environment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS command and entryPoint → Kubernetes command and args&lt;/li&gt;
&lt;li&gt;Environment variables transfer directly&lt;/li&gt;
&lt;li&gt;Secret handling needs migration (see below)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;3. Configuration Translation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ECS Task Definition → Kubernetes Manifests&lt;/p&gt;

&lt;p&gt;ECS Task Definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cpu"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"512"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"memory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1024"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"containerDefinitions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-app:latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"portMappings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"containerPort"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resource Mapping:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS CPU units (1024 = 1 vCPU) → Kubernetes millicores (1000m = 1 CPU)&lt;/li&gt;
&lt;li&gt;ECS memory (MB) → Kubernetes memory (Mi/Gi)&lt;/li&gt;
&lt;li&gt;Set both requests and limits in K8s&lt;/li&gt;
&lt;/ul&gt;
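&lt;p&gt;As a quick worked example, the 512-CPU-unit / 1024 MB task definition shown earlier translates roughly as follows (values are illustrative):&lt;/p&gt;

```yaml
resources:
  requests:
    cpu: "500m"    # 512 ECS CPU units / 1024 = 0.5 vCPU = 500 millicores
    memory: "1Gi"  # 1024 MB is approximately 1Gi
  limits:
    cpu: "500m"    # keeping limits equal to requests gives Guaranteed QoS
    memory: "1Gi"
```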

&lt;h2&gt;
  
  
  &lt;strong&gt;4. Networking Considerations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS ALB integration → AWS Load Balancer Controller for EKS&lt;/li&gt;
&lt;li&gt;ECS Service Discovery (Cloud Map) → Kubernetes Services (DNS-based)&lt;/li&gt;
&lt;li&gt;Target Groups → Managed by ALB Ingress Controller&lt;/li&gt;
&lt;/ul&gt;
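&lt;p&gt;Once the AWS Load Balancer Controller is installed, an ALB is provisioned declaratively through a Kubernetes Ingress. A minimal sketch (the service name and port are placeholders for your own):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Provision an internet-facing ALB that targets pod IPs directly
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080
```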

&lt;p&gt;&lt;strong&gt;Service Mesh (Optional):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consider AWS App Mesh (works with both) or Istio/Linkerd for EKS&lt;/li&gt;
&lt;li&gt;Service-to-service communication patterns&lt;/li&gt;
&lt;li&gt;Traffic management and observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network Modes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS awsvpc mode → Similar to Kubernetes pod networking&lt;/li&gt;
&lt;li&gt;Choose appropriate CNI plugin (AWS VPC CNI, Calico, Cilium)&lt;/li&gt;
&lt;li&gt;Plan IP address space (pods consume VPC IPs with AWS VPC CNI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Groups:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS task-level security groups → SecurityGroupPolicy CRD in EKS&lt;/li&gt;
&lt;li&gt;Or use Network Policies for pod-to-pod restrictions&lt;/li&gt;
&lt;li&gt;Maintain database and service security group rules&lt;/li&gt;
&lt;/ul&gt;
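&lt;p&gt;A minimal SecurityGroupPolicy sketch for attaching an existing security group to pods (the label selector and security group ID are placeholders):&lt;/p&gt;

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-app-sg
spec:
  podSelector:
    matchLabels:
      app: my-app        # pods carrying this label get the security group
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # reuse the security group from your ECS tasks
```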

&lt;h2&gt;
  
  
  &lt;strong&gt;5. IAM &amp;amp; Authentication&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Task Roles → Service Accounts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS Task Roles → IAM Roles for Service Accounts (IRSA)&lt;/li&gt;
&lt;li&gt;Create OIDC provider for EKS cluster&lt;/li&gt;
&lt;li&gt;Map IAM roles to Kubernetes service accounts&lt;/li&gt;
&lt;li&gt;No more EC2 instance roles for pod permissions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-sa&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/role-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::ACCOUNT:role/my-app-role&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Access Control:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS permissions → RBAC (Role-Based Access Control) in Kubernetes&lt;/li&gt;
&lt;li&gt;AWS IAM → aws-auth ConfigMap for cluster access&lt;/li&gt;
&lt;li&gt;Integrate with AWS SSO or IAM Identity Center&lt;/li&gt;
&lt;/ul&gt;
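&lt;p&gt;Cluster access for an IAM role is granted by mapping it in the aws-auth ConfigMap. A sketch with illustrative role and group names:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Map an IAM role to a Kubernetes user and group;
    # the group is then bound to permissions via RBAC Role/ClusterRoleBindings
    - rolearn: arn:aws:iam::ACCOUNT:role/DevTeamRole
      username: dev-team
      groups:
        - dev-readonly
```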

&lt;h2&gt;
  
  
  &lt;strong&gt;6. Secrets &amp;amp; Configuration Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Secrets:&lt;/strong&gt;&lt;br&gt;
ECS Secrets (from Secrets Manager/SSM) → Multiple options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Secrets Manager/Parameter Store with External Secrets Operator&lt;/li&gt;
&lt;li&gt;Kubernetes Secrets (less secure, stored in etcd)&lt;/li&gt;
&lt;li&gt;Sealed Secrets or SOPS for GitOps&lt;/li&gt;
&lt;/ul&gt;
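&lt;p&gt;With the External Secrets Operator, an ExternalSecret resource syncs a Secrets Manager entry into a native Kubernetes Secret. A sketch, assuming a ClusterSecretStore named aws-secrets-manager and a Secrets Manager entry at prod/db:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h          # re-sync from Secrets Manager hourly
  secretStoreRef:
    name: aws-secrets-manager  # a ClusterSecretStore configured with IRSA
    kind: ClusterSecretStore
  target:
    name: db-credentials       # the Kubernetes Secret that gets created
  data:
  - secretKey: password
    remoteRef:
      key: prod/db
      property: password
```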

&lt;p&gt;&lt;strong&gt;Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS environment variables → ConfigMaps and Secrets&lt;/li&gt;
&lt;li&gt;ECS Systems Manager parameters → External Secrets Operator&lt;/li&gt;
&lt;li&gt;Consider Helm for templating and configuration management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;7. Storage &amp;amp; Persistence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Volumes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS EFS volumes → EFS CSI Driver for EKS&lt;/li&gt;
&lt;li&gt;ECS bind mounts → Kubernetes HostPath (not recommended) or emptyDir&lt;/li&gt;
&lt;li&gt;ECS Docker volumes → Persistent Volume Claims (PVCs) with EBS CSI Driver&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;StatefulSets:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications requiring persistent storage need StatefulSets, not Deployments&lt;/li&gt;
&lt;li&gt;Define StorageClass and PersistentVolumeClaims
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ReadWriteOnce"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gp3&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;8. Logging &amp;amp; Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Logging:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ECS CloudWatch Logs → Multiple options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FluentBit/Fluentd DaemonSet → CloudWatch/S3/Elasticsearch&lt;/li&gt;
&lt;li&gt;CloudWatch Container Insights for EKS&lt;/li&gt;
&lt;li&gt;Maintain same log groups or migrate to new structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Monitoring:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS metrics → Prometheus + Grafana ecosystem&lt;/li&gt;
&lt;li&gt;CloudWatch Container Insights for both&lt;/li&gt;
&lt;li&gt;Application Performance Monitoring (APM) tools (Datadog, New Relic, etc.)&lt;/li&gt;
&lt;li&gt;Migrate existing dashboards and alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tracing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure distributed tracing (X-Ray, Jaeger) continues to work&lt;/li&gt;
&lt;li&gt;Update SDK configurations if needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;9. CI/CD Pipeline Changes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Deployment Pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update pipelines to use kubectl, Helm, or GitOps (ArgoCD/FluxCD)&lt;/li&gt;
&lt;li&gt;Replace ECS task definition updates with kubectl apply or Helm upgrades&lt;/li&gt;
&lt;li&gt;Image building remains the same (push to ECR)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment Strategies:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS rolling updates → Kubernetes rolling updates (more control)&lt;/li&gt;
&lt;li&gt;Configure RollingUpdate strategy with maxSurge and maxUnavailable&lt;/li&gt;
&lt;li&gt;Blue/green deployments using Argo Rollouts or Flagger&lt;/li&gt;
&lt;/ul&gt;
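&lt;p&gt;Rolling update behaviour is tuned directly in the Deployment spec; for example, a zero-downtime style configuration:&lt;/p&gt;

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # add at most one extra pod during the rollout
      maxUnavailable: 0  # never take a pod down before its replacement is ready
```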

&lt;p&gt;&lt;strong&gt;Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update integration tests for Kubernetes environment&lt;/li&gt;
&lt;li&gt;Test in staging EKS cluster first&lt;/li&gt;
&lt;li&gt;Validate health checks and readiness probes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;10. Cost Considerations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cost Changes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS Control Plane: $0.10/hour per cluster (~$73/month)&lt;/li&gt;
&lt;li&gt;Worker nodes: Similar to ECS EC2 instances&lt;/li&gt;
&lt;li&gt;Fargate on EKS: Available but pricing differs from ECS Fargate&lt;/li&gt;
&lt;li&gt;Potential for better resource utilization with Kubernetes bin-packing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optimization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Spot instances for cost savings&lt;/li&gt;
&lt;li&gt;Implement Cluster Autoscaler or Karpenter&lt;/li&gt;
&lt;li&gt;Right-size pods with appropriate resource requests/limits&lt;/li&gt;
&lt;li&gt;Consider Savings Plans or Reserved Instances&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;11. Migration Strategy&lt;/strong&gt;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Phased Approach (Recommended):&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Phase 1: Preparation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up EKS cluster in same VPC&lt;/li&gt;
&lt;li&gt;Install necessary controllers (ALB, EBS CSI, etc.)&lt;/li&gt;
&lt;li&gt;Convert task definitions to Kubernetes manifests&lt;/li&gt;
&lt;li&gt;Set up monitoring and logging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 2: Pilot Migration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrate non-critical, stateless services first&lt;/li&gt;
&lt;li&gt;Run in parallel with ECS (blue/green at service level)&lt;/li&gt;
&lt;li&gt;Validate functionality and performance&lt;/li&gt;
&lt;li&gt;Gather team feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 3: Gradual Migration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service-by-service migration&lt;/li&gt;
&lt;li&gt;Update DNS/load balancer routing gradually&lt;/li&gt;
&lt;li&gt;Monitor each migration closely&lt;/li&gt;
&lt;li&gt;Keep ECS as fallback initially&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 4: Complete Migration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrate remaining services&lt;/li&gt;
&lt;li&gt;Decommission ECS cluster&lt;/li&gt;
&lt;li&gt;Update documentation and runbooks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Big Bang vs Strangler Pattern:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Big Bang: All at once (high risk, faster)&lt;/li&gt;
&lt;li&gt;Strangler Pattern: Gradual service-by-service (lower risk, recommended)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;12. Operational Changes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Team Training:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Train teams on Kubernetes concepts (pods, services, namespaces, etc.)&lt;/li&gt;
&lt;li&gt;Learn kubectl commands and workflows&lt;/li&gt;
&lt;li&gt;Understand YAML manifests and Helm charts&lt;/li&gt;
&lt;li&gt;Practice troubleshooting techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;New Tools &amp;amp; Skills:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl (CLI tool)&lt;/li&gt;
&lt;li&gt;Helm (package manager)&lt;/li&gt;
&lt;li&gt;Kustomize (configuration management)&lt;/li&gt;
&lt;li&gt;GitOps tools (ArgoCD, FluxCD)&lt;/li&gt;
&lt;li&gt;Kubernetes dashboard or Lens (GUI tools)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Runbooks &amp;amp; Documentation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update incident response procedures&lt;/li&gt;
&lt;li&gt;Document new deployment processes&lt;/li&gt;
&lt;li&gt;Create troubleshooting guides for Kubernetes&lt;/li&gt;
&lt;li&gt;Update disaster recovery plans&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;13. Testing &amp;amp; Validation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pre-Migration Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load testing in EKS environment&lt;/li&gt;
&lt;li&gt;Failover testing (pod crashes, node failures)&lt;/li&gt;
&lt;li&gt;Database connection testing&lt;/li&gt;
&lt;li&gt;Performance benchmarking vs ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Post-Migration Validation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor error rates and latency&lt;/li&gt;
&lt;li&gt;Verify all integrations are working&lt;/li&gt;
&lt;li&gt;Check cost metrics&lt;/li&gt;
&lt;li&gt;Validate backup/restore procedures&lt;/li&gt;
&lt;li&gt;Test rollback procedures&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;14. Common Pitfalls to Avoid&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;⚠️ Resource limits: Not setting proper requests/limits causes instability&lt;/li&gt;
&lt;li&gt;⚠️ Health checks: Incorrect probes cause pod restarts&lt;/li&gt;
&lt;li&gt;⚠️ Secrets exposure: Storing secrets in ConfigMaps instead of Secrets&lt;/li&gt;
&lt;li&gt;⚠️ Network policies: Forgetting to configure proper pod communication&lt;/li&gt;
&lt;li&gt;⚠️ Storage: Not planning for persistent storage needs&lt;/li&gt;
&lt;li&gt;⚠️ IAM roles: Not properly configuring IRSA&lt;/li&gt;
&lt;li&gt;⚠️ Monitoring gaps: Losing observability during migration&lt;/li&gt;
&lt;li&gt;⚠️ DNS caching: Application DNS caching issues with service discovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;15. Migration Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster setup with multi-AZ configuration&lt;/li&gt;
&lt;li&gt;Install AWS Load Balancer Controller&lt;/li&gt;
&lt;li&gt;Install EBS and EFS CSI drivers&lt;/li&gt;
&lt;li&gt;Configure IAM OIDC provider&lt;/li&gt;
&lt;li&gt;Set up IRSA for applications&lt;/li&gt;
&lt;li&gt;Convert all task definitions to K8s manifests&lt;/li&gt;
&lt;li&gt;Migrate secrets to Secrets Manager with External Secrets Operator&lt;/li&gt;
&lt;li&gt;Configure monitoring and logging&lt;/li&gt;
&lt;li&gt;Set up CI/CD pipelines for K8s&lt;/li&gt;
&lt;li&gt;Update security groups and network policies&lt;/li&gt;
&lt;li&gt;Test in staging environment&lt;/li&gt;
&lt;li&gt;Create rollback plan&lt;/li&gt;
&lt;li&gt;Train team on Kubernetes operations&lt;/li&gt;
&lt;li&gt;Update documentation&lt;/li&gt;
&lt;li&gt;Plan maintenance window for cutover&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
The migration from ECS to EKS is significant but manageable with proper planning. Key success factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gradual migration (service by service)&lt;/li&gt;
&lt;li&gt;Thorough testing at each stage&lt;/li&gt;
&lt;li&gt;Team training on Kubernetes&lt;/li&gt;
&lt;li&gt;Robust monitoring throughout the process&lt;/li&gt;
&lt;li&gt;Clear rollback plans&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Migrating from ECS to EKS is a significant undertaking, but with proper planning and a phased approach, it's absolutely achievable. The key is to start small, test thoroughly, and migrate service by service rather than attempting everything at once.&lt;/p&gt;

&lt;p&gt;Remember that this migration isn't just about changing technology. It's about adopting a new way of thinking about container orchestration. Invest time in training your team, building robust monitoring, and documenting your processes. The flexibility and ecosystem that Kubernetes provides will be worth the effort.&lt;/p&gt;

&lt;p&gt;Take your time, plan carefully, and don't hesitate to run both platforms in parallel during the transition. Your future self will thank you for the careful approach.&lt;/p&gt;

&lt;p&gt;Happy migrating! 🤝😊&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>eks</category>
      <category>migration</category>
      <category>aws</category>
    </item>
    <item>
      <title>Build AWS Cloud Services Hangman Game with Amazon Q CLI</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Tue, 03 Jun 2025 18:09:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/build-aws-cloud-services-hangman-game-with-amazon-q-4k9m</link>
      <guid>https://dev.to/aws-builders/build-aws-cloud-services-hangman-game-with-amazon-q-4k9m</guid>
      <description>&lt;p&gt;Are you looking for a fun way to learn AWS service names while enjoying a classic game? In this blog post, I'll walk you through an AWS-themed Hangman game I&lt;br&gt;
built using Python and Pygame. This educational game helps players become familiar with AWS service names across different categories - perfect for AWS &lt;br&gt;
certification candidates or anyone interested in cloud computing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Game Overview
&lt;/h2&gt;

&lt;p&gt;The AWS Cloud Services Hangman game challenges players to guess AWS service names one letter at a time. With 8 different AWS service categories and over 80 services to guess, it's both entertaining and educational.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeg6t3ee2dkng2f5js4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeg6t3ee2dkng2f5js4r.png" alt="AWS Hangman Game" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Features:
&lt;/h3&gt;

&lt;p&gt;• Multiple AWS service categories (Compute, Storage, Database, etc.)&lt;br&gt;
• Interactive letter selection&lt;br&gt;
• Visual hangman drawing that builds with each wrong guess&lt;br&gt;
• Score tracking&lt;br&gt;
• Custom AWS-themed background&lt;br&gt;
• Clean and intuitive user interface&lt;/p&gt;
&lt;h2&gt;
  
  
  How the Game Works
&lt;/h2&gt;

&lt;p&gt;When you start the game, you're presented with a menu screen featuring the AWS Cloud Services title. After clicking "Play Game," you select from various AWS categories like Compute, Storage, or Database. The game then randomly selects an AWS service from that category for you to guess.&lt;/p&gt;

&lt;p&gt;You have six attempts to guess the service name correctly. With each incorrect guess, another part of the hangman is drawn. Guess correctly, and you'll earn a point. After completing 10 rounds, you'll see your final score.&lt;/p&gt;
&lt;h2&gt;
  
  
  Code Breakdown
&lt;/h2&gt;

&lt;p&gt;Let's look at the key components of the code to understand how the game works:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Game Structure
&lt;/h3&gt;

&lt;p&gt;The game is built using object-oriented programming with two main classes:&lt;br&gt;
• Button: Handles all interactive buttons in the game&lt;br&gt;
• Hangman: The main game class that manages game states and logic&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hover_color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text_color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;BLACK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;font&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;font&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Button initialization code
&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;surface&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Draw the button on the screen
&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;check_hover&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pos&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Check if mouse is hovering over button
&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;is_clicked&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pos&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Check if button is clicked
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. AWS Service Categories
&lt;/h3&gt;

&lt;p&gt;The game uses a dictionary structure to organize AWS services by category:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;word_categories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS Compute&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LAMBDA&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FARGATE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LIGHTSAIL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BEANSTALK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS Storage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;S3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EBS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EFS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FSX&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GLACIER&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS Database&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;RDS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DYNAMODB&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AURORA&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;REDSHIFT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...],&lt;/span&gt;
    &lt;span class="c1"&gt;# More categories...
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it easy to add new services or entire categories through the &lt;em&gt;custom_words.py&lt;/em&gt; file.&lt;/p&gt;
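A minimal sketch of what such a module could look like (the category and service names come from the snippet above; the exact structure of the real file is an assumption):

```python
# custom_words.py -- hypothetical sketch of the word-list module.
import random

AWS_SERVICES = {
    "AWS Compute": ["EC2", "LAMBDA", "FARGATE", "LIGHTSAIL", "BEANSTALK"],
    "AWS Storage": ["S3", "EBS", "EFS", "FSX", "GLACIER"],
    "AWS Database": ["RDS", "DYNAMODB", "AURORA", "REDSHIFT"],
}

def get_random_word(category):
    """Pick a random service name from the given category."""
    return random.choice(AWS_SERVICES[category])
```

Adding a new category is then a one-line dictionary entry, with no changes to the game logic itself.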

&lt;h3&gt;
  
  
  3. Game States
&lt;/h3&gt;

&lt;p&gt;The game uses a state machine pattern to manage different screens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Game loop
&lt;/span&gt;    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;running&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Handle events based on game state
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;menu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Menu screen logic
&lt;/span&gt;        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;category_select&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Category selection logic
&lt;/span&gt;        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;playing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Gameplay logic
&lt;/span&gt;        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;game_over&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Game over screen logic
&lt;/span&gt;
        &lt;span class="c1"&gt;# Draw the appropriate screen
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;menu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;draw_menu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;game_state&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;category_select&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;draw_category_select&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="c1"&gt;# And so on...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach makes the code modular and easier to maintain.&lt;/p&gt;
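As the if/elif chains grow, one common refactoring is to map each state name to a handler function. A small self-contained sketch of that idea (the state and method names follow the article; the dispatch-table design is my own suggestion, not the game's actual implementation):

```python
# Hypothetical dict-dispatch variant of the state machine shown above.
class Game:
    def __init__(self):
        self.game_state = "menu"
        # Map each state name to its draw handler.
        self.draw_handlers = {
            "menu": self.draw_menu,
            "category_select": self.draw_category_select,
            "playing": self.draw_playing,
            "game_over": self.draw_game_over,
        }

    def draw_menu(self): return "menu screen"
    def draw_category_select(self): return "category screen"
    def draw_playing(self): return "gameplay screen"
    def draw_game_over(self): return "game over screen"

    def draw(self):
        # Dispatch to the handler for the current state.
        return self.draw_handlers[self.game_state]()
```

Adding a new screen then means writing one handler and registering it in the table, rather than extending every if/elif chain.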

&lt;h3&gt;
  
  
  4. Background Image Handling
&lt;/h3&gt;

&lt;p&gt;The game loads a background image from the local images folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Load background image from local file
&lt;/span&gt;&lt;span class="n"&gt;images_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dirname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__file__&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;images&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;bg_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;images_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;aws_bg.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bg_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;background_images&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bg_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;background_images&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;background_images&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],(&lt;/span&gt;&lt;span class="n"&gt;SCREEN_WIDTH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SCREEN_HEIGHT&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A semi-transparent overlay is added to ensure text remains readable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create a semi-transparent overlay for better text readability
&lt;/span&gt;&lt;span class="n"&gt;overlay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Surface&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;SCREEN_WIDTH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SCREEN_HEIGHT&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SRCALPHA&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;overlay&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# White with 70% opacity
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Drawing the Hangman
&lt;/h3&gt;

&lt;p&gt;The hangman drawing is created step by step as the player makes incorrect guesses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;draw_hangman&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Base
&lt;/span&gt;    &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BLACK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;350&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;250&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;350&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Pole
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;wrong_guesses&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BLACK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;350&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Top beam
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;wrong_guesses&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;pygame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;line&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BLACK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# And so on for each part of the hangman...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
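The per-part if checks above can also be expressed as a data-driven list, where each wrong guess reveals one more segment. A sketch under that assumption (the pole and beam coordinates are taken from the snippet; the remaining segments are placeholders):

```python
# Hypothetical data-driven variant: segment i is revealed once
# wrong_guesses reaches i + 1. Coordinates for the first two
# segments follow the article; the rest are assumed.
HANGMAN_SEGMENTS = [
    ((200, 350), (200, 100)),  # pole
    ((200, 100), (300, 100)),  # top beam
    ((300, 100), (300, 150)),  # rope (coordinates assumed)
]

def visible_segments(wrong_guesses):
    """Return the line segments to draw for the current number of wrong guesses."""
    return HANGMAN_SEGMENTS[:wrong_guesses]
```

The draw loop then just iterates over `visible_segments(self.wrong_guesses)` and draws each line, so the win/lose threshold is simply the length of the list.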



&lt;h2&gt;
  
  
  Educational Value
&lt;/h2&gt;

&lt;p&gt;Beyond being a fun game, this project serves several educational purposes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS Service Familiarity: Players naturally memorize AWS service names through repeated gameplay&lt;/li&gt;
&lt;li&gt;Python Programming: The code demonstrates object-oriented programming, state machines, and event handling&lt;/li&gt;
&lt;li&gt;Pygame Framework: Shows how to build interactive games with Pygame&lt;/li&gt;
&lt;li&gt;UI/UX Design: Implements a clean, intuitive interface with proper feedback&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Customization Options
&lt;/h2&gt;

&lt;p&gt;The game is designed to be easily customizable:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;&lt;em&gt;Add New AWS Services&lt;/em&gt;&lt;/strong&gt;: Edit the &lt;em&gt;custom_words.py&lt;/em&gt; file to add more services.&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;Change Background&lt;/em&gt;&lt;/strong&gt;: Replace the &lt;em&gt;aws_bg.jpg&lt;/em&gt; file in the images folder.&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;Add Sound Effects&lt;/em&gt;&lt;/strong&gt;: Place MP3 files in the sounds folder for audio feedback.&lt;/p&gt;
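For the sound-effects option, a defensive loader keeps the game playable even when a file (or the audio device) is missing. A sketch assuming the sounds-folder layout described above; the function name is my own:

```python
import os

def load_sound(name, sounds_dir="sounds"):
    """Return a pygame Sound for sounds/<name>.mp3, or None if unavailable.

    The sounds-folder convention follows the article; returning None lets
    callers skip playback rather than crash when audio is absent.
    """
    path = os.path.join(sounds_dir, name + ".mp3")
    if not os.path.exists(path):
        return None
    import pygame  # imported lazily so the game still runs without audio
    return pygame.mixer.Sound(path)
```

Call sites can then write `s = load_sound("correct")` followed by `if s: s.play()`.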

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building this AWS Cloud Services Hangman game was both fun and educational. It demonstrates how gaming mechanics can be used to make learning technical content more engaging. The modular code structure makes it easy to extend with new features or customize for different learning objectives.&lt;/p&gt;

&lt;p&gt;What's truly remarkable is how Amazon Q made the development process incredibly streamlined. As an AI assistant, Amazon Q helped me rapidly prototype the game, debug issues, and implement new features with minimal effort. When I encountered challenges like adding background images or fixing button spacing, Amazon Q provided immediate solutions with clear explanations.&lt;/p&gt;

&lt;p&gt;The collaborative development experience with Amazon Q transformed what could have been a complex coding project into an accessible and enjoyable process. Even developers with limited game development experience can leverage Amazon Q to build educational games like this one, receiving guidance on everything from Pygame fundamentals to AWS-specific implementations.&lt;/p&gt;

&lt;p&gt;Whether you're studying for AWS certifications or just want to become more familiar with cloud services, this game provides an entertaining way to reinforce your knowledge. And with Amazon Q as your coding partner, creating your own educational games becomes an achievable goal for developers at any skill level.&lt;/p&gt;




&lt;p&gt;The complete source code is available on &lt;a href="https://github.com/nowsathakm/aws_hangman_game" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, along with installation instructions and more detailed documentation. Happy coding and happy AWS learning!&lt;/p&gt;

</description>
      <category>amazonqcli</category>
      <category>awscommunity</category>
      <category>awschallenge</category>
    </item>
    <item>
      <title>Securely access Amazon EKS with GitHub Actions and OpenID Connect</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Tue, 14 Jan 2025 10:07:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/securely-access-amazon-eks-with-github-actions-and-openid-connect-2im2</link>
      <guid>https://dev.to/aws-builders/securely-access-amazon-eks-with-github-actions-and-openid-connect-2im2</guid>
      <description>&lt;p&gt;Modern DevOps practices often involve leveraging GitHub Actions for CI/CD workflows and Amazon Elastic Kubernetes Service (EKS) for container orchestration. To minimize the reliance on static credentials and improve security, integrating GitHub Actions with AWS via OpenID Connect (OIDC) provides a robust solution.&lt;/p&gt;

&lt;p&gt;This guide will walk you through securely accessing an Amazon EKS cluster from GitHub Actions using OIDC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use OpenID Connect?
&lt;/h3&gt;

&lt;p&gt;Traditionally, GitHub Actions workflows access AWS resources through static credentials stored as secrets. While effective, this method carries risks if credentials are accidentally exposed. OIDC eliminates the need for static credentials by leveraging short-lived, dynamically generated tokens. This approach improves security and simplifies credential management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with an EKS cluster set up.&lt;/li&gt;
&lt;li&gt;A GitHub repository configured for your project.&lt;/li&gt;
&lt;li&gt;The AWS CLI installed locally for initial setup with proper permissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Enable OIDC in Your AWS Account
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Add the GitHub OIDC Identity Provider:&lt;/strong&gt; Run the following AWS CLI command to establish a trust relationship with GitHub’s OIDC provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-open-id-connect-provider &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--url&lt;/span&gt; https://token.actions.githubusercontent.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--client-id-list&lt;/span&gt; sts.amazonaws.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--thumbprint-list&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;openssl s_client &lt;span class="nt"&gt;-servername&lt;/span&gt; token.actions.githubusercontent.com &lt;span class="nt"&gt;-connect&lt;/span&gt; token.actions.githubusercontent.com:443 2&amp;gt;/dev/null | openssl x509 &lt;span class="nt"&gt;-fingerprint&lt;/span&gt; &lt;span class="nt"&gt;-noout&lt;/span&gt; &lt;span class="nt"&gt;-sha1&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'{print $2}'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;':'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Verify the OIDC Provider:&lt;/strong&gt; Check that the OIDC provider was successfully created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam list-open-id-connect-providers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create an IAM Role for GitHub Actions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Define the Trust Policy:&lt;/strong&gt; Create a JSON file named trust-policy.json with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Federated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:oidc-provider/token.actions.githubusercontent.com"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"token.actions.githubusercontent.com:aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StringLike"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"token.actions.githubusercontent.com:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"repo:&amp;lt;OWNER&amp;gt;/&amp;lt;REPO&amp;gt;:ref:refs/heads/&amp;lt;BRANCH&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;OWNER&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;REPO&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;BRANCH&amp;gt;&lt;/code&gt; with your AWS account ID, GitHub repository owner, repository name, and branch name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create the Role:&lt;/strong&gt; Use the AWS CLI to create the IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-role &lt;span class="nt"&gt;--role-name&lt;/span&gt; GitHubActionsEKSRole &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file://trust-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Attach a Policy for EKS Access:&lt;/strong&gt; Attach the necessary permissions to the role. For example, to allow access to EKS, attach the &lt;em&gt;AmazonEKSClusterPolicy&lt;/em&gt; and &lt;em&gt;AmazonEKSWorkerNodePolicy&lt;/em&gt; policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam attach-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; GitHubActionsEKSRole &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; GitHubActionsEKSRole &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Associate access entry
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Add your role to the cluster:&lt;/strong&gt; Update the cluster access entry to attach the GitHubActionsEKSRole role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks create-access-entry &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; &amp;lt;EKS_CLUSTER_NAME&amp;gt; &lt;span class="nt"&gt;--principal-arn&lt;/span&gt; arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/GitHubActionsEKSRole &lt;span class="nt"&gt;--type&lt;/span&gt; STANDARD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;EKS_CLUSTER_NAME&amp;gt;&lt;/code&gt; with your AWS account ID and cluster name.&lt;/p&gt;

&lt;p&gt;For additional configuration options, see the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/creating-access-entries.html" rel="noopener noreferrer"&gt;EKS access entries documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Assign policies to the role:&lt;/strong&gt; Add policies to the GitHubActionsEKSRole based on your requirements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks associate-access-policy &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; &amp;lt;EKS_CLUSTER_NAME&amp;gt; &lt;span class="nt"&gt;--principal-arn&lt;/span&gt; arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/GitHubActionsEKSRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--access-scope&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;EKS_CLUSTER_NAME&amp;gt;&lt;/code&gt; with your AWS account ID and cluster name.&lt;/p&gt;

&lt;p&gt;For additional configuration options, see the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html" rel="noopener noreferrer"&gt;EKS access policies documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Update GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Configure Permissions in Workflow File:&lt;/strong&gt; Update your GitHub Actions workflow YAML file to use OIDC authentication. Below is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to EKS&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/GitHubActionsEKSRole&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;AWS_REGION&amp;gt;&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to EKS&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;aws eks update-kubeconfig --name &amp;lt;EKS_CLUSTER_NAME&amp;gt; --region &amp;lt;AWS_REGION&amp;gt;&lt;/span&gt;
          &lt;span class="s"&gt;kubectl apply -f deployment.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;AWS_ACCOUNT_ID&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;AWS_REGION&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;EKS_CLUSTER_NAME&amp;gt;&lt;/code&gt; with your AWS account ID, region, and EKS cluster name. You can also retrieve these values from GitHub Secrets.&lt;/p&gt;
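For example, the credentials step can read these values from repository secrets instead of hard-coding them (the secret names `AWS_ACCOUNT_ID` and `AWS_REGION` are assumptions; define them under your repository's Settings → Secrets):

```yaml
# Same "Configure AWS Credentials" step, reading values from repository secrets.
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/GitHubActionsEKSRole
          aws-region: ${{ secrets.AWS_REGION }}
```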

&lt;h2&gt;
  
  
  Step 5: Test the Workflow
&lt;/h2&gt;

&lt;p&gt;Push the updated workflow file to your repository and monitor the GitHub Actions workflow execution. Ensure that the deployment step successfully accesses the EKS cluster without errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Secure Your Setup
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Restrict Role Permissions:&lt;/strong&gt; Minimize the permissions associated with the IAM role to adhere to the principle of least privilege.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit Usage:&lt;/strong&gt; Regularly audit IAM roles and policies to ensure compliance and security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By integrating GitHub Actions with Amazon EKS using OpenID Connect, you eliminate the need for static credentials, enhancing the security and simplicity of your CI/CD workflows. This setup aligns with modern DevOps best practices, providing a scalable and secure way to manage access to AWS resources.&lt;/p&gt;

</description>
      <category>security</category>
      <category>eks</category>
      <category>githubactions</category>
      <category>aws</category>
    </item>
    <item>
      <title>Migrate a hosted zone to a different AWS account in few seconds!!</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Tue, 26 Nov 2024 12:37:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrate-a-hosted-zone-to-a-different-aws-account-in-few-seconds-2la2</link>
      <guid>https://dev.to/aws-builders/migrate-a-hosted-zone-to-a-different-aws-account-in-few-seconds-2la2</guid>
      <description>&lt;p&gt;Migrating a hosted zone from one AWS account to another involves creating a new hosted zone in the target account, replicating the DNS records, and updating the domain's nameservers. Here's a step-by-step guide for manual and automated steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  -- Manual Steps --
&lt;/h2&gt;

&lt;p&gt;This section walks you through migrating a hosted zone using the AWS CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Export DNS Records from the Source Account
&lt;/h3&gt;

&lt;p&gt;i. Install AWS CLI if not already installed.&lt;br&gt;
ii. Use the following command to export the DNS records from the hosted zone in the source account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 list-resource-record-sets &lt;span class="nt"&gt;--hosted-zone-id&lt;/span&gt; &amp;lt;source-hosted-zone-id&amp;gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; dns-records.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;iii. Save the output file (&lt;code&gt;dns-records.json&lt;/code&gt;), which contains all DNS records.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Create a New Hosted Zone in the Target Account
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the target AWS account.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Route 53&lt;/strong&gt; and create a new hosted zone with the same domain name.&lt;/li&gt;
&lt;li&gt;Note the new &lt;strong&gt;hosted zone ID&lt;/strong&gt; and &lt;strong&gt;nameservers&lt;/strong&gt; assigned to the zone.&lt;/li&gt;
&lt;/ol&gt;
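
&lt;p&gt;This step can also be done from the CLI with your target-account credentials (the profile name is illustrative; the caller reference just needs to be unique):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 create-hosted-zone &lt;span class="nt"&gt;--name&lt;/span&gt; example.com &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--caller-reference&lt;/span&gt; migration-&lt;span class="si"&gt;$(&lt;/span&gt;date +%s&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--profile&lt;/span&gt; target_account
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;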

&lt;h3&gt;
  
  
  3. Import DNS Records to the Target Account
&lt;/h3&gt;

&lt;p&gt;i. Use the exported &lt;code&gt;dns-records.json&lt;/code&gt; to replicate the records.&lt;br&gt;
ii. Transform the JSON file to match the &lt;em&gt;change-resource-record-sets&lt;/em&gt; API format if needed. An example format looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Changes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CREATE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"ResourceRecordSet"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example.com."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"TTL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"ResourceRecords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"192.0.2.1"&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;iii. Import the records to the new hosted zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 change-resource-record-sets &lt;span class="nt"&gt;--hosted-zone-id&lt;/span&gt; &amp;lt;new-hosted-zone-id&amp;gt; &lt;span class="nt"&gt;--change-batch&lt;/span&gt; file://dns-records.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Update the Domain's Nameservers
&lt;/h3&gt;

&lt;p&gt;i. Go to your domain registrar (e.g., AWS Route 53, GoDaddy, Namecheap).&lt;br&gt;
ii. Replace the nameservers with the ones provided in the new hosted zone.&lt;br&gt;
iii. Wait for DNS propagation, which can take up to 48 hours.&lt;/p&gt;
&lt;h3&gt;
  
  
  5. Verify the Migration
&lt;/h3&gt;

&lt;p&gt;i. Use tools like &lt;a href="https://dnschecker.org/" rel="noopener noreferrer"&gt;DNS Checker&lt;/a&gt; to ensure the records are propagating correctly.&lt;br&gt;
ii. Confirm that the DNS records are functional and resolving to the expected values.&lt;/p&gt;
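
&lt;p&gt;One propagation pitfall is that lookup tools report nameservers with varying case and with or without a trailing dot, so a naive string comparison can look like a mismatch. A small helper (the names and sample values are illustrative) that normalizes both sides before comparing:&lt;/p&gt;

```python
def same_nameservers(expected, actual):
    """Compare two nameserver lists, ignoring case and trailing dots."""
    normalize = lambda servers: {s.lower().rstrip(".") for s in servers}
    return normalize(expected) == normalize(actual)

# Nameservers assigned to the new hosted zone (as shown in the Route 53 console)
new_zone_ns = ["ns-123.awsdns-15.com.", "ns-456.awsdns-07.net."]
# Nameservers returned by a lookup tool (e.g. DNS Checker or `dig NS example.com`)
resolved_ns = ["NS-123.AWSDNS-15.COM", "ns-456.awsdns-07.net"]

print(same_nameservers(new_zone_ns, resolved_ns))  # prints True
```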
&lt;h3&gt;
  
  
  Tips
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Avoid downtime: Keep both hosted zones active until propagation is complete.&lt;/li&gt;
&lt;li&gt;Delegate permissions: If you need cross-account access, consider using AWS Resource Access Manager (RAM) or an IAM role for temporary access.&lt;/li&gt;
&lt;li&gt;Automate the process: Use tools like Terraform or Route 53's APIs for larger migrations.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  -- Automated Steps --
&lt;/h2&gt;

&lt;p&gt;Here’s a Python script using boto3 (the AWS SDK for Python) to automate the transformation and migration of DNS records between AWS accounts. The script will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Export DNS records from the source account.&lt;/li&gt;
&lt;li&gt;Transform them into the format required for importing.&lt;/li&gt;
&lt;li&gt;Import the records into the target account.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;i. Install the required libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip3 &lt;span class="nb"&gt;install &lt;/span&gt;boto3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ii. Set up AWS CLI profiles for both accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source account: &lt;code&gt;aws configure --profile source_account&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Target account: &lt;code&gt;aws configure --profile target_account&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;iii. Save the following Python script as &lt;code&gt;migrate_dns.py&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="c1"&gt;# Constants
&lt;/span&gt;&lt;span class="n"&gt;SOURCE_PROFILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source_account&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;TARGET_PROFILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;target_account&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;SOURCE_HOSTED_ZONE_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source-hosted-zone-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;TARGET_HOSTED_ZONE_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;target-hosted-zone-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;export_dns_records&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Export DNS records from the source hosted zone.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;profile_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SOURCE_PROFILE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;route53&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_resource_record_sets&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HostedZoneId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SOURCE_HOSTED_ZONE_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Save records to a file
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dns_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ResourceRecordSets&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DNS records exported to dns_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;transform_records&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Transform DNS records for importing to the target hosted zone.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dns_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;changes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;records&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Skip NS and SOA records
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SOA&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt;

        &lt;span class="n"&gt;change&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ResourceRecordSet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TTL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TTL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="c1"&gt;# Default TTL
&lt;/span&gt;                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ResourceRecords&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ResourceRecords&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;change&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Save transformed records to a file
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;transformed_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Changes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Records transformed and saved to transformed_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;import_dns_records&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Import transformed DNS records into the target hosted zone.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;profile_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;TARGET_PROFILE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;route53&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;transformed_records.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;change_batch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;change_resource_record_sets&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;HostedZoneId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;TARGET_HOSTED_ZONE_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;ChangeBatch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;change_batch&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DNS records imported successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Change info: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ChangeInfo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Step 1: Export DNS records from source account
&lt;/span&gt;    &lt;span class="nf"&gt;export_dns_records&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 2: Transform records for the target account
&lt;/span&gt;    &lt;span class="nf"&gt;transform_records&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 3: Import DNS records into the target account
&lt;/span&gt;    &lt;span class="nf"&gt;import_dns_records&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to Use
&lt;/h3&gt;

&lt;p&gt;i. Replace &lt;code&gt;source-hosted-zone-id&lt;/code&gt; and &lt;code&gt;target-hosted-zone-id&lt;/code&gt; with the respective hosted zone IDs, and adjust the profile names if you configured them differently.&lt;br&gt;
ii. Run the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python migrate_dns.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Notes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SOA and NS records are skipped: these are managed automatically by AWS.&lt;/li&gt;
&lt;li&gt;TTL fallback: if a record lacks a TTL, a default value of 300 seconds is applied.&lt;/li&gt;
&lt;li&gt;Pagination: &lt;code&gt;list_resource_record_sets&lt;/code&gt; returns records in pages; for zones with many records, loop with a boto3 paginator so no records are missed.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>route53</category>
      <category>dns</category>
      <category>migrate</category>
      <category>aws</category>
    </item>
    <item>
      <title>Install IBM Db2 Community Edition on Amazon EC2 (Ubuntu)</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sun, 15 Sep 2024 20:09:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/install-ibm-db2-community-edition-on-amazon-ec2-ubuntu-22io</link>
      <guid>https://dev.to/aws-builders/install-ibm-db2-community-edition-on-amazon-ec2-ubuntu-22io</guid>
      <description>&lt;p&gt;In this tutorial, we'll guide you through the step-by-step process of installing IBM Db2 Community Edition on an Amazon EC2 Ubuntu instance. We'll cover everything from setting up the EC2 instance to configuring Db2 and creating your first database.&lt;/p&gt;

&lt;p&gt;IBM Db2 Community Edition is a cloud-native database designed to deliver low-latency transactions and high resilience for both structured and unstructured data. As an entry-level edition of the Db2 data server, it's tailored for developers and partners to explore the capabilities of Db2 in a cost-effective manner.&lt;/p&gt;

&lt;p&gt;Available for Linux, Windows, AIX, and as a Docker image, Db2 Community Edition offers a comprehensive feature set, including core Db2 functionalities. While it is limited to up to 4 cores and 16 GB RAM, it provides a solid foundation for building and testing applications.&lt;/p&gt;

&lt;p&gt;Security is a top priority with Db2 Community Edition, featuring always-on security measures to protect your data. Additionally, the community-driven support model ensures that you have access to helpful resources and assistance from fellow developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up an EC2 instance of type t2.medium&lt;/li&gt;
&lt;li&gt;Ubuntu 22.04 LTS as the AMI&lt;/li&gt;
&lt;li&gt;30 GB of hard disk space&lt;/li&gt;
&lt;li&gt;Open port 22 for SSH and 25000 for Db2&lt;/li&gt;
&lt;li&gt;Create an IBM account for downloading &lt;a href="https://early-access.ibm.com/software/support/trial/cst/programwebsite.wss?siteId=1120" rel="noopener noreferrer"&gt;Db2 community edition&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;
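
&lt;p&gt;If you prefer the CLI, the inbound rules from step 4 can be added to the instance's security group like this (the group ID and CIDR are placeholders; restrict the ranges to your own IPs where possible):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress &lt;span class="nt"&gt;--group-id&lt;/span&gt; sg-0123456789abcdef0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--protocol&lt;/span&gt; tcp &lt;span class="nt"&gt;--port&lt;/span&gt; 22 &lt;span class="nt"&gt;--cidr&lt;/span&gt; 203.0.113.0/24
aws ec2 authorize-security-group-ingress &lt;span class="nt"&gt;--group-id&lt;/span&gt; sg-0123456789abcdef0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--protocol&lt;/span&gt; tcp &lt;span class="nt"&gt;--port&lt;/span&gt; 25000 &lt;span class="nt"&gt;--cidr&lt;/span&gt; 203.0.113.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;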

&lt;h2&gt;
  
  
  Installation Steps
&lt;/h2&gt;

&lt;p&gt;I. Login to your EC2 instance and verify the distribution version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/os-release 
&lt;span class="nv"&gt;PRETTY_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Ubuntu 22.04.4 LTS"&lt;/span&gt;
&lt;span class="nv"&gt;NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Ubuntu"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"22.04"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"22.04.4 LTS (Jammy Jellyfish)"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION_CODENAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jammy
&lt;span class="nv"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu
&lt;span class="nv"&gt;ID_LIKE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debian
&lt;span class="nv"&gt;HOME_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://www.ubuntu.com/"&lt;/span&gt;
&lt;span class="nv"&gt;SUPPORT_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://help.ubuntu.com/"&lt;/span&gt;
&lt;span class="nv"&gt;BUG_REPORT_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://bugs.launchpad.net/ubuntu/"&lt;/span&gt;
&lt;span class="nv"&gt;PRIVACY_POLICY_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"&lt;/span&gt;
&lt;span class="nv"&gt;UBUNTU_CODENAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jammy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;II. Copy the downloaded IBM Db2 Linux (x64) package to the EC2 instance and extract it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzvf&lt;/span&gt; v11.5.9_linuxx64_server_dec.tar.gz

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;server_dec  v11.5.9_linuxx64_server_dec.tar.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Inside the server_dec directory, you will see the following files.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~/server_dec&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;db2
db2_deinstall
db2_install  
db2checkCOL.tar.gz  
db2checkCOL_readme.txt  
db2ckupgrade  
db2ls  
db2prereqcheck  
db2setup  
installFixPack

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;III. From the server_dec directory, run the &lt;em&gt;db2prereqcheck&lt;/em&gt; command to verify the prerequisites for installing Db2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~/server_dec&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2prereqcheck

&lt;span class="o"&gt;==========================================================================&lt;/span&gt;

Sun Sep 15 16:37:59 2024
Checking prerequisites &lt;span class="k"&gt;for &lt;/span&gt;DB2 installation. Version &lt;span class="s2"&gt;"11.5.9.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Operating system &lt;span class="s2"&gt;"Linux"&lt;/span&gt; 

Validating &lt;span class="s2"&gt;"kernel level "&lt;/span&gt; ... 
   Required minimum operating system kernel level: &lt;span class="s2"&gt;"3.10.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Actual operating system kernel level: &lt;span class="s2"&gt;"6.5.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"Linux distribution "&lt;/span&gt; ... 
   Required minimum &lt;span class="s2"&gt;"UBUNTU"&lt;/span&gt; version: &lt;span class="s2"&gt;"16.04"&lt;/span&gt; 
   Actual version: &lt;span class="s2"&gt;"22.04"&lt;/span&gt; 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"Bin user"&lt;/span&gt; ... 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"C++ Library version "&lt;/span&gt; ... 
   Required minimum C++ library: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt; 
   Standard C++ library is located &lt;span class="k"&gt;in &lt;/span&gt;the following directory: &lt;span class="s2"&gt;"/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.30"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Actual C++ library: &lt;span class="s2"&gt;"CXXABI_1.3.1"&lt;/span&gt; 
   Requirement matched. 


Validating &lt;span class="s2"&gt;"32 bit version of "&lt;/span&gt;libstdc++.so.6&lt;span class="s2"&gt;" "&lt;/span&gt; ... 
   Found the 64 bit &lt;span class="s2"&gt;"/lib/x86_64-linux-gnu/libstdc++.so.6"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the following directory &lt;span class="s2"&gt;"/lib/x86_64-linux-gnu"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 

Validating &lt;span class="s2"&gt;"libaio.so version "&lt;/span&gt; ... 
DBT3553I  The db2prereqcheck utility successfully loaded the libaio.so.1 file. 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"libnuma.so version "&lt;/span&gt; ... 
DBT3610I  The db2prereqcheck utility successfully loaded the libnuma.so.1 file. 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt; ... 
   DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   WARNING : Requirement not matched. 
Requirement not matched &lt;span class="k"&gt;for &lt;/span&gt;DB2 database &lt;span class="s2"&gt;"Server"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; Version: &lt;span class="s2"&gt;"11.5.9.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
Summary of prerequisites that are not met on the current system: 
   DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 


DBT3514W  The db2prereqcheck utility failed to find the following 32-bit library file: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output shows that two requirement checks failed. To address them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable the 32-bit (i386) architecture on your instance.&lt;/li&gt;
&lt;li&gt;Install the packages that provide the missing 32-bit libraries: "libstdc++.so.6" and "/lib/i386-linux-gnu/libpam.so*".
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;--add-architecture&lt;/span&gt; i386
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;libstdc++6:i386 libpam0g:i386 &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
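&lt;p&gt;Before rerunning &lt;em&gt;db2prereqcheck&lt;/em&gt;, you can quickly confirm that dpkg now knows about the i386 architecture. This is a minimal sanity-check sketch that assumes a Debian/Ubuntu system:&lt;/p&gt;

```shell
# Check whether i386 was registered as a foreign architecture (Debian/Ubuntu).
foreign=$(dpkg --print-foreign-architectures 2>/dev/null || true)
if printf '%s\n' "$foreign" | grep -qx 'i386'; then
  msg="i386 enabled"
else
  msg="i386 not enabled"
fi
echo "$msg"
```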



&lt;ul&gt;
&lt;li&gt;After installing the required packages, rerun the &lt;em&gt;db2prereqcheck&lt;/em&gt; command to ensure all requirements are met.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2prereqcheck 

&lt;span class="o"&gt;==========================================================================&lt;/span&gt;

Sun Sep 15 16:40:55 2024
Checking prerequisites &lt;span class="k"&gt;for &lt;/span&gt;DB2 installation. Version &lt;span class="s2"&gt;"11.5.9.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Operating system &lt;span class="s2"&gt;"Linux"&lt;/span&gt; 

Validating &lt;span class="s2"&gt;"kernel level "&lt;/span&gt; ... 
   Required minimum operating system kernel level: &lt;span class="s2"&gt;"3.10.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Actual operating system kernel level: &lt;span class="s2"&gt;"6.5.0"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"Linux distribution "&lt;/span&gt; ... 
   Required minimum &lt;span class="s2"&gt;"UBUNTU"&lt;/span&gt; version: &lt;span class="s2"&gt;"16.04"&lt;/span&gt; 
   Actual version: &lt;span class="s2"&gt;"22.04"&lt;/span&gt; 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"Bin user"&lt;/span&gt; ... 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"C++ Library version "&lt;/span&gt; ... 
   Required minimum C++ library: &lt;span class="s2"&gt;"libstdc++.so.6"&lt;/span&gt; 
   Standard C++ library is located &lt;span class="k"&gt;in &lt;/span&gt;the following directory: &lt;span class="s2"&gt;"/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.30"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Actual C++ library: &lt;span class="s2"&gt;"CXXABI_1.3.1"&lt;/span&gt; 
   Requirement matched. 


Validating &lt;span class="s2"&gt;"32 bit version of "&lt;/span&gt;libstdc++.so.6&lt;span class="s2"&gt;" "&lt;/span&gt; ... 
   Found the 32 bit &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libstdc++.so.6"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the following directory &lt;span class="s2"&gt;"/lib/i386-linux-gnu"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"libaio.so version "&lt;/span&gt; ... 
DBT3553I  The db2prereqcheck utility successfully loaded the libaio.so.1 file. 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"libnuma.so version "&lt;/span&gt; ... 
DBT3610I  The db2prereqcheck utility successfully loaded the libnuma.so.1 file. 
   Requirement matched. 

Validating &lt;span class="s2"&gt;"/lib/i386-linux-gnu/libpam.so*"&lt;/span&gt; ... 
   Requirement matched. 
DBT3533I  The db2prereqcheck utility has confirmed that all installation prerequisites were met. 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IV. Install Db2 by running the &lt;em&gt;db2_install&lt;/em&gt; command as the root user, and wait for the installation to complete.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./db2_install
Read the license agreement file &lt;span class="k"&gt;in &lt;/span&gt;the db2/license directory.

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
To accept those terms, enter &lt;span class="s2"&gt;"yes"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Otherwise, enter &lt;span class="s2"&gt;"no"&lt;/span&gt; to cancel the &lt;span class="nb"&gt;install &lt;/span&gt;process. &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no]
&lt;span class="nb"&gt;yes


&lt;/span&gt;Default directory &lt;span class="k"&gt;for &lt;/span&gt;installation of products - /opt/ibm/db2/V11.5

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
Install into default directory &lt;span class="o"&gt;(&lt;/span&gt;/opt/ibm/db2/V11.5&lt;span class="o"&gt;)&lt;/span&gt; ? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no] 
&lt;span class="nb"&gt;yes


&lt;/span&gt;Specify one of the following keywords to &lt;span class="nb"&gt;install &lt;/span&gt;DB2 products.

  SERVER 
  CONSV 
  CLIENT 
  RTCL 

Enter &lt;span class="s2"&gt;"help"&lt;/span&gt; to redisplay product names.

Enter &lt;span class="s2"&gt;"quit"&lt;/span&gt; to exit.

&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
SERVER
&lt;span class="k"&gt;***********************************************************&lt;/span&gt;
Do you want to &lt;span class="nb"&gt;install &lt;/span&gt;the DB2 pureScale Feature? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no] 
no
DB2 installation is being initialized.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once the installation completes, you will see the message below. You can then review the installation log file for the post-installation steps.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The execution completed successfully.

For more information see the DB2 installation log at
&lt;span class="s2"&gt;"/tmp/db2_install.log.3355"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2_install.log.3355

Post-installation instructions 
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;

Required steps: 
Set up a DB2 instance to work with DB2. 

Optional steps: 
Notification SMTP server has not been specified. Notifications cannot be sent to contacts &lt;span class="k"&gt;in &lt;/span&gt;your contact list &lt;span class="k"&gt;until &lt;/span&gt;this is specified. For more information see the DB2 administration documentation. 

To validate your installation files, instance, and database functionality, run the Validation Tool, /opt/ibm/db2/V11.5/bin/db2val. For more information, see &lt;span class="s2"&gt;"db2val"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the DB2 Information Center. 

Refer to &lt;span class="s2"&gt;"What's new"&lt;/span&gt; https://www.ibm.com/docs/en/db2/11.5?topic&lt;span class="o"&gt;=&lt;/span&gt;database-whats-new &lt;span class="k"&gt;in &lt;/span&gt;the DB2 Information Center to learn about the new functions &lt;span class="k"&gt;for &lt;/span&gt;DB2 V11.5. 

Open First Steps by running &lt;span class="s2"&gt;"db2fs"&lt;/span&gt; using a valid user ID such as the DB2 instance owner&lt;span class="s1"&gt;'s ID. You will need to have DISPLAY set and a supported web browser in the path of this user ID. 

Verify that you have access to the DB2 Information Center based on the choices you made during this installation. If you performed a typical or a compact installation, verify that you can access the IBM Web site using the internet. If you performed a custom installation, verify that you can access the DB2 Information Center location specified during the installation. 

Ensure that you have the correct license entitlements for DB2 products and features installed on this machine. Each DB2 product or feature comes with a license certificate file (also referred to as a license key) that is distributed on an Activation CD, which also includes instructions for applying the license file. If you purchased a base DB2 product, as well as, separately priced features, you might need to install more than one license certificate. The Activation CD for your product or feature can be downloaded from Passport Advantage if it is not part of the physical media pack you received from IBM. For more information about licensing, search the Information Center (https://www.ibm.com/docs/en/db2/11.5) using terms such as "license compliance", "licensing" or "db2licm". 

To use your DB2 database product, you must have a valid license. For information about obtaining and applying DB2 license files, see http://www-01.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0061199.html. 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;V. To validate the installation, run the &lt;em&gt;db2val&lt;/em&gt; command and review its log file to confirm that everything is functioning correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/ibm/db2/V11.5/bin/db2val
DBI1379I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;is running. This can take several minutes.

DBI1335I  Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

DBI1343I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully. For details, see
      the log file /tmp/db2val-240915_164309.log.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2val-240915_164309.log
Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at &lt;span class="s2"&gt;"/opt/ibm/db2/V11.5"&lt;/span&gt; starts.

Task 1: Validating Installation file sets.
Status 1 : Success 

Task 2: Validating embedded runtime path &lt;span class="k"&gt;for &lt;/span&gt;DB2 executables and libraries.
Status 2 : Success 

Task 3: Validating the accessibility to the installation path.
Status 3 : Success 

Task 4: Validating the accessibility to the /etc/services file.
Status 4 : Success 

DBI1335I  Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

Installation file validation &lt;span class="k"&gt;for &lt;/span&gt;the DB2 copy installed at &lt;span class="s2"&gt;"/opt/ibm/db2/V11.5"&lt;/span&gt; ends.

DBI1343I  The db2val &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully. For details, see
      the log file /tmp/db2val-240915_164309.log.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Post Installation Steps
&lt;/h2&gt;

&lt;p&gt;I. Create groups and users for DB administrators and fenced users, then assign passwords to the users.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd &lt;span class="nt"&gt;-g&lt;/span&gt; 997 db2admin
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd &lt;span class="nt"&gt;-g&lt;/span&gt; 998 db2fence


&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-u&lt;/span&gt; 1003 &lt;span class="nt"&gt;-g&lt;/span&gt; db2admin &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/db2adm1 db2adm1
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-u&lt;/span&gt; 1004 &lt;span class="nt"&gt;-g&lt;/span&gt; db2fence &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/db2fen1 db2fen1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
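&lt;p&gt;As an optional sanity check, you can confirm the groups and users were created. The names below match the commands above; this sketch only reports presence and does not modify anything:&lt;/p&gt;

```shell
# Report whether the DB2 groups and users from the previous step exist.
report=""
for grp in db2admin db2fence; do
  if getent group "$grp" >/dev/null 2>&1; then
    report="$report group:$grp=present"
  else
    report="$report group:$grp=missing"
  fi
done
for usr in db2adm1 db2fen1; do
  if id "$usr" >/dev/null 2>&1; then
    report="$report user:$usr=present"
  else
    report="$report user:$usr=missing"
  fi
done
echo "$report"
```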



&lt;p&gt;II. Create a Db2 instance using the &lt;em&gt;db2icrt&lt;/em&gt; command, then check the log file for details on database connectivity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/ibm/db2/V11.5/instance/db2icrt &lt;span class="nt"&gt;-a&lt;/span&gt; server &lt;span class="nt"&gt;-u&lt;/span&gt; db2fen1 db2adm1
DBI1446I  The db2icrt &lt;span class="nb"&gt;command &lt;/span&gt;is running.


DB2 installation is being initialized.

 Total number of tasks to be performed: 4 
Total estimated &lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;all tasks to be performed: 309 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; 

Task &lt;span class="c"&gt;#1 start&lt;/span&gt;
Description: Setting default global profile registry variables 
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;1 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; 
Task &lt;span class="c"&gt;#1 end &lt;/span&gt;

Task &lt;span class="c"&gt;#2 start&lt;/span&gt;
Description: Initializing instance list 
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;5 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; 
Task &lt;span class="c"&gt;#2 end &lt;/span&gt;

Task &lt;span class="c"&gt;#3 start&lt;/span&gt;
Description: Configuring DB2 instances 
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;300 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; 
Task &lt;span class="c"&gt;#3 end &lt;/span&gt;

Task &lt;span class="c"&gt;#4 start&lt;/span&gt;
Description: Updating global profile registry 
Estimated &lt;span class="nb"&gt;time &lt;/span&gt;3 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; 
Task &lt;span class="c"&gt;#4 end &lt;/span&gt;

The execution completed successfully.

For more information see the DB2 installation log at &lt;span class="s2"&gt;"/tmp/db2icrt.log.80519"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
DBI1070I  Program db2icrt completed successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/db2icrt.log.80519

Post-installation instructions 
&lt;span class="nt"&gt;-------------------------------&lt;/span&gt;

Required steps: 
You can connect to the DB2 instance &lt;span class="s2"&gt;"db2adm1"&lt;/span&gt; using the port number &lt;span class="s2"&gt;"25001"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Record it &lt;span class="k"&gt;for &lt;/span&gt;future reference. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
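&lt;p&gt;The log reports port "25001" for the instance. On most systems &lt;em&gt;db2icrt&lt;/em&gt; also records a matching TCP service entry in /etc/services, conventionally named db2c_ plus the instance name (here db2c_db2adm1); that naming is an assumption, so adjust the pattern to your setup:&lt;/p&gt;

```shell
# Look up the instance's TCP service entry in /etc/services (name is an
# assumption based on the usual db2c_<instance> convention).
entry=$(grep -i 'db2c_db2adm1' /etc/services 2>/dev/null || echo "no db2c_db2adm1 entry found")
echo "$entry"
```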



&lt;p&gt;III. Switch to the instance owner (db2adm1) and start the database instance using the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su - db2adm1


&lt;span class="nv"&gt;$ &lt;/span&gt;db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
&lt;span class="nt"&gt;---------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
/opt/ibm/db2/V11.5               11.5.9.0        0                            Sun Sep 15 16:42:52 2024 UTC             0 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; sqllib/userprofile

&lt;span class="nv"&gt;$ &lt;/span&gt;db2start 
09/15/2024 19:52:45     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IV. The database instance is now running, and we can connect to it through the Db2 command line processor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;db2
&lt;span class="o"&gt;(&lt;/span&gt;c&lt;span class="o"&gt;)&lt;/span&gt; Copyright IBM Corporation 1993,2007
Command Line Processor &lt;span class="k"&gt;for &lt;/span&gt;DB2 Client 11.5.9.0

You can issue database manager commands and SQL statements from the &lt;span class="nb"&gt;command 
&lt;/span&gt;prompt. For example:
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to sample
    db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bind &lt;/span&gt;sample.bnd

For general &lt;span class="nb"&gt;help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ?.
For &lt;span class="nb"&gt;command help&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: ? &lt;span class="nb"&gt;command&lt;/span&gt;, where &lt;span class="nb"&gt;command &lt;/span&gt;can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on the CATALOG DATABASE &lt;span class="nb"&gt;command&lt;/span&gt;
 ? CATALOG          &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;on all of the CATALOG commands.

To &lt;span class="nb"&gt;exit &lt;/span&gt;db2 interactive mode, &lt;span class="nb"&gt;type &lt;/span&gt;QUIT at the &lt;span class="nb"&gt;command &lt;/span&gt;prompt. Outside 
interactive mode, all commands must be prefixed with &lt;span class="s1"&gt;'db2'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
To list the current &lt;span class="nb"&gt;command &lt;/span&gt;option settings, &lt;span class="nb"&gt;type &lt;/span&gt;LIST COMMAND OPTIONS.

For more detailed &lt;span class="nb"&gt;help&lt;/span&gt;, refer to the Online Reference Manual.

db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;V. Create a test database and connect to it using the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; create database sales
DB20000I  The CREATE DATABASE &lt;span class="nb"&gt;command &lt;/span&gt;completed successfully.

db2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; connect to sales

   Database Connection Information

 Database server        &lt;span class="o"&gt;=&lt;/span&gt; DB2/LINUXX8664 11.5.9.0
 SQL authorization ID   &lt;span class="o"&gt;=&lt;/span&gt; DB2ADM1
 Local database &lt;span class="nb"&gt;alias&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; SALES

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;That's it! Now you're ready to explore and work with the IBM Db2 database. I hope you find this guide helpful and enjoyable!&lt;/p&gt;

</description>
      <category>ibm</category>
      <category>db2</category>
      <category>ec2</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>Troubleshooting: GitHub Actions + Terraform + EKS + Helm</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sun, 01 Sep 2024 16:24:21 +0000</pubDate>
      <link>https://dev.to/aws-builders/troubleshooting-github-actions-terraform-eks-helm-3khj</link>
      <guid>https://dev.to/aws-builders/troubleshooting-github-actions-terraform-eks-helm-3khj</guid>
      <description>&lt;p&gt;In this post, I've compiled a list of issues I encountered while setting up an EKS Fargate cluster using Terraform code and GitHub Actions, along with their corresponding solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 1:&lt;/strong&gt; Connect: connection refused for service account.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Get &lt;span class="s2"&gt;"http://localhost/api/v1/namespaces/kube-system/serviceaccounts/aws-load-balancer-controller"&lt;/span&gt;: dial tcp &lt;span class="o"&gt;[&lt;/span&gt;::1]:80: connect: connection refused

│ 
│   with kubernetes_service_account.service-account,
│   on aws-alb-controller.tf line 50, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"kubernetes_service_account"&lt;/span&gt; &lt;span class="s2"&gt;"service-account"&lt;/span&gt;:
│   50: resource &lt;span class="s2"&gt;"kubernetes_service_account"&lt;/span&gt; &lt;span class="s2"&gt;"service-account"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
│ 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoewrl76nfnrt0tj102x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoewrl76nfnrt0tj102x.png" alt="connection refused for service account"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Error 2:&lt;/strong&gt; Connect: connection refused for namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Get &lt;span class="s2"&gt;"http://localhost/api/v1/namespaces/aws-observability"&lt;/span&gt;: dial tcp &lt;span class="o"&gt;[&lt;/span&gt;::1]:80: connect: connection refused
│ 
│   with kubernetes_namespace.aws_observability,
│   on namespaces.tf line 1, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"aws_observability"&lt;/span&gt;:
│    1: resource &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"aws_observability"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkkp9lgthgsr6htrkqsu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkkp9lgthgsr6htrkqsu.png" alt="connection refused for namespace"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Error 3:&lt;/strong&gt; Kubernetes cluster unreachable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│ 
│   with helm_release.alb-controller,
│   on aws-alb-controller.tf line 69, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt;:
│   69: resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqk8tnh22hsucph697fo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqk8tnh22hsucph697fo.png" alt="Kubernetes cluster unreachable"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Solution for errors 1, 2, and 3 above:&lt;/strong&gt;&lt;br&gt;
These issues typically occur when you rerun Terraform to update an EKS cluster after it was created with the EKS module.&lt;/p&gt;

&lt;p&gt;If you've used the Kubernetes provider as shown below, you're likely to encounter these types of errors.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;host&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;endpoint&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_ca_certificate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;base64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;certificate_authority&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;token&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_eks_cluster_auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster_auth"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To resolve this, you should configure the Kubernetes provider as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;host&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_endpoint&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_ca_certificate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;base64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_certificate_authority_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;token&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_eks_cluster_auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster_auth"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this configuration, the Kubernetes provider's host and cluster_ca_certificate values are sourced directly from the outputs of the eks module.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Pre-computed Outputs:&lt;/em&gt; The module.eks.cluster_endpoint and module.eks.cluster_certificate_authority_data values are outputs computed and stored in state when the EKS module is applied, so they remain constant unless the cluster configuration changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;No Additional Data Source Calls:&lt;/em&gt; Since the endpoint and certificate authority data are provided by the module's outputs, Terraform doesn't need to make an additional API call to AWS to retrieve this information each time you run terraform plan or apply. This reduces the likelihood of timing issues or inconsistencies that might cause connection problems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
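&lt;p&gt;Because Error 3 comes from a helm_release resource, the Helm provider should be configured from the same module outputs. A sketch assuming Helm provider v2:&lt;/p&gt;

```hcl
# Helm provider wired to the same EKS module outputs (sketch, Helm provider v2).
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```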




&lt;p&gt;&lt;strong&gt;Error 4:&lt;/strong&gt; Connect: Missing required argument.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Missing required argument
│ 
│   with data.aws_eks_cluster_auth.cluster,
│   on aws-alb-controller.tf line 21, &lt;span class="k"&gt;in &lt;/span&gt;data &lt;span class="s2"&gt;"aws_eks_cluster_auth"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt;:
│   21:   name &lt;span class="o"&gt;=&lt;/span&gt; module.eks.cluster_id
│ 
│ The argument &lt;span class="s2"&gt;"name"&lt;/span&gt; is required, but no definition was found.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom8jtts48fyu7h7x6hqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom8jtts48fyu7h7x6hqn.png" alt="Missing required argument"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This error means the required name argument of the aws_eks_cluster_auth data source could not be resolved: module.eks.cluster_id is either returning no value, or returning a value that is not the EKS cluster name.&lt;/p&gt;

&lt;p&gt;i. &lt;em&gt;Check the output of module.eks.cluster_id:&lt;/em&gt; Ensure that module.eks.cluster_id is correctly referencing the EKS cluster ID. It should be an output of the EKS module. Verify that the cluster_id output is correctly defined in your EKS module.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"cluster_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;ii. &lt;em&gt;Verify the EKS Module Version:&lt;/em&gt; Make sure you are using the correct version of the terraform-aws-modules/eks/aws module. If the module version is incorrect or outdated, it may not include the cluster_id output.&lt;/p&gt;
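
&lt;p&gt;For example, pinning the module to a known version makes the available outputs predictable (the version constraint below is illustrative only; in recent releases of this module the cluster name is exposed as &lt;code&gt;cluster_name&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~&gt; 20.0"   # illustrative version constraint

  # ... cluster configuration ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;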

&lt;p&gt;iii. &lt;em&gt;Direct Reference:&lt;/em&gt; If the module output is correctly defined but still causing issues, try referencing the cluster name output (&lt;code&gt;cluster_name&lt;/code&gt;) instead of &lt;code&gt;cluster_id&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster_auth"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Error 5:&lt;/strong&gt; Required plugins are not installed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;


│ Error: Required plugins are not installed
│ 
│ The installed provider plugins are not consistent with the packages
│ selected &lt;span class="k"&gt;in &lt;/span&gt;the dependency lock file:
│   - registry.terraform.io/hashicorp/null: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/null 3.2.2 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/time: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/time 0.11.2 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/tls: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/tls 4.0.5 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/aws: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/aws 5.53.0 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/cloudinit: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/cloudinit 2.3.4 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/helm: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/helm 2.14.0 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│   - registry.terraform.io/hashicorp/kubernetes: there is no package &lt;span class="k"&gt;for &lt;/span&gt;registry.terraform.io/hashicorp/kubernetes 2.31.0 cached &lt;span class="k"&gt;in&lt;/span&gt; .terraform/providers
│ 
│ Terraform uses external plugins to integrate with a variety of different
│ infrastructure services. To download the plugins required &lt;span class="k"&gt;for &lt;/span&gt;this
│ configuration, run:
│   terraform init


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtshn6s5vuh55p503em5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtshn6s5vuh55p503em5.png" alt="Required plugins are not installed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The error you're seeing indicates that Terraform's provider plugins are not correctly installed or do not match the versions specified in your dependency lock file (.terraform.lock.hcl). This can happen if you haven't run terraform init after modifying your Terraform configuration or if the plugins have been removed from the cache.&lt;/p&gt;

&lt;p&gt;Here’s how to resolve this issue:&lt;/p&gt;

&lt;p&gt;i. &lt;em&gt;Run terraform init:&lt;/em&gt; The error message itself suggests running terraform init. This command initializes the Terraform working directory by downloading the required provider plugins and setting up the necessary backend.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should download and install all the required plugins based on your configuration and lock file.&lt;/p&gt;

&lt;p&gt;ii. &lt;em&gt;Force Re-initialization:&lt;/em&gt; If running terraform init alone does not solve the problem, you can try forcing a reinitialization, which will re-download the providers and update the .terraform directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt; &lt;span class="nx"&gt;-upgrade&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The -upgrade flag makes Terraform select the newest provider versions allowed by your version constraints, update the dependency lock file, and refresh your local plugin cache accordingly.&lt;/p&gt;

&lt;p&gt;iii. &lt;em&gt;Clear and Re-initialize the Terraform Directory:&lt;/em&gt; If the above steps still don’t resolve the issue, you may want to clear out the .terraform directory and reinitialize from scratch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;rm&lt;/span&gt; &lt;span class="nx"&gt;-rf&lt;/span&gt; &lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will delete the existing .terraform directory (where Terraform caches plugins and stores state-related files) and then reinitialize the working directory.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Error 6:&lt;/strong&gt; Backend configuration changed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Backend configuration changed
│ 
│ A change &lt;span class="k"&gt;in &lt;/span&gt;the backend configuration has been detected, which may require migrating existing state.
│ 
│ If you wish to attempt automatic migration of the state, use &lt;span class="s2"&gt;"terraform init -migrate-state"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
│ If you wish to store the current configuration with no changes to the state, use &lt;span class="s2"&gt;"terraform init -reconfigure"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq44xo8qti8pkz9mm6icx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq44xo8qti8pkz9mm6icx.png" alt="Backend configuration changed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The error message you're seeing indicates that Terraform has detected a change in your backend configuration. The backend is where Terraform stores the state file, which keeps track of your infrastructure resources. When the backend configuration changes, Terraform needs to decide how to handle the existing state.&lt;/p&gt;

&lt;p&gt;You have two options depending on your situation:&lt;br&gt;
i. &lt;em&gt;Automatic State Migration:&lt;/em&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When to Use: If you've changed the backend configuration (e.g., switched from local state to a remote backend like S3 or changed the backend's parameters), and you want Terraform to automatically migrate your existing state to the new backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt; &lt;span class="nx"&gt;-migrate-state&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;What It Does: This command will attempt to migrate your existing state file to the new backend. It's a safe option if you're confident that the backend change is intentional and you want to keep your infrastructure's state intact in the new location.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ii. &lt;em&gt;Reconfigure Without Migrating State:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When to Use: If you made changes to the backend configuration but do not need to migrate the state (e.g., you're working in a different environment, or you've changed a configuration that doesn't affect the state).&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt; &lt;span class="nx"&gt;-reconfigure&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;What It Does: This command reconfigures the backend without migrating the state. It is useful if you're setting up Terraform in a new environment or if the state migration is unnecessary or handled separately.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Error 7:&lt;/strong&gt; Helm release "" was created but has a failed status.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Warning: Helm release &lt;span class="s2"&gt;""&lt;/span&gt; was created but has a failed status. Use the &lt;span class="sb"&gt;`&lt;/span&gt;helm&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command &lt;/span&gt;to investigate the error, correct it, &lt;span class="k"&gt;then &lt;/span&gt;run Terraform again.
│ 
│   with helm_release.alb-controller,
│   on aws-alb-controller.tf line 75, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt;:
│   75: resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

│ Warning: Helm release &lt;span class="s2"&gt;""&lt;/span&gt; was created but has a failed status. Use the &lt;span class="sb"&gt;`&lt;/span&gt;helm&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command &lt;/span&gt;to investigate the error, correct it, &lt;span class="k"&gt;then &lt;/span&gt;run Terraform again.
│ 
│   with helm_release.alb-controller,
│   on aws-alb-controller.tf line 75, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt;:
│   75: resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


│ Error: context deadline exceeded
│ 
│   with helm_release.alb-controller,
│   on aws-alb-controller.tf line 75, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt;:
│   75: resource &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"alb-controller"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zs3vpkk6843tswoqnoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zs3vpkk6843tswoqnoh.png" alt="Helm release "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The error you're encountering indicates that the Helm release for the ALB (Application Load Balancer) controller was created but ended up in a failed state. Additionally, the "context deadline exceeded" error suggests that the operation took too long to complete, possibly due to network issues, resource constraints, or misconfiguration.&lt;/p&gt;

&lt;p&gt;To fix this, run &lt;code&gt;terraform apply&lt;/code&gt; again; Terraform will automatically replace the tainted helm_release.alb-controller resource, as shown in the image below. If the release keeps failing, investigate it first with the &lt;code&gt;helm status&lt;/code&gt; command and by checking the controller pods with kubectl before re-applying.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajb0hgzug30wn348idj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajb0hgzug30wn348idj3.png" alt="Helm release tainted replacement"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Error 8:&lt;/strong&gt; No value for required variable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: No value &lt;span class="k"&gt;for &lt;/span&gt;required variable
│ 
│   on variables.tf line 85:
│   85: variable &lt;span class="s2"&gt;"route53_access_key"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
│ 
│ The root module input variable &lt;span class="s2"&gt;"route53_access_key"&lt;/span&gt; is not &lt;span class="nb"&gt;set&lt;/span&gt;, and has no
│ default value. Use a &lt;span class="nt"&gt;-var&lt;/span&gt; or &lt;span class="nt"&gt;-var-file&lt;/span&gt; &lt;span class="nb"&gt;command &lt;/span&gt;line argument to provide a
│ value &lt;span class="k"&gt;for &lt;/span&gt;this variable.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokavihgz6t2a31g5e0iq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokavihgz6t2a31g5e0iq.png" alt="No value for required variable"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The error message indicates that the Terraform configuration requires an input variable named route53_access_key, which has not been provided a value and doesn't have a default value set in your variables.tf file.&lt;/p&gt;
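
&lt;p&gt;A minimal sketch of the common fixes (the variable name comes from the error message; the placeholder value and the &lt;code&gt;sensitive&lt;/code&gt; flag are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;
# Option 1: set the value in terraform.tfvars (picked up automatically)
route53_access_key = "YOUR-ACCESS-KEY"

# Option 2: give the variable a default in variables.tf
variable "route53_access_key" {
  type      = string
  default   = ""      # or a real default value
  sensitive = true    # avoid printing credentials in plan output
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, pass it on the command line with &lt;code&gt;terraform apply -var="route53_access_key=..."&lt;/code&gt; or export it as the environment variable &lt;code&gt;TF_VAR_route53_access_key&lt;/code&gt;.&lt;/p&gt;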




&lt;p&gt;&lt;strong&gt;Error 9:&lt;/strong&gt; InvalidArgument: The parameter Origin DomainName does not refer to a valid S3 bucket.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: updating CloudFront Distribution &lt;span class="o"&gt;(&lt;/span&gt;E1B4377XXXX&lt;span class="o"&gt;)&lt;/span&gt;: operation error CloudFront: UpdateDistribution, https response error StatusCode: 400, RequestID: 0b25a733-e79-4xx7-bxx6-5exxx1cb405, InvalidArgument: The parameter Origin DomainName does not refer to a valid S3 bucket.
│ 
│   with aws_cloudfront_distribution.cms_cf,
│   on cloudfronts.tf line 1, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"aws_cloudfront_distribution"&lt;/span&gt; &lt;span class="s2"&gt;"cms_cf"&lt;/span&gt;:
│    1: resource &lt;span class="s2"&gt;"aws_cloudfront_distribution"&lt;/span&gt; &lt;span class="s2"&gt;"cms_cf"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi03pab8gp0qhhs776obv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi03pab8gp0qhhs776obv.png" alt="UpdateDistribution, https response error StatusCode: 400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The error message indicates that the Origin DomainName specified in your CloudFront distribution configuration does not point to a valid S3 bucket. This commonly happens when:&lt;/p&gt;

&lt;p&gt;i. Incorrect S3 bucket name: Ensure that the S3 bucket name you are using in your CloudFront distribution is correct and exists.&lt;/p&gt;

&lt;p&gt;ii. S3 bucket URL format: The Origin DomainName for an S3 bucket should follow this pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For an S3 bucket (global endpoint): &lt;code&gt;bucket-name.s3.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For a regional S3 bucket: &lt;code&gt;bucket-name.s3.&amp;lt;region&amp;gt;.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;iii. Bucket permissions: Ensure the S3 bucket has the correct permissions to allow access from CloudFront. The bucket policy should allow CloudFront access.&lt;/p&gt;

&lt;p&gt;You can check the origin block in your CloudFront resource, which should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudfront_distribution"&lt;/span&gt; &lt;span class="s2"&gt;"cms_cf"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;origin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;domain_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-bucket-name.s3.amazonaws.com"&lt;/span&gt;
    &lt;span class="nx"&gt;origin_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S3-your-bucket-name"&lt;/span&gt;

    &lt;span class="nx"&gt;s3_origin_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;origin_access_identity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cloudfront_origin_access_identity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cloudfront_access_identity_path&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Other CloudFront configuration&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Error 10:&lt;/strong&gt; Reference to undeclared input variable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

│ Error: Reference to undeclared input variable
│ 
│   on eks.tf line 65 &lt;span class="k"&gt;in &lt;/span&gt;module &lt;span class="s2"&gt;"eks"&lt;/span&gt;:
│   65:       namespace &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.namespace[0]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
│ 
│ An input variable with the name &lt;span class="s2"&gt;"namespace"&lt;/span&gt; has not been declared. Did you mean &lt;span class="s2"&gt;"namespaces"&lt;/span&gt;?


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87r54wg5mfhj7rj9vgvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87r54wg5mfhj7rj9vgvx.png" alt="Reference to undeclared input variable"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This error message indicates that your eks.tf file references an input variable named namespace that has not been declared.&lt;/p&gt;

&lt;p&gt;Check if the intended variable is &lt;code&gt;namespace&lt;/code&gt; or &lt;code&gt;namespaces&lt;/code&gt;, and adjust the code and declaration accordingly. Also ensure that the variable is declared in your variables.tf or in the same file where you're using it.&lt;/p&gt;
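
&lt;p&gt;For instance, if the variable really should be a list named &lt;code&gt;namespace&lt;/code&gt;, declaring it in variables.tf would look like this (the type and description are assumptions for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;
variable "namespace" {
  description = "Namespaces used by the EKS module"
  type        = list(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;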




&lt;p&gt;Thanks for reading! I'll update this post regularly. Please share any other issues in the comments. 🤝😊&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>terraform</category>
      <category>eks</category>
      <category>helm</category>
    </item>
    <item>
      <title>Differences between primary, core and task nodes in Amazon EMR cluster</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sun, 09 Jun 2024 10:43:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/differences-between-primary-core-and-task-nodes-in-amazon-emr-cluster-pc6</link>
      <guid>https://dev.to/aws-builders/differences-between-primary-core-and-task-nodes-in-amazon-emr-cluster-pc6</guid>
      <description>&lt;p&gt;The key differences between primary, core, and task nodes in an Amazon EMR cluster are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary Node (also known as Master Node):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The primary node is responsible for coordinating the cluster and managing the execution of jobs.&lt;/li&gt;
&lt;li&gt;It runs the primary Hadoop services, such as the YARN ResourceManager and the HDFS NameNode.&lt;/li&gt;
&lt;li&gt;There is only one primary node in an EMR cluster.&lt;/li&gt;
&lt;li&gt;The primary node cannot be terminated during the lifetime of the cluster, as it is essential for the cluster's operation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Core Nodes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core nodes host the Hadoop Distributed File System (HDFS) and run the DataNode daemon, along with task-running daemons such as the YARN NodeManager.&lt;/li&gt;
&lt;li&gt;They are responsible for storing and processing data in the cluster.&lt;/li&gt;
&lt;li&gt;Core nodes cannot be removed from the cluster without risking data loss, as they contain the persistent data in HDFS.&lt;/li&gt;
&lt;li&gt;You should reserve core nodes for the capacity that is required until your cluster completes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Task Nodes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Task nodes are used for running tasks and do not host HDFS. They can be added or removed from the cluster as needed, without the risk of data loss.&lt;/li&gt;
&lt;li&gt;Task nodes are ideal for handling temporary or burst workloads, as you can launch task instance fleets on Spot Instances to increase capacity while minimizing costs.&lt;/li&gt;
&lt;li&gt;The cluster will never scale below the minimum constraints set in the managed scaling policy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a table summarizing the key differences:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23i535b8s12gnmcasriy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23i535b8s12gnmcasriy.png" alt="EMR nodes summary"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;For more details:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Selecting and deploying an Amazon EMR cluster: click &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/amazon-emr-hardware/select.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Estimating Amazon EMR cluster capacity: click &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/amazon-emr-hardware/capacity.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>emr</category>
      <category>cluster</category>
      <category>nodes</category>
    </item>
    <item>
      <title>EKS cluster upgrade fail with Kubelet version of Fargate pods must be updated to match cluster version!!</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Sat, 30 Dec 2023 17:04:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/eks-cluster-upgrade-fail-with-kubelet-version-of-fargate-pods-must-be-updated-to-match-cluster-version-4gb</link>
      <guid>https://dev.to/aws-builders/eks-cluster-upgrade-fail-with-kubelet-version-of-fargate-pods-must-be-updated-to-match-cluster-version-4gb</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ofc2hy5tavjj6b4q91q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ofc2hy5tavjj6b4q91q.png" alt="EKS Cluster - Could not update cluster version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might encounter the following error message within the AWS EKS console when attempting to upgrade the Kubernetes version of your cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubelet version of Fargate pods must be updated to match cluster version 1.25 before updating cluster version; Please recycle all offending pod replicas.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For instance, if you are in the process of incrementally upgrading the Kubernetes version from 1.24 to 1.27.&lt;/p&gt;

&lt;p&gt;The upgrade from version 1.24 to 1.25 will proceed without encountering errors. However, please note that it won’t automatically update the kubelet version of the Fargate nodes/Pods until a restart is performed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6othycyqfnk2t2e35t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6othycyqfnk2t2e35t.png" alt="AWS EKS Console (Kubernetes version 1.25)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But when you check Fargate nodes versions,&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F088pwpeuqo6reht93kol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F088pwpeuqo6reht93kol.png" alt="Fargate Node details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or when you check from terminal,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# To list all nodes
kubectl get nodes


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwvilrvk4916clon2iij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwvilrvk4916clon2iij.png" alt="Fargate Nodes details — Terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the Fargate pods’ versions are not compatible with the upgraded EKS cluster, you might need to recreate the pods using a compatible version.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can do this by updating the deployment or stateful set YAML files with the correct version and applying the changes to the cluster. Kubernetes will automatically create new pods with the updated version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can get all deployment, statefulset from all namespaces and restart it as shown below.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# To list all deployments
kubectl get deployment -A

# To list all statefulsets
kubectl get statefulset -A

# To restart deployments
kubectl rollout restart deployment coredns -n kube-system
kubectl rollout restart deployment metrics-server -n kube-system
kubectl rollout restart deployment aws-load-balancer-controller -n kube-system
kubectl rollout restart deployment ebs-csi-controller -n kube-system

# To restart statefulset
kubectl rollout restart statefulset adot-collector -n fargate-container-insights


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Please note that this solution will work exclusively with Fargate nodes. If you are using other cluster modes, please refer to the links provided at the end of this article for alternative solutions.&lt;/p&gt;




&lt;p&gt;Also don’t forget to update kubectl as well.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# To check kubectl version (the --short flag was removed in newer kubectl releases)
kubectl version --client


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Link to update kubectl:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html?source=post_page-----7fb0403bf9b6--------------------------------" rel="noopener noreferrer"&gt;Installing or updating kubectl&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details about Amazon EKS Kubernetes versions upgrade:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html?source=post_page-----7fb0403bf9b6--------------------------------" rel="noopener noreferrer"&gt;Amazon EKS Kubernetes versions&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html?source=post_page-----7fb0403bf9b6--------------------------------" rel="noopener noreferrer"&gt;Updating an Amazon EKS cluster Kubernetes version&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I’ll always strive to provide simple explanations. If you found this article useful, follow me for more similar articles in the future. Your support keeps me motivated to create content like this. 🤝😊&lt;/p&gt;

</description>
      <category>eks</category>
      <category>upgrade</category>
      <category>fargate</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Setup Prometheus and Grafana with existing EKS Fargate cluster - Monitoring</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Fri, 29 Dec 2023 13:30:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/setup-prometheus-and-grafana-with-existing-eks-fargate-cluster-monitoring-39he</link>
      <guid>https://dev.to/aws-builders/setup-prometheus-and-grafana-with-existing-eks-fargate-cluster-monitoring-39he</guid>
      <description>&lt;p&gt;In this article, I will delineate the fundamental steps for configuring Prometheus and Grafana within the existing EKS Fargate cluster, along with the establishment of custom metrics. These measures are commonly utilized for monitoring and alerting purposes.&lt;/p&gt;

&lt;p&gt;Steps to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure Node Groups&lt;/li&gt;
&lt;li&gt;Install AWS EBS CSI driver&lt;/li&gt;
&lt;li&gt;Install Prometheus&lt;/li&gt;
&lt;li&gt;Install Grafana&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Configure Node Groups
&lt;/h2&gt;

&lt;p&gt;Given the absence of any pre-existing node groups, let's proceed to create a new one.&lt;/p&gt;

&lt;h4&gt;
  
  
  i. Create an IAM Role for EC2 worker nodes
&lt;/h4&gt;

&lt;p&gt;Go to the AWS IAM console and create a role named 'AmazonEKSWorkerNodeRole' with the following three AWS managed policies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonEC2ContainerRegistryReadOnly&lt;/li&gt;
&lt;li&gt;AmazonEKS_CNI_Policy&lt;/li&gt;
&lt;li&gt;AmazonEKSWorkerNodePolicy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34isebad9zh16zuhxfsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34isebad9zh16zuhxfsh.png" alt="AWS IAM console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ii. Create Node groups
&lt;/h4&gt;

&lt;p&gt;Navigate to the AWS EKS console and add a node group to the cluster. Prometheus and Grafana must run on EC2 worker nodes, as both applications require volumes to be mounted on them.&lt;/p&gt;

&lt;p&gt;When configuring the node group for the cluster, take the following into consideration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node IAM role:&lt;/strong&gt; Select the role created in the previous step (AmazonEKSWorkerNodeRole) &lt;br&gt;
&lt;strong&gt;Instance type:&lt;/strong&gt; Select based on your requirements (t3.small in my case)&lt;br&gt;
&lt;strong&gt;Subnets:&lt;/strong&gt; The private subnets within the VPC where the EKS cluster is located. If you want to enable remote access to the nodes, use public subnets.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In my configuration, I opted for a t3.small instance with a desired size of 1. The setup worked without any issues.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To verify the proper functioning of the EC2 worker nodes, execute the following command. The output should indicate that a pod is currently running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ k get po -l k8s-app=aws-node -n kube-system
NAME             READY   STATUS    RESTARTS   AGE
aws-node-hbvz2   1/1     Running   0          58m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  2. Install AWS EBS CSI driver
&lt;/h2&gt;

&lt;p&gt;The Amazon EBS CSI driver manages the lifecycle of Amazon EBS volumes as storage for the Kubernetes Volumes that you create.&lt;/p&gt;

&lt;p&gt;The Amazon EBS CSI driver makes Amazon EBS volumes for these types of Kubernetes volumes: generic ephemeral volumes and persistent volumes.&lt;/p&gt;

&lt;p&gt;Prometheus and Grafana require persistent storage, commonly referred to as PV (Persistent Volume) in Kubernetes terminology, to be attached to them.&lt;/p&gt;
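&lt;p&gt;To make this concrete, here is a minimal sketch of a PersistentVolumeClaim backed by an EBS storage class (the claim name and the 'gp2' class are illustrative; the Prometheus and Grafana Helm charts used later create equivalent claims for you):&lt;/p&gt;

```yaml
# Illustrative PVC only -- the Helm charts used later create their own claims.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-ebs-claim
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce        # EBS volumes attach to a single node at a time
  storageClassName: gp2
  resources:
    requests:
      storage: 8Gi
```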

&lt;h4&gt;
  
  
  i. Create AWS EBS CSI driver IAM role and associate to service account
&lt;/h4&gt;

&lt;p&gt;Create a service account named 'ebs-csi-controller-sa' and associate it with AWS managed IAM policies. This service account will be utilized during the installation of the AWS EBS CSI driver.&lt;/p&gt;

&lt;p&gt;Replace 'my-cluster' with the name of your cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

eksctl create iamserviceaccount &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; ebs-csi-controller-sa &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; AmazonEKS_EBS_CSI_DriverRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--attach-policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--override-existing-serviceaccounts&lt;/span&gt; &lt;span class="nt"&gt;--approve&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you don't specify a role name, eksctl assigns one automatically.&lt;/p&gt;

&lt;h4&gt;
  
  
  ii. Add Helm repositories
&lt;/h4&gt;

&lt;p&gt;We will use Helm to install the components required to run Prometheus and Grafana. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  iii. Installing aws-ebs-csi-driver
&lt;/h4&gt;

&lt;p&gt;After adding the new Helm repositories, install the AWS EBS CSI driver using the Helm command below.&lt;/p&gt;

&lt;p&gt;Replace the region 'eu-north-1' with your cluster's region.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; aws-ebs-csi-driver &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.region&lt;span class="o"&gt;=&lt;/span&gt;eu-north-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.name&lt;span class="o"&gt;=&lt;/span&gt;ebs-csi-controller-sa &lt;span class="se"&gt;\&lt;/span&gt;
  aws-ebs-csi-driver/aws-ebs-csi-driver


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  3. Install Prometheus
&lt;/h2&gt;

&lt;p&gt;For persistent storage of scraped metrics and configurations, Prometheus leverages two EBS volumes: one dedicated to the prometheus-server pod and another for the prometheus-alertmanager pod.&lt;/p&gt;

&lt;h4&gt;
  
  
  i. Create a namespace for Prometheus
&lt;/h4&gt;

&lt;p&gt;Create a namespace called 'prometheus'.&lt;br&gt;
&lt;code&gt;kubectl create namespace prometheus&lt;/code&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  ii. Set the Availability Zone and create a storage class
&lt;/h4&gt;

&lt;p&gt;There are two options for storage class:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use the default storage class (proceed to step iii).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a storage class in your worker node's Availability Zone (AZ).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get the Availability Zone of one of the worker nodes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;EBS_AZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].metadata.labels['topology&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;kubernetes&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;io&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="s2"&gt;zone']}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
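&lt;p&gt;An empty or malformed value here would produce a broken storage class, so a quick sanity check on $EBS_AZ can save a debugging round-trip. A small sketch (AZ names look like a region plus a letter suffix, e.g. eu-north-1b):&lt;/p&gt;

```shell
# Sketch: succeed only if the argument looks like an AWS Availability
# Zone, i.e. a region name ending in a digit plus a letter (eu-north-1b).
is_az() {
  case "$1" in
    *[a-z]-[a-z]*-[0-9][a-z]) return 0 ;;
    *) return 1 ;;
  esac
}

# Example guard before creating the storage class:
# is_az "$EBS_AZ" || echo "unexpected AZ value: '$EBS_AZ'"
```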

&lt;p&gt;Create a storage class:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
  namespace: prometheus
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Retain
allowedTopologies:
- matchLabelExpressions:
  - key: topology.ebs.csi.aws.com/zone
    values:
    - &lt;/span&gt;&lt;span class="nv"&gt;$EBS_AZ&lt;/span&gt;&lt;span class="s2"&gt;
"&lt;/span&gt; | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  iii. Installing Prometheus
&lt;/h4&gt;

&lt;p&gt;First download the Helm values for Prometheus file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

wget https://github.com/aws-samples/containers-blog-maelstrom/raw/main/fargate-monitoring/prometheus_values.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you wish to scrape custom metrics endpoints, add them under the 'extraScrapeConfigs:' section of the prometheus_values.yml file, as demonstrated here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;extraScrapeConfigs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;- job_name: 'api-svc'&lt;/span&gt;
    &lt;span class="s"&gt;metrics_path: /metrics&lt;/span&gt;
    &lt;span class="s"&gt;scheme: http&lt;/span&gt;
    &lt;span class="s"&gt;static_configs:&lt;/span&gt;
      &lt;span class="s"&gt;- targets: ['api-svc.api-dev.svc.cluster.local:5557']&lt;/span&gt;
  &lt;span class="s"&gt;- job_name: 'apps-svc'&lt;/span&gt;
    &lt;span class="s"&gt;metrics_path: /metrics&lt;/span&gt;
    &lt;span class="s"&gt;scheme: http&lt;/span&gt;
    &lt;span class="s"&gt;static_configs:&lt;/span&gt;
      &lt;span class="s"&gt;- targets: ['apps-svc.api-dev.svc.cluster.local:5559']&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run Helm command to install Prometheus (for worker node's AZ storage class option):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; prometheus &lt;span class="nt"&gt;-f&lt;/span&gt; prometheus_values.yml prometheus-community/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus &lt;span class="nt"&gt;--version&lt;/span&gt; 15


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run Helm command to install Prometheus (for default storage class option):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; prometheus &lt;span class="nt"&gt;-f&lt;/span&gt; prometheus_values.yml prometheus-community/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; alertmanager.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt;,server.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 15


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Important note: the default storage class has its reclaim policy set to "Delete". Consequently, any EBS volumes used by Prometheus will be deleted automatically when you uninstall Prometheus.&lt;/p&gt;
&lt;/blockquote&gt;
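&lt;p&gt;If you end up on the default storage class but want to keep a volume, the reclaim policy of an already-provisioned PersistentVolume can be changed afterwards (per the Kubernetes documentation on changing a PV's reclaim policy). The patch body is just:&lt;/p&gt;

```yaml
# Apply with (PV_NAME is a placeholder for your volume's name):
#   kubectl patch pv PV_NAME --type merge --patch-file retain.yaml
spec:
  persistentVolumeReclaimPolicy: Retain
```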

&lt;p&gt;Once Helm installation is completed, let's verify the resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ k get all -n prometheus
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-c7644896-7kfjq           2/2     Running   0          103m
pod/prometheus-kube-state-metrics-8476bdcc64-wng4m   1/1     Running   0          103m
pod/prometheus-node-exporter-8hf57                   1/1     Running   0          103m
pod/prometheus-pushgateway-665779d98f-v8q5d          1/1     Running   0          103m
pod/prometheus-server-6fd8bc8576-wwmvw               2/2     Running   0          103m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/prometheus-alertmanager         ClusterIP   172.20.84.40     &amp;lt;none&amp;gt;        80/TCP         103m
service/prometheus-kube-state-metrics   ClusterIP   172.20.192.129   &amp;lt;none&amp;gt;        8080/TCP       103m
service/prometheus-node-exporter        ClusterIP   None             &amp;lt;none&amp;gt;        9100/TCP       103m
service/prometheus-pushgateway          ClusterIP   172.20.181.13    &amp;lt;none&amp;gt;        9091/TCP       103m
service/prometheus-server               NodePort    172.20.167.19    &amp;lt;none&amp;gt;        80:30900/TCP   103m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   1         1         1       1            1           &amp;lt;none&amp;gt;          103m

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-alertmanager         1/1     1            1           103m
deployment.apps/prometheus-kube-state-metrics   1/1     1            1           103m
deployment.apps/prometheus-pushgateway          1/1     1            1           103m
deployment.apps/prometheus-server               1/1     1            1           103m

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-alertmanager-c7644896           1         1         1       103m
replicaset.apps/prometheus-kube-state-metrics-8476bdcc64   1         1         1       103m
replicaset.apps/prometheus-pushgateway-665779d98f          1         1         1       103m
replicaset.apps/prometheus-server-6fd8bc8576               1         1         1       103m



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The chart creates two persistent volume claims: an 8Gi volume for prometheus-server pod and a 2Gi volume for prometheus-alertmanager.&lt;/p&gt;

&lt;h4&gt;
  
  
  iv. Check metrics from Prometheus
&lt;/h4&gt;

&lt;p&gt;To inspect metrics from Prometheus in the browser, you must initiate port forwarding.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus deploy/prometheus-server 8081:9090 &amp;amp;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, open a web browser and navigate to &lt;a href="http://localhost:8081/targets" rel="noopener noreferrer"&gt;http://localhost:8081/targets&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this page you can see all configured targets, alerts, rules &amp;amp; other configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famxe5cpppl4v9oobijk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famxe5cpppl4v9oobijk8.png" alt="Prometheus dashboard"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Install Grafana
&lt;/h2&gt;

&lt;p&gt;In this step, we will create a dedicated Kubernetes namespace for Grafana, create the Grafana manifest file, set up security groups, set up an Ingress, and finally configure the dashboard.&lt;/p&gt;

&lt;h4&gt;
  
  
  i. Create a namespace for Grafana
&lt;/h4&gt;

&lt;p&gt;Create a namespace called 'grafana'.&lt;br&gt;
&lt;code&gt;kubectl create namespace grafana&lt;/code&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  ii. Create a Grafana manifest file
&lt;/h4&gt;

&lt;p&gt;We also require a manifest file to configure Grafana. Below is an example of the file named grafana.yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# grafana.yaml&lt;/span&gt;
&lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;datasources.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prometheus&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://prometheus-server.prometheus.svc.cluster.local&lt;/span&gt;
        &lt;span class="na"&gt;access&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxy&lt;/span&gt;
        &lt;span class="na"&gt;isDefault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  iii. Installing Grafana
&lt;/h4&gt;

&lt;p&gt;Now, proceed to install Grafana using Helm. Replace 'my-password' with your password.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana grafana/grafana &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; grafana &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; persistence.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'gp2'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; persistence.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;adminPassword&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'my-passoword'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--values&lt;/span&gt; grafana.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; service.type&lt;span class="o"&gt;=&lt;/span&gt;NodePort


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
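&lt;p&gt;If you lose track of the admin password later, the standard Grafana chart stores it base64-encoded in a Secret named 'grafana'. A hedged sketch for recovering it (assumes the chart's default secret layout):&lt;/p&gt;

```shell
# The chart keeps the password under .data.admin-password, base64-encoded.
decode_b64() { base64 --decode; }

# Against a live cluster (not run here):
# kubectl get secret -n grafana grafana \
#   -o jsonpath='{.data.admin-password}' | decode_b64
```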
&lt;h4&gt;
  
  
  iv. Create a security group
&lt;/h4&gt;

&lt;p&gt;Create a security group (grafana-alb-sg) for the ingress ALB, with an inbound rule allowing HTTPS from anywhere.&lt;/p&gt;
&lt;h4&gt;
  
  
  v. Allow inbound request to EC2 worker node security group
&lt;/h4&gt;

&lt;p&gt;Before exposing Grafana to the external world, let's examine the definition of the Kubernetes service responsible for running Grafana.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ k -n grafana get svc grafana -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: grafana
  creationTimestamp: "2023-12-29T05:59:40Z"
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 10.2.2
    helm.sh/chart: grafana-7.0.14
  name: grafana
  namespace: grafana
  resourceVersion: "179748053"
  uid: 7da370e2-63a4-4ca6-8ad4-14e624a51c4f
spec:
  clusterIP: 172.20.102.0
  clusterIPs:
  - 172.20.102.0
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: service
    nodePort: 31059
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/name: grafana
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The target port is set to 3000, which corresponds to the port utilized by pods running Grafana.&lt;/p&gt;

&lt;p&gt;To allow inbound requests on port 3000, associate the security group created in the previous step with the EC2 worker nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  vi. Setup Ingress
&lt;/h4&gt;

&lt;p&gt;Define a new Kubernetes Ingress to facilitate the provisioning of an ALB.&lt;/p&gt;

&lt;p&gt;This assumes you have already installed the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noopener noreferrer"&gt;AWS Load Balancer Controller&lt;/a&gt;, which allows a Kubernetes Ingress to create an ALB.&lt;/p&gt;

&lt;p&gt;Let's define the Ingress definition file for Grafana.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alb&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/load-balancer-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana-alb&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;internet-facing&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/target-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/subnets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${PUBLIC_SUBNET_IDs}&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/listen-ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[{"HTTPS":443},&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{"HTTP":80}]'&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/security-groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${ALB_SECURITY_GROUP_ID}&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/healthcheck-port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000"&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/healthcheck-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/api/health&lt;/span&gt;
    &lt;span class="na"&gt;alb.ingress.kubernetes.io/certificate-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${ACM_CERT_ARN}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${YOUR_ROUTE53_DOMAIN}&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace the values for subnets, security-groups, certificate-arn, and host with your own.&lt;/p&gt;

&lt;p&gt;After applying the new Ingress, once the new ALB is ready, navigate to ${YOUR_ROUTE53_DOMAIN} to confirm that Grafana is accessible.&lt;/p&gt;

&lt;p&gt;After logging into your Grafana account, proceed to import the necessary dashboards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvi7l18oh7mfkkoqu7gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvi7l18oh7mfkkoqu7gt.png" alt="Grafana import dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can download dashboards from &lt;a href="https://grafana.com/grafana/dashboards/" rel="noopener noreferrer"&gt;Grafana Dashboards&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I utilized these two dashboards, which proved to be valuable for monitoring the overall EKS Fargate cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ID: 17119 &lt;a href="https://grafana.com/grafana/dashboards/17119-kubernetes-eks-cluster-prometheus/" rel="noopener noreferrer"&gt;Kubernetes EKS Cluster (Prometheus)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;ID: 12421 &lt;a href="https://grafana.com/grafana/dashboards/12421-fargate-pod-requests/" rel="noopener noreferrer"&gt;Fargate Pod Requests&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;That concludes our walkthrough! In this guide, we established a new node group essential for Prometheus and Grafana, and successfully installed and configured both tools.&lt;/p&gt;

&lt;p&gt;I trust this post proves valuable to you! 😊&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Troubleshooting:&lt;/strong&gt;&lt;br&gt;
I've authored another post detailing additional issues and their solutions encountered during the setup of Prometheus &amp;amp; Grafana.&lt;/p&gt;

&lt;p&gt;You can find the post &lt;a href="https://dev.to/aws-builders/troubleshooting-eks-helm-prometheus-grafana-4ini"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additional references:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/" rel="noopener noreferrer"&gt;Monitoring Amazon EKS on AWS Fargate using Prometheus and Grafana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="noopener noreferrer"&gt;Amazon EBS CSI driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html" rel="noopener noreferrer"&gt;Creating the Amazon EBS CSI driver IAM role&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md" rel="noopener noreferrer"&gt;AWS EBS CSI Driver Installation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>eks</category>
      <category>fargate</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Troubleshooting: EKS + Helm + Prometheus + Grafana</title>
      <dc:creator>Nowsath</dc:creator>
      <pubDate>Fri, 29 Dec 2023 12:06:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/troubleshooting-eks-helm-prometheus-grafana-4ini</link>
      <guid>https://dev.to/aws-builders/troubleshooting-eks-helm-prometheus-grafana-4ini</guid>
      <description>&lt;p&gt;In this post, I've compiled a list of issues along with their corresponding solutions that I encountered while configuring Prometheus and Grafana with Helm in the existing EKS Fargate cluster setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 1:&lt;/strong&gt; Could not determine region from any metadata service. The region can be manually supplied via the AWS_REGION environment variable.&lt;/p&gt;

&lt;p&gt;panic: did not find aws instance ID in node providerID string&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k logs ebs-csi-controller-7f5c959c75-j92jf -n kube-system -c ebs-plugin
I1228 04:31:45.536047       1 driver.go:78] "Driver Information" Driver="ebs.csi.aws.com" Version="v1.25.0"
I1228 04:31:45.536144       1 metadata.go:85] "retrieving instance data from ec2 metadata"
I1228 04:31:58.152468       1 metadata.go:88] "ec2 metadata is not available"
I1228 04:31:58.152491       1 metadata.go:96] "retrieving instance data from kubernetes api"
I1228 04:31:58.153081       1 metadata.go:101] "kubernetes api is available"
E1228 04:31:58.175387       1 controller.go:86] "Could not determine region from any metadata service. The region can be manually supplied via the AWS_REGION environment variable." err="did not find aws instance ID in node providerID string"
panic: did not find aws instance ID in node providerID string
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver,app.kubernetes.io/instance=aws-ebs-csi-driver"
NAME                                  READY   STATUS             RESTARTS       AGE
ebs-csi-controller-7f5c959c75-j92jf   0/6     CrashLoopBackOff   36 (9s ago)    10m
ebs-csi-controller-7f5c959c75-xpv9x   0/6     CrashLoopBackOff   36 (23s ago)   10m
ebs-csi-node-969qs                    3/3     Running            0              10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
If you don't specify the &lt;strong&gt;region&lt;/strong&gt; of your cluster when installing aws-ebs-csi-driver, the ebs-csi-controller pods will crash, as the region defaults to 'us-east-1'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; aws-ebs-csi-driver &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.region&lt;span class="o"&gt;=&lt;/span&gt;eu-north-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.name&lt;span class="o"&gt;=&lt;/span&gt;ebs-csi-controller-sa &lt;span class="se"&gt;\&lt;/span&gt;
  aws-ebs-csi-driver/aws-ebs-csi-driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The region must be supplied explicitly because the ebs-plugin container otherwise fails with "Could not determine region from any metadata service. The region can be manually supplied via the AWS_REGION environment variable."&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Error 2:&lt;/strong&gt; Values don't meet the specifications of the schema(s) in the following chart(s)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: values don't meet the specifications of the schema(s) in the following chart(s):
prometheus:
- server.remoteRead: Invalid type. Expected: array, given: object
alertmanager:
- extraEnv: Invalid type. Expected: array, given: object
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
These errors result from a version mismatch between the installed chart and the values file used in the Helm installation. If you are using a customized prometheus_values.yml file, pin the chart to the version the file was written for. If you are not using a customized file, use the values file that ships with the latest chart version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; prometheus prometheus-community/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; alertmanager.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt;,server.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have pinned the chart to version 15 here.&lt;/p&gt;
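&lt;p&gt;If you keep a customized prometheus_values.yml, the schema errors above mean those keys must be YAML arrays rather than maps. A minimal sketch of the expected shapes (the file path and the empty-array defaults are illustrative assumptions, not the chart's full values):&lt;/p&gt;

```shell
# Sketch: the chart schema expects arrays for these keys, not maps.
# An empty array is written as [] (or as a "- item" list), never as {}.
cat > /tmp/prometheus_values.yml <<'EOF'
alertmanager:
  extraEnv: []        # array, not {}
server:
  remoteRead: []      # array, not {}
EOF

# Both keys now carry the array form:
grep -c '\[\]' /tmp/prometheus_values.yml
```

&lt;p&gt;Pass the file to Helm with -f together with --version, so the values file and the chart schema stay in step.&lt;/p&gt;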




&lt;p&gt;&lt;strong&gt;Error 3:&lt;/strong&gt; 0/17 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/17 nodes are available: 17 Preemption is not helpful for scheduling..&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k get events -n prometheus
LAST SEEN   TYPE      REASON               OBJECT                                                MESSAGE
2m13s       Warning   FailedScheduling     pod/prometheus-alertmanager-c7644896-td8xv            0/17 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/17 nodes are available: 17 Preemption is not helpful for scheduling..
47m         Normal    SuccessfulCreate     replicaset/prometheus-alertmanager-c7644896           Created pod: prometheus-alertmanager-c7644896-td8xv
2m30s       Warning   ProvisioningFailed   persistentvolumeclaim/prometheus-alertmanager         storageclass.storage.k8s.io "prometheus" not found

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The prometheus-alertmanager and prometheus-server pods will remain in Pending status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k get po -n prometheus
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-c7644896-q2nzm           0/2     Pending   0          74s
prometheus-kube-state-metrics-8476bdcc64-f984p   1/1     Running   0          75s
prometheus-node-exporter-r82k7                   1/1     Running   0          74s
prometheus-pushgateway-665779d98f-zh2pf          1/1     Running   0          75s
prometheus-server-6fd8bc8576-csqt8               0/2     Pending   0          75s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
This is due to the missing 'prometheus' storage class, as the ProvisioningFailed event shows. Create the storage class as shown below, after first capturing a node's Availability Zone.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;EBS_AZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].metadata.labels['topology&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;kubernetes&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;io&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="s2"&gt;zone']}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Retain
allowedTopologies:
- matchLabelExpressions:
  - key: topology.ebs.csi.aws.com/zone
    values:
    - &lt;/span&gt;&lt;span class="nv"&gt;$EBS_AZ&lt;/span&gt;&lt;span class="s2"&gt;
"&lt;/span&gt; | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
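&lt;p&gt;Before piping the manifest straight into kubectl, it can help to render it to a file and inspect it first. This is only a sketch: the zone value below is a placeholder standing in for the EBS_AZ captured by the jsonpath query above.&lt;/p&gt;

```shell
# Sketch: render the StorageClass manifest to a file, then inspect it
# before applying. EBS_AZ is a placeholder here; in practice use the
# value returned by the jsonpath query against your nodes.
EBS_AZ="eu-north-1a"

cat > /tmp/prometheus-sc.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Retain
allowedTopologies:
- matchLabelExpressions:
  - key: topology.ebs.csi.aws.com/zone
    values:
    - ${EBS_AZ}
EOF

# Confirm the zone was substituted before applying:
grep 'eu-north-1a' /tmp/prometheus-sc.yaml
```

&lt;p&gt;Once the rendered file looks right, apply it with kubectl apply -f /tmp/prometheus-sc.yaml. Note that StorageClass is a cluster-scoped resource, so it needs no namespace.&lt;/p&gt;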






&lt;p&gt;&lt;strong&gt;Error 4:&lt;/strong&gt; Failed to provision volume with StorageClass "prometheus": rpc error: code = Internal desc = Could not create volume "pvc-48b7c3d8-d46a-47be-90e7-3d59eb3f5844": could not create volume in EC2: NoCredentialProviders: no valid providers in chain...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get events --sort-by=.metadata.creationTimestamp -n prometheus
LAST SEEN   TYPE      REASON                 OBJECT                                                MESSAGE
30s         Normal    Provisioning           persistentvolumeclaim/prometheus-alertmanager         External provisioner is provisioning volume for claim "prometheus/prometheus-alertmanager"
30s         Normal    Provisioning           persistentvolumeclaim/prometheus-server               External provisioner is provisioning volume for claim "prometheus/prometheus-server"
5s          Warning   ProvisioningFailed     persistentvolumeclaim/prometheus-alertmanager         failed to provision volume with StorageClass "prometheus": rpc error: code = Internal desc = Could not create volume "pvc-b7373f3b-3da9-47ac-8bfb-ad396816ce88": could not create volume in EC2: NoCredentialProviders: no valid providers in chain...
17s         Warning   ProvisioningFailed     persistentvolumeclaim/prometheus-server               failed to provision volume with StorageClass "prometheus": rpc error: code = Internal desc = Could not create volume "pvc-48b7c3d8-d46a-47be-90e7-3d59eb3f5844": could not create volume in EC2: NoCredentialProviders: no valid providers in chain...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
This issue arises from insufficient permissions assigned to the service account in the cluster, preventing it from provisioning the required persistent volumes.&lt;/p&gt;

&lt;p&gt;You need to set the service account details (with the required IAM policies attached to its role) while installing aws-ebs-csi-driver with Helm, as shown here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; aws-ebs-csi-driver &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.region&lt;span class="o"&gt;=&lt;/span&gt;eu-north-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; controller.serviceAccount.name&lt;span class="o"&gt;=&lt;/span&gt;ebs-csi-controller-sa &lt;span class="se"&gt;\&lt;/span&gt;
  aws-ebs-csi-driver/aws-ebs-csi-driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Error 5:&lt;/strong&gt; The service account is absent from the EKS cluster, yet it is listed by the &lt;code&gt;eksctl get iamserviceaccount&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
Check whether you added the '--role-only' option when creating the service account with eksctl; that flag creates only the IAM role and skips the Kubernetes ServiceAccount object.&lt;br&gt;
If so, delete the service account and recreate it without the '--role-only' option, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create iamserviceaccount &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; ebs-csi-controller-sa &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; api-dev &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; AmazonEKS_EBS_CSI_DriverRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--attach-policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--override-existing-serviceaccounts&lt;/span&gt; &lt;span class="nt"&gt;--approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, 'api-dev' is the cluster name. Replace it with your cluster name before running the command.&lt;/p&gt;
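&lt;p&gt;To confirm the recreated service account is usable, dump it and check for the IRSA role annotation that eksctl adds. The manifest below is an illustrative stand-in for real kubectl output, and the account ID is a placeholder.&lt;/p&gt;

```shell
# Sketch: check a dumped ServiceAccount manifest for the IRSA annotation.
# /tmp/sa.yaml stands in for the output of:
#   kubectl get sa ebs-csi-controller-sa -n kube-system -o yaml > /tmp/sa.yaml
cat > /tmp/sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole
EOF

if grep -q 'eks.amazonaws.com/role-arn' /tmp/sa.yaml; then
  echo "IRSA annotation present"
else
  echo "IRSA annotation missing - recreate without --role-only"
fi
```

&lt;p&gt;If the annotation is missing, the ebs-csi-controller pods cannot assume the IAM role, and you should recreate the service account as described above.&lt;/p&gt;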




&lt;p&gt;Thank you for taking the time to read 👏😊! I will continue to update this post as I encounter new issues. Feel free to mention any unlisted issues in the comment section. 🤝❤️&lt;/p&gt;

&lt;p&gt;Check my post on setting up &lt;a href="https://dev.to/nowsathk/setup-prometheus-and-grafana-with-existing-eks-fargate-cluster-monitoring-39he"&gt;Prometheus and Grafana with existing EKS Fargate cluster - Monitoring&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eks</category>
      <category>fargate</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
  </channel>
</rss>
