<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luqman Bello</title>
    <description>The latest articles on DEV Community by Luqman Bello (@luqmanbello).</description>
    <link>https://dev.to/luqmanbello</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F46320%2F5e20543c-c3c2-42f6-a7ad-09a658bee814.jpg</url>
      <title>DEV Community: Luqman Bello</title>
      <link>https://dev.to/luqmanbello</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luqmanbello"/>
    <language>en</language>
    <item>
      <title>Designing an Internal Developer Platform (IDP) on AWS Using Backstage, EKS &amp; Terraform</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Mon, 24 Nov 2025 10:33:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/designing-an-internal-developer-platform-idp-on-aws-using-backstage-eks-terraform-3m88</link>
      <guid>https://dev.to/aws-builders/designing-an-internal-developer-platform-idp-on-aws-using-backstage-eks-terraform-3m88</guid>
      <description>&lt;h2&gt;
  
  
  Introduction — Why IDPs Are Becoming Mandatory
&lt;/h2&gt;

&lt;p&gt;If you've worked in engineering long enough, you already know the truth nobody wants to say out loud:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Most developers don’t want to touch YAML, Terraform, Kubernetes manifests, or IaC pipelines. They just want their service to run.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And honestly… can we blame them?&lt;br&gt;
Between debugging a failing pod, hunting down IAM permissions, and figuring out why Terraform refuses to plan, developer energy gets drained on infrastructure instead of innovation.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Internal Developer Platforms (IDPs)&lt;/strong&gt; come in.&lt;/p&gt;

&lt;p&gt;IDPs are the “self-service layer” that removes friction.&lt;br&gt;
The best IDPs allow developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scaffold services&lt;/li&gt;
&lt;li&gt;spin up infrastructure&lt;/li&gt;
&lt;li&gt;deploy to multiple environments&lt;/li&gt;
&lt;li&gt;observe their workloads&lt;/li&gt;
&lt;li&gt;and do all this &lt;strong&gt;without knowing how the underlying platform works&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Today, you’ll learn how to build such a platform using:&lt;/p&gt;

&lt;p&gt;✔ &lt;strong&gt;Backstage&lt;/strong&gt; (developer portal)&lt;br&gt;
✔ &lt;strong&gt;Terraform&lt;/strong&gt; (self-service IaC)&lt;br&gt;
✔ &lt;strong&gt;ArgoCD&lt;/strong&gt; (GitOps)&lt;br&gt;
✔ &lt;strong&gt;Amazon EKS&lt;/strong&gt; (runtime platform)&lt;br&gt;
✔ &lt;strong&gt;AWS native services like IAM, KMS, Secrets Manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the same pattern used at modern engineering orgs and unicorns.&lt;/p&gt;

&lt;p&gt;So let’s build a real one, the right way.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Four Layers of a Modern IDP
&lt;/h2&gt;

&lt;p&gt;Every successful platform has the same four layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Developer Experience Layer  → Backstage
2. Platform Orchestration      → Terraform + GitOps (ArgoCD)
3. Runtime Infrastructure      → Amazon EKS + networking
4. Shared Services             → Observability, security, secrets, CI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break them down.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Developer Experience Layer — Backstage
&lt;/h2&gt;

&lt;p&gt;Backstage gives you:&lt;/p&gt;

&lt;h3&gt;
  
  
  💠 &lt;strong&gt;Software Catalog&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A single source of truth for all services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ownership&lt;/li&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;Documentation&lt;/li&gt;
&lt;li&gt;Tech stack&lt;/li&gt;
&lt;li&gt;Status&lt;/li&gt;
&lt;/ul&gt;
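
&lt;p&gt;As a sketch, a minimal catalog entry looks like this (the service and team names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# catalog-info.yaml — registered automatically by the template
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
  description: Handles payment processing
  annotations:
    github.com/project-slug: org/payments-api
spec:
  type: service
  lifecycle: production
  owner: team-payments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;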

&lt;h3&gt;
  
  
  💠 &lt;strong&gt;Golden Paths (Software Templates)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is the real magic.&lt;/p&gt;

&lt;p&gt;Instead of telling developers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Hey, here’s the Confluence page with our recommended architecture. Follow it.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You give them a &lt;strong&gt;self-service template&lt;/strong&gt; that does everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Git repo&lt;/li&gt;
&lt;li&gt;Generate CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Create Terraform infrastructure&lt;/li&gt;
&lt;li&gt;Register service in Backstage&lt;/li&gt;
&lt;li&gt;Deploy to EKS&lt;/li&gt;
&lt;li&gt;Wire up monitoring &amp;amp; alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With one click.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Platform Orchestration — Terraform + GitOps
&lt;/h2&gt;

&lt;p&gt;This is where your platform’s power comes from.&lt;/p&gt;

&lt;h3&gt;
  
  
  🟦 &lt;strong&gt;Terraform modules&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;These represent reusable “building blocks”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;microservice module&lt;/li&gt;
&lt;li&gt;VPC module&lt;/li&gt;
&lt;li&gt;RDS module&lt;/li&gt;
&lt;li&gt;DynamoDB table module&lt;/li&gt;
&lt;li&gt;S3 bucket module&lt;/li&gt;
&lt;li&gt;EKS namespace module&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers never write Terraform.&lt;br&gt;
They fill in a simple form in Backstage → Backstage generates Terraform config → GitOps takes over → ArgoCD deploys it.&lt;/p&gt;
&lt;h3&gt;
  
  
  🟥 &lt;strong&gt;GitOps (ArgoCD)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;everything&lt;/strong&gt; is deployed from Git&lt;/li&gt;
&lt;li&gt;automatic rollbacks&lt;/li&gt;
&lt;li&gt;drift detection&lt;/li&gt;
&lt;li&gt;clear audit trails&lt;/li&gt;
&lt;li&gt;separation between build (CI) and deploy (CD)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git is the source of truth.&lt;br&gt;
ArgoCD is the engine.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Runtime Infrastructure — Amazon EKS
&lt;/h2&gt;

&lt;p&gt;This is where workloads actually run.&lt;/p&gt;
&lt;h3&gt;
  
  
  Your EKS architecture includes:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-tenant namespaces&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IRSA&lt;/strong&gt; (IAM Roles for Service Accounts)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Karpenter&lt;/strong&gt; for fast autoscaling&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Policies&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pod Security Standards&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress controller&lt;/strong&gt; (ALB/Nginx/Ambassador)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal service mesh&lt;/strong&gt; (optional)&lt;/li&gt;
&lt;/ul&gt;
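
&lt;p&gt;For example, IRSA is wired up with nothing more than an annotation on a service account (the role ARN below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: myservice
  namespace: myservice
  annotations:
    # Pods using this service account assume this IAM role via the cluster OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/myservice-irsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;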

&lt;p&gt;EKS is your “runtime engine”, Backstage is the steering wheel, and GitOps is the autopilot.&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Shared Services Layer
&lt;/h2&gt;

&lt;p&gt;This includes all the “invisible” but necessary pieces:&lt;/p&gt;
&lt;h3&gt;
  
  
  ✔ Secrets
&lt;/h3&gt;

&lt;p&gt;AWS Secrets Manager + IRSA.&lt;/p&gt;
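
&lt;p&gt;One common pattern (assuming the External Secrets Operator is installed and a &lt;code&gt;ClusterSecretStore&lt;/code&gt; named &lt;code&gt;aws-secrets-manager&lt;/code&gt; exists) syncs a Secrets Manager entry into a Kubernetes secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myservice-db
  namespace: myservice
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: myservice-db            # Kubernetes secret to create
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: prod/myservice/db    # Secrets Manager secret name (placeholder)
        property: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;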
&lt;h3&gt;
  
  
  ✔ Observability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus&lt;/li&gt;
&lt;li&gt;Grafana&lt;/li&gt;
&lt;li&gt;Tempo / Loki / OTel Collector&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  ✔ Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;KMS encryption&lt;/li&gt;
&lt;li&gt;Audit logs&lt;/li&gt;
&lt;li&gt;IAM boundaries&lt;/li&gt;
&lt;li&gt;Registry scanning&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  ✔ CI Pipelines
&lt;/h3&gt;

&lt;p&gt;GitHub Actions or GitLab CI.&lt;br&gt;
CI builds → GitOps deploys.&lt;/p&gt;
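
&lt;p&gt;The handoff between the two is simple: CI pushes an image, then bumps the image tag in the Git repo that ArgoCD watches. A sketch (registry, repo layout, and file paths are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/workflows/ci.yaml — illustrative only
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      REGISTRY: 123456789012.dkr.ecr.us-east-1.amazonaws.com  # placeholder
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t $REGISTRY/myservice:$GITHUB_SHA .
          docker push $REGISTRY/myservice:$GITHUB_SHA
      - name: Bump image tag so ArgoCD deploys it
        run: |
          sed -i "s|image:.*|image: $REGISTRY/myservice:$GITHUB_SHA|" deploy/deployment.yaml
          git commit -am "deploy $GITHUB_SHA"
          git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;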
&lt;h2&gt;
  
  
  Full Architecture flow diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNp9VM1u00AQfpXVnopIQ_5qpz4ghaatov5FNb2AOWztiW1h71rrdSGtKpUrArWUXsgl4lKEEHckxMv0BegjMPY6oW6r-mDtfPPN7Dc_9hF1hQfUoqNIvHEDJhXZ3HW4wwk-abbvS5YEpA8HLx16Pf30_Xp6-ufq5OP19Px3jkIkEpAOfaUD8mdlc1BwkYBH8oSsh4oMszSosPY06dtP8oy5r1PFfCBDIRWL5jTg3h0hc3IRPb28Eb0w6A_JJhuDfFS56TnEScQUpEXI5zOyLiIPOBkyFfx3VktgqEP4OmBCbJAHoQsz-EGBO9INIFWSqVBwTHA1-fL31ykZ4iUjIeOqv6pzTdf0A0VJyQr2lvCy6JY4bOdOoou5eEd60hcr_RIkq9wPOTwocDfjKox1_yYnpBezQ8HJ6oZNFkrXfU3ctnUzLsk2iyFNmHtL1QaTSVHu1-IIXIEkvUyJ1GVRyP0K2T5wi3Qfzma9TXFP-pBEYhxjaPpgBTZuKXi6AeelNc9TuWdnvxz6-_yIDLYfRqEaE1vh2lQlgZZ0kUtyJaiUPCaD3ha-hyIK3RDuFbU3IIuLT2-sES8_gfvguamdayW6Vph6ghoqp5nD5Uw0PhtQ7sCC7oIovVSgV_W2ClqjMciYhR5-8Uc51aEqAMxPLTx6MGJZpBzq8GOkMhyfPeYutZTMoEalyPyAWiMWpWhliYdJ-yHDqcQzSsL4CyFumtQ6om-ptdhpLdW73U7H7JhGu9FtGzU6plarZdSXuw3TNFroa3a6xzV6WCRo1ttmq9lcMsx2x2gaZqNVo-CFSsgt_cMq_ls16su8mFIgTgbkisCOYOrlxvE_7rGe6g" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNp9VM1u00AQfpXVnopIQ_5qpz4ghaatov5FNb2AOWztiW1h71rrdSGtKpUrArWUXsgl4lKEEHckxMv0BegjMPY6oW6r-mDtfPPN7Dc_9hF1hQfUoqNIvHEDJhXZ3HW4wwk-abbvS5YEpA8HLx16Pf30_Xp6-ufq5OP19Px3jkIkEpAOfaUD8mdlc1BwkYBH8oSsh4oMszSosPY06dtP8oy5r1PFfCBDIRWL5jTg3h0hc3IRPb28Eb0w6A_JJhuDfFS56TnEScQUpEXI5zOyLiIPOBkyFfx3VktgqEP4OmBCbJAHoQsz-EGBO9INIFWSqVBwTHA1-fL31ykZ4iUjIeOqv6pzTdf0A0VJyQr2lvCy6JY4bOdOoou5eEd60hcr_RIkq9wPOTwocDfjKox1_yYnpBezQ8HJ6oZNFkrXfU3ctnUzLsk2iyFNmHtL1QaTSVHu1-IIXIEkvUyJ1GVRyP0K2T5wi3Qfzma9TXFP-pBEYhxjaPpgBTZuKXi6AeelNc9TuWdnvxz6-_yIDLYfRqEaE1vh2lQlgZZ0kUtyJaiUPCaD3ha-hyIK3RDuFbU3IIuLT2-sES8_gfvguamdayW6Vph6ghoqp5nD5Uw0PhtQ7sCC7oIovVSgV_W2ClqjMciYhR5-8Uc51aEqAMxPLTx6MGJZpBzq8GOkMhyfPeYutZTMoEalyPyAWiMWpWhliYdJ-yHDqcQzSsL4CyFumtQ6om-ptdhpLdW73U7H7JhGu9FtGzU6plarZdSXuw3TNFroa3a6xzV6WCRo1ttmq9lcMsx2x2gaZqNVo-CFSsgt_cMq_ls16su8mFIgTgbkisCOYOrlxvE_7rGe6g%3Ftype%3Dpng" alt="Full Architecture Diagram" width="784" 
height="146"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  🧬 &lt;strong&gt;Golden Path Template Example (Backstage)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a simplified template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scaffolder.backstage.io/v1beta3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Template&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-eks-service&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Service&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;on&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;EKS"&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Scaffold&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;microservice&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;with&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Terraform&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;GitOps"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;platform-team&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Service&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Info"&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Name of the service&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Team name&lt;/span&gt;

  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;template&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create Repo from Template&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fetch:template&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Module&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;publish:github&lt;/span&gt;
      &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;repoUrl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;parameters.name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform/main.tf&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;register&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Register in Backstage Catalog&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;catalog:register&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📦 &lt;strong&gt;Terraform Module Example (EKS Namespace)&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"namespace"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/org/platform-modules.git//eks-namespace"&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service_name&lt;/span&gt;
  &lt;span class="nx"&gt;team&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;team_name&lt;/span&gt;
  &lt;span class="nx"&gt;iam_roles&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;iam_roles&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔁 &lt;strong&gt;GitOps Flow (ArgoCD App)&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myservice&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/org/myservice&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy/&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myservice&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Real-World Challenges &amp;amp; Lessons Learned
&lt;/h2&gt;

&lt;p&gt;No one tells you these things until you experience them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Templates need &lt;em&gt;constant updating&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Teams will bypass your platform unless you make it genuinely delightful&lt;/li&gt;
&lt;li&gt;Backstage plugin versioning can be chaotic&lt;/li&gt;
&lt;li&gt;Keep the IDP simple: don’t over-engineer&lt;/li&gt;
&lt;li&gt;EKS upgrades require strict compatibility planning&lt;/li&gt;
&lt;li&gt;Terraform module contracts must be stable and versioned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But once it works, &lt;strong&gt;developer onboarding drops from weeks to minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts — The Future of IDPs
&lt;/h2&gt;

&lt;p&gt;The next evolution of IDPs will include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered troubleshooting&lt;/li&gt;
&lt;li&gt;Auto-generated infrastructure from natural language&lt;/li&gt;
&lt;li&gt;Predictive autoscaling&lt;/li&gt;
&lt;li&gt;Auto-remediation&lt;/li&gt;
&lt;li&gt;Learning-based deployment risk scoring&lt;/li&gt;
&lt;li&gt;AI-assisted documentation &amp;amp; code generation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And you’ll be ahead of the curve because you’re already building the foundation today.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Mastering Amazon EKS Auto Mode: Let Your Cluster Drive Itself (So You Can Work on the Fun Stuff)</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Tue, 21 Oct 2025 11:10:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/mastering-amazon-eks-auto-mode-let-your-cluster-drive-itself-so-you-can-work-on-the-fun-stuff-279k</link>
      <guid>https://dev.to/aws-builders/mastering-amazon-eks-auto-mode-let-your-cluster-drive-itself-so-you-can-work-on-the-fun-stuff-279k</guid>
      <description>&lt;p&gt;Cloud‑native adoption has turned us into part‑time cluster mechanics. We spend evenings wrestling with &lt;em&gt;YAML&lt;/em&gt; files, debugging tangled Helm charts and praying that our rollout doesn’t collide with a surprise Kubernetes upgrade. Wouldn’t it be nice to hand over the keys and let someone else handle the oil changes and tyre rotations?&lt;/p&gt;

&lt;p&gt;That’s exactly what Amazon EKS Auto Mode promises. Announced at &lt;a href="https://www.youtube.com/watch?v=a_aDPo9oTMo" rel="noopener noreferrer"&gt;re:Invent 2024&lt;/a&gt;, it lifts much of the day‑to‑day operational burden from platform teams. In this article, I’ll unpack what it is, how it works, and why it might just give you back your evenings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Needed Auto Mode
&lt;/h2&gt;

&lt;p&gt;If you’ve been building on Kubernetes for more than a few months, you know the drill: patching clusters, managing controllers, scaling nodes, and performing version upgrades. During an interview at re:Invent, Barry Cooks (AWS VP for Kubernetes) noted that customers increasingly ask AWS to “do more for us” because they simply don’t have time to manage clusters themselves. The platform engineering to‑do list is never empty, and the more mission‑critical your applications become, the less appealing it is to babysit infrastructure.&lt;/p&gt;

&lt;p&gt;That’s why managed services like EKS exist: they abstract away control plane operations. Auto Mode takes this a step further by extending that abstraction into the data plane and cluster lifecycle tasks. Instead of worrying about which controller to install or when to apply security patches, you let AWS handle it for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is EKS Auto Mode?
&lt;/h2&gt;

&lt;p&gt;EKS Auto Mode is a managed configuration of EKS designed to make Kubernetes almost “serverless.” When you opt in, AWS will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install and manage common controllers (such as autoscalers and networking add‑ons) for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handle version lifecycle management and deliver one‑click upgrades.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provision and patch EC2 nodes on your behalf, taking care of CVEs while still letting you benefit from on‑demand, reserved, or Spot pricing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, Auto Mode expands the “managed” surface area of EKS. You still interact with Kubernetes through the standard APIs, but the underlying mechanics of keeping the cluster healthy, secure, and up‑to‑date are handed over to AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Under the Hood: Karpenter &amp;amp; the CNCF
&lt;/h3&gt;

&lt;p&gt;If you’re familiar with &lt;strong&gt;Karpenter&lt;/strong&gt;, AWS’s open‑source cluster autoscaler, Auto Mode will feel like a natural evolution. Karpenter dynamically provisions right‑sized nodes based on your workloads. In 2024, AWS donated Karpenter to the CNCF, signalling a commitment to open governance. Auto Mode runs Karpenter behind the scenes: when your workloads scale up or down, Karpenter decides what EC2 instances to launch and when to terminate them. You get the benefit of efficient autoscaling without having to maintain the tool yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud7tl7okojce23c1zkbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud7tl7okojce23c1zkbt.png" alt="How Auto Mode works" width="800" height="974"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above illustrates the idea: your pods connect to a central service that provisions nodes and controllers on the fly. You focus on building and deploying containers while Auto Mode orchestrates the underlying compute and networking resources.&lt;/p&gt;
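
&lt;p&gt;Because the Karpenter concepts carry over, you can still shape provisioning declaratively. A hedged sketch of a custom node pool in an Auto Mode cluster (the exact fields differ slightly from open-source Karpenter; values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com   # Auto Mode's built-in NodeClass
        kind: NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  limits:
    cpu: "100"                     # cap total provisioned CPU
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;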

&lt;h2&gt;
  
  
  Getting Started with Auto Mode
&lt;/h2&gt;

&lt;p&gt;If you are ready to take your hands off the wheel, here’s a high‑level overview of how to enable Auto Mode on a new cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an EKS cluster with Auto Mode enabled. You can use the AWS CLI or eksctl. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; my-auto-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--managed&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auto-mount-service-account-token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-karpenter&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-auto-mode&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--enable-auto-mode&lt;/code&gt; flag tells EKS to handle controller installation and node lifecycle management for you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Grant necessary IAM permissions. Auto Mode needs a cluster role that allows EKS to provision and patch EC2 instances on your behalf. Make sure the role includes &lt;code&gt;AmazonEKSClusterPolicy&lt;/code&gt; plus the Auto Mode managed policies: &lt;code&gt;AmazonEKSComputePolicy&lt;/code&gt;, &lt;code&gt;AmazonEKSBlockStoragePolicy&lt;/code&gt;, &lt;code&gt;AmazonEKSLoadBalancingPolicy&lt;/code&gt;, and &lt;code&gt;AmazonEKSNetworkingPolicy&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy your workloads as usual. Use &lt;code&gt;kubectl&lt;/code&gt; or your GitOps pipeline to apply manifests. Karpenter and the EKS control plane will watch your pods and ensure the right amount of compute is available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enjoy simplified upgrades. When new Kubernetes versions are released, Auto Mode provides a single‑click upgrade path in the console or a one‑line command via the AWS CLI.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
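
&lt;p&gt;Nothing about your manifests changes; declaring resource requests is what lets Karpenter pick right-sized instances. A minimal example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: public.ecr.aws/nginx/nginx:latest
          resources:
            requests:
              cpu: 500m        # Karpenter uses these requests to choose instance sizes
              memory: 512Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;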

&lt;h2&gt;
  
  
  Use Cases &amp;amp; Predictions
&lt;/h2&gt;

&lt;p&gt;Auto Mode isn't a fit for every use case, but it is particularly appealing for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Small teams or startups without dedicated SREs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edge or hybrid deployments, where you want a consistent cluster experience across environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI/ML workloads that require rapid scaling up and down.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;According to an article from &lt;a href="https://www.fairwinds.com/blog/kubernetes-2025-top-5-trends-predictions#:~:text=1,will%20find%20the%20journey%20worthwhile" rel="noopener noreferrer"&gt;fairwinds.com&lt;/a&gt;, industry analysts expect Karpenter adoption to explode in the coming years, and predict that organizations will consolidate clusters, experiment more with multi‑cloud and hybrid strategies, and even migrate away from legacy virtualization platforms. Auto Mode sits at the intersection of these trends: it abstracts away low‑level operations while still giving you the flexibility to run in multiple environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Potential Caveats
&lt;/h3&gt;

&lt;p&gt;Auto Mode isn’t a one‑size‑fits‑all solution. You may still choose standard EKS if you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fine‑grained control over add‑ons and node groups. Auto Mode manages common controllers automatically; if your architecture depends on niche or highly customized components, you might prefer manual control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On‑premises custom networking. Hybrid Nodes and Auto Mode aim to bring cloud‑style management to on‑prem hardware, but networking requirements can be complex.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regulatory or compliance controls that require explicit sign‑off on upgrades and patches.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To round this up, EKS Auto Mode isn’t a silver bullet—but it is a powerful tool in the ever‑expanding AWS toolbox. By delegating routine cluster maintenance to AWS, you free up valuable time to focus on what truly matters: delivering features, improving reliability, and delighting your users.&lt;/p&gt;

&lt;p&gt;As with any new service, the best way to evaluate Auto Mode is to try it. Spin up a test cluster, deploy a sample workload, and see how it feels to let your infrastructure drive itself. Then come back and share your experiences because the community learns faster when we learn together. Arigato 🙇🏽&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building Production-Grade ECS Anywhere Infrastructure with Custom Capacity Providers</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Fri, 01 Nov 2024 14:40:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-production-grade-ecs-anywhere-infrastructure-with-custom-capacity-providers-jgj</link>
      <guid>https://dev.to/aws-builders/building-production-grade-ecs-anywhere-infrastructure-with-custom-capacity-providers-jgj</guid>
      <description>&lt;p&gt;Hey there, fellow infrastructure engineers! If you're reading this, you're probably looking to level up your container game with ECS Anywhere. Maybe you've got a hybrid infrastructure setup, or perhaps you're dealing with edge locations that need the same love as your cloud workloads. Whatever brought you here, I've got you covered.&lt;/p&gt;

&lt;p&gt;In this guide, we'll dive deep into building a production-ready ECS Anywhere infrastructure with custom capacity providers. But don't worry – we'll keep it real and focus on practical, battle-tested approaches that you can actually use.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We'll Cover
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Why ECS Anywhere (And Why Should You Care?)&lt;/li&gt;
&lt;li&gt;The Architecture That Actually Works&lt;/li&gt;
&lt;li&gt;Setting Things Up (The Right Way)&lt;/li&gt;
&lt;li&gt;Custom Capacity Providers (The Secret Sauce)&lt;/li&gt;
&lt;li&gt;Making It Production-Ready&lt;/li&gt;
&lt;li&gt;When Things Go Wrong (And They Will)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why ECS Anywhere?
&lt;/h2&gt;

&lt;p&gt;Let's be honest – not everything belongs in the cloud. Whether you're dealing with regulatory requirements, existing infrastructure investments, or edge computing needs, sometimes you need to run containers outside AWS. That's where ECS Anywhere comes in.&lt;/p&gt;

&lt;p&gt;Here's what makes it interesting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;☁️ Cloud Benefits + On-Prem Control = ECS Anywhere
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But here's what they don't tell you in the basic tutorials: the default capacity provider might not cut it for production workloads. That's why we're building a custom one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture That Actually Works
&lt;/h2&gt;

&lt;p&gt;Before we dive into the code, let's talk architecture. Here's what we're building:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNpVkN1qhEAMhV8l5Hp9AS8KOtrSqxUslDJ6EZysSp0ZiTMtsuy7d2p_YHOVnPMRTnLFwRvGHEehdYKXsnOQqtDFawvKuyB-gWYhxz1k2QOUulYtFG7_nFi4_6HLw1JaxS14C4pWGuawQyP-YzYsv5g6sEq_-Sjw7C5CW5A4hPi_qDqIWp9d1gjbO_VR12a8B5_0OaQYoBYfzdbjCS2Lpdmke67fYIfJt9xhnlpD8t5h526Joxh8u7sB85SATyg-jtPfEFdDgauZ0k8s5hdaNr59ATq8XZg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNpVkN1qhEAMhV8l5Hp9AS8KOtrSqxUslDJ6EZysSp0ZiTMtsuy7d2p_YHOVnPMRTnLFwRvGHEehdYKXsnOQqtDFawvKuyB-gWYhxz1k2QOUulYtFG7_nFi4_6HLw1JaxS14C4pWGuawQyP-YzYsv5g6sEq_-Sjw7C5CW5A4hPi_qDqIWp9d1gjbO_VR12a8B5_0OaQYoBYfzdbjCS2Lpdmke67fYIfJt9xhnlpD8t5h526Joxh8u7sB85SATyg-jtPfEFdDgauZ0k8s5hdaNr59ATq8XZg%3Ftype%3Dpng" alt="Architecture" width="485" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why this setup? Because it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keeps your ops team sane with unified management&lt;/li&gt;
&lt;li&gt;Handles real-world scaling scenarios&lt;/li&gt;
&lt;li&gt;Doesn't fall apart under pressure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Things Up
&lt;/h2&gt;

&lt;p&gt;First things first. Here's what you'll need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Don't just copy-paste this - make sure you understand each part&lt;/span&gt;
aws &lt;span class="nt"&gt;--version&lt;/span&gt;  &lt;span class="c"&gt;# Needs v2.13.0+&lt;/span&gt;

&lt;span class="c"&gt;# Create your cluster with external capacity provider&lt;/span&gt;
aws ecs create-cluster &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; prod-hybrid &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--capacity-providers&lt;/span&gt; EXTERNAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚨 &lt;strong&gt;Pro Tip&lt;/strong&gt;: Always use a test environment first. I learned this the hard way when I accidentally scaled down production instances. Not fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Secret Sauce: Custom Capacity Provider
&lt;/h2&gt;

&lt;p&gt;This is where things get interesting. Here's a custom capacity provider that actually works in production:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CustomCapacityProvider&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CapacityProviderConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Trust me, you want these logs&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setupLogging&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;evaluateCapacity&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMetrics&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

      &lt;span class="c1"&gt;// Don't just check CPU - that's a rookie mistake&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;needsScaling&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scaleCluster&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// You'll thank me for this error handling later&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleScalingError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nf"&gt;needsScaling&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ClusterMetrics&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Real-world scaling logic that won't wake you up at 3 AM&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cpuUtilization&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;70&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; 
           &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;memoryUtilization&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
           &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pendingTasks&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what makes this implementation special:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It handles edge cases (literally, if you're running at the edge)&lt;/li&gt;
&lt;li&gt;It won't flap like a fish out of water during traffic spikes&lt;/li&gt;
&lt;li&gt;It logs what you actually need to debug issues&lt;/li&gt;
&lt;/ul&gt;
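
&lt;p&gt;That "won't flap" claim rests on one boring mechanism: a cooldown between scaling actions. Here's a minimal sketch of that gate in shell; the 300-second window and the epoch-timestamp bookkeeping are assumptions to tune for your workload, not values from AWS:&lt;/p&gt;

```shell
# Hypothetical cooldown gate: allow a scaling action only when enough
# time has passed since the previous one.
COOLDOWN_SECONDS=300

can_scale() {
  # $1 = epoch seconds of the last scaling action, $2 = current epoch seconds
  elapsed=$(($2 - $1))
  if [ "$elapsed" -ge "$COOLDOWN_SECONDS" ]; then
    echo "scale"
  else
    echo "hold"
  fi
}

# Example: last action at t=0, checked at t=400, so the window has passed
can_scale 0 400
```

&lt;p&gt;In the TypeScript provider above, the equivalent check would live at the top of &lt;code&gt;evaluateCapacity&lt;/code&gt;, short-circuiting before any metrics are even fetched.&lt;/p&gt;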

&lt;h2&gt;
  
  
  Making It Production-Ready
&lt;/h2&gt;

&lt;p&gt;Now, let's talk about what it takes to make this production-ready. Here are some battle-tested patterns:&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring That Actually Helps
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MetricsPublisher&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;publishMetrics&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cloudwatch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;putMetricData&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;Namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ECS/CustomCapacityProvider&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;MetricData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="c1"&gt;// These are the metrics you'll actually look at&lt;/span&gt;
          &lt;span class="na"&gt;MetricName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FailedTaskAllocation&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getFailedTaskCount&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
          &lt;span class="na"&gt;Unit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Count&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="c1"&gt;// Add more metrics that matter&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;promise&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 &lt;strong&gt;Real Talk&lt;/strong&gt;: Don't just monitor everything. Monitor what matters. My team once spent hours chasing a "problem" that turned out to be a noisy metric.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security That Makes Sense
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecs:RegisterExternalInstance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecs:DeregisterExternalInstance"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;condition&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;saved&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;during&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;security&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;audit&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"aws:ResourceTag/Environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Production"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


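&lt;p&gt;One thing the policy implies but doesn't show: actually registering an on-prem host. The standard ECS Anywhere flow looks roughly like this. The role name, region, and activation variables are placeholders; the first command runs from a machine with AWS credentials, the rest on the host itself (so there's no meaningful way to dry-run it outside a real environment):&lt;/p&gt;

```shell
# Create an SSM activation tied to the instance role (role name is a placeholder)
aws ssm create-activation \
    --iam-role ecsAnywhereRole \
    --registration-limit 1

# On the on-prem host: AWS's published install script registers the node
# into the cluster using the activation ID and code returned above.
curl -fsSL -o ecs-anywhere-install.sh \
    "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash ecs-anywhere-install.sh \
    --cluster prod-hybrid \
    --activation-id "$ACTIVATION_ID" \
    --activation-code "$ACTIVATION_CODE" \
    --region us-east-1
```

&lt;p&gt;Once the script finishes, the host shows up under the cluster's container instances, which is exactly where the troubleshooting section below picks up.&lt;/p&gt;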

&lt;h2&gt;
  
  
  When Things Go Wrong
&lt;/h2&gt;

&lt;p&gt;Because they will. Here's your survival guide:&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Issues I've Hit (So You Don't Have To)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The "Missing Instance" Problem&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First, check if SSM Agent is actually running&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status amazon-ssm-agent

&lt;span class="c"&gt;# If it's not, here's the fix&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart amazon-ssm-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;The "Scaling Won't Stop" Issue&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ScalingManager&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;applyBackoff&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// This backoff strategy saved our bacon during a traffic spike&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;backoffMinutes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;failureCount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="mi"&gt;30&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;backoffMinutes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
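
&lt;p&gt;Backoff arithmetic is easy to get wrong by a factor of two, so here's the same capped linear formula as a standalone shell function you can sanity-check (it mirrors the &lt;code&gt;Math.min(failureCount * 2, 30)&lt;/code&gt; above):&lt;/p&gt;

```shell
# Capped linear backoff: failureCount * 2 minutes, never more than 30.
backoff_minutes() {
  minutes=$(($1 * 2))
  if [ "$minutes" -gt 30 ]; then
    minutes=30
  fi
  echo "$minutes"
}

backoff_minutes 3    # 3 failures, so 6 minutes
backoff_minutes 20   # 20 failures, capped at 30
```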



&lt;h3&gt;
  
  
  Real-World Debugging
&lt;/h3&gt;

&lt;p&gt;Here's a debugging flow that's saved me countless hours:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the ECS agent logs&lt;/li&gt;
&lt;li&gt;Verify Systems Manager connectivity&lt;/li&gt;
&lt;li&gt;Look for capacity provider events&lt;/li&gt;
&lt;li&gt;Check your custom metrics
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# The holy grail of debugging commands&lt;/span&gt;
aws ecs describe-container-instances &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cluster&lt;/span&gt; prod-hybrid &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--container-instances&lt;/span&gt; &lt;span class="nv"&gt;$INSTANCE_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
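
&lt;p&gt;When you run that command, the first field to check is &lt;code&gt;agentConnected&lt;/code&gt;. Here's a quick filter you could drop into a health-check script; the sample payload is a trimmed stand-in for the real API response, and &lt;code&gt;grep -c&lt;/code&gt; is the quick-and-dirty version of what you'd normally do with &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;

```shell
# Count container instances whose ECS agent has dropped its connection.
count_disconnected() {
  printf '%s' "$1" | grep -c '"agentConnected":false'
}

# Trimmed example of a describe-container-instances response
sample='{"containerInstances":[{"ec2InstanceId":"mi-0123456789abcdef0","agentConnected":false}]}'
count_disconnected "$sample"
```

&lt;p&gt;Anything above zero means it's time to go check the SSM Agent on those hosts, per issue 1 above.&lt;/p&gt;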



&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;After running this in production for a while, here are some key takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start Small&lt;/strong&gt;: Don't try to boil the ocean. Get a basic setup working and iterate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor Wisely&lt;/strong&gt;: Focus on actionable metrics. Nobody wants another noisy dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate Recovery&lt;/strong&gt;: Because nobody wants to SSH into servers at 3 AM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Everything&lt;/strong&gt;: Your future self will thank you.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Building a production-grade ECS Anywhere infrastructure isn't just about following AWS documentation. It's about understanding your workloads, planning for failure, and building systems that can be maintained by humans.&lt;/p&gt;

&lt;p&gt;Remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test thoroughly (seriously)&lt;/li&gt;
&lt;li&gt;Start with a simple capacity provider&lt;/li&gt;
&lt;li&gt;Add complexity only when needed&lt;/li&gt;
&lt;li&gt;Keep those logs meaningful&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's Next?
&lt;/h3&gt;

&lt;p&gt;If you're looking to take this further, consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implementing cross-region failover&lt;/li&gt;
&lt;li&gt;Adding custom metrics for your specific use case&lt;/li&gt;
&lt;li&gt;Building automated testing for your capacity provider&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Got questions? Hit me up in the comments. I'd love to hear about your ECS Anywhere adventures!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;P.S. If you found this helpful, I'd love to hear about your implementation stories. Drop a comment below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>aws</category>
      <category>containers</category>
    </item>
    <item>
      <title>The Ultimate Comprehensive Guide to Setting Up Self-Hosted Airbyte on EKS Using Karpenter</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Tue, 17 Sep 2024 12:13:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-ultimate-comprehensive-guide-to-setting-up-self-hosted-airbyte-on-eks-using-karpenter-59ap</link>
      <guid>https://dev.to/aws-builders/the-ultimate-comprehensive-guide-to-setting-up-self-hosted-airbyte-on-eks-using-karpenter-59ap</guid>
      <description>&lt;p&gt;&lt;em&gt;Table of Contents&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;
Setting Up EKS with Karpenter

&lt;ul&gt;
&lt;li&gt;Creating an EKS Cluster&lt;/li&gt;
&lt;li&gt;Installing Karpenter&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Deploying Airbyte on EKS

&lt;ul&gt;
&lt;li&gt;Configuring Airbyte Resources&lt;/li&gt;
&lt;li&gt;Deploying Airbyte&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Upgrading Airbyte OSS Version&lt;/li&gt;
&lt;li&gt;Monitoring and Maintenance&lt;/li&gt;
&lt;li&gt;Troubleshooting Common Errors&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Airbyte is an open-source data integration platform that enables you to consolidate your data from various sources to data warehouses, lakes, and databases. This 2024 guide walks you through setting up a production-ready, self-hosted Airbyte instance on Amazon EKS (Elastic Kubernetes Service) using Karpenter for dynamic node provisioning. The setup includes best practices for security, scalability, and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Account&lt;/strong&gt;: An active AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: Version 2.13.0 or later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt;: Version 1.28 or later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: Version 3.12 or later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;eksctl&lt;/strong&gt;: Version 0.163.0 or later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic knowledge&lt;/strong&gt;: Kubernetes, AWS, and command-line operations&lt;/li&gt;
&lt;/ul&gt;
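
&lt;p&gt;Rather than discovering a missing tool halfway through the install, it's worth running a quick preflight loop first. A minimal sketch (it only checks presence on the PATH, not the version minimums listed above):&lt;/p&gt;

```shell
# Report whether each required CLI is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

for tool in aws kubectl helm eksctl; do
  check_tool "$tool"
done
```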

&lt;h2&gt;
  
  
  Setting Up EKS with Karpenter
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating an EKS Cluster
&lt;/h3&gt;

&lt;p&gt;Create a modern EKS cluster using &lt;code&gt;eksctl&lt;/code&gt; with the latest supported version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; airbyte-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.28 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--with-oidc&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--zones&lt;/span&gt; us-west-2a,us-west-2b &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; standard-workers &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-type&lt;/span&gt; t3.large &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-min&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-max&lt;/span&gt; 4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--managed&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--addons&lt;/span&gt; aws-ebs-csi-driver,vpc-cni,coredns,kube-proxy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-volume-size&lt;/span&gt; 100 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--external-dns-access&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key changes from previous versions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updated to EKS 1.28 (latest stable)&lt;/li&gt;
&lt;li&gt;Added essential EKS add-ons&lt;/li&gt;
&lt;li&gt;Increased default node size to t3.large for better performance&lt;/li&gt;
&lt;li&gt;Added larger root volume for containers&lt;/li&gt;
&lt;li&gt;Enabled external DNS access for better integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installing Karpenter
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create AWS IAM Resources for Karpenter&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using the latest Karpenter IAM setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"airbyte-cluster"&lt;/span&gt;
   &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCOUNT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws sts get-caller-identity &lt;span class="nt"&gt;--query&lt;/span&gt; Account &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;

   curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://karpenter.sh/v0.32.0/getting-started/getting-started-with-eksctl/cloudformation.yaml  &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; cloudformation.yaml

   aws cloudformation deploy &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--stack-name&lt;/span&gt; &lt;span class="s2"&gt;"karpenter-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--template-file&lt;/span&gt; cloudformation.yaml &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--capabilities&lt;/span&gt; CAPABILITY_NAMED_IAM &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--parameter-overrides&lt;/span&gt; &lt;span class="s2"&gt;"ClusterName=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Karpenter Controller&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm repo add karpenter https://charts.karpenter.sh
   helm repo update

   helm &lt;span class="nb"&gt;install &lt;/span&gt;karpenter karpenter/karpenter &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--version&lt;/span&gt; v0.32.0 &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccount.annotations.&lt;span class="s2"&gt;"eks&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;amazonaws&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;com/role-arn"&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:iam::&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCOUNT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:role/karpenter-controller-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; settings.aws.clusterName&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; settings.aws.defaultInstanceProfile&lt;span class="o"&gt;=&lt;/span&gt;KarpenterNodeInstanceProfile-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; settings.aws.interruptionQueueName&lt;span class="o"&gt;=&lt;/span&gt;karpenter-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; controller.resources.requests.cpu&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; controller.resources.requests.memory&lt;span class="o"&gt;=&lt;/span&gt;1Gi &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; controller.resources.limits.cpu&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--set&lt;/span&gt; controller.resources.limits.memory&lt;span class="o"&gt;=&lt;/span&gt;1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create a NodePool (Formerly Provisioner)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create &lt;code&gt;provisioner.yaml&lt;/code&gt; (in Karpenter v1beta1 the Provisioner CRD is replaced by NodePool plus EC2NodeClass; the file name is just a convention):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1beta1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePool&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;requirements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kubernetes.io/arch"&lt;/span&gt;
             &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
             &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amd64"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kubernetes.io/os"&lt;/span&gt;
             &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
             &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linux"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;karpenter.sh/capacity-type"&lt;/span&gt;
             &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
             &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;on-demand"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node.kubernetes.io/instance-type"&lt;/span&gt;
             &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
             &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t3.large"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t3.xlarge"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m5.large"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m5.xlarge"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
         &lt;span class="na"&gt;nodeClassRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
     &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100Gi&lt;/span&gt;
     &lt;span class="na"&gt;disruption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;consolidationPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WhenUnderutilized&lt;/span&gt;
       &lt;span class="na"&gt;consolidateAfter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
     &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EC2NodeClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;amiFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AL2&lt;/span&gt;
  &lt;span class="na"&gt;roleClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;subnetSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CLUSTER_NAME}&lt;/span&gt;
  &lt;span class="na"&gt;securityGroupSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CLUSTER_NAME}&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CLUSTER_NAME}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; provisioner.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
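&lt;p&gt;To confirm the resources were registered (assuming the v1beta1 Karpenter CRDs are installed), list them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodepools
kubectl get ec2nodeclasses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;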



&lt;h2&gt;
  
  
  Deploying Airbyte on EKS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configuring Airbyte Resources
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create Namespace and Storage Class&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl create namespace airbyte

   &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: airbyte-storage
   provisioner: ebs.csi.aws.com
   parameters:
     type: gp3
     encrypted: "true"
   reclaimPolicy: Retain
   allowVolumeExpansion: true
   volumeBindingMode: WaitForFirstConsumer
&lt;/span&gt;&lt;span class="no"&gt;   EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set Up Persistent Storage&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte-pvc&lt;/span&gt;
     &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte-storage&lt;/span&gt;
     &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
     &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
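&lt;p&gt;Assuming the manifest above is saved as &lt;code&gt;airbyte-pvc.yaml&lt;/code&gt; (the filename is arbitrary), apply it and confirm the claim exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f airbyte-pvc.yaml
kubectl get pvc -n airbyte
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that because the storage class uses &lt;code&gt;WaitForFirstConsumer&lt;/code&gt;, the PVC will stay &lt;code&gt;Pending&lt;/code&gt; until a pod mounts it.&lt;/p&gt;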



&lt;h3&gt;
  
  
  Deploying Airbyte
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Add Airbyte Helm Repository&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm repo add airbyte https://airbytehq.github.io/helm-charts
   helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create Custom Values File&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create &lt;code&gt;airbyte-values.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte-admin&lt;/span&gt;
     &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte-db-secrets&lt;/span&gt;
     &lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

   &lt;span class="na"&gt;webapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
       &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.5"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;

   &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4Gi&lt;/span&gt;
       &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;

   &lt;span class="na"&gt;worker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4Gi&lt;/span&gt;
       &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
         &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;

   &lt;span class="na"&gt;persistence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
     &lt;span class="na"&gt;storageClass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airbyte-storage&lt;/span&gt;
     &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Gi&lt;/span&gt;

   &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
     &lt;span class="na"&gt;serviceMonitor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Airbyte&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm &lt;span class="nb"&gt;install &lt;/span&gt;airbyte airbyte/airbyte &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--namespace&lt;/span&gt; airbyte &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--values&lt;/span&gt; airbyte-values.yaml &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--version&lt;/span&gt; 0.50.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Verify Deployment&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; airbyte
   kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; airbyte
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
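&lt;p&gt;To reach the Airbyte UI locally, you can port-forward the webapp service. The exact service name varies between chart versions, so check the output of &lt;code&gt;kubectl get svc -n airbyte&lt;/code&gt; first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/airbyte-airbyte-webapp-svc -n airbyte 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The UI should then be available at &lt;code&gt;http://localhost:8080&lt;/code&gt;.&lt;/p&gt;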



&lt;h2&gt;
  
  
  Upgrading Airbyte OSS Version
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Backup Before Upgrade&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="c"&gt;# Backup using Velero or similar tool&lt;/span&gt;
   velero backup create airbyte-backup &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--include-namespaces&lt;/span&gt; airbyte
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update Helm Repository&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm repo update
   helm search repo airbyte/airbyte &lt;span class="nt"&gt;--versions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Upgrade Airbyte&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm upgrade airbyte airbyte/airbyte &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--namespace&lt;/span&gt; airbyte &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--values&lt;/span&gt; airbyte-values.yaml &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--version&lt;/span&gt; &amp;lt;NEW_VERSION&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--timeout&lt;/span&gt; 15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
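&lt;p&gt;If the upgrade misbehaves, Helm keeps the release history, so you can inspect past revisions and roll back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm history airbyte -n airbyte
helm rollback airbyte &amp;lt;REVISION&amp;gt; -n airbyte
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;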



&lt;h2&gt;
  
  
  Monitoring and Maintenance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set Up Monitoring&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="c"&gt;# Install Prometheus and Grafana&lt;/span&gt;
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm repo update

   helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus prometheus-community/kube-prometheus-stack &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
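&lt;p&gt;With a release named &lt;code&gt;prometheus&lt;/code&gt;, the kube-prometheus-stack chart typically exposes Grafana as a service called &lt;code&gt;prometheus-grafana&lt;/code&gt; (verify with &lt;code&gt;kubectl get svc -n monitoring&lt;/code&gt;). You can fetch the generated admin password and port-forward the dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret prometheus-grafana -n monitoring -o jsonpath="{.data.admin-password}" | base64 -d
kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;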



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Configure Log Aggregation&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use AWS CloudWatch Logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/cloudwatch-logs/cloudwatch-logs-configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting Common Errors
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pod Scheduling Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: Pods stuck in &lt;code&gt;Pending&lt;/code&gt; state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Check Karpenter logs and node pool configuration:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; karpenter &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/name&lt;span class="o"&gt;=&lt;/span&gt;karpenter &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Database Connection Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: Database connection errors in Airbyte logs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Verify secrets and connection strings:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl get secrets &lt;span class="nt"&gt;-n&lt;/span&gt; airbyte
  kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; airbyte deployment/airbyte-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Resource Constraints
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: OOMKilled or CPU throttling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Adjust resource limits in values.yaml:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4Gi"&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2Gi"&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Karpenter Node Provisioning Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Symptom&lt;/strong&gt;: Nodes not being provisioned&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Check NodePool and EC2NodeClass configurations:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl describe nodepool default
  kubectl describe ec2nodeclass default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
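&lt;p&gt;It also helps to inspect the NodeClaims Karpenter has created and any recent scheduling events in the workload namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodeclaims
kubectl get events -n airbyte --sort-by=.lastTimestamp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;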






&lt;p&gt;This guide provides a robust foundation for running Airbyte on EKS with Karpenter. Remember to regularly check for updates and security patches for all components. For production environments, consider implementing additional security measures and backup strategies.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;For questions, issues, or suggestions, please leave a comment below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>karpenter</category>
      <category>eks</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Managing AWS EKS Clusters Locally Using Lens</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Tue, 10 Oct 2023 14:48:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/managing-aws-eks-clusters-locally-using-lens-5n6</link>
      <guid>https://dev.to/aws-builders/managing-aws-eks-clusters-locally-using-lens-5n6</guid>
      <description>&lt;p&gt;Kubernetes has undoubtedly become the go-to container orchestration platform, but managing its clusters can often be complex and tedious. This is where Lens comes in. Often dubbed as the "Kubernetes IDE,".&lt;/p&gt;

&lt;p&gt;Lens is more than just a Kubernetes IDE; it's your one-stop-shop for effortless cluster management, offering a unique blend of usability and functionality. The tool shines exceptionally when working with Amazon Elastic Kubernetes Service (EKS), providing an intuitive and user-friendly interface to manage and monitor your clusters. In this article, we'll walk through the steps required to manage an AWS EKS cluster using Lens from a local machine.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpfp4cwzb2rewog2hur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpfp4cwzb2rewog2hur.png" alt="Lens IDE" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An active AWS account with the requisite permissions.&lt;/li&gt;
&lt;li&gt;AWS CLI installed and configured&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; installed&lt;/li&gt;
&lt;li&gt;EKS cluster up and running&lt;/li&gt;
&lt;li&gt;Lens IDE installed on your local machine&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step-by-step guide:
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Setting up AWS CLI and Authenticating:
&lt;/h3&gt;

&lt;p&gt;Before we connect Lens to EKS, ensure your AWS CLI is authenticated and has access to the EKS cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fill in your AWS Access Key ID, Secret Access Key, default region name, and default output format when prompted.&lt;br&gt;
For a more detailed guide on how to setup AWS CLI, you can use &lt;a href="https://zacks.one/aws-cli/" rel="noopener noreferrer"&gt;Zacks Blog&lt;/a&gt;&lt;/p&gt;
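&lt;p&gt;You can verify that your credentials work before moving on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should print the account ID and ARN of the identity the CLI is using.&lt;/p&gt;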
&lt;h3&gt;
  
  
  Setting Up Your EKS Cluster
&lt;/h3&gt;

&lt;p&gt;If you haven't got an EKS cluster running, you can set one up using the &lt;code&gt;eksctl&lt;/code&gt; tool, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name my-nodes --nodes 2 --nodes-min 1 --nodes-max 3 --managed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;my-cluster&lt;/code&gt; is your cluster name, and &lt;code&gt;my-nodes&lt;/code&gt; is your node group. The cluster will be created in the &lt;code&gt;us-west-2&lt;/code&gt; AWS region with two EC2 instances. Feel free to adjust these as per your needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Lens
&lt;/h3&gt;

&lt;p&gt;You can download Lens from its &lt;a href="https://k8slens.dev/" rel="noopener noreferrer"&gt;official website&lt;/a&gt; and follow the installation guide for your respective operating system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting the Dots: kubectl and EKS
&lt;/h3&gt;

&lt;p&gt;Before you can manage your EKS cluster with Lens, make sure your &lt;code&gt;kubectl&lt;/code&gt; is configured to communicate with it. Run the following command to accomplish this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will update your Kubernetes configuration file (&lt;code&gt;~/.kube/config&lt;/code&gt;), paving the way for Lens to connect to your EKS cluster.&lt;/p&gt;
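&lt;p&gt;A quick sanity check confirms that &lt;code&gt;kubectl&lt;/code&gt; can now reach the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config current-context
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;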

&lt;h3&gt;
  
  
  Let's Dive into Lens
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Connecting Lens to EKS
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Start Lens: Open the Lens application and navigate to the cluster management view.&lt;/li&gt;
&lt;li&gt;Add Cluster: Click on the "+" icon to initiate the addition of a new cluster.&lt;/li&gt;
&lt;li&gt;Select kubeconfig: Choose the updated ~/.kube/config file.&lt;/li&gt;
&lt;li&gt;Connect: After the cluster loads, hit the "Connect" button.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Navigating the Interface
&lt;/h3&gt;

&lt;p&gt;Once connected, you'll be greeted with a wealth of options. From nodes, pods, and services to deployments and config maps, Lens makes it easier to manage them all.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq3zm437iiqu4042ogpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq3zm437iiqu4042ogpf.png" alt="Lens Dashboard" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Management: CRUD Operations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create: Use the "Create" button to upload a YAML configuration file or manually input the YAML data to create a new Kubernetes resource.&lt;/li&gt;
&lt;li&gt;Update: Simply click on the resource you wish to update, make the required changes, and save.&lt;/li&gt;
&lt;li&gt;Delete: To remove a resource, select it and click on the trash bin icon.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Lens Advantage: Monitoring &amp;amp; Observability
&lt;/h3&gt;

&lt;p&gt;Lens comes equipped with built-in monitoring features, offering real-time insights into your EKS cluster. Metrics like CPU utilization, memory usage, and disk I/O can be accessed straight from the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp91hzqt07wqbeuj0mt1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp91hzqt07wqbeuj0mt1v.png" alt="Cluster" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Features
&lt;/h3&gt;

&lt;p&gt;Lens isn't just about monitoring; it's a full-fledged Kubernetes management IDE. You get features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time cluster metrics&lt;/li&gt;
&lt;li&gt;Direct terminal access to nodes and pods&lt;/li&gt;
&lt;li&gt;An extensive catalog of extensions for more functionalities
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdedb9vy6edenwuy56mij.png" alt="Lens Features" width="800" height="528"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Locking It Down: Security Aspects
&lt;/h3&gt;

&lt;p&gt;Lens provides critical security features like Role-Based Access Control (RBAC) and Secrets management to ensure that your EKS cluster is well-secured.&lt;/p&gt;

&lt;p&gt;To wrap up, Lens offers a compelling and intuitive way to manage your AWS EKS clusters, making it a must-have tool for anyone in the DevOps, Cloud Computing, or Kubernetes ecosystem. Its ease of setup and rich feature set make it an excellent choice for cluster management.&lt;/p&gt;

&lt;p&gt;For further queries or discussion, feel free to drop a comment below. If you found this guide helpful, consider sharing it with your network!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Harnessing the Power of AWS Workflow Studio for DevOps Automation</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Fri, 29 Sep 2023 16:27:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/harnessing-the-power-of-aws-workflow-studio-for-devops-automation-1g30</link>
      <guid>https://dev.to/aws-builders/harnessing-the-power-of-aws-workflow-studio-for-devops-automation-1g30</guid>
      <description>&lt;h3&gt;
  
  
  Introduction:
&lt;/h3&gt;

&lt;p&gt;In the dynamic realm of DevOps, the ability to streamline operational processes is invaluable. AWS Workflow Studio emerges as a compelling tool, offering a visual workflow designer to automate tasks and orchestrate AWS services seamlessly. This article delves into the key features of AWS Workflow Studio, its integrative capabilities with other AWS services, and the remarkable ease it brings to the DevOps arena.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with AWS Workflow Studio:
&lt;/h3&gt;

&lt;p&gt;AWS Workflow Studio provides a user-friendly interface to design, test, and deploy workflows. Even without extensive AWS Step Functions knowledge, DevOps professionals can navigate through the platform, designing workflows that automate complex operational tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visual Interface:
&lt;/h3&gt;

&lt;p&gt;The drag-and-drop interface abstracts the complexity of underlying configurations, making it easier to visualize and design workflows.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6vc4pudvmxj098h5ruk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6vc4pudvmxj098h5ruk.png" alt="AWS Workflow" width="800" height="787"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Services Integration:
&lt;/h3&gt;

&lt;p&gt;AWS Workflow Studio facilitates seamless integration with other AWS services like AWS Lambda, SNS, and SQS. This integrative ability enhances the creation of robust automation solutions, like setting up notification systems or processing pipelines.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkz2vujtmm9qmobju10c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkz2vujtmm9qmobju10c.png" alt="AWS Anthena" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;
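&lt;p&gt;As a sketch of what Workflow Studio generates behind the scenes, here is a minimal, hypothetical Amazon States Language definition that invokes a Lambda function and then publishes the result to an SNS topic; the function name and topic ARN are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "Comment": "Hypothetical example: process an event, then notify",
  "StartAt": "ProcessEvent",
  "States": {
    "ProcessEvent": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "my-processor",
        "Payload.$": "$"
      },
      "Next": "NotifyTeam"
    },
    "NotifyTeam": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:ops-alerts",
        "Message.$": "$.Payload"
      },
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;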

&lt;h3&gt;
  
  
  Deployment:
&lt;/h3&gt;

&lt;p&gt;Once a workflow is designed, deploying it is straightforward. AWS Workflow Studio automatically generates the necessary AWS Step Functions code, which can be deployed directly or with minor modifications as per requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  The DevOps Impact:
&lt;/h3&gt;

&lt;p&gt;The introduction of AWS Workflow Studio augments the DevOps toolkit in multiple ways:&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting and Optimization:
&lt;/h3&gt;

&lt;p&gt;The visual representation of workflows simplifies the troubleshooting process, helping to identify bottlenecks or errors swiftly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation:
&lt;/h3&gt;

&lt;p&gt;Visual workflows act as self-documenting processes, promoting a better understanding among the team and ensuring everyone is on the same page regarding operational workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collaboration:
&lt;/h3&gt;

&lt;p&gt;By providing a clear visualization of workflows, AWS Workflow Studio fosters better collaboration among development, operations, and other cross-functional teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Efficiency:
&lt;/h3&gt;

&lt;p&gt;Streamlining the design and deployment of workflows significantly enhances operational efficiency, freeing up time and resources for other critical tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Anticipated Challenges and Mitigation Strategies:
&lt;/h3&gt;

&lt;p&gt;While AWS Workflow Studio is a robust tool, integrating it into existing DevOps workflows might present challenges. Ensuring a smooth transition requires a well-thought-out strategy. One potential challenge could be the learning curve for professionals new to AWS Workflow Studio. Providing training sessions and creating comprehensive documentation can mitigate this challenge. Additionally, while the automatic generation of AWS Step Functions code accelerates workflow deployment, it may not always align with specific organizational standards or complex workflow requirements. A thorough review and possibly manual tweaking of the generated code might be necessary to ensure compliance and optimal performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;AWS Workflow Studio is more than just a workflow designer; it's an enabler for DevOps professionals striving for operational excellence. The ease of designing, testing, and deploying workflows, coupled with the seamless integration with other AWS services, makes it a noteworthy addition to the DevOps toolkit.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>awscommunitybuilders</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Using Amazon’s Kubernetes Distribution Everywhere with Amazon EKS Distro</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Sat, 06 Aug 2022 19:25:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-amazons-kubernetes-distribution-everywhere-with-amazon-eks-distro-2fc2</link>
      <guid>https://dev.to/aws-builders/using-amazons-kubernetes-distribution-everywhere-with-amazon-eks-distro-2fc2</guid>
      <description>&lt;p&gt;Using Kubernetes in the public cloud is easy. Especially when using managed services like Amazon Elastic Kubernetes Service (Amazon EKS). EKS is one of the most popular Kubernetes distros. When using EKS, there is no need to administer examination level nodes, etcd nodes or other examination level components. This simplicity allows you to focus on your application. But in real life scenarios, you sometimes need to run Kubernetes clusters in on-premises environments. Maybe because of the regulation restrictions, compliance requirements or you may need the lowest latency when accessing your clusters or applications. &lt;/p&gt;

&lt;p&gt;There are so many Kubernetes distributions out there, and most of them are CNCF certified. But that means you need to choose from many options, try them, and implement the one that suits your needs, check the security posture of your distro, and find suitable tooling for your deployments. In other words, we all need standardization.&lt;/p&gt;

&lt;p&gt;In December 2020, AWS announced Amazon EKS Distro. EKS Distro is the same Kubernetes distribution that powers Amazon EKS, packaged so that you can deploy secure and reliable Kubernetes clusters in any environment. EKS Distro lets you use the same tooling and the same Kubernetes versions and dependencies as EKS. You also don't have to worry about security patches for the distribution, because each EKS Distro release ships with the latest patches, and EKS Distro follows the same process EKS uses to validate Kubernetes versions. This means you are always running a trusted, tested Kubernetes distribution in your environment.&lt;/p&gt;

&lt;p&gt;EKS Distro is an open source project on GitHub. You can check out the repository at this link. &lt;a href="https://github.com/aws/eks-distro/" rel="noopener noreferrer"&gt;https://github.com/aws/eks-distro/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can install EKS Distro on bare-metal servers or virtual machines in your data center, or even on other public cloud providers. Unlike EKS, when using EKS Distro you have to manage the control plane nodes, worker nodes, etcd, and all control plane components yourself. This comes with additional operational burden, but you still get the benefit of not having to worry about the security or reliability of the Kubernetes distribution itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzyl8wfqsvf5qwn0ge4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzyl8wfqsvf5qwn0ge4c.png" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the screenshot above, each EKS deployment option has its own features; the right column lists the options and features of EKS Distro. As I’ve mentioned before, with EKS Distro you provide your own infrastructure and manage the control plane yourself, and you can use different third-party CNI plugins according to your needs. The biggest difference is that, unlike EKS Anywhere, there are no enterprise support offerings from AWS for EKS Distro.&lt;/p&gt;

&lt;p&gt;The project is on GitHub and supported by the community. When you run into problems, or when you want to contribute to the project, you can file an issue or search for solutions among the previous issues in the repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s see EKS Distro in action!
&lt;/h3&gt;

&lt;p&gt;When installing EKS Distro, you can choose a launch partner’s installation options, or you can use familiar community tools like kubeadm or kops.&lt;/p&gt;

&lt;p&gt;I will demonstrate the installation of EKS Distro with kubeadm in this blog post.&lt;/p&gt;

&lt;p&gt;First of all, for the installation with kubeadm, you need an RPM-based Linux system. I am using a CentOS system for this demonstration.&lt;/p&gt;

&lt;p&gt;I have installed Docker 19.03 version, disabled swap and disabled SELinux on the machine.&lt;/p&gt;
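&lt;p&gt;For reference, the host preparation described above can be sketched roughly as follows (a sketch, assuming a CentOS 7 host with the Docker CE yum repository already configured; package versions and names may differ in your environment):&lt;/p&gt;

```shell
# Disable swap (kubelet refuses to run with swap enabled)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Put SELinux into permissive mode, now and across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install Docker 19.03 and start it (assumes the docker-ce repo is added)
sudo yum install -y docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io
sudo systemctl enable --now docker
```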

&lt;p&gt;I will install kubelet, kubectl and kubeadm with the commands below. These commands first install the upstream packages, then overwrite the binaries in /usr/bin with the EKS Distro builds of Kubernetes 1.19.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

cd /usr/bin

sudo wget https://distro.eks.amazonaws.com/kubernetes-1-19/releases/4/artifacts/kubernetes/v1.19.8/bin/linux/amd64/kubelet; \
sudo wget https://distro.eks.amazonaws.com/kubernetes-1-19/releases/4/artifacts/kubernetes/v1.19.8/bin/linux/amd64/kubeadm; \
sudo wget https://distro.eks.amazonaws.com/kubernetes-1-19/releases/4/artifacts/kubernetes/v1.19.8/bin/linux/amd64/kubectl

sudo chmod +x kubeadm kubectl kubelet

sudo systemctl enable kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx9x5isfixik7iv35170.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx9x5isfixik7iv35170.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After enabling kubelet service, I am adding some arguments for kubeadm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /var/lib/kubelet

sudo vi /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=public.ecr.aws/eks-distro/kubernetes/pause:3.2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I will pull the necessary EKS Distro container images and tag them accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker pull public.ecr.aws/eks-distro/kubernetes/pause:v1.19.8-eks-1-19-4;\
sudo docker pull public.ecr.aws/eks-distro/coredns/coredns:v1.8.0-eks-1-19-4;\
sudo docker pull public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.14-eks-1-19-4;\
sudo docker tag public.ecr.aws/eks-distro/kubernetes/pause:v1.19.8-eks-1-19-4 public.ecr.aws/eks-distro/kubernetes/pause:3.2;\
sudo docker tag public.ecr.aws/eks-distro/coredns/coredns:v1.8.0-eks-1-19-4 public.ecr.aws/eks-distro/kubernetes/coredns:1.7.0;\
sudo docker tag public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.14-eks-1-19-4 public.ecr.aws/eks-distro/kubernetes/etcd:3.4.13-0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I will add some other configurations as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/modules-load.d/k8s.conf

br_netfilter

sudo vi /etc/sysctl.d/99-k8s.conf

net.bridge.bridge-nf-call-iptables = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s initialize the cluster!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm init --image-repository public.ecr.aws/eks-distro/kubernetes --kubernetes-version v1.19.8-eks-1-19-4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is mostly the same as a usual kubeadm init run. As you can see from the screenshot, it includes the kubeadm join command for the worker nodes and the steps for accessing the cluster with the kubeconfig file. Let me do that and access my Kubernetes cluster installed with EKS Distro.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s run kubectl get nodes command and see the output.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feveu85ysk7ies54kctis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feveu85ysk7ies54kctis.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;br&gt;
As you can see, I am able to connect to the cluster and see the kubectl get nodes output, but the node is in NotReady status. The reason is that I need to add a Pod network add-on by installing a CNI plugin on the cluster. I will use Calico CNI for this demonstration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing the Calico CNI, my master node is now in Ready state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4qb511lrar4mivb5e68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4qb511lrar4mivb5e68.png" alt="Image description" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wdz1mab1b91052z5em1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wdz1mab1b91052z5em1.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;br&gt;
I have configured the worker nodes with the same prerequisites, such as installing Docker and disabling swap. I will also pull and tag the necessary container image for the Kubernetes cluster with these commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker pull public.ecr.aws/eks-distro/kubernetes/pause:v1.19.8-eks-1-19-4;\
sudo docker tag public.ecr.aws/eks-distro/kubernetes/pause:v1.19.8-eks-1-19-4 public.ecr.aws/eks-distro/kubernetes/pause:3.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can now move on with adding a worker node to the cluster. I will use the kubeadm join command from the kubeadm init command output.&lt;/p&gt;
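&lt;p&gt;The join command printed by kubeadm init looks like the following (the endpoint, token, and certificate hash here are placeholders for illustration, not values from this cluster):&lt;/p&gt;

```shell
# Run on each worker node; substitute the values from your own kubeadm init output
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```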

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqb48fl7wfn5flj4xnz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqb48fl7wfn5flj4xnz1.png" alt="Image description" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I run the kubectl get nodes command, I can see the other node in Ready state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcgvcawffj0p96cypvug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcgvcawffj0p96cypvug.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, my worker node has now joined the cluster, and I can see the pods in the kube-system namespace. &lt;/p&gt;

&lt;p&gt;My Kubernetes cluster is installed with EKS Distro and ready for deploying application workloads!&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Having a tested, verified and reliable Kubernetes distribution for production workloads is extremely important. This is why EKS is one of the most widely used and most popular Kubernetes distributions. Being able to run the same distribution Amazon uses for the managed EKS service on any infrastructure and platform is a huge advantage.&lt;/p&gt;

&lt;p&gt;If you have compliance requirements or regulatory restrictions and cannot use public cloud platforms, you should absolutely give EKS Distro a try.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon Elastic Container Registry (Amazon ECR)</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Sun, 29 Aug 2021 22:18:57 +0000</pubDate>
      <link>https://dev.to/luqmanbello/amazon-elastic-container-registry-amazon-ecr-18hk</link>
      <guid>https://dev.to/luqmanbello/amazon-elastic-container-registry-amazon-ecr-18hk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6krqvfh7ym3gyhpf11t5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6krqvfh7ym3gyhpf11t5.png" alt="image" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Registry (Amazon ECR) is an Amazon Web Services (AWS) product that stores, manages and deploys Docker container images. Amazon ECR allows AWS developers to save packaged application images and quickly move them into a production environment, reducing overall workload.&lt;/p&gt;

&lt;p&gt;Amazon ECR provides a command-line interface (CLI) and APIs to manage repositories, and integrates with services such as Amazon Elastic Container Service (Amazon ECS), which provisions and manages the infrastructure for these containers. The primary difference between Amazon ECR and ECS is that ECR provides the repository that stores code that has been written and packaged as a Docker image, while ECS takes these images and actively uses them to deploy applications.&lt;/p&gt;

&lt;p&gt;A developer can use the Docker command line interface to push or pull container images to or from an AWS region. Amazon ECR can be used wherever a Docker container service is running, including on-premises environments. AWS Elastic Beanstalk also supports Amazon ECR for multi-container environments.&lt;/p&gt;
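&lt;p&gt;As a sketch, a typical push from the Docker CLI looks like this (the account ID, region, and repository name are placeholders, not real values):&lt;/p&gt;

```shell
# Authenticate the Docker CLI against the registry in a region
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag a local image with the registry/repository name, then push it
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```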

&lt;h2&gt;
  
  
  How Amazon ECR works
&lt;/h2&gt;

&lt;p&gt;First, a developer writes and packages code in the form of a Docker image and pushes it to Amazon Elastic Container Registry. Next, Amazon ECR compresses, encrypts and manages access to the images -- including all tags and versions -- and controls image lifecycles. Finally, Amazon ECS pulls the necessary Docker images from ECR to deploy apps, and continues to manage containers everywhere -- including Amazon Elastic Kubernetes Service (Amazon EKS), the AWS cloud and on-premises networks.&lt;/p&gt;

&lt;p&gt;Furthermore, Amazon ECR automatically encrypts container images at rest with Amazon Simple Storage Service (Amazon S3) server-side encryption and allows administrators to use AWS Identity and Access Management (AWS IAM) to create restrictions that limit access to each repository. The container registry stores container images in S3 for high availability. &lt;/p&gt;

&lt;h2&gt;
  
  
  Components of Amazon ECR
&lt;/h2&gt;

&lt;p&gt;Amazon ECR includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker images - The files used to run code within a Docker container.&lt;/li&gt;
&lt;li&gt;Repository - Docker images are stored in an Amazon ECR repository. Developers can push images to, and pull images from, a repository.&lt;/li&gt;
&lt;li&gt;Repository policy - Developers can use these policies to manage access to repositories and the images within them.&lt;/li&gt;
&lt;li&gt;Registry - Each AWS account is provided with an Amazon ECR registry in which repositories can be created and images stored.&lt;/li&gt;
&lt;li&gt;Authorization token - Before it can push and pull images, the Docker client must authenticate to the registry with a token tied to an AWS account.&lt;/li&gt;
&lt;/ul&gt;
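&lt;p&gt;These components map directly onto AWS CLI calls; a minimal sketch (the repository name is a placeholder):&lt;/p&gt;

```shell
# Create a repository in the account's registry
aws ecr create-repository --repository-name my-app

# List repositories, and the images a repository holds
aws ecr describe-repositories
aws ecr describe-images --repository-name my-app
```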

&lt;h2&gt;
  
  
  Amazon ECR security and other benefits
&lt;/h2&gt;

&lt;p&gt;One of the greatest benefits provided by Amazon ECR is increased security. All images in Amazon ECR are transferred over HTTPS. Images at rest are automatically encrypted to ensure enhanced security. As mentioned before, developers can use AWS IAM to create policies that control permissions and manage access to images. This can be done without altering credentials directly on the EC2 instances. Policies can also be designed to control cross-account image sharing.&lt;/p&gt;
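&lt;p&gt;For illustration, a repository policy granting another account pull access might look like this (the account ID here is a placeholder):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::210987654321:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```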

&lt;p&gt;When accessing Amazon ECR through an Amazon Virtual Private Cloud (Amazon VPC) endpoint, AWS security groups attached to the endpoint's network interface control which hosts are allowed to reach it. AWS security groups are virtual firewalls at the instance level that are easily created, attached and deleted. For example, there may be a security group assigned to all the EC2 instances in a cluster managed by an AWS Auto Scaling group; developers can create a rule that allows the VPC endpoint to be accessed by all instances in that security group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other benefits of Amazon ECR include:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;High availability - The Amazon ECR architecture is highly scalable, durable and redundant. As a result, the Docker images are easily available and accessible and users can feasibly and dependably deploy new containers for their applications.&lt;/li&gt;
&lt;li&gt;Streamlines workflow - Integration with Amazon ECS and the Docker CLI allows users to simplify their development and production work processes by facilitating continuous integration (CI) and continuous deployment (CD) in Amazon ECS. Furthermore, container images can be easily pushed to Amazon ECR with the Docker CLI. From there, Amazon ECS can easily pull the images directly and use them for production deployments.&lt;/li&gt;
&lt;li&gt;Completely managed - Amazon ECR does not include any software that will need to be installed and managed or an infrastructure that has to be scaled. Users simply push images to ECR and pull them with any container management tool when they're needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Configuration and implementation of Amazon ECR
&lt;/h2&gt;

&lt;p&gt;An AWS account is the first necessity when setting up Amazon ECR. When a user registers for an AWS account, they automatically get signed up for all of the services; they will only pay for the services they use.&lt;/p&gt;

&lt;p&gt;Once the user has an AWS account, they can download the AWS Command Line Interface (AWS CLI) and Docker software.&lt;/p&gt;

&lt;p&gt;All services in AWS require users to provide credentials so that AWS can determine whether the user has permission to access the protected resources. The AWS console requires a password; however, using the account's root credentials for everyday access is not recommended. Instead, AWS IAM is recommended for a more secure authentication process: an AWS IAM user can access AWS using a special sign-in URL and their unique user credentials.&lt;/p&gt;

</description>
      <category>ecr</category>
      <category>aws</category>
      <category>containers</category>
    </item>
    <item>
      <title>Amazon WorkDocs</title>
      <dc:creator>Luqman Bello</dc:creator>
      <pubDate>Sun, 29 Aug 2021 21:49:17 +0000</pubDate>
      <link>https://dev.to/luqmanbello/amazon-workdocs-28hm</link>
      <guid>https://dev.to/luqmanbello/amazon-workdocs-28hm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc52nj4ibpyyp7yaj5zw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc52nj4ibpyyp7yaj5zw6.png" alt="image" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can easily create, edit, and share content, and because it’s stored centrally on AWS, access it from anywhere on any device. Amazon WorkDocs makes it easy to collaborate with others, letting you share content, provide rich feedback, and collaboratively edit documents. You can use Amazon WorkDocs to retire legacy file share infrastructure by moving file shares to the cloud. Amazon WorkDocs integrates with your existing systems, and offers a rich API so that you can develop your own content-rich applications. Amazon WorkDocs is built on AWS, where your content is secured on the world's largest cloud infrastructure.&lt;/p&gt;

&lt;p&gt;With Amazon WorkDocs, there are no upfront fees or commitments. You pay only for active user accounts, and the storage you use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Migrate your on-premises file servers and reduce costs significantly&lt;/li&gt;
&lt;li&gt;Securely share with internal teams and external users in real-time&lt;/li&gt;
&lt;li&gt;Secure your content in the cloud&lt;/li&gt;
&lt;li&gt;Bring content into your applications and processes&lt;/li&gt;
&lt;li&gt;Route your documents using approval workflow&lt;/li&gt;
&lt;li&gt;Extend your desktop to the cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One common use case for Amazon WorkDocs is replacing expensive legacy file sharing services, mostly referred to as enterprise content management (ECM) solutions. With Amazon WorkDocs, you can easily migrate existing content from legacy network file shares to the cloud, and your users can continue to access all their individual and team shared content from their native desktop file systems through WorkDocs Drive, or through the web user interface or mobile application.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecm</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
