<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Theo Jung</title>
    <description>The latest articles on DEV Community by Theo Jung (@youngjin).</description>
    <link>https://dev.to/youngjin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1141682%2F1e282e34-5566-400d-ad3c-063e16318c06.jpeg</url>
      <title>DEV Community: Theo Jung</title>
      <link>https://dev.to/youngjin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/youngjin"/>
    <language>en</language>
    <item>
      <title>How Do You Play Around with Terraform CLI Versions?</title>
      <dc:creator>Theo Jung</dc:creator>
      <pubDate>Sun, 23 Mar 2025 02:57:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-do-you-play-around-with-terraform-cli-versions-2ab9</link>
      <guid>https://dev.to/aws-builders/how-do-you-play-around-with-terraform-cli-versions-2ab9</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I wasn’t able to keep up with my blog in 2024 because a lot was going on, so today I’d like to introduce an easy tool for managing Terraform CLI versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdmuxo4snyrol0k5judj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdmuxo4snyrol0k5judj.png" alt="Image description" width="795" height="841"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have any questions, feel free to leave them in the comments, and I will do my best to answer them to the best of my ability.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I started using Terraform at v0.11. Terraform has since evolved rapidly, reaching v1.12.0-alpha, but in real-world environments we usually pin a specific version, so we had been using Terraform v1.5.7 or earlier.&lt;/p&gt;

&lt;p&gt;Recently, however, while preparing to teach, I started using Terraform v1.9.0+ and was looking for a way to make local setup easier.&lt;/p&gt;

&lt;p&gt;That’s when I came across an open-source tool called tfenv.&lt;br&gt;
&lt;a href="https://github.com/tfutils/tfenv" rel="noopener noreferrer"&gt;https://github.com/tfutils/tfenv&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;tfenv is an open-source tool for managing Terraform CLI versions, modeled after rbenv.&lt;/p&gt;

&lt;p&gt;When working with multiple versions, tfenv makes switching the Terraform CLI as easy as switching Java versions with &lt;code&gt;sudo alternatives --config java&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s take a look at how it works.&lt;/p&gt;
&lt;h3&gt;
  
  
  tfenv Installation
&lt;/h3&gt;

&lt;p&gt;There are two main installation methods.&lt;/p&gt;

&lt;p&gt;The automatic method lets macOS users install it easily with Homebrew; it can also be installed via yay or a Puppet module (though those are less commonly used).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnew7g1euu3z2txaz000n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnew7g1euu3z2txaz000n.png" alt="Image description" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The manual method involves cloning the repository and putting its bin directory on your PATH (or symlinking its scripts) so the bundled shell scripts can be run directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Clone tfenv
git clone --depth=1 https://github.com/tfutils/tfenv.git ~/.tfenv

# Choose method 1 or 2
#1 Add the tfenv bin directory to the PATH environment variable
echo 'export PATH="$HOME/.tfenv/bin:$PATH"' &amp;gt;&amp;gt; ~/.zprofile

#2 Create symbolic links (may require sudo)
ln -s ~/.tfenv/bin/* /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, you can use the tfenv command without installing it via Homebrew.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Terraform CLI Versions with tfenv
&lt;/h3&gt;

&lt;p&gt;If you’ve completed the installation above, let’s now manage Terraform CLI versions using tfenv.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd5a3ualgtjctzk70yr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd5a3ualgtjctzk70yr.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run &lt;code&gt;tfenv&lt;/code&gt; by itself, you’ll see a command list like the one above.&lt;/p&gt;

&lt;p&gt;To check which Terraform CLI versions are available to install locally, run &lt;code&gt;tfenv list-remote&lt;/code&gt;; it displays the list of installable versions shown in the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r5vktygv5cfg7qrksoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r5vktygv5cfg7qrksoi.png" alt="Image description" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After checking the versions, you can specify one with &lt;code&gt;tfenv install &amp;lt;version&amp;gt;&lt;/code&gt;, or run a command like &lt;code&gt;tfenv install latest&lt;/code&gt;, and the Terraform CLI will be installed accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the version specified by TFENV_TERRAFORM_VERSION or .terraform-version
# priority : TFENV_TERRAFORM_VERSION &amp;gt; .terraform-version
tfenv install

# Install Terraform version 1.9.0
tfenv install 1.9.0

# Install the latest Terraform version
tfenv install latest

# Install the latest version of Terraform 1.8
tfenv install latest:^1.8

# Install the latest allowed or the minimum required Terraform CLI version
tfenv install latest-allowed
tfenv install min-required
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
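&lt;p&gt;As a rough illustration of how a constraint like &lt;code&gt;latest:^1.8&lt;/code&gt; is resolved (this is a sketch, not tfenv's actual implementation): the part after &lt;code&gt;latest:&lt;/code&gt; is treated as a pattern against the version list, and the highest match wins.&lt;/p&gt;

```shell
# Sketch of tfenv-style constraint resolution against a static sample
# list; tfenv itself matches against the real remote version list.
versions="1.5.7
1.8.0
1.8.5
1.9.0
1.11.2"

resolve_latest() {
  # $1 is a pattern matched at the start of each version string
  echo "$versions" | grep -E "^$1" | sort -V | tail -n 1
}

resolve_latest '1\.8'   # newest 1.8.x in the sample list: 1.8.5
resolve_latest '1\.'    # newest version overall: 1.11.2
```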



&lt;p&gt;Let's install version 1.9.0 locally using the first method.&lt;/p&gt;

&lt;p&gt;First, you need to specify either the TFENV_TERRAFORM_VERSION environment variable or a .terraform-version file. As shown in the picture below, we will use a .terraform-version file.&lt;/p&gt;
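&lt;p&gt;The pin itself is just a one-line file; for example (the version number here mirrors this post's example):&lt;/p&gt;

```shell
# Pin Terraform 1.9.0 for the current project directory; tfenv reads
# .terraform-version from the current directory (or a parent) when
# installing or switching versions.
echo "1.9.0" > .terraform-version
cat .terraform-version   # prints 1.9.0
# afterwards: tfenv install
```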

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ygzuc6lzgj788mnqbma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ygzuc6lzgj788mnqbma.png" alt="Image description" width="716" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, when you run &lt;code&gt;tfenv install&lt;/code&gt;, the Terraform CLI is downloaded according to the .terraform-version file specified above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovym4106gtgtp5b4bxhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovym4106gtgtp5b4bxhh.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use the installed Terraform v1.9.0, simply run &lt;code&gt;tfenv use 1.9.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k4cqb1bkkdvnlwjbzz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k4cqb1bkkdvnlwjbzz2.png" alt="Image description" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let’s install another version. We will install the latest version and try switching versions.&lt;/p&gt;

&lt;p&gt;Before doing that, you need to unset the TFENV_TERRAFORM_VERSION environment variable.&lt;br&gt;
If you don’t, the environment variable will take precedence even after you install and switch to a new version, and the version switch will not take effect.&lt;/p&gt;
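&lt;p&gt;The precedence rule can be sketched like this (an illustration, not tfenv's actual code): when both are set, the environment variable wins over the file, which is why it must be unset first.&lt;/p&gt;

```shell
# Sketch of the version-resolution precedence:
# TFENV_TERRAFORM_VERSION beats .terraform-version when both exist.
resolve_version() {
  if [ -n "${TFENV_TERRAFORM_VERSION:-}" ]; then
    echo "$TFENV_TERRAFORM_VERSION"
  elif [ -f .terraform-version ]; then
    cat .terraform-version
  fi
}

echo "1.9.0" > .terraform-version
TFENV_TERRAFORM_VERSION=1.11.2
resolve_version               # env var wins: 1.11.2
unset TFENV_TERRAFORM_VERSION
resolve_version               # file is used: 1.9.0
```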

&lt;p&gt;At the time of writing, installing the latest version gives you Terraform 1.11.2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaf3ski8q02k4c8zcl31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaf3ski8q02k4c8zcl31.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can move from the installed 1.9.0 to 1.11.2 with a single &lt;code&gt;tfenv use&lt;/code&gt; command, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4twy7xt94ctc2k78cng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4twy7xt94ctc2k78cng.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;tfenv also supports debug-related environment variables such as TFENV_DEBUG, so it’s highly recommended to check the official repository for more details!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enjoy the Terraform ride!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>iac</category>
      <category>awscommunitybuilder</category>
      <category>aws</category>
    </item>
    <item>
      <title>Scaling with Karpenter and Empty Pod(A.k.a Overprovisioning) - Part 2</title>
      <dc:creator>Theo Jung</dc:creator>
      <pubDate>Sat, 21 Oct 2023 10:51:47 +0000</pubDate>
      <link>https://dev.to/aws-builders/scaling-with-karpenter-and-empty-podaka-overprovisioning-part-2-eeo</link>
      <guid>https://dev.to/aws-builders/scaling-with-karpenter-and-empty-podaka-overprovisioning-part-2-eeo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, following the previous &lt;a href="https://dev.to/aws-builders/scaling-with-karpenter-and-empty-podaka-overprovisioning-1j5j"&gt;Scaling with Karpenter and Empty Pod (A.k.a Overprovisioning) - Part 1&lt;/a&gt;, we will cover the PriorityClass and empty pod (overprovisioner) features provided by Kubernetes, and overprovisioning using Karpenter.&lt;br&gt;
We will focus on PriorityClass and empty pods (overprovisioner) and see how Karpenter is used alongside them.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is overprovisioning?
&lt;/h2&gt;

&lt;p&gt;First, let's look at the pictures to understand what overprovisioning is.&lt;/p&gt;

&lt;p&gt;Typically, in an EKS environment, when user requests increase and CPU or memory utilization rises, the HPA (Horizontal Pod Autoscaler) scales out the pods (Figure 1).&lt;/p&gt;

&lt;p&gt;At this point, if no existing node has enough CPU or memory to allocate to the scaled-out pods, those pods are not assigned to a node and wait in a Pending state until one is created. After some time, new nodes are provisioned.&lt;/p&gt;

&lt;p&gt;Once the nodes are ready, the Kubernetes scheduler assigns the pods to the newly created nodes. Only then can the pods scaled out by HPA begin processing requests.&lt;/p&gt;

&lt;p&gt;If a large number of requests suddenly arrives while you are waiting for the pods on the new node to become ready, the existing pods may not be able to handle the load, resulting in 500 errors or pods being killed by OOM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbhixgsvj5ygaf3dm9n5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbhixgsvj5ygaf3dm9n5.png" alt="Figure 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If more nodes than required have been provisioned before a surge in requests occurs, here's what happens:&lt;/p&gt;

&lt;p&gt;When user requests are low, there is no immediate action taken.&lt;/p&gt;

&lt;p&gt;However, suppose pods that request CPU and memory but perform no real work are already spread across several nodes. In that case, when requests surge, the pods scaled out by HPA can be quickly allocated to the already-provisioned, idle capacity reserved by those placeholder pods.&lt;/p&gt;

&lt;p&gt;This practice of provisioning more nodes than currently necessary, to ensure service stability and enable rapid allocation of scaled-out pods, is known as &lt;strong&gt;Overprovisioning&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jb8dsjwcbg9101m1sd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jb8dsjwcbg9101m1sd8.png" alt="Figure 2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  PriorityClass and Empty Pod(Overprovisioner)
&lt;/h2&gt;

&lt;p&gt;Overprovisioning relies on two key elements that determine pod priority: &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="noopener noreferrer"&gt;PriorityClass&lt;/a&gt; and empty pods with a PriorityClass applied.&lt;/p&gt;

&lt;p&gt;In situations where there might not be enough CPU or memory on worker nodes, or when the desired ports are already occupied by other running pods, deploying new pods can be challenging. However, if the new pods serve a crucial function, they must be deployed regardless of these constraints. PriorityClass is the solution to handle such scenarios.&lt;/p&gt;

&lt;p&gt;When you first create an EKS cluster and check the PriorityClasses, you'll find two defaults: &lt;code&gt;system-cluster-critical&lt;/code&gt; and &lt;code&gt;system-node-critical&lt;/code&gt;, both with very high values. These PriorityClasses are applied to essential system pods, giving them high priority.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcprjj9m75p39lpx062gc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcprjj9m75p39lpx062gc.png" alt="Figure 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to the PriorityClasses provided by the system, you can create your own PriorityClass and assign it to the desired pods.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
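&lt;p&gt;A minimal PriorityClass manifest along these lines (the name and the value of -1 are illustrative, matching the example cluster later in this post) can be written and applied as follows:&lt;/p&gt;

```shell
# Write a low-priority PriorityClass for placeholder (empty) pods.
# The name "overprovisioning" and value -1 are illustrative.
printf '%s\n' \
  'apiVersion: scheduling.k8s.io/v1' \
  'kind: PriorityClass' \
  'metadata:' \
  '  name: overprovisioning' \
  'value: -1' \
  'globalDefault: false' \
  'description: "Low priority for empty (placeholder) pods"' \
  > low-priority.yaml
# then: kubectl apply -f low-priority.yaml
```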


&lt;p&gt;A PriorityClass sets pod priority according to its value field. If a new high-priority pod needs to be deployed but cannot be scheduled due to issues such as resource scarcity, lower-priority pods on the existing nodes are evicted to make room for it. By leveraging this mechanism, you can create low-priority empty pods that do nothing but reserve CPU and memory; when higher-priority service pods need to be deployed, the empty pods are evicted, allowing the service pods to take their place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuwhmcp6aqihkh4kqop1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuwhmcp6aqihkh4kqop1.png" alt="Figrue 4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overprovisioning with Karpenter
&lt;/h2&gt;

&lt;p&gt;Now let's walk through how overprovisioning is applied using the PriorityClass, empty pods (overprovisioner), and Karpenter described above, step by step, following the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fworrnqkhadj96ix7z6p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fworrnqkhadj96ix7z6p8.png" alt="Figure 5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Node 1 has a pod (Nginx-1) with high priority, and Node 2 has a pod (empty pod) with low priority.&lt;/li&gt;
&lt;li&gt;There is no available node to allocate the pod with high priority (Nginx-2).&lt;/li&gt;
&lt;li&gt;The pod with low priority (empty pod) is evicted. &lt;/li&gt;
&lt;li&gt;The pod with high priority (Nginx-2) is allocated to Node 2, where the low-priority pod was previously located. &lt;/li&gt;
&lt;li&gt;Karpenter adds a new node to allocate the pod with low priority (empty pod).&lt;/li&gt;
&lt;li&gt;The newly added node has the pod with low priority (empty pod) allocated to it.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
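&lt;p&gt;The "empty pod" in the steps above is commonly deployed as a Deployment of pause containers that only reserve resources under the low PriorityClass; a sketch (names, replica count, and resource requests are all illustrative) looks like this:&lt;/p&gt;

```shell
# Sketch of an empty-pod (overprovisioner) Deployment: pause containers
# that reserve CPU/memory under a low PriorityClass. All names and
# sizes here are illustrative.
printf '%s\n' \
  'apiVersion: apps/v1' \
  'kind: Deployment' \
  'metadata:' \
  '  name: overprovisioner' \
  '  namespace: other' \
  'spec:' \
  '  replicas: 2' \
  '  selector:' \
  '    matchLabels:' \
  '      app: overprovisioner' \
  '  template:' \
  '    metadata:' \
  '      labels:' \
  '        app: overprovisioner' \
  '    spec:' \
  '      priorityClassName: overprovisioning' \
  '      containers:' \
  '        - name: pause' \
  '          image: registry.k8s.io/pause:3.9' \
  '          resources:' \
  '            requests:' \
  '              cpu: "1"' \
  '              memory: 1Gi' \
  > overprovisioner.yaml
# then: kubectl apply -f overprovisioner.yaml
```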

&lt;p&gt;Let’s take a look at an example of applying the above process in an actual EKS environment.&lt;/p&gt;

&lt;p&gt;In Figure 6, there is a cluster with 12 worker nodes. These worker nodes host various pods such as aws-node, kube-proxy, and Argo CD. Additionally, a namespace called "other" has been created, as shown in the picture below. In this namespace, a PriorityClass with a priority of -1 has been applied to the empty pod, and you can see that the empty pod has been allocated to the node with IP address 10.102.108.161.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v0uqosf5fl2h6cbktk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v0uqosf5fl2h6cbktk4.png" alt="Figure 6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflx2lugegiv8daovu43p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflx2lugegiv8daovu43p.png" alt="Figure 7"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's assume nginx pods without a specific PriorityClass applied are deployed in the same cluster to represent service pods. As seen in the first diagram below, because there are sufficient resources available for the nginx pods to be allocated, the empty pods are not immediately evicted. Instead, the nginx pods are deployed successfully. Under this scenario, additional load is applied to the nginx pods to trigger the scaling of new pods.&lt;/p&gt;

&lt;p&gt;However, when there is not enough CPU or memory for the newly scaled pods, the lower-priority empty pods are evicted and the new pods are allocated to those nodes in their place. The evicted empty pods enter a Pending state, Karpenter detects this, and new nodes are provisioned accordingly. As shown in the final diagram below, the pods end up allocated to nodes with IP addresses starting with 10.102.111.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0y8p39uwj4k03qlplzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0y8p39uwj4k03qlplzu.png" alt="Figure 8"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhju22kbdwysq44tfltx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhju22kbdwysq44tfltx2.png" alt="Figure 9"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1tz8nxux2qvs96gu3rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1tz8nxux2qvs96gu3rd.png" alt="Figure 10"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In an EKS environment, service pods can scale out suddenly under load. If nodes are not provisioned in advance, it may take a long time for those service pods to be added. You can address this by combining Karpenter, explained in the previous article, with PriorityClass, which sets Kubernetes pod priority, and empty pods acting as an overprovisioner, explained in this article.&lt;/p&gt;

&lt;p&gt;If you haven't applied this in your environment yet, I encourage you to give it a try.&lt;/p&gt;

&lt;p&gt;If you enjoyed the article, please leave comments, reactions, and shares.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>karpenter</category>
    </item>
    <item>
      <title>Scaling with Karpenter and Empty Pod(A.k.a Overprovisioning) - Part 1</title>
      <dc:creator>Theo Jung</dc:creator>
      <pubDate>Thu, 28 Sep 2023 10:42:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/scaling-with-karpenter-and-empty-podaka-overprovisioning-1j5j</link>
      <guid>https://dev.to/aws-builders/scaling-with-karpenter-and-empty-podaka-overprovisioning-1j5j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we aim to compare Cluster Autoscaler (CA) and Karpenter in the context of node provisioning within AWS's managed service, Elastic Kubernetes Service (EKS). Additionally, we would like to introduce the operational principles of Karpenter.&lt;/p&gt;

&lt;p&gt;Recently, there has been growing interest in Microservices Architecture (MSA) and Kubernetes, with many companies on AWS transitioning from on-premises or EC2/ECS environments to EKS.&lt;/p&gt;

&lt;p&gt;To provide more reliable service in the EKS environment, fast pod provisioning is crucial, and as pods multiply, node provisioning becomes necessary. However, ensuring service stability through rapid pod and node provisioning while also optimizing costs can be a challenging task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34iq581rawe5odrypyiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34iq581rawe5odrypyiq.png" alt="feeling like a headache"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, we'd like to share our team's approach to fast provisioning using Karpenter and a scaling strategy that leverages empty pods.&lt;/p&gt;

&lt;p&gt;In Part 1, we'll explain Karpenter, and in Part 2, we'll delve into the scaling strategy using empty pods. Be sure to read the next post for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should we use Karpenter in an EKS environment?
&lt;/h2&gt;

&lt;p&gt;There are two well-known tools for automatically scaling nodes in AWS EKS clusters: the Kubernetes Cluster Autoscaler (CA) and Karpenter. Let's discuss why Karpenter might be the preferred choice.&lt;/p&gt;

&lt;p&gt;First, it's essential to understand how CA operates, which is based on Auto Scaling Groups (ASGs).&lt;br&gt;
Pods are deployed on one or more EC2 nodes, and nodes are provisioned through node groups backed by Amazon EC2 ASGs. CA monitors the EKS cluster for unschedulable pods and provisions nodes through the ASGs when it finds any.&lt;/p&gt;

&lt;p&gt;There are two primary operations: provisioning when additional nodes are needed and deprovisioning when nodes need to be removed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning involves adding nodes to the EKS cluster through ASGs to accommodate new pods.&lt;/li&gt;
&lt;li&gt;Deprovisioning, on the other hand, entails removing nodes when scaling down is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's discuss why Karpenter might be a better choice.&lt;/p&gt;
&lt;h3&gt;
  
  
  Provisioning
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdoli5q9ni2fhhdgf8v4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdoli5q9ni2fhhdgf8v4.png" alt="How to work CA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are pending pods due to resource shortages.&lt;/li&gt;
&lt;li&gt;CA increases the Desired count in the ASG.&lt;/li&gt;
&lt;li&gt;AWS ASG provisions new nodes.&lt;/li&gt;
&lt;li&gt;The kube-scheduler assigns pending pods to the newly provisioned nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When CA provisions nodes, it makes decisions based on the presence of unallocated pods rather than the node's resource utilization. CA adjusts the Desired count in the ASG and provisions new nodes accordingly. These newly created nodes then host the assigned pods.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deprovisioning
&lt;/h3&gt;

&lt;p&gt;In the case of CA, Deprovisioning is determined based on available resources on the nodes. Nodes with resource utilization below 50% are considered for deprovisioning. CA calculates whether it can relocate the pods running on that node elsewhere and proceeds with node termination. Additionally, the Desired count in the ASG is adjusted accordingly.&lt;/p&gt;

&lt;p&gt;In both of the above cases, CA is constrained by the fact that node types are fixed by the ASG associated with each node group. This means more node groups than necessary may be created, making cost optimization challenging.&lt;/p&gt;

&lt;p&gt;Karpenter has evolved to address cost optimization more effectively and operate independently of ASGs. When comparing the differences between Karpenter and CA, there are three key aspects to consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Grouping Constraint&lt;/strong&gt;: CA sends requests to ASGs, requiring the setup of multiple node groups to use various instance types. Karpenter, on the other hand, allows specifying a list of different instance types and dynamically allocates the most cost-efficient instance type that meets the conditions at provisioning time, within the available availability zones in a region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bypassing kube-scheduler&lt;/strong&gt;: CA relies on the kube-scheduler to detect unscheduled pods and inform ASGs, which doesn't result in immediate node creation. Karpenter, however, operates independently of the kube-scheduler. It directly creates nodes and assigns pods when there are pending pods, bypassing the kube-scheduler for faster operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Karpenter evaluates currently provisioned on-demand node instances and compares their prices and resources to determine whether they can be consolidated into a more suitable node type. This enables cost optimization, although spot instances are not included in the consolidation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With these three perspectives in mind, our team adopted Karpenter as the node provisioning tool. Now that we've discussed how CA operates and why Karpenter is chosen, let's delve into how Karpenter works in the next part.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deep Dive into Karpenter
&lt;/h2&gt;

&lt;p&gt;Before deep diving into how Karpenter operates, it's essential to examine its components.&lt;/p&gt;

&lt;p&gt;Karpenter uses two key components, namely Provisioner and NodeTemplate, to rapidly provision nodes that meet specific conditions.&lt;/p&gt;

&lt;p&gt;First, the Provisioner configures the instance families, availability zones, weights, and other constraints applied when nodes are provisioned, and it points to a node template through spec.providerRef.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
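&lt;p&gt;Since the gist embed is not reproduced here, below is a minimal sketch of a v1alpha5 Provisioner; the instance types, zones, and limits are illustrative assumptions, not the exact manifest from the original post.&lt;/p&gt;

```yaml
# Hypothetical Provisioner: constrains instance families/zones and
# points at a NodeTemplate through spec.providerRef.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  providerRef:
    name: default            # name of the AWSNodeTemplate it references
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["m5.large", "m5.xlarge", "c5.large"]
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["ap-northeast-2a", "ap-northeast-2c"]
  limits:
    resources:
      cpu: "100"             # cap on total CPU this Provisioner may create
```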


&lt;p&gt;Next, the NodeTemplate, referenced by the Provisioner's spec.providerRef field, can be viewed as a template that defines the node to be provisioned, such as which AMI to run or which security group to use.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
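&lt;p&gt;A minimal sketch of an AWSNodeTemplate of the kind referenced from spec.providerRef; the AMI family and the discovery tag used in the selectors are assumptions for illustration.&lt;/p&gt;

```yaml
# Hypothetical AWSNodeTemplate: defines the AMI, subnets, and security
# groups for provisioned nodes; selectors use an illustrative cluster tag.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  amiFamily: AL2                        # Amazon Linux 2 EKS-optimized AMI
  subnetSelector:
    karpenter.sh/discovery: my-cluster  # subnets tagged for discovery
  securityGroupSelector:
    karpenter.sh/discovery: my-cluster
  tags:
    managed-by: karpenter
```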


&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;First, look at the illustration of Karpenter’s provisioning process below. As the picture shows, unlike CA, Karpenter operates independently of ASGs. Because of this, when there are not enough nodes to schedule pods, nodes are provisioned just-in-time (JIT), allowing pods to be scheduled more quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx2ohu6vvxqryq6bamrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx2ohu6vvxqryq6bamrm.png" alt="How to work karpenter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Provisioning
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb026445rdfk7svkcfgw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb026445rdfk7svkcfgw7.png" alt="Karpenter log"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Karpenter pod's log shows three pods in the Pending state and three new on-demand nodes being launched for them. How does this work? Provisioning only happens when certain conditions of the Provisioner are satisfied, so let's look at these conditions one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition 1 - Resource Request:&lt;/strong&gt; The Provisioner will operate if a pending pod requests more resources than the current nodes can provide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition 2 - Node Selection:&lt;/strong&gt; You can use a nodeSelector with labels that match a specific Provisioner to control which Provisioner handles the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition 3 - NodeAffinity:&lt;/strong&gt; The Provisioner operates when NodeAffinity conditions are met. NodeAffinity behavior is determined by two rules: requiredDuringSchedulingIgnoredDuringExecution (which must be satisfied) and preferredDuringSchedulingIgnoredDuringExecution (which should be satisfied whenever possible). You can target the desired Provisioner by matching key/value labels or requirements in the Provisioner through nodeSelectorTerms.&lt;/p&gt;
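&lt;p&gt;Conditions 2 and 3 can be sketched on the pod side as follows; the provisioner name, label values, and container image are illustrative assumptions, not the original configuration.&lt;/p&gt;

```yaml
# Hypothetical pod spec: nodeSelector (Condition 2) and nodeAffinity
# (Condition 3) both steer the pod toward nodes from a specific Provisioner.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  nodeSelector:
    karpenter.sh/provisioner-name: default          # Condition 2
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # Condition 3
        nodeSelectorTerms:
          - matchExpressions:
              - key: karpenter.sh/capacity-type
                operator: In
                values: ["on-demand"]
  containers:
    - name: app
      image: nginx
```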

&lt;p&gt;&lt;strong&gt;Condition 4 - Topology Distribution:&lt;/strong&gt; The Provisioner will operate if the conditions specified in topologySpreadConstraints are met. These constraints can ensure that multiple nodes are provisioned and that replicas of the same pod are spread across nodes rather than packed onto a single node. Currently supported topologyKeys include topology.kubernetes.io/zone, kubernetes.io/hostname, and karpenter.sh/capacity-type.&lt;/p&gt;
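&lt;p&gt;Condition 4 can be expressed in a pod spec fragment like the following; the label selector and skew value are illustrative.&lt;/p&gt;

```yaml
# Hypothetical topologySpreadConstraints: spread replicas across zones,
# forcing Karpenter to provision nodes in multiple availability zones.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # one of the supported keys
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
```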

&lt;p&gt;&lt;strong&gt;Condition 5 - Pod Affinity/Anti-affinity:&lt;/strong&gt; This condition specifies that the Provisioner will operate if there are no nodes available to allocate a pod when the affinity condition is met. Pods can be assigned to nodes based on the PodAffinity and PodAntiAffinity conditions. If a pod needs allocation but there are no suitable nodes, the Provisioner will run and provision a new node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deprovisioning
&lt;/h4&gt;

&lt;p&gt;To optimize costs by reducing scaling for nodes that have been provisioned but are no longer in use, we initiate deprovisioning. Deprovisioning is governed by four conditions, each of which we will explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Provisioner Deletion&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nodes created by the provisioner are considered owned by the provisioner. Therefore, when the provisioner is deleted, nodes generated by the provisioner are stopped, initiating deprovisioning.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Empty&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a node no longer runs any non-DaemonSet pods, deprovisioning takes place after the ttlSecondsAfterEmpty specified in the Provisioner has elapsed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Interruption&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deprovisioning is triggered when node-related interruption events, such as Spot interruption notices or node terminations, are received through EventBridge and queued in SQS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Expire&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nodes are stopped and de-provisioned when the ttlSecondsUntilExpired, as specified in the provisioner, elapses after node provisioning.&lt;/p&gt;
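&lt;p&gt;The Empty and Expire conditions map to two Provisioner fields; a minimal fragment with illustrative values:&lt;/p&gt;

```yaml
# Hypothetical Provisioner fragment: the TTLs that drive deprovisioning.
spec:
  ttlSecondsAfterEmpty: 60        # Empty: remove a node 60s after its last non-DaemonSet pod leaves
  ttlSecondsUntilExpired: 604800  # Expire: replace nodes after 7 days
```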

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Consolidation&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To optimize costs, Karpenter compares the cost of the currently provisioned node or nodes and consolidates them into a more suitable, cheaper node. This operation applies only to on-demand node types and does not work with Spot instances.&lt;/p&gt;

&lt;p&gt;So far, we have delved into the components of Karpenter and how they operate. Do you notice the differences in how CA and Karpenter work? If it still seems unclear, why not try building it yourself?&lt;/p&gt;

&lt;h2&gt;
  
  
  Concluding the article
&lt;/h2&gt;

&lt;p&gt;In this article, we didn't mention Empty Pods. To satisfy your curiosity, you'll have to read the next post.&lt;/p&gt;

&lt;p&gt;In the upcoming article, we will focus on PriorityClass, Empty Pods, and the scaling strategy using Karpenter that we described today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://catalog.us-east-prod.workshops.aws/workshops/fd6ccd33-980d-422f-b6a6-cb3c3424a78c/en-US/scaling/scaling-clusters" rel="noopener noreferrer"&gt;https://catalog.us-east-prod.workshops.aws/workshops/fd6ccd33-980d-422f-b6a6-cb3c3424a78c/en-US/scaling/scaling-clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ko/blogs/tech/amazon-eks-cluster-auto-scaling-karpenter-bp/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ko/blogs/tech/amazon-eks-cluster-auto-scaling-karpenter-bp/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/autoscaler" rel="noopener noreferrer"&gt;https://github.com/kubernetes/autoscaler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.wisen.co.kr/pages/blog/blog-detail.html?idx=12079" rel="noopener noreferrer"&gt;https://www.wisen.co.kr/pages/blog/blog-detail.html?idx=12079&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://karpenter.sh/" rel="noopener noreferrer"&gt;https://karpenter.sh/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>karpenter</category>
    </item>
    <item>
      <title>Automate AWS account creation(2)</title>
      <dc:creator>Theo Jung</dc:creator>
      <pubDate>Sun, 10 Sep 2023 02:07:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/automate-aws-account-creation2-5ce1</link>
      <guid>https://dev.to/aws-builders/automate-aws-account-creation2-5ce1</guid>
      <description>&lt;p&gt;Hello! All Developers!&lt;/p&gt;

&lt;p&gt;Following the First Post &lt;a href="https://dev.to/aws-builders/automate-aws-account-creation1-3h78"&gt;Automate AWS account creation(1)&lt;/a&gt;, I would like to cover the implementation method in the second article related to infrastructure automation.&lt;/p&gt;

&lt;p&gt;I will explain it in two parts: the resources created with Terraform, and those implemented with the Serverless Framework.&lt;/p&gt;

&lt;p&gt;The architecture covered in Part 1 uses the Serverless Framework as a framework to conveniently implement AWS Lambda functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remind&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k-3-ZX9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ngzuk9ykzrl6nc53wfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k-3-ZX9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ngzuk9ykzrl6nc53wfw.png" alt="AWS Architecture" width="799" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope those who read this will understand it completely :)&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Implementation using Terraform
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS IAM User and Role&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
The organization I belong to uses IAM by creating user accounts and mapping them to groups. Here, however, I'll provide a code example specifically for creating user accounts:

&lt;blockquote&gt;
&lt;p&gt;The 'name' section corresponds to the user account.&lt;/p&gt;

&lt;p&gt;We use the 'mail' format so that Lambda can extract the user account and use it directly as an email address.&lt;/p&gt;

&lt;p&gt;If you prefer to use a simple name like 'test,' you can include the email domain in Lambda.&lt;/p&gt;

&lt;p&gt;The 'tags' field is optional.&lt;/p&gt;
&lt;/blockquote&gt;
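&lt;p&gt;A minimal Terraform sketch of such a user resource; the name, tags, and group are illustrative placeholders, not my organization's actual configuration.&lt;/p&gt;

```hcl
# Hypothetical IAM user: the email-style name doubles as the
# notification address that Lambda extracts later.
resource "aws_iam_user" "example" {
  name = "jane.doe@example.com"

  tags = {
    team = "platform"   # optional metadata
  }
}

# Optional: map the user into a group, as described above.
resource "aws_iam_user_group_membership" "example" {
  user   = aws_iam_user.example.name
  groups = ["developers"]
}
```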


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The IAM Role used in Lambda is configured as shown above. It has been set up to allow Lambda and EventBridge to assume this IAM Role.&lt;/p&gt;

&lt;p&gt;The next part pertains to the policy associated with the role. When events occur, it logs them, and it grants permissions for the API used in the Lambda execution process through policies.&lt;/p&gt;
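&lt;p&gt;A hedged sketch of that role and policy; the role name and the action list are assumptions about the minimum this flow appears to need, and may differ from the original gist.&lt;/p&gt;

```hcl
# Hypothetical role assumable by both Lambda and EventBridge.
resource "aws_iam_role" "lambda" {
  name = "account-creation-lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = ["lambda.amazonaws.com", "events.amazonaws.com"] }
    }]
  })
}

# Hypothetical policy: logging plus the APIs the Lambda calls.
resource "aws_iam_role_policy" "lambda" {
  name = "account-creation-lambda"
  role = aws_iam_role.lambda.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "iam:CreateLoginProfile",
        "ses:SendEmail",
      ]
      Resource = "*"
    }]
  })
}
```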

&lt;ul&gt;
&lt;li&gt;AWS SNS Topic and Subscription&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The SNS Topic is configured with two main policies:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lines 6 - 13 define a policy that allows publishing events to the SNS Topic when events are triggered from EventBridge.&lt;/p&gt;

&lt;p&gt;Lines 14 - 35 define a policy that enables certain SNS functionalities when the SourceOwner is a specific Account-ID.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;So, when a specific event occurs in EventBridge and targets the SNS Topic, the first policy allows the event to be published. &lt;/p&gt;

&lt;p&gt;Additionally, for SNS Subscriptions associated with a specific Account ID, they can subscribe to the SNS Topic identified by its ARN and trigger the associated Lambda function.&lt;/p&gt;
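&lt;p&gt;The two policies described above can be sketched in Terraform as follows; the statement contents are illustrative, the Account-ID is a placeholder, and the Lambda function resource name is assumed.&lt;/p&gt;

```hcl
# Hypothetical SNS topic with an access policy covering EventBridge
# publishing and account-scoped SNS operations.
resource "aws_sns_topic" "create_user" {
  name = "iam-create-user"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowEventBridgePublish"       # first policy
        Effect    = "Allow"
        Principal = { Service = "events.amazonaws.com" }
        Action    = "sns:Publish"
        Resource  = "*"
      },
      {
        Sid       = "AllowAccountOwner"             # second policy
        Effect    = "Allow"
        Principal = { AWS = "*" }
        Action    = ["sns:Subscribe", "sns:Receive", "sns:GetTopicAttributes"]
        Resource  = "*"
        Condition = {
          StringEquals = { "AWS:SourceOwner" = "123456789012" } # placeholder Account-ID
        }
      }
    ]
  })
}

# Hypothetical subscription that triggers the Lambda function.
resource "aws_sns_topic_subscription" "lambda" {
  topic_arn = aws_sns_topic.create_user.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.notify.arn   # assumed function resource
}
```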

&lt;ul&gt;
&lt;li&gt;AWS SES&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;To use a domain as mentioned above, you need to create a Domain Identity in SES. After creating the Domain Identity for the domain, the AWS Console shows three DNS record values (DKIM tokens) for verification.&lt;/p&gt;

&lt;p&gt;Once you register these records in the DNS zone of the domain you intend to use (for example, in AWS Route 53), the domain will be verified.&lt;/p&gt;
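&lt;p&gt;A minimal sketch of the SES domain identity and the three DKIM records in Route 53; the domain and hosted zone ID are placeholders.&lt;/p&gt;

```hcl
# Hypothetical SES domain identity with DKIM verification via Route 53.
resource "aws_ses_domain_identity" "example" {
  domain = "example.com"
}

resource "aws_ses_domain_dkim" "example" {
  domain = aws_ses_domain_identity.example.domain
}

# SES issues three DKIM tokens; each becomes a CNAME record.
resource "aws_route53_record" "dkim" {
  count   = 3
  zone_id = "Z0000000000000"   # placeholder hosted zone ID
  name    = "${aws_ses_domain_dkim.example.dkim_tokens[count.index]}._domainkey.example.com"
  type    = "CNAME"
  ttl     = 600
  records = ["${aws_ses_domain_dkim.example.dkim_tokens[count.index]}.dkim.amazonses.com"]
}
```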

&lt;p&gt;However, registering these addresses alone does not mean that the domain is ready for use.&lt;/p&gt;

&lt;p&gt;This domain exists in a sandbox environment.&lt;/p&gt;

&lt;p&gt;The sandbox environment has the following constraints:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can send a maximum of 200 messages within 24 hours.&lt;br&gt;
You can send a maximum of 1 message per second.&lt;br&gt;
You can only send messages to email addresses or domains that are verified in SES.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To use this domain in a real production environment, you must submit a production access request to AWS Support.&lt;/p&gt;

&lt;p&gt;The official documentation states that a production access request may take up to 24 hours to complete (see Reference below).&lt;/p&gt;

&lt;p&gt;Therefore, the production access request must be completed before you can finish creating resources using Terraform.&lt;/p&gt;

&lt;p&gt;Next, we will explain Lambda and EventBridge using the Serverless Framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Implementation using Serverless Framework
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;serverless.yml&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;The serverless.yml&lt;/strong&gt; file can be broadly divided into three sections: provider, function, and resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The provider section&lt;/strong&gt; represents the environment configuration for Lambda. It specifies details such as the region where the deployment will occur, the amount of memory to be used, and the IAM role to be used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The function section&lt;/strong&gt; is where the actual Lambda execution behavior is defined when triggered. Among the key names, the handler part maps to the actual binary or function name that will be executed. This part varies slightly depending on the programming language used. For JavaScript or Python, it specifies the function name in the executing file, while for Go, it specifies the binary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The resource section&lt;/strong&gt; represents the resources configured within the Serverless Framework, primarily specifying resources used by the functions. The provided example determines whether an event notification will be sent to an SNS topic when the CreateUser API is called via EventBridge.&lt;/p&gt;
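&lt;p&gt;The three sections can be sketched together like this; the service name, region, ARNs, and runtime are placeholders, not the original configuration.&lt;/p&gt;

```yaml
# Hypothetical serverless.yml: provider / functions / resources layout.
service: account-notifier

provider:
  name: aws
  runtime: go1.x
  region: ap-northeast-2
  memorySize: 128
  iam:
    role: arn:aws:iam::123456789012:role/account-creation-lambda   # placeholder

functions:
  notify:
    handler: bin/main          # for Go, the handler maps to the compiled binary
    events:
      - sns:
          arn: arn:aws:sns:ap-northeast-2:123456789012:iam-create-user   # placeholder

resources:
  Resources:
    CreateUserRule:
      Type: AWS::Events::Rule   # EventBridge rule for the CreateUser API call
      Properties:
        EventPattern:
          source: ["aws.iam"]
          detail-type: ["AWS API Call via CloudTrail"]
          detail:
            eventSource: ["iam.amazonaws.com"]
            eventName: ["CreateUser"]
        Targets:
          - Arn: arn:aws:sns:ap-northeast-2:123456789012:iam-create-user  # placeholder
            Id: create-user-sns
```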

&lt;ul&gt;
&lt;li&gt;main.go&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;When Lambda is triggered, the &lt;code&gt;Handler&lt;/code&gt; function in the source code is invoked, which can be divided into four main parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parsing the incoming information from the Event (Lines 46 - 49)&lt;/strong&gt;:&lt;br&gt;
In this step, information like the user's account name is extracted from the Event.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generating a Password using the &lt;code&gt;go-password&lt;/code&gt; library (Lines 31 - 36)&lt;/strong&gt;:&lt;br&gt;
This library is used to generate a password based on parameters such as the desired length, the number of special characters, and the number of digits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating a login profile with the parsed user account name and generated Password (Lines 58 - 68)&lt;/strong&gt;:&lt;br&gt;
In this part, the Terraform-created user account is enabled, and a password is specified. Additionally, there's an option to determine whether the user should be prompted to reset their password upon initial login, which is passed as a parameter when making the API call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sending account information created to an email address using the SES API (Lines 70 - 99)&lt;/strong&gt;:&lt;br&gt;
In this step, if the user account wasn't created in the form of an email address, you need to concatenate the domain and user account to the &lt;code&gt;ToAddress&lt;/code&gt; in Line 79. Then, by specifying the desired Title and Content, the SES API is called to send an email.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
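&lt;p&gt;As a sketch of step 1 above, here is a minimal, stdlib-only Go example of parsing the user name out of the event payload. The struct shapes and field names are assumptions based on the CloudTrail CreateUser event format, not the article's original code, and the go-password and SES steps are omitted.&lt;/p&gt;

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal, hypothetical shapes for the event payload delivered to the
// handler; the real handler parses more fields than shown here.
type requestParameters struct {
	UserName string `json:"userName"`
}

type cloudTrailDetail struct {
	RequestParameters requestParameters `json:"requestParameters"`
}

type cloudTrailEvent struct {
	Detail cloudTrailDetail `json:"detail"`
}

// extractUserName pulls the IAM user name out of the raw event payload
// (step 1 of the handler described above).
func extractUserName(raw []byte) (string, error) {
	ev := new(cloudTrailEvent)
	if err := json.Unmarshal(raw, ev); err != nil {
		return "", err
	}
	return ev.Detail.RequestParameters.UserName, nil
}

func main() {
	payload := []byte(`{"detail":{"requestParameters":{"userName":"jane.doe@example.com"}}}`)
	name, err := extractUserName(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // prints jane.doe@example.com
}
```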

&lt;p&gt;This automation process demonstrates how an AWS IAM user is created, their password is set, the account is activated, and the account details are sent to the specified email address.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
    <item>
      <title>Automate AWS account creation(1)</title>
      <dc:creator>Theo Jung</dc:creator>
      <pubDate>Sat, 02 Sep 2023 13:54:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/automate-aws-account-creation1-3h78</link>
      <guid>https://dev.to/aws-builders/automate-aws-account-creation1-3h78</guid>
      <description>&lt;p&gt;Hello! All Developers!&lt;/p&gt;

&lt;p&gt;Lately, I've been using cloud services like AWS and Azure a lot. In the organization I'm part of, we use AWS extensively. In this post, I'd like to discuss automation related to infrastructure and share how we recently implemented automated notifications for user creation using various AWS services.&lt;/p&gt;

&lt;p&gt;I'll explain what prompted us to automate and how we structured it, and I hope you'll fully understand it by the end of this post.&lt;/p&gt;

&lt;p&gt;*Before reading, having some knowledge of Terraform and AWS resources will make it easier to follow along.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Automation?&lt;/strong&gt;&lt;br&gt;
In our organization, we use AWS and Terraform to create AWS accounts for users in the form of emails and notify them via email or Slack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxly3wq2l2zexcabtw4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxly3wq2l2zexcabtw4p.png" alt="Communication Process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Through this process, we identified two major issues that we wanted to address through automation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first issue is the significant communication cost. For each account created, we needed to generate messages with the relevant information and communicate individually with each user. As new users kept coming in, the cost of processing and communication became substantial.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second issue is the lack of automation in the account information delivery process, which increases the likelihood of human errors. When console-generated passwords were delivered incorrectly, it often resulted in reset requests, requiring additional processing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To address these two issues, we came up with the following flow:&lt;br&gt;
a user requests account creation,&lt;br&gt;
an administrator uses Terraform to create the IAM user resource, and&lt;br&gt;
the system detects this, sets the user's account password, and sends it via email.&lt;/p&gt;

&lt;p&gt;Let's delve into the architecture we established following the above steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;br&gt;
In this process, we used a total of five AWS services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM (AWS Identity and Access Management)&lt;/li&gt;
&lt;li&gt;EventBridge (AWS EventBridge)&lt;/li&gt;
&lt;li&gt;SNS (AWS Simple Notification Service)&lt;/li&gt;
&lt;li&gt;Lambda (AWS Lambda)&lt;/li&gt;
&lt;li&gt;SES (AWS Simple Email Service)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ngzuk9ykzrl6nc53wfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ngzuk9ykzrl6nc53wfw.png" alt="AWS Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The overall architecture is as shown in the diagram above.&lt;/p&gt;

&lt;p&gt;Before delving into how this architecture operates, it's helpful to understand that when you create AWS resources with Terraform, it invokes the appropriate AWS API for the corresponding functionality.&lt;/p&gt;

&lt;p&gt;Before this architecture can function, four conditions must be met:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The domain from which SES sends emails must be verified and granted production access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should create topics and subscriptions in SNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure SNS subscriptions with Lambda as the target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up an EventBridge rule to notify the SNS topic when the CreateUser API is called.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once these four conditions are met, the architecture operates as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When you use Terraform to create User resources, it triggers the AWS internal CreateUser API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EventBridge detects the invocation of CreateUser and notifies the SNS target topic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SNS topic's subscription triggers Lambda, which operates internally by calling AWS APIs to randomly generate a password for the user account and create a login profile for the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After creating the account, Lambda sends an email to the user's email address via SES, containing the newly generated account information and password.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Through these four steps, we've automated the process of creating accounts and notifying users via email.&lt;/p&gt;

&lt;p&gt;That concludes the explanation of the architecture. In the next post, I'll provide code implementations and further explanations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;br&gt;
Terraform AWS IAM User: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverless Resource: &lt;a href="https://www.serverless.com/framework/docs/providers/aws/guide/resources" rel="noopener noreferrer"&gt;https://www.serverless.com/framework/docs/providers/aws/guide/resources&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverless EventBridge: &lt;a href="https://www.serverless.com/blog/eventbridge-use-cases-and-tutorial/" rel="noopener noreferrer"&gt;https://www.serverless.com/blog/eventbridge-use-cases-and-tutorial/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EventBridge: &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-sam.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-sam.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Send Mail using AWS SDK: &lt;a href="https://docs.aws.amazon.com/ko_kr/ses/latest/dg/send-an-email-using-sdk-programmatically.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/ko_kr/ses/latest/dg/send-an-email-using-sdk-programmatically.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>automation</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
