<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Camille He</title>
    <description>The latest articles on DEV Community by Camille He (@camillehe1992).</description>
    <link>https://dev.to/camillehe1992</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1091326%2F8c385298-72f5-40df-84a1-6b8a604b73ff.jpeg</url>
      <title>DEV Community: Camille He</title>
      <link>https://dev.to/camillehe1992</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/camillehe1992"/>
    <language>en</language>
    <item>
      <title>Host a Vue Project in AWS Amplify using Terraform</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Fri, 26 Jul 2024 07:12:45 +0000</pubDate>
      <link>https://dev.to/camillehe1992/host-a-vue-project-in-aws-amplify-using-terraform-2l89</link>
      <guid>https://dev.to/camillehe1992/host-a-vue-project-in-aws-amplify-using-terraform-2l89</guid>
      <description>&lt;p&gt;In the blog, I will introduce how to scaffolding a Vue based frontend project in AWS Amplify from scratch. The template project is setup using Vite, AWS Amplify infrastructure is provisioned using Terraform. Both Vue source code and Terraform infrastructure code are managed and versioned in GitHub repository. Besides, GitHub Actions workflows are created to provision and manage Cloud infrastructure. New commits to GitHub repository will trigger a new deployment in Amplify App accordingly. In the end, you should know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to set up a Vue-based frontend template project from scratch.&lt;/li&gt;
&lt;li&gt;How to define and manage AWS infrastructure using Terraform.&lt;/li&gt;
&lt;li&gt;How to provision and manage AWS infrastructure via GitHub Actions workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;
Preparation

&lt;ul&gt;
&lt;li&gt;1. Create PAT in GitHub Account&lt;/li&gt;
&lt;li&gt;2. Create Terraform Backend S3 Bucket&lt;/li&gt;
&lt;li&gt;3. Create a New GitHub Repository&lt;/li&gt;
&lt;li&gt;4. Install and Authorize the Amplify GitHub App for Deployment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Scaffolding Vue Project&lt;/li&gt;

&lt;li&gt;Setup Terraform Infrastructure&lt;/li&gt;

&lt;li&gt;

Provision Terraform Infrastructure using CLI

&lt;ul&gt;
&lt;li&gt;Option 1. From Local Machine&lt;/li&gt;
&lt;li&gt;Option 2. Via GitHub Actions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Demo Project&lt;/li&gt;

&lt;li&gt;Next Step&lt;/li&gt;

&lt;li&gt;Summary&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The Demo Repo: &lt;a href="https://github.com/camillehe1992/amplify-vue-app" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/amplify-vue-app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A GitHub account

&lt;ul&gt;
&lt;li&gt;permission to create an empty repository&lt;/li&gt;
&lt;li&gt;permission to create and manage personal access token&lt;/li&gt;
&lt;li&gt;permission to create and manage GitHub Actions workflow&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;An AWS account 

&lt;ul&gt;
&lt;li&gt;administrative permissions for AWS services, such as Amplify, IAM, S3,  etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Desktop:

&lt;ul&gt;
&lt;li&gt;a local machine with Node.js and npm installed.&lt;/li&gt;
&lt;li&gt;VS Code IDE (optional, but recommended)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;p&gt;Here is some manual preparation work.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create PAT in GitHub Account
&lt;/h3&gt;

&lt;p&gt;Personal access tokens are intended to access GitHub resources on your own behalf. With a personal access token, you can manage your GitHub source code using the Git CLI from a local machine. Follow the &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token" rel="noopener noreferrer"&gt;official tutorial&lt;/a&gt; to generate a fine-grained personal access token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi21yyom1zqgie47kpt3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi21yyom1zqgie47kpt3w.png" alt="pat-permissions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the token and save it somewhere safe, as the token will be gone once the browser is refreshed. The token will be used in the Terraform variable &lt;strong&gt;ACCESS_TOKEN&lt;/strong&gt; and in Git commands.&lt;/p&gt;
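&lt;p&gt;For scripted, non-interactive Git operations, the PAT can be embedded in the remote URL. Here is a minimal sketch; the username, token value, and repository name below are placeholders, not real credentials:&lt;/p&gt;

```shell
# Placeholder values -- substitute your own username, PAT, and repository.
GITHUB_USER="your-username"
GITHUB_PAT="ghp_example_token"
REPO="amplify-vue-app"

# Embedding the PAT in the URL avoids the interactive username/password prompt.
CLONE_URL="https://${GITHUB_USER}:${GITHUB_PAT}@github.com/${GITHUB_USER}/${REPO}.git"
echo "$CLONE_URL"
# git clone "$CLONE_URL"   # uncomment to actually clone
```

&lt;p&gt;Treat a URL like this as a secret, since it contains the token in plain text.&lt;/p&gt;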

&lt;h3&gt;
  
  
  2. Create Terraform Backend S3 Bucket
&lt;/h3&gt;

&lt;p&gt;As our application is provisioned using Terraform, we need to manually create an S3 bucket from the AWS console as the Terraform backend to store state files.&lt;br&gt;
From the AWS S3 console, click the "Create bucket" button and enter a bucket name that includes the AWS account ID and region to make it globally unique, for example &lt;strong&gt;terraform-state-{aws-account-id}-{aws-region}&lt;/strong&gt;. The bucket name will be used in the &lt;strong&gt;STATE_BUCKET&lt;/strong&gt; environment variable of the Terraform CLI.&lt;/p&gt;
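&lt;p&gt;If you prefer the CLI over the console, the same bucket can be created with the AWS CLI. The sketch below uses placeholder account ID and region values; normally you would look the account ID up with &lt;strong&gt;aws sts get-caller-identity&lt;/strong&gt;:&lt;/p&gt;

```shell
# Placeholder values; in practice, for example:
#   AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_ACCOUNT_ID="123456789012"
AWS_REGION="ap-southeast-1"

# Compose a globally unique bucket name from the account ID and region.
STATE_BUCKET="terraform-state-${AWS_ACCOUNT_ID}-${AWS_REGION}"
echo "$STATE_BUCKET"

# Create the bucket (commented out so this sketch stays side-effect free):
#   aws s3api create-bucket --bucket "$STATE_BUCKET" --region "$AWS_REGION" \
#     --create-bucket-configuration LocationConstraint="$AWS_REGION"
```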

&lt;h3&gt;
  
  
  3. Create a New GitHub Repository
&lt;/h3&gt;

&lt;p&gt;Create a GitHub repository from the GitHub console at &lt;a href="https://github.com/new" rel="noopener noreferrer"&gt;https://github.com/new&lt;/a&gt;. In the demo, I created a repository named &lt;strong&gt;amplify-vue-app&lt;/strong&gt; in my personal GitHub account, as shown below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjalibkueofva0m0os09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjalibkueofva0m0os09.png" alt="create-repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Install and Authorize the Amplify GitHub App for Deployment
&lt;/h3&gt;

&lt;p&gt;Before you deploy a new app to Amplify from existing code in a GitHub repository, follow the &lt;a href="https://docs.aws.amazon.com/amplify/latest/userguide/setting-up-GitHub-access.html#setting-up-github-app" rel="noopener noreferrer"&gt;instructions&lt;/a&gt; to install and authorize the GitHub App. Once completed, Amplify is granted all the permissions it needs to interact with the GitHub repository you configured, for example to pull source code and listen for GitHub events.&lt;br&gt;
Under &lt;strong&gt;GitHub -&amp;gt; Settings -&amp;gt; Integrations -&amp;gt; Applications&lt;/strong&gt;, you can confirm that the GitHub App shown below was installed successfully. You can view and update the Amplify GitHub App from here as needed.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwru95z42t2guzzpotbkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwru95z42t2guzzpotbkf.png" alt="github-app"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Notes:&lt;br&gt;
If you missed this step, you will be asked to install and authorize the Amplify GitHub App from the AWS Amplify console when your Amplify app first launches.&lt;br&gt;
If the step is already completed but the deployment fails with the error "[ERROR]: !!! Unable to assume specified IAM Role. Please ensure the selected IAM Role has sufficient permissions and the Trust Relationship is configured correctly", double-check that your new repository is in the Repository access list of the GitHub App above.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Scaffolding Vue Project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; Clone your new repository to your local machine. You may be asked for a username and password. The username is your GitHub account username, and the password is the personal access token (PAT) generated in Preparation step 1.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git clone https://github.com/camillehe1992/amplify-vue-app.git
&lt;span class="nb"&gt;cd &lt;/span&gt;amplify-vue-app


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; Scaffold your Vue project using Vite. See the detailed instructions at &lt;a href="https://vitejs.dev/guide/" rel="noopener noreferrer"&gt;https://vitejs.dev/guide/&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npm create vite@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--template&lt;/span&gt; vue
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm run dev


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Vue dev server is now listening on &lt;a href="http://localhost:5173/" rel="noopener noreferrer"&gt;http://localhost:5173/&lt;/a&gt; as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu8sgdrvyif6fdcrpj0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu8sgdrvyif6fdcrpj0i.png" alt="localhost"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; Commit the code changes and push them to the remote repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"scaffolding vue project"&lt;/span&gt;
git push


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we have set up a simple Vue project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Terraform Infrastructure
&lt;/h2&gt;

&lt;p&gt;We use Terraform to provision and manage the AWS infrastructure, including the Amplify app. All Terraform infrastructure code lives under the &lt;em&gt;terraform&lt;/em&gt; directory. You can find a detailed description of the Terraform infrastructure in the repository &lt;a href="https://github.com/camillehe1992/amplify-vue-app/blob/main/terraform/Terraform.md" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="nb"&gt;.&lt;/span&gt;&lt;br&gt;
├── .env.sample              &lt;span class="c"&gt;# Environment variables for Terraform&lt;/span&gt;&lt;br&gt;
├── .terraform-docs.yaml     &lt;span class="c"&gt;# terraform-docs configuration file&lt;/span&gt;&lt;br&gt;
├── .terraform.lock.hcl      &lt;span class="c"&gt;# Auto generated The dependency lock file       &lt;/span&gt;&lt;br&gt;
├── Terraform.md&lt;br&gt;
├── amplify.yaml             &lt;span class="c"&gt;# The build specification for an Amplify app&lt;/span&gt;&lt;br&gt;
├── main.tf                  &lt;span class="c"&gt;# Terraform resources definition&lt;/span&gt;&lt;br&gt;
├── outputs.tf               &lt;span class="c"&gt;# Terraform outputs&lt;/span&gt;&lt;br&gt;
├── tf_dev.tfvars            &lt;span class="c"&gt;# Terraform variables for dev env&lt;/span&gt;&lt;br&gt;
├── tf_prod.tfvars           &lt;span class="c"&gt;# Terraform variables for prod env&lt;/span&gt;&lt;br&gt;
├── variables.tf             &lt;span class="c"&gt;# Terraform input variable definition&lt;/span&gt;&lt;br&gt;
└── versions.tf              &lt;span class="c"&gt;# Terraform providers and backend configuration&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
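&lt;p&gt;The &lt;strong&gt;versions.tf&lt;/strong&gt; file is where the backend S3 bucket created during preparation is wired in. Below is a minimal sketch of what such a backend block might look like; the bucket name and state key are placeholders, not the repository's actual values:&lt;/p&gt;

```shell
# Write an illustrative backend configuration; values are placeholders.
cat > versions.tf <<'EOF'
terraform {
  required_version = ">= 1.0"

  backend "s3" {
    bucket = "terraform-state-123456789012-ap-southeast-1"
    key    = "amplify-vue-app/terraform.tfstate"
    region = "ap-southeast-1"
  }
}
EOF
grep 'backend' versions.tf
```

&lt;p&gt;Terraform reads this block during &lt;strong&gt;terraform init&lt;/strong&gt; and stores its state file at the given key in the bucket.&lt;/p&gt;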
&lt;h2&gt;
  
  
  Provision Terraform Infrastructure using CLI
&lt;/h2&gt;

&lt;p&gt;Terraform provides a CLI to manage its infrastructure. See the basic Terraform CLI features in the official &lt;a href="https://developer.hashicorp.com/terraform/cli/commands" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. With the Terraform CLI, you can provision AWS infrastructure from a local machine or via GitHub Actions workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1. From Local Machine
&lt;/h3&gt;

&lt;p&gt;To provision the Amplify infrastructure from a local machine, follow the &lt;a href="https://github.com/camillehe1992/amplify-vue-app/blob/main/docs/Deployment.md" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; to complete the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the Terraform CLI&lt;/li&gt;
&lt;li&gt;Install the AWS CLI&lt;/li&gt;
&lt;li&gt;Configure AWS credentials&lt;/li&gt;
&lt;li&gt;Execute the Make commands&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After executing the &lt;strong&gt;make quick-deploy&lt;/strong&gt; command, an Amplify app named &lt;strong&gt;dev-amplify-vue-app&lt;/strong&gt; is created in AWS Amplify. This app is for development only; another Amplify app named &lt;strong&gt;prod-amplify-vue-app&lt;/strong&gt; for the production environment will be created later using a GitHub Actions workflow.&lt;/p&gt;
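&lt;p&gt;Under the hood, a target like &lt;strong&gt;make quick-deploy&lt;/strong&gt; typically wraps the standard Terraform init/plan/apply workflow. The sketch below shows roughly equivalent commands; the variable names and values are assumptions for illustration, and the commands are echoed rather than executed:&lt;/p&gt;

```shell
# Assumed inputs; the real Makefile may read these from a .env file.
ENVIRONMENT="dev"
STATE_BUCKET="terraform-state-123456789012-ap-southeast-1"
TFVARS_FILE="tf_${ENVIRONMENT}.tfvars"

# Echoed so the sketch stays side-effect free; drop the echo to run for real.
echo terraform init -backend-config="bucket=${STATE_BUCKET}"
echo terraform plan -var-file="${TFVARS_FILE}" -out=tfplan
echo terraform apply tfplan
```

&lt;p&gt;Switching &lt;strong&gt;ENVIRONMENT&lt;/strong&gt; to prod would select &lt;strong&gt;tf_prod.tfvars&lt;/strong&gt; instead.&lt;/p&gt;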

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1auddocunmkj3s71rol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1auddocunmkj3s71rol.png" alt="multi-apps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, there are two Amplify apps here, an arrangement I call the Multi-Apps architecture. It simulates the multiple environments of a real project. The production branch of &lt;strong&gt;dev-amplify-vue-app&lt;/strong&gt; is &lt;strong&gt;develop&lt;/strong&gt;, which tracks the &lt;strong&gt;develop&lt;/strong&gt; branch in the GitHub repository. The production branch of &lt;strong&gt;prod-amplify-vue-app&lt;/strong&gt; is &lt;strong&gt;main&lt;/strong&gt;, which tracks the &lt;strong&gt;main&lt;/strong&gt; branch accordingly. In Amplify, each branch has an environment that is maintained in the backend.&lt;br&gt;
In the Multi-Apps architecture, when new commits are pushed to the repository's develop branch, a new deployment of the develop branch in the Amplify app &lt;strong&gt;dev-amplify-vue-app&lt;/strong&gt; is triggered automatically. AWS Amplify pulls the source code from the develop branch, then builds and deploys it to the development environment. Once the deployment completes successfully, the changes are available via the &lt;strong&gt;dev-amplify-vue-app&lt;/strong&gt; default domain URL.&lt;br&gt;
Developers can validate the new changes in the development environment. If everything works as expected, they can create a pull request from develop to main in the repository.&lt;br&gt;
For each pull request, a dedicated branch prefixed with &lt;strong&gt;pr&lt;/strong&gt; is created in the target branch's Amplify app. For example, a pull request from develop to main creates a PR branch named &lt;strong&gt;pr-2&lt;/strong&gt; in &lt;strong&gt;prod-amplify-vue-app&lt;/strong&gt;, where 2 is the pull request number in &lt;strong&gt;GitHub -&amp;gt; Pull Requests&lt;/strong&gt;. Developers can validate the change from its domain URL before merging the pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3w6kqimpi269eroy47h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3w6kqimpi269eroy47h.png" alt="pr-branch-deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The deployment result and the domain URL for the pull request are appended to the pull request conversation as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp2wul1y14mrksv4lg0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp2wul1y14mrksv4lg0m.png" alt="pr-conversation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Amplify app &lt;strong&gt;prod-amplify-vue-app&lt;/strong&gt; is for the production environment, and automatic builds are disabled for it, so you must trigger a new deployment manually from the AWS console. Once the deployment completes, the code changes are available in the production environment.&lt;br&gt;
The diagram below illustrates the development and deployment workflow across the whole lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9z1o791dtlb1nk2oon7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9z1o791dtlb1nk2oon7.png" alt="deployment-workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benefits of Multi-Apps architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A dedicated Amplify app for each environment.&lt;/li&gt;
&lt;li&gt;Configuration flexibility on Amplify app for different environments.&lt;/li&gt;
&lt;li&gt;Easy to maintain, easy to scale out (add a new environment, such as testing, or staging).&lt;/li&gt;
&lt;li&gt;Access control, for example, securing the production Amplify app with restricted access so that only permitted people or roles can manually trigger a new deployment in the production environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Option 2. Via GitHub Actions
&lt;/h3&gt;

&lt;p&gt;Now you know how to provision the Amplify infrastructure from a local machine, and how GitHub and AWS Amplify work together. Next, let's create GitHub Actions workflows to automate the Terraform infrastructure provisioning and management process.&lt;br&gt;
You may already be familiar with the concept of Infrastructure as Code (IaC): the ability to provision and support your computing infrastructure using code instead of manual processes and settings. With Terraform and GitHub Actions, we can provision and manage the Amplify infrastructure automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful16nx61abr9rorbpjdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful16nx61abr9rorbpjdu.png" alt="github-actions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the &lt;a href="https://github.com/camillehe1992/amplify-vue-app/blob/main/docs/Deployment.md#deploy-via-github-actions-workflows" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to provision AWS infrastructure via GitHub Actions workflows.&lt;/p&gt;
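&lt;p&gt;As a rough sketch, a workflow that runs the Terraform CLI on pushes might look like the file written below. The workflow name, trigger branches, and step layout are assumptions for illustration, not the repository's actual workflow:&lt;/p&gt;

```shell
# Write an illustrative workflow file; see the repository docs for the real ones.
mkdir -p .github/workflows
cat > .github/workflows/terraform.yml <<'EOF'
name: terraform
on:
  push:
    branches: [develop, main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: terraform
      - run: terraform apply -auto-approve -var-file=tf_dev.tfvars
        working-directory: terraform
EOF
echo "wrote .github/workflows/terraform.yml"
```

&lt;p&gt;A real workflow would also need AWS credentials, typically supplied via repository secrets or an OIDC role.&lt;/p&gt;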

&lt;h2&gt;
  
  
  Demo Project
&lt;/h2&gt;

&lt;p&gt;After provisioning the Amplify app &lt;strong&gt;prod-amplify-vue-app&lt;/strong&gt; for production via the GitHub Actions workflow, you should have two Amplify apps in AWS. Here is the domain URL for each environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui46ejxzgqsvql5q2wwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui46ejxzgqsvql5q2wwa.png" alt="dev-vue-app"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F441kvpvrajxj9mei83mp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F441kvpvrajxj9mei83mp.png" alt="prod-vue0app"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Step
&lt;/h2&gt;

&lt;p&gt;Now we have set up a basic Amplify Multi-Apps architecture following infrastructure as code (IaC) practices. You can build some enhancements on top of it, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add an Amplify domain association resource to the Amplify app.&lt;/li&gt;
&lt;li&gt;Add backend resources to the Amplify app.&lt;/li&gt;
&lt;li&gt;Add a notification process using SNS, Amazon EventBridge, etc., or enable Amplify webhooks.&lt;/li&gt;
&lt;li&gt;Refactor the single-account Multi-Apps solution as needed, for example with a dedicated AWS account for the Amplify apps of a specific team. With a Multi-Accounts solution as shown below, you can grant different permissions to different users or roles as a best practice. It is well suited to a central platform that provisions and manages extensible full-stack web and mobile apps powered by AWS Amplify at enterprise scope, and that integrates with other frameworks or tools to provide a one-stop service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6hbntzlfvabg89bywbu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6hbntzlfvabg89bywbu.png" alt="multi-accounts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By the end of this post, you have learned:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How to set up a Vue-based frontend template project from scratch.&lt;/li&gt;
&lt;li&gt;How to define and manage AWS infrastructure using Terraform.&lt;/li&gt;
&lt;li&gt;How to provision and manage AWS infrastructure via GitHub Actions workflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks for reading and looking forward to your ideas and comments. &lt;br&gt;
Happy learning, and happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>vue</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Scaffolding Serverless Web Application on AWS</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Mon, 22 Jul 2024 07:32:57 +0000</pubDate>
      <link>https://dev.to/camillehe1992/scaffolding-serverless-web-application-on-aws-1gl2</link>
      <guid>https://dev.to/camillehe1992/scaffolding-serverless-web-application-on-aws-1gl2</guid>
      <description>&lt;p&gt;In this article, I'm going to setup a serverless based web application on AWS Cloud. Firstly I will explain its AWS cloud architecture and infrastructure that used in the application. Then, dive into the details of AWS services. Finally, go through the source code, and guide you to setup your own application from scratch. In the end, you should know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to design a serverless web application using API Gateway, Lambda and DynamoDB.&lt;/li&gt;
&lt;li&gt;How to provision and organize cloud infrastructure using Terraform.&lt;/li&gt;
&lt;li&gt;How to use Lambda PowerTools to organize and fulfill HTTP/HTTPS requests/responses.&lt;/li&gt;
&lt;li&gt;How to use PynamoDB to interact with DynamoDB using an ORM-like interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;Demo Application&lt;/li&gt;
&lt;li&gt;
AWS Infrastructure

&lt;ul&gt;
&lt;li&gt;Core Layer - API Gateway &amp;amp; Lambda&lt;/li&gt;
&lt;li&gt;Database Layer - DynamoDB&lt;/li&gt;
&lt;li&gt;Access Control Layer - IAM&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Source Code

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;Lambda Code&lt;/li&gt;
&lt;li&gt;Others&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Development &amp;amp; Deployment&lt;/li&gt;

&lt;li&gt;Summary&lt;/li&gt;

&lt;li&gt;References&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Application
&lt;/h2&gt;

&lt;p&gt;The demo application provides a simple “to-do list” backend interface that supports CRUD operations. Here is a screenshot of the Swagger UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rh943j8aj39vpvvdtar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rh943j8aj39vpvvdtar.png" alt="swagger-ui"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Infrastructure
&lt;/h2&gt;

&lt;p&gt;The serverless web application is built from several AWS services working together. The diagram below shows how these services integrate with each other. The application contains four layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core Layer: the API Gateway API and the Lambda function and layers.&lt;/li&gt;
&lt;li&gt;Database Layer: DynamoDB tables.&lt;/li&gt;
&lt;li&gt;Access Control Layer: IAM roles for Lambda function.&lt;/li&gt;
&lt;li&gt;Monitoring Layer: CloudWatch Log groups for API Gateway and Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5v17j0o6cbwz2opx9sn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5v17j0o6cbwz2opx9sn.jpg" alt="arch-diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Layer - API Gateway &amp;amp; Lambda
&lt;/h3&gt;

&lt;p&gt;Create a REST API in API Gateway, and a function and layers in Lambda.&lt;br&gt;
API Gateway provides APIs that act as the "front door" for applications to access data, business logic, or functionality from your backend services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc4t9t5n5nyk75c1wknf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc4t9t5n5nyk75c1wknf.png" alt="api-gateway-integration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the screenshot shows, each API method has a Lambda integration. Any GET /todos request flows into the integrated Lambda function, which fulfills the request and fetches the todo items from the DynamoDB table.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09x18z3ktlxpralp96ek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09x18z3ktlxpralp96ek.png" alt="lambda"&gt;&lt;/a&gt;&lt;/p&gt;
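&lt;p&gt;Once deployed, this request path can be exercised with curl against the stage's invoke URL. Here is a sketch; the API ID, region, and stage below are hypothetical placeholders:&lt;/p&gt;

```shell
# Hypothetical invoke URL components for an API Gateway stage.
API_ID="abc123def4"
AWS_REGION="ap-southeast-1"
STAGE="dev"

BASE_URL="https://${API_ID}.execute-api.${AWS_REGION}.amazonaws.com/${STAGE}"
echo "GET ${BASE_URL}/todos"
# curl -s "${BASE_URL}/todos"   # would return the todo items from DynamoDB
```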

&lt;h3&gt;
  
  
  Database Layer - DynamoDB
&lt;/h3&gt;

&lt;p&gt;Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. Two tables are created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c04o40sdmnz6zn7jl2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7c04o40sdmnz6zn7jl2t.png" alt="dynamodb-tables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Access Control Layer - IAM
&lt;/h3&gt;

&lt;p&gt;With AWS Identity and Access Management (IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS. &lt;br&gt;
Two IAM roles are created: API Gateway logging role and Lambda function execution role.&lt;/p&gt;

&lt;h4&gt;
  
  
  API Gateway Logging Role
&lt;/h4&gt;

&lt;p&gt;In order to collect logs from API Gateway, I grant API Gateway permission to read and write logs to CloudWatch in the account. The role takes effect at the account level, so I created it manually from the AWS console before the demo. Follow the official &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; to create the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0yldvk7zswdsr44n0nl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0yldvk7zswdsr44n0nl.png" alt="api-gateway-role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda Function Execution Role
&lt;/h4&gt;

&lt;p&gt;A Lambda function's execution role is an AWS IAM role that grants the function permission to access AWS services and resources. For example, you might create an execution role that has permission to send logs to Amazon CloudWatch. In this demo, we grant DynamoDB permissions to the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxsj4qmspydgixblm0b9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxsj4qmspydgixblm0b9.png" alt="lambda-execution-role-policy"&gt;&lt;/a&gt;&lt;/p&gt;
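&lt;p&gt;A minimal DynamoDB policy statement attached to the execution role might look like the document written below. The table name, action list, and policy name are illustrative assumptions, not the demo's exact policy:&lt;/p&gt;

```shell
# Write an illustrative least-privilege policy document for the execution role.
cat > lambda-dynamodb-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/todos"
    }
  ]
}
EOF
# Attach it with, for example:
#   aws iam put-role-policy --role-name my-lambda-role \
#     --policy-name dynamodb-access \
#     --policy-document file://lambda-dynamodb-policy.json
echo "wrote lambda-dynamodb-policy.json"
```

&lt;p&gt;Scoping the Resource to the specific table, rather than using a wildcard, keeps the role close to least privilege.&lt;/p&gt;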

&lt;h4&gt;
  
  
  Monitoring Layer -  CloudWatch
&lt;/h4&gt;

&lt;p&gt;Although we grant API Gateway and the Lambda function permission to write logs to CloudWatch Logs, we still need to create CloudWatch log groups for them. You can then check the logs for investigation or debugging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffq4kdkngne3hb7bhxjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffq4kdkngne3hb7bhxjb.png" alt="cloudwatch-logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next section, I'll introduce the source code of the Lambda function and the AWS infrastructure. You can find the source code at &lt;a href="https://github.com/camillehe1992/scaffolding-serverless-project-on-aws" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/scaffolding-serverless-project-on-aws&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Source Code
&lt;/h2&gt;

&lt;p&gt;As I mentioned in the beginning, although we don't need to provision and manage servers, we still need to create AWS infrastructure manually from the AWS console, or manage infrastructure as code (IaC) using CDK, CloudFormation, Terraform, etc. I use Terraform to define all AWS infrastructure and deploy it into AWS via the Terraform CLI in a GitHub Actions workflow. Besides the Terraform code, the Lambda function source code is important as well. &lt;br&gt;
I will introduce the Terraform and Lambda code structure and the Python dependencies I used. These dependencies keep the code readable, maintainable, and well organized, and greatly reduce the development workload in a real-world project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently. Terraform code is in the &lt;em&gt;terraform&lt;/em&gt; directory. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;.&lt;/span&gt;
├── deployments     &lt;span class="c"&gt;# Define AWS resources and data with Terraform scripts     &lt;/span&gt;
│   ├── api
│   ├── common_infra
│   └── dynamodb
├── modules        &lt;span class="c"&gt;# Define Terraform modules for AWS resources&lt;/span&gt;
│   ├── README.md
│   ├── api_gateway
│   ├── dynamodb
│   ├── iam
│   ├── lambda_function
│   ├── lambda_layer
│   └── vpc_endpoint
└── settings      &lt;span class="c"&gt;# Define Terraform variables for each environment&lt;/span&gt;
    ├── dev
    └── prod


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I won't dive into the details of the Terraform scripts here, as they are out of scope. The key point I want to share from my experience is that you should organize Terraform code according to the cloud architecture. For example, separate the AWS infrastructure into several groups so that they can be managed and provisioned individually, especially when the architecture contains a large amount of infrastructure. Use Terraform modules to reduce code redundancy and operational and maintenance costs. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Terraform Modules are containers for multiple resources that are used together in a configuration. &lt;a href="https://developer.hashicorp.com/terraform/language/modules#modules" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/language/modules#modules&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
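&lt;p&gt;For example, a deployment can call a shared module like this (a hypothetical sketch; the variable names are illustrative, not the repo's actual module interface):&lt;/p&gt;

```hcl
# A deployment (e.g. deployments/dynamodb) reuses the shared module
# defined under modules/dynamodb. Inputs shown here are illustrative.
module "dynamodb" {
  source = "../../modules/dynamodb"

  table_name   = "todos-${var.environment}"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  tags         = local.common_tags
}
```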

&lt;h3&gt;
  
  
  Lambda Code
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# File structure in src directory&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
├── __init__.py
├── portal                 &lt;span class="c"&gt;# Lambda function source code&lt;/span&gt;
│   ├── __init__.py
│   ├── app
│   └── requirements.txt
└── tests                  &lt;span class="c"&gt;# All test related source code&lt;/span&gt;
    ├── e2e
    ├── &lt;span class="nb"&gt;local&lt;/span&gt;
    ├── thunder
    └── unit


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The core Lambda logic is in the &lt;em&gt;src.portal.app&lt;/em&gt; directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# In src.portal&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
├── __init__.py
├── app
│   ├── __init__.py
│   ├── database
│   ├── enum.py
│   ├── logging.py
│   ├── main.py
│   ├── models
│   └── routers
└── requirements.txt


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The main Python dependencies I use in the serverless web application are Lambda PowerTools and PynamoDB. &lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda PowerTools
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.powertools.aws.dev/lambda/python/latest/" rel="noopener noreferrer"&gt;https://docs.powertools.aws.dev/lambda/python/latest/&lt;/a&gt;&lt;br&gt;
Lambda PowerTools provides many useful features, such as validation, logging, event handling, Swagger UI support, etc. It makes the serverless code work like a web application framework you may be more familiar with, such as Flask or FastAPI. The entry point of the application is the lambda_handler function in the &lt;em&gt;app.main.py&lt;/em&gt; file.&lt;/p&gt;

&lt;h4&gt;
  
  
  PynamoDB
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/pynamodb/PynamoDB" rel="noopener noreferrer"&gt;https://github.com/pynamodb/PynamoDB&lt;/a&gt;&lt;br&gt;
A Pythonic interface to Amazon's DynamoDB that provides an ORM-like interface with query and scan filters. It supports many features that make it more comfortable to interact with the DynamoDB API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Others
&lt;/h3&gt;

&lt;p&gt;Besides the Lambda and Terraform source code, there are some development configuration files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cloudformation/infra.yaml: Defines the S3 bucket and DynamoDB lock table for the Terraform backend configuration.&lt;/li&gt;
&lt;li&gt;Makefile: Defines make targets to simplify local deployment using shell scripts.&lt;/li&gt;
&lt;li&gt;.pylintrc, .pytest.ini, .pre-commit-config.yaml, etc.: Testing, linting, and formatting configuration for the Python code.&lt;/li&gt;
&lt;li&gt;.github/actions: Automated workflows to deploy/destroy infrastructure to/from the AWS account.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Development &amp;amp; Deployment
&lt;/h2&gt;

&lt;p&gt;For local development, you can follow the &lt;a href="https://github.com/camillehe1992/scaffolding-serverless-project-on-aws/blob/main/docs/DEVELOPMENT.md" rel="noopener noreferrer"&gt;Development&lt;/a&gt; documentation to set up the environment on your local machine.&lt;br&gt;
You can also deploy the serverless web application from your local machine or via GitHub Actions following the &lt;a href="https://github.com/camillehe1992/scaffolding-serverless-project-on-aws/blob/main/docs/DEPLOYMENT.md" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; documentation. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By the end, you should have learned how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design a serverless web application architecture using API Gateway, Lambda and DynamoDB.&lt;/li&gt;
&lt;li&gt;Provision and organize cloud infrastructure using Terraform.&lt;/li&gt;
&lt;li&gt;Use Lambda PowerTools to implement the core logic.&lt;/li&gt;
&lt;li&gt;Use PynamoDB to interact with DynamoDB using an ORM-like interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides, here are some tips worth knowing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS officially provides Lambda layers for Lambda PowerTools: &lt;a href="https://docs.powertools.aws.dev/lambda/python/2.26.1/#lambda-layer" rel="noopener noreferrer"&gt;https://docs.powertools.aws.dev/lambda/python/2.26.1/#lambda-layer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;API Gateway methods and resources can be generated from a swagger.yaml file: &lt;a href="https://github.com/camillehe1992/scaffolding-serverless-project-on-aws/blob/main/terraform/deployments/api/swagger.yaml" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/scaffolding-serverless-project-on-aws/blob/main/terraform/deployments/api/swagger.yaml&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/serverless/build-a-web-app/" rel="noopener noreferrer"&gt;https://aws.amazon.com/serverless/build-a-web-app/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/serverless/" rel="noopener noreferrer"&gt;https://aws.amazon.com/serverless/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;https://aws.amazon.com/api-gateway/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading, and I look forward to your ideas and advice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>serverless</category>
      <category>powertools</category>
    </item>
    <item>
      <title>AWS S3 Bucket Website Hosting using Terraform</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Thu, 27 Jun 2024 03:59:47 +0000</pubDate>
      <link>https://dev.to/camillehe1992/aws-s3-bucket-website-hosting-using-terraform-4fk5</link>
      <guid>https://dev.to/camillehe1992/aws-s3-bucket-website-hosting-using-terraform-4fk5</guid>
      <description>&lt;p&gt;In previous blog &lt;a href="https://dev.to/camillehe1992/deploy-terraform-resources-to-aws-using-github-actions-via-oidc-3b9g"&gt;Deploy Terraform resources to AWS using GitHub Actions via OIDC&lt;/a&gt;, I explained how to configure OpenID Connect within GitHub Actions workflows to authenticate with AWS, then demonstrated the process using a very simple Actions workflow that lists all buckets in my AWS account.  As I mentioned, the common use case in real world is we define AWS infrastructure as code, provision AWS resources automatically and manage these cloud infrastructure in GitHub. &lt;/p&gt;

&lt;p&gt;In this article, I'll go one step further and use a GitHub Actions workflow to provision and manage a website using the S3 static website hosting feature. The S3 bucket and related AWS infrastructure are defined using Terraform. The website content and Terraform code are stored in GitHub. Any code change automatically triggers a new workflow build to sync the infrastructure state in AWS. Once done, your website is available through the bucket's static website endpoint from a browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43cy10umdco14gc2v1p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43cy10umdco14gc2v1p5.png" alt="arch-diagrm" width="566" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole process includes three sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Infrastructure as Code: Define AWS infrastructure as code using Terraform.&lt;/li&gt;
&lt;li&gt;Website Static Content: Create index.html and 404.html files as the website static content.&lt;/li&gt;
&lt;li&gt;GitHub Actions workflow: Create workflow to provision AWS infrastructure to AWS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Select Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;: I choose &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; to provision and manage AWS infrastructure. You can use any other tool you want, such as CloudFormation, CDK, etc. The core concept is the same: define your cloud infrastructure as code.&lt;br&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: Manage source code in a version control system, such as GitHub, Bitbucket, AWS CodeCommit, or GitLab.&lt;br&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt;: A CI/CD pipeline management tool; alternatives include Jenkins, Bitbucket Pipelines, AWS CodeBuild, etc.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Create Terraform Backend S3 Bucket
&lt;/h3&gt;

&lt;p&gt;As Terraform uses persisted state data to keep track of the resources it manages, we use a backend to store state remotely. An S3 bucket is commonly used to store the state of Terraform infrastructure in AWS. You can create an S3 bucket manually from the AWS console: click the Create bucket button, enter a meaningful bucket name, keep all configuration as default to make it simple, and click Create bucket. &lt;/p&gt;

&lt;p&gt;The bucket name will be used in Step 3 workflow environment variable &lt;strong&gt;TF_BACKEND_S3_BUCKET&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Attach Policy on Deployment IAM Role
&lt;/h3&gt;

&lt;p&gt;Remember that we created a dedicated IAM role for deployment named &lt;strong&gt;GitHubAction-AssumeRoleWithAction&lt;/strong&gt; in the previous blog. In order to provision all the AWS infrastructure used in this demo, you should attach policies to the role. The easiest way is to attach the AWS managed policy &lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt;, which grants full access to all buckets in your AWS account.&lt;/p&gt;

&lt;p&gt;Once done, let's move to the coding part. You can find the sample code in the GitHub repo: &lt;a href="https://github.com/camillehe1992/demo-for-aws-deployment-via-oidc"&gt;https://github.com/camillehe1992/demo-for-aws-deployment-via-oidc&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1. Infrastructure as Code
&lt;/h2&gt;

&lt;p&gt;I use Terraform to define all AWS infrastructure as code, so that AWS resources can be provisioned automatically through the Actions workflow and managed in GitHub. I won't dive into the Terraform code, as that's not in scope. All Terraform-related files are in the terraform directory, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;└── terraform
    ├── local.tf
    ├── main.tf
    ├── mime.json
    ├── outputs.tf
    ├── prod.tfvars
    ├── providers.tf
    └── variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
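&lt;p&gt;For reference, a minimal sketch of S3 static website hosting in Terraform might look like the following (the bucket name is a placeholder, and the repo's actual configuration may differ):&lt;/p&gt;

```hcl
resource "aws_s3_bucket" "website" {
  bucket = "my-demo-website-bucket" # illustrative name
}

resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}

# The endpoint printed at the end of `terraform apply`
output "website_endpoint" {
  value = aws_s3_bucket_website_configuration.website.website_endpoint
}
```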



&lt;h2&gt;
  
  
  Step 2. Website Static Content
&lt;/h2&gt;

&lt;p&gt;Now, let's prepare our demo website content. In the public directory, create index.html, 404.html, and image files. All files in this directory are uploaded to the S3 bucket as the website's static content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── public
│   ├── 404.html
│   ├── images
│   │   ├── coffee.jpg
│   │   └── dogs.jpg
│   └── index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3. GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;I created a new Actions workflow named &lt;strong&gt;deploy.yaml&lt;/strong&gt; in &lt;strong&gt;.github/workflows&lt;/strong&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── .github
│   └── workflows
│       ├── deploy.yaml
│       └── get-started.yaml
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compared with the &lt;strong&gt;get-started.yaml&lt;/strong&gt; workflow, here are the main updates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add new environment variables in the env block and configure them as repository variables under GitHub Settings -&amp;gt; Secrets and variables -&amp;gt; Actions.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;TF_BACKEND_S3_BUCKET&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ vars.TF_BACKEND_S3_BUCKET }}&lt;/span&gt;
  &lt;span class="s"&gt;ENVIRONMENT&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
  &lt;span class="s"&gt;NICKNAME&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-for-aws-deployment-via-oidc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw1veitrq0fj702gxul9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw1veitrq0fj702gxul9.png" alt="add-variable" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TF_BACKEND_S3_BUCKET&lt;/strong&gt;: the S3 bucket name for Terraform state files.&lt;br&gt;
&lt;strong&gt;ENVIRONMENT&lt;/strong&gt;: part of the Terraform backend S3 object key; it is also used as the suffix of the website bucket name.&lt;br&gt;
&lt;strong&gt;NICKNAME&lt;/strong&gt;: part of the Terraform backend S3 object key.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add three steps to the job block after the authentication step:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform init&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform init -reconfigure \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="bucket=$TF_BACKEND_S3_BUCKET" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="region=$AWS_REGION" \&lt;/span&gt;
            &lt;span class="s"&gt;-backend-config="key=$NICKNAME/prod/$AWS_REGION/terraform.tfstate"&lt;/span&gt;
      &lt;span class="c1"&gt;# An exit code of 0 indicated no changes, 1 a terraform failure, 2 there are pending changes.&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform plan&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tf-plan&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;export exitcode=0&lt;/span&gt;

          &lt;span class="s"&gt;terraform plan \&lt;/span&gt;
            &lt;span class="s"&gt;-var-file=$ENVIRONMENT.tfvars -detailed-exitcode -no-color -out tfplan || export exitcode=$?&lt;/span&gt;

          &lt;span class="s"&gt;echo "exitcode=$exitcode" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

          &lt;span class="s"&gt;if [ $exitcode -eq 1 ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo Terraform Plan Failed!&lt;/span&gt;
            &lt;span class="s"&gt;exit 1&lt;/span&gt;
          &lt;span class="s"&gt;else&lt;/span&gt;
            &lt;span class="s"&gt;exit 0&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;
      &lt;span class="c1"&gt;# Apply the pending changes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform apply&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.tf-plan.outputs.exitcode == 2 }}&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform apply -auto-approve tfplan -no-color&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Steps:&lt;br&gt;
&lt;strong&gt;Terraform init&lt;/strong&gt;: Run the terraform init CLI with the backend configuration.&lt;br&gt;
&lt;strong&gt;Terraform plan&lt;/strong&gt;: Run the terraform plan CLI to generate a plan.&lt;br&gt;
&lt;strong&gt;Terraform apply&lt;/strong&gt;: Run the terraform apply CLI to apply the plan if there are pending changes.&lt;/p&gt;

&lt;p&gt;Commit and push to the remote. A new workflow named &lt;strong&gt;Deploy Static Website&lt;/strong&gt; appears under Actions with a running build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3afhgpgzv571g2uttve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3afhgpgzv571g2uttve.png" alt="deploy-workflow" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the detailed steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl7lsw0b3j38nv4p9tom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffl7lsw0b3j38nv4p9tom.png" alt="deploy-workflow-detail-steps" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All content in the S3 bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gbsa3zhoem6uu3sdheb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gbsa3zhoem6uu3sdheb.png" alt="s3-objects" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the &lt;em&gt;website_endpoint&lt;/em&gt; at the end of the Terraform apply logs. Or go to the AWS console, find your website bucket -&amp;gt; Properties, and scroll to the bottom of the page, where you will find the bucket website endpoint. Depending on your Region, your Amazon S3 website endpoint follows one of these two formats.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://bucket-name.s3-website-Region.amazonaws.com"&gt;http://bucket-name.s3-website-Region.amazonaws.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://bucket-name.s3-website.Region.amazonaws.com"&gt;http://bucket-name.s3-website.Region.amazonaws.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can visit your website from a browser now. If you add a path that doesn't exist after the endpoint, a 404 page is returned.&lt;/p&gt;

&lt;p&gt;Correct URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5llk9o2ckasiu6n7cmh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5llk9o2ckasiu6n7cmh7.png" alt="coffee-page" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Incorrect URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foew9dyk53plh4kost7sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foew9dyk53plh4kost7sy.png" alt="404-page" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You should now know how to provision and manage AWS infrastructure using GitHub and Terraform. You can provision more AWS services, or even other cloud infrastructure, following the same methodology.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html"&gt;https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>githubactions</category>
      <category>s3</category>
    </item>
    <item>
      <title>Deploy Terraform resources to AWS using GitHub Actions via OIDC</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Thu, 27 Jun 2024 01:56:51 +0000</pubDate>
      <link>https://dev.to/camillehe1992/deploy-terraform-resources-to-aws-using-github-actions-via-oidc-3b9g</link>
      <guid>https://dev.to/camillehe1992/deploy-terraform-resources-to-aws-using-github-actions-via-oidc-3b9g</guid>
      <description>&lt;p&gt;The article explains how to configure OpenID Connect within your GitHub Actions workflows to authenticate with AWS, so that the workflow can access AWS resources. The common use case is define AWS infrastructure as code, using CloudFormation, CDK or Terraform, etc, then sync the infrastructure update in AWS through workflows on each code change commit. As we only focus on the IODC setup here, to make it simple, the demo workflow authenticates to AWS account firstly, then list all S3 buckets in that account.&lt;/p&gt;

&lt;p&gt;So, what is OpenID Connect? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://openid.net/developers/how-connect-works/" rel="noopener noreferrer"&gt;OpenID Connect (OIDC)&lt;/a&gt; is an identity authentication protocol that is an extension of open authorization (OAuth) 2.0 to standardize the process for authenticating and authorizing users when they sign in to access digital services. OIDC provides authentication, which means verifying that users are who they say they are.&lt;/p&gt;

&lt;p&gt;Let's talk about how OpenID Connect works with identity provider and federation.&lt;/p&gt;

&lt;p&gt;In AWS, when managing user identities outside of AWS, you can use identity providers instead of creating IAM users in your AWS account. With an identity provider (IdP), you can give these external user identities permissions (defined in an IAM role) to use AWS resources in your account. An external IdP provides identity information to AWS using either OpenID Connect (OIDC) or SAML 2.0. Identity providers help keep your AWS account secure because you don't have to distribute or embed long-term security credentials, such as access keys, in your application. &lt;/p&gt;

&lt;p&gt;In the demo, GitHub is an external identity provider for AWS. GitHub Actions workflows can be treated as external user identities.&lt;/p&gt;

&lt;p&gt;The core process is to authenticate with AWS using temporary credentials within your GitHub Actions workflows. It contains the following steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, establish trust between the AWS account and GitHub by adding the GitHub identity provider in the AWS IAM service.&lt;/li&gt;
&lt;li&gt;Create an IAM role that can be assumed by the newly added identity provider.&lt;/li&gt;
&lt;li&gt;The GitHub Actions workflow assumes the IAM role:

&lt;ul&gt;
&lt;li&gt;The workflow retrieves a JWT token from GitHub.&lt;/li&gt;
&lt;li&gt;The workflow makes an AssumeRoleWithWebIdentity call to the AWS STS service with the JWT token.&lt;/li&gt;
&lt;li&gt;AWS STS validates the trust relationship and returns temporary AWS credentials that map to the IAM role, with permissions to access specific resources in the AWS account.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The workflow accesses AWS resources.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a diagram that displays the authentication process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdclfbzm6zpvn8k75pq28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdclfbzm6zpvn8k75pq28.png" alt="auth-process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permission to create an OIDC identity provider, create roles, and attach policies in the AWS IAM service.&lt;/li&gt;
&lt;li&gt;A GitHub account to create a repository and workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solution Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 &amp;amp; 2 Add GitHub IdP &amp;amp; Create IAM Role in AWS Account
&lt;/h3&gt;

&lt;p&gt;You can follow the process described in this &lt;a href="https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;, just like I did. I won't repeat the process here because the blog is clear and understandable. &lt;/p&gt;

&lt;p&gt;After completing these steps, you will have an identity provider named &lt;strong&gt;token.actions.githubusercontent.com&lt;/strong&gt; of the OpenID Connect type, and an IAM role named &lt;strong&gt;GitHubAction-AssumeRoleWithAction&lt;/strong&gt; with a trust relationship. &lt;/p&gt;

&lt;p&gt;Identity Provider&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq95fz1362glmb0fbeiuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq95fz1362glmb0fbeiuv.png" alt="IdP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IAM Role -&amp;gt; Trust Relationship&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci4a7qsos7dlsxz49usu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci4a7qsos7dlsxz49usu.png" alt="trust-replationship"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3. Create GitHub Actions Workflow
&lt;/h3&gt;

&lt;p&gt;In your GitHub repository, create a workflow yaml file named &lt;strong&gt;get-started.yaml&lt;/strong&gt; in &lt;strong&gt;.github/workflows&lt;/strong&gt; directory with below code. &lt;/p&gt;

&lt;p&gt;Update &lt;strong&gt;YOUR_AWS_ACCOUNT_REGION&lt;/strong&gt; and &lt;strong&gt;YOUR_AWS_ACCOUNT_ID&lt;/strong&gt; with your real values. Update the branches in the code if you want to trigger the workflow from other branches.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# This is a basic workflow to help you get started with Actions&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get Started&lt;/span&gt;
&lt;span class="c1"&gt;# Controls when the action will run. Invokes the workflow on push events but only for the main branch&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;      - main&lt;/span&gt;
&lt;span class="na"&gt;  pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;      - main&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_AWS_ACCOUNT_REGION&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;  ROLE_TO_ASSUME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::&amp;lt;YOUR_AWS_ACCOUNT_ID&amp;gt;:role/GitHubAction-AssumeRoleWithAction&lt;/span&gt;
&lt;span class="na"&gt;  ROLE_SESSION_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitHub_to_AWS_via_FederatedOIDC&lt;/span&gt;
&lt;span class="c1"&gt;# Permission can be added at job level or workflow level&lt;/span&gt;
&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt; &lt;span class="c1"&gt;# This is required for requesting the JWT&lt;/span&gt;
&lt;span class="na"&gt;  contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt; &lt;span class="c1"&gt;# This is required for actions/checkout&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  AssumeRoleAndCallIdentity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
&lt;span class="na"&gt;    steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Git clone the repository&lt;/span&gt;
&lt;span class="na"&gt;        uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;configure aws credentials&lt;/span&gt;
&lt;span class="na"&gt;        uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
&lt;span class="na"&gt;        with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;          role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.ROLE_TO_ASSUME }}&lt;/span&gt;
&lt;span class="na"&gt;          role-session-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.ROLE_SESSION_NAME }}&lt;/span&gt;
&lt;span class="na"&gt;          aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.AWS_REGION }}&lt;/span&gt;
&lt;span class="s"&gt;     &lt;/span&gt; &lt;span class="c1"&gt;# Hello from AWS: WhoAmI&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sts GetCallerIdentity&lt;/span&gt;
&lt;span class="na"&gt;        run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="s"&gt;         aws sts get-caller-identity&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Commit and push to the remote. A build is triggered automatically. The screenshot below shows that the workflow assumed the IAM role ROLE_TO_ASSUME successfully. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsmmuy8f4wfq6x1sr9ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsmmuy8f4wfq6x1sr9ks.png" alt="workflow-get-started"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this moment, the IAM role has no policies attached, which means your workflow has no permissions on AWS resources. &lt;/p&gt;

&lt;p&gt;Next, I'm going to use the workflow to list all buckets in my AWS account. To enable this, we need to attach an IAM policy with the necessary permission to the IAM role &lt;strong&gt;GitHubAction-AssumeRoleWithAction&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the AWS IAM console, find the role, then choose Add permissions -&amp;gt; Create inline policy. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia9v4uk6ll90qy062w37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia9v4uk6ll90qy062w37.png" alt="create-inline-policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select a service: S3&lt;/li&gt;
&lt;li&gt;Action allowed: ListAllMyBuckets&lt;/li&gt;
&lt;li&gt;Resources: All&lt;/li&gt;
&lt;li&gt;Next&lt;/li&gt;
&lt;li&gt;Policy name: AllowListAllMyBuckets&lt;/li&gt;
&lt;li&gt;Create Policy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesdztzxeann2aabq79z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesdztzxeann2aabq79z.png" alt="role-policies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that to list all buckets in the account, we choose the action s3:ListAllMyBuckets, not s3:ListBucket (which controls listing the objects within a single bucket).&lt;/p&gt;
&lt;/blockquote&gt;
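&lt;p&gt;For reference, the inline policy created through the console above is equivalent to the following JSON policy document (the Sid is illustrative):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListAllMyBuckets",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```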

&lt;p&gt;Now that the inline policy is attached to the IAM role, add a new step after &lt;strong&gt;Sts GetCallerIdentity&lt;/strong&gt; to list all buckets using the AWS CLI. The step is named &lt;strong&gt;List All S3 Buckets&lt;/strong&gt; and runs the shell command "aws s3 ls" to list all buckets in your AWS account.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

...
- name: List All S3 Buckets
  run: aws s3 ls


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Commit and push the change to the remote. A new build is triggered automatically. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq6uj96bf02x1g5t7ujj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq6uj96bf02x1g5t7ujj.png" alt="workflow-list-all-buckets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the principle of least privilege, I granted only the s3:ListAllMyBuckets permission to the role. The policies attached to the role determine what the federated users are allowed to do in AWS, so in daily work you should grant only the permissions your workflows actually need. &lt;/p&gt;

&lt;h2&gt;
  
  
  Others...
&lt;/h2&gt;

&lt;p&gt;Since we are talking about best practices, let's enhance our workflow by replacing hard-coded or sensitive environment variables with GitHub secrets and variables. I'm going to save these values under &lt;strong&gt;Settings -&amp;gt; Secrets and variables -&amp;gt; Actions&lt;/strong&gt; as repository variables. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4slkxxu1us4mayzei64q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4slkxxu1us4mayzei64q.png" alt="add-variables"&gt;&lt;/a&gt;&lt;/p&gt;
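&lt;p&gt;If you prefer the terminal, the same repository variables can be created with the GitHub CLI; the region and account ID below are placeholders:&lt;/p&gt;

```shell
# Create the repository variables used by the workflow (run inside the repo directory)
gh variable set AWS_REGION --body "us-east-1"
gh variable set ROLE_TO_ASSUME --body "arn:aws:iam::123456789012:role/GitHubAction-AssumeRoleWithAction"
gh variable set ROLE_SESSION_NAME --body "GitHub_to_AWS_via_FederatedOIDC"
```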

&lt;p&gt;Finally, replace the hard-coded environment variables in the workflow.&lt;/p&gt;

&lt;p&gt;Hard-coded:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;xxxxxx&lt;/span&gt;
&lt;span class="na"&gt;ROLE_TO_ASSUME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::xxxxxxxxxxx:role/GitHubAction-AssumeRoleWithAction&lt;/span&gt;
&lt;span class="na"&gt;ROLE_SESSION_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitHub_to_AWS_via_FederatedOIDC&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Retrieved from GitHub repository variables:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ vars.AWS_REGION }}&lt;/span&gt;
&lt;span class="na"&gt;ROLE_TO_ASSUME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ vars.ROLE_TO_ASSUME }}&lt;/span&gt;
&lt;span class="na"&gt;ROLE_SESSION_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ vars.ROLE_SESSION_NAME }}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Commit and push to the remote. A new build is triggered automatically. The workflow retrieves these values from GitHub secrets and variables at the beginning of the job.&lt;/p&gt;

&lt;p&gt;You can find the source code from GitHub repo: &lt;a href="https://github.com/camillehe1992/demo-for-aws-deployment-via-oidc" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/demo-for-aws-deployment-via-oidc&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You have learned how to add GitHub as an identity provider in AWS and create an IAM role with the correct trust relationship. You also created a GitHub Actions workflow that authenticates to AWS and lists all S3 buckets in the account within the build. &lt;/p&gt;

&lt;p&gt;As I mentioned, the common use case is to define AWS infrastructure as code, so that AWS resources are provisioned and managed through GitHub repositories and Actions workflows, i.e. CI/CD pipelines. All changes are traceable and controllable, and easy to repeat and recover, with the power of IaC (Infrastructure as Code) and CI/CD tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services" rel="noopener noreferrer"&gt;https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>githubactions</category>
      <category>aws</category>
    </item>
    <item>
      <title>Mask Sensitive Data using Python Built-in Logging Module</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Tue, 04 Jun 2024 07:19:12 +0000</pubDate>
      <link>https://dev.to/camillehe1992/mask-sensitive-data-using-python-built-in-logging-module-45fa</link>
      <guid>https://dev.to/camillehe1992/mask-sensitive-data-using-python-built-in-logging-module-45fa</guid>
      <description>&lt;p&gt;Logging is an essential and common topic in software development, which play a vital role in monitoring, debugging and troubleshooting applications. If logs are not properly secured or managed, they can become a target for hackers and other malicious actors who may attempt to gain access to this sensitive data. By keeping sensitive data out of logs, you can help protect users' privacy and reduce the risk of data breaches or other security incidents.&lt;/p&gt;

&lt;p&gt;There are many logging best practices across programming languages; this article focuses only on masking sensitive data in Python using the built-in &lt;a href="https://docs.python.org/3/library/logging.html" rel="noopener noreferrer"&gt;logging&lt;/a&gt; module.&lt;/p&gt;

&lt;p&gt;Here is the demo &lt;a href="https://github.com/camillehe1992/python-practice/tree/master/python-toolkit/central_logging" rel="noopener noreferrer"&gt;source code&lt;/a&gt; in GitHub.&lt;/p&gt;

&lt;p&gt;Let's get started.&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  Initialize Logging Configuration
&lt;/h2&gt;

&lt;p&gt;First of all, create a file named &lt;strong&gt;log.py&lt;/strong&gt; with the logging configuration below. The output is formatted according to the formatter argument, which defaults to "console". &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# import modules
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging.config&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;init_logging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DEBUG&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;formatter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;console&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;LOG_CONFIG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;version&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;handlers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logging.StreamHandler&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stream&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ext://sys.stdout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formatter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;formatter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formatters&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;msg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(message)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(levelname)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="s"&gt;file&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(filename)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;line&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:%(lineno)d,&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="s"&gt;module&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(module)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;func&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(funcName)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;datefmt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%SZ&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;console&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%(asctime)s %(levelname)s : %(message)s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;datefmt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%SZ&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;root&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;handlers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Formatter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;converter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gmtime&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dictConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LOG_CONFIG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, create a file named &lt;strong&gt;main.py&lt;/strong&gt; with the code below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;log&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;init_logging&lt;/span&gt;

&lt;span class="n"&gt;LOG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_logging&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;test_case&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;John [513-84-7329] made a payment with credit card 1234-5678-9012-3456.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Outputs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

2024-06-03T06:45:38Z INFO : John [513-84-7329] made a payment with credit card 1234-5678-9012-3456.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Obviously, the logs contain sensitive data: &lt;strong&gt;513-84-7329&lt;/strong&gt; is a U.S. Social Security number, and &lt;strong&gt;1234-5678-9012-3456&lt;/strong&gt; is a credit card number. Such data is highly confidential and should be masked with a series of asterisks or replaced with a hash value.&lt;/p&gt;
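&lt;p&gt;Before wiring this into the logging configuration, the masking itself can be sketched with a plain re.sub loop. The "mask" helper and the simplified patterns below are illustrative only; real-world SSN and credit card detection usually needs stricter validation:&lt;/p&gt;

```python
import re

# Illustrative regex patterns for the two kinds of sensitive data above.
PATTERNS = [
    r"\d{3}-\d{2}-\d{4}",        # U.S. Social Security numbers
    r"\d{4}-\d{4}-\d{4}-\d{4}",  # Credit card numbers
]

def mask(message: str) -> str:
    """Replace every match of the patterns with asterisks."""
    for pattern in PATTERNS:
        message = re.sub(pattern, "******", message)
    return message

print(mask("John [513-84-7329] made a payment with credit card 1234-5678-9012-3456."))
# -> John [******] made a payment with credit card ******.
```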

&lt;blockquote&gt;
&lt;p&gt;Although filters are used primarily to filter records based on more sophisticated criteria than levels, they get to see every record which is processed by the handler or logger they’re attached to: this can be useful if you want to do things like counting how many records were processed by a particular logger or handler, or adding, changing or removing attributes in the LogRecord being processed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;br&gt;
&lt;em&gt;From logging.Filter &lt;a href="https://docs.python.org/3/library/logging.html#filter-objects" rel="noopener noreferrer"&gt;https://docs.python.org/3/library/logging.html#filter-objects&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
 &lt;br&gt;
According to the above description, I use &lt;strong&gt;logging.Filter&lt;/strong&gt; to mask sensitive data in logs.&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  Setup logging.Filter in Logger Configuration
&lt;/h2&gt;

&lt;p&gt;Create a filter and register it in the logging configuration by adding the code below to the &lt;strong&gt;log.py&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;


&lt;span class="c1"&gt;# 1. Define the regex patterns of sensitive data
&lt;/span&gt;&lt;span class="n"&gt;regex_patterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="c1"&gt;# U.S. Social Security numbers
&lt;/span&gt;&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;\d{3}-\d{2}-\d{4}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="c1"&gt;# Credit card numbers
&lt;/span&gt;&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;\d{4}-\d{4}-\d{4}-\d{4}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;
&lt;span class="c1"&gt;# 2. Define a filter class to mask sensitive data that match the pre-defined regex patterns
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SensitiveDataFilter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;patterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;regex_patterns&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mask_sensitive_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pattern&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;patterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Add filter configuration in LOG_CONFIG
&lt;/span&gt;&lt;span class="n"&gt;LOG_CONFIG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;version&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;handlers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="bp"&gt;...&lt;/span&gt;
                &lt;span class="c1"&gt;# setup filters
&lt;/span&gt;&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filters&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensitive_data_filter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="c1"&gt;# add filters in configuration
&lt;/span&gt;&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filters&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensitive_data_filter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;()&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SensitiveDataFilter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;formatters&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{...},&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;root&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{...},&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
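&lt;p&gt;Since the handler and formatter entries are elided above, it may help to see one complete, minimal configuration. The sketch below is illustrative (the handler settings and the SSN pattern are my own, not the article's full &lt;strong&gt;LOG_CONFIG&lt;/strong&gt;); it shows how the &lt;strong&gt;"()"&lt;/strong&gt; key tells dictConfig to instantiate the filter class:&lt;/p&gt;

```python
import logging
import logging.config
import re


class SensitiveDataFilter(logging.Filter):
    # Illustrative pattern (SSN); the article defines its own regex_patterns.
    patterns = [r"\d{3}-\d{2}-\d{4}"]

    def filter(self, record):
        record.msg = self.mask_sensitive_data(record.msg)
        return True

    def mask_sensitive_data(self, message):
        for pattern in self.patterns:
            message = re.sub(pattern, "******", message)
        return message


LOG_CONFIG = {
    "version": 1,
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            # attach the filter to the handler by name
            "filters": ["sensitive_data_filter"],
        },
    },
    "filters": {
        # "()" makes dictConfig call SensitiveDataFilter() as a factory
        "sensitive_data_filter": {"()": SensitiveDataFilter},
    },
    "root": {"level": "DEBUG", "handlers": ["stdout"]},
}

logging.config.dictConfig(LOG_CONFIG)
logging.debug("SSN 513-84-7329 detected")  # emitted as "SSN ****** detected"
```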

&lt;p&gt;Replace &lt;strong&gt;main.py&lt;/strong&gt; with the code below. We print the message to the console using three string-formatting methods for comparison; all of them are masked correctly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read &lt;a href="https://realpython.com/python-string-formatting/" rel="noopener noreferrer"&gt;Python String Formatting Best Practices&lt;/a&gt; if you are interested in string formatting in Python.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;log&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;init_logging&lt;/span&gt;

&lt;span class="n"&gt;LOG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_logging&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;test_case&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;John [513-84-7329] made a payment with credit card 1234-5678-9012-3456.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;use f-string: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;use str.format: {}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;use string modulo method: %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Outputs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

2024-06-03T07:05:26Z DEBUG : use f-string: John [******] made a payment with credit card ******.
2024-06-03T07:05:26Z DEBUG : use str.format: John [******] made a payment with credit card ******.
2024-06-03T07:05:26Z DEBUG : use string modulo method: John [******] made a payment with credit card ******.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Updates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the regex patterns for sensitive data.&lt;/li&gt;
&lt;li&gt;Define a filter class &lt;strong&gt;SensitiveDataFilter&lt;/strong&gt; to mask sensitive data that matches the regex patterns.&lt;/li&gt;
&lt;li&gt;Add and set up the filter configuration in &lt;strong&gt;LOG_CONFIG&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The masking process is straightforward: we define regex patterns for the sensitive data and replace every match with asterisks.&lt;/p&gt;
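&lt;p&gt;As a standalone sketch of that process (the patterns below are illustrative; the article defines its own &lt;em&gt;regex_patterns&lt;/em&gt; earlier), the substitution step can be exercised directly:&lt;/p&gt;

```python
import re

# Illustrative patterns; the article defines its own regex_patterns.
patterns = [
    r"\d{3}-\d{2}-\d{4}",        # SSN, e.g. 513-84-7329
    r"\d{4}-\d{4}-\d{4}-\d{4}",  # credit card number
]


def mask(message: str) -> str:
    # Replace every match of every pattern with asterisks.
    for pattern in patterns:
        message = re.sub(pattern, "******", message)
    return message


print(mask("John [513-84-7329] made a payment with credit card 1234-5678-9012-3456."))
# -> John [******] made a payment with credit card ******.
```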

&lt;h2&gt;
  
  
  Mask sensitive data in dictionary values
&lt;/h2&gt;

&lt;p&gt;In the examples above, the sensitive data follows regex patterns that are easy to define. Sometimes, however, the sensitive data is a random string. For example, we need to print the test_case below and mask only the password, and we cannot write a regex pattern for the password string 'fFwpUd!CJT4'.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;test_case&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;John Doe&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;fFwpUd!CJT4&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After diving into the logging module, I found two solutions for masking sensitive data in a dictionary. They may not be the best, but they solved my problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1. Mask sensitive data in record.msg using key-based regex patterns
&lt;/h3&gt;

&lt;p&gt;The first solution uses regex patterns as well, but this time the sensitive data is located by its key in the dictionary.&lt;/p&gt;

&lt;p&gt;Let's define an updated filter class, &lt;strong&gt;SensitiveDataFilter2&lt;/strong&gt;. Add the code below to the &lt;strong&gt;log.py&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;


&lt;span class="c1"&gt;# Define a list of keys that values are sensitive data
&lt;/span&gt;&lt;span class="n"&gt;sensitive_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;credentials&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="c1"&gt;# mask sensitive data in record.msg
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SensitiveDataFilter2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;patterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;regex_patterns&lt;/span&gt;
    &lt;span class="n"&gt;sensitive_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sensitive_keys&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pattern&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;patterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# replace sensitive data with asterisks
&lt;/span&gt;        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sensitive_keys&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;pattern_str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;rf&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[^&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;]+&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;
            &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;
            &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern_str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Updates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define a tuple of keys whose values are sensitive data. These keys are used to locate the sensitive values in the dictionary.&lt;/li&gt;
&lt;li&gt;Mask sensitive values in the dictionary by processing record.msg. In &lt;em&gt;mask_sensitive_msg&lt;/em&gt;, we define the pattern string &lt;strong&gt;'{key}': '[^']+'&lt;/strong&gt; and replace each match with &lt;strong&gt;'{key}': '******'&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Outputs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

2024-06-03T07:33:15Z DEBUG : use f-string: {'username': 'John Doe', 'password': '******'}
2024-06-03T07:33:15Z DEBUG : use str.format: {'username': 'John Doe', 'password': '******'}
2024-06-03T07:33:15Z DEBUG : use string modulo method: {'username': 'John Doe', 'password': '******'}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
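&lt;p&gt;As a standalone sketch, the key-based substitution reduces to the following (the keys and input are illustrative):&lt;/p&gt;

```python
import re

# Keys whose values should be masked (illustrative subset)
sensitive_keys = ("password", "token")


def mask_dict_repr(message: str) -> str:
    # Match "'key': '...value...'" in the dict's string form and mask the value.
    for key in sensitive_keys:
        pattern_str = rf"'{key}': '[^']+'"
        replace = f"'{key}': '******'"
        message = re.sub(pattern_str, replace, message)
    return message


print(mask_dict_repr("{'username': 'John Doe', 'password': 'fFwpUd!CJT4'}"))
# -> {'username': 'John Doe', 'password': '******'}
```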

&lt;p&gt;In this solution, the dictionary object is essentially treated as part of the log message; the only difference is the regex patterns. However, it is not robust from my perspective and can fail in special scenarios, so I tried another solution.&lt;/p&gt;
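&lt;p&gt;One such scenario: if the value itself contains a single quote, Python's repr switches to double quotes around that value, and the pattern &lt;strong&gt;'{key}': '[^']+'&lt;/strong&gt; no longer matches, so the secret leaks. A quick check (the input is illustrative):&lt;/p&gt;

```python
import re


def mask_password(message: str) -> str:
    # Same key-based pattern as Option 1, for the "password" key only.
    return re.sub(r"'password': '[^']+'", "'password': '******'", message)


# repr() renders a string containing a single quote with double quotes,
# so the single-quote pattern finds no match and the password is printed as-is.
leaked = mask_password(repr({"password": "it's-a-secret"}))
print(leaked)
```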

&lt;h3&gt;
  
  
  Option 2. Mask sensitive data in record.args
&lt;/h3&gt;

&lt;p&gt;While going through the logging module documentation, I found that, for a &lt;strong&gt;LogRecord&lt;/strong&gt;, the primary information is passed in &lt;em&gt;msg&lt;/em&gt; and &lt;em&gt;args&lt;/em&gt;, which are combined using &lt;em&gt;msg % args&lt;/em&gt; to create the message attribute of the record. With &lt;em&gt;args&lt;/em&gt;, we can process the sensitive data before it is merged into &lt;em&gt;msg&lt;/em&gt;. Here is the description of the &lt;em&gt;msg&lt;/em&gt; and &lt;em&gt;args&lt;/em&gt; fields of a &lt;strong&gt;LogRecord&lt;/strong&gt; object.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;msg (Any)&lt;/strong&gt; – The event description message, which can be a %-format string with placeholders for variable data, or an arbitrary object (see Using arbitrary objects as messages).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;args (tuple | dict[str, Any])&lt;/strong&gt; – Variable data to merge into the msg argument to obtain the event description.&lt;/li&gt;
&lt;/ul&gt;
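&lt;p&gt;This deferred merge is easy to verify: when a single dict is passed as the only argument, logging stores it directly as &lt;em&gt;record.args&lt;/em&gt; (not wrapped in a tuple), which is exactly what the dict branch of the filter relies on. A small check (the logger name and filter are illustrative):&lt;/p&gt;

```python
import logging

captured = {}


class CaptureFilter(logging.Filter):
    def filter(self, record):
        # Inspect the record before msg % args is applied by the formatter.
        captured["msg"] = record.msg
        captured["args"] = record.args
        return True


logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(CaptureFilter())
logger.addHandler(handler)

logger.debug("use args: %s", {"username": "John Doe", "password": "xyz"})
print(captured["msg"])   # the raw format string, still unmerged
print(captured["args"])  # the dict itself, available for masking
```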

&lt;p&gt;Interesting, right? Let's update the &lt;strong&gt;SensitiveDataFilter&lt;/strong&gt; class as follows to make it work.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SensitiveDataFilter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;patterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;regex_patterns&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="n"&gt;sensitive_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sensitive_keys&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_args&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mask_sensitive_args&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="n"&gt;new_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;sensitive_keys&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="n"&gt;new_args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="c1"&gt;# mask sensitive data in dict values
&lt;/span&gt;&lt;span class="err"&gt;                   &lt;/span&gt; &lt;span class="n"&gt;new_args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;new_args&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="c1"&gt;# when there are multi arg in record.args
&lt;/span&gt;&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;arg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mask_sensitive_msg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="c1"&gt;# mask sensitive data in multi record.args
&lt;/span&gt;&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mask_sensitive_args&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pattern&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;patterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;           &lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sensitive_keys&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="n"&gt;pattern_str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;rf&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[^&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;]+&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="n"&gt;replace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;******&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;
&lt;span class="err"&gt;               &lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern_str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;       &lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &lt;strong&gt;main.py&lt;/strong&gt; with the code below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;log&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;init_logging&lt;/span&gt;

&lt;span class="n"&gt;LOG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_logging&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;test_case&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;John Doe&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;xyz&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;use args: %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;use multi args: %s %s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_case&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Outputs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

2024-06-03T09:41:02Z DEBUG : use args: {'username': 'John Doe', 'password': '******'}
2024-06-03T09:41:02Z DEBUG : use multi args: {'username': 'John Doe', 'password': '******'} {'username': 'John Doe', 'password': '******'}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's go through the updates.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I created a new function named &lt;em&gt;mask_sensitive_args&lt;/em&gt; to process the variable data in &lt;strong&gt;record.args&lt;/strong&gt;:&lt;/li&gt;
&lt;li&gt;When args is a dictionary: iterate over its key-value pairs and replace a value with asterisks if its key is in &lt;strong&gt;sensitive_keys&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;When args is a tuple: iterate over its items; each item may be a string or a dict.

&lt;ul&gt;
&lt;li&gt;If the item is a string, mask sensitive data using the regex patterns&lt;/li&gt;
&lt;li&gt;If the item is a dict, mask it with &lt;em&gt;mask_sensitive_args&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3s27nf9i6y4zc8snokl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3s27nf9i6y4zc8snokl.png" alt="Process Logic Flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I modified the existing function &lt;em&gt;mask_sensitive_msg&lt;/em&gt; to support the multiple-args scenario.&lt;/li&gt;
&lt;li&gt;I wrapped the filter function in a try/except block: any exception raised while masking is caught, and the record is logged without masking rather than being lost.&lt;/li&gt;
&lt;/ol&gt;
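&lt;p&gt;A minimal sketch of that fail-open filter follows. The &lt;code&gt;mask_record_args&lt;/code&gt; helper here is a stand-in for the article's masking functions, and the sensitive-key set is illustrative.&lt;/p&gt;

```python
import logging

SENSITIVE_KEYS = {"password"}  # illustrative placeholder

def mask_record_args(args):
    # Minimal stand-in for the article's masking helpers.
    if isinstance(args, dict):
        return {k: "******" if k in SENSITIVE_KEYS else v
                for k, v in args.items()}
    return args

class MaskingFilter(logging.Filter):
    def filter(self, record):
        try:
            if record.args:
                record.args = mask_record_args(record.args)
        except Exception:
            # Fail open: on any masking error, emit the record
            # unmodified rather than dropping the log line.
            pass
        return True

logger = logging.getLogger("demo")
logger.addFilter(MaskingFilter())
```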

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;When masking sensitive data in logs, use a regex pattern if the data follows a common format, for example an SSN or a credit card number.&lt;/li&gt;
&lt;li&gt;When masking sensitive data (arbitrary strings) in a dictionary, use the sensitive key names to locate the values, or process &lt;strong&gt;record.args&lt;/strong&gt; directly.&lt;/li&gt;
&lt;li&gt;Always keep an eye on your logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://betterstack.com/community/guides/logging/python/python-logging-best-practices/#8-keep-sensitive-information-out-of-logs" rel="noopener noreferrer"&gt;Keep sensitive information out of logs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading. Looking forward to your comments and ideas.&lt;/p&gt;

</description>
      <category>python</category>
      <category>logging</category>
    </item>
    <item>
      <title>Setup containerized Application in AWS ECS - Part 3/3</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Fri, 08 Dec 2023 05:38:46 +0000</pubDate>
      <link>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-33-3fee</link>
      <guid>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-33-3fee</guid>
<description>&lt;p&gt;In the previous &lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-23-2of3"&gt;Part 2/3&lt;/a&gt;, I introduced how to set up an ECS cluster in AWS ECS. Now we will dive into the next, and also the final, topic: setting up the ECS service and related resources in AWS ECS. &lt;/p&gt;

&lt;h2&gt;
  
  
  ⚓ Amazon ECS Concepts
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkslp9w2al5gaavzny9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkslp9w2al5gaavzny9n.png" alt="arch-diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS resources in the green box are created in this part. They include:&lt;/p&gt;

&lt;h3&gt;
  
  
  ECS Task Definition &amp;amp; ECS Service
&lt;/h3&gt;

&lt;p&gt;A task definition is a blueprint for your application. It is a text file in JSON format that describes the parameters and one or more containers that form your application.&lt;/p&gt;

&lt;p&gt;ECS service is used to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster.&lt;/p&gt;

&lt;p&gt;The ECS task definition is the core component of your containerized application. All container-related parameters are defined in the &lt;code&gt;container_definitions&lt;/code&gt; parameter of the &lt;code&gt;aws_ecs_task_definition&lt;/code&gt; resource: for example, the Docker image, resource (CPU, memory) allocation, log configuration, and container environment variables. &lt;/p&gt;
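&lt;p&gt;As a hedged sketch (not the repository's actual code; the image, ports, sizes, and referenced resources such as &lt;code&gt;aws_ecs_cluster.this&lt;/code&gt; are assumptions), a task definition and service in Terraform might look like:&lt;/p&gt;

```hcl
# Illustrative only; image, ports, sizes, and the referenced
# aws_ecs_cluster.this / aws_lb_target_group.this are assumptions.
resource "aws_ecs_task_definition" "app" {
  family                = "strapi"
  container_definitions = jsonencode([
    {
      name         = "strapi"
      image        = "camillehe1992/strapi:latest"
      cpu          = 256
      memory       = 512
      essential    = true
      portMappings = [{ containerPort = 1337, hostPort = 0 }]
      environment  = [{ name = "NODE_ENV", value = "production" }]
    }
  ])
}

resource "aws_ecs_service" "app" {
  name            = "strapi"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2

  load_balancer {
    target_group_arn = aws_lb_target_group.this.arn
    container_name   = "strapi"
    container_port   = 1337
  }
}
```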

&lt;h3&gt;
  
  
  ALB &amp;amp; Target Group &amp;amp; Listener
&lt;/h3&gt;

&lt;p&gt;A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets.&lt;/p&gt;

&lt;p&gt;A listener checks for connection requests from clients, using the protocol and port that you configure.&lt;/p&gt;

&lt;p&gt;Each target group routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify. &lt;/p&gt;

&lt;p&gt;Now we have some containers running in the ECS cluster. You could access a specific container via the host port on the public IP address of the EC2 instance it runs on, but that is not recommended. Instead, we use an ALB to route traffic to the containers. All containers for a given ECS service are registered in a target group, and with a listener the load balancer can forward traffic to that target group. Here is a diagram of the relationship among the load balancer, listener, and target group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5m1y444tls5rx0u54n9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5m1y444tls5rx0u54n9.png" alt="alb-tg-listener"&gt;&lt;/a&gt;&lt;/p&gt;
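&lt;p&gt;In Terraform, that chain might be sketched as follows. This is illustrative only: the names, ports, and variables are assumptions, not the repository's actual code.&lt;/p&gt;

```hcl
# Illustrative sketch; names, ports, and variables are assumed.
resource "aws_lb" "this" {
  name               = "strapi-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "this" {
  name     = "strapi-tg"
  port     = 1337
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = "HTTP"

  # Forward incoming traffic to the target group.
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.this.arn
  }
}
```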

&lt;h3&gt;
  
  
  Application Auto Scaling Target &amp;amp; Policy
&lt;/h3&gt;

&lt;p&gt;Automatic scaling is the ability to increase or decrease the desired count of tasks in your Amazon ECS service automatically. Amazon ECS leverages the Application Auto Scaling service to provide this functionality. Amazon ECS Service Auto Scaling supports the following types of automatic scaling: Target tracking, Step and Scheduled. In the project, I chose Target tracking.&lt;/p&gt;

&lt;p&gt;With target tracking scaling policies, you select a metric and set a target value. In this project, I chose the metric &lt;code&gt;ECSServiceAverageCPUUtilization&lt;/code&gt; and the scalable dimension &lt;code&gt;ecs:service:DesiredCount&lt;/code&gt; with a target value of &lt;code&gt;75&lt;/code&gt;. This means that when the average CPU utilization of the ECS service exceeds 75%, new tasks are launched (scale out) to meet demand, and vice versa. The scaling is driven by CloudWatch alarms. If you go to the CloudWatch Alarms console, you will find two alarms created as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenakukun128naqnrq4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenakukun128naqnrq4o.png" alt="alarms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You provide the scale-out threshold (75), and AWS derives the scale-in threshold (67.5, i.e. 90% of the target value) from it. &lt;/p&gt;
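&lt;p&gt;A minimal Terraform sketch of such a target tracking setup might look like the following. This is illustrative only: the resource names, capacity bounds, and references (e.g. &lt;code&gt;aws_ecs_service.app&lt;/code&gt;) are assumptions, not the repository's actual code.&lt;/p&gt;

```hcl
# Illustrative sketch; names, capacity bounds, and the referenced
# resources (aws_ecs_cluster.this, aws_ecs_service.app) are assumed.
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.this.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 4
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    # Scale out above 75% average CPU; AWS creates the matching alarms.
    target_value = 75

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```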

&lt;h2&gt;
  
  
  Project Source Code
&lt;/h2&gt;

&lt;p&gt;GitHub source code &lt;a href="https://github.com/camillehe1992/containerized-app-in-aws-ecs" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/containerized-app-in-aws-ecs&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;All Terraform-related source code is in the &lt;code&gt;terraform&lt;/code&gt; directory. Go through the &lt;a href="https://github.com/camillehe1992/containerized-app-in-aws-ecs/blob/main/README_TF.md" rel="noopener noreferrer"&gt;README documentation&lt;/a&gt; for the details. Set up a local environment if you want to deploy these AWS resources from your local machine, or use the GitHub Actions workflows instead.&lt;/p&gt;

&lt;p&gt;You can use the Docker image &lt;a href="https://hub.docker.com/repository/docker/camillehe1992/strapi/general" rel="noopener noreferrer"&gt;https://hub.docker.com/repository/docker/camillehe1992/strapi/general&lt;/a&gt; as-is, or build your own Docker image for the Strapi application and push it to a registry following the GitHub README.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strapi
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;https://strapi.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Validate that the task is running successfully from the AWS ECS console.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdc03qhmpf27w2182ec4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdc03qhmpf27w2182ec4.png" alt="ecs-service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, you can access the Strapi application via ALB DNS name, like &lt;code&gt;http://strapi-prod-xxxxxxxxx.aws-region.elb.amazonaws.com/&lt;/code&gt;. Here is the portal of Strapi application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdwo498ngpoyj3ku6cv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdwo498ngpoyj3ku6cv5.png" alt="strapi-portal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After signing up, you can use the admin panel shown below to explore further. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t1va3y6uls0arhxruyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t1va3y6uls0arhxruyq.png" alt="admin-panel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I always appreciate your ideas and comments. Thanks for reading! 😄&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cicd</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Setup containerized Application in AWS ECS - Part 2/3</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Thu, 07 Dec 2023 08:57:50 +0000</pubDate>
      <link>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-23-2of3</link>
      <guid>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-23-2of3</guid>
<description>&lt;p&gt;In the previous &lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-13-2emk"&gt;Part 1/3&lt;/a&gt;, I introduced how to set up a MySQL database in AWS RDS. Now we will dive into the next topic: setting up an ECS cluster in AWS ECS. &lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. This article focuses on the ECS components and how they work together to manage your containerized application in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚓ Amazon ECS Concepts
&lt;/h2&gt;

&lt;p&gt;Here is the architecture diagram of the entire project.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkv5xrbybwx6yn5gwcyp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkv5xrbybwx6yn5gwcyp1.png" alt="arch-overall"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS resources in the green box are created in this part. They include:&lt;/p&gt;

&lt;h3&gt;
  
  
  ECS Cluster &amp;amp; ECS Capacity
&lt;/h3&gt;

&lt;p&gt;An Amazon ECS cluster is a logical grouping of tasks or services. There are two capacity options: EC2 and Fargate. In this project, I set up the ECS cluster on the EC2 launch type. You can find an article about the differences between the EC2 and Fargate launch types at the bottom of this article. &lt;/p&gt;

&lt;h3&gt;
  
  
  Capacity Provider
&lt;/h3&gt;

&lt;p&gt;A capacity provider defines the infrastructure that Amazon ECS scales up and down to provide cluster capacity. For Amazon ECS on Amazon EC2, a capacity provider consists of a capacity provider name and an Auto Scaling group.&lt;/p&gt;
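&lt;p&gt;A hedged Terraform sketch of a capacity provider backed by an Auto Scaling group follows; the resource names (&lt;code&gt;aws_autoscaling_group.ecs&lt;/code&gt;, &lt;code&gt;aws_ecs_cluster.this&lt;/code&gt;) are assumptions, not the repository's actual code.&lt;/p&gt;

```hcl
# Illustrative sketch; aws_autoscaling_group.ecs and aws_ecs_cluster.this
# are assumed to exist elsewhere in the configuration.
resource "aws_ecs_capacity_provider" "ec2" {
  name = "ec2-capacity"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

# Attach the capacity provider to the cluster.
resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = aws_ecs_cluster.this.name
  capacity_providers = [aws_ecs_capacity_provider.ec2.name]
}
```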

&lt;h3&gt;
  
  
  Amazon EC2 Auto Scaling
&lt;/h3&gt;

&lt;p&gt;As we use Amazon EC2 instances for ECS capacity, I use an Auto Scaling group to manage the EC2 instances registered to the ECS cluster. Auto Scaling ensures that we have the correct number of EC2 instances available to handle our application's load.&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Template
&lt;/h3&gt;

&lt;p&gt;A launch template contains the configuration information used to launch instances in EC2 Auto Scaling, including the ID of the Amazon Machine Image (AMI), and provides full functionality for Amazon EC2 Auto Scaling.&lt;/p&gt;
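&lt;p&gt;A minimal sketch of such a launch template is below (the AMI data source, instance type, and cluster reference are assumptions). The key detail is the user data that writes the cluster name into &lt;code&gt;/etc/ecs/ecs.config&lt;/code&gt; so the ECS agent registers the instance with the cluster.&lt;/p&gt;

```hcl
# Illustrative sketch; the AMI data source and cluster reference are assumed.
resource "aws_launch_template" "ecs" {
  name_prefix   = "ecs-node-"
  image_id      = data.aws_ami.ecs_optimized.id
  instance_type = "t3.micro"

  # Register the instance with the ECS cluster on boot.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=${aws_ecs_cluster.this.name}" >> /etc/ecs/ecs.config
  EOF
  )
}
```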

&lt;h3&gt;
  
  
  SNS Topic &amp;amp; Lambda Function
&lt;/h3&gt;

&lt;p&gt;At the top right of the diagram, we set up an SNS topic and a Lambda function to automate container instance draining in Amazon ECS. It's not mandatory for the project, but good to have in a real-world setup. I won't dive into the details here, but go through the &lt;a href="https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/" rel="noopener noreferrer"&gt;blog&lt;/a&gt; if you're interested in the topic. &lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Project
&lt;/h2&gt;

&lt;p&gt;Same as before, the AWS resources are defined using Terraform. Here is the GitHub source code &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/tree/main/ecs-cluster-on-ec2" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/aws-terraform-examples/tree/main/ecs-cluster-on-ec2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Walk through the README documentation in the GitHub repository above, and set up a local environment if you want to deploy these AWS resources from your local machine, or use the GitHub Actions workflows instead.&lt;/p&gt;

&lt;p&gt;Once done, validate that the setup succeeded from the ECS console. Your ECS cluster should have one registered container instance, as the picture below shows.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foppgldnytihi6hnfvnqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foppgldnytihi6hnfvnqe.png" alt="registery-instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next to &lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-33-3fee"&gt;Setup containerized Application in AWS ECS - Part 3/3&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloudonaut.io/ecs-vs-fargate-whats-the-difference/" rel="noopener noreferrer"&gt;https://cloudonaut.io/ecs-vs-fargate-whats-the-difference/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/asg-capacity-providers.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/asg-capacity-providers.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I always appreciate your ideas and comments. Thanks for reading! 😄&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>terraform</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Setup containerized Application in AWS ECS - Part 1/3</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Thu, 07 Dec 2023 08:44:29 +0000</pubDate>
      <link>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-13-2emk</link>
      <guid>https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-13-2emk</guid>
<description>&lt;p&gt;In this article, I'll dive into the details of how to set up a containerized application in the AWS Cloud. The AWS services leveraged in the project include ECS, RDS, ALB, IAM, etc. All source code is stored in publicly available GitHub repositories. &lt;/p&gt;

&lt;p&gt;Before getting started, I assume that you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have basic knowledge of AWS services (ECS, RDS, ALB, IAM, etc.) and hands-on experience with AWS.&lt;/li&gt;
&lt;li&gt;Have a basic understanding of containerization and its benefits.&lt;/li&gt;
&lt;li&gt;Have experience with Terraform and CI/CD.&lt;/li&gt;
&lt;li&gt;Love coding and are eager to learn.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As the article covers a lot of topics, I have separated it into three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build MySQL Database in AWS RDS&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-23-2of3"&gt;Build ECS Cluster in AWS ECS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-33-3fee"&gt;Build containerized application in AWS ECS&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A few things worth knowing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I'm working on a MacBook M1, so some scripts in the source code may not work on other operating systems. Besides, you should configure your environment if you want to debug, develop, and deploy the application on your machine. &lt;/li&gt;
&lt;li&gt;The project is intended for demo purposes, so it does not follow best practices from a security and stability perspective. At the end of the article, I'll share some ideas about how to improve the project for the real world.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ⚓ Project Background
&lt;/h2&gt;

&lt;p&gt;In this demo, I will build a containerized application in AWS. The application is hosted on EC2 as containers. Users access the application via an internet-facing ALB (Application Load Balancer), and an RDS instance is launched for data persistence. The application is built on &lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;Strapi&lt;/a&gt;, an open-source headless CMS. Strapi provides an admin panel and makes it easy to build a standard backend API. Meanwhile, it integrates smoothly with most popular database engines, for example MySQL, Postgres, and SQLite. All these features make it a great fit for my demonstration. You can dive into Strapi via its official website if you're interested. &lt;/p&gt;

&lt;p&gt;The AWS architecture diagram is shown below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxox9ovao8miq3p8g9tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxox9ovao8miq3p8g9tf.png" alt="Arch-Overall"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All AWS resources in the above diagram will be created in the project, except for networking. I will use the default VPC, subnets, and security groups in the AWS account. This is less secure, but it keeps the demo setup simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏡 Setup Database in AWS RDS
&lt;/h2&gt;

&lt;p&gt;Now, let's focus on the RDS database setup (AWS resources in green box). Find the source code from &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/tree/main/rds-mysql-instance" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/aws-terraform-examples/tree/main/rds-mysql-instance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The source code contains two parts, the Terraform resources and Lambda function source code. The AWS resources created in the project include:&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS RDS Instance &amp;amp; Secrets
&lt;/h3&gt;

&lt;p&gt;An RDS instance with a custom parameter group is created. The database secret (username and password) is managed in Secrets Manager. The secret is auto-generated by AWS and will be used when interacting with the database.&lt;/p&gt;
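&lt;p&gt;For illustration, reading the generated credentials back out of Secrets Manager in Terraform could look like this; the secret name is a placeholder, not the project's actual identifier.&lt;/p&gt;

```hcl
# Placeholder secret name; the real name comes from the Terraform project.
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "rds-mysql-credentials"
}

locals {
  # The secret string is a JSON document containing the username
  # and password generated by AWS.
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}
```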

&lt;h3&gt;
  
  
  AWS Lambda Function &amp;amp; Lambda Execution IAM Role &amp;amp; CloudWatch Logs Group
&lt;/h3&gt;

&lt;p&gt;A Lambda function is created, along with a CloudWatch Logs group to persist its logs. The function is used to initialize the database, for example to create a database or tables. A Lambda execution IAM role with the appropriate permissions is created as well.&lt;/p&gt;

&lt;p&gt;Replace the default SQL script in &lt;code&gt;src/script.sql&lt;/code&gt; with the script below. We need to create a database named &lt;code&gt;strapi&lt;/code&gt; after the instance is available.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;strapi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can invoke the Lambda function via the AWS CLI from your local machine, or directly from the console.&lt;/p&gt;

&lt;p&gt;Go to the README documentation in the GitHub repository above to set up a local environment if you want to deploy the AWS resources from your local machine or via GitHub Actions workflows. &lt;/p&gt;

&lt;p&gt;Once done, you will have the three environment variables below, which will be used in the next part.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# the endpoint of RDS database (retrieved from RDS)&lt;/span&gt;
DATABASE_HOST
&lt;span class="c"&gt;# the username of RDS database (retrieved from Secrets Manager)&lt;/span&gt;
DATABASE_USERNAME
&lt;span class="c"&gt;# the password of RDS database (retrieved from Secrets Manager)&lt;/span&gt;
DATABASE_PASSWORD


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next to &lt;a href="https://dev.to/camillehe1992/setup-containerized-application-in-aws-ecs-part-23-2of3"&gt;Setup containerized Application in AWS ECS - Part 2/3&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;https://strapi.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;https://www.terraform.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/5.25.0/docs" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/5.25.0/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I always appreciate your ideas and comments. Thanks for reading! 😄&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cicd</category>
      <category>rds</category>
    </item>
    <item>
      <title>Using AWS S3 and Hugo to Create and Host a Static Website</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Sat, 07 Oct 2023 09:20:02 +0000</pubDate>
      <link>https://dev.to/camillehe1992/using-aws-s3-and-hugo-to-create-and-host-a-static-website-40o8</link>
      <guid>https://dev.to/camillehe1992/using-aws-s3-and-hugo-to-create-and-host-a-static-website-40o8</guid>
<description>&lt;p&gt;In this blog, I'm going to show you how to create a static website using AWS S3 and Hugo. The demo application is deployed using Terraform, and I assume you have basic knowledge of Terraform, Hugo, GitHub Actions, AWS S3 static website hosting, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://developer.hashicorp.com/terraform/intro" rel="noopener noreferrer"&gt;What is Terraform?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform is an infrastructure as code tool that lets you build, change, and version cloud and on-prem resources safely and efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://gohugo.io/about/what-is-hugo/" rel="noopener noreferrer"&gt;What is Hugo?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hugo is a fast and modern static site generator written in Go, and designed to make website creation fun again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/google/docsy" rel="noopener noreferrer"&gt;What is Docsy?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docsy is a &lt;a href="https://gohugo.io/hugo-modules/use-modules/#use-a-module-for-a-theme" rel="noopener noreferrer"&gt;Hugo theme module&lt;/a&gt; for technical documentation sites, providing easy site navigation, structure, and more. In the demo, I use &lt;strong&gt;Docsy&lt;/strong&gt; theme component as a Hugo module. Repo &lt;a href="https://github.com/camillehe1992/static-website-using-docsy-in-aws-s3" rel="noopener noreferrer"&gt;Static WebSite using Docsy in AWS S3&lt;/a&gt; is generated from official &lt;a href="https://github.com/google/docsy-example" rel="noopener noreferrer"&gt;docsy-example&lt;/a&gt;.&lt;br&gt;
You can clone/copy the &lt;code&gt;docsy-example&lt;/code&gt; and edit it with your own content, or use it as an example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions" rel="noopener noreferrer"&gt;What is GitHub Actions?&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html" rel="noopener noreferrer"&gt;Hosting a static website using Amazon S3&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. They might also contain client-side scripts.&lt;/p&gt;

&lt;p&gt;The blog covers two sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy S3 Bucket using Terraform.&lt;/li&gt;
&lt;li&gt;Deploy Static Website Content using Hugo.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploy S3 Bucket using Terraform
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/tree/main/s3-static-website" rel="noopener noreferrer"&gt;Terraform S3 Static Website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we use an AWS S3 bucket for static website hosting, in this demo I use Terraform to define and create the S3 bucket and related resources. There are two options for the deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy from Local
&lt;/h3&gt;

&lt;p&gt;Follow the &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/blob/main/README.md" rel="noopener noreferrer"&gt;terraform-examples&lt;/a&gt; README to set up a local environment for deploying Terraform resources, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Terraform CLI&lt;/li&gt;
&lt;li&gt;Install AWS CLI&lt;/li&gt;
&lt;li&gt;Setup AWS Credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, use &lt;code&gt;make&lt;/code&gt; commands as below for deployment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create a Terraform plan named `tfplan`&lt;/span&gt;
make plan

&lt;span class="c"&gt;# Apply the plan `tfplan`&lt;/span&gt;
make apply

&lt;span class="c"&gt;# Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;/span&gt;
&lt;span class="c"&gt;# &lt;/span&gt;
&lt;span class="c"&gt;# Outputs:&lt;/span&gt;
&lt;span class="c"&gt;# &lt;/span&gt;
&lt;span class="c"&gt;# website_endpoint = "&amp;lt;static_bucket_name&amp;gt;.s3-website-&amp;lt;aws_region&amp;gt;.amazonaws.com"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At this point, you will get a 404 Not Found if you access the website via the &lt;strong&gt;website_endpoint&lt;/strong&gt; above. Once the static content is uploaded to the bucket, refreshing the page should show the rendered content.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy from GitHub Actions
&lt;/h3&gt;

&lt;p&gt;Or, you can follow the &lt;a href="https://github.com/camillehe1992/aws-terraform-examples#setup-github-environment-for-github-actions-workflows" rel="noopener noreferrer"&gt;Setup GitHub Environment for GitHub Actions Workflows&lt;/a&gt; to use GitHub Actions to deploy Terraform resources with &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/blob/main/.github/workflows/s3-static-website-apply.yaml" rel="noopener noreferrer"&gt;deploy-static-content.yml&lt;/a&gt; workflow. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Static Website Content using Hugo
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/camillehe1992/static-website-using-docsy-in-aws-s3" rel="noopener noreferrer"&gt;Static WebSite using Docsy in AWS S3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As mentioned before, the above source code repo is generated from &lt;a href="https://github.com/google/docsy-example" rel="noopener noreferrer"&gt;docsy-example&lt;/a&gt;. I made two main changes after the repo was created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Hugo Deployment
&lt;/h3&gt;

&lt;p&gt;Add the configuration below to the &lt;code&gt;hugo.toml&lt;/code&gt; file. Replace &lt;code&gt;&amp;lt;static_bucket_name&amp;gt;&lt;/code&gt; with the &lt;code&gt;static_bucket_name&lt;/code&gt; variable you provided in the Terraform project; you should create your own bucket. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Hugo deployment configuration&lt;/span&gt;

&lt;span class="nn"&gt;[deployment]&lt;/span&gt;
&lt;span class="nn"&gt;[[deployment.targets]]&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"dev"&lt;/span&gt;
&lt;span class="py"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"s3://&amp;lt;static_bucket_name&amp;gt;"&lt;/span&gt;

&lt;span class="nn"&gt;[[deployment.targets]]&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"prod"&lt;/span&gt;
&lt;span class="py"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"s3://&amp;lt;static_bucket_name&amp;gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, create dedicated GitHub Actions workflows that build the Hugo project and upload the static website content to the S3 bucket in each environment.&lt;/p&gt;

&lt;p&gt;You should configure the AWS credentials secrets in the GitHub console following &lt;a href="https://github.com/camillehe1992/aws-terraform-examples#setup-github-environment-for-github-actions-workflows" rel="noopener noreferrer"&gt;Setup GitHub Environment for GitHub Actions Workflows&lt;/a&gt;, then update &lt;code&gt;STATIC_BUCKET_NAME&lt;/code&gt; in the &lt;code&gt;env&lt;/code&gt; block of the workflow YAML file.&lt;/p&gt;
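&lt;p&gt;The core of such a workflow boils down to two Hugo commands: build the site, then push the generated &lt;code&gt;public/&lt;/code&gt; directory to the target configured in &lt;code&gt;hugo.toml&lt;/code&gt;. A minimal sketch of the workflow's run step (the target name &lt;code&gt;dev&lt;/code&gt; comes from the deployment configuration above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the static site into public/&lt;/span&gt;
hugo --minify

&lt;span class="c"&gt;# Sync public/ to the S3 target named "dev" in hugo.toml&lt;/span&gt;
hugo deploy --target dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;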

&lt;p&gt;The workflow is triggered automatically. After a successful build, access the static website via http://&amp;lt;static_bucket_name&amp;gt;.s3-website-&amp;lt;aws_region&amp;gt;.amazonaws.com.&lt;/p&gt;

&lt;p&gt;In this demo, the website is accessible at &lt;a href="http://docsy-portal-prod.s3-website-ap-southeast-1.amazonaws.com" rel="noopener noreferrer"&gt;http://docsy-portal-prod.s3-website-ap-southeast-1.amazonaws.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb61aea2tta4wn3wa7np.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb61aea2tta4wn3wa7np.png" alt="Website Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Configuration
&lt;/h2&gt;

&lt;p&gt;In a real project that hosts a static website in AWS S3, you should consider more advanced configurations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up a CDN with AWS CloudFront to secure the site and improve performance, following &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started-cloudfront-overview.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started-cloudfront-overview.html&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a staging/pre-production environment so that you can validate content changes before delivering the final content to users/customers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Done. Thanks for reading.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>hugo</category>
      <category>s3</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Deploy Github Secrets to AWS Secrets Manager using Terraform and Github Actions</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Thu, 21 Sep 2023 09:49:54 +0000</pubDate>
      <link>https://dev.to/camillehe1992/deploy-github-secrets-to-aws-secrets-manager-using-terraform-and-github-actions-49a0</link>
      <guid>https://dev.to/camillehe1992/deploy-github-secrets-to-aws-secrets-manager-using-terraform-and-github-actions-49a0</guid>
      <description>&lt;p&gt;In this article, I'm going to guide you how to save the project secure tokens in Github Secrets and deploy them into AWS Secrets Manager using Terraform and Github Actions workflow.&lt;/p&gt;

&lt;p&gt;Using sensitive information such as API keys, passwords, and secure tokens is very common in a real project. For example, suppose you are working on a backend web server that needs to interact with a database. You must configure the database user/password programmatically, no matter which programming language you choose. These secure tokens are normally persisted in a secret management tool and injected into environment variables when the server starts. &lt;/p&gt;

&lt;p&gt;Secrets management tools help companies securely store, transmit, and manage sensitive digital authentication credentials such as passwords, SSH keys, API keys, database passwords, certificates like TLS/SSL certificates or private certificates, tokens, encryption keys, privileged credentials, and other secrets.&lt;/p&gt;

&lt;p&gt;Some companies use dedicated tools, such as AWS Secrets Manager, Azure Key Vault, and HashiCorp Vault. Some CI/CD tools also provide secret management features that let your pipeline retrieve these secrets easily before deployment: Jenkins has Credentials and GitHub Actions has Secrets. &lt;/p&gt;

&lt;p&gt;What I implemented:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Terraform project that defines an &lt;code&gt;aws_secretsmanager_secret&lt;/code&gt; resource using the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret" rel="noopener noreferrer"&gt;AWS official provider&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create an &lt;code&gt;Environment&lt;/code&gt; named &lt;code&gt;dev&lt;/code&gt; and add secrets and variables in the environment.&lt;/li&gt;
&lt;li&gt;Create a GitHub Actions workflow to deploy the Terraform resources to the AWS account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Find the demo source code from &lt;a href="https://github.com/camillehe1992/aws-terraform-examples/tree/main/secret-manager" rel="noopener noreferrer"&gt;https://github.com/camillehe1992/aws-terraform-examples/tree/main/secret-manager&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;The project source code is structured as follows.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;.&lt;/span&gt;
├── Makefile                    &lt;span class="c"&gt;# a Makefile for Terraform commands&lt;/span&gt;
├── README.md
├── main.tf                     &lt;span class="c"&gt;# define the specs of secretsmanager module&lt;/span&gt;
├── modules
│   └── secretsmanager          &lt;span class="c"&gt;# secretsmanager module&lt;/span&gt;
│       ├── locals.tf
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf                  &lt;span class="c"&gt;# outputs of terraform&lt;/span&gt;
├── terraform.tfvars            &lt;span class="c"&gt;# variables of terraform&lt;/span&gt;
├── variables.tf                &lt;span class="c"&gt;# variables definition of terraform&lt;/span&gt;
└── versions.tf                 &lt;span class="c"&gt;# terraform backend configuration and provider versions&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Define a variable named &lt;code&gt;database_password&lt;/code&gt; in &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"database_password"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The password of database"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
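&lt;p&gt;Because the variable has no default, Terraform will prompt for it interactively unless you supply a value. When working locally, a convenient option is the &lt;code&gt;TF_VAR_&lt;/code&gt; environment variable convention (the password value below is obviously a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terraform maps TF_VAR_&amp;lt;name&amp;gt; to variable "&amp;lt;name&amp;gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export&lt;/span&gt; TF_VAR_database_password=&amp;lt;your_password&amp;gt;
terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;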

&lt;p&gt;Define a Terraform module named &lt;code&gt;secretsmanager&lt;/code&gt; and use it to create a group of secrets. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Modules are containers for multiple resources that are used together.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"secrets"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;br&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/secretsmanager"&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="nx"&gt;env&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;br&gt;
  &lt;span class="nx"&gt;nickname&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nickname&lt;/span&gt;&lt;br&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="nx"&gt;secret_specs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;br&gt;
    &lt;span class="nx"&gt;database_password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;br&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"A sample secure. e.g password"&lt;/span&gt;&lt;br&gt;
      &lt;span class="nx"&gt;secret_string&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database_password&lt;/span&gt;&lt;br&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;&lt;br&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;&lt;br&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Create Environment and Secrets
&lt;/h2&gt;

&lt;p&gt;Create an environment named &lt;code&gt;dev&lt;/code&gt; in Github Settings, and add secrets and variables in it as follows.&lt;/p&gt;
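&lt;p&gt;If you prefer the command line over the GitHub Settings UI, the GitHub CLI can create the same environment secret; this is a sketch assuming the &lt;code&gt;dev&lt;/code&gt; environment already exists and the value is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Store the secret in the dev environment of the current repo&lt;/span&gt;
gh secret set TF_VAR_DATABASE_PASSWORD --env dev --body &amp;lt;your_password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;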

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeym6hkthd98fbmnm08g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeym6hkthd98fbmnm08g.png" alt="Github Actions Secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Actions Workflow
&lt;/h2&gt;

&lt;p&gt;Define an environment variable named &lt;code&gt;DATABASE_PASSWORD&lt;/code&gt; in the &lt;code&gt;env&lt;/code&gt; block of the GitHub Actions workflow file, and inject it dynamically when running &lt;code&gt;terraform plan&lt;/code&gt; via the &lt;code&gt;-var&lt;/code&gt; parameter.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;&lt;br&gt;
  &lt;span class="c1"&gt;# Secret tokens injected into Terraform infra&lt;/span&gt;&lt;br&gt;
  &lt;span class="na"&gt;DATABASE_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.TF_VAR_DATABASE_PASSWORD&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class="nn"&gt;...&lt;/span&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  &amp;lt;span class="pi"&amp;gt;-&amp;lt;/span&amp;gt; &amp;lt;span class="na"&amp;gt;name&amp;lt;/span&amp;gt;&amp;lt;span class="pi"&amp;gt;:&amp;lt;/span&amp;gt; &amp;lt;span class="s"&amp;gt;Terraform Plan&amp;lt;/span&amp;gt;
    &amp;lt;span class="na"&amp;gt;if&amp;lt;/span&amp;gt;&amp;lt;span class="pi"&amp;gt;:&amp;lt;/span&amp;gt; &amp;lt;span class="s"&amp;gt;${{ inputs.destroy }} == &amp;lt;/span&amp;gt;&amp;lt;span class="kc"&amp;gt;false&amp;lt;/span&amp;gt;
    &amp;lt;span class="na"&amp;gt;id&amp;lt;/span&amp;gt;&amp;lt;span class="pi"&amp;gt;:&amp;lt;/span&amp;gt; &amp;lt;span class="s"&amp;gt;tf-plan&amp;lt;/span&amp;gt;
    &amp;lt;span class="na"&amp;gt;working-directory&amp;lt;/span&amp;gt;&amp;lt;span class="pi"&amp;gt;:&amp;lt;/span&amp;gt; &amp;lt;span class="s"&amp;gt;${{ env.WORKING_DIRECTORY }}&amp;lt;/span&amp;gt;
    &amp;lt;span class="na"&amp;gt;run&amp;lt;/span&amp;gt;&amp;lt;span class="pi"&amp;gt;:&amp;lt;/span&amp;gt; &amp;lt;span class="pi"&amp;gt;|&amp;lt;/span&amp;gt;
      &amp;lt;span class="s"&amp;gt;export exitcode=0&amp;lt;/span&amp;gt;
      &amp;lt;span class="s"&amp;gt;terraform plan -var-file $(pwd)/tf_$ENVIRONMENT.tfvars \&amp;lt;/span&amp;gt;
      &amp;lt;span class="s"&amp;gt;-var="database_password=$DATABASE_PASSWORD" \&amp;lt;/span&amp;gt;
      &amp;lt;span class="s"&amp;gt;-detailed-exitcode -no-color -out tfplan || export exitcode=$?&amp;lt;/span&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
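&lt;p&gt;After a successful apply, you can verify that the secret value landed in AWS Secrets Manager with the AWS CLI; the secret name below is a placeholder for whatever name the &lt;code&gt;secretsmanager&lt;/code&gt; module generates:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Print the stored secret string&lt;/span&gt;
aws secretsmanager get-secret-value &lt;span class="se"&gt;\&lt;/span&gt;
    --secret-id &amp;lt;secret_name&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    --query SecretString --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;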
&lt;h2&gt;
  
  
  Trigger Deployment Manually
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cu30qzg87owqn7b8c3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cu30qzg87owqn7b8c3i.png" alt="Github Actions workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the target environment to deploy. Currently only &lt;code&gt;dev&lt;/code&gt; is available.&lt;/li&gt;
&lt;li&gt;Check the &lt;code&gt;True to destroy&lt;/code&gt; checkbox if you want to destroy the resources.&lt;/li&gt;
&lt;li&gt;Check the &lt;code&gt;True to force&lt;/code&gt; checkbox if you want to force a refresh of the secure tokens. This is useful when there is no change in the Terraform infra, but variables or secrets have been updated in GitHub Settings. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Done. I always appreciate your comments and advice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>githubactions</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Stop/Start RDS Instances Automatically Using System Manager for Cost Optimization</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Tue, 12 Sep 2023 01:15:56 +0000</pubDate>
      <link>https://dev.to/camillehe1992/stopstart-rds-instances-automatically-using-system-manager-for-cost-optimization-5d04</link>
      <guid>https://dev.to/camillehe1992/stopstart-rds-instances-automatically-using-system-manager-for-cost-optimization-5d04</guid>
      <description>&lt;p&gt;Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. But there are some things that users should be manage like availability of the database, choosing the right size of database engine, maintaining backups, cost optimization and so on. &lt;/p&gt;

&lt;p&gt;In this post, we're going to talk about cost optimization of the database runtime. For example, you may want to run your database instance only during certain hours, such as 9 AM to 6 PM on working days. Manually starting the database at 9 AM and stopping it at 6 PM is tedious, and sometimes you forget to start the database, which leads to application downtime during working hours.&lt;/p&gt;

&lt;p&gt;What if you set up an automated process that stops and starts the RDS database daily at the times you specify? AWS provides a service called Systems Manager, which is what we are going to use to stop and start our database. Let's get into the details now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Systems Manager
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt; is the operations hub for your AWS applications and resources and a secure end-to-end management solution for hybrid and multi-cloud environments that enables secure operations at scale. In the post, we will use one of capabilities of &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state.html" rel="noopener noreferrer"&gt;System Manager&lt;/a&gt;, which named State Manager. State Manager is a secure and scalable service that automates the process of keeping managed nodes in a hybrid and multi-cloud infrastructure in a state that you define.&lt;/p&gt;

&lt;p&gt;In our example, an RDS instance is a type of node that can be managed by an automated process. To enable the automation, complete the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an IAM role and policy for Systems Manager&lt;/li&gt;
&lt;li&gt;Create SSM associations to stop/start the RDS instance&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create an IAM Role and Policy for Systems Manager
&lt;/h2&gt;

&lt;p&gt;First, you need to create an automation IAM role that grants SSM the permission to start/stop the RDS instance. You can do this from the AWS Console or with any IaC tool. Here are the details of the IAM role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role Name&lt;/strong&gt;&lt;br&gt;
StartStopRebootRDS&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust Relationships&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ssm.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Inline Policy&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:Describe*",
                "rds:Start*",
                "rds:Stop*",
                "rds:Reboot*"
            ],
            "Resource": "*"
        }
    ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
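&lt;p&gt;If you want to script the role creation instead of clicking through the console, the equivalent AWS CLI calls look roughly like this, assuming the two JSON documents above are saved as &lt;code&gt;trust.json&lt;/code&gt; and &lt;code&gt;policy.json&lt;/code&gt; (the policy name is an arbitrary choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the role with the SSM trust relationship&lt;/span&gt;
aws iam create-role &lt;span class="se"&gt;\&lt;/span&gt;
    --role-name StartStopRebootRDS &lt;span class="se"&gt;\&lt;/span&gt;
    --assume-role-policy-document file://trust.json

&lt;span class="c"&gt;# Attach the inline policy that allows RDS describe/start/stop/reboot&lt;/span&gt;
aws iam put-role-policy &lt;span class="se"&gt;\&lt;/span&gt;
    --role-name StartStopRebootRDS &lt;span class="se"&gt;\&lt;/span&gt;
    --policy-name StartStopRebootRDSPolicy &lt;span class="se"&gt;\&lt;/span&gt;
    --policy-document file://policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;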
&lt;h2&gt;
  
  
  Create SSM Associations to Stop/Start the RDS Instance
&lt;/h2&gt;

&lt;p&gt;In this example, I'm going to start the RDS instance at 8:30 AM and stop it at 5:30 PM from Monday to Friday. Because of the limitation on schedule expressions quoted below, we have to create a separate association for each day. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Schedule expression cron(0 30 08 ? * MON-FRI *) is currently not accepted. Supported expressions are every half, 1, 2, 4, 8 or 12 hour(s), every specified day and time of the week. Supported examples are: cron(0 0/30 * 1/1 * ? ), cron(0 0 0/4 1/1  ? ), cron (0 0 10 ?  SUN ), cron (0 0 10 ?  * *)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Creating the associations from the AWS console would involve a lot of repetitive manual work, so I use the AWS CLI to simplify the creation process.&lt;/p&gt;

&lt;p&gt;Run the shell script below to create SSM associations that start the target RDS instance (billing-test) at 8:30 AM (UTC+8) from Monday to Friday. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My local time zone is UTC+0800, so the start-instance cron expression for 8:30 AM local time uses 00:30 UTC. The same applies to the stop-instance cron expression.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nv"&gt;WORKDAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"MON TUE WED THU FRI"&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;day &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$WORKDAY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;aws ssm create-association &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--name&lt;/span&gt; AWS-StartRdsInstance &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--schedule-expression&lt;/span&gt; &lt;span class="s2"&gt;"cron(30 0 ? * &lt;/span&gt;&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="s2"&gt; *)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--association-name&lt;/span&gt; StartRDSInstance_&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="nv"&gt;InstanceId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;billing-test,AutomationAssumeRole&lt;span class="o"&gt;=&lt;/span&gt;arn:&lt;span class="o"&gt;{&lt;/span&gt;aws::partition&lt;span class="o"&gt;}&lt;/span&gt;:iam::&lt;span class="o"&gt;{&lt;/span&gt;aws:accountid&lt;span class="o"&gt;}&lt;/span&gt;/role/StartStopRebootRDS &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--profile&lt;/span&gt; your-credentials
&lt;span class="k"&gt;done&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the shell script below to create SSM associations that stop the target RDS instance (billing-test) at 5:30 PM (UTC+8) from Monday to Friday.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nv"&gt;WORKDAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"MON TUE WED THU FRI"&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;day &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$WORKDAY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;aws ssm create-association &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--name&lt;/span&gt; AWS-StopRdsInstance &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--schedule-expression&lt;/span&gt; &lt;span class="s2"&gt;"cron(30 9 ? * &lt;/span&gt;&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="s2"&gt; *)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--association-name&lt;/span&gt; StopRDSInstance_&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="nv"&gt;InstanceId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;billing-test,AutomationAssumeRole&lt;span class="o"&gt;=&lt;/span&gt;arn:&lt;span class="o"&gt;{&lt;/span&gt;aws::partition&lt;span class="o"&gt;}&lt;/span&gt;:iam::&lt;span class="o"&gt;{&lt;/span&gt;aws:accountid&lt;span class="o"&gt;}&lt;/span&gt;:role/StartStopRebootRDS &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--profile&lt;/span&gt; your-credentials
&lt;span class="k"&gt;done&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
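&lt;p&gt;To confirm the associations were created, you can list them by name with the AWS CLI, filtering on one of the names created above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List associations whose name matches the filter&lt;/span&gt;
aws ssm list-associations &lt;span class="se"&gt;\&lt;/span&gt;
    --association-filter-list &lt;span class="s2"&gt;"key=AssociationName,value=StopRDSInstance_MON"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    --profile your-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;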

&lt;p&gt;Here is a screenshot of the associations I created using the AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdrwh7ztbm9zc0bocxpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdrwh7ztbm9zc0bocxpr.png" alt="Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.easydeploy.io/blog/automate-rds-instance-using-system-manager/#Introduction" rel="noopener noreferrer"&gt;https://www.easydeploy.io/blog/automate-rds-instance-using-system-manager/#Introduction&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/state-manager-about.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/state-manager-about.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>optimazation</category>
      <category>cost</category>
      <category>ssm</category>
    </item>
    <item>
      <title>Build Scheduled AWS Batch Job Infrastructure using Terraform</title>
      <dc:creator>Camille He</dc:creator>
      <pubDate>Sun, 10 Sep 2023 06:22:56 +0000</pubDate>
      <link>https://dev.to/camillehe1992/build-scheduled-aws-batch-job-infrastructure-using-terraform-1bnh</link>
      <guid>https://dev.to/camillehe1992/build-scheduled-aws-batch-job-infrastructure-using-terraform-1bnh</guid>
      <description>&lt;p&gt;In this post, I will walk you through how to build a scheduled AWS Batch job infrastructure using Terraform. It focuses on the Terraform infrastructure and modules definition, and how they work together to build the entire workflow. It doesn't cover the advanced features that AWS Batch provided, and only use &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/5.0.0"&gt;Terraform aws official provider&lt;/a&gt; &lt;br&gt;
and basic Terraform features. In short, this post helps you setup a basic AWS Batch job infrastructure quickly, and you can add more features and enhance the structure as needed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AWS CLI (V2) is installed on the local machine.&lt;/li&gt;
&lt;li&gt;AWS credentials are set up. We use the credentials to deploy Terraform resources to the target AWS account. In this demo, I use the same AWS profile for the S3 remote backend configuration and &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Terraform CLI (1.3.4) is installed on the local machine. You can loosen the Terraform version restriction in &lt;code&gt;versions.tf&lt;/code&gt; to use other close versions; however, I haven't verified that the code works as expected with them.&lt;/li&gt;
&lt;li&gt;I'm working on a Mac (macOS Monterey) with an Apple M2 chip, so the &lt;code&gt;aws provider&lt;/code&gt; installed is &lt;code&gt;.terraform/providers/registry.terraform.io/hashicorp/aws/5.0.0/darwin_arm64/terraform-provider-aws_v5.0.0_x5&lt;/code&gt;. If you are on another operating system, remove the &lt;code&gt;.terraform.lock.hcl&lt;/code&gt; file from the source code and let the Terraform CLI install the &lt;code&gt;aws provider&lt;/code&gt; build that matches your OS.&lt;/li&gt;
&lt;li&gt;In the demo source code, I use the default VPC, subnets, and security group for EC2 instances. You can use customized network resources; however, you should take care of network availability if your batch job needs internet access, for example, if the docker image it uses is hosted in a public docker registry. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can find the demo source code from &lt;a href="https://github.com/camillehe1992/on-demand-job-in-aws-batch"&gt;https://github.com/camillehe1992/on-demand-job-in-aws-batch&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Structure
&lt;/h2&gt;

&lt;p&gt;The Terraform structure contains several components, shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── config.tfbackend  &lt;span class="c"&gt;# Remote backend config file&lt;/span&gt;
├── data.tf           &lt;span class="c"&gt;# File for Terraform data source &lt;/span&gt;
├── main.tf           &lt;span class="c"&gt;# The reference of modules&lt;/span&gt;
├── outputs.tf        &lt;span class="c"&gt;# The outputs of Terraform resources&lt;/span&gt;
├── terraform         
│   ├── dev           &lt;span class="c"&gt;# Environment specified variables&lt;/span&gt;
│   └── modules       &lt;span class="c"&gt;# Terraform modules&lt;/span&gt;
├── variables.tf      &lt;span class="c"&gt;# Terraform input variables that should be passed to the arch module&lt;/span&gt;
└── versions.tf       &lt;span class="c"&gt;# Defines the versions of Terraform and providers&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Remote Backend Configuration (S3)
&lt;/h3&gt;

&lt;p&gt;As a best practice, I save the Terraform state files in a remote location, in this case an S3 bucket. The configuration below specifies the S3 bucket name, region, and profile used to access the bucket.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# config.tfbackend
region         = "cn-north-1"
bucket         = "tf-state-756143471679-cn-north-1"
profile        = "service.app-deployment-dev-ci-bot"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another parameter is &lt;code&gt;key&lt;/code&gt;, which is provided via the &lt;code&gt;-backend-config&lt;/code&gt; option of &lt;code&gt;terraform init&lt;/code&gt;: the key contains an &lt;code&gt;ENVIRONMENT&lt;/code&gt;-specific path, so we have to inject it dynamically instead of hard-coding it in the &lt;code&gt;config.tfbackend&lt;/code&gt; file. You can use other configuration settings as needed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -reconfigure \
    -backend-config=$BACKEND_CONFIG \
    -backend-config="key=$NICKNAME/$ENVIRONMENT/$AWS_REGION/terraform.tfstate"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Terraform Main File
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;main.tf&lt;/code&gt; in the root directory is the core component and entry point of the Terraform structure. I define all resources here, including IAM roles, Batch, CloudWatch Events (EventBridge), and an SNS topic. You can split them into dedicated files, for example &lt;code&gt;iam.tf&lt;/code&gt;, &lt;code&gt;batch.tf&lt;/code&gt;, &lt;code&gt;events.tf&lt;/code&gt;, &lt;code&gt;sns.tf&lt;/code&gt;, etc. I keep them in one file because the project is not that complex, and it's easy to understand the relationships between resources/modules. Make the decision that best fits your own project structure.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;main.tf&lt;/code&gt; file, I define 8 resources: 3 IAM roles, 1 secrets group, 1 Batch group, 2 CloudWatch Events rules, and 1 SNS topic. Together they compose the AWS architecture shown in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cdbu5RWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhojifkiwg5gpzii9355.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cdbu5RWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhojifkiwg5gpzii9355.png" alt="batch-arch" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Workflow steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user builds a Docker image, pushes it to Amazon ECR or another container registry (for example, Docker Hub), and creates a job definition, compute environment, and job queue in AWS Batch. In this repo, we use the official AWS image &lt;code&gt;public.ecr.aws/amazonlinux/amazonlinux:latest&lt;/code&gt; for demo purposes.&lt;/li&gt;
&lt;li&gt;A Batch job is submitted to the job queue, using the job definition, by a CloudWatch Events rule on a regular schedule.&lt;/li&gt;
&lt;li&gt;AWS Batch launches an EC2 instance in the compute environment, pulls the image from the registry, and creates a container.&lt;/li&gt;
&lt;li&gt;The container performs tasks on your behalf. An email notification is triggered if the job fails.&lt;/li&gt;
&lt;li&gt;When the job is done, the container is stopped and removed, and the EC2 instance is shut down automatically by AWS Batch.&lt;/li&gt;
&lt;/ol&gt;
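&lt;p&gt;The scheduled trigger in step 2 can be sketched in Terraform as below. This is a minimal sketch: the resource names, cron schedule, and IAM role reference are hypothetical, not the exact ones from the demo repo.&lt;br&gt;
&lt;/p&gt;

```hcl
# Rule that fires every day at 04:00 UTC
resource "aws_cloudwatch_event_rule" "submit_batch_job_event" {
  name                = "dev-submit-batch-job"
  schedule_expression = "cron(0 4 * * ? *)"
}

# Target that submits a Batch job into the queue using the job definition
resource "aws_cloudwatch_event_target" "submit_batch_job" {
  rule     = aws_cloudwatch_event_rule.submit_batch_job_event.name
  arn      = aws_batch_job_queue.this.arn
  role_arn = aws_iam_role.events_role.arn # a role allowed to call batch:SubmitJob

  batch_target {
    job_definition = aws_batch_job_definition.this.arn
    job_name       = "scheduled-job"
  }
}
```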

&lt;h3&gt;
  
  
  Terraform Modules
&lt;/h3&gt;

&lt;p&gt;To keep the Terraform structure well architected and organized, I define a module for each resource group. Modules are a key feature of Terraform that helps users manage their resources efficiently. Let's dive into the details of each module in this section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch Module&lt;/strong&gt;: Defines resources in the Batch service, including the compute environment, job queue, and job definition.&lt;br&gt;
&lt;strong&gt;EventBridge Module&lt;/strong&gt;: Defines resources in EventBridge, including event rules and rule targets. The rule named &lt;code&gt;submit_batch_job_event&lt;/code&gt; submits the Batch job as scheduled; the one named &lt;code&gt;capture_failed_batch_event&lt;/code&gt; sends out an alert email if a Batch job fails.&lt;br&gt;
&lt;strong&gt;IAM Module&lt;/strong&gt;: Defines IAM resources, including roles, policies, and an instance profile. These roles are used by the AWS Batch resources and EventBridge rules.&lt;br&gt;
&lt;strong&gt;SecretManager Module&lt;/strong&gt;: Defines a secret token that may be used in the job container. It's not required in the demo project, but included for your reference.&lt;br&gt;
&lt;strong&gt;SNS Module&lt;/strong&gt;: Defines resources in SNS, including a topic and a subscription.&lt;/p&gt;
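&lt;p&gt;Wiring these modules together in &lt;code&gt;main.tf&lt;/code&gt; looks roughly like this. It is only a sketch: the module outputs and variable names shown here are assumptions, not the exact ones from the demo repo.&lt;br&gt;
&lt;/p&gt;

```hcl
module "iam" {
  source   = "./terraform/modules/iam"
  nickname = var.nickname
}

module "batch" {
  source               = "./terraform/modules/batch"
  # Batch consumes the roles/profile exported by the IAM module
  instance_profile_arn = module.iam.instance_profile_arn
  service_role_arn     = module.iam.batch_service_role_arn
}

module "sns" {
  source                       = "./terraform/modules/sns"
  notification_email_addresses = var.notification_email_addresses
}
```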
&lt;h2&gt;
  
  
  Apply/Destroy Terraform Resources
&lt;/h2&gt;

&lt;p&gt;I created a &lt;code&gt;Makefile&lt;/code&gt; and a shell script to reduce the apply/destroy process to a single command. You can find the shell script at &lt;code&gt;/scripts/apply.sh&lt;/code&gt; in the demo code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# apply Terraform infrastructure&lt;/span&gt;
make apply

&lt;span class="c"&gt;# destroy Terraform infrastructure&lt;/span&gt;
make destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
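&lt;p&gt;The corresponding &lt;code&gt;Makefile&lt;/code&gt; targets are just thin wrappers around the shell script. A minimal sketch (the script arguments and the &lt;code&gt;ENVIRONMENT&lt;/code&gt; variable are hypothetical, not copied from the demo repo):&lt;br&gt;
&lt;/p&gt;

```makefile
ENVIRONMENT ?= dev

apply:
	bash scripts/apply.sh apply $(ENVIRONMENT)

destroy:
	bash scripts/apply.sh destroy $(ENVIRONMENT)
```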



&lt;h2&gt;
  
  
  Submit Batch Job Manually
&lt;/h2&gt;

&lt;p&gt;After applying the Terraform resources successfully with &lt;code&gt;make apply&lt;/code&gt;, you can submit a Batch job to test the entire workflow. The Batch job is normally submitted by a CloudWatch Events (EventBridge) rule on a daily schedule, but you can also submit one manually via the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/batch/submit-job.html"&gt;AWS CLI&lt;/a&gt; as below, or directly from the AWS Console.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don't forget to update the job definition revision in &lt;code&gt;--job-definition&lt;/code&gt; if a new revision has been created. Only the latest revision is ACTIVE.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setup AWS_PROFILE with permission to submit batch job&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_PROFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;service.app-deployment-dev-ci-bot

&lt;span class="c"&gt;# Submit a job using CLI&lt;/span&gt;
aws batch submit-job &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--job-name&lt;/span&gt; triggered-via-cli &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--job-definition&lt;/span&gt; arn:aws-cn:batch:cn-north-1:756143471679:job-definition/dev-helloworld-jd:1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--job-queue&lt;/span&gt; arn:aws-cn:batch:cn-north-1:756143471679:job-queue/dev-helloworld-jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the job is submitted successfully, go to &lt;code&gt;AWS Console -&amp;gt; Batch -&amp;gt; Jobs&lt;/code&gt; and select the target job queue from the dropdown list; your newly submitted job will be listed at the top. A job takes a few minutes to complete, depending on the job's processing time and on whether you allocated an EC2 instance in advance by setting the variable &lt;code&gt;desired_vcpus&lt;/code&gt; to a number greater than 0. If the job fails, an email notification is sent to the topic subscribers you provided in the variable &lt;code&gt;notification_email_addresses&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For cost saving, I set &lt;code&gt;desired_vcpus&lt;/code&gt; to &lt;code&gt;0&lt;/code&gt; by default, which means a new EC2 instance is launched when a job is submitted and shut down immediately after the job completes. The screenshot below shows the latest job, submitted by the CloudWatch Events rule at 04:00 AM (UTC).&lt;/p&gt;
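&lt;p&gt;Inside the Batch module, the &lt;code&gt;desired_vcpus&lt;/code&gt; variable feeds into the compute environment roughly as follows. This is a sketch: everything other than the provider's argument names (the resource name, variable names, and sizing values) is an assumption.&lt;br&gt;
&lt;/p&gt;

```hcl
resource "aws_batch_compute_environment" "this" {
  compute_environment_name = "dev-helloworld-ce"
  type                     = "MANAGED"
  service_role             = var.batch_service_role_arn

  compute_resources {
    type               = "EC2"
    instance_role      = var.instance_profile_arn # instance profile ARN, not the role ARN
    instance_type      = ["optimal"]
    min_vcpus          = 0
    desired_vcpus      = var.desired_vcpus # 0 by default: capacity is created only on demand
    max_vcpus          = 4
    subnets            = var.subnet_ids
    security_group_ids = var.security_group_ids
  }
}
```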

&lt;h2&gt;
  
  
  Logging
&lt;/h2&gt;

&lt;p&gt;The log data is saved to CloudWatch Logs automatically. You can find the logs at the bottom of the job details page (there may be some delay syncing logs from CloudWatch Logs), which also provides a link to the log stream of the current job. AWS automatically creates a CloudWatch Logs group named &lt;code&gt;/aws/batch/job&lt;/code&gt; the first time you submit a Batch job in a region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;code&gt;compute_resources&lt;/code&gt; block of the &lt;code&gt;aws_batch_compute_environment&lt;/code&gt; resource, the &lt;code&gt;instance_role&lt;/code&gt; argument expects the ARN of the IAM instance profile, not the IAM role itself.&lt;/li&gt;
&lt;li&gt;You may encounter a ClientException when deleting the compute environment because it still has a relationship with the job queue. This is a well-known issue in the &lt;code&gt;Terraform aws provider&lt;/code&gt; that many people have reported. Several workarounds exist; search for the one that fits your needs. &lt;/li&gt;
&lt;li&gt;Don't forget to accept the email subscription request when you first deploy the SNS subscription, so that your subscribed email address can receive failed-job alerts. &lt;/li&gt;
&lt;li&gt;For the secret token: NEVER check the secret into your source code. For demo purposes, I export the token in the &lt;code&gt;Makefile&lt;/code&gt; using &lt;code&gt;export TF_VAR_my_secret=replace_me&lt;/code&gt;, but you should store secret tokens somewhere safer, for example GitHub Secrets, and inject them as environment variables at runtime. &lt;/li&gt;
&lt;/ol&gt;
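&lt;p&gt;The runtime injection mentioned in note 4 can be sketched like this, where &lt;code&gt;MY_SECRET&lt;/code&gt; is a hypothetical environment variable, for example populated from a GitHub Actions secret:&lt;br&gt;
&lt;/p&gt;

```shell
# Map a CI secret into a Terraform input variable at runtime.
# Terraform picks up any TF_VAR_<name> environment variable
# as the value of variable "<name>"; no value ever lands in the repo.
export TF_VAR_my_secret="${MY_SECRET:-replace_me}"
```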

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html"&gt;What is Batch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/batch_compute_environment#compute_resources"&gt;Terraform Batch Compute Environment Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic"&gt;Terraform SNS Topic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/batch_cwe_events.html"&gt;Batch CloudWatch Events&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html"&gt;Batch CloudWatch Events Target&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/batch/latest/userguide/batch_sns_tutorial.html"&gt;Batch with SNS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Done. I always appreciate your comments and ideas. Happy learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>batch</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
