<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nathan (Nursultan) Bekenov</title>
    <description>The latest articles on DEV Community by Nathan (Nursultan) Bekenov (@nbekenov).</description>
    <link>https://dev.to/nbekenov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F825438%2F2af47239-61b7-45bb-8257-c423e626425e.jpeg</url>
      <title>DEV Community: Nathan (Nursultan) Bekenov</title>
      <link>https://dev.to/nbekenov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nbekenov"/>
    <language>en</language>
    <item>
      <title>Run Flyway DB migrations with AWS Lambda and RDS - Part 1</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Sat, 06 Jul 2024 20:32:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/run-flyway-db-migrations-with-aws-lambda-and-rds-part-1-2a6j</link>
      <guid>https://dev.to/aws-builders/run-flyway-db-migrations-with-aws-lambda-and-rds-part-1-2a6j</guid>
      <description>&lt;p&gt;Usually there is a need to run SQL database updates: update table columns, add new rows, create a new schema etc. Often developer teams are using &lt;a href="https://flywaydb.org/" rel="noopener noreferrer"&gt;Flyway&lt;/a&gt; It is an open-source database SQL deployment tool. In Flyway, all DDL and DML changes to the database are called migrations. Migrations can be versioned or repeatable. &lt;/p&gt;

&lt;p&gt;If the RDS cluster sits in a private subnet, how do you automate these DB migrations?&lt;br&gt;
One solution is an AWS Lambda function in the same VPC that runs Flyway against the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwuadx5t2eoe9v5y8r55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwuadx5t2eoe9v5y8r55.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is what we are going to do:&lt;/p&gt;

&lt;p&gt;Part 1 - Create local setup&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialize the project&lt;/li&gt;
&lt;li&gt;Set up Docker images for PostgreSQL and Flyway so we can test our code&lt;/li&gt;
&lt;li&gt;Write a Java class that runs Flyway migrations against our Docker container&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Part 2 - Deploy in AWS&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create AWS Lambda using Terraform&lt;/li&gt;
&lt;li&gt;Update Java class and deploy code in Lambda&lt;/li&gt;
&lt;li&gt;Configure access from Lambda to RDS (no DB password is needed)&lt;/li&gt;
&lt;li&gt;Draw some conclusions&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;&lt;strong&gt;Initialize project&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new Java project using &lt;code&gt;gradle init&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Your src folder should look like this (example: &lt;a href="https://github.com/nbekenov/flyway-lambda/tree/local-setup" rel="noopener noreferrer"&gt;https://github.com/nbekenov/flyway-lambda/tree/local-setup&lt;/a&gt;)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;└── src
    ├── main
        ├── java
        │   └── com
        │       └── example
        │           └── DatabaseMigrationHandler.java
        └── resources
            └── db
                └── migration
                    └── V1__Create_table.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;our SQL migration scripts will be stored in the src/main/resources/db/migration folder&lt;/li&gt;
&lt;li&gt;our main Java class will be DatabaseMigrationHandler.java (you can name your package whatever you want - I named it com.example)&lt;/li&gt;
&lt;/ul&gt;
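
&lt;p&gt;For reference, a versioned migration such as V1__Create_table.sql could contain something like the following. The table itself is purely illustrative - the actual script in the repo may differ.&lt;/p&gt;

```sql
-- Illustrative example of a versioned Flyway migration (not the repo's actual script).
-- Flyway applies it exactly once and records it in its schema history table.
CREATE TABLE IF NOT EXISTS customers (
    id         SERIAL PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```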



&lt;p&gt;&lt;strong&gt;Docker Compose Setup for Local Development&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this setup, we are using Docker Compose to create a local environment for testing database migrations with Flyway and PostgreSQL. If you want, you can skip the explanation and go straight to the &lt;a href="https://github.com/nbekenov/flyway-lambda/tree/local-setup" rel="noopener noreferrer"&gt;repo with the code&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/docker
├── .env.pg_admin
├── README.md
├── docker-compose.yml
└── init
    └── create_schemas.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a docker folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an init folder inside the docker folder.&lt;br&gt;
In the init folder, create a new file named create_schemas.sql. It will be used at container startup to create our DB schema.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SCHEMA IF NOT EXISTS myschema;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a new file named .env.pg_admin inside the docker folder - it provides the environment variables for the pgAdmin container
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PGADMIN_DEFAULT_EMAIL=user@domain.com
PGADMIN_DEFAULT_PASSWORD=mysecretpassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Finally, create docker-compose.yml inside the docker folder
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.1'

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - ./local-data:/var/lib/postgresql/data
      - ./init:/docker-entrypoint-initdb.d # init scripts are executed upon DB container startup
    ports:
      - 5432:5432

  flyway:
    image: flyway/flyway
    depends_on:
      - db 
    volumes:
      - ../src/main/resources/db/migration:/flyway/sql
    command: -url=jdbc:postgresql://db:5432/postgres -schemas=myschema -user=postgres -password=mysecretpassword -connectRetries=60 migrate

  pg_admin:
    image: dpage/pgadmin4
    depends_on:
      - db 
    env_file:
      - .env.pg_admin
    ports:
      - 80:80

volumes:
  local-data:
    external: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define three services: db, flyway, and pg_admin.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Database Service (db)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Environment Variables: Sets the PostgreSQL user and password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Volumes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;./local-data:/var/lib/postgresql/data: Maps a local directory to the PostgreSQL data directory to persist data.&lt;/li&gt;
&lt;li&gt;./init:/docker-entrypoint-initdb.d: Maps a local directory to the directory where PostgreSQL looks for initialization scripts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Flyway Service (flyway)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Depends_on: Ensures that the db service starts before the Flyway service.&lt;/li&gt;
&lt;li&gt;Volumes: Maps the local directory containing SQL migration scripts to Flyway's expected location.&lt;/li&gt;
&lt;li&gt;Command: Provides Flyway with the necessary parameters to connect to the database and run the migrations:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   -url=jdbc:postgresql://db:5432/postgres: JDBC URL to connect to the PostgreSQL database.
   -schemas=myschema: Specifies the schema to migrate.
   -user=postgres and -password=mysecretpassword: Database credentials.
   -connectRetries=60: Retries the connection for up to 60 seconds if the database is not immediately available.
   migrate: Command to run the migrations.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;pgAdmin Service (pg_admin)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Depends_on: Ensures the db service starts before pgAdmin.&lt;/li&gt;
&lt;li&gt;Env_file: Loads environment variables from a .env.pg_admin file to configure pgAdmin.&lt;/li&gt;
&lt;li&gt;Ports: Maps port 80 on the host to port 80 in the container to access pgAdmin through a web browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start the containers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd docker
docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that Flyway ran&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
docker logs &amp;lt;container-id-or-name&amp;gt; --tail 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzthp2pk2xcwgahfca4ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzthp2pk2xcwgahfca4ad.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Write Java class&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section, we'll dive into the Java class DatabaseMigrationHandler that is designed to run Flyway migrations against a local PostgreSQL database set up in a Docker container. This class encapsulates all the necessary logic to establish a database connection, test the connection, and execute the migrations.&lt;/p&gt;

&lt;p&gt;If you want, you can skip the explanation and go straight to the &lt;a href="https://github.com/nbekenov/flyway-lambda/blob/local-setup/src/main/java/com/example/DatabaseMigrationHandler.java" rel="noopener noreferrer"&gt;repo with the code&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Package and Imports
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.example;

import org.flywaydb.core.Flyway;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import java.util.Objects;
import software.amazon.jdbc.PropertyDefinition;
import software.amazon.jdbc.ds.AwsWrapperDataSource;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Package declaration: the class is part of the com.example package.&lt;br&gt;
Imports: the necessary classes from the Flyway library, the java.sql package, and the AWS Advanced JDBC Wrapper are imported for handling database connections.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Class and Instance Variables
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class DatabaseMigrationHandler {
    // instance vars
    private final String dbHost;
    private final String dbPort;
    private final String dbName;
    private final String dbSchema;
    private final String dbUser;
    private final String dbPassword;

    private static final String DB_HOST = "localhost";
    private static final String DB_PORT = "5432";
    private static final String DB_NAME = "postgres";
    private static final String DB_SCHEMA = "myschema";
    private static final String DB_USER = "postgres";
    private static final String DB_PASSWORD = "mysecretpassword";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Instance Variables: These store the database connection details such as host, port, name, schema, user, and password.&lt;br&gt;
Static Constants: Default values for the database connection details are defined as static constants.&lt;/p&gt;
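
&lt;p&gt;As a side note, these hardcoded constants are exactly what Part 2 will replace. A common pattern is to read each value from an environment variable and fall back to the local default; the sketch below is my own illustration - the variable names are assumptions, not taken from the article's repo.&lt;/p&gt;

```java
// Sketch only: environment-variable overrides with local-dev fallbacks.
// The env var names (DB_HOST, etc.) are illustrative assumptions.
public class EnvConfig {
    static String envOrDefault(String key, String fallback) {
        String value = System.getenv(key);
        // Fall back to the local default when the variable is absent or empty.
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // With nothing set locally, this falls back to the Docker default.
        System.out.println(envOrDefault("DB_HOST", "localhost"));
    }
}
```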

&lt;ul&gt;
&lt;li&gt;Constructor
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public DatabaseMigrationHandler() {
        this.dbHost = DB_HOST;
        this.dbPort = DB_PORT;
        this.dbName = DB_NAME;
        this.dbSchema = DB_SCHEMA;
        this.dbUser = DB_USER;
        this.dbPassword = DB_PASSWORD;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Constructor: Initializes the instance variables with the default values defined above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test Connection Method
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private boolean testConnection() {
        try (Connection connection = getDataSource().getConnection()) {
            return connection != null;
        } catch (SQLException e) {
            e.printStackTrace();
            return false;
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;testConnection Method: Attempts to establish a connection to the database. Returns true if successful, otherwise logs the exception and returns false.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Migrations Method
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private void runMigrations() {
        try{
            Flyway flyway = Flyway.configure()
                    .dataSource(getDataSource())
                    .schemas(this.dbSchema)
                    .load();
            flyway.migrate();
            System.out.println("Completed Database migration!");
        } catch (Exception e) {
            System.out.println("Database migration failed!");
            e.printStackTrace();
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;runMigrations Method: Configures and runs Flyway migrations. It uses the Flyway class to set up the data source and schema, then initiates the migration process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Source Configuration
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    private AwsWrapperDataSource getDataSource() {
        Properties targetDataSourceProps = new Properties();
        targetDataSourceProps.setProperty("ssl", "false");
        targetDataSourceProps.setProperty("password", this.dbPassword);

        AwsWrapperDataSource ds = new AwsWrapperDataSource();
        ds.setJdbcProtocol("jdbc:postgresql:");
        ds.setTargetDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");
        ds.setServerName(this.dbHost);
        ds.setDatabase(this.dbName);
        ds.setServerPort(this.dbPort);
        ds.setUser(this.dbUser);
        ds.setTargetDataSourceProperties(targetDataSourceProps);

        return ds;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;getDataSource Method: Configures the data source using &lt;a href="https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/DataSource.md" rel="noopener noreferrer"&gt;AwsWrapperDataSource&lt;/a&gt; to connect to the PostgreSQL database. It sets the necessary properties such as server name, database name, port, user, and password.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Main method
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public static void main(String[] args) {
        DatabaseMigrationHandler handler = new DatabaseMigrationHandler();
        if (handler.testConnection()) {
            System.out.println("Database connection successful!");
            handler.runMigrations();
        } else {
            System.out.println("Failed to connect to the database.");
        }
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;main Method: The entry point of the application. It creates an instance of DatabaseMigrationHandler, tests the database connection, and runs the migrations if the connection is successful.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Explanation of the build.gradle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section, we'll go through the build.gradle file, which is used to configure the build process for your Java project. We'll also cover some useful Gradle commands for building and running your project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plugins Section
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins {
    id 'java'
    id 'groovy'
    id 'application'
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;application plugin: facilitates the creation of Java applications and provides tasks for running them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependencies Section
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies {
    implementation 'org.flywaydb:flyway-core:9.22.3'
    implementation 'org.postgresql:postgresql:42.7.2'
    implementation 'software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.3.0'

    testImplementation platform('org.junit:junit-bom:5.10.0')
    testImplementation 'org.junit.jupiter:junit-jupiter'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;implementation: Declares dependencies required to compile and run the application. Here, flyway-core, postgresql, and aws-advanced-jdbc-wrapper are included.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application Section
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;application {
    mainClass = 'com.example.DatabaseMigrationHandler'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;mainClass: Specifies the main class of the application, which is com.example.DatabaseMigrationHandler. This is the entry point when running the application.&lt;/p&gt;



&lt;p&gt;Once you have your build.gradle file set up, you can use several Gradle commands to manage your project. These commands are executed from the command line.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./gradlew clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;clean: Deletes the build directory, effectively cleaning the project. This is useful for ensuring a fresh build environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./gradlew build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;build: Compiles the source code, runs tests, and packages the project into a JAR file. This command performs all the necessary steps to create a build artifact.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./gradlew run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;run: Executes the main class specified in the application section. In this case, it will run com.example.DatabaseMigrationHandler, which handles the Flyway migrations.&lt;/p&gt;

&lt;p&gt;In the logs you should see that the connection to the DB was established and the migrations ran successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0gowkqdscmc6x8ek5lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0gowkqdscmc6x8ek5lg.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>java</category>
      <category>database</category>
    </item>
    <item>
      <title>Securely connect to an Amazon RDS</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Mon, 06 May 2024 01:14:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/securely-connect-to-an-amazon-rds-2i3p</link>
      <guid>https://dev.to/aws-builders/securely-connect-to-an-amazon-rds-2i3p</guid>
      <description>&lt;p&gt;Hello there! In this post I am going to show you Terraform code example of how to create resources for secure connection to your DB in RDS cluster.&lt;/p&gt;




&lt;p&gt;I am going to follow the approach described by the AWS team in this &lt;a href="https://aws.amazon.com/blogs/database/securely-connect-to-an-amazon-rds-or-amazon-ec2-database-instance-remotely-with-your-preferred-gui/" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ok, let's figure out what resources we need to create.&lt;br&gt;
I assume you already have a VPC and an RDS cluster (if not, check out the &lt;a href="https://dev.to/aws-builders/create-rds-cluster-and-manage-passwords-in-2024-4n3d"&gt;previous post&lt;/a&gt;).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC endpoints (ssm, ssmmessages, ec2messages)&lt;/li&gt;
&lt;li&gt;EC2 instance&lt;/li&gt;
&lt;li&gt;Security Group for EC2 instance&lt;/li&gt;
&lt;li&gt;IAM role and instance profile&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;I will add details after each part of the code. If you want to skip ahead and jump right to the code, everything can be found in my &lt;a href="https://github.com/nbekenov/rds-aurora/tree/main" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The code below creates the VPC endpoints. Since we need three of them, I loop over a local map with for_each. We also need a security group for the endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ----------------
# VPC Endpoints
# ----------------
locals {
  endpoints = {
    "endpoint-ssm" = {
      name        = "ssm"
      private_dns = false
    },
    "endpoint-ssm-messages" = {
      name        = "ssmmessages"
      private_dns = false
    },
    "endpoint-ec2-messages" = {
      name        = "ec2messages"
      private_dns = false
    },
  }
}

resource "aws_vpc_endpoint" "endpoints" {
  for_each            = local.endpoints

  vpc_id              = module.vpc.vpc_id
  vpc_endpoint_type   = "Interface"
  service_name        = "com.amazonaws.${data.aws_region.current.name}.${each.value.name}"
  security_group_ids  = [aws_security_group.vpc_endpoint_sg.id]
  subnet_ids          = data.aws_subnets.private.ids
  private_dns_enabled = each.value.private_dns

}

# SG for VPC endpoints
resource "aws_security_group" "vpc_endpoint_sg" {
  name_prefix = "vpc-endpoint-sg"
  vpc_id      = module.vpc.vpc_id
  description = "security group for VPC endpoints"

  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [module.vpc.vpc_cidr_block]
    description = "allow all TCP within VPC CIDR"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "allow all outbound traffic from VPC"
  }

  tags = {
    Name = "vpc-endpoints-sg"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's create the EC2 instance and configure an instance profile for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "bastion_host" {
  ami                     = data.aws_ami.amazon_linux_2_ssm.id
  instance_type           = "t3.nano"
  subnet_id               = data.aws_subnets.private.ids[1]
  vpc_security_group_ids  = data.aws_security_groups.vpc_endpoint_sg.ids
  iam_instance_profile    = aws_iam_instance_profile.bastion_host_instance_profile.name
  user_data               = templatefile("ssm-agent-installer.sh", {})
  disable_api_termination = false
  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }
  tags = {
    Name = "ssm-bastion-host"
  }
}

## Instance profile
resource "aws_iam_role" "bastion_host_role" {
  name = "EC2-SSM-Session-Manager-Role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "bastion_host_role_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.bastion_host_role.name
}

resource "aws_iam_instance_profile" "bastion_host_instance_profile" {
  name = "EC2_SSM_InstanceProfile"
  role = aws_iam_role.bastion_host_role.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the instance IAM role we attach the existing AWS-managed policy AmazonSSMManagedInstanceCore, which is required for SSM to work.&lt;br&gt;
Note that user_data runs a shell script to install the necessary packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
main(){
    #####Installing dependencies and packages ####
    echo "Installing Security Updates..."
    sudo yum -y update
    echo "Installing ec2 instance connect..."
    sudo yum install -y ec2-instance-connect
    echo "Installing latest aws-ssm agent..."
    sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
    sudo systemctl enable amazon-ssm-agent
    echo "Starting latest aws-ssm agent..."
    sudo systemctl start amazon-ssm-agent
    sudo amazon-linux-extras enable postgresql14
    sudo yum -y install postgresql
}
time main &amp;gt; /tmp/time-output.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once all the infra is created, follow the steps in my README:&lt;br&gt;
&lt;a href="https://github.com/nbekenov/rds-aurora/blob/main/bastion_host/README.md" rel="noopener noreferrer"&gt;https://github.com/nbekenov/rds-aurora/blob/main/bastion_host/README.md&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a remote port forwarding session
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session \   
    --region us-east-1 \   
    --target &amp;lt;bastion instance id&amp;gt; \    
    --document-name AWS-StartPortForwardingSessionToRemoteHost \    
    --parameters host="&amp;lt;rds endpoint name&amp;gt;",portNumber="5432" localPortNumber="5432"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
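
&lt;p&gt;While the port-forwarding session is running, any local client can reach the database through localhost. For example, instead of pgAdmin you could use psql - the user and database names below are placeholders for your own values.&lt;/p&gt;

```shell
# Local port 5432 is forwarded through the bastion to the RDS endpoint,
# so the client connects to localhost. aurora_admin/postgres are placeholders.
psql -h localhost -p 5432 -U aurora_admin -d postgres
```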



&lt;ul&gt;
&lt;li&gt;Connect to the DB using pgAdmin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg6u777u8d7z3hlpx47r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg6u777u8d7z3hlpx47r.png" alt="Image description" width="707" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the username and password stored in AWS Secrets Manager&lt;/p&gt;




&lt;p&gt;In the next post I will provide details on how to run DB migrations using Flyway and a Lambda function&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>database</category>
      <category>security</category>
    </item>
    <item>
      <title>Create RDS cluster and manage passwords in 2024</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Thu, 01 Feb 2024 21:33:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-rds-cluster-and-manage-passwords-in-2024-4n3d</link>
      <guid>https://dev.to/aws-builders/create-rds-cluster-and-manage-passwords-in-2024-4n3d</guid>
      <description>&lt;p&gt;Hello there! In this post I am going to show you Terraform code example of how to create AWS RDS cluster and mange DB passwords in  AWS Secrets Manager. &lt;/p&gt;




&lt;p&gt;Ok, let's get started with creating the RDS cluster. In my example I am going to create an Aurora PostgreSQL Serverless DB using an &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/tree/master" rel="noopener noreferrer"&gt;existing Terraform module&lt;/a&gt;. I will add details after each part of the code, and all the scripts can be found in &lt;a href="https://github.com/nbekenov/rds-aurora" rel="noopener noreferrer"&gt;my repo&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;data.tf to get VPC details and engine details&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_caller_identity" "current" {}

data "aws_vpc" "vpc" {
  filter {
    name   = "tag:Name"
    values = [var.vpc_name]
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "tag:Name"
    values = ["${var.vpc_name}-private-*"]
  }
}

data "aws_rds_engine_version" "postgresql" {
  engine  = "aurora-postgresql"
  version = var.engine_version
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;vars.tf - update vars based on your environment setup&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "database_name" {
  type        = string
  description = "database_name"
  default     = "aurorapostgres"
}

variable "admin_user_name" {
  type        = string
  description = "admin_user_name"
  default     = "aurora_admin"
}

variable "engine_version" {
  type        = string
  description = "postgresql engine_version"
  default     = "15.4"
}

variable "max_capacity" {
  type        = number
  description = "max scaling capacity"
  default     = 4
}

variable "min_capacity" {
  type        = number
  description = "min scaling capacity"
  default     = 2
}

variable "vpc_name" {
  type        = string
  description = "vpc_name"
  default     = "main-vpc"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;rds.tf&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_kms_key" "aurora_kms_key" {
  description             = "CMK for Aurora PostgreSQL server side encryption"
  deletion_window_in_days = 10
  enable_key_rotation     = false
}

resource "aws_kms_alias" "aurora_kms_key_alias" {
  name          = "alias/aurora-data-store-key"
  target_key_id = aws_kms_key.aurora_kms_key.id
}

resource "aws_db_subnet_group" "serverlessv2_sg" {
  name       = "${var.database_name}-subnet_group"
  subnet_ids = data.aws_subnets.private.ids
}

module "aurora_postgresql_v2" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "8.5.0"

  name          = var.database_name
  database_name = var.database_name

  engine         = data.aws_rds_engine_version.postgresql.engine
  engine_version = data.aws_rds_engine_version.postgresql.version

  instance_class = "db.serverless"
  instances = {
    one = {}
    two = {}
  }
  serverlessv2_scaling_configuration = {
    min_capacity = var.min_capacity
    max_capacity = var.max_capacity
  }

  master_username                     = var.admin_user_name
  manage_master_user_password         = true
  storage_encrypted                   = true
  kms_key_id                          = aws_kms_key.aurora_kms_key.arn
  iam_database_authentication_enabled = true
  ca_cert_identifier                  = "rds-ca-rsa2048-g1"

  vpc_id               = data.aws_vpc.vpc.id
  db_subnet_group_name = aws_db_subnet_group.serverlessv2_sg.name
  security_group_rules = {
    vpc_ingress = {
      cidr_blocks = [data.aws_vpc.vpc.cidr_block]
    }
  }

  monitoring_interval = 60
  apply_immediately   = true
  skip_final_snapshot = true

  deletion_protection = true

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So here I first create a KMS key that will be used for server-side encryption. Then I create the cluster with a subnet group. Let's pause on the DB master user and password.&lt;/p&gt;

&lt;p&gt;Previously there were a couple of ways to set up the DB master user and password:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create a secret (password = random string) in AWS Secrets Manager and then, using Terraform's &lt;code&gt;${data.aws_secretsmanager_secret_version.db_password.secret_string}&lt;/code&gt;, provide the password when creating the cluster. In this case, anyone who gains access to your Terraform state file will be able to see the DB password in plain text.&lt;/li&gt;
&lt;li&gt;to work around this, you could configure password updates by enabling secret rotation in AWS Secrets Manager. That usually requires an additional Lambda function to trigger the rotation and the password update in the RDS cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But now (well &lt;a href="https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-rds-integration-aws-secrets-manager/" rel="noopener noreferrer"&gt;since Dec 22, 2022&lt;/a&gt;) "...RDS fully manages the master user password and stores it in AWS Secrets Manager whenever your RDS database instances are created, modified, or restored. The new feature supports the entire lifecycle maintenance for your RDS master user password including regular and automatic password rotations; removing the need for you to manage rotations using custom Lambda functions." &lt;/p&gt;

&lt;p&gt;Life becomes significantly easier - you just need to select the option&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  master_username                     = var.admin_user_name
  manage_master_user_password         = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it will create the secret for you. The default rotation schedule is 7 days.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdxrgjjf8j2n2t9iuc8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdxrgjjf8j2n2t9iuc8z.png" alt="Image description" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it. We created an RDS cluster and manage the master password in the most secure way, with rotation enabled.&lt;/p&gt;

&lt;p&gt;In the next post I will provide details on how to configure secure access to RDS instances using Terraform:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/securely-connect-to-an-amazon-rds-2i3p"&gt;https://dev.to/aws-builders/securely-connect-to-an-amazon-rds-2i3p&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>database</category>
      <category>rds</category>
    </item>
    <item>
      <title>ECS Blue/Green deployment with CodeDeploy and Terraform</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Thu, 30 Nov 2023 22:09:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/ecs-bluegreen-deployment-with-codedeploy-and-terraform-3gf1</link>
      <guid>https://dev.to/aws-builders/ecs-bluegreen-deployment-with-codedeploy-and-terraform-3gf1</guid>
      <description>&lt;p&gt;In this article you will find a helpful step by step guide on how to setup Blue/Green deployment with AWS CodeDeploy for ECS Fargate using Terraform and the ways of working around common challenges of doing this through the Terraform.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: if you want to skip and get to the solution then please follow this &lt;a href="https://github.com/nbekenov/ecs-codedeploy-blue-green/tree/main" rel="noopener noreferrer"&gt;link to my repository&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Assumptions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You already know how to use Terraform&lt;/li&gt;
&lt;li&gt;You already know what AWS ECS is and how to create an ECS service in Fargate&lt;/li&gt;
&lt;li&gt;You already know what AWS ALB and Target Groups are&lt;/li&gt;
&lt;li&gt;You already have an idea of what the Blue/Green deployment strategy is (&lt;a href="https://aws.amazon.com/blogs/compute/bluegreen-deployments-with-amazon-ecs/" rel="noopener noreferrer"&gt;if not, check this page&lt;/a&gt;)
&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;So what are the challenges of doing this with Terraform? Before answering this question let me briefly go through the Blue/Green deployment process in ECS when it's performed by CodeDeploy.&lt;/p&gt;

&lt;p&gt;This is what you should have in place before starting deployment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ECS cluster, ECS task definition&lt;/li&gt;
&lt;li&gt;In that cluster ECS service running behind Load Balancer&lt;/li&gt;
&lt;li&gt;In that Load Balancer - an ALB Listener rule (the &lt;em&gt;Production Listener&lt;/em&gt;, for example on port 443) that forwards traffic to the Target Group (the &lt;em&gt;Blue target group&lt;/em&gt;) where the ECS service tasks are registered. &lt;/li&gt;
&lt;li&gt;A second Listener (the &lt;em&gt;Test traffic listener&lt;/em&gt;, for example on port 8443) that points to the Green Target Group; no ECS service tasks are registered in this group at the moment.&lt;/li&gt;
&lt;li&gt;CodeDeploy app and deployment group - you can find &lt;a href="https://github.com/nbekenov/ecs-codedeploy-blue-green/blob/main/main.tf" rel="noopener noreferrer"&gt;my example here &lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6ymokqzzrwqii17xael.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6ymokqzzrwqii17xael.png" alt="Image description" width="778" height="381"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Now to perform B/G deployment you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create new ECS task revision &lt;/li&gt;
&lt;li&gt;Update appspec.yaml file with New Task arn and ALB information &lt;/li&gt;
&lt;li&gt;Create a new deployment in CodeDeploy with provided appspec.yaml&lt;/li&gt;
&lt;/ol&gt;
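&lt;p&gt;Step 2 can be sketched as a simple template substitution; the file names, the placeholder string, and the task definition ARN below are all hypothetical.&lt;br&gt;
&lt;/p&gt;

```shell
# Render appspec.yaml from a template by injecting the new task definition ARN.
cat > appspec.template.yaml <<'EOF'
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: TASK_DEF_PLACEHOLDER
EOF

NEW_TASK_ARN="arn:aws:ecs:us-east-1:123456789012:task-definition/example:2"
sed "s|TASK_DEF_PLACEHOLDER|$NEW_TASK_ARN|" appspec.template.yaml > appspec.yaml
grep TaskDefinition appspec.yaml
```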




&lt;p&gt;This is what CodeDeploy will do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates new tasks with the new version of the image and adds them to the Green Target Group. Now you can access the new version of the app through the Test Listener port&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4aqovtfp3u14vnk3m0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4aqovtfp3u14vnk3m0i.png" alt="Image description" width="778" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once it confirms that the new tasks are up and running (or all pre-hook tests have passed), CodeDeploy shifts traffic in the Load Balancer, pointing the &lt;em&gt;Production Listener&lt;/em&gt; to the Green Target Group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiihqllg7d4u80hsikeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiihqllg7d4u80hsikeh.png" alt="Image description" width="778" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once traffic is shifted, CodeDeploy will (after a configured time) decommission the tasks running the old version of the app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrxg6njp6bce2t10yq9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrxg6njp6bce2t10yq9r.png" alt="Image description" width="778" height="381"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;So the challenges of doing this through Terraform are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CodeDeploy makes changes in the ALB (shifts traffic between target groups) outside of Terraform. If we run terraform apply, it will overwrite the changes made by CodeDeploy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To work around this, add lifecycle { ignore_changes = ...} to the ALB listener rule&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb_listener_rule" "example"{
...
 action {
  type = "forward"
  target_group_arn = aws_lb_target_group.example.arn
 }
# because CodeDeploy will switch target groups during the B/G deployment
 lifecycle {
   ignore_changes = [action] 
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After the deployment is done, CodeDeploy changes the task definition version in the ECS service and updates the target group information outside of Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To work around this, add lifecycle { ignore_changes = ...} to the ECS service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_service" "example"{
...
 task_definition = aws_ecs_task_definition.example.arn

 load_balancer {
  container_name = "example"
  container_port = 8080
  target_group_arn = aws_lb_target_group.example.arn
 }
# because CodeDeploy will handle task definition and alb changes outside of terraform
 lifecycle {
   ignore_changes = [load_balancer, task_definition]
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To create a deployment in CodeDeploy we need to execute the AWS CLI&lt;/li&gt;
&lt;li&gt;To update the appspec.yaml file we need to know the ARN of the new task revision. Various blog posts show examples of creating the new task revision with the CLI - but what if we want to manage the ECS task definition through Terraform?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To do this through Terraform, use local_file and local-exec to render appspec.yaml and execute the CLI to create the deployment in CodeDeploy.&lt;/p&gt;

&lt;p&gt;In the code below we build the content of the appspec.yaml file and then execute CLI commands to start the deployment and wait until it finishes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {

  # appspec file  
  appspec = {
    version = "0.0"
    Resources = [
      {
        TargetService = {
          Type = "AWS::ECS::Service"
          Properties = {
            TaskDefinition = var.ecs_task_def_arn
            LoadBalancerInfo = {
              ContainerName = var.container_name
              ContainerPort = var.container_port
            }
          }
        }
      }
    ]
  }
  appspec_content = replace(jsonencode(local.appspec), "\"", "\\\"")
  appspec_sha256  = sha256(jsonencode(local.appspec))

  # create deployment bash script
  script = &amp;lt;&amp;lt;EOF
#!/bin/bash

echo "creating deployment ..."
ID=$(aws deploy create-deployment \
    --application-name ${var.codedeploy_application_name} \
    --deployment-group-name ${var.deployment_group_name} \
    --revision '{"revisionType": "AppSpecContent", "appSpecContent": {"content": "${local.appspec_content}", "sha256": "${local.appspec_sha256}"}}' \
    --output text \
    --query '[deploymentId]')

echo "======================================================="
echo "waiting for deployment $ID to finish ..."
STATUS=$(aws deploy get-deployment \
    --deployment-id $ID \
    --output text \
    --query '[deploymentInfo.status]')

while [[ $STATUS == "Created" || $STATUS == "InProgress" || $STATUS == "Pending" || $STATUS == "Queued" || $STATUS == "Ready" ]]; do
    echo "Status: $STATUS..."
    STATUS=$(aws deploy get-deployment \
        --deployment-id $ID \
        --output text \
        --query '[deploymentInfo.status]')

    SLEEP_TIME=30

    echo "Sleeping for: $SLEEP_TIME Seconds"
    sleep $SLEEP_TIME
done

if [[ $STATUS == "Succeeded" ]]; then
    echo "Deployment succeeded."
else
    echo "Deployment failed!"
    exit 1
fi

EOF

}

resource "local_file" "deploy_script" {
  filename             = "${path.module}/deploy_script.txt"
  directory_permission = "0755"
  file_permission      = "0644"
  content              = local.script

  depends_on = [ 
    aws_codedeploy_app.this,
    aws_codedeploy_deployment_group.this,
  ]
}

resource "null_resource" "start_deploy" {
  triggers = {
    appspec_sha256 = local.appspec_sha256 # run only if appspec file changed
  }

  provisioner "local-exec" {
    command     = local.script
    interpreter = ["/bin/bash", "-c"]
  }

  depends_on = [ 
    aws_codedeploy_app.this,
    aws_codedeploy_deployment_group.this,
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
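&lt;p&gt;Note that the script sends the appspec as AppSpecContent together with its SHA-256 digest. If you need to reproduce Terraform's &lt;code&gt;sha256(jsonencode(local.appspec))&lt;/code&gt; outside of Terraform, hashing the same byte-for-byte JSON gives the same value; the payload below is just a sample, not the full appspec.&lt;br&gt;
&lt;/p&gt;

```shell
# Compute the checksum CodeDeploy expects alongside AppSpecContent
# (sample payload; real input would be the compact JSON-encoded appspec).
APPSPEC='{"version":"0.0"}'
SHA=$(printf '%s' "$APPSPEC" | sha256sum | awk '{print $1}')
echo "$SHA"
```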



</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>containers</category>
    </item>
    <item>
      <title>Use AWS CodeArtifact with Angular app builds</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Wed, 27 Sep 2023 18:20:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/use-aws-codeartifact-with-angular-app-builds-52kg</link>
      <guid>https://dev.to/aws-builders/use-aws-codeartifact-with-angular-app-builds-52kg</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
This is a short post describing how you can use AWS CodeArtifact with your application builds. I am using generic packages but there are other options. I will provide example for npm build but the approach can be used for other types of builds.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I previously posted a detailed walkthrough on how to host an Angular app in S3 (&lt;a href="https://dev.to/aws-builders/angular-app-in-aws-s3-and-cloudfront-2jp0"&gt;check out the link&lt;/a&gt;). This post describes how we can now store the builds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Understanding the Need for Artifact Repositories&lt;/strong&gt;&lt;br&gt;
Why Store Builds? Why can't we just deploy our applications immediately after building them? It's essential to understand the rationale behind storing builds in the first place.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Consistent Deployments: The "build once and deploy everywhere" mantra ensures that the same build is deployed across various environments, be it staging, QA, or production. This consistency eliminates discrepancies between environments and the notorious "it works on my machine" dilemma.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versioning and Rollbacks: Storing builds provides a clear version history. If a new deployment encounters issues, teams can swiftly revert to a previous, stable build, ensuring minimal service disruption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Streamlined Testing: With builds stored and versioned, testing becomes more efficient. Testers can pull specific builds for different testing stages, ensuring that the application undergoes thorough scrutiny before hitting production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit and Compliance: A repository provides a clear audit trail of application iterations. This is invaluable for sectors with stringent regulatory requirements, as it offers transparency and traceability.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Setting Up an AWS CodeArtifact Repository&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_kms_key" "codeartifact_key" {
  key_id = "alias/aws/codeartifact"
}

resource "aws_codeartifact_domain" "example" {
  domain         = "nbekenov"
  encryption_key = data.aws_kms_key.codeartifact_key.arn
}

resource "aws_codeartifact_repository" "example" {
  repository = "my-example-repo"
  domain     = aws_codeartifact_domain.example.domain
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run build:prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This assumes a build:prod script is defined in package.json (for example, ng build --configuration production). It produces the dist/ folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publish to CodeArtifact&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export DOMAIN_NAME="nbekenov"
export REPOSITORY_NAME="my-example-repo"
export PACKAGE_NAME="example-package"
export PACKAGE_VERSION="1.0.0"

tar -cvzf asset.tar.gz dist/

export ASSET_SHA256=$(sha256sum asset.tar.gz | awk '{print $1;}')

aws codeartifact publish-package-version --domain $DOMAIN_NAME \
    --repository $REPOSITORY_NAME --format generic \
    --namespace $PACKAGE_NAME \
    --package $PACKAGE_NAME --package-version $PACKAGE_VERSION \
    --asset-content asset.tar.gz \
    --asset-name asset.tar.gz \
    --asset-sha256 $ASSET_SHA256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
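&lt;p&gt;The &lt;code&gt;--asset-sha256&lt;/code&gt; value lets CodeArtifact verify the integrity of the upload. You can run the same check yourself with &lt;code&gt;sha256sum -c&lt;/code&gt;; the file created below is a stand-in for the real build archive.&lt;br&gt;
&lt;/p&gt;

```shell
# Verify an archive against its recorded checksum (stand-in file for illustration).
printf 'demo-build-output' > asset.tar.gz
ASSET_SHA256=$(sha256sum asset.tar.gz | awk '{print $1}')

# sha256sum -c expects "hash  filename" (separated by two spaces)
echo "$ASSET_SHA256  asset.tar.gz" | sha256sum -c -
# → asset.tar.gz: OK
```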



&lt;p&gt;&lt;strong&gt;Get package from CodeArtifact&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
export DOMAIN_NAME="nbekenov"
export REPOSITORY_NAME="my-example-repo"
export PACKAGE_NAME="example-package"

PACKAGE_VERSION=$1

aws codeartifact get-package-version-asset --domain $DOMAIN_NAME \
    --repository $REPOSITORY_NAME  --format generic \
    --namespace $PACKAGE_NAME \
    --package $PACKAGE_NAME \
    --package-version $PACKAGE_VERSION --asset asset.tar.gz asset.tar.gz

tar -xzvf asset.tar.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's it! Now you have packages versioned by the build number (package version) you provided, and you can deploy them.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>angular</category>
      <category>devops</category>
    </item>
    <item>
      <title>Host angular app in AWS S3 and CloudFront using Terraform (complete setup)</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Sat, 19 Aug 2023 19:51:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/angular-app-in-aws-s3-and-cloudfront-2jp0</link>
      <guid>https://dev.to/aws-builders/angular-app-in-aws-s3-and-cloudfront-2jp0</guid>
      <description>&lt;p&gt;In this guide, I'll walk you through the steps to host your Angular app on AWS S3 and then optimize its delivery using CloudFront. Let's dive in!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What We're Going to Build&lt;/strong&gt;&lt;br&gt;
Before diving into the technicalities, let's set the stage by understanding what we aim to achieve by the end of this guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz556enzhrzzpwnk8zspc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz556enzhrzzpwnk8zspc.png" alt="Image description" width="641" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Angular Application: We'll start with a basic Angular application. If you already have one, great! If not, don't worry. We'll briefly touch upon how to set up a simple Angular app using the Angular CLI. This will serve as our demo project to be hosted on AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS S3 Bucket: We'll create an S3 bucket, which is essentially a storage space in AWS where we'll upload our Angular app's build files. This bucket will act as our web server, serving the static files of our application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CloudFront Distribution: To ensure our Angular app is delivered quickly and securely to users worldwide, we'll set up a CloudFront distribution. This will pull the static files from our S3 bucket and distribute them across a network of data centers (edge locations) around the globe. When a user accesses our app, they'll be served from the nearest edge location, ensuring minimal latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom Domain (Optional): For those who want to take it a step further, we'll also touch upon how to link a custom domain to our CloudFront distribution. This way, users can access our app using a friendly URL rather than the default AWS-generated one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSL Certificate for HTTPS: Security is paramount. We'll guide you on how to set up an SSL certificate (for free) using AWS Certificate Manager, ensuring that our Angular app is accessible via HTTPS, providing an encrypted and secure connection for our users.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the end of this guide, you'll have a fully functional Angular application hosted on AWS, optimized for speed and security. Whether you're looking to host a personal project, a portfolio, or even a production-ready application, this setup will ensure your Angular app is accessible to users around the world with top-notch performance.&lt;/p&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Angular application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's create a simple "Hello World" Angular application using the Angular CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Node.js and npm: Ensure you have Node.js and npm installed. If not, download and install them from the official Node.js website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Angular CLI: If you haven't installed the Angular CLI yet, you can do so using npm:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @angular/cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Steps to Create a Simple "Hello World" Angular App:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a New Angular Project:
Open your terminal or command prompt and run the following command to create a new Angular project named my-app:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ng new my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Edit the App Component:
Open the my-app/src/app/app.component.html file in your favorite code editor. Replace its content with:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;style=&lt;/span&gt;&lt;span class="s"&gt;"text-align:center; margin-top: 40px;"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello World!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;Welcome to my simple Angular app.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Serve the Application:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ng serve &lt;span class="nt"&gt;-o&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once compiled successfully, you can open your browser and navigate to &lt;a href="http://localhost:4200/" rel="noopener noreferrer"&gt;http://localhost:4200/&lt;/a&gt;. You should see the "Hello World!" message displayed.&lt;br&gt;
And that's it! You've successfully created a simple "Hello World" Angular application using the Angular CLI. This app can now be built for production and hosted on platforms like AWS, as discussed earlier.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Infrastructure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Alright! Let's set up the infrastructure for hosting our Angular app on AWS using Terraform. We'll create an S3 bucket for storing our app's static files and a CloudFront distribution for content delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terraform Installed: Ensure you have Terraform installed. If not, download and install it from Terraform's official website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI Configured: Ensure you have the AWS CLI installed and configured with the necessary access rights.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Existing Domain name (if you don't have any then follow instructions &lt;a href="https://aws.amazon.com/getting-started/hands-on/get-a-domain/" rel="noopener noreferrer"&gt;https://aws.amazon.com/getting-started/hands-on/get-a-domain/&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps to Create Infrastructure using Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find all IaC in &lt;a href="https://github.com/nbekenov/angular-app-in-s3/tree/main/infra" rel="noopener noreferrer"&gt;my repo &lt;/a&gt;&lt;br&gt;
Update variables with your domain name and bucket name. Update backend configuration (backend.tf file)&lt;br&gt;
Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the infrastructure components we're setting up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;S3 Bucket for Source Code:&lt;br&gt;
We are creating an S3 bucket, which is Amazon's object storage service. This bucket will store the static files of our Angular application. Once the application is built using the Angular CLI, the output files from the dist/ directory will be uploaded to this S3 bucket. By configuring the bucket as a website, it can serve these static files as a web application.&lt;br&gt;
&lt;a href="https://github.com/nbekenov/angular-app-in-s3/blob/main/infra/modules/cloudfront-s3-cdn/s3_origin.tf" rel="noopener noreferrer"&gt;infra/modules/cloudfront-s3-cdn/s3_origin.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CloudFront Distribution with Geographical Restrictions:&lt;br&gt;
CloudFront is AWS's content delivery network (CDN) service. We're setting up a CloudFront distribution to deliver our Angular app's content to users. The key advantage of using CloudFront is its vast network of edge locations worldwide, which caches content closer to the end-users, ensuring faster load times. For this setup, we're adding a geographical restriction to whitelist only the USA. This means that users outside the USA will not be able to access the content, ensuring that the application is only available to a specific audience.&lt;br&gt;
&lt;a href="https://github.com/nbekenov/angular-app-in-s3/blob/main/infra/modules/cloudfront-s3-cdn/main.tf" rel="noopener noreferrer"&gt;infra/modules/cloudfront-s3-cdn/main.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;S3 Bucket for CloudFront Access Logs:&lt;br&gt;
Monitoring and logging are crucial for any application. We'll set up a separate S3 bucket dedicated to storing CloudFront's access logs. These logs provide detailed records about every user request that CloudFront receives. By analyzing these logs, you can gain insights into user behavior, troubleshoot issues, and even detect potential security threats.&lt;br&gt;
&lt;a href="https://github.com/nbekenov/angular-app-in-s3/blob/main/infra/modules/cloudfront-s3-cdn/log_bucket.tf" rel="noopener noreferrer"&gt;infra/modules/cloudfront-s3-cdn/log_bucket.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSL Certificate:&lt;br&gt;
Security is paramount for web applications. We're integrating an SSL certificate to ensure that our Angular app is served over HTTPS. This encrypts the data between the user's browser and CloudFront, providing a secure browsing experience. We'll use AWS Certificate Manager (ACM) to provision, manage, and deploy the SSL/TLS certificate.&lt;br&gt;
&lt;a href="https://github.com/nbekenov/angular-app-in-s3/blob/main/infra/dns.tf" rel="noopener noreferrer"&gt;infra/dns.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Record in Route 53:&lt;br&gt;
AWS Route 53 is a scalable domain name system (DNS) web service. Once our Angular app is up and running with CloudFront, we might want to link a custom domain to it, rather than using the default AWS-generated URL. We'll create a record in Route 53 that points our custom domain to the CloudFront distribution. This ensures that users can access our app using a friendly and memorable domain name.&lt;br&gt;
&lt;a href="https://github.com/nbekenov/angular-app-in-s3/blob/main/infra/dns.tf" rel="noopener noreferrer"&gt;infra/dns.tf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, this infrastructure setup ensures that our Angular app is not only hosted efficiently but is also optimized for speed, security, and specific audience targeting. By leveraging AWS services like S3, CloudFront, ACM, and Route 53, we're building a robust and scalable hosting solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Deploy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Deploy to S3 using aws s3 sync:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you have your Angular application built and your AWS infrastructure set up using Terraform, the next step is to deploy the built Angular app to the S3 bucket. One of the most efficient ways to do this is by using the aws s3 sync command provided by the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;my-app
ng build &lt;span class="nt"&gt;--prod&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;sync &lt;/span&gt;dist/my-app/ s3://my-angular-app-bucket/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace my-angular-app-bucket with the name of the S3 bucket you created using Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invalidate CloudFront Cache (Optional):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you've made changes to your app and want them to be immediately reflected via CloudFront, you might need to invalidate the CloudFront cache. This ensures that CloudFront fetches the latest version of your app from the S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace YOUR_DISTRIBUTION_ID with the ID of your CloudFront distribution.&lt;/p&gt;
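&lt;p&gt;If you want the deploy step to block until the invalidation has actually propagated, the AWS CLI can poll for you. The function below is a sketch (the distribution ID stays a placeholder); it captures the invalidation ID from the create call and then waits on it:&lt;/p&gt;

```shell
# Create an invalidation and wait until CloudFront reports it completed.
invalidate_and_wait() {
  local dist_id="$1"
  local inv_id
  # --query/--output extract just the invalidation ID from the JSON response
  inv_id="$(aws cloudfront create-invalidation \
    --distribution-id "$dist_id" --paths "/*" \
    --query 'Invalidation.Id' --output text)"
  aws cloudfront wait invalidation-completed \
    --distribution-id "$dist_id" --id "$inv_id"
}

# Usage: invalidate_and_wait YOUR_DISTRIBUTION_ID
```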

&lt;p&gt;&lt;strong&gt;Access the Application:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once deployed, you can access your Angular app either through the S3 bucket's website URL or, preferably, through the CloudFront distribution URL. If you've set up a custom domain with Route 53, you can use that domain to access your app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zrfmkbfxwfmocdf7y3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zrfmkbfxwfmocdf7y3c.png" alt="Image description" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! Your Angular app is now live on AWS, hosted in an S3 bucket and delivered globally via CloudFront. &lt;br&gt;
Great job! 👏 &lt;/p&gt;

&lt;p&gt;Further read &lt;a href="https://dev.to/aws-builders/use-aws-codeartifact-with-angular-app-builds-52kg"&gt;Use CodeArtifact with Angular builds&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>angular</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying a Node.js application in ECS with CodeCatalyst and Terraform</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Tue, 11 Jul 2023 23:39:47 +0000</pubDate>
      <link>https://dev.to/nbekenov/deploying-a-nodejs-application-in-ecs-with-codecatalyst-and-terraform-1hf5</link>
      <guid>https://dev.to/nbekenov/deploying-a-nodejs-application-in-ecs-with-codecatalyst-and-terraform-1hf5</guid>
      <description>&lt;p&gt;In this step-by-step guide, I'll walk you through the entire process of deploying a Node.js application in ECS Fargate with CodeCatalyst as CI/CD tool and Terraform as IaC. From setting up your environment to configuring your containers and services, we'll cover everything you need to know to get your application up and running in no time. So whether you're new to cloud deployment or just looking for a more streamlined process, this guide is for you. Let's get started!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
You will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NPM to build and run the app locally&lt;/li&gt;
&lt;li&gt;Docker to build and test your image&lt;/li&gt;
&lt;li&gt;Terraform to create infrastructure in AWS&lt;/li&gt;
&lt;li&gt;AWS account + AWS Builder ID (for CodeCatalyst)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
If you are interested only in creating the CI/CD pipeline in CodeCatalyst, please skip to step # 4&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Step 0 [Optional] Build and run app locally.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Feel free to skip this step if you already have an existing Node.js app.&lt;br&gt;
Create the following files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;src/app.js: the main app file in the src folder&lt;/li&gt;
&lt;li&gt;package.json: includes the app dependencies
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import the express module
const express = require('express');

// Create an instance of express
const app = express();

// Define a port
const port = 3000;

// Define a route
app.get('/', (req, res) =&amp;gt; {
  res.send('Hello, World!');
});

// Start the server
app.listen(port, () =&amp;gt; {
  console.log(`Server is running at http://localhost:${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://github.com/nbekenov/nodejs-example-app/commit/9d13af1a3e23d0e767c6c2cc853af0c6872071fe" rel="noopener noreferrer"&gt;My example code &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can build and run your app locally&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm run build
npm test
npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
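&lt;p&gt;The npm run build and npm test commands above only work if matching scripts exist in package.json. A minimal sketch of what that file might look like for this app (the version numbers and script bodies here are assumptions, not the repo's actual file):&lt;/p&gt;

```json
{
  "name": "nodejs-example-app",
  "version": "1.0.0",
  "main": "src/app.js",
  "scripts": {
    "start": "node src/app.js",
    "build": "echo \"nothing to compile for plain Node.js\"",
    "test": "echo \"no tests yet\""
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
```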



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1 - Containerize the app&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's create a Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use an official Node.js runtime as a parent image
FROM node:20-alpine

# set the current mode
ENV NODE_ENV=production

# Set the working directory to /app
WORKDIR /usr/src/app

# Copy the application code to the container
COPY --chown=node:node . .

# Install the dependencies
RUN npm ci --only=production

# Expose the port that your application listens on
EXPOSE 3000

USER node

# Start the application
CMD ["node", "src/app.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run your app in Docker&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t test:latest .
docker run -p 3000:3000 test:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3 - Create Infra&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We are going to implement the following architecture using Terraform and open source modules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwznfttzthkioaevsxsqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwznfttzthkioaevsxsqf.png" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, you will need to bootstrap your account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create an S3 bucket and a DynamoDB table for the Terraform state file&lt;/li&gt;
&lt;li&gt;create an ECR repository&lt;/li&gt;
&lt;li&gt;create IAM roles and policies for CodeCatalyst&lt;/li&gt;
&lt;/ul&gt;
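&lt;p&gt;Once the state bucket and lock table exist, the app's Terraform configuration can point at them through an S3 backend. A sketch (the bucket, key, region, and table names are assumptions, not the repo's actual values):&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # created during bootstrap (name assumed)
    key            = "nodejs-example-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"            # state locking (name assumed)
    encrypt        = true
  }
}
```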

&lt;p&gt;More details in the &lt;a href="https://github.com/nbekenov/nodejs-example-app/tree/main/iac/bootstrap" rel="noopener noreferrer"&gt;readme&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then &lt;a href="https://github.com/nbekenov/nodejs-example-app/tree/main/iac/app_infra" rel="noopener noreferrer"&gt;IaC for app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4 - create Pipeline&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/signin/latest/userguide/create-aws_builder_id.html" rel="noopener noreferrer"&gt;create AWS Builder ID&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Log into CodeCatalyst and create new space and project&lt;/li&gt;
&lt;li&gt;Connect AWS accounts&lt;/li&gt;
&lt;li&gt;Create environments&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Approve AWS CodePipeline Deployments from MS Teams</title>
      <dc:creator>Nathan (Nursultan) Bekenov</dc:creator>
      <pubDate>Mon, 10 Jul 2023 14:15:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/approve-aws-codepipeline-deployments-from-ms-teams-dd9</link>
      <guid>https://dev.to/aws-builders/approve-aws-codepipeline-deployments-from-ms-teams-dd9</guid>
<description>&lt;p&gt;With an increasing number of pipelines running in multiple accounts for different applications, it becomes difficult to manage all deployment approvals. The solution provided here enables you to approve CodePipeline stages in a centralized way using Microsoft Teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb0ekcip6pwri5cgtve7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb0ekcip6pwri5cgtve7.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The solution involves several components working together. First, a Notification Lambda function sends an approval request to an MS Teams channel as a message card. This function is triggered by an SNS topic configured in an AWS CodePipeline stage that requires manual approval. The Notification Lambda function uses a Microsoft Teams webhook connector to post the message card, which includes buttons the approver can click to approve or reject the pipeline stage. When the approver clicks a button, MS Teams sends a POST request to an API Gateway endpoint, which triggers a second function, the Approval Lambda. It uses the AWS CodePipeline API to update the status of the pipeline stage based on the approver's response.&lt;/p&gt;
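&lt;p&gt;As an illustration of the first hop, the payload posted to the Teams webhook is a connector MessageCard. Posting a minimal one by hand with curl looks roughly like this; the function name, webhook URL, target URL, and card text are all placeholders, not values from the actual solution:&lt;/p&gt;

```shell
# Post a minimal MessageCard with an approval button to a Teams webhook.
# WEBHOOK_URL and TARGET_URL are placeholders.
post_approval_card() {
  local webhook_url="$1"
  local target_url="$2"
  curl -sS -H "Content-Type: application/json" -d '{
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "summary": "Pipeline approval requested",
    "title": "Approval needed: my-pipeline / Deploy",
    "potentialAction": [{
      "@type": "HttpPOST",
      "name": "Approve",
      "target": "'"$target_url"'"
    }]
  }' "$webhook_url"
}

# Usage: post_approval_card "$WEBHOOK_URL" "$TARGET_URL"
```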

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;br&gt;
The whole solution can be found in &lt;a href="https://github.com/nbekenov/pipeline-approval-from-ms-teams" rel="noopener noreferrer"&gt;my git repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;a href="https://github.com/nbekenov/pipeline-approval-from-ms-teams/tree/main/infra/terraform" rel="noopener noreferrer"&gt;IaC&lt;/a&gt; I used Terraform, but you can choose any other method you prefer. The most important part, I believe, is the code of the Lambda functions, which I wrote in Python. &lt;/p&gt;
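&lt;p&gt;On the return path, the Approval Lambda records the decision via the CodePipeline PutApprovalResult API. The same call is shown below with the AWS CLI for brevity; the function name and all argument values are placeholders (the token comes from the approval notification for that specific execution):&lt;/p&gt;

```shell
# Record an approval decision on a waiting manual-approval action.
# Pipeline/stage/action names and the token are placeholders.
approve_stage() {
  local pipeline="$1" stage="$2" action="$3" token="$4"
  aws codepipeline put-approval-result \
    --pipeline-name "$pipeline" \
    --stage-name "$stage" \
    --action-name "$action" \
    --result summary="Approved from MS Teams",status=Approved \
    --token "$token"
}

# Usage: approve_stage my-pipeline Deploy ManualApproval "$TOKEN"
```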

&lt;p&gt;More about MS Teams message cards &lt;a href="https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using?tabs=cURL" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's possible to enhance security by adding a Lambda Authorizer to API Gateway to ensure that requests come only from MS Teams.&lt;/li&gt;
&lt;li&gt;Use a dedicated MS Teams channel for approval requests so that they are easily visible and distinguishable from other messages.&lt;/li&gt;
&lt;li&gt;Make sure that the people allowed to approve pipelines are added to the channel and granted admin access.&lt;/li&gt;
&lt;li&gt;Use a descriptive and informative message card format that includes relevant information such as the pipeline name, stage name, etc.&lt;/li&gt;
&lt;li&gt;Use appropriate error handling and logging in your Lambda functions to detect and handle errors and exceptions.&lt;/li&gt;
&lt;li&gt;Use monitoring and alerting to track the usage and performance of your Lambda functions and other resources, and to alert you to potential issues or anomalies.&lt;/li&gt;
&lt;li&gt;The same approach can be implemented with Slack channels; the only difference is in how requests are sent to and from Slack.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cicd</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
