<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Engin ALTAY</title>
    <description>The latest articles on DEV Community by Engin ALTAY (@enginaltayy).</description>
    <link>https://dev.to/enginaltayy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F910548%2F360bc890-537e-49c9-a114-e04f30ac5a0d.jpeg</url>
      <title>DEV Community: Engin ALTAY</title>
      <link>https://dev.to/enginaltayy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/enginaltayy"/>
    <language>en</language>
    <item>
      <title>Complete Guide - Deploying Production-Ready MongoDB Replica Set on AWS</title>
      <dc:creator>Engin ALTAY</dc:creator>
      <pubDate>Tue, 21 Jan 2025 13:33:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/complete-guide-deploying-production-ready-mongodb-replica-set-on-aws-1ph</link>
      <guid>https://dev.to/aws-builders/complete-guide-deploying-production-ready-mongodb-replica-set-on-aws-1ph</guid>
      <description>&lt;p&gt;MongoDB is a leading NoSQL database trusted by organizations for its flexibility, scalability, and high availability. This guide walks you through deploying a production-ready MongoDB replica set on AWS using a self-hosted, Dockerized setup. By leveraging AWS’s robust infrastructure, such as EC2 instances, paired with cost-saving strategies like Savings Plans or Reserved Instances, you can achieve a highly available and cost-optimized database solution.&lt;/p&gt;

&lt;p&gt;We’ll also compare this self-hosted approach to managed services like MongoDB Atlas, showcasing how you can significantly reduce costs while retaining full control over your database environment. To ensure robust security, we’ll enable authentication and implement proper role-based authorization, safeguarding your data against unauthorized access.&lt;/p&gt;

&lt;p&gt;Whether you’re a DevOps professional or a database administrator, this guide provides practical insights and detailed steps for deploying, securing, and optimizing MongoDB replica sets in an AWS environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preparation Phase: Setting Up MongoDB Replica Set Deployment on AWS
&lt;/h3&gt;

&lt;p&gt;To ensure a smooth deployment of your MongoDB replica set, we will use the latest stable Debian release, Bookworm, for compatibility with the latest MongoDB version. Additionally, the Docker engine must be installed and operational. Proper internal hostnames must also be assigned to your EC2 instances for seamless replica set initialization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1 - Verify the Server Environment
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kernel Information:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uname -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Ensure the system is running the Debian Bookworm kernel.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Installation:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Confirm that Docker is installed and functioning correctly.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource Availability:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;htop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Check the system's resource usage, including CPU and memory, to ensure adequate capacity.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disk Space:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Verify that sufficient disk space is available for MongoDB data storage.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2 (Optional) - Expand Filesystem
&lt;/h4&gt;

&lt;p&gt;We'll use a persistent EBS volume attached to each EC2 instance to store MongoDB data. Keeping the data on a separate EBS volume also makes backup and restore easier when needed.&lt;/p&gt;

&lt;p&gt;If you’ve increased the size of a persistent disk attached to your EC2 instance, grow the filesystem to utilize the expanded space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xfs_growfs /dev/xvdp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3 - Assign Internal Hostnames
&lt;/h4&gt;

&lt;p&gt;Internal hostnames are critical for MongoDB replica set configuration. Assign a unique, descriptive hostname to each machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hostnamectl set-hostname &amp;lt;hostname&amp;gt;.internal.company.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Replace "hostname" with an appropriate identifier for each instance, such as mongo1, mongo2, and mongo3.&lt;/em&gt;&lt;/p&gt;
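
&lt;p&gt;For replica set members to reach each other by these names, each hostname must resolve from every instance. If you are not using private DNS such as a Route 53 private hosted zone, a minimal alternative is /etc/hosts entries on each machine (the IPs below are placeholders for your instances' private addresses):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/hosts
10.0.1.10  mongo1.internal.company.net
10.0.2.10  mongo2.internal.company.net
10.0.3.10  mongo3.internal.company.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;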

&lt;h3&gt;
  
  
  Overview of Authentication-Enabled MongoDB Replica Set
&lt;/h3&gt;

&lt;p&gt;Setting up an authentication-enabled MongoDB replica set involves securing both internal communication between replica set members and external client connections. This is achieved through the following configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Internal Authentication:&lt;br&gt;
Members of the replica set use a keyfile for secure communication. This ensures that only trusted nodes can join and exchange data within the replica set.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client Authentication with Role-Based Access Control (RBAC):&lt;br&gt;
External clients, such as the MongoDB shell or applications, must authenticate using valid credentials. Access is managed through RBAC, which assigns specific roles and permissions to each user.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dual-layered security setup ensures robust protection for your MongoDB replica set, safeguarding both internal operations and client interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
   Deploying MongoDB Replica Set with Keyfile Access Control
&lt;/h3&gt;

&lt;p&gt;Setting up a MongoDB replica set with keyfile-based authentication involves generating a secure keyfile, preparing Docker volumes to persist data and configurations, and setting appropriate file permissions. Additionally, a dedicated MongoDB user must be created for managing the necessary files securely.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1 - Generate the Keyfile
&lt;/h4&gt;

&lt;p&gt;Keyfile authentication ensures secure communication between replica set members. Each mongod instance uses the keyfile as a shared password to authenticate with other members. Only nodes with the correct keyfile can join the replica set.&lt;/p&gt;

&lt;p&gt;To generate a secure keyfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl rand -base64 756 &amp;gt; prod-mongo.pem
chmod 400 prod-mongo.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;openssl rand -base64 756:&lt;/strong&gt; Generates 756 bytes of random data and encodes them in Base64, producing a secure keyfile.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;chmod 400:&lt;/strong&gt; Restricts file access to read-only for the owner.&lt;/li&gt;
&lt;/ul&gt;
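
&lt;p&gt;As a quick sanity check, you can confirm the keyfile size: 756 random bytes Base64-encode to 1,008 characters, and openssl wraps the output into 16 lines, giving a 1,024-byte file (within MongoDB's limit of 1,024 Base64 characters; whitespace is ignored):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wc -c prod-mongo.pem
# 1024 prod-mongo.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;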

&lt;h4&gt;
  
  
  Step 2 - Create MongoDB Linux User and Group
&lt;/h4&gt;

&lt;p&gt;On each EC2 instance, create a dedicated Linux user and group with consistent UID and GID for MongoDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groupadd -r mongodb &amp;amp;&amp;amp; useradd -r -u 999 -g mongodb mongodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures MongoDB processes, running with UID 999 in Docker containers, have proper access permissions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3 - Copy the Keyfile to All Replica Set Members
&lt;/h4&gt;

&lt;p&gt;Distribute the prod-mongo.pem keyfile to all servers in the replica set. Ensure file permissions and ownership are correctly set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 400 prod-mongo.pem  
chown 999:999 prod-mongo.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Store the keyfile in the mongo-key Docker volume so the containers can read it.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
   Step 4 - Create Docker Volumes for Persistent Storage
&lt;/h4&gt;

&lt;p&gt;Prepare Docker volumes to persistently store MongoDB data, configuration files, logs, and the keyfile for each container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create mongo-data  
docker volume create mongo-config  
docker volume create mongo-log  
docker volume create mongo-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Copy the prod-mongo.pem keyfile into the mongo-key volume.&lt;/em&gt;&lt;/p&gt;
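
&lt;p&gt;A minimal sketch of that copy step, assuming the default local volume driver (whose contents live under /var/lib/docker/volumes/&amp;lt;name&amp;gt;/_data on the host) and root access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Place the keyfile where the containers will see it under /opt
cp prod-mongo.pem /var/lib/docker/volumes/mongo-key/_data/
chown 999:999 /var/lib/docker/volumes/mongo-key/_data/prod-mongo.pem
chmod 400 /var/lib/docker/volumes/mongo-key/_data/prod-mongo.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;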

&lt;h4&gt;
  
  
  Step 5 - Configure Logging
&lt;/h4&gt;

&lt;p&gt;Alongside security measures, effective logging is essential for monitoring and troubleshooting. Create a mongod.conf file and store it in the mongo-config Docker volume (with the default local driver, its contents live at /var/lib/docker/volumes/mongo-config/_data/ on the host).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mongod.conf

storage:  
  dbPath: /data/db  

systemLog:  
  destination: file  
  logAppend: true  
  path: /var/log/mongodb/mongod.log  

net:  
  port: 27017  
  bindIp: 0.0.0.0  

processManagement:  
  timeZoneInfo: /usr/share/zoneinfo  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 6 - Start MongoDB Containers
&lt;/h4&gt;

&lt;p&gt;Start each replica set member as a Docker container, ensuring the --auth and --keyFile options are enabled for secure access and communication. &lt;/p&gt;

&lt;p&gt;Run the following command on each EC2 instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run \
--name aws-mongodb-prod \
-h aws-mongodb-prod \
--restart unless-stopped \
-v mongo-data:/data/db \
-v mongo-config:/etc/mongo-config \
-v mongo-key:/opt \
-v mongo-log:/var/log/mongodb \
-p 27017:27017 \
-d mongo:8.0.4 -f /etc/mongo-config/mongod.conf --keyFile /opt/prod-mongo.pem --replSet awsProdRepl --auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initiating the MongoDB Replica Set
&lt;/h3&gt;

&lt;p&gt;After deploying MongoDB as standalone instances using Docker, the replica set is not yet active because no replica set configuration has been provided. To enable high availability and replication, you must configure and initiate the replica set.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1 - Access the MongoDB Shell
&lt;/h4&gt;

&lt;p&gt;First, exec into the MongoDB container to access the mongo shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it aws-mongodb-prod bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once inside the container, start the MongoDB shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongosh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
   Step 2 - Initiate the Replica Set
&lt;/h4&gt;

&lt;p&gt;Now that you have access to the mongo shell, you can initiate the replica set by passing the appropriate configuration. Replace "hostname" with the actual internal hostnames of your replica set members:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rs.initiate({  
   _id: "awsProdRepl",  
   members: [  
      { _id: 0, host: "&amp;lt;hostname&amp;gt;:27017" },  
      { _id: 1, host: "&amp;lt;hostname&amp;gt;:27017" },  
      { _id: 2, host: "&amp;lt;hostname&amp;gt;:27017" }  
   ]  
})  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, the output should include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "ok" : 1 }  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Important Notes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replica Set Name: The _id field in the configuration must match the replica set name you provided in the Docker --replSet argument (e.g., awsProdRepl).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hostnames: Ensure the host field contains the correct hostnames of each replica set member, along with the MongoDB port (27017).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ordering: The _id values in the members array are identifiers only; they do not control elections. Election behavior is governed by each member's priority setting (default 1), and the node on which you run rs.initiate() typically becomes the initial primary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
   Step 3 - Verify the Replica Set
&lt;/h4&gt;

&lt;p&gt;After initiating the replica set, you can verify its status using the following command in the mongo shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rs.status()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This will display details about the replica set, including the state of each member.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;🎉 Congratulations!&lt;br&gt;
You have successfully deployed and configured a MongoDB replica set with authentication and keyfile-based access control. Your MongoDB deployment is now highly available and ready for production.&lt;/p&gt;
&lt;h3&gt;
  
  
   Post-Deployment Actions: Creating Privileged &amp;amp; Restricted Users
&lt;/h3&gt;

&lt;p&gt;After successfully initiating the MongoDB replica set with keyfile-based internal authentication, the next step is to implement user access control. This ensures secure access to your MongoDB deployment, especially for clients like the Mongo shell or application APIs. Below are the steps to create both privileged and restricted users.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1 - Access Mongo Shell
&lt;/h4&gt;

&lt;p&gt;Start by accessing the Mongo shell from within the Docker container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it aws-mongodb-prod bash  
mongosh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
   Step 2 - Create a Privileged MongoDB User
&lt;/h4&gt;

&lt;p&gt;For managing MongoDB cluster operations, you'll need a privileged user.&lt;/p&gt;

&lt;p&gt;To create a privileged user, switch to the admin database and execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use admin  

db.createUser(  
  {  
    user: "&amp;lt;user&amp;gt;",  
    pwd: "&amp;lt;password&amp;gt;",  
    roles: [ { role: "root", db: "admin" } ]  
  }  
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &amp;lt;user&amp;gt; and &amp;lt;password&amp;gt; with the desired username and password. This user will now have full administrative privileges on the MongoDB cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3 - Create a Restricted User for Applications and APIs
&lt;/h4&gt;

&lt;p&gt;Next, create a restricted user for use by applications or APIs that need limited access. In this example, the user will have read and write access only to the &lt;strong&gt;user_events&lt;/strong&gt; database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use user_events  

db.createUser(  
  {  
    user: "prodUser",  
    pwd: "&amp;lt;password&amp;gt;",  
    roles: [ { role: "readWrite", db: "user_events" } ]  
  }  
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &amp;lt;password&amp;gt; with the desired password for prodUser. This user will only have read and write access to the &lt;strong&gt;user_events&lt;/strong&gt; database.&lt;/p&gt;
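
&lt;p&gt;Applications can then connect with a replica-set connection string scoped to this user and database; a sketch assuming the internal hostnames used earlier (replace the hosts and password with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongodb://prodUser:&amp;lt;password&amp;gt;@mongo1.internal.company.net:27017,mongo2.internal.company.net:27017,mongo3.internal.company.net:27017/user_events?replicaSet=awsProdRepl&amp;amp;authSource=user_events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;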

&lt;h4&gt;
  
  
   Step 4 - Verify User Access
&lt;/h4&gt;

&lt;p&gt;To verify that the &lt;strong&gt;prodUser&lt;/strong&gt; has the correct access, authenticate using the newly created user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use user_events  
db.auth("prodUser", "&amp;lt;password&amp;gt;")  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, you should see the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 Congratulations!&lt;br&gt;
You have successfully created the prodUser with restricted access and the privileged user with full administrative rights. Your MongoDB deployment now has proper user access controls, ensuring secure operations and access for different use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Deploying a production-ready MongoDB replica set on AWS using Docker provides a robust, secure, and cost-effective solution compared to managed database services like MongoDB Atlas or AWS DocumentDB. By leveraging AWS EC2 instances and keyfile-based authentication, you achieve high availability with granular control over the deployment. This approach ensures both internal security between replica set members and external security for client connections using role-based access control.&lt;/p&gt;

&lt;p&gt;With thoughtful configuration, including persistent Docker volumes, optimized logging, and properly defined user roles, the solution is tailored to meet enterprise requirements. The use of AWS Savings Plans or Reserved Instances further reduces operational costs, offering significant savings over managed database services while maintaining full control over your infrastructure.&lt;/p&gt;

&lt;p&gt;This deployment model combines flexibility, performance, and security, making it an excellent choice for businesses looking to balance cost efficiency and operational excellence in their database infrastructure. By following this guide, you now have the tools to set up a scalable, secure, and cost-optimized MongoDB replica set on AWS.&lt;/p&gt;

&lt;h4&gt;
  
  
  References:
&lt;/h4&gt;

&lt;h5&gt;
  
  
   MongoDB Official Documentation
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replica Set Configuration&lt;br&gt;
&lt;a href="https://www.mongodb.com/docs/manual/replication/" rel="noopener noreferrer"&gt;https://www.mongodb.com/docs/manual/replication/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication and Security&lt;br&gt;
&lt;a href="https://www.mongodb.com/docs/manual/core/security/" rel="noopener noreferrer"&gt;https://www.mongodb.com/docs/manual/core/security/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Role-Based Access Control&lt;br&gt;
&lt;a href="https://www.mongodb.com/docs/manual/core/authorization/" rel="noopener noreferrer"&gt;https://www.mongodb.com/docs/manual/core/authorization/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  AWS Resources
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS EC2 Pricing&lt;br&gt;
&lt;a href="https://aws.amazon.com/ec2/pricing/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ec2/pricing/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Savings Plans&lt;br&gt;
&lt;a href="https://aws.amazon.com/savingsplans/" rel="noopener noreferrer"&gt;https://aws.amazon.com/savingsplans/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon EBS Persistent Storage&lt;br&gt;
&lt;a href="https://aws.amazon.com/ebs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ebs/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Docker Resources
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker Volume Management&lt;br&gt;
&lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;https://docs.docker.com/storage/volumes/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MongoDB Official Docker Image&lt;br&gt;
&lt;a href="https://hub.docker.com/_/mongo" rel="noopener noreferrer"&gt;https://hub.docker.com/_/mongo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Powering AWS Fargate with IaC - AWS CloudFormation</title>
      <dc:creator>Engin ALTAY</dc:creator>
      <pubDate>Mon, 11 Mar 2024 09:01:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/powering-aws-fargate-with-iac-aws-cloudformation-3n99</link>
      <guid>https://dev.to/aws-builders/powering-aws-fargate-with-iac-aws-cloudformation-3n99</guid>
      <description>&lt;p&gt;In today's tech world, one truth is clear: your organization needs to be agile, and your workloads need to run smoothly. In the container space especially, there are plenty of ways to deploy containerized workloads to your environment. &lt;/p&gt;

&lt;p&gt;In this post, I'd like to cover powering AWS Fargate (&lt;em&gt;&lt;strong&gt;a serverless compute engine that runs containers without requiring you to manage infrastructure&lt;/strong&gt;&lt;/em&gt;) with Infrastructure as Code via AWS CloudFormation, provisioning your Fargate workloads with load balancing and rolling-update deployment features.&lt;/p&gt;

&lt;p&gt;I assume you already use, or are at least familiar with, the tech stack listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html"&gt;Amazon ECS Cluster - AWS Fargate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html"&gt;AWS CloudFormation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/ee/ci/"&gt;GitLab CI/CD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Imagine the following case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have created your ECS cluster with Fargate option,&lt;/li&gt;
&lt;li&gt;You already have provisioned internet-facing ELB,&lt;/li&gt;
&lt;li&gt;You already have VPC, subnets and security group for your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in the continuation, you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision your AWS Fargate workloads,&lt;/li&gt;
&lt;li&gt;Attach security group,&lt;/li&gt;
&lt;li&gt;Expose it as an ECS service, &lt;/li&gt;
&lt;li&gt;Associate with ELB, create your ELB listener routing rule and more.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;All these steps become tedious and unmanageable as the number of applications, environments (dev, test, staging, prod), and workloads grows.&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;To make this agile and automated, we'll leverage AWS CloudFormation combined with GitLab CI/CD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Building our CloudFormation template
&lt;/h2&gt;

&lt;p&gt;Using AWS CloudFormation to provision and update resources in our AWS environment lets us centralize and track every change.&lt;/p&gt;

&lt;p&gt;We need to add each resource definition to our CloudFormation template. This is the core component we'll be working on.&lt;/p&gt;

&lt;p&gt;deploy-fargate.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
Description: An example CloudFormation template for Fargate.
Parameters:
  VPC:
    Type: String
    Default: &amp;lt;VPC_ID_HERE&amp;gt;
  SubnetPublicA:
    Type: String
    Default: &amp;lt;PUBLIC_SUBNET_A&amp;gt;
  SubnetPublicB:
    Type: String
    Default: &amp;lt;PUBLIC_SUBNET_B&amp;gt;
  SubnetPublicC:
    Type: String
    Default: &amp;lt;PUBLIC_SUBNET_C&amp;gt;
  Image:
    Type: String
    Default: &amp;lt;ACCOUNT_ID&amp;gt;.dkr.ecr.eu-central-1.amazonaws.com/nginx:latest
  ClusterName:
    Type: String
    Description: ECS_CLUSTER_NAME here
    Default: &amp;lt;ECS_CLUSTER_NAME&amp;gt;   
  ServiceName:
    Type: String
    Description: ECS_SERVICE_NAME here
    Default: "API_NAME-prod-svc"
  TaskDefinitionName: 
    Type: String
    Description: Task Definition Name
    Default: "API_NAME-prod-fargate"
  ContainerPort:
    Type: Number
    Default: 3000
  ContainerSecurityGroup:
    Type: String
    Description: api-container-sec-rules    
    Default: &amp;lt;SECURITY_GROUP_ID&amp;gt;
  ELBListenerArn:
    Type: String
    Default: &amp;lt;ELB_LISTENER_ARN&amp;gt;


Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      # Name of the task definition.
      Family: !Ref TaskDefinitionName
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      # Task-level CPU: 1024 (1 vCPU). With 1 vCPU, valid memory values are 2GB-8GB.
      Cpu: 1024
      # Task-level memory. Must be compatible with the CPU value above (1024 allows 2GB-8GB).
      Memory: 3GB
      # "The ARN of the task execution role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role."
      ExecutionRoleArn: !GetAtt ExecutionRole.Arn
      # "The (ARN) of an IAM role that grants containers in the task permission to call AWS APIs on your behalf."
      TaskRoleArn: !Ref TaskRole
      ContainerDefinitions:
        - Name: API_NAME
          Image: !Ref Image
          Cpu: 0
          Essential: true
          PortMappings:
            - ContainerPort: !Ref ContainerPort
              Protocol: tcp
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-region: !Ref AWS::Region
              awslogs-group: !Ref LogGroup
              awslogs-stream-prefix: ecs              

  LogGroup:
    Type: AWS::Logs::LogGroup   
    Properties:
      LogGroupName: !Join  ['', [/ecs/, !Ref TaskDefinitionName]]
      RetentionInDays: 14

  # A role needed by ECS
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, "ECSExecutionRole"]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
        - 'arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly'

  # A role for the containers
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, "ECSTaskRole"]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
        - 'arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly'


  Service:
    Type: AWS::ECS::Service
    DependsOn:
      - LoadBalancerListenerRule    
    Properties: 
      ServiceName: !Ref ServiceName
      Cluster: !Ref ClusterName
      TaskDefinition: !Ref TaskDefinition
      DeploymentConfiguration:
        MinimumHealthyPercent: 100
        MaximumPercent: 200
      DesiredCount: 1
      HealthCheckGracePeriodSeconds: 120
      CapacityProviderStrategy:
        - CapacityProvider: FARGATE_SPOT
          Base: 0
          Weight: 1      
      NetworkConfiguration: 
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          Subnets:
            - !Ref SubnetPublicA
            - !Ref SubnetPublicB
            - !Ref SubnetPublicC
          SecurityGroups:
            - !Ref ContainerSecurityGroup
      LoadBalancers:
        - ContainerName: API_NAME
          ContainerPort: !Ref ContainerPort
          TargetGroupArn: !Ref TargetGroup

  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 30
      HealthCheckPath: /API_NAME/health
      HealthCheckTimeoutSeconds: 5
      UnhealthyThresholdCount: 2
      HealthyThresholdCount: 3
      TargetType: ip
      Name: !Ref ServiceName
      Port: !Ref ContainerPort
      Protocol: HTTP
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 30  #default 300 seconds
      VpcId: !Ref VPC

  LambdaDescribeELBListenerPriority:
    Type: 'Custom::LambdaDescribeELBListenerPriority'
    Properties:
      ServiceToken: 'arn:aws:lambda:eu-central-1:&amp;lt;ACCOUNT_ID&amp;gt;:function:DescribeELBListener'

  LoadBalancerListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    #DependsOn: GetListenerRulesLambdaFunction
    Properties:
      Actions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      Conditions:
        - Field: host-header
          HostHeaderConfig:
            Values:
              - "api.example.com"
        - Field: path-pattern
          PathPatternConfig:
            Values:
              - "/API_NAME*"
      ListenerArn: !Ref ELBListenerArn
      Priority: !GetAtt LambdaDescribeELBListenerPriority.NextPriorityValue

Outputs:
  NextPriorityValue:
    Value: !GetAtt LambdaDescribeELBListenerPriority.NextPriorityValue      
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this template, a few sections deserve a closer look to clarify the case we are dealing with.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Parameters section, we provide constants for resources created earlier, such as the VPC ID, subnets, ECS cluster name, security group, and ELB listener.&lt;br&gt;
You can also create these from scratch, but in my case they all existed already.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As the capacity provider, FARGATE_SPOT is used, but you can switch it to FARGATE to suit your needs, or combine both.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As a placeholder, API_NAME is used. Replace API_NAME with the name of the application to run on AWS Fargate. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We did not declare auto-scaling actions for the Fargate service; the desired count is set to 1. I plan to cover auto-scaling policies for Fargate in the next post.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We declared the custom resource LambdaDescribeELBListenerPriority, which describes the ELB listener and finds the next available priority number for creating the listener routing rule. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;That custom resource is a bit of a headache. I'd expect CloudFormation to automatically find the next available ELB listener priority and place my rule there, but it doesn't: it expects you to provide the priority number yourself. From a development lifecycle and CI/CD perspective, it's impractical to know which priority number is free before running the CloudFormation template. Therefore, we write a simple Lambda function that describes the ELB listener, takes the maximum priority number, and adds 1 to produce the next available priority.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;I provide the related lambda function below. &lt;/p&gt;

&lt;p&gt;DescribeELBListener&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import urllib3

http = urllib3.PoolManager()
SUCCESS = "SUCCESS"
FAILED = "FAILED"

def lambda_handler(event, context):
    try:
        elbv2 = boto3.client('elbv2')

        # Get all listener rules for the provided ARN
        response = elbv2.describe_rules(ListenerArn='&amp;lt;ELB_LISTENER_ARN&amp;gt;')

        # Keep only numeric priorities (the catch-all rule reports 'default')
        # and compare them as integers: as strings, '9' sorts after '10'
        priorities = [int(rule['Priority']) for rule in response['Rules']
                      if str(rule['Priority']).isdigit()]
        max_priority = max(priorities, default=0)

        # Prepare the CloudFormation (CF) stack event response payload
        responseData = {'NextPriorityValue': max_priority + 1}
        send(event, context, SUCCESS, responseData)

    except Exception as e:
        # Always answer the custom resource; otherwise the stack hangs until timeout
        print("lambda_handler failed:", e)
        send(event, context, FAILED, {})

def send(event, context, responseStatus, responseData, physicalResourceId=None, noEcho=False, reason=None):
    responseUrl = event['ResponseURL']

    responseBody = {
        'Status': responseStatus,
        'Reason': reason or f'See the details in CloudWatch Log Stream: {context.log_stream_name}',
        'PhysicalResourceId': physicalResourceId or context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'NoEcho': noEcho,
        'Data': responseData
    }

    json_responseBody = json.dumps(responseBody, default=str)

    headers = {
        'content-type': '',
        'content-length': str(len(json_responseBody))
    }
    try:
        response = http.request('PUT', responseUrl, headers=headers, body=json_responseBody)
        print("Status code:", response.status)

    except Exception as e:
        print("send(..) failed executing http.request(..):", e)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
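&lt;p&gt;Since the priority logic is easy to get wrong (rule priorities come back as strings, and the catch-all rule has the non-numeric priority "default"), it's worth exercising it locally before wiring it into the stack. A minimal sketch; the sample rules below are made up for illustration:&lt;/p&gt;

```python
# Sketch: exercise the next-priority logic locally against a sample
# describe_rules response. The rules below are illustrative, not real output.

def next_priority(rules):
    """Return the next free listener rule priority.

    Priorities come back as strings, and the catch-all rule reports the
    non-numeric priority 'default', so we must filter non-numeric entries
    and compare as integers (as strings, '9' sorts after '10').
    """
    numeric = [int(r['Priority']) for r in rules if str(r['Priority']).isdigit()]
    return max(numeric, default=0) + 1

if __name__ == '__main__':
    sample_rules = [
        {'Priority': '9'},
        {'Priority': '10'},
        {'Priority': 'default'},  # catch-all rule, must be ignored
    ]
    print(next_priority(sample_rules))  # 11 (a naive string max would pick '9')
```

&lt;p&gt;The same function body is what the Lambda above computes before sending NextPriorityValue back to CloudFormation.&lt;/p&gt;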



&lt;h2&gt;
  
  
  Step 2 - Prepare CI/CD Pipeline - GitLab
&lt;/h2&gt;

&lt;p&gt;Now we need to prepare the CI/CD side to run our CloudFormation template. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Never use your personal credentials to access AWS or to run the CloudFormation template from your local environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here, we use a centralized private GitLab instance to run the CloudFormation stack, granting it AWS access through a least-privilege permission policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pay attention to the "image": ".dkr.ecr.eu-central-1.amazonaws.com/nginx:latest" section. The latest tag will be replaced with the respective container image tag during the CI/CD job.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building the GitLab CI/CD Pipeline&lt;/p&gt;

&lt;p&gt;To automate and run our IaC template, we leverage a version control system, so each iteration is trackable and changes are easy to apply.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Below is a fully ready .gitlab-ci.yml file that includes the build &amp;amp; IaC deployment stages.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
  API: nginx
  REGISTRY: "&amp;lt;your_aws_account_number_here&amp;gt;.dkr.ecr.eu-central-1.amazonaws.com"
  ECS_CLUSTER_NAME: "YOUR_ECS_CLUSTER_NAME"
  ECS_SERVICE_NAME: "${API}-prod-svc"
  ECS_TASK_FAMILY: "${API}-prod-fargate"
  CF_STACK_NAME: '${API}-cf-template-${CI_PIPELINE_IID}'  

stages:
  - build
  - deploy
  - update

before_script:
  - echo "Build Name:" "$CI_JOB_NAME"
  - echo "Branch:" "$CI_COMMIT_REF_NAME"
  - echo "Build Stage:" "$CI_JOB_STAGE"


build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-central-1)
    - VER=$(cat ${PWD}/package.json | jq --raw-output '.version')
    - echo $VER    
    - docker build -t ${API} .
    - docker tag ${API} ${REGISTRY}/${API}:${VER}-${CI_ENVIRONMENT_NAME}-${CI_PIPELINE_IID}
    - docker push ${REGISTRY}/${API}:${VER}-${CI_ENVIRONMENT_NAME}-${CI_PIPELINE_IID}
  environment:
    name: prod


deploy_cloudformation:
  stage: deploy
  when: manual
  image:
    name: amazon/aws-cli:latest
    entrypoint: ['']
  rules:
    - if: $API != "null" &amp;amp;&amp;amp; $CI_COMMIT_BRANCH == "master"
  script:
    - echo "Deploying your IaC CloudFormation..."
    - yum install jq -y
    - jq --version
    - VER=$(cat ${PWD}/package.json | jq --raw-output '.version')
    - echo $VER
    - echo "API Name ----&amp;gt; ${API} &amp;lt;----"
    - echo "ECS FARGATE Cluster is = ${ECS_CLUSTER_NAME}"
    - sed -i 's/API_NAME/'"${API}"'/g' deploy-fargate.yaml #replace API_NAME placeholder with the container that we want to run on AWS Fargate.
    - cat deploy-fargate.yaml
    - |
      aws cloudformation create-stack \
        --stack-name $CF_STACK_NAME \
        --template-body file://deploy-fargate.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --parameters \
      ParameterKey=Image,ParameterValue=${REGISTRY}/${API}:${VER}-${CI_ENVIRONMENT_NAME}-${CI_PIPELINE_IID}
    - echo "Visit https://api.example.com to see changes"
  needs:
    - job: build
      optional: true
  tags:
    - gitlab-dind-runner
  environment:
    name: prod
    url: https://api.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, every CI/CD pipeline run builds our container image, tags it with the respective pipeline ID, and injects it into the CloudFormation template to run it on ECS Fargate.&lt;/p&gt;
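&lt;p&gt;If you want to validate the tag and placeholder handling outside the pipeline, the same steps can be mirrored in plain Python. A minimal sketch; the function names are mine, the variable names follow the GitLab CI variables above, and the values are examples only:&lt;/p&gt;

```python
# Sketch: reproduce the pipeline's image tag construction and API_NAME
# placeholder substitution locally. Values below are illustrative examples.

def image_reference(registry, api, version, environment, pipeline_iid):
    # Matches the docker tag built in the build job:
    # REGISTRY/API:VER-CI_ENVIRONMENT_NAME-CI_PIPELINE_IID
    return f"{registry}/{api}:{version}-{environment}-{pipeline_iid}"

def render_template(template_text, api):
    # Equivalent of: sed -i 's/API_NAME/${API}/g' deploy-fargate.yaml
    return template_text.replace("API_NAME", api)

if __name__ == '__main__':
    ref = image_reference("123456789012.dkr.ecr.eu-central-1.amazonaws.com",
                          "nginx", "1.4.2", "prod", 57)
    print(ref)  # 123456789012.dkr.ecr.eu-central-1.amazonaws.com/nginx:1.4.2-prod-57
    print(render_template("ServiceName: API_NAME-prod-svc", "nginx"))
```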

&lt;h3&gt;
  
  
  Updating ECS service - rolling update
&lt;/h3&gt;

&lt;p&gt;In the final step, we need to update the ECS service with our updated task revision. To do this, we run cloudformation update-stack instead of create-stack. The pipeline below triggers a rolling update of the ECS service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update_cloudformation:
  stage: update
  when: manual
  image:
    name: amazon/aws-cli:latest
    entrypoint: ['']
  rules:
    - if: $API != "null" &amp;amp;&amp;amp; $CI_COMMIT_BRANCH == "master"
  script:
    - echo "Deploying your IaC CloudFormation..."
    - yum install jq -y
    - jq --version
    - VER=$(cat ${PWD}/package.json | jq --raw-output '.version')
    - echo $VER
    - echo "API Name ----&amp;gt; ${API} &amp;lt;----"
    - echo "ECS FARGATE Cluster is = ${ECS_CLUSTER_NAME}"
    - sed -i 's/API_NAME/'"${API}"'/g' deploy-fargate.yaml #replace API_NAME placeholder with the container that we want to run on AWS Fargate.
    - cat deploy-fargate.yaml
    - |
      aws cloudformation update-stack \
        --stack-name $CF_STACK_NAME \
        --template-body file://deploy-fargate.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --parameters \
      ParameterKey=Image,ParameterValue=${REGISTRY}/${API}:${VER}-${CI_ENVIRONMENT_NAME}-${CI_PIPELINE_IID}
    - echo "Visit https://api.example.com to see changes"
  needs:
    - job: build
      optional: true
  tags:
    - gitlab-dind-runner
  environment:
    name: prod
    url: https://api.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You can now deploy your containerized application with zero downtime, in an automated way.&lt;/p&gt;

&lt;p&gt;In this post, I wanted to show how to automate the deployment of your container to Amazon ECS Fargate using AWS CloudFormation &amp;amp; GitLab CI/CD.&lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://docs.gitlab.com/runner/"&gt;https://docs.gitlab.com/runner/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ECS.html"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ECS.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>infrastructureascode</category>
      <category>containers</category>
    </item>
    <item>
      <title>Bring AWS Notifications Into Your Slack Channel</title>
      <dc:creator>Engin ALTAY</dc:creator>
      <pubDate>Mon, 28 Nov 2022 12:13:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/bring-aws-notifications-into-your-slack-channel-gd4</link>
      <guid>https://dev.to/aws-builders/bring-aws-notifications-into-your-slack-channel-gd4</guid>
<description>&lt;p&gt;Today’s software development lifecycle world requires responding immediately to notifications such as incidents, application deployments, security events, and so on.&lt;/p&gt;

&lt;h3&gt;
  
  
  ChatOps — The Slack Way
&lt;/h3&gt;

&lt;p&gt;Nowadays, most DevOps teams rely on &lt;a href="https://slack.com/"&gt;Slack&lt;/a&gt; to collaborate with team members and with the systems they manage. To handle notifications, teams might need to switch among Slack, email, text messages, and phone calls throughout the day. On top of the context switching, merging the data from all those different sources is inefficient and time consuming for teams who monitor and interact with their AWS resources.&lt;/p&gt;

&lt;p&gt;Meet &lt;a href="https://aws.amazon.com/chatbot/"&gt;AWS Chatbot&lt;/a&gt;, an interactive agent that makes it easier to monitor and interact with your &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services (AWS)&lt;/a&gt; resources from your team’s Slack channels. By integrating AWS Chatbot with Slack, DevOps teams can receive real-time notifications, view incident details, and respond to incidents quickly without needing to cycle through other tools.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“With AWS Chatbot, all notifications are centrally managed within Slack, which our teams already use every day. We’ve configured it to deliver various notifications such as network &amp;amp; system alerts, application deployments, performance monitoring and more, directly into the related Slack channels. Teams can take action immediately without needing to switch from where they’re already working. This speeds up our response time and overall development agility.” — Engin Altay, SRE, Foreks Digital.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Integrating AWS Chatbot with Slack
&lt;/h3&gt;

&lt;p&gt;Let’s move on to the AWS Console to begin integrating AWS Chatbot with a Slack channel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an Amazon SNS Topic
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/sns/"&gt;Amazon Simple Notification Service (SNS)&lt;/a&gt; is a fully managed Pub/Sub messaging service that can send notifications two-ways, A2A and A2P.&lt;/p&gt;

&lt;p&gt;To create an Amazon SNS topic, navigate to the AWS Console, go to the SNS dashboard, and create a &lt;strong&gt;&lt;em&gt;Standard&lt;/em&gt;&lt;/strong&gt; type of topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wN4rVrIc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2236/1%2A0T0R0iLFoL22I84iChKaLA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wN4rVrIc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2236/1%2A0T0R0iLFoL22I84iChKaLA.png" alt="Create standart type of Amazon SNS topic" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up AWS Chatbot
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/chatbot/"&gt;AWS Chatbot&lt;/a&gt; is an AWS service that enables ChatOps for teams. AWS Chatbot processes notifications from Amazon Simple Notification Service (Amazon SNS), and forwards them to chat rooms so teams can monitor and respond to AWS related events in their Slack channel.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Chatbot supports Amazon Chime or Slack chat clients.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Navigate to the AWS Console, go to AWS Chatbot, and configure a new client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B3JElZCd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AXXly036pL0i8fNsLJjjOpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B3JElZCd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AXXly036pL0i8fNsLJjjOpw.png" alt="AWS Chatbot — Configure new client type as Slack." width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you click configure, you’ll be redirected to your Slack workspace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1L2cAvVv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2024/1%2A5UMTtqF-ZYO_2WYro2p_Zg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1L2cAvVv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2024/1%2A5UMTtqF-ZYO_2WYro2p_Zg.jpeg" alt="Add AWS Chatbot to Slack workspace" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Add to Slack, and you’ll be asked to grant the permissions AWS Chatbot requires to interact with your Slack channel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Rename the bot user as you wish; this name will be shown in your Slack channel.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PlnzT3jR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2Aj03mCa2kcq8rq-X5Z1qWCg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PlnzT3jR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2Aj03mCa2kcq8rq-X5Z1qWCg.png" alt="." width="728" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, add the bot user to Slack by running the following command in the related channel.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/invite @aws-bot&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  AWS Chatbot — Configure the Slack Channel Settings
&lt;/h3&gt;

&lt;p&gt;AWS Chatbot requires your Slack channel ID to send notifications to the specified channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KAXktPtU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AmIUWPbcNeWWtiIFBUAh35g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KAXktPtU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AmIUWPbcNeWWtiIFBUAh35g.jpeg" alt="AWS Chatbot requires Slack private channel ID." width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In Slack, get the channel ID by right-clicking the channel and copying the link. The channel ID is the string at the end of the URL.&lt;/p&gt;
&lt;/blockquote&gt;
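&lt;p&gt;If you prefer to script this step, the channel ID can be extracted from the copied link programmatically. A small sketch; the workspace URL below is a fictitious example:&lt;/p&gt;

```python
# Sketch: extract the Slack channel ID from a copied channel link.
# The ID is the last path segment of the URL; the example URL is fictitious.
from urllib.parse import urlparse

def channel_id(link):
    return urlparse(link).path.rstrip('/').rsplit('/', 1)[-1]

if __name__ == '__main__':
    print(channel_id("https://yourworkspace.slack.com/archives/C0123456789"))  # C0123456789
```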

&lt;p&gt;Then, AWS Chatbot requires IAM permissions to perform actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pvrM4IJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AOjrxdhSbUGALVssXpbuMdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pvrM4IJD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AOjrxdhSbUGALVssXpbuMdg.png" alt="AWS Chatbot needs IAM role to perform actions." width="796" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the policy templates, select Notification permissions; this is enough to fetch metrics from Amazon CloudWatch.&lt;/p&gt;

&lt;p&gt;Lastly, AWS Chatbot needs the SNS topic you previously created to send notifications from AWS services to your chat client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--esQREtrw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AHCjd-dEfp_NmKP5-vLELiA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--esQREtrw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AHCjd-dEfp_NmKP5-vLELiA.jpeg" alt="AWS Chatbot — select SNS topic and region its created." width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Configure, and that’s it! You have successfully integrated AWS Chatbot with your Slack workspace.&lt;/p&gt;

&lt;p&gt;To verify, go to the SNS dashboard and check that the subscription status for AWS Chatbot is confirmed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HTkEx1I6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2196/1%2A3cf79iU-qQpC7V4_0X7iUQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HTkEx1I6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2196/1%2A3cf79iU-qQpC7V4_0X7iUQ.jpeg" alt="SNS topic subscription confirmed for AWS Chatbot." width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After all the integration and configuration steps, I’d like to show a real-world example: one of our AWS CloudWatch alarms displayed as a rich message with graphs in our Slack channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7_9eMVvD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2776/1%2Azp9gTc7vMTwTWYxnhU3OOA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7_9eMVvD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2776/1%2Azp9gTc7vMTwTWYxnhU3OOA.jpeg" alt="." width="800" height="809"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leveraging AWS Chatbot lets organizations easily manage, centralize, and monitor AWS notifications in their daily chat rooms. With AWS Chatbot, teams can collaborate and respond to events faster.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read my article. I appreciate you sharing your thoughts and feedback.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/chatbot/"&gt;*https://aws.amazon.com/chatbot/&lt;/a&gt;*&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/chatbot/latest/adminguide/what-is.html"&gt;*https://docs.aws.amazon.com/chatbot/latest/adminguide/what-is.html&lt;/a&gt;*&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/aws-chatbot-chatops-for-slack-and-chime/"&gt;*https://aws.amazon.com/blogs/aws/aws-chatbot-chatops-for-slack-and-chime/&lt;/a&gt;*&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>sre</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>aws</category>
    </item>
    <item>
      <title>Automated Deployment to Amazon ECS Fargate</title>
      <dc:creator>Engin ALTAY</dc:creator>
      <pubDate>Tue, 18 Oct 2022 11:17:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/automated-deployment-to-amazon-ecs-fargate-4o6c</link>
      <guid>https://dev.to/aws-builders/automated-deployment-to-amazon-ecs-fargate-4o6c</guid>
<description>&lt;p&gt;Today's software development lifecycle world requires a rapid and uninterrupted way to publish your product. Especially in the container space, there are plenty of methods to deploy your containerized application to your environment. &lt;/p&gt;

&lt;p&gt;In this post, I'd like to show how to deploy your containerized application to Amazon ECS Fargate - &lt;em&gt;&lt;strong&gt;a serverless option that runs containers without needing to manage your infrastructure&lt;/strong&gt;&lt;/em&gt; - in an automated way using the AWS CLI and GitLab CI/CD. &lt;/p&gt;

&lt;p&gt;Before we begin, the required environment is listed below. I assume you already have these in use or are familiar with this tech stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html"&gt;Amazon ECS Cluster - Fargate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/ee/ci/"&gt;GitLab CI/CD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html"&gt;AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Imagine you have created your ECS cluster and deployed the first release of your containerized application to ECS Fargate. Going forward, you'll most likely need to automate the deployment of each new container release. &lt;/p&gt;

&lt;p&gt;To do this, we'll leverage the AWS CLI combined with GitLab CI/CD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Create your ECS task definition
&lt;/h2&gt;

&lt;p&gt;We need to create an &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html"&gt;ECS task definition&lt;/a&gt;, which describes how to run your containerized application, much like the Docker-related commands you would otherwise use.&lt;/p&gt;

&lt;p&gt;myapp-ecs-task-definition.json&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "family": "myapp-preprod-fargate",
    "executionRoleArn": "arn:aws:iam::&amp;lt;aws_account_id&amp;gt;:role/ecsTaskExecutionRole",
    "taskRoleArn": "arn:aws:iam::&amp;lt;aws_account_id&amp;gt;:role/ecsTaskExecutionRole",
    "cpu": "1 vCPU",
    "memory": "4GB",
    "networkMode": "awsvpc",
    "containerDefinitions": [
        {
            "name": "myapp-preprod",
            "image": "&amp;lt;aws_account_id&amp;gt;.dkr.ecr.eu-central-1.amazonaws.com/myapp:latest",
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/myapp-preprod-fargate",
                    "awslogs-region": "eu-central-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "portMappings": [
                {
                    "protocol": "tcp",
                    "containerPort": 3000
                }
            ],
            "cpu": 0,
            "essential": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Pay attention to "image": ".dkr.ecr.eu-central-1.amazonaws.com/myapp:latest"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Here we'll replace the "latest" tag with the respective container image version tag.&lt;/li&gt;
&lt;/ul&gt;
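&lt;p&gt;A malformed task definition only fails at register time, so a lightweight pre-flight check in CI can catch obvious mistakes earlier. A minimal, illustrative sketch; the checks shown mirror a few Fargate requirements and are not a substitute for AWS-side validation:&lt;/p&gt;

```python
# Sketch: lightweight pre-flight checks for a Fargate task definition.
# These mirror a few Fargate requirements (awsvpc networking, FARGATE
# compatibility, at least one essential container); they are illustrative
# and do not replace the validation AWS performs at register time.
import json

def check_task_definition(task_def):
    errors = []
    if task_def.get("networkMode") != "awsvpc":
        errors.append("Fargate requires networkMode 'awsvpc'")
    if "FARGATE" not in task_def.get("requiresCompatibilities", []):
        errors.append("requiresCompatibilities must include 'FARGATE'")
    containers = task_def.get("containerDefinitions", [])
    if not any(c.get("essential") for c in containers):
        errors.append("at least one container must be marked essential")
    return errors

if __name__ == '__main__':
    sample = {
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "containerDefinitions": [{"name": "myapp-preprod", "essential": True}],
    }
    print(check_task_definition(sample))  # [] -> no problems found
```

&lt;p&gt;In practice you would load myapp-ecs-task-definition.json with json.load and fail the pipeline if the returned list is non-empty.&lt;/p&gt;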

&lt;h2&gt;
  
  
  Step 2 - Prepare your gitlab-ci.yml
&lt;/h2&gt;

&lt;p&gt;Now we need to prepare the CI/CD side, which will automate the deployment of your updated containerized application to ECS Fargate.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://gist.github.com/enginaltay/cc8b577b1c62322d42c8991ff1ce6b6d"&gt;ecs-gitlab-ci.yml&lt;/a&gt; is a fully ready .gitlab-ci.yml file that includes build &amp;amp; deployment stages. We'll go through the Amazon ECS Fargate related section.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setting unique container image version tag
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The command below replaces the image tag &lt;strong&gt;latest&lt;/strong&gt; with the respective tag, which is the &lt;em&gt;environment name (preprod) and pipeline number&lt;/em&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i 's/latest/'"${CI_ENVIRONMENT_NAME}-${CI_PIPELINE_IID}"'/g' myapp-ecs-task-definition.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the command above, the container image reference will look as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;aws_account_id&amp;gt;.dkr.ecr.eu-central-1.amazonaws.com/myapp:preprod-1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, a unique container image tag is injected into the ECS task definition file on every CI/CD pipeline run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting ECS task revision number
&lt;/h3&gt;

&lt;p&gt;Now register the updated task definition file to create a new task definition and capture its revision number.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TASK_REVISION=$(aws ecs register-task-definition \
--family ${ECS_TASK_FAMILY} \
--cli-input-json file://myapp-ecs-task-definition.json \
--region eu-central-1 | jq --raw-output '.taskDefinition.revision')

echo "Registered ECS Task Definition = " $TASK_REVISION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the command above, an ECS task definition with a unique revision number will be created.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
The &lt;em&gt;&lt;strong&gt;aws ecs register-task-definition&lt;/strong&gt;&lt;/em&gt; command creates a new task definition by incrementing the revision number by one. It will look as below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Updated Task Definition = " $ECS_TASK_FAMILY:$TASK_REVISION
myapp-preprod-fargate:1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
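&lt;p&gt;If you ever drive the same flow with boto3 instead of the AWS CLI, the jq extraction has a straightforward Python equivalent. A sketch; the sample response below is trimmed to only the fields used here:&lt;/p&gt;

```python
# Sketch: pull the new revision number out of a register-task-definition
# response, mirroring: jq --raw-output '.taskDefinition.revision'
# The sample response is a trimmed, illustrative shape.

def task_revision(response):
    return int(response["taskDefinition"]["revision"])

if __name__ == '__main__':
    sample_response = {"taskDefinition": {"family": "myapp-preprod-fargate", "revision": 1}}
    rev = task_revision(sample_response)
    print(f"myapp-preprod-fargate:{rev}")  # myapp-preprod-fargate:1
```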



&lt;h3&gt;
  
  
  Updating ECS service - rolling update
&lt;/h3&gt;

&lt;p&gt;In the final step, we need to update the ECS service with our updated task revision. The command below triggers a rolling update of the ECS service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UPDATE_ECS_SERVICE=$(aws ecs update-service \
--cluster $ECS_CLUSTER_NAME \
--service $ECS_SERVICE_NAME \
--task-definition $ECS_TASK_FAMILY:$TASK_REVISION \
--desired-count 1 \
--region eu-central-1 | jq --raw-output '.service.serviceName')

echo "Deployment of $UPDATE_ECS_SERVICE has been completed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You can now deploy your containerized application with zero downtime, in an automated way.&lt;/p&gt;

&lt;p&gt;In this post, I wanted to show how to automate the deployment of your container to Amazon ECS Fargate using the AWS CLI &amp;amp; GitLab CI/CD. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>containerapps</category>
      <category>ecs</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
