<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: itTrident Software Services</title>
    <description>The latest articles on DEV Community by itTrident Software Services (@ittrident).</description>
    <link>https://dev.to/ittrident</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4238%2Fc25e038c-36b0-4568-b6b2-0eb777cad333.jpg</url>
      <title>DEV Community: itTrident Software Services</title>
      <link>https://dev.to/ittrident</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ittrident"/>
    <language>en</language>
    <item>
      <title>Deploy Fider as a Private App on AWS with CloudFront VPC Origin</title>
      <dc:creator>Mohamed Wasim</dc:creator>
      <pubDate>Wed, 26 Mar 2025 04:49:33 +0000</pubDate>
      <link>https://dev.to/ittrident/deploy-fider-as-a-private-app-on-aws-with-cloudfront-vpc-origin-1oa6</link>
      <guid>https://dev.to/ittrident/deploy-fider-as-a-private-app-on-aws-with-cloudfront-vpc-origin-1oa6</guid>
      <description>&lt;p&gt;When architecting solutions on AWS, minimizing the attack surface and ensuring secure deployments is essential. CloudFront VPC Origins provides a way to deliver content from private VPC subnet, reducing direct exposure to the internet. This makes it an ideal choice for those looking to host a private app while securely exposing it to the public internet, offering a more secure alternative to exposing applications from a public subnet.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll explain:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is CloudFront VPC Origin&lt;/li&gt;
&lt;li&gt;Why you should deploy Fider as a private application&lt;/li&gt;
&lt;li&gt;Create an Internal Application Load Balancer&lt;/li&gt;
&lt;li&gt;Create CloudFront VPC origin&lt;/li&gt;
&lt;li&gt;Create a CloudFront Distribution&lt;/li&gt;
&lt;li&gt;Create Secrets manager secret and Lambda function&lt;/li&gt;
&lt;li&gt;Create a Custom Header rule in AWS WAF&lt;/li&gt;
&lt;li&gt;Spin up ECS and RDS&lt;/li&gt;
&lt;li&gt;Monitor CloudFront and WAF logs&lt;/li&gt;
&lt;li&gt;Do You Really Need Zero Trust for Every App&lt;/li&gt;
&lt;li&gt;Is CloudFront Truly Zero Trust Compliant&lt;/li&gt;
&lt;li&gt;Deciding on Enterprise Grade Deployment for Fider&lt;/li&gt;
&lt;li&gt;Caveats to consider&lt;/li&gt;
&lt;li&gt;Final thoughts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. What is CloudFront VPC Origin
&lt;/h2&gt;

&lt;p&gt;Traditionally, when using CloudFront with an origin resource (such as an ALB or EC2 instance), the origin had to sit in a public subnet with a public IP address, and users had to implement Access Control Lists (ACLs) and other controls to restrict access effectively. This demanded ongoing effort to implement and maintain, resulting in undifferentiated heavy lifting.&lt;/p&gt;

&lt;p&gt;With CloudFront VPC Origins, however, users can host their applications in a private VPC without any direct route to the internet, ensuring CloudFront is the only entry point to their applications. When a CloudFront VPC Origin is set up as an origin for a CloudFront distribution, traffic stays on the high-throughput AWS backbone network all the way to your AWS origin, ensuring optimized performance and low latency.&lt;/p&gt;

&lt;p&gt;This is designed to prevent end users from discovering or bypassing CloudFront to access web applications directly. By implementing &lt;strong&gt;network-level isolation&lt;/strong&gt;, the origin servers remain hidden from the internet (&lt;em&gt;obfuscating AWS resources&lt;/em&gt;), significantly reducing the attack surface and enhancing the overall security posture. At the same time, users continue to benefit from CloudFront's global scale and high-performance capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Why you should deploy Fider as a private application
&lt;/h2&gt;

&lt;p&gt;Alright, here’s the deal: With this setup, even your ALB can be &lt;strong&gt;internal&lt;/strong&gt;, and it'll route traffic straight to your ECS tasks in those private subnets. You heard that right, &lt;strong&gt;no public access&lt;/strong&gt;. This keeps your backend services completely hidden from the internet. User requests go from CloudFront to the VPC origin over a private, secure connection, providing additional security for your applications.&lt;/p&gt;

&lt;p&gt;Now, with CloudFront VPC origin, you can stick that ALB in a private subnet where nobody can get to it directly. CloudFront is the only way in, so you can rest easy knowing the attack surface is way smaller. Plus, that ALB? The DNS name only resolves to private IPs, so no internet users can mess with it. By doing so, you significantly reduce the exposure of your internal ALB to DDoS attacks.&lt;/p&gt;

&lt;p&gt;Even if someone were to somehow discover your ALB’s ARN (which is very unlikely), they still couldn’t send traffic to it using their own CloudFront distribution. That’s because internal ALBs are only accessible from within the same AWS account and VPC.&lt;/p&gt;

&lt;p&gt;And if you’re really serious about security, you can slap on Origin Custom Headers to make sure only your CloudFront distribution is allowed to talk to the ALB. This header verification confirms the traffic is legitimate. But wait, it gets better: those custom headers can be &lt;strong&gt;dynamically rotated&lt;/strong&gt; using AWS Secrets Manager and a Lambda function automation, keeping everything secure without manual intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create an Internal Application Load Balancer
&lt;/h2&gt;

&lt;p&gt;Log in to the AWS Management Console and navigate to EC2; under the Load Balancing section, choose Load Balancers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwit2823adfy7r6fisph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwit2823adfy7r6fisph.png" alt=" " width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create load balancer and choose Application Load Balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi33anfre4it0h3476u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi33anfre4it0h3476u5.png" alt=" " width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Basic Configuration&lt;/strong&gt; choose &lt;strong&gt;Internal&lt;/strong&gt; as the scheme.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmk3nnl8qbq529m55ex8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmk3nnl8qbq529m55ex8.png" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Network Mapping&lt;/strong&gt; choose the VPC you created, or refer to &lt;a href="https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4#create-a-vpc"&gt;this&lt;/a&gt; guide to create one; a single public subnet should suffice this time (for placing the NAT gateway).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gedzz1a7h6pac5whsb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gedzz1a7h6pac5whsb5.png" alt=" " width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launch the Application Load Balancer across at least two AZs, in private subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hxz3aseu9gsn7oiy4cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hxz3aseu9gsn7oiy4cj.png" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;Security group&lt;/strong&gt; for your internal ALB and create an HTTPS listener.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75c5l4hq9z6k1ox4agt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75c5l4hq9z6k1ox4agt.png" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the VPC console and click on &lt;strong&gt;Managed prefix lists&lt;/strong&gt;, choose the CloudFront origin-facing list &lt;strong&gt;pl-b6a144df&lt;/strong&gt; (the ID varies by region) and map it to your ALB's inbound SG rules (HTTPS). By doing so you are only letting CloudFront's IP ranges communicate with your ALB. This is the recommended approach according to &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/protecting-your-origin-bp1-bp5.html" rel="noopener noreferrer"&gt;AWS Best Practices for DDoS Resiliency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn17m2qqt669zmu4wpivy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn17m2qqt669zmu4wpivy.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;
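&lt;p&gt;If you prefer scripting this step, the same ingress rule can be added with boto3. This is a sketch, not part of the original walkthrough; the security group ID is a placeholder, and the prefix list ID should be the one from your region:&lt;/p&gt;

```python
def build_https_ingress(prefix_list_id):
    """Ingress rule allowing HTTPS only from a managed prefix list."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{
            "PrefixListId": prefix_list_id,
            "Description": "CloudFront origin-facing IP ranges",
        }],
    }]

if __name__ == "__main__":
    import boto3  # only needed when actually applying the rule
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder: your ALB's security group
        IpPermissions=build_https_ingress("pl-b6a144df"),
    )
```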

&lt;p&gt;Under &lt;strong&gt;Secure listener settings&lt;/strong&gt; choose the certificate you created in &lt;strong&gt;ACM&lt;/strong&gt;; if you haven't created one, refer to &lt;a href="https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4#create-an-ecs-service"&gt;this&lt;/a&gt; guide and skip ahead to &lt;strong&gt;Step 3&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F249w3ev8gfhkjx7as86s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F249w3ev8gfhkjx7as86s.png" alt=" " width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are done requesting the public certificate in &lt;strong&gt;ACM&lt;/strong&gt;, it will appear in the drop-down. Select it, scroll down, and click on &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9huh97vrxj7kzfl4drt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9huh97vrxj7kzfl4drt.png" alt=" " width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The internal ALB is now created, accepting traffic only from the CloudFront IP ranges.&lt;/p&gt;
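&lt;p&gt;For reference, the console steps above roughly correspond to this boto3 call (a sketch with hypothetical subnet and security group IDs, not the exact setup from the screenshots):&lt;/p&gt;

```python
def build_internal_alb_request(name, subnet_ids, sg_ids):
    """Request parameters for an internal Application Load Balancer."""
    return {
        "Name": name,
        "Scheme": "internal",    # no public IPs; DNS resolves to private addresses
        "Type": "application",
        "IpAddressType": "ipv4",
        "Subnets": subnet_ids,   # at least two private subnets in different AZs
        "SecurityGroups": sg_ids,
    }

if __name__ == "__main__":
    import boto3
    elbv2 = boto3.client("elbv2")
    elbv2.create_load_balancer(**build_internal_alb_request(
        "fider-internal-alb",                  # hypothetical name
        ["subnet-aaa111", "subnet-bbb222"],    # hypothetical private subnets
        ["sg-0123456789abcdef0"],              # hypothetical ALB security group
    ))
```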

&lt;h2&gt;
  
  
  4. Create CloudFront VPC origin
&lt;/h2&gt;

&lt;p&gt;Now that you are done with the internal ALB creation, navigate to the CloudFront service console and choose &lt;strong&gt;VPC origins&lt;/strong&gt; from the navigation pane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F755qv4zcwgvv6obdvllf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F755qv4zcwgvv6obdvllf.png" alt=" " width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on VPC origins and create a VPC origin,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn24oqjt8nl6ntg1kkgq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn24oqjt8nl6ntg1kkgq4.png" alt=" " width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide a name and choose the internal ALB's &lt;strong&gt;ARN&lt;/strong&gt; from the &lt;strong&gt;Origin ARN&lt;/strong&gt; drop-down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxcfi7ns9b93mjrd8qzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxcfi7ns9b93mjrd8qzw.png" alt=" " width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the protocol to &lt;strong&gt;HTTPS only&lt;/strong&gt; and click on Create VPC origin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn38k0itn5nc03nvwz19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn38k0itn5nc03nvwz19.png" alt=" " width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The VPC origin creation is now complete.&lt;/p&gt;
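&lt;p&gt;The same VPC origin can also be created programmatically via the CreateVpcOrigin API (available in recent boto3 releases). The names and the ALB ARN below are placeholders for illustration:&lt;/p&gt;

```python
def build_vpc_origin_config(name, alb_arn):
    """Endpoint config pointing a CloudFront VPC origin at an internal ALB."""
    return {
        "Name": name,
        "Arn": alb_arn,                        # ARN of the internal ALB
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",  # matches the console choice above
        "OriginSslProtocols": {"Quantity": 1, "Items": ["TLSv1.2"]},
    }

if __name__ == "__main__":
    import boto3  # CreateVpcOrigin requires a recent boto3 version
    cf = boto3.client("cloudfront")
    cf.create_vpc_origin(VpcOriginEndpointConfig=build_vpc_origin_config(
        "fider-vpc-origin",  # hypothetical name
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/fider-internal-alb/abc123",
    ))
```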

&lt;h2&gt;
  
  
  5. Create a CloudFront Distribution
&lt;/h2&gt;

&lt;p&gt;Navigate to the CloudFront service console and click on &lt;strong&gt;Create a CloudFront distribution&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w30yqgw5o68ei23ylpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w30yqgw5o68ei23ylpq.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Origin&lt;/strong&gt; section, select the VPC origin you created. Enable &lt;strong&gt;Origin Shield&lt;/strong&gt; for an additional layer of caching, and don't forget to add the &lt;strong&gt;custom header&lt;/strong&gt;; name it &lt;em&gt;x-origin-verify&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpwwgg4tf3bd0m31zok3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpwwgg4tf3bd0m31zok3.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;
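&lt;p&gt;A quick way to generate a strong value for that header is Python's &lt;em&gt;secrets&lt;/em&gt; module. Any high-entropy string works; this is just one convenient sketch:&lt;/p&gt;

```python
import secrets

def new_header_value(num_bytes=32):
    """Generate a high-entropy value for the x-origin-verify custom header."""
    return secrets.token_hex(num_bytes)  # 64 hex characters for 32 random bytes

print(new_header_value())
```

Keep the generated value handy: the same string goes into the CloudFront custom header, the WAF rule, and the Secrets Manager secret created later.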

&lt;p&gt;Under &lt;strong&gt;Default cache behaviour&lt;/strong&gt; choose the following settings. Under &lt;strong&gt;Allowed HTTP methods&lt;/strong&gt; choose the third option; if you want to know why, check out &lt;a href="https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4#create-cloudfront-distribution-with-waf"&gt;this&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fden6r4xd7517s53zm48x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fden6r4xd7517s53zm48x.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Cache key and origin requests&lt;/strong&gt; choose &lt;strong&gt;Cache policy and origin request policy (recommended)&lt;/strong&gt;. Choose the default cache policy and origin request policy. Under &lt;strong&gt;Response headers policy&lt;/strong&gt; choose &lt;strong&gt;CORS-with-preflight-and-SecurityHeadersPolicy&lt;/strong&gt;, it is an AWS-managed response header policy. Use this policy to specify security-related HTTP response headers that CloudFront adds to HTTP responses that it sends to viewers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ry4utjly2fuc7ilj8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6ry4utjly2fuc7ilj8g.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS-managed policy provides the following security headers; the right choice ultimately comes down to your use case. You can also create a &lt;strong&gt;Custom policy&lt;/strong&gt; and attach it under &lt;strong&gt;Response headers policy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0kjezs4ixf4pfxvc4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0kjezs4ixf4pfxvc4u.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enable &lt;strong&gt;Web Application Firewall&lt;/strong&gt; for your CloudFront distribution and toggle on &lt;strong&gt;use monitor mode&lt;/strong&gt;; you can disable it later once you know how many of your requests would be blocked by this WAF configuration (false positives and red herrings).&lt;/p&gt;

&lt;p&gt;Check &lt;strong&gt;SQL protections&lt;/strong&gt; and &lt;strong&gt;Rate limiting&lt;/strong&gt; (DDoS protection) and take note of the price estimate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3owxatqrr42u3jho6lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3owxatqrr42u3jho6lg.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the &lt;strong&gt;CNAME&lt;/strong&gt; through which you would like to access your application publicly and choose the custom SSL certificate for your &lt;strong&gt;CNAME&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheel5vlalxbk5vkztzsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheel5vlalxbk5vkztzsb.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure standard logging for your CloudFront distribution to monitor the performance and usage metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cp7j3uf57aiivz4pwo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cp7j3uf57aiivz4pwo9.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CloudFront distribution creation is now complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxak5ogjao1ibvljl4lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxak5ogjao1ibvljl4lb.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;strong&gt;WAF &amp;amp; Shield&lt;/strong&gt; console, click on Web ACLs and choose Global (CloudFront) in the region drop-down. Click on the Web ACL and make a note of the rules and WCUs.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Create Secrets manager secret and Lambda function
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Choose secret type:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Navigate to the Secrets Manager console and click on &lt;strong&gt;Store a new secret&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjvd83m1yfjvz6j6krac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjvd83m1yfjvz6j6krac.png" alt=" " width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Secret type&lt;/strong&gt; choose &lt;em&gt;Other type of secret&lt;/em&gt;, provide &lt;em&gt;HEADERVALUE&lt;/em&gt; as the &lt;strong&gt;Key&lt;/strong&gt;, and paste the same header value you used in the CloudFront distribution. Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37oze0fl0rfki69gshpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37oze0fl0rfki69gshpi.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;
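&lt;p&gt;Scripted, this step is a single create_secret call whose body is a JSON object with the &lt;em&gt;HEADERVALUE&lt;/em&gt; key. The secret name below is a placeholder (a sketch under those assumptions):&lt;/p&gt;

```python
import json

def build_secret_payload(header_value):
    """Secret body matching the HEADERVALUE key expected by the rotation setup."""
    return json.dumps({"HEADERVALUE": header_value})

if __name__ == "__main__":
    import boto3
    sm = boto3.client("secretsmanager")
    sm.create_secret(
        Name="cloudfront/origin-verify",  # hypothetical secret name
        SecretString=build_secret_payload("paste-your-x-origin-verify-value"),
    )
```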

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configure secret:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose the secret name and click Next. Take note of the &lt;strong&gt;Resource permissions&lt;/strong&gt; step; we will come back to it once we are done creating the Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4930xvcfq5fpymxqty4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4930xvcfq5fpymxqty4k.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Review:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skip the configure rotation step and move on to the review section; we will configure rotation once the Lambda function is created and configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lheuxgmriccj5bgcj8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lheuxgmriccj5bgcj8o.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The secret creation is now complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqm1myrc3e643m9m8rp1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqm1myrc3e643m9m8rp1s.png" alt=" " width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Create a rotator lambda function:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Navigate to the Lambda console and click on &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F149dgasbdub1hxhrmmqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F149dgasbdub1hxhrmmqx.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Author from scratch&lt;/strong&gt; and select &lt;em&gt;Python 3.9&lt;/em&gt; as the runtime with &lt;em&gt;x86_64&lt;/em&gt; architecture. Under &lt;strong&gt;Execution role&lt;/strong&gt; choose to create a new role; this creates a new service role for the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffzusoxztpva7j1iywqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffzusoxztpva7j1iywqw.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Additional Configurations&lt;/strong&gt; enable tags and VPC. I am placing the Lambda function inside the VPC to enable &lt;strong&gt;VPC flow logs&lt;/strong&gt; for monitoring network traffic, which is critical for auditing and incident response. Before choosing the security group, create a new one in the EC2 console and then return here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0rbcz8vcypf1ddwlp8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0rbcz8vcypf1ddwlp8z.png" alt=" " width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since this rotator Lambda function will be triggered by Secrets Manager, it doesn't need any inbound rules, as it's &lt;em&gt;event-driven&lt;/em&gt;. It still needs one outbound rule to reach the Secrets Manager endpoint over HTTPS, as that traffic is routed to the public internet via a NAT gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frszpj112jkjefh6w7eyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frszpj112jkjefh6w7eyh.png" alt=" " width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;
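&lt;p&gt;If you want to script that egress-only security group rule, a boto3 sketch could look like this (the security group ID is a placeholder):&lt;/p&gt;

```python
def build_https_egress():
    """Single outbound rule: HTTPS to anywhere, routed out via the NAT gateway."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS to Secrets Manager"}],
    }]

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_egress(
        GroupId="sg-0feedfacecafe0001",  # placeholder: the Lambda's security group
        IpPermissions=build_https_egress(),
    )
```

A Secrets Manager VPC interface endpoint would be an alternative to the NAT gateway here, keeping the HTTPS traffic entirely inside the VPC.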

&lt;p&gt;Now, reference the created SG under the Lambda's configuration parameters drop-down and click on &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd1s45xvan8dlxzux2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd1s45xvan8dlxzux2j.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rotator-lambda function is now created.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configuration parameters - rotator lambda function:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploy the Python code from &lt;a href="https://gist.github.com/WasimTTY/ff092402d09c56f6496b359036e42aa0" rel="noopener noreferrer"&gt;this gist&lt;/a&gt; onto the Lambda function.&lt;/p&gt;

&lt;p&gt;This code was &lt;em&gt;modified&lt;/em&gt; to work with the CloudFront WAF configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr7drrxdssrlqj3p0a9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr7drrxdssrlqj3p0a9c.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once pasted, click &lt;em&gt;Deploy&lt;/em&gt; to push the code to the Lambda function. The code also comes with an external dependency, so create a Lambda layer and attach it to the rotator-lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uiug5ujninexarsz17r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uiug5ujninexarsz17r.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the &lt;em&gt;artifact&lt;/em&gt; in this repository: &lt;a href="https://github.com/aws-samples/amazon-cloudfront-waf-secretsmanager/tree/master/artifacts" rel="noopener noreferrer"&gt;https://github.com/aws-samples/amazon-cloudfront-waf-secretsmanager/tree/master/artifacts&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3j5nwlpnqv3y8secvc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3j5nwlpnqv3y8secvc6.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;
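For reference, a Lambda layer archive must place its contents under a top-level `python/` directory so the runtime can import them. Below is a minimal local sketch of building such a zip; the directory and file names are placeholders, not part of the original solution.

```python
# Build a Lambda layer archive: dependencies must live under a top-level
# "python/" directory so the Lambda runtime adds them to sys.path.
# Illustrative sketch; package_dir and zip_path are whatever you choose.
import os
import zipfile

def build_layer_zip(package_dir: str, zip_path: str) -> str:
    """Zip package_dir so every file lands under python/ inside the archive."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(package_dir):
            for name in files:
                src = os.path.join(root, name)
                # Re-root each file under python/ inside the zip.
                arcname = os.path.join("python", os.path.relpath(src, package_dir))
                zf.write(src, arcname)
    return zip_path
```

The resulting zip can then be published with something like `aws lambda publish-layer-version --layer-name &lt;your-layer-name&gt; --zip-file fileb://layer.zip` and attached to the function (layer name here is hypothetical).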

&lt;p&gt;Now, under the &lt;strong&gt;Configuration&lt;/strong&gt; tab, set the &lt;em&gt;Timeout&lt;/em&gt; for the rotator-lambda function. I have set it to &lt;em&gt;5 min 3 sec&lt;/em&gt;, leaving the memory and ephemeral storage at their defaults.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftc7xp13drkve0vnhkp9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftc7xp13drkve0vnhkp9q.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take note of the &lt;em&gt;Existing role&lt;/em&gt; attached to the Lambda function. We will need it to attach the policies that let the Lambda function interact with the CloudFront distribution and WAF when rotating the &lt;strong&gt;secret-custom-header&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Navigate to the IAM console and, under the &lt;em&gt;Roles&lt;/em&gt; tab, search for the service role created by the rotator-lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfbipixas66ac07gmis6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfbipixas66ac07gmis6.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an inline policy for accessing the CloudFront distribution. I’ve provided only the permissions necessary for the service to function. This policy grants the rotator Lambda function the minimal privileges required to modify the Secret-Custom-Header on the CloudFront distribution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"cloudfront:GetDistribution"&lt;/span&gt;,
                &lt;span class="s2"&gt;"cloudfront:UpdateDistribution"&lt;/span&gt;,
                &lt;span class="s2"&gt;"cloudfront:GetDistributionConfig"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:cloudfront::&amp;lt;AWS ACCOUNT ID&amp;gt;:distribution/E3R3ZKP77W4214"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"aws:PrincipalArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ve applied the same approach for the Secrets Manager, CloudFront WAF, and Lambda VPC policies as well, granting only the minimal permissions required for each service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"secretsmanager:GetSecretValue"&lt;/span&gt;,
                &lt;span class="s2"&gt;"secretsmanager:DescribeSecret"&lt;/span&gt;,
                &lt;span class="s2"&gt;"secretsmanager:PutSecretValue"&lt;/span&gt;,
                &lt;span class="s2"&gt;"secretsmanager:UpdateSecretVersionStage"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:secretsmanager:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:secret:Custom-Header-lQWRrQ"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"aws:PrincipalArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor1"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="s2"&gt;"secretsmanager:GetRandomPassword"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"*"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"aws:PrincipalArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;The Lambda VPC policy (for managing the function's network interfaces):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"ec2:CreateNetworkInterface"&lt;/span&gt;,
                &lt;span class="s2"&gt;"ec2:DescribeNetworkInterfaces"&lt;/span&gt;,
                &lt;span class="s2"&gt;"ec2:DeleteNetworkInterface"&lt;/span&gt;,
                &lt;span class="s2"&gt;"ec2:UnassignPrivateIpAddresses"&lt;/span&gt;,
                &lt;span class="s2"&gt;"ec2:DescribeSubnets"&lt;/span&gt;,
                &lt;span class="s2"&gt;"ec2:AssignPrivateIpAddresses"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"*"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"aws:PrincipalArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;And the CloudFront WAF policy:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"wafv2:UpdateWebACL"&lt;/span&gt;,
                &lt;span class="s2"&gt;"wafv2:GetWebACL"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"arn:aws:wafv2:*:&amp;lt;AWS ACCOUNT ID&amp;gt;:*/regexpatternset/*/*"&lt;/span&gt;,
                &lt;span class="s2"&gt;"arn:aws:wafv2:*:&amp;lt;AWS ACCOUNT ID&amp;gt;:*/managedruleset/*/*"&lt;/span&gt;,
                &lt;span class="s2"&gt;"arn:aws:wafv2:*:&amp;lt;AWS ACCOUNT ID&amp;gt;:*/rulegroup/*/*"&lt;/span&gt;,
                &lt;span class="s2"&gt;"arn:aws:wafv2:us-east-1:&amp;lt;AWS ACCOUNT ID&amp;gt;:global/webacl/CreatedByCloudFront-4ffb490d-8e92-419b-a38b-017361fd80aa/a2512452-7f08-4591-bf1a-fec1923ac75a"&lt;/span&gt;,
                &lt;span class="s2"&gt;"arn:aws:wafv2:*:&amp;lt;AWS ACCOUNT ID&amp;gt;:*/ipset/*/*"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"aws:PrincipalArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The IAM policies crafted above are &lt;em&gt;identity-based policies&lt;/em&gt; scoped down to a specific IAM role using a &lt;em&gt;PrincipalArn&lt;/em&gt; condition, enforcing least privilege.&lt;/p&gt;
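As a quick sanity check on this pattern, a small helper like the one below can verify that every Allow statement in a policy document pins access to the expected role via an aws:PrincipalArn condition. This is an illustrative lint, not an AWS API; the function name is my own.

```python
# Check that an identity-based policy is scoped to one role through an
# aws:PrincipalArn condition, as in the policies above. Illustrative only.
import json

def scoped_to_principal(policy_json: str, role_arn: str) -> bool:
    """Return True if every Allow statement pins aws:PrincipalArn to role_arn."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        cond = stmt.get("Condition", {}).get("StringEquals", {})
        if cond.get("aws:PrincipalArn") != role_arn:
            return False
    return True
```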

&lt;p&gt;The rotator-lambda function also needs a &lt;em&gt;resource-based policy&lt;/em&gt; so that Secrets Manager can invoke the function. I used the AWS CLI to add this policy to the Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  aws lambda add-permission &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--function-name&lt;/span&gt; secret-header-rotator-function &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--statement-id&lt;/span&gt; AllowSecretsManagerInvoke &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--action&lt;/span&gt; lambda:InvokeFunction &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--principal&lt;/span&gt; secretsmanager.amazonaws.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-account&lt;/span&gt; &amp;lt;AWS ACCOUNT ID&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-arn&lt;/span&gt; arn:aws:secretsmanager:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:secret:Custom-Header-lQWRrQ

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resulting policy will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Id"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"AllowSecretsManagerInvoke"&lt;/span&gt;,
      &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
      &lt;span class="s2"&gt;"Principal"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"Service"&lt;/span&gt;: &lt;span class="s2"&gt;"secretsmanager.amazonaws.com"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="s2"&gt;"lambda:InvokeFunction"&lt;/span&gt;,
      &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:lambda:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:function:secret-header-rotator-function"&lt;/span&gt;,
      &lt;span class="s2"&gt;"Condition"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
          &lt;span class="s2"&gt;"AWS:SourceAccount"&lt;/span&gt;: &lt;span class="s2"&gt;"&amp;lt;AWS ACCOUNT ID&amp;gt;"&lt;/span&gt;
         &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="s2"&gt;"ArnLike"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
          &lt;span class="s2"&gt;"AWS:SourceArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:secretsmanager:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:secret:Custom-Header-lQWRrQ"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;resource-based policy&lt;/em&gt; above follows the &lt;strong&gt;Zero Trust&lt;/strong&gt; principle by minimizing trust, ensuring access is explicitly verified through conditions such as Source Account and Source ARN before permissions are granted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configure secrets manager - resource policy:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, navigate to the Secrets Manager console and, under &lt;strong&gt;Resource permissions&lt;/strong&gt;, paste the &lt;em&gt;resource-based policy&lt;/em&gt; that grants the rotator-lambda function's role access to the secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"Version"&lt;/span&gt; : &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Statement"&lt;/span&gt; : &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Sid"&lt;/span&gt; : &lt;span class="s2"&gt;"AllowLambdaAccess"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Effect"&lt;/span&gt; : &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Principal"&lt;/span&gt; : &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"AWS"&lt;/span&gt; : &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS ACCOUNT ID&amp;gt;:role/service-role/header-rotate-function-role-tinh6qnu"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"Action"&lt;/span&gt; : &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"secretsmanager:GetSecretValue"&lt;/span&gt;, &lt;span class="s2"&gt;"secretsmanager:PutSecretValue"&lt;/span&gt;, &lt;span class="s2"&gt;"secretsmanager:UpdateSecretVersionStage"&lt;/span&gt;, &lt;span class="s2"&gt;"secretsmanager:DescribeSecret"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"Resource"&lt;/span&gt; : &lt;span class="s2"&gt;"arn:aws:secretsmanager:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:secret:Custom-Header-lQWRrQ"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Condition"&lt;/span&gt; : &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt; : &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"aws:ResourceTag/Environment"&lt;/span&gt; : &lt;span class="s2"&gt;"production"&lt;/span&gt;,
        &lt;span class="s2"&gt;"aws:SourceAccount"&lt;/span&gt; : &lt;span class="s2"&gt;"&amp;lt;AWS ACCOUNT ID&amp;gt;"&lt;/span&gt;,
        &lt;span class="s2"&gt;"aws:SourceArn"&lt;/span&gt; : &lt;span class="s2"&gt;"arn:aws:lambda:us-east-2:&amp;lt;AWS ACCOUNT ID&amp;gt;:function:secret-header-rotator-function"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy also aligns with the &lt;strong&gt;Zero Trust&lt;/strong&gt; principle. Even though the request comes from the rotator-lambda function (which could be considered a trusted service), the policy does not allow access by default. It enforces strict conditions, including &lt;strong&gt;ABAC&lt;/strong&gt; (&lt;em&gt;Attribute-Based Access Control&lt;/em&gt;) using resource tags, to validate the request’s origin and ensure the caller is explicitly authorized.&lt;/p&gt;

&lt;p&gt;Taking this approach tightens security by enforcing stringent, context-aware access controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Create a Custom Header rule in AWS WAF
&lt;/h2&gt;

&lt;p&gt;Navigate to the WAF &amp;amp; Shield service console and look up the &lt;em&gt;Web ACLs&lt;/em&gt; created under &lt;strong&gt;Global (CloudFront)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz78rwe3akbm7rkjwbub0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz78rwe3akbm7rkjwbub0.png" alt=" " width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new rule named &lt;strong&gt;XOriginVerify&lt;/strong&gt;, and configure the same &lt;em&gt;secret-custom-header&lt;/em&gt; value that was set in the CloudFront distribution and the Secrets manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeg5fnrkxkcmhn02ntoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeg5fnrkxkcmhn02ntoe.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rotator-lambda function looks &lt;strong&gt;explicitly&lt;/strong&gt; for this rule name, as it is hard-coded in the Python code. You can modify it to suit your use case.&lt;/p&gt;
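To make the rule's shape concrete, here is a sketch of the wafv2 Rule structure such a rotator updates: a ByteMatchStatement that exactly matches the secret value in a custom header. The header name (x-origin-verify), priority, and action here are assumptions; adapt them to your Web ACL.

```python
# Sketch of a wafv2 Rule dict: exact match on the secret value carried in the
# x-origin-verify header. Header name, priority, and action are assumptions.
def x_origin_verify_rule(secret_value: str, priority: int = 0) -> dict:
    return {
        "Name": "XOriginVerify",
        "Priority": priority,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": secret_value.encode(),
                "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "EXACTLY",
            }
        },
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "XOriginVerify",
        },
    }
```

During rotation, the function would fetch the Web ACL, swap the SearchString for the new secret, and push the rule list back with UpdateWebACL.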

&lt;p&gt;Once the rotator-lambda function updates the &lt;em&gt;secret-custom-header&lt;/em&gt; in both the CloudFront distribution and WAF, it tests the secret by pinging your domain name. The request has to propagate through CloudFront, and WAF will let it through only if the header matches, which should result in a &lt;strong&gt;200 OK&lt;/strong&gt;. Check the CloudWatch logs to confirm this.&lt;/p&gt;
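You can exercise the same check by hand: send a request through CloudFront carrying the current secret header and expect a 200. A minimal sketch with the standard library follows; the domain and header name are placeholders.

```python
# Build a probe request that carries the secret custom header; WAF only lets
# it through when the header value matches the rule. Domain is a placeholder.
import urllib.request

def build_probe(domain: str, header_value: str) -> urllib.request.Request:
    req = urllib.request.Request(f"https://{domain}/")
    req.add_header("x-origin-verify", header_value)
    return req

# urllib.request.urlopen(build_probe("feedback.example.com", "current-secret"))
# should return 200 once CloudFront and WAF agree on the secret.
```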

&lt;h2&gt;
  
  
  8. Spin up ECS and RDS
&lt;/h2&gt;

&lt;p&gt;You can refer to &lt;a href="https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4#install-aws-cli"&gt;this&lt;/a&gt; guide for spinning up the ECS and RDS infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Monitor CloudFront and WAF logs
&lt;/h2&gt;

&lt;p&gt;Monitoring CloudFront and AWS WAF logs is a crucial part of &lt;em&gt;incident response&lt;/em&gt; in any secure AWS setup, especially when you're running a public-facing app.&lt;/p&gt;

&lt;p&gt;You need visibility into what’s happening at your edge and app layers. That’s where CloudFront and AWS WAF logs come in.&lt;/p&gt;

&lt;p&gt;In the AWS Management Console, under the region drop-down, choose &lt;strong&gt;us-east-1&lt;/strong&gt;; this is the region where CloudFront delivers logs to the CloudWatch log group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxts20wrjkfxbm7hm33ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxts20wrjkfxbm7hm33ty.png" alt=" " width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each log contains information such as the time the request was received, the processing time, request paths, client IP, and server responses. You can use these access logs to analyze response times and to troubleshoot issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47yb0j5703rt4keu1paa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47yb0j5703rt4keu1paa.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;
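For ad-hoc analysis outside the console, these access-log entries are essentially delimited records, so a small helper can turn a line into a dict. The field list below is an assumption; match it to the fields actually configured for your log delivery.

```python
# Turn one delimited access-log line into a dict for quick analysis.
# FIELDS is illustrative; align it with your configured log fields.
FIELDS = ["timestamp", "client_ip", "method", "uri", "status", "time_taken"]

def parse_log_line(line: str, fields=FIELDS, sep="\t") -> dict:
    values = line.rstrip("\n").split(sep)
    return dict(zip(fields, values))
```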

&lt;p&gt;Once you have finished observing false positives for the WAF rules, navigate to the WAF console and adjust the rule actions to fit your use case. You can set actions such as Allow, Block, CAPTCHA, or Count based on your specific needs. It ultimately comes down to your requirements and the level of protection you wish to enforce.&lt;/p&gt;

&lt;p&gt;It’s important to note that when you set a rule group action to 'Count,' it will override any Allow, Block, or Captcha actions that were originally specified for the rules within that group. As a result, all rules in the group will only count matching requests, regardless of their original action.&lt;/p&gt;

&lt;p&gt;By default, the action is set to 'Count' during testing and monitoring of the rule group’s behavior, which allows you to observe how the rules perform before deploying them to production.&lt;/p&gt;

&lt;p&gt;AWS WAF now supports sending logs to CloudWatch Logs. Go to the &lt;strong&gt;Logging and metrics&lt;/strong&gt; tab, click the &lt;strong&gt;Enable&lt;/strong&gt; drop-down, and choose a &lt;strong&gt;Logging destination&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftma12clyqg7150d33hs4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftma12clyqg7150d33hs4.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the CloudWatch Logs log group as the destination and click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde6x9r78i2nbj9gjg7ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde6x9r78i2nbj9gjg7ja.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logging is now enabled for AWS WAF.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas2iimfg50rf0uo9b5ag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas2iimfg50rf0uo9b5ag.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;strong&gt;Sampled requests&lt;/strong&gt; tab to analyze how your AWS WAF rules are &lt;strong&gt;evaluating&lt;/strong&gt; traffic. Each sampled request shows details like the &lt;strong&gt;IP address&lt;/strong&gt;, &lt;strong&gt;headers&lt;/strong&gt;, &lt;strong&gt;URI&lt;/strong&gt;, &lt;strong&gt;country&lt;/strong&gt;, and &lt;strong&gt;rule evaluation outcome&lt;/strong&gt;. Sampled requests are &lt;strong&gt;free&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9zk38t9odiyq71ln5e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9zk38t9odiyq71ln5e7.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn4k81k3rw6gl1s4l6py.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn4k81k3rw6gl1s4l6py.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, you can use the default queries provided by WAF, under the &lt;strong&gt;CloudWatch Logs Insights&lt;/strong&gt; tab in the WAF console, to query the number of requests coming from a given IP address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxx8rmp2iredw57c0sox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxx8rmp2iredw57c0sox.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
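For reference, a Logs Insights query along those lines looks like this (field names follow the WAF JSON log schema, e.g. httpRequest.clientIp):

```
fields @timestamp, httpRequest.clientIp
| stats count(*) as requestCount by httpRequest.clientIp
| sort requestCount desc
| limit 20
```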

&lt;h2&gt;
  
  
  10. Do You Really Need Zero Trust for Every App?
&lt;/h2&gt;

&lt;p&gt;Before slapping on Zero Trust policies like they’re the new fad, take a step back and do the &lt;strong&gt;threat modeling&lt;/strong&gt; first.&lt;/p&gt;

&lt;p&gt;That’s the only way to know whether Zero Trust is even the right fit for your app.&lt;/p&gt;

&lt;p&gt;Threat modeling is like a security audit before you go shopping for locks and alarms. It forces you to ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What exactly are you protecting?&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Who would want to break in?&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How might they try to do it?&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last part? That’s basically &lt;strong&gt;recon&lt;/strong&gt;, and attackers do it all the time. They scan your public IPs, poke at your APIs, and look for leaky S3 buckets or overly permissive roles. So when you threat model, you're doing a defensive version of recon: identifying what an attacker might see and how they'd try to exploit it.&lt;/p&gt;

&lt;p&gt;Threat modeling helps you spot risks like these. Once you know them, you can set up your Zero Trust controls, like least-privilege access, network segmentation, and strong authentication, all tailored to the actual threats, not just random paranoia.&lt;/p&gt;

&lt;p&gt;Even something as focused as the rotator Lambda function, which runs every 4 hours to manage the secret header, carries real risk. Its footprint may be small, but its role is critical. In cloud environments, even well-scoped serverless functions can become attack vectors if misconfigured or exposed.&lt;/p&gt;

&lt;p&gt;This strict posture isn't just theoretical. Tools like &lt;a href="https://github.com/OtterHacker/AWSRedirector" rel="noopener noreferrer"&gt;AWSRedirector&lt;/a&gt; demonstrate how serverless components can be exploited in real-world attacks. That's why we lock down even the smallest moving parts.&lt;/p&gt;

&lt;p&gt;Without threat modeling, you’d be like, ‘Oh, I’ll just block everything!’ But that’s like building a fortress with no doors. You gotta know who’s allowed in, and under what conditions. That’s how Zero Trust works: it’s not about ‘trust no one’ for the sake of it; it’s about verifying every request based on real risks. You should understand the balance between security and usability.&lt;/p&gt;

&lt;p&gt;If you’re curious about how AWS approaches Zero Trust in detail, you can check out their &lt;a href="https://aws.amazon.com/security/zero-trust/" rel="noopener noreferrer"&gt;Zero Trust Security Model&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-zero-trust-architecture/zero-trust-principles.html" rel="noopener noreferrer"&gt;AWS Zero Trust Architecture principles&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Is CloudFront Truly Zero Trust Compliant?
&lt;/h2&gt;

&lt;p&gt;Zero Trust architectures are &lt;strong&gt;identity-centric&lt;/strong&gt;, requiring per-request authentication and authorization before traffic reaches the application. Using CloudFront means your application is publicly addressable on the internet, even if access is restricted with WAF, signed URLs, or custom headers.  This introduces a public entry point, contrary to Zero Trust principles of default deny and private access.&lt;/p&gt;
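&lt;p&gt;For context on the custom-header mitigation mentioned above: a common pattern is to have the ALB forward only requests carrying the secret header CloudFront injects, with the listener's default action returning a fixed 403. A hedged sketch with the AWS CLI (the ARNs, header name, and value are placeholders):&lt;/p&gt;

```shell
# Forward traffic only when the CloudFront-injected secret header matches.
# The listener's default action should be a fixed 403 response, so requests
# hitting the ALB directly (without the header) are rejected.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"X-Origin-Verify","Values":["REPLACE_WITH_SECRET"]}}]' \
  --actions Type=forward,TargetGroupArn="$TARGET_GROUP_ARN"
```

&lt;p&gt;Note this still only proves the request passed through CloudFront; it is network-path filtering, not the per-request identity verification Zero Trust calls for.&lt;/p&gt;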

&lt;p&gt;CloudFront does not support integration with &lt;em&gt;IAM Identity Center, Azure AD, Okta&lt;/em&gt;, or other identity providers (IdPs) to enforce access control. While it offers IP-level and path-level access logs, it lacks identity-based audit trails, making per-user auditing and continuous trust evaluation impossible. Additionally, it does not inspect or enforce device posture or context, meaning it cannot verify whether requests originate from secure, approved environments. &lt;/p&gt;

&lt;p&gt;If you're looking to implement true Zero Trust at the edge, &lt;strong&gt;Cloudflare&lt;/strong&gt; and &lt;strong&gt;Akamai (EAA)&lt;/strong&gt; offer the most complete solutions today. Others like Fastly are catching up, but platforms like AWS CloudFront, Azure Front Door, and Google Cloud CDN still fall short of Zero Trust standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. Deciding on Enterprise Grade Deployment for Fider
&lt;/h2&gt;

&lt;p&gt;Fider is just a feedback platform, and most of what it contains are suggestions and feedback. We don’t store any sensitive Personally Identifiable Information &lt;strong&gt;(PII)&lt;/strong&gt; like Social Security Numbers or financial data. Even in the event of a compromise, the risk is minimal, typically limited to names and email addresses. However, if you're using Fider internally within your organization, the risk could be higher, as internal feedback may include more sensitive or proprietary information.&lt;/p&gt;

&lt;p&gt;Given Fider’s risk profile, a Lambda-based secret header rotator backed by Secrets Manager offers a strong security baseline. This level of protection is more than sufficient if you’re using Fider to collect public feedback and suggestions without overengineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/verified-access/" rel="noopener noreferrer"&gt;AWS Verified Access&lt;/a&gt; is indeed the right choice if you’re planning to host everything within AWS and require enterprise-level security controls. But, it’s generally overkill unless you have tighter access control policies in your enterprise setup, and costs more. You are also charged per GB for data processed by Verified Access. You can calculate the estimated cost here &lt;a href="https://aws.amazon.com/verified-access/pricing/" rel="noopener noreferrer"&gt;AWS Verified Access Pricing&lt;/a&gt;. It functions more as an &lt;strong&gt;access control solution&lt;/strong&gt; than as traditional &lt;strong&gt;VPN tunnels&lt;/strong&gt;. It only works with AWS hosted systems.&lt;/p&gt;

&lt;p&gt;If you are curious about this setup, you can check out &lt;a href="https://pages.nist.gov/zero-trust-architecture/VolumeC/HowTo-E4B5.html" rel="noopener noreferrer"&gt;https://pages.nist.gov/zero-trust-architecture/VolumeC/HowTo-E4B5.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/en-gb/zero-trust/products/access/" rel="noopener noreferrer"&gt;Cloudflare Zero Trust&lt;/a&gt; is the sweet spot and comes out as the better choice when considering both &lt;strong&gt;cost&lt;/strong&gt; and &lt;strong&gt;performance&lt;/strong&gt;.  It supports &lt;strong&gt;edge services&lt;/strong&gt;, &lt;strong&gt;DNS&lt;/strong&gt;, and &lt;strong&gt;mTLS&lt;/strong&gt; out of the box, providing enhanced security, performance, and seamless encryption with Cloudflare’s global network.You can check out its pricing &lt;a href="https://www.cloudflare.com/en-gb/plans/zero-trust-services/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  13. Caveats to consider
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Endpoints&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can still choose to go with VPC interface endpoints (powered by AWS PrivateLink) if you haven't integrated external OAuth providers (like Google, GitHub, etc.), because those services are not available via AWS PrivateLink.&lt;/p&gt;

&lt;p&gt;When you configure OAuth providers with Fider, the authentication requests go to the OAuth provider's endpoints over the internet. This involves external APIs that cannot be routed through VPC Interface Endpoints directly, since these OAuth services are external to your AWS infrastructure and not hosted within your VPC or private AWS environment.&lt;/p&gt;

&lt;p&gt;It will also cost you more than a NAT gateway; check out this &lt;a href="https://repost.aws/questions/QUmfyiKedjTd225PQS7MlHQQ/vpc-nat-gateway-vs-vpc-endpoint-pricing" rel="noopener noreferrer"&gt;post&lt;/a&gt; from the official AWS forum for more clarification. Only choose this method if security is paramount.&lt;/p&gt;
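&lt;p&gt;If you do go the PrivateLink route, the endpoints an ECS-on-Fargate deployment typically needs (ECR for image pulls, CloudWatch Logs, and an S3 gateway endpoint, since ECR stores image layers in S3) can be sketched like this; the IDs are placeholders and the region in the service names should match yours:&lt;/p&gt;

```shell
# Interface endpoints for pulling images from ECR and shipping logs.
for SVC in com.amazonaws.us-east-1.ecr.api \
           com.amazonaws.us-east-1.ecr.dkr \
           com.amazonaws.us-east-1.logs; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name "$SVC" \
    --subnet-ids subnet-private-a subnet-private-b \
    --security-group-ids sg-0123456789abcdef0
done

# Gateway endpoint for S3, attached to the private route table.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```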

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;⚠️ Network Isolation ≠ Identity Validation&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using a VPC origin does make the app private, but it does not guarantee identity-based access control. CloudFront doesn't authenticate requesters the way solutions like Cloudflare Zero Trust do, leaving the application vulnerable without proper identity verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  14. Final thoughts
&lt;/h2&gt;

&lt;p&gt;Let’s be real: no system is made of &lt;em&gt;adamantium&lt;/em&gt;. Given enough time, resources, or misconfigurations, even the most locked-down architecture can be poked at. But this setup doesn’t aim for invincibility; it aims to minimize exposure, limit blast radius, and make exploitation significantly harder and riskier.&lt;/p&gt;

&lt;p&gt;This architecture trades convenience for control, reducing your attack surface, slowing down adversaries, and giving you time and telemetry to respond. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploying Fider on AWS ECS: A Step-by-Step Guide to Deploy a Feedback Platform</title>
      <dc:creator>Mohamed Wasim</dc:creator>
      <pubDate>Tue, 05 Nov 2024 12:12:17 +0000</pubDate>
      <link>https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4</link>
      <guid>https://dev.to/ittrident/deploying-fider-on-aws-ecs-a-step-by-step-guide-to-deploy-a-feedback-platform-jl4</guid>
      <description>&lt;p&gt;This guide provides detailed instructions for deploying Fider on AWS ECS, with tasks running inside a private subnet and an internet-facing Application Load Balancer (ALB) set up as a custom origin for CloudFront(Cloudfront + WAF).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a detailed article, so buckle up! Since this setup closely mimics a production deployment and let’s be real, deploying Fider on AWS isn’t an easy feat, you’ll want to follow along carefully.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS services used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Container Registry (ECR)&lt;/li&gt;
&lt;li&gt;Elastic Container Service (ECS, Fargate launch type)&lt;/li&gt;
&lt;li&gt;Application Load Balancer (ALB)&lt;/li&gt;
&lt;li&gt;Relational Database Service (RDS)&lt;/li&gt;
&lt;li&gt;Systems Manager Parameter Store&lt;/li&gt;
&lt;li&gt;Simple Storage Service (S3)&lt;/li&gt;
&lt;li&gt;Simple Email Service (SES)&lt;/li&gt;
&lt;li&gt;CloudFront (CDN)&lt;/li&gt;
&lt;li&gt;Web Application Firewall (WAF)&lt;/li&gt;
&lt;li&gt;Key Management Service (KMS)&lt;/li&gt;
&lt;li&gt;AWS Certificate Manager (ACM)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;What is Fider?&lt;/li&gt;
&lt;li&gt;Fider's Tech Stack&lt;/li&gt;
&lt;li&gt;Pre-flight Checks&lt;/li&gt;
&lt;li&gt;Install AWS CLI&lt;/li&gt;
&lt;li&gt;Push the Docker Images to ECR&lt;/li&gt;
&lt;li&gt;Create a VPC&lt;/li&gt;
&lt;li&gt;Create Subnet Group for RDS&lt;/li&gt;
&lt;li&gt;Create an RDS PostgreSQL DB&lt;/li&gt;
&lt;li&gt;Create Parameters in Systems Manager Parameter Store&lt;/li&gt;
&lt;li&gt;Create an IAM User&lt;/li&gt;
&lt;li&gt;Create an S3 Bucket&lt;/li&gt;
&lt;li&gt;Create an Identity in SES&lt;/li&gt;
&lt;li&gt;Why Use ECS Instead of EC2&lt;/li&gt;
&lt;li&gt;Create an ECS Cluster&lt;/li&gt;
&lt;li&gt;Create Task Definition&lt;/li&gt;
&lt;li&gt;Create an ECS Service&lt;/li&gt;
&lt;li&gt;Create CloudFront Distribution with WAF&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Fider?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqncq40lxhg0i4zuwjqmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqncq40lxhg0i4zuwjqmj.png" alt="Image description" width="256" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, okay, so, listen up! Fider is like this thing where people can go and, like, share their opinions on stuff, right? Imagine you’ve got a big ol' bag of potato chips (don’t ask me why I’m talking about chips, it just seems right) and you wanna know what everyone thinks about the flavor. Fider’s the place where people can give feedback on stuff like that, but, you know, for real stuff like websites and apps.&lt;/p&gt;

&lt;p&gt;So, you go there, type up your thoughts, and boom, other people can vote on whether they agree or disagree, like they're choosing between watching Family Guy or some other dumb show. But here’s the kicker: Fider lets people actually vote and prioritize the feedback they care about!&lt;/p&gt;

&lt;p&gt;So, yeah, that’s Fider. Kinda like a big opinion poll but with a lot more techy stuff behind it. And you can totally use it for, you know, not chips...&lt;/p&gt;

&lt;p&gt;You can check out Fider's official page for more details: &lt;a href="https://fider.io/" rel="noopener noreferrer"&gt;https://fider.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide focuses on deploying Fider on AWS, but you can also self-host it on other platforms like &lt;strong&gt;Heroku&lt;/strong&gt;, &lt;strong&gt;Azure&lt;/strong&gt;, or your own servers. For more details, check out the &lt;a href="https://docs.fider.io/self-hosted/" rel="noopener noreferrer"&gt;official Fider self-hosting documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fider's Tech Stack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt;: Go (Golang), with custom HTTP handling.&lt;br&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: React, using SCSS(Sassy CSS) for styling.&lt;br&gt;
&lt;strong&gt;Database&lt;/strong&gt;: PostgreSQL.&lt;br&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: OAuth 2.0.&lt;br&gt;
&lt;strong&gt;Background Jobs&lt;/strong&gt;: Go routines.&lt;br&gt;
&lt;strong&gt;Cloud Storage and Email Integration&lt;/strong&gt;: S3 for blob storage and SES for email notifications.&lt;/p&gt;
&lt;h3&gt;
  
  
  Pre-flight checks
&lt;/h3&gt;

&lt;p&gt;Before starting, ensure that you have an AWS account with access to the free tier or a valid payment method on file.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install AWS CLI
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;You need the awscli package installed to push locally built images to Elastic Container Registry (ECR).&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2:
&lt;/h4&gt;

&lt;p&gt;If you use an Ubuntu distro, check the snap version using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  snap version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following snap install command for the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;aws-cli &lt;span class="nt"&gt;--classic&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the installation is complete, verify the installation by using the command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  aws &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jbmqiyzkt4ubzkhhnnk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jbmqiyzkt4ubzkhhnnk.jpeg" alt="Image description" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the Snap package you always get the latest version of the AWS CLI, as snap packages refresh automatically.&lt;/p&gt;

&lt;p&gt;For command line installer check out the documentation: &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Configure AWS CLI
&lt;/h4&gt;

&lt;p&gt;Use this command to configure the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once configured, verify your AWS CLI credentials using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  aws configure list 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: Pull the official Docker image from Docker Hub
&lt;/h4&gt;

&lt;p&gt;Pull the official Docker image for Fider from the Docker Hub registry at: &lt;a href="https://hub.docker.com/r/getfider/fider/tags" rel="noopener noreferrer"&gt;https://hub.docker.com/r/getfider/fider/tags&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the docker pull command to pull the image from the DockerHub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker pull getfider/fider:main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00pmk1k3mu2w4b2vrjy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00pmk1k3mu2w4b2vrjy5.png" alt="Image description" width="735" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check if the docker image is available in your instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker images 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0jdzadv7bo7hjg5ogwh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0jdzadv7bo7hjg5ogwh.png" alt="Image description" width="660" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Push the Docker images to ECR
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Step 1: Visit the ECR console
&lt;/h4&gt;

&lt;p&gt;Create a public or private repository based on your use case by clicking on the create repository prompt in the top right corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eyde058uerav0n99lgd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eyde058uerav0n99lgd.png" alt="Image description" width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the repo getfider/fider.&lt;/p&gt;

&lt;p&gt;Check if the repository is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dfquszu9rzm03ewfko5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dfquszu9rzm03ewfko5.png" alt="Image description" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: View the push commands
&lt;/h4&gt;

&lt;p&gt;View the push commands by clicking on the respective repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgngc3atvaf2f90nk060.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgngc3atvaf2f90nk060.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the AWS CLI to retrieve an authentication token and authenticate your Docker client to your registry. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqtyhl7vpvh8efzwbex8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqtyhl7vpvh8efzwbex8.png" alt="Image description" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tag your pulled image so you can push it to this repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77f4qya27o96gvc094tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77f4qya27o96gvc094tm.png" alt="Image description" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to push this image to your newly created AWS repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9k0fysnztbok4zdr20i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9k0fysnztbok4zdr20i.png" alt="Image description" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;
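&lt;p&gt;The commands in the screenshots above follow this standard ECR pattern; the account ID, region, and tag below are placeholders, so substitute your own:&lt;/p&gt;

```shell
# Authenticate the Docker client against your private ECR registry.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the pulled image with the ECR repository URI, then push it.
docker tag getfider/fider:main 123456789012.dkr.ecr.us-east-1.amazonaws.com/getfider/fider:main
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/getfider/fider:main
```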

&lt;p&gt;The docker images should now be available in the repository you created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nup2dse4a0hp1ztz6k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nup2dse4a0hp1ztz6k1.png" alt="Image description" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a VPC
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Step 1: Create a standalone VPC for deploying Fider application
&lt;/h4&gt;

&lt;p&gt;Login to the management console then navigate to the VPC service console and click on Create VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froxm3lyorwu7zu80mq5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froxm3lyorwu7zu80mq5s.png" alt="Image description" width="800" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the VPC only for now; you can create public and private subnets manually and attach them to particular route tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh5fyz3wim4vixqjwine.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh5fyz3wim4vixqjwine.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A route table will also be created with a route for local traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8feb67ocak9wtfdx1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8feb67ocak9wtfdx1h.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Create an Internet gateway
&lt;/h4&gt;

&lt;p&gt;Create an IGW and attach it to the VPC you created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90w855bq467tu2jl89pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90w855bq467tu2jl89pv.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4nr8oyesbvqf7gxiqw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4nr8oyesbvqf7gxiqw7.png" alt="Image description" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IGW is now attached to the VPC.&lt;/p&gt;

&lt;p&gt;Associate the IGW with the route table by clicking on edit routes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr0axktlpivlgst60ns9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr0axktlpivlgst60ns9.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking on Add route, attach the IGW to the route table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2y0tjik0idto209wagl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2y0tjik0idto209wagl.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The route table now has a route to the IGW, which means the subnets associated with this route table can have Internet access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3dw4b0csbq1alhnbp4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3dw4b0csbq1alhnbp4t.png" alt="Image description" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;
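&lt;p&gt;If you prefer scripting, the console steps above map to a few AWS CLI calls; the CIDR block and resource IDs are placeholders:&lt;/p&gt;

```shell
# Create the VPC; note the VpcId in the response for the following calls.
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create an internet gateway and attach it to the VPC.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
  --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Add a default route to the IGW in the public route table.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
```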

&lt;h4&gt;
  
  
  Step 3: Create Public Subnets
&lt;/h4&gt;

&lt;p&gt;Create at least two public subnets so that the Application Load Balancer can reside in them.&lt;/p&gt;

&lt;p&gt;Navigate to the VPC Console, under the subnets section create two public subnets.&lt;/p&gt;

&lt;p&gt;Decide the subnet CIDR range and provision it accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e552dqpozf14a3mfvwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e552dqpozf14a3mfvwu.png" alt="Image description" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Associate the public subnets to the public route table where the IGW is also attached.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eax24jnq32dxuiwcul8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eax24jnq32dxuiwcul8.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;
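&lt;p&gt;Scripted, the public subnets and their route-table association look like this; the AZs, CIDRs, and IDs are placeholders:&lt;/p&gt;

```shell
# One public subnet per availability zone (the ALB needs at least two AZs).
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.2.0/24 --availability-zone us-east-1b

# Associate each subnet with the public route table (the one routing to the IGW).
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-public-a
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-public-b
```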

&lt;h4&gt;
  
  
  Step 4: Create Private Subnets
&lt;/h4&gt;

&lt;p&gt;Create at least two private subnets, so that your ECS tasks and RDS instance can reside in them.&lt;/p&gt;

&lt;p&gt;Navigate to the VPC Console, under the subnets section create at least two private subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4dr0dgs1elkb9kw98tw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4dr0dgs1elkb9kw98tw.png" alt="Image description" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the Actions dropdown, choose Edit subnet settings and uncheck Enable auto-assign public IPv4 address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywlviu2piima439ztuqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywlviu2piima439ztuqr.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Create a Private route table
&lt;/h4&gt;

&lt;p&gt;Create a private route table in the VPC console under the route tables section to route traffic locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvdrqs25r44z85qpsgyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvdrqs25r44z85qpsgyf.png" alt="Image description" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiio8rnesf2fi6ylt1u2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiio8rnesf2fi6ylt1u2.png" alt="Image description" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Associate the private subnets to the private route table, like how you did with the public route table, and follow the same steps.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Create an Elastic IP
&lt;/h4&gt;

&lt;p&gt;Allocate an Elastic IP address to associate with the NAT gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5wdrvrmy16qljuqve6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5wdrvrmy16qljuqve6o.png" alt="Image description" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Allocate Elastic IP address&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyawnn5qhbiif4m19zqwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyawnn5qhbiif4m19zqwc.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Public IPv4 address pool, choose Amazon's pool of IPv4 addresses and click Allocate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttrndfcsrujtgi1b0o6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttrndfcsrujtgi1b0o6y.png" alt="Image description" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should now see the static Elastic IP allocated to you; it can now be attached to the NAT gateway.&lt;/p&gt;
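&lt;p&gt;If you prefer the CLI, the allocation step can be sketched as follows (the Name tag is a hypothetical example; requires configured AWS credentials):&lt;/p&gt;

```shell
# Allocate an Elastic IP from Amazon's IPv4 pool; the Name tag is a hypothetical example
aws ec2 allocate-address \
  --domain vpc \
  --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=fider-nat-eip}]'
# The response includes an AllocationId (eipalloc-...) that you pass when creating the NAT gateway
```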

&lt;h4&gt;
  
  
  Step 7: Create a NAT gateway
&lt;/h4&gt;

&lt;p&gt;Create a NAT gateway to allow tasks in the private subnet to connect to the internet.&lt;/p&gt;

&lt;p&gt;Navigate to the NAT gateways section under the VPC console and create a NAT gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyljs1pp2o8ldo9m68t4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyljs1pp2o8ldo9m68t4.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Place it in a public subnet and associate the Elastic IP address with it.&lt;/p&gt;

&lt;p&gt;Set the connectivity type to public.&lt;/p&gt;

&lt;p&gt;Attach the NAT gateway to the private route table that has three private subnets associated with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7huv836zbdrcvftub75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7huv836zbdrcvftub75.png" alt="Image description" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The private route table now has a route to the NAT gateway.&lt;/p&gt;
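&lt;p&gt;The NAT gateway and routing steps above can be sketched with the AWS CLI like this (all IDs are placeholders; requires configured credentials):&lt;/p&gt;

```shell
# Create a public NAT gateway in a public subnet, using the Elastic IP's allocation ID
aws ec2 create-nat-gateway \
  --subnet-id subnet-PUBLIC_EXAMPLE \
  --allocation-id eipalloc-EXAMPLE \
  --connectivity-type public

# Route all internet-bound traffic from the private route table through the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-PRIVATE_EXAMPLE \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-EXAMPLE
```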

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb534p084qe7wdx438n7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb534p084qe7wdx438n7.png" alt="Image description" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your VPC resource map should now look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdeitwgde21em5p7v55n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdeitwgde21em5p7v55n.png" alt="Image description" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Subnet Group for RDS
&lt;/h2&gt;

&lt;p&gt;Navigate to the RDS service console and click on Subnet groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth1ldgbcg6n174h1yzn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth1ldgbcg6n174h1yzn7.png" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create DB subnet group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k1c9rjle20gsdmme2y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k1c9rjle20gsdmme2y6.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill out the subnet group details, add the private subnets, and choose the corresponding Availability Zones (AZs).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsi5lhq4vpxg3nnt0q8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsi5lhq4vpxg3nnt0q8u.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you choose to launch Multi-AZ DB clusters, you must select three subnets in three different Availability Zones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig839buhfkdpdwp94ah3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig839buhfkdpdwp94ah3.png" alt="Image description" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create; your subnet group will now be available in the subnet groups dashboard.&lt;/p&gt;
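&lt;p&gt;The same subnet group can be created from the CLI, roughly like this (the group name and subnet IDs are placeholders):&lt;/p&gt;

```shell
# Create a DB subnet group spanning the three private subnets
aws rds create-db-subnet-group \
  --db-subnet-group-name fider-db-subnet-group \
  --db-subnet-group-description "Private subnets for the Fider RDS instance" \
  --subnet-ids subnet-PRIVATE_A subnet-PRIVATE_B subnet-PRIVATE_C
```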

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ut1qdvtjrgz29u29kb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ut1qdvtjrgz29u29kb.png" alt="Image description" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an RDS PostgreSQL DB
&lt;/h2&gt;

&lt;p&gt;Navigate to the RDS service console from the Management Console and click on DB instances to create an RDS DB instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1apv0dn4i7kmhdb4cpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1apv0dn4i7kmhdb4cpg.png" alt="Image description" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kkcxc3bz2sxqs1zqw1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kkcxc3bz2sxqs1zqw1u.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Standard create and select PostgreSQL as the engine type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsdg2etmmj2mplzlv1tx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsdg2etmmj2mplzlv1tx.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the latest PostgreSQL engine version. For this guide, I chose a single DB instance under the Availability and durability section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz3efahsbbu9jpkyiqq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz3efahsbbu9jpkyiqq4.png" alt="Image description" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Credential settings, provide postgres as the master username and choose Self managed to use your own password for authenticating with RDS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feokhchvdjt9oxhoeesm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feokhchvdjt9oxhoeesm6.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this guide, I chose Burstable classes; depending on your use case, you can fine-tune the DB instance class. Storage can be left at the default unless you choose to customise it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo0e8nrwa020loz0ptom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo0e8nrwa020loz0ptom.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the Connectivity section, choose the VPC you created (fider-vpc); the subnet group you created previously will appear under the DB subnet group section.&lt;/p&gt;

&lt;p&gt;Choose No under the Public access section.&lt;/p&gt;

&lt;p&gt;Now, before proceeding to the VPC security group section, navigate to the VPC console and create security groups for both RDS and ECS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4pk8q9mpsg28qlolsta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4pk8q9mpsg28qlolsta.png" alt="Image description" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7i7rmnvrpnuzfd84jjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7i7rmnvrpnuzfd84jjm.png" alt="Image description" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The security group rules above are for the ECS service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2ds6q07uovnkw8wpuow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2ds6q07uovnkw8wpuow.png" alt="Image description" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The security group rule above is for RDS; it has no outbound rules set.&lt;/p&gt;
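&lt;p&gt;As a sketch, the two security groups and the RDS ingress rule can be created with the CLI like this (the VPC ID and group names are placeholders):&lt;/p&gt;

```shell
# Security group for the ECS tasks
ECS_SG=$(aws ec2 create-security-group \
  --group-name fider-ecs-sg --description "Fider ECS tasks" \
  --vpc-id vpc-EXAMPLE --query GroupId --output text)

# Security group for RDS
RDS_SG=$(aws ec2 create-security-group \
  --group-name fider-rds-sg --description "Fider RDS instance" \
  --vpc-id vpc-EXAMPLE --query GroupId --output text)

# Allow only the ECS tasks to reach PostgreSQL (port 5432) on RDS
aws ec2 authorize-security-group-ingress \
  --group-id "$RDS_SG" --protocol tcp --port 5432 \
  --source-group "$ECS_SG"
```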

&lt;p&gt;Now, back in the RDS console where we left off:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2rzxlyr3nkwfkv7vs0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2rzxlyr3nkwfkv7vs0z.png" alt="Image description" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the security group you created and the Availability Zone where you want to host the DB instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q1q5jf3ch4l2k3k6dqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q1q5jf3ch4l2k3k6dqu.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set database authentication to password based. I disabled monitoring; you can enable it if you want.&lt;/p&gt;

&lt;p&gt;Set the initial database name to fider for the DB instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsl9hdpfiddo5no38tzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsl9hdpfiddo5no38tzq.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose the settings above; you can adjust them if you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg6veg398nhrutlibs3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg6veg398nhrutlibs3x.png" alt="Image description" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Create database to provision an RDS PostgreSQL DB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltuznyrg3upn5kt142dp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltuznyrg3upn5kt142dp.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The RDS PostgreSQL DB is now available!&lt;/p&gt;
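&lt;p&gt;For reference, the console steps above map roughly to this CLI call (the identifier, instance class, and password are placeholders, not the values used in this guide):&lt;/p&gt;

```shell
# Provision a private, single-AZ PostgreSQL instance with an initial database named "fider"
aws rds create-db-instance \
  --db-instance-identifier fider-db \
  --engine postgres \
  --db-instance-class db.t4g.micro \
  --allocated-storage 20 \
  --master-username postgres \
  --master-user-password 'CHANGE_ME' \
  --db-subnet-group-name fider-db-subnet-group \
  --vpc-security-group-ids sg-RDS_EXAMPLE \
  --db-name fider \
  --no-publicly-accessible
```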

&lt;h2&gt;
  
  
  Create Parameters in Systems Manager Parameter Store
&lt;/h2&gt;

&lt;p&gt;Since Fider does not currently appear to have built-in logic to fetch credentials or secrets from AWS Secrets Manager, I used Parameter Store to hold Fider's environment variables so that ECS tasks can fetch them as they start up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbi2x7s86tzj162mi7l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbi2x7s86tzj162mi7l4.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fider supports the environment variables above. I used the Standard tier and the SecureString type to encrypt sensitive data with KMS keys.&lt;/p&gt;
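&lt;p&gt;As an example, Fider's DATABASE_URL can be assembled and stored as a SecureString parameter like this (the host and password are placeholders, and the sslmode query parameter is an assumption based on standard PostgreSQL connection strings, not something this guide's setup requires):&lt;/p&gt;

```shell
# Assemble the PostgreSQL connection string Fider expects in DATABASE_URL
# (placeholder host and password; replace with your RDS endpoint and master password)
DB_USER=postgres
DB_PASS='CHANGE_ME'
DB_HOST='fider-db.EXAMPLE.us-east-2.rds.amazonaws.com'
DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/fider?sslmode=require"
echo "$DATABASE_URL"

# Store it encrypted in Parameter Store (requires configured AWS credentials):
#   aws ssm put-parameter --name DATABASE_URL --type SecureString --value "$DATABASE_URL"
```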

&lt;h2&gt;
  
  
  Create an IAM user
&lt;/h2&gt;

&lt;p&gt;Fider supports the use of Amazon S3 for blob storage and Amazon SES for sending emails. However, simply passing the access keys through environment variables will not automatically make the images appear in the S3 bucket, nor will it enable SES for email sending.&lt;/p&gt;

&lt;p&gt;The application will use the permissions granted to this user, which allow it to perform actions on the user's behalf.&lt;/p&gt;

&lt;p&gt;Navigate to the IAM console, create a user, and attach the policies necessary for accessing the S3 bucket and the SES service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbq43063l7h4wuo8av0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbq43063l7h4wuo8av0h.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've granted full access to S3 and SES for this guide, but this isn't the recommended approach. Instead, you should create fine-grained access policies and attach them to the user.&lt;/p&gt;
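&lt;p&gt;The user setup can be sketched with the CLI as follows (the user name is a placeholder; the managed policies shown are the broad ones used in this guide, not a least-privilege setup):&lt;/p&gt;

```shell
# Create the IAM user the Fider tasks will authenticate as
aws iam create-user --user-name fider-app

# Attach broad managed policies (fine for a demo; prefer scoped policies in production)
aws iam attach-user-policy --user-name fider-app \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name fider-app \
  --policy-arn arn:aws:iam::aws:policy/AmazonSESFullAccess

# Generate the access key pair to store in Parameter Store (never commit these)
aws iam create-access-key --user-name fider-app
```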

&lt;h2&gt;
  
  
  Create an S3 bucket
&lt;/h2&gt;

&lt;p&gt;Navigate to the S3 bucket console and create a general purpose S3 bucket for blob storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59g5vruko09zjuy9h6hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59g5vruko09zjuy9h6hv.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've created the bucket, navigate to the Permissions tab and check the Bucket Policy section. It’s likely empty, right? Well, it shouldn’t be; if left empty, the S3 bucket will remain empty forever.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Principal"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"AWS"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;ACCOUNT_ID&amp;gt;:user/&amp;lt;USER_NAME&amp;gt;"&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="s2"&gt;"s3:*"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:s3:::&amp;lt;BUCKET_NAME&amp;gt;"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy is too permissive; you can create more fine-grained policies to restrict access.&lt;/p&gt;
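&lt;p&gt;As one possible sketch of a tighter policy, you could limit the user to object reads and writes plus bucket listing (ACCOUNT_ID, USER_NAME, and BUCKET_NAME are placeholders):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_NAME" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_NAME" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    }
  ]
}
```

Note that object-level actions apply to the `/*` resource while `s3:ListBucket` applies to the bucket ARN itself.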

&lt;h2&gt;
  
  
  Create an identity in SES
&lt;/h2&gt;

&lt;p&gt;The application requires email verification as part of the signup process to prevent spam and ensure only valid users access the platform. I chose the SES API route instead of SMTP.&lt;/p&gt;

&lt;p&gt;Navigate to the SES service console and start creating an identity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmalx0ivf03isd64kwkr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmalx0ivf03isd64kwkr7.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose to go with domain verification. Once you've created the identity via the domain, make sure to validate the records with your DNS provider.&lt;/p&gt;

&lt;p&gt;You can send a test mail from the verified identities to check the workflow.&lt;/p&gt;

&lt;p&gt;If you plan to run this setup in production, you can request production access from Amazon support. Otherwise, you can remain in the sandbox environment.&lt;/p&gt;
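&lt;p&gt;With the SESv2 CLI, the domain identity steps can be sketched like this (example.com is a placeholder domain):&lt;/p&gt;

```shell
# Create a domain identity; SES returns DKIM tokens to publish as CNAME records with your DNS provider
aws sesv2 create-email-identity --email-identity example.com

# After the DNS records propagate, confirm the identity's verification status
aws sesv2 get-email-identity --email-identity example.com
```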

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffeab894u4lt18akjm0gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffeab894u4lt18akjm0gw.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use ECS instead of EC2
&lt;/h2&gt;

&lt;p&gt;There is one major reason to use ECS instead of EC2:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Reduced Operational Overhead:
&lt;/h4&gt;

&lt;p&gt;EC2 is cheaper on paper, but it requires careful capacity management and container resource planning, plus Reserved Instance commitments or Spot Fleet configurations. In the end, 99% of the time the savings don't outweigh the administrative overhead.&lt;/p&gt;

&lt;p&gt;It depends on how much control you want: if you want less operational overhead, go with Fargate. And yes, if you go with the EC2 launch type, the EC2 instances are run and managed by you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an ECS cluster
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;Navigate to the ECS console and create an ECS cluster with Fargate launch type.&lt;/p&gt;

&lt;p&gt;Click on Create cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6q208ji14f1smu5nqc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6q208ji14f1smu5nqc4.png" alt="Image description" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefqncbblldbj4nrcu3s4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefqncbblldbj4nrcu3s4.png" alt="Image description" width="800" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiznr8v5fbxjblhjzgfo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiznr8v5fbxjblhjzgfo0.png" alt="Image description" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I turned on Container Insights for this guide to see aggregated metrics at the cluster and service level; this lets you run deep-dive analysis.&lt;/p&gt;

&lt;p&gt;This will create the ECS cluster via CloudFormation behind the scenes.&lt;/p&gt;
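&lt;p&gt;The equivalent CLI sketch (the cluster name is a placeholder):&lt;/p&gt;

```shell
# Create a Fargate cluster with Container Insights enabled
aws ecs create-cluster \
  --cluster-name fider-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --settings name=containerInsights,value=enabled
```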

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnbw9mr10o6j1j5u7ccg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnbw9mr10o6j1j5u7ccg.png" alt="Image description" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Task Definition
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;Navigate to the task definition section under the ECS console and create a task definition named fider-app-task-definition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e8xvg1kmcbzomrv0zqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e8xvg1kmcbzomrv0zqy.png" alt="Image description" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The task definition can be created via either the console or JSON.&lt;/p&gt;

&lt;p&gt;Refer to the following JSON when creating the task definition via JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"taskDefinitionArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ecs:us-east-2:624184658995:task-definition/fider-app-task-definition:7"&lt;/span&gt;,
    &lt;span class="s2"&gt;"containerDefinitions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"fider-app"&lt;/span&gt;,
            &lt;span class="s2"&gt;"image"&lt;/span&gt;: &lt;span class="s2"&gt;"624184658995.dkr.ecr.us-east-2.amazonaws.com/getfider/fider:main"&lt;/span&gt;,
            &lt;span class="s2"&gt;"cpu"&lt;/span&gt;: 2048,
            &lt;span class="s2"&gt;"memory"&lt;/span&gt;: 8192,
            &lt;span class="s2"&gt;"memoryReservation"&lt;/span&gt;: 4096,
            &lt;span class="s2"&gt;"portMappings"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"fider-app-3000-tcp"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"containerPort"&lt;/span&gt;: 3000,
                    &lt;span class="s2"&gt;"hostPort"&lt;/span&gt;: 3000,
                    &lt;span class="s2"&gt;"protocol"&lt;/span&gt;: &lt;span class="s2"&gt;"tcp"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"appProtocol"&lt;/span&gt;: &lt;span class="s2"&gt;"http"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"essential"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"environment"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
            &lt;span class="s2"&gt;"mountPoints"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
            &lt;span class="s2"&gt;"volumesFrom"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
            &lt;span class="s2"&gt;"secrets"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BASE_URL"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BASE_URL"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"DATABASE_URL"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/DATABASE_URL"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"JWT_SECRET"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/JWT_SECRET"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"GO_ENV"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/GO_ENV"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"EMAIL_NOREPLY"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/EMAIL_NOREPLY"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"EMAIL_AWSSES_ACCESS_KEY_ID"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/EMAIL_AWSSES_ACCESS_KEY_ID"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"EMAIL_AWSSES_SECRET_ACCESS_KEY"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/EMAIL_AWSSES_SECRET_ACCESS_KEY"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"EMAIL_AWSSES_REGION"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/EMAIL_AWSSES_REGION"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE_S3_ACCESS_KEY_ID"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE_S3_ACCESS_KEY_ID"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE_S3_BUCKET"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE_S3_BUCKET"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE_S3_ENDPOINT_URL"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE_S3_ENDPOINT_URL"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE_S3_REGION"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE_S3_REGION"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"BLOB_STORAGE_S3_SECRET_ACCESS_KEY"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:ssm:us-east-2:624184658995:parameter/BLOB_STORAGE_S3_SECRET_ACCESS_KEY"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"logConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"logDriver"&lt;/span&gt;: &lt;span class="s2"&gt;"awslogs"&lt;/span&gt;,
                &lt;span class="s2"&gt;"options"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"awslogs-group"&lt;/span&gt;: &lt;span class="s2"&gt;"/ecs/fider-app-task"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"mode"&lt;/span&gt;: &lt;span class="s2"&gt;"non-blocking"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"awslogs-create-group"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"max-buffer-size"&lt;/span&gt;: &lt;span class="s2"&gt;"25m"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"awslogs-region"&lt;/span&gt;: &lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"awslogs-stream-prefix"&lt;/span&gt;: &lt;span class="s2"&gt;"ecs"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"secretOptions"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"systemControls"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"family"&lt;/span&gt;: &lt;span class="s2"&gt;"fider-app-task-definition"&lt;/span&gt;,
    &lt;span class="s2"&gt;"taskRoleArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::624184658995:role/ecsTaskExecutionRole"&lt;/span&gt;,
    &lt;span class="s2"&gt;"executionRoleArn"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:iam::624184658995:role/ecsTaskExecutionRole"&lt;/span&gt;,
    &lt;span class="s2"&gt;"networkMode"&lt;/span&gt;: &lt;span class="s2"&gt;"awsvpc"&lt;/span&gt;,
    &lt;span class="s2"&gt;"revision"&lt;/span&gt;: 7,
    &lt;span class="s2"&gt;"volumes"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
    &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"ACTIVE"&lt;/span&gt;,
    &lt;span class="s2"&gt;"requiresAttributes"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"ecs.capability.execution-role-awslogs"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.ecr-auth"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.docker-remote-api.1.28"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.docker-remote-api.1.21"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.task-iam-role"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"ecs.capability.execution-role-ecr-pull"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"ecs.capability.secrets.ssm.environment-variables"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.docker-remote-api.1.18"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"ecs.capability.task-eni"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.docker-remote-api.1.29"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.logging-driver.awslogs"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"com.amazonaws.ecs.capability.docker-remote-api.1.19"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"placementConstraints"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
    &lt;span class="s2"&gt;"compatibilities"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"EC2"&lt;/span&gt;,
        &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"requiresCompatibilities"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"cpu"&lt;/span&gt;: &lt;span class="s2"&gt;"2048"&lt;/span&gt;,
    &lt;span class="s2"&gt;"memory"&lt;/span&gt;: &lt;span class="s2"&gt;"8192"&lt;/span&gt;,
    &lt;span class="s2"&gt;"runtimePlatform"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"cpuArchitecture"&lt;/span&gt;: &lt;span class="s2"&gt;"X86_64"&lt;/span&gt;,
        &lt;span class="s2"&gt;"operatingSystemFamily"&lt;/span&gt;: &lt;span class="s2"&gt;"LINUX"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"registeredAt"&lt;/span&gt;: &lt;span class="s2"&gt;"2025-02-18T06:06:55.123Z"&lt;/span&gt;,
    &lt;span class="s2"&gt;"registeredBy"&lt;/span&gt;: &lt;span class="s2"&gt;"arn:aws:sts::624184658995:assumed-role/AWSReservedSSO_AdministratorAccess_263ab8d7ae88c1c4/wasim"&lt;/span&gt;,
    &lt;span class="s2"&gt;"tags"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the task definition above, I have referenced the environment variables created in Systems Manager Parameter Store by their resource ARNs.&lt;/p&gt;
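&lt;p&gt;As a rough sketch (the parameter name and region follow this guide, but the connection-string values are placeholders), each of these parameters can be created as a KMS-encrypted SecureString from the CLI before registering the task definition:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create one KMS-encrypted parameter; repeat for each variable the task needs
aws ssm put-parameter \
    --region us-east-2 \
    --name "DATABASE_URL" \
    --type SecureString \
    --value "postgres://fider:DB_PASSWORD@RDS_ENDPOINT:5432/fider?sslmode=require"
&lt;/code&gt;&lt;/pre&gt;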

&lt;p&gt;You also need the following policies attached to the &lt;em&gt;ecsTaskExecutionRole&lt;/em&gt;, in addition to &lt;em&gt;AmazonECSTaskExecutionRolePolicy&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;AmazonSSMFullAccess&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;ROSAKMSProviderPolicy&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsasjikg6da3zgmfqnzbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsasjikg6da3zgmfqnzbx.png" alt="Image description" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The attached policies give the ECS agent the permissions required to fetch the KMS-encrypted environment variables from the Systems Manager Parameter Store, decrypt them, and provide them to the application running inside the ECS tasks.&lt;/p&gt;

&lt;p&gt;I have attached full-access policies here, but that is usually not the right approach. You should craft fine-grained access policies and attach them to the &lt;em&gt;ecsTaskExecutionRole&lt;/em&gt; instead.&lt;/p&gt;
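&lt;p&gt;One possible fine-grained replacement looks like this (a sketch only; the KMS key ID is a placeholder, and the parameter path can be narrowed further to just the parameters Fider uses). The execution role only needs to read the parameters and decrypt them:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadFiderParameters",
            "Effect": "Allow",
            "Action": "ssm:GetParameters",
            "Resource": "arn:aws:ssm:us-east-2:624184658995:parameter/*"
        },
        {
            "Sid": "DecryptFiderParameters",
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-2:624184658995:key/YOUR_KMS_KEY_ID"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;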

&lt;h2&gt;
  
  
  Create an ECS service
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;Navigate to the ECS cluster you created and open it. Once inside the cluster, click on Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8whkplvje0h4uagsp848.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8whkplvje0h4uagsp848.png" alt="Image description" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Configuration parameters
&lt;/h4&gt;

&lt;p&gt;In the compute configuration, select the capacity provider strategy and choose FARGATE as the capacity provider, with a base of 1 and a weight of 100.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdbmo77mllbvaul9l8c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdbmo77mllbvaul9l8c6.png" alt="Image description" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Service as the application type in the deployment configuration and Replica as the service type, with Desired tasks set to 1. You can also set it to 2 or more; I have used one for this guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2soq3b73kuhf5w22or10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2soq3b73kuhf5w22or10.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the networking configuration choose the VPC you created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xot5yi7k0n25s76b0sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xot5yi7k0n25s76b0sz.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Turn off the public IP address, as the tasks will reach the internet through the NAT gateway.&lt;/p&gt;
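&lt;p&gt;The same service configuration can be sketched from the CLI (the cluster name, service name, and subnet/security-group IDs are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Fargate service running in private subnets with no public IP
aws ecs create-service \
    --cluster fider-cluster \
    --service-name fider-service \
    --task-definition fider-app-task-definition \
    --desired-count 1 \
    --capacity-provider-strategy capacityProvider=FARGATE,base=1,weight=100 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-PRIVATE_A,subnet-PRIVATE_B],securityGroups=[sg-FIDER],assignPublicIp=DISABLED}"
&lt;/code&gt;&lt;/pre&gt;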

&lt;p&gt;Configure the Application Load Balancer and place it inside the public subnets so it can act as the public-facing entry point for your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayjfu3b39yhqinvmjo35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayjfu3b39yhqinvmjo35.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a target group on the fly in the ECS console itself and set up an HTTPS listener for your ALB. For this deployment, I’ve chosen to handle the HTTP to HTTPS redirect at the CDN level, so you don’t need an HTTP listener for your ALB, as redirects won’t happen at that level.&lt;/p&gt;

&lt;p&gt;For the SSL certificate to appear in the dropdown, skip to &lt;strong&gt;step 3&lt;/strong&gt; from here and request a public certificate from the ACM console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdprrl5n8z9gvct4g921.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdprrl5n8z9gvct4g921.png" alt="Image description" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The health check path for Fider is &lt;strong&gt;/_health&lt;/strong&gt;.&lt;/p&gt;
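&lt;p&gt;If you created the target group outside the ECS wizard, the health check path can also be set from the CLI (the target group ARN is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws elbv2 modify-target-group \
    --target-group-arn TARGET_GROUP_ARN \
    --health-check-protocol HTTP \
    --health-check-path /_health
&lt;/code&gt;&lt;/pre&gt;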

&lt;p&gt;I have not enabled service auto scaling for this guide; you can enable it if you need it.&lt;/p&gt;

&lt;p&gt;Finally, click Create and navigate to CloudFormation to view the deployment status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jkx2y4twa4hdciswvbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jkx2y4twa4hdciswvbn.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhsp91fwb7pbkvwn6doc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhsp91fwb7pbkvwn6doc.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Password authentication with RDS PostgreSQL succeeded, and the database migrations ran!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x9414ee8apxfnuuefej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x9414ee8apxfnuuefej.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we have also enabled Container Insights, we can check the performance metrics of clusters, services, and tasks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you are not able to run the tasks or if you ran into errors like this,&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltyupghdwjyjtc7z00fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltyupghdwjyjtc7z00fm.png" alt="Image description" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This happens because the task was placed in a public subnet with no public IP assigned to it when you created the Application Load Balancer from the ECS service console itself. I ran into this issue too and had to troubleshoot the failed tasks.&lt;/p&gt;

&lt;p&gt;I used Systems Manager's &lt;strong&gt;AWSSupport-TroubleshootECSTaskFailedToStart&lt;/strong&gt; automation document, which diagnosed the issue for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbchd8rmhmq31miiedn5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbchd8rmhmq31miiedn5l.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Create the Application Load Balancer and target group separately, note the Availability Zones (AZs) you assigned to the ALB, and when you spin up the ECS tasks via the service, make sure you provide private subnets from the same Availability Zones (AZs).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Feel free to check this documentation when you run into issues!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgz8pz27m7u3ovy2m1uh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgz8pz27m7u3ovy2m1uh.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Use ACM for SSL certificates
&lt;/h4&gt;

&lt;p&gt;The service is up and running now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldial6vlxgh3o51md25z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldial6vlxgh3o51md25z.png" alt="Image description" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the ACM console and click on request a certificate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl0653a9fm7y68ws3l0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl0653a9fm7y68ws3l0s.png" alt="Image description" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Request a public certificate and click on Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55pl0dpbntd697xn17y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55pl0dpbntd697xn17y7.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide your domain name and choose the DNS validation method. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedujs6gc75ri364xy90v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedujs6gc75ri364xy90v.png" alt="Image description" width="800" height="790"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Request, and ACM provides you with the DNS records. Validate ownership by adding these records to your DNS registrar's settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q1psiv8oof0e6tybba3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q1psiv8oof0e6tybba3.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your SSL certificate is now ready to be used with the Application Load Balancer. You can map it under the Listener configuration for the ALB in the certificate section of the ECS service console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj6ky2d9x4wv8r7z3807.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj6ky2d9x4wv8r7z3807.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I’m setting up CloudFront, I’ve also chosen to enable WAF at the CDN level for this deployment, as CloudFront acts as the first point of contact and can filter traffic before it reaches the ALB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiah2wej9ufmnur6zu8w9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiah2wej9ufmnur6zu8w9.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since WAF is enabled at the CloudFront level and all traffic originates from there, you don’t need to enable WAF at the ALB level. Instead, I’ve added an &lt;strong&gt;X-Custom-Header&lt;/strong&gt; to restrict access to the Application Load Balancer (ALB), ensuring that only requests from CloudFront are allowed.&lt;/p&gt;
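&lt;p&gt;On the ALB side, that restriction can be sketched as a listener rule (the listener and target group ARNs, header name, and secret value are all placeholders): the rule forwards only requests carrying the shared-secret header, while the listener's default action returns 403 for everything else.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Forward only requests that carry the shared-secret header from CloudFront
aws elbv2 create-rule \
    --listener-arn HTTPS_LISTENER_ARN \
    --priority 1 \
    --conditions 'Field=http-header,HttpHeaderConfig={HttpHeaderName=X-Custom-Header,Values=[MY_SHARED_SECRET]}' \
    --actions Type=forward,TargetGroupArn=TARGET_GROUP_ARN
&lt;/code&gt;&lt;/pre&gt;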

&lt;h2&gt;
  
  
  Create Cloudfront distribution with WAF
&lt;/h2&gt;

&lt;p&gt;Navigate to the Integrations tab under the Application Load Balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furuyit24ium6weumgwsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furuyit24ium6weumgwsl.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Manage CloudFront + WAF integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo96l7t0k4e9vfbz3s6w3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo96l7t0k4e9vfbz3s6w3.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the check box to allow the CloudFront IP ranges as inbound sources for your Application Load Balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5jgcckcbrpook7gq0sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5jgcckcbrpook7gq0sf.png" alt="Image description" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an SSL/TLS certificate for the domain on which you would like to host Fider. CloudFront only accepts SSL/TLS certificates issued in the us-east-1 (N. Virginia) region.&lt;/p&gt;
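&lt;p&gt;For example, the CloudFront certificate can be requested directly in us-east-1 from the CLI (the domain name here is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws acm request-certificate \
    --region us-east-1 \
    --domain-name feedback.example.com \
    --validation-method DNS
&lt;/code&gt;&lt;/pre&gt;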

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu3j568e21v1ahjokeo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu3j568e21v1ahjokeo3.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you click Create, the WAF rules and the CloudFront distribution will be provisioned for your ALB.&lt;/p&gt;

&lt;p&gt;Now, navigate to the CloudFront distribution and click on the Behaviors tab to edit the settings of the default cache behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupwcuxu49508tz5ywd7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupwcuxu49508tz5ywd7f.png" alt="Image description" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose to handle the HTTP to HTTPS redirect at this level.&lt;/p&gt;

&lt;p&gt;Since Fider is a dynamic application, features like submitting feedback, voting, and commenting are processed in real time, and the application relies on a database to store and manage user data. So set the allowed HTTP methods to &lt;strong&gt;GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE&lt;/strong&gt;; if you choose either of the first two options, write operations such as PUT and POST will fail.&lt;/p&gt;

&lt;p&gt;Save it, then head over to the Origins tab and edit the ALB origin settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4ii2lz1ef51tmet8o7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4ii2lz1ef51tmet8o7i.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw39h4ahjqgetwcp69art.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw39h4ahjqgetwcp69art.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the protocol to HTTPS only; I chose to let CloudFront talk to my ALB over HTTPS. Then scroll further down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6nrqtyoawyxbp7yxfzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6nrqtyoawyxbp7yxfzl.png" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the custom header you configured on the ALB in the previous step; CloudFront will include this header in every request it makes to the ALB. Hitting the ALB DNS name without this header results in a 403 Forbidden error. &lt;/p&gt;
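&lt;p&gt;You can verify the header restriction with curl (the ALB DNS name, header name, and secret value are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Without the header: blocked by the listener's default 403 action
curl -s -o /dev/null -w "%{http_code}\n" https://ALB_DNS_NAME/

# With the header: forwarded to the Fider target group
curl -s -o /dev/null -w "%{http_code}\n" \
    -H "X-Custom-Header: MY_SHARED_SECRET" https://ALB_DNS_NAME/
&lt;/code&gt;&lt;/pre&gt;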

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F780su9gk0xq6v39od63t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F780su9gk0xq6v39od63t.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how your ALB resource map should look. It will only forward traffic to the target group if the string you configured in CloudFront is present in the request header.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw9n2ogzuz7vt8ey0syg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw9n2ogzuz7vt8ey0syg.png" alt="Image description" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, map the domain where you want to host Fider to the distribution domain name with a CNAME record in your DNS. Don't hit the distribution domain name directly; that will only result in a 502 error.&lt;/p&gt;
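
&lt;p&gt;If your DNS happens to be hosted in Route 53, the CNAME mapping can be created from the CLI. The hosted zone ID, domain, and distribution domain below are all placeholders:&lt;/p&gt;

```shell
# Hypothetical values - substitute your hosted zone ID, your domain,
# and the distribution domain name shown in the CloudFront console.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "fider.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}]
      }
    }]
  }'
```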

&lt;p&gt;Hit the domain name you configured in the DNS entry, and you should see the landing page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqg1xftpigfvgtskl4l8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqg1xftpigfvgtskl4l8.jpeg" alt="Image description" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The images you upload to the website are stored as objects in the S3 bucket. When you access one from the browser, you should see &lt;code&gt;X-Cache: Hit from cloudfront&lt;/code&gt; under the response headers in the network tab.&lt;/p&gt;
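
&lt;p&gt;You can also check the cache status from the command line. The image URL below is a placeholder; use any image URL served through your distribution:&lt;/p&gt;

```shell
# Hypothetical URL - use any image served through your distribution.
# A repeat request should show "x-cache: Hit from cloudfront";
# the first request may show "Miss from cloudfront" instead.
curl -sI "https://fider.example.com/static/images/logo.png" | grep -i x-cache
```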

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdphbjkzjmg32fzrl7as2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdphbjkzjmg32fzrl7as2.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can navigate to AWS WAF in the console and review the request metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibci5lurc2fhvtzl25e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibci5lurc2fhvtzl25e0.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxsw2e3n3cgk4ledbbcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxsw2e3n3cgk4ledbbcj.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also configure the managed rules in the WAF console under the Rules tab.&lt;/p&gt;

&lt;p&gt;Voila! There you have it. Fider is up and running, with its tasks securely inside their own private subnet and the internal ALB routing requests to them. Additionally, with CloudFront and AWS WAF configured, traffic is efficiently delivered via the CDN while being protected at the edge, ensuring secure and optimized access.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>ecs</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploying Node-Red on Azure Container Instance</title>
      <dc:creator>Manjula Rajamani</dc:creator>
      <pubDate>Thu, 05 Sep 2024 12:30:38 +0000</pubDate>
      <link>https://dev.to/ittrident/deploying-node-red-on-azure-container-instance-4n44</link>
      <guid>https://dev.to/ittrident/deploying-node-red-on-azure-container-instance-4n44</guid>
      <description>&lt;p&gt;This guide provides instructions for deploying the Node-Red application on the Azure platform, utilizing Azure Container Instances, Azure Container Registry, and Azure Storage Account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Azure Container Instance?&lt;/strong&gt;&lt;br&gt;
Azure Container Instances (ACI) is a managed service that allows you to run containers directly on the Microsoft Azure public cloud, without requiring the use of virtual machines (VMs).&lt;/p&gt;

&lt;p&gt;For more information about Azure Container Instances, check out the official documentation at this link: &lt;a href="https://learn.microsoft.com/en-us/azure/container-instances/" rel="noopener noreferrer"&gt;Azure Container Instances Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Azure Container Registry?&lt;/strong&gt;&lt;br&gt;
Azure Container Registry is a private registry service for building, storing, and managing container images and related artifacts.&lt;/p&gt;

&lt;p&gt;For more information about Azure Container Registry, check out the official documentation at this link: &lt;a href="https://learn.microsoft.com/en-us/azure/container-registry/" rel="noopener noreferrer"&gt;Azure Container Registry Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an Azure Storage account?&lt;/strong&gt;&lt;br&gt;
The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers highly available, massively scalable, durable, and secure storage for a variety of data objects in the cloud. Azure Storage data objects are accessible from anywhere in the world over HTTP or HTTPS via a REST API.&lt;/p&gt;

&lt;p&gt;For more information about the Azure Storage account, check out the official documentation at this link: &lt;a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview" rel="noopener noreferrer"&gt;Azure Storage account Documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Startup:&lt;/strong&gt;&lt;br&gt;
Before starting, ensure that you have an Azure account with an active subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Log in to your Azure account.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

az login


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 2:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a container registry and store the Node-RED image in the container registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As of Node-RED 1.0, the repository on Docker Hub was renamed to &lt;code&gt;nodered/node-red&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log in to a registry using Azure CLI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

az acr login --name myregistry
docker login myregistry.azurecr.io


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you are unfamiliar with creating an Azure Container Registry (ACR), you can refer to the following link for step-by-step instructions using the Azure portal: &lt;a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli" rel="noopener noreferrer"&gt;Get Started with Azure Container Registry&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Create an Azure file share:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Run the following script to create a storage account to host the file share, and the share itself. The storage account name must be globally unique, so the script adds a random value to the base string.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Change these parameters as needed
RESOURCE_GROUP=myResourceGroup
STORAGE_ACCOUNT_NAME=storageaccount$RANDOM
LOCATION=eastus
FILE_SHARE_NAME=node-red-share
IMAGE=testingregistrydevops.azurecr.io/node-red:latest
ACI_NAME=node-red

# Create the storage account with the parameters
az storage account create \
    --resource-group $RESOURCE_GROUP \
    --name $STORAGE_ACCOUNT_NAME \
    --location $LOCATION \
    --sku Standard_LRS

# Create the file share
az storage share-rm create \
    --resource-group $RESOURCE_GROUP \
    --storage-account $STORAGE_ACCOUNT_NAME \
    --name $FILE_SHARE_NAME \
    --quota 1024 \
    --enabled-protocols SMB \
    --output table


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 4:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Get storage credentials:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
To mount an Azure file share as a volume in Azure Container Instances, you need three values: the storage account name, the share name, and the storage access key.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage account name&lt;/strong&gt; - If you used the preceding script, the storage account name was stored in the &lt;code&gt;$STORAGE_ACCOUNT_NAME&lt;/code&gt; variable. To see the account name, type:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

echo $STORAGE_ACCOUNT_NAME


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Share name&lt;/strong&gt; - This value is already known (defined as &lt;code&gt;node-red-share&lt;/code&gt; in the preceding script). To see the file share name, type:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

echo $FILE_SHARE_NAME


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage account key&lt;/strong&gt; - This value can be found using the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

STORAGE_ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)
echo $STORAGE_ACCOUNT_KEY



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 5:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Deploy container and mount volume - CLI:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
To mount an Azure file share as a volume in a container by using the Azure CLI, specify the share and volume mount point when you create the container with az container create. If you followed the previous steps, you can mount the share you created earlier by using the following command to create a container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

az container create \
        --resource-group $RESOURCE_GROUP \
        --name $ACI_NAME \
        --image $IMAGE \
        --dns-name-label unique-acidemo-label \
        --ports 1880 \
        --azure-file-volume-account-name $STORAGE_ACCOUNT_NAME \
        --azure-file-volume-account-key $STORAGE_ACCOUNT_KEY \
        --azure-file-volume-share-name $FILE_SHARE_NAME \
        --azure-file-volume-mount-path /aci/logs/



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;--dns-name-label&lt;/code&gt; value must be unique within the Azure region where you create the container instance.&lt;/p&gt;
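
&lt;p&gt;The label becomes part of the container group's FQDN, which follows the pattern &lt;code&gt;&amp;lt;label&amp;gt;.&amp;lt;region&amp;gt;.azurecontainer.io&lt;/code&gt;. A quick sketch of the URL you would browse to, using the example values from this guide:&lt;/p&gt;

```shell
# Example values from this guide; Node-RED listens on port 1880.
DNS_LABEL=unique-acidemo-label
LOCATION=eastus
FQDN="${DNS_LABEL}.${LOCATION}.azurecontainer.io"
echo "http://${FQDN}:1880"
```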

&lt;p&gt;&lt;strong&gt;Using Bash&lt;/strong&gt;&lt;br&gt;
You can combine the above commands into a single bash script and execute it to create an Azure Container Instance for Node-RED.&lt;br&gt;
Here is the bash script for Node-RED:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#!/usr/bin/env bash

RESOURCE_GROUP=MyResourceGroup
STORAGE_ACCOUNT_NAME=storageaccount$RANDOM
LOCATION=eastus
FILE_SHARE_NAME=node-red-share
IMAGE=testingregistrydevops.azurecr.io/node-red:latest
ACI_NAME=node-red

# Function to handle errors
handle_error() {
    echo "Error: $1" &amp;gt;&amp;amp;2
    exit 1
}

# Azure Login
az login || handle_error "Failed to login to Azure"

# ACR Login
az acr login --name testingregistrydevops.azurecr.io || handle_error "Failed to login to ACR"

# Check if Resource Group exists
if az group show --name $RESOURCE_GROUP &amp;amp;&amp;gt;/dev/null; then
    echo "Resource group '$RESOURCE_GROUP' already exists."
else
    # Creating Resource Group
    az group create --name $RESOURCE_GROUP --location $LOCATION || handle_error "Failed to create resource group"
    echo "Resource group '$RESOURCE_GROUP' created."
fi

# Check if the Storage Account exists
if az storage account show --name $STORAGE_ACCOUNT_NAME --resource-group $RESOURCE_GROUP &amp;amp;&amp;gt;/dev/null; then
    echo "Storage account '$STORAGE_ACCOUNT_NAME' already exists."
else
    # Creating Storage Account
    az storage account create \
        --resource-group $RESOURCE_GROUP \
        --name $STORAGE_ACCOUNT_NAME \
        --location $LOCATION \
        --sku Standard_LRS || handle_error "Failed to create storage account"
    echo "Storage account '$STORAGE_ACCOUNT_NAME' created."
fi

# Creating File Share
echo "Creating file share '$FILE_SHARE_NAME'..."
if az storage share-rm create \
    --resource-group $RESOURCE_GROUP \
    --storage-account $STORAGE_ACCOUNT_NAME \
    --name $FILE_SHARE_NAME \
    --quota 1024 \
    --enabled-protocols SMB \
    --output table &amp;amp;&amp;gt;/dev/null; then
    echo "File share '$FILE_SHARE_NAME' created successfully."
else
    handle_error "Failed to create file share '$FILE_SHARE_NAME'"
fi

# Fetch Storage Account Key
STORAGE_ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)
echo $STORAGE_ACCOUNT_KEY

# Creating Azure Container Instance for Node-Red
if az container show --resource-group $RESOURCE_GROUP --name $ACI_NAME &amp;amp;&amp;gt;/dev/null; then
    echo "Azure Container Instance '$ACI_NAME' already exists."
else
    # Creating Azure Container Instance for Node-Red
    az container create \
        --resource-group $RESOURCE_GROUP \
        --name $ACI_NAME \
        --image $IMAGE \
        --dns-name-label unique-acidemo-label \
        --ports 1880 \
        --azure-file-volume-account-name $STORAGE_ACCOUNT_NAME \
        --azure-file-volume-account-key $STORAGE_ACCOUNT_KEY \
        --azure-file-volume-share-name $FILE_SHARE_NAME \
        --azure-file-volume-mount-path /aci/logs/ || handle_error "Failed to create container instance"

    echo "Azure Container Instance '$ACI_NAME' created."
fi



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After executing the script, you can access the application using the &lt;code&gt;public IP&lt;/code&gt; or &lt;code&gt;Fully Qualified Domain Name (FQDN)&lt;/code&gt; of the Azure Container Instance.&lt;/p&gt;
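
&lt;p&gt;One way to retrieve both values from the CLI, using the variable names from the script above:&lt;/p&gt;

```shell
# Prints the public IP and FQDN assigned to the container group.
az container show \
    --resource-group $RESOURCE_GROUP \
    --name $ACI_NAME \
    --query "{FQDN:ipAddress.fqdn, IP:ipAddress.ip}" \
    --output table
```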

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgzh7wse2lwjwexp32cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgzh7wse2lwjwexp32cc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, you can verify that the file share is properly mounted in the Azure Container Instance (ACI).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdczbsw66g4f07sz5wcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdczbsw66g4f07sz5wcx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comprehensive Guide to Integrating SonarCloud with GitHub Projects</title>
      <dc:creator>Niresh Prabu A</dc:creator>
      <pubDate>Tue, 13 Aug 2024 12:31:02 +0000</pubDate>
      <link>https://dev.to/ittrident/comprehensive-guide-to-integrating-sonarcloud-with-github-projects-3449</link>
      <guid>https://dev.to/ittrident/comprehensive-guide-to-integrating-sonarcloud-with-github-projects-3449</guid>
      <description>&lt;p&gt;This blog post exemplifies how to integrate SonarCloud with GitHub to enhance code quality and security in your projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sonarcloud&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SonarCloud is a Software-as-a-Service (SaaS) code analysis tool designed to detect coding issues in 30+ languages, frameworks, and IaC platforms. By integrating directly with your CI pipeline or one of the supported DevOps platforms, your code is checked against an extensive set of rules that cover many attributes of code, such as maintainability, reliability, and security issues, on each merge/pull request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why SonarCloud Integration with GitHub is Essential for Your Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Integrating SonarCloud with GitHub is essential for maintaining high code quality and security in your projects. By automatically analyzing your code with every commit, SonarCloud identifies issues like bugs, code smells, and vulnerabilities early in the development process. This integration helps ensure that only clean, reliable code gets merged, reducing technical debt and preventing potential security risks. Ultimately, it fosters a culture of continuous improvement and accountability, leading to more robust and maintainable software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before integrating SonarCloud with your GitHub projects, there are a couple of prerequisites to ensure a smooth setup process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Admin Access to the GitHub Repository&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You must have administrative access to the GitHub repository you wish to integrate with SonarCloud. This access is necessary to configure repository settings, add secrets, and link the repository with SonarCloud.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SonarCloud Account Setup&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to have a SonarCloud account to proceed. If you don't have one, you can easily set it up by signing in with your GitHub account. This method simplifies the process by directly linking your GitHub repositories to SonarCloud, making it easier to manage projects and streamline the integration process. Visit &lt;a href="https://sonarcloud.io" rel="noopener noreferrer"&gt;SonarCloud&lt;/a&gt; and choose the "Sign in with GitHub" option to create your account and get started.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide: GitHub Integration with SonarCloud&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;a. Linking SonarCloud with GitHub&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to SonarCloud: Go to SonarCloud and sign in using your GitHub account.&lt;/li&gt;
&lt;li&gt;Create a new organization: Navigate to the "My Organizations" tab and create a new organization linked to your GitHub account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gnx80j08u6w3g6raehb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gnx80j08u6w3g6raehb.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import your GitHub repository: After creating the organization, select "Analyze new project" and choose the GitHub repository you want to integrate with SonarCloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytgxpu6sz33psigvvgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytgxpu6sz33psigvvgv.png" alt=" " width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0q1wxagt8q475vwgozu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0q1wxagt8q475vwgozu.png" alt=" " width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;b. Generating and Adding Sonar Token in GitHub Secrets&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate a Sonar Token: 

&lt;ul&gt;
&lt;li&gt;In SonarCloud, go to your account settings and generate a new token under "Security".&lt;/li&gt;
&lt;li&gt;Copy the token to a secure location.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi79e5hpzvs16xadnak6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi79e5hpzvs16xadnak6a.png" alt=" " width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add Sonar Token to GitHub Secrets:

&lt;ul&gt;
&lt;li&gt;In your GitHub repository, navigate to Settings &amp;gt; Secrets and variables &amp;gt; Actions.&lt;/li&gt;
&lt;li&gt;Click on New repository secret and name it SONAR_TOKEN.&lt;/li&gt;
&lt;li&gt;Paste the Sonar token generated earlier into the value field and save it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
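
&lt;p&gt;If you use the GitHub CLI, the same secret can be added from the terminal. The repository name below is a placeholder:&lt;/p&gt;

```shell
# Hypothetical repository - substitute your own owner/repo.
# Paste the SonarCloud token when prompted.
gh secret set SONAR_TOKEN --repo your-org/your-repo
```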

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p7vg4ipqt42exkri76d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p7vg4ipqt42exkri76d.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. CI/CD Pipeline Integration&lt;/strong&gt;&lt;br&gt;
   &lt;em&gt;a. Setting Up the CI/CD Pipeline:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify Your YAML File: In your repository, create or modify the .github/workflows/deployment.yml file to include SonarCloud analysis steps. Include this SonarCloud code scan job in the deployment YAML file.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  SonarCloudSCan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
            fetch-depth: 0
      - name: SonarCloud Scan
        uses: sonarsource/sonarcloud-github-action@master
        env:
            GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        with:
          args: &amp;gt;
              -Dsonar.organization=&amp;lt;your-organization-name&amp;gt;
              -Dsonar.projectKey=&amp;lt;your-project-key&amp;gt;
              -Dsonar.qualitygate.wait=true
              -X
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Ensuring Code Quality Before Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To ensure that your deployment only occurs when your code passes all quality checks, it's essential to add dependencies to your deployment step. This will prevent deployment if the code check fails, thereby maintaining the integrity and security of your application.&lt;/p&gt;

&lt;p&gt;In your CI/CD pipeline configuration (.yml file), include the following step to make sure the deployment only happens after the SonarCloud scan is successful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Deploy:
  needs: 
    - SonarCloudScan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  SonarCloudSCan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
            fetch-depth: 0
      - name: SonarCloud Scan
        uses: sonarsource/sonarcloud-github-action@master
        env:
            GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        with:
          args: &amp;gt;
              -Dsonar.organization=tridentsqa
              -Dsonar.projectKey=TridentSQA_pmo-api
              -Dsonar.qualitygate.wait=true
              -X

   Deploy:
    needs: 
      - SonarCloudSCan
    name: deploy the new image in ECS
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: configure aws credentials
        uses: aws-actions/configure-aws-credentials@v1

 # Remaining deployment steps...........
....................................
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Project Code Scan and Issue Resolution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After successfully integrating SonarCloud with your GitHub repository and setting up the CI/CD pipeline, your project's code will be automatically scanned by SonarCloud with every commit or pull request.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;a. Viewing the Scan Results:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Access the SonarCloud Dashboard: From your dashboard, select the project that has been integrated with GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the Analysis Overview: The dashboard provides an overview of the code quality, including metrics like code coverage, bugs, vulnerabilities, and code smells.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7o2vn5h63lj739th1kt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7o2vn5h63lj739th1kt.jpg" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examine Detailed Reports: Click on specific issues to view detailed descriptions, including the lines of code affected and suggestions for fixing them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;b. Resolving Issues:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Prioritize Critical Issues: Start by addressing bugs and security vulnerabilities, as these can impact the stability and security of your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow SonarCloud's Recommendations: Each issue identified by SonarCloud comes with a recommended solution. Implement these fixes in your codebase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Re-run the Analysis: After resolving issues, push your changes to GitHub. The CI/CD pipeline will trigger a new SonarCloud scan, and the updated results will be reflected in the dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure Quality Gates are Passed: Quality gates are thresholds set in SonarCloud to enforce code quality standards. Make sure your project passes these gates before considering the work complete.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>githubactions</category>
      <category>sonarcloud</category>
    </item>
    <item>
      <title>Adding support for stateful applications deployed on GCP Cloud Run via Filestore</title>
      <dc:creator>Selvakumar</dc:creator>
      <pubDate>Wed, 04 Oct 2023 09:53:40 +0000</pubDate>
      <link>https://dev.to/ittrident/setting-up-gcp-filestore-integration-for-cloud-run-30af</link>
      <guid>https://dev.to/ittrident/setting-up-gcp-filestore-integration-for-cloud-run-30af</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While Cloud Run offers the undeniable advantages of a serverless and stateless environment, it's important to recognize that it primarily caters to stateless applications. However, there may be instances where the need arises to deploy a stateful application which mandates a storage medium (logs, media assets, and such) within the Cloud Run environment. If you find yourself facing this very scenario, fear not, for this blog post will serve as your comprehensive guide, walking you through the process step by step.&lt;/p&gt;

&lt;p&gt;How exactly are we going to do this, you ask? By leveraging a &lt;a href="https://github.com/itTrident/terraform-gcp-cloudrun-filestore"&gt;Terraform module&lt;/a&gt; I authored, of course!&lt;/p&gt;

&lt;p&gt;A lil' pretext as to why I even wrote this: the only other way I saw to make stateful applications work seamlessly with Cloud Run was to make direct changes (Google libs, SDK, etc.) to the codebase itself, which would unequivocally alter the very substance of the application and render it working ONLY and EXCLUSIVELY on Google Cloud.&lt;/p&gt;

&lt;p&gt;No more meddling around with the code; thanks to this module, all of the app's volume needs are taken care of within the confines of the Cloud Run service itself!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisite:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Google Cloud account with administrative access, or with permissions for the essential components of the following services - Cloud Run, VPC, API services, Filestore, and Artifact Registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Getting Started:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Enable APIs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initially, you must activate the necessary APIs:

&lt;ul&gt;
&lt;li&gt;Cloud Filestore API&lt;/li&gt;
&lt;li&gt;Cloud Run API&lt;/li&gt;
&lt;li&gt;Serverless VPC Access API&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Navigate to the "APIs &amp;amp; Services" section, proceed to the "Library" section, and search for the specified APIs mentioned above. Afterward, enable them as needed.&lt;/li&gt;
&lt;/ul&gt;
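If you prefer the CLI, the same APIs can be enabled with `gcloud`; a minimal sketch, assuming the standard service names for the three APIs listed above:

```shell
# Enable the Cloud Filestore, Cloud Run, and Serverless VPC Access APIs
gcloud services enable \
  file.googleapis.com \
  run.googleapis.com \
  vpcaccess.googleapis.com
```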

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--arFApM4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb1l3b6kmrx4wver5a99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--arFApM4j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb1l3b6kmrx4wver5a99.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7PcHxr5L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4lw3tk9upacfaxhrair3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7PcHxr5L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4lw3tk9upacfaxhrair3.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZmTckY1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipvl2xi99q0ge32cdrvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZmTckY1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipvl2xi99q0ge32cdrvx.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create Serverless VPC connector&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To connect to your Filestore instance, your Cloud Run service needs access to the Filestore instance's authorized VPC network.&lt;/li&gt;
&lt;li&gt; Every VPC connector requires its own /28 subnet to place connector instances in. Note that this IP range must not overlap with any existing IP address reservations in your VPC network.&lt;/li&gt;
&lt;li&gt;Go to &lt;code&gt;Serverless VPC access&lt;/code&gt;, click &lt;code&gt;CREATE CONNECTOR&lt;/code&gt;, and fill in the required fields&lt;/li&gt;
&lt;/ul&gt;
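The console steps above have a CLI equivalent; a sketch, where the connector name, region, and IP range are placeholder values you should adjust:

```shell
# Create a connector on its own /28 range
# (must not overlap existing reservations in the VPC)
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --network default \
  --range 10.8.0.0/28
```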

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a8TrqjiT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99r5npx4wlga6fvdvru3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a8TrqjiT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99r5npx4wlga6fvdvru3.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Filestore&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the "Filestore" section.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;CREATE INSTANCE&lt;/code&gt; to create the new filestore.&lt;/li&gt;
&lt;li&gt;Fill in the required fields and click Create.&lt;/li&gt;
&lt;li&gt;Once the instance is created, you'll have the "NFS mount point" displayed&lt;/li&gt;
&lt;/ul&gt;
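For reference, the same Filestore instance can be created from the CLI; a sketch with placeholder names (the basic HDD tier has a 1 TB minimum capacity):

```shell
# Create a basic-tier Filestore instance with a share named "share1"
gcloud filestore instances create my-filestore \
  --zone us-central1-a \
  --tier BASIC_HDD \
  --file-share name=share1,capacity=1TB \
  --network name=default

# Describe the instance to read back its IP address,
# i.e. the first half of the "NFS mount point"
gcloud filestore instances describe my-filestore --zone us-central1-a
```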

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AhuDUIei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/df9dshfkx82l1t2d8yd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AhuDUIei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/df9dshfkx82l1t2d8yd1.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Build Docker Image&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Once you've completed the preceding configuration steps, the next task involves adding the following line to your Dockerfile's ENTRYPOINT or CMD:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mount -o nolock FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME MNT_DIR&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Here's what each argument of this command denotes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME: You should obtain this value from step #3 labeled "NFS mount point"&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;MNT_DIR: You must specify the target directory where you intend to mount the Filestore.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; FROM ubuntu:latest
 WORKDIR /data
 # If you use entrypoint
 ENTRYPOINT ["mount -o nolock FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME /data", "&amp;amp;&amp;amp;", "npm" , "start"]
 # If you use CMD
 CMD mount -o nolock FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME /data &amp;amp;&amp;amp; npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
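The mount-then-start step can also be factored into a small entrypoint script, which keeps the Dockerfile tidy; a minimal sketch reusing the same FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME placeholders, with `/data` as an assumed mount directory:

```shell
#!/bin/sh
# entrypoint.sh -- mount the Filestore share, then start the app.
set -e

MNT_DIR=/data
mkdir -p "$MNT_DIR"

# Substitute the "NFS mount point" values shown in the Filestore console
mount -o nolock FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME "$MNT_DIR"

# Hand the process over to the application
exec npm start
```

In the Dockerfile you would then copy this script in and point ENTRYPOINT at it.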

&lt;ul&gt;
&lt;li&gt;Subsequently, proceed to build your Docker image and push it to Artifact Registry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Deploy the Image to Cloud Run&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to "Cloud Run"&lt;/li&gt;
&lt;li&gt;Click on the "Create Service" button.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Service Configuration:&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;Select the container image you pushed earlier.&lt;/li&gt;
&lt;li&gt;Give the service a name of your choosing and fill in the rest of the information.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Container Configuration:&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;Specify the port the container serves on&lt;/li&gt;
&lt;li&gt;Execution environment: choose "Second generation"&lt;/li&gt;
&lt;li&gt;Fill in the remaining settings.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Network Configuration:&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;Select "
Connect to a VPC for outbound traffic"&lt;/li&gt;
&lt;li&gt;Choose "Use Serverless VPC Access connectors" from the list, next, choose the serverless VPC connector that you have created in Step #2.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Finally, click "CREATE".&lt;/li&gt;
&lt;/ul&gt;
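The same deployment can be expressed as a single `gcloud run deploy` call; a sketch, where the service name, image path, connector name, port, and region are all placeholders:

```shell
# CLI equivalent of the console steps above
gcloud run deploy my-service \
  --image us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-image:latest \
  --execution-environment gen2 \
  --vpc-connector my-connector \
  --port 8080 \
  --region us-central1
```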

&lt;p&gt;A no-brainer way to smoke-test whether the Filestore share is actually mounted is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch a VM (GCE) in the same network as Filestore and SSH into it&lt;/li&gt;
&lt;li&gt;Install the nfs utility (I'm demo'ing this on Debian: &lt;code&gt;sudo apt install nfs-common&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Create an example directory to serve as the mount point, e.g. &lt;code&gt;nfs&lt;/code&gt; or &lt;code&gt;filestore&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount the example directory as a volume in the filesystem.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mount -o nolock FILESTORE_IP_ADDRESS:/FILE_SHARE_NAME &amp;lt;example-directory&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;df -h&lt;/code&gt; to check that the share is mounted, &lt;code&gt;cd&lt;/code&gt; into the &amp;lt;example-dir&amp;gt;, and &lt;code&gt;mv&lt;/code&gt; in any files you'd want accessed by the container later. As soon as your app begins writing data, you'll naturally find it inside the &amp;lt;example-dir&amp;gt; along with the rest.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gcp</category>
      <category>cloudrun</category>
      <category>filestore</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A Step-by-Step Guide to Deploying an Application on AWS App Runner with GitHub Action workflow</title>
      <dc:creator>Rajeshwar R</dc:creator>
      <pubDate>Wed, 31 May 2023 17:11:07 +0000</pubDate>
      <link>https://dev.to/ittrident/a-step-by-step-guide-to-deploying-an-application-on-aws-app-runner-with-github-action-workflow-17ke</link>
      <guid>https://dev.to/ittrident/a-step-by-step-guide-to-deploying-an-application-on-aws-app-runner-with-github-action-workflow-17ke</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Privacy-Protect&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Privacy-Protect lets you share passwords and sensitive files over email, or store them in otherwise insecure locations like cloud drives, using nothing more than a desktop or mobile web browser such as Chrome or Safari.&lt;/p&gt;

&lt;p&gt;No special software. No need to create an account. It's free, open-source, keeps your private data a secret, and leaves you alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;App Runner&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AWS App Runner is a fully managed container application service that lets you build, deploy, and run containerized web applications and API services without prior infrastructure or container experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cons of App Runner&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Lack of EFS Mount Support:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;App Runner does not support mounting Amazon Elastic File System (EFS) volumes. Instead, AWS offers an &lt;a href="https://aws.amazon.com/blogs/containers/deploy-python-application-using-aws-app-runner/" rel="noopener noreferrer"&gt;alternative solution&lt;/a&gt; using DynamoDB for data storage. The lack of EFS support can be a challenge for certain use cases: limited file-storage flexibility, potential performance impact, and additional configuration complexity. By leveraging DynamoDB instead, developers can work around this limitation and still build scalable applications.&lt;/p&gt;

&lt;p&gt;Go through this Architecture diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadvnxq1w8aecr5wo958p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadvnxq1w8aecr5wo958p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Create an IAM role&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;This role is used by AWS App Runner to pull Docker images from Amazon ECR.&lt;/p&gt;

&lt;p&gt;The following are step-by-step instructions to create a service role and attach the &lt;code&gt;AWSAppRunnerServicePolicyForECRAccess&lt;/code&gt; policy to it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;step 1&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Create a role named &lt;code&gt;app-runner-service-role&lt;/code&gt; with the following trust policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "build.apprunner.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;step 2&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Attach the &lt;code&gt;AWSAppRunnerServicePolicyForECRAccess&lt;/code&gt; existing policy to the role &lt;code&gt;app-runner-service-role&lt;/code&gt;.&lt;/p&gt;
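Steps 1 and 2 can also be performed from the AWS CLI; a sketch, assuming the trust policy above is saved as `trust-policy.json` and that the managed policy lives under the standard `service-role/` path:

```shell
# Create the role with the trust policy from step 1
aws iam create-role \
  --role-name app-runner-service-role \
  --assume-role-policy-document file://trust-policy.json

# Attach the managed ECR-access policy from step 2
aws iam attach-role-policy \
  --role-name app-runner-service-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess
```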

&lt;p&gt;&lt;strong&gt;Here are the steps to deploy the Privacy-Protect application on App Runner using GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Create an AWS ECR private repository&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvyblvzdwkbm2gs6xh2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvyblvzdwkbm2gs6xh2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/r4jeshwar/privacy-protect.git
cd privacy-protect


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Migrate from Server deployment(Vercel-@sveltejs/adapter-vercel) to static site generator(@sveltejs/adapter-static)&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;First, install the static adapter: &lt;code&gt;npm i -D @sveltejs/adapter-static&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;code&gt;svelte.config.js&lt;/code&gt; file, delete its existing content, and paste in the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import adapter from "@sveltejs/adapter-static";
import { vitePreprocess } from "@sveltejs/kit/vite";
import { mdsvex } from "mdsvex";
import { resolve } from "path";

/** @type {import('@sveltejs/kit').Config} */
export default {
  extensions: [".md", ".svelte"],
  kit: {
    adapter: adapter({
       pages: 'build',
       assets: 'build',
      fallback: null,
      precompress: false,
      strict: true
    }),
    alias: {
      $components: resolve("src/components"),
      $icons: resolve("src/assets/icons"),
    },
    csp: {
      directives: {
        "base-uri": ["none"],
        "default-src": ["self"],
        "frame-ancestors": ["none"],
        "img-src": ["self", "data:"],
        "object-src": ["none"],
        // See https://github.com/sveltejs/svelte/issues/6662
        "style-src": ["self", "unsafe-inline"],
        "upgrade-insecure-requests": true,
        "worker-src": ["none"],
      },
      mode: "auto",
    },
  },
  preprocess: [
    mdsvex({
      extensions: [".md"],
      layout: {
        blog: "src/routes/blog/post.svelte",
      },
    }),
    vitePreprocess(),
  ],
};


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: These changes make it possible to deploy on various infrastructures and to build a small Docker image. If you only need to deploy on Vercel, you can skip this.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Why do we need to switch the adapter from Vercel to static?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The static adapter lets you deploy a SvelteKit application to virtually any static hosting provider. It also simplifies the deployment process by eliminating the need for serverless infrastructure and Vercel-specific configuration.&lt;/p&gt;

&lt;p&gt;Overall, migrating from adapter-vercel to adapter-static gives you greater flexibility and simplicity.&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Dockerfile for privacy-protect to deploy in App Runner&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM node:18-alpine as build

WORKDIR /app

COPY . .

RUN npm i
RUN npm run build


FROM nginx:stable-alpine

COPY --from=build /app/build/ /usr/share/nginx/html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Push the changes to your repository so they are picked up by the GitHub Actions workflow we'll set up next.&lt;/p&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Configure the GitHub Action secrets&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Go to your project repository's &lt;code&gt;Settings&lt;/code&gt;; in the Security section, open the &lt;code&gt;Secrets and variables&lt;/code&gt; drop-down and click &lt;code&gt;Actions&lt;/code&gt;. Configure the following secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlyu9c4u1ou8gpqze8ar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlyu9c4u1ou8gpqze8ar.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; is your AWS user access key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; is your AWS user secret key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_REGION&lt;/code&gt; is the AWS region where you are creating the services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ROLE_ARN&lt;/code&gt; is the ARN of the IAM role you created earlier, named &lt;code&gt;app-runner-service-role&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;Configure the new workflow in GitHub Action&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Click &lt;code&gt;set up a workflow by yourself&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgna6eenrxx8a9n7muv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgna6eenrxx8a9n7muv8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in the file editor (main.yml), paste the YAML below:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;p&gt;name: Deploy to App Runner - Image based # Name of the workflow&lt;br&gt;
on:&lt;br&gt;
  push:&lt;br&gt;
    branches: [ main ] # Trigger workflow on git push to main branch&lt;br&gt;
  workflow_dispatch: # Allow manual invocation of the workflow&lt;br&gt;
jobs:&lt;br&gt;&lt;br&gt;
  deploy:&lt;br&gt;
    runs-on: ubuntu-latest&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:      
  - name: Checkout
    uses: actions/checkout@v2
    with:
      persist-credentials: false

  - name: Configure AWS credentials
    id: aws-credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ${{ secrets.AWS_REGION }}

  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1        

  - name: Build, tag, and push image to Amazon ECR
    id: build-image
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      ECR_REPOSITORY: privacy-protect
      IMAGE_TAG: ${{ github.sha }}
    run: |
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"  

  - name: Deploy to App Runner
    id: deploy-apprunner
    uses: awslabs/amazon-app-runner-deploy@main        
    with:
      service: privacy-protect-app-runner
      image: ${{ steps.build-image.outputs.image }}          
      access-role-arn: ${{ secrets.ROLE_ARN }}       
      region: ${{ secrets.AWS_REGION }}
      cpu : 1
      memory : 2
      port: 80
      wait-for-service-stability: true

  - name: App Runner ID
    run: echo "App runner ID ${{ steps.deploy-apprunner.outputs.service-id }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  ᛫ &lt;em&gt;&lt;strong&gt;GitHub Action runs successfully after a few minutes&lt;/strong&gt;&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;On the AWS App Runner dashboard, you can see that your application is up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewr1yxlzr5gwmpi4qxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foewr1yxlzr5gwmpi4qxe.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>awsapprunner</category>
      <category>githubactions</category>
      <category>docker</category>
    </item>
    <item>
      <title>How to Deploy a TimeVault Application on Cloud Run with Cloud Build on GCP</title>
      <dc:creator>Rajeshwar R</dc:creator>
      <pubDate>Tue, 23 May 2023 07:00:32 +0000</pubDate>
      <link>https://dev.to/ittrident/timevault-on-gcp-cloudrun-4d5k</link>
      <guid>https://dev.to/ittrident/timevault-on-gcp-cloudrun-4d5k</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.tourl"&gt;&lt;/a&gt;&lt;strong&gt;Timevault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A deadman's switch to encrypt your vulnerability reports or other compromising data to be decryptable at a set time in the future. Uses tlock-js and is powered by drand. Messages encrypted with timevault are also compatible with the go tlock library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloudrun&lt;/strong&gt;&lt;br&gt;
Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language on Cloud Run if you can build a container image from it. In fact, building container images is optional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure of Timevault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Create a repository in GCP Cloud Source Repositories named &lt;code&gt;timevault-cloudrun&lt;/code&gt;, and clone it to your local machine.&lt;/p&gt;
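From the CLI, that repository can be created and cloned in two commands (the project ID is a placeholder):

```shell
# Create the Cloud Source repository, then clone it locally
gcloud source repos create timevault-cloudrun
gcloud source repos clone timevault-cloudrun --project=YOUR_PROJECT_ID
```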

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
    Clone the timevault repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git clone https://github.com/r4jeshwar/timevault.git
 cd  timevault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Mirror the GitHub (timevault) repository to a GCP Cloud Source repository&lt;/p&gt;

&lt;p&gt;Go to the GCP Cloud Source Repositories service, click &lt;code&gt;Add repository&lt;/code&gt;, select &lt;code&gt;connect external repository&lt;/code&gt;, then click &lt;code&gt;continue&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---foPceYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wz6sugz1i1h8658e72p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---foPceYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wz6sugz1i1h8658e72p3.png" alt="Image description" width="662" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your project ID, choose &lt;code&gt;GitHub&lt;/code&gt; as the Git provider, then pick the repository you need and click &lt;code&gt;connect selected repository&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nCZSfAd_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/begprqseppf19iu8ien3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nCZSfAd_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/begprqseppf19iu8ien3.png" alt="Image description" width="657" height="922"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, your GitHub repository will be connected to your GCP Cloud Source repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kqhH0neG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5saa57iragp4nj1am2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kqhH0neG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5saa57iragp4nj1am2p.png" alt="Image description" width="449" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
With this multi-stage Dockerfile we can deploy the app on Cloud Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine AS build

WORKDIR /app

COPY package*.json ./
COPY ./src ./src

RUN npm i
RUN npm run build

FROM nginx:stable-alpine

COPY --from=build /app/dist/ /usr/share/nginx/html/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 5:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Write a cloudbuild.yaml to deploy the image to Cloud Run. Here is the &lt;code&gt;cloudbuild.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; steps:
 # Build the container image
 - name: 'gcr.io/cloud-builders/docker'
   args: ['build', '-t', 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA', '.']
 # Push the container image to Container Registry
 - name: 'gcr.io/cloud-builders/docker'
   args: ['push', 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA']
 # Deploy container image to Cloud Run
 - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
   entrypoint: gcloud
   args:
   - 'run'
   - 'deploy'
   - '$_IMAGE_NAME'
   - '--platform=managed'
   - '--image'
   - 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA'
   - '--region'
   - '$_REGION'
   - '--allow-unauthenticated'
   - '--port'
   - '80'  
 images:
 - 'gcr.io/$_PROJECT_ID/$_IMAGE_NAME:$COMMIT_SHA'

 substitutions:
   _IMAGE_NAME: timevault
   _REGION: &amp;lt;YOUR_REGION&amp;gt;
   _PROJECT_ID: &amp;lt;YOUR_PROJECT_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;YOUR_REGION&lt;/code&gt; is the region of the Cloud Run service you are deploying.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;YOUR_PROJECT_ID&lt;/code&gt; is your Google Cloud project ID where your image is stored.&lt;/li&gt;
&lt;/ul&gt;
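Before wiring up the trigger, you can test the build manually with `gcloud builds submit`; a sketch with placeholder substitution values. Note that &lt;code&gt;COMMIT_SHA&lt;/code&gt; is normally populated automatically when the build runs from the trigger created in Step 6:

```shell
# Manual one-off build using the cloudbuild.yaml above
gcloud builds submit \
  --config cloudbuild.yaml \
  --substitutions _REGION=us-central1,_PROJECT_ID=my-project
```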

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Go to Cloud Build and create a trigger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fafzZsWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mt8ndfv6oz5x88xze9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fafzZsWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7mt8ndfv6oz5x88xze9z.png" alt="Image description" width="792" height="921"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m8Zg8_dC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oatl3fsd96ws02ytljc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m8Zg8_dC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oatl3fsd96ws02ytljc2.png" alt="Image description" width="792" height="908"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 7:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Trigger the Cloud Build with the following steps:&lt;br&gt;
&lt;code&gt;Cloud Build&lt;/code&gt; --&amp;gt; &lt;code&gt;Triggers&lt;/code&gt; --&amp;gt; click &lt;code&gt;RUN&lt;/code&gt; --&amp;gt; click &lt;code&gt;RUN TRIGGER&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x9lX3V-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rktueuiibrrbjyeqwor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x9lX3V-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rktueuiibrrbjyeqwor.png" alt="Image description" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service is now deployed on Cloud Run. Copy the Cloud Run URL and open it in a web browser&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eBXXAMVR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4j73tl7oasha9hgkd4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eBXXAMVR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4j73tl7oasha9hgkd4a.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adding custom domain in Cloud run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you deploy a service to Cloud Run, you are provided with a default domain to access the service. However, you can use your own domain or subdomain instead of the default one.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Go to the Cloud Run service and click &lt;strong&gt;MANAGE CUSTOM DOMAINS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tdcagLpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d550vqfvkgx9u7i9331t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tdcagLpl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d550vqfvkgx9u7i9331t.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Go to the domain mappings and click &lt;strong&gt;ADD MAPPING&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7SsKM9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlkofi6gv4mxeyr5yua1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7SsKM9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlkofi6gv4mxeyr5yua1.png" alt="Image description" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the add mapping form, select the service to map to, choose to verify a new domain, and enter your domain, as shown below. Click &lt;strong&gt;CONTINUE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LIIpiQ-l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zunw5epp2h6r95yln4lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LIIpiQ-l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zunw5epp2h6r95yln4lj.png" alt="Image description" width="728" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will take some time to verify your domain; click &lt;strong&gt;REFRESH&lt;/strong&gt; to check whether verification is done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y_8GpxhF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sz9jyhuwi1d08x065aes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y_8GpxhF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sz9jyhuwi1d08x065aes.png" alt="Image description" width="607" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the verification is done, enter the subdomain to map to the service you have selected, then click &lt;strong&gt;CONTINUE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2lHtutmm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpgg09zw6tm5ckbjc4ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2lHtutmm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpgg09zw6tm5ckbjc4ti.png" alt="Image description" width="642" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will get a DNS record for your custom domain that maps to your Cloud Run service. Add this DNS record with your domain provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8nrfDm9l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3hk1x0333odyognm5a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8nrfDm9l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3hk1x0333odyognm5a0.png" alt="Image description" width="641" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, once the DNS record is in place, you can use your custom domain to access your deployed Cloud Run service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZgfEj3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hahzk0o7nzyfnlxd6dew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZgfEj3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hahzk0o7nzyfnlxd6dew.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;
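&lt;p&gt;If you prefer the command line, the same domain mapping can be created with gcloud. This is a sketch; &lt;code&gt;SERVICE_NAME&lt;/code&gt;, &lt;code&gt;app.example.com&lt;/code&gt;, and &lt;code&gt;REGION&lt;/code&gt; are placeholders for your own values.&lt;/p&gt;

```shell
# Map a custom domain to a Cloud Run service (all values are placeholders)
gcloud beta run domain-mappings create \
  --service SERVICE_NAME \
  --domain app.example.com \
  --region REGION

# List existing mappings to confirm the record details
gcloud beta run domain-mappings list --region REGION
```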

</description>
    </item>
    <item>
      <title>MicroBin on Fly.io</title>
      <dc:creator>Manjula Rajamani</dc:creator>
      <pubDate>Wed, 17 May 2023 15:00:03 +0000</pubDate>
      <link>https://dev.to/ittrident/microbin-on-flyio-2nik</link>
      <guid>https://dev.to/ittrident/microbin-on-flyio-2nik</guid>
      <description>&lt;p&gt;MicroBin is a tiny, feature-rich, configurable, self-contained, and self-hosted pastebin web application. It is very easy to set up and use, and requires only a few megabytes of memory and disk storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fly.io&lt;/strong&gt;&lt;br&gt;
Fly.io is a platform for running full-stack applications and databases close to your users without any DevOps. You can deploy your apps to Fly.io (it's about the simplest place to deploy a Docker image) and use its CLI to launch instances in the regions that matter most for your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure of MicroBin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Clone the repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/manjularajamani/microbin.git
cd microbin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 2:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Using a Dockerfile (or a pre-built Docker image), we can deploy the application to the Fly.io app server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM rust:latest as build

WORKDIR /app

COPY . .

RUN \
  DEBIAN_FRONTEND=noninteractive \
  apt-get update &amp;amp;&amp;amp;\
  apt-get -y install ca-certificates tzdata &amp;amp;&amp;amp;\
  CARGO_NET_GIT_FETCH_WITH_CLI=true \
  cargo build --release

# https://hub.docker.com/r/bitnami/minideb
FROM bitnami/minideb:latest

# microbin will be in /app
WORKDIR /app

# copy time zone info
COPY --from=build \
  /usr/share/zoneinfo \
  /usr/share/zoneinfo

COPY --from=build \
  /etc/ssl/certs/ca-certificates.crt \
  /etc/ssl/certs/ca-certificates.crt

# copy built executable
COPY --from=build \
  /app/target/release/microbin \
  /usr/bin/microbin

# Expose webport used for the webserver to the docker runtime
EXPOSE 8080

ENTRYPOINT ["microbin"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or fetch Docker image from &lt;a href="https://hub.docker.com/r/danielszabo99/microbin" rel="noopener noreferrer"&gt;DockerHub: danielszabo99/microbin:latest&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
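&lt;p&gt;To try the image locally before deploying, a quick sketch (the host port 8080 matches the &lt;code&gt;EXPOSE 8080&lt;/code&gt; in the Dockerfile above):&lt;/p&gt;

```shell
# Pull the pre-built image and run it locally on port 8080
docker pull danielszabo99/microbin:latest
docker run -d -p 8080:8080 --name microbin danielszabo99/microbin:latest
# MicroBin should now be reachable at http://localhost:8080
```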

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Install flyctl&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
flyctl is the command-line interface to the Fly.io platform. It lets you manage authentication, application launch, deployment, network configuration, logging, and more with a single command. Install the &lt;a href="https://fly.io/docs/hands-on/install-flyctl/" rel="noopener noreferrer"&gt;Flyctl CLI&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Sign In&lt;/em&gt;:&lt;/strong&gt;&lt;br&gt;
If you already have a Fly.io account, all you need to do is sign in with flyctl&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fly auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your browser will open the Fly.io sign-in screen; enter your username and password to sign in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 4:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
To deploy an app to Fly.io we need a &lt;code&gt;fly.toml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app = "microbin"
primary_region = "ams"
kill_signal = "SIGINT"
kill_timeout = "5s"

[experimental]
  entrypoint = ["microbin", "--highlightsyntax", "--private", "--qr", "--editable", "--enable-burn-after"]

[build]
  image = "danielszabo99/microbin:latest"

[[mounts]]
  source = "microbin_data"
  destination = "/app/pasta_data"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've added only the few settings that are enough for MicroBin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;kill_signal:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The kill_signal option allows you to change what signal is sent so that you can trigger a softer, less disruptive shutdown. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;kill_timeout:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
When shutting down a Fly app instance, by default, after sending a signal, Fly gives an app instance five seconds to close down before being killed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;experimental:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
This section is for flags and feature settings which have yet to be promoted into the main configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;build:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The image builder is used when you want to immediately deploy an existing public image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;mounts:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
This section supports the Volumes feature for persistent storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;auto_stop_machines&lt;/strong&gt;&lt;/em&gt;:&lt;br&gt;
Whether to automatically stop an application's machines when there's excess capacity, per region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;auto_start_machines&lt;/strong&gt;&lt;/em&gt;:&lt;br&gt;
Whether to automatically start an application's machines when a new request is made to the application and there's no excess capacity, per region.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
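&lt;p&gt;Because the &lt;code&gt;[[mounts]]&lt;/code&gt; section references the &lt;code&gt;microbin_data&lt;/code&gt; volume, that volume must exist before you deploy. A sketch, assuming the &lt;code&gt;ams&lt;/code&gt; region from the config above:&lt;/p&gt;

```shell
# Create a 1 GB persistent volume for pastes in the primary region
flyctl volumes create microbin_data --region ams --size 1
```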

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 5:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Launch an App on Fly:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Fly.io enables you to deploy almost any kind of app using a Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl launch 
      (or)
flyctl launch --image danielszabo99/microbin:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to edit &lt;code&gt;fly.toml&lt;/code&gt; and redeploy use the below command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Check Your App's Status:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The application has been deployed with a DNS hostname of &lt;code&gt;microbin.fly.dev&lt;/code&gt;. Your deployment's name will, of course, be different. &lt;code&gt;fly status&lt;/code&gt; gives us the basic details&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 6:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;destroy&lt;/code&gt; command removes an application from the Fly platform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl destroy &amp;lt;APPNAME&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Adding Custom Domains&lt;/strong&gt;&lt;br&gt;
Fly offers a simple command-line process for the manual configuration of custom domains and for people integrating Fly custom domains into their automated workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Run &lt;code&gt;flyctl ips list&lt;/code&gt; to see your app's addresses&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl ips list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an A record pointing to your IPv4 address and an AAAA record pointing to your IPv6 address with your DNS provider.&lt;/p&gt;
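&lt;p&gt;Once the records are added, you can check that they have propagated with &lt;code&gt;dig&lt;/code&gt; (&lt;code&gt;microbin.example.com&lt;/code&gt; is a hypothetical domain):&lt;/p&gt;

```shell
# Confirm the A and AAAA records resolve to your Fly addresses
dig +short A microbin.example.com
dig +short AAAA microbin.example.com
```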

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 2:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
You can add the custom domain to the application's certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl certs create &amp;lt;domain.name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
You can check on the progress by running the below command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl certs show &amp;lt;domain.name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the certificate is issued, you can access your application at your domain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 4:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
To remove the hostname from the application, delete its certificate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flyctl certs delete &amp;lt;domain.name&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>How to Setup AWS ECS Cluster as Build Slave for Jenkins</title>
      <dc:creator>Selvakumar</dc:creator>
      <pubDate>Wed, 12 Apr 2023 12:30:46 +0000</pubDate>
      <link>https://dev.to/ittrident/how-to-setup-aws-ecs-cluster-as-build-slave-for-jenkins-1fp8</link>
      <guid>https://dev.to/ittrident/how-to-setup-aws-ecs-cluster-as-build-slave-for-jenkins-1fp8</guid>
      <description>&lt;p&gt;This blog walks through how to set up AWS ECS Fargate as a build slave for Jenkins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step#1: (ECS cluster setup)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Login to the AWS&lt;/li&gt;
&lt;li&gt;Go to ECS and click "Create cluster"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AL1jOkxC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dldunhmvbheb7y0x2zaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AL1jOkxC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dldunhmvbheb7y0x2zaz.png" alt="create cluster" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the "Networking only" and then click "Next step"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HQnfNNix--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyk5dfebtpo9l0t7mkce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HQnfNNix--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyk5dfebtpo9l0t7mkce.png" alt="choose fargate" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill in the cluster name. If you want to create a new VPC, tick the "Create VPC" checkbox; otherwise leave it unticked. Finally, click the "Create" button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---BxAaTK5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwdeqgv1jzjm65hhm0id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---BxAaTK5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwdeqgv1jzjm65hhm0id.png" alt="Image description" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;
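&lt;p&gt;The same cluster can be created from the AWS CLI; the cluster name below is a placeholder:&lt;/p&gt;

```shell
# Create an empty ("Networking only") cluster suitable for Fargate tasks
aws ecs create-cluster --cluster-name jenkins-build-agents
```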

&lt;h3&gt;
  
  
  Step#2: (Setup ECS task execution IAM role)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Go to IAM and select "Roles". Then click the "Create role" button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W1h65OMu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pow6v5g89ktlzcnzcjya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W1h65OMu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pow6v5g89ktlzcnzcjya.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose "AWS service" on the trusted entry, then select "Elastic Container Service" on the drop down of use cases for other AWS services, then select the "Elastic Container Service Task". And click next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yit8J9YZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guhie90wspb2k9nzu1e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yit8J9YZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guhie90wspb2k9nzu1e2.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Add permissions section, select "AmazonECSTaskExecutionRolePolicy", then click Next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PIAp8q7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgf6wkanj58u3q6k9quo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PIAp8q7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgf6wkanj58u3q6k9quo.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the role a name, then click "Create role".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RP0IIhzt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlxkabp57eqwklv4ephm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RP0IIhzt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlxkabp57eqwklv4ephm.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;
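&lt;p&gt;Equivalently, the task execution role can be created with the AWS CLI. A sketch; the role name &lt;code&gt;ecsTaskExecutionRole&lt;/code&gt; is an assumption:&lt;/p&gt;

```shell
# Create the role with a trust policy that lets ECS tasks assume it
aws iam create-role --role-name ecsTaskExecutionRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the managed task execution policy
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```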

&lt;h3&gt;
  
  
  Step#3: (Create the security Group to ECS Slave)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a security group that allows inbound traffic on the JNLP port "50000".&lt;/li&gt;
&lt;/ul&gt;
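&lt;p&gt;As a sketch with the AWS CLI (&lt;code&gt;VPC_ID&lt;/code&gt;, &lt;code&gt;SG_ID&lt;/code&gt;, and &lt;code&gt;JENKINS_IP&lt;/code&gt; are placeholders for your own values):&lt;/p&gt;

```shell
# Create the security group for the ECS agents
aws ec2 create-security-group --group-name jenkins-ecs-agents \
  --description "JNLP access for Jenkins ECS agents" --vpc-id VPC_ID

# Open the JNLP port 50000 to the Jenkins controller
aws ec2 authorize-security-group-ingress --group-id SG_ID \
  --protocol tcp --port 50000 --cidr JENKINS_IP/32
```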

&lt;h3&gt;
  
  
  Step#4: (Install the Jenkins)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Install Jenkins on the EC2 instance if you do not already have it. &lt;a href="https://www.jenkins.io/doc/book/installing/linux/"&gt;Installation link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note:&lt;/em&gt; &lt;em&gt;Open the JNLP port "50000" in the Jenkins machine's security group&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step#5: (AWS credential config at Jenkins)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Configure the AWS programmatic access key and secret key as a Jenkins credential. These credentials allow Jenkins to run agents on ECS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2XWDHFoc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjk373um1r299onpa9fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2XWDHFoc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjk373um1r299onpa9fw.png" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step#6: (Setup JNLP port on Jenkins settings)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goto "Manage Jenkins" on the dashboard.&lt;/li&gt;
&lt;li&gt;Click the "Security" under the security section.&lt;/li&gt;
&lt;li&gt;On the "Agents" setting, Choose the "Fixed" and put the port "50000".&lt;/li&gt;
&lt;li&gt;Then "save" it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bzg7lXWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r16u2pthslnu970wfq8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bzg7lXWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r16u2pthslnu970wfq8w.png" alt="Image description" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step#7: (Install ECS plugin on Jenkins)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goto "Manage Jenkins", then click the "Plugins". &lt;/li&gt;
&lt;li&gt;Install the "Amazon Elastic Container Service(ECS)/Fargate".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R3yfeEJB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87hlfoo0x87biyj0ee6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R3yfeEJB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87hlfoo0x87biyj0ee6w.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step#8:(Setup the slave configuration on the Jenkins)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goto "Manage Jenkins", then choose the "Nodes and Clouds".&lt;/li&gt;
&lt;li&gt;Click "Clouds" on the left side top.&lt;/li&gt;
&lt;li&gt;Choose "Amazon EC2 Container Service Cloud" on the add a new cloud drop down.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hkeoEzP_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtbrf10se5699wvb9her.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hkeoEzP_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtbrf10se5699wvb9her.png" alt="Image description" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt; -&amp;gt; Give your cloud config a name. Then click "Show More".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon ECS Credentials&lt;/strong&gt; -&amp;gt; Select the AWS credential we configured in Step 5.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon ECS Region Name&lt;/strong&gt; -&amp;gt; Choose the region where your cluster is running (created in Step 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS Cluster&lt;/strong&gt; -&amp;gt; Select the cluster we created in Step 1&lt;/li&gt;
&lt;li&gt;Click "Add" under the ECS agent templates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ripqjbGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/us2ps7b1nf7rk1oscrdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ripqjbGN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/us2ps7b1nf7rk1oscrdn.png" alt="Image description" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Label&lt;/strong&gt; -&amp;gt; Give a label name (this label will be used to assign the agent to a job)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Template Name&lt;/strong&gt; -&amp;gt; Give the template a name&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt; -&amp;gt; Choose "Fargate"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operating System Family&lt;/strong&gt; -&amp;gt; Choose "linux"
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--675NhEPo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7j1fl1wagukp146siwr.png" alt="Image description" width="800" height="563"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network mode&lt;/strong&gt; - &amp;gt; Select "awsvpc"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft Memory Reservation and CPU units&lt;/strong&gt; -&amp;gt; Set these based on the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_size"&gt;doc&lt;/a&gt;; e.g. if memory is 2048, CPU should be 1024 &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnets&lt;/strong&gt; -&amp;gt; Give the subnets where the agent will run in the VPC network, comma-separated, e.g. subnet-1,subnet-2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups&lt;/strong&gt; -&amp;gt; Give the security group ID we created in Step 3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign Public Ip&lt;/strong&gt; -&amp;gt; Tick the check box.&lt;/li&gt;
&lt;li&gt;Then click "Advanced".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3dXSCWka--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/339olg4wai8we3kvtdjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3dXSCWka--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/339olg4wai8we3kvtdjm.png" alt="Image description" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task Execution Role ARN&lt;/strong&gt; -&amp;gt; Give the role ARN we created in Step 2.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ContainerUser&lt;/strong&gt; -&amp;gt; Set the container user to "root".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dbwvRGeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y6k6ioaoaetdym50c6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dbwvRGeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y6k6ioaoaetdym50c6z.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: the following log configuration steps are not required, but configure them if you want to see agent logs.&lt;/em&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logging Driver&lt;/strong&gt; -&amp;gt; Give "awslogs"&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Logging Configuration:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;note: you must create the log group (/ecs/jenkins-slave) in CloudWatch before entering it here.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key:&lt;/strong&gt; &lt;code&gt;awslogs-group&lt;/code&gt; &lt;strong&gt;Value:&lt;/strong&gt; &lt;code&gt;/ecs/jenkins-slave&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key:&lt;/strong&gt; &lt;code&gt;awslogs-region&lt;/code&gt; &lt;strong&gt;Value:&lt;/strong&gt; &lt;code&gt;us-east-1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key:&lt;/strong&gt;&lt;code&gt;awslogs-stream-prefix&lt;/code&gt; &lt;strong&gt;Value:&lt;/strong&gt;&lt;code&gt;ecs&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x0EsNmDo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5kdndvjqk6y27x45fop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x0EsNmDo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5kdndvjqk6y27x45fop.png" alt="Image description" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;
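&lt;p&gt;The log group mentioned in the note above can be created up front with the AWS CLI:&lt;/p&gt;

```shell
# Create the CloudWatch log group referenced by awslogs-group
aws logs create-log-group --log-group-name /ecs/jenkins-slave --region us-east-1
```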

&lt;ul&gt;
&lt;li&gt;Finally, Save the configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step#9: (Create the test job)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;On the "Dashboard", add new item.&lt;/li&gt;
&lt;li&gt;Select "Freestyle project".&lt;/li&gt;
&lt;li&gt;Give the label name we configured in Step 8 
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FIGxE-4n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf56os2igtvsuoxwgwyh.png" alt="Image description" width="800" height="391"&gt;
&lt;/li&gt;
&lt;li&gt;In the build step, put some shell command. Then save and run the job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mWQw5VTu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9oqqruzecciepoznpmvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mWQw5VTu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9oqqruzecciepoznpmvg.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The slave container is PROVISIONING
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H31W6RTT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o488j6rfw4ft000sd7lp.png" alt="Image description" width="800" height="361"&gt;
&lt;/li&gt;
&lt;li&gt;Console output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s1z0j-7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00bv2c4m5pt5wgi7w78z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s1z0j-7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00bv2c4m5pt5wgi7w78z.png" alt="Image description" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Jenkins and the ECS slave should be on the same network.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>EC2 instance as self-hosted GitHub runners</title>
      <dc:creator>Arunodhayam</dc:creator>
      <pubDate>Tue, 11 Oct 2022 05:56:39 +0000</pubDate>
      <link>https://dev.to/ittrident/ec2-instance-as-self-hosted-github-runners-5cg8</link>
      <guid>https://dev.to/ittrident/ec2-instance-as-self-hosted-github-runners-5cg8</guid>
      <description>&lt;p&gt;By default, GitHub offers &lt;a href="https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners" rel="noopener noreferrer"&gt;hosted&lt;/a&gt; runners to run our workloads inside. For their Linux runners, GitHub offers runners with specs: 2-core CPU, 7 GB of RAM, 14 GB of SSD disk space.&lt;/p&gt;

&lt;p&gt;Some of your CI/CD workloads may require more powerful hardware than GitHub-hosted runners provide. In such cases, you can configure an EC2 instance to act as your GitHub runner instead.&lt;/p&gt;

&lt;p&gt;For example, you may run a &lt;code&gt;c5.4xlarge&lt;/code&gt; instance type for some of your workloads, or even a &lt;code&gt;r5.xlarge&lt;/code&gt; for workloads that process large data sets in-memory.&lt;/p&gt;

&lt;p&gt;Let us begin:&lt;/p&gt;

&lt;h2&gt;
  
  
  #1. Create a GitHub personal access token
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to your GitHub Settings&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuped7o6kxqmf5r65dle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuped7o6kxqmf5r65dle.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;code&gt;Developer settings&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss0669h84v9h12i9b267.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss0669h84v9h12i9b267.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;code&gt;Personal access token&lt;/code&gt; to create one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1hkgb7kqizbbry5wdyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1hkgb7kqizbbry5wdyl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new GitHub personal access token with the &lt;code&gt;repo&lt;/code&gt; scope, named &lt;code&gt;GH_PERSONAL_ACCESS_TOKEN&lt;/code&gt;. The action will use this token to manage self-hosted runners in the GitHub account at the repository level.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3citqwmnviw24jlyxfzm.png" alt="Image description"&gt;
&lt;/li&gt;
&lt;/ul&gt;
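&lt;p&gt;As an optional sanity check before moving on: classic personal access tokens report their granted scopes in the &lt;code&gt;X-OAuth-Scopes&lt;/code&gt; response header of any authenticated API call, so you can verify the token really carries the &lt;code&gt;repo&lt;/code&gt; scope. A minimal sketch, assuming the token is exported as &lt;code&gt;GH_PERSONAL_ACCESS_TOKEN&lt;/code&gt; in your shell:&lt;/p&gt;

```shell
# Inspect only the response headers of an authenticated call;
# classic PATs list their scopes in the X-OAuth-Scopes header
curl -sI -H "Authorization: token $GH_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/user | grep -i '^x-oauth-scopes'
# The listed scopes should include "repo"
```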

&lt;h2&gt;
  
  
  #2. Create an AMI
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html" rel="noopener noreferrer"&gt;EC2 instance&lt;/a&gt; based on a Linux distribution of your choice&lt;/li&gt;
&lt;li&gt;Connect to the instance&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh -i path/to/private.key user_name@IP_ADDRESS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Install packages and configure the instance according to your workflow requirements&lt;/li&gt;
&lt;li&gt;After that, go to the EC2 console, select your instance, right-click it to find &lt;code&gt;Image and templates&lt;/code&gt;, hover to see &lt;code&gt;Create image&lt;/code&gt;, and click it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/buiy3gqonksd0yabhsxg.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the &lt;code&gt;Image name&lt;/code&gt; and &lt;code&gt;Image description&lt;/code&gt;, check &lt;code&gt;No reboot&lt;/code&gt;, and finally hit &lt;code&gt;Create&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k00gnvipndhmh2avxaor.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AMI should show up in the AMI console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83mf1vod1nav7k5h2rhv.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can safely terminate the instance, as it is no longer needed. Keep it running only if you plan to make further changes to the image&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  #3. Create a VPC with a subnet and security group
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html" rel="noopener noreferrer"&gt;VPC and subnet&lt;/a&gt; or use an existing one&lt;/li&gt;
&lt;li&gt;Create a new &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-sg.html" rel="noopener noreferrer"&gt;security group&lt;/a&gt; for the runners in the VPC, allowing only outbound traffic on port 443 so the runner can pull jobs from GitHub. No inbound ports need to be opened&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  #4. Configure the environment secrets
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The following secrets must be set for this action&lt;/li&gt;
&lt;/ul&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Secret&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;GH_PERSONAL_ACCESS_TOKEN&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The GitHub personal access token&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;AWS_REGION&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The AWS region&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The AWS access key&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The AWS secret key&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;AMI&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The EC2 AMI ID&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;INSTANCE_TYPE&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The instance type, e.g. t2.medium or t2.micro&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;SUBNET&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The subnet ID&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;SECURITY_GROUP&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The security group ID&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;ul&gt;
&lt;li&gt;Go to your project &lt;code&gt;repo settings&lt;/code&gt; to add the above secrets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bpbvapzu5qh3qckwpfqr.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expand the &lt;code&gt;Secrets&lt;/code&gt; drop-down and click &lt;code&gt;Actions&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8041s4x1caywdahqwvo.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;em&gt;Actions secrets&lt;/em&gt; tab click &lt;code&gt;New repository secret&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5fmybdpgemqq2vv6s1ci.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name your secrets and add their values to be consumed by the action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3kx83orynk8n3mz16vwr.png" alt="Image description"&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A complete listing of the required secrets should look something like this&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjd7ez1jzkxgwqrwmkul.png" alt="Image description"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #5. Configure the GitHub workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a new GitHub Action with the example workflow below&lt;/li&gt;
&lt;li&gt;Don't forget to set up a job that removes the EC2 instance at the end of the workflow execution. Otherwise, the EC2 instance will not be removed and will keep running even after the workflow has finished&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you're ready to go!&lt;/p&gt;

&lt;h3&gt;
  
  
  Inputs
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Name&lt;/th&gt;&lt;th&gt;Required&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;mode&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Always required.&lt;/td&gt;&lt;td&gt;Which mode to use: &lt;code&gt;start&lt;/code&gt; to start a new runner, or &lt;code&gt;stop&lt;/code&gt; to stop the previously created runner.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;github-token&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Always required.&lt;/td&gt;&lt;td&gt;GitHub personal access token with the &lt;code&gt;repo&lt;/code&gt; scope assigned.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ec2-image-id&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;EC2 Image ID (AMI). The new runner is launched from this image. The action is compatible with Amazon Linux 2 images.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ec2-instance-type&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;EC2 instance type.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;subnet-id&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;VPC subnet ID. The subnet should belong to the same VPC as the specified security group.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;security-group-id&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;EC2 security group ID. The security group should belong to the same VPC as the specified subnet. Only outbound traffic on port 443 should be allowed; no inbound traffic is required.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;label&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;stop&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;Name of the unique label assigned to the runner. The label is provided by the output of the action in the &lt;code&gt;start&lt;/code&gt; mode and is used to remove the runner from GitHub when it is no longer needed.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ec2-instance-id&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Required for the &lt;code&gt;stop&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;EC2 instance ID of the created runner. The ID is provided by the output of the action in the &lt;code&gt;start&lt;/code&gt; mode and is used to terminate the EC2 instance when the runner is no longer needed.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;iam-role-name&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Optional. Used only with the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;IAM role name to attach to the created EC2 runner. This allows the runner to run additional actions within the AWS account without having to manage additional GitHub secrets and AWS users. Setting this requires additional AWS permissions for the role launching the instance.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;aws-resource-tags&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Optional. Used only with the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;Tags to add to the EC2 instance and any attached storage. This field is a stringified JSON array of tag objects, each containing a &lt;code&gt;Key&lt;/code&gt; and &lt;code&gt;Value&lt;/code&gt; field (see the example below). Setting this requires additional AWS permissions for the role launching the instance.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;runner-home-dir&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Optional. Used only with the &lt;code&gt;start&lt;/code&gt; mode.&lt;/td&gt;&lt;td&gt;Directory where pre-installed actions-runner software and scripts are located.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Outputs
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Name&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;label&lt;/code&gt;&lt;/td&gt;&lt;td&gt;Name of the unique label assigned to the runner. The label is used in two cases: as the input of the &lt;code&gt;runs-on&lt;/code&gt; property for the following jobs, and to remove the runner from GitHub when it is no longer needed.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;ec2-instance-id&lt;/code&gt;&lt;/td&gt;&lt;td&gt;EC2 instance ID of the created runner. The ID is used to terminate the EC2 instance when the runner is no longer needed.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Example workflow
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;ci.yml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;name: ci
on: pull_request
jobs:
  start-runner:
    name: Start self-hosted EC2 runner
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.start-ec2-runner.outputs.label }}
      ec2-instance-id: ${{ steps.start-ec2-runner.outputs.ec2-instance-id }}
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Start EC2 runner
        id: start-ec2-runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          ec2-image-id: ${{ secrets.AMI }}
          ec2-instance-type: ${{ secrets.INSTANCE_TYPE }}
          subnet-id: ${{ secrets.SUBNET }}
          security-group-id: ${{ secrets.SECURITY_GROUP }}
          iam-role-name: my-role-name # optional, requires additional permissions
          aws-resource-tags: &amp;gt; # optional, requires additional permissions
            [
              {"Key": "Name", "Value": "ec2-github-runner"},
              {"Key": "GitHubRepository", "Value": "${{ github.repository }}"}
            ]
  do-the-job:
    name: Do the job on the runner
    needs: start-runner # required to start the main job when the runner is ready
    runs-on: ${{ needs.start-runner.outputs.label }} # run the job on the newly created runner
    steps:
      - name: Hello World
        run: echo 'Hello World!'
  stop-runner:
    name: Stop self-hosted EC2 runner
    needs:
      - start-runner # required to get output from the start-runner job
      - do-the-job # required to wait until the main job is done
    runs-on: ubuntu-latest
    if: ${{ always() }} # required to stop the runner even if an error happened in the previous jobs
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Stop EC2 runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: stop
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.ec2-instance-id }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
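&lt;p&gt;If you prefer scripting step #3 over clicking through the console, the locked-down security group can be sketched with the AWS CLI as below. The VPC ID and the group name &lt;code&gt;github-runner-sg&lt;/code&gt; are placeholders, not values from this guide:&lt;/p&gt;

```shell
# Create the runner security group in your VPC (vpc-0123... is a placeholder)
SG_ID=$(aws ec2 create-security-group \
  --group-name github-runner-sg \
  --description "GitHub self-hosted runner" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Remove the default allow-all egress rule
aws ec2 revoke-security-group-egress --group-id "$SG_ID" \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

# Allow only outbound HTTPS (port 443), used to pull jobs from GitHub
aws ec2 authorize-security-group-egress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

&lt;p&gt;No inbound rule is added, matching the recommendation above.&lt;/p&gt;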

&lt;p&gt;After a successful first workflow execution, you should see this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0nwfoke8pz006yhfrs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0nwfoke8pz006yhfrs6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
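&lt;p&gt;You can also confirm from the command line that the EC2 runner registered with your repository by querying the self-hosted runners endpoint of the GitHub REST API. &lt;code&gt;OWNER/REPO&lt;/code&gt; is a placeholder for your repository:&lt;/p&gt;

```shell
# List the self-hosted runners currently registered on the repository
curl -s -H "Authorization: token $GH_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/actions/runners
```

&lt;p&gt;While a workflow run is in progress, the started EC2 runner should appear in the returned list with its unique label.&lt;/p&gt;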

</description>
      <category>githubactions</category>
      <category>ec2</category>
      <category>selfhostedrunner</category>
      <category>githubrunner</category>
    </item>
    <item>
      <title>Restore SSH connectivity to EC2 instance if SSH key pair is lost</title>
      <dc:creator>Selvakumar</dc:creator>
      <pubDate>Thu, 06 Oct 2022 13:42:07 +0000</pubDate>
      <link>https://dev.to/ittrident/restore-ssh-connectivity-to-ec2-instance-if-ssh-key-pair-is-lost-4dnn</link>
      <guid>https://dev.to/ittrident/restore-ssh-connectivity-to-ec2-instance-if-ssh-key-pair-is-lost-4dnn</guid>
      <description>&lt;p&gt;Some time back, we unfortunately lost the SSH key pair belonging to an important EC2 instance. At the time, we simply took a snapshot of the instance and moved on to create a new one with a new key pair. In this blog post, we shall see how to restore SSH connectivity instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step #1:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html" rel="noopener noreferrer"&gt;EC2 instance&lt;/a&gt;, or use an existing one for which you still hold the SSH private key&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step #2:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Stop the instance whose SSH key pair you lost, then detach its volume (before detaching, make absolutely sure you have selected the correct volume)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Goto --&amp;gt; AWS --&amp;gt; EC2 --&amp;gt; Volumes(EBS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1snn2slj75fltygts2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1snn2slj75fltygts2w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
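&lt;p&gt;For those who prefer the AWS CLI over the console, the stop-and-detach step can be sketched as follows; the instance and volume IDs are placeholders:&lt;/p&gt;

```shell
# Stop the locked-out instance and wait until it is fully stopped
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Detach its root volume so it can be attached to a working instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
```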

&lt;h3&gt;
  
  
  Step #3:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;After detaching the volume, attach it to the EC2 machine of choice from &lt;strong&gt;#1&lt;/strong&gt;, which is either a new or an existing instance
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Goto --&amp;gt; AWS --&amp;gt; EC2 --&amp;gt; Volumes(EBS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Select the detached volume, click &lt;code&gt;Actions&lt;/code&gt;, choose &lt;code&gt;Attach volume&lt;/code&gt;, and select the EC2 instance you want the volume attached to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy86bbely7q5mfwg4535n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy86bbely7q5mfwg4535n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that you have attached the volume to the right instance. Go to the EC2 console, select the instance, and open &lt;em&gt;Storage&lt;/em&gt;; you should see &lt;em&gt;two&lt;/em&gt; volumes: one is the root volume and the other is the one we attached&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof7rabaasxz1zfjw8isw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof7rabaasxz1zfjw8isw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Before moving on to Step #4, you have to create an SSH key pair&lt;br&gt;
&lt;code&gt;ssh-keygen -t rsa -b 4096&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step #4:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the instance from &lt;strong&gt;#3&lt;/strong&gt;. Then, mount the attached volume onto a directory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -i path/to/private.key username@ip-addr
$ lsblk
$ mkdir ~/data
$ sudo mount /dev/xvdf1 /data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85r6rd26natl5w8sofeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85r6rd26natl5w8sofeu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step #5:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Once mounted, navigate to the mounted directory
&lt;code&gt;cd ~/data/home/ubuntu/.ssh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbs7856anucw5ta1pdfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbs7856anucw5ta1pdfo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit the &lt;code&gt;authorized_keys&lt;/code&gt; file: delete the existing public key and paste in the new public key you generated in the note before &lt;strong&gt;#4&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Once that is done, unmount the volume
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;sudo umount ~/data&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finish by detaching the volume from this instance and reattaching it to its former owner, i.e., the stopped instance.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 1. Goto --&amp;gt; AWS --&amp;gt; EC2 --&amp;gt; Volumes (EBS)
 2. Select the correct volume, then "action --&amp;gt; force detach/detach volume"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq3tnwr7dcrzxxso1uoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq3tnwr7dcrzxxso1uoz.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; 3. Select the volume again, "action --&amp;gt; attach volume"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: the "Device name" should be "/dev/sda1",&lt;br&gt;
because this is the device name expected for the root volume&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawgae09e8s2359glr0k6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawgae09e8s2359glr0k6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
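&lt;p&gt;The same reattachment can be done with the AWS CLI, again with placeholder IDs; note the &lt;code&gt;--device /dev/sda1&lt;/code&gt; flag matching the note above:&lt;/p&gt;

```shell
# Reattach the repaired volume to its original instance as the root device
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sda1
```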

&lt;h2&gt;
  
  
  Step #6:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From here on, you can SSH back into the machine you lost your SSH keys to, using the new private key from the key pair generated before &lt;strong&gt;#4&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>ssh</category>
      <category>ec2</category>
      <category>ebs</category>
    </item>
    <item>
      <title>Migrate RDS Cross-Account</title>
      <dc:creator>Selvakumar</dc:creator>
      <pubDate>Thu, 06 Oct 2022 13:17:31 +0000</pubDate>
      <link>https://dev.to/ittrident/migrate-rds-cross-account-4bp6</link>
      <guid>https://dev.to/ittrident/migrate-rds-cross-account-4bp6</guid>
      <description>&lt;p&gt;This post gives a walkthrough on how to migrate an RDS DB across two AWS accounts.&lt;/p&gt;

&lt;h3&gt;
  
  
  #1 Take Snapshot
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Take a snapshot of the source DB
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Go to AWS --&amp;gt; RDS --&amp;gt; Snapshots --&amp;gt; Take snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgwz9e1cbyvava1unopy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgwz9e1cbyvava1unopy.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  #2 Create a KMS key
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a fresh KMS key and share it with the target account
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS --&amp;gt; KMS --&amp;gt; Customer-managed keys --&amp;gt; create key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;While creating the key, don't skip this step: enter the target account ID in the field shown below&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj94jkh80jyx0zyn2bbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj94jkh80jyx0zyn2bbl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the snapshot created in &lt;strong&gt;#1&lt;/strong&gt;, i.e., create a snapshot &lt;em&gt;of&lt;/em&gt; the snapshot, encrypting the copy with the KMS key created in &lt;strong&gt;#2&lt;/strong&gt;. Since that key is shared with the target account, the target account can use this copy to create the DB
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select the snapshot --&amp;gt; action --&amp;gt; copy snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the copy-snapshot dialog, select the freshly created KMS key (I named it "test") from the drop-down&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs10fpwhjlv22o38jemtm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs10fpwhjlv22o38jemtm.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
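&lt;p&gt;The same copy-with-re-encryption can be performed with the AWS CLI; the snapshot identifiers and the KMS key ARN below are placeholders:&lt;/p&gt;

```shell
# Copy the snapshot, re-encrypting the copy with the shared "test" KMS key
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier mydb-snapshot \
  --target-db-snapshot-identifier mydb-snapshot-copy \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
```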

&lt;h3&gt;
  
  
  #3 Share the Snapshot
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Share the copied snapshot with the target account, entering the target account ID once more
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Select the snapshot --&amp;gt; Action --&amp;gt; share snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojxmju0hvp2umom06j8f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojxmju0hvp2umom06j8f.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
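&lt;p&gt;Sharing the copy from the CLI is a one-liner; the snapshot identifier and account ID are placeholders:&lt;/p&gt;

```shell
# Grant the target account permission to restore from this snapshot
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier mydb-snapshot-copy \
  --attribute-name restore \
  --values-to-add 444455556666
```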

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  The following steps should be executed in the target account&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  #4 Create a New DB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The shared DB snapshot will show up in your target account
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Go to AWS --&amp;gt; RDS --&amp;gt; Snapshot --&amp;gt; Shared with me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppw2sjbhh7akswajusmx.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppw2sjbhh7akswajusmx.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the target account, create a new DB instance by restoring the DB snapshot
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RDS --&amp;gt; Snapshot --&amp;gt; Shared with me --&amp;gt; select snapshot --&amp;gt; action --&amp;gt;  restore snapshot.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  When creating the DB from the snapshot, &lt;strong&gt;DO NOT FORGET&lt;/strong&gt; to swap the shared "test" KMS key for the target account's default key: select "(default) aws/rds"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqz8lu3m5cjbeldkckpt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqz8lu3m5cjbeldkckpt.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
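&lt;p&gt;The restore itself can likewise be scripted; run this with credentials for the target account. The instance identifier and the shared snapshot ARN are placeholders:&lt;/p&gt;

```shell
# In the target account, create a new DB instance from the shared snapshot
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb-restored \
  --db-snapshot-identifier arn:aws:rds:us-east-1:111122223333:snapshot:mydb-snapshot-copy
```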

</description>
      <category>aws</category>
      <category>rds</category>
      <category>crossaccount</category>
      <category>dbmigration</category>
    </item>
  </channel>
</rss>
