<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yaroslav Yarmoshyk</title>
    <description>The latest articles on DEV Community by Yaroslav Yarmoshyk (@yyarmoshyk).</description>
    <link>https://dev.to/yyarmoshyk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1309655%2Fbe321464-a22b-428d-830d-09a24ce76edc.jpeg</url>
      <title>DEV Community: Yaroslav Yarmoshyk</title>
      <link>https://dev.to/yyarmoshyk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yyarmoshyk"/>
    <language>en</language>
    <item>
      <title>The cost of self-hosted LLM model in AWS</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Fri, 28 Feb 2025 10:27:38 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/the-cost-of-self-hosted-llm-model-in-aws-4ijk</link>
      <guid>https://dev.to/yyarmoshyk/the-cost-of-self-hosted-llm-model-in-aws-4ijk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There are numerous reasons why you might want to run an LLM locally, isolated from the internet, instead of using the public &lt;code&gt;OpenAI&lt;/code&gt;, &lt;code&gt;Meta&lt;/code&gt; or &lt;code&gt;Deepseek&lt;/code&gt; APIs.&lt;/p&gt;

&lt;p&gt;For me, the most important are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data privacy&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Some industries (healthcare, finance, legal) require sensitive or proprietary data to remain on-premises or within specific geographic regions.&lt;/li&gt;
&lt;li&gt;You can comply with stringent regulations (e.g., HIPAA, GDPR) by avoiding data transfers to external third-party services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The generated or processed content must remain confidential; a local solution avoids sending queries to an external API.&lt;/li&gt;
&lt;li&gt;You have end-to-end control (network, physical access, encryption at rest/in transit) when models are self-hosted.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Anything that includes your clients’ PII or confidential business information should never be uploaded to public services.&lt;/p&gt;

&lt;p&gt;To meet these requirements while still using an LLM to boost your organization’s performance, a local setup in your own cloud account can be the answer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgonfm7876v50wrrlcg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgonfm7876v50wrrlcg5.png" alt="llama in the cloud" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the financial side of this setup?
&lt;/h2&gt;

&lt;p&gt;I prepared an approximate forecast to run the &lt;a href="https://llamaimodel.com/requirements-3-2/" rel="noopener noreferrer"&gt;LLama models v3.2 based on requirements&lt;/a&gt; in AWS Cloud.&lt;/p&gt;

&lt;p&gt;In my calculations I covered the following two cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The LLM is online only during working hours (40 hrs/week)&lt;/li&gt;
&lt;li&gt;The LLM is available 24/7 (168 hrs/week)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In both cases no savings plans, reserved instances or upfront payments are included.&lt;/p&gt;
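&lt;p&gt;The monthly figures below are essentially the On-Demand hourly rate multiplied by the hours of usage. A minimal sketch of that arithmetic (the hourly rates here are illustrative placeholders, not quoted AWS prices):&lt;/p&gt;

```python
# Monthly cost forecast for an on-demand GPU instance.
# No savings plans, reserved instances or upfront payments.

WORK_HOURS_PER_MONTH = 160  # 40 hrs/week
FULL_HOURS_PER_MONTH = 720  # 24x7

def monthly_cost(hourly_rate, hours_per_month):
    """On-demand monthly cost in USD, rounded to cents."""
    return round(hourly_rate * hours_per_month, 2)

# Illustrative hourly rates (USD); check the AWS pricing pages for
# current numbers, which differ by region and change over time.
rates = {"g4dn.xlarge": 0.53, "g4dn.2xlarge": 0.75, "g5.8xlarge": 2.45}

for instance, rate in rates.items():
    print(instance,
          monthly_cost(rate, WORK_HOURS_PER_MONTH),
          monthly_cost(rate, FULL_HOURS_PER_MONTH))
```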

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Llama Name&lt;/th&gt;
&lt;th&gt;Possible EC2 Instance&lt;/th&gt;
&lt;th&gt;Instance Details&lt;/th&gt;
&lt;th&gt;Monthly Price (40 hrs/week)&lt;/th&gt;
&lt;th&gt;Monthly Price (168 hrs/week)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Llama 3.2 1B Instruct&lt;/td&gt;
&lt;td&gt;g4dn.xlarge&lt;/td&gt;
&lt;td&gt;16GB RAM&lt;br&gt;4 vCPUs&lt;br&gt;1 GPU (NVIDIA T4)&lt;/td&gt;
&lt;td&gt;$91.42&lt;/td&gt;
&lt;td&gt;$383.98&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 3.2 3B Instruct&lt;/td&gt;
&lt;td&gt;g4dn.2xlarge&lt;/td&gt;
&lt;td&gt;32GB RAM&lt;br&gt;8 vCPUs&lt;br&gt;1 GPU (NVIDIA T4)&lt;/td&gt;
&lt;td&gt;$130.70&lt;/td&gt;
&lt;td&gt;$548.96&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 3.2 11B Vision&lt;/td&gt;
&lt;td&gt;g5.8xlarge&lt;/td&gt;
&lt;td&gt;128GB RAM&lt;br&gt;32 vCPUs&lt;br&gt;1 GPU (24GB Memory)&lt;/td&gt;
&lt;td&gt;$429.33&lt;/td&gt;
&lt;td&gt;$1,803.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 3.2 90B Vision&lt;/td&gt;
&lt;td&gt;g5.48xlarge&lt;/td&gt;
&lt;td&gt;768GB RAM&lt;br&gt;192 vCPUs&lt;br&gt;8 GPUs (192GB total GPU memory)&lt;/td&gt;
&lt;td&gt;$2,834.85&lt;/td&gt;
&lt;td&gt;$11,906.24&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Notes on the Table
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Possible EC2 Instance&lt;/strong&gt; was selected based on the &lt;a href="https://llamaimodel.com/requirements-3-2/" rel="noopener noreferrer"&gt;LLama models v3.2 requirements&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;For smaller Instruct models (1B, 3B), a single g4dn or g5 instance with an NVIDIA T4 should be enough.&lt;/li&gt;
&lt;li&gt;For 11B Vision, the &lt;code&gt;g5.8xlarge&lt;/code&gt; meets the minimum 22 GB VRAM requirement (A10G has 24 GB VRAM).&lt;/li&gt;
&lt;li&gt;For 90B Vision, you typically need multiple high-end GPUs. The &lt;code&gt;g5.48xlarge&lt;/code&gt; offers 8× NVIDIA A10G GPUs (24 GB each = 192 GB total VRAM) plus ample CPU and RAM.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monthly Price&lt;/strong&gt; was calculated from approximate On-Demand hourly rates in &lt;code&gt;us-west-2 (Oregon)&lt;/code&gt;. Prices are shown for:

&lt;ul&gt;
&lt;li&gt;160 hours/month (40 hrs/week)&lt;/li&gt;
&lt;li&gt;720 hours/month (24×7 usage)&lt;/li&gt;
&lt;/ul&gt;
Actual AWS rates vary slightly by region and can change over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: the selected instances typically come with local NVMe SSD volumes. In production, you’ll often attach an EBS volume to meet or exceed the required disk space. EBS costs are not included in the prices above.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a &lt;a href="https://calculator.aws/#/estimate?id=4fbd7ccba7e7b0596d473c03b7989c8bdc100fbc" rel="noopener noreferrer"&gt;link to the pricing calculator&lt;/a&gt;. You can use it as a baseline in your cost forecasts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization Options
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reserved Instances&lt;/strong&gt; or &lt;strong&gt;Savings Plans&lt;/strong&gt; can drastically reduce hourly rates.&lt;/li&gt;
&lt;li&gt;Spot Instances offer lower prices but can be interrupted.&lt;/li&gt;
&lt;li&gt;For large models, you might also explore distributed training/inference techniques to scale across multiple smaller GPUs.&lt;/li&gt;
&lt;/ol&gt;
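&lt;p&gt;For a quick forecast, these purchase options can be modeled as a discount factor applied to the On-Demand rate. The discount values in this sketch are assumptions for illustration only, not quoted AWS figures; real savings depend on instance family, region, term and payment option:&lt;/p&gt;

```python
# Rough comparison of purchase options for a 24/7 workload.
# Discount factors below are assumed for illustration only.
DISCOUNTS = {
    "on-demand": 0.00,
    "1yr-savings-plan": 0.28,  # assumed
    "spot": 0.60,              # assumed; spot capacity can be reclaimed
}

def effective_monthly(on_demand_rate, discount, hours=720):
    """Monthly cost after applying a purchase-option discount."""
    return round(on_demand_rate * (1.0 - discount) * hours, 2)

for option, discount in DISCOUNTS.items():
    print(option, effective_monthly(1.0, discount))
```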

&lt;p&gt;Always confirm instance pricing with the official &lt;a href="https://aws.amazon.com/pricing/calculators/" rel="noopener noreferrer"&gt;AWS Pricing Calculator&lt;/a&gt; or up-to-date AWS documentation for &lt;a href="https://aws.amazon.com/ec2/instance-types/g4/" rel="noopener noreferrer"&gt;G4 instances&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/instance-types/g5/" rel="noopener noreferrer"&gt;G5 instances&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>cloudcomputing</category>
      <category>cloudbudget</category>
      <category>aws</category>
    </item>
    <item>
      <title>Detect Inappropriate Content with AWS Rekognition</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Wed, 15 Jan 2025 09:39:13 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/detect-inappropriate-content-with-aws-rekognition-4igl</link>
      <guid>https://dev.to/yyarmoshyk/detect-inappropriate-content-with-aws-rekognition-4igl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article I want to describe how to use the AWS Rekognition service to detect and block images that don’t comply with your content policy.&lt;/p&gt;

&lt;p&gt;This case is applicable to any website that allows users to upload images that are subsequently stored in an S3 bucket.&lt;/p&gt;

&lt;p&gt;Let’s say you have a &lt;strong&gt;website&lt;/strong&gt; with the feature of file uploads. Users can publish inappropriate images or videos that can affect the reputation of your website, leading to user dissatisfaction, loss of trust, and potential legal issues.&lt;/p&gt;

&lt;p&gt;Another possible case is related to &lt;strong&gt;Educational Platforms&lt;/strong&gt;, where one of your teachers accidentally uploads their home videos instead of learning material.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: User-uploaded images may contain explicit or harmful content such as &lt;code&gt;nudity&lt;/code&gt;, &lt;code&gt;violence&lt;/code&gt;, &lt;code&gt;hate symbols&lt;/code&gt; or other inappropriate material. If such images remain publicly accessible, they can violate platform policies, offend users and potentially lead to legal issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: use &lt;code&gt;AWS Rekognition&lt;/code&gt; to automatically detect such content and move it into a secure, non-public location. The system ensures that inappropriate content is promptly removed from public access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is more of a solution design than a how-to implementation guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html" rel="noopener noreferrer"&gt;AWS Rekognition&lt;/a&gt; is a comprehensive image and video analysis service offered by &lt;code&gt;Amazon Web Services (AWS)&lt;/code&gt;. It is powered by deep learning technology and requires no machine learning expertise to use.&lt;/p&gt;

&lt;p&gt;It provides object and scene detection, allowing for the identification of various elements within &lt;code&gt;images&lt;/code&gt; and &lt;code&gt;videos&lt;/code&gt;. Facial analysis and recognition features allow it to detect faces and emotions, and even recognize celebrities, making it useful for security and personalization applications.&lt;/p&gt;

&lt;p&gt;Additionally, content moderation tools help automatically identify and filter inappropriate or explicit content, ensuring compliance and safety. This article primarily focuses on using Rekognition for content moderation.&lt;/p&gt;

&lt;p&gt;For those of you looking for more detailed information, visit the &lt;a href="https://aws.amazon.com/rekognition/" rel="noopener noreferrer"&gt;AWS Rekognition Overview&lt;/a&gt; and check its &lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html#what-is-capabilities" rel="noopener noreferrer"&gt;Key Features&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated Content Moderation With AWS Rekognition
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Data Flow Summary&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Upload Path&lt;/strong&gt;: &lt;code&gt;User&lt;/code&gt; → &lt;code&gt;CloudFront&lt;/code&gt; → Public &lt;code&gt;S3 Bucket&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processing Path&lt;/strong&gt;: &lt;code&gt;CloudWatch Event&lt;/code&gt; → &lt;code&gt;Lambda Function&lt;/code&gt; → &lt;code&gt;AWS Rekognition&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Path&lt;/strong&gt;: If flagged → Move to Secure &lt;code&gt;S3 Bucket&lt;/code&gt; + &lt;code&gt;SES&lt;/code&gt; Email Notification&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6cbyngxpa73bze6iql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6cbyngxpa73bze6iql.png" alt="Automated image analysis With AWS Rekognition" width="760" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Interaction&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Action&lt;/u&gt;: A user uploads an erotic image.&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Storage&lt;/u&gt;: The image is stored in a public &lt;code&gt;Amazon S3 bucket&lt;/code&gt;, which is accessible via &lt;code&gt;Amazon CloudFront&lt;/code&gt; for content delivery.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Trigger&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Service&lt;/u&gt;: Amazon CloudWatch Event Rule&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Function&lt;/u&gt;: Scheduled to trigger a Lambda function every hour, ensuring periodic checks of newly uploaded content.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda Function Execution&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Language&lt;/u&gt;: Python&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Tasks&lt;/u&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;File Retrieval&lt;/u&gt;: The function scans the S3 bucket and lists files uploaded in the last 62 minutes (a small overlap with the hourly schedule so that no uploads are missed).&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Content Analysis&lt;/u&gt;: For each identified image, the Lambda function invokes &lt;code&gt;AWS Rekognition&lt;/code&gt; to &lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/labels-detect-labels-image.html" rel="noopener noreferrer"&gt;analyze the content against predefined labels&lt;/a&gt;. The &lt;a href="https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectLabels.html" rel="noopener noreferrer"&gt;DetectLabels&lt;/a&gt; operation is used.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Rekognition Analysis&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Labels Checked&lt;/u&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Detected Nudity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Violence&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Gambling&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Rude Gestures&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Hate Symbols&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Drugs &amp;amp; Tobacco&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Alcohol Use&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Exposed Buttocks or Anus&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Explicit Nudity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Explicit Sexual Activity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Obstructed Intimate Parts&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Outcome&lt;/u&gt;: Determines whether any of the specified labels are present in the image.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional Handling Based on Analysis&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;If Labels are Detected&lt;/u&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;File Management&lt;/u&gt;: The image is moved from the public S3 bucket to a secure, non-public S3 bucket to prevent further public access.
&lt;strong&gt;Setting an ACL at the object level will not block access to the object&lt;/strong&gt;. You can either move it into a non-public folder of the existing S3 bucket or into another &lt;code&gt;S3 Bucket&lt;/code&gt; without public access.&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;Notification&lt;/u&gt;: An email notification is sent to &lt;code&gt;recognized@example.com&lt;/code&gt; using &lt;code&gt;Amazon Simple Email Service (SES)&lt;/code&gt;, alerting the relevant parties about the detection.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
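&lt;p&gt;The flow above can be sketched as a Lambda handler. This is an illustration under assumptions: the bucket names, block list and confidence threshold are hypothetical, and it uses Rekognition's &lt;code&gt;DetectModerationLabels&lt;/code&gt; operation, which is the API that returns moderation categories such as those listed above:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Hypothetical block list (a subset of the moderation labels above)
# and a hypothetical confidence threshold.
BLOCKED_LABELS = {"Explicit Nudity", "Violence", "Hate Symbols", "Gambling"}
MIN_CONFIDENCE = 80.0

def is_recent(last_modified, now=None, window_minutes=62):
    """True if the object was uploaded within the 62-minute window."""
    now = now or datetime.now(timezone.utc)
    age = now - last_modified
    return timedelta(minutes=window_minutes) >= age

def is_flagged(moderation_labels):
    """True if any detected label is blocked with enough confidence."""
    return any(
        label["Name"] in BLOCKED_LABELS and label["Confidence"] >= MIN_CONFIDENCE
        for label in moderation_labels
    )

def handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the pure
    # helpers above stay testable without AWS credentials.
    import boto3
    s3 = boto3.client("s3")
    rekognition = boto3.client("rekognition")

    public_bucket = "public-uploads"  # hypothetical bucket names
    secure_bucket = "quarantine"

    for obj in s3.list_objects_v2(Bucket=public_bucket).get("Contents", []):
        if not is_recent(obj["LastModified"]):
            continue
        result = rekognition.detect_moderation_labels(
            Image={"S3Object": {"Bucket": public_bucket, "Name": obj["Key"]}},
            MinConfidence=MIN_CONFIDENCE,
        )
        if is_flagged(result["ModerationLabels"]):
            # Move to the secure bucket: copy, then delete the public copy.
            s3.copy_object(
                Bucket=secure_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": public_bucket, "Key": obj["Key"]},
            )
            s3.delete_object(Bucket=public_bucket, Key=obj["Key"])
            # The SES notification to the responsible team would go here.
```

&lt;p&gt;In production you would also paginate &lt;code&gt;list_objects_v2&lt;/code&gt; and send the SES notification; both are omitted here for brevity.&lt;/p&gt;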

&lt;p&gt;Regarding the full list of labels that can be detected, the AWS documentation notes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Customers can download the list of supported labels and object bounding boxes from our &lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/labels.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; page or from the 'Label detection' tab of the &lt;a href="https://console.aws.amazon.com/rekognition" rel="noopener noreferrer"&gt;Amazon Rekognition Console&lt;/a&gt;. In addition, on the Rekognition console, customers can use a search bar to easily check whether their label is already supported or not. Using the same interface, customers can request new labels that they would like Amazon Rekognition to support, or provide any other product feedback.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  External links:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/images-s3.html" rel="noopener noreferrer"&gt;Analyzing images stored in an Amazon S3 bucket&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/rekognition/latest/dg/service_code_examples.html" rel="noopener noreferrer"&gt;Code examples for Amazon Rekognition&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>awsrekognition</category>
      <category>aws</category>
      <category>serverless</category>
      <category>imageanalysis</category>
    </item>
    <item>
      <title>Protect nginx ingress with AWS WAF and AWS Shield</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Mon, 09 Dec 2024 09:19:16 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/protect-nginx-ingress-with-aws-waf-and-aws-shield-3ob2</link>
      <guid>https://dev.to/yyarmoshyk/protect-nginx-ingress-with-aws-waf-and-aws-shield-3ob2</guid>
      <description>&lt;h2&gt;
  
  
  I have
&lt;/h2&gt;

&lt;p&gt;An application running in AWS EKS. The application frontend is exposed to the world through the nginx ingress controller.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhotzlz5xae7iq62xkz8d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhotzlz5xae7iq62xkz8d.jpg" alt="eks nginx ingress" width="416" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I need to
&lt;/h2&gt;

&lt;p&gt;Enable protection from application- and network-level attacks with AWS WAF and AWS Shield.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqfh2qp081uevkmktdsy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqfh2qp081uevkmktdsy.jpg" alt="aws waf nginx nlb" width="688" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do I need this?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/waf/" rel="noopener noreferrer"&gt;AWS WAF&lt;/a&gt; helps to prevent malicious attacks such as &lt;a href="https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-sqli-match.html" rel="noopener noreferrer"&gt;SQL injection attacks&lt;/a&gt; and cross-site scripting, aligning with OWASP top 10 list. Complete &lt;a href="https://aws.amazon.com/waf/features/" rel="noopener noreferrer"&gt;list of features is available here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/shield/" rel="noopener noreferrer"&gt;AWS Shield&lt;/a&gt; is primarily used to protect from distributed denial of service (DDoS) attacks. It automatically detects threats to the environment&lt;/p&gt;

&lt;h2&gt;
  
  
  I know that
&lt;/h2&gt;

&lt;p&gt;The nginx ingress controller creates a Network Load Balancer for its service.&lt;/p&gt;

&lt;p&gt;I also know the &lt;a href="https://en.wikipedia.org/wiki/OSI_model" rel="noopener noreferrer"&gt;OSI model&lt;/a&gt;: the application operates at layer 7 while the NLB operates at layer 4, so the two are incompatible.&lt;br&gt;
&lt;code&gt;AWS WAF&lt;/code&gt; doesn't work with &lt;code&gt;Network Load Balancers&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  AWS WAF
&lt;/h3&gt;

&lt;p&gt;is applicable only to the following resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon CloudFront distribution&lt;/li&gt;
&lt;li&gt;Amazon API Gateway REST API&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;li&gt;AWS AppSync GraphQL API&lt;/li&gt;
&lt;li&gt;Amazon Cognito user pool&lt;/li&gt;
&lt;li&gt;AWS App Runner service&lt;/li&gt;
&lt;li&gt;AWS Verified Access instance&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  AWS Shield
&lt;/h3&gt;

&lt;p&gt;... comes with two subscription models: &lt;code&gt;Standard&lt;/code&gt; and &lt;code&gt;Advanced&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Shield Advanced&lt;/strong&gt; provides expanded DDoS attack protection for&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon EC2 instances&lt;/li&gt;
&lt;li&gt;Elastic Load Balancing load balancers&lt;/li&gt;
&lt;li&gt;CloudFront distributions&lt;/li&gt;
&lt;li&gt;Route 53 hosted zones&lt;/li&gt;
&lt;li&gt;AWS Global Accelerator standard accelerators. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Shield Advanced incurs additional charges. The &lt;a href="https://aws.amazon.com/shield/pricing#Pricing_details" rel="noopener noreferrer"&gt;pricing breakdown can be found here&lt;/a&gt;. In short: it costs &lt;code&gt;~$3k/month&lt;/code&gt; with a &lt;strong&gt;yearly commitment&lt;/strong&gt;, so you need to pay &lt;code&gt;$36k&lt;/code&gt; per year to enable it for your AWS organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Shield Standard&lt;/strong&gt; is automatically included at no extra cost, but you have no control over its configuration. AWS uses it to protect managed services such as CloudFront distributions, Route 53 resolvers and Global Accelerator.&lt;/p&gt;
&lt;h2&gt;
  
  
  AWS CloudFront is the answer
&lt;/h2&gt;

&lt;p&gt;Given the statements above, we can put AWS CloudFront in front of the ingress to take advantage of the built-in AWS Shield Standard protection and enable filtering with AWS WAF rules. The only trick is to disable caching where it is not needed, since unwanted caching can degrade your workload experience.&lt;/p&gt;

&lt;p&gt;Additionally, you will need to ensure your NGINX Ingress Controller on EKS handles only traffic routed through CloudFront.&lt;br&gt;
To achieve this, you can configure CloudFront to add a custom header (e.g., &lt;code&gt;X-CloudFront-Secret&lt;/code&gt;) to all requests and then update your NGINX Ingress configuration to validate this custom header.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmldkbk11u6o6w626w8fh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmldkbk11u6o6w626w8fh.jpg" alt="aws waf shield cloudfront nlb" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the following annotation in the Ingress resource to reject requests that don't carry the expected header value.&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;nginx.ingress.kubernetes.io/server-snippet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;if ($http_x_cloudfront_secret != "YourSecretValue") {&lt;/span&gt;
    &lt;span class="s"&gt;return 403;&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It has to be applied to every &lt;code&gt;Ingress&lt;/code&gt; resource in your workloads.&lt;/p&gt;

&lt;p&gt;Alternatively you can apply the custom header validation globally at the controller level by modifying the NGINX configuration through a &lt;code&gt;ConfigMap&lt;/code&gt; or custom template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-configuration&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http-snippet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;map $http_x_cloudfront_secret $valid_secret {&lt;/span&gt;
      &lt;span class="s"&gt;default 0;&lt;/span&gt;
      &lt;span class="s"&gt;"YourSecretValue" 1;&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;

    &lt;span class="s"&gt;server {&lt;/span&gt;
      &lt;span class="s"&gt;if ($valid_secret = 0) {&lt;/span&gt;
        &lt;span class="s"&gt;return 403;&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
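&lt;p&gt;On the CloudFront side, the secret is attached as a custom origin header. A sketch of how this could be automated with boto3 (the distribution ID and secret value are hypothetical; only the pure helper edits the &lt;code&gt;DistributionConfig&lt;/code&gt; dictionary):&lt;/p&gt;

```python
def set_origin_header(dist_config, name, value):
    """Set a custom header on every origin of a CloudFront DistributionConfig."""
    for origin in dist_config["Origins"]["Items"]:
        items = origin.get("CustomHeaders", {}).get("Items", [])
        # Replace any existing header with the same name instead of duplicating.
        items = [h for h in items if h["HeaderName"] != name]
        items.append({"HeaderName": name, "HeaderValue": value})
        origin["CustomHeaders"] = {"Quantity": len(items), "Items": items}
    return dist_config

def rotate_secret(distribution_id, secret):
    # Hypothetical wiring: fetch the config, patch it, push it back
    # using the ETag returned by get_distribution_config.
    import boto3
    cloudfront = boto3.client("cloudfront")
    resp = cloudfront.get_distribution_config(Id=distribution_id)
    config = set_origin_header(
        resp["DistributionConfig"], "X-CloudFront-Secret", secret
    )
    cloudfront.update_distribution(
        Id=distribution_id, IfMatch=resp["ETag"], DistributionConfig=config
    )
```

&lt;p&gt;Remember to update the value in the NGINX configuration at the same time, otherwise legitimate CloudFront traffic will be rejected with 403.&lt;/p&gt;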



&lt;h2&gt;
  
  
  Bonus part
&lt;/h2&gt;

&lt;p&gt;You cannot be 100% confident that managed services will save you from all problems. This is why I recommend &lt;a href="https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-ip-conditions.html" rel="noopener noreferrer"&gt;creating a custom WAF ACL with a list of IPs&lt;/a&gt; that should be blocked from accessing your website.&lt;/p&gt;

&lt;p&gt;This gives you a quick path to block an IP address or CIDR range when you see suspicious activity coming from it, so you remain safe while you sort out what is missing in your security configuration.&lt;/p&gt;

</description>
      <category>eks</category>
      <category>awswaf</category>
      <category>awsshield</category>
      <category>awssecurity</category>
    </item>
    <item>
      <title>How to enable AWS S3 replication between Global AWS Region and AWS China</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Mon, 04 Nov 2024 10:29:49 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/how-to-enable-aws-s3-replication-between-global-aws-region-and-aws-china-32gh</link>
      <guid>https://dev.to/yyarmoshyk/how-to-enable-aws-s3-replication-between-global-aws-region-and-aws-china-32gh</guid>
      <description>&lt;p&gt;Recently I was struggling to enable S3 replication from China to EU region and I made it work using the &lt;a href="https://aws.amazon.com/solutions/implementations/data-transfer-hub/" rel="noopener noreferrer"&gt;Data Transfer Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that it is not an AWS-managed solution. It consists of a number of AWS resources such as EC2 instances, SQS queues, Lambda functions and a DynamoDB table.&lt;/p&gt;

&lt;p&gt;I didn't need a fancy UI so I used the simplified &lt;a href="https://github.com/aws-solutions/data-transfer-hub/blob/main/docs/S3_PLUGIN.md" rel="noopener noreferrer"&gt;s3-plugin approach&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is well documented so there is not much to add, however I'd like to outline a couple of points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You need to deploy the stack into a global region.&lt;/li&gt;
&lt;li&gt;The IAM user has to be created in China.&lt;/li&gt;
&lt;li&gt;The entire solution is automated with CloudFormation.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template" rel="noopener noreferrer"&gt;DataTransferS3Stack.template&lt;/a&gt; didn't work for me. There is something wrong with the IAM policy on line #1866; everything worked without this resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The actual resource composition diagram is the following:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqi1xpz1og4oxct6rvdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqi1xpz1og4oxct6rvdd.png" alt="Data Transfer Hub AWS resources" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3</category>
      <category>aws</category>
      <category>replication</category>
    </item>
    <item>
      <title>Proper setup of IAM federation in Multi-account AWS Organization for Terragrunt</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Tue, 20 Aug 2024 13:44:12 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/proper-setup-of-iam-federation-in-multi-account-aws-organization-for-terragrunt-3ape</link>
      <guid>https://dev.to/yyarmoshyk/proper-setup-of-iam-federation-in-multi-account-aws-organization-for-terragrunt-3ape</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article I want to describe how to configure the IAM relationships in a multi-account AWS organization with AWS SSO to allow managing infrastructure as code with &lt;code&gt;terragrunt&lt;/code&gt;/&lt;code&gt;terraform&lt;/code&gt; from both CI/CD runners and local PCs.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;NOT&lt;/strong&gt; a beginner guide. This is the &lt;strong&gt;solution design&lt;/strong&gt; with very brief code examples.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx90x2ad9fvexjp5kaei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx90x2ad9fvexjp5kaei.png" alt="aws cross account iam trust relationships" width="800" height="287"&gt;&lt;/a&gt;&lt;br&gt;
The picture above is the most common setup, where we have an IAM &lt;code&gt;spoke&lt;/code&gt;/&lt;code&gt;assumer&lt;/code&gt; role in the shared account that assumes the &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; roles in the application accounts.&lt;/p&gt;

&lt;p&gt;Every &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; role in the application accounts has the IAM policy attached that allows them to manage resources.&lt;/p&gt;

&lt;p&gt;The following schema has more details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyre2ojlexbk3tjiwkvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyre2ojlexbk3tjiwkvq.png" alt="aws cross account iam for terragrunt" width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
According to this diagram we have EC2 instances that are registered as self-hosted runners in the CI/CD system (GitHub Actions, GitLab CI, Jenkins, etc.).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;spoke&lt;/code&gt;/&lt;code&gt;assumer&lt;/code&gt; role is attached to the instance. When we run &lt;code&gt;terraform&lt;/code&gt;/&lt;code&gt;terragrunt&lt;/code&gt; on this instance, it can assume the &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; roles in other accounts and apply the infrastructure-as-code changes.&lt;/p&gt;

&lt;p&gt;This is achieved with the following AWS provider configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;assume_role&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;role_arn&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::123456789012:role/doer-role"&lt;/span&gt;
    &lt;span class="nx"&gt;session_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"doer-session-123456789012"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using &lt;code&gt;terragrunt&lt;/code&gt; (which is highly recommended), it will automatically create the following resources using this &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; role in every target account:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;S3 bucket for state files&lt;/li&gt;
&lt;li&gt;DynamoDB table for state file locks&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
Shared resources in the shared AWS account
&lt;/h2&gt;

&lt;p&gt;Everything works fine until you need to create shared resources in the shared AWS account, for example global secrets, monitoring resources (Amazon Managed Prometheus) or log aggregation (OpenSearch).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2k01lr4d3wkb6k6v352.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2k01lr4d3wkb6k6v352.png" alt="Shared resources in aws account" width="800" height="859"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case one more &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; role is needed in the shared account, and &lt;code&gt;terragrunt&lt;/code&gt; will create the state file resources there as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference terragrunt outputs between accounts
&lt;/h2&gt;

&lt;p&gt;The problem appears when you need to reference the outputs of a terragrunt resource in the shared account as inputs for resources in other account(s).&lt;/p&gt;

&lt;p&gt;Here is an example of the &lt;code&gt;terragrunt&lt;/code&gt; folder structure:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4kvmgf48zy4qli0nn99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4kvmgf48zy4qli0nn99.png" alt="example terragrunt cross account resource dependencies" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;eks_controllers&lt;/code&gt; in &lt;code&gt;dev&lt;/code&gt;/&lt;code&gt;stage&lt;/code&gt;/&lt;code&gt;prod&lt;/code&gt; depend on the &lt;code&gt;opensearch&lt;/code&gt; and &lt;code&gt;prometheus&lt;/code&gt; in the &lt;code&gt;shared&lt;/code&gt; account.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;transit_gateway_attachments&lt;/code&gt; depend on the &lt;code&gt;transit_gateway&lt;/code&gt; in the &lt;code&gt;shared&lt;/code&gt; account.&lt;/li&gt;
&lt;/ol&gt;
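
&lt;p&gt;For reference, such a cross-stack link is normally expressed in &lt;code&gt;terragrunt&lt;/code&gt; with a &lt;code&gt;dependency&lt;/code&gt; block. This is only a sketch; the paths and output names are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# terragrunt.hcl of transit_gateway_attachments in a workload account
dependency "transit_gateway" {
  # relative path to the stack in the shared account
  config_path = "../../0-shared/transit_gateway"
}

inputs = {
  # resolving this output requires read access to the shared state file
  transit_gateway_id = dependency.transit_gateway.outputs.transit_gateway_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Resolving &lt;code&gt;dependency.*.outputs&lt;/code&gt; makes &lt;code&gt;terragrunt&lt;/code&gt; read the state file of the dependency, which is exactly the cross-account read that fails here.&lt;/p&gt;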

&lt;p&gt;And the &lt;strong&gt;problem&lt;/strong&gt; is that the &lt;code&gt;spoken&lt;/code&gt;/&lt;code&gt;doer&lt;/code&gt; roles of these accounts &lt;strong&gt;cannot&lt;/strong&gt; access the terraform state file of the shared account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d2w896vnjp4sucqfbe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d2w896vnjp4sucqfbe3.png" alt="problem of reading statefile of shared aws account" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workaround is to apply infrastructure updates to the shared account first, get the outputs and hardcode them as inputs for the workload accounts.&lt;br&gt;
But what if there are more dependencies?&lt;/p&gt;
&lt;h2&gt;
  
  
Use a single S3 bucket for terraform state
&lt;/h2&gt;

&lt;p&gt;Terraform allows using a single state bucket to store the state files of multiple accounts.&lt;/p&gt;

&lt;p&gt;This is achieved with the &lt;a href="https://developer.hashicorp.com/terraform/language/settings/backends/s3#assume-role-configuration" rel="noopener noreferrer"&gt;Assume Role Configuration&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-prod"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"network/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
    &lt;span class="nx"&gt;assume_role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;role_arn&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::SHARED-ACCOUNT-ID:role/state-mgmt-role"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;terragrunt&lt;/code&gt;, the remote state with a single role looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;remote_state&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt;
  &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-SHARED-ACCOUNT-ID"&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-lock-SHARED-ACCOUNT-ID"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path_relative_to_include&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;role_arn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::SHARED-ACCOUNT-ID:role/state-mgmt-role”
    region         = "&lt;/span&gt;&lt;span class="nx"&gt;us-east-1&lt;/span&gt;&lt;span class="s2"&gt;"
  }
}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case we need to create one more role for state management in the shared account, and the infrastructure will look like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsr7g7lmzda3tbr862hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsr7g7lmzda3tbr862hb.png" alt="iam role for terraform state management" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;
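
&lt;p&gt;A sketch of that role, with illustrative account IDs and role names: its trust policy must list the &lt;code&gt;doer&lt;/code&gt; roles of every account whose state lives in the shared bucket, and its own permissions must cover the state bucket and the lock table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "aws_iam_role" "state_mgmt" {
  name = "state-mgmt-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = [
          "arn:aws:iam::DEV-ACCOUNT-ID:role/doer-role",
          "arn:aws:iam::STAGE-ACCOUNT-ID:role/doer-role",
          "arn:aws:iam::PROD-ACCOUNT-ID:role/doer-role",
        ]
      }
    }]
  })
}

# Additionally attach a policy granting s3:GetObject/s3:PutObject/s3:ListBucket
# on the state bucket and the DynamoDB item actions on the lock table.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;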

&lt;p&gt;At this stage we can reference the outputs of the resources from &lt;code&gt;0-shared&lt;/code&gt; account as inputs for resources in &lt;code&gt;1-dev&lt;/code&gt;, &lt;code&gt;2-stage&lt;/code&gt; and &lt;code&gt;3-prod&lt;/code&gt; accounts as shown below:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4kvmgf48zy4qli0nn99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4kvmgf48zy4qli0nn99.png" alt="example terragrunt cross account resource dependencies" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy days&lt;/strong&gt; if all we need is to run a &lt;code&gt;terragrunt plan&lt;/code&gt; or &lt;code&gt;terragrunt apply&lt;/code&gt; from the CI/CD pipeline. &lt;/p&gt;
&lt;h2&gt;
  
  
  Run terragrunt plan locally
&lt;/h2&gt;

&lt;p&gt;What if we need some kind of &lt;strong&gt;break-glass&lt;/strong&gt; access in case of CI/CD failure, so that we could update the infrastructure from our workstations or run &lt;code&gt;terragrunt plan&lt;/code&gt; locally?&lt;/p&gt;

&lt;p&gt;Let’s add the AWS SSO into our setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraeer73xq25tuq310l6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraeer73xq25tuq310l6u.png" alt="multiaccount access over aws sso" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do we make it work with SSO roles and the consolidated terraform state management role?&lt;br&gt;
There are two ways to make it happen.&lt;/p&gt;
&lt;h3&gt;
  
  
  Option #1: Allow all IAM roles to assume state management role
&lt;/h3&gt;

&lt;p&gt;If you have AWS SSO, then you have &lt;code&gt;AWSReservedSSO_*&lt;/code&gt; IAM roles in all accounts, each with a random suffix in the name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmh5icpxe9fccqdjzryx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmh5icpxe9fccqdjzryx.png" alt="AWSReserverdSSO IAM roles random names" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you copy the environment variables from the AWS SSO page to authenticate to AWS and run &lt;code&gt;terragrunt&lt;/code&gt; locally, it will not work until you update the trust relationships of the &lt;code&gt;state-management&lt;/code&gt; role with the ARNs of those IAM roles.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wp939q52g9c0ga7tfjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wp939q52g9c0ga7tfjb.png" alt="copy environment variables from AWS SSO page" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically, you’ll need to add every &lt;code&gt;AWSReservedSSO_*&lt;/code&gt; role from each AWS account in your organization to the list of allowed principals of the &lt;code&gt;state-management&lt;/code&gt; role in the shared account.&lt;/p&gt;

&lt;p&gt;This is complicated from an operations and automation perspective.&lt;/p&gt;
&lt;h3&gt;
  
  
  Option #2: Allow IAM roles from Management account to assume spoke/assumer role
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;This is the best approach.&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html" rel="noopener noreferrer"&gt;Configure the AWS CLI with IAM Identity Center authentication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Update the list of IAM principals in the trust policy of the &lt;code&gt;spoke&lt;/code&gt;/&lt;code&gt;assumer&lt;/code&gt; role with the ARNs of the SSO roles you’d like to be able to assume it.&lt;/li&gt;
&lt;li&gt;Add the profiles into your &lt;code&gt;~/.aws/config&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;[&lt;span class="n"&gt;profile&lt;/span&gt; &lt;span class="n"&gt;shared&lt;/span&gt;]
&lt;span class="n"&gt;sso_session&lt;/span&gt; = &lt;span class="n"&gt;sso&lt;/span&gt;
&lt;span class="n"&gt;sso_account_id&lt;/span&gt; = &lt;span class="m"&gt;123456789011&lt;/span&gt;
&lt;span class="n"&gt;sso_role_name&lt;/span&gt; = &lt;span class="n"&gt;AWSAdministratorAccess&lt;/span&gt;
&lt;span class="n"&gt;region&lt;/span&gt; = &lt;span class="n"&gt;us&lt;/span&gt;-&lt;span class="n"&gt;west&lt;/span&gt;-&lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;output&lt;/span&gt; = &lt;span class="n"&gt;json&lt;/span&gt;

[&lt;span class="n"&gt;profile&lt;/span&gt; &lt;span class="n"&gt;assumer&lt;/span&gt;]
&lt;span class="n"&gt;source_profile&lt;/span&gt; = &lt;span class="n"&gt;shared&lt;/span&gt;
&lt;span class="n"&gt;role_arn&lt;/span&gt; = &lt;span class="n"&gt;arn&lt;/span&gt;:&lt;span class="n"&gt;aws&lt;/span&gt;:&lt;span class="n"&gt;iam&lt;/span&gt;::&lt;span class="m"&gt;123456789011&lt;/span&gt;:&lt;span class="n"&gt;role&lt;/span&gt;/&lt;span class="n"&gt;infrastructure&lt;/span&gt;-&lt;span class="n"&gt;assumer&lt;/span&gt;-&lt;span class="n"&gt;role&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Make sure to update the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sso_account_id&lt;/li&gt;
&lt;li&gt;sso_role_name&lt;/li&gt;
&lt;li&gt;role_arn&lt;/li&gt;
&lt;li&gt;region&lt;/li&gt;
&lt;/ul&gt;
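
&lt;p&gt;Both profiles reference &lt;code&gt;sso_session = sso&lt;/code&gt;, so the session itself must also be defined in &lt;code&gt;~/.aws/config&lt;/code&gt;. If it is missing, add something like the following (the start URL and region are placeholders for your own IAM Identity Center values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;[sso-session sso]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-west-2
sso_registration_scopes = sso:account:access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;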

&lt;p&gt;This will configure your workstation to perform the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Assume the &lt;code&gt;AWSAdministratorAccess&lt;/code&gt; or &lt;code&gt;AWSPowerUserAccess&lt;/code&gt; SSO role from the shared account.&lt;/li&gt;
&lt;li&gt;Assume the infrastructure assumer role in the shared account.&lt;/li&gt;
&lt;li&gt;All other roles will be assumed by terragrunt automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4edqt3f64h36pi593rfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4edqt3f64h36pi593rfm.png" alt="sts assume iam assumer over sso" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sso login &lt;span class="nt"&gt;--sso-session&lt;/span&gt; sso
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_PROFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;assumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the steps in your browser.&lt;br&gt;
After this you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terragrunt plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>iam</category>
      <category>terragrunt</category>
      <category>awssso</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How to migrate DNS records from CloudFlare to AWS Route53 with Terraform&amp;Terragrunt</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Sun, 26 May 2024 11:26:54 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/how-to-migrate-dns-records-from-cloudflare-to-aws-route53-with-terraformterragrunt-2ebj</link>
      <guid>https://dev.to/yyarmoshyk/how-to-migrate-dns-records-from-cloudflare-to-aws-route53-with-terraformterragrunt-2ebj</guid>
      <description>&lt;h2&gt;
  
  
  Possible reasons
&lt;/h2&gt;

&lt;p&gt;There are multiple reasons for such migration. The most common are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You'd like to use the external-dns controller in your EKS cluster to manage DNS records automatically, however CloudFlare support is still in beta and you don't want to use it for production workloads.&lt;/li&gt;
&lt;li&gt;You want to take advantage of the &lt;a href="https://aws.amazon.com/waf/"&gt;AWS Web Application Firewall&lt;/a&gt; instead of CloudFlare WAF.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There might be other reasons, but I faced the second one in my most recent project.&lt;/p&gt;

&lt;p&gt;You'll need to put either a CloudFront distribution or an Application Load Balancer (ALB) in front of your web application to use AWS WAF, because WAF provides application-level protection and therefore cannot be enabled for a Network Load Balancer (NLB).&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration flow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Read all the records from the existing CloudFlare DNS zone.
You can re-use the python script I've prepared. The automation is available in &lt;a href="https://github.com/yyarmoshyk/read-cloudflare-dns-records"&gt;github.com/yyarmoshyk/read-cloudflare-dns-records&lt;/a&gt;
The readme file describes how to use it. &lt;/li&gt;
&lt;li&gt;Create a DNS zone in AWS.
You don't need to invest much effort into this. Feel free to re-use the existing &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-route53/tree/master/modules/zones"&gt;terraform-aws-route53&lt;/a&gt; community module.&lt;/li&gt;
&lt;li&gt;Create the DNS records in AWS.
The script above produces JSON output that can be used as an input for the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-route53/tree/master/modules/records"&gt;terraform-aws-route53/records&lt;/a&gt; terraform module:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ttl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"10.10.10.10"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should be saved into a file. Next, its contents can be read with terraform/terragrunt and specified as inputs to the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-route53/tree/master/modules/records"&gt;terraform-aws-route53/records&lt;/a&gt; terraform module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    records_jsonencoded = jsondecode(file("dns_records.json"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
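
&lt;p&gt;In &lt;code&gt;terragrunt&lt;/code&gt; terms this could look roughly as follows. Treat it as a sketch: the module version is an assumption (pin the one you actually use), and check the module's &lt;code&gt;variables.tf&lt;/code&gt; for whether your version expects the decoded list (&lt;code&gt;records&lt;/code&gt;) or the raw JSON string (&lt;code&gt;records_jsonencoded&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# terragrunt.hcl for the records stack
terraform {
  source = "tfr:///terraform-aws-modules/route53/aws//modules/records?version=2.11.1"
}

inputs = {
  zone_name = "example.com"
  # dns_records.json is the file produced by the export script above
  records   = jsondecode(file("dns_records.json"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;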



&lt;ol start="4"&gt;
&lt;li&gt;Update the name server configuration at your current DNS registrar.
For this you'll need to refer to the documentation of the DNS provider where your domain is registered.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will not cover the &lt;code&gt;terragrunt apply&lt;/code&gt; procedure here. There are many documents about it on the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing words
&lt;/h2&gt;

&lt;p&gt;Most of your time will be spent on creating the API token in CloudFlare and injecting the Route53 provisioning into your existing IaC structure.&lt;br&gt;
Basically we extract the data from CloudFlare, convert it into the proper format, and then create all the records with terragrunt or terraform.&lt;/p&gt;

</description>
      <category>dns</category>
      <category>route53</category>
      <category>migration</category>
      <category>awswaf</category>
    </item>
    <item>
      <title>AWS Cloud Platform for highly loaded WordPress website</title>
      <dc:creator>Yaroslav Yarmoshyk</dc:creator>
      <pubDate>Mon, 29 Apr 2024 09:46:01 +0000</pubDate>
      <link>https://dev.to/yyarmoshyk/aws-cloud-platform-for-highly-loaded-wordpress-website-3lpd</link>
      <guid>https://dev.to/yyarmoshyk/aws-cloud-platform-for-highly-loaded-wordpress-website-3lpd</guid>
      <description>&lt;p&gt;Today I'd like to share the simple (or not), highly available and secure design of the AWS Cloud Solution to host the highly loaded WordPress website. I think it is also applicable to other popular CMS such as Joomla, Drupal, etc.&lt;/p&gt;

&lt;p&gt;The diagram doesn't cover the multi-OU setup for simplicity purposes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87pw7xm30szmq0lbrko0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87pw7xm30szmq0lbrko0.jpg" alt="AWS Cloud Platform for WordPress" width="701" height="851"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Explanations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Public&lt;/strong&gt; - the primary purpose of this network is to host the load balancer and &lt;strong&gt;NAT gateways&lt;/strong&gt; (one per availability zone).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private&lt;/strong&gt; - all EC2 instances will be hosted here.
I skipped routing tables to simplify the schema. Of course you need to define routing rules for the private subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolated&lt;/strong&gt; - is a home for multi-AZ Aurora RDS and EFS mount endpoints.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Internet Gateway&lt;/strong&gt; is used to communicate with the world. &lt;/p&gt;

&lt;h3&gt;
  
  
  File Storage
&lt;/h3&gt;

&lt;p&gt;Elastic File System (EFS) is planned to be used to store the code of the website. It has to be mounted to every webserver root (e.g. &lt;code&gt;/var/www/html&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#supported-resources"&gt;AWS Backup&lt;/a&gt; is to be used to create scheduled backups.&lt;/p&gt;

&lt;p&gt;S3 is intended to be used for static files. To be honest I didn't test this approach, but I found the following &lt;a href="https://github.com/humanmade/S3-Uploads"&gt;WordPress plugin&lt;/a&gt; that allows storing uploads in S3. So I'd appreciate it if whoever decides to implement this setup could drop a comment on whether it works or not.&lt;/p&gt;

&lt;p&gt;If it doesn't work, we'll need to add an &lt;a href="https://aws.amazon.com/storagegateway/file/s3/"&gt;S3 File Gateway&lt;/a&gt; to the schema and &lt;a href="https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/"&gt;mount S3 into&lt;/a&gt; the &lt;code&gt;$ROOT/wp-content/uploads&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;I think &lt;a href="https://aws.amazon.com/s3/storage-classes/intelligent-tiering/"&gt;S3 Intelligent-Tiering&lt;/a&gt; should be enabled to cut storage costs by automatically moving data to the most cost-effective access tier when access patterns change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Storage
&lt;/h3&gt;

&lt;p&gt;RDS Aurora with the MySQL engine is to be used. It doesn't require much configuration except for right-sizing. It is recommended to store the password and the read/write endpoint address in SSM Parameter Store or AWS Secrets Manager, read them on EC2 boot with userdata, and expose them as environment variables to be read by WordPress.&lt;/p&gt;
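
&lt;p&gt;A rough sketch of that userdata step (the parameter names are illustrative, and the instance profile must allow &lt;code&gt;ssm:GetParameter&lt;/code&gt; plus &lt;code&gt;kms:Decrypt&lt;/code&gt; for the SecureString):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/bash
# Read DB settings from SSM Parameter Store on boot
WORDPRESS_DB_PASSWORD="$(aws ssm get-parameter --name /wordpress/db/password --with-decryption --query Parameter.Value --output text)"
WORDPRESS_DB_HOST="$(aws ssm get-parameter --name /wordpress/db/endpoint --query Parameter.Value --output text)"
export WORDPRESS_DB_PASSWORD WORDPRESS_DB_HOST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;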

&lt;p&gt;Automatic daily snapshots should be enabled. Sometimes you might need to recover from a snapshot instead of using point-in-time recovery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Public Access and Content Delivery
&lt;/h3&gt;

&lt;p&gt;In order to expose the website to the internet the combination of the following resources will be used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Application Load Balancer&lt;/strong&gt;. The load balancer will distribute the load to the EC2 instances in the autoscaling group. In case of really high traffic it can be replaced with a Network Load Balancer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFront&lt;/strong&gt; is a CDN that will deliver traffic to users from the closest edge location and provide front-end caching to reduce the load on the compute layer. It will also be configured to deliver static files directly from the S3 bucket (&lt;code&gt;URI: /wp-content/uploads/*&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;This area includes multiple items:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;CloudFront&lt;/strong&gt; comes with &lt;strong&gt;AWS Shield&lt;/strong&gt; Standard enabled. No configuration is needed at this point. However, we need to create a proper S3 bucket policy to allow reading objects only with a certain CloudFront origin access identity. &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html"&gt;Here is the manual.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Web Application Firewall&lt;/strong&gt; (WAF) with a set of rules is to be used to protect the ALB from L7 (application) attacks. A good practice is to create an IP ruleset and a custom ACL linked to it, so you have the ability to blacklist IPs manually (e.g. during DDoS attacks). &lt;a href="https://docs.aws.amazon.com/waf/latest/developerguide/waf-ip-set-using.html"&gt;Here is the manual&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KMS&lt;/strong&gt; was skipped to simplify the schema, but it is recommended to create a customer-managed key for every resource and use it to encrypt data. Resources to be covered are the following:

&lt;ol&gt;
&lt;li&gt;S3 buckets&lt;/li&gt;
&lt;li&gt;EBS volumes of EC2 instances&lt;/li&gt;
&lt;li&gt;RDS volumes and snapshots&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Macie&lt;/strong&gt; is a &lt;a href="https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html"&gt;great ML-powered solution&lt;/a&gt; to audit the files in S3 buckets and detect any published sensitive data (e.g. PII, passwords, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Certificate Manager&lt;/strong&gt; (ACM) is probably one of the oldest services in AWS. It allows creating SSL certificates to enable traffic encryption. The certificates are to be attached to the CloudFront distribution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security groups&lt;/strong&gt; were skipped in the diagram to avoid visual complexity. 4 security groups are to be created:

&lt;ol&gt;
&lt;li&gt;ALB - allow access to the load balancer. Port 80 is to be open to 0.0.0.0/0. Since we will do SSL termination at the CloudFront level, there is no need to listen on port 443. Transit encryption is optional inside AWS networks.&lt;/li&gt;
&lt;li&gt;Compute - allow access from loadbalancer to EC2. Port 80 open to &lt;code&gt;alb-security-group-id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;EFS - ports 111 and 2049 (TCP + UDP) are to be opened to &lt;code&gt;compute-security-group-id&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;RDS - port 3306 (TCP) is to be opened to &lt;code&gt;compute-security-group-id&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM role&lt;/strong&gt; and &lt;strong&gt;instance profile&lt;/strong&gt; attached to the EC2 instances must have sufficient permissions to find and mount EFS volumes, plus permissions to read/write the S3 bucket for static files. Use the &lt;a href="https://awspolicygen.s3.amazonaws.com/policygen.html"&gt;IAM policy generator&lt;/a&gt; to create the IAM policy.&lt;/li&gt;
&lt;/ol&gt;
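
&lt;p&gt;As a sketch, the compute group from the list above could be expressed in terraform like this (the names and variables are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "aws_security_group" "compute" {
  name   = "wordpress-compute"
  vpc_id = var.vpc_id
}

# Allow HTTP only from the load balancer's security group,
# not from the whole internet
resource "aws_security_group_rule" "compute_http_from_alb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.compute.id
  source_security_group_id = var.alb_security_group_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The EFS and RDS groups follow the same pattern with &lt;code&gt;aws_security_group.compute.id&lt;/code&gt; as the source.&lt;/p&gt;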

&lt;h3&gt;
  
  
  Compute
&lt;/h3&gt;

&lt;p&gt;I use an autoscaling group to start instances in multiple availability zones, with a scaling policy based on the built-in CPU utilization metric sent to CloudWatch. A standard approach: if the load goes up, we need more servers to handle it.&lt;/p&gt;

&lt;p&gt;The missing piece of the puzzle is the AMI "golden image" that will be used to start the instances in the autoscaling group. The AMI has to have NGINX and PHP installed, with the list of required modules enabled. A great tool to brew one is &lt;a href="https://www.packer.io/"&gt;hashicorp packer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, the userdata script should find the EFS mount point in the current Availability Zone and mount it to the NGINX WebServer root as I mentioned earlier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation possibilities
&lt;/h3&gt;

&lt;p&gt;I am not only a big fan of &lt;a href="https://www.terraform.io/"&gt;hashicorp terraform&lt;/a&gt;, I'm also one of its early adopters, so it is my main go-to &lt;code&gt;Infrastructure as Code&lt;/code&gt; tool. However, all the resources I use are supported by other IaC solutions such as &lt;code&gt;AWS CloudFormation&lt;/code&gt; and &lt;code&gt;AWS CDK&lt;/code&gt;. You should definitely use one to avoid losing track of the resources you create.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This solution satisfies the following requirements to safely run your website in AWS Cloud:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;High availability (multi AZ provisioning)&lt;/li&gt;
&lt;li&gt;On-demand capacity (autoscaling)&lt;/li&gt;
&lt;li&gt;Encryption at rest and in transit (KMS + SSL)&lt;/li&gt;
&lt;li&gt;Least access privilege (IAM, Security Groups with explicit rules)&lt;/li&gt;
&lt;li&gt;Data safety (AWS Backup + Automatic snapshotting of RDS)&lt;/li&gt;
&lt;li&gt;Security audit (Macie)&lt;/li&gt;
&lt;li&gt;Secure communications (AWS Shield + WAF + 3-tier networking)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is relatively easy to deploy code updates. All you need to do is copy the updated files over to the NFS share. No restarts are required.&lt;/p&gt;

&lt;p&gt;The bottleneck here can be the network limitations of NFS. In case of really high traffic you might need to adjust the throughput settings, but be aware of the additional costs.&lt;/p&gt;

</description>
      <category>wordpress</category>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>hosting</category>
    </item>
  </channel>
</rss>
