<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sunbird Analytics</title>
    <description>The latest articles on DEV Community by Sunbird Analytics (@sunbirdlabs).</description>
    <link>https://dev.to/sunbirdlabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870626%2Fec8c0405-4009-4b8d-87ff-051f67e1d7b1.png</url>
      <title>DEV Community: Sunbird Analytics</title>
      <link>https://dev.to/sunbirdlabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sunbirdlabs"/>
    <language>en</language>
    <item>
      <title>How to Securely Migrate PostgreSQL to AWS RDS with Zero Downtime (Docker &amp; Terraform)</title>
      <dc:creator>Sunbird Analytics</dc:creator>
      <pubDate>Sun, 12 Apr 2026 14:30:16 +0000</pubDate>
      <link>https://dev.to/sunbirdlabs/how-to-securely-migrate-postgresql-to-aws-rds-with-zero-downtime-docker-terraform-3n4i</link>
      <guid>https://dev.to/sunbirdlabs/how-to-securely-migrate-postgresql-to-aws-rds-with-zero-downtime-docker-terraform-3n4i</guid>
      <description>&lt;h1&gt;
  
  
  How to Migrate PostgreSQL to AWS RDS with Zero Downtime (Docker &amp;amp; Terraform)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Introduction &amp;amp; Source Code
&lt;/h2&gt;

&lt;p&gt;This article demonstrates a professional, zero-downtime homogeneous database migration from a local "on-premise" PostgreSQL database to AWS RDS. &lt;/p&gt;

&lt;p&gt;It reproduces the process that AWS DMS performs "under the hood" for a homogeneous migration. Instead of relying on the managed service, this architecture implements PostgreSQL native logical replication (Publication/Subscription) directly, operating securely over a Site-to-Site IPsec VPN tunnel. A Python script simulates live order traffic to prove that data continues to sync throughout the migration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can download the full source code for this project on GitHub:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/JoeyAlpha5/postgres-on-prem-aws-cloud-migration" rel="noopener noreferrer"&gt;Download the Source Code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaz5szg7jxyy171ddipn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaz5szg7jxyy171ddipn.png" alt="JetBrains Discount Code" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  2. Architecture Overview
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. The On-Premise Environment
&lt;/h3&gt;

&lt;p&gt;Simulated using Docker, this environment contains the source PostgreSQL database, a pgAdmin dashboard, and a live Python application generating mock e-commerce transactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3tpkedwaivarc6mdtf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3tpkedwaivarc6mdtf5.png" alt="Architectural diagram of the local on-premise Docker setup" width="800" height="861"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 1: Simulated on-premise environment using interconnected Docker containers.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. The Cloud Environment
&lt;/h3&gt;

&lt;p&gt;Provisioned via Terraform, this environment features a secure VPC, private subnets, an RDS PostgreSQL instance, and a Virtual Private Gateway (VGW) ready to receive the VPN connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vundkc6chg0ekcby15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vundkc6chg0ekcby15.png" alt="Architectural diagram of the AWS cloud environment" width="800" height="867"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 2: Cloud infrastructure layout provisioned dynamically via Terraform.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. The Combined Migration Architecture
&lt;/h3&gt;

&lt;p&gt;The strongSwan container establishes an IPsec tunnel to the AWS VGW, allowing the AWS RDS instance to securely reach back into the local Docker network and subscribe to the live transaction feed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7uv6ebusqa2wmjin2g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7uv6ebusqa2wmjin2g2.png" alt="Combined architecture diagram showing the VPN tunnel" width="800" height="554"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 3: End-to-end logical replication architecture connected over a secure Site-to-Site VPN tunnel.&lt;/em&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker &amp;amp; Docker Compose:&lt;/strong&gt; To run the local environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform:&lt;/strong&gt; To provision the AWS infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI:&lt;/strong&gt; Configured with credentials that have permission to create VPCs, RDS instances, and VPNs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;psql:&lt;/strong&gt; Installed locally (or accessible via your pgAdmin container) to execute the schema dump.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  4. Step-by-Step Execution Guide
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Phase 1: Start the On-Premise Infrastructure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the on-prem directory: &lt;code&gt;cd on-prem&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Start the Docker containers: &lt;code&gt;docker-compose up -d --build&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify the live traffic generator is working by checking the logs: &lt;code&gt;docker logs -f fake_live_traffic&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open your browser and navigate to &lt;code&gt;http://localhost:5050&lt;/code&gt; to access pgAdmin.&lt;/li&gt;
&lt;li&gt;Log in using &lt;code&gt;admin@sunbirdanalytics.com&lt;/code&gt; and password &lt;code&gt;sunbird_analytics&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add the local server using host &lt;code&gt;source_db&lt;/code&gt;, username &lt;code&gt;sunbird_analytics&lt;/code&gt;, and password &lt;code&gt;sunbird_analytics&lt;/code&gt;. You will see the &lt;code&gt;live_orders&lt;/code&gt; table actively populating.&lt;/li&gt;
&lt;/ol&gt;
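&lt;p&gt;If you prefer the terminal to pgAdmin, the same check can be run directly against the container (a sketch using the container name and credentials listed above):&lt;/p&gt;

```shell
# Count rows in live_orders; rerunning this should show the number climbing
docker exec source_db psql -U sunbird_analytics -d sunbird_analytics \
  -c "SELECT count(*) FROM live_orders;"
```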
&lt;h3&gt;
  
  
  Phase 2: Gather Required IP Addresses
&lt;/h3&gt;

&lt;p&gt;Before provisioning the cloud, you need two critical IP addresses.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Find your Public IP:&lt;/strong&gt; Run &lt;code&gt;curl ifconfig.me&lt;/code&gt; in your terminal. Save this for Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find your Source DB Docker IP:&lt;/strong&gt; Run &lt;code&gt;docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' source_db&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Open &lt;code&gt;scripts/main.md&lt;/code&gt; and replace the IP address in the &lt;code&gt;CREATE SUBSCRIPTION&lt;/code&gt; connection string with this Docker IP.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Phase 3: Provision the AWS Infrastructure
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the cloud setup directory: &lt;code&gt;cd ../cloud-setup&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Initialize Terraform: &lt;code&gt;terraform init&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Deploy the infrastructure (replace the placeholder with your actual public IP from Phase 2): &lt;code&gt;terraform apply -var="my_ip=YOUR_PUBLIC_IP_HERE"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Type &lt;code&gt;yes&lt;/code&gt; to confirm. This will take roughly 5-10 minutes to provision the RDS instance and VPN.&lt;/li&gt;
&lt;li&gt;Once complete, retrieve your new RDS endpoint via the AWS Console or by running:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws rds describe-db-instances &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--db-instance-identifier&lt;/span&gt; target-cloud-db &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"DBInstances[0].Endpoint.Address"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Phase 4: Map Your Infrastructure with Sunbird Insyte
&lt;/h3&gt;

&lt;p&gt;Once &lt;code&gt;terraform apply&lt;/code&gt; finishes and everything is successfully provisioned on AWS, you can visualize your new cloud environment.&lt;/p&gt;

&lt;p&gt;Use &lt;strong&gt;&lt;a href="https://sunbirdanalytics.com/insyte/index.html" rel="noopener noreferrer"&gt;Sunbird Insyte&lt;/a&gt;&lt;/strong&gt; to automatically map your newly provisioned infrastructure. It generates a clear overview of every deployed resource, giving you total visibility into your active cloud setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv616eh0xvuan54p4g95h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv616eh0xvuan54p4g95h.png" alt="Dashboard view of Sunbird Insyte" width="800" height="423"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 4: Verifying the deployed AWS VPC and resources automatically with Sunbird Insyte.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Phase 5: Configure the Site-to-Site VPN Tunnel
&lt;/h3&gt;

&lt;p&gt;AWS has generated the keys for your VPN tunnel, but your local Docker container needs them to establish the connection.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into the AWS Management Console and navigate to the &lt;strong&gt;VPC Dashboard&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Site-to-Site VPN Connections&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select your newly created VPN and click &lt;strong&gt;Download configuration&lt;/strong&gt; (Vendor: Generic).&lt;/li&gt;
&lt;li&gt;Open the downloaded file and find the &lt;strong&gt;IPSec Tunnel #1&lt;/strong&gt; section to locate your &lt;strong&gt;Pre-Shared Key (PSK)&lt;/strong&gt; and the &lt;strong&gt;Virtual Private Gateway IP&lt;/strong&gt; (&lt;code&gt;&amp;lt;AWS_TUNNEL_IP&amp;gt;&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;vpn_config/ipsec.secrets&lt;/code&gt; in your project folder:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   &amp;lt;YOUR_PUBLIC_IP&amp;gt; &amp;lt;AWS_TUNNEL_IP&amp;gt; : PSK "YOUR_PRE_SHARED_KEY_HERE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;5. Update &lt;code&gt;vpn_config/ipsec.conf&lt;/code&gt; in your project folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   leftid=&amp;lt;YOUR_PUBLIC_IP&amp;gt;
   ...
   right=&amp;lt;AWS_TUNNEL_IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6. Start the tunnel: &lt;code&gt;docker restart aws_ipsec_vpn&lt;/code&gt;, then &lt;code&gt;docker exec aws_ipsec_vpn ipsec up aws-tunnel-1&lt;/code&gt;.&lt;/p&gt;
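&lt;p&gt;Before moving on, it is worth confirming that the tunnel actually came up. strongSwan's status command, run inside the VPN container, should report the connection as ESTABLISHED:&lt;/p&gt;

```shell
# Show IKE/IPsec status for every configured tunnel
docker exec aws_ipsec_vpn ipsec statusall
```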

&lt;h3&gt;
  
  
  Phase 6: Inject Local Network Routes
&lt;/h3&gt;

&lt;p&gt;Even with the tunnel "Up," the pgAdmin and Postgres containers still have no route telling them to send AWS-bound traffic through the VPN container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find VPN Container IP: &lt;code&gt;docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' aws_ipsec_vpn&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add Route to pgadmin and local db containers so they know to send traffic destined for AWS through the VPN tunnel:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; root pgadmin_dashboard ip route add 10.0.0.0/16 via &amp;lt;VPN_IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; root source_db apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; root source_db apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; iproute2
   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; root source_db ip route add 10.0.0.0/16 via &amp;lt;VPN_IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Test connectivity: from the pgAdmin container, ping the AWS RDS endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; pgadmin_dashboard ping &amp;lt;YOUR_RDS_ENDPOINT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. If the ping succeeds, you have established a secure connection from your local Docker environment to AWS! You can now also connect to the RDS instance from pgAdmin using host &lt;code&gt;&amp;lt;YOUR_RDS_ENDPOINT&amp;gt;&lt;/code&gt;, username &lt;code&gt;SunbirdAdmin&lt;/code&gt;, and the password &lt;code&gt;sunbird_analytics&lt;/code&gt; you set in Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0kk05fzgw90a7wh3m4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0kk05fzgw90a7wh3m4r.png" alt="pgAdmin interface screenshot" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 5: Connecting directly to the AWS cloud server via the local pgAdmin dashboard.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Phase 7: Execute the Migration Using PostgreSQL Publication and Subscription
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1. Schema Migration
&lt;/h4&gt;

&lt;p&gt;Export the schema from your Docker Postgres:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;source_db pg_dump &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-U&lt;/span&gt; sunbird_analytics &lt;span class="nt"&gt;-d&lt;/span&gt; sunbird_analytics &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; schema.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Import the schema into AWS RDS: open &lt;code&gt;schema.sql&lt;/code&gt; and copy its entire contents. Then, in pgAdmin, open a query window connected to the &lt;code&gt;postgres&lt;/code&gt; database on the cloud server, paste the &lt;code&gt;schema.sql&lt;/code&gt; content, and execute it. This creates all the tables and structures needed on AWS.&lt;/p&gt;
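&lt;p&gt;If you have &lt;code&gt;psql&lt;/code&gt; installed locally (see Prerequisites), you can skip the copy-paste and load the dump over the tunnel instead. This is a sketch; substitute the endpoint retrieved in Phase 3:&lt;/p&gt;

```shell
# Apply the schema dump directly to the RDS instance
psql -h &lt;YOUR_RDS_ENDPOINT&gt; -U SunbirdAdmin -d postgres -f schema.sql
```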

&lt;h4&gt;
  
  
  2. On the Source DB (Local Docker)
&lt;/h4&gt;

&lt;p&gt;Create the "Publisher." This tells your local DB to start broadcasting every change to the transaction logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Run this in the 'sunbird_analytics' database on Localhost&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;PUBLICATION&lt;/span&gt; &lt;span class="n"&gt;sunbird_migration_pub&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;ALL&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
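&lt;p&gt;Before moving to the target side, you can confirm the prerequisites on the source: logical replication requires &lt;code&gt;wal_level = logical&lt;/code&gt;, and the publication should list the expected tables. These are standard PostgreSQL checks:&lt;/p&gt;

```sql
-- Must return 'logical' for publications to stream changes
SHOW wal_level;

-- Lists every table covered by the publication
SELECT * FROM pg_publication_tables WHERE pubname = 'sunbird_migration_pub';
```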



&lt;h4&gt;
  
  
  3. On the Target DB (AWS RDS)
&lt;/h4&gt;

&lt;p&gt;Get the IP address of your local Docker Postgres container. This is the "host" that AWS will connect to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt; source_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the "Subscriber." This tells AWS to reach through the VPN and start sucking in data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Run this in the 'sunbird_analytics' database on AWS RDS&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;SUBSCRIPTION&lt;/span&gt; &lt;span class="n"&gt;sunbird_migration_pub&lt;/span&gt;
&lt;span class="k"&gt;CONNECTION&lt;/span&gt; &lt;span class="s1"&gt;'host=172.20.0.2 port=5432 user=sunbird_analytics password=sunbird_analytics dbname=sunbird_analytics'&lt;/span&gt;
&lt;span class="n"&gt;PUBLICATION&lt;/span&gt; &lt;span class="n"&gt;sunbird_migration_pub&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
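&lt;p&gt;Once the subscription exists, both sides expose built-in views for monitoring replication health; these are standard PostgreSQL catalog views, not project-specific:&lt;/p&gt;

```sql
-- On the source (local Docker): one row per connected subscriber
SELECT application_name, state, sent_lsn, replay_lsn FROM pg_stat_replication;

-- On the target (AWS RDS): last WAL position received and when
SELECT subname, received_lsn, latest_end_time FROM pg_stat_subscription;
```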



&lt;h4&gt;
  
  
  4. Reset Sequences (CRITICAL)
&lt;/h4&gt;

&lt;p&gt;Postgres logical replication copies table rows but not sequence values, so the ID sequences on AWS never advance. You must manually "bump" them so the next ID generated on RDS is correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run this on AWS RDS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- This script resets all sequences to the current max ID + 1&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;setval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pg_get_serial_sequence&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'live_orders'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;coalesce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;live_orders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
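&lt;p&gt;The one-liner above covers the &lt;code&gt;live_orders&lt;/code&gt; table. If your schema grows, the same idea can be generalised with a loop over every serial-backed column in the public schema (a hedged sketch, not part of the original scripts):&lt;/p&gt;

```sql
DO $$
DECLARE rec record;
BEGIN
  FOR rec IN
    SELECT c.relname AS tbl, a.attname AS col
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE n.nspname = 'public'
      AND c.relkind = 'r'
      AND a.attnum > 0
      AND NOT a.attisdropped
      AND pg_get_serial_sequence(quote_ident(c.relname), a.attname) IS NOT NULL
  LOOP
    -- Same setval pattern as above, applied per table/column
    EXECUTE format(
      'SELECT setval(pg_get_serial_sequence(%L, %L), coalesce(max(%I), 1)) FROM %I',
      rec.tbl, rec.col, rec.col, rec.tbl);
  END LOOP;
END $$;
```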






&lt;h2&gt;
  
  
  Need Help With Your Infrastructure?
&lt;/h2&gt;

&lt;p&gt;Navigating complex database migrations and modernizing your cloud infrastructure doesn't have to be a solo mission. &lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://sunbirdanalytics.com/solutions/index.html" rel="noopener noreferrer"&gt;Explore Our Cloud Solutions at Sunbird Analytics&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>database</category>
      <category>docker</category>
    </item>
    <item>
      <title>The Free AWS Risk, Compliance, FinOps &amp; Auditing Platform</title>
      <dc:creator>Sunbird Analytics</dc:creator>
      <pubDate>Fri, 10 Apr 2026 23:38:54 +0000</pubDate>
      <link>https://dev.to/sunbirdlabs/the-free-aws-risk-compliance-finops-auditing-platform-4jpb</link>
      <guid>https://dev.to/sunbirdlabs/the-free-aws-risk-compliance-finops-auditing-platform-4jpb</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/zbkOKhiwSW0"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Need for Automated Cloud Governance&lt;/strong&gt;&lt;br&gt;
Maintaining a secure and cost-effective AWS environment manually is virtually impossible as your infrastructure scales. By leveraging Sunbird Insyte, teams can instantly identify security vulnerabilities and FinOps optimisation targets through a unified dashboard. Let's break down how simple it is to run a comprehensive scan of your cloud infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Running Your First Infrastructure Audit&lt;/strong&gt;&lt;br&gt;
To begin, navigate to the Infrastructure Audits section in the Sunbird Insyte console. From there, click the Run Audit Scan button. A configuration modal will prompt you to select your target AWS region. In our example, we select us-east-1 and click Confirm &amp;amp; Run.&lt;/p&gt;

&lt;p&gt;The system will queue your scan for processing. After a few moments, clicking Refresh Scan Status will update the status indicator to a green "Succeeded", confirming that your environment data has been successfully ingested and analysed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Reviewing Security Vulnerabilities&lt;/strong&gt;&lt;br&gt;
With the scan complete, head over to the Security tab located under the Governance menu. The Security Posture dashboard immediately highlights the risk overview.&lt;/p&gt;

&lt;p&gt;For example, a typical audit might reveal a total of 135 findings, with a specific focus on 5 Critical Risks. The dashboard clearly flags severe issues, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS LAMBDA:&lt;/strong&gt; Potential secrets found in Lambda function source code (e.g., hardcoded Secret Keywords).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS:&lt;/strong&gt; Potential secrets detected within ECS task definition environment variables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM:&lt;/strong&gt; Highly permissive policies attached directly to users, or overly broad administrative access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3:&lt;/strong&gt; Buckets with public-read access policies exposing data to the public internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Uncovering FinOps Opportunities&lt;/strong&gt;&lt;br&gt;
Cloud optimisation isn't just about security; it's also about managing costs. Switching to the FinOps tab provides deep visibility into your cloud spend. The dashboard displays a straightforward Cost Insights panel showcasing the current 30-day cost—such as $29.00 in our demo environment—alongside the forecasted spend.&lt;/p&gt;

&lt;p&gt;Below the overview, Sunbird Insyte breaks down your cost distribution across different AWS services and maps them directly to actionable Strategy Overviews. Some highly effective recommendations generated by the platform include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Right-sizing task allocations and utilising Fargate Spot for ECS workloads.&lt;/li&gt;
&lt;li&gt;Monitoring utilisation and right-sizing based on precise metrics for EC2.&lt;/li&gt;
&lt;li&gt;Implementing S3 Lifecycle policies and storage tiering for long-term storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Exporting Actionable PDFs&lt;/strong&gt;&lt;br&gt;
The real value of an audit is in how easily the data can be shared with stakeholders and engineers. On both the Security and FinOps pages, Sunbird Insyte provides an Export PDF Report button. Clicking this generates a beautifully formatted, comprehensive PDF document outlining every finding, risk analysis, and FinOps recommendation ready to be saved and distributed.&lt;/p&gt;

&lt;p&gt;Stop guessing about your cloud security posture and monthly bill. Try running your first automated audit today with Sunbird Insyte.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sunbirdanalytics.co.za/insyte/index.html" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Are You Wasting 70% of Your AWS Budget on Non-Prod Instances?</title>
      <dc:creator>Sunbird Analytics</dc:creator>
      <pubDate>Fri, 10 Apr 2026 13:08:51 +0000</pubDate>
      <link>https://dev.to/sunbirdlabs/are-you-wasting-70-of-your-aws-budget-on-non-prod-instances-110e</link>
      <guid>https://dev.to/sunbirdlabs/are-you-wasting-70-of-your-aws-budget-on-non-prod-instances-110e</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. The Voluntary Cloud Tax&lt;/strong&gt;&lt;br&gt;
Watching an AWS bill speedrun your IT budget is pure panic. Stop paying the voluntary cloud tax for dev servers that nobody turned off on Friday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Idle Math&lt;/strong&gt;&lt;br&gt;
Let's look at the math: there are 168 hours in a week, and your developers are allegedly working 40 of them. That means for almost 130 hours a week, your non-production servers are just sitting there. They are active for maybe a quarter of the time.&lt;/p&gt;

&lt;p&gt;For the other 76% of the time, they sit completely idle, doing nothing but quietly burning company cash in a data center somewhere.&lt;/p&gt;
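&lt;p&gt;The arithmetic behind those figures is easy to check (a quick sketch; 40 working hours per week is the assumption from above):&lt;/p&gt;

```python
HOURS_PER_WEEK = 24 * 7   # 168 hours in a calendar week
ACTIVE_HOURS = 40         # hours the dev environment is actually in use

idle_hours = HOURS_PER_WEEK - ACTIVE_HOURS   # hours nobody is using it
idle_share = idle_hours / HOURS_PER_WEEK     # fraction of the week spent idle

print(f"{idle_hours} idle hours/week ({idle_share:.0%} of the time)")
```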

&lt;p&gt;&lt;strong&gt;3. The Baseline Architecture&lt;/strong&gt;&lt;br&gt;
Here is a baseline setup: we've got a standard Application Load Balancer sitting in front of an Auto Scaling Group with three T3 mediums. If we run a cost breakdown right now, it wants to charge us $108 a month. For a development server, that's a bit too much.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzri6i56571waw5gx2t8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzri6i56571waw5gx2t8h.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the complete Terraform configuration used to provision the baseline architecture, including the Launch Template, Auto Scaling Group, and networking components.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 1. Fetch the latest Amazon Linux OS&lt;/span&gt;
&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"amazon_linux"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;owners&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"amazon"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"al2023-ami-2023.*-x86_64"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Minimal Networking for the Load Balancer&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"dev_vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"dev_subnet_a"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1a"&lt;/span&gt;
  &lt;span class="nx"&gt;map_public_ip_on_launch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"dev_subnet_b"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.2.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1b"&lt;/span&gt;
  &lt;span class="nx"&gt;map_public_ip_on_launch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"dev_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="c1"&gt;# Allow inbound HTTP traffic&lt;/span&gt;
  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Allow all outbound traffic (so the servers can download updates)&lt;/span&gt;
  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_internet_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"dev_igw"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table"&lt;/span&gt; &lt;span class="s2"&gt;"dev_public_rt"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="nx"&gt;gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_internet_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_igw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"a"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_public_rt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"b"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_public_rt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 3. The Launch Template&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_launch_template"&lt;/span&gt; &lt;span class="s2"&gt;"dev_web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name_prefix&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-web-template"&lt;/span&gt;
  &lt;span class="nx"&gt;image_id&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amazon_linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.medium"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="c1"&gt;# This script automatically installs a web server when the instance boots!&lt;/span&gt;
  &lt;span class="nx"&gt;user_data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;base64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
              #!/bin/bash
              yum update -y
              yum install -y httpd
              systemctl start httpd
              systemctl enable httpd

              # Grab the private IP of the specific EC2 instance
              INSTANCE_IP=$(hostname -I | awk '{print $1}')

              # Build the webpage
              echo "&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;" &amp;gt; /var/www/html/index.html
              echo "&amp;lt;h1&amp;gt;Hello from the Dev Environment!&amp;lt;/h1&amp;gt;" &amp;gt;&amp;gt; /var/www/html/index.html
              echo "&amp;lt;p&amp;gt;Traffic is currently being handled by Server IP: $INSTANCE_IP&amp;lt;/p&amp;gt;" &amp;gt;&amp;gt; /var/www/html/index.html
              echo "&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;" &amp;gt;&amp;gt; /var/www/html/index.html
&lt;/span&gt;&lt;span class="no"&gt;              EOF
&lt;/span&gt;  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 4. The Auto Scaling Group (Deploys 3 Instances)&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_autoscaling_group"&lt;/span&gt; &lt;span class="s2"&gt;"dev_web_asg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-web-asg"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_zone_identifier&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;desired_capacity&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;min_size&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;max_size&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

  &lt;span class="nx"&gt;launch_template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_launch_template&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_web&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"$Latest"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;target_group_arns&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_web_tg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 5. The Load Balancer&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb"&lt;/span&gt; &lt;span class="s2"&gt;"dev_alb"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-web-alb"&lt;/span&gt;
  &lt;span class="nx"&gt;internal&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"application"&lt;/span&gt;
  &lt;span class="nx"&gt;security_groups&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;subnets&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_subnet_b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_target_group"&lt;/span&gt; &lt;span class="s2"&gt;"dev_web_tg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-web-tg"&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# The Listener (Tells the ALB where to send traffic)&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_listener"&lt;/span&gt; &lt;span class="s2"&gt;"dev_listener"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"80"&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;

  &lt;span class="nx"&gt;default_action&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"forward"&lt;/span&gt;
    &lt;span class="nx"&gt;target_group_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_web_tg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run &lt;code&gt;terraform apply&lt;/code&gt; and deploy our setup. Terraform does its thing, we check the AWS console, and the instances are running. If we hit the load balancer URL, we get our test page. Refresh it a couple of times and the server IP on the page changes, so traffic is routing across all three instances. The architecture is solid, but the billing department is still crying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Implementing Scheduled Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let's cut the AWS bill by treating the servers like office lights: automatically turning them off when everyone goes home, using just a couple of lines of Terraform.&lt;/p&gt;

&lt;p&gt;Here's the magic trick: scheduled scaling. We drop two &lt;code&gt;aws_autoscaling_schedule&lt;/code&gt; resources into our configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# --- SCHEDULED SCALING (The Cost Saver) ---&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_autoscaling_schedule"&lt;/span&gt; &lt;span class="s2"&gt;"scale_down"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;scheduled_action_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"scale-down-evening"&lt;/span&gt;
  &lt;span class="nx"&gt;min_size&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;max_size&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;desired_capacity&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="nx"&gt;recurrence&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0 18 * * 1-5"&lt;/span&gt; &lt;span class="c1"&gt;# 6 PM Mon-Fri&lt;/span&gt;
  &lt;span class="nx"&gt;autoscaling_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_autoscaling_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_web_asg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_autoscaling_schedule"&lt;/span&gt; &lt;span class="s2"&gt;"scale_up"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;scheduled_action_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"scale-up-morning"&lt;/span&gt;
  &lt;span class="nx"&gt;min_size&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;max_size&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;desired_capacity&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="nx"&gt;recurrence&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0 8 * * 1-5"&lt;/span&gt; &lt;span class="c1"&gt;# 8 AM Mon-Fri&lt;/span&gt;
  &lt;span class="nx"&gt;autoscaling_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_autoscaling_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_web_asg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first block tells the group to scale down to zero instances at 6 PM every weekday. The second block tells it to wake back up and spin up three instances at 8 AM every weekday. One caveat: these recurrence times are evaluated in UTC unless you set the schedule's &lt;code&gt;time_zone&lt;/code&gt; argument, so adjust the cron expressions for your office hours.&lt;/p&gt;

&lt;p&gt;We run a quick &lt;code&gt;terraform apply&lt;/code&gt; to push the update. Check the AWS console again under the Auto Scaling group, and our instances officially have a bedtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. The Cost Savings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's actually look at the math to see if it was worth it. To calculate our baseline cost without the schedule, we run a standard Infracost breakdown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;infracost breakdown &lt;span class="nt"&gt;--path&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command returns our $108/month estimate based on instances running 24/7. But what happens when we apply our bedtime? We can feed Infracost a custom usage file (&lt;code&gt;infracost-usage.yml&lt;/code&gt;) that simulates our roughly 50-hour weekday schedule (8 AM to 6 PM) instead of the default 730-hour month.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# infracost-usage.yml&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.1&lt;/span&gt;
&lt;span class="na"&gt;resource_usage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;aws_autoscaling_group.dev_web_asg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;instances&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt; &lt;span class="c1"&gt;# This simulates 40 hours/week out of the standard 730-hour month&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
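&lt;p&gt;Why is &lt;code&gt;instances: 1&lt;/code&gt; a fair stand-in? Three instances running the 8 AM to 6 PM weekday schedule accumulate roughly as many instance-hours as a single always-on instance. A quick back-of-the-envelope check (the ~21.7 weekdays/month figure is an average I'm assuming, not from the Infracost output):&lt;/p&gt;

```shell
# 3 instances x 10 hours/day x ~21.7 weekdays/month
HOURS=$(awk 'BEGIN { printf "%.0f", 3 * 10 * 21.7 }')
echo "$HOURS instance-hours/month (vs. 730 for one 24/7 instance)"
```

&lt;p&gt;That lands at 651 instance-hours, within about 11% of the 730-hour month Infracost assumes for a single instance, so the usage-file approximation errs slightly on the high side.&lt;/p&gt;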



&lt;p&gt;We run the breakdown again, this time passing in the custom usage file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;infracost breakdown &lt;span class="nt"&gt;--path&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--usage-file&lt;/span&gt; infracost-usage.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our estimated bill drops from $108 to $47. It took 5 minutes of code to cut the bill in half.&lt;/p&gt;
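&lt;p&gt;As a sanity check on those Infracost figures, we can do the arithmetic by hand. This assumes the us-east-1 on-demand rate of about $0.0416/hour for t3.medium (verify against current AWS pricing); the gap between these compute-only numbers and the $108/$47 estimates is the ALB and other fixed costs:&lt;/p&gt;

```shell
RATE=0.0416  # assumed t3.medium on-demand $/hr in us-east-1
FULL=$(awk -v r="$RATE" 'BEGIN { printf "%.0f", 3 * 730 * r }')        # 3 instances, 24/7
SCHED=$(awk -v r="$RATE" 'BEGIN { printf "%.0f", 3 * 10 * 21.7 * r }') # 10 hrs/day, weekdays only
echo "compute only: \$$FULL/mo around the clock vs. \$$SCHED/mo on the schedule"
```

&lt;p&gt;Compute cost drops from about $91/month to about $27/month, which tracks with the overall bill falling from $108 to $47.&lt;/p&gt;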

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>Amazon S3 Files: The End of Data Silos (And the Security Risks to Watch)</title>
      <dc:creator>Sunbird Analytics</dc:creator>
      <pubDate>Thu, 09 Apr 2026 22:19:30 +0000</pubDate>
      <link>https://dev.to/sunbirdlabs/amazon-s3-files-the-end-of-data-silos-and-the-security-risks-to-watch-4l39</link>
      <guid>https://dev.to/sunbirdlabs/amazon-s3-files-the-end-of-data-silos-and-the-security-risks-to-watch-4l39</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. What is Amazon S3 Files?&lt;/strong&gt;&lt;br&gt;
For years, a fundamental division has existed in cloud storage: you either used object storage (like Amazon S3) for massive scale and low costs, or block/file storage (like EFS or EBS) for applications that require a standard file system. Syncing data between the two was a tedious process requiring custom pipelines.&lt;/p&gt;

&lt;p&gt;AWS has erased that boundary with the introduction of Amazon S3 Files. S3 Files is a shared file system feature that connects any AWS compute resource directly with your data in Amazon S3. It provides fast, direct access to your S3 buckets as files with complete NFS file system semantics, bringing the simplicity of a file system to the limitless scale of S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Game-Changing Benefits for Cloud Workloads&lt;/strong&gt;&lt;br&gt;
By effectively turning your S3 bucket into a traditional file system, you instantly eliminate duplicate storage. Data engineers, ML models, and containerized applications can read and write to the same central S3 bucket in real time.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;No Code Changes Required:&lt;/u&gt; Standard Python libraries, shell scripts, and native ML frameworks can interact with S3 directly, oblivious to the fact that it's object storage on the backend.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Massive Scalability:&lt;/u&gt; S3 Files supports up to 25,000 compute resources (EC2, EKS, ECS, Lambda, Fargate) accessing the same dataset simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Cost Optimization:&lt;/u&gt; By avoiding data replication between object stores and file systems, AWS claims S3 Files can deliver up to 90% lower costs. Intelligent caching ensures only active working sets are loaded onto high-performance layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Uncovering the New Security Risks&lt;/strong&gt;&lt;br&gt;
While the operational advantages are massive, S3 Files introduces complex new attack vectors. Exposing object storage through a network file system interface means your network security boundary is now inextricably tied to your data security boundary.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Over-Permissive Network Access:&lt;/u&gt; S3 Files requires mount targets inside your VPC. If your security groups are misconfigured, unauthorized resources—or bad actors who have breached an EC2 instance—could silently mount your S3 bucket and access or exfiltrate your entire dataset using standard OS commands.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Identity and Access Management (IAM) Gaps:&lt;/u&gt; Organizations typically lock down S3 using bucket policies. However, S3 Files uses a combination of IAM policies, file system policies, and Access Points. Misaligning these layers can result in "shadow access" where users bypass intended bucket restrictions via the file system mount.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Ransomware Threats:&lt;/u&gt; Because S3 Files allows standard file writes and overwrites, a compromised container with write access to an S3 file system could potentially encrypt or delete files at a massive scale, behaving exactly like a traditional on-premise ransomware attack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Securing S3 Files with Sunbird Insyte&lt;/strong&gt;&lt;br&gt;
Adopting Amazon S3 Files means your compliance and security audits must evolve immediately. The days of simply checking S3 Bucket Policies are over; you now have to audit VPC endpoints, NFS security groups, and S3 File system access points in tandem.&lt;/p&gt;

&lt;p&gt;This is where Sunbird Insyte steps in. Insyte continuously monitors your AWS infrastructure and automatically flags risky configurations related to S3 Files. Our platform checks for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exposed mount targets with overly broad security group rules.&lt;/li&gt;
&lt;li&gt;Misconfigurations between IAM policies and file system access points.&lt;/li&gt;
&lt;li&gt;Missing encryption settings for data in transit and at rest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of manually querying your environment to see who has mounted what, Insyte gives you a single pane of glass to verify that your new S3 file systems align with strict compliance frameworks like ISO 27001 and SOC 2.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>news</category>
      <category>security</category>
    </item>
  </channel>
</rss>
