<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: rizasaputra</title>
    <description>The latest articles on DEV Community by rizasaputra (@rizasaputra).</description>
    <link>https://dev.to/rizasaputra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F91851%2Fd60187ed-cfd8-4bc1-be26-0af8817166d5.jpg</url>
      <title>DEV Community: rizasaputra</title>
      <link>https://dev.to/rizasaputra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rizasaputra"/>
    <language>en</language>
    <item>
      <title>Securely Exporting MongoDB Atlas Snapshots to S3 Over AWS PrivateLink</title>
      <dc:creator>rizasaputra</dc:creator>
      <pubDate>Sat, 21 Feb 2026 06:36:05 +0000</pubDate>
      <link>https://dev.to/rizasaputra/securely-exporting-mongodb-atlas-snapshots-to-s3-over-aws-privatelink-37m4</link>
      <guid>https://dev.to/rizasaputra/securely-exporting-mongodb-atlas-snapshots-to-s3-over-aws-privatelink-37m4</guid>
      <description>&lt;p&gt;Organizations with strict regulatory compliance requirements often need to ensure that sensitive backup data never traverses the public internet. Exporting Atlas snapshots to your own S3 bucket also provides additional control over retention policies, lifecycle management, and disaster recovery strategies beyond Atlas's built-in backup capabilities.&lt;/p&gt;

&lt;p&gt;MongoDB Atlas supports exporting snapshots to S3 over AWS PrivateLink—keeping all traffic on private IP addresses within the AWS network. Atlas exposes a dedicated object storage private endpoint for backup exports; you create it via the Atlas API/CLI, and Atlas provisions and manages the underlying AWS PrivateLink infrastructure for you.&lt;/p&gt;

&lt;p&gt;This guide provides a step-by-step implementation to meet compliance requirements around data movement and network isolation by exporting Atlas snapshots to S3 over PrivateLink.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Limitations
&lt;/h2&gt;

&lt;p&gt;Before diving in, understand the constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS only&lt;/strong&gt;: Your Atlas cluster must be hosted on AWS (not GCP or Azure)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Same-region only&lt;/strong&gt;: Your Atlas cluster and S3 bucket must be in the same AWS region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster tier&lt;/strong&gt;: Requires an M10+ (dedicated) Atlas cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional cost&lt;/strong&gt;: The PrivateLink connection adds $0.01/hour&lt;/li&gt;
&lt;/ul&gt;
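You can sanity-check a cluster against these constraints up front. The sketch below is an assumption-laden helper (the function name and the `jq` paths into the cluster JSON are ours, not guaranteed by the CLI), so verify the paths against what `atlas clusters describe --output json` actually returns for your CLI version:

```shell
# Check that a cluster is AWS-hosted and at least M10 before starting.
# The jq paths below are an assumption about the JSON shape returned by
# `atlas clusters describe -o json`; adjust them to your actual output.
check_cluster_prereqs() {
  cluster_json=$(atlas clusters describe "$1" --projectId "$2" --output json)
  provider=$(echo "$cluster_json" | jq -r '.replicationSpecs[0].regionConfigs[0].providerName')
  tier=$(echo "$cluster_json" | jq -r '.replicationSpecs[0].regionConfigs[0].electableSpecs.instanceSize')
  echo "Provider: $provider, tier: $tier"
  [ "$provider" = "AWS" ] || { echo "Cluster is not hosted on AWS" >&2; return 1; }
  case "$tier" in
    M0|M2|M5) echo "Cluster tier too small (needs M10+)" >&2; return 1 ;;
  esac
}
```

Call it as `check_cluster_prereqs "$CLUSTER_NAME" "$PROJECT_ID"` once both variables are set.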

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB Atlas cluster (M10+) &lt;strong&gt;hosted on AWS&lt;/strong&gt; with Cloud Backup enabled&lt;/li&gt;
&lt;li&gt;AWS account with permissions to create IAM roles and S3 buckets&lt;/li&gt;
&lt;li&gt;Atlas cluster and S3 bucket in the same AWS region&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Phase 1: Installing Required Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installing AWS CLI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  macOS
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;awscli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or use the official installer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/AWSCLIV2.pkg"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"AWSCLIV2.pkg"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;installer &lt;span class="nt"&gt;-pkg&lt;/span&gt; AWSCLIV2.pkg &lt;span class="nt"&gt;-target&lt;/span&gt; /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Linux
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
unzip awscliv2.zip
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure your credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing Atlas CLI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  macOS
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;mongodb-atlas-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Linux
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Debian/Ubuntu&lt;/span&gt;
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; mongodb-atlas-cli_latest_linux_x86_64.deb

&lt;span class="c"&gt;# RHEL/CentOS/Fedora&lt;/span&gt;
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.rpm
&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;-i&lt;/span&gt; mongodb-atlas-cli_latest_linux_x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 2: Authenticating with Atlas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating Atlas API Keys
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log into MongoDB Atlas&lt;/li&gt;
&lt;li&gt;In your project, expand the sidebar and choose "Project Identity &amp;amp; Access" → "Applications" → "API Keys"&lt;/li&gt;
&lt;li&gt;Click "Create API Key"&lt;/li&gt;
&lt;li&gt;Name it descriptively (e.g., "PrivateLink S3 Export")&lt;/li&gt;
&lt;li&gt;Assign "Project Owner" role (required for PrivateLink and backup exports)&lt;/li&gt;
&lt;li&gt;Save both the Public Key and Private Key securely&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf1hfj8m3ulklj272jgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf1hfj8m3ulklj272jgp.png" alt="Creating Atlas API key" width="800" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Authenticate the CLI
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select "API Keys" and enter your credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;? Select authentication &lt;span class="nb"&gt;type&lt;/span&gt;: API Keys &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;existing automations&lt;span class="o"&gt;)&lt;/span&gt;
? Public API Key: &amp;lt;your-public-key&amp;gt;
? Private API Key: &amp;lt;your-private-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas auth &lt;span class="nb"&gt;whoami&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 3: Creating the S3 Bucket
&lt;/h2&gt;

&lt;p&gt;Set your region and create a private bucket for snapshots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set your region (must match your Atlas cluster region)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;REPLACE-WITH-YOUR-AWS-REGION-CODE-LIKE-us-east-1
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;REPLACE-WITH-YOUR-ATLAS-SNAPSHOT-BUCKET-NAME

&lt;span class="c"&gt;# Create the bucket&lt;/span&gt;
aws s3 mb s3://&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="nv"&gt;$AWS_REGION&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
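`aws s3 mb` will reject an invalid name, but a local check gives a clearer error before any API call. This helper (the function name is ours) covers a simplified subset of the S3 naming rules: 3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending alphanumeric:

```shell
# Simplified local check of S3 bucket naming rules (subset only).
validate_bucket_name() {
  name=$1
  case "$name" in
    *[!a-z0-9.-]*) echo "invalid character in bucket name" >&2; return 1 ;;
  esac
  if [ "${#name}" -lt 3 ] || [ "${#name}" -gt 63 ]; then
    echo "bucket name must be 3-63 characters" >&2; return 1
  fi
  case "$name" in
    [a-z0-9]*[a-z0-9]) return 0 ;;
    *) echo "bucket name must start and end alphanumeric" >&2; return 1 ;;
  esac
}

# Example with a placeholder name; in the walkthrough you would pass "$BUCKET_NAME".
validate_bucket_name "my-atlas-snapshots" && echo "bucket name looks valid"
```

Run `validate_bucket_name "$BUCKET_NAME"` before the `aws s3 mb` call above.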



&lt;p&gt;Verify public access is blocked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api get-public-access-block &lt;span class="nt"&gt;--bucket&lt;/span&gt; &lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All four settings should be &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcd4obryfh2iobvn109p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcd4obryfh2iobvn109p.png" alt="S3 bucket public access blocked" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;
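If any of the four settings is not `true`, you can enable them explicitly; `put-public-access-block` overwrites the existing configuration, so re-running it on an already locked-down bucket is harmless. Wrapped in a small helper (the function name is ours):

```shell
# Explicitly block all public access on the snapshot bucket.
# Usage: block_public_access "$BUCKET_NAME"
block_public_access() {
  aws s3api put-public-access-block \
    --bucket "$1" \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
}
```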

&lt;h2&gt;
  
  
  Phase 4: Setting Up Unified AWS Access
&lt;/h2&gt;

&lt;p&gt;Atlas uses a unified AWS access model where you authorize an IAM role once, and it can be used across multiple Atlas features (backups, encryption, etc.). &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create Atlas Cloud Provider Access Role
&lt;/h3&gt;

&lt;p&gt;Set your project ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the access role in Atlas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas cloudProviders accessRoles aws create &lt;span class="nt"&gt;--projectId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json | &lt;span class="nb"&gt;tee &lt;/span&gt;atlas-access-role.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"providerName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"atlasAWSAccountArn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::012345678999:root"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"atlasAssumedRoleExternalId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xxxxxxxx-1234-5678-9000-xxxxyyyyzzzz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"createdDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-21T03:55:10Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"featureUsages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"roleId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xxxxyyyyzzzz123412341234"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save these values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;ATLAS_AWS_ACCOUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.atlasAWSAccountArn'&lt;/span&gt; atlas-access-role.json&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;EXTERNAL_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.atlasAssumedRoleExternalId'&lt;/span&gt; atlas-access-role.json&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ROLE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.roleId'&lt;/span&gt; atlas-access-role.json&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Atlas AWS Account: &lt;/span&gt;&lt;span class="nv"&gt;$ATLAS_AWS_ACCOUNT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"External ID: &lt;/span&gt;&lt;span class="nv"&gt;$EXTERNAL_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Role ID: &lt;/span&gt;&lt;span class="nv"&gt;$ROLE_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create IAM Role in AWS
&lt;/h3&gt;

&lt;p&gt;Create the trust policy using the values from Atlas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; atlas-trust-policy.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "&lt;/span&gt;&lt;span class="nv"&gt;$ATLAS_AWS_ACCOUNT&lt;/span&gt;&lt;span class="sh"&gt;"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "&lt;/span&gt;&lt;span class="nv"&gt;$EXTERNAL_ID&lt;/span&gt;&lt;span class="sh"&gt;"
        }
      }
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ROLE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;REPLACE-WITH-YOUR-AWS-IAM-ROLE-NAME

aws iam create-role &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; &lt;span class="nv"&gt;$ROLE_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file://atlas-trust-policy.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"Role for MongoDB Atlas unified AWS access"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Attach S3 Permissions
&lt;/h3&gt;

&lt;p&gt;Create the S3 policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; atlas-s3-policy.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSnapshotExport",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;",
        "arn:aws:s3:::&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;/*"
      ]
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and attach the policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get your AWS account ID&lt;/span&gt;
&lt;span class="nv"&gt;AWS_ACCOUNT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws sts get-caller-identity &lt;span class="nt"&gt;--query&lt;/span&gt; Account &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Create policy&lt;/span&gt;
aws iam create-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-name&lt;/span&gt; MongoDBAtlasSnapshotExportPolicy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file://atlas-s3-policy.json

&lt;span class="c"&gt;# Attach to role&lt;/span&gt;
aws iam attach-role-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; &lt;span class="nv"&gt;$ROLE_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::&lt;span class="nv"&gt;$AWS_ACCOUNT_ID&lt;/span&gt;:policy/MongoDBAtlasSnapshotExportPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Authorize the IAM Role in Atlas
&lt;/h3&gt;

&lt;p&gt;Get the IAM role ARN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;IAM_ROLE_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws iam get-role &lt;span class="nt"&gt;--role-name&lt;/span&gt; &lt;span class="nv"&gt;$ROLE_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Role.Arn'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"IAM Role ARN: &lt;/span&gt;&lt;span class="nv"&gt;$IAM_ROLE_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Authorize the role in Atlas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas cloudProviders accessRoles aws authorize &lt;span class="nv"&gt;$ROLE_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--projectId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--iamAssumedRoleArn&lt;/span&gt; &lt;span class="nv"&gt;$IAM_ROLE_ARN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Optional: Add S3 Bucket Policy
&lt;/h3&gt;

&lt;p&gt;For defense in depth, restrict bucket access to only the IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; bucket-policy.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyAtlasRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "&lt;/span&gt;&lt;span class="nv"&gt;$IAM_ROLE_ARN&lt;/span&gt;&lt;span class="sh"&gt;"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;",
        "arn:aws:s3:::&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;/*"
      ]
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;aws s3api put-bucket-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--bucket&lt;/span&gt; &lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy&lt;/span&gt; file://bucket-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 5: Creating Object Storage Private Endpoint
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create the Private Endpoint
&lt;/h3&gt;

&lt;p&gt;Convert AWS region to Atlas format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Convert ap-southeast-3 to AP_SOUTHEAST_3 (Atlas format)&lt;/span&gt;
&lt;span class="nv"&gt;ATLAS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$AWS_REGION&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'[:lower:]'&lt;/span&gt; &lt;span class="s1"&gt;'[:upper:]'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'-'&lt;/span&gt; &lt;span class="s1"&gt;'_'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Atlas Region: &lt;/span&gt;&lt;span class="nv"&gt;$ATLAS_REGION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the private endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; private-endpoint-payload.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "cloudProvider": "AWS",
  "regionName": "&lt;/span&gt;&lt;span class="nv"&gt;$ATLAS_REGION&lt;/span&gt;&lt;span class="sh"&gt;"
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Create the private endpoint&lt;/span&gt;
atlas api cloudBackups createBackupPrivateEndpoint &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cloudProvider&lt;/span&gt; AWS &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; private-endpoint-payload.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; json | &lt;span class="nb"&gt;tee &lt;/span&gt;private-endpoint-response.json

&lt;span class="c"&gt;# Extract endpoint ID&lt;/span&gt;
&lt;span class="nv"&gt;PRIVATE_ENDPOINT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.id'&lt;/span&gt; private-endpoint-response.json&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Private Endpoint ID: &lt;/span&gt;&lt;span class="nv"&gt;$PRIVATE_ENDPOINT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monitor Private Endpoint Status
&lt;/h3&gt;

&lt;p&gt;The private endpoint goes through several states: &lt;code&gt;INITIATING&lt;/code&gt; → &lt;code&gt;PENDING_ACCEPTANCE&lt;/code&gt; → &lt;code&gt;ACTIVE&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas api cloudBackups getBackupPrivateEndpoint &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cloudProvider&lt;/span&gt; AWS &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--endpointId&lt;/span&gt; &lt;span class="nv"&gt;$PRIVATE_ENDPOINT_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until the status is &lt;code&gt;ACTIVE&lt;/code&gt; before proceeding. This typically takes a few minutes as Atlas provisions the PrivateLink infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0fs9l7errekvf4fcxvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0fs9l7errekvf4fcxvn.png" alt="Private endpoint active" width="800" height="101"&gt;&lt;/a&gt;&lt;/p&gt;
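A small polling loop saves re-running the status command by hand. A sketch under two assumptions: the response JSON exposes a top-level `status` field, and `FAILED` is the terminal error state; confirm both against your own output:

```shell
# Poll the backup private endpoint until it reports ACTIVE.
# Assumes a top-level "status" field in the response JSON.
wait_for_endpoint() {
  while true; do
    status=$(atlas api cloudBackups getBackupPrivateEndpoint \
      --cloudProvider AWS \
      --groupId "$PROJECT_ID" \
      --endpointId "$PRIVATE_ENDPOINT_ID" \
      --output json | jq -r '.status')
    echo "Endpoint status: $status"
    case "$status" in
      ACTIVE) return 0 ;;
      FAILED) echo "Endpoint provisioning failed" >&2; return 1 ;;
    esac
    sleep 30
  done
}
```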

&lt;h2&gt;
  
  
  Phase 6: Configuring Export Bucket with Private Networking
&lt;/h2&gt;

&lt;p&gt;Now that the private endpoint is active, you can create an export bucket that uses it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Export Bucket with Private Networking
&lt;/h3&gt;

&lt;p&gt;Create the export bucket configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; export-bucket-payload.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "bucketName": "&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;",
  "cloudProvider": "AWS",
  "iamRoleId": "&lt;/span&gt;&lt;span class="nv"&gt;$ROLE_ID&lt;/span&gt;&lt;span class="sh"&gt;",
  "requirePrivateNetworking": true
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Create the export bucket&lt;/span&gt;
atlas api cloudBackups createExportBucket &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; export-bucket-payload.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; json | &lt;span class="nb"&gt;tee &lt;/span&gt;export-bucket-response.json

&lt;span class="c"&gt;# Extract bucket ID&lt;/span&gt;
&lt;span class="nv"&gt;BUCKET_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'._id'&lt;/span&gt; export-bucket-response.json&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Export Bucket ID: &lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;requirePrivateNetworking&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt;, Atlas uses the object storage private endpoint you created earlier. All exports will flow through PrivateLink.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify Export Bucket Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas api cloudBackups getExportBucket &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--exportBucketId&lt;/span&gt; &lt;span class="nv"&gt;$BUCKET_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that &lt;code&gt;requirePrivateNetworking&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hox70g7vjwj4t1bdvix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hox70g7vjwj4t1bdvix.png" alt="Export bucket configured with private networking" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 7: Exporting Snapshots Over PrivateLink
&lt;/h2&gt;

&lt;p&gt;With everything configured, you can now export snapshots. All exports will automatically use PrivateLink.&lt;/p&gt;

&lt;h3&gt;
  
  
  List Available Snapshots
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;REPLACE-WITH-YOUR-CLUSTER-NAME
atlas backups snapshots list &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="nt"&gt;--projectId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows all snapshots with IDs and timestamps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwlw5y83etm3ev3mx8br.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwlw5y83etm3ev3mx8br.png" alt="Available snapshots" width="800" height="61"&gt;&lt;/a&gt;&lt;/p&gt;
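When scripting the export end to end, you can pull the most recent snapshot ID instead of copying it by hand. The `.results[0].id` path is an assumption about the CLI's JSON output (including its sort order), so check it with `--output json` first:

```shell
# Fetch the ID of the first snapshot in the list (field names and
# ordering assumed; verify against your own `--output json` result).
latest_snapshot_id() {
  atlas backups snapshots list "$CLUSTER_NAME" \
    --projectId "$PROJECT_ID" \
    --output json | jq -r '.results[0].id'
}
```

Then `export SNAPSHOT_ID=$(latest_snapshot_id)` feeds directly into the export payload below.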

&lt;h3&gt;
  
  
  Manual Export
&lt;/h3&gt;

&lt;p&gt;Export a specific snapshot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create export payload&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SNAPSHOT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;REPLACE-WITH-YOUR-SELECTED-SNAPSHOT-ID

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; export-payload.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "snapshotId": "&lt;/span&gt;&lt;span class="nv"&gt;$SNAPSHOT_ID&lt;/span&gt;&lt;span class="sh"&gt;",
  "exportBucketId": "&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_ID&lt;/span&gt;&lt;span class="sh"&gt;",
  "customData": [
    { "key": "exported_via", "value": "privateLink" }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Start the export&lt;/span&gt;
atlas api cloudBackups createBackupExport &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--clusterName&lt;/span&gt; &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; export-payload.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monitor Export Progress
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas backups exports &lt;span class="nb"&gt;jobs &lt;/span&gt;list &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--projectId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Status progresses: &lt;code&gt;QUEUED&lt;/code&gt; → &lt;code&gt;IN_PROGRESS&lt;/code&gt; → &lt;code&gt;SUCCESSFUL&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Export duration depends on snapshot size and network throughput.&lt;/p&gt;
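&lt;p&gt;Rather than re-running the list command by hand, you can poll until the job finishes. A minimal sketch; &lt;code&gt;get_export_state&lt;/code&gt; is a hypothetical stand-in for parsing the state out of the real CLI output:&lt;/p&gt;

```shell
# Stub for illustration only: in practice this would parse the state from
# `atlas backups exports jobs list ...` (e.g. with -o json and a JSON tool).
get_export_state() { echo "SUCCESSFUL"; }

# Poll until the export job leaves QUEUED/IN_PROGRESS.
while true; do
  STATE=$(get_export_state)
  echo "export state: $STATE"
  if [ "$STATE" = "SUCCESSFUL" ] || [ "$STATE" = "FAILED" ]; then
    break
  fi
  sleep 30
done
```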

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvrmumd3efkfid7lgcae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvrmumd3efkfid7lgcae.png" alt="Export job in progress" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify in S3
&lt;/h3&gt;

&lt;p&gt;Once the export job state changes to &lt;code&gt;SUCCESSFUL&lt;/code&gt;, you can verify the export in S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;ls &lt;/span&gt;s3://&lt;span class="nv"&gt;$BUCKET_NAME&lt;/span&gt;/ &lt;span class="nt"&gt;--recursive&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exported snapshots are organized under the following path structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/exported_snapshots/&amp;lt;orgUUID&amp;gt;/&amp;lt;projectUUID&amp;gt;/&amp;lt;clusterName&amp;gt;/&amp;lt;initiationDateOfSnapshot&amp;gt;/&amp;lt;timestamp&amp;gt;/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
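&lt;p&gt;If you script the verification, you can build the expected prefix from your own identifiers. All the values below are hypothetical placeholders:&lt;/p&gt;

```shell
# Hypothetical placeholder values for illustration.
ORG_UUID="11111111-2222-3333-4444-555555555555"
PROJECT_UUID="66666666-7777-8888-9999-000000000000"
CLUSTER_NAME="Cluster0"
SNAPSHOT_DATE="2026-02-20"
TIMESTAMP="1771650965"

# Assemble the expected key prefix for this snapshot's export.
PREFIX="exported_snapshots/${ORG_UUID}/${PROJECT_UUID}/${CLUSTER_NAME}/${SNAPSHOT_DATE}/${TIMESTAMP}/"
echo "$PREFIX"
# Then, for example: aws s3 ls "s3://$BUCKET_NAME/$PREFIX"
```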



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegqianebo9tsumacohu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegqianebo9tsumacohu8.png" alt="Exported snapshots" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 8: Automating Exports with Backup Policies
&lt;/h2&gt;

&lt;p&gt;In addition to manual exports, you can configure Atlas to export snapshots automatically on a schedule.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Automatic Export Schedule
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create schedule update payload&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; schedule-payload.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "autoExportEnabled": true,
  "export": {
    "exportBucketId": "&lt;/span&gt;&lt;span class="nv"&gt;$BUCKET_ID&lt;/span&gt;&lt;span class="sh"&gt;",
    "frequencyType": "monthly"
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Update the backup schedule&lt;/span&gt;
atlas api cloudBackups updateBackupSchedule &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--clusterName&lt;/span&gt; &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; schedule-payload.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Available frequency types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;monthly&lt;/code&gt;: Export once per month&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yearly&lt;/code&gt;: Export once per year&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Atlas automatically exports snapshots matching the frequency type.&lt;/p&gt;

&lt;h3&gt;
  
  
  View Current Schedule
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;atlas api cloudBackups getBackupSchedule &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--clusterName&lt;/span&gt; &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groupId&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows both snapshot and export schedules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcxyxm3023kz5qlfo5hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcxyxm3023kz5qlfo5hf.png" alt="Backup schedule with export policy" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This guide covered the complete implementation of MongoDB Atlas snapshot exports to S3 over AWS PrivateLink. The key steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create S3 bucket and IAM role with minimal permissions&lt;/li&gt;
&lt;li&gt;Create object storage private endpoint and export bucket in Atlas with &lt;code&gt;requirePrivateNetworking: true&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Export snapshots and configure automated export scheduling using backup policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key implementation considerations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Atlas cluster must run on AWS in the same region as the destination S3 bucket&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;requirePrivateNetworking: true&lt;/code&gt; flag enables PrivateLink for all exports to that bucket&lt;/li&gt;
&lt;li&gt;Atlas automatically manages PrivateLink infrastructure—no manual VPC endpoint setup required&lt;/li&gt;
&lt;li&gt;Use unified AWS access (cloud provider access roles) for IAM role setup&lt;/li&gt;
&lt;li&gt;PrivateLink connection costs $0.01/hour (billed separately); data processing charge included in $0.125/GB export price&lt;/li&gt;
&lt;li&gt;Native Atlas backup schedule policies eliminate the need for external automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture is appropriate for organizations with regulatory requirements mandating private-only networking for data movement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference Documentation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/" rel="noopener noreferrer"&gt;MongoDB Atlas - Export Cloud Backup Snapshots&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/atlas/security/set-up-unified-aws-access/" rel="noopener noreferrer"&gt;MongoDB Atlas - Set Up Unified AWS Access&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/atlas/cli/current/" rel="noopener noreferrer"&gt;MongoDB Atlas CLI - Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/atlas/billing/cluster-configuration-costs/#snapshot-export" rel="noopener noreferrer"&gt;MongoDB Atlas - Snapshot Export Cost&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mongodb</category>
      <category>aws</category>
    </item>
    <item>
      <title>Rebuilding an Old Website Using Kiro (and What Went Wrong)</title>
      <dc:creator>rizasaputra</dc:creator>
      <pubDate>Wed, 31 Dec 2025 13:02:56 +0000</pubDate>
      <link>https://dev.to/rizasaputra/rebuilding-an-old-website-using-kiro-and-what-went-wrong-3p92</link>
      <guid>https://dev.to/rizasaputra/rebuilding-an-old-website-using-kiro-and-what-went-wrong-3p92</guid>
      <description>&lt;p&gt;I rebuilt an old website I hadn’t touched in 7 years using Kiro and technology I never tried. I went from implementing auth in less than 1 hour, to spending 6 days figuring out a simple CRUD with file upload, to a very productive stretch where I finished the remaining CMS and site features from scratch without a framework in 4 days, and finally spending another 3 days painfully cleaning up AI-generated garbage just to make sure everything actually runs in production.&lt;/p&gt;

&lt;p&gt;You’ve probably seen plenty of similar posts about AI coding assistants already, but here’s my experience anyway:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learn the concept first when you know nothing about what you're building&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're trying something new, take time to actually learn the concept. In my case, I spent days stuck on a simple CRUD because I was completely unfamiliar with how Cloudflare R2, Workers, and Cloudflare Image Resizing actually work, so I couldn’t direct the AI to do what I wanted properly. It was only after I spent time reading the docs and understanding the concepts that I could finally understand the mess and garbage I had produced, clean it up, and make it work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Spec driven development is powerful, but…&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I found there are many cases where I’m very clear on what I want to do, and vibe coding without writing any spec is much more productive. For dev teams, I think this means requirement specs written for AI can’t replace specs written by a PM or QA, at least not for the moment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep specs small&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re using spec driven development (Kiro style), I found I can only be productive when there are at most 4 requirements. 2–3 requirements per spec works much better. More than that, I lose the cognitive capability required to read the requirements and design the solution, let alone understand the code being generated. Split your spec. Period.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Steering files are very important&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best hack I found is simply taking time to keep steering files updated. Whenever I finished implementing something, updating the steering files helped the AI understand things that were previously ambiguous, or things where I had already changed my mind.  &lt;/p&gt;

&lt;p&gt;Before this, I often had to correct the output. With updated steering files, the result was much closer to what I wanted from the start. I didn’t really measure it, but the time saving difference was very noticeable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI does silently break things&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even with steering files, and with Kiro searching and reading the codebase before implementing tasks, it still messed things up. It would reinvent implementations instead of reusing existing code (leading to hidden instability), add conflicting CSS rules that went unnoticed until I actually built and deployed, or inject overly strict security configs that made the app unusable. I also found Kiro has a tendency to over-engineer things.&lt;/p&gt;

&lt;p&gt;To understand what actually happened, one trick I used was committing to git frequently so I could compare the overall file diff. For me, this was much easier than trying to follow small, scattered diffs shown during task execution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Checkpointing is underrated&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kiro has a feature I really like: checkpointing. If I was in the middle of implementing something and changed my mind, I could easily restore both the chat and the code to a previous point. It’s basically having an undo button for coding. &lt;/p&gt;

&lt;p&gt;Being able to undo and retry with a different approach easily is extremely valuable, and honestly, it reminded me why I could take some random club from Serie C to European champions in Football Manager back when I still had a life.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT is the steroid booster for Kiro&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Maybe it’s because I’m using Auto model selection so I might be served a weaker model, but I found ChatGPT’s reasoning when engineering the solution is often better.&lt;/p&gt;

&lt;p&gt;Kiro has a tendency to just accept and follow whatever you say, and quickly jump into doing stuff. With Kiro, doing and debugging feels more like trial and error. With ChatGPT, since it’s chat-based, you actually get time to think things through properly.&lt;/p&gt;

&lt;p&gt;In the end, the best setup for me was me and ChatGPT co-bossing Kiro to actually work and develop lol.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Humans beat AI in short-range sprinting&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Did I say vibe coding can be more productive? The only thing that’s sometimes even more productive is implementing or fixing the code myself, especially when I’m already very clear on what needs to be written. I found that humans, or at least I, retain context better than AI on a small project, so I don’t need to keep scanning existing code just to get going. I simply can’t type as fast as the AI, though.&lt;/p&gt;

&lt;p&gt;Before closing this out, here’s the before and after.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Old site&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvaysgx92wjh1yaqtgt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvaysgx92wjh1yaqtgt6.png" alt="Before" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rebuild&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfv86fr9bhjf4t3txugf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfv86fr9bhjf4t3txugf.png" alt="After" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More importantly, I now have a solid foundation to assist others and scale AI coding assistant usage into something more complex.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>kiro</category>
      <category>webdev</category>
      <category>learning</category>
    </item>
    <item>
      <title>Understanding AWS Elastic Beanstalk Worker Timeout</title>
      <dc:creator>rizasaputra</dc:creator>
      <pubDate>Tue, 16 Apr 2019 11:06:03 +0000</pubDate>
      <link>https://dev.to/rizasaputra/understanding-aws-elastic-beanstalk-worker-timeout-42hi</link>
      <guid>https://dev.to/rizasaputra/understanding-aws-elastic-beanstalk-worker-timeout-42hi</guid>
      <description>&lt;p&gt;In case you never heard it before, AWS has this orchestration service called Beanstalk. Basically it means, instead of setting up EC2 instances, load balancers, SQS, CloudWatch, etc manually, you can use this Beanstalk to orchestrate the setup. All you need to do is zip your code (or package it if you use Java), upload the zip, and do some setup from a centralised dashboard and you are done. If you don't need to have a fine control over your setup, this can really help you to get a running system quickly.&lt;/p&gt;

&lt;p&gt;Beanstalk provides two types of environments: web server environments and worker environments. In a web server environment, you get a server configured with your platform of choice (Java, Node.js, Docker, etc.), and you can also set up a load balancer easily. In a worker environment, you get a server and an SQS queue to run heavy background jobs or scheduled jobs. This post focuses on the worker environment.&lt;/p&gt;

&lt;p&gt;A Beanstalk worker instance runs a daemon process that continuously listens to an SQS queue. Whenever it detects a message in the queue, the daemon sends an HTTP POST request locally to &lt;a href="http://localhost/"&gt;http://localhost/&lt;/a&gt; on port 80 with the contents of the queue message in the body. All your application needs to do is perform the long-running task in response to the POST. You can configure the daemon to post to a different URL, connect to an existing queue, and configure the timeouts.&lt;/p&gt;

&lt;p&gt;There are three basic timeouts you can configure from the worker environment dashboard:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Connection timeout&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the number of seconds to wait when trying to establish a new connection with the app. The value can be from 1 to 60 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Inactivity timeout&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the number of seconds we allow our application to do the processing and return a response. You can specify from 1 to 36,000 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Visibility timeout&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the number of seconds to lock the message before it returns to the queue; you can specify from 1 to 43,200 seconds. I personally find this the confusing part, so I'll try my best to explain.&lt;/p&gt;

&lt;p&gt;When the daemon reads a message from the SQS queue, it sends a POST request to your application. If your application can process the request and return a successful response (e.g. HTTP 200) within the Inactivity timeout, great. The whole processing is done, and the message is deleted automatically from the queue.&lt;/p&gt;

&lt;p&gt;However, if your application fails to respond within the Inactivity timeout period, the message is sent back to SQS to be retried once the Visibility timeout is reached, calculated from the time your app started processing the message.&lt;/p&gt;

&lt;p&gt;So, for example, let's say you set the Inactivity timeout to 30 seconds and the Visibility timeout to 60 seconds. You then send a message to the queue at 10:00:00. At 10:00:30, if your application has not responded, the daemon flags the request as failed. At 10:01:00, once the 60 seconds are up, the message becomes visible in the queue again and the whole process repeats until it reaches the Max Retries setting (10 by default).&lt;/p&gt;

&lt;p&gt;Now, what if your application needs 45 seconds to do the heavy background processing in the example above? In this case, at 10:00:00 the request is fired and the processing starts. At 10:00:30, the daemon flags the processing as failed, &lt;strong&gt;but the actual processing still continues in the background&lt;/strong&gt;. At 10:00:45 your app finally responds, but no one is listening. At 10:01:00, the message is back in the queue and the whole heavy processing is repeated even though it actually succeeded. So you will want to set the Inactivity timeout and Visibility timeout to safe values relative to your expected processing time, and keep both values relatively close. The default settings from AWS put the Inactivity timeout at 299 seconds and the Visibility timeout at 300 seconds, differing by only 1 second.&lt;/p&gt;
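&lt;p&gt;The timeline above is easy to sanity-check with a bit of shell arithmetic; the values are the hypothetical settings from the example:&lt;/p&gt;

```shell
# Hypothetical settings from the example above.
INACTIVITY=30   # daemon waits this long for a response
VISIBILITY=60   # SQS keeps the message hidden this long
APP_TIME=45     # what the app actually needs

echo "t=${INACTIVITY}s: daemon flags the request as failed"
echo "t=${VISIBILITY}s: message becomes visible and is retried"

# Work is wasted whenever the app finishes after the daemon gave up.
WASTED=$([ "$APP_TIME" -gt "$INACTIVITY" ] && echo yes || echo no)
echo "processing wasted: $WASTED"
```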

&lt;p&gt;Another thing to be careful about is making sure the Visibility timeout is higher than the Inactivity timeout. Consider this example:&lt;/p&gt;

&lt;p&gt;You set the Inactivity timeout to 60 seconds and the Visibility timeout to 30 seconds, and your app needs 45 seconds of processing time. In this scenario, when processing reaches the 30-second mark, the message is made visible in the queue again and automatically consumed again by your server. Your server ends up doing double work when it is not actually necessary.&lt;/p&gt;

&lt;p&gt;Phew! Now we should be able to configure our worker environment properly to avoid timeouts. But then, everything changed when the Nginx configuration attacked.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nginx
&lt;/h2&gt;

&lt;p&gt;If you are using Node.js as your worker environment platform, AWS provides Nginx out of the box to act as a proxy between the client (here, the daemon) and your app. Nginx introduces another layer of timeout, which by default is 60 seconds (I think; I never really checked, sorry). Now let's dive into an example of how this might cause you problems.&lt;/p&gt;

&lt;p&gt;Let's say your application needs 120 seconds of processing, and you have already set both the Visibility timeout and the Inactivity timeout to some safe number, e.g. 300 seconds. When a request starts being processed at 10:00:00, Nginx times out at 10:01:00. The daemon treats this as a literal error, and thus does not delete the message from the queue. Meanwhile, &lt;strong&gt;your app continues processing the request in the background&lt;/strong&gt; until it reaches 120 seconds and shouts the response to no one, since the daemon is no longer listening. Once the 300 seconds are reached, the whole process repeats.&lt;/p&gt;

&lt;p&gt;There is an easy way to distinguish whether your timeout comes from the Inactivity timeout or the Nginx timeout. Grab the Nginx access log from the worker environment at &lt;code&gt;/var/log/nginx/access.log&lt;/code&gt; and inspect the HTTP response status of the requests processed by your app. If the status is 499, your app hit the Inactivity timeout. If it is 504, there's a good chance your app hit the Nginx timeout limit.&lt;/p&gt;
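&lt;p&gt;A quick way to count the two cases, sketched against a fabricated sample log (the real file lives at &lt;code&gt;/var/log/nginx/access.log&lt;/code&gt; and its exact format may differ):&lt;/p&gt;

```shell
# Fabricated sample log lines for illustration only.
cat > access.log <<'EOF'
127.0.0.1 - - [16/Apr/2019:10:00:30 +0000] "POST / HTTP/1.1" 499 0 "-" "aws-sqsd/2.4"
127.0.0.1 - - [16/Apr/2019:10:05:00 +0000] "POST / HTTP/1.1" 504 0 "-" "aws-sqsd/2.4"
127.0.0.1 - - [16/Apr/2019:10:10:00 +0000] "POST / HTTP/1.1" 200 0 "-" "aws-sqsd/2.4"
EOF

INACTIVITY_HITS=$(grep -c '" 499 ' access.log)   # daemon gave up (Inactivity timeout)
NGINX_HITS=$(grep -c '" 504 ' access.log)        # Nginx itself timed out
echo "inactivity-timeout hits: $INACTIVITY_HITS"
echo "nginx-timeout hits: $NGINX_HITS"
```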

&lt;p&gt;Alright, so we also need to extend the Nginx timeout. However, extending the Nginx timeout is not as easy as the others, since you cannot do it from the AWS Console. You can SSH into your worker instance and change the config directly, but then every time your server is rebuilt you will need to reapply the changes manually. There is a better method.&lt;/p&gt;

&lt;p&gt;You can add a folder named &lt;code&gt;.ebextensions&lt;/code&gt; to your application source code; the folder must have exactly this name. Inside it, add a file named &lt;code&gt;nginx_timeout.config&lt;/code&gt;. You can name the file however you want (a descriptive name is better), but it must end with the &lt;code&gt;.config&lt;/code&gt; extension. Inside that file, simply copy and paste this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;files:
  /etc/nginx/conf.d/timeout.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout 300;
      proxy_send_timeout 300;
      proxy_read_timeout 300;
      send_timeout 300;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This simply creates a file, &lt;code&gt;/etc/nginx/conf.d/timeout.conf&lt;/code&gt;, which Nginx's default standard config reads automatically. The file sets the various Nginx timeout values to 300 seconds instead of the defaults, and that's it! Now our worker environment should be good to go, able to digest all the heavy processing you feed it without hitting timeouts.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beanstalk</category>
      <category>nginx</category>
    </item>
  </channel>
</rss>
