rizasaputra

Securely Exporting MongoDB Atlas Snapshots to S3 Over AWS PrivateLink

Organizations with strict regulatory compliance requirements often need to ensure that sensitive backup data never traverses the public internet. Exporting Atlas snapshots to your own S3 bucket also provides additional control over retention policies, lifecycle management, and disaster recovery strategies beyond Atlas's built-in backup capabilities.

MongoDB Atlas supports exporting snapshots to S3 over AWS PrivateLink—keeping all traffic on private IP addresses within the AWS network. Atlas exposes a dedicated object storage private endpoint for backup exports; you create it via the Atlas API/CLI, and Atlas provisions and manages the underlying AWS PrivateLink infrastructure for you.

This guide provides a step-by-step implementation to meet compliance requirements around data movement and network isolation by exporting Atlas snapshots to S3 over PrivateLink.

Current Limitations

Before diving in, understand the constraints:

  • AWS only: Your Atlas cluster must be hosted on AWS (not GCP or Azure).
  • Same region only: Your Atlas cluster and S3 bucket must be in the same AWS region.
  • Cluster tier: Requires an M10+ Atlas cluster.
  • Additional cost: The PrivateLink connection is billed at $0.01/hour.

Prerequisites

You'll need:

  • MongoDB Atlas cluster (M10+) hosted on AWS with Cloud Backup enabled
  • AWS account with permissions to create IAM roles and S3 buckets
  • Atlas cluster and S3 bucket in the same AWS region

Phase 1: Installing Required Tools

Installing AWS CLI

macOS

brew install awscli

Or using the installer:

curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

Linux

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Configure your credentials:

aws configure

Verify:

aws --version

Installing Atlas CLI

macOS

brew install mongodb-atlas-cli

Linux

# Debian/Ubuntu
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.deb
sudo dpkg -i mongodb-atlas-cli_latest_linux_x86_64.deb

# RHEL/CentOS/Fedora
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.rpm
sudo rpm -i mongodb-atlas-cli_latest_linux_x86_64.rpm

Verify:

atlas --version

Phase 2: Authenticating with Atlas

Creating Atlas API Keys

  1. Log into MongoDB Atlas
  2. In your project, expand the sidebar and go to "Project Identity & Access" → "Applications" → "API Keys"
  3. Click "Create API Key"
  4. Name it descriptively (e.g., "PrivateLink S3 Export")
  5. Assign "Project Owner" role (required for PrivateLink and backup exports)
  6. Save both the Public Key and Private Key securely

Creating Atlas API key

Authenticate the CLI

atlas auth login

Select "API Keys" and enter your credentials:

? Select authentication type: API Keys (for existing automations)
? Public API Key: <your-public-key>
? Private API Key: <your-private-key>

Verify:

atlas auth whoami
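For non-interactive use (CI pipelines, scripts), the Atlas CLI can also read API keys from environment variables instead of the interactive `atlas auth login` flow:

```shell
# Supply API keys via environment variables; the Atlas CLI
# picks these up automatically for subsequent commands
export MONGODB_ATLAS_PUBLIC_API_KEY=<your-public-key>
export MONGODB_ATLAS_PRIVATE_API_KEY=<your-private-key>

# Any atlas command now authenticates with these keys
atlas projects list
```

This is handy for automation, since it avoids storing an interactive session profile on the machine.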

Phase 3: Creating the S3 Bucket

Set your region and create a private bucket for snapshots:

# Set your region (must match your Atlas cluster region)
export AWS_REGION=REPLACE-WITH-YOUR-AWS-REGION-CODE-LIKE-us-east-1
export BUCKET_NAME=REPLACE-WITH-YOUR-ATLAS-SNAPSHOT-BUCKET-NAME

# Create the bucket
aws s3 mb s3://$BUCKET_NAME --region $AWS_REGION

Verify public access is blocked:

aws s3api get-public-access-block --bucket $BUCKET_NAME

All four settings should be true.
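If any of the four flags come back false, you can enforce them explicitly rather than relying on account-level defaults:

```shell
# Explicitly block all forms of public access on the snapshot bucket
aws s3api put-public-access-block \
  --bucket $BUCKET_NAME \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```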

S3 bucket public access blocked

Phase 4: Setting Up Unified AWS Access

Atlas uses a unified AWS access model where you authorize an IAM role once, and it can be used across multiple Atlas features (backups, encryption, etc.).

Step 1: Create Atlas Cloud Provider Access Role

Set your project ID:

export PROJECT_ID=YOUR_PROJECT_ID
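Before creating a new role, you can optionally check whether the project already has authorized access roles you could reuse (the `list` subcommand belongs to the same `accessRoles` command group):

```shell
# List cloud provider access roles already registered in the project
atlas cloudProviders accessRoles list --projectId $PROJECT_ID --output json
```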

Create the access role in Atlas:

atlas cloudProviders accessRoles aws create --projectId $PROJECT_ID --output json | tee atlas-access-role.json

This returns output similar to:

{
  "providerName": "AWS",
  "atlasAWSAccountArn": "arn:aws:iam::012345678999:root",
  "atlasAssumedRoleExternalId": "xxxxxxxx-1234-5678-9000-xxxxyyyyzzzz",
  "createdDate": "2026-02-21T03:55:10Z",
  "featureUsages": [],
  "roleId": "xxxxyyyyzzzz123412341234"
}

Save these values:

ATLAS_AWS_ACCOUNT=$(jq -r '.atlasAWSAccountArn' atlas-access-role.json)
EXTERNAL_ID=$(jq -r '.atlasAssumedRoleExternalId' atlas-access-role.json)
ROLE_ID=$(jq -r '.roleId' atlas-access-role.json)

echo "Atlas AWS Account: $ATLAS_AWS_ACCOUNT"
echo "External ID: $EXTERNAL_ID"
echo "Role ID: $ROLE_ID"

Step 2: Create IAM Role in AWS

Create the trust policy using the values from Atlas:

cat > atlas-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "$ATLAS_AWS_ACCOUNT"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "$EXTERNAL_ID"
        }
      }
    }
  ]
}
EOF

Create the IAM role:

export ROLE_NAME=REPLACE-WITH-YOUR-AWS-IAM-ROLE-NAME

aws iam create-role \
  --role-name $ROLE_NAME \
  --assume-role-policy-document file://atlas-trust-policy.json \
  --description "Role for MongoDB Atlas unified AWS access"

Step 3: Attach S3 Permissions

Create the S3 policy:

cat > atlas-s3-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSnapshotExport",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::$BUCKET_NAME",
        "arn:aws:s3:::$BUCKET_NAME/*"
      ]
    }
  ]
}
EOF

Create and attach the policy:

# Get your AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# Create policy
aws iam create-policy \
  --policy-name MongoDBAtlasSnapshotExportPolicy \
  --policy-document file://atlas-s3-policy.json

# Attach to role
aws iam attach-role-policy \
  --role-name $ROLE_NAME \
  --policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/MongoDBAtlasSnapshotExportPolicy

Step 4: Authorize the IAM Role in Atlas

Get the IAM role ARN:

IAM_ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME \
  --query 'Role.Arn' --output text)

echo "IAM Role ARN: $IAM_ROLE_ARN"

Authorize the role in Atlas:

atlas cloudProviders accessRoles aws authorize $ROLE_ID \
  --projectId $PROJECT_ID \
  --iamAssumedRoleArn $IAM_ROLE_ARN

Optional: Add S3 Bucket Policy

For defense in depth, restrict bucket access to only the IAM role:

cat > bucket-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyAtlasRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "$IAM_ROLE_ARN"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::$BUCKET_NAME",
        "arn:aws:s3:::$BUCKET_NAME/*"
      ]
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket $BUCKET_NAME \
  --policy file://bucket-policy.json

Phase 5: Creating Object Storage Private Endpoint

Create the Private Endpoint

Convert AWS region to Atlas format:

# Convert ap-southeast-3 to AP_SOUTHEAST_3 (Atlas format)
ATLAS_REGION=$(echo $AWS_REGION | tr '[:lower:]' '[:upper:]' | tr '-' '_')
echo "Atlas Region: $ATLAS_REGION"

Create the private endpoint:

cat > private-endpoint-payload.json <<EOF
{
  "cloudProvider": "AWS",
  "regionName": "$ATLAS_REGION"
}
EOF

# Create the private endpoint
atlas api cloudBackups createBackupPrivateEndpoint \
  --cloudProvider AWS \
  --groupId $PROJECT_ID \
  --file private-endpoint-payload.json \
  --output json | tee private-endpoint-response.json

# Extract endpoint ID
PRIVATE_ENDPOINT_ID=$(jq -r '.id' private-endpoint-response.json)
echo "Private Endpoint ID: $PRIVATE_ENDPOINT_ID"

Monitor Private Endpoint Status

The private endpoint moves through several states: INITIATING → PENDING_ACCEPTANCE → ACTIVE

atlas api cloudBackups getBackupPrivateEndpoint \
  --cloudProvider AWS \
  --groupId $PROJECT_ID \
  --endpointId $PRIVATE_ENDPOINT_ID

Wait until the status is ACTIVE before proceeding. This typically takes a few minutes as Atlas provisions the PrivateLink infrastructure.
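Rather than re-running the command by hand, you can poll in a loop. The `.status` field name here is an assumption; check it against the actual JSON returned by the `get` call above:

```shell
# Poll up to 30 times, 30 seconds apart, until the endpoint is ACTIVE
for i in $(seq 1 30); do
  STATUS=$(atlas api cloudBackups getBackupPrivateEndpoint \
    --cloudProvider AWS \
    --groupId $PROJECT_ID \
    --endpointId $PRIVATE_ENDPOINT_ID \
    --output json | jq -r '.status')
  echo "Attempt $i: $STATUS"
  [ "$STATUS" = "ACTIVE" ] && break
  sleep 30
done
```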

Private endpoint active

Phase 6: Configuring Export Bucket with Private Networking

Now that the private endpoint is active, you can create an export bucket that uses it.

Create Export Bucket with Private Networking

Create the export bucket configuration:

cat > export-bucket-payload.json <<EOF
{
  "bucketName": "$BUCKET_NAME",
  "cloudProvider": "AWS",
  "iamRoleId": "$ROLE_ID",
  "requirePrivateNetworking": true
}
EOF

# Create the export bucket
atlas api cloudBackups createExportBucket \
  --groupId $PROJECT_ID \
  --file export-bucket-payload.json \
  --output json | tee export-bucket-response.json

# Extract bucket ID
BUCKET_ID=$(jq -r '._id' export-bucket-response.json)
echo "Export Bucket ID: $BUCKET_ID"

When requirePrivateNetworking is set to true, Atlas uses the object storage private endpoint you created earlier. All exports will flow through PrivateLink.

Verify Export Bucket Configuration

atlas api cloudBackups getExportBucket \
  --groupId $PROJECT_ID \
  --exportBucketId $BUCKET_ID

Confirm that requirePrivateNetworking is set to true.

Export bucket configured with private networking

Phase 7: Exporting Snapshots Over PrivateLink

With everything configured, we can now export snapshots. All exports will automatically use PrivateLink.

List Available Snapshots

export CLUSTER_NAME=REPLACE-WITH-YOUR-CLUSTER-NAME
atlas backups snapshots list $CLUSTER_NAME --projectId $PROJECT_ID

This shows all snapshots with IDs and timestamps.

Available snapshots

Manual Export

Export a specific snapshot:

# Create export payload
export SNAPSHOT_ID=REPLACE-WITH-YOUR-SELECTED-SNAPSHOT-ID

cat > export-payload.json <<EOF
{
  "snapshotId": "$SNAPSHOT_ID",
  "exportBucketId": "$BUCKET_ID",
  "customData": [
    { "key": "exported_via", "value": "privateLink" }
  ]
}
EOF

# Start the export
atlas api cloudBackups createBackupExport \
  --clusterName $CLUSTER_NAME \
  --groupId $PROJECT_ID \
  --file export-payload.json

Monitor Export Progress

atlas backups exports jobs list $CLUSTER_NAME \
  --projectId $PROJECT_ID

Status progresses: QUEUED → IN_PROGRESS → SUCCESSFUL

Export duration depends on snapshot size and network throughput.
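If your Atlas CLI version includes it, the `watch` subcommand blocks until the job finishes, which is convenient in scripts (confirm availability with `atlas backups exports jobs --help`; the export job ID comes from the `createBackupExport` response):

```shell
# Block until the export job completes
atlas backups exports jobs watch <export-job-id> \
  --clusterName $CLUSTER_NAME \
  --projectId $PROJECT_ID
```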

Export job in progress

Verify in S3

Once the export job state changes to SUCCESSFUL, you can verify the export in S3.

aws s3 ls s3://$BUCKET_NAME/ --recursive

Snapshots are organized by path:

/exported_snapshots/<orgUUID>/<projectUUID>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/

Exported snapshots
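To pull an export down locally (for restore testing or archival), copy from the prefix shown in the listing; the path below is illustrative, so substitute the actual prefix from the `aws s3 ls` output:

```shell
# Download all exported snapshot files under the prefix
aws s3 cp "s3://$BUCKET_NAME/exported_snapshots/" ./exported_snapshots/ --recursive
```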

Phase 8: Automating Exports with Backup Policies

In addition to manual exports, you can configure Atlas to export snapshots automatically on a schedule.

Configure Automatic Export Schedule

# Create schedule update payload
cat > schedule-payload.json <<EOF
{
  "autoExportEnabled": true,
  "export": {
    "exportBucketId": "$BUCKET_ID",
    "frequencyType": "monthly"
  }
}
EOF

# Update the backup schedule
atlas api cloudBackups updateBackupSchedule \
  --clusterName $CLUSTER_NAME \
  --groupId $PROJECT_ID \
  --file schedule-payload.json

Available frequency types:

  • monthly: Export once per month
  • yearly: Export once per year

Atlas automatically exports snapshots matching the frequency type.

View Current Schedule

atlas api cloudBackups getBackupSchedule \
  --clusterName $CLUSTER_NAME \
  --groupId $PROJECT_ID

This shows both snapshot and export schedules.
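To focus on just the export-related settings, you can pipe the JSON output through jq (the field names match the update payload used above):

```shell
# Show only the export configuration from the backup schedule
atlas api cloudBackups getBackupSchedule \
  --clusterName $CLUSTER_NAME \
  --groupId $PROJECT_ID \
  --output json | jq '{autoExportEnabled, export}'
```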

Backup schedule with export policy

Summary

This guide covered the complete implementation of MongoDB Atlas snapshot exports to S3 over AWS PrivateLink. The key steps:

  • Create S3 bucket and IAM role with minimal permissions
  • Create object storage private endpoint and export bucket in Atlas with requirePrivateNetworking: true
  • Export snapshots and configure automated export scheduling using backup policies

Key implementation considerations:

  • Atlas cluster must run on AWS in the same region as the destination S3 bucket
  • The requirePrivateNetworking: true flag enables PrivateLink for all exports to that bucket
  • Atlas automatically manages PrivateLink infrastructure—no manual VPC endpoint setup required
  • Use unified AWS access (cloud provider access roles) for IAM role setup
  • PrivateLink connection costs $0.01/hour (billed separately); data processing charge included in $0.125/GB export price
  • Native Atlas backup schedule policies eliminate the need for external automation

This architecture is appropriate for organizations with regulatory requirements mandating private-only networking for data movement.
