Organizations with strict regulatory compliance requirements often need to ensure that sensitive backup data never traverses the public internet. Exporting Atlas snapshots to your own S3 bucket also provides additional control over retention policies, lifecycle management, and disaster recovery strategies beyond Atlas's built-in backup capabilities.
MongoDB Atlas supports exporting snapshots to S3 over AWS PrivateLink—keeping all traffic on private IP addresses within the AWS network. Atlas exposes a dedicated object storage private endpoint for backup exports; you create it via the Atlas API/CLI, and Atlas provisions and manages the underlying AWS PrivateLink infrastructure for you.
This guide provides a step-by-step implementation to meet compliance requirements around data movement and network isolation by exporting Atlas snapshots to S3 over PrivateLink.
Current Limitations
Before diving in, understand the constraints:
- AWS only: Your Atlas cluster must be hosted on AWS (not GCP or Azure)
- Same-region only: Your Atlas cluster and S3 bucket must be in the same AWS region
- Cluster tier: Requires an M10+ Atlas cluster
- Additional cost: The PrivateLink connection is billed at $0.01/hour
Prerequisites
You'll need:
- MongoDB Atlas cluster (M10+) hosted on AWS with Cloud Backup enabled
- AWS account with permissions to create IAM roles and S3 buckets
- Atlas cluster and S3 bucket in the same AWS region
Phase 1: Installing Required Tools
Installing AWS CLI
macOS
brew install awscli
Or using the installer:
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Configure your credentials:
aws configure
Verify:
aws --version
Installing Atlas CLI
macOS
brew install mongodb-atlas-cli
Linux
# Debian/Ubuntu
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.deb
sudo dpkg -i mongodb-atlas-cli_latest_linux_x86_64.deb
# RHEL/CentOS/Fedora
wget https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_latest_linux_x86_64.rpm
sudo rpm -i mongodb-atlas-cli_latest_linux_x86_64.rpm
Verify:
atlas --version
Phase 2: Authenticating with Atlas
Creating Atlas API Keys
- Log into MongoDB Atlas
- Go to your project, expand the sidebar, and choose "Project Identity & Access", then "Applications", then "API Keys"
- Click "Create API Key"
- Name it descriptively (e.g., "PrivateLink S3 Export")
- Assign "Project Owner" role (required for PrivateLink and backup exports)
- Save both the Public Key and Private Key securely
Authenticate the CLI
atlas auth login
Select "API Keys" and enter your credentials:
? Select authentication type: API Keys (for existing automations)
? Public API Key: <your-public-key>
? Private API Key: <your-private-key>
Verify:
atlas auth whoami
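For CI pipelines or other non-interactive use, the Atlas CLI can also pick up API keys from environment variables instead of `atlas auth login` (variable names per the Atlas CLI's documented environment-variable support):

```shell
# Set these in your CI secret store rather than committing them anywhere:
export MONGODB_ATLAS_PUBLIC_API_KEY="<your-public-key>"
export MONGODB_ATLAS_PRIVATE_API_KEY="<your-private-key>"
```

With these set, subsequent `atlas` commands authenticate without an interactive login step.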
Phase 3: Creating the S3 Bucket
Set your region and create a private bucket for snapshots:
# Set your region (must match your Atlas cluster region)
export AWS_REGION=REPLACE-WITH-YOUR-AWS-REGION-CODE-LIKE-us-east-1
export BUCKET_NAME=REPLACE-WITH-YOUR-ATLAS-SNAPSHOT-BUCKET-NAME
# Create the bucket
aws s3 mb s3://$BUCKET_NAME --region $AWS_REGION
Verify public access is blocked:
aws s3api get-public-access-block --bucket $BUCKET_NAME
All four settings should be true.
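You can script this check with jq (a sketch; `check_public_block` is a hypothetical helper name, and jq is assumed to be installed):

```shell
# Exit 0 only when all four Block Public Access settings are true:
check_public_block() {
  jq -e '.PublicAccessBlockConfiguration
         | .BlockPublicAcls and .IgnorePublicAcls
           and .BlockPublicPolicy and .RestrictPublicBuckets' >/dev/null
}

# Usage (requires AWS credentials):
# aws s3api get-public-access-block --bucket "$BUCKET_NAME" | check_public_block \
#   && echo "bucket is fully private"
```

If any setting is false, `aws s3api put-public-access-block` with all four flags set to true enforces the full block.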
Phase 4: Setting Up Unified AWS Access
Atlas uses a unified AWS access model where you authorize an IAM role once, and it can be used across multiple Atlas features (backups, encryption, etc.).
Step 1: Create Atlas Cloud Provider Access Role
Set your project ID:
export PROJECT_ID=YOUR_PROJECT_ID
Create the access role in Atlas:
atlas cloudProviders accessRoles aws create --projectId $PROJECT_ID --output json | tee atlas-access-role.json
This returns a response similar to:
{
"providerName": "AWS",
"atlasAWSAccountArn": "arn:aws:iam::012345678999:root",
"atlasAssumedRoleExternalId": "xxxxxxxx-1234-5678-9000-xxxxyyyyzzzz",
"createdDate": "2026-02-21T03:55:10Z",
"featureUsages": [],
"roleId": "xxxxyyyyzzzz123412341234"
}
Save these values:
ATLAS_AWS_ACCOUNT=$(jq -r '.atlasAWSAccountArn' atlas-access-role.json)
EXTERNAL_ID=$(jq -r '.atlasAssumedRoleExternalId' atlas-access-role.json)
ROLE_ID=$(jq -r '.roleId' atlas-access-role.json)
echo "Atlas AWS Account: $ATLAS_AWS_ACCOUNT"
echo "External ID: $EXTERNAL_ID"
echo "Role ID: $ROLE_ID"
Step 2: Create IAM Role in AWS
Create the trust policy using the values from Atlas:
cat > atlas-trust-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "$ATLAS_AWS_ACCOUNT"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "$EXTERNAL_ID"
}
}
}
]
}
EOF
Create the IAM role:
export ROLE_NAME=REPLACE-WITH-YOUR-AWS-IAM-ROLE-NAME
aws iam create-role \
--role-name $ROLE_NAME \
--assume-role-policy-document file://atlas-trust-policy.json \
--description "Role for MongoDB Atlas unified AWS access"
Step 3: Attach S3 Permissions
Create the S3 policy:
cat > atlas-s3-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSnapshotExport",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::$BUCKET_NAME",
"arn:aws:s3:::$BUCKET_NAME/*"
]
}
]
}
EOF
Create and attach the policy:
# Get your AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Create policy
aws iam create-policy \
--policy-name MongoDBAtlasSnapshotExportPolicy \
--policy-document file://atlas-s3-policy.json
# Attach to role
aws iam attach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/MongoDBAtlasSnapshotExportPolicy
Step 4: Authorize the IAM Role in Atlas
Get the IAM role ARN:
IAM_ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME \
--query 'Role.Arn' --output text)
echo "IAM Role ARN: $IAM_ROLE_ARN"
Authorize the role in Atlas:
atlas cloudProviders accessRoles aws authorize $ROLE_ID \
--projectId $PROJECT_ID \
--iamAssumedRoleArn $IAM_ROLE_ARN
Optional: Add S3 Bucket Policy
For defense in depth, restrict bucket access to only the IAM role:
cat > bucket-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowOnlyAtlasRole",
"Effect": "Allow",
"Principal": {
"AWS": "$IAM_ROLE_ARN"
},
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::$BUCKET_NAME",
"arn:aws:s3:::$BUCKET_NAME/*"
]
}
]
}
EOF
aws s3api put-bucket-policy \
--bucket $BUCKET_NAME \
--policy file://bucket-policy.json
Phase 5: Creating Object Storage Private Endpoint
Create the Private Endpoint
Convert AWS region to Atlas format:
# Convert the AWS region code to Atlas format (e.g., us-east-1 to US_EAST_1)
ATLAS_REGION=$(echo $AWS_REGION | tr '[:lower:]' '[:upper:]' | tr '-' '_')
echo "Atlas Region: $ATLAS_REGION"
Create the private endpoint:
cat > private-endpoint-payload.json <<EOF
{
"cloudProvider": "AWS",
"regionName": "$ATLAS_REGION"
}
EOF
# Create the private endpoint
atlas api cloudBackups createBackupPrivateEndpoint \
--cloudProvider AWS \
--groupId $PROJECT_ID \
--file private-endpoint-payload.json \
--output json | tee private-endpoint-response.json
# Extract endpoint ID
PRIVATE_ENDPOINT_ID=$(jq -r '.id' private-endpoint-response.json)
echo "Private Endpoint ID: $PRIVATE_ENDPOINT_ID"
Monitor Private Endpoint Status
The private endpoint goes through several states: INITIATING → PENDING_ACCEPTANCE → ACTIVE
atlas api cloudBackups getBackupPrivateEndpoint \
--cloudProvider AWS \
--groupId $PROJECT_ID \
--endpointId $PRIVATE_ENDPOINT_ID
Wait until the status is ACTIVE before proceeding. This typically takes a few minutes as Atlas provisions the PrivateLink infrastructure.
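Rather than re-running the status command by hand, you can poll in a loop. A minimal sketch (assumptions: the endpoint supports `--output json` as other commands in this guide do, the response exposes a `status` field, and jq is installed; `wait_for_status` is a hypothetical helper):

```shell
# Poll a command until its output equals the target status (max 30 tries,
# every POLL_INTERVAL seconds, 10 by default).
wait_for_status() {
  target="$1"; shift
  attempts=0
  while [ "$attempts" -lt 30 ]; do
    status=$("$@")
    if [ "$status" = "$target" ]; then
      echo "Reached status: $target"
      return 0
    fi
    attempts=$((attempts + 1))
    sleep "${POLL_INTERVAL:-10}"
  done
  echo "Timed out waiting for: $target" >&2
  return 1
}

# Usage (requires Atlas credentials):
# wait_for_status ACTIVE sh -c 'atlas api cloudBackups getBackupPrivateEndpoint \
#   --cloudProvider AWS --groupId "$PROJECT_ID" \
#   --endpointId "$PRIVATE_ENDPOINT_ID" --output json | jq -r .status'
```

The same helper works for any status-bearing command later in this guide.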
Phase 6: Configuring Export Bucket with Private Networking
Now that the private endpoint is active, you can create an export bucket that uses it.
Create Export Bucket with Private Networking
Create the export bucket configuration:
cat > export-bucket-payload.json <<EOF
{
"bucketName": "$BUCKET_NAME",
"cloudProvider": "AWS",
"iamRoleId": "$ROLE_ID",
"requirePrivateNetworking": true
}
EOF
# Create the export bucket
atlas api cloudBackups createExportBucket \
--groupId $PROJECT_ID \
--file export-bucket-payload.json \
--output json | tee export-bucket-response.json
# Extract bucket ID
BUCKET_ID=$(jq -r '._id' export-bucket-response.json)
echo "Export Bucket ID: $BUCKET_ID"
When requirePrivateNetworking is set to true, Atlas uses the object storage private endpoint you created earlier. All exports will flow through PrivateLink.
Verify Export Bucket Configuration
atlas api cloudBackups getExportBucket \
--groupId $PROJECT_ID \
--exportBucketId $BUCKET_ID
Confirm that requirePrivateNetworking is set to true.
Phase 7: Exporting Snapshots Over PrivateLink
With everything configured, you can now export snapshots. All exports to this bucket automatically use PrivateLink.
List Available Snapshots
export CLUSTER_NAME=REPLACE-WITH-YOUR-CLUSTER-NAME
atlas backups snapshots list $CLUSTER_NAME --projectId $PROJECT_ID
This shows all snapshots with IDs and timestamps.
Manual Export
Export a specific snapshot:
# Create export payload
export SNAPSHOT_ID=REPLACE-WITH-YOUR-SELECTED-SNAPSHOT-ID
cat > export-payload.json <<EOF
{
"snapshotId": "$SNAPSHOT_ID",
"exportBucketId": "$BUCKET_ID",
"customData": [
{ "key": "exported_via", "value": "privateLink" }
]
}
EOF
# Start the export
atlas api cloudBackups createBackupExport \
--clusterName $CLUSTER_NAME \
--groupId $PROJECT_ID \
--file export-payload.json
Monitor Export Progress
atlas backups exports jobs list $CLUSTER_NAME \
--projectId $PROJECT_ID
Status progresses: QUEUED → IN_PROGRESS → SUCCESSFUL
Export duration depends on snapshot size and network throughput.
Verify in S3
Once the export job state changes to SUCCESSFUL, you can verify the export in S3.
aws s3 ls s3://$BUCKET_NAME/ --recursive
Snapshots are organized by path:
/exported_snapshots/<orgUUID>/<projectUUID>/<clusterName>/<initiationDateOfSnapshot>/<timestamp>/
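Since each snapshot export produces many objects, it can be handy to collapse the recursive listing down to the distinct export runs. A sketch that assumes the path layout above (`list_export_runs` is a hypothetical helper):

```shell
# Reduce `aws s3 ls --recursive` output (date, time, size, key) to the
# distinct per-snapshot prefixes: exported_snapshots/<org>/<project>/<cluster>/<date>/<timestamp>
list_export_runs() {
  awk '{print $4}' | cut -d/ -f1-6 | sort -u
}

# Usage (requires AWS credentials):
# aws s3 ls "s3://$BUCKET_NAME/" --recursive | list_export_runs
```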
Phase 8: Automating Exports with Backup Policies
In addition to manual exports, you can configure Atlas to export snapshots automatically on a schedule.
Configure Automatic Export Schedule
# Create schedule update payload
cat > schedule-payload.json <<EOF
{
"autoExportEnabled": true,
"export": {
"exportBucketId": "$BUCKET_ID",
"frequencyType": "monthly"
}
}
EOF
# Update the backup schedule
atlas api cloudBackups updateBackupSchedule \
--clusterName $CLUSTER_NAME \
--groupId $PROJECT_ID \
--file schedule-payload.json
Available frequency types:
- monthly: Export once per month
- yearly: Export once per year
Atlas automatically exports snapshots matching the frequency type.
View Current Schedule
atlas api cloudBackups getBackupSchedule \
--clusterName $CLUSTER_NAME \
--groupId $PROJECT_ID
This shows both snapshot and export schedules.
Summary
This guide covered the complete implementation of MongoDB Atlas snapshot exports to S3 over AWS PrivateLink. The key steps:
- Create S3 bucket and IAM role with minimal permissions
- Create object storage private endpoint and export bucket in Atlas with requirePrivateNetworking: true
- Export snapshots and configure automated export scheduling using backup policies
Key implementation considerations:
- Atlas cluster must run on AWS in the same region as the destination S3 bucket
- The requirePrivateNetworking: true flag enables PrivateLink for all exports to that bucket
- Atlas automatically manages the PrivateLink infrastructure; no manual VPC endpoint setup required
- Use unified AWS access (cloud provider access roles) for IAM role setup
- PrivateLink connection costs $0.01/hour (billed separately); data processing charge included in $0.125/GB export price
- Native Atlas backup schedule policies eliminate the need for external automation
This architecture is appropriate for organizations with regulatory requirements mandating private-only networking for data movement.