Piter Adyson

MySQL backup to cloud — Backing up MySQL databases to AWS S3 and Google Cloud

Storing MySQL backups on the same server as your database is asking for trouble. If the server fails, you lose both your data and your backups. Cloud storage solves this by keeping backups offsite, automatically replicated across multiple data centers. This guide covers how to back up MySQL databases to AWS S3 and Google Cloud Storage, from manual uploads to fully automated pipelines.

Why cloud storage for MySQL backups

Local backups work until they don't. A disk failure, ransomware attack, or datacenter issue can wipe out everything on one machine. Cloud storage provides geographic redundancy and durability that local storage can't match.

Durability and availability

AWS S3 offers 99.999999999% (11 nines) durability, and Google Cloud Storage provides similar guarantees. In practice, if you stored 10 million objects you could expect to lose about one every 10,000 years. Compare that to a single hard drive with a 1-3% annual failure rate.

Cost efficiency

Cloud storage costs less than maintaining redundant local infrastructure. A terabyte on S3 Standard costs about $23/month, infrequent-access tiers drop to about $12.50/month, and S3 Glacier Deep Archive goes as low as roughly $1/month. For backups you rarely access, cold storage is remarkably cheap.

Operational simplicity

No hardware to manage, no capacity planning, no failed disks to replace. Upload your backups and the cloud provider handles replication, durability and availability.

Creating MySQL backups with mysqldump

Before uploading to cloud storage, you need a backup file. mysqldump is the standard tool for MySQL logical backups.

Basic mysqldump usage

Create a full database backup:

mysqldump -u root -p --single-transaction --databases mydb > mydb_backup.sql

The --single-transaction flag gives you a consistent snapshot of InnoDB tables without locking them during the dump.

Compressed backups

Compress the backup to reduce upload time and storage costs:

mysqldump -u root -p --single-transaction --databases mydb | gzip > mydb_backup.sql.gz

Compression typically reduces MySQL dump files by 70-90%, depending on data content. A 10GB dump might compress to 1-2GB.
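
You can check what a finished archive actually achieved with gzip's list mode (note that the reported uncompressed size wraps for dumps larger than 4GB):

gzip -l mydb_backup.sql.gz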

All databases backup

Back up all databases on the server:

mysqldump -u root -p --single-transaction --all-databases | gzip > all_databases.sql.gz

Backup with timestamp

Include timestamps in filenames for easier management:

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mysqldump -u root -p --single-transaction --databases mydb | gzip > mydb_${TIMESTAMP}.sql.gz

Uploading to AWS S3

AWS S3 is the most widely used object storage service. Getting backups there requires the AWS CLI and proper credentials.

Setting up AWS CLI

Install the AWS CLI:

# Linux/macOS
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verify installation
aws --version

Configure credentials:

aws configure
# Enter your AWS Access Key ID
# Enter your AWS Secret Access Key
# Enter default region (e.g., us-east-1)
# Enter output format (json)

Creating an S3 bucket

Create a bucket for your backups:

aws s3 mb s3://my-mysql-backups --region us-east-1

Enable versioning to protect against accidental deletions:

aws s3api put-bucket-versioning \
    --bucket my-mysql-backups \
    --versioning-configuration Status=Enabled
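
With versioning enabled, an overwritten or deleted backup isn't gone for good. To find older versions of objects under a prefix, list them:

aws s3api list-object-versions \
    --bucket my-mysql-backups \
    --prefix daily/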

Uploading backup files

Upload a single backup:

aws s3 cp mydb_backup.sql.gz s3://my-mysql-backups/daily/

Upload with metadata:

aws s3 cp mydb_backup.sql.gz s3://my-mysql-backups/daily/ \
    --metadata "database=mydb,created=$(date -Iseconds)"

Multipart uploads for large files

For backups over 5GB, a single PUT isn't allowed and multipart upload is required. The AWS CLI switches to multipart automatically once a file exceeds its size threshold, so a plain aws s3 cp works:

aws s3 cp large_backup.sql.gz s3://my-mysql-backups/ \
    --expected-size $(stat -c%s large_backup.sql.gz)  # use stat -f%z on macOS

The --expected-size hint is optional for regular files; it matters mainly for streamed uploads (reading from stdin) beyond roughly 50GB, where the CLI cannot determine the size up front.
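
If you want to tune when multipart kicks in, the CLI's thresholds are configurable. The values below are only illustrative:

aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 16MB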

Combined backup and upload script

Backup and upload in one operation:

#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BUCKET="s3://my-mysql-backups"
DB_NAME="mydb"

# Backup and compress
mysqldump -u root -p"$MYSQL_PASSWORD" --single-transaction --databases $DB_NAME | \
    gzip | \
    aws s3 cp - "$BUCKET/daily/${DB_NAME}_${TIMESTAMP}.sql.gz"

echo "Backup uploaded to $BUCKET/daily/${DB_NAME}_${TIMESTAMP}.sql.gz"

The - after aws s3 cp tells it to read from stdin, allowing a direct pipe from mysqldump without creating a local file first. This saves disk space and time.
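
One caveat with piping: by default the pipeline's exit status is that of the last command (aws s3 cp), so a failed mysqldump can still result in a "successful" upload of a truncated dump. In a backup script, make any stage's failure abort the whole run:

#!/bin/bash
# Abort if mysqldump, gzip, or the upload fails
set -euo pipefail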

Uploading to Google Cloud Storage

Google Cloud Storage (GCS) offers similar capabilities with different tooling.

Setting up gsutil

Install the Google Cloud SDK:

# Linux/macOS
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

Authenticate:

gcloud auth login
gcloud config set project your-project-id

Creating a GCS bucket

Create a bucket:

gsutil mb -l us-central1 gs://my-mysql-backups

Enable versioning:

gsutil versioning set on gs://my-mysql-backups

Uploading backup files

Upload a backup:

gsutil cp mydb_backup.sql.gz gs://my-mysql-backups/daily/

Upload with parallel composite uploads for large files:

gsutil -o GSUtil:parallel_composite_upload_threshold=100M \
    cp large_backup.sql.gz gs://my-mysql-backups/
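
gsutil relies on the compiled crcmod module to validate composite objects and will warn if it's missing. You can check whether your installation has it:

gsutil version -l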

Streaming upload to GCS

Stream directly from mysqldump to GCS:

#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BUCKET="gs://my-mysql-backups"
DB_NAME="mydb"

mysqldump -u root -p"$MYSQL_PASSWORD" --single-transaction --databases $DB_NAME | \
    gzip | \
    gsutil cp - "$BUCKET/daily/${DB_NAME}_${TIMESTAMP}.sql.gz"

Storage classes and cost optimization

Both AWS and GCS offer multiple storage classes at different price points. Choosing the right class can significantly reduce costs.

AWS S3 storage classes

Storage class                | Use case                     | Price per GB/month
S3 Standard                  | Frequently accessed backups  | $0.023
S3 Standard-IA               | Backups accessed monthly     | $0.0125
S3 One Zone-IA               | Non-critical backups         | $0.01
S3 Glacier Instant Retrieval | Archive with quick access    | $0.004
S3 Glacier Deep Archive      | Long-term archive            | $0.00099

For most backup use cases, S3 Standard-IA strikes a good balance: you get immediate access when needed but pay less for storage. Keep in mind that it carries a 30-day minimum storage duration and a per-GB retrieval fee.

Google Cloud Storage classes

Storage class | Use case                  | Price per GB/month
Standard      | Frequently accessed       | $0.020
Nearline      | Accessed once per month   | $0.010
Coldline      | Accessed once per quarter | $0.004
Archive       | Accessed once per year    | $0.0012

Nearline works well for regular backup retention. Archive suits compliance requirements where you keep backups for years but rarely restore.

Setting storage class on upload

Upload directly to a specific storage class:

# AWS S3
aws s3 cp backup.sql.gz s3://my-mysql-backups/ \
    --storage-class STANDARD_IA

# Google Cloud Storage
gsutil cp -s nearline backup.sql.gz gs://my-mysql-backups/

Automating backups with cron

Manual backups get forgotten. Cron automation ensures consistent execution.

Basic cron backup script

Create a backup script at /usr/local/bin/mysql-backup-to-s3.sh:

#!/bin/bash
set -e

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BUCKET="s3://my-mysql-backups"
DB_NAME="production"
LOG_FILE="/var/log/mysql-backup.log"

echo "$(date): Starting backup of $DB_NAME" >> $LOG_FILE

mysqldump -u backup_user -p"$MYSQL_BACKUP_PASSWORD" \
    --single-transaction \
    --routines \
    --triggers \
    --databases $DB_NAME | \
    gzip | \
    aws s3 cp - "$BUCKET/daily/${DB_NAME}_${TIMESTAMP}.sql.gz" \
        --storage-class STANDARD_IA

echo "$(date): Backup completed" >> $LOG_FILE

Make it executable:

chmod +x /usr/local/bin/mysql-backup-to-s3.sh

Cron schedule

Add to crontab:

crontab -e

Run daily at 3 AM:

0 3 * * * MYSQL_BACKUP_PASSWORD='yourpassword' /usr/local/bin/mysql-backup-to-s3.sh

For hourly backups:

0 * * * * MYSQL_BACKUP_PASSWORD='yourpassword' /usr/local/bin/mysql-backup-to-s3.sh
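
If an hourly dump can occasionally take longer than an hour, wrapping the job in flock prevents overlapping runs (the lock file path here is arbitrary):

0 * * * * MYSQL_BACKUP_PASSWORD='yourpassword' flock -n /var/lock/mysql-backup.lock /usr/local/bin/mysql-backup-to-s3.sh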

Handling credentials securely

Avoid putting passwords in crontab. Use a MySQL options file instead:

Create ~/.my.cnf:

[mysqldump]
user=backup_user
password=yourpassword

Restrict permissions:

chmod 600 ~/.my.cnf

Then remove the password from the mysqldump command:

mysqldump --single-transaction --databases $DB_NAME | gzip | aws s3 cp - ...

Lifecycle policies for automatic cleanup

Without cleanup, backup storage grows forever. Cloud lifecycle policies automate deletion of old backups.

S3 lifecycle policy

Create a lifecycle policy to delete backups after 30 days:

{
  "Rules": [
    {
      "ID": "Delete old MySQL backups",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "daily/"
      },
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "Move to Glacier after 7 days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "monthly/"
      },
      "Transitions": [
        {
          "Days": 7,
          "StorageClass": "GLACIER"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}

Apply the policy:

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-mysql-backups \
    --lifecycle-configuration file://lifecycle.json
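
You can read the configuration back to confirm the rules are active:

aws s3api get-bucket-lifecycle-configuration --bucket my-mysql-backups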

GCS lifecycle policy

Create a lifecycle configuration for GCS:

{
  "lifecycle": {
    "rule": [
      {
        "action": { "type": "Delete" },
        "condition": {
          "age": 30,
          "matchesPrefix": ["daily/"]
        }
      },
      {
        "action": { "type": "SetStorageClass", "storageClass": "COLDLINE" },
        "condition": {
          "age": 7,
          "matchesPrefix": ["monthly/"]
        }
      }
    ]
  }
}

Apply with gsutil:

gsutil lifecycle set lifecycle.json gs://my-mysql-backups

Using Databasus for automated cloud backups

Manual scripts work but require ongoing maintenance. Cron jobs fail silently, credential management gets complicated, and monitoring requires extra setup. Databasus handles all of this automatically, with a web interface for configuration and monitoring.

Installing Databasus

Using Docker:

docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest

Or with Docker Compose:

services:
  databasus:
    container_name: databasus
    image: databasus/databasus:latest
    ports:
      - "4005:4005"
    volumes:
      - ./databasus-data:/databasus-data
    restart: unless-stopped

Start the service:

docker compose up -d

Configuring MySQL backup to S3 or GCS

Access the web interface at http://localhost:4005 and create your account, then:

  1. Add your database — Click "New Database", select MySQL, and enter your connection details (host, port, username, password, database name)
  2. Select storage — Choose AWS S3 or Google Cloud Storage. Enter your bucket name and credentials. Databasus supports both IAM roles and access keys
  3. Select schedule — Set backup frequency: hourly, daily, weekly, or custom cron expression
  4. Click "Create backup" — Databasus handles backup execution, compression, upload, retention and notifications automatically

Databasus also provides email, Slack, Telegram and Discord notifications for backup success and failure, eliminating the need for separate monitoring scripts.

Restoring from cloud backups

Backups are worthless if you can't restore them. Practice restoration before you need it in an emergency.

Downloading from S3

List available backups:

aws s3 ls s3://my-mysql-backups/daily/ --human-readable

Download a specific backup:

aws s3 cp s3://my-mysql-backups/daily/mydb_20240115_030000.sql.gz ./

Downloading from GCS

List backups:

gsutil ls -l gs://my-mysql-backups/daily/

Download:

gsutil cp gs://my-mysql-backups/daily/mydb_20240115_030000.sql.gz ./

Restoring the backup

Decompress and restore:

gunzip -c mydb_20240115_030000.sql.gz | mysql -u root -p

Or in one command directly from S3:

aws s3 cp s3://my-mysql-backups/daily/mydb_20240115_030000.sql.gz - | \
    gunzip | \
    mysql -u root -p

Testing restores regularly

Create a test restore script that runs monthly:

#!/bin/bash
# Get the latest backup
LATEST=$(aws s3 ls s3://my-mysql-backups/daily/ | sort | tail -n 1 | awk '{print $4}')

# Create test database
mysql -u root -p -e "CREATE DATABASE restore_test;"

# Restore
aws s3 cp "s3://my-mysql-backups/daily/$LATEST" - | \
    gunzip | \
    mysql -u root -p restore_test

# Verify (check row count on a known table)
ROWS=$(mysql -u root -p -N -e "SELECT COUNT(*) FROM restore_test.users;")
echo "Restored $ROWS rows from users table"

# Cleanup
mysql -u root -p -e "DROP DATABASE restore_test;"
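
To actually run this unattended each month, replace the interactive -p prompts with credentials from an options file (a [client] section in ~/.my.cnf covers both mysql and mysqldump) and add a cron entry; the script path and log file below are just examples:

0 4 1 * * /usr/local/bin/mysql-restore-test.sh >> /var/log/mysql-restore-test.log 2>&1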

Security considerations

Cloud backups require careful security configuration to avoid exposing your data.

Encryption at rest

Both S3 and GCS encrypt data at rest by default. For more control over the keys, you can request server-side encryption explicitly, including with customer-managed KMS keys:

# S3 with SSE-S3 (Amazon-managed keys)
aws s3 cp backup.sql.gz s3://my-mysql-backups/ \
    --sse AES256

# S3 with SSE-KMS (customer-managed keys)
aws s3 cp backup.sql.gz s3://my-mysql-backups/ \
    --sse aws:kms \
    --sse-kms-key-id alias/my-backup-key
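
Rather than passing --sse on every upload, you can set a default encryption configuration on the bucket. This sketch reuses the alias/my-backup-key KMS alias from the example above:

aws s3api put-bucket-encryption \
    --bucket my-mysql-backups \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/my-backup-key"
            }
        }]
    }'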

Client-side encryption

Encrypt before uploading for zero-trust storage:

# Encrypt with gpg
mysqldump -u root -p --single-transaction --databases mydb | \
    gzip | \
    gpg --symmetric --cipher-algo AES256 -o mydb_backup.sql.gz.gpg

# Upload encrypted file
aws s3 cp mydb_backup.sql.gz.gpg s3://my-mysql-backups/
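
Restoring later means reversing the pipeline. This sketch assumes an interactive terminal, since both gpg and mysql will prompt for their respective passphrases:

aws s3 cp s3://my-mysql-backups/mydb_backup.sql.gz.gpg - | \
    gpg --decrypt | \
    gunzip | \
    mysql -u root -p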

IAM policies

Restrict backup credentials to minimum required permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-mysql-backups",
        "arn:aws:s3:::my-mysql-backups/*"
      ]
    }
  ]
}

Don't use root credentials or overly permissive policies for backup operations.
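
As a sketch, assuming the policy above is saved as backup-policy.json (the user and policy names here are illustrative), you can create a dedicated backup user and attach only that policy:

aws iam create-user --user-name mysql-backup
aws iam put-user-policy \
    --user-name mysql-backup \
    --policy-name mysql-backup-s3 \
    --policy-document file://backup-policy.json
aws iam create-access-key --user-name mysql-backup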

Network security

Transfer backups over encrypted connections only. Both AWS CLI and gsutil use HTTPS by default. If you're backing up from within AWS or GCP, use VPC endpoints to keep traffic off the public internet:

# Create S3 VPC endpoint (via AWS Console or CLI)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-abc123 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-abc123

Conclusion

Cloud storage transforms MySQL backups from local files vulnerable to single points of failure into durable, geographically distributed archives. Start with basic scripts using mysqldump and the AWS CLI or gsutil for simple setups. Add cron scheduling for automation and lifecycle policies for retention management. For production systems, consider dedicated backup tools like Databasus that handle scheduling, monitoring and notifications in one package. Whatever approach you choose, test restores regularly. Backups that can't be restored provide no protection.
