Vinit Parakh

Zero-Downtime MySQL Migration to Aurora Using AWS DMS and Binlog Replication

Migrating MySQL to Amazon Aurora RDS: Backup, Restore, and Version Compatibility Guide

Amazon Aurora offers high availability, scalability, and managed performance enhancements over traditional MySQL. Migrating your self-managed MySQL databases to Aurora can drastically simplify operations, but how you migrate depends on your MySQL version, data size, and constraints like foreign keys.

In this guide, I'll walk you through two proven backup-and-restore approaches:

  1. Percona XtraBackup for physical backup and streaming to Aurora
  2. mysqldump to Amazon S3 and restore to Aurora RDS

We'll also clarify when to use a native restore and when to use AWS Database Migration Service (DMS).


🛠️ Method 1: Physical Backup using Percona XtraBackup (for large, same-version migrations)

Step 1: Take a Physical Backup

From the EC2 instance or the on-prem MySQL server, run:

# --slave-info records the source binlog coordinates for later replication;
# pv shows progress against the datadir size, zstd compresses the stream in
# transit, and xbstream unpacks it straight into /mysql_incoming on the target.
time xtrabackup --backup --slave-info --safe-slave-backup --stream=xbstream \
--parallel=6 --target-dir=/home/rack/240311-03603 2> /home/rack/240311-03603/xtrabackup_log \
| pv -s $(du -sb /san/mysql-fs/mysql/ | cut -f1) \
| zstd -3 -T8 | ssh user@destination_ip "zstd -d - | xbstream --parallel=3 -x -C /mysql_incoming"

Step 2: Prepare the Backup

# --apply-log-only is only needed when incremental backups will still be
# applied on top; for a final full prepare, omit it:
xtrabackup --prepare --target-dir=/mysql_incoming

Step 3: Upload to Amazon S3

aws s3 cp /mysql_incoming s3://<bucket-name>/mysql_prepared/ --recursive

Step 4: Restore into Aurora

Use the Aurora feature to restore from S3, following:
📖 AWS Docs - Restore from S3 to Aurora MySQL
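As a sketch, the same restore can also be driven from the AWS CLI with restore-db-cluster-from-s3; the cluster name, credentials, bucket, and IAM role below are placeholders to substitute with your own:

```
aws rds restore-db-cluster-from-s3 \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password 'StrongPassword123!' \
  --source-engine mysql \
  --source-engine-version 5.7.35 \
  --s3-bucket-name <bucket-name> \
  --s3-prefix mysql_prepared \
  --s3-ingestion-role-arn arn:aws:iam::123456789012:role/aurora-s3-restore
```

The --s3-ingestion-role-arn must reference an IAM role that grants Aurora read access to the bucket.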


💡 When to Use Percona XtraBackup?

  • You are migrating from a MySQL version that's identical or Aurora-compatible (e.g., MySQL 5.7.35 → Aurora MySQL 5.7)
  • You want a faster and consistent snapshot with binary log position (great for replication setup)
  • You want to replicate to Aurora after restoration for a phased cutover
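On the replication point: XtraBackup writes the source's binlog coordinates into xtrabackup_binlog_info inside the backup directory. A minimal sketch of pulling them out for the later replication setup (the file contents here are a made-up example):

```shell
# Simulate the xtrabackup_binlog_info file that XtraBackup leaves in the
# backup directory; its format is "<binlog file><TAB><position>".
printf 'mysql-bin.000005\t1540\n' > /tmp/xtrabackup_binlog_info

# Pull out the coordinates needed when pointing Aurora at the source.
BINLOG_FILE=$(awk '{print $1}' /tmp/xtrabackup_binlog_info)
BINLOG_POS=$(awk '{print $2}' /tmp/xtrabackup_binlog_info)
echo "binlog file: $BINLOG_FILE, position: $BINLOG_POS"
```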

🛠️ Method 2: mysqldump and S3 Upload (for lighter or version-sensitive migrations)

Step 1: Export Schema

mysqldump -h <source-endpoint> -u root -p --no-data --set-gtid-purged=OFF mydb > schema.sql

Step 2: Upload to S3

aws s3 cp schema.sql s3://<bucket-name>/schema/

Step 3: Restore into Aurora

SSH into an EC2 or RDS-compatible shell:

mysql -h <aurora-endpoint> -u root -p mydb < schema.sql

⚠️ If MySQL Versions Don't Match: Use Staged Migration

Let's say your source is MySQL 5.7.36 and Aurora only supports up to 5.7.35:

  1. Restore the backup to an intermediate RDS MySQL instance of the same version (e.g., RDS MySQL 5.7.36)
  2. Use AWS DMS to migrate to Aurora (which supports MySQL 5.7.35 or compatible)

โ— DMS handles schema conversion and is ideal when you cannot do a physical restore due to version mismatch or when foreign key constraints exist.


🔐 Handling DEFINERS in Procedures, Triggers, Functions, and Views

When migrating MySQL databases to Amazon Aurora using tools like Percona XtraBackup or mysqldump, stored logic objects often retain a DEFINER clause that references the original MySQL user.

If the same user does not exist in Aurora or lacks necessary privileges, it can cause:

  • ⚠️ Import or replication failures
  • ❌ Application runtime errors
  • 🚩 Replication halts due to permission issues

✅ Step 1: Identify DEFINERs on the Source MySQL

SELECT ROUTINE_NAME, ROUTINE_TYPE, DEFINER 
FROM INFORMATION_SCHEMA.ROUTINES 
WHERE ROUTINE_SCHEMA = 'your_db';

SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE, DEFINER 
FROM INFORMATION_SCHEMA.TRIGGERS 
WHERE TRIGGER_SCHEMA = 'your_db';

SELECT TABLE_NAME, DEFINER 
FROM INFORMATION_SCHEMA.VIEWS 
WHERE TABLE_SCHEMA = 'your_db';

👥 Step 2: Recreate DEFINER Users in Aurora with Required Privileges

CREATE USER 'admin'@'%' IDENTIFIED BY 'StrongPassword123!';
GRANT ALL PRIVILEGES ON your_db.* TO 'admin'@'%';
FLUSH PRIVILEGES;

If the user doesn't require login capabilities:

ALTER USER 'admin'@'%' ACCOUNT LOCK;

✂️ Optional: Remove DEFINERs from Dumps

Stock mysqldump has no flag to strip DEFINER clauses (mysqlpump's --skip-definer is the closest built-in equivalent), so dump the objects first:

mysqldump --routines --triggers --no-data -u root -p your_db > schema.sql

Then remove the DEFINER clauses with sed:

sed -i.bak 's/DEFINER=[^*]*\*/\*/g' schema.sql
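To see what that substitution actually does, here's a hypothetical one-line sample of the version-conditional comment mysqldump emits around stored objects:

```shell
# Sample input: a DEFINER clause as it appears inside a dump file.
LINE='/*!50013 DEFINER=`admin`@`%` SQL SECURITY DEFINER */'

# The same sed expression as above removes everything from DEFINER= up to
# (and including) the '*' that opens the closing '*/'.
STRIPPED=$(printf '%s\n' "$LINE" | sed 's/DEFINER=[^*]*\*/\*/g')
echo "$STRIPPED"   # /*!50013 */
```

This leaves /*!50013 */, which MySQL treats as an empty version-conditional comment.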

🔍 Step 3: Post-Restore Validation in Aurora

SHOW PROCEDURE STATUS WHERE Db = 'your_db';
SHOW TRIGGERS FROM your_db;
SELECT TABLE_NAME, DEFINER FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA='your_db';

🔁 Setting Up Replication from MySQL Source to Amazon Aurora

To maintain real-time sync between the source and the Aurora target until production cutover, follow these steps:

🛠️ Step 1: Enable Binary Logging on Source MySQL

In my.cnf on the source server:

[mysqld]
server-id = 1
log-bin = mysql-bin
binlog_format = ROW
binlog_row_image = FULL
expire_logs_days = 7
sync_binlog = 1

Restart MySQL:

sudo systemctl restart mysqld
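Once MySQL is back up, it's worth confirming the settings took effect; a quick check from any client session:

```
-- Expect log_bin = ON and binlog_format = ROW
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW MASTER STATUS;  -- confirms a binlog file is open and advancing
```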

👤 Step 2: Create Replication User on Source

CREATE USER 'repluser'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
FLUSH PRIVILEGES;

📌 Step 3: Lock Tables and Get Replication Position

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Note the File and Position values (e.g., mysql-bin.000005 and 1540), and keep this session open until the backup completes.

Then:

UNLOCK TABLES;

☁️ Step 4: Restore to Aurora (Already Done)

The restore performed earlier corresponds to this binlog position, so Aurora can start replicating from it.

🔗 Step 5: Configure External Replication in Aurora

Note that MySQL stored procedures take positional arguments (the named := syntax isn't supported):

CALL mysql.rds_set_external_master (
  'source-mysql.example.com',   -- host
  3306,                         -- port
  'repluser',                   -- replication user
  'replpassword',               -- password
  'mysql-bin.000005',           -- binlog file from SHOW MASTER STATUS
  1540,                         -- binlog position
  0                             -- ssl_encryption: 0 = off
);

▶️ Step 6: Start Replication

CALL mysql.rds_start_replication;

🔎 Step 7: Monitor Replication Status

SHOW SLAVE STATUS\G

Ensure:

  • Slave_IO_Running: Yes
  • Slave_SQL_Running: Yes
  • Seconds_Behind_Master: 0 (or minimal)
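For a quick non-interactive check, the same fields can be grepped from the shell; the endpoint and credentials below are placeholders:

```
mysql -h <aurora-endpoint> -u root -p -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```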

📊 Step 8: Monitor Replication Lag with CloudWatch

Enable and alert on:

  • AuroraReplicaLag
  • ReplicaLag

Use CloudWatch alarms if lag exceeds 30s.
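One possible shape for such an alarm (the alarm name, cluster identifier, and SNS topic are placeholders; AuroraReplicaLag is reported in milliseconds, so a 30-second threshold is 30000):

```
aws cloudwatch put-metric-alarm \
  --alarm-name aurora-replica-lag-high \
  --namespace AWS/RDS \
  --metric-name AuroraReplicaLag \
  --dimensions Name=DBClusterIdentifier,Value=your-cluster \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 30000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```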

🚦 Step 9: Keep Replication Running Until Cutover

  • Allow app writes to source until ready
  • Stop writes during switchover
  • Wait for Seconds_Behind_Master = 0
  • Switch DNS (Route53) or promote Aurora:
aws rds promote-read-replica-db-cluster --db-cluster-identifier your-cluster

🔐 Best Practices Before Replication or Cutover

  • Set binlog_format=ROW and binlog_row_image=FULL
  • Set read_only=ON on the Aurora replica
  • Adjust innodb_buffer_pool_size based on source sizing
  • Ensure all users and grants are recreated in Aurora
  • Pause replication before major changes; resume after sync
  • Use CloudWatch to monitor AuroraReplicaLag
  • Test workloads in a non-prod clone of Aurora before switching traffic

🏛️ Next Steps

In my next post, I'll walk you through:

  • Creating DMS source and target endpoints
  • Setting up AWS DMS replication tasks
  • Handling view tables, user permissions, and Route 53 cutover

Stay tuned!
