Let me tell you about the time I thought migrating a database would be straightforward. Spoiler alert: it wasn't.
The Setup
I was tasked with migrating our MySQL database from DigitalOcean's managed service to AWS RDS. Armed with confidence and the AWS DMS documentation, I dove in headfirst.
The First Roadblock: Serverless Seemed Like a Good Idea
Creating the source and target endpoints went surprisingly smoothly. I felt like I was on a roll. Then came the task creation, and I thought, "Hey, let's use serverless DMS. Modern, scalable, perfect."
That's when everything came to a grinding halt.
The networking configuration for serverless DMS had me completely stumped. I spent way too long trying to figure out the right VPC setup, subnet configurations, and security group rules. Nothing seemed to work the way I expected. The documentation made sense in theory, but practice was a different story.
Eventually, I gave up on serverless and pivoted to the traditional EC2-based replication instance. Sometimes the old way is the right way.
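For anyone scripting this instead of clicking through the console, provisioning that instance looks roughly like the sketch below. It's a minimal example, not my exact setup: the region, instance class, storage size, subnet group, and security group are all placeholders you'd swap for your own.

import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is a placeholder

# Provision a traditional (non-serverless) replication instance.
# Instance class, storage, subnet group, and security group are illustrative.
response = dms.create_replication_instance(
    ReplicationInstanceIdentifier="staging-migration-instance",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=100,
    ReplicationSubnetGroupIdentifier="my-dms-subnet-group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,
    MultiAZ=False,
)
print(response["ReplicationInstance"]["ReplicationInstanceStatus"])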
Excluding the Noise
With my shiny new replication instance ready, I created the migration task. But I needed to make sure I wasn't migrating MySQL's system databases along with my actual data. Nobody needs that mess.
I configured table mappings to include all user databases while explicitly excluding the system ones:
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-user-dbs",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "selection",
      "rule-id": "2",
      "rule-name": "exclude-mysql",
      "object-locator": {
        "schema-name": "mysql",
        "table-name": "%"
      },
      "rule-action": "exclude"
    },
    {
      "rule-type": "selection",
      "rule-id": "3",
      "rule-name": "exclude-sys",
      "object-locator": {
        "schema-name": "sys",
        "table-name": "%"
      },
      "rule-action": "exclude"
    },
    {
      "rule-type": "selection",
      "rule-id": "4",
      "rule-name": "exclude-information_schema",
      "object-locator": {
        "schema-name": "information_schema",
        "table-name": "%"
      },
      "rule-action": "exclude"
    },
    {
      "rule-type": "selection",
      "rule-id": "5",
      "rule-name": "exclude-performance_schema",
      "object-locator": {
        "schema-name": "performance_schema",
        "table-name": "%"
      },
      "rule-action": "exclude"
    }
  ]
}
Clean and specific. I felt good about this.
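If you drive DMS from the API instead of the console, those same mappings get passed to the task as a JSON string. A rough boto3 sketch, assuming the mappings above are saved as table-mappings.json and with placeholder ARNs for the endpoints and replication instance:

import json
import boto3

dms = boto3.client("dms")

# table_mappings is the selection-rule JSON shown above, loaded as a dict.
with open("table-mappings.json") as f:
    table_mappings = json.load(f)

# ARNs below are placeholders for the endpoints and instance created earlier.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="staging-full-load",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="full-load",  # or "full-load-and-cdc" for ongoing replication
    TableMappings=json.dumps(table_mappings),
)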
The Premigration Checks Humbled Me
I ran the premigration assessment checks, expecting maybe a warning or two. Instead, I was greeted with a wall of failures. Major ones. The kind that make you question your life choices.
I spent the next few hours going through each failed check, cross-referencing with AWS documentation, and fixing issues one by one. Most of the critical warnings got resolved, though some minor ones remained. I figured those were acceptable and proceeded with the migration.
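If it helps, the assessment can also be started from the API rather than the console. A minimal sketch; the task ARN, the IAM role DMS uses to write results, and the S3 bucket are all placeholders:

import boto3

dms = boto3.client("dms")

# Premigration assessment results land in an S3 bucket you own;
# the role must allow DMS to write there. All values are placeholders.
run = dms.start_replication_task_assessment_run(
    ReplicationTaskArn="arn:aws:dms:...:task:STAGING-FULL-LOAD",
    ServiceAccessRoleArn="arn:aws:iam::123456789012:role/dms-assessment-role",
    ResultLocationBucket="my-dms-assessment-results",
    AssessmentRunName="staging-premigration-check",
)
print(run["ReplicationTaskAssessmentRun"]["Status"])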
The Migration Itself: A False Sense of Security
This was our staging database, and it was massive. The migration kicked off, and surprisingly, it ran smoothly. Hours passed, data transferred, progress bars filled. Everything looked perfect.
We switched the application endpoint to the new RDS instance, deployed the changes, and waited for the green light.
Then the login feature stopped working entirely.
The Investigation That Nearly Broke Me
Cue several hours of frantic debugging. We checked connection strings, verified credentials, tested queries manually, checked network rules, and compared database structures. Everything looked identical.
Until we looked closer at the tables themselves.
Our foreign keys were gone. Primary keys were missing. Auto-increment sequences had reset. DMS had essentially eaten the structural integrity of our database.
Turns out, this is known behavior. DMS's full load is built to move data efficiently, and by default it creates only the basic table structures it needs to do that; secondary objects like foreign keys, other constraints, and auto-increment attributes don't reliably come along for the ride.
The Actual Solution
After more digging through documentation and forums, we found the recommended approach: manually dump and restore the schema first, then let DMS handle just the data migration.
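Concretely, that means a schema-only dump from the source and a restore into the RDS target before the DMS task runs. Something along these lines, wrapped in Python only to keep the examples in one language (plain mysqldump and mysql on the command line work just as well); hostnames, users, passwords, and the database name are placeholders:

import os
import subprocess

SOURCE_HOST = "source-mysql.example.com"                       # placeholder
TARGET_HOST = "target-db.xxxxxx.us-east-1.rds.amazonaws.com"   # placeholder

# 1. Dump structure only (no rows): tables, keys, constraints, routines, triggers.
with open("schema.sql", "w") as out:
    subprocess.run(
        ["mysqldump", "-h", SOURCE_HOST, "-u", "admin", "--no-data",
         "--routines", "--triggers", "--databases", "app_db"],
        stdout=out,
        env={**os.environ, "MYSQL_PWD": "source-password"},
        check=True,
    )

# 2. Load that schema into the RDS target before running the DMS task.
with open("schema.sql") as schema:
    subprocess.run(
        ["mysql", "-h", TARGET_HOST, "-u", "admin"],
        stdin=schema,
        env={**os.environ, "MYSQL_PWD": "target-password"},
        check=True,
    )

With the schema already in place on the target, the DMS task just needs to be told not to drop and recreate the tables it finds, which is where the settings below come in.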
We also discovered that you can pass extra connection attributes in the DMS endpoint configuration to better control how database objects are handled during the load. We updated our endpoint settings with these attributes and ran the migration again.
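I won't claim the exact values we used are universal, so treat the following as an illustration rather than a prescription: extra connection attributes go on the endpoint as a single semicolon-separated string (the initstmt and parallelLoadThreads attributes below are examples for a MySQL target), and because the schema already exists, the task's full-load settings need to leave the target tables alone.

import json
import boto3

dms = boto3.client("dms")

# Extra connection attributes are one semicolon-separated string. These
# particular values are examples (skip foreign-key checks during the load,
# load several tables in parallel), not necessarily the ones we used.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:...:endpoint:TARGET",  # placeholder
    ExtraConnectionAttributes="initstmt=SET FOREIGN_KEY_CHECKS=0;parallelLoadThreads=4",
)

# Because the schema was created by hand, the task must not drop and
# recreate the target tables during full load. (The task has to be
# stopped before it can be modified.)
task_settings = {"FullLoadSettings": {"TargetTablePrepMode": "DO_NOTHING"}}
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:...:task:STAGING-FULL-LOAD",  # placeholder
    ReplicationTaskSettings=json.dumps(task_settings),
)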
This time, everything worked. Foreign keys intact, primary keys preserved, auto-increment sequences functioning as expected. The application came back to life, and logins worked perfectly.
Lessons Learned
First, serverless isn't always the answer, especially when you're still figuring out the networking intricacies of a new service.
Second, premigration checks exist for a reason. Those warnings are trying to save you from pain later.
Third, and most importantly, when migrating databases with DMS, take the time to migrate your schema separately. Don't rely on DMS to handle everything. It's a data migration service, not a complete database cloning tool.
Fourth, if you're planning to use CDC (Change Data Capture) for ongoing replication, make sure binary logging is enabled on your source database with the correct format. MySQL requires binlog_format set to ROW for DMS to capture changes properly. Without this, your CDC tasks will fail silently or miss updates entirely.
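A quick way to confirm that before starting a CDC task is to ask the source directly. A small sketch using the pymysql client purely for illustration; any MySQL client does the same job, and the host and credentials are placeholders:

import pymysql  # third-party MySQL client, used here just for illustration

# Placeholders: point this at the *source* database.
conn = pymysql.connect(host="source-mysql.example.com",
                       user="admin", password="source-password")

with conn.cursor() as cur:
    for variable in ("log_bin", "binlog_format"):
        cur.execute("SHOW VARIABLES LIKE %s", (variable,))
        name, value = cur.fetchone()
        print(f"{name} = {value}")
    # Expect log_bin = ON and binlog_format = ROW before starting CDC.

conn.close()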
The whole experience was frustrating, time-consuming, and honestly a bit embarrassing. But it taught me more about AWS DMS in a few days than I would have learned in weeks of casual reading.
If you're planning a DMS migration, learn from my mistakes. Your future self will thank you.

