
Coding the Cloud ☁️: A Deep Dive into AWS Database Migration Magic 🪄

Have you ever faced the daunting task of migrating an on-premise database to the cloud?

Well, I recently embarked on a journey to learn more about a technical enabler that is meant to do just that.

HINT: If you haven't figured it out yet, it's AWS Database Migration Service.

HOW IT ALL STARTED


One of the projects I am on is tasked with creating the central authoritative data domain hub for a client. Doing this requires massive adaptability, strong problem-solving skills, and a little perseverance.


I was tasked with capturing requirements for the client. The client has faced a series of problems because they retrieve their data from multiple source locations (mainframes, on-premises data centers, different AWS cloud infrastructures, etc.). They would prefer to have all of their data in one central location.

However, doing this requires us to move a database that is on their platform onto ours. One of the solutions my teammate suggested was AWS DMS. It would allow us to translate the client's current database schema onto an RDS database instance on our platform. That way, when the migration is complete, they no longer have to worry about managing data on their platform, and my project team can focus on organizing the treasury's data as we create the central authoritative data hub.

In this blog post, I'll take you through my experience with using AWS Database Migration Service (DMS) in five epic stages.

  • STAGE 1 : Provision the environment and review tasks
  • STAGE 2 : Establish Private Connectivity Between the environments (VPC Peer)
  • STAGE 3 : Create & Configure the AWS Side infrastructure (App and DB)
  • STAGE 4 : Migrate Database & Cut over
  • STAGE 5 : Cleanup the account

Where Did You Learn This


I am a big proponent of continuous learning and continuous development. So I did what anyone with internet access would do: I searched YouTube for videos that could teach me the processes related to database migration. Luckily, I came across LearnCantrill's videos and went through one of his Mini Projects.

Here is a link to his channel https://www.youtube.com/@LearnCantrill.

I found him through one of my favorite AWS gurus, Be A Better Dev.

Here is a link to his channel https://www.youtube.com/@BeABetterDev

What Is The End Goal?!? SHOW ME SOME ARCHITECTURE

Okay okay, settle down hahaha. You're going to read about how I migrated a simple web application from an on-premises environment into AWS. The on-premises environment consists of a virtual web server simulated using EC2 and a self-managed MariaDB database server, also simulated via EC2.

After the migration, the application will run in AWS on an EC2 web server together with an RDS-managed database. The migration itself is handled by the Database Migration Service, or DMS, from AWS. The architecture looks like this: the simulated on-premises environment on one side, the target AWS environment (EC2 web server plus RDS) on the other, and a DMS replication instance moving the data between them.


STAGE 1: Provision the Environment and Review Tasks


Stage one is about implementing the base infrastructure. We will be creating the simulated on-premises environment on the left and the base AWS infrastructure on the right.

The adventure began with provisioning the necessary AWS resources and reviewing the migration tasks.

These resources were created by a CloudFormation stack (you'll find a CLI sketch for launching such a stack right after the resource lists below):

AWS Cloud Resources

  • VPC
  • Internet Gateway
  • Internet Gateway Attachment
  • Default Route Table
  • Private Route Table
  • Public Route Table
  • Database Security Group
  • Security Group Web Application
  • Private Subnet A
  • Private Subnet B
  • Public Subnet A
  • Public Subnet B
  • Private A Route Table Association
  • Private B Route Table Association
  • Public A Route Table Association
  • Public B Route Table Association
  • DMS Instance Profile
  • IAM Role

On-Premises Resources

  • VPC
  • Internet Gateway
  • Internet Gateway Attachment
  • Default Route Table
  • Public Route Table
  • Database Security Group
  • Security Group Web Application
  • Public Subnet
  • Public Route Table Association
  • DMS Instance Profile
  • IAM Role
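
For reference, here's a minimal sketch of how a stack like this could be launched from the CLI rather than the console. The stack name and template file name are placeholders I made up, not the actual template from the lab:

```bash
# Launch the base infrastructure from a CloudFormation template (hypothetical file name).
# CAPABILITY_NAMED_IAM is required because the stack creates IAM roles and instance profiles.
aws cloudformation create-stack \
  --stack-name dms-migration-base \
  --template-body file://dms-base-infrastructure.yaml \
  --capabilities CAPABILITY_NAMED_IAM

# Block until every resource listed above has finished creating.
aws cloudformation wait stack-create-complete --stack-name dms-migration-base
```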

You'll see that I've got two instances, CatWeb and CatDB. CatWeb is the simulated virtual machine web server and CatDB is the simulated virtual machine self-managed database.


Now that every resource on the on-premises side is provisioned, we can take a look at the front-facing website by copying the instance's Public IPv4 DNS into a web browser and loading that URL, which brings up the WordPress site.


My Internal Thoughts
AWS CloudFormation makes this stage surprisingly straightforward, letting you stand up the whole base environment with ease. CloudFormation is something I used in a previous project, so I was feeling exceptionally good about having taken the time to learn it beforehand. It had me feeling like I'd become a Solutions Architect in no time.

STAGE 2: Establish Private Connectivity Between the Environments (VPC Peer)


This stage involves provisioning private connectivity between the simulated on-premises environment on the left and the AWS environment on the right. In production, you'd be using a VPN or Direct Connect, but to simulate that in this project, I'm going to configure a VPC Peering Connection. This establishes the connection between the on-premises and AWS environments and allows traffic to flow over a secure, private link between the two VPCs.

You'll see that I've created the connection by peering the on-premises VPC with the AWS VPC. You can even select VPCs from another region or another AWS account.


Now, if I were to create this VPC Peer across separate AWS accounts, one account would need to create the request and the other account would need to accept it. Because we're creating both VPCs in the same account, we can perform both steps ourselves.
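
For anyone who prefers the CLI over the console, the two calls look roughly like this. The VPC and peering-connection IDs are placeholders:

```bash
# Request the peering connection from the simulated on-premises VPC to the AWS VPC.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0onprem1111111111 \
  --peer-vpc-id vpc-0awscloud22222222

# Accept the request. In a cross-account setup the second account would run this part.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0example333333333
```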

The next step in this stage is to configure routing. We need to configure the VPC routers in each VPC to know how to send traffic to the other side of the VPC Peer. To do this I had to go to the route table associated with the on-premises VPC and edit its routes.


The specific edit involves adding the AWS Cloud VPC CIDR range as the Destination and the recently created Peering Connection as the Target.


That's one side of this peering relationship configured. Next, we need to edit both of the AWS Route Tables. The AWS cloud side has two Route Tables: the private Route Table and the public Route Table. We'll edit the public Route Table first.

This time we'll need the on-premises VPC CIDR range as the Destination and the Peering Connection as the Target.


Next, we'll edit the private Route Table, again using the on-premises VPC CIDR range as the Destination and the Peering Connection as the Target.

And that's the routing configured for both sides of this VPC Peer.
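
Here's a rough CLI equivalent of those three route-table edits. The route table IDs, peering connection ID, and CIDR ranges are placeholders; substitute the ones from your own environment:

```bash
# On-premises route table: reach the AWS VPC CIDR via the peering connection.
aws ec2 create-route \
  --route-table-id rtb-0onprem1111111111 \
  --destination-cidr-block 10.16.0.0/16 \
  --vpc-peering-connection-id pcx-0example333333333

# AWS public route table: reach the on-premises VPC CIDR via the same peer.
aws ec2 create-route \
  --route-table-id rtb-0awspublic2222222 \
  --destination-cidr-block 192.168.10.0/24 \
  --vpc-peering-connection-id pcx-0example333333333

# AWS private route table: same destination and target.
aws ec2 create-route \
  --route-table-id rtb-0awsprivate333333 \
  --destination-cidr-block 192.168.10.0/24 \
  --vpc-peering-connection-id pcx-0example333333333
```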

My Internal Thoughts
By establishing a private connection, I could guarantee the confidentiality and integrity of the data in transit. It was like forging a secret passage between two worlds.

STAGE 3: Create & Configure the AWS Side Infrastructure (App and DB)


Now in this stage of the project, I'm provisioning all of the infrastructure at the AWS Cloud side.

I started by provisioning the database within AWS Cloud. This includes an RDS subnet group and an RDS managed database implementation.


I'm going to configure a Single-AZ implementation of RDS. That's going to be the end state database for this application migration.
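
A hedged CLI sketch of those two pieces; the names, engine, instance class, and credentials below are assumptions rather than the exact values from the lab:

```bash
# Subnet group spanning the two private subnets so RDS can live in them.
aws rds create-db-subnet-group \
  --db-subnet-group-name a4l-wordpress-subnet-group \
  --db-subnet-group-description "Private subnets for the migrated WordPress database" \
  --subnet-ids subnet-0privateA111111111 subnet-0privateB222222222

# Single-AZ RDS instance that will be the migration target.
aws rds create-db-instance \
  --db-instance-identifier a4lwordpress \
  --engine mariadb \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username a4lwordpress \
  --master-user-password 'REPLACE_WITH_A_REAL_PASSWORD' \
  --db-subnet-group-name a4l-wordpress-subnet-group \
  --vpc-security-group-ids sg-0database111111111 \
  --no-multi-az
```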


I'm also going to be provisioning an EC2 instance which will function as the web application server.


We then have to update the packages on the instance.


Next, we install Apache and the MariaDB command-line tools.
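
On an Amazon Linux 2 instance, that boils down to roughly this (package names can differ between Amazon Linux versions, so treat it as a sketch):

```bash
# Bring the instance packages up to date, then install Apache and the MariaDB client tools.
sudo yum -y update
sudo yum -y install httpd mariadb
```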


We need to make sure that the web server is both started and set to start every time the instance reboots. Then we need to make it possible to transfer the content from the on-premises web server across to this server. We're going to use secure copy (scp) to perform that transfer, and to make it easier we need to allow logins to this EC2 instance using password authentication. After that we need to restart the SSH daemon so that this config takes effect.
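
Roughly, that looks like this; the exact sshd_config line to flip can vary by AMI, so treat the sed as a sketch:

```bash
# Start Apache now and on every future boot.
sudo systemctl enable httpd
sudo systemctl start httpd

# Allow password-based SSH logins so we can scp content from the on-premises server,
# give ec2-user a password, then restart sshd so the change takes effect.
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo passwd ec2-user
sudo systemctl restart sshd
```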


Next we are going to SSH into the on-premises CatWeb server. We're going to copy the entire web root from this instance across to the AWS web server.

To do this we are going to copy the html folder from the web root (/var/www) on CatWeb, along with all of the WordPress assets on this server, across to the AWS instance.
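
Run from the on-premises CatWeb instance, the copy looks something like this; the private IP is a placeholder for the awsCatWeb instance, and the files land in ec2-user's home directory first:

```bash
# On CatWeb: copy the whole web root (the html folder) to the AWS web server over the VPC peer.
cd /var/www
scp -rp html ec2-user@10.16.x.x:/home/ec2-user/
```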


Now we SSH into the awsCatWeb instance. We already copied those web assets into the home folder of the ec2-user, so now we move them into the web root and correct any permissions issues on the files we've just copied.
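
A sketch of that step, assuming the content landed in /home/ec2-user/html; the paths and ownership are what Apache on Amazon Linux normally expects:

```bash
# On awsCatWeb: move the copied content into Apache's web root,
# give ownership to the apache user, and restart the web server.
sudo cp -rp /home/ec2-user/html/* /var/www/html/
sudo chown -R apache:apache /var/www/html
sudo systemctl restart httpd
```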


At this point, this instance should be a functional WordPress application server, and it should still be pointing at the on-premises database server.

Users can now connect to this EC2 instance and see the same application.

My Internal Thoughts
There was a moment where I received the Apache test page instead of the site. This was not a good sign, because it meant I hadn't correctly copied the WordPress HTML documents from the on-premises VPC onto the web server in the AWS VPC's public subnet. Luckily I was able to troubleshoot the issue 😅

STAGE 4: Migrate Database & Cut over


We're going to complete a database migration from CatDB through to the previously created RDS instance using AWS DMS.

What we'll be doing is creating a DMS replication instance and using it to replicate all of the data from the CatDB on-premises database instance across to RDS. The replication instance acts as an intermediary, replicating all of the data and changes through to the RDS instance.

We start by creating a DMS subnet group covering the AWS cloud private subnets.
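
The CLI equivalent is a single call; the group name and subnet IDs are placeholders:

```bash
# Replication subnet group over the two AWS private subnets.
aws dms create-replication-subnet-group \
  --replication-subnet-group-identifier a4l-dms-subnet-group \
  --replication-subnet-group-description "Private subnets for the DMS replication instance" \
  --subnet-ids subnet-0privateA111111111 subnet-0privateB222222222
```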


Next we create the replication instance. Details like selecting the correct DMS subnet group, VPC security groups, and instance class are important here.
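
Here's what that could look like from the CLI; the identifier, instance class, and security group are assumptions:

```bash
# Small, private replication instance placed into the DMS subnet group created above.
aws dms create-replication-instance \
  --replication-instance-identifier a4l-dms-instance \
  --replication-instance-class dms.t3.micro \
  --allocated-storage 20 \
  --replication-subnet-group-identifier a4l-dms-subnet-group \
  --vpc-security-group-ids sg-0database111111111 \
  --no-publicly-accessible
```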


Now, at that point, we can go ahead and configure the endpoints. You can think of these as containers for the configuration of the source and destination databases.

I configured the source endpoint to point at the on-premises CatDB database and the destination endpoint to point at the RDS instance created earlier.

At this point we want to start testing the ability of DMS to connect to both the source and the destination. After a few minutes, the status should change from testing to successful.
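
For completeness, here's a hedged CLI version of the two endpoints and the connection tests. The engine, addresses, credentials, and ARNs are placeholders:

```bash
# Source endpoint: the self-managed MariaDB running on the on-premises CatDB instance.
aws dms create-endpoint \
  --endpoint-identifier catdbonpremises \
  --endpoint-type source \
  --engine-name mariadb \
  --server-name 192.168.10.x \
  --port 3306 \
  --username a4lwordpress \
  --password 'REPLACE_WITH_THE_DB_PASSWORD'

# Target endpoint: the RDS instance created in Stage 3.
aws dms create-endpoint \
  --endpoint-identifier a4lwordpress \
  --endpoint-type target \
  --engine-name mariadb \
  --server-name a4lwordpress.xxxxxxxx.us-east-1.rds.amazonaws.com \
  --port 3306 \
  --username a4lwordpress \
  --password 'REPLACE_WITH_THE_DB_PASSWORD'

# Test each endpoint from the replication instance (use the ARNs returned by the calls above).
aws dms test-connection \
  --replication-instance-arn <replication-instance-arn> \
  --endpoint-arn <source-endpoint-arn>
aws dms test-connection \
  --replication-instance-arn <replication-instance-arn> \
  --endpoint-arn <target-endpoint-arn>
```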


I then went over to create a Database Migration Task. This is the DMS functionality that uses the replication instance together with both of the endpoints we've just configured.


Next is the Table Mappings section. The schema, which is just another way of referring to the database name (the part of the architecture that contains the tables), needs some configuration.

Inside the simulated on-premises environment, on the self-managed database, all of the data is stored within a database called a4lwordpress, so we need to enter that as the schema name.
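
A sketch of the equivalent task definition from the CLI; the identifiers and ARNs are placeholders, and the table-mapping rule is the part that selects every table in the a4lwordpress schema:

```bash
# Full-load migration task using the replication instance and the two endpoints.
aws dms create-replication-task \
  --replication-task-identifier a4lwordpress-full-load \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load \
  --table-mappings '{
    "rules": [
      {
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-a4lwordpress",
        "object-locator": { "schema-name": "a4lwordpress", "table-name": "%" },
        "rule-action": "include"
      }
    ]
  }'

# Kick the task off once it has been created.
aws dms start-replication-task \
  --replication-task-arn <replication-task-arn> \
  --start-replication-task-type start-replication
```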


Now, this starts the replication task, and it's doing a full load (a full migration) from the source, catdbonpremises, which references the CatDB simulated on-premises virtual machine database server. It's transferring all this data to the RDS instance, a4lwordpress. The task will go through a number of different states.

It'll start off in the Creating state, then move to Running, and finally reach Load complete when it's finished.


Once that was complete, I went over to the AWS web application server (awsCatWeb) to reconfigure it so that, instead of pointing at the on-premises database instance, it points at the RDS instance. We SSH back into that instance and edit the wp-config.php file to use the RDS instance endpoint.
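
The edit itself is tiny; here's a sketch using sed with a made-up RDS endpoint. Substitute the real endpoint from the RDS console:

```bash
# On awsCatWeb: back up wp-config.php, then point DB_HOST at the RDS endpoint.
sudo cp /var/www/html/wp-config.php /var/www/html/wp-config.php.bak
sudo sed -i "s/define( *'DB_HOST'.*/define( 'DB_HOST', 'a4lwordpress.xxxxxxxx.us-east-1.rds.amazonaws.com' );/" \
  /var/www/html/wp-config.php
```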


Now, there's one final thing that I needed to do. WordPress has a strange behavior: the IP address of the server it was installed on (and where the database was first provisioned) is hard-coded into the database, so we need to update the database so that it points at our new instance's address.


What this does is load the config from the file you've just edited (wp-config.php), read the database username, password, database name, and host, and use all of that information to replace the old IP address in the database with the new address of this instance.
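
Here's a minimal sketch of that fix-up, assuming the standard WordPress tables. Pull the real values for the variables out of wp-config.php, and substitute the old and new addresses before running it:

```bash
#!/bin/bash
# Values below come from /var/www/html/wp-config.php -- these are placeholders.
DB_NAME="a4lwordpress"
DB_USER="a4lwordpress"
DB_PASSWORD="REPLACE_WITH_THE_DB_PASSWORD"
DB_HOST="a4lwordpress.xxxxxxxx.us-east-1.rds.amazonaws.com"

OLD="http://OLD-ONPREM-IP"      # address hard-coded into the database at install time
NEW="http://NEW-AWS-WEB-IP"     # the awsCatWeb instance's public IP or DNS name

# Rewrite the hard-coded address wherever WordPress stored it.
mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" "$DB_NAME" <<SQL
UPDATE wp_options SET option_value = REPLACE(option_value, '$OLD', '$NEW')
  WHERE option_name IN ('siteurl', 'home');
UPDATE wp_posts SET guid = REPLACE(guid, '$OLD', '$NEW');
UPDATE wp_posts SET post_content = REPLACE(post_content, '$OLD', '$NEW');
SQL
```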


So all that we have running at this point is the AWS-based WordPress web server, which should now be pointing at the RDS instance, which in turn should contain the migrated data copied across by the Database Migration Service.

My Internal Thoughts
The heart-pounding climax of my journey was the actual database migration. AWS DMS's replication capabilities kicked into high gear, seamlessly moving data from my on-premises database to the cloud. The ability to monitor and track the progress in real time provided peace of mind. And then came the epic moment of cutover, where the final switch was flipped and my application seamlessly transitioned to the cloud database.


STAGE 5: Cleanup the Account

As my migration story neared its conclusion, it was time to tidy up. Deleting the DMS task, endpoints, replication instance, and the rest of the project's resources keeps costs down, ensuring that I only paid for what I used. The journey's end was met with cost-effectiveness and a sense of accomplishment.

My adventure with AWS DMS was a thrilling ride through five stages of database migration. It demonstrated the power of cloud technology, making the once-daunting task feel like a heroic saga. If you're contemplating a database migration, fear not—AWS DMS is your trusty guide on this epic odyssey.
