Ryan Dsouza for Prisma


Prisma Migrate with AWS Aurora Serverless

Introduction

In the previous post, we saw how to deploy a GraphQL API using AWS AppSync and Prisma. Querying the database with Prisma Client worked great, but there was one minor issue with the overall workflow: we had to create the tables manually in the RDS Query Editor and keep them in sync with the Prisma model in schema.prisma by hand.

In this post, we will look at how we can use Prisma Migrate to create our tables with migrations. These migrations can then be run directly on any other setup with a single command.

Before we move forward...

I am assuming that you have read the previous post in this series. We will be adding snippets to the existing application and deploying it. It's not necessary to have deployed the application beforehand, so you're good to go by simply cloning the repo below and following along.

ryands17 / graphql-api-cdk-serverless-postgres

A basic example of AppSync + Lambda resolvers with AWS Aurora Serverless and Prisma

Setup

In this section, we will see how to make our Aurora Serverless database accessible on our local machine via an EC2 instance and SSH.

Intro

Amazon Aurora Serverless (PostgreSQL) databases are not publicly accessible; they can only be accessed from within the VPC in which they are deployed. So, to create our table from a local machine, we will use a jump box (basically an EC2 instance) to forward our local connection to the remote database in the VPC.

Creating an SSH key

For this, we first need to create an SSH key pair. In the AWS Console, navigate to EC2 and select the Key Pairs section from the left-hand menu. We will see a UI like this, where we need to select the Create key pair option.

Create a Key Pair

I already have one named "prisma", which I will be using for the rest of this walkthrough.

Next, let's add the name of this key to our cdk.context.json file like this:

"keyName": "prisma"
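cdk.context.json is just a JSON object at the root of the project; with this key added (and any lookup values the CDK may have already cached left untouched), it would look something like this:

{
  "keyName": "prisma"
}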

Adding CDK resources

Now let's add some snippets to our CDK file.

The very first snippet looks up the latest Ubuntu Server AMI for us:

// appsync-cdk-rds-stack.ts

const ami = new ec2.LookupMachineImage({
  name: 'ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*',
  filters: { 'virtualization-type': ['hvm'] },
  // Canonical AWS Account ID
  owners: ['099720109477'],
})

This searches for the latest Ubuntu 20.04 image, which we will use for our EC2 instance, and assigns it to the ami variable.

The next snippet allows SSH access to our EC2 instance and makes sure that the instance can reach our database.

// appsync-cdk-rds-stack.ts

const publicSg = new ec2.SecurityGroup(this, 'public-sg', {
  vpc,
  securityGroupName: 'public-sg',
})
publicSg.addIngressRule(
  ec2.Peer.anyIpv4(),
  ec2.Port.tcp(22),
  'allow SSH access'
)

privateSg.addIngressRule(
  publicSg,
  ec2.Port.tcp(5432),
  'allow Aurora Serverless Postgres access'
)

We have created a public security group so that we can port forward the database to our local machine via the EC2 instance, which is why we have also opened port 22 (SSH) on it.

We have also added an ingress rule to the private security group (the one attached to the database, created in the previous post) that allows traffic on port 5432, the Postgres port, from our public security group, so that the jump box can reach the database and we, in turn, can access it from our local machine.
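The snippet above references vpc and privateSg, which come from the previous post's stack. For context, they would look roughly like this (a sketch only; the exact construct IDs and settings are from the previous post and may differ):

// appsync-cdk-rds-stack.ts (from the previous post, repeated here only for context)

// the VPC that the Aurora Serverless cluster lives in
const vpc = new ec2.Vpc(this, 'AppSyncVpc')

// the security group attached to the Aurora Serverless cluster
const privateSg = new ec2.SecurityGroup(this, 'private-sg', {
  vpc,
  securityGroupName: 'private-sg',
})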

Creating our EC2 instance (Jump Box)

This last snippet creates our EC2 instance, wiring up everything defined above:

// appsync-cdk-rds-stack.ts

new ec2.Instance(this, 'jump-box', {
  vpc,
  securityGroup: publicSg,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.T2,
    ec2.InstanceSize.MICRO
  ),
  machineImage: ec2.MachineImage.genericLinux({
    [this.region]: ami.getImage(this).imageId,
  }),
  keyName: this.node.tryGetContext('keyName'),
})

Here we create our EC2 instance and assign it our VPC, the public security group, and the public subnets. We have selected a t2.micro instance to save costs, as we only need it to port forward our database. Lastly, we pass the AMI's imageId as the machineImage, which means the instance will run Ubuntu 20.04.

Deploying to AWS

We're done with the setup so let's deploy our current stack using the following command:

yarn deploy

SSH forwarding

After deploying, all our resources, including the EC2 instance, will be created. Now we can move on to SSH forwarding. The basic command looks like this:

ssh -N -L 5432:aurora-serverless-hostname:5432 ubuntu@ipAddress -i keyName.pem -v

This command forwards the database's port 5432 through the EC2 instance to port 5432 on our machine, so we can access our Aurora Serverless database as if it were running locally, i.e. on localhost.

Looking at the command above, we need the hostname of the newly created database and the public IP address of our EC2 instance.
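For illustration, with hypothetical values filled in (your hostname, IP address, and key name will differ), the command would look something like this:

ssh -N -L 5432:mystack-cluster.cluster-abc123xyz.eu-central-1.rds.amazonaws.com:5432 ubuntu@3.120.45.67 -i prisma.pem -v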

The hostname for our database can be obtained by visiting Secrets Manager from the AWS Console and selecting the secret created.

Retrieve database secret

The IP address of the EC2 instance can be obtained from the Instances menu in the EC2 section of the AWS Console.

IP Address of the EC2 instance

Now we can run the SSH command specified above, replacing the placeholder values with the ones we found in the steps above. After running this command, you will see something like this in the console, which means that our port forwarding setup has been completed successfully.

Let's leave this running as is and move on to the next section.

Creating and applying migrations with Prisma Migrate

Now that port forwarding is set up, let's move on to creating the table by writing our migrations with the Prisma CLI.

Currently, our schema.prisma file looks something like this:

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "rhel-openssl-1.0.x"]
}

datasource db {
  provider = "postgresql"
  url      = env("DB_URL")
}

model Post {
  id      String  @id
  title   String?
  content String?

  @@map("posts")
}

Looking at the schema.prisma file in our lambda-fns/prisma directory, we can see that we already have the Post model, and that the database URL in the datasource block needs to be supplied via the DB_URL environment variable.

So let's create an .env file in the lambda-fns directory; with placeholder values, it would look something like this:

DB_URL="postgresql://username:password@localhost:5432/BlogDB"

In the above snippet, we only need to replace the username and password, which can be found in the same place we fetched the database host, i.e. the secret created in Secrets Manager.

Retrieve database secret

Let's run yarn db:save. This command invokes the Prisma CLI's prisma migrate dev --create-only command, which creates our migration as an SQL file. After you accept the defaults in the prompt, the output will look something like this:

Migration command successful

And the created SQL migration:

-- CreateTable
CREATE TABLE "posts" (
    "id" TEXT NOT NULL,
    "title" TEXT,
    "content" TEXT,

    PRIMARY KEY ("id")
);

Now, to create the table in the database, let's run yarn db:migrate. This again invokes the Prisma CLI's prisma migrate dev command, which applies the generated SQL migrations to the database and creates our table.

Migration file created
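For reference, the db:save and db:migrate scripts used here are thin wrappers around the Prisma CLI. Based on the commands mentioned above, the relevant part of the package.json would look roughly like this (a sketch; check the repo for the exact definitions):

"scripts": {
  "db:save": "prisma migrate dev --create-only",
  "db:migrate": "prisma migrate dev"
}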

We can verify this by going to RDS in the AWS Console and selecting Query Editor from the left menu.

After selecting our secret and entering the database name, we can execute the following query:

select * from "posts"

And this will give us the following output:

Query runs successfully

Yay! We can finally see the table created with migrations via Prisma Migrate! You can now play around with the AppSync API in the same manner as in the previous post.
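As an optional extra, while the SSH tunnel is still running you can also talk to the new table directly from your machine with Prisma Client. Here is a hypothetical sanity-check script (not part of the deployed stack), assuming prisma generate has been run in lambda-fns and DB_URL is picked up from the .env file created earlier:

// check-db.ts — hypothetical local sanity check, run from the lambda-fns directory
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // write a row through the SSH tunnel and read it back
  await prisma.post.create({
    data: { id: '1', title: 'Hello', content: 'Created locally via the jump box' },
  })
  console.log(await prisma.post.findMany())
}

main()
  .catch(console.error)
  .finally(() => prisma.$disconnect())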

Conclusion

In this post, we created database migrations with Prisma Migrate and applied them to our Aurora Serverless database via an EC2 instance with SSH port forwarding, which lets us connect to a database that is otherwise not publicly accessible.

Here's the link to the repo again for those who want to dive in.

ryands17 / graphql-api-cdk-serverless-postgres

A basic example of AppSync + Lambda resolvers with AWS Aurora Serverless and Prisma

Thanks a lot for reading this post, and if you have deployed this stack, please DO NOT FORGET TO DESTROY IT via:

yarn cdk destroy

Top comments (4)

Bulletninja

In Django there is the concept of a "data migration" (see docs.djangoproject.com/en/3.1/topi...) — is that possible with Prisma? If not, are there any plans to implement something like that? Otherwise it seems like we need to either create scripts (and figure out the dependencies with schema migrations ourselves), or manually modify the SQL (the one generated by Prisma, to avoid having to deal with the dependencies ourselves :P).

Giorgos Trichopoulos

from prisma docs: prisma.io/docs/guides/deployment/d...

Ideally, migrate deploy should be part of an automated CI/CD pipeline, and we do not generally recommend running this command locally to deploy changes to a production database (for example, by temporarily changing the DATABASE_URL environment variable). It is not generally considered good practice to store the production database URL locally.

Chris Monteiro

Great info, thanks so much!

Eduard Monfort

Hello! Can I ask why on the repository some part of the code relative to this part (2) is commented?