Adnan Rahić


A crash course on Serverless with AWS - Building APIs with Lambda and Aurora Serverless

Ever since AWS made a serverless option of AWS RDS Aurora generally available, I've been on the edge of my seat, eagerly anticipating all the possibilities. It would mean a breakthrough in building serverless architectures. No more managing connection pools with SQL. No more worrying about capping out available connections. This would make it viable to use serverless for database-intensive operations.

I wrote an article a while back explaining how to build a serverless API with MongoDB. But I saw a pattern emerge: under higher throughput, even when the database instance could take the load, connections would drop because only a limited number of connections were available.

TL;DR

I've refactored the example with MongoDB to use SQL. It's agnostic regarding which database provider you use, but the example I'll show here is with AWS Aurora Serverless. Check out the code here, or if you want to follow along and learn how to set up everything yourself, keep reading.

What'll we be doing?

This article will show you how to hook up a MySQL database as a service to a serverless API. We'll create a database cluster with AWS RDS and configure Aurora Serverless. What's awesome about this setup is that you'll pay for the services based on how much you use them. Even the database! It's the ideal serverless scenario.

What if you're new to this?

Let's break down the basics. We'll use AWS Lambda as the compute service. An AWS Lambda function is basically a container. Once the Lambda is invoked, the container spins up and runs the code. That first invocation, when the container is initialized, is when we want to open the database connection. Every subsequent request to the Lambda function will reuse the existing database connection. Sounds quite simple once I break it down. Let's jump in.
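Before we do, here's a minimal sketch of that pattern, just to make the idea concrete. The connectToDatabase() call is a placeholder for whatever your connection logic is; we'll build the real thing in db.js later in this article.

// Anything declared outside the handler survives between invocations
// of the same warm container.
let cachedConnection = null

module.exports.handler = async (event) => {
  if (!cachedConnection) {
    // Runs only on a cold start.
    // connectToDatabase() is a placeholder for your own connection logic.
    cachedConnection = await connectToDatabase()
  }
  // Warm invocations reuse cachedConnection.
  return { statusCode: 200, body: JSON.stringify({ message: 'ok' }) }
}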

Configuring the project

I’ll assume you already have a basic understanding of the Serverless Framework. I would also hope you have an AWS account set up. If you don’t, please check this out.

1. Creating a service

As always we need a fresh service to hold all our code.

$ sls create -t aws-nodejs -p web-api && cd web-api

Running this command, you'll get a nice boilerplate to begin developing your functions. It'll set up the basic functions and event triggers. You can also see we're entering the web-api directory right away, because that's where we'll set up the required dependencies.

2. Installing modules

In total, we only need three dependencies: the mysql2 driver and the Sequelize ORM to handle connections and map relations, plus serverless-offline for local development.

Make sure you’re in the web-api directory. First, install serverless-offline, then mysql2 and sequelize.

$ npm init -y
$ npm i --save-dev serverless-offline
$ npm i --save mysql2 sequelize

That’s it, let’s take a break from the terminal and jump over to AWS to create a database cluster.

3. Creating a database on AWS RDS

I've always hated setting up resources through a web console, but it's a necessary evil. Bear with me, jump over to your AWS account and choose RDS. You'll land on a page similar to this one.

rds-1

Click the orange Create Database button and follow along.

rds-select-engine

First, you'll choose the engine you want to use. Choose Amazon Aurora and click next.

rds-create-1

Once you've specified the engine, configure Aurora to be Serverless, add a name to your cluster and configure the master username and password.

rds-create-2

Only one more set of advanced settings to configure, and you should be set.

rds-advanced-config-1

Here you'll set up the capacity and network. Because Aurora Serverless lives inside a VPC, you'll have to configure access. Here we're choosing the default VPC.

rds-advanced-config-2

With the additional configuration, you can set up backups if you so please. Once you're happy with it all, press the orange Create database button.

Finally, it'll start provisioning the cluster. You'll get redirected and see some output looking like this.

rds-cluster-created-1

But, what's interesting to us is the details section.

rds-cluster-created-2

Here you have access to security groups and subnets. You'll need to add them to the AWS Lambda functions in order for them to be able to communicate with the database. More about that in the next section.

Note: If you want to access the AWS Aurora Serverless cluster follow this tutorial to learn how to do it with the CLI, or this official tutorial by AWS to connect through Cloud9. You will need to connect to the cluster and create a database manually because Sequelize can't create them for you.
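For reference, once you can reach the cluster (for example from a Cloud9 environment or an EC2 instance inside the same VPC), creating the database is a couple of commands with the mysql client. The endpoint below is a placeholder; use the one from your cluster's details section.

$ mysql -h your-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -u root -p
mysql> CREATE DATABASE test;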

The engineers over at Statsbot have shown me how awesome AWS RDS can be with the huge amounts of data they're processing. Rumor has it they're working on a completely serverless open-source analytics framework called Cube.js. I'm eager to start using it myself. Hopefully, Aurora Serverless can be of help.

Writing code

Configuring database resources on AWS is always a handful, especially with a new serverless database offering. Luckily, the configuration part is over. Now we need to write the configuration for our serverless resources in the serverless.yml file and add the actual CRUD methods to the handler.js.

4. Configuring the YAML

The Serverless Framework helps a great deal with setting up the initial project structure. It can scaffold out pretty much everything you need when you're starting out. Jump back to the directory where you created the serverless project and open it up in your favorite code editor.

Open the serverless.yml file first and take a breather. Here you'll see a bunch of comments with sample configurations. I like clean code, so let's delete it all, and paste in this snippet instead.

service: web-api

custom:
  secrets: ${file(secrets.json)}

provider:
  name: aws
  runtime: nodejs8.10
  timeout: 30
  stage: ${self:custom.secrets.NODE_ENV}
  environment: 
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    DB_NAME: ${self:custom.secrets.DB_NAME}
    DB_USER: ${self:custom.secrets.DB_USER}
    DB_PASSWORD: ${self:custom.secrets.DB_PASSWORD}
    DB_HOST: ${self:custom.secrets.DB_HOST}
    DB_PORT: ${self:custom.secrets.DB_PORT}
  vpc:
    securityGroupIds:
      - ${self:custom.secrets.SECURITY_GROUP_ID}
    subnetIds:
      - ${self:custom.secrets.SUBNET1_ID}
      - ${self:custom.secrets.SUBNET2_ID}
      - ${self:custom.secrets.SUBNET3_ID}
      - ${self:custom.secrets.SUBNET4_ID}

functions:
  healthCheck:
    handler: handler.healthCheck
    events:
      - http:
          path: /
          method: get
          cors: true
  create:
    handler: handler.create
    events:
      - http:
          path: notes
          method: post
          cors: true
  getOne:
    handler: handler.getOne
    events:
      - http:
          path: notes/{id}
          method: get
          cors: true
  getAll:
    handler: handler.getAll
    events:
     - http:
         path: notes
         method: get
         cors: true
  update:
    handler: handler.update
    events:
     - http:
         path: notes/{id}
         method: put
         cors: true
  destroy:
    handler: handler.destroy
    events:
     - http:
         path: notes/{id}
         method: delete
         cors: true

plugins:
  - serverless-offline

Let's break it down piece by piece. The provider section contains all the basic info about the cloud provider, as well as the runtime, function timeout, environment variables, and VPC configuration. The VPC setup is needed here because AWS Aurora Serverless can only run inside a VPC, so you need to put your AWS Lambda functions in the same VPC in order to access the database.

Moving on, there's a custom section at the top where we load environment variables and then pass them to the environment section.

Finally, let's talk about the functions section. We added a total of 6 functions: healthCheck, create, getOne, getAll, update, and destroy. They all point to identically named exported functions in the handler.js file. Their event triggers are hooked up to HTTP events which are handled by AWS API Gateway. Meaning, an HTTP request to a specified path will trigger the referenced function.
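Because the handlers will read from the incoming event, it helps to know roughly what API Gateway hands to a Lambda function with the default proxy integration. This is a trimmed-down sketch with example values, not the full event:

{
  "httpMethod": "PUT",
  "path": "/notes/1",
  "pathParameters": { "id": "1" },
  "queryStringParameters": null,
  "headers": { "Content-Type": "application/json" },
  "body": "{\"title\":\"Groceries\",\"description\":\"Milk and eggs\"}"
}

Note that body arrives as a string, which is why the handlers call JSON.parse(event.body), and pathParameters is only present on routes like notes/{id}.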

That’s pretty much it, the last thing is to add a plugins section and serverless-offline. We installed this module above and we’ll use it for testing the service out before deploying to AWS.

5. Adding secrets

You saw in the serverless.yml that we referenced a secrets file. Before moving on we need to create it and make sure it's loading our secrets correctly.

In the root of the project create a secrets.json and paste this snippet in.

{
  "DB_NAME": "test",
  "DB_USER": "root",
  "DB_PASSWORD": "root",
  "DB_HOST": "127.0.0.1",
  "DB_PORT": 3306,
  "NODE_ENV": "dev",
  "SECURITY_GROUP_ID": "sg-xx",
  "SUBNET1_ID": "subnet-xx",
  "SUBNET2_ID": "subnet-xx",
  "SUBNET3_ID": "subnet-xx",
  "SUBNET4_ID": "subnet-xx"
}

This setup will work perfectly fine for a local development environment. Once you want to deploy it to AWS you'll have to add your own secrets. AWS Aurora Serverless will output everything like in the image above. The important secrets here are the database connection parameters, security group ID and subnet IDs.
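For a deployed stage, the same file would look roughly like this; every value here is a placeholder, and the host is the cluster endpoint from the details section shown earlier:

{
  "DB_NAME": "test",
  "DB_USER": "your-master-username",
  "DB_PASSWORD": "your-master-password",
  "DB_HOST": "your-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
  "DB_PORT": 3306,
  "NODE_ENV": "production",
  "SECURITY_GROUP_ID": "sg-xxxxxxxx",
  "SUBNET1_ID": "subnet-xxxxxxxx",
  "SUBNET2_ID": "subnet-xxxxxxxx",
  "SUBNET3_ID": "subnet-xxxxxxxx",
  "SUBNET4_ID": "subnet-xxxxxxxx"
}

A common pitfall that comes up in the comments below is leaving DB_HOST as 127.0.0.1; that works for serverless-offline on your machine, but not from the deployed Lambda functions.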

That's everything regarding the YAML. Let's play with the handler.js next. Finally, some real code!

6. Fleshing out the functions

To start out, we'll define 6 functions. They'll describe the initial layout and behavior we want. Once that's done, we'll move on and create the database connection and database interaction logic with Sequelize.

Switch over to the handler.js file. You’ll see the default boilerplate. Guess what? Delete it all and add the code below.

module.exports.healthCheck = async () => {
  await connectToDatabase()
  console.log('Connection successful.')
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Connection successful.' })
  }
}

module.exports.create = async (event) => {
  try {
    const { Note } = await connectToDatabase()
    const note = await Note.create(JSON.parse(event.body))
    return {
      statusCode: 200,
      body: JSON.stringify(note)
    }
  } catch (err) {
    return {
      statusCode: err.statusCode || 500,
      headers: { 'Content-Type': 'text/plain' },
      body: 'Could not create the note.'
    }
  }
}

module.exports.getOne = async (event) => {
  try {
    const { Note } = await connectToDatabase()
    const note = await Note.findById(event.pathParameters.id)
    if (!note) throw new HTTPError(404, `Note with id: ${event.pathParameters.id} was not found`)
    return {
      statusCode: 200,
      body: JSON.stringify(note)
    }
  } catch (err) {
    return {
      statusCode: err.statusCode || 500,
      headers: { 'Content-Type': 'text/plain' },
      body: err.message || 'Could not fetch the Note.'
    }
  }
}

module.exports.getAll = async () => {
  try {
    const { Note } = await connectToDatabase()
    const notes = await Note.findAll()
    return {
      statusCode: 200,
      body: JSON.stringify(notes)
    }
  } catch (err) {
    return {
      statusCode: err.statusCode || 500,
      headers: { 'Content-Type': 'text/plain' },
      body: 'Could not fetch the notes.'
    }
  }
}

module.exports.update = async (event) => {
  try {
    const input = JSON.parse(event.body)
    const { Note } = await connectToDatabase()
    const note = await Note.findById(event.pathParameters.id)
    if (!note) throw new HTTPError(404, `Note with id: ${event.pathParameters.id} was not found`)
    if (input.title) note.title = input.title
    if (input.description) note.description = input.description
    await note.save()
    return {
      statusCode: 200,
      body: JSON.stringify(note)
    }
  } catch (err) {
    return {
      statusCode: err.statusCode || 500,
      headers: { 'Content-Type': 'text/plain' },
      body: err.message || 'Could not update the Note.'
    }
  }
}

module.exports.destroy = async (event) => {
  try {
    const { Note } = await connectToDatabase()
    const note = await Note.findById(event.pathParameters.id)
    if (!note) throw new HTTPError(404, `Note with id: ${event.pathParameters.id} was not found`)
    await note.destroy()
    return {
      statusCode: 200,
      body: JSON.stringify(note)
    }
  } catch (err) {
    return {
      statusCode: err.statusCode || 500,
      headers: { 'Content-Type': 'text/plain' },
      body: err.message || 'Could not destroy the Note.'
    }
  }
}

I can understand if you think this is a bit of a huge snippet. Don't worry. These are only 6 basic async functions. The connectToDatabase() function resolves to an object containing all the database models. We'll only implement the Note model in this tutorial. The logic behind the function will connect to the SQL database, cache the connection, and make sure never to retry connecting if an existing connection is alive.

But wait, we haven’t defined or created any of this. Well, I did that on purpose. I first want you to see that this is not that complicated, nor any different from creating an API with Node.js and Express.

7. Adding the database connection

Adding the database connection is our next step. When an AWS Lambda function is invoked for the first time, which is called a cold start, AWS will spin up a container to run the code. This is when we connect to the database. All subsequent requests will use the existing database connection. Conceptually, it’s rather easy to understand, but a handful when we need to wrap our heads around it in the code. Here goes nothing!

Create a new file in the root directory of the service, right alongside the handler.js. Give it a pretty logical name of db.js, and add the code below.

const Sequelize = require('sequelize')
const NoteModel = require('./models/Note')
const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
)
const Note = NoteModel(sequelize, Sequelize)
const Models = { Note }
const connection = {}

module.exports = async () => {
  if (connection.isConnected) {
    console.log('=> Using existing connection.')
    return Models
  }

  await sequelize.sync()
  await sequelize.authenticate()
  connection.isConnected = true
  console.log('=> Created a new connection.')
  return Models
}

At the top, we're requiring Sequelize and the NoteModel.

Note: We're yet to create the model, but hold on, we'll get to that right after this part.

We then initialize sequelize with the connection variables. Calling NoteModel will initialize the model, which we then assign to the Models constant. The arbitrary connection object is there to act as a cache for the connection, making sure we don't sync the database when it's not necessary. The other reason for awaiting the .sync() and .authenticate() methods is to make sure the database connection is established during the initial function call, before handling any business logic.

With the db.js file created, let’s require it in the handler.js. Just add this snippet to the top of the handler.

const connectToDatabase = require('./db') // initialize connection

// simple Error constructor for handling HTTP error codes
function HTTPError (statusCode, message) {
  const error = new Error(message)
  error.statusCode = statusCode
  return error
}

8. Adding a Note model

Jump back to the handler.js. You can see we’re calling const { Note } = await connectToDatabase() in the functions to retrieve data, but there’s no Note model defined. Well, now is as good a time as any.

Create a new folder in the service root directory and name it models. In it create another file and name it Note.js. This will be just a simple model definition.

module.exports = (sequelize, type) => {
  return sequelize.define('note', {
    id: {
      type: type.INTEGER,
      primaryKey: true,
      autoIncrement: true
    },
    title: type.STRING,
    description: type.STRING
  })
}

That’s it. Time to try it out.

Note: Make sure to add your secrets.json to your .gitignore file.

How about some testing?

We’re ready to test the API. To be sure, let's start by running it all locally. One tiny thing to take into account is that there’s a flag we need to add while running serverless-offline.

$ sls offline start --skipCacheInvalidation

Note: Serverless Offline invalidates the Node require cache on every run by default; we add this flag to disable that. In Node.js, when you require() a module, it stores a cached version of the module so that subsequent calls to require() don't have to reload the module from the file system.

Once you've run the command in the terminal, you should see something like this.

sls-offline-start

All our routes are up and running. Using Insomnia, I’ve created a POST request to http://localhost:3000/notes with a JSON body.

insom-1

Checking the terminal, you can see => Created a new connection. get logged, meaning the initial database connection has been established. Send another POST request and you’ll see => Using existing connection. get logged instead.

Awesome, adding a new note works. Let’s retrieve all the notes we just added using the getAll method.

insom-2

Try out the other endpoints yourself. Come back here once you're done playing.
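If you'd rather stay in the terminal than use a REST client, the same requests look roughly like this with curl, assuming serverless-offline is running on its default port 3000 and using the note fields from our model:

$ curl -X POST http://localhost:3000/notes \
    -H "Content-Type: application/json" \
    -d '{"title":"Groceries","description":"Milk and eggs"}'

$ curl http://localhost:3000/notes
$ curl http://localhost:3000/notes/1

$ curl -X PUT http://localhost:3000/notes/1 \
    -H "Content-Type: application/json" \
    -d '{"title":"Groceries and more"}'

$ curl -X DELETE http://localhost:3000/notes/1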

Deploying to AWS

Now comes the tricky part. Make sure to write down all the database parameters you'll get in the AWS Aurora Serverless console, and of course the security group ID and subnet IDs as well. Usually, you'll have one security group and three subnets. But, it can vary, so don't worry. After you have the values, add them to your secrets.json. That's it! You're ready to deploy.

The Serverless framework makes deployments quick and painless. All you need to do is to run one command.

$ sls deploy

It will automagically provision resources on AWS, package up and push all code to S3 from where it will be sent to the Lambdas. The terminal should show output similar to this.

sls-deploy

Note: You can repeat the testing process from above with the endpoints provided. But, be patient with the cold-start times. AWS Aurora Serverless can take a decent 10 seconds to start. A similar issue happens with AWS Lambda in a VPC.

That’s all there is to the deployment process. Easy right? This is why I love the Serverless framework so much.

Wrapping up

This is an ever-evolving architecture. With the rise of a serverless relational database, the possibilities are endless in creating entirely serverless infrastructure. I’ve tried my best to explain the process of creating a proper API. Hopefully, you've learned a lot. Enjoy digging deeper into the possibilities of serverless architecture and all that comes with it!

And, of course, here's the repo once again, give it a star if you want more people to see it on GitHub. If you want to read some of my previous serverless musings head over to my profile or join my serverless newsletter!

I had an absolute blast playing with AWS Aurora Serverless. Hope you guys and girls enjoyed reading it as much as I enjoyed writing it. If you liked it, slap that tiny unicorn so more people here on dev.to will see this article. Don't forget to give the peeps at Cube.js some love if you need a serverless SQL analytics framework! Until next time, be curious and have fun.




Top comments (28)

BryanGerre

Hey Adnan. Thank you so much for taking the time to develop this example. How would you introduce another model to this solution? Let's say I wanted to add UsersModel. I've created the users.js within the models directory. The db.js you have only passes in one model (const Models = { Note }). Could that be an array of models, like const Models = [Note, User, etc]? If so, how does the handler.js know which model?

Adnan Rahić

Add another model to the object.
In db.js:

const Models = { Note, User }

Once you want to require the models in the handler.js you can choose which one you want.

const { Note } = await connectToDatabase()
// or
const { User } = await connectToDatabase()
// or even
const { Note, User } = await connectToDatabase()

It's up to you to use what you like more. I personally think destructuring is freaking awesome. :)
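Roughly, db.js with a second model could look like this (assuming a models/User.js defined the same way as Note.js); the rest of the file stays the same:

const Sequelize = require('sequelize')
const NoteModel = require('./models/Note')
const UserModel = require('./models/User') // assumes models/User.js follows the same pattern as Note.js

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
)

const Note = NoteModel(sequelize, Sequelize)
const User = UserModel(sequelize, Sequelize)
const Models = { Note, User }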

BryanGerre • Edited

Last question, what would be the best way to associate those two tables? I've found many examples, but it would require a refactoring of the model's .js file.

BryanGerre

Thanks so much, I'll try the above. thanks again

Christian Carrillo

What do you mean with destructuring?

andrewstevens

To install mysql locally on a mac (mojave):
brew install mysql@5.7 to install an older version of mysql. Then everything works fine locally.

To answer Weiren's question:
in the secrets.json file:
"DB_HOST": "superdupercluster.cluster-.rds.amazonaws.com"

Fixes the connection refused issue you are seeing, as 127.0.0.1 works for the local machine, but not for whatever machine that the Lambda function runs on.

However, now I am getting the same issue as sobhanthankur:

{
"errorType": "SequelizeConnectionError",
"errorMessage": "Unknown database 'test'",
"trace": [
"SequelizeConnectionError: Unknown database 'test'",
" at Promise.tap.then.catch.err (/var/task/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:133:19)",
" at tryCatcher (/var/task/node_modules/bluebird/js/release/util.js:16:23)",
" at Promise._settlePromiseFromHandler (/var/task/node_modules/bluebird/js/release/promise.js:517:31)",
" at Promise._settlePromise (/var/task/node_modules/bluebird/js/release/promise.js:574:18)",
" at Promise._settlePromise0 (/var/task/node_modules/bluebird/js/release/promise.js:619:10)",
" at Promise._settlePromises (/var/task/node_modules/bluebird/js/release/promise.js:695:18)",
" at _drainQueueStep (/var/task/node_modules/bluebird/js/release/async.js:138:12)",
" at _drainQueue (/var/task/node_modules/bluebird/js/release/async.js:131:9)",
" at Async._drainQueues (/var/task/node_modules/bluebird/js/release/async.js:147:5)",
" at Immediate.Async.drainQueues as _onImmediate",
" at runCallback (timers.js:705:18)",
" at tryOnImmediate (timers.js:676:5)",
" at processImmediate (timers.js:658:5)"
]
}

Do we need to initialize the Aurora database and create the database named 'test' first, and then Sequelize will automatically create the notes table in that database for us? Or is there a missing permission for the lambda role/policy?

Adnan Rahić

Yes, you need to create it manually. I mentioned it in the article:

Note: If you want to access the AWS Aurora Serverless cluster follow this tutorial to learn how to do it with the CLI, or this official tutorial by AWS to connect through Cloud9. You will need to connect to the cluster and create a database manually because Sequelize can't create them for you.

I've yet to find the free time to explore the Lambda role/policy to let Sequelize create it automatically. But, I'd ideally not want my Lambda role to have access to create/delete/edit databases. It could end badly. Only letting them change the data is a bit more safe for my liking.

@kra007027

Hello.
Did you solve the issue?
Could you help me to solve this issue?

Ryed Ban • Edited

Hi Adnan,

Great work buddy!

I just got one question - how would you associate another model to the Note model?
Let's say we have a User model and we would like to implement Note.belongsTo(User).

I have made something like this but I keep getting an error like "note is not associated to user"

const User = require('./User');

module.exports = (sequelize, type) => {
  const Note = sequelize.define('note', {
    id: {
      type: type.INTEGER,
      primaryKey: true,
      autoIncrement: true
    },
    title: type.STRING,
    description: type.STRING
  })

  Note.associate = (models) => {
    Note.belongsTo(models.User, {
      foreignKey: 'user_id',
      as: 'user'
    })
  }

  return Note
}

Lilhamad Adewale

Hi @softnayr were you able to solve the "note is not associated to user" issue

sobhanthakur

Can somebody please help me out with this error

{
"errorType": "SequelizeConnectionError",
"errorMessage": "Unknown database 'test_aurora'",
"trace": [
"SequelizeConnectionError: Unknown database 'test_aurora'",
" at Promise.tap.then.catch.err (/var/task/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:133:19)",
" at tryCatcher (/var/task/node_modules/bluebird/js/release/util.js:16:23)",
" at Promise._settlePromiseFromHandler (/var/task/node_modules/bluebird/js/release/promise.js:517:31)",
" at Promise._settlePromise (/var/task/node_modules/bluebird/js/release/promise.js:574:18)",
" at Promise._settlePromise0 (/var/task/node_modules/bluebird/js/release/promise.js:619:10)",
" at Promise._settlePromises (/var/task/node_modules/bluebird/js/release/promise.js:695:18)",
" at _drainQueueStep (/var/task/node_modules/bluebird/js/release/async.js:138:12)",
" at _drainQueue (/var/task/node_modules/bluebird/js/release/async.js:131:9)",
" at Async._drainQueues (/var/task/node_modules/bluebird/js/release/async.js:147:5)",
" at Immediate.Async.drainQueues as _onImmediate",
" at runCallback (timers.js:705:18)",
" at tryOnImmediate (timers.js:676:5)",
" at processImmediate (timers.js:658:5)"
]
}

Adnan Rahić

I answered this question here:

Yes, you need to create it manually. I mentioned it in the article:

Note: If you want to access the AWS Aurora Serverless cluster follow this tutorial to learn how to do it with the CLI, or this official tutorial by AWS to connect through Cloud9. You will need to connect to the cluster and create a database manually because Sequelize can't create them for you.

I've yet to find the free time to explore the Lambda role/policy to let Sequelize create it automatically. But, I'd ideally not want my Lambda role to have access to create/delete/edit databases. It could end badly. Only letting them change the data is a bit more safe for my liking.

You need to manually connect to the Aurora cluster and create a database.

Haseeb Burki • Edited

Hey Adnan. Can you please explain a little how database connections work with the Serverless Framework? More specifically, how many database connections would your code create if you call the same endpoint multiple times, or call the same endpoint from two different sources simultaneously?

I have a serverless project integrated with Express and Sequelize, such that a single Lambda has multiple routes. The F.E. calls 6 API endpoints for different graphs simultaneously. My code creates 6 connections instead of one, and the CPU utilization on my RDS instance goes up to 60-80% :| ...

Adnan Rahić

This is a cool question. The keyword here is simultaneously. This has nothing to do with the Serverless Framework, and everything to do with the way AWS Lambda works under the hood.

When your front end sends 6 concurrent requests to a Lambda function, it will spawn 6 instances and every instance will handle one request. If you were to send 6 requests sequentially, one instance would be enough.

Because, in this case, you have 6 Lambda function instances, every one of those will create a database connection. Makes sense?

The way to handle this is to have one Lambda function dedicated to only database interaction, while the others act as proxies. This is very hard to configure. I'd suggest you use a serverless database that can scale as easily as Lambda. :)

Very cool question, feel free to hit me up through the chat here in dev.to if you have any more topics like these. I'd love to nerd out.
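Another thing worth trying, which I don't cover in the article, is capping Sequelize's connection pool so each container holds at most one connection. Roughly, the constructor options in db.js could look like this; it limits per-instance usage, but concurrent containers still add up:

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    // one connection per Lambda container
    pool: { max: 1, min: 0, idle: 10000 }
  }
)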

BryanGerre

Hey Adnan, I'm trying to query the database using

const user = await User.findAll({
  where: {
    user_email: event.pathParameters.user_email
  }
})

The event.pathParameters.user_email has a %40 instead of the @ symbol.

any suggestions?

Weiren • Edited

Hi Adnan, thank you so much for your wonderful article. I did exactly the same up to the offline test. I got an issue where it cannot connect to the Aurora DB. The things I have tried are:

  • checking the error from Sequelize: I got a SequelizeConnectionRefusedError with "ECONNREFUSED"
  • deploying to AWS (same issue, cannot connect to the DB)

What did I miss during the process that caused the issue?
In the secrets.json file, what should be changed according to my RDS configuration (especially DB_HOST and DB_NAME)?

John Horn

Had the same exact issue, and as someone on a Mac with no MySQL installed, this was the solution for me. Remember, a local instance of MySQL still needs to be available.

Install mysql locally

brew install mysql

Start MySQL

mysql.server start OR service mysqld start

Make change to user auth

Per stack overflow need to edit password auth

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'

mysql -u root -p
"password"

Make sure to change the db.js DB_PASSWORD

db.js
"DB_PASSWORD": "password"

and voila it worked for me

Virendra Sharma

Hello Adnan, I am getting this error while connecting to database!

Serverless: Failure: connect ETIMEDOUT
SequelizeConnectionError: connect ETIMEDOUT
at Promise.tap.then.catch.err (D:\litmus-workspace\litmus-serverless\node_modules\sequelize\lib\dialects\mysql\connection-manager.js:133:19)
at tryCatcher (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\promise.js:517:31)
at Promise._settlePromise (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\promise.js:574:18)
at Promise._settlePromise0 (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\promise.js:619:10)
at Promise._settlePromises (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\promise.js:695:18)
at _drainQueueStep (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\async.js:138:12)
at _drainQueue (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\async.js:131:9)
at Async._drainQueues (D:\litmus-workspace\litmus-serverless\node_modules\bluebird\js\release\async.js:147:5)
at Immediate.Async.drainQueues as _onImmediate
at runCallback (timers.js:705:18)
at tryOnImmediate (timers.js:676:5)
at processImmediate (timers.js:658:5)
at process.topLevelDomainCallback (domain.js:120:23)

I have exactly followed your steps but I am getting this error. My database inbound traffic is set to all but still I am getting this error. I was able to connect to my database through SSH tunneling.

prajapatirohanp

Hello Virendra,

Good morning.

Have you resolved this issue? Please let me know.

Thanks,
Rohan.

Pradeep Chauhan

I need a boilerplate with AWS Node.js Lambda, AWS Cognito, JWT, MongoDB, Cloudformation for user authentication and authorization
dev.to/realpchauhan/user-authentic...

Fredrik Wahlqvist

Hey Adnan,
Great tutorial!!
Just wondered if you have any experience with Lambda consistently timing out? Although I get a DB connection, it keeps timing out and never returns anything. (Input from others is of course welcome too :) )

Hashir Baig

Hi Adnan!
Thanks for this amazing guide.
I wanted to ask if, performance-wise, it is fine to require "db" in every Lambda function?
As far as I am aware of Sequelize (and my knowledge is limited), whenever we run an Express app with models created using Sequelize, it first checks whether all tables exist, and creates them if they don't.
Isn't that overhead on every Lambda execution?
Let me know if I am mistaken.
Thank You again for this awesome post :)

Davide de Paolis

Hi Adnan. Thanks for sharing, it's very useful. At work we recently ported an app from PHP to React + Cognito + API Gateway + Lambda + VPC to connect to a Vertica DB, and we just started moving to Aurora Serverless.

Can I ask why you add mysql2 to the package.json? I couldn't see it used in the code. Maybe it's a necessary peer dependency for Sequelize as soon as you define the dialect you want to use?

appup-cloud • Edited

Hello Adnan,

Thank you for this article which has given me a really great start.

The steps you wrote worked really well until I tried to publish an SNS message to a topic.

After a little research I figured out that since we are putting the Lambda in Aurora Serverless's VPC, it is not allowing me to connect to any internet-facing external services.

Can you suggest a solution?

Cheers

Tom Medema

Is there a reason why you don't create the aurora database with cloudformation?

parampal

What do you think is the best way to add parameter validation for POST/PUT calls in this example? Also is there any testing library you would recommend?

Thanks for the great tutorial!