
Teo Deleanu

How to use Sequelize ORM in your Express application

Adding an ORM, or picking one, is definitely not an easy task. But the speed it brings to your coding is critical.
I've taken a quick look at the ORMs available for JS.
For those of you who prefer relational databases based on SQL, you can use Sequelize.
That is the main reason we picked it for this example.

Pros and Cons

Sequelize - 483K weekly downloads at the time of writing - seems like the most stable of the bunch, with the difference that it doesn't use Mongo (otherwise the preferred database for an MVP project).
Sequelize is a promise-based Node.js ORM for Postgres, MySQL, MariaDB, SQLite, and Microsoft SQL Server. It features reliable transaction support, relations, eager and lazy loading, read replication and more.
Pros:

  • Sequelize abstracts away the nuanced differences between various SQL implementations.
  • Sequelize has middleware that allows customizing the error message for validation failures on every field.
  • Sequelize provides a promise-based library, which helps you better manage exceptions and unexpected results.
  • Has the capability to "sync" the database on changed models, so it will alter the tables or create them automatically for you.
  • Has the capacity to run all the complicated joins for you. You just have to learn how to connect the objects with each other.

Cons:

  • Sequelize sometimes generates complex queries. In the model definition, it sometimes doesn't interpret the relation correctly when the relationship is "hasOne."
  • Sequelize lacks NoSQL support: wanna use Mongo? Gotta switch.
  • Has some weird exceptions when things get busy and you want to create more connections to the DB. You can use a connection pool for that.

The most significant advantage of an ORM - Object-Relational Mapper - is fast switching between SQL databases. So if you started with SQLite as a local database and want to switch next week to MySQL or Postgres, you just change the connector.
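In practice, switching can be as small as swapping the connection options while all the model and query code stays the same. A minimal sketch (the connection values are placeholders, not from this article):

```javascript
// Each environment just picks a different connector config;
// nothing else in the application has to change.
const configs = {
  development: { dialect: 'sqlite', storage: './db.development.sqlite' },
  production:  { dialect: 'mysql', host: 'localhost', port: 3306 },
};

const pickConfig = (env) => configs[env] || configs.development;

// With Sequelize installed you would then do something like:
//   const Sequelize = require('sequelize');
//   const sequelize = new Sequelize('db', 'user', 'pass', pickConfig(process.env.NODE_ENV));
```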

Choosing the SQL Database

  • SQLite - an easy-to-use local database with no server required; it uses just a local file, so it's suitable for non-concurrent access and single-server applications. When to use SQLite: embedded applications, disk access replacement, testing. When not to use SQLite: working with lots of data, high write volume, network access required.
  • MySQL - the de-facto SQL server, with support in most of the older languages. Nokia, Google, Facebook, and YouTube use it. It allows multiple users to connect to the same server and have different databases. It also has MySQL Workbench, one of the most used UI tools for MySQL, in case you don't like the MySQL CLI. When to use MySQL: popularity and ease of use (you will easily find resources about it), security, speed, and replication; websites and web applications, distributed operations, expected future growth. When not to use MySQL: known limitations (it lacks support for FULL joins); licensing and proprietary features (some features are not available in the open-source license since the 2009 acquisition by Oracle, and new feature development slowed down); when SQL compliance is necessary; concurrency and large data volumes. MySQL is widely chosen for web-based projects that need a database only for straightforward data transactions. It is common, though, for MySQL to underperform when strained by heavy loads or when attempting to complete complex queries. MySQL performs well when only read speeds are required, and MySQL + InnoDB provides very good read/write speeds for different scenarios. Overall, MySQL performs well in high-concurrency scenarios, and it is reliable and works well with Business Intelligence applications, as those are typically read-heavy.
  • Postgres - PostgreSQL's pgAdmin tool is pretty awesome and fairly complete when it comes to configuring and interacting with your server. PostgreSQL is widely used in large systems where read and write speeds are crucial and data needs to be validated. It also supports a variety of performance optimizations otherwise available only in commercial solutions, such as geospatial data support and concurrency without read locks. Overall, PostgreSQL is utilized best in systems requiring the execution of complex queries. PostgreSQL performs well when read/write speeds are required and extensive data analysis is needed. It also works well with Business Intelligence applications but is better suited for Data Warehousing and data analysis applications that require fast read/write speeds. When to use Postgres: data integrity is essential, integration with other tools, complex operations (queries can be distributed over multiple CPUs). When not to use Postgres: speed is imperative (for fast reads), simple setups, complex replication.
  • MariaDB - used by many large corporations, Linux distributions, and more. Organizations that use MariaDB include Google, Craigslist, Wikipedia, Arch Linux, Red Hat, CentOS, and Fedora. MariaDB is a fork of MySQL, so most of the features are compatible, including queries and indexes.
  • tedious - the Node.js connector for Microsoft SQL Server. Usage is very similar to MySQL, but with MSSQL you create a database and don't specify different engines. It is generally expensive to run, because you need licenses for the server running the software. If you use .NET, you have some advantages, but that's not our case. On the other hand, you can extend existing functionality with Node connecting directly to MSSQL.

Configuring Sequelize

Now let's get back to our Sequelize ORM.
For our example we will choose the most straightforward way: SQLite - 20K weekly downloads - since it requires zero configuration on the server side.
If you need more complex examples, on a given subject, please comment in the section below.

Let's start installing things around:

$ npm install --save sequelize # This will install v5

$ npm install --save sqlite3

# Optionally add other db connectors - but you will need only one at a time
# keep your code clean and remove the unused ones from package.json
$ npm install --save pg pg-hstore # [Postgres](https://www.npmjs.com/package/pg) 680K and [HStore deserializer](https://www.npmjs.com/package/pg-hstore) 80K weekly downloads ATM
$ npm install --save mysql2 # [Mysql](https://www.npmjs.com/package/mysql2) 198K weekly downloads ATM
$ npm install --save mariadb #  [MariaDB connector](https://www.npmjs.com/package/mariadb) 5K weekly downloads ATM
$ npm install --save tedious # [Microsoft SQL Server](https://www.npmjs.com/package/mssql) - 80K downloads ATM

That was the easy part. We'll also want to use the sequelize-cli.

$ npm install --save-dev sequelize-cli

Now we want to create default config files.

node_modules/.bin/sequelize init

This will generate the config folder (with config/config.js), the seeders folder, the models folder (with models/index.js), and the migrations folder.

Unrelated to this subject: In Appseed PRO version you can also get everything set up for easily checking code quality in VSCode. You can even get all the above code for free and use it yourself. Licensed MIT.

The file index.js picks up all the models that you will create under the models folder and loads them at run time. It also calls the sync method for you to make sure the DB is up to date.

Let's have a look at the config file:

module.exports = {
    development: {
        dialect: 'sqlite',
        storage: './db.development.sqlite'
    },
    test: {
        dialect: 'sqlite',
        storage: ':memory:'
    },
    production: {
        username: process.env.DB_USERNAME,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME,
        host: process.env.DB_HOSTNAME,
        port: process.env.DB_PORT,
        dialect: 'mysql',
        use_env_variable: 'DATABASE_URL'
    },
    production2: {
        use_env_variable:"DB_CONNECTION_STRING",
        dialect: "postgres",
        ssl: true,
        dialectOptions: {
            ssl: true
        }
    }
};

As you may see, you can use different configurations for your database based on different environments. The configuration key must match the ENV variable from your .env file.
Among the options you can use are: env variable, ssl, host, port, full connection string, storage, database name, dialectOptions, protocol (tcp, or null for Heroku), logging (can be set to null or console.log), sync: { force: true }, and pool (max: 5, idle: 30000, acquire: 60000). More details on the Sequelize config options can be found on the Sequelize doc page.
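As an example, a production entry using a connection pool with logging turned off could look like this. The values here are illustrative, not recommendations:

```javascript
const config = {
  production: {
    use_env_variable: 'DATABASE_URL', // full connection string read from the environment
    dialect: 'postgres',
    logging: null,                    // or console.log to see the generated SQL
    pool: {
      max: 5,        // max open connections
      idle: 30000,   // ms a connection may stay idle before being released
      acquire: 60000 // ms to wait for a connection before throwing
    }
  }
};

module.exports = config;
```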

Now let's create a User. You can either use the CLI or write the code itself. I like using both when adding details such as primary keys and other extras.

node_modules/.bin/sequelize model:create --name User --attributes name:string,surname:string,email:string,password:string

The coolest thing is that this generates the full model automatically. If you want, you can copy the id definition from the migration and add it to the model as id: DataTypes.INTEGER.

'use strict';
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    name: DataTypes.STRING,
    surname: DataTypes.STRING,
    email: DataTypes.STRING,
    password: DataTypes.STRING
  }, {});
  User.associate = function(models) {
    // associations can be defined here
  };
  return User;
};
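The empty associate hook is where relations go. As a hypothetical illustration (a Post model is not part of this article's schema), a one-to-many relation would be wired like this; the stand-in objects below only exist so the call shape can be exercised without a database:

```javascript
// In models/user.js you would fill the associate hook like this:
//
//   User.associate = function (models) {
//     // one user has many posts; Sequelize adds a userId FK on Post
//     User.hasMany(models.Post, { foreignKey: 'userId' });
//   };
//
// index.js then calls associate(db) on every loaded model.
const calls = [];
const User = {
  name: 'User',
  // records what a real Sequelize model would receive
  hasMany: (target, opts) => calls.push(['hasMany', target.name, opts.foreignKey]),
};
const Post = { name: 'Post' };

User.associate = function (models) {
  User.hasMany(models.Post, { foreignKey: 'userId' });
};
User.associate({ User, Post });
```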

As an extra, it also generates the migration for this table model.

'use strict';
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('Users', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      name: {
        type: Sequelize.STRING
      },
      surname: {
        type: Sequelize.STRING
      },
      email: {
        type: Sequelize.STRING
      },
      password: {
        type: Sequelize.STRING
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE
      }
    });
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.dropTable('Users');
  }
};

So you get a full example of how you can do it on your own.
If you already have some models created manually and want to create a migration, you can do it by calling sequelize:

$ sequelize migration:generate --name [name_of_your_migration]
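A hand-written migration exports up and down functions built on queryInterface, just like the generated one above. A hedged sketch of adding a column (the nickname column and file name are just examples):

```javascript
// migrations/XXXXXXXXXXXXXX-add-nickname-to-users.js (file name is illustrative)
const migration = {
  // up applies the change...
  up: (queryInterface, Sequelize) =>
    queryInterface.addColumn('Users', 'nickname', { type: Sequelize.STRING }),
  // ...and down reverts it, so the migration can be rolled back
  down: (queryInterface, Sequelize) =>
    queryInterface.removeColumn('Users', 'nickname'),
};

module.exports = migration;
```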

Coding your way to the DB

But let's get back to the most beautiful part of our lives: coding!
In your route handlers, you must require the models. I would suggest separating models and database access from your API route handlers.

const User        = require('../../models').User;

//get some fake data for example
const faker = require('faker');
const email = faker.internet.email();
const password = faker.internet.password();
const firstName = faker.name.firstName();
const lastName = faker.name.lastName();
const body = {email: email, password: password, name: lastName, surname: firstName};

//create a user (note: the awaits below must run inside an async function)
const newUser = await User.create(body);

//find the user
let user = await User.findOne({where: {email}});

//destroy the user with the object, this will call DELETE where id = our_user_id automatically.
await user.destroy();

And now we have the full cycle for an object.

Bonus 1: Besides the normal cycle of one user you may want to display all the users in an admin dashboard:


let limit = 50;   // number of records per page
let offset = 0;
try {
    const data = await User.findAndCountAll();
    let page = req.params.page;      // page number
    let pages = Math.ceil(data.count / limit);
    offset = limit * (page - 1);
    const users = await User.findAll({
        attributes: ['id', 'name', 'surname', 'email'],
        limit: limit,
        offset: offset,
        order: [['id', 'ASC']]
    });
    // next return an object that describes
    // the page and offset
    // this ensures you can display pages
    // and give the UI the option
    // to display this to the user
    return {
        page,
        pages,
        offset,
        users
    };
} catch (err) {
    //do something with your status like in the commented example
    //res.status(500).send('Internal Server Error');
}

Bonus 2: you can use Joi to validate your data on the route side before you add it in your model.

const Joi         = require('joi');
const userSchema = Joi.object().keys({
    username: Joi.string().alphanum().min(3).max(30).optional(),
    password: Joi.string().required(),
    email: Joi.string().email({ minDomainAtoms: 2 }).required(),
    name: Joi.string().alphanum().min(2).max(100).optional(),
    surname: Joi.string().alphanum().min(2).max(100).optional()
});
//....
const result = Joi.validate(user, userSchema);
if(result.error){
    return res.status(422).json({
        errors: result.error
    });
}
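To avoid repeating this check in every handler, it can be wrapped in reusable Express-style middleware. This is a sketch; the validateBody name and the validator shape are assumptions, not from the article:

```javascript
// validateBody takes any validator of shape (body) => ({ error }) —
// e.g. body => Joi.validate(body, userSchema) — and turns it into middleware.
const validateBody = (validate) => (req, res, next) => {
  const result = validate(req.body);
  if (result.error) {
    // reject invalid payloads before they reach the route handler
    return res.status(422).json({ errors: result.error });
  }
  return next();
};

// usage with Express and the userSchema above:
//   router.post('/users', validateBody(body => Joi.validate(body, userSchema)), handler);
```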

Bonus 3:
Calling npm run test in the code will check everything for you. To set that up, check the article about BDD testing.
You can also set up code quality and error detection pretty easily with ESLint.

Please let me know your opinion about this article in the comments section below. I would be glad to answer your requests.

PS: This article was first published on appseed.us blog.
