Ege Aytin for Permify

Originally published at permify.co

Building Scalable GraphQL Microservices With Node.js and Docker: A Comprehensive Guide

In this guide, we will explore the step-by-step process of building scalable GraphQL microservices using Node.js and Docker.

Here is what we will cover:

  • Understanding Microservices Architecture
  • What Is GraphQL?
  • Building Blocks of GraphQL
  • Building a GraphQL Microservice
    • Step 1: Create a Project Folder
    • Step 2: Initialize Apollo Server
    • Step 3: Configure Sequelize ORM
    • Step 4: Create the Data Model
    • Step 5: Define the GraphQL Schema
    • Step 6: Create Resolvers for the GraphQL API
  • Containerization With Docker
  • Dockerizing the GraphQL Microservice
  • What Is Docker Compose?
  • Deploying Microservices
  • Container Orchestration Platforms

If you want to quickly build on top of what this tutorial has covered, you can clone the complete code for this project from this GitHub repository.

Let's start by briefly understanding the fundamentals of microservices, as well as the architecture of GraphQL APIs.

Understanding Microservices Architecture

Microservices architecture is a software design approach that focuses on building applications as separate and autonomous units.

These units, known as microservices, are loosely coupled and communicate with each other through APIs or message brokers. Together, they form a cohesive system.

Each microservice is responsible for a specific business function and manages its resources. Moreover, they can be developed, deployed, scaled, and maintained independently of each other.

Before we dive into the process of building microservices, it's important to understand the role of GraphQL in this architecture.

What Is GraphQL?

GraphQL is a query language and runtime for APIs. It provides a flexible and efficient way for clients to request and retrieve specific data from a server using a single API endpoint.

Some of its key features include:

  • Declarative data fetching: With GraphQL, clients can precisely specify the data they need, including the fields and relationships, in their queries. This eliminates the over-fetching and under-fetching of data that often occur with traditional REST APIs (see the example after this list).

  • Strong type system: GraphQL has a robust type system that lets API developers define the structure and relationships of the data in their APIs.

  • Efficient data loading capabilities: GraphQL enables clients to retrieve multiple resources in a single request. This reduces the number of round trips to the server, improving efficiency and reducing latency.
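For example, a client that only needs a user's name can request exactly those fields, and the server returns nothing more. A generic illustration (the user field and its subfields here are hypothetical, not part of the API we build later):

query {
  user(id: "1") {
    firstName
    lastName
    # Fields like email or address are simply not requested, so they are not returned
  }
}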

Building Blocks of GraphQL

GraphQL is composed of several essential building blocks that collectively define the structure and capabilities of the API.

  • Queries: Queries are used to request data from the server. They define the structure of the response and specify the fields and relationships to be included.

  • Mutations: Mutations are operations that are used to modify data on the server. They allow clients to create, update, or delete data. Like queries, mutations specify the fields and relationships involved in the operation.

  • Type Definitions (Typedefs): Type definitions, often referred to as typedefs, define the structure of the GraphQL schema. They provide a way to describe the available object types, scalar types (primitive data types), queries, and mutations in a clear and structured manner.

These components work together to define the structure of a GraphQL API and provide an efficient yet flexible way to query and manipulate data.
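To make this concrete, here is a tiny illustration of how the three building blocks fit together (the Book type is made up for illustration; the actual schema for this tutorial is defined in Step 5):

# Type definition: describes the shape of the data
type Book {
  id: ID!
  title: String!
}

# Query: read data
type Query {
  getBook(id: ID!): Book
}

# Mutation: modify data
type Mutation {
  addBook(title: String!): Book
}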

Building a GraphQL Microservice

Before diving into code, let's talk about our goal—we'll be building a simple CRUD-based API to manage user data.

There are several GraphQL server implementations; for this tutorial, we'll use Apollo GraphQL's Apollo Server, a lightweight and flexible JavaScript server that makes it easy to build GraphQL APIs.

Step 1: Create a Project Folder

To get started, create a project folder locally as follows:

mkdir graphql-API
cd graphql-API

Next, run this command to initialize a new Node.js project using npm:

npm init --yes

Finally, install these packages:

npm install apollo-server pg dotenv sequelize

Step 2: Initialize Apollo Server

Now, create a server.js file in the root directory, and include this code to initialize your Apollo Server:

// Load environment variables before requiring modules that read process.env
require('dotenv').config();

const { ApolloServer } = require('apollo-server');
const typeDefs = require('./graphql/typeDefs');
const resolvers = require('./graphql/resolvers');
const db = require('./utils/db');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Make the request object and the database connection available to all resolvers
  context: ({ req }) => ({ req, db }),
});

server.listen({ port: 3000 }).then(({ url }) => {
  console.log(`Server ready at ${url}`);
});

The above code snippet will create an instance of a GraphQL server using Apollo Server, which includes typeDefs and resolver configuration that specify the schema and the corresponding data fetching logic for the API.

The server configuration also includes a context option, which gives every resolver access to request-specific information through the req object, as well as to the database connection, so resolvers can interact with the data source.
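For instance, a resolver could read the Sequelize instance from the context rather than importing it directly. A minimal sketch (the healthCheck field is hypothetical and not part of the schema we define later; the resolvers in this tutorial import the User model directly instead):

const resolvers = {
  Query: {
    // The third resolver argument is the context object built in server.js
    async healthCheck(parent, args, { db }) {
      await db.authenticate(); // verifies the database connection is alive
      return 'Database connection is healthy';
    },
  },
};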

Step 3: Configure Sequelize ORM

The GraphQL API will use a PostgreSQL database, run from a Docker image in a container, as its data source.

While you can opt to manage your server-database interaction using custom database scripts, for simplicity, we'll utilize the Sequelize ORM (Object Relational Mapper).

In the root directory of your project, create a utils/db.js file. Then populate it with the following code:

const { Sequelize } = require('sequelize');

// Connection settings are read from environment variables (see the .env file below)
const db = new Sequelize({
  dialect: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
});

async function syncDatabase() {
  try {
    await db.authenticate();
    // alter: true updates existing tables to match the current models
    await db.sync({ alter: true });
    console.log('Database synchronized successfully');
  } catch (error) {
    console.error('Error synchronizing database:', error);
  }
}
syncDatabase();

module.exports = db;

With this code, you initialize a connection to a PostgreSQL database. It also checks whether the database schema matches the Sequelize data models and synchronizes them.

This means that when you run the application, it will check whether the tables exist and match your models, creating or altering them as needed. (Note that sync creates tables, not the database itself; the database must already exist.)

Alternatively, you can write database migration scripts to handle such schema changes, as sketched below.
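As a rough illustration, a sequelize-cli-style migration for the User model we define in Step 4 might look like this (a sketch under the assumption that you set up sequelize-cli separately; the file name is hypothetical):

// migrations/20240101000000-create-user.js (hypothetical file name)
'use strict';

module.exports = {
  async up(queryInterface, Sequelize) {
    await queryInterface.createTable('Users', {
      id: { type: Sequelize.INTEGER, autoIncrement: true, primaryKey: true },
      firstName: { type: Sequelize.STRING, allowNull: false },
      lastName: { type: Sequelize.STRING, allowNull: false },
      createdAt: { type: Sequelize.DATE, allowNull: false },
      updatedAt: { type: Sequelize.DATE, allowNull: false },
    });
  },

  async down(queryInterface) {
    await queryInterface.dropTable('Users');
  },
};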

Now, create a .env file and populate it with the following environment variables.

POSTGRES_HOST=<db-host-name>
POSTGRES_PORT=5432
POSTGRES_USER=<user-name>
POSTGRES_PASSWORD=<password>
POSTGRES_DB=<DB-name>
Note that when you run everything with Docker Compose later in this tutorial, POSTGRES_HOST should be set to db, the name of the database service defined in docker-compose.yml, since containers on the same Compose network reach each other by service name.

Step 4: Create the Data Model

Let's define a model for user data. Create a new models/user.js file, and add the following code.

const { DataTypes } = require('sequelize');
const db = require('../utils/db');

// Sequelize automatically adds an `id` primary key plus
// `createdAt` and `updatedAt` timestamp columns
const User = db.define('User', {
  firstName: {
    type: DataTypes.STRING,
    allowNull: false,
  },
  lastName: {
    type: DataTypes.STRING,
    allowNull: false,
  },
});

module.exports = User;

Now that we have the model set up, let's move forward to define the GraphQL schema.

Step 5: Define the GraphQL Schema

The GraphQL schema defines how data is structured and accessed within the API. It outlines the available operations, such as queries and mutations, that can be performed to interact with the data.

To define the schema, create a new folder named graphql in the root directory of your project. Then add these two files: typeDefs.js and resolvers.js.

In the typeDefs.js file, include the following code:

const { gql } = require('apollo-server');
const typeDefs = gql`
  type User {
    id: ID!
    firstName: String!
    lastName: String!
  }
  type Query {
    getUser(id: ID!): User
    getAllUsers: [User]
  }
  type Mutation {
    createUser(firstName: String!, lastName: String!): User
    updateUser(id: ID!, firstName: String!, lastName: String!): User
    deleteUser(id: ID!): User
  }
`;
module.exports = typeDefs;

Now that we have defined the schema, we can proceed to implement the operations (resolvers) for the GraphQL API.

Step 6: Create Resolvers for the GraphQL API

Resolver functions are responsible for managing client requests, including queries and mutation operations, within a GraphQL server.

When a client sends a request, the server invokes the corresponding resolver functions to process it, fetching data from various sources such as databases or APIs and manipulating it (e.g., adding or deleting data) before returning a response.

To implement resolver functions, add the following code to the resolvers.js file.

// Import the User model so resolvers can query the database
const User = require('../models/user');

const resolvers = {
  Query: {
    async getUser(parent, { id }) {
      try {
        return await User.findByPk(id);
      } catch (error) {
        throw new Error('Error fetching user');
      }
    },
    async getAllUsers() {
      try {
        return await User.findAll();
      } catch (error) {
        console.error(error);
        throw new Error('Error fetching all users');
      }
    },
  },
};

module.exports = resolvers;

The resolvers object provided above contains functions that handle queries to fetch user data from the database.

These functions will manage client requests to retrieve a single user by their ID, as well as all user data. This is a basic data-fetching example; you can customize the logic to fit different use cases, for example by adding pagination, as sketched below.
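A minimal pagination sketch (the limit and offset arguments are assumptions on my part; to use them you would also add them to the getAllUsers query in typeDefs.js):

// Hypothetical paginated variant of getAllUsers, using Sequelize's
// built-in limit/offset options
async getAllUsers(parent, { limit = 10, offset = 0 }) {
  try {
    return await User.findAll({ limit, offset });
  } catch (error) {
    throw new Error('Error fetching users');
  }
},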

Lastly, include the following Mutation object within the resolvers object, alongside the Query object:

Mutation: {
    async createUser(parent, { firstName, lastName }) {
      try {
        const user = await User.create({ firstName, lastName });
        return user;
      } catch (error) {
        throw new Error('Error creating user');
      }
    },
    async updateUser(parent, { id, firstName, lastName }) {
      try {
        const user = await User.findByPk(id);
        if (!user) {
          throw new Error('User not found');
        }
        user.firstName = firstName;
        user.lastName = lastName;
        await user.save();
        return user;
      } catch (error) {
        throw new Error('Error updating user');
      }
    },
    async deleteUser(parent, { id }) {
      try {
        const user = await User.findByPk(id);
        if (!user) {
          throw new Error('User not found');
        }
        await user.destroy();
        return user;
      } catch (error) {
        throw new Error('Error deleting user');
      }
    },
  },

The mutation functions, on the other hand, are responsible for data manipulation operations; in this case, they handle client requests that create, update, and delete user records in the database.

Go ahead and spin up the server.
node server.js

The server should start at the specified port.

The server will also throw an error as it tries to connect to the database. Don't worry; this is expected at this stage.

Once you containerize the server along with the PostgreSQL database image, the server should be able to connect to the database successfully.

Awesome! Now that your user API service is ready, the next step is to deploy it as an independent application, including its dependencies within a containerized environment.

Now, let's explore the steps to build and deploy the microservice as Docker images.

Containerization With Docker

The classic developer joke "But it works on my computer..." humorously illustrates the challenge of deploying and running applications across different environments and platforms.

The main hurdle involves configuring dependencies and ensuring compatibility across various software versions, operating systems, and hardware setups.

Docker, an open-source development platform, provides containerization technology for building and packaging applications along with their dependencies into portable images.

These images can then be executed as standalone components within isolated container environments, complete with required computing resources, regardless of the underlying infrastructure.

With Docker (or any other containerization technology), you can encapsulate each microservice within its own container, providing a high level of isolation.

Each container runs as an independent unit, with its own dependencies and runtime environment.

Moreover, you can easily scale the microservices. You can scale individual containers horizontally by spinning up multiple instances of a microservice to handle increased loads.
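For example, once the Docker Compose setup later in this tutorial is in place, you could run multiple replicas of the API service with a single command (note that this assumes the fixed host port mapping for the service is removed or replaced with a port range, since multiple containers cannot all bind to the same host port):

docker compose up --scale server=3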

Now to get started with Docker, download and install Docker Desktop on your local machine.

Dockerizing the GraphQL Microservice

To containerize your GraphQL API using Docker, you'll need to create a Dockerfile. This file contains a series of instructions that the Docker engine will follow to build a Docker image, including your application's source code and its dependencies.

In the root directory of your project, create a new Dockerfile file, and add the following contents:

FROM node:16.3.0-alpine3.13
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Let's explore the role of each command:

  • FROM: specifies the base image, in this case a lightweight Node.js Alpine image, that the Docker engine should use to build the application's image.

  • WORKDIR: sets the /app directory (or any other directory name you specify) as the working directory for the GraphQL API within the container.

  • COPY package*.json ./: copies package.json and package-lock.json from the build context into the working directory.

  • RUN npm install: instructs the Docker engine to install the packages (or dependencies) required by the application.

  • COPY . .: instructs the Docker engine to copy all remaining files and directories from the build context on the host machine into the /app directory within the container.

  • EXPOSE: documents which ports the containerized application listens on, so it can communicate with other services or external clients.

  • CMD: specifies the command that should be executed when the container starts.
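One common companion to the Dockerfile, optional here and an addition on my part, is a .dockerignore file in the project root, so that local artifacts are not copied into the image by COPY . . (the .env entry is safe because Docker Compose, introduced below, injects those variables at runtime via env_file):

node_modules
npm-debug.log
.git
.env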

That completes the first step of containerizing the GraphQL API. The next step is to configure the data source, that is, the PostgreSQL database.

Instead of installing and configuring it locally, we'll utilize an existing PostgreSQL database application image, which will be run on a separate Docker container.

This approach provides several benefits, including simplified dependency management and ensuring consistent setups across various development environments.

To efficiently manage both the GraphQL API and the PostgreSQL database containers, we'll make use of Docker Compose.

What Is Docker Compose?

Docker Compose is a tool that simplifies the management of multiple Docker containers. It allows you to define and manage a multi-container application in a manifest YAML file. Within this file, you can specify the services, networks, volumes, and configurations required for your application, and then launch and manage all the containers with a single command.

To manage the GraphQL API and the PostgreSQL database containers, create a new docker-compose.yml file in the root directory of your project, and add the following content:

version: '3.9'

services:
  server:
    build: .
    ports:
      - '3000:3000'
    depends_on:
      - db
    env_file:
      - .env

  db:
    image: postgres
    restart: always
    ports:
      - '5432:5432'
    volumes:
      - data:/var/lib/postgresql/data
    env_file:
      - .env

volumes:
  data:

This Docker Compose configuration manages two services: server, the GraphQL API container, and db, the PostgreSQL database container.

The server service will build its image using the provided Dockerfile, while the db service will use the official PostgreSQL image.

One important aspect of this configuration is the dependency relationship between the services.

Specifically, the server service depends on the db service. Note, however, that depends_on only controls start-up order: Compose starts the database container first, but it does not wait for PostgreSQL to be ready to accept connections. If the API connects before the database is ready, the first attempt can still fail, which is why retry logic (sketched below) or a Compose healthcheck is a common refinement.
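A minimal retry sketch, assuming you replace the syncDatabase function in utils/db.js with something like this (the retry count and delay are arbitrary choices):

// Retry the initial connection so the API tolerates the database
// container becoming ready slightly later than the server container
async function connectWithRetry(retries = 5, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await db.authenticate();
      await db.sync({ alter: true });
      console.log('Database synchronized successfully');
      return;
    } catch (error) {
      console.error(`Attempt ${attempt} failed; retrying in ${delayMs}ms...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Could not connect to the database');
}
connectWithRetry();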

To build the images and start the containers, run the following command:

docker compose up

Finally, you can test the functionality of the user API service. To do this, open the Apollo Server sandbox in your browser by navigating to the URL printed in the console (http://localhost:3000 with the configuration above).

Once you're in the sandbox, you can send requests and observe the responses. For example, you can add details of a new user by utilizing the createUser mutation.
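For example, using the schema defined earlier, a createUser request would look like this (the id in the response is generated by the database):

mutation {
  createUser(firstName: "Ada", lastName: "Lovelace") {
    id
    firstName
    lastName
  }
}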


By following these steps, you should be able to test the functionality of the user API service successfully.

Deploying Microservices

After building your application images, you can push them to Docker Hub, which serves as a centralized repository similar to GitHub, but specifically designed for Docker images.

Docker Hub provides a secure storage solution for your images, ensuring they are readily available for deployment across different environments and platforms. And similar to GitHub, Docker Hub seamlessly integrates with various deployment platforms, including popular cloud services like AWS.

This integration simplifies the deployment process, allowing you to effortlessly deploy your Dockerized applications to production environments.

To push your Docker images to Docker Hub, follow these steps.

  1. Go to Docker Hub, sign up, and log in to your account's overview page.

  2. Click on the Create a repository button. Provide a name for your repository and choose its visibility (Public or Private). Then, click on Create.

  3. Log in to your Docker account by running the following command:

docker login

Provide your Docker username and password when prompted.

  4. Tag the Docker image to match the format <your-docker-username>/<repo-name> by running the command below:

docker tag <image-name> <your-docker-username>/<repo-name>

  5. Finally, push the Docker image to Docker Hub:

docker push <your-docker-username>/<repo-name>

And that's it! You have successfully pushed your images to Docker Hub.

Container Orchestration Platforms

Container orchestration platforms, such as Kubernetes, streamline the management of containerized applications.

They provide tools to automate the deployment, scaling, and monitoring of containers. This simplifies the complexities of managing large-scale microservice applications.


Some of their key features include:

  • Automated deployment: Orchestration platforms automate the deployment of containers, eliminating the need for manual intervention and ensuring consistent and reliable deployments.
  • Dynamic scaling: They dynamically adjust the number of containers based on demand, optimizing resource utilization and ensuring application performance.
  • Comprehensive monitoring: These platforms provide real-time monitoring of container health, performance, and resource consumption, allowing administrators to identify and resolve issues proactively.

To learn more, you can start by exploring the official Kubernetes documentation.

Conclusion

Throughout this article, we have covered the fundamentals of microservices architecture, GraphQL API design, and a comprehensive guide on building a GraphQL microservice with Node.js and Docker.

Now, what are the next steps? You can build a client with your preferred frontend framework to consume the API as a separate service.

Additionally, you can explore advanced microservices topics such as API security and authentication, including Microservices Authentication and Authorization Using API Gateway.
