Abeinemukama Vicent

How to Build & Deploy Scalable Microservices with NodeJS, TypeScript and Docker || A Comprehensive Guide

In the ever-evolving landscape of software development, the paradigm of microservices has emerged as a game-changer, offering unparalleled scalability, flexibility, and resilience. As organizations strive to build and deploy applications that can seamlessly adapt to the dynamic demands of the digital era, the combination of Node.js, TypeScript, Docker, and other cutting-edge technologies has become a beacon of innovation. In this technological odyssey, we embark on a journey to unravel the intricacies of microservices architecture, exploring the potent synergy of Node.js's asynchronous prowess, the type safety of TypeScript, and the containerization magic of Docker. This article delves deep into the art of crafting microservices that not only elevate the performance and efficiency of applications but also empower developers to navigate the complexities of modern software development with finesse. We will navigate through the realms of next-level architecture, where Node.js, TypeScript, and Docker converge to redefine the way we conceptualize and construct scalable and resilient software systems by building and deploying 4 microservices for an ecommerce application(Auth, Products, Orders and Notifications) using various technologies in the NodeJS ecosystem.

What Exactly Are Microservices?

Microservices is an architectural style that structures an application as a collection of small, independent services, each focused on a specific business capability. These services can be developed, deployed, and scaled independently, allowing for greater flexibility and agility in software development.
Big companies like Netflix, Amazon, eBay, and Spotify, among others, have embraced microservice architecture, which enables rapid scaling, seamless updates, and the easy introduction of new features.
By comparison, a monolithic application is built as a single unified unit using a single code base and framework.
The choice between monolithic and microservices approaches represents a fundamental decision that significantly influences the development, scalability, and maintainability of a software application. The monolithic architecture, characterized by a single, tightly integrated codebase, contrasts with the microservices architecture, where applications are decomposed into small, independent services. Each approach brings its own set of advantages and challenges, impacting factors such as scalability, development speed, and team autonomy. In this comparison, we delve into the key characteristics, pros, and cons of both monolithic and microservices architectures to help illuminate the considerations guiding the architectural decisions made by software development teams.

Microservices Vs Monolithic Architecture (Key Technicalities)

Scalability

Microservices:

Microservices architecture excels in scalability by offering a granular approach to resource allocation. Each microservice operates independently, allowing teams to scale specific services based on demand without affecting the entire application. This fine-grained scalability enhances efficiency and cost-effectiveness, as resources can be allocated precisely where needed. Whether it's handling increased user traffic, improving response times, or managing specific functionalities, the modular nature of microservices provides unparalleled flexibility for scaling individual components.
However, it's essential to note that this scalability comes with the responsibility of managing the orchestration and communication between microservices, which requires robust infrastructure and careful consideration of dependencies to ensure optimal performance.

Monolithic:

In contrast, monolithic architectures present a more straightforward but potentially less efficient approach to scalability. Scaling a monolithic application involves replicating the entire application, even if only a specific module or feature requires additional resources. This uniform scaling approach can result in over-provisioning, where resources are allocated to the entire application, even if certain components do not experience increased demand.
While scaling a monolith might be simpler in terms of deployment, it lacks the precision and efficiency offered by microservices. The challenge lies in predicting which parts of the application will require additional resources, potentially leading to underutilization or overutilization of resources based on the scaling needs of different modules.

Complexity:

Microservices:

Microservices introduce complexity through their decentralized and distributed nature. The system is broken down into independent services, each requiring careful coordination and communication. Service discovery, data consistency, and network management become critical aspects. While offering flexibility, autonomy, and scalability, microservices demand expertise in distributed systems. Development and deployment complexities arise, necessitating robust infrastructure and monitoring tools to ensure seamless operation across multiple services.

Monolithic:

Monolithic architectures are inherently less complex as they consist of a single, cohesive codebase. Development and deployment are simplified without the challenges of inter-service communication. However, as the application scales, maintaining a clear code structure becomes vital to prevent complexity. Updates may require deploying the entire application, potentially causing downtime during the process. While monoliths are simpler in certain aspects, they face challenges in maintaining clarity and efficiency as they grow in size and complexity.

Team Competency:

Microservices:

Microservices architectures demand a higher level of team competency due to their decentralized and independent nature. Each microservice may be developed using different technologies, requiring expertise in various programming languages and frameworks. Teams must possess strong communication skills to coordinate effectively across services. The autonomy granted to teams working on individual microservices necessitates a deep understanding of the overall system architecture to ensure seamless integration and collaboration. Additionally, the need for continuous integration and continuous deployment (CI/CD) practices becomes crucial to manage the frequent updates and releases associated with microservices.

Monolithic:

Monolithic architectures generally require less diverse skill sets within a team. Since the entire application is built using a single technology stack, the team can specialize in a unified set of skills. Communication and coordination are more straightforward within a shared codebase. However, as the application grows, maintaining code coherence and preventing dependencies from becoming bottlenecks require careful planning and competency in software design principles.

App Size:

Microservices:

Microservices architectures are characterized by smaller, independent services, leading to a more modular and lightweight approach. Each microservice addresses a specific business capability, resulting in smaller codebases. This modularity allows for easier maintenance, updates, and scalability. However, the overall infrastructure required to manage the distributed nature of microservices might introduce additional overhead.

Monolithic:

Monolithic architectures encompass the entire application within a single codebase, resulting in a larger and more comprehensive application size. While this simplicity can be advantageous for smaller projects or during initial development, the size can become a challenge as the application scales. Deploying updates or changes often involves deploying the entire monolith, potentially causing downtime and impacting efficiency.

Infrastructure:

Microservices:

Microservices architectures demand a more sophisticated and robust infrastructure compared to monolithic counterparts. The distributed nature of microservices requires effective solutions for service discovery, load balancing, and inter-service communication. Containerization technologies, such as Docker, and orchestration tools, like Kubernetes, are often employed to streamline deployment and management. Microservices benefit from cloud-native approaches, allowing dynamic scaling and resource allocation. While providing scalability and flexibility, the intricate infrastructure setup can introduce operational complexities and require a skilled DevOps team.

Monolithic:

Monolithic architectures are more straightforward in terms of infrastructure requirements. A single codebase simplifies deployment and management processes, reducing the need for intricate infrastructure configurations. However, as the application scales, ensuring efficient resource utilization becomes crucial. Monoliths may still benefit from cloud services but lack the flexibility and autonomy in resource allocation seen in microservices. The infrastructure for a monolith is generally more centralized, and scaling involves replicating the entire application.

Development & Deployment:

Microservices:

Microservices revolutionize development by allowing teams to work independently on small, specialized services. This parallel development approach aligns with DevOps principles, emphasizing continuous integration and continuous deployment (CI/CD). Containerization tools like Docker ensure consistency across environments, simplifying development and testing workflows. The distributed nature of microservices requires advanced orchestration, often managed by tools such as Kubernetes. DevOps practices play a critical role in automating testing, deployment, and scaling of individual services, ensuring agility and responsiveness to changes.

Monolithic:

In contrast, monolithic architectures follow a more conventional development and deployment model. The entire application is treated as a single unit, simplifying initial development but potentially introducing challenges as the application grows. Updates or changes involve deploying the entire monolith, which may lead to downtime. While DevOps practices are applicable, the focus is more on maintaining stability across the entire application. Continuous integration and deployment practices streamline workflows, but the impact on development agility is not as pronounced as in microservices. Overall, the development and deployment process in monolithic architectures tends to be more straightforward.

Communication Between Microservices

In microservices architecture, communication between services is a critical aspect. There are various ways to facilitate communication between microservices in Node.js.
In this article we will discuss the 7 most common ones and use the first 2 in a real-world project (an ecommerce application) that we will build.

  • HTTP/RESTful APIs:

    This is one of the most common and widely used methods of communication between microservices. Each microservice exposes a set of HTTP endpoints (RESTful APIs) that other services can call to request or send data.

    Pros:

    Simple, widely adopted, and works well in a stateless environment.

    Cons:

    Synchronous and can introduce latency.

  • Message Brokers:

    Here, microservices communicate by sending messages to a central message broker (e.g., RabbitMQ, Apache Kafka). Services can subscribe to specific topics or queues to receive messages and react accordingly.

Mode of Operation:
  • Message brokers are designed for decoupling components in a distributed system by allowing them to communicate through messages.
  • Producers publish messages to a queue or topic, and consumers subscribe to receive these messages.
  • Brokers manage the routing, delivery, and storage of messages, ensuring that messages are reliably processed even if consumers are temporarily unavailable.

RabbitMQ comes with administrative tools to manage user permissions and broker security, and is well suited for low-latency message delivery and complex routing.
In comparison, Apache Kafka provides secure event streams with Transport Layer Security (TLS) and is best suited for big data use cases requiring high throughput.

Pros:

Asynchronous, decouples services, supports event-driven architectures.

Cons:

Complexity, potential for message loss (depending on the broker).

  • gRPC (Remote Procedure Call):

    gRPC is a high-performance, open-source RPC (Remote Procedure Call) framework initially developed by Google. It uses Protocol Buffers for serialization and supports bidirectional streaming.

    Pros:

    Efficient, supports multiple programming languages, bi-directional communication.

    Cons:

    More complex setup compared to REST, may not be as widely adopted in all scenarios.

  • GraphQL:

    A query language for APIs that allows clients to request only the data they need. It provides a more flexible and efficient alternative to RESTful APIs.
    GraphQL provides a single endpoint for multiple data sources, making it efficient for clients to retrieve only the necessary information. For example, if a Node.js microservice uses GraphQL, a client can send a query requesting specific data, and the GraphQL service will fetch the required information from the underlying data sources and respond accordingly.

    Pros:

    Reduces over-fetching of data, allows clients to define the shape of the response.

    Cons:

    Complexity in setting up, may not be suitable for all use cases.

  • WebSocket:

    Websockets provide full-duplex communication channels over a single, long-lived connection, allowing real-time communication between microservices.
    It is an advanced technology and makes it possible to open a two-way interactive communication session between the user's browser and a server. You can send messages to a server and receive event-driven responses without having to poll the server for a reply.
    WebSockets are suitable for scenarios where low-latency, bidirectional communication is crucial, such as chat applications, real-time updates, or collaborative editing tools.

    Pros:

    Real-time communication, low-latency updates.

    Cons:

    May not be suitable for all use cases, increased complexity.

  • Service Mesh (e.g., Istio):

    It is a dedicated infrastructure layer that manages service-to-service communication, providing features like load balancing, encryption, authentication, and monitoring.
    Istio deploys sidecar proxies alongside each microservice instance. These proxies handle communication, providing features like load balancing, service discovery, encryption, monitoring and authentication.

    Pros:

    Centralized control over communication, observability.

    Cons:

    Adds complexity to the infrastructure.

  • Distributed Databases:

    Services communicate indirectly through a shared database: instead of calling each other directly, they interact by reading and writing to the shared data store.

    Pros:

    Simple, easy to implement.

    Cons:

    Tight coupling, potential for data inconsistency, may not scale well.

The choice of communication method depends on various factors such as the nature of the application, scalability requirements, latency considerations, and the team's expertise. Often, a combination of these methods is used within a microservices architecture to meet different communication needs.
Also, depending on the team's expertise and other core influencing factors, you can decide to use only one method of communicating between microservices, say HTTP/RESTful APIs; that is totally fine regardless of the technologies/programming languages used to build the various services.
In this article, we will use a combination of HTTP/RESTful APIs and RabbitMQ (a message broker) to connect the 4 microservices of our ecommerce application.
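
To make the message-broker pattern concrete before we build it for real, here is a minimal sketch using the amqplib package (one common Node.js client for RabbitMQ). It assumes RabbitMQ is running locally on the default port, and the exchange name products_events is just an illustrative placeholder:

import amqp from "amqplib";

// Publisher: emit a "product created" event to a fanout exchange.
export const publishProductCreated = async (product: unknown) => {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertExchange("products_events", "fanout", { durable: false });
  channel.publish("products_events", "", Buffer.from(JSON.stringify(product)));
  await channel.close();
  await connection.close();
};

// Consumer: bind an anonymous queue to the same exchange and react to events.
export const consumeProductEvents = async () => {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertExchange("products_events", "fanout", { durable: false });
  const { queue } = await channel.assertQueue("", { exclusive: true });
  await channel.bindQueue(queue, "products_events", "");
  channel.consume(
    queue,
    (msg) => {
      if (msg) {
        console.log("Received product event:", JSON.parse(msg.content.toString()));
      }
    },
    { noAck: true }
  );
};

The real implementation later in the article may differ; this sketch is only meant to show the publish/subscribe shape of broker-based communication.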

Project Synopsis

This article presents a hands-on exploration of architecting a robust and scalable e-commerce application using microservices architecture.
The four meticulously crafted microservices include an Authentication service (using MongoDB and mongoose), a Product Management service (using PostgreSQL and Prisma), an Order Processing service (using MySQL and Sequelize), and a real-time Notifications service (using GraphQL, MongoDB and mongoose). The Authentication microservice ensures secure user management through MongoDB and Mongoose, while the Product Management service adopts PostgreSQL and Prisma for efficient data handling. The Order Processing microservice, powered by MySQL and Sequelize, orchestrates seamless order fulfillment. To enable real-time notifications, the Notifications microservice capitalizes on the power of GraphQL, exploiting its built-in mechanisms for efficient and scalable real-time interactions. Each microservice will be built using the Express framework, and represents a unique technical challenge and a testament to the flexibility of Node.js and TypeScript for developing distributed, scalable, and maintainable microservices. The use of TypeScript adds an extra layer of type safety, making the entire development process smoother and more reliable. This article provides a deep dive into the implementation details, communication strategies, and best practices, offering valuable insights for developers venturing into microservices architecture with Node.js and TypeScript.
Additionally, we will dockerise every microservice for a seamless deployment.

Prerequisites

Before we embark on the journey of building our microservices, it's essential to ensure that your development environment is properly configured. This section outlines the prerequisites that you need to have in place before diving into building the services.

Node.js and npm:

Have Node.js and npm (Node Package Manager) installed on your system. These are essential for developing and running our microservices in a NodeJS environment. You can download Node.js from the official website or use a version manager like nvm for better control over Node.js versions.

Text Editor or IDE:

Choose a text editor or integrated development environment (IDE) for writing your TypeScript/JavaScript code. Popular choices include Visual Studio Code, Atom, or any editor of your preference.

MongoDB Atlas Account:

We will be using MongoDB as the database for both the Auth microservice and the Notifications microservice. Sign up for a MongoDB Atlas account here in case you do not have one, do not have the desktop application (MongoDB Compass) installed, and would like to use MongoDB Atlas. This cloud-based database service offers a free tier and simplifies the process of managing MongoDB databases.

GitHub Account (Optional):

For version control and implementing CI/CD pipelines in later sections, we will be using GitHub, so ensure you have an account set up. This is optional but highly recommended for a streamlined development workflow.
We will also store our overall source code in a single GitHub organisation, with each microservice in its own public GitHub repository.

Basic Knowledge of TypeScript, Express, Docker and NodeJS ORMs:

Familiarize yourself with TypeScript and Express.js, as they form the foundation of our microservices. If you're new to these technologies, consider exploring this article and this one to get comfortable with the basics.
In addition, you need some basic knowledge of Docker and of ORMs in the Node.js ecosystem, especially Prisma and Sequelize.
You can check out this article to level up your Docker skills, where I explained everything from environment setup to deployment with a CI/CD pipeline using Node.js and TypeScript.

Communicating Our 4 Micro-services

In this article, we will use a simple yet effective communication architecture for our 4 microservices.

The Authentication microservice, responsible for user management and authentication, will expose a set of HTTP endpoints (RESTful APIs). These endpoints will include user registration and login. We could have handled user profile management here or in an independent microservice, but it's not a big deal, as there is nothing much we need from it in this guide. Other microservices, such as Products and Orders, can make HTTP requests to these endpoints to authenticate users and obtain necessary user information.

The Product Management microservice will handle for us the CRUD operations for products. It will expose HTTP/RESTful APIs for retrieving product information, adding new products, updating product details, and deleting products. Additionally, when changes occur in the product catalog, such as the addition or modification of products, the microservice will publish events to a RabbitMQ exchange. Other microservices, like the Orders microservice, will subscribe to this exchange to receive real-time updates about changes in the product catalog.

Our Order Processing microservice will focus on managing and fulfilling customer orders. It will subscribe to the RabbitMQ exchange where product-related events are published. This will allow the Order Processing microservice to receive real-time updates about changes in the product catalog and adjust order processing accordingly.

Lastly, we will expose a GraphQL API for the notifications functionality.
When events that require a user to be notified are published to RabbitMQ, the notifications microservice will subscribe to them and deliver a real-time notification to the destined user.

Sharing Code/Functions Between Microservices

Since our application involves user authentication and authorization, we will have methods/functions that we need across all 4 microservices, for example JWT middlewares for user authorisation, among others, and we need a strategy to reuse them throughout.
Various options exist to help us achieve this, depending of course on the tech stack/programming languages used to build the various microservices, but since we are using Node.js to build all 4 services, we can write a simple shared library, publish it to npm, and then install it in any microservice that needs to consume the functions.

Now that we have a flow for our microservices, let's get our hands dirty and start building, commencing with the shared library.

Building a Shared Library for All the Microservices and Why we Need it.

Before we start on our Auth Service, from what we have architected so far, we need JWT middlewares across almost all our microservices to help us verify a user trying to access any resource in our application.
Consider a user trying to hit the endpoint products/updatebyid/:productId exposed to the client from the products microservice.
This endpoint should only be hit/accessed by a logged-in/authenticated user who, in addition, has admin privileges (isAdmin set to true). However, in our former monolithic architecture, the JWT middleware for verifying a user token and checking user roles has always been tied to the authentication code, alongside the one that generates the JWT upon successful login. This leaves a gap: other microservices with protected endpoints also need to verify the JWT sent by the client (in the headers), so we must craft a way of verifying the token received with a request to such a resource/endpoint, giving birth to the need for a shared library.

Steps:

Step 1: Project Setup

Create a new folder in your desired location and open it with your desired code editor; in my case, I called it: nodejs_ms_shared_library.
Initialise a new Node.js project with the following command:

npm init -y

Step 2: Install Dependencies

Install the following dependencies with the following command:

npm install express jsonwebtoken

and development dependencies with the following command:

npm install -D typescript @types/express @types/jsonwebtoken

Step 3: Configure Typescript

Inside the home directory, create a new file: tsconfig.json and place the following code:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true,
    "skipLibCheck": true,
    "declaration": true,
    "declarationMap": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "./dist",
    "rootDir": "./src",
    "lib": ["ES2020", "DOM", "DOM.Iterable", "ScriptHost"]
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules", "dist"]
}

Step 4: Write JWT Middleware Functions

Create another file: src/jwtUtils.ts and place the following code:

// shared-library/src/jwtUtils.ts
import { Request, Response, NextFunction } from "express";
import jwt, { VerifyErrors, Secret } from "jsonwebtoken";

// Define the JWT payload type
export type JWTPayload = {
  id: string;
  username: string;
  email: string;
  isAdmin: boolean;
};

// Custom Request type with 'user' property
export interface CustomRequest extends Request {
  user: JWTPayload;
  params: {
    id: string;
  };
  headers: {
    token?: string;
  };
}

// Export types used in the library
export type { Secret, VerifyErrors, NextFunction };

// Generate JWT token
export const generateToken = (
  payload: JWTPayload,
  secret: Secret,
  expiresIn: string
): string => {
  const token = jwt.sign(payload, secret, { expiresIn });
  return token;
};

// Verify JWT token middleware
export const verifyToken = (
  req: CustomRequest,
  res: Response,
  next: NextFunction,
  secret: Secret
): void => {
  const authHeader = req.headers.token;

  if (authHeader) {
    const token = Array.isArray(authHeader)
      ? authHeader[0].split(" ")[1]
      : authHeader.split(" ")[1];

    jwt.verify(token, secret, (err: VerifyErrors | null, user: any) => {
      if (err) {
        // Use the res object with a status function
        res.status(403).json("Token is not valid!");
      } else {
        req.user = user;
        next();
      }
    });
  } else {
    // Use the res object with a status function
    res.status(401).json("You are not authenticated!");
  }
};

// Authorize account owner middleware
export const verifyTokenAndAuthorization = (
  req: CustomRequest,
  res: Response,
  next: NextFunction,
  secret: Secret
): void => {
  verifyToken(
    req,
    res,
    () => {
      if (req.user.id === req.params.id || req.user.isAdmin) {
        next();
      } else {
        res.status(403).json("You are not allowed to do that!");
      }
    },
    secret
  );
};

// Authorize admin middleware
export const verifyTokenAndAdmin = (
  req: CustomRequest,
  res: Response,
  next: NextFunction,
  secret: Secret
): void => {
  verifyToken(
    req,
    res,
    () => {
      if (req.user.isAdmin) {
        next();
      } else {
        return res.status(403).json("You are not allowed to do that!");
      }
    },
    secret
  );
};

We export four functions: one for generating a JWT, one for verifying the token, one for authorizing the user who owns a resource, and one for authorizing an admin.
These functions act as the backbone of our authentication and authorization system.

Let's now create an index file and export our functions. Create a new file: src/index.ts and place the following code:

export {
  generateToken,
  verifyTokenAndAdmin,
  verifyTokenAndAuthorization,
  CustomRequest,
  JWTPayload,
  VerifyErrors,
  verifyToken,
  Secret,
} from "./jwtUtils";

Step 5: Transpile TypeScript

We also need a way of converting our TypeScript to a format that our Node.js runtime can run.
Before we run the command for transpiling our TypeScript to JavaScript, we need to adjust our package.json and add the missing scripts and other information to help us in publishing our library.
Update package.json to look like below:

{
  "name": "nodejs_ms_shared_library",
  "version": "1.0.5",
  "description": "",
  "main": "dist/index.js",
  "module": "module",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc"
  },
  "engines": {
    "node": ">=16.0.0"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "typescript": "^5.3.3"
  },
  "dependencies": {
    "express": "^4.18.2",
    "jsonwebtoken": "^9.0.2"
  }
}

We added a build script for transpiling TypeScript to JavaScript that can be run by our Node.js runtime.
Now run the following command in the terminal:

npm run build

If all is good, you should have the following folder structure, including a dist folder with the compiled JavaScript and declaration files:
[Screenshot: shared library folder structure with the generated dist folder]

Step 6: Publish to NPM

Before we publish our shared library to npm, we need an npm account. Head over to the npm website and log in or create a new account.
After that, get back to the terminal and run the following command:

npm login

Follow the on-screen instructions to complete the login process from the terminal.

Before publishing, we need to update the version number in our package.json. You can manually change the version number or use the npm version command:

npm version patch  # Or "minor" or "major"

Considering 1.0.0 as our initial version number, patch increments it to 1.0.1, 1.0.2, etc.; minor, on the other hand, would increment it to 1.1.0, 1.2.0, etc., and major would increment it to 2.0.0, 3.0.0, etc.
In our case we will be updating our library version number with patch, but it all depends on the changes you have made to your library: patch is suited for small bug fixes, minor for backward-compatible feature additions, and major for major, backward-incompatible changes.

Now run the following command to publish:

npm publish --access public

Note that the --access public flag is vital: if it is not included, npm will assume your library is a private one, and it is not free to publish a library to a private npm registry, so you would need to first check out their pricing.
Check your npm account and you should have your library published:

[Screenshot: the published package on npm]

Our library is ready for use. We could have added unit tests and a CI/CD pipeline to automate deployment to the npm registry to make it better, but that's outside the scope of the article and that's not why we are here 😄. However, a future article may concentrate on this subject, as it is very helpful in many scenarios.

Before we proceed, let's first discuss other approaches. If we were using various programming languages, some outside the Node.js ecosystem, say Python (Django, Flask, etc.), PHP (Laravel, etc.), or Java (Spring Boot, etc.), to build the various microservices, our approach of a shared Node.js library would not work for us, and various other methods could come to the rescue.

1. Centralized Authentication Service:

You can create a dedicated authentication service that handles JWT generation and verification and then expose endpoints for microservices to validate tokens regardless of their technology stack.

2. API Gateway with Authentication:

You can also implement an API gateway that sits in front of microservices and then handle authentication and authorization at the gateway level, forwarding only validated requests to downstream services.

3. Language-Specific JWT Libraries:

The most common way of handling this situation while using various technologies/programming languages to build microservices is using language-specific JWT libraries in each microservice to verify tokens independently.
This requires keeping the secret keys and algorithms consistent across services.

4. Shared Verification Logic as a Service:

Lastly, you can isolate JWT verification logic into a separate, technology-agnostic microservice and expose endpoints for other microservices to call for verification, regardless of their language.

All good, we can now start writing our microservices. We will install our shared library with the following command: npm i nodejs_ms_shared_library, just like any other Node.js library, in any of the microservices that need our shared functions.

Building the Auth Microservice

Our authentication microservice is fairly straightforward, just as we plotted, and is concerned with user authentication, that is, registration and login. We will be using MongoDB with the Mongoose ODM, together with Jest and Supertest for unit testing, and we will dockerise it at the end:

Steps

Step 1: Initialise Project

Create a new folder in your desired location and name it auth_service.
With the folder open in your favourite code editor, open terminal and run the following command to initialise a NodeJS project:

npm init -y

After initialising the project, install the following dependencies with the following command:

npm install express cors helmet nodejs_ms_shared_library dotenv bcrypt jsonwebtoken mongoose morgan

and development dependencies with the following command:

npm install -D typescript ts-node nodemon jest ts-jest @types/jest supertest @types/supertest @types/cors @types/express @types/bcrypt @types/morgan 
Folder Structure

Following will be our overall folder structure for our auth microservice:

[Screenshot: auth microservice folder structure]

Step 2: Configure TypeScript and Nodemon

Create a file named tsconfig.json in the root directory and add the following configuration:

{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2020",
    "baseUrl": "src",
    "noImplicitAny": true,
    "sourceMap": true,
    "esModuleInterop": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Create another file named nodemon.json in the root directory and place the following code:

{
  "watch": ["src"],
  "ext": ".ts,.js",
  "exec": "ts-node ./src/index"
}

Inside package.json, add the following scripts in the scripts section:

"scripts": {
    "build": "tsc",
    "start": "npm run build && node dist/src/index.js",
    "dev": "nodemon",
    "test": "jest --watchAll  --detectOpenHandles"
  },

The build command helps us transpile TypeScript to JavaScript for production, where the npm start command is used to start or restart our server. The dist folder is the destination for the resulting JavaScript code, as specified in tsconfig.json. In development, nodemon restarts our server automatically without a separate transpilation step.

Also, add the line:

"module": "module",

to your package.json to specify that we are using ES Modules rather than Node.js's default CommonJS pattern.

Step 3: Write Database Model

In our auth service, we will have only one model, the User model, which defines our User schema. This model will handle all the users of our ecommerce application, that is, admins and customers.
Create a new file at src/models/User.ts and place the following code:

import mongoose, { Document, Schema } from "mongoose";

export interface IUser extends Document {
  email: string;
  username: string;
  password: string;
  isAdmin: boolean;
  profileImage: string;
}

const userSchema = new Schema<IUser>({
  email: { type: String, required: true, unique: true },
  username: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  isAdmin: { type: Boolean, default: false },
  profileImage: { type: String },
});

const User = mongoose.model<IUser>("User", userSchema);

export default User;

Step 4: Writing User Service and Controller

Our logic for creating and logging in a user is also straightforward, but we will break it down into a service and a controller for easier understanding.
Create a new file at src/services/userService.ts and place the following code:

// Import necessary modules
import { generateToken } from "nodejs_ms_shared_library";
import User, { IUser } from "../models/User";
import { comparePassword, hashPassword } from "../utils/passwordUtils";

// Create a new user
export const createUser = async (userInput: IUser): Promise<IUser> => {
  try {
    // Hash the user's password before storing it
    const hashedPassword = await hashPassword(userInput.password);

    // Create the user with the hashed password
    const newUser = await User.create({
      ...userInput,
      password: hashedPassword,
    });

    return newUser;
  } catch (error) {
    throw new Error(`Error creating user: ${error.message}`);
  }
};

// Login user
export const loginUser = async (
  email: string,
  password: string
): Promise<{ user: Omit<IUser, "password">; token: string }> => {
  try {
    // Find user by email
    const user = await User.findOne({ email });
    if (!user) {
      throw new Error("User not found");
    }

    // Compare the provided password with the stored hashed password
    const isPasswordValid = await comparePassword(password, user.password);
    if (!isPasswordValid) {
      throw new Error("Invalid password");
    }

    // Generate JWT token
    const token = generateToken(
      {
        id: user._id,
        username: user.username,
        email: user.email,
        isAdmin: user.isAdmin,
      },
      process.env.JWT_SEC,
      process.env.JWT_EXPIRY_PERIOD
    );

    // Destructure password from the data returned
    const { password: _password, ...userData } = user.toObject();

    return { user: userData as Omit<IUser, "password">, token };
  } catch (error) {
    throw new Error(`Error logging in: ${error.message}`);
  }
};

Then create src/controllers/userController.ts and place the following code:

import { Request, Response } from "express";
import * as UserService from "../services/userService";

// Create a new user
export const createUser = async (
  req: Request,
  res: Response
): Promise<void> => {
  try {
    const newUser = await UserService.createUser(req.body);
    res.status(201).json(newUser);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
};

// Login user
export const loginUser = async (req: Request, res: Response): Promise<void> => {
  const { email, password } = req.body;
  try {
    const { user, token } = await UserService.loginUser(email, password);
    res.status(200).json({ user, token });
  } catch (error) {
    console.log(error);
    res.status(500).json({ error: error.message });
  }
};

Our service uses separately defined functions for hashing a password and comparing a password with its hash, using the bcrypt library for security purposes, defined at src/utils/passwordUtils.ts as shown below:

// Import necessary modules
import bcrypt from "bcrypt";

// Hash a password
export const hashPassword = async (password: string): Promise<string> => {
  const saltRounds = 10;
  const hashedPassword = await bcrypt.hash(password, saltRounds);
  return hashedPassword;
};

// Compare a password with its hash
export const comparePassword = async (
  password: string,
  hashedPassword: string
): Promise<boolean> => {
  return bcrypt.compare(password, hashedPassword);
};


Additionally, our service uses the generateToken() function from our custom shared library, nodejs_ms_shared_library, passing all three parameters, just as we defined while building the library.
Create a file in the root directory, name it .env, and place the following code:

JWT_SEC=YourJWTSecret
JWT_EXPIRY_PERIOD=YourJWTExpiryTime
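
For example (hypothetical values; the jsonwebtoken library accepts expiry strings such as "1h" or "3d", or a number of seconds):

JWT_SEC=a-long-random-secret-string
JWT_EXPIRY_PERIOD=3d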

Step 5: Writing/Exposing Auth Routes

After writing our authentication logic, we now need to expose our routes for login and registration so that the client can use them.
Create a new file at src/routes/authRoute.ts and place the following code:

// src/auth.route.ts
import { Router } from "express";
import * as AuthController from "../controllers/userController";

const router = Router();
// Register new user
router.post("/register", AuthController.createUser);

// Login user
router.post("/login", AuthController.loginUser);

export default router;


Inside the root directory, create an index file at src/index.ts and place the following code:

import express from "express";
const app = express();
import cors from "cors";
import helmet from "helmet";
import dotenv from "dotenv";
import mongoose from "mongoose";
import authRoute from "./routes/authRoute";
import morgan from "morgan";

dotenv.config();
app.use(morgan("common"));

// USE HELMET AND CORS MIDDLEWARES
app.use(
  cors({
    origin: ["*"], // Comma separated list of your urls to access your api. * means allow everything
    credentials: true, // Allow cookies to be sent with requests
  })
);
app.use(helmet());

app.use(express.json());

app.get("/", async (req: express.Request, res: express.Response) => {
  try {
    res.send(
      "Welcome to unit testing guide for nodejs, typescript and express!"
    );
  } catch (err) {
    console.log(err);
  }
});

// DB CONNECTION

if (!process.env.MONGODB_URL) {
  throw new Error("MONGO_URI environment variable is not defined");
}

mongoose
  .connect(process.env.MONGODB_URL)
  .then(() => {
    console.log("MongoDB connected to the backend successfully");
  })
  .catch((err) => console.log(err));

app.get("/", async (req: express.Request, res: express.Response) => {
  try {
    res.send(
      "Welcome to unit testing guide for nodejs, typescript and express"
    );
  } catch (err) {
    console.log(err);
  }
});

// Serve other routes
app.use("/api/v1/auth/", authRoute);

// Start backend server
const PORT = process.env.PORT || 8900;

app.listen(PORT, () => {
  console.log(`Backend server is running at port ${PORT}`);
});

export default app;


Our index file includes all connections to other resources, like our MongoDB database via Mongoose, and our backend server will listen on port 8900 on our local machine.

Let's add our database connection URL from MongoDB Atlas to our .env file. Add the following line:

MONGODB_URL=YourDatabaseConnectionString
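
A MongoDB Atlas connection string typically looks something like the following (hypothetical cluster host, credentials and database name):

MONGODB_URL=mongodb+srv://your_user:your_password@cluster0.abcde.mongodb.net/auth_service?retryWrites=true&w=majority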

Step 6: Writing Unit Tests

In this service, we will write a single simple unit test, only to verify our unit testing environment and to be sure our tests would run had we written them for all our code.
Create a new file at src/__tests__/app.test.ts and place the following code:

// Unit test for testing initial route ("/")
describe("GET /", () => {
  it("Tests initial route '/'", async () => {
    expect(true).toBe(true);
  });
});
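
If you want the test to actually hit the route, here is a slightly more realistic sketch using Supertest (which we already installed). Note that importing ../index as written above also triggers app.listen() and the MongoDB connection, so in a real suite you would typically export the Express app separately from the code that starts the server:

// src/__tests__/app.test.ts (sketch)
import request from "supertest";
import app from "../index";

describe("GET /", () => {
  it("responds with a 200 and the welcome message", async () => {
    const res = await request(app).get("/");
    expect(res.status).toBe(200);
  });
});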

If you would like to write unit tests for the entire service, that's totally fine, and you can check out the following article for assistance:

Step 7: Dockerising the Auth Service

To dockerise our auth microservice, we need a Dockerfile to hold our instructions for building a Docker image.
If you're not familiar with Docker, check out this article for a step-by-step guide.
Create a new file in the root named Dockerfile and place the following code:

# Use an official Node.js runtime as a parent image
FROM node:latest as builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy all files from the current directory to the working directory
COPY . .

# Development stage
FROM builder as development
# Set NODE_ENV to development
ENV NODE_ENV=development

# Expose the port the app runs on
EXPOSE 8900

# Command to run the application(in development)
CMD ["npm", "run", "dev"]

# Production stage
FROM builder as production
# Set NODE_ENV to production
ENV NODE_ENV=production

# Run any production-specific build steps if needed here

# Run the production command
CMD ["npm", "start"]

and another file, still in the root: .dockerignore and place the following code:

/node_modules
npm-debug.log
.DS_Store
/*.env
.idea

Our Dockerfile contains instructions for building the Docker image in both production and development, according to the NODE_ENV environment variable.
Also, .dockerignore helps us exclude some files and directories when building the Docker image, just like .gitignore does when using Git.

All set, let's now build and run the Docker image with the following commands:

docker build -t auth_service:development --target development .

and

docker run -p 8900:8900 -v $(pwd):/usr/src/app -e PORT=8900 auth_service:development

respectively.
If you're using Mac or Linux and didn't add your user to the docker group when installing Docker (which lets you run Docker commands without sudo), do not forget to prefix these build and run commands with sudo.

If everything went well, you should see the following in the terminal:

[Screenshot: the dev server running inside the Docker container]

You can spin up or split another terminal instance and run the unit tests (inside the Docker container) with the following command:

sudo docker exec -it your_docker_container_id  npm test
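
Since the Dockerfile is multi-stage, you can build the production image the same way by pointing --target at the production stage, for example:

docker build -t auth_service:production --target production .

When running that image, remember to supply the environment variables (MONGODB_URL, JWT_SEC, JWT_EXPIRY_PERIOD, PORT) with -e flags or an --env-file, especially if your .dockerignore keeps .env files out of the image.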

Before we start on our products service, first create at least 2 users, one admin and one customer (without admin privileges), using any tool of your choice, say Postman, Insomnia, or any other means.

Building Products Microservice

Our products microservice is also straightforward, just like the auth service has been. As previously plotted, we will be using different technologies on each service, and here we are using PostgreSQL as the database and the Prisma ORM (Object Relational Mapper) for querying it.
ORMs are used to translate between the data representations used by databases and those used in object-oriented programming, and in this service we will be using one of the most common ones in the Node.js ecosystem, Prisma.
Prisma describes itself as a fully type-safe ORM in the TypeScript ecosystem: the generated Prisma Client ensures typed query results, even for partial queries and relations.

On the other hand, PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance.
In this product service, it will be our database.

Steps:

Our steps for this service are nearly the same as for the auth service, with some small differences. However, we will redo everything, since this is an entirely independent microservice that will even be hosted independently.

Step 1: Project Setup

Create a new folder in your desired location and name it products_service.
With the folder open in your favorite code editor, open terminal and run the following command to initialize a NodeJS project:

npm init -y

After initializing the project, install the following dependencies with the following command:

npm install express cors helmet nodejs_ms_shared_library dotenv bcrypt jsonwebtoken morgan

and development dependencies with the following command:

npm install -D typescript ts-node nodemon jest ts-jest @types/jest supertest @types/supertest @types/cors @types/express @types/bcrypt @types/morgan 
Folder Structure

Following will be our overall folder structure for our product microservice:

[Screenshot: products microservice folder structure]

Step 2: Configure TypeScript and Nodemon

Create a file named tsconfig.json in the root directory and add the following configuration:

{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2020",
    "baseUrl": "src",
    "noImplicitAny": true,
    "sourceMap": true,
    "esModuleInterop": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Create another file named nodemon.json in the root directory and place the following code:

{
  "watch": ["src"],
  "ext": ".ts,.js",
  "exec": "ts-node ./src/index"
}

Inside package.json, add the following scripts in the scripts section:

"scripts": {
    "build": "tsc",
    "start": "npm run build && node dist/src/index.js",
    "dev": "nodemon",
    "test": "jest --watchAll  --detectOpenHandles"
  },

Also, add the line:

"module": "module",

to your package.json to specify that we are using ES Modules rather than Node.js's default CommonJS pattern.

Step 3: Install PostgreSQL

If you do not have PostgreSQL installed, head over to the PostgreSQL download page and get the version for your operating system.
As always, if you're using Linux and are not a fan of graphical user interfaces, here are the commands to install everything without leaving your terminal:

Update Package Lists:
sudo apt update
Install the PostgreSQL server:
sudo apt install postgresql postgresql-contrib
Start and enable the PostgreSQL service to start on boot:
sudo systemctl start postgresql
sudo systemctl enable postgresql
Access the PostgreSQL shell

Access the PostgreSQL shell by switching to the postgres user and running the psql command:

sudo -u postgres psql

You should see the following in the terminal:

[Screenshot: the psql shell]
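
If you have not yet created a database and user for this service, you can do that from the terminal as well; the names below are hypothetical, so substitute your own (or run the equivalent CREATE statements from the psql shell above):

sudo -u postgres createuser --pwprompt products_user
sudo -u postgres createdb -O products_user product_service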

Step 4: Building the Database Schema/Model

Since we are using the Prisma ORM, it will do most of the heavy lifting in setting up database access and the schema.
If you don't have it installed, run the following command:

npm install -g prisma

You can also install it in the project as a local dependency, rather than globally, with the following command:

npm install prisma

After installing Prisma, we need to initialize the Prisma configuration by running the following command:

npx prisma init

You should have a schema.prisma file at prisma/schema.prisma. Update it to the following code:

// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
  output   = "../src/prisma/client"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Product {
  id        Int      @id @default(autoincrement())
  title     String
  desc      String
  img       String
  categories String[]
  size      String[]
  color     String[]
  price     Float
}


Our Product schema has 8 fields, as shown above.
The npx prisma init command updated our .env file by adding a DATABASE_URL variable holding the database connection URL, but we need to update it with our actual db name, username and password.

# This was inserted by `prisma init`:
# Environment variables declared in this file are automatically made available to Prisma.
# See the documentation for more detail: https://pris.ly/d/prisma-schema#accessing-environment-variables-from-the-schema

# Prisma supports the native connection string format for PostgreSQL, MySQL, SQLite, SQL Server, MongoDB and CockroachDB.
# See the documentation for all the connection string options: https://pris.ly/d/connection-strings

DATABASE_URL="postgresql://your_db_root_user:your_db_root_user_password@localhost:5432/your_db_name?schema=public"

Replace the password, username and db name with what you used when setting up PostgreSQL after installation.
In my case, my db name is product_service.

After initialization, we need to generate the Prisma client by running:

npx prisma generate

This command generates the Prisma client based on your schema and creates the necessary files for database access.

Now that we have Prisma installed and configured in our Node.js project, we can start using the Prisma client in our application code to interact with our database.
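
One more thing worth noting: the Product table must exist in the PostgreSQL database before the service can query it. If you have not created it by other means, Prisma can create it from the schema with a migration (or, for quick prototyping, with npx prisma db push):

npx prisma migrate dev --name init

This applies the schema to the database and also regenerates the Prisma client.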

Step 6: Writing Product Service and Controller.

Create a new file: src/services/productService.ts and place the following code:

import { PrismaClient } from "../prisma/client";

const prisma = new PrismaClient();

// Create new product
const createProduct = async (data: any) => {
  return prisma.product.create({
    data,
  });
};

// Get all products
const getAllProducts = async () => {
  return prisma.product.findMany();
};

// Get product by id
const getProductById = async (productId: number) => {
  return prisma.product.findUnique({
    where: {
      id: productId,
    },
  });
};

// Update product
const updateProduct = async (productId: number, data: any) => {
  return prisma.product.update({
    where: {
      id: productId,
    },
    data,
  });
};

// Delete product
const deleteProduct = async (productId: number) => {
  return prisma.product.delete({
    where: {
      id: productId,
    },
  });
};

export {
  createProduct,
  getAllProducts,
  getProductById,
  updateProduct,
  deleteProduct,
};


Then create src/controllers/productController.ts and place the following code:

// src/controllers/productController.ts
import { Request, Response } from "express";
import * as productService from "../services/productService";
import { CustomRequest } from "nodejs_ms_shared_library";

const createProduct = async (req: CustomRequest, res: Response) => {
  try {
    const product = await productService.createProduct(req.body);
    res.status(201).json({
      message: "Product created successfully!",
      user: req.user,
      product,
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

const getAllProducts = async (req: CustomRequest, res: Response) => {
  try {
    const products = await productService.getAllProducts();
    res.status(200).json({ products, user: req.user });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

const getProductById = async (req: CustomRequest, res: Response) => {
  const productId = parseInt(req.params.id, 10);
  try {
    const product = await productService.getProductById(productId);
    if (!product) {
      res.status(404).json({ error: "Product not found" });
    } else {
      res.status(200).json({ product, user: req.user });
    }
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

const updateProduct = async (req: CustomRequest, res: Response) => {
  const productId = parseInt(req.params.id, 10);
  try {
    const product = await productService.updateProduct(productId, req.body);
    res.status(200).json({ product, user: req.user });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

const deleteProduct = async (req: CustomRequest, res: Response) => {
  const productId = parseInt(req.params.id, 10);
  try {
    await productService.deleteProduct(productId);
    res.status(200).json({
      message: "Product deleted successfully!",
      user: req.user,
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

export {
  createProduct,
  getAllProducts,
  getProductById,
  updateProduct,
  deleteProduct,
};


Our CRUD operations on the product are straight to the point: creating, updating, deleting and retrieving products (or a product by id) with the help of the generated Prisma client.
We are sending the response from the backend server along with the user obtained from req.user, containing the details we previously stored in the JWT.
This is how we achieve communication between auth and products: when a request is received from the client, the token in the headers carries all the user details we stored there in the auth microservice when we called generateToken and passed a payload.
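
To make that concrete, here is a hypothetical sketch of a client call to the products service, assuming it runs locally on port 8000 (as configured in the index file below) and that token is the JWT returned by the auth service's login endpoint. Our verifyToken middleware reads the token header and takes the part after the space, so the "Bearer <token>" format is expected:

// Hypothetical client-side call; the JWT comes from POST /api/v1/auth/login.
const response = await fetch("http://localhost:8000/api/v1/products/", {
  headers: { token: `Bearer ${token}` },
});
const body = await response.json();
console.log(body.products, body.user); // user is decoded from the JWT payload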

Step 7: Writing/Exposing Product Routes

Before we write our product routes, we already know they will be protected, that is, only an authenticated user, in some cases with admin privileges, should be able to access them. We have, however, already written that logic in our shared library and installed it.
Let's create another file, src/middlewares/jwtMiddlewares.ts, and get the middlewares ready for use in our routes:

import { NextFunction, Response } from "express";
import {
  CustomRequest,
  verifyToken,
  verifyTokenAndAdmin,
  verifyTokenAndAuthorization,
} from "nodejs_ms_shared_library";

// Wrap the middleware functions with their parameters

// Verify token from the client
export const verifyTokenMiddleware = (
  req: CustomRequest,
  res: Response,
  next: NextFunction
) => {
  verifyToken(req, res, next, process.env.JWT_SEC);
};

// Verify token and authorise account owner
export const verifyTokenAndAuthoriationMiddleware = (
  req: CustomRequest,
  res: Response,
  next: NextFunction
) => {
  verifyTokenAndAuthorization(req, res, next, process.env.JWT_SEC);
};

// Verify token and authorise admin
export const verifyTokenAndAdminMiddleware = (
  req: CustomRequest,
  res: Response,
  next: NextFunction
) => {
  verifyTokenAndAdmin(req, res, next, process.env.JWT_SEC);
};

We do this because the functions in our shared library take several parameters, which we need to supply when invoking them.
Add the same JWT_SEC and JWT_EXPIRY_PERIOD values you used in the auth microservice to this service's .env file.

Create a new file: src/routes/productRoutes.ts and place the following code:

import express from "express";
import * as productController from "../controllers/productController";
import {
  verifyTokenAndAdminMiddleware,
  verifyTokenMiddleware,
} from "../middlewares/jwtMiddlewares";

const router = express.Router();

// Create new product
router.post(
  "/",
  verifyTokenAndAdminMiddleware,
  productController.createProduct
);

// Get all products
router.get("/", verifyTokenMiddleware, productController.getAllProducts);

// Get product by id
router.get("/:id", verifyTokenMiddleware, productController.getProductById);

// Update product by id
router.put(
  "/:id",
  verifyTokenAndAdminMiddleware,
  productController.updateProduct
);

// Delete product by id
router.delete(
  "/:id",
  verifyTokenAndAdminMiddleware,
  productController.deleteProduct
);

export default router;


Finally, create an index file for the project at src/index.ts and place the following code:

import express from "express";
const app = express();
import cors from "cors";
import helmet from "helmet";
import dotenv from "dotenv";
import productRoute from "./routes/productRoutes";
import morgan from "morgan";

dotenv.config();
app.use(morgan("common"));

// USE HELMET AND CORS MIDDLEWARES
app.use(
  cors({
    origin: ["*"], // Comma separated list of your urls to access your api. * means allow everything
    credentials: true, // Allow cookies to be sent with requests
  })
);
app.use(helmet());

app.use(express.json());

// Serve other routes
app.use("/api/v1/products/", productRoute);

// Start backend server
const PORT = process.env.PORT || 8000;

app.listen(PORT, () => {
  console.log(`Backend server is running at port ${PORT}`);
});

export default app;

On the client, when a request is made to any of the product endpoints, a token header is supplied. When the product service receives it, the JWT middleware attached to that route handler checks whether the requesting user has the required privileges, and req.user will contain all the data we stored in the token.
The following is what you should get when you make a request to create a product without admin privileges:

Image description
and the following for someone with admin privileges:

Image description
All set. Let's now dockerise our product microservice too.

Step 8: Dockerising the Product Microservice

Create a new file in the root: Dockerfile and place the following code:

# Use an official Node.js runtime as a parent image
FROM node:latest as builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy all files from the current directory to the working directory
COPY . .

# Development stage
FROM builder as development
# Set NODE_ENV to development
ENV NODE_ENV=development

# Expose the port the app runs on
EXPOSE 8700

# Command to run the application(in development)
CMD ["npm", "run", "dev"]

# Production stage
FROM builder as production
# Set NODE_ENV to production
ENV NODE_ENV=production

# Run any production-specific build steps if needed here

# Run the production command
CMD ["npm", "start"]

and another file, still in the root: .dockerignore and place the following code:

/node_modules
npm-debug.log
.DS_Store
/*.env
./idea

The .dockerignore file tells Docker which files and directories to leave out when building the image, much like .gitignore does for git.

All set, let's now build and run the Docker image with the following commands:

docker build -t product_service:development --target development .

and

docker run -p 8700:8700 -v $(pwd):/usr/src/app -e PORT=8700 product_service:development

respectively.

If everything is well, you should have the following in terminal:
Image description

You can also spin up/split another terminal instance and run unit tests(inside the docker container) just like we did on the auth service, with the following command:

sudo docker exec -it your_docker_container_id  npm test

Building the Orders Microservice

Our orders microservice will have its own set of technologies, as we planned earlier: a MySQL database and the Sequelize ORM.
MySQL is an open-source relational database management system (RDBMS) that is widely used for building web applications and managing data. It is a popular choice for many developers and organizations due to its performance, reliability, and ease of use.
Sequelize is a popular Object-Relational Mapping (ORM) library for Node.js. It provides a way to interact with relational databases like MySQL, PostgreSQL, SQLite, and MSSQL using JavaScript or TypeScript. It simplifies database operations by allowing developers to use JavaScript objects to represent database tables and records, instead of writing raw SQL queries.
In this microservice, we will use it to query our MySQL database.
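
To see what that means in practice, here is a small standalone sketch, separate from the order service we are about to build; the connection values and the Demo model are placeholders for illustration only:

// Illustrative only: a Sequelize model and query versus the SQL they replace
const { Sequelize, DataTypes } = require("sequelize");

// Placeholder connection details
const sequelize = new Sequelize("orders_dev", "root", "password", {
  host: "localhost",
  dialect: "mysql",
});

// Defining a model corresponds to a table (Demos) with the listed columns
const Demo = sequelize.define("Demo", { name: { type: DataTypes.STRING } });

async function demo() {
  await sequelize.sync(); // creates the table if it does not exist
  await Demo.create({ name: "test" }); // roughly: INSERT INTO Demos (name, ...) VALUES ('test', ...)
  const rows = await Demo.findAll({ where: { name: "test" } }); // roughly: SELECT * FROM Demos WHERE name = 'test'
  console.log(rows.map((row) => row.toJSON()));
}

demo();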

Steps

Step 1: Project Setup

Create a new folder in your desired location and name it order_service.
With the folder open in your favorite code editor, open terminal and run the following command to initialize a NodeJS project:

npm init -y

After initializing the project, install the following dependencies with the following command:

npm install express cors helmet nodejs_ms_shared_library dotenv bcrypt jsonwebtoken morgan

and development dependencies with the following command:

npm install -D nodemon jest supertest
Folder Structure

Following will be our overall folder structure for our orders microservice:

Image description

Inside package.json, add the following scripts in the scripts section:

"scripts": {
    "start": "node index.js",
    "dev": "nodemon index.js",
    "test": "jest --watchAll --detectOpenHandles"
  },

Step 2: Install MySQL

If you're using Windows or Mac, I still recommend a desktop application for MySQL (MySQL Workbench).
Visit this site to download the one for your operating system.
After downloading, follow the on-screen instructions to install it as well.

If you're using Linux, you can use the following commands to get everything sorted from the comfort of your terminal:

Update Package Lists:

sudo apt update

Install MySQL Server:

sudo apt install mysql-server

Start MySQL Service:

sudo service mysql start

Secure MySQL Installation (Optional but recommended):

sudo mysql_secure_installation

This command will guide you through securing your MySQL installation by setting a root password, removing anonymous users, and other security-related configurations.
Access MySQL Shell:

mysql -u root -p

You will be prompted to enter the password you set during the secure installation or the default password if you didn't set one.

Now, you are in the MySQL shell, and you can interact with the MySQL database entirely through the terminal.
However, as we said, we will be using Sequelize to do the heavy lifting of writing SQL queries for us.

Step 3: Setup and Initialize Sequelize

Inside the root directory, open terminal and install the following extra dependencies:

npm install mysql2 sequelize

We also need to install sequelize-cli, a powerful tool that helps you manage database migrations, models, and configurations in a Sequelize project.
Open a terminal and run the following command:

npm install -g sequelize-cli

and initialise sequelize with the following command:

sequelize init

This command initializes a basic Sequelize project structure in your current directory. It creates a config folder for database configurations, a models folder for your Sequelize models, and a migrations folder for handling database schema changes.
Update the generated config/config.json with your database credentials for development like username, password, database name, etc. I recommend you place all the credentials in a .env file and reference them instead of placing them directly in the config file.
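
For example, you could swap the generated config/config.json for a config/config.js that reads from process.env (the generated models/index.js loads config/config.json by default, so point that require at the new file). The variable names below are only examples, apart from DB_DEV_PASSWORD, which the docker-compose file later in this guide also reads. Also make sure the database itself exists, either by creating it in the MySQL shell or with sequelize db:create.

// config/config.js: a sketch of an environment-driven Sequelize configuration
require("dotenv").config();

module.exports = {
  development: {
    username: process.env.DB_DEV_USERNAME,
    password: process.env.DB_DEV_PASSWORD,
    database: process.env.DB_DEV_NAME,
    host: process.env.DB_HOST || "localhost",
    dialect: "mysql",
  },
  production: {
    username: process.env.DB_PROD_USERNAME,
    password: process.env.DB_PROD_PASSWORD,
    database: process.env.DB_PROD_NAME,
    host: process.env.DB_HOST,
    dialect: "mysql",
  },
};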

Step 4: Writing Database Models

In Sequelize, the models you write translate to tables in the database, with all the fields you include and their data types.
Create a new file: models/Order.js and place the following code:

module.exports = (sequelize, DataTypes) => {
  const Orders = sequelize.define("Orders", {
    userId: {
      type: DataTypes.STRING,
    },

    orderId: {
      type: DataTypes.STRING,
      primaryKey: true,
    },

    products: {
      type: DataTypes.JSON,
    },

    amount: {
      type: DataTypes.INTEGER,
    },

    status: {
      type: DataTypes.ENUM("Pending", "Delivering", "Delivered"),
      defaultValue: "Pending",
    },
  });

  return Orders;
};

Our Order model has 5 fields as shown, including userId, the id of the user placing the order. This id is supplied from the auth microservice: when a user logs in, their id is part of the data returned alongside the JWT. When the user places an order, the client sends a request body containing the fields specified in the schema, including the userId taken from localStorage or wherever you chose to keep it at login time.
Optionally, you can pick it from the JWT sent in the headers, as we will discuss shortly.
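
For illustration, the request body a client might send when placing an order could look like the sketch below (the product entries are just an example shape; use whatever your products microservice returns):

// Hypothetical body for POST /api/v1/orders
const newOrder = {
  userId: "65a1f0c2e7b3a4d5c6f7e8d9", // id returned by the auth service at login
  products: [
    { productId: 1, quantity: 2 },
    { productId: 4, quantity: 1 },
  ],
  amount: 120,
  // status is omitted so the "Pending" default applies
};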

Step 5: Writing Controller Logic

Create a new file: controllers/orderController.js and place the following code:

const { Orders } = require("../models"); // Adjust the path accordingly
const { generateOrderId } = require("../utils/orderIdGenerator");
// Create a new order
const createOrder = async (req, res) => {
  try {
    const order = await Orders.create({
      orderId: generateOrderId(),
      ...req.body,
    });
    res.status(201).json({
      order,
      user: req.user,
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

// Get all orders
const getAllOrders = async (req, res) => {
  try {
    console.log(req.user, "u");
    const orders = await Orders.findAll();
    const ordersWithParsedProducts = orders.map((order) => {
      return {
        ...order.toJSON(),
        products: JSON.parse(order.products),
      };
    });

    res.status(200).json({
      user: req.user,
      orders: ordersWithParsedProducts,
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

// Get a specific order by orderId
const getOrderById = async (req, res) => {
  const { orderId } = req.params;
  try {
    const order = await Orders.findOne({
      where: { orderId },
    });
    if (order) {
      res.status(200).json({
        user: req.user,
        order,
      });
    } else {
      res.status(404).json({ error: "Order not found" });
    }
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

// Update an order by orderId
const updateOrderById = async (req, res) => {
  const { orderId } = req.params;
  try {
    // Note: Sequelize's `returning` option is only supported on Postgres,
    // so on MySQL we update first and then re-fetch the updated row.
    const [updatedCount] = await Orders.update(req.body, {
      where: { orderId },
    });
    if (updatedCount > 0) {
      const updatedOrder = await Orders.findOne({ where: { orderId } });
      res.status(200).json({ user: req.user, order: updatedOrder });
    } else {
      res.status(404).json({ error: "Order not found" });
    }
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

// Delete an order by orderId
const deleteOrderById = async (req, res) => {
  const { orderId } = req.params;
  try {
    const deletedCount = await Orders.destroy({
      where: { orderId },
    });
    if (deletedCount > 0) {
      res.status(200).json({ message: "Order deleted successfully" });
    } else {
      res.status(404).json({ error: "Order not found" });
    }
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

module.exports = {
  createOrder,
  getAllOrders,
  getOrderById,
  updateOrderById,
  deleteOrderById,
};

We define a set of functions for handling CRUD (Create, Read, Update, Delete) operations on our orders controller. The createOrder function creates a new order by generating an order ID and saving the order details to the database. getAllOrders retrieves all orders, parsing the product information from JSON format. getOrderById retrieves a specific order based on the provided order ID. The updateOrderById function updates an existing order by its order ID, and deleteOrderById deletes an order based on the order ID. Each function handles potential errors, responding with appropriate HTTP status codes and error messages. We also utilize a utility function (generateOrderId) for order ID generation. Additionally, the functions include user information in the response to provide context about the user interacting with the orders.

The utility function at utils/orderIdGenerator.js is as shown:

const generateOrderId = () => {
  let dt = new Date().getTime();
  let orderID = "xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    let r = (dt + Math.random() * 16) % 16 | 0;
    dt = Math.floor(dt / 16);
    return (c == "x" ? r : (r & 0x3) | 0x8).toString(16);
  });
  return orderID;
};

module.exports = { generateOrderId };

Step 6: Exposing our Order Routes

Before we expose our order routes, we need to prepare the JWT middleware functions from our shared library, since we will use them on the routes exposed to the client.

Let's create another file: middlewares/jwtMiddlewares.js and place the following code:

const {
  verifyToken,
  verifyTokenAndAdmin,
  verifyTokenAndAuthorization,
} = require("nodejs_ms_shared_library");

// Wrap the middleware functions with their parameters
// Verify token from the client
const verifyTokenMiddleware = (req, res, next) => {
  verifyToken(req, res, next, process.env.JWT_SEC);
};

// Verify token and authorise account owner
const verifyTokenAndAuthoriationMiddleware = (req, res, next) => {
  verifyTokenAndAuthorization(req, res, next, process.env.JWT_SEC);
};

// Verify token and authorise admin
const verifyTokenAndAdminMiddleware = (req, res, next) => {
  verifyTokenAndAdmin(req, res, next, process.env.JWT_SEC);
};

module.exports = {
  verifyTokenAndAdminMiddleware,
  verifyTokenAndAuthoriationMiddleware,
  verifyTokenMiddleware,
};

Now create a new file: routes/orderRoutes.js and place the following code:

const {
  createOrder,
  getAllOrders,
  getOrderById,
  updateOrderById,
  deleteOrderById,
} = require("../controllers/orderController");
const {
  verifyTokenMiddleware,
  verifyTokenAndAuthoriationMiddleware,
  verifyTokenAndAdminMiddleware,
} = require("../middlewares/jwtMiddlewares");
const router = require("express").Router();

// Create new order
router.post("/", verifyTokenMiddleware, createOrder);

// Get all orders
router.get("/", verifyTokenAndAdminMiddleware, getAllOrders);

// Get order by id
router.get("/:orderId", verifyTokenMiddleware, getOrderById);

// Update order by id
router.put("/:orderId", verifyTokenAndAuthoriationMiddleware, updateOrderById);

// Delete order by id
router.delete("/:orderId", verifyTokenAndAdminMiddleware, deleteOrderById);

module.exports = router;


Finally, place the following code in index.js in the root directory:

const express = require("express");
const app = express();
const dotenv = require("dotenv").config();
const cors = require("cors");
const helmet = require("helmet");
const morgan = require("morgan");
const database = require("./models");
const ordersRoute = require("./routes/orderRoutes");

// Middleware
app.use(express.json());
app.use(cors());
app.use(morgan("common"));
app.use(helmet());
app.use("/api/v1/orders", ordersRoute);

// Configure sequelize to sync all models and create corresponding tables accordingly
database.sequelize.sync().then(() => {
  console.log("Db connection successful");
  const PORT = process.env.PORT || 8300;
  app.listen(PORT, () => {
    console.log(`Backend server is listening at port ${PORT}`);
  });
});

Here, we use various middleware modules (dotenv, cors, helmet, and morgan) to handle environment variables, enable Cross-Origin Resource Sharing (CORS), enhance security, and log HTTP requests, respectively. The application mounts a route for handling orders ("/api/v1/orders") using the ordersRoute module. We also configure Sequelize to synchronize with the database, ensuring that tables corresponding to the defined models are created.
Finally, the server listens on the specified port (retrieved from the environment variable or defaulting to 8300), and a log message is displayed upon successful database connection and server initialization.

Step 7: Dockerising our Orders Service

Unlike the other services we have so far, we will dockerise this service and add a docker-compose configuration to help us run multiple containers at a go.
We will have our Node.js order service in a Docker container and use docker-compose to run it together with its MySQL database.
Create a new file: Dockerfile in the root directory and place the following code:

# Use an official Node.js runtime as a parent image
FROM node:latest as builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy all files from the current directory to the working directory
COPY . .

# Development stage
FROM builder as development
# Set NODE_ENV to development
ENV NODE_ENV=development

# Expose the port the app runs on
EXPOSE 8300

# Command to run the application(in development)
CMD ["npm", "run", "dev"]

# Production stage
FROM builder as production
# Set NODE_ENV to production
ENV NODE_ENV=production

# Run any production-specific build steps if needed here

# Run the production command
CMD ["npm", "start"]

and .dockerignore and place the following code:

node_modules
dist
.git
.dockerignore

Let's now create a docker-compose.yml file in the root directory and place the following code:

version: "3.8"

services:
  node-api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8300:8300"
    depends_on:
      - mysql
    env_file:
      - .env # Use the same .env file for both services
    working_dir: /usr/src/app
    volumes:
      - .:/usr/src/app
    networks:
      - my-network
    command: npm run dev

  mysql:
    image: mysql:latest
    env_file:
      - .env # Use the same .env file for both services
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --innodb_force_recovery=1
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_DEV_PASSWORD}
    ports:
      - "3306:3306"
    networks:
      - my-network
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:

networks:
  my-network:


We defined a multi-container environment for our Node.js API service and our MySQL database service. The Node.js API service, named "node-api," is built from the specified Dockerfile, exposes its application on port 8300, and depends on the MySQL service. It uses an environment file (.env) for configuration and mounts the local project directory as a volume into the container. The MySQL service uses the latest MySQL Docker image, sets up the necessary environment variables, including the root password from the .env file, and exposes MySQL on port 3306. It utilizes a custom network called "my-network" to enable communication between the services and employs a volume named "mysql-data" to persistently store MySQL data.
Our overall configuration facilitates the development environment for a Node.js API interacting with our MySQL database, ensuring ease of deployment and reproducibility across different environments.
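
Two practical notes before running this. First, inside the compose network the API container cannot reach MySQL via localhost; the host in your Sequelize configuration must be the compose service name, mysql (with the env-driven config sketched earlier, that simply means setting DB_HOST=mysql in the .env file compose loads). Second, you can bring both containers up with docker compose up --build. If the database connection misbehaves, a throwaway check like the following, run inside the API container, helps confirm connectivity:

// connection-check.js: temporary helper to verify the container can reach MySQL
const database = require("./models");

database.sequelize
  .authenticate()
  .then(() => console.log("MySQL is reachable from the container"))
  .catch((err) => console.error("Cannot reach MySQL:", err.message));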

Building Notifications Microservice

We have so far completed 3 microservices and dockerised them all. However, in an ecommerce application we always need to notify our users/customers when various events happen, for example when new products are added, when their orders get fulfilled, or when their goods get shipped, among other scenarios. We will handle all the notification-related logic in an independent microservice.
In this service, we will be using GraphQL, MongoDB and Mongoose.
For illustration in this article, we will use express-graphql. It is a piece of middleware for quickly setting up a GraphQL server with Express, or with any web framework that supports middleware.
For more sophisticated GraphQL implementations, I recommend Apollo Server in place of express-graphql, as it is more feature-rich than the latter, with powerful features like subscriptions (which let a server push data to clients in real time when a specific event happens) and support for nearly all Apollo Client libraries.
A future article may concentrate on this subject too, as it's helpful in many scenarios.
We will use mongoose ODM to interact with our MongoDB database.

Steps:

Step 1: Project Setup

Create a new folder in your desired location and name it notifications_service.
With the folder open in your favorite code editor, open terminal and run the following command to initialize a NodeJS project:

npm init -y

After initializing the project, install the following dependencies with the following command:

npm install express cors helmet nodejs_ms_shared_library dotenv bcrypt jsonwebtoken morgan mongoose graphql express-graphql

and development dependencies with the following command:

npm install -D typescript ts-node nodemon jest ts-jest @types/jest supertest @types/supertest @types/cors @types/express @types/bcrypt @types/morgan 
Folder Structure

Following will be our overall folder structure for our notifications microservice:

notifications Microservice Folder Structure

Step 2: Configure TypeScript and Nodemon

Create a file named tsconfig.json in the root directory and add the following configuration:

{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2020",
    "baseUrl": "src",
    "noImplicitAny": true,
    "sourceMap": true,
    "esModuleInterop": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Create another file named nodemon.json in the root directory and place the following code:

{
  "watch": ["src"],
  "ext": ".ts,.js",
  "exec": "ts-node ./src/index"
}

Inside package.json, add the following scripts in the scripts section:

"scripts": {
    "build": "tsc",
    "start": "npm run build && node dist/src/index.js",
    "dev": "nodemon",
    "test": "jest --watchAll  --detectOpenHandles"
  },

Also, add the line:

"type": "module",

to your package.json to specify that we are using ES modules rather than Node.js's default CommonJS pattern.

Step 3: Writing Notification Model

Create a new file: src/models/Notification.ts and place the following code:

// Import necessary modules
import mongoose, { Schema, Document } from "mongoose";

// Define the interface for a Notification document
export interface INotification extends Document {
  title: string;
  text: string;
  userId: string;
}

// Create a schema for the Notification model
const notificationSchema: Schema<INotification> = new Schema(
  {
    title: { type: String, required: true },
    text: { type: String, required: true },
    userId: { type: String, required: true },
  },
  { timestamps: true }
);

// Create and export the Notification model
export default mongoose.model<INotification>(
  "Notification",
  notificationSchema
);

Our model describes 3 fields we will have in the notifications schema as shown above.

Step 4: Creating Notification Service

In our notification service, we isolate the business logic for handling notification-related functionality.
Create a new file: src/services/notificationService.ts and place the following code:

import Notification from "../models/Notification";

export const getAllNotifications = async () => {
  try {
    return await Notification.find();
  } catch (error) {
    throw new Error(error.message);
  }
};

export const getNotificationById = async (id: string) => {
  try {
    return await Notification.findById(id);
  } catch (error) {
    throw new Error(error.message);
  }
};

export const addNotification = async (args: {
  title: string;
  text: string;
  userId: string;
}) => {
  try {
    const notification = new Notification(
      args as {
        title: string;
        text: string;
        userId: string;
      }
    );
    return await notification.save();
  } catch (error) {
    throw new Error(error.message);
  }
};

export const updateNotification = async (args: {
  id: string;
  title: string;
  text: string;
}) => {
  try {
    return await Notification.findByIdAndUpdate(
      args.id,
      { title: args.title, text: args.text },
      { new: true }
    );
  } catch (error) {
    throw new Error(error.message);
  }
};

export const deleteNotification = async (id: string) => {
  try {
    return await Notification.findByIdAndDelete(id);
  } catch (error) {
    throw new Error(error.message);
  }
};

Step 5: Writing the GraphQL Schema

Let's now write our GraphQL schema and then consume our services.
Create a new file: src/schema/Notification.ts and place the following code:

import { GraphQLObjectType, GraphQLID, GraphQLString } from "graphql";

const NotificationType = new GraphQLObjectType({
  name: "Notification",
  fields: () => ({
    id: { type: GraphQLID },
    userId: { type: GraphQLID },
    title: { type: GraphQLString },
    text: { type: GraphQLString },
  }),
});

export default NotificationType;


and src/schema/index.ts and place the following code:

import {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLList,
  GraphQLNonNull,
  GraphQLString,
} from "graphql";
import NotificationType from "./Notification";
import * as NotificationService from "../services/notificationService";

// Queries
const RootQuery = new GraphQLObjectType({
  name: "RootQueryType",
  fields: {
    // Query to get all notifications
    notifications: {
      type: GraphQLList(NotificationType),
      resolve: async () => NotificationService.getAllNotifications(),
    },

    // Query to get a notification by ID
    notification: {
      type: NotificationType,
      args: { id: { type: GraphQLNonNull(GraphQLString) } },
      resolve: async (_, args) =>
        NotificationService.getNotificationById(args.id),
    },
  },
});

// Mutations
const Mutation = new GraphQLObjectType({
  name: "Mutation",
  fields: {
    // Mutation to add a new notification
    addNotification: {
      type: NotificationType,
      args: {
        title: { type: GraphQLNonNull(GraphQLString) },
        text: { type: GraphQLNonNull(GraphQLString) },
        userId: { type: GraphQLNonNull(GraphQLString) },
      },
      resolve: async (
        _,
        args: { title: string; text: string; userId: string }
      ) => NotificationService.addNotification(args),
    },

    updateNotification: {
      type: NotificationType,
      args: {
        id: { type: GraphQLNonNull(GraphQLString) },
        title: { type: GraphQLNonNull(GraphQLString) },
        text: { type: GraphQLNonNull(GraphQLString) },
      },
      resolve: async (_, args: { id: string; title: string; text: string }) =>
        NotificationService.updateNotification(args),
    },

    deleteNotification: {
      type: NotificationType,
      args: { id: { type: GraphQLNonNull(GraphQLString) } },
      resolve: async (_, args) =>
        NotificationService.deleteNotification(args.id),
    },
  },
});

export default new GraphQLSchema({
  query: RootQuery,
  mutation: Mutation,
});


Our GraphQL schema is defined to handle notifications, including queries and mutations. The RootQueryType contains two queries: notifications for retrieving all notifications and notification for fetching a specific notification by its ID. The Mutation type encompasses three mutations: addNotification for creating a new notification with mandatory title, text, and userId parameters, updateNotification for modifying an existing notification's title and text by ID, and deleteNotification for removing a notification by its ID. Each resolver function within the schema delegates its logic to corresponding functions in the NotificationService module, which interacts with the database and performs CRUD operations on notifications. The schema is constructed using GraphQLObjectType instances, and it exports a GraphQLSchema containing the defined queries and mutations. This structure adheres to GraphQL conventions, providing a clear and organized way to handle notifications within our GraphQL-powered notifications service.

Lastly, set up the index file, src/index.ts, with the following code:

import express from "express";
const app = express();
import cors from "cors";
import helmet from "helmet";
import dotenv from "dotenv";
import mongoose from "mongoose";
import morgan from "morgan";
import { graphqlHTTP } from "express-graphql";
import schema from "./schema";

dotenv.config();
app.use(morgan("common"));

// USE HELMET AND CORS MIDDLEWARES
app.use(
  cors({
    origin: ["*"], // Comma separated list of your urls to access your api. * means allow everything
    credentials: true, // Allow cookies to be sent with requests
  })
);
// app.use(helmet());
app.use(
  helmet({
    contentSecurityPolicy:
      process.env.NODE_ENV === "production" ? undefined : false,
  })
);

app.use(express.json());

// DB CONNECTION

if (!process.env.MONGODB_URL) {
  throw new Error("MONGO_URI environment variable is not defined");
}

mongoose
  .connect(process.env.MONGODB_URL)
  .then(() => {
    console.log("MongoDB connected to the backend successfully");
  })
  .catch((err: Error) => console.log(err));

app.use(
  "/graphql",
  graphqlHTTP({
    schema,
    graphiql: true,
  })
);

// Start backend server
const PORT = process.env.PORT || 8500;

// Check if it's not a test environment before starting the server

app.listen(PORT, () => {
  console.log(`Backend server is running at port ${PORT}`);
});

export default app;

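Once the server is running, you can exercise the schema in the GraphiQL UI at http://localhost:8500/graphql, or with a plain HTTP request. Below is a hedged smoke-test sketch (the port and field values are examples, and it assumes Node 18+ so that fetch is available globally):

// Quick smoke test for the notifications GraphQL endpoint
const mutation = `
  mutation AddNotification($title: String!, $text: String!, $userId: String!) {
    addNotification(title: $title, text: $text, userId: $userId) {
      id
      title
      text
      userId
    }
  }
`;

async function smokeTest() {
  const res = await fetch("http://localhost:8500/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: mutation,
      variables: {
        title: "Welcome",
        text: "Your account was created",
        userId: "demo-user",
      },
    }),
  });
  console.log(await res.json());
}

smokeTest();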

Step 6: Dockerising our Notifications Service

Create a new file in the root: Dockerfile and place the following code:

# Use an official Node.js runtime as a parent image
FROM node:latest as builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy all files from the current directory to the working directory
COPY . .

# Development stage
FROM builder as development
# Set NODE_ENV to development
ENV NODE_ENV=development

# Expose the port the app runs on
EXPOSE 8500

# Command to run the application(in development)
CMD ["npm", "run", "dev"]

# Production stage
FROM builder as production
# Set NODE_ENV to production
ENV NODE_ENV=production

# Run any production-specific build steps if needed here

# Run the production command
CMD ["npm", "start"]

and another file, still in the root: .dockerignore and place the following code:

/node_modules
npm-debug.log
.DS_Store
/*.env
./idea

The .dockerignore file tells Docker which files and directories to leave out when building the image, much like .gitignore does for git.

All set, let's now build and run the Docker image with the following commands:

docker build -t notification_service:development --target development .

and

docker run -p 8500:8500 -v $(pwd):/usr/src/app -e PORT=8500 notification_service:development

respectively.

If everything is well, you should have the following in terminal:

Image description

You can also spin up/split another terminal instance and run unit tests(inside the docker container) just like we did on other services, with the following command:

sudo docker exec -it your_docker_container_id  npm test

RabbitMQ Setup and Implementation

RabbitMQ, with its decoupling mechanism, will provide us with several benefits:

Asynchronous Communication:

It will enable asynchronous communication, allowing our microservices to continue processing requests independently without waiting for a response from other services.
For example, when placing an order, the order service can publish a message to update inventory or trigger other processes without blocking the user's request.

Resilience and Scalability:

Our microservices will be able to handle requests more resiliently. If a service is temporarily unavailable, the message broker will hold the message until the service is back online.
Scalability will be improved, as microservices can scale independently without direct dependencies on one another.

Loose Coupling:

Our microservices will evolve independently without tight dependencies. Changes in one service won't necessarily affect others, as long as the message format remains consistent.
This is particularly useful in our ecommerce application, where you might add new features or services without affecting the existing ones.

Event-Driven Architecture:

Message brokers support event-driven architecture, allowing microservices to react to events or updates in real-time.
In our ecommerce app, we will be able to trigger events for promotions, inventory updates, or order status changes among others.

Consider the order placement scenario:

Without Message Broker:

User places an order.
Order service processes the order, updates inventory, and notifies other services directly via HTTP.
If one service is slow or unavailable, it might affect the entire user experience.

With Message Broker:

User places an order.
Order service publishes an "Order Placed" event to the message broker.
Inventory service and other relevant microservices subscribe to the "Order Placed" event and perform their actions asynchronously.
The user receives a response immediately, and the order processing continues in the background.

While it's possible to build microservices without a message broker, as our system currently does, using one provides a more flexible, scalable, and resilient architecture, especially in complex scenarios like ecommerce where several microservices need to collaborate without tight coupling.
That said, let's proceed with our previously chosen RabbitMQ setup.

RabbitMQ Setup

In development, I highly recommend setting up a local RabbitMQ instance; as always, the setup varies depending on your operating system.
Because RabbitMQ primarily operates as a server and doesn't have a native desktop application for Windows, macOS, or Linux, I will provide all the commands you need to set it up on your local machine if you're using Linux. If you're using Windows or Mac, I recommend you check out this guide to get your environment set up.

RabbitMQ Docker Image with Management UI:

You can also run RabbitMQ in a Docker container with the management plugin enabled, which allows you to access the management UI. This is more suitable for development and testing purposes.
Run the following command on your terminal:

docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:management

and then access the management UI at http://localhost:15672/.

If you're using Linux, follow these instructions to get RabbitMQ set up on your local machine:

sudo apt-get update
sudo apt-get install rabbitmq-server


Start the server with the following command:

sudo service rabbitmq-server start

Enable the RabbitMQ management plugin with the following command:

# Enable the plugin
rabbitmq-plugins enable rabbitmq_management

After that, create a user and virtual host for your application

# Create a user
rabbitmqctl add_user your_user your_password

# Give the user administrative privileges
rabbitmqctl set_user_tags your_user administrator

# Create a virtual host
rabbitmqctl add_vhost your_virtual_host

# Set permissions for the user on the virtual host
rabbitmqctl set_permissions -p your_virtual_host your_user ".*" ".*" ".*"

RabbitMQ in Production Environments

Before we start implementing RabbitMQ in our microservices, I'd like to first discuss how you go about it in production, since that is what I will actually use in this guide rather than a local server.
Also note that this is all about environment setup; we haven't reached the actual implementation yet. That is where we will discuss client libraries for various technologies and proceed with the one most commonly used with Node.js.

In production, we have various options for setting up our RabbitMQ depending on the hosting service/cloud provider.
Let's discuss the 3 most commonly used ones; we will proceed with only one in this guide.

Cloud-Neutral Option: External RabbitMQ Service:

You can host RabbitMQ on a cloud provider or service that is cloud-neutral and can be accessed from any hosting provider.
Examples include services like RabbitMQ on AWS (using Amazon MQ), RabbitMQ on Azure, or a RabbitMQ instance hosted on a platform like Heroku.
This approach allows your microservices hosted on different cloud providers to connect to the same RabbitMQ instance.

Host RabbitMQ on One of the Cloud Providers:

You can also choose one of your cloud providers (e.g., AWS) to host RabbitMQ.
Ensure that the RabbitMQ instance is accessible from other cloud providers. This might involve setting up networking configurations, security groups, or VPC peering, depending on the specific cloud providers you are using.
Microservices hosted on different cloud providers would then connect to the RabbitMQ instance on the chosen cloud provider.

Hybrid Approach: Multiple RabbitMQ Instances:

Lastly, you can host separate RabbitMQ instances on each cloud provider (AWS, Render, Railway, etc) but configure them to communicate with each other.
Use RabbitMQ's federation or shovel features to link different RabbitMQ instances together, allowing messages to be exchanged across instances.
This approach provides a degree of isolation but still allows for communication between microservices on different cloud providers.

In this article we will set up our RabbitMQ on AWS using Amazon MQ and configure it to communicate with our microservices wherever they are hosted. Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies the setup and operation of open-source message brokers on AWS.

Setup

Step 1: Create AWS Account

Head over to AWS and log in or create a new account. Note that AWS has a 12-month free tier that allows customers to use the product for free up to specified limits. Additionally, creating a new account involves submitting your credit card details.

Step 2: Create Broker

On the AWS console, search for Amazon MQ and proceed to https://console.aws.amazon.com/amazon-mq.
Click the get started button, select RabbitMQ as your broker engine and follow on screen instructions to create a new message broker selecting options that apply to your use case.
Configure the broker instance details, including the instance type, storage, and authentication method.
Choose the VPC and subnet settings for your RabbitMQ broker.
Set up security groups and define rules to control inbound and outbound traffic.

Step 3: Complete Broker Setup

After reviewing your configuration, click the create broker button; provisioning may take around 20 minutes.
If all is well, you should see the following on your created broker's dashboard:
Image description

In our case, we will be using this RabbitMQ setup for both development and production, so after provisioning the broker, we will copy its credentials into our .env file and start the implementation.

RabbitMQ Implementation in NodeJS Microservices

From what we have architected so far, we need only 2 functions to implement RabbitMQ and use it for communication between our microservices.

The first function publishes an event to the RabbitMQ server; in our case, that is the broker we just created on AWS with Amazon MQ. If you set up a local RabbitMQ instance instead, you will have the credentials from your local setup.

The second function subscribes to these events on RabbitMQ.
With these 2 functions, you will find the RabbitMQ implementation easier than you ever imagined. Since we need the same functions across nearly all our microservices, we are going to add them to our shared library instead of hard-coding them in every service. We will pass the credentials and everything else we need as parameters, and we will be good to go.

Writing Shared RabbitMQ Functions

Inside the nodejs_ms_shared_library's root directory, create a new file: src/rabbitMQUtils.ts and place the following code:

import * as amqp from "amqplib";

// Subscribe to rabbitmq
export async function subscribeToRabbitMQ(
  exchange: string,
  routingKey: string,
  handleMessage: (message: Buffer) => void,
  rabbitMQConfig: {
    host: string;
    port: number;
    username: string;
    password: string;
  }
): Promise<void> {
  const connection = await amqp.connect({
    protocol: "amqp",
    hostname: rabbitMQConfig.host,
    port: rabbitMQConfig.port,
    username: rabbitMQConfig.username,
    password: rabbitMQConfig.password,
  });

  const channel = await connection.createChannel();
  await channel.assertExchange(exchange, "direct", { durable: false });

  const queue = await channel.assertQueue("", { exclusive: true });
  channel.bindQueue(queue.queue, exchange, routingKey);

  channel.consume(
    queue.queue,
    (msg) => {
      if (msg !== null && msg.content) {
        handleMessage(msg.content as Buffer);
      }
    },
    { noAck: true }
  );
}

// Publish event to rabbitmq
export async function publishToRabbitMQ(
  exchange: string,
  routingKey: string,
  message: string,
  rabbitMQConfig: {
    host: string;
    port: number;
    username: string;
    password: string;
  }
): Promise<void> {
  const connection = await amqp.connect({
    protocol: "amqp",
    hostname: rabbitMQConfig.host,
    port: rabbitMQConfig.port,
    username: rabbitMQConfig.username,
    password: rabbitMQConfig.password,
  });

  const channel = await connection.createChannel();
  await channel.assertExchange(exchange, "direct", { durable: false });

  // Publish the message to the exchange with the specified routing key
  channel.publish(exchange, routingKey, Buffer.from(message));

  // Close the connection
  await channel.close();
  await connection.close();
}


amqplib is a library for making AMQP 0-9-1 clients for Node.js; it implements the machinery needed to build such clients and ships with one.

The exchange is the routing mechanism for RabbitMQ. When you publish a message, you specify an exchange and a routing key.
The routingKey is a key that the exchange uses to route the message to the correct queue.
handleMessage: This is a callback function that will be invoked when a message is received from the RabbitMQ queue. You define the logic for handling the message inside this function.
rabbitMQConfig:
host: You can find the RabbitMQ host in the AWS Management Console under the details of your RabbitMQ broker; Amazon MQ endpoints typically look like b-xxxxxxxx.mq.your-region.amazonaws.com.
port: The default port for plain AMQP is 5672. Note that Amazon MQ brokers generally accept only TLS connections, in which case you would connect with the amqps protocol on port 5671 instead of plain amqp on 5672.
username and password: These are the credentials you set up when configuring your RabbitMQ broker. You can find these in the AWS Management Console as well.
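
Since every service that publishes or subscribes needs the same four values, it can be handy to centralise reading them from the environment and fail fast when one is missing. Here is a small optional sketch (the variable names match the ones used in the controller updates below):

// rabbitConfig.ts: optional helper that builds the config object our shared functions expect
export const getRabbitMQConfig = () => {
  const { RABBIT_MQ_HOST, RABBIT_MQ_PORT, RABBIT_MQ_USERNAME, RABBIT_MQ_PASSWORD } =
    process.env;

  if (!RABBIT_MQ_HOST || !RABBIT_MQ_PORT || !RABBIT_MQ_USERNAME || !RABBIT_MQ_PASSWORD) {
    throw new Error("Missing RabbitMQ environment variables");
  }

  return {
    host: RABBIT_MQ_HOST,
    port: Number(RABBIT_MQ_PORT),
    username: RABBIT_MQ_USERNAME,
    password: RABBIT_MQ_PASSWORD,
  };
};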

All set. Before we publish a new version of our library, we need to go through the same steps as before; they are pretty straightforward.
First, include the new functions in the index file:

export {
  generateToken,
  verifyTokenAndAdmin,
  verifyTokenAndAuthorization,
  CustomRequest,
  JWTPayload,
  VerifyErrors,
  verifyToken,
  Secret,
  Response,
} from "./jwtUtils";

export { subscribeToRabbitMQ, publishToRabbitMQ } from "./rabbitMQUtils";


Then update the library version number. As we agreed, we will bump the patch version. Run:

npm version patch

Now we need to transpile the TypeScript once again to generate fresh JavaScript files in dist that include our changes. Run:

npm run build

All set, let's finish with the command that publishes to the npm registry (assuming we are still logged in both in the terminal and in the browser).

npm publish --access public 

Check your npm registry and you should see an update with the version number corresponding to the one generated when you ran npm version patch.
For now, since our npm workflow is still manual, uninstall the shared library and reinstall it in every microservice where it was already installed.

There are already quite a few places where we could publish events to RabbitMQ with our new function so that other services concerned with an event can subscribe to it; let's start with product creation.
When a new product is created, we will publish a ProductCreated event to RabbitMQ, and the notifications service will subscribe to that event.

Update the controller for creating a new product to look like this:

import { Request, Response } from "express";
import * as productService from "../services/productService";
import { CustomRequest } from "nodejs_ms_shared_library";
import { publishToRabbitMQ } from "nodejs_ms_shared_library";

const createProduct = async (req: CustomRequest, res: Response) => {
  try {
    const product = await productService.createProduct(req.body);

    // After successfully creating a product, publish a "ProductCreated" event
    const productCreatedEvent = {
      productId: product.id, // Assuming you have the product's ID in the model
      productName: product.title,
      userId: req.user?.id, // assuming the JWT payload stored at login contains the user's id
      // add any other relevant product data ...
    };

    // Use the RabbitMQ publishing function
    await publishToRabbitMQ(
      "your_exchange",
      "ProductCreated",
      JSON.stringify(productCreatedEvent),
      {
        host: process.env.RABBIT_MQ_HOST,
        port: Number(process.env.RABBIT_MQ_PORT),
        username: process.env.RABBIT_MQ_USERNAME,
        password: process.env.RABBIT_MQ_PASSWORD,
      }
    );

    res.status(201).json({
      message: "Product created successfully!",
      user: req.user,
      product,
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: "Internal Server Error" });
  }
};

In the notifications_service, head over to the index file, just after the MongoDB connection section, and add the following code to subscribe to the RabbitMQ event:

// RabbitMQ subscription for "ProductCreated" event
subscribeToRabbitMQ(
  "your_exchange",
  "ProductCreated",
  async (message) => {
    try {
      // Parse the incoming message
      const productCreatedEvent = JSON.parse(message.toString());

      // Handle the event in your NotificationService, e.g. notify the user that
      // created the product that a new product was added under their account, for
      // transparency. Map the event fields onto the notification schema
      // (title, text and userId are all required by the model).
      await NotificationService.addNotification({
        title: "New product created",
        text: `Product "${productCreatedEvent.productName}" was created under your account`,
        userId: productCreatedEvent.userId,
      });

    } catch (error) {
      console.error("Error processing ProductCreated event:", error);
    }
  },
  {
    host: process.env.RABBIT_MQ_HOST,
    port: Number(process.env.RABBIT_MQ_PORT),
    username: process.env.RABBIT_MQ_USERNAME,
    password: process.env.RABBIT_MQ_PASSWORD,
  }
);

Do not forget to import the functions we use here at the top of the file, in the imports section:

import { subscribeToRabbitMQ } from "nodejs_ms_shared_library";
import * as NotificationService from "./services/notificationService";

Congratulations, you have successfully implemented the RabbitMQ message broker in a Node.js microservice architecture.
Note that RabbitMQ handles messaging between services; it does not push updates to your end users in real time by itself. If you need real-time delivery to clients, implement it yourself using technologies like Socket.IO, plain WebSockets or GraphQL subscriptions, among others.
Nothing changes in the RabbitMQ implementation itself, so I would suggest using whichever technology you are already familiar with for real-time features in Node.js, should the need for real-time message delivery arise.

We have implemented only a small portion of the overall usage in an ecommerce application, but the main goal is to grasp the concept, as there are many scenarios where we need to publish events to RabbitMQ and have other services subscribe to them.
Another potential use case is when an order is placed: assuming we had shipping and inventory microservices, they would subscribe to the event published by the order microservice and keep receiving updates about the order status as well. In the same way we published an event upon successfully creating a new product, you can publish as many events as you need from as many microservices as you need, as sketched below.
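
For instance, here is a sketch of what publishing such an event could look like inside the createOrder controller we wrote earlier (the exchange and routing key names are up to you; the shape mirrors the product example above):

const { publishToRabbitMQ } = require("nodejs_ms_shared_library");

// Inside createOrder, right after Orders.create(...) succeeds:
await publishToRabbitMQ(
  "your_exchange",
  "OrderPlaced",
  JSON.stringify({
    orderId: order.orderId,
    userId: order.userId,
    status: order.status,
  }),
  {
    host: process.env.RABBIT_MQ_HOST,
    port: Number(process.env.RABBIT_MQ_PORT),
    username: process.env.RABBIT_MQ_USERNAME,
    password: process.env.RABBIT_MQ_PASSWORD,
  }
);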

Hosting Microservices

Hosting microservices involves deploying and managing individual services that make up the application architecture. There are various ways to host microservices, and the choice depends on your specific requirements, technology stack, and preferences.
In this article, we will discuss the most common approaches to hosting microservices:

Cloud Platforms:

Amazon Web Services (AWS): Services like Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and AWS Lambda can be used to deploy and manage microservices.
Microsoft Azure: Azure Kubernetes Service (AKS), Azure Container Instances (ACI), and Azure Functions are popular options for hosting microservices.
Google Cloud Platform (GCP): Google Kubernetes Engine (GKE), Google Cloud Run, and Cloud Functions are suitable for hosting microservices.

Container Orchestration:

Kubernetes: A powerful container orchestration platform widely used for deploying, managing, and scaling containerized applications, including microservices.
Docker Swarm: Docker's native clustering and orchestration solution that allows you to deploy and manage containers at scale.

Serverless Computing:

AWS Lambda: Ideal for functions and small microservices that can be triggered by events.
Azure Functions: Serverless computing offering on Azure, supporting multiple programming languages.
Google Cloud Functions: Allows you to deploy and run event-driven functions.

Platform-as-a-Service (PaaS):

Services like Heroku, Cloud Foundry, and others provide a platform for deploying and managing applications without worrying about the underlying infrastructure.

Containerization:

Use containerization platforms like Docker to package microservices along with their dependencies into containers. You can then deploy these containers to various environments, including Kubernetes, Docker Swarm, or other container orchestration solutions.

Self-Managed Infrastructure:

You can host microservices on your own infrastructure, either on-premises or using virtual machines in a cloud environment.

In our case, since we used Docker to containerise each microservice, with some requiring docker-compose to run multiple services at a go, you can look through the options that involve Docker and containerisation.

Until next time, happy coding!

Helpful Links:
