Abeinemukama Vicent

How to Dockerise a NodeJS - TypeScript API || A Comprehensive Guide from Environment Setup to Deployment with a CI/CD Pipeline

In the dynamic world of software development, streamlining the deployment process and ensuring consistency across different environments are crucial aspects of building robust and scalable applications. Docker, a powerful containerization platform, has become an indispensable tool for developers looking to achieve these goals. This article will guide you through the process of Dockerizing a Node.js TypeScript API, from setting up the development environment to deploying the Dockerized application with an efficient CI/CD pipeline.

What Exactly is Docker

Docker is a containerization platform designed to simplify the packaging, distribution, and deployment of applications.
It uses OS-level virtualization to deliver software in packages called containers.
At its core, a Docker container is a lightweight, standalone executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. This article delves into the fundamental concepts of Docker, explaining how containers differ from traditional virtualization and highlighting the key components that make Docker a versatile and powerful tool for developers. Gain insights into the Docker image, container, and Dockerfile, and comprehend the significance of Docker in creating reproducible and isolated environments for Node.js TypeScript APIs.

Pros and Cons of Docker

Docker brings a myriad of advantages to the table, transforming the way we develop, deploy, and manage applications. In this section, we'll delve into the benefits that make Docker a go-to solution for developers. From consistent environments across different stages of development to improved scalability and resource utilization, Docker streamlines the development lifecycle.

However, no technology is without its considerations. We'll also explore the potential disadvantages and challenges that developers may encounter when adopting Docker. Understanding both the strengths and limitations will empower you to make informed decisions as you Dockerize a Node.js TypeScript API. Let's navigate through the advantages and disadvantages of Docker, ensuring a comprehensive view of its role in modern application development.

Pros:

Portability:

Containers encapsulate the application and its dependencies, making it highly portable across different environments. The same container can run consistently on a developer's laptop, testing servers, and production systems.

Isolation:

Containers provide a level of isolation, ensuring that an application and its dependencies are isolated from the host system and other containers. This helps in avoiding conflicts between different applications.

Resource Efficiency:

Docker containers share the host OS kernel, making them more lightweight than traditional virtual machines. This results in better resource utilization and faster startup times.

Scalability:

Docker simplifies the process of scaling applications. Multiple instances of a containerized application can be easily deployed and managed, either manually or through orchestration tools like Docker Swarm or Kubernetes.

Consistency:

Docker ensures consistency across different environments. Developers can work in the same environment as the production system, reducing the "it works on my machine" problem.

Versioning and Rollback:

Docker allows versioning of images, making it easy to roll back to previous versions in case of issues. This is particularly useful for deployments and updates.

Community and Ecosystem:

Docker has a large and active community, leading to a vast ecosystem of pre-built images, plugins, and tools. This community support can be valuable for problem-solving and knowledge sharing.

Continuous Integration and Deployment (CI/CD):

Docker containers integrate well with CI/CD pipelines, allowing for automated testing, building, and deployment. This promotes a more streamlined and efficient development process.

Cons:

Learning Curve:

Docker has a learning curve, especially for those who are new to containerization concepts. Understanding container orchestration tools and Dockerfile syntax may take time.

Security Concerns:

While Docker provides isolation, misconfigurations or vulnerabilities in the host kernel could potentially pose security risks. It's crucial to follow best practices and keep containers and images up to date.

Resource Overhead:

While containers are more lightweight than virtual machines, there is still some overhead associated with running multiple containers on a host system.

Limited Windows Support:

Docker was initially designed for Linux, and although it has improved its support for Windows, it may not be as seamless or feature-rich on Windows as it is on Linux.

Persistence and Stateful Applications:

Docker containers are often designed to be stateless, which can be challenging for applications that require persistent storage. Handling data persistence and managing stateful applications may require additional configurations.

Orchestration Complexity:

While Docker simplifies containerization, managing large-scale container deployments with orchestration tools like Kubernetes can introduce complexity. Understanding and configuring these tools can be challenging for some users.

Networking Challenges:

Configuring and managing container networks can be complex, especially when dealing with multiple containers that need to communicate with each other. Understanding Docker networking modes and creating custom networks may require additional expertise.

Image Size:

Docker images can become large, especially if they include unnecessary dependencies. This can impact storage requirements and increase the time it takes to transfer images over the network.

While Docker offers many advantages in terms of portability, isolation, and resource efficiency, it's important to be aware of potential challenges such as security considerations, the learning curve, and the complexities of managing large-scale containerized applications, so that you can make an informed decision about adopting Docker.

Terminologies Used in Docker (Most Common)

Dockerfile:

A Dockerfile is a text file that contains a set of instructions for building a Docker image. These instructions define the base image, set up environment variables, copy files into the image, and execute commands during the image build process. Understanding the Dockerfile is crucial for customizing and optimizing the containerized environment for a Node.js TypeScript API.

docker-compose:

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to specify the services, networks, and volumes for your application in a single YAML file (docker-compose.yml). This simplifies the process of orchestrating multiple containers, making it easier to manage complex configurations and dependencies.

.dockerignore:

Similar to .gitignore, the .dockerignore file specifies patterns for files and directories that should be excluded when building a Docker image. This helps reduce the size of the image by excluding unnecessary files and directories like node_modules, making the image more efficient and lightweight.

Docker Image:

A Docker image is a lightweight, standalone, and executable package that includes everything needed to run a software application, including the code, runtime, libraries, and dependencies. Images are the building blocks used to create Docker containers.

Docker Container:

A Docker container is a running instance of a Docker image. It encapsulates the application and its dependencies in an isolated environment, ensuring consistency across different environments. Containers are portable and can run on any system that supports Docker.

Docker Registry:

A Docker registry is a repository for storing and sharing Docker images. Popular public registries include Docker Hub, where you can find a vast collection of pre-built images. Organizations often use private registries for hosting proprietary images.
Other common Docker registries include Amazon Elastic Container Registry (Amazon ECR) for AWS, Google Container Registry (GCR) for Google Cloud and Azure Container Registry (ACR) for Microsoft Azure.
In this comprehensive guide, we will be deploying our image to Docker Hub, both manually and with a custom CI/CD pipeline using GitHub Actions. A future article may focus on deploying to one of the other registries discussed, but the steps are generally the same apart from minimal platform-specific configuration.

When you deploy a Docker image to Docker Hub, it means that your containerized application is available for others to pull and run on their local systems or servers. However, Docker Hub itself is primarily a registry for Docker images and doesn't provide a runtime environment for hosting your application.
If you want to run your backend application in a production environment, you'll still need to deploy it to a hosting platform. Platforms like Heroku, AWS, Google Cloud, Azure, and others provide infrastructure and services to run and manage containerized applications.

Volume:

A Docker volume is a mechanism for persisting data generated by and used by Docker containers. Volumes enable data to persist beyond the lifecycle of a container, making them essential for scenarios like database storage or sharing data between containers.
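
As a quick illustration, here is how you might create and use a named volume from the command line; the volume and container names below are just examples:

# Create a named volume
docker volume create my-app-data

# Run a MongoDB container that stores its data in the named volume
docker run -d --name my-mongo -v my-app-data:/data/db mongo

# The data in my-app-data survives even after the container is removed
docker rm -f my-mongo
docker volume ls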

Docker Network:

Docker networks provide communication between containers. By default, containers on the same network can communicate with each other using container names as hostnames. Docker networks facilitate seamless communication between different services within a Dockerized application.
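
For example, a user-defined bridge network lets two containers reach each other by name; the container and image names below are illustrative:

# Create a user-defined bridge network
docker network create my-app-network

# Start a database container and an API container on the same network
docker run -d --name mongodb --network my-app-network mongo
docker run -d --name node-api --network my-app-network -p 8800:8800 my-api-image

# Inside node-api, the database is now reachable at the hostname "mongodb"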

Understanding these common Docker terminologies is fundamental to mastering the art of containerization and will serve as a solid foundation as we proceed with Dockerizing a Node.js TypeScript API.

Prerequisites

Before we embark on the journey of Dockerizing a Node.js TypeScript API, it's essential to ensure that your development environment is properly configured. This section outlines the prerequisites that you need to have in place before diving into the Dockerization process.

Node.js and npm:

Have Node.js and npm (Node Package Manager) installed on your system. These are essential for developing and running our Node.js TypeScript API. You can download Node.js from the official website or use a version manager like nvm for better control over Node.js versions.
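
For example, if you go the nvm route, installing and switching to an LTS version of Node.js looks roughly like this (assuming nvm is already installed):

# Install and use the latest LTS release of Node.js
nvm install --lts
nvm use --lts

# Confirm the versions
node --version
npm --version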

Text Editor or IDE:

Choose a text editor or integrated development environment (IDE) for writing your TypeScript code. Popular choices include Visual Studio Code, Atom, or any editor of your preference.

MongoDB Atlas Account:

If you plan to use MongoDB as a database, sign up for a MongoDB Atlas account. This cloud-based database service offers a free tier and simplifies the process of managing MongoDB databases for our Dockerized Node.js TypeScript API.

GitHub Account (Optional):

For version control and for implementing the CI/CD pipeline in later sections, we will be using GitHub, so ensure you have an account set up. This is optional but highly recommended for a streamlined development workflow.

Basic Knowledge of TypeScript and Express:

Familiarize yourself with TypeScript and Express.js, as they form the foundation of our Node.js API. If you're new to these technologies, consider exploring this article and this one to get comfortable with the basics.

Understanding of RESTful APIs:

A basic understanding of RESTful API concepts is beneficial. This knowledge will help you structure the API endpoints effectively as we Dockerize our Node.js TypeScript API.

By ensuring that these prerequisites are met, you'll be well-prepared to follow the subsequent steps in this comprehensive guide. Now, let's move forward and start building our Dockerized Node.js TypeScript API.

Folder Structure

The following is the overall folder structure for our Dockerized Node.js-TypeScript API:

(Screenshot: overall project folder structure)

We will understand what each file contains step by step.

Steps to Dockerize our Application

Step 1: Install Docker and Docker Compose

Depending on your operating system, installation of Docker may differ. If you're using Windows or Mac, I highly recommend installing Docker Desktop; follow this guide for Windows and this one for Mac.
As of this writing, Docker has separate desktop applications for Macs with Apple silicon and Macs with Intel chips, so if you're using a Mac, be careful to install the one that matches your processor.

If you're using Windows, do not forget to enable virtualization in the BIOS/UEFI settings. After that, also enable Hyper-V (hardware virtualization). It is Microsoft's hypervisor technology, and it provides virtualization capabilities for running virtual machines (VMs) on Windows.
When you install Docker on Windows, it uses Hyper-V to create a lightweight virtual machine called MobyLinuxVM to host the containers. Hyper-V is responsible for managing these containers and ensuring their isolation from the host system.

If you're using Linux, I recommend installing Docker Engine rather than Docker Desktop; it is the easiest and quickest way to get started on Linux. You can follow this guide to complete its setup.

If you're using a Debian-based Linux distribution, here are the commands to help you install everything required without leaving your command line:

First, update the package index:

sudo apt update

Install necessary dependencies:

sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

Add the Docker GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker repository:

echo "deb [signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the package index again:

sudo apt update

Install Docker:

sudo apt install -y docker-ce docker-ce-cli containerd.io

Add your user to the docker group to run Docker commands without sudo:

sudo usermod -aG docker $USER

Log out and log back in or restart your system to apply the group changes.
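
Alternatively, if you prefer not to log out, you can apply the new group membership in your current shell and verify that Docker works without sudo:

# Start a shell with the docker group applied
newgrp docker

# Verify that Docker runs without sudo
docker run hello-world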

Install Docker Compose:

Download the Docker Compose binary:

sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Apply execute permissions to the binary:

sudo chmod +x /usr/local/bin/docker-compose

Test the installation:
Run:

docker-compose --version

to confirm the Docker Compose installation, and:

docker --version

to confirm the Docker installation.
These commands should display the installed Docker Compose and Docker versions respectively.

Step 2: Initialize a NodeJS-TypeScript Project

Instead of initialising a new Node.js app, we will clone our previous project, from the article about unit testing with Node and TypeScript using Jest and Supertest, so that we can concentrate on Docker in today's article.
It was a 3-part article and here it is:

To clone the repository, first create a new folder in your desired location and open it with your preferred code editor, e.g. VS Code.
I called mine node_ts_api_with_docker, but feel free to use any other name you like.
On the terminal, run:

git clone https://github.com/Abeinevincent/nodejs_unit_testing_guide .

to clone the repository to your created folder.
Install dependencies:

npm install

Step 3: Dockerize the Application with Dockerfile

Create a Dockerfile in the project root. This file defines the steps for building your Docker image. Use the following as a starting point:

# Use an official Node.js runtime as a parent image
FROM node:latest as builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy all files from the current directory to the working directory
COPY . .

# Development stage
FROM builder as development
# Set NODE_ENV to development
ENV NODE_ENV=development

# Expose the port the app runs on
EXPOSE 8800

# Command to run the application(in development)
CMD ["npm", "run", "dev"]

# Production stage
FROM builder as production
# Set NODE_ENV to production
ENV NODE_ENV=production

# Run any production-specific build steps if needed here

# Run the production command
CMD ["npm", "start"]


Let's understand each line of code in our Dockerfile one at a time, because this is why we are here 😄

FROM node:latest as builder

This line specifies the base image for your Docker image. In this case, it uses the official Node.js image from Docker Hub with the latest tag. The as builder is an optional alias for this stage, allowing you to reference this stage later in the Dockerfile.

WORKDIR /usr/src/app

Sets the working directory inside the container to /usr/src/app. This is the directory where subsequent commands will be executed, and it's a common practice to organize your application code.

COPY package*.json ./

Copies the package.json and package-lock.json files from the host machine (where your Dockerfile is located) to the working directory inside the container.

RUN npm install

Installs the Node.js application dependencies specified in package.json using the npm package manager. This step is crucial to ensure that all required dependencies are available in the container.

COPY . .

Copies all files from the current directory on the host machine to the working directory inside the container. This includes the application code, configuration files, and any other files required for the application.

FROM builder as development

Starts a new stage in the Dockerfile named development, using the alias builder from the previous stage. This allows you to reference the previous stage's files and dependencies.

ENV NODE_ENV=development

Sets the NODE_ENV environment variable to development. This helps us configure different behaviors based on the environment.

EXPOSE 8800

Informs Docker that the application inside the container will use port 8800. Note that this does not actually publish the port; it's more of a documentation feature.

CMD ["npm", "run", "dev"]

Specifies the default command to run when the container starts in development mode. In this case, it runs the npm run dev script, assuming that it's defined in your package.json.

FROM builder as production

Starts a new stage in the Dockerfile named production, using the alias builder from the previous stage.

ENV NODE_ENV=production

Sets the NODE_ENV environment variable to production. This is common for production environments to ensure optimized settings.

Run any production-specific build steps if needed here

This line is a comment indicating that you can add any production-specific build steps or configurations if needed. This is a placeholder for actions specific to the production environment.

CMD ["npm", "start"]

Specifies the default command to run when the container starts in production mode. In this case, it runs the npm start script, assuming that it's defined in your package.json.

Collectively, we have defined a multi-stage Dockerfile for a Node.js application. It allows us to build the application with different settings for development and production environments. We have structured it to optimize caching during the build process and to separate concerns between the development and production stages.

Create a .dockerignore File:

Create a .dockerignore file in the project root. This file specifies patterns for files and directories that should be excluded during the Docker image build process. Place the following code:

node_modules
dist
.git
.dockerignore
npm-debug.log

Exclude unnecessary files such as node_modules, dist (transpiled TypeScript files), and .git from being copied into the Docker image.

All set. Let's first try to build our Docker image and run it before we start on the Docker Compose journey.
To build the image in development, run the following command:

sudo docker build -t node_ts_api_with_docker:development --target development .

The command builds a Docker image named node_ts_api_with_docker with the tag development. It specifically targets the development stage in our multi-stage Dockerfile. This allows us to build the development stage independently of the production stage, which can be useful for testing and development workflows.
The . at the end of the command specifies the build context. It represents the current directory where the Dockerfile is located. All files and directories in this context will be sent to the Docker daemon during the build process.

If all is good, you should have this in your terminal:

(Screenshot: successful Docker image build output in the terminal)

Before we run our built image, let's first check that the image is available. To do that, run:

docker images

and check if your built image exists in the list of images you have.
If it does, run the following command to run the image:

sudo docker run -p 8800:8800 -v $(pwd):/usr/src/app -e PORT=8800 node_ts_api_with_docker:development

The sudo docker run command is used to launch a Docker container based on the specified image. In our case, the -p 8800:8800 flag maps port 8800 on the host machine to port 8800 within the running container, allowing external access to the application. The -v $(pwd):/usr/src/app flag mounts the current working directory ($(pwd)) on the host to the /usr/src/app directory inside the container, ensuring that any changes in the code are reflected in the container. The -e PORT=8800 flag sets the environment variable PORT to the value 8800 inside the container, allowing the Node.js application to use the specified port. Finally, node_ts_api_with_docker:development specifies the Docker image to be used; the development tag refers to the image we built from the development stage of the multi-stage Dockerfile. Our command orchestrates the containerization of a Node.js application, exposing it on port 8800 and providing seamless synchronization of code changes between the host and the container during development.

If all is well you should see the following in the terminal:

(Screenshot: container startup logs showing the development server running)
Congratulations, you have successfully run a Docker image in development mode.
From now on you can develop inside the Docker container. Try making some changes to the code and the development server inside the container should restart using nodemon upon saving.
To discover other Docker commands that might help you in various scenarios, like listing running containers or killing/stopping containers (you may need this to free some ports), among others, run the following command:

docker --help

The command lists all the available Docker commands alongside what each can help you achieve.
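
For reference, here are a few commands you will likely reach for often while working through this guide; the container IDs below are placeholders:

# List running containers (add -a to include stopped ones)
docker ps

# Follow the logs of a container
docker logs -f <container_id>

# Stop and remove a container (e.g. to free a port)
docker stop <container_id>
docker rm <container_id>

# Remove an image you no longer need
docker rmi node_ts_api_with_docker:development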

Because we are in development, you can spin up/split another terminal instance and run your unit tests (inside the Docker container). To do that, you can use the following command:

sudo docker exec -it 0f9380148bb5  npm test

0f9380148bb5 is the ID of the Docker container whose unit tests you're running; you can find yours with docker ps, as shown in the commands above.
If all is good, you should see the following in the terminal:

(Screenshot: unit test results from inside the container)

Docker Compose Setup for MongoDB

Docker Compose is a tool for defining and running multi-container Docker applications. Utilizing Docker Compose for MongoDB in our Node.js API brings numerous benefits. It ensures environment consistency by defining the entire development environment, including MongoDB and the Node.js API, in a single configuration file, facilitating collaboration among developers. With a simplified setup, developers can start the entire stack with a single command, reducing configuration issues and ensuring proper dependency management. Docker Compose also offers isolation, running MongoDB and the Node.js API in separate containers for easier troubleshooting and independent scaling. This approach simplifies testing environments, as configurations can be tailored for testing scenarios. Furthermore, the same Docker Compose configuration used for development and testing can serve as a foundation for production deployment, promoting consistency across environments and minimizing deployment issues.

Since we are using MongoDB Atlas for our database, we don't have any specific requirement that necessitates Docker Compose (such as running multiple services locally). However, for illustration, we will create a simple service that runs our database independently of the API.
If you use a local MongoDB instance (for example one you browse with MongoDB Compass) or an RDBMS like PostgreSQL or MySQL, this is a must: you genuinely need docker-compose to run the database service alongside the API service.
In our case, even though MongoDB Atlas gives us a fully managed cloud database that handles all the complexity of deploying, managing, and healing deployments in the cloud, I will still demonstrate how docker-compose can help you run more than one container at a go.

Create a new file in the root of the project, name it: docker-compose.yml and place the following code:

version: "3.8"

services:
  node-api:
    build:
      context: . # "." means this docker-compose.yml is in the API's root directory; in this scenario, you can even omit this option.
      dockerfile: Dockerfile
    ports:
      - "8800:8800"
    depends_on:
      - mongodb
    env_file:
      - .env # Use the same .env file for both services
    working_dir: /usr/src/app
    volumes:
      - /path/to/your/api:/usr/src/app
    command: npm run dev

  mongodb:
    image: mongo
    env_file:
      - .env # Use the same .env file for both services
    volumes:
      - mongodb-data:/data/db

volumes:
  mongodb-data:

Let's also understand each line of code in our docker-compose.yml file before we proceed.
Our Docker Compose file orchestrates two services (node-api and mongodb) and uses named volumes to ensure data persistence for the MongoDB service. The shared .env file contains environment variables used by both services.

version: "3.8": Specifies the version of the Docker Compose file format being used. In this case, it's version 3.8.
services: Begins the section where you define your services.

node-api: This is the name of the service. It represents your Node.js API service.

build: This section is used to specify the build context and Dockerfile for building the image.

context: Specifies the path to the directory containing your Node.js API code. In this case, it's the current directory (.) where the docker-compose.yml file is located.

dockerfile: Dockerfile: Specifies the name of the Dockerfile to use for building the image. In this case, it's Dockerfile in the same directory.
ports: Maps port 8800 on the host to port 8800 in the container. This allows you to access your Node.js API at http://localhost:8800 on your host machine.
depends_on: Specifies that this service depends on the mongodb service, ensuring that it starts after mongodb.

env_file: Specifies the file or files from which to read environment variables. In this case, it's .env, which is shared between both services.
working_dir: /usr/src/app: Sets the working directory inside the container to /usr/src/app.

volumes: Mounts your local project directory (/path/to/your/api in the example above; on my machine it is /home/abeine/eoyprojects/nodejsapis/containerisedprojects/node_ts_api_with_docker) into the container at /usr/src/app. This facilitates live code reloading during development.

command: npm run dev: Specifies the command to run when the container starts, which is npm run dev for your Node.js API.
mongodb: This is the name of the MongoDB service.

image: mongo: Specifies the Docker image to use for the MongoDB service. In this case, it's the official MongoDB image from Docker Hub.

env_file: Specifies the file or files from which to read environment variables. In this case, it's also .env, which is shared between both services.
volumes: Creates a named volume named mongodb-data and mounts it to /data/db inside the MongoDB container. This ensures data persistence.

mongodb-data: This is the name of the volume created for MongoDB data persistence.

To execute the docker-compose.yml configuration, run the following command:

docker-compose up

inside the directory where docker-compose.yml is.

If everything is well, you should see the success messages in the terminal like we saw before: Backend server is running at port 8800 and MongoDB connected to the backend successfully.
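
While iterating on this setup, a few docker-compose commands may come in handy; all of them are run from the directory containing docker-compose.yml:

# Start the services in the background
docker-compose up -d

# Tail the logs of both services
docker-compose logs -f

# Rebuild the images after changing the Dockerfile
docker-compose up -d --build

# Stop and remove the containers (add -v to also drop the mongodb-data volume)
docker-compose down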

Deploying Docker Images

The easiest, most enjoyable, yet trickiest part of Docker is deployment. In this article we will first deploy our Docker image to a public Docker registry called Docker Hub, and we will see how other team members can pull the image and start working right away after running it.
For large teams, or for hosting images in private repositories on Docker Hub, charges may apply. You can visit the pricing page for a detailed billing structure.

Deploying Docker Images on Docker Hub

To deploy our previously built Docker image to Docker Hub, head over to the Docker Hub official website and log in, or create an account if you don't have one.
After logging in to Docker Hub, you also need to log in on your terminal. Use the following command:

docker login

and follow the on-screen instructions to complete the login process.
After that, create a new repository on Docker Hub and grab the repository name. In my case, I used the same name as the local image (node_ts_api_with_docker).

Back in the terminal, run the following command to build the image for deployment:

docker build -t abeinevicenthome/node_ts_api_with_docker:latest .

I used latest as an example of a generic tag. You might want to use a version number or any other tag that suits your versioning strategy.
Also, replace abeinevicenthome with your Docker Hub username.

Let's now push our image to Docker Hub. Use the following command:

docker push abeinevicenthome/node_ts_api_with_docker:latest

If the push is successful, you should have something like this in the terminal:

(Screenshot: successful docker push output in the terminal)
Open Docker Hub once again and confirm your image is there:

(Screenshot: the pushed image listed on Docker Hub)

Congratulations once again, you have successfully pushed a Docker image to the Docker Hub registry.

As you can see on the bottom right, automated builds are only available with Pro, Team and Business subscriptions.
You also get access to webhooks that can help you automate your builds if you are on one of those paid plans.

However, on the free tier, we still have the option of building a custom CI/CD pipeline on any platform of our choice, and in this article we will be using GitHub Actions as the CI/CD platform.
This will help us build and push new images whenever we push to the master branch of our GitHub repository.

The process is as easy as setting up any other pipeline with GitHub Actions.
In the project root, create a new file, .github/workflows/docker_deploy.yml, and place the following code:

name: Node Docker Deploy

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Install Dependencies
        run: npm install

      - name: Run Unit Tests
        env:
          MONGODB_URL: ${{ secrets.MONGODB_URL }}
          JWT_EXPIRY_PERIOD: ${{ secrets.JWT_EXPIRY_PERIOD }}
          JWT_SEC: ${{ secrets.JWT_SEC }}
        run: npm test

  build:
    runs-on: ubuntu-latest

    needs: test

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Docker Image
        run: docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/node_ts_api_with_docker:latest .

      - name: Push Docker Image
        run: |
          echo ${{ secrets.DOCKER_HUB_TOKEN }} | docker login -u ${{ secrets.DOCKER_HUB_USERNAME }} --password-stdin
          docker push ${{ secrets.DOCKER_HUB_USERNAME }}/node_ts_api_with_docker:latest



Head over to GitHub and create a new repository, then add all the secrets used in the pipeline, such as DOCKER_HUB_USERNAME, DOCKER_HUB_TOKEN and MONGODB_URL, as repository secrets. Follow this path: Settings -> Secrets and Variables -> Actions -> Repository Secrets -> New Repository Secret.
DOCKER_HUB_TOKEN can be generated from your Docker Hub account settings. Follow this path to get there: Profile Image -> My Account -> Security -> Access Tokens -> New Access Token, and give it at least read and write permissions.
The other secrets are self-explanatory: the MongoDB URL and JWT secrets are already available in your local .env file. Copy them into your repository secrets too, since we are not pushing .env to GitHub and we need them when running the unit tests.
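
If you prefer the command line, the GitHub CLI can set these repository secrets as well; this is a minimal sketch assuming you have gh installed and authenticated, with placeholder values:

# Set the secrets used by the workflow (replace the placeholder values)
gh secret set DOCKER_HUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKER_HUB_TOKEN --body "your-dockerhub-access-token"
gh secret set MONGODB_URL --body "your-mongodb-connection-string"
gh secret set JWT_SEC --body "your-jwt-secret"
gh secret set JWT_EXPIRY_PERIOD --body "3d"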

Let's further understand our pipeline:
Our GitHub Actions workflow is designed to automate the testing, building, and deployment of our Node.js application with Docker.
The workflow consists of two jobs, "test" and "build," triggered on a push to the master branch. In the "test" job, the repository is checked out, dependencies are installed using npm, and unit tests are executed with environment variables such as MONGODB_URL, JWT_EXPIRY_PERIOD, and JWT_SEC being set from GitHub Secrets. The "build" job, dependent on the "test" job, involves checking out the repository again, building a Docker image with the application code, and pushing the image to Docker Hub using the provided secrets for Docker Hub authentication.
Our workflow streamlines the process of testing and deploying a Node.js application with Docker, ensuring a seamless integration into a CI/CD pipeline.

All set. Let's push our code to GitHub and observe our GitHub Actions workflow. To do that, run the following commands:

git add .
git commit -m "YourCommitMessage"
git push -u origin master

If all is good, you should see the following in the GitHub Actions tab:

(Screenshot: successful workflow run in the GitHub Actions tab)

Pulling Docker Images from DockerHub

The next step is to understand how your teammates will pull and run the image. To pull the image from Docker Hub, run the following command inside the directory housing the source code:

docker pull abeinevicenthome/node_ts_api_with_docker:latest

After pulling the image, there is no need to build it again; the image already contains the application and its dependencies, so your teammates can run it directly as previously discussed. A sample run command is shown below.
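
For instance, a teammate could start the pulled image directly; this is a sketch assuming the same port used earlier and a local .env file holding the MongoDB and JWT variables:

# Run the pulled image, publishing port 8800 and passing environment variables from .env
sudo docker run -p 8800:8800 --env-file .env abeinevicenthome/node_ts_api_with_docker:latest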

Summary:

The Docker image crafted in this guide serves as a versatile and portable encapsulation of the Node.js API developed with TypeScript. This image, containing all the necessary dependencies and configurations, becomes a self-contained unit that can be effortlessly deployed on various hosting services. Whether it's Heroku, AWS, Google Cloud, Microsoft Azure or any other platform supporting Docker, the consistent environment provided by the Docker image ensures seamless deployment. The image acts as a standardized package, abstracting away the underlying infrastructure intricacies, and enabling developers to deploy their application across diverse hosting services with minimal effort. This level of abstraction not only simplifies deployment but also promotes compatibility, making the Dockerized Node.js API an adaptable solution for different hosting environments.

Conclusion:

In conclusion, the integration of Docker with a Node.js API written in TypeScript provides a robust and efficient solution for modern software development practices. The GitHub Actions workflow showcased here not only automates the testing and deployment processes but also encapsulates the application in a portable and reproducible Docker image. This containerization approach enhances the application's scalability, simplifies deployment across various environments, and facilitates seamless collaboration among developers. By harnessing the power of Docker and TypeScript in tandem, developers can achieve greater consistency, reliability, and ease of maintenance for their Node.js applications, paving the way for a more streamlined and agile development workflow. As technology continues to evolve, embracing containerization with Docker remains a valuable strategy for building resilient and scalable Node.js APIs.
Till next time, happy coding!

Important Links:
