STEVE

DOCKER FOR EVERYONE - (Learn about Caching, Load-Balancing, and Virtual Machines).

INTRODUCTION

Hello there and welcome to this comprehensive tutorial on Docker, where I will be guiding you through the exciting world of load-balancing, caching, and deploying Docker containers to cloud services. Whether you're a beginner or an experienced developer, this tutorial is designed to be accessible and beneficial for everyone.

In this tutorial, we will cover a range of fundamental concepts and practical techniques to enhance your Docker skills. First and foremost, we'll delve into the basics of Docker and containerization, helping you understand the core principles and advantages of this powerful technology.

One of the key topics we'll explore is caching user sessions using Redis. Redis is an open-source, in-memory data structure store that allows for lightning-fast data retrieval, making it an ideal tool for caching frequently accessed data, like user sessions. I will guide you through the process of integrating Redis into your Docker workflow to optimize the performance of your applications.

Another critical aspect we'll address is load balancing using Nginx. Nginx is a high-performance web server that excels at distributing incoming network traffic across multiple endpoints. By incorporating Nginx into your Docker environment, you can effectively distribute the workload, ensuring smooth and efficient handling of incoming API requests.

Finally, we'll cover deploying your Docker containers to Microsoft Azure or your preferred cloud service. The ability to deploy applications to the cloud offers numerous benefits, including scalability, reliability, and easy access from anywhere. I'll provide step-by-step instructions to facilitate a seamless deployment process.

Before we begin, the only prerequisite for this tutorial is having a working API to follow along. If you don't have your own API, don't worry! You can simply clone my repository on Github by following this link : DOCKER, which we'll use throughout the tutorial.

I am committed to making this tutorial as comprehensive and informative as possible, hence the title "Docker for everyone." However, if you encounter any challenges along the way, feel free to reach out to me via the comment section. Additionally, don't hesitate to use online resources to overcome any roadblocks you may encounter during your learning journey.

What is Docker?

Docker is a powerful tool that provides a standardized and efficient way to package, distribute, and run applications. It addresses several challenges faced in traditional software development and deployment processes, including but not limited to the following:

  1. Compatibility Issues: Docker ensures consistent behavior across different environments by encapsulating applications and their dependencies within containers. This eliminates compatibility issues that arise due to differences in operating systems, libraries, and configurations.

  2. Dependency Management: With Docker, developers define application dependencies in a Dockerfile, and Docker takes care of including all required libraries and frameworks in the container image. This simplifies dependency management and ensures reproducible deployments.

  3. Deployment Complexities: Docker's containerization simplifies application deployment, especially in complex setups with multiple microservices. It allows each service to run in its own container, making scaling, deployment, and management easier.

  4. Scalability and Resource Utilization: Docker enables seamless application scaling through container orchestration platforms like Kubernetes or Docker Swarm. These platforms automatically adjust the number of containers based on demand, optimizing resource utilization and ensuring smooth user experiences.

In summary, Docker provides an efficient solution to challenges such as compatibility, dependency management, deployment complexities, and scalability, making it an essential tool for modern software development and deployment workflows.

Now that we've set the stage, let's dive into the fascinating world of Docker, load-balancing, caching, and virtual machines. Get ready to unlock the true potential of your applications with Docker's powerful capabilities.

As mentioned earlier, we will use a preexisting API. You can clone the repository from GitHub via this link: DOCKER

After cloning, you will need to supply the following environment variables:

file structure
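For reference, here is a sketch of a typical .env file for this project; the variable names are taken from the docker-compose files we will write later in this tutorial, and the placeholder values are only illustrations - replace them with your own credentials.

PORT=3000
MONGO_USERNAME=<your MongoDB username>
MONGO_PASSWORD=<your MongoDB password>
REDIS_HOST=<your Redis Cloud host>
REDIS_PORT=<your Redis Cloud port>
REDIS_PASSWORD=<your Redis Cloud password>
SESSION_SECRET=<any long random string>
NODE_ENV=development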

Among the environment variables above are some credentials related to Redis. As mentioned earlier, we will use Redis to cache user sessions, so let me show you how to set that up in a typical Node.js application. First, create a redis.js file in the config folder and populate it with the following code:

const { createClient } = require("redis");

const client = createClient({
    password: process.env.REDIS_PASSWORD,
    socket: {
        host: process.env.REDIS_HOST,
        port: process.env.REDIS_PORT,
    }
});

client.on("connect", () => {
    console.log("Connected to redis...")
})

client.on("error", (error) => {
    console.log("Error connecting to redis...", error)
})

module.exports = client

Please ensure that you have a Redis instance running so that you can easily connect it to your Node.js application. Visit Redis Cloud to create a new Redis instance.

Now, in app.js, we will connect Redis to our API and use the express-session module together with connect-redis to store user sessions in our Redis database.

This is the relevant code that achieves that purpose.

const redisClient = require("./config/redis");
const RedisStore = require('connect-redis').default;
const session = require('express-session');

// Initialize session storage.
app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  name: 'express-session',
  cookie: {
    secure: false,
    httpOnly: true,
    maxAge: 60000, // 1 minute; you can extend the maxAge value to suit your needs.
    // You can also set other cookie options if needed.
  },
  resave: false, // Set this to false to prevent session being saved on every request.
  saveUninitialized: true, // Set this to true to save new sessions that are not modified.
}));

// Note: connectDB, mongoUrl, PORT, and app are defined elsewhere in app.js
const start = async () => {
  try {
    await redisClient.connect() // connect the API to Redis
    await connectDB(mongoUrl || 'mongodb://localhost:27017/express-mongo');
    app.listen(PORT, () => {
      console.log(`Server is running on port ${PORT}`);
    });
  } catch (error) {
    console.log(error);
  }
};

start();

If everything works fine, our terminal should look like this :

connected to redis terminal

And if we try to log in, we should see our cookie express-session in the cookie section.

cookies

After 1 minute, the session should expire and you will get this error when you hit the get-all-users endpoint:

Session Expired
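For context, the "Session Expired" response typically comes from a small guard that checks the Redis-backed session before serving protected routes. The snippet below is only a sketch of such a middleware (the field name req.session.userId is an assumption; the actual check in the repository may differ):

// sketch: reject requests whose session has expired or was never created
const requireSession = (req, res, next) => {
  if (!req.session || !req.session.userId) {
    return res.status(401).json({ message: "Session expired. Please log in again." });
  }
  next();
};

module.exports = requireSession;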

Great! Now that our cache works as expected, let us containerize our application. But before we dive into that, let's take a moment to familiarize ourselves with some essential keywords related to Docker:

  1. Container: A container is a lightweight, isolated execution environment that contains an application and all its dependencies. It encapsulates the application, libraries, and configurations required to run the software. Containers provide consistency and portability, ensuring that the application runs consistently across different environments.

  2. Image: An image is a read-only template used to create containers. It includes the application code, runtime, libraries, environment variables, and any other files required for the application to run. Docker images are the building blocks for containers.

  3. Volume: A volume in Docker is a persistent data storage mechanism that allows data to be shared between the host machine and the container. Volumes enable data to persist even after the container is stopped or deleted, making it ideal for managing databases and other persistent data.

  4. Dockerfile: A Dockerfile is a text file that contains instructions for building a Docker image. It specifies the base image, adds application code, sets environment variables, and defines other configurations needed for the container.

  5. Dockerignore: The .dockerignore file is used to specify which files and directories should be excluded from the Docker image build process. This is useful to prevent unnecessary files from being included in the image and reduces the image size (a sample .dockerignore is shown below).

  6. Docker Compose: Docker Compose is a tool for defining and managing multi-container Docker applications. It uses a YAML file to define the services, networks, and volumes required for the application to run. Compose simplifies the process of managing complex applications with multiple containers.

  7. Services: In the context of Docker Compose, services refer to the individual components of a multi-container application. Each service represents a separate container running a specific part of the application, such as a web server, a database, or a cache.

Understanding these keywords will help you confidently move forward with containerizing your application using Docker. Let's explore how to utilize Docker to package our application into containers for seamless deployment and scalability.
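As promised in the keyword list, here is a sample .dockerignore you could drop in the project root. The exact entries depend on your repository, but excluding node_modules and local secrets is the usual starting point:

node_modules
npm-debug.log
.env
.git
.gitignore
Dockerfile
docker-compose*.yml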

First, as described above, create a Dockerfile in the root directory and populate it with the following code :

# specify the node base image with your desired version node:<version>
FROM node:16

WORKDIR /app

# copy the package.json to install dependencies
COPY package.json .

# install dependencies
RUN npm install

ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; \
    then npm install; \
    else npm install --only=production; \
    fi

# copy the rest of the files
COPY . ./

# replace this with your application's default port
EXPOSE 3000

# start the app
CMD ["node", "app.js"]

Let's break down the configuration in the Dockerfile step by step:

  1. FROM node:16: This line sets the base image for our Docker container. In this case, we are using the official Node.js Docker image with version 16 as our starting point. This base image includes the Node.js runtime and package manager, which we need to run our application.

  2. WORKDIR /app: This line sets the working directory inside the container to /app. This is the directory where our application code will be copied and where we'll execute commands.

  3. COPY package.json .: This line copies the package.json file from our local directory (the same directory as the Dockerfile) into the container's working directory. We do this first to take advantage of Docker's layer caching mechanism. It allows Docker to cache the dependencies installation step if the package.json file hasn't changed.

  4. RUN npm install: This command runs the npm install command inside the container to install the application's dependencies listed in the package.json file. This ensures that all required packages are available inside the container.

  5. ARG NODE_ENV: This line declares an argument named NODE_ENV. Arguments can be passed to the Docker build command using --build-arg option. It allows us to specify whether we are building the container for development or production environment.

  6. RUN if [ "$NODE_ENV" = "development" ]; ...: This conditional statement checks the value of the NODE_ENV argument. If it is set to "development," it will run npm install again, installing the development dependencies. Otherwise, if NODE_ENV is set to anything other than "development" (e.g., "production"), it will only install production dependencies using npm install --only=production.

  7. COPY . ./: This line copies all the files and directories from our local directory (the same directory as the Dockerfile) into the container's working directory (/app). This includes our application code, configuration files, and any other necessary files.

  8. EXPOSE 3000: This instruction specifies that the container will listen on port 3000. It doesn't actually publish the port to the host machine; it's merely a way to document the port that the container exposes.

  9. CMD ["node", "app.js"]: This sets the default command to be executed when the container starts. In this case, it runs the Node.js application using the node command with the entry point file app.js.

In summary, the Dockerfile is a set of instructions to build a Docker image for our Node.js application. It starts from the official Node.js image, sets up the working directory, installs dependencies based on the environment (development or production), copies our application code, specifies the exposed port, and defines the command to start our application. With this configuration, we can create a containerized version of our Node.js application that can be easily deployed and run consistently across different environments.
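As a quick aside, if you ever wanted to build this image directly with the Docker CLI instead of docker-compose, the NODE_ENV argument from step 5 would be passed with --build-arg; the image tag my-node-app used here is just an example:

# development build (installs devDependencies as well)
docker build --build-arg NODE_ENV=development -t my-node-app:dev .

# production build (installs only production dependencies)
docker build --build-arg NODE_ENV=production -t my-node-app:prod .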

Next, let us create and populate three docker-compose files in the root directory of our application:

First docker-compose.yml file :

version: "3" # specify docker-compose version
services:
  nginx:
    image: nginx:stable-alpine # specify image to build container from
    ports:
      - "5000:80" # specify port mapping
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf # mount nginx config
  node-app:
    build: . # use the Dockerfile in the current directory
    environment:
      - PORT=3000 # port the Node.js app listens on inside the container

Second docker-compose.dev.yml file :

version: "3"
services:
  nginx:
    image: nginx:stable-alpine # specify image to build container from
    ports:
      - "3000:80" # specify port mapping
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro # mount nginx config file
  node-app:
    build:
      context: . # current directory
      args:
        - NODE_ENV=development
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev


Third docker-compose.prod.yml file :

version: "3"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  node-app:
    deploy:
      restart_policy:
        condition: on-failure
    build: 
      context: .
      args:
        - NODE_ENV=${NODE_ENV}
    volumes:
      - ./:/app
      - /app/node_modules
    command: npm start
    environment:
      - MONGO_USERNAME=${MONGO_USERNAME}
      - MONGO_PASSWORD=${MONGO_PASSWORD}
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_PORT=${REDIS_PORT}
      - SESSION_SECRET=${SESSION_SECRET}
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - NODE_ENV=${NODE_ENV}

By using these docker-compose files, we can easily manage our containers and define different configurations for development and production environments. The combination of Docker and docker-compose simplifies the process of containerizing and deploying our application, making it more efficient and scalable in real-world scenarios.

Now let us break down the contents of all three files.

  • docker-compose.yml file:

The docker-compose.yml file is the main configuration file for our application. It allows us to define and manage multiple services, each running in its own container. Let's go through its contents:

version: "3": This line specifies the version of the docker-compose syntax that we are using. In this case, we are using version 3.

services: This section defines the different services (containers) that compose our application.

nginx: This service is responsible for running the Nginx web server.

image: nginx:stable-alpine: It specifies the base image for the nginx container, which will be pulled from Docker Hub. We are using the stable Alpine version of Nginx, a lightweight and efficient web server.

ports: This line maps port 5000 on the host machine to port 80 inside the nginx container. This allows us to access the Nginx server through port 5000 on our local machine.

volumes: Here, we mount the ./nginx/default.conf file from the host machine to the container's /etc/nginx/conf.d/default.conf path. This file is used to configure Nginx.

node-app: This service represents our Node.js application.

build: .: It tells Docker to build the node-app container using the Dockerfile located in the current directory (.).

environment: In this line, we set the PORT environment variable to 3000 inside the container. This variable allows our Node.js application to listen on port 3000.

These settings in the docker-compose.yml file allow us to run both Nginx and our Node.js application together, making them work seamlessly in tandem.

Since we declared a volume that mounts a custom Nginx configuration in the docker-compose file, we will later need to create that file in our development environment and make sure it contains accurate configuration settings (more on this later).

Next, we'll look at the other two docker-compose files used for different scenarios - development and production environments.

  • docker-compose.dev.yml file :

The docker-compose.dev.yml file is used for the development environment. It allows us to set up our application with configurations optimized for development purposes. Let's go through its contents:

version: "3": Same as in the previous file, this specifies the version of the docker-compose syntax used.
services: This section defines the services (containers) specific to the development environment.
nginx: This service runs the Nginx web server, just like in the previous file.
image: nginx:stable-alpine: The same base image for Nginx.
ports: Here, we map port 3000 on the host machine to port 80 inside the nginx container. This allows us to access the Nginx server through port 3000 on our local machine.
volumes: We mount the same ./nginx/default.conf file, but this time with the ro (read-only) option, as we don't need to modify it during development.
node-app: This service represents our Node.js application specifically for development.
build: It tells Docker to build the my-node-app container using the Dockerfile in the current directory (.). Additionally, we pass the NODE_ENV=development argument to the build process, allowing our application to use development-specific configurations.
volumes: Here, we mount the current directory (./) to the /app directory inside the container. This allows us to have real-time code changes reflected in the container without rebuilding it. We also mount /app/node_modules to prevent overriding the node_modules directory in the container and ensure our installed dependencies are available.
environment: We set the NODE_ENV environment variable to development inside the container to activate development-specific behavior in our Node.js application.
command: This line specifies the command to run when the container starts. In this case, we execute the npm run dev command, which usually starts our application in development mode.
The docker-compose.dev.yml file enables us to set up our development environment with the necessary configurations, ensuring the smooth and efficient development of our application.
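For the npm run dev command above to work, the project's package.json needs a matching dev script. A typical setup, assuming nodemon is used for hot reloading (as in most Express projects), looks roughly like this:

"scripts": {
  "start": "node app.js",
  "dev": "nodemon app.js"
}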
Now, let's proceed to the last docker-compose file.

  • docker-compose.prod.yml file:

The docker-compose.prod.yml file is designed for the production environment. It defines the configurations optimized for running the application in a production setting, where reliability and scalability are crucial. Let's examine its contents:

version: "3": As before, this specifies the version of the docker-compose syntax used.
services: This section defines the services (containers) specific to the production environment.
nginx: This service runs the Nginx web server, just like in the previous files.
image: nginx:stable-alpine: The same base image for Nginx.
ports: Here, we map port 80 on the host machine to port 80 inside the nginx container, allowing HTTP traffic to reach the Nginx server on port 80.
volumes: Again, we mount the ./nginx/default.conf file, but this time with the ro (read-only) option, as we don't need to modify it during production.
node-app: This service represents our Node.js application specifically for production.
deploy: This section specifies deployment-related configurations for the service.
restart_policy: We set the restart policy to "on-failure," which means the container will automatically restart if it fails.
build: Similar to previous files, it tells Docker to build the node-app container using the Dockerfile in the current directory (.). Additionally, we use the NODE_ENV=${NODE_ENV} argument, allowing our application to use production-specific configurations.
volumes: We mount the current directory (./) to the /app directory inside the container, along with mounting /app/node_modules to preserve installed dependencies.
command: This line specifies the command to run when the container starts. In this case, we execute the npm start command, which usually starts our application in production mode.
environment: We set various environment variables (MONGO_USERNAME, MONGO_PASSWORD, REDIS_HOST, REDIS_PORT, SESSION_SECRET, REDIS_PASSWORD, and NODE_ENV) required by our Node.js application for production-specific settings.
The docker-compose.prod.yml file ensures that our application is optimally configured for a production environment, with reliability, scalability, and automatic restarts on failure. It allows us to deploy our application confidently, knowing that it is running efficiently and can handle real-world production scenarios.
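One detail worth noting: the ${MONGO_USERNAME}-style values in this file are not read from inside the containers. docker-compose substitutes them from the shell environment (or from a .env file sitting next to the compose files) at the moment you run the command, which is exactly why we export them on the server later in this tutorial. If you want to inspect the resolved configuration, with the variables already exported in your shell, you can run:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml config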

At this point, we are almost done with the file setup; we now need to write our custom Nginx configuration to enable effective load-balancing across our containers.

So, create the Nginx config file at the path we declared in the docker-compose volume - ./nginx/default.conf - and add the following lines of code:

upstream backend { # upstream group of Node.js app servers
    server using_docker_node-app_1:3000;
    server using_docker_node-app_2:3000;
    server using_docker_node-app_3:3000;
}

server {
    listen 80; # the port the Nginx server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # pass the client's real IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # append the client's IP to the proxy chain
        proxy_set_header Host $http_host; # preserve the original Host header
        proxy_set_header X-NginX-Proxy true; # mark the request as proxied by Nginx
        proxy_pass http://backend; # forward requests to the upstream block defined above
        proxy_redirect off; # disable rewriting of redirects issued by the backend
    }
}


Now let us explain this configuration :

upstream backend: This block defines an upstream group named "backend." It is used to define a list of backend servers that Nginx will load balance requests to. In this case, we have three servers (using_docker_node-app_1, using_docker_node-app_2, and using_docker_node-app_3) running our Node.js application on port 3000.

server: This block defines the server configuration for Nginx.

listen 80: This line specifies that the Nginx server will listen on port 80 for incoming HTTP requests.

location /api/: This block defines a location for Nginx to handle requests that start with /api/. We use this location to route requests to our backend Node.js application for API calls.

proxy_set_header: These lines set various headers to pass on information to the Node.js application:

X-Real-IP: Sets the client's IP address as seen by the Nginx server.
X-Forwarded-For: Appends the client's IP address to the X-Forwarded-For header, indicating the chain of proxy servers.
Host: Sets the original host header to preserve the client's hostname.
X-NginX-Proxy: Sets a header to indicate that the request is being proxied by Nginx.

proxy_pass http://backend;: This line directs Nginx to pass the incoming requests to the upstream group named "backend" that we defined earlier. Nginx will automatically load balance the requests among the three servers specified in the "backend" group.

proxy_redirect off;: This line disables any automatic rewriting of HTTP redirects.

This custom Nginx configuration enables load-balancing across multiple instances of our Node.js application, ensuring better performance, high availability, and efficient utilization of resources. With this configuration, Nginx acts as a reverse proxy, directing incoming requests to one of the backend servers in the "backend" group, effectively distributing the load and improving overall application responsiveness.
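If you ever want to sanity-check the syntax of this file once the containers are up, Nginx ships with a built-in configuration test. For example (replace the placeholder with whatever name docker ps reports for your nginx container):

docker exec <nginx-container-name> nginx -t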

Our folder and file structure should now look like this :

Folder structure

Now it is time to build our Docker image. Since we are still working locally in VS Code, we will start by building with the docker-compose.dev.yml file. Later, when we deploy our virtual machine using Azure or any other third-party cloud service of your choice, we will run the docker-compose.prod.yml file instead.

To build our Docker image and work with Docker Compose, you will need to have Docker and Docker Compose installed on your machine. You can follow the links below to find the installation instructions that work best for your operating system:

  1. Docker Installation: https://docs.docker.com/engine/install/

  2. Docker Compose Installation: https://docs.docker.com/compose/install/

Please choose the appropriate link for your operating system and follow the step-by-step instructions provided to install Docker and Docker Compose. Once installed, you will be able to proceed with containerizing and deploying your applications using Docker and Docker Compose.

Let's proceed with building our container (and image, if one doesn't exist yet).

To do this, open your terminal and execute the following command:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --scale node-app=3 -d --build

Now, let's break down and understand this command:

docker-compose: This is the command-line tool we use to interact with Docker Compose.

-f docker-compose.yml -f docker-compose.dev.yml: We pass two Compose files here; docker-compose.yml provides the base configuration and docker-compose.dev.yml layers the development-specific settings on top of it.

up: This option tells Compose to create and start the containers.

--scale node-app=3: It scales the node-app service to run three instances, effectively setting up load balancing across these instances.

-d: The containers run in detached mode, meaning they will continue to run in the background.

--build: This flag ensures that Docker builds the image from the Dockerfile before starting the containers.

By running this command, we initiate the process of creating and launching our containers based on the configurations we defined in the Compose files. The --scale option ensures that three instances of our Node.js application will be running concurrently, allowing us to efficiently handle incoming traffic and improve performance through load balancing.

If everything has been set up correctly, your terminal should look like this:

build

Let's check the status of our running containers by executing the docker ps command. After scaling our Node.js application with three instances, we should observe three containers running.

However, there might be an issue with the Nginx service, which can be identified by running the docker-compose logs -f command. The logs are likely to reveal an error from the Nginx container, caused by the way we named the upstream servers in the Nginx configuration (more details on this shortly). The error will look like this:

nginx error
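A quick way to see the exact names Docker Compose generated for our containers (so the upstream entries in default.conf can be made to match them) is to list the running containers by name:

docker ps --format "{{.Names}}"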

To resolve this error, we need to ensure that the server names in our Nginx configuration file match the names of the containers that Docker Compose created. After making these adjustments, we can rebuild our containers using the following command:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --scale node-app=3 -d --build

By running this updated build command, all our containers, including Nginx, will be up and running without any issues.

As a reminder, we have changed our Nginx configuration file so that the upstream server names match the container names that Docker Compose created. The updated Nginx configuration file should now look like this:

./nginx/default.conf

upstream backend { # upstream group of Node.js app servers
    server learningdocker-node-app-1:3000;
    server learningdocker-node-app-2:3000;
    server learningdocker-node-app-3:3000;
}

server {
    listen 80; # the port the Nginx server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # pass the client's real IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # append the client's IP to the proxy chain
        proxy_set_header Host $http_host; # preserve the original Host header
        proxy_set_header X-NginX-Proxy true; # mark the request as proxied by Nginx
        proxy_pass http://backend; # forward requests to the upstream block defined above
        proxy_redirect off; # disable rewriting of redirects issued by the backend
    }
}


Now if we run docker ps again, we should see four running containers.

Nginx running..

Now, let's verify if our application is effectively load-balancing API calls among the three instances of our Node-API.

To do this, I have added a console.log("testing nginx") statement in the "get all users" endpoint of our Node.js application.

Nginx test
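Roughly, the handler now looks like the sketch below; the User model and response shape are assumptions, so the actual controller in the repository may differ slightly, but the important part is the log statement:

// GET all users - sketch of the handler with the test log
const getAllUsers = async (req, res) => {
  console.log("testing nginx"); // printed by whichever container handled this request
  const users = await User.find({});
  res.status(200).json(users);
};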

We will now make multiple requests to this endpoint to observe how well Nginx distributes these requests among the instances that have been created.

By running the load-balanced setup, we can assess the even distribution of API calls and ensure that our system is effectively utilizing the resources provided by the three instances. This testing will help us validate that Nginx is indeed handling load balancing as expected, improving the overall performance and scalability of our application.

Don't forget that we are working with sessions, so we must log in again before we can access the get-all-users endpoint.

LOGIN:

login

GET ALL USERS:

All users

In the "get-all-users" endpoint, I have triggered an API call 8 times consecutively to simulate multiple requests being made to our application.

To observe the real-time results of our experiment, I will open three separate terminal instances. In each terminal, I will run the following commands: docker logs -f learningdocker-node-app-1, docker logs -f learningdocker-node-app-2, and docker logs -f learningdocker-node-app-3. These commands will allow me to continuously follow the log outputs of each container to see how our application is load-balancing the API calls among the three instances of our Node-API.

RESULT:

load balancing result

The outcome of the experiment indicates that our load balancer is functioning as expected. It has effectively distributed the API requests among the Node instances that our container created. This demonstrates that our application is successfully load balancing and handling the requests in a balanced and efficient manner.

Excellent! Up to this point, our application is running smoothly in the development environment. However, to make it production-ready, we'll need to deploy it on a virtual machine. Creating a virtual machine is a straightforward process. For this tutorial, I'll demonstrate using Microsoft Azure as the cloud provider. However, keep in mind that you have the flexibility to choose any cloud provider you prefer, such as Google Cloud, AWS, UpCloud, or others. The essential requirement is to set up a Linux server, and any of these providers will be suitable for the task at hand. Let's proceed with the deployment process!

Sign in or sign up for your Microsoft Azure account using the Azure portal (https://portal.azure.com/).

Once you're signed in, click on "Create a resource" in the top-left corner of the dashboard.

In the search bar, type "Virtual Machine" and select "Virtual Machines" from the suggested results.

Click on "Add" to create a new virtual machine.

Now, let's configure the virtual machine:

a. Basics:

Choose your subscription.
Create a new resource group or use an existing one.
Enter a unique virtual machine name.
Choose a region close to your target audience for better performance.
Select "Ubuntu Server" as the image.

b. Instance details:

Choose a virtual machine size based on your needs (e.g., Standard B2s).
Enable "SSH public key" authentication and provide your public SSH key. This allows you to sign in securely using SSH.

c. Disks:

Choose your preferred OS disk settings; usually, the default settings are sufficient.

d. Networking:

Create a new virtual network or select an existing one.
Choose a subnet within the virtual network.
Enable "Public IP" and choose "Static" for a consistent IP address.
Open port 22 for SSH (necessary for remote login), 80 for HTTP, and 443 for HTTPS.

e. Management:

Choose "Enable" for Boot diagnostics to troubleshoot startup issues if necessary.

f. Advanced:

Customize any additional settings according to your requirements.

Once you've completed the configuration, click on "Review + create" to review your choices.

Review the details to ensure everything is correct, and then click on "Create" to start deploying your virtual machine.

Azure will now create the virtual machine based on your configuration. This process may take a few minutes.

If everything works fine, your virtual machine should be up and running, like this:

VM up and running

After the virtual machine is successfully deployed, you can access it using SSH. To log in to the Ubuntu server, open your terminal and execute the following command:

ssh -i /path/to/your/sshkey.pem azureuser@your_external_ip

Replace /path/to/your/sshkey.pem with the path to your SSH private key file and azureuser with your SSH username. The your_external_ip should be replaced with the public IP address assigned to your virtual machine.

Once connected, your terminal prompt will look like this:

azureuser@your_virtual_machine_name:~$

Here is a visual representation :

Ubuntu server

Now you have secure access to your Ubuntu server, and you can perform various configurations and deploy your applications as needed. Remember to keep your server secure by using SSH keys and regularly updating your system packages.
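For example, keeping the system packages current on Ubuntu only takes one line:

sudo apt update && sudo apt upgrade -y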

The next thing we have to do is install Docker and Docker Compose on the Ubuntu server we just created.

To install the latest stable versions of the Docker CLI, Docker Engine, and their dependencies, run the following commands on the server:

# 1. Download the convenience script
curl -fsSL https://get.docker.com -o install-docker.sh

# 2. Verify the script's content
cat install-docker.sh

# 3. Run the script with --dry-run to preview the steps it will execute
sh install-docker.sh --dry-run

# 4. Run the script as root, or using sudo, to perform the installation
sudo sh install-docker.sh

After the installation completes, verify that Docker was installed successfully by running docker -v in the terminal.

Next, we need to install Docker Compose. To do so, copy and paste the following commands into your terminal:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

Now we have successfully installed Docker and Docker Compose.

Docker compose

Now, we need to set up our environment variables. Remember that our API expects specific variables that we deliberately did not commit to GitHub.

To set them up, we will create a .env file in the home directory of our server and add the variables there. You can use the command sudo nano .env to create and edit the file. After making the necessary changes, press Ctrl + X, then Y, and finally Enter to save them.

To verify that your changes were saved, run cat .env; this will display the contents of the .env file.

You should get something that looks like this :

REDIS_HOST= visit https://app.redislabs.com to get your redis host
REDIS_PORT= visit https://app.redislabs.com to get your redis port
REDIS_PASSWORD= visit https://app.redislabs.com to get your redis password
MONGO_USERNAME= visit https://cloud.mongodb.com to get your mongo username
MONGO_PASSWORD= visit https://cloud.mongodb.com to get your mongo password
SESSION_SECRET= use any random string
NODE_ENV= development or production


However, there is one issue to address. Currently, if we build the containers, our API will be unable to read the .env file on our host machine unless we export those variables into the shell environment and make them persist across reboots. To tackle this problem, we will edit the .profile file and add the following code at the bottom:

set -o allexport
source /home/azureuser/.env
set +o allexport

This way, our API will have access to the required environment variables, and we can keep them confidential and isolated from the codebase.

Note: /home/azureuser/.env is the path to my .env file. Replace it with the absolute path of the .env file on your host machine.

To access the .profile file, make sure you are in your home directory; you can confirm your current location with the following command:

pwd

Then use the sudo nano .profile command to open the file in a text editor.

After editing, make sure you save your file and exit the editor.

When you type cat .profile in your terminal, it should be displayed like this :

profile

To apply the changes we made to the .profile file, you'll need to log out of your server. After logging back in, you can confirm if the .env file is now persistent and readable by the Node.js application by using the command printenv. In the output, if you find all the environment variables you added in the .env file, then everything is set up correctly. However, if some variables are missing, you should troubleshoot the issue until all your environment variables are displayed when you use the printenv command.
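For example, to check just the Redis-related variables without scrolling through the full output, you can filter it:

printenv | grep REDIS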

Now we will clone the app we developed and pushed to GitHub:

Simply type git clone https://github.com/REALSTEVEIG/USING_DOCKER and cd into the project directory. In my case, the project directory will be USING_DOCKER.

Since we have already installed Docker and set up our environment variables, all we need to do now is run the build command - but this time, we will build using the docker-compose.prod.yml file.

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --scale node-app=3 -d --build

This command will build the image and create the required containers on the host machine. At this point, however, we run into another error:

Service name error

Upon closer inspection, you'll notice that the container names have changed: Docker Compose prefixes container names with the project name, which defaults to the name of the directory the project was cloned into. Because we cloned the repository into a directory named USING_DOCKER, the generated names no longer match the ones listed in our Nginx upstream block, so Nginx cannot resolve them.

To resolve this error, we need to go back to VS Code and update the upstream server names in the Nginx configuration to match the new container names. After making the necessary changes, we will push the updates to GitHub, pull them on our Ubuntu server, and then run the build command again. This way, Nginx will resolve the server names correctly, and the issue will be fixed.

Our ./nginx/default.conf file should now look like this :

upstream backend { # upstream group of Node.js app servers
    server using_docker_node-app_1:3000;
    server using_docker_node-app_2:3000;
    server using_docker_node-app_3:3000;
}

server {
    listen 80; # the port the Nginx server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # pass the client's real IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # append the client's IP to the proxy chain
        proxy_set_header Host $http_host; # preserve the original Host header
        proxy_set_header X-NginX-Proxy true; # mark the request as proxied by Nginx
        proxy_pass http://backend; # forward requests to the upstream block defined above
        proxy_redirect off; # disable rewriting of redirects issued by the backend
    }
}


Now we rebuild the container on our Ubuntu server using the same command:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --scale node-app=3 -d --build

If we run docker ps, Nginx should now be running alongside three instances of our Node.js API.

All containers running

Now let us test our API using the external IP provided by the cloud provider - in my case, 20.69.20.104.

Login

As you can observe, the Login route is functioning correctly. Now, let's verify if our Load-balancing is working as intended.

Create 8 requests, similar to what we did during development for the "get all users" endpoint, and observe if Nginx appropriately proxies our requests across the different Node instances. This will help us ensure that our Load-balancing mechanism is functioning as expected.

load-balancing

From the image above, we can confidently conclude that our load balancing works impeccably, efficiently distributing API requests among the different Node instances as expected.

With this, we have reached the conclusion of this comprehensive tutorial. Throughout this guide, we have covered a wide array of topics, including caching using Redis, load-balancing with Nginx, containerizing our application using Docker, and migrating our API to a Linux Ubuntu server on the Microsoft Azure cloud service. By following this tutorial, you have acquired valuable skills that can greatly enhance your application's performance, scalability, and deployment process.

As you continue your journey in the world of DevOps and cloud computing, there are endless possibilities to explore. You can dive deeper into deploying your API, attaching a custom domain, and implementing advanced load-balancing strategies. Additionally, learning about Kubernetes, a powerful container orchestration tool, can further boost your expertise in managing containerized applications at scale.

Remember, continuous learning and experimentation are vital in the ever-evolving tech landscape. Don't hesitate to explore new technologies, best practices, and industry trends to stay ahead in your journey as a skilled developer.

Thank you for embarking on this learning journey with me, and I wish you all the best in your future projects and endeavors! Happy coding!
