
Riaz Laskar

Node.js Nginx load balancer using docker-compose

Docker and Containers

Docker is a software container platform. Developers use Docker to eliminate the “works on my machine” problem when collaborating with co-workers. This is done by packaging pieces of a software architecture into containers (a.k.a. dockerizing or containerizing them).

Using containers, everything required to make a piece of software run is packaged into isolated containers. Unlike Virtual Machines (VMs), containers do not bundle a full operating system; only the libraries and settings required to make the software work are included. This makes them efficient, lightweight, self-contained, and guarantees that the software will always run on the same configuration, regardless of where it’s deployed.

Installing Docker

All we need to test this architecture is Docker. Since the instances of our Node.js application and NGINX will run inside Docker containers, we won't need to install them on our development machine. To install Docker, simply follow the instructions on the Docker website.
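As an optional sanity check (assuming a standard Docker Desktop or Docker Engine install that ships with docker-compose), confirm that both tools are available before continuing:

docker --version
docker-compose --version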

Creating the Node.js Application

To show NGINX load balancing in action, we are going to create a simple Node.js application that serves a static HTML file. After that, we are going to containerize this application and run it.
Next we want an NGINX service running that can dynamically discover new containers and update its load-balancing configuration when they come up. Thankfully this has already been created and is called nginx-proxy.
nginx-proxy accepts HTTP requests and proxies each request to the appropriate container based on the request hostname. This is transparent to the user and happens without any significant performance overhead.
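To make "based on the request hostname" concrete: nginx-proxy routes on the HTTP Host header, matching it against each container's VIRTUAL_HOST environment variable. As a small sketch (using the proxy.example virtual host configured later in this post), once the stack is running you can steer a request explicitly:

# nginx-proxy matches the Host header against each container's VIRTUAL_HOST
curl -H "Host: proxy.example" http://localhost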

Let's Begin

Our directory structure
.
├── docker-compose.yml
└── node-app
    ├── Dockerfile
    └── index.js

The node-app directory contains our simple containerized Node app.

index.js

var http = require('http');

// Minimal HTTP server that reports which container handled the request.
// HOSTNAME is set by Docker to the container id, so each instance replies differently.
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.end(`<h1>Node Instance : ${process.env.HOSTNAME}</h1>`);
}).listen(8080);

Dockerfile

# Lightweight Alpine-based Node image
FROM node:alpine
RUN mkdir -p /usr/src/app
COPY index.js /usr/src/app
# The app listens on port 8080 inside the container
EXPOSE 8080

CMD [ "node", "/usr/src/app/index" ]

So our sample node app is done and containerized.
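If you want to sanity-check the container on its own before adding the proxy, you can build and run it directly. This is an optional aside; the node-app image tag and the 8080:8080 host port mapping are just choices for this quick test, not something the rest of the post depends on:

# Build the image from the node-app directory and run it in the foreground
docker build -t node-app ./node-app
docker run --rm -p 8080:8080 node-app

# In another terminal; should print something like <h1>Node Instance : <container id></h1>
curl http://localhost:8080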

Next is the docker-compose.yml which is going to orchestrate everything.

docker-compose.yml

version: '3'

services:
    nginx-loadbalancer:
        image: jwilder/nginx-proxy:latest
        volumes:
            - /var/run/docker.sock:/tmp/docker.sock:ro
        environment:
            - DEFAULT_HOST=proxy.example
        ports:
            - "80:80"
    web-app:
        build:
            context: ./node-app
        environment:
            - VIRTUAL_HOST=proxy.example
        ports:
            - "8080"

We defined two services: one is the proxy container and the other is our Node app.

nginx-loadbalancer: there are three key properties that need to be configured when launching the proxy container.

The first is binding the container to port 80 on the host using 80:80. This ensures all HTTP requests are handled by the proxy.

The second is to mount the docker.sock file. This is a connection to the Docker daemon running on the host and allows containers to access its metadata via the API. nginx-proxy uses this to listen for events and then updates the NGINX configuration based on the container IP addresses. Mounting a file works in the same way as mounting a directory, using /var/run/docker.sock:/tmp/docker.sock:ro. We specify :ro to restrict access to read-only.
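If you are curious what nginx-proxy does with that metadata, you can look at the NGINX configuration it generates once the stack is running. The service name below matches the compose file in this post, and the path is where jwilder/nginx-proxy writes its generated config by default:

# Show the NGINX configuration generated from the running containers
docker-compose exec nginx-loadbalancer cat /etc/nginx/conf.d/default.conf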

Finally, we can set an optional DEFAULT_HOST=<hostname>. If a request comes in that doesn't match any of the specified hosts, then this is the container where the request will be handled. This enables you to run multiple websites with different domains on a single machine, with a fall-back to a known website.
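A small sketch of that fall-back, assuming the DEFAULT_HOST=proxy.example value from the compose file above: a request whose Host header matches no VIRTUAL_HOST is still answered by the default host's container.

# No VIRTUAL_HOST matches this hostname, so nginx-proxy falls back to DEFAULT_HOST
curl -H "Host: unknown.example" http://localhost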

If everything is set up correctly, we are ready to run it.
docker-compose build will build the images and get everything ready. docker-compose up will spin up the containers.
Try curl http://localhost in the terminal; it should return a response similar to <h1>Node Instance : af5936adc981</h1>, with a random instance host/machine name since it is dynamic.
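Put together, the whole flow looks roughly like this (using -d here to run detached so the same terminal can be used for curl; the container id in the response will differ on your machine):

docker-compose build
docker-compose up -d

curl http://localhost
# <h1>Node Instance : af5936adc981</h1>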

Now the real part: scaling the app. It's really simple with docker-compose: docker-compose scale web-app=<number of instances>.
docker-compose scale web-app=2 scales our node app to 2 instances. Now run curl http://localhost twice in the terminal; the first request will be handled by our first container, and the second HTTP request will return a different machine name, meaning it was dealt with by our second container.
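As a side note, newer docker-compose releases deprecate the standalone scale command in favour of the --scale flag on up; either form works for this demo. A rough sketch of the scaling round-trip:

# Equivalent to: docker-compose scale web-app=2
docker-compose up -d --scale web-app=2

# Alternate requests should now be answered by different container ids
curl http://localhost
curl http://localhost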

So that's it. With docker-compose, orchestrating everything is quite simple once you get to know the tooling. I hope it helps someone out there on the web.

The code is available at https://github.com/riazXrazor/docker-nginx-loadbalance
