Wycliffe A. Onyango

100 Days of DevOps: Day 16

Installing and Configuring Nginx as a Load Balancer

Deploying a web application on a high-availability stack requires careful configuration to ensure seamless performance and scalability. This case study details the process of installing and configuring Nginx as a Load Balancer (LBR) to address performance degradation on a website due to increasing traffic. By following a structured approach, we successfully migrated the application and resolved critical configuration errors.


Step 1: Initial Nginx Installation

The first step was to install Nginx on the designated Load Balancer (LBR) server. Nginx is a powerful open-source web server often used as a reverse proxy, HTTP cache, and load balancer due to its high performance and low resource usage. The installation process was straightforward using the server's package manager.

  • Command: sudo yum install nginx -y
  • Service Management: Once installed, the Nginx service was started and enabled to ensure it would launch automatically upon system reboot.
    • sudo systemctl start nginx
    • sudo systemctl enable nginx

After installation, Nginx was ready to be configured to handle and distribute incoming web traffic.
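As a quick sanity check before any load-balancing configuration (a sketch, assuming shell access on the LBR host), the service state and the default page can be verified:

```shell
# Confirm the service is active and enabled
sudo systemctl status nginx --no-pager

# Confirm Nginx answers on port 80; a "Server: nginx" response
# header means the stock welcome page is being served
curl -I http://localhost
```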


Step 2: Configuring the Load Balancer

The core of the task was to configure Nginx to act as a load balancer for the application servers (stapp01, stapp02, stapp03). This was done by editing the main Nginx configuration file, /etc/nginx/nginx.conf. The strategy was to use the upstream directive to define a group of backend servers and the proxy_pass directive to forward client requests to that group.

Initial configuration attempt in /etc/nginx/nginx.conf:

http {
    ...
    upstream backend {
        server stapp01;
        server stapp02;
        server stapp03;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
        root /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;
    }
}

This configuration defined the load-balancing logic correctly, but it was flawed: the root directive and the include of /etc/nginx/default.d/*.conf inside the same server block caused Nginx to serve local files from /usr/share/nginx/html instead of forwarding requests to the application servers. The result was an Nginx-specific error page, pointing to a local file-serving issue rather than a load-balancing failure.
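The fix at this stage was to strip the local file-serving directives so every request falls through to the proxied location. A minimal sketch of the cleaned-up server block (still assuming the backends answer on their default port):

```
http {
    ...
    upstream backend {
        server stapp01;
        server stapp02;
        server stapp03;
    }
    server {
        listen 80;
        # root and include removed: all requests are proxied upstream
        location / {
            proxy_pass http://backend;
        }
    }
}
```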


Step 3: Troubleshooting and Resolving the "502 Bad Gateway" Error

After removing the local file-serving directives, a new error emerged: "502 Bad Gateway". This status means that Nginx, acting as a gateway, either received an invalid response from an upstream server or could not reach it at all. The cause was not immediately apparent, so a systematic troubleshooting approach was necessary.

  1. Backend Server Status: The first logical step was to check if the application servers were running. On each server, we confirmed that the Apache service was active using sudo systemctl status httpd. This check passed, ruling out a simple service outage.
  2. Port Mismatch: The next step was to investigate a potential port mismatch. While Nginx defaults to port 80 for HTTP, Apache can be configured to listen on any port. We examined the Apache configuration on the app servers and discovered that the Listen directive was set to port 5000.

    # Listen 12.34.56.78:80
    Listen 5000
    
  3. Correcting the Configuration: The root cause was a port mismatch between the Nginx load balancer and the Apache application servers. The Nginx upstream block was trying to connect to the default port 80, but Apache was listening on 5000. The solution was to explicitly specify the port in the Nginx configuration.
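A quick way to confirm a mismatch like this (a sketch, assuming the default RHEL/CentOS Apache layout) is to compare the configured Listen directive with the socket httpd is actually bound to on each app server:

```shell
# Show the configured listen port (default httpd.conf path on RHEL/CentOS)
grep -E '^\s*Listen' /etc/httpd/conf/httpd.conf

# Show the port httpd is actually listening on
sudo ss -tlnp | grep httpd
```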

Final, corrected Nginx configuration:

http {
    ...
    upstream backend {
        server stapp01:5000;
        server stapp02:5000;
        server stapp03:5000;
    }
    server {
        listen 80;
        server_name stlb01.stratos.xfusioncorp.com;
        location / {
            proxy_pass http://backend;
        }
    }
}

After testing the new configuration with sudo nginx -t and reloading the Nginx service with sudo systemctl reload nginx, the 502 error was resolved and the website was successfully accessible through the load balancer. The traffic was now correctly distributed across the three application servers, completing the migration to a high-availability stack.
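A simple way to verify the end-to-end path (a sketch; the hostname comes from the server_name above) is to send a handful of requests through the load balancer and confirm they all succeed:

```shell
# Each request should return 200; with round-robin (the default
# upstream policy) successive requests rotate across stapp01-03
for i in $(seq 1 6); do
  curl -s -o /dev/null -w '%{http_code}\n' http://stlb01.stratos.xfusioncorp.com/
done
```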

Top comments (2)

Chamika Nimnajith

Great work. Keep it up.

Wycliffe A. Onyango

Thanks Chamika.