Shingai Zivuku

Nginx Deep Dive: Architecture, Configuration, and Practical Examples

Introduction

Nginx (“Engine-X”) is a high-performance HTTP and reverse proxy server widely used in various scenarios such as web services, load balancing, API gateways, reverse proxies, and static resource servers. Due to its high performance, low resource consumption, and flexible configuration, Nginx has become the preferred choice for many internet companies, enterprises, and developers.

This article will begin with a basic introduction to Nginx, delve into its working principles, and use practical examples to help readers better understand how to configure and optimize Nginx.

Introduction to Nginx Basics

History and Background of Nginx

Nginx was originally developed by Russian programmer Igor Sysoev and released in 2004. It was initially designed to address the C10K problem (the problem of handling 10,000 simultaneous connections), thus exhibiting excellent performance in handling high-concurrency requests. Due to its outstanding performance and scalability, Nginx has become one of the world’s most popular web servers, particularly excelling in handling static resources and reverse proxying.

Nginx’s Core Functions

Nginx has the following core functionalities:

  • Reverse proxy: Nginx can act as a reverse proxy server, forwarding client requests to backend servers.
  • Load balancing: Nginx supports a variety of load balancing algorithms, such as Round Robin, IP Hash, and Least Connections.
  • Static file service: Nginx can efficiently serve static files such as HTML, CSS, JavaScript, and images.
  • HTTP caching: Nginx provides a caching mechanism that can cache the response content of HTTP requests, improving access performance.
  • SSL/TLS support: Nginx fully supports the SSL/TLS protocol and can provide support for HTTPS services.
  • Reverse proxy combined with load balancing: Nginx can forward requests to multiple backend servers, distributing traffic and providing high availability.
  • WebSocket support: Nginx supports the WebSocket protocol and can handle long-lived connections.

How Nginx Works

Nginx’s high performance and scalability stem from its unique design philosophy and working principles, as follows:

Event-Driven Model

Nginx employs an event-driven architecture. Unlike traditional models that dedicate a thread or process to each connection, Nginx runs a small, fixed set of processes: a master process that reads the configuration and supervises the workers, plus a handful of single-threaded worker processes (typically one per CPU core). Each worker runs an asynchronous event loop (epoll on Linux, kqueue on BSD) that multiplexes many connections at once: new requests are registered with the event loop, and the worker acts on a connection only when its socket is ready for reading or writing.

In this way, Nginx can handle a large number of concurrent connections in a single process, while avoiding the context switching and memory consumption problems caused by traditional multithreading.
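The process and event model described above maps directly onto configuration directives. A minimal sketch — the values are illustrative, not tuned recommendations:

```nginx
# Global block: one worker per CPU core; "auto" lets Nginx detect the count.
worker_processes auto;

events {
    # Maximum simultaneous connections each worker's event loop will track.
    worker_connections 4096;

    # On Linux, epoll is selected automatically; stating it is optional.
    use epoll;
}
```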

Request Processing Flow

The Nginx request processing flow can be divided into the following stages:

  • Receiving requests: Nginx listens for client requests and adds them to the event queue.
  • Parsing requests: When the event loop signals that data is readable, Nginx parses the HTTP request line and headers (method, URL, Host header, etc.).
  • Selecting the appropriate service: Based on the instructions in the configuration file, Nginx will select the appropriate backend server for request forwarding, or directly provide static resources.
  • Response generation: Nginx generates response data based on the response from the backend server or local static files and sends it back to the client.
  • Logging: Nginx logs requests for analysis and debugging by operations and development personnel.
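The logging stage is controlled from the configuration. A small sketch of a named access-log format (the format name `main` and the log path are illustrative; the variables are built into Nginx):

```nginx
http {
    # Define a named log format using Nginx's built-in variables.
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    # Write access logs using that format.
    access_log /var/log/nginx/access.log main;
}
```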

Nginx Configuration File Structure

Nginx’s configuration file is usually located at /etc/nginx/nginx.conf, and its basic structure is as follows:

  • Global block: Sets global configurations, such as the number of worker processes, user permissions, log paths, etc.
  • HTTP block: Configures HTTP server settings such as caching, compression, and load balancing.
  • Server block: Defines virtual hosts and handles requests from different domains.
  • Location block: Used to match request URLs and configure the processing method for different URLs.
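Putting the four blocks together, a skeleton nginx.conf looks roughly like this (paths and names are illustrative):

```nginx
# --- Global block ---
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

# --- HTTP block ---
http {
    gzip on;

    # --- Server block: one virtual host ---
    server {
        listen 80;
        server_name example.com;

        # --- Location block: matches request URLs ---
        location / {
            root /var/www/html;
            index index.html;
        }
    }
}
```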

Advanced Features of Nginx

Load Balancing

Nginx provides various load-balancing algorithms to distribute client requests evenly across multiple backend servers. Common load balancing algorithms include:

  • Round Robin: The default load balancing method, which distributes requests evenly across all servers.
  • Least Connections: Distributes requests to the server with the fewest active connections.
  • IP Hash: Distributes requests to specific backend servers based on the hash value of the client’s IP address.

Example configuration:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
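Switching algorithms is a one-directive change inside the upstream block. A sketch showing Least Connections together with per-server weights (the hostnames are placeholders):

```nginx
upstream backend {
    # Use Least Connections instead of the default Round Robin;
    # for IP Hash, replace this line with: ip_hash;
    least_conn;

    # Weights bias the distribution toward more powerful machines.
    server backend1.example.com weight=3;
    server backend2.example.com weight=1;
    server backend3.example.com backup;  # used only if the others are down
}
```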

Caching and Compression

Nginx supports caching of static files and HTTP responses, which can significantly improve access speed and reduce the burden on the backend. In addition, Nginx supports compression, which can reduce the amount of data transmitted over the network and improve page load speed.

Example of cache configuration:

http {
    proxy_cache_path /tmp/cache keys_zone=my_cache:10m;

    server {
        location / {
            proxy_cache my_cache;
            proxy_pass http://backend;
        }
    }
}
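The cache example above stores entries but never says how long they stay valid. A slightly fuller sketch — the sizes and durations are illustrative, not recommendations:

```nginx
http {
    # 10m of shared memory for keys; evict entries untouched for 60 minutes;
    # cap the cache directory at 1 GB on disk.
    proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m
                     inactive=60m max_size=1g;

    server {
        location / {
            proxy_cache my_cache;
            # Cache successful responses for 10 minutes, 404s for 1 minute.
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
            # Serve stale content if the backend errors out.
            proxy_cache_use_stale error timeout http_500 http_502;
            proxy_pass http://backend;
        }
    }
}
```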

Compression configuration example:

http {
    gzip on;
    gzip_types text/plain application/javascript text/css;
    gzip_min_length 1000;
}

SSL/TLS Configuration

Nginx supports SSL/TLS, providing secure HTTPS services for websites. Configuring SSL/TLS can prevent man-in-the-middle attacks and protect user privacy.

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        root /var/www/html;
        index index.html;
    }
}
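The minimal HTTPS block works, but depending on the Nginx build the defaults may still permit outdated protocol versions. A hardening sketch — certificate paths are placeholders, and cipher/protocol settings should be checked against current guidance before adoption:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # Restrict to modern TLS versions.
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Reuse sessions to cut handshake cost.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}
```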

Nginx Practical Case Studies

Static File Server

Suppose we have a website that needs to serve static resources via Nginx. The configuration is as follows:

server {
    listen 80;
    server_name www.example.com;

    root /var/www/html;
    index index.html index.htm;

    location /images/ {
        root /var/www/assets;
    }
}

In this configuration, Nginx serves the site's pages from /var/www/html, while requests under /images/ are resolved against that location's own root: with root, the full URI path is appended to the directory, so a request for /images/logo.png maps to /var/www/assets/images/logo.png.
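If the goal is instead to strip the matched prefix from the filesystem path, alias is the directive to reach for. A sketch of the contrast (the paths are illustrative):

```nginx
# root appends the full URI: /images/logo.png -> /var/www/assets/images/logo.png
location /images/ {
    root /var/www/assets;
}

# alias replaces the matched prefix: /photos/logo.png -> /var/www/photos/logo.png
location /photos/ {
    alias /var/www/photos/;
}
```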

Reverse Proxy and Load Balancing

Assuming we have three backend application servers, we will use Nginx for load balancing:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name www.example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

URL Rewriting and Redirection

Sometimes we need to redirect old URLs to new ones — for example, sending all HTTP requests to their HTTPS equivalents:

server {
    listen 80;
    server_name www.example.com;

    return 301 https://$host$request_uri;
}
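Redirection uses return; rewriting — the other half of this section's title — uses the rewrite directive with a regular expression. A sketch mapping an old URL scheme onto a new one (the paths here are invented for illustration):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Internally rewrite /blog/123 to /posts?id=123; the client URL is unchanged.
    rewrite ^/blog/(\d+)$ /posts?id=$1 last;

    # Use "permanent" instead of "last" to issue a 301 redirect:
    # rewrite ^/old-docs/(.*)$ /docs/$1 permanent;
}
```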

Performance Optimization and Monitoring

Nginx performance optimization can be approached from the following aspects:

  • Adjust the number of worker processes: Set worker_processes according to the server's CPU core count, or use worker_processes auto;.
  • Use caching: Cache static resources and reverse proxy caching to reduce backend pressure.
  • Enable GZIP compression: Reduce the amount of data transmitted and improve page loading speed.

For monitoring, you can use Nginx’s stub_status module to monitor server status:

server {
    listen 80;
    server_name status.example.com;

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
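stub_status returns a small plain-text page. Its layout looks like the following (the numbers here are placeholders, not real measurements):

```
Active connections: 3
server accepts handled requests
 120 120 250
Reading: 0 Writing: 1 Waiting: 2
```

Reading counts connections whose request headers are being read, Writing counts connections whose responses are being sent, and Waiting counts idle keep-alive connections.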

Conclusion

Nginx, as a high-performance web server, is an indispensable component of modern web services due to its unique event-driven architecture, reverse proxy, load balancing, and caching capabilities. This article provides a foundation for further discussion.
