DEV Community

Saikat Goswami
Setting Up a Reverse Proxy with Nginx on Ubuntu

In modern web architecture, the humble web server has evolved far beyond simply serving static HTML files. As applications have grown more complex—decoupled into microservices, powered by multiple backend languages, and demanding robust security—the need for a sophisticated traffic manager has become paramount. Enter the reverse proxy. Positioned between client requests and your application servers, a reverse proxy is the maître d' of your digital infrastructure, directing traffic, handling security, and ensuring everything runs smoothly behind the scenes.
Nginx (pronounced "Engine-X") has risen to become the gold standard for this role. Renowned for its high performance, stability, and low resource consumption, Nginx is not just a web server; it is an excellent reverse proxy solution. On Ubuntu, one of the most popular Linux distributions for cloud and server environments, setting up Nginx is a rite of passage for system administrators and developers alike.
This article will serve as your comprehensive guide to setting up a reverse proxy with Nginx on Ubuntu. We will move from a basic configuration to advanced implementations, covering traffic routing, SSL termination for rock-solid security, and performance tuning to make your applications fly.

Chapter 1: The Foundation - What is a Reverse Proxy and Why Nginx?

Before diving into terminal commands and configuration files, it’s crucial to understand the tool you are wielding. A reverse proxy is a server that sits between client devices (like web browsers) and one or more backend servers. It intercepts requests from clients and forwards them to the appropriate server, acting as a gateway.
This differs from a forward proxy, which sits in front of clients and is used to mask their identities (e.g., a corporate firewall or services like VPNs). The reverse proxy masks the backend servers, making them invisible to the outside world.

Why deploy a reverse proxy? The advantages are substantial:

1. Security: By hiding the identity and characteristics of your backend servers, you drastically reduce the attack surface. Clients never connect directly to your application server (like a Node.js app or a Gunicorn-hosted Python app); they only see the proxy. You can also centralize SSL/TLS termination, offloading the encryption/decryption overhead from your application servers.

2. Load Balancing: As your traffic grows, a reverse proxy can distribute incoming requests across multiple backend servers, ensuring no single server becomes a bottleneck and guaranteeing high availability.

3. Improved Performance: Nginx can efficiently serve static files (images, CSS, JavaScript) directly, taking that load off your application logic. It can also cache dynamic content, compressing responses with gzip to reduce bandwidth and speed up load times.

4. Flexibility and Abstraction: You can change your backend infrastructure (e.g., move a service from port 8080 to 8081, or add new servers) without clients ever knowing. The reverse proxy abstracts the internal layout of your infrastructure.

Nginx is the ideal tool for this job because it uses an asynchronous, event-driven architecture. Unlike older servers that spawn a new thread or process per connection, Nginx handles thousands of concurrent connections within a single thread, making it incredibly efficient even under heavy load.

Chapter 2: Laying the Groundwork - Installation and Basic Server Setup

Our journey begins on a fresh or existing Ubuntu server (20.04, 22.04, or 24.04). You will need sudo privileges and access to a terminal. We'll assume your server has a public IP address and, ideally, a domain name pointed to it (like example.com).

Step 1: Installing Nginx
First, update your package index to ensure you have access to the latest software versions. Then, install Nginx.

sudo apt update
sudo apt install nginx -y


Once installed, Nginx will usually start automatically. We can verify this by checking its status with systemctl, the systemd service manager.

sudo systemctl status nginx

You should see an output indicating the service is active (running). If it didn't start automatically, you can kick it off with sudo systemctl start nginx.

Step 2: Adjusting the Firewall
If you have the Uncomplicated Firewall (UFW) enabled (which is common on Ubuntu), you need to allow traffic to Nginx. Nginx registers a few profiles with UFW upon installation. The safest bet is to allow "Nginx Full", which permits traffic on both port 80 (HTTP) and port 443 (HTTPS).

sudo ufw allow 'Nginx Full'

Step 3: Verifying the Installation
Finally, check if Nginx is reachable. Open your web browser and navigate to your server's IP address (e.g., http://your_server_ip). You should be greeted with the default Nginx welcome page. This confirms that Nginx is installed, running, and accessible.

Chapter 3: The Core Configuration - Routing Traffic with proxy_pass

The heart of any reverse proxy is its ability to pass requests from the client to a backend server and then return the response. In Nginx, this is achieved with the proxy_pass directive. We will configure this inside a server block (similar to an Apache virtual host) which defines how Nginx handles requests for a specific domain or port.

Nginx's recommended configuration structure uses two main directories:
/etc/nginx/sites-available/: Where configuration files for your websites/apps are stored.
/etc/nginx/sites-enabled/: Contains symbolic links to files in sites-available that Nginx should actually load and use.

Step 1: Creating a Configuration File
Let's create a new configuration file for our application in the sites-available directory. We'll name it myapp for clarity.

sudo nano /etc/nginx/sites-available/myapp

Step 2: The Basic Reverse Proxy Configuration
Inside this file, we will define a server block. This example assumes your backend application is running on localhost on port 3000 (a common port for Node.js, React, or other development servers).

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Explanation of the Directives:

listen 80;: Tells Nginx to listen for incoming connections on port 80 (standard HTTP).
server_name example.com;: This block will only respond to requests for this specific domain name. Replace it with your own domain or your server's IP address.
location / { ... }: This block defines how to handle requests for the root URL (/) and everything beneath it. You can have multiple location blocks for different parts of your site (e.g., /api might point to a different backend than /).
proxy_pass http://localhost:3000;: This is the magic line. It forwards the client's request to the specified backend server address. In this case, http://localhost:3000.
proxy_set_header: These lines are critical for the backend application to function correctly. They modify the HTTP headers of the request being forwarded.
Host $host: Passes the original Host header from the client. Without this, the backend might see all requests as coming from localhost.
X-Real-IP $remote_addr: Passes the real IP address of the client. The backend would otherwise only see the IP of the Nginx server.
X-Forwarded-For $proxy_add_x_forwarded_for: Appends the client's IP address to the list of proxies the request has passed through.
X-Forwarded-Proto $scheme: Tells the backend whether the original request was HTTP or HTTPS.
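The multi-location idea mentioned above extends this pattern naturally. As a sketch (the second backend on port 4000 is an illustrative assumption, not part of the original setup), a dedicated location block can send API traffic to a different service:

```nginx
server {
    listen 80;
    server_name example.com;

    # Requests under /api/ go to a hypothetical API service on port 4000
    location /api/ {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # All other requests go to the main app on port 3000
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Nginx matches the most specific location, so a request for /api/users would reach port 4000 while /about would reach port 3000.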

Step 3: Enabling the Site
To activate this configuration, we need to create a symbolic link from our file in sites-available to sites-enabled.

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

Step 4: Testing and Reloading
Always test your Nginx configuration for syntax errors before reloading. This simple step can save you from accidentally taking your site down.

sudo nginx -t

If the test is successful, gracefully reload Nginx to apply the new configuration.

sudo systemctl reload nginx

Your reverse proxy is now live. Any visitor to http://example.com will have their traffic seamlessly forwarded to your application running on port 3000.

Chapter 4: Fortifying the Connection - Handling SSL/TLS

In today's web, security is non-negotiable. Serving your site over HTTPS encrypts all communication between the client and your server, protecting sensitive data from eavesdroppers. Using Nginx to handle SSL termination is a best practice, as it centralizes certificate management and offloads the computationally expensive encryption/decryption work from your application servers.
While you can use self-signed certificates for testing, for a production site, you need a trusted certificate from a Certificate Authority (CA). Let's Encrypt is a free, automated, and open CA that is perfect for this task, and its certbot tool integrates beautifully with Nginx on Ubuntu.

Step 1: Installing Certbot
First, install the Certbot client and its Nginx plugin.

sudo apt install certbot python3-certbot-nginx -y

Step 2: Obtaining and Installing the Certificate
This is the magic step. Run Certbot with the --nginx plugin, and it will automatically obtain a certificate for your domain and modify your Nginx configuration to use it.

sudo certbot --nginx -d example.com -d www.example.com

--nginx: Tells Certbot to use the Nginx plugin.
-d: Specifies the domain names you want the certificate to be valid for.
Certbot will ask you for an email address for urgent renewal and security notices, and then ask you to agree to the terms of service. After that, it will communicate with the Let's Encrypt servers, perform a challenge to prove you control the domain, and then update your Nginx configuration (/etc/nginx/sites-available/myapp) to enable HTTPS.
What Certbot Changes in Your Configuration: After Certbot runs, your server block will look something like this:

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Notice what happened:
1. The original HTTP server block now has a return 301 https://... directive, which forces all HTTP traffic to redirect to HTTPS.
2. A new server block for port 443 (HTTPS) has been created, containing the paths to your new SSL certificate and key.
3. It includes secure configuration files provided by Certbot to ensure modern, strong encryption.

Step 3: Auto-Renewal
Let's Encrypt certificates are valid for 90 days. Certbot installs a cron job or systemd timer that will automatically attempt to renew your certificates before they expire. You can test the renewal process with:

sudo certbot renew --dry-run

With this in place, your reverse proxy is now a secure gateway, ensuring all traffic to and from your users is encrypted.

Chapter 5: Supercharging Performance - Caching, Compression, and Tuning

Now that traffic is flowing securely, it's time to optimize. Nginx offers a powerful suite of tools to make your applications feel faster and handle more load. We will explore some key performance-enhancing features.

5.1 Enabling Gzip Compression
Text-based resources like HTML, CSS, and JavaScript can be compressed significantly before being sent over the network, drastically reducing page load times. Enable gzip compression within the http block of your main nginx.conf file, or within your specific server/location blocks.

http {
    # Enable gzip compression
    gzip on;
    # Compression level (1-9). Level 6 is a good trade-off between CPU and compression.
    gzip_comp_level 6;
    # Minimum length of a response to compress (in bytes)
    gzip_min_length 256;
    # Compress responses for these MIME types
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/rss+xml;
    # Vary: Accept-Encoding header
    gzip_vary on;
    # Enable compression for proxied requests
    gzip_proxied any;
}


This configuration tells Nginx to compress eligible responses on-the-fly, significantly reducing bandwidth usage and improving load times.
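To get a feel for what level-6 gzip does to typical text content, here is a small standalone experiment. It uses the command-line gzip tool rather than Nginx itself, so the exact ratios will differ from live traffic, but the effect on repetitive HTML-like markup is representative:

```shell
# Generate ~11 KB of repetitive HTML-like text, the kind gzip handles well
yes '<div class="row"><p>hello compressible world</p></div>' | head -n 200 > /tmp/sample.html

orig=$(wc -c < /tmp/sample.html)
gzip -6 -c /tmp/sample.html > /tmp/sample.html.gz   # same level as gzip_comp_level 6
comp=$(wc -c < /tmp/sample.html.gz)

echo "original: ${orig} bytes, compressed: ${comp} bytes"
```

On highly repetitive markup like this, the compressed size comes out at a small fraction of the original, which is why enabling gzip for text MIME types is such a cheap win.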

5.2 Implementing Caching for Static Assets
For files that don't change often (images, CSS, JavaScript), you can instruct Nginx to cache them. This serves two purposes: it offloads work from your backend server and allows clients to reuse downloaded files.
First, define a cache path in the http block of your main /etc/nginx/nginx.conf:

http {
    # ...
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=static_cache:10m max_size=1g inactive=60m
                     use_temp_path=off;
}

/var/cache/nginx: The directory on disk where the cache will be stored.
levels=1:2: Creates a two-level directory hierarchy under the cache path, which avoids storing too many files in a single directory.
keys_zone=static_cache:10m: Creates a shared memory zone named static_cache of 10 MB to store cache keys and metadata.
max_size=1g: Limits the physical cache size on disk to 1 gigabyte.
inactive=60m: Removes items from the cache if they haven't been accessed in 60 minutes.
Then, in your server block, you can apply this cache to specific locations. For example, to cache all images, CSS, and JavaScript files for a day:

server {
    # ...
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        proxy_cache static_cache;
        proxy_pass http://localhost:3000;
        proxy_cache_valid 200 302 24h;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        add_header X-Proxy-Cache $upstream_cache_status;
        expires 30d;
    }

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        # ... other headers
        proxy_cache my_app_cache; # a second zone for dynamic content; it must be defined with its own proxy_cache_path first
        proxy_cache_bypass $http_pragma;
        proxy_no_cache $http_pragma;
    }
}


proxy_cache_valid 200 302 24h;: Cache responses with status codes 200 and 302 for 24 hours.
expires 30d;: Sets the Expires and Cache-Control headers for the client browser, telling them they can cache these assets for 30 days.
add_header X-Proxy-Cache ...: Adds a custom header to the response, which is useful for debugging to see if a response came from the cache (HIT) or the backend (MISS).

5.3 Tuning Worker Processes and Connections
Nginx's performance is heavily influenced by its core settings in the main nginx.conf file. A good starting point is to let Nginx automatically determine the optimal number of worker processes.
At the top of /etc/nginx/nginx.conf:

user www-data;

# Set worker processes to auto (matches the number of CPU cores)
worker_processes auto;
pid /run/nginx.pid;

events {
    # Each worker can handle up to 4096 connections simultaneously
    worker_connections 4096;
    # Efficient handling of multiple connections
    use epoll;
    multi_accept on;
}
http {
    # Basic settings for efficiency
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 100;
    types_hash_max_size 2048;
    # ... rest of http block
}

worker_processes auto;: Sets the number of worker processes equal to the number of CPU cores, allowing Nginx to fully utilize all available processing power.
worker_connections 4096;: Increases the number of simultaneous connections each worker can handle.
sendfile, tcp_nopush, tcp_nodelay: These are OS-level optimizations for sending files and packets more efficiently.
keepalive_timeout and keepalive_requests: Allow clients to reuse a single connection for multiple requests, reducing the overhead of creating new connections.
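A useful back-of-envelope check on these numbers: the theoretical connection ceiling is roughly worker_processes multiplied by worker_connections. Assuming a 4-core server (an assumption for illustration) with the settings above:

```shell
cores=4          # assumed core count; worker_processes auto would match this
per_worker=4096  # the worker_connections value set above
max_clients=$((cores * per_worker))
echo "approximate concurrent connection ceiling: ${max_clients}"
# In reverse-proxy mode each client consumes two connections (one to the
# client, one to the upstream), so the effective client ceiling is about half.
```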

By implementing these performance strategies, you transform your Nginx reverse proxy from a simple traffic router into a powerful optimization layer.

Chapter 6: Advanced Scenarios and Troubleshooting

With a solid foundation in place, let's look at a couple of common advanced scenarios and how to troubleshoot when things go wrong.

6.1 Load Balancing with upstream
If your application grows and you need to run multiple instances of your backend server (e.g., on different ports or different machines), Nginx can act as a load balancer. You define a group of servers using the upstream module.

upstream backend_servers {
    # Use the least-connected load balancing method
    least_conn;
    server 10.0.0.1:3000 weight=3;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000 backup;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2;
    server_name example.com;

    #... ssl certificate configuration ...

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

least_conn;: Nginx will pass a request to the server with the fewest active connections.
weight=3: This server will receive three times as many connections as the others.
backup: This server will only be used if all the other servers are unavailable.
This setup not only distributes load but also provides automatic failover, greatly increasing your application's resilience.
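The failover behavior can be tuned further with passive health-check parameters on each server line. As a sketch extending the upstream block above (the thresholds shown are illustrative, not prescriptive):

```nginx
upstream backend_servers {
    least_conn;
    # Mark a server as unavailable for 30s after 3 failed attempts
    server 10.0.0.1:3000 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;
}
```

With max_fails and fail_timeout set, Nginx stops routing to a struggling server for the timeout window instead of repeatedly sending clients into errors.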

6.2 Handling WebSocket Connections
Applications using WebSockets (like live chat or real-time dashboards) require a persistent connection. Proxying WebSockets with Nginx requires a special configuration to handle the Upgrade header.

location /wsapp/ {
    proxy_pass http://websocket-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Increase timeouts for long-lived connections
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}


The key directives are proxy_http_version 1.1 and the explicit setting of the Upgrade and Connection headers, which are required for the WebSocket handshake to succeed.
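A common refinement, following the pattern in Nginx's own WebSocket proxying documentation, is a map block in the http context so that ordinary HTTP requests through the same location do not carry a stray Connection: upgrade header:

```nginx
# In the http {} context: derive the Connection header value from the
# client's Upgrade header ("upgrade" for WebSocket handshakes, "close" otherwise)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

The location block then uses proxy_set_header Connection $connection_upgrade; instead of the hard-coded "upgrade" value.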

6.3 Common Troubleshooting Steps
When something isn't working, here’s a systematic approach to diagnosing the issue.

1. Check the Nginx configuration syntax. This is always the first step:
   sudo nginx -t
2. Check the Nginx error logs. The error log is your best friend; it will often give you a precise reason for a failure:
   sudo tail -f /var/log/nginx/error.log
   Look for lines mentioning connect() failed or permission denied.
3. Check your backend application. Is it actually running and listening on the expected port? Test it locally on the server:
   curl http://localhost:3000
   If this fails, the problem is with your application, not Nginx.
4. Check the firewall and SELinux. Ensure that no firewall is blocking the connection between Nginx and your backend. If you are using a cloud server, also check the cloud provider's security groups. On some systems, SELinux might block Nginx from making network connections; check the audit logs (/var/log/audit/audit.log) for denials.
5. Check permissions. Ensure that the Nginx user (usually www-data) has read access to your SSL certificates and the directories containing your static files.

Conclusion
Setting up a reverse proxy with Nginx on Ubuntu is a fundamental skill for anyone deploying modern web applications. We have journeyed from a simple traffic-forwarding setup to a hardened, high-performance gateway. You have learned how to:
• Route traffic seamlessly using proxy_pass and proxy_set_header directives.
• Fortify your application with automated SSL/TLS certificates from Let's Encrypt, ensuring all traffic is encrypted and trusted.
• Supercharge performance through gzip compression, intelligent caching strategies, and core system tuning.
By implementing these configurations, your Nginx server does more than just serve content; it becomes an intelligent layer that protects your backend, optimizes the user experience, and provides the flexibility to scale your infrastructure.
The beauty of Nginx lies in its stability and its granular control. As your application evolves and your needs grow more complex—whether it's handling WebSockets, load balancing across a global fleet of servers, or implementing sophisticated rate limiting—your Nginx configuration can grow with you. The commands and concepts in this guide form the foundation upon which you can build a robust, secure, and lightning-fast web presence.
