<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Saikat Goswami</title>
    <description>The latest articles on DEV Community by Saikat Goswami (@saikat_goswami_fd81ed5950).</description>
    <link>https://dev.to/saikat_goswami_fd81ed5950</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3319597%2F0e211ffd-4d18-4fa4-9ae3-47fddf9ec7bc.png</url>
      <title>DEV Community: Saikat Goswami</title>
      <link>https://dev.to/saikat_goswami_fd81ed5950</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/saikat_goswami_fd81ed5950"/>
    <language>en</language>
    <item>
      <title>Setting Up a Reverse Proxy with Nginx on Ubuntu</title>
      <dc:creator>Saikat Goswami</dc:creator>
      <pubDate>Tue, 24 Mar 2026 05:52:20 +0000</pubDate>
      <link>https://dev.to/saikat_goswami_fd81ed5950/setting-up-a-reverse-proxy-with-nginx-on-ubuntu-3hk1</link>
      <guid>https://dev.to/saikat_goswami_fd81ed5950/setting-up-a-reverse-proxy-with-nginx-on-ubuntu-3hk1</guid>
      <description>&lt;p&gt;In modern web architecture, the humble web server has evolved far beyond simply serving static HTML files. As applications have grown more complex—decoupled into microservices, powered by multiple backend languages, and demanding robust security—the need for a sophisticated traffic manager has become paramount. Enter the reverse proxy. Positioned between client requests and your application servers, a reverse proxy is the maître d' of your digital infrastructure, directing traffic, handling security, and ensuring everything runs smoothly behind the scenes.&lt;br&gt;
Nginx (pronounced "Engine-X") has risen to become the gold standard for this role. Renowned for its high performance, stability, and low resource consumption, Nginx is not just a web server; it is an excellent reverse proxy solution. On Ubuntu, one of the most popular Linux distributions for cloud and server environments, setting up Nginx is a rite of passage for system administrators and developers alike.&lt;br&gt;
This article will serve as your comprehensive guide to setting up a reverse proxy with Nginx on Ubuntu. We will move from a basic configuration to advanced implementations, covering traffic routing, SSL termination for rock-solid security, and performance tuning to make your applications fly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 1: The Foundation - What is a Reverse Proxy and Why Nginx?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before diving into terminal commands and configuration files, it’s crucial to understand the tool you are wielding. A reverse proxy is a server that sits between client devices (like web browsers) and one or more backend servers. It intercepts requests from clients and forwards them to the appropriate server, acting as a gateway.&lt;br&gt;
This differs from a forward proxy, which sits in front of clients and is used to mask their identities (e.g., a corporate firewall or services like VPNs). The reverse proxy masks the backend servers, making them invisible to the outside world.&lt;/p&gt;

&lt;p&gt;Why deploy a reverse proxy? The advantages are substantial:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Security:&lt;/strong&gt; By hiding the identity and characteristics of your backend servers, you drastically reduce the attack surface. Clients never connect directly to your application server (like a Node.js app or a Gunicorn-hosted Python app); they only see the proxy. You can also centralize SSL/TLS termination, offloading the encryption/decryption overhead from your application servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Load Balancing:&lt;/strong&gt; As your traffic grows, a reverse proxy can distribute incoming requests across multiple backend servers, ensuring no single server becomes a bottleneck and guaranteeing high availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Improved Performance:&lt;/strong&gt; Nginx can efficiently serve static files (images, CSS, JavaScript) directly, taking that load off your application logic. It can also cache dynamic content and compress responses with gzip to reduce bandwidth and speed up load times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Flexibility and Abstraction:&lt;/strong&gt; You can change your backend infrastructure (e.g., move a service from port 8080 to 8081, or add new servers) without clients ever knowing. The reverse proxy abstracts the internal layout of your infrastructure.&lt;br&gt;
Nginx is the ideal tool for this job because it uses an asynchronous, event-driven architecture. Unlike older servers that spawn a new thread or process per connection, Nginx handles thousands of concurrent connections within a single thread, making it incredibly efficient even under heavy load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 2: Laying the Groundwork - Installation and Basic Server Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our journey begins on a fresh or existing Ubuntu server (20.04, 22.04, or 24.04). You will need sudo privileges and access to a terminal. We'll assume your server has a public IP address and, ideally, a domain name pointed to it (like example.com).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Installing Nginx&lt;/strong&gt; &lt;br&gt;
First, update your package index to ensure you have access to the latest software versions. Then, install Nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nginx &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, Nginx will usually start automatically. We can verify this by checking its status with systemctl, systemd's service management tool.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl status nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see output indicating the service is &lt;code&gt;active (running)&lt;/code&gt;. If it didn't start automatically, you can kick it off with &lt;code&gt;sudo systemctl start nginx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Adjusting the Firewall&lt;/strong&gt; &lt;br&gt;
If you have the Uncomplicated Firewall (UFW) enabled (which is common on Ubuntu), you need to allow traffic to Nginx. Nginx registers a few profiles with UFW upon installation. The safest bet is to allow "Nginx Full", which permits traffic on both port 80 (HTTP) and port 443 (HTTPS).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo ufw allow 'Nginx Full'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Verifying the Installation&lt;/strong&gt; &lt;br&gt;
Finally, check if Nginx is reachable. Open your web browser and navigate to your server's IP address (e.g., &lt;a href="http://your_server_ip" rel="noopener noreferrer"&gt;http://your_server_ip&lt;/a&gt;). You should be greeted with the default Nginx welcome page. This confirms that Nginx is installed, running, and accessible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 3: The Core Configuration - Routing Traffic with proxy_pass&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The heart of any reverse proxy is its ability to pass requests from the client to a backend server and then return the response. In Nginx, this is achieved with the proxy_pass directive. We will configure this inside a server block (similar to an Apache virtual host) which defines how Nginx handles requests for a specific domain or port.&lt;/p&gt;

&lt;p&gt;Nginx's recommended configuration structure uses two main directories:&lt;br&gt;
• &lt;strong&gt;/etc/nginx/sites-available/:&lt;/strong&gt; Where configuration files for your websites/apps are stored.&lt;br&gt;
• &lt;strong&gt;/etc/nginx/sites-enabled/:&lt;/strong&gt; Contains symbolic links to files in sites-available that Nginx should actually load and use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Creating a Configuration File&lt;/strong&gt; &lt;br&gt;
Let's create a new configuration file for our application in the sites-available directory. We'll name it myapp for clarity.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/myapp&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: The Basic Reverse Proxy Configuration&lt;/strong&gt; &lt;br&gt;
Inside this file, we will define a server block. This example assumes your backend application is running on localhost on port 3000 (a common port for Node.js, React, or other development servers).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt; &lt;span class="s"&gt;www.example.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
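&lt;p&gt;Before reloading anything, it can help to confirm whether a backend is actually listening on port 3000. A quick sanity check (ss ships with Ubuntu's iproute2 package):&lt;/p&gt;

```shell
# List listening TCP sockets and look for port 3000; print a hint if nothing is there.
ss -ltn | grep ':3000' || echo "nothing listening on port 3000 yet"
```

&lt;p&gt;If nothing is listening, proxied requests will fail with a 502 Bad Gateway until your application is started.&lt;/p&gt;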



&lt;p&gt;&lt;strong&gt;Explanation of the Directives:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;listen 80;:&lt;/strong&gt; Tells Nginx to listen for incoming connections on port 80 (standard HTTP).&lt;br&gt;
• &lt;strong&gt;server_name example.com;:&lt;/strong&gt; This block will only respond to requests for this specific domain name. Replace it with your own domain or your server's IP address.&lt;br&gt;
• &lt;strong&gt;location / { ... }:&lt;/strong&gt; This block defines how to handle requests for the root URL (/) and everything beneath it. You can have multiple location blocks for different parts of your site (e.g., /api might point to a different backend than /).&lt;br&gt;
• &lt;strong&gt;proxy_pass http://localhost:3000;:&lt;/strong&gt; This is the magic line. It forwards the client's request to the specified backend server address, in this case http://localhost:3000.&lt;br&gt;
• &lt;strong&gt;proxy_set_header:&lt;/strong&gt; These lines are critical for the backend application to function correctly. They modify the HTTP headers of the request being forwarded.&lt;br&gt;
• &lt;strong&gt;Host $host:&lt;/strong&gt; Passes the original Host header from the client. Without this, the backend might see all requests as coming from localhost.&lt;br&gt;
• &lt;strong&gt;X-Real-IP $remote_addr:&lt;/strong&gt; Passes the real IP address of the client. The backend would otherwise only see the IP of the Nginx server.&lt;br&gt;
• &lt;strong&gt;X-Forwarded-For $proxy_add_x_forwarded_for:&lt;/strong&gt; Appends the client's IP address to a list of proxies the request has passed through.&lt;br&gt;
• &lt;strong&gt;X-Forwarded-Proto $scheme:&lt;/strong&gt; Tells the backend whether the original request was HTTP or HTTPS.&lt;/p&gt;
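&lt;p&gt;To make the multiple-location idea concrete, here is a hypothetical sketch that routes /api to a separate backend. The port numbers are assumptions for illustration; substitute your own services:&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name example.com;

    # Hypothetical API service on port 4000
    location /api {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else goes to the main app on port 3000
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

&lt;p&gt;Nginx picks the most specific matching prefix, so a request for /api/users goes to port 4000 while / goes to port 3000.&lt;/p&gt;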

&lt;p&gt;&lt;strong&gt;Step 3: Enabling the Site&lt;/strong&gt; To activate this configuration, we need to create a symbolic link from our file in sites-available to sites-enabled.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Testing and Reloading&lt;/strong&gt; Always test your Nginx configuration for syntax errors before reloading. This simple step can save you from accidentally taking your site down.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nginx -t&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If the test is successful, gracefully reload Nginx to apply the new configuration.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl reload nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Your reverse proxy is now live. Any visitor to &lt;code&gt;http://example.com&lt;/code&gt; will have their traffic seamlessly forwarded to your application running on port 3000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 4: Fortifying the Connection - Handling SSL/TLS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today's web, security is non-negotiable. Serving your site over HTTPS encrypts all communication between the client and your server, protecting sensitive data from eavesdroppers. Using Nginx to handle SSL termination is a best practice, as it centralizes certificate management and offloads the computationally expensive encryption/decryption work from your application servers.&lt;br&gt;
While you can use self-signed certificates for testing, for a production site, you need a trusted certificate from a Certificate Authority (CA). Let's Encrypt is a free, automated, and open CA that is perfect for this task, and its certbot tool integrates beautifully with Nginx on Ubuntu.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Installing Certbot&lt;/strong&gt;&lt;br&gt;
First, install the Certbot client and its Nginx plugin.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install certbot python3-certbot-nginx -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Obtaining and Installing the Certificate&lt;/strong&gt;&lt;br&gt;
This is the magic step. Run Certbot with the --nginx plugin, and it will automatically obtain a certificate for your domain and modify your Nginx configuration to use it .&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo certbot --nginx -d example.com -d www.example.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;--nginx:&lt;/strong&gt; Tells Certbot to use the Nginx plugin.&lt;br&gt;
• &lt;strong&gt;-d:&lt;/strong&gt; Specifies the domain names you want the certificate to be valid for.&lt;br&gt;
Certbot will ask you for an email address for urgent renewal and security notices, and then ask you to agree to the terms of service. After that, it will communicate with the Let's Encrypt servers, perform a challenge to prove you control the domain, and then update your Nginx configuration (/etc/nginx/sites-available/myapp) to enable HTTPS.&lt;br&gt;
&lt;strong&gt;What Certbot Changes in Your Configuration:&lt;/strong&gt; After Certbot runs, your server block will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt; &lt;span class="s"&gt;www.example.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;301&lt;/span&gt; &lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="nv"&gt;$server_name$request_uri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt; &lt;span class="s"&gt;www.example.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt; &lt;span class="n"&gt;/etc/letsencrypt/live/example.com/fullchain.pem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt;      
    &lt;span class="n"&gt;/etc/letsencrypt/live/example.com/privkey.pem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    
    &lt;span class="kn"&gt;include&lt;/span&gt; &lt;span class="n"&gt;/etc/letsencrypt/options-ssl-nginx.conf&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_dhparam&lt;/span&gt; &lt;span class="n"&gt;/etc/letsencrypt/ssl-dhparams.pem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notice what happened:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The original HTTP server block now has a return 301 https://... directive, which forces all HTTP traffic to redirect to HTTPS.&lt;/li&gt;
&lt;li&gt;A new server block for port 443 (HTTPS) has been created, containing the paths to your new SSL certificate and key.&lt;/li&gt;
&lt;li&gt;It includes secure configuration files provided by Certbot to ensure modern, strong encryption.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Auto-Renewal&lt;/strong&gt; &lt;br&gt;
Let's Encrypt certificates are valid for 90 days. Certbot installs a cron job or systemd timer that will automatically attempt to renew your certificates before they expire. You can test the renewal process with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo certbot renew --dry-run&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With this in place, your reverse proxy is now a secure gateway, ensuring all traffic to and from your users is encrypted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 5: Supercharging Performance - Caching, Compression, and Tuning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that traffic is flowing securely, it's time to optimize. Nginx offers a powerful suite of tools to make your applications feel faster and handle more load. We will explore some key performance-enhancing features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.1 Enabling Gzip Compression&lt;/strong&gt;&lt;br&gt;
Text-based resources like HTML, CSS, and JavaScript can be compressed significantly before being sent over the network, drastically reducing page load times. Enable gzip compression within the http block of your main nginx.conf file, or within your specific server/location blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Enable gzip compression&lt;/span&gt;
    &lt;span class="kn"&gt;gzip&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Compression level (1-9). Level 6 is a good trade-off between CPU and compression.&lt;/span&gt;
    &lt;span class="kn"&gt;gzip_comp_level&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Minimum length of a response to compress (in bytes)&lt;/span&gt;
    &lt;span class="kn"&gt;gzip_min_length&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Compress responses for these MIME types&lt;/span&gt;
    &lt;span class="kn"&gt;gzip_types&lt;/span&gt;
        &lt;span class="nc"&gt;text/plain&lt;/span&gt;
        &lt;span class="nc"&gt;text/css&lt;/span&gt;
        &lt;span class="nc"&gt;text/xml&lt;/span&gt;
        &lt;span class="nc"&gt;text/javascript&lt;/span&gt;
        &lt;span class="nc"&gt;application/json&lt;/span&gt;
        &lt;span class="nc"&gt;application/javascript&lt;/span&gt;
        &lt;span class="nc"&gt;application/xml&lt;/span&gt;&lt;span class="s"&gt;+rss&lt;/span&gt;
        &lt;span class="nc"&gt;application/rss&lt;/span&gt;&lt;span class="s"&gt;+xml&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Vary: Accept-Encoding header&lt;/span&gt;
    &lt;span class="kn"&gt;gzip_vary&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Enable compression for proxied requests&lt;/span&gt;
    &lt;span class="kn"&gt;gzip_proxied&lt;/span&gt; &lt;span class="s"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration tells Nginx to compress eligible responses on-the-fly, significantly reducing bandwidth usage and improving load times.&lt;/p&gt;
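&lt;p&gt;To get a feel for what compression buys, you can reproduce the effect locally. This is only an illustration using the gzip command-line tool at level 6 (mirroring gzip_comp_level 6); real savings depend on your content:&lt;/p&gt;

```shell
# Generate a few kilobytes of repetitive CSS-like text (HTML/CSS/JS compresses
# similarly well), then compare the raw size with the gzip level-6 size.
sample=$(printf 'body { margin: 0; padding: 0; color: #333; }\n%.0s' $(seq 1 150))
raw=$(printf '%s' "$sample" | wc -c)
gz=$(printf '%s' "$sample" | gzip -6 | wc -c)
echo "raw=${raw} bytes, gzipped=${gz} bytes"
```

&lt;p&gt;On repetitive text like this, the compressed output is a small fraction of the original, which is exactly the bandwidth Nginx saves on every eligible response.&lt;/p&gt;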

&lt;p&gt;&lt;strong&gt;5.2 Implementing Caching for Static Assets&lt;/strong&gt;&lt;br&gt;
For files that don't change often (images, CSS, JavaScript), you can instruct Nginx to cache them. This serves two purposes: it offloads work from your backend server and allows clients to reuse downloaded files.&lt;br&gt;
First, define a cache path in the http block of your main /etc/nginx/nginx.conf.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http {&lt;br&gt;
    # ...&lt;br&gt;
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m max_size=1g inactive=60m use_temp_path=off;&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;/var/cache/nginx:&lt;/strong&gt; The directory on disk where the cache will be stored.&lt;br&gt;
• &lt;strong&gt;keys_zone=static_cache:10m:&lt;/strong&gt; Creates a shared memory zone named static_cache of 10 MB to store cache keys and metadata.&lt;br&gt;
• &lt;strong&gt;max_size=1g:&lt;/strong&gt; Limits the physical cache size on disk to 1 gigabyte.&lt;br&gt;
• &lt;strong&gt;inactive=60m:&lt;/strong&gt; Removes items from the cache if they haven't been accessed in 60 minutes.&lt;br&gt;
• &lt;strong&gt;use_temp_path=off:&lt;/strong&gt; Writes cached files directly into the cache directory, skipping the extra copy through a temporary location.&lt;br&gt;
Then, in your server block, you can apply this cache to specific locations. For example, to cache all images, CSS, and JavaScript files for a day:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s"&gt;.(jpg|jpeg|png|gif|ico|css|js)&lt;/span&gt;$ &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache&lt;/span&gt; &lt;span class="s"&gt;static_cache&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_valid&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt; &lt;span class="s"&gt;24h&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_valid&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt; &lt;span class="mi"&gt;1m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_use_stale&lt;/span&gt; &lt;span class="s"&gt;error&lt;/span&gt; &lt;span class="s"&gt;timeout&lt;/span&gt; &lt;span class="s"&gt;updating&lt;/span&gt; &lt;span class="s"&gt;http_500&lt;/span&gt; 
        &lt;span class="s"&gt;http_502&lt;/span&gt; &lt;span class="s"&gt;http_503&lt;/span&gt; &lt;span class="s"&gt;http_504&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;add_header&lt;/span&gt; &lt;span class="s"&gt;X-Proxy-Cache&lt;/span&gt; &lt;span class="nv"&gt;$upstream_cache_status&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;expires&lt;/span&gt; &lt;span class="s"&gt;30d&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="c1"&gt;# ... other headers&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache&lt;/span&gt; &lt;span class="s"&gt;my_app_cache&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;# You could have another cache for dynamic content&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_bypass&lt;/span&gt; &lt;span class="nv"&gt;$http_pragma&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_no_cache&lt;/span&gt; &lt;span class="nv"&gt;$http_pragma&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• &lt;strong&gt;proxy_cache_valid 200 302 24h;&lt;/strong&gt;: Cache responses with status codes 200 and 302 for 24 hours.&lt;br&gt;
• &lt;strong&gt;expires 30d;&lt;/strong&gt;: Sets the Expires and Cache-Control headers for the client browser, telling them they can cache these assets for 30 days.&lt;br&gt;
• &lt;strong&gt;add_header X-Proxy-Cache ...&lt;/strong&gt;: Adds a custom header to the response, which is useful for debugging to see if a response came from the cache (HIT) or the backend (MISS).&lt;/p&gt;
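&lt;p&gt;A side note on the levels=1:2 parameter from the earlier proxy_cache_path line: Nginx names each cache file after the MD5 hash of the cache key and builds the directory levels from the tail of that hash. The key below is an assumption (by default the key is $scheme$proxy_host$request_uri); the sketch only shows the on-disk layout:&lt;/p&gt;

```shell
# Where would a cached response live under proxy_cache_path ... levels=1:2?
key="http://localhost:3000/style.css"                  # assumed cache key
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
level1=$(printf '%s' "$hash" | tail -c 1)              # last 1 char  (levels=1)
level2=$(printf '%s' "$hash" | tail -c 3 | head -c 2)  # preceding 2 chars (:2)
cache_file="/var/cache/nginx/${level1}/${level2}/${hash}"
echo "$cache_file"
```

&lt;p&gt;Knowing this layout is handy when debugging: you can locate individual cached files by hashing their keys.&lt;/p&gt;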

&lt;p&gt;&lt;strong&gt;5.3 Tuning Worker Processes and Connections&lt;/strong&gt;&lt;br&gt;
Nginx's performance is heavily influenced by its core settings in the main nginx.conf file. A good starting point is to let Nginx automatically determine the optimal number of worker processes.&lt;br&gt;
At the top of /etc/nginx/nginx.conf:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;user www-data;&lt;br&gt;
# Set worker_processes to auto (matches the number of CPU cores)&lt;br&gt;
worker_processes auto;&lt;br&gt;
pid /run/nginx.pid;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;Each&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt; &lt;span class="s"&gt;can&lt;/span&gt; &lt;span class="s"&gt;handle&lt;/span&gt; &lt;span class="s"&gt;up&lt;/span&gt; &lt;span class="s"&gt;to&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt; &lt;span class="s"&gt;connections&lt;/span&gt; &lt;span class="s"&gt;simultaneously&lt;/span&gt;
    &lt;span class="s"&gt;worker_connections&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;**Efficient&lt;/span&gt; &lt;span class="s"&gt;handling&lt;/span&gt; &lt;span class="s"&gt;of&lt;/span&gt; &lt;span class="s"&gt;multiple&lt;/span&gt; &lt;span class="s"&gt;connections**&lt;/span&gt;
    &lt;span class="s"&gt;use&lt;/span&gt; &lt;span class="s"&gt;epoll&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;multi_accept&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;**Basic&lt;/span&gt; &lt;span class="s"&gt;settings&lt;/span&gt; &lt;span class="s"&gt;for&lt;/span&gt; &lt;span class="s"&gt;efficiency**&lt;/span&gt;
    &lt;span class="s"&gt;sendfile&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;tcp_nopush&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;tcp_nodelay&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;keepalive_timeout&lt;/span&gt; &lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;keepalive_requests&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;types_hash_max_size&lt;/span&gt; &lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;**&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt; &lt;span class="s"&gt;rest&lt;/span&gt; &lt;span class="s"&gt;of&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt; &lt;span class="s"&gt;block**&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• &lt;strong&gt;worker_processes auto;:&lt;/strong&gt; Sets the number of worker processes equal to the number of CPU cores, allowing Nginx to fully utilize all available processing power.&lt;br&gt;
• &lt;strong&gt;worker_connections 4096;:&lt;/strong&gt; Increases the number of simultaneous connections each worker can handle.&lt;br&gt;
• &lt;strong&gt;sendfile, tcp_nopush, tcp_nodelay:&lt;/strong&gt; These are OS-level optimizations for sending files and packets more efficiently.&lt;br&gt;
• &lt;strong&gt;keepalive_timeout and keepalive_requests:&lt;/strong&gt; Allow clients to reuse a single connection for multiple requests, reducing the overhead of creating new connections.&lt;/p&gt;
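&lt;p&gt;A quick back-of-the-envelope check of what these numbers mean in practice (the core count is an assumed example):&lt;/p&gt;

```shell
# Theoretical connection ceiling = worker_processes x worker_connections.
cores=4           # assumption: worker_processes auto resolved to 4 cores
connections=4096  # worker_connections from the events block
max_connections=$((cores * connections))
echo "theoretical max connections: ${max_connections}"
```

&lt;p&gt;Keep in mind that a proxied request consumes two connections (one to the client, one to the upstream), so the practical client ceiling is roughly half that figure, and the process's open-file limit must be large enough to match.&lt;/p&gt;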

&lt;p&gt;By implementing these performance strategies, you transform your Nginx reverse proxy from a simple traffic router into a powerful optimization layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chapter 6: Advanced Scenarios and Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With a solid foundation in place, let's look at a couple of common advanced scenarios and how to troubleshoot when things go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6.1 Load Balancing with upstream&lt;/strong&gt;&lt;br&gt;
If your application grows and you need to run multiple instances of your backend server (e.g., on different ports or different machines), Nginx can act as a load balancer. You define a group of servers using the upstream module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;upstream&lt;/span&gt; &lt;span class="s"&gt;backend_servers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the least-connected load balancing method&lt;/span&gt;
    &lt;span class="kn"&gt;least_conn&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="nf"&gt;10.0.0.1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt; &lt;span class="s"&gt;weight=3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="nf"&gt;10.0.0.2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="nf"&gt;10.0.0.3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt; &lt;span class="s"&gt;backup&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;301&lt;/span&gt; &lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="nv"&gt;$server_name$request_uri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt; &lt;span class="s"&gt;http2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;#... ssl certificate configuration ...&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://backend_servers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• &lt;strong&gt;least_conn;&lt;/strong&gt;: Nginx will pass a request to the server with the fewest active connections.&lt;br&gt;
• &lt;strong&gt;weight=3&lt;/strong&gt;: This server will receive three times as many connections as the others.&lt;br&gt;
• &lt;strong&gt;backup&lt;/strong&gt;: This server will only be used if all the other servers are unavailable.&lt;br&gt;
This setup not only distributes load but also provides automatic failover, greatly increasing your application's resilience.&lt;/p&gt;
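&lt;p&gt;Failover can be tuned further with the standard &lt;code&gt;max_fails&lt;/code&gt; and &lt;code&gt;fail_timeout&lt;/code&gt; upstream parameters, which control Nginx's passive health checks. A sketch building on the pool above (the thresholds here are illustrative):&lt;/p&gt;

```nginx
upstream backend_servers {
    least_conn;
    # Mark a server as unavailable after 3 failed attempts,
    # then try it again after 30 seconds.
    server 10.0.0.1:3000 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;
}
```

&lt;p&gt;While a marked server is considered unavailable, Nginx routes around it automatically and resumes sending traffic once it responds again.&lt;/p&gt;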

&lt;p&gt;&lt;strong&gt;6.2 Handling WebSocket Connections&lt;/strong&gt;&lt;br&gt;
Applications using WebSockets (like live chat or real-time dashboards) require a persistent connection. Proxying WebSockets with Nginx requires a special configuration to handle the Upgrade header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/wsapp/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://websocket-backend&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;"upgrade"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Increase timeouts for long-lived connections&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_read_timeout&lt;/span&gt; &lt;span class="s"&gt;3600s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_send_timeout&lt;/span&gt; &lt;span class="s"&gt;3600s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key directives are &lt;code&gt;proxy_http_version 1.1&lt;/code&gt; and the explicit setting of the &lt;code&gt;Upgrade&lt;/code&gt; and &lt;code&gt;Connection&lt;/code&gt; headers, which are required for the WebSocket handshake to succeed.&lt;/p&gt;
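&lt;p&gt;One refinement worth knowing: instead of hard-coding &lt;code&gt;Connection "upgrade"&lt;/code&gt;, a &lt;code&gt;map&lt;/code&gt; block in the &lt;code&gt;http&lt;/code&gt; context can set the header only when the client actually requests an upgrade, so plain HTTP requests to the same location keep normal keep-alive behavior. A sketch:&lt;/p&gt;

```nginx
# In the http {} context: derive the Connection header value
# from whether the client sent an Upgrade header.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block:
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection $connection_upgrade;
```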

&lt;p&gt;&lt;strong&gt;6.3 Common Troubleshooting Steps&lt;/strong&gt;&lt;br&gt;
When something isn't working, here’s a systematic approach to diagnosing the issue.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check Nginx Configuration Syntax&lt;/strong&gt;: This is always the first step.&lt;br&gt;
&lt;code&gt;sudo nginx -t&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Nginx Error Logs:&lt;/strong&gt; The error log is your best friend. It will often give you a precise reason for a failure.&lt;br&gt;
&lt;code&gt;sudo tail -f /var/log/nginx/error.log&lt;/code&gt;&lt;br&gt;
Look for lines mentioning &lt;code&gt;connect() failed&lt;/code&gt; or &lt;code&gt;permission denied&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Your Backend Application:&lt;/strong&gt; Is your backend application actually running and listening on the expected port? Test it locally on the server.&lt;br&gt;
&lt;code&gt;curl http://localhost:3000&lt;/code&gt;&lt;br&gt;
If this fails, the problem is with your application, not Nginx.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Firewall and SELinux&lt;/strong&gt;: Ensure that no firewall is blocking the connection between Nginx and your backend. If you are using a cloud server, also check the cloud provider's security groups. On some systems, SELinux might block Nginx from making network connections. Check the audit logs (&lt;code&gt;/var/log/audit/audit.log&lt;/code&gt;) for denials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Permissions&lt;/strong&gt;: Ensure that the Nginx user (usually &lt;code&gt;www-data&lt;/code&gt;) has read access to your SSL certificates and the directories containing your static files.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Setting up a reverse proxy with Nginx on Ubuntu is a fundamental skill for anyone deploying modern web applications. We have journeyed from a simple traffic-forwarding setup to a hardened, high-performance gateway. You have learned how to:&lt;br&gt;
• Route traffic seamlessly using proxy_pass and proxy_set_header directives.&lt;br&gt;
• Fortify your application with automated SSL/TLS certificates from Let's Encrypt, ensuring all traffic is encrypted and trusted.&lt;br&gt;
• Supercharge performance through gzip compression, intelligent caching strategies, and core system tuning.&lt;br&gt;
By implementing these configurations, your Nginx server does more than just serve content; it becomes an intelligent layer that protects your backend, optimizes the user experience, and provides the flexibility to scale your infrastructure.&lt;br&gt;
The beauty of Nginx lies in its stability and its granular control. As your application evolves and your needs grow more complex—whether it's handling WebSockets, load balancing across a global fleet of servers, or implementing sophisticated rate limiting—your Nginx configuration can grow with you. The commands and concepts in this guide form the foundation upon which you can build a robust, secure, and lightning-fast web presence.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Python HTTP Server on Ubuntu</title>
      <dc:creator>Saikat Goswami</dc:creator>
      <pubDate>Tue, 17 Mar 2026 11:39:03 +0000</pubDate>
      <link>https://dev.to/saikat_goswami_fd81ed5950/python-http-server-on-ubuntu-3hp3</link>
      <guid>https://dev.to/saikat_goswami_fd81ed5950/python-http-server-on-ubuntu-3hp3</guid>
      <description>&lt;p&gt;Running a lightweight HTTP server is one of the easiest ways to test, present, or prototype your work on Ubuntu. Python makes this process simple. Its built-in HTTP server provides a fast way to serve files, test APIs, or validate network behavior without heavy web frameworks or external services.&lt;br&gt;
This article walks through how to set up and run a Python HTTP server on Ubuntu, how to secure and customize it, and how to use it in different real-world scenarios.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Why Use Python's HTTP Server?&lt;/strong&gt;
Ubuntu already includes tools like Apache or Nginx, so why use Python instead?
The Python HTTP server is ideal when you need:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• &lt;strong&gt;A quick test environment&lt;/strong&gt; – Serve a directory over HTTP with a single command. No config files, no services, no packages.&lt;br&gt;
• &lt;strong&gt;A simple way to demo files&lt;/strong&gt; – Colleagues can open your work through their browser without SSH or shared drives.&lt;br&gt;
• &lt;strong&gt;A sandbox for API experiments&lt;/strong&gt; – You can define a custom handler that responds to GET, POST, PUT, or DELETE requests.&lt;br&gt;
• &lt;strong&gt;A portable tool&lt;/strong&gt; – Works the same on Ubuntu, macOS, or Windows.&lt;/p&gt;

&lt;p&gt;If Python is installed, you’re ready.&lt;br&gt;
In short, it’s perfect for learning, teaching, prototyping, and troubleshooting.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Setting Up Your Ubuntu Environment&lt;/strong&gt;
Most modern Ubuntu distributions include Python 3 by default. Check your version with:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;python3 --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If Python isn’t installed or you want to update it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt update&lt;br&gt;
sudo apt install python3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For development or testing, it’s helpful to install:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install python3-pip&lt;br&gt;
sudo apt install python3-venv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These tools let you create virtual environments or extend the simple HTTP server with Python packages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Simple Python HTTP Server&lt;/strong&gt;
Python 3 includes the http.server module to instantly launch a basic web server. Navigate to your desired directory:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;cd /path/to/your/folder&lt;br&gt;
python3 -m http.server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The HTTP server will start, and the console will show:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ckrmgpi6b63f5a4opxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ckrmgpi6b63f5a4opxm.png" alt="Fig. 1 Linux console output" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open a browser and go to &lt;a href="http://localhost:8000" rel="noopener noreferrer"&gt;http://localhost:8000&lt;/a&gt;. The browser will show:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F729cfthe7x1llh8hbbek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F729cfthe7x1llh8hbbek.png" alt="Fig 2. Browser shows Directory listing of the shared directory" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since there is no index.html, the page shows a list of files in the directory from which the command was typed. &lt;br&gt;
To use a different port, type:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -m http.server 8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To allow access from other devices, type:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -m http.server 8000 --bind 0.0.0.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now the server is live, and anyone who can reach your IP and port can view your directory.&lt;/p&gt;
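&lt;p&gt;Since Python 3.7 the module also accepts a &lt;code&gt;--directory&lt;/code&gt; flag (&lt;code&gt;python3 -m http.server 8000 --directory /var/www&lt;/code&gt;), and the same behavior is available from a script through the handler's &lt;code&gt;directory&lt;/code&gt; parameter. A minimal sketch that serves a folder programmatically and fetches a file from it:&lt;/p&gt;

```python
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# The `directory` parameter of SimpleHTTPRequestHandler (Python 3.7+)
# is what the --directory CLI flag uses under the hood.
def serve_directory(directory, port=0):
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    httpd = socketserver.TCPServer(("127.0.0.1", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd  # httpd.server_address[1] holds the chosen port

# Usage: serve a temporary directory and request a file from it.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "hello.txt"), "w") as f:
        f.write("hi")
    httpd = serve_directory(tmp)
    port = httpd.server_address[1]
    body = urllib.request.urlopen(
        f"http://127.0.0.1:{port}/hello.txt").read()
    httpd.shutdown()
```

&lt;p&gt;Passing port 0 lets the OS pick a free port, which is handy in tests.&lt;/p&gt;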

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Managing Firewall Rules with UFW&lt;/strong&gt;
Ubuntu uses UFW (Uncomplicated Firewall) for firewall management. To enable UFW, type:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;sudo ufw enable&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To allow external access, type:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo ufw allow 8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command &lt;code&gt;sudo ufw allow 8000&lt;/code&gt; tells the firewall (UFW) to &lt;strong&gt;open port 8000&lt;/strong&gt; (both TCP and UDP) for incoming connections, allowing external clients to reach services running on that port.&lt;/p&gt;

&lt;p&gt;Then reload the firewall rules:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo ufw reload&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above command reloads the firewall rules without disabling or re-enabling UFW.&lt;/p&gt;

&lt;p&gt;Check status:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo ufw status&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For local-only access:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -m http.server 8000 --bind 127.0.0.1&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Custom Python HTTP Server&lt;/strong&gt;
The built-in command is handy, but sometimes you’ll want to control responses—like adding headers, handling POST requests, or serving APIs.
Example – Simple API endpoint:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class SimpleAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            response = {"server": "running", "path": self.path}
            self.wfile.write(json.dumps(response).encode())
        else:
            self.send_response(404)
            self.end_headers()

def run_server():
    server = HTTPServer(("localhost", 8000), SimpleAPI)
    print("Server running at http://localhost:8000/")
    server.serve_forever()

if __name__ == "__main__":
    run_server()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 server.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Test it in your browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://localhost:8000/status" rel="noopener noreferrer"&gt;http://localhost:8000/status&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F580rwq4hj70vca97dmxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F580rwq4hj70vca97dmxw.png" alt="Fig 3. Browser will show:" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure lets you build mock APIs or test behavior without deploying a full web framework.&lt;/p&gt;
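&lt;p&gt;The same pattern extends to POST. The hypothetical handler below echoes back any JSON payload it receives; the &lt;code&gt;Content-Length&lt;/code&gt; header tells it how many bytes of the request body to read from &lt;code&gt;rfile&lt;/code&gt;:&lt;/p&gt;

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # Content-Length tells us how much of the body to read.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"received": payload}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS choose a free port.
server = HTTPServer(("127.0.0.1", 0), EchoAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Exercise the endpoint with a JSON POST request.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/echo",
    data=json.dumps({"ping": 1}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
```

&lt;p&gt;You could test the same endpoint from the shell with &lt;code&gt;curl -X POST -d '{"ping": 1}' http://localhost:&amp;lt;port&amp;gt;/echo&lt;/code&gt;.&lt;/p&gt;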

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Running the Server in the Background&lt;/strong&gt;
To keep the server running long-term, use tools like:
• systemd
• tmux
• screen
• nohup&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example with nohup:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nohup python3 -m http.server 8000 &amp;amp;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example with systemd:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/systemd/system/python-http.service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add:&lt;br&gt;
[Unit]&lt;br&gt;
Description=Python HTTP Server&lt;br&gt;
After=network.target&lt;/p&gt;

&lt;p&gt;[Service]&lt;br&gt;
ExecStart=/usr/bin/python3 -m http.server 8000 --directory /var/www&lt;br&gt;
WorkingDirectory=/var/www&lt;br&gt;
Restart=always&lt;/p&gt;

&lt;p&gt;[Install]&lt;br&gt;
WantedBy=multi-user.target&lt;/p&gt;

&lt;p&gt;Enable and start:&lt;br&gt;
&lt;code&gt;sudo systemctl enable python-http&lt;br&gt;
sudo systemctl start python-http&lt;br&gt;
sudo systemctl status python-http&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The server will now start automatically on boot.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Adding HTTPS with a Reverse Proxy&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Python’s default server doesn’t support HTTPS, but you can add it using Nginx as a reverse proxy.&lt;/p&gt;

&lt;p&gt;Install Nginx:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Configuration example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;server {&lt;br&gt;
    listen 80;&lt;br&gt;
    location / {&lt;br&gt;
        proxy_pass http://127.0.0.1:8000;&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Reload Nginx:&lt;br&gt;
&lt;code&gt;sudo systemctl reload nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add HTTPS with Let’s Encrypt:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install certbot python3-certbot-nginx&lt;br&gt;
sudo certbot --nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now your Python server is securely accessible via Nginx.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting Common Issues&lt;/strong&gt;
• Address already in use: another process (possibly a previous &lt;code&gt;http.server&lt;/code&gt; instance) is already listening on port 8000. Find it with:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;sudo lsof -i :8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command prints the process ID (PID) of the program listening on port 8000.&lt;/p&gt;

&lt;p&gt;Stop that process with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo kill &amp;lt;pid&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;• Permission denied: Use accessible directories or adjust permissions.&lt;br&gt;&lt;br&gt;
• Cannot access from other devices: Check firewall, bind address, and router isolation.&lt;/p&gt;
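&lt;p&gt;A related trick for port collisions: binding to port 0 asks the OS for a free ephemeral port, so nothing ever fights over 8000 (&lt;code&gt;python3 -m http.server 0&lt;/code&gt; uses the same mechanism). A minimal sketch:&lt;/p&gt;

```python
import socket

# Bind to port 0 and let the OS pick a free ephemeral port.
def free_port(host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]

port = free_port()  # e.g. pass this to http.server programmatically
```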

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Running a Python HTTP server on Ubuntu is simple yet powerful. With one command, you can share files, test web content, or build temporary APIs. With a bit of customization, it becomes a versatile development tool for any workflow.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>python</category>
      <category>tutorial</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>SSH in Ubuntu — Complete Explanation With Code Samples</title>
      <dc:creator>Saikat Goswami</dc:creator>
      <pubDate>Tue, 17 Mar 2026 11:37:09 +0000</pubDate>
      <link>https://dev.to/saikat_goswami_fd81ed5950/ssh-in-ubuntu-complete-explanation-with-code-samples-3a8m</link>
      <guid>https://dev.to/saikat_goswami_fd81ed5950/ssh-in-ubuntu-complete-explanation-with-code-samples-3a8m</guid>
      <description>&lt;p&gt;&lt;strong&gt;SSH (Secure Shell)&lt;/strong&gt; is a protocol that lets you securely connect to a remote Linux machine over a network. It provides:&lt;br&gt;
• Encrypted login&lt;br&gt;
• Secure file transfer&lt;br&gt;
• Remote command execution&lt;br&gt;
• Port forwarding and tunneling&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Install SSH (OpenSSH Server &amp;amp; Client)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Check if SSH client is installed (usually preinstalled)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;ssh -V&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install SSH client&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt update&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo apt install openssh-client&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install SSH server (needed if you want others to connect to your PC)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install openssh-server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start and enable SSH service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl start ssh&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl enable ssh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check service status&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl status ssh&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Find Your Machine’s IP Address&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want to connect to this machine from another device:&lt;br&gt;
&lt;code&gt;ip a&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Look under:&lt;br&gt;
&lt;code&gt;inet 192.168.x.x&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Connect to a Remote Machine Using SSH&lt;/strong&gt;
Basic SSH command:
&lt;code&gt;ssh username@hostname&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Examples:&lt;br&gt;
&lt;code&gt;ssh ubuntu@192.168.1.10&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ssh saikat@myserver.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;SSH will ask for the remote password.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSH Using a Custom Port&lt;/strong&gt;&lt;br&gt;
If the server runs SSH on another port:&lt;br&gt;
&lt;code&gt;ssh -p 2222 user@server&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generate SSH Keys (Passwordless Login)&lt;/strong&gt;&lt;br&gt;
SSH keys are more secure and convenient than passwords.&lt;br&gt;
Generate keys&lt;br&gt;
&lt;code&gt;ssh-keygen&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates:&lt;br&gt;
• &lt;code&gt;Private key → ~/.ssh/id_rsa&lt;/code&gt;&lt;br&gt;
• &lt;code&gt;Public key → ~/.ssh/id_rsa.pub&lt;/code&gt;&lt;br&gt;
Press Enter to accept defaults.&lt;br&gt;
Copy your public key to the server&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh-copy-id user@server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you can log in without entering a password.&lt;/p&gt;
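&lt;p&gt;To avoid retyping the user, port, and key options every time, they can live in &lt;code&gt;~/.ssh/config&lt;/code&gt;. A sketch (the host alias and values below just reuse the examples from this article):&lt;/p&gt;

```sshconfig
Host myserver
    HostName 192.168.1.10
    User saikat
    Port 2222
    IdentityFile ~/.ssh/id_rsa
```

&lt;p&gt;After that, &lt;code&gt;ssh myserver&lt;/code&gt; is enough.&lt;/p&gt;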

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SSH Server Settings (Advanced)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SSH server config file:&lt;br&gt;
&lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can modify:&lt;br&gt;
• Port number&lt;br&gt;
• Root login permissions&lt;br&gt;
• Key-only authentication&lt;/p&gt;

&lt;p&gt;Example: change SSH port&lt;br&gt;
Open the file:&lt;br&gt;
&lt;code&gt;sudo nano /etc/ssh/sshd_config&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ssh"&gt;&lt;code&gt;&lt;span class="k"&gt;Port&lt;/span&gt; &lt;span class="m"&gt;2222&lt;/span&gt;
&lt;span class="k"&gt;PermitRootLogin&lt;/span&gt; &lt;span class="no"&gt;no&lt;/span&gt;
&lt;span class="k"&gt;PasswordAuthentication&lt;/span&gt; &lt;span class="no"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart SSH:&lt;br&gt;
&lt;code&gt;sudo systemctl restart ssh&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Running Remote Commands with SSH&lt;/strong&gt;&lt;br&gt;
You can execute commands directly:&lt;br&gt;
&lt;code&gt;ssh user@server "ls -l /var/www"&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ssh user@server "sudo apt update"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Copy Files via SCP (Secure Copy)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Upload a file to remote server:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;scp file.txt user@server:/home/user/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download from remote:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;scp user@server:/var/log/syslog .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy a folder recursively:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;scp -r myfolder user@server:/home/user/&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Using SFTP (SSH File Transfer)&lt;/strong&gt;
Start SFTP session:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;sftp user@server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In SFTP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;get file.txt
put upload.zip
&lt;span class="nb"&gt;cd&lt;/span&gt; /var/www
Exit:
bye
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;SSH Tunneling (Port Forwarding)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Local port forward (access remote DB locally)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;ssh -L 3307:localhost:3306 user@server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This exposes remote MySQL (3306) on your local machine at 3307.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remote port forward (expose a local website)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;ssh -R 8080:localhost:3000 user@server&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Test SSH in WSL (Windows Subsystem for Linux)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can use SSH from WSL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Client works automatically:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh user@server&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using SSH server inside WSL&lt;/strong&gt;&lt;br&gt;
You need to enable the service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install openssh-server&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo service ssh start&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Find WSL IP:&lt;br&gt;
&lt;code&gt;ip a&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But note:&lt;br&gt;
WSL resets its IP each restart, and incoming connections may not work without configuring Windows firewall/port-forwarding.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Spring Boot/Microservices/Hibernate/JavaScript/React/Linux/Node.js Technical Writer</title>
      <dc:creator>Saikat Goswami</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:17:55 +0000</pubDate>
      <link>https://dev.to/saikat_goswami_fd81ed5950/spring-bootmicroserviceshibernatejavascriptreactlinuxnodejs-technical-writer-1m57</link>
      <guid>https://dev.to/saikat_goswami_fd81ed5950/spring-bootmicroserviceshibernatejavascriptreactlinuxnodejs-technical-writer-1m57</guid>
      <description>&lt;p&gt;Check out my webpage at: &lt;a href="https://saigoswa.github.io/hire-me.html" rel="noopener noreferrer"&gt;https://saigoswa.github.io/hire-me.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hire me as a Linux/Spring Boot/Microservices technical writer.&lt;/p&gt;

&lt;p&gt;List of articles I have written:&lt;/p&gt;

&lt;p&gt;1. Introduction to Clustering and Load Balancing&lt;br&gt;
2. Python HTTP Server on Ubuntu&lt;br&gt;
3. Cascading Types in Hibernate&lt;br&gt;
4. Containerization Architecture&lt;br&gt;
5. Database per Service&lt;br&gt;
6. Installing Podman on Linux&lt;br&gt;
7. SSH on Ubuntu&lt;br&gt;
8. How to create and manage Systemd services on Linux&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
