DEV Community

Ramer Labs

Performance Tuning for Nginx: Reduce TTFB and Boost Throughput

Introduction

If you’ve ever stared at a sluggish Time‑to‑First‑Byte (TTFB) on a site powered by Nginx, you know how frustrating it can be. As an SRE, you’re expected to squeeze every last millisecond out of the stack without breaking the architecture. This tutorial walks you through a pragmatic, step‑by‑step checklist that covers OS tweaks, Nginx core settings, TLS optimizations, and caching strategies. By the end you’ll have a lean, high‑throughput Nginx instance ready for production traffic.


1. Prep the Linux Host

Before touching Nginx, make sure the underlying OS is tuned for network‑heavy workloads.

1.1 Increase File Descriptor Limits

# /etc/security/limits.conf
*               soft    nofile          65535
*               hard    nofile          65535

Apply the new limit to your current shell with ulimit -n 65535, or log out and back in. Note that limits.conf only covers login sessions; if Nginx runs under a service manager such as systemd, the limit must be raised on the unit itself.
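For a systemd-managed Nginx, a drop-in override is the usual route (a sketch; adjust the unit name if yours differs from nginx.service):

```
# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=65535
```

Run systemctl daemon-reload followed by systemctl restart nginx so the override takes effect.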

1.2 Optimize TCP Stack

Add the following to /etc/sysctl.conf and run sysctl -p:

net.core.somaxconn = 65535
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 15
net.core.netdev_max_backlog = 5000

These settings raise the maximum pending connections, enable fast socket reuse, and reduce TIME_WAIT buildup.
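To confirm the reuse settings are actually paying off, you can tally sockets by TCP state; the TIME-WAIT count should stay bounded under load. A quick one-liner using ss (part of iproute2 on most distros):

```shell
# Count sockets per TCP state; watch for TIME-WAIT staying bounded after tuning.
ss -tan | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
```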


2. Core Nginx Worker Configuration

Nginx’s performance hinges on how you size its worker processes and connections.

2.1 Worker Processes & Connections

worker_processes auto;                     # Leverage all CPU cores
worker_rlimit_nofile 65535;                # Match OS limit

events {
    worker_connections 8192;               # Max concurrent connections per worker
    multi_accept on;                       # Accept as many as possible per event loop
    use epoll;                             # Linux‑specific, best for high I/O
}

Why it matters: auto ensures Nginx spawns a process per core, while worker_connections multiplied by worker_processes defines the theoretical max concurrent connections.
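If you want that ceiling for your own host, the math is a one-liner (purely theoretical; real capacity is also bounded by file descriptors, memory, and upstream latency):

```shell
# One worker per core, times 8192 connections per worker.
echo "$(( $(nproc) * 8192 )) theoretical max concurrent connections"
```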

2.2 Keep‑Alive Settings

http {
    keepalive_timeout 65;                 # Keep idle connections open for 65 s
    keepalive_requests 10000;             # Max requests per keep-alive connection
    sendfile on;                          # Required for tcp_nopush to take effect
    tcp_nopush on;                        # Send headers and file data in full packets
    tcp_nodelay on;                       # Don't buffer small writes (disables Nagle's algorithm)
}

Longer keep‑alive reduces handshake overhead for browsers that request multiple assets from the same host.


3. TLS Optimizations

TLS handshakes are a common culprit for high TTFB. Modern Nginx can offload most of the cost.

3.1 Enable Session Tickets & Cache

ssl_session_cache   shared:SSL:20m;
ssl_session_timeout 1d;
ssl_session_tickets on;

A 20 MiB cache holds roughly 80,000 sessions (Nginx fits about 4,000 per megabyte), allowing repeat visitors to skip the full handshake.
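The arithmetic behind that figure, using the roughly 4,000-sessions-per-megabyte density from the Nginx documentation:

```shell
# 20 MiB shared cache at ~4,000 sessions per MiB.
echo "$(( 20 * 4000 )) cached TLS sessions"   # prints: 80000 cached TLS sessions
```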

3.2 Prefer Modern Cipher Suites

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
# ssl_ciphers only governs TLS 1.2 and below; TLS 1.3 suites are enabled automatically.
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256";

TLS 1.3 trims the full handshake to a single round trip. AES-GCM is hardware-accelerated on most server CPUs with AES-NI, while ChaCha20-Poly1305 stays fast on those without it.


4. Compression: Gzip vs Brotli

Compressing assets saves bandwidth and speeds up perceived load time, but it also adds CPU load.

4.1 When to Use Gzip

gzip on;
gzip_types text/plain text/css application/json application/javascript image/svg+xml;
gzip_comp_level 4;   # Balance speed vs compression ratio
gzip_min_length 256;
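To sanity-check what level 4 buys you, compress a representative payload locally (sample.json here is a stand-in for one of your own responses):

```shell
# Compare raw vs gzip level 4 size for a sample file.
orig=$(wc -c < sample.json)
comp=$(gzip -4 -c sample.json | wc -c)
echo "gzip -4: ${orig} -> ${comp} bytes"
```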

4.2 When to Prefer Brotli

If you have the ngx_brotli module compiled, enable it for browsers that support it:

brotli on;
brotli_static on;                # Serve pre‑compressed .br files when available
brotli_types text/plain text/css application/javascript image/svg+xml;

Rule of thumb: Use Brotli for static assets (CSS/JS) and fall back to Gzip for older clients.
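Since brotli_static (and its counterpart gzip_static) serve files compressed ahead of time, you can take compression out of the request path entirely by pre-compressing at deploy time. A sketch, assuming your assets live under /var/www/static; the gzip line works everywhere, the brotli variant needs the brotli CLI installed:

```shell
# Pre-compress CSS/JS once at deploy time; nginx then serves the .gz/.br copies.
find /var/www/static -type f \( -name '*.css' -o -name '*.js' \) \
  -exec gzip -k -9 {} \;        # -k keeps the originals
# If the brotli CLI is available:
# find /var/www/static -type f \( -name '*.css' -o -name '*.js' \) -exec brotli -k -q 11 {} \;
```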


5. Static Asset Caching

Leverage long‑term caching headers to let browsers keep assets for weeks.

location ~* \.(css|js|svg|png|jpg|jpeg|gif|webp)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    try_files $uri $uri/ =404;
}

Combine this with a versioned filename strategy (e.g., app.1a2b3c.js) to bust caches on updates.
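A minimal deploy-time sketch of that versioning scheme, with app.js as a placeholder for your bundle:

```shell
# Copy the bundle to a name embedding the first 8 chars of its content hash.
hash=$(sha256sum app.js | cut -c1-8)
cp app.js "app.${hash}.js"
echo "app.${hash}.js"
```

Because the name changes whenever the content does, the 30-day immutable cache never serves a stale bundle.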


6. Reverse Proxy & Upstream Tuning

If Nginx fronts an application server (Node.js, PHP‑FPM, etc.), fine‑tune the upstream block.

upstream app_backend {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    keepalive 32;                     # Persistent connections to upstream
}

server {
    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # Enable keep‑alive upstream
    }
}

A modest keepalive pool reduces connection churn and improves latency.


7. Monitoring & Alerting

Performance is a moving target. Set up lightweight metrics to catch regressions early.

  • nginx_status endpoint for active connections, request rates, and dropped connections.
  • Prometheus exporter (nginx-prometheus-exporter) to feed Grafana dashboards.
  • Alert on high 5xx rates or average request time > 200 ms.

Example nginx_status block:

server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
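stub_status output is plain text, so an ad-hoc scrape can be a simple awk one-liner. Shown here against a sample of the format rather than a live curl:

```shell
# Extract the active-connection count from stub_status output.
status='Active connections: 3
server accepts handled requests
 10 10 25
Reading: 0 Writing: 1 Waiting: 2'
echo "$status" | awk '/Active connections/ { print $3 }'   # prints: 3
```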

8. Validation Checklist

  • ✅ OS limits raised: ulimit -n >= 65535
  • ✅ TCP stack tuned: sysctl values applied
  • ✅ Workers sized: worker_processes auto & worker_connections 8192
  • ✅ TLS fast: TLS 1.3 + session tickets
  • ✅ Compression: Brotli for modern browsers, Gzip fallback
  • ✅ Cache headers: expires 30d + Cache-Control immutable
  • ✅ Upstream keep-alive: keepalive 32 in upstream block
  • ✅ Monitoring hooked: nginx_status exposed and scraped

Run through this list after each deployment to ensure you haven’t unintentionally rolled back a performance tweak.


Conclusion

Tuning Nginx is less about a single magic switch and more about a disciplined series of small, measurable adjustments. By aligning OS parameters, worker configuration, TLS settings, compression, and caching, you can routinely shave 30‑50 ms off TTFB and sustain thousands of concurrent connections on modest hardware.

For deeper dives into production‑grade Nginx setups, consider checking out the resources on https://lacidaweb.com – they often surface real‑world case studies that complement the checklist above.
