## Introduction
As a DevOps lead, you’ve probably seen Nginx shoulder the load for everything from static assets to API gateways. While the default configuration works for a quick start, production traffic demands fine‑tuned settings to keep latency low and throughput high. In this practical guide we’ll walk through six concrete tweaks you can apply today, explain why they matter, and show you the exact configuration snippets you need.
## 1. Right-size Worker Processes & Connections
With `worker_processes auto;` (the default in most distribution packages), Nginx spawns one worker process per CPU core. Still, verify that the detected count matches the actual core count, especially on cloud VMs where the advertised vCPU count can differ from the cores the guest kernel actually sees.
```nginx
# /etc/nginx/nginx.conf
worker_processes     auto;   # Let Nginx detect cores automatically
worker_rlimit_nofile 65535;  # Raise the open-file limit for all workers

events {
    worker_connections 8192;  # Max simultaneous connections per worker
    multi_accept       on;    # Accept as many connections as possible per event-loop iteration
}
```
**Why it matters:** Each worker can handle `worker_connections` concurrent connections. Multiply that by `worker_processes` to get the theoretical max connections. Setting these values too low throttles traffic; too high can exhaust system resources.
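Before trusting the math, confirm what the kernel actually exposes. A quick sanity check from the shell (process names assume a standard Linux package install):

```bash
# Cores the OS actually sees (what `worker_processes auto;` will use)
nproc

# One master plus N worker processes should be running
ps -o pid=,comm= -C nginx

# Effective open-file limit of a live worker; should reflect worker_rlimit_nofile
cat /proc/"$(pgrep -f 'nginx: worker' | head -n1)"/limits | grep 'open files'
```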
## 2. Optimize Keep-Alive Settings

Persistent connections reduce the TCP handshake overhead for subsequent requests. However, an overly generous `keepalive_timeout` can tie up worker connections on idle clients.
```nginx
http {
    keepalive_timeout  15;   # Seconds an idle connection stays open after a request
    keepalive_requests 100;  # Max requests served over one keep-alive connection
    send_timeout       10s;  # Drop clients that stop reading the response for 10s
}
```
**Tip:** Monitor active vs. idle connections via the `stub_status` module. If idle (`Waiting`) connections dominate, lower the timeout.
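If you don't already expose it, a minimal `stub_status` endpoint looks like this (assuming your build includes the module, as most distro packages do):

```nginx
server {
    listen 127.0.0.1:8080;      # keep the status page off public interfaces
    location /nginx_status {
        stub_status;            # reports Active, Reading, Writing, Waiting
        allow 127.0.0.1;
        deny  all;
    }
}
```

From the host, `curl -s http://127.0.0.1:8080/nginx_status` prints the counters; `Waiting` is the idle keep-alive connections.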
## 3. Enable Gzip or Brotli Compression Wisely

Compressing responses cuts bandwidth and improves perceived speed, but the CPU cost can become a bottleneck. Use conditional compression:
```nginx
http {
    # Gzip – universally supported fallback
    gzip            on;
    gzip_types      text/css application/javascript image/svg+xml;
    gzip_min_length 1024;  # Only compress responses larger than 1 KB
    gzip_comp_level 4;     # Balance CPU cost vs. compression ratio

    # Brotli – higher ratio; requires the third-party ngx_brotli module
    brotli            on;
    brotli_types      text/css application/javascript image/svg+xml;
    brotli_comp_level 5;
    brotli_min_length 1024;
}
```
**Best practice:** Enable both, letting Nginx negotiate the best algorithm based on the client's `Accept-Encoding` header.
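You can verify the negotiation from the outside with curl (hostname and asset path here are illustrative):

```bash
# Offer both encodings and see which one the server picked
curl -sI -H 'Accept-Encoding: br, gzip' https://example.com/app.js \
  | grep -i '^content-encoding'
# Expect "content-encoding: br" with ngx_brotli loaded, "gzip" otherwise
```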
## 4. Fine-Tune Buffer Sizes for Large Headers & Files

When serving large files or handling APIs with big JSON payloads, the default buffer sizes may force Nginx to spill data to temporary files on disk.
```nginx
http {
    client_body_buffer_size     128k;
    client_max_body_size        50m;    # Adjust to your upload limits
    large_client_header_buffers 4 16k;

    proxy_buffer_size           64k;
    proxy_buffers               8 64k;
    proxy_busy_buffers_size     128k;
}
```
**Result:** Reduces the chance of `502 Bad Gateway` errors under load and keeps memory usage predictable.
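Nginx tells you when buffers are still too small: check the error log for spill-to-disk and oversized-header warnings before and after tuning (log path assumes a default layout):

```bash
# Temp-file buffering and "too big header" warnings point at undersized buffers
grep -E 'buffered to a temporary file|too big header' /var/log/nginx/error.log | tail -n 20
```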
## 5. Harden TLS without Sacrificing Speed
TLS termination is a common Nginx responsibility. Modern ciphers provide both security and performance.
```nginx
server {
    listen 443 ssl http2;

    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    # TLS 1.3 suites are enabled by default; ssl_ciphers covers TLS 1.2 clients
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305";
    ssl_prefer_server_ciphers on;

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;

    ssl_stapling        on;  # OCSP stapling; needs a `resolver` directive and
    ssl_stapling_verify on;  # the CA chain in `ssl_trusted_certificate` to work
}
```
**Why it matters:** TLS 1.3 cuts a full round trip from the handshake, and enabling HTTP/2 (`http2`) lets browsers multiplex requests over a single connection.
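A quick external check that the policy took effect (hostname is illustrative; `-brief` needs OpenSSL 1.1.0+):

```bash
# Should succeed and print the negotiated protocol and cipher
openssl s_client -connect example.com:443 -tls1_3 -brief </dev/null

# Should be refused outright, since ssl_protocols excludes TLS 1.1
openssl s_client -connect example.com:443 -tls1_1 </dev/null
```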
## 6. Leverage Built-in Caching for Static Assets
Off‑loading static content to Nginx’s fast cache can shave milliseconds off every request.
```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:100m
                     inactive=60m use_temp_path=off;
}
```

```nginx
server {
    location /assets/ {
        alias /var/www/app/public/assets/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        try_files $uri $uri/ =404;
    }

    location /api/ {
        proxy_pass http://backend:8080;
        proxy_cache STATIC;
        proxy_cache_valid 200 10m;
        proxy_cache_use_stale error timeout updating;
    }
}
```
**Key points:**

- `expires` and `Cache-Control` tell browsers to keep assets for a month.
- `proxy_cache` stores API responses that are safe to cache, reducing backend load.
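To confirm the cache is earning its keep, you can surface the cache status per response; a small optional addition inside the `/api/` location above:

```nginx
# HIT / MISS / EXPIRED / STALE / UPDATING / BYPASS per response
add_header X-Cache-Status $upstream_cache_status always;
```

A couple of repeated `curl -sI` requests against the same URL should flip the header from `MISS` to `HIT`.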
## Monitoring & Validation
After applying these tweaks, validate the impact:
- **Metrics:** Scrape the `stub_status` endpoint (or a Prometheus exporter built on it) to track metrics such as `request_latency_seconds`, `active_connections`, and `worker_connections`.
- **Load testing:** Tools like `hey` or `wrk` can simulate traffic and reveal the new throughput ceiling; see the sketch after the log format below.
- **Log analysis:** Enable `$request_time` in the log format to spot outliers.
```nginx
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" $request_time';

access_log /var/log/nginx/access.log timed;
```
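A minimal validation loop, assuming `wrk` and `hey` are installed and the URLs are stand-ins for your own endpoints:

```bash
# Establish a baseline, then re-run after each config change
wrk -t4 -c200 -d60s --latency https://example.com/assets/app.css
hey -z 60s -c 200 https://example.com/api/health

# Ten slowest requests from the timed log
# ($NF is $request_time, $7 is the request path in the format above)
awk '{print $NF, $7}' /var/log/nginx/access.log | sort -rn | head
```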
## Conclusion
Performance tuning is an iterative process: start with the low-hanging fruit (workers, keep-alive, compression), then move on to TLS hardening and caching. By regularly reviewing metrics and adjusting the knobs above, you'll keep latency low and predictable even as traffic spikes.
If you’re looking for a reliable partner to audit your Nginx setup or help with a full‑scale migration, consider checking out https://lacidaweb.com for a no‑pressure conversation.