This is my first blog post on dev.to. I hope you enjoy it.
Nginx is a high-performance, event-driven, lightweight web server and reverse proxy. Thanks to its asynchronous and non-blocking architecture, it can handle a large number of concurrent connections with very low resource usage. Besides serving static assets efficiently, Nginx can route requests to backend services via proxy_pass and supports multiple load-balancing algorithms, such as round-robin, least connections, and IP hash. It is also commonly used for SSL termination, caching, separating static and dynamic traffic, and basic security hardening, which makes it a key traffic gateway in modern microservice and front-end/back-end separated architectures.
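As a quick illustration of SSL termination plus static/dynamic traffic splitting, here is a minimal sketch. The certificate paths, domain, and backend address are placeholders, not part of any real deployment:

```nginx
# Hypothetical paths and addresses; adjust to your deployment.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;   # placeholder cert
    ssl_certificate_key /etc/nginx/certs/example.com.key;   # placeholder key

    # Static assets served by Nginx directly
    location /static/ {
        root /var/www;
    }

    # Dynamic traffic proxied to a backend service
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

TLS is decrypted at Nginx, so the backend only ever sees plain HTTP.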
Basic Configuration
http vs server vs location
| Context | What it does | Example |
|---|---|---|
| http | Global HTTP-level settings (timeouts, logs, compression, cache, etc.) | http { proxy_read_timeout 300s; gzip on; } |
| server | A virtual host (a "site"/service). Binds listen and server_name | server { listen 80; server_name example.com; … } |
| location | URI prefix or regex matching rules. Defines how that kind of request is handled | location /static/ { root /var/www; } |
- Nesting structure
```nginx
http {
    server {
        location { … }
        location { … }
    }
    server { … }
}
```
- Responsibilities
  - http: framework-level defaults, modules, overall behavior
  - server: split traffic by domain/port
  - location: serve static files, reverse proxy, URL rewrite, etc.
- A minimal example (serving a SPA)
```nginx
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name your.domain.com;  # or an IP

        # Point to your built dist directory
        root /var/www/vue-app/dist;   # Linux path
        # Windows example: root C:/nginx/html/vue-app/dist;
        index index.html;

        # Try static first; if not found, fall back to index.html (SPA History mode)
        location / {
            try_files $uri $uri/ /index.html;
        }

        # Optional: cache static assets
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|woff2?)$ {
            expires 30d;
            add_header Cache-Control "public";
        }

        # Error page
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www/vue-app/dist;
        }
    }
}
```
include mime.types and what it means
include loads an external file (or a set of files) into the current configuration context.
include mime.types; tells Nginx to load the file extension → MIME type mapping table. That way, when Nginx serves static files like .html, .css, .js, .png, etc., it can automatically set the correct Content-Type header so browsers interpret assets correctly.
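For intuition: mime.types is just a `types { }` block that `include` splices into the configuration. A trimmed, hand-written equivalent might look like this (only a few of the many real mappings are shown):

```nginx
http {
    # Equivalent to a few lines of the included mime.types file:
    # each entry maps file extensions to the Content-Type header Nginx sends.
    types {
        text/html               html htm;
        text/css                css;
        application/javascript  js;
        image/png               png;
    }
    default_type application/octet-stream;  # fallback for unknown extensions
}
```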
What $uri is (and why it matters for try_files)
$uri is a built-in Nginx variable. It comes from the request lineβs URI part (without the query string) and is normalized by Nginx.
Example:
- Browser requests: `GET /foo/bar.html?abc=123 HTTP/1.1`
- Nginx sees `$request_uri = /foo/bar.html?abc=123`
- Nginx strips the query string and normalizes the path, producing `$uri = /foo/bar.html`
In try_files $uri $uri/ /index.html;, Nginx checks the filesystem for the file or directory first. If nothing matches, it falls back to /index.html. This is why SPA routes still work on refresh under History mode.
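Walking `try_files` through a concrete SPA route makes the fallback visible (the paths are illustrative):

```nginx
location / {
    # Request: GET /about?tab=1  ->  $uri = /about
    # 1) try <root>/about   as a file       (usually absent for SPA routes)
    # 2) try <root>/about/  as a directory  (usually absent too)
    # 3) fall back to an internal redirect to /index.html,
    #    where the SPA router renders the /about view client-side
    try_files $uri $uri/ /index.html;
}
```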
Frontend-oriented settings (static site)
- `root` points to the built `dist` directory
- `index` is the default entry file
- `try_files` enables SPA routing fallback (History mode)
Backend-oriented settings (reverse proxy)
- `listen` defines the listening port
- `server_name` defines the domain (or host) to match
- `location` defines request matching rules and the upstream routing logic
Location matching order (common rules)
1. `=` exact match
2. `^~` longest prefix match (and stop searching regexes)
3. Regex matches (`~`, `~*`) in the order they appear; the first match wins
4. Normal prefix match (longest prefix wins)
5. If nothing matches, return 404
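A sketch of how these rules interact in one server block (paths and the backend address are made up for illustration):

```nginx
server {
    listen 80;

    location = /health { return 200 "ok\n"; }        # 1) exact match wins immediately
    location ^~ /static/ { root /var/www; }          # 2) prefix match that skips regex checks
    location ~* \.(png|jpg)$ { expires 7d; }         # 3) first matching regex wins
    location / { proxy_pass http://127.0.0.1:8080; } # 4) fallback (longest normal prefix)
}
```

Note that a request for /static/logo.png hits the `^~` block, not the regex, because `^~` stops the regex search.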
Reverse proxy request flow (high level)
1. Client sends a request: the browser or HTTP client connects to Nginx on the listening port and sends an HTTP request.
2. Nginx worker accepts the connection: in the event-driven model, a worker process `accept()`s the connection and parses it into an internal request object.
3. Pick an upstream server: if using `upstream`, Nginx selects a backend node based on the chosen algorithm; if `proxy_pass` points to a fixed host, it routes to that one.
4. Connect to upstream: Nginx creates a non-blocking socket and initiates a TCP connection, controlled by `proxy_connect_timeout`.
5. Forward the request: Nginx sends the request line, headers, and optional body to the upstream, controlled by `proxy_send_timeout`.
6. Read the response: Nginx reads the response status line, headers, and body from upstream, controlled by `proxy_read_timeout`, and streams or buffers it back to the client.
7. Reuse or close connections: with keepalive enabled, Nginx can reuse upstream connections to reduce handshake overhead.
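The steps above map onto directives roughly like this (the addresses and timeout values are illustrative):

```nginx
upstream app {
    server 127.0.0.1:3000;          # step 3: upstream selection happens here
}

server {
    listen 80;                      # steps 1-2: a worker accepts connections on this port

    location / {
        proxy_pass http://app;
        proxy_connect_timeout 5s;   # step 4: TCP connect to the upstream
        proxy_send_timeout 60s;     # step 5: forwarding the request
        proxy_read_timeout 60s;     # step 6: reading the response back
    }
}
```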
Reverse Proxy
| Feature | Forward Proxy | Reverse Proxy |
|---|---|---|
| Primary target | Client | Server (the website/service) |
| Traffic direction | Client → proxy → any external server | Client → proxy → internal backend servers |
| Typical use cases | Bypass restrictions, filtering, client-side caching | Load balancing, SSL termination, caching, hiding backend topology |
| Client configuration | Client must configure proxy | Client does not need to know (transparent) |
- Forward proxy: you explicitly choose a proxy to access external sites.
- Reverse proxy: a proxy sits in front of your service and forwards requests to your backend.
Load Balancing
1) Minimal upstream (round-robin by default)
```nginx
http {
    upstream backend {
        server 192.168.0.101:8080;
        server 192.168.0.102:8080;
        server 192.168.0.103:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
- Algorithm: round-robin
- Pros: simplest setup
2) Weighted round-robin
```nginx
http {
    upstream backend {
        server 192.168.0.101:8080 weight=5;
        server 192.168.0.102:8080 weight=3;
        server 192.168.0.103:8080 weight=2;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
- Larger `weight` means more traffic.
- Useful when backend nodes have different capacity.
3) Least connections
```nginx
http {
    upstream backend {
        least_conn;
        server 192.168.0.101:8080;
        server 192.168.0.102:8080;
        server 192.168.0.103:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
- Routes new requests to the node with the fewest active connections.
- Good when request duration varies a lot.
4) IP hash (session affinity)
```nginx
http {
    upstream backend {
        ip_hash;
        server 192.168.0.101:8080;
        server 192.168.0.102:8080;
        server 192.168.0.103:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
- Same client IP tends to hit the same backend.
- Older Nginx versions (before 1.3.1 and 1.2.2) do not support `weight` with `ip_hash`.
5) URI hash (consistent hash)
http {
upstream backend {
hash $request_uri consistent;
server 192.168.0.101:8080;
server 192.168.0.102:8080;
server 192.168.0.103:8080;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
}
- Hashes a variable (like `$request_uri`) to pick the node.
- `consistent` enables consistent hashing, which reduces remapping when nodes are added or removed.
6) Basic failure handling
```nginx
http {
    upstream backend {
        server 192.168.0.101:8080 max_fails=3 fail_timeout=30s;
        server 192.168.0.102:8080 max_fails=3 fail_timeout=30s;
        server 192.168.0.103:8080 max_fails=3 fail_timeout=30s;
    }
}
```
- `max_fails`: number of failed attempts before a node is considered unavailable
- `fail_timeout`: how long the node is skipped after reaching `max_fails` (and the window in which failures are counted)
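A common extension of this pattern is a `backup` server that only receives traffic once all primary nodes are marked failed (addresses are illustrative):

```nginx
upstream backend {
    server 192.168.0.101:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.102:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.104:8080 backup;   # used only when the servers above are unavailable
}
```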
Summary
- Round-robin is the default and easiest.
- Weighted round-robin helps with uneven node capacity.
- Least connections is useful when requests have uneven duration.
- IP hash and URI hash help with affinity.
- `max_fails` and `fail_timeout` provide basic resilience.
Common Nginx Commands
```bash
# Test configuration syntax
nginx -t

# Start Nginx (if not running as a system service)
nginx
# Or specify a config file:
nginx -c /path/to/nginx.conf

# Reload gracefully
nginx -s reload

# Graceful stop
nginx -s quit

# Force stop
nginx -s stop

# Reopen log files (useful after log rotation)
nginx -s reopen

# Show version
nginx -v
# Show version and build options
nginx -V

# Show running processes
ps aux | grep nginx

# If managed by systemd
systemctl start nginx
systemctl stop nginx
systemctl reload nginx
systemctl status nginx

# (Optional) hot upgrade binary without dropping connections
kill -USR2 "$(cat /var/run/nginx.pid)"

# Check listening ports
netstat -tulpn | grep nginx
```
Timeout Settings
1) Client-side timeouts
| Directive | Default | Description |
|---|---|---|
| `client_header_timeout` | 60s | Max time to receive request headers from the client. Returns 408 on timeout. |
| `client_body_timeout` | 60s | Max time to receive the request body. Returns 408 on timeout. |
| `send_timeout` | 60s | If the client does not read any response data within this time, Nginx closes the connection. |
```nginx
http {
    client_header_timeout 10s;
    client_body_timeout 30s;
    send_timeout 120s;
}
```
2) Reverse proxy (proxy_pass) timeouts
| Directive | Default | Description |
|---|---|---|
| `proxy_connect_timeout` | 60s | Timeout for establishing the TCP connection to upstream (handshake). Returns 504 on timeout. |
| `proxy_send_timeout` | 60s | Timeout for sending the request to upstream (between two successive writes, not the whole transfer). Returns 504 on timeout. |
| `proxy_read_timeout` | 60s | Timeout for reading the response from upstream (between two successive reads, not the whole response). Returns 504 on timeout. |
| `proxy_buffering` | on | When buffering is enabled, read timeouts can behave differently. Turning it off makes streaming responses behave more predictably. |
```nginx
server {
    location /api/ {
        proxy_pass http://backend;
        proxy_connect_timeout 120s;
        proxy_send_timeout 120s;
        proxy_read_timeout 300s;
        proxy_buffering off;
    }
}
```
3) FastCGI / uWSGI / SCGI timeouts
| Module | Directive | Default | Description |
|---|---|---|---|
| FastCGI | `fastcgi_connect_timeout` | 60s | Connection timeout to FastCGI server |
| FastCGI | `fastcgi_send_timeout` | 60s | Send timeout to FastCGI server |
| FastCGI | `fastcgi_read_timeout` | 60s | Read timeout from FastCGI server |
| uWSGI | `uwsgi_connect_timeout` | 60s | Connection timeout to uWSGI server |
| uWSGI | `uwsgi_send_timeout` | 60s | Send timeout to uWSGI server |
| uWSGI | `uwsgi_read_timeout` | 60s | Read timeout from uWSGI server |
| SCGI | `scgi_connect_timeout` | 60s | Connection timeout to SCGI server |
| SCGI | `scgi_send_timeout` | 60s | Send timeout to SCGI server |
| SCGI | `scgi_read_timeout` | 60s | Read timeout from SCGI server |
```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm.sock;
    fastcgi_connect_timeout 30s;
    fastcgi_send_timeout 180s;
    fastcgi_read_timeout 180s;
}
```
4) Stream (TCP/UDP) proxy timeouts
```nginx
stream {
    upstream mysql_up {
        server 127.0.0.1:3306;
    }

    server {
        listen 3307;
        proxy_pass mysql_up;
        proxy_connect_timeout 10s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```
Keepalive
1) Client → Nginx keepalive
| Directive | Default | Description |
|---|---|---|
| `keepalive_timeout` | 75s | Idle keepalive timeout after a request completes |
| `keepalive_requests` | 100 | Max requests per keepalive connection |
```nginx
http {
    keepalive_timeout 65s;
    keepalive_requests 200;
}
```
2) Nginx → upstream keepalive (connection reuse)
```nginx
http {
    upstream backend {
        server 192.168.0.101:8080;
        server 192.168.0.102:8080;
        keepalive 32;   # connection pool size per worker
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            # Use HTTP/1.1 and clear the Connection header for upstream keepalive
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```
- `proxy_http_version 1.1;` is required for upstream keepalive
- `proxy_set_header Connection "";` prevents `Connection: close` from breaking reuse