# How to Fix 502 Bad Gateway in Nginx (With Exact Commands)
502 Bad Gateway is one of the most panic-inducing errors in production. Your site is down, users are getting errors, and you have no idea where to start.
This guide gives you the exact diagnostic steps and fixes — no placeholder commands.
## What Does 502 Bad Gateway Actually Mean?
Nginx received a bad response (or no response) from the upstream server. Nginx itself is fine. Something behind Nginx is broken.
Common culprits:
- Your Node.js/Python/PHP app has crashed
- Wrong upstream port configured
- App is binding to `localhost` but Nginx is trying `127.0.0.1` (or vice versa)
- App is overloaded and not responding in time
## Step 1: Check If Nginx Is Actually Running

```bash
systemctl status nginx
```
If it's active, Nginx is fine. Move to step 2.
If it's dead:

```bash
nginx -t
systemctl restart nginx
```
## Step 2: Check Your Upstream App

```bash
curl -I http://127.0.0.1:3000
```

Replace `3000` with your app's port. If you get `Connection refused`, your app is down — not Nginx.
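curl's exit code already tells you a lot before you open any logs. Here is a small sketch that maps the common exit codes to a diagnosis — the function name and messages are illustrative, not part of any standard tool:

```bash
# Map curl exit codes to a likely 502 diagnosis (illustrative sketch).
diagnose_curl_exit() {
  case "$1" in
    0)  echo "upstream answered - problem is likely in Nginx config" ;;
    7)  echo "connection refused - app is down or on another port" ;;
    28) echo "timed out - app is hung or overloaded" ;;
    *)  echo "curl failed with exit code $1 - check DNS/firewall" ;;
  esac
}

# Usage: probe the upstream, then interpret the result, e.g.:
#   curl -sS -o /dev/null -I --max-time 5 http://127.0.0.1:3000
#   diagnose_curl_exit $?
diagnose_curl_exit 7
```

Exit code 7 is curl's "failed to connect" and 28 is its timeout; both are documented in `man curl`.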
## Step 3: Read the Error Logs

```bash
tail -100 /var/log/nginx/error.log
```

Look for: `connect() failed`, `upstream timed out`, `no live upstreams`.
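Each of those three messages points to a different fix, so it helps to tally them. A sketch that counts the signatures in a log excerpt — the sample lines in the heredoc are fabricated for illustration; in production, feed it your real `error.log`:

```bash
# Count the three classic 502 signatures in an Nginx error log.
count_502_causes() {
  awk '
    /connect\(\) failed/ { refused++ }
    /upstream timed out/ { timeout++ }
    /no live upstreams/  { dead++ }
    END {
      printf "connection refused: %d\n", refused+0
      printf "upstream timeouts:  %d\n", timeout+0
      printf "no live upstreams:  %d\n", dead+0
    }'
}

# Fabricated sample input; in production use:
#   count_502_causes < /var/log/nginx/error.log
count_502_causes <<'EOF'
2024/01/01 03:12:01 [error] 1234#0: connect() failed (111: Connection refused) while connecting to upstream
2024/01/01 03:12:05 [error] 1234#0: upstream timed out (110: Connection timed out) while reading response header
2024/01/01 03:12:09 [error] 1234#0: connect() failed (111: Connection refused) while connecting to upstream
EOF
```

Mostly "connection refused" means a dead app (Step 4); mostly timeouts means an overloaded one (Step 5's timeout tuning).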
## Step 4: Restart Your App Process

```bash
# PM2
pm2 restart all
pm2 logs --lines 50

# Systemd
systemctl restart myapp
journalctl -u myapp -n 50 --no-pager

# Docker
docker ps -a
docker restart my-container
docker logs my-container --tail 50
```
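If you're on an unfamiliar box and don't know which of the three applies, a hedged sketch that picks a restart strategy by what's installed (`myapp` and `my-container` are the placeholder names from the commands above):

```bash
# Pick a restart strategy based on which process manager is present.
detect_manager() {
  if command -v pm2 >/dev/null 2>&1; then
    echo "pm2"       # restart with: pm2 restart all
  elif command -v docker >/dev/null 2>&1; then
    echo "docker"    # restart with: docker restart my-container
  elif command -v systemctl >/dev/null 2>&1; then
    echo "systemd"   # restart with: systemctl restart myapp
  else
    echo "unknown"
  fi
}
detect_manager
```

Order matters here: a pm2- or Docker-managed app on a systemd host should be restarted through its own manager, not through `systemctl`.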
## Step 5: Fix Common nginx.conf Issues

```nginx
upstream backend {
    server 127.0.0.1:3000;   # Use IP, not 'localhost'
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # Required for upstream keepalive
        proxy_set_header Connection "";  # Required for upstream keepalive
        proxy_read_timeout 90s;          # Add this if timing out
        proxy_connect_timeout 10s;
    }
}
```
After editing:

```bash
nginx -t && systemctl reload nginx
```
## Step 6: If App Keeps Crashing

```bash
# Check memory
free -h

# Check disk space (full disk = crash)
df -h

# Check CPU
top -bn1 | head -20
```
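To run these checks from a cron job or alert script, here is a minimal sketch that flags low memory and a nearly full disk. The function names, thresholds (200 MiB, 95%), and sample numbers are all arbitrary choices for illustration:

```bash
# Warn when available memory or free disk drops below a threshold.
check_memory() {  # arg: available MiB
  [ "$1" -lt 200 ] && echo "LOW MEMORY: ${1}MiB available" \
                   || echo "memory ok: ${1}MiB available"
}
check_disk() {    # arg: used percent (no % sign)
  [ "$1" -ge 95 ] && echo "DISK NEARLY FULL: ${1}% used" \
                  || echo "disk ok: ${1}% used"
}

# In production, feed real numbers from free/df, e.g.:
#   check_memory "$(free -m | awk '/^Mem:/ {print $7}')"
#   check_disk   "$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')"
check_memory 120
check_disk 97
```

An app that keeps getting OOM-killed will 502 again minutes after every restart, so fix the memory pressure, don't just restart.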
## Quick Decision Tree

```text
502 Error
├── curl 127.0.0.1:PORT → Connection refused?
│   └── App is down → Check app logs → Restart app
├── curl works fine?
│   └── Nginx config issue → Check proxy_pass → nginx -t
└── Intermittent 502?
    └── App overloaded → Add proxy_read_timeout → Scale up
```
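The same tree as a function you can drop into a runbook script. It takes curl's exit code against the upstream and a yes/no "intermittent" flag; both the function name and the flag are assumptions for illustration:

```bash
# The decision tree above as a function: takes curl's exit code
# against the upstream and whether the 502s are intermittent.
next_step() {
  curl_exit="$1"; intermittent="$2"
  if [ "$curl_exit" = "7" ]; then
    echo "App is down -> check app logs -> restart app"
  elif [ "$intermittent" = "yes" ]; then
    echo "App overloaded -> add proxy_read_timeout -> scale up"
  else
    echo "Nginx config issue -> check proxy_pass -> nginx -t"
  fi
}

next_step 7 no    # connection refused
next_step 0 yes   # upstream reachable, intermittent 502s
next_step 0 no    # upstream reachable, consistent 502s
```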
## Most Common Root Causes (from real production issues)

- App crashed due to an unhandled promise rejection — add a `process.on('unhandledRejection')` handler
- Port mismatch — app on 3001 but Nginx pointing to 3000
- App bound to `0.0.0.0` (IPv4 only) but Nginx using `localhost`, which can resolve to `::1` — use `127.0.0.1` everywhere
- Memory exhaustion — app OOM-killed; use `pm2 restart --max-memory-restart 512M`
Debugging production errors at 3AM shouldn't take hours. Step2Dev — paste your error, get the exact fix for your infra in 60 seconds. No generic answers.