Most SaaS applications run behind Nginx, and most teams only look at their logs when something breaks. That is a mistake. Your access logs are a real-time feed of what is happening to your application — including attacks, abuse, and infrastructure problems — if you know what to look for.
Here are five patterns worth monitoring continuously.
1. Repeated 401/403 Responses to the Same Endpoint
A spike in 401 Unauthorized or 403 Forbidden responses targeting a single endpoint — especially /api/login, /admin, or /api/token — is a strong indicator of brute force or credential stuffing activity.
awk '$9 == "401" || $9 == "403" {print $7}' /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -20
If you see hundreds of hits on /api/login returning 401, an automated attack is almost certainly in progress. Combine with IP analysis:
awk '$9 == "401" {print $1}' /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -20
A single IP hammering your login endpoint warrants an immediate block via fail2ban or a firewall rule.
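Before wiring up fail2ban, a small shell sketch can surface block candidates straight from the log. The 50-request threshold is an assumption to tune, and the iptables commands are printed rather than executed so you can review them first:

```shell
#!/bin/sh
# Sketch: print a block command for every IP with more than THRESHOLD
# 401 responses. THRESHOLD and the log path are assumptions -- tune both.
# Assumes the default combined log format (status code is field 9).
THRESHOLD=50
LOG=${LOG:-/var/log/nginx/access.log}

awk -v t="$THRESHOLD" '$9 == "401" { hits[$1]++ }
    END { for (ip in hits) if (hits[ip] > t) print ip }' "$LOG" \
| while read -r ip; do
    # Printed, not executed -- pipe to sh only after reviewing the list.
    echo "iptables -A INPUT -s $ip -j DROP"
done
```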
2. Abnormal 4xx/5xx Ratios
A healthy application has a low error rate — typically under 1-2%. A sudden spike in 500 errors often means a deploy went wrong, a dependency is down, or something is crashing under load. A spike in 400 errors can indicate a scanning tool probing your server with malformed requests.
awk '{print $9}' /var/log/nginx/access.log \
| grep -E '^[45]' \
| sort | uniq -c | sort -rn
Track this over time with a rolling window:
tail -n 10000 /var/log/nginx/access.log \
| awk '{print $9}' \
| sort | uniq -c | sort -rn
If 500 errors appear after a deployment, you want to know within minutes — not hours.
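To turn those raw counts into the percentage the 1-2% guideline refers to, one awk pass over the same window is enough. A sketch, assuming the default combined log format (status code in field 9) and a 10,000-request window:

```shell
# Sketch: 5xx error rate as a percentage over the last 10,000 requests.
# The window size is an assumption -- match it to your traffic volume.
tail -n 10000 /var/log/nginx/access.log \
| awk '$9 ~ /^5/ { err++ } { total++ }
       END { if (total) printf "5xx rate: %.2f%%\n", 100 * err / total }'
```

Alert when this crosses your baseline rather than on any single error.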
3. High Request Volume from a Single IP
Legitimate users do not send 10,000 requests per hour. Bots, scrapers, and DDoS traffic do. Catching this early lets you rate-limit or block before it impacts paying users.
awk '{print $1}' /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -20
For a more useful view, filter to the current hour only:
awk -v date="$(date '+%d/%b/%Y:%H')" '$4 ~ date {print $1}' /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -10
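Once an IP stands out, a per-minute breakdown shows whether it is a steady bot or a short burst. A sketch, with the IP as a placeholder and assuming the default log format (timestamp in field 4):

```shell
# Sketch: request counts per minute for one suspect IP.
# The IP is a placeholder -- substitute one from the list above.
awk -v ip="203.0.113.9" '$1 == ip { split($4, t, ":"); print t[2] ":" t[3] }' \
    /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -10
```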
Pair this with nginx rate limiting in your config:
# limit_req_zone must be declared in the http {} context
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;

server {
    location /api/ {
        limit_req zone=api burst=10 nodelay;
    }
}
4. Requests to Non-Existent Routes (404 Farming)
A wave of 404 responses across random paths — /wp-admin, /.env, /phpMyAdmin, /config.json — is an automated scanner probing for known vulnerabilities. These are typically harmless if your app does not have those files, but they indicate your server is being actively enumerated.
awk '$9 == "404" {print $7}' /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -30
Watch for patterns like:
- /.env — environment file leakage probes
- /wp-login.php — WordPress brute force
- /actuator/health — Spring Boot endpoint scanning
- /.git/config — source code exposure attempts
If you see these, the traffic is coming from automated tools. Blocking the source IP is reasonable.
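To see which IPs are behind the probing, you can match the request path against a list of known probe targets. A sketch, assuming the default log format (path in field 9's preceding field, i.e. field 7); the path list is a starting point, not exhaustive:

```shell
# Sketch: count scanner probes per source IP.
# The path list below is a small sample of common probe targets.
awk '$7 ~ /^\/(\.env|\.git|wp-login\.php|wp-admin|phpMyAdmin|actuator)/ { print $1 }' \
    /var/log/nginx/access.log \
| sort | uniq -c | sort -rn | head -10
```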
5. Slow Response Times on Critical Paths
Nginx can log request processing time with $request_time. If your checkout, login, or payment endpoints are suddenly taking 5+ seconds, something is wrong — slow queries, N+1 problems, or resource exhaustion.
First, enable timing in your log format:
log_format timed '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'$request_time $upstream_response_time';
access_log /var/log/nginx/access.log timed;
Then find your slowest endpoints:
awk '{print $(NF-1), $7}' /var/log/nginx/access.log \
| sort -rn | head -20
Note that with the format above, $(NF-1) is $request_time and $NF is $upstream_response_time. Comparing $upstream_response_time (the time your backend took) against $request_time (total time, including sending the response to the client) tells you whether the bottleneck is in your app or at the Nginx/network layer.
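Building on the field positions of the timed format above ($request_time as field 11, $upstream_response_time as field 12), a sketch of average Nginx-side overhead per endpoint:

```shell
# Sketch: average (request_time - upstream_response_time) per endpoint,
# assuming the "timed" log format shown above. Skips entries where
# upstream_response_time is "-" (e.g. requests served without a backend).
awk '$12 != "-" { diff[$7] += $11 - $12; n[$7]++ }
    END { for (p in diff) printf "%.3f %s\n", diff[p] / n[p], p }' \
    /var/log/nginx/access.log \
| sort -rn | head -10
```

A consistently large gap points at Nginx, TLS, or slow clients rather than the application itself.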
Putting It Together
Monitoring these five patterns manually is a start, but doing it continuously at scale requires automation. You need alerting when error rates spike, when a new IP crosses a request threshold, or when a scanner starts probing your endpoints.
If you want this without spinning up a full ELK stack, LogAudit runs these checks against your Nginx logs automatically and alerts you when something worth acting on shows up.