Ajit Kumar
Backing Up Nginx Logs the Right Way: From Basics to Automation

When you run a website or API on a server, logs are your first and last line of truth. Whether you are debugging an issue, estimating traffic costs, or investigating suspicious activity, Nginx logs are critical.

This article explains:

  • What Nginx logs are and why they matter
  • The difference between access logs and error logs
  • Why log rotation is essential
  • Different ways to back up logs
  • A practical approach to backing up Nginx logs to Amazon S3
  • How to automate and verify the entire setup

This guide assumes basic Linux knowledge but no prior experience with log management.


1. What Are Nginx Logs?

Nginx generates logs for every request it handles. By default, these logs are stored in:

```
/var/log/nginx/
```

The two most important files are:

  • access.log
  • error.log

2. Access Log vs Error Log

Access Log

The access log records every HTTP request served by Nginx.

Example entry:

```
43.202.80.217 - - [16/Dec/2025:02:14:10 +0000] "GET /news/latest HTTP/1.1" 200 5421 "-" "Mozilla/5.0"
```

What it tells you:

  • Client IP address
  • Date and time of the request
  • Requested URL
  • HTTP status code (200, 404, 500, etc.)
  • Response size
  • User agent (browser, bot, crawler)

Used for:

  • Traffic analysis
  • Estimating bandwidth usage and AWS data transfer cost
  • Detecting bots and scrapers
  • Analytics (page views, visits)
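For quick traffic analysis, a couple of `awk` one-liners go a long way. The sketch below assumes the default combined log format shown above; the sample entries are made up for illustration.

```shell
# Create a small sample access log (made-up entries for illustration)
cat > /tmp/sample_access.log <<'EOF'
43.202.80.217 - - [16/Dec/2025:02:14:10 +0000] "GET /news/latest HTTP/1.1" 200 5421 "-" "Mozilla/5.0"
43.202.80.218 - - [16/Dec/2025:02:14:12 +0000] "GET /missing HTTP/1.1" 404 162 "-" "Mozilla/5.0"
43.202.80.217 - - [16/Dec/2025:02:14:15 +0000] "GET /news/latest HTTP/1.1" 200 5421 "-" "curl/8.0"
EOF

# Count requests per HTTP status code (field 9 in the combined format)
awk '{print $9}' /tmp/sample_access.log | sort | uniq -c

# Sum response bytes (field 10) to estimate bandwidth
awk '{bytes += $10} END {print bytes " bytes"}' /tmp/sample_access.log
```

Run against a real rotated log, swap `/tmp/sample_access.log` for `zcat /var/log/nginx/access.log.2.gz` piped into the same `awk` commands.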

Error Log

The error log records issues Nginx encounters while processing requests.

Example:

```
connect() failed (111: Connection refused) while connecting to upstream
```

Used for:

  • Debugging backend failures
  • Finding misconfigurations
  • Diagnosing outages and performance issues
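A quick way to triage an error log is to count entries by severity. The sample entries below are made up for illustration; the severity tags (`[error]`, `[warn]`, etc.) follow Nginx's standard error log format.

```shell
# Create a sample error log (made-up entries for illustration)
cat > /tmp/sample_error.log <<'EOF'
2025/12/16 02:14:10 [error] 1234#0: *1 connect() failed (111: Connection refused) while connecting to upstream
2025/12/16 02:15:02 [warn] 1234#0: *2 an upstream response is buffered to a temporary file
2025/12/16 02:16:40 [error] 1234#0: *3 connect() failed (111: Connection refused) while connecting to upstream
EOF

# Count entries by severity to spot trouble quickly
grep -o '\[[a-z]*\]' /tmp/sample_error.log | sort | uniq -c
```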

3. Why You Must Back Up Logs

Logs grow continuously. If left unmanaged:

  • Disk space fills up
  • Server performance degrades
  • You lose historical data

Backing up logs helps with:

  • Security audits
  • Traffic analysis
  • Cost estimation
  • Compliance and troubleshooting
  • Post-incident investigation

4. Why Log Rotation Is Required

Nginx keeps writing to the same log file. Over time, access.log can grow to gigabytes.

Log rotation:

  • Splits logs into daily (or size-based) chunks
  • Compresses old logs
  • Removes very old logs

On most Linux systems, this is handled by logrotate.


5. Logrotate Basics

Nginx usually ships with a logrotate configuration:

```shell
cat /etc/logrotate.d/nginx
```

Typical configuration:

```nginx
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        invoke-rc.d nginx rotate
    endscript
}
```

What this means:

  • Logs rotate daily
  • Old logs are compressed (.gz)
  • 14 days of logs are kept
  • Nginx is notified after rotation

Rotated logs look like:

```
access.log.1
access.log.2.gz
access.log.3.gz
```
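You can check what logrotate would do before relying on the schedule. Its `-d` flag does a dry run, and `-f` forces an immediate rotation (useful for testing the full cycle):

```shell
# Dry run: print what logrotate would do, without touching any files
sudo logrotate -d /etc/logrotate.d/nginx

# Force an immediate rotation to test the full cycle
sudo logrotate -f /etc/logrotate.d/nginx
```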

6. Backup Options for Nginx Logs

Once logs are rotated, you have multiple backup strategies.

Option 1: Download Logs to Local Machine

Useful for:

  • Small projects
  • Manual analysis

```shell
scp user@server:/var/log/nginx/access.log.2.gz .
```

Limitations:

  • Manual
  • Not scalable
  • Risk of loss

Option 2: Store Logs on Another Disk or Server

Better than nothing, but:

  • Still prone to server failures
  • Requires additional infrastructure

Option 3: Cloud Storage (Recommended)

Cloud storage offers:

  • Durability
  • Low cost
  • Easy retrieval

Popular choices:

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage

7. Why Amazon S3 Is a Good Choice

If your server runs on AWS:

  • Uploading logs to S3 in the same region has very low cost
  • S3 is designed for eleven nines (99.999999999%) of durability
  • Easy lifecycle rules (auto-delete old logs)
  • Easy analytics (Athena, GoAccess, etc.)

8. Designing an S3 Log Backup Structure

A good structure avoids confusion later:

```
s3://nginx-logs/
  └── 16-12-2025/
      └── ip-172-31-44-115/
          ├── access.log.2.gz
          ├── access.log.3.gz
          └── error.log.2.gz
```

Benefits:

  • Logs grouped by date
  • Supports multiple servers
  • Easy automation

9. Automating Backup with a Shell Script

Create a script:

```shell
sudo nano /usr/local/bin/nginx_log_backup.sh
```

Example script:

```shell
#!/bin/bash
set -e

LOG_DIR="/var/log/nginx"
BUCKET="s3://nginx-logs"
DATE=$(date +%d-%m-%Y)
HOST=$(hostname)

# Upload only rotated, compressed logs; skip the live access.log/error.log
aws s3 sync "$LOG_DIR" \
  "$BUCKET/$DATE/$HOST/" \
  --exclude "*" \
  --include "*.gz"
```

Make it executable:

```shell
sudo chmod +x /usr/local/bin/nginx_log_backup.sh
```
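Before scheduling anything, you can preview what the sync would upload using the AWS CLI's `--dryrun` flag (assuming the `nginx-logs` bucket from the layout in section 8):

```shell
# Preview the upload without actually transferring anything
aws s3 sync /var/log/nginx \
  "s3://nginx-logs/$(date +%d-%m-%Y)/$(hostname)/" \
  --exclude "*" --include "*.gz" --dryrun
```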

10. Automating with Cron

On most systemd-based distributions, logrotate runs daily around 00:00 (via logrotate.timer or cron.daily), so schedule the backup to run after it.

Edit root crontab:

```shell
sudo crontab -e
```

Add:

```
30 0 * * * /usr/local/bin/nginx_log_backup.sh >> /var/log/nginx_backup.log 2>&1
```

What this does:

  • Runs daily at 00:30 UTC
  • Ensures logs are rotated first
  • Captures output and errors
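A script that works in your interactive shell can still fail under cron, which runs with a minimal environment (notably a short `PATH`). One way to approximate that environment, assuming the script path used above:

```shell
# Simulate cron's minimal environment to catch PATH problems early
sudo env -i HOME=/root PATH=/usr/bin:/bin \
  /bin/sh /usr/local/bin/nginx_log_backup.sh
```

If this fails with `aws: command not found`, use the absolute path to the `aws` binary inside the script.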

11. Verifying Everything Is Working

Check log rotation

```shell
ls -lh /var/log/nginx/*.gz
```

Run backup manually

```shell
sudo /usr/local/bin/nginx_log_backup.sh
```

Verify S3 upload

```shell
aws s3 ls "s3://nginx-logs/$(date +%d-%m-%Y)/$(hostname)/" --recursive
```

Check cron execution

```shell
grep CRON /var/log/syslog
```

Check backup logs

```shell
tail /var/log/nginx_backup.log
```

12. Common Mistakes to Avoid

  • Backing up access.log directly (use rotated .gz files)
  • Running backup before logrotate
  • Not testing scripts in a cron-like environment
  • Forgetting to log cron output
  • Not setting S3 lifecycle rules
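The last mistake, missing lifecycle rules, is easy to fix. The sketch below expires objects after 90 days; the bucket name matches the earlier layout, and the retention period is an example value to adjust to your needs:

```shell
# Lifecycle rule: expire backed-up logs after 90 days (example value)
cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-nginx-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 90}
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket nginx-logs \
  --lifecycle-configuration file:///tmp/lifecycle.json
```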

13. Final Thoughts

Log backups are often ignored until something breaks.

A simple setup using:

  • Nginx
  • Logrotate
  • Shell scripting
  • Cron
  • Amazon S3

…gives you reliable, low-cost, and auditable log storage.

Once this foundation is in place, you can:

  • Analyze traffic
  • Estimate cloud costs
  • Detect abuse
  • Improve reliability

Logs are not noise. They are data. Treat them accordingly.
