Fredrick Anyanwu
Building a Real-Time HTTP Anomaly Detection Engine for Nextcloud with Python, Nginx, and iptables

For this project, I deployed Nextcloud behind Nginx and built a Python daemon that performs real-time anomaly detection on incoming HTTP traffic.

The key requirement was to avoid static assumptions and instead learn “normal” traffic behavior continuously.

Stack and deployment model

  • VPS: Linux (2 vCPU / 2 GB RAM minimum)
  • Nextcloud container (provided image)
  • Nginx reverse proxy
  • Python detector daemon
  • Slack Incoming Webhooks
  • iptables for active mitigation
  • Live dashboard (Flask)

Nginx writes JSON access logs to /var/log/nginx/hng-access.log, stored in a named Docker volume HNG-nginx-logs that is shared read-only with the detector.

Structured logging

Nginx access logs include at minimum:

  • source_ip
  • timestamp
  • method
  • path
  • status
  • response_size

This simplifies parsing and keeps the detector robust under load.

Sliding windows (deque-based)

I maintain two 60-second windows:

  • Global request timestamps
  • Per-IP request timestamp deques

For each request:

  1. append the timestamp
  2. evict entries older than 60 seconds
  3. compute the rate from the deque length

This is a true rolling window implementation (not a minute-bucket counter).
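The three steps above can be sketched with a `collections.deque`; class and method names here are illustrative, not the daemon's actual API.

```python
from collections import deque

WINDOW_SECONDS = 60

class SlidingWindow:
    """True rolling 60-second window over request timestamps."""

    def __init__(self) -> None:
        self.timestamps: deque = deque()

    def record(self, now: float) -> None:
        self.timestamps.append(now)            # 1. append timestamp
        cutoff = now - WINDOW_SECONDS
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()          # 2. evict entries older than 60 s

    def rate(self) -> float:
        # 3. requests per second over the window
        return len(self.timestamps) / WINDOW_SECONDS

window = SlidingWindow()
for t in (0.0, 1.0, 30.0, 95.0):   # synthetic timestamps in seconds
    window.record(t)
# after recording t=95, only entries newer than t=35 remain (just 95.0)
```

Because eviction happens on every request, the rate reflects exactly the last 60 seconds, unlike a minute-bucket counter that resets on a fixed boundary.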

Rolling baseline implementation

I keep a rolling 30-minute history of per-second counts and recalculate every 60 seconds.

Computed metrics:

  • mean request count
  • standard deviation
  • baseline error rate

I also maintain hourly slots and prefer the current-hour baseline once that slot has sufficient points. Floor values protect against near-zero baseline behavior.
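A minimal sketch of the baseline recalculation, including the protective floors. The specific floor constants are assumptions for illustration; the hourly-slot logic is omitted.

```python
import math
from collections import deque

HISTORY_SECONDS = 30 * 60          # 30-minute rolling history of per-second counts
BASELINE_FLOOR_MEAN = 1.0          # assumed floor values; they guard against
BASELINE_FLOOR_STD = 0.5           # near-zero baselines during quiet periods

history: deque = deque(maxlen=HISTORY_SECONDS)

def recompute_baseline() -> tuple:
    """Recalculate mean/std of per-second counts, applying floors."""
    if not history:
        return BASELINE_FLOOR_MEAN, BASELINE_FLOOR_STD
    mean = sum(history) / len(history)
    variance = sum((c - mean) ** 2 for c in history) / len(history)
    std = math.sqrt(variance)
    return max(mean, BASELINE_FLOOR_MEAN), max(std, BASELINE_FLOOR_STD)

history.extend([2, 3, 2, 3])       # synthetic per-second counts
mean, std = recompute_baseline()   # mean = 2.5, std floored at 0.5
```

Without the floors, a quiet overnight baseline would make the z-score explode on the first morning request burst.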

Detection logic

An anomaly triggers when either condition is true:

  • z_score > 3.0
  • current_rate > 5 * baseline_mean

Error-surge path: if an IP’s error activity significantly exceeds baseline error behavior, per-IP thresholds are tightened automatically.
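The two trigger conditions compose into a single check; the function name is illustrative.

```python
Z_THRESHOLD = 3.0        # z_score > 3.0
RATE_MULTIPLIER = 5.0    # current_rate > 5 * baseline_mean

def is_anomalous(current_rate: float, baseline_mean: float, baseline_std: float) -> bool:
    """Anomaly when either threshold condition is true."""
    z_score = (current_rate - baseline_mean) / baseline_std
    return z_score > Z_THRESHOLD or current_rate > RATE_MULTIPLIER * baseline_mean

# usage against the baseline computed above (mean=2.5, std=0.5)
is_anomalous(30.0, 2.5, 0.5)   # z = 55 -> True
is_anomalous(3.0, 2.5, 0.5)    # z = 1, rate below 12.5 -> False
```

The `or` matters: the z-score catches deviations from a stable baseline, while the 5x multiplier catches floods even when the recorded standard deviation is large.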

Mitigation workflow

Per-IP anomaly:

  • insert a firewall rule: iptables -I INPUT -s <IP> -j DROP
  • send a Slack BAN alert

Global anomaly:

  • send a Slack GLOBAL alert
  • no broad blocking

Unban policy (backoff):

  1. 10 minutes
  2. 30 minutes
  3. 2 hours
  4. permanent for repeated offenders

Every unban emits Slack and audit events.
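The backoff schedule and the ban rule can be sketched as follows; the function names and the offense-count bookkeeping are assumptions about how the daemon might track repeat offenders.

```python
BACKOFF_SECONDS = [10 * 60, 30 * 60, 2 * 60 * 60]  # 10m, 30m, 2h

def ban_duration(offense_count: int):
    """Ban length in seconds for the Nth offense; None means permanent."""
    if offense_count <= len(BACKOFF_SECONDS):
        return BACKOFF_SECONDS[offense_count - 1]
    return None  # 4th offense and beyond: permanent

def ban_command(ip: str) -> list:
    """Build the iptables DROP rule insertion for a per-IP ban."""
    return ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"]
```

In the daemon this would be executed with something like `subprocess.run(ban_command(ip), check=True)`, with the matching `-D` deletion issued when the ban duration expires.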

Dashboard and audit trail

Dashboard refreshes every 3 seconds and exposes:

  • global req/s
  • top source IPs
  • banned IPs
  • CPU/memory
  • effective baseline stats
  • uptime

Audit log format:

[timestamp] ACTION ip | condition | rate | baseline | duration

Actions include: BAN, UNBAN, BASELINE, ALERT.
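A formatter for that audit line might look like this; the function name and the UTC timestamp choice are assumptions.

```python
from datetime import datetime, timezone

def audit_line(action: str, ip: str, condition: str,
               rate: float, baseline: float, duration: str) -> str:
    """Format: [timestamp] ACTION ip | condition | rate | baseline | duration."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"[{ts}] {action} {ip} | {condition} | {rate:.2f} | {baseline:.2f} | {duration}"

line = audit_line("BAN", "203.0.113.7", "z_score>3.0", 42.0, 2.5, "10m")
```

Keeping the field order fixed makes the audit trail trivially greppable, e.g. `grep ' BAN ' audit.log` to list all bans.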

Practical outcomes

This project demonstrates continuous baseline learning, true sliding-window rate measurement, and automated per-IP mitigation with escalating backoff and a full audit trail, all running against live Nextcloud traffic behind Nginx.