<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Felix Gogodae</title>
    <description>The latest articles on DEV Community by Felix Gogodae (@trojanhorse7).</description>
    <link>https://dev.to/trojanhorse7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1086841%2Ffad40ab3-fb26-4f9f-ae7c-19c06a58cec5.jpeg</url>
      <title>DEV Community: Felix Gogodae</title>
      <link>https://dev.to/trojanhorse7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/trojanhorse7"/>
    <language>en</language>
    <item>
      <title>Building a Rolling-Baseline HTTP Anomaly Detector (No Fail2Ban)</title>
      <dc:creator>Felix Gogodae</dc:creator>
      <pubDate>Tue, 28 Apr 2026 04:46:39 +0000</pubDate>
      <link>https://dev.to/trojanhorse7/building-a-rolling-baseline-http-anomaly-detector-no-fail2ban-4kf2</link>
      <guid>https://dev.to/trojanhorse7/building-a-rolling-baseline-http-anomaly-detector-no-fail2ban-4kf2</guid>
      <description>&lt;p&gt;Every VPS running a public web app gets hit with traffic it didn't ask for, from scrapers, brute-force login attempts, or just someone's misconfigured bot hammering the same endpoint every second. Most tutorials say "install Fail2Ban and move on." But what if you want to &lt;em&gt;understand&lt;/em&gt; the traffic before you block it? What if you need thresholds that adapt to your actual load instead of a hardcoded "5 failures in 10 minutes"?&lt;/p&gt;

&lt;p&gt;That's what I built for the HNG DevOps track: a Python daemon that tails Nginx access logs, compares live request rates to a &lt;strong&gt;rolling 30-minute baseline&lt;/strong&gt;, and reacts — Slack alerts for global spikes, &lt;code&gt;iptables DROP&lt;/code&gt; for abusive individual IPs, with tiered auto-unban so a single bad minute doesn't permanently lock someone out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository:&lt;/strong&gt; &lt;a href="https://github.com/Trojanhorse7/hng-anomaly-detector" rel="noopener noreferrer"&gt;github.com/Trojanhorse7/hng-anomaly-detector&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx70lp3oqfibdd60dyqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx70lp3oqfibdd60dyqp.png" alt="Detector daemon running" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole system runs on a single Linux VPS with Docker Compose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nextcloud&lt;/strong&gt; — the upstream &lt;a href="https://hub.docker.com/r/kefaslungu/hng-nextcloud" rel="noopener noreferrer"&gt;&lt;code&gt;kefaslungu/hng-nextcloud&lt;/code&gt;&lt;/a&gt; image, unmodified.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt; — reverse proxy in front of Nextcloud, configured to write &lt;strong&gt;JSON-formatted access logs&lt;/strong&gt; (not the default combined format). This is critical — structured logs let the detector parse fields reliably instead of regex-guessing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detector&lt;/strong&gt; — a Python 3.12 container that tails the shared log volume, runs the detection logic, calls Slack, and executes &lt;code&gt;iptables&lt;/code&gt; commands on the host.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared volume&lt;/strong&gt; — a named Docker volume (&lt;code&gt;HNG-nginx-logs&lt;/code&gt;) that Nginx writes to and the detector reads from.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s5pb38do5vkfk4ojdrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3s5pb38do5vkfk4ojdrg.png" alt="Architecture diagram" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The detector container runs with &lt;code&gt;network_mode: host&lt;/code&gt; and &lt;code&gt;cap_add: NET_ADMIN&lt;/code&gt; so its &lt;code&gt;iptables&lt;/code&gt; calls affect the actual host firewall — not an isolated container network.&lt;/p&gt;
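
&lt;p&gt;To make that wiring concrete, here's a trimmed-down sketch of what the Compose file looks like. Service names, image tags, and mount paths are illustrative rather than copied from the repo, and Nextcloud is omitted for brevity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  nginx:
    image: nginx:alpine
    volumes:
      - HNG-nginx-logs:/var/log/nginx    # Nginx writes JSON access logs here
    ports:
      - "80:80"

  detector:
    build: ./detector
    network_mode: host                   # iptables rules affect the real host firewall
    cap_add:
      - NET_ADMIN                        # needed to insert/remove DROP rules
    volumes:
      - HNG-nginx-logs:/logs:ro          # reads the same volume Nginx writes to

volumes:
  HNG-nginx-logs:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;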




&lt;h2&gt;
  
  
  How Detection Works
&lt;/h2&gt;

&lt;p&gt;The detection pipeline has three layers: &lt;strong&gt;sliding windows&lt;/strong&gt;, &lt;strong&gt;rolling baseline&lt;/strong&gt;, and &lt;strong&gt;anomaly evaluation&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: Sliding Windows (60 seconds)
&lt;/h3&gt;

&lt;p&gt;Every parsed log line feeds into &lt;code&gt;collections.deque&lt;/code&gt; structures — one global deque for all requests, and one per source IP. Timestamps older than 60 seconds are continuously evicted from the left side. At any moment, &lt;strong&gt;RPS = count / 60&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There's no "bucket per minute" approximation. Every request is tracked individually and aged out precisely. Parallel deques track 4xx/5xx errors separately for the error-surge path (more on that below).&lt;/p&gt;
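
&lt;p&gt;Here's a minimal Python sketch of that sliding-window bookkeeping. It's simplified from what the detector actually does, and the function names are mine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
from collections import deque, defaultdict

WINDOW = 60  # seconds

global_hits = deque()
per_ip_hits = defaultdict(deque)

def record(ip, ts=None):
    """Record one request and evict anything older than the window."""
    ts = ts or time.time()
    global_hits.append(ts)
    per_ip_hits[ip].append(ts)
    for dq in (global_hits, per_ip_hits[ip]):
        while dq and dq[0] &lt; ts - WINDOW:
            dq.popleft()

def rps(dq):
    """Current requests per second over the window."""
    return len(dq) / WINDOW
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;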

&lt;h3&gt;
  
  
  Layer 2: Rolling Baseline (30 minutes)
&lt;/h3&gt;

&lt;p&gt;A background thread recomputes the baseline every 60 seconds. It builds a dense vector of per-second request counts over the last 1,800 seconds (30 minutes) and calculates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;effective_mean&lt;/code&gt;&lt;/strong&gt; — average requests per second&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;effective_std&lt;/code&gt;&lt;/strong&gt; — standard deviation of per-second counts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's an important twist: if enough samples exist in the &lt;strong&gt;current UTC hour&lt;/strong&gt;, the baseline uses only that hour's data instead of the full 30-minute window. This matters because traffic patterns shift — 2 AM is different from 2 PM, and the baseline should reflect &lt;em&gt;current&lt;/em&gt; conditions, not a blend of quiet and busy periods.&lt;/p&gt;

&lt;p&gt;Floor values prevent divide-by-zero edge cases in z-score calculations. Every recompute is &lt;strong&gt;audited&lt;/strong&gt; to a structured log file with the timestamp, source (hourly vs full window), and the computed mean/std.&lt;/p&gt;
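
&lt;p&gt;A rough sketch of what one baseline recompute looks like in Python. The bucketing, the hourly preference, and the standard-deviation floor are all shown in simplified form, and names like &lt;code&gt;min_hourly_samples&lt;/code&gt; are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import statistics
from collections import Counter

BASELINE_WINDOW = 1800   # 30 minutes
MIN_STD = 0.5            # floor to avoid divide-by-zero in z-scores

def recompute_baseline(timestamps, now, hour_start, min_hourly_samples=300):
    """Build per-second counts and prefer the current UTC hour if it has enough data."""
    counts = Counter(int(ts) for ts in timestamps if ts &gt;= now - BASELINE_WINDOW)
    hourly = [c for sec, c in counts.items() if sec &gt;= hour_start]
    source = "hourly" if len(hourly) &gt;= min_hourly_samples else "full_window"
    sample = hourly if source == "hourly" else list(counts.values())

    # Seconds with zero requests still count as samples (dense per-second vector)
    span = BASELINE_WINDOW if source == "full_window" else int(now - hour_start)
    sample = sample + [0] * max(0, span - len(sample))

    mean = statistics.fmean(sample) if sample else 0.0
    std = max(statistics.pstdev(sample) if sample else 0.0, MIN_STD)
    return {"source": source, "effective_mean": mean, "effective_std": std}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
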
&lt;h3&gt;
  
  
  Layer 3: Anomaly Evaluation
&lt;/h3&gt;

&lt;p&gt;For each incoming request, the detector compares current RPS to the baseline. An anomaly fires if &lt;strong&gt;either&lt;/strong&gt; condition is true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Z-score&lt;/strong&gt; &amp;gt; threshold (default &lt;strong&gt;3.0&lt;/strong&gt;) — the current rate is more than 3 standard deviations above the baseline mean&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate&lt;/strong&gt; &amp;gt; &lt;strong&gt;multiplier × baseline mean&lt;/strong&gt; (default &lt;strong&gt;5×&lt;/strong&gt;) — the current rate is more than 5 times the average&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Error surge tightening:&lt;/strong&gt; if an IP's error RPS (4xx/5xx responses) exceeds 3× the baseline error mean, thresholds tighten automatically — z-score drops to &lt;strong&gt;2.0&lt;/strong&gt; and the rate multiplier drops to &lt;strong&gt;3×&lt;/strong&gt;. This means an IP generating lots of failed requests gets scrutinized more aggressively, which is exactly what you want for brute-force login attempts.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Normal:     z &amp;gt; 3.0  OR  rate &amp;gt; 5 × mean  →  anomaly
Error surge: z &amp;gt; 2.0  OR  rate &amp;gt; 3 × mean  →  anomaly (tighter)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
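
&lt;p&gt;In code, that evaluation is just a pair of comparisons. This sketch uses the default thresholds from above; the function signature is mine, not the repo's:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Z_THRESHOLD = 3.0
RATE_MULTIPLIER = 5.0

def is_anomalous(rate, baseline, error_surge=False):
    """True if the current rate is anomalous against the rolling baseline."""
    z_limit = 2.0 if error_surge else Z_THRESHOLD
    multiplier = 3.0 if error_surge else RATE_MULTIPLIER
    z = (rate - baseline["effective_mean"]) / baseline["effective_std"]
    return z &gt; z_limit or rate &gt; multiplier * baseline["effective_mean"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;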






&lt;h2&gt;
  
  
  What Happens When an Anomaly Fires
&lt;/h2&gt;

&lt;p&gt;The system distinguishes between &lt;strong&gt;global&lt;/strong&gt; and &lt;strong&gt;per-IP&lt;/strong&gt; anomalies, and they trigger different responses:&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Anomaly → Slack Only
&lt;/h3&gt;

&lt;p&gt;If the aggregate RPS across all IPs spikes above the baseline, the detector sends a Slack notification. It does &lt;strong&gt;not&lt;/strong&gt; apply iptables rules — blocking all traffic would take the service down. Global alerts are informational: "your server is seeing unusual load right now."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls8nfh42pv0rjy6zn8oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls8nfh42pv0rjy6zn8oy.png" alt="Global anomaly Slack alert" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A cooldown (default 120 seconds) prevents Slack spam if the global anomaly persists for minutes.&lt;/p&gt;
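
&lt;p&gt;The Slack side is nothing more than a JSON POST to an incoming webhook. Here's a stdlib-only sketch with the cooldown guard; the helper names are mine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time
import urllib.request

COOLDOWN = 120        # seconds between global-anomaly alerts
_last_alert = 0.0

def notify_slack(webhook_url, text):
    """POST a simple message to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def maybe_alert_global(webhook_url, rate, mean):
    """Send a global-anomaly alert unless one went out recently."""
    global _last_alert
    if time.time() - _last_alert &lt; COOLDOWN:
        return  # still cooling down, skip this one
    _last_alert = time.time()
    notify_slack(webhook_url, f"Global anomaly: {rate:.1f} req/s vs baseline mean {mean:.2f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;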

&lt;h3&gt;
  
  
  Per-IP Anomaly → iptables DROP + Slack + Audit
&lt;/h3&gt;

&lt;p&gt;If a single IP is responsible for anomalous traffic, the detector:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Adds an &lt;code&gt;iptables -I INPUT -s &amp;lt;IP&amp;gt; -j DROP&lt;/code&gt; rule&lt;/strong&gt; — the IP is immediately blocked at the kernel level, before Nginx even sees the packets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sends a Slack notification&lt;/strong&gt; with the IP, the detection condition (z-score or rate multiplier), the current rate, and the baseline stats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writes a structured audit log entry&lt;/strong&gt; with all the same details plus the ban duration.&lt;/li&gt;
&lt;/ol&gt;
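
&lt;p&gt;The ban itself is a shell-out to &lt;code&gt;iptables&lt;/code&gt;. A simplified sketch of step 1 (the real detector also updates its state file and builds the Slack payload, which is omitted here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess

def ban_ip(ip):
    """Insert a DROP rule at the top of the INPUT chain for this IP."""
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
        check=True,
    )

def unban_ip(ip):
    """Remove the matching DROP rule once the ban expires."""
    subprocess.run(
        ["iptables", "-D", "INPUT", "-s", ip, "-j", "DROP"],
        check=True,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;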

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadji0h0baclhdeejsr2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadji0h0baclhdeejsr2c.png" alt="Ban Slack notification" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1wsoqrtrlyrn31lenu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1wsoqrtrlyrn31lenu6.png" alt="iptables showing DROP rule" width="800" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tiered Auto-Unban
&lt;/h3&gt;

&lt;p&gt;Permanently banning IPs from a single spike is too aggressive. The system uses &lt;strong&gt;escalating timeouts&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Strike&lt;/th&gt;
&lt;th&gt;Ban Duration&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1st&lt;/td&gt;
&lt;td&gt;10 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2nd&lt;/td&gt;
&lt;td&gt;30 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3rd&lt;/td&gt;
&lt;td&gt;2 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4th+&lt;/td&gt;
&lt;td&gt;Permanent (no auto-unban)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A background thread checks every 3 seconds for IPs whose ban has expired, removes the iptables rule, and sends an unban Slack notification. The strike counter persists across container restarts via a JSON file (&lt;code&gt;ban_state.json&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gxunbxazp8plvft3fa9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gxunbxazp8plvft3fa9.png" alt="Unban Slack notification" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means a legitimate user who triggered a false positive gets unblocked in 10 minutes. A repeat offender escalates through the tiers. By the 4th strike, they're gone for good.&lt;/p&gt;
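
&lt;p&gt;The escalation maps straight from the table to a lookup, and the strike counts survive restarts by being dumped to JSON. A sketch, with the file name taken from the repo and the function names mine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time
from pathlib import Path

TIERS = [600, 1800, 7200]        # 10 min, 30 min, 2 hours; the 4th strike is permanent
STATE_FILE = Path("ban_state.json")

def ban_duration(strikes):
    """Seconds to keep the IP banned, or None for a permanent ban."""
    return TIERS[strikes - 1] if strikes &lt;= len(TIERS) else None

def record_strike(ip):
    """Bump the strike count for an IP, persist it, and return the new ban duration."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    entry = state.setdefault(ip, {"strikes": 0})
    entry["strikes"] += 1
    duration = ban_duration(entry["strikes"])
    entry["expires_at"] = time.time() + duration if duration else None
    STATE_FILE.write_text(json.dumps(state))
    return duration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;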




&lt;h2&gt;
  
  
  The Audit Trail
&lt;/h2&gt;

&lt;p&gt;Every significant event is appended to a structured log file at &lt;code&gt;data/audit.log&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;BASELINE_RECALC&lt;/code&gt;&lt;/strong&gt; — every 60 seconds, with source (hourly vs full), mean, std&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;BAN&lt;/code&gt;&lt;/strong&gt; — IP, condition, rate, baseline stats, duration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;UNBAN&lt;/code&gt;&lt;/strong&gt; — IP, reason, historical ban count&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58t8mhs28kiyfqob80ma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58t8mhs28kiyfqob80ma.png" alt="Structured audit log" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This file is the source of truth for debugging, compliance, and the baseline graph (more below).&lt;/p&gt;
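
&lt;p&gt;Each of those entries is a timestamped JSON line appended to the file. A sketch of the shape; the exact field names in the repo may differ:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time

AUDIT_LOG = "data/audit.log"

def audit(event, **fields):
    """Append one structured event as a JSON line."""
    record = {"ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()), "event": event, **fields}
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(record) + "\n")

# Example:
# audit("BAN", ip="203.0.113.7", condition="z_score", rate=42.0, duration=600)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;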




&lt;h2&gt;
  
  
  The Dashboard
&lt;/h2&gt;

&lt;p&gt;A FastAPI server on port 8080 serves a single-page dashboard with live metrics via &lt;strong&gt;WebSocket push&lt;/strong&gt; (every 2.5 seconds). If WebSocket fails (e.g., behind a proxy without Upgrade support), the page falls back to HTTP polling automatically.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;/api/state&lt;/code&gt; JSON endpoint returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uptime, event count, CPU/memory&lt;/li&gt;
&lt;li&gt;Current global RPS and baseline &lt;code&gt;effective_mean&lt;/code&gt; / &lt;code&gt;effective_std&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;List of currently banned IPs with tier info&lt;/li&gt;
&lt;li&gt;Top 10 source IPs by request count in the current window&lt;/li&gt;
&lt;/ul&gt;
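
&lt;p&gt;For a sense of the shape, here's a minimal FastAPI sketch of that endpoint. The real dashboard adds the WebSocket push and the HTML page, and the values below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI

app = FastAPI()

@app.get("/api/state")
def state():
    """Snapshot of the detector's current state for the dashboard."""
    return {
        "uptime_seconds": 1234,
        "current_rps": 2.4,
        "baseline": {"effective_mean": 1.8, "effective_std": 0.6},
        "banned_ips": [{"ip": "203.0.113.7", "tier": 1, "expires_in": 540}],
        "top_ips": [{"ip": "198.51.100.3", "count": 87}],
    }

# Run with: uvicorn dashboard:app --host 0.0.0.0 --port 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;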




&lt;h2&gt;
  
  
  Baseline Over Time
&lt;/h2&gt;

&lt;p&gt;One of the requirements was demonstrating that the baseline actually adapts. By parsing &lt;code&gt;BASELINE_RECALC&lt;/code&gt; lines from the audit log and plotting &lt;code&gt;effective_mean&lt;/code&gt; over time, you can see the baseline shift as traffic patterns change between UTC hours.&lt;/p&gt;

&lt;p&gt;During a busy period, &lt;code&gt;effective_mean&lt;/code&gt; climbs. When traffic drops, it falls. The hourly-slice preference means the baseline reacts to the &lt;em&gt;current&lt;/em&gt; hour's pattern rather than being dragged by stale data from 25 minutes ago.&lt;/p&gt;
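
&lt;p&gt;Producing that graph is a few lines of matplotlib once the audit log is parsed. This sketch assumes the &lt;code&gt;BASELINE_RECALC&lt;/code&gt; entries are JSON lines with &lt;code&gt;ts&lt;/code&gt; and &lt;code&gt;effective_mean&lt;/code&gt; fields, which may not match the repo's exact field names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import matplotlib.pyplot as plt

times, means = [], []
with open("data/audit.log") as fh:
    for line in fh:
        rec = json.loads(line)
        if rec.get("event") == "BASELINE_RECALC":
            times.append(rec["ts"])
            means.append(rec["effective_mean"])

plt.plot(times, means)
plt.xlabel("time (UTC)")
plt.ylabel("effective_mean (req/s)")
plt.title("Rolling baseline over time")
plt.savefig("baseline.png")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;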




&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. JSON logs are non-negotiable.&lt;/strong&gt; Parsing regex against Nginx's default combined log format is fragile. One unusual user-agent string with spaces and quotes breaks your parser. JSON logs with &lt;code&gt;escape=json&lt;/code&gt; in the Nginx config give you reliable field extraction every time.&lt;/p&gt;
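
&lt;p&gt;For reference, the kind of Nginx &lt;code&gt;log_format&lt;/code&gt; that produces those logs looks roughly like this; the variable selection is illustrative and the repo's actual format may include more fields:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log_format json_access escape=json
  '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"body_bytes_sent":$body_bytes_sent,'
    '"http_user_agent":"$http_user_agent"'
  '}';

access_log /var/log/nginx/access.log json_access;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;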

&lt;p&gt;&lt;strong&gt;2. Host networking in Docker is powerful but surprising.&lt;/strong&gt; &lt;code&gt;network_mode: host&lt;/code&gt; means the container shares the host's network stack — &lt;code&gt;iptables&lt;/code&gt; rules apply to the actual server, not a virtual bridge. This is exactly what you want for blocking IPs, but it also means port conflicts are your problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Hardcoded thresholds are the enemy.&lt;/strong&gt; "Block after 100 requests per minute" sounds reasonable until your app legitimately serves 200 req/s during peak hours. A rolling baseline that adapts to actual traffic means your thresholds stay meaningful whether you're serving 2 req/s at 3 AM or 50 req/s at noon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Tiered responses prevent self-inflicted outages.&lt;/strong&gt; The first time I tested with aggressive thresholds, my own monitoring IP got permanently banned. Escalating tiers (10m → 30m → 2h → permanent) give false positives a way to recover while still catching persistent abuse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Audit everything.&lt;/strong&gt; When something goes wrong — a legitimate user gets blocked, or an attack slips through — the audit log tells you exactly what the baseline was, what the detector saw, and why it made the decision it did. Without that, you're guessing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Running It Yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Trojanhorse7/hng-anomaly-detector
&lt;span class="nb"&gt;cd &lt;/span&gt;hng-anomaly-detector
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# Set SLACK_WEBHOOK_URL in .env&lt;/span&gt;
docker compose build &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the stack is up, Nextcloud is available at &lt;code&gt;http://&amp;lt;VPS_IP&amp;gt;/&lt;/code&gt; and the dashboard at &lt;code&gt;http://&amp;lt;VPS_IP&amp;gt;:8080/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Thresholds, window sizes, and ban durations are all in &lt;code&gt;detector/config.yaml&lt;/code&gt; — no code changes needed to tune the system.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Improve
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Per-IP baselines&lt;/strong&gt; — currently all IPs are compared against the global baseline. High-traffic legitimate IPs (like a CDN edge) could benefit from their own rolling stats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTPS on the dashboard&lt;/strong&gt; — right now it's plain HTTP on 8080. A reverse proxy with TLS would be better for production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus/Grafana&lt;/strong&gt; — the audit log works, but a proper time-series database would make baseline visualization trivial.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IPv6&lt;/strong&gt; — the current implementation only handles IPv4 in iptables rules.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built for the &lt;a href="https://hng.tech/internship" rel="noopener noreferrer"&gt;HNG DevOps track&lt;/a&gt;. The full source is at &lt;a href="https://github.com/Trojanhorse7/hng-anomaly-detector" rel="noopener noreferrer"&gt;github.com/Trojanhorse7/hng-anomaly-detector&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>python</category>
      <category>docker</category>
      <category>security</category>
    </item>
  </channel>
</rss>
