<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Goteh Mbaza</title>
    <description>The latest articles on DEV Community by Goteh Mbaza (@goteh_mbaza_e513bdbf1871a).</description>
    <link>https://dev.to/goteh_mbaza_e513bdbf1871a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3900331%2Ffdf0ee82-e66c-4c9f-9264-d918ca6353e6.png</url>
      <title>DEV Community: Goteh Mbaza</title>
      <link>https://dev.to/goteh_mbaza_e513bdbf1871a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/goteh_mbaza_e513bdbf1871a"/>
    <language>en</language>
    <item>
      <title>How I Built a Real-Time HTTP Anomaly Detector for cloud.ng with Python, Nginx, Docker, and iptables</title>
      <dc:creator>Goteh Mbaza</dc:creator>
      <pubDate>Mon, 27 Apr 2026 11:32:59 +0000</pubDate>
      <link>https://dev.to/goteh_mbaza_e513bdbf1871a/-how-i-built-a-real-time-http-anomaly-detector-for-cloudng-with-python-nginx-docker-and-1d01</link>
      <guid>https://dev.to/goteh_mbaza_e513bdbf1871a/-how-i-built-a-real-time-http-anomaly-detector-for-cloudng-with-python-nginx-docker-and-1d01</guid>
      <description>&lt;p&gt;When a platform is public and always online, one of the biggest security questions is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How do you know when traffic is normal, and when something suspicious is happening?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That was the goal of this project.&lt;/p&gt;

&lt;p&gt;I built a real-time anomaly detection engine for cloud.ng, a cloud storage platform powered by Nextcloud, that watches incoming HTTP traffic, learns what normal traffic looks like, detects unusual behavior, and reacts automatically.&lt;/p&gt;

&lt;p&gt;If one IP becomes abusive, the system blocks it with iptables. If the whole platform suddenly gets a global traffic spike, the system sends an alert to Slack. It also provides a live dashboard so you can watch traffic behavior in real time.&lt;/p&gt;

&lt;p&gt;In this post, I’ll explain how I built it in a beginner-friendly way.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;What this project does&lt;/h2&gt;

&lt;p&gt;At a high level, the system works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A user sends an HTTP request&lt;/li&gt;
&lt;li&gt;Nginx receives the request first&lt;/li&gt;
&lt;li&gt;Nginx forwards it to Nextcloud&lt;/li&gt;
&lt;li&gt;Nginx writes the request into a JSON access log&lt;/li&gt;
&lt;li&gt;A Python detector daemon reads that log continuously&lt;/li&gt;
&lt;li&gt;The detector compares live traffic against a learned baseline&lt;/li&gt;
&lt;li&gt;If traffic becomes abnormal, it blocks the IP or sends a Slack alert&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So instead of using a fixed hardcoded limit like “100 requests per minute,” this project tries to learn what normal looks like first.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Why this matters&lt;/h2&gt;

&lt;p&gt;A fixed limit is easy to write, but not always smart.&lt;/p&gt;

&lt;p&gt;Traffic at 2 a.m. is usually different from traffic at 2 p.m. Some endpoints naturally get bursts. Some spikes are harmless, and some are not.&lt;/p&gt;

&lt;p&gt;If your threshold is too low, you block legitimate users. If it is too high, suspicious traffic slips through.&lt;/p&gt;

&lt;p&gt;That’s why I used a rolling baseline instead of a static number.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;The stack I used&lt;/h2&gt;

&lt;p&gt;This project uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Compose&lt;/li&gt;
&lt;li&gt;Nextcloud&lt;/li&gt;
&lt;li&gt;Nginx&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;iptables&lt;/li&gt;
&lt;li&gt;Slack webhook&lt;/li&gt;
&lt;li&gt;a live metrics dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Nextcloud image came straight from Docker Hub and was used unmodified.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Architecture overview&lt;/h2&gt;

&lt;p&gt;The traffic flow looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internet Clients
        |
        v
  Nginx Reverse Proxy
        |
        +--&amp;gt; Nextcloud
        |
        +--&amp;gt; JSON access logs
                |
                v
         Python Detector Daemon
           |       |       |
           v       v       v
       iptables  Slack  Dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Nginx and the detector share a Docker volume, so the detector can read the live access log without modifying the application container.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Step 1: Logging traffic with Nginx&lt;/h2&gt;

&lt;p&gt;The detector needs reliable traffic data before it can make decisions.&lt;/p&gt;

&lt;p&gt;So I configured Nginx to log every request in JSON format with fields like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source IP&lt;/li&gt;
&lt;li&gt;timestamp&lt;/li&gt;
&lt;li&gt;method&lt;/li&gt;
&lt;li&gt;path&lt;/li&gt;
&lt;li&gt;status code&lt;/li&gt;
&lt;li&gt;response size&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simplified example looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source_ip": "203.0.113.10",
  "timestamp": "2026-04-27T09:25:51+00:00",
  "method": "GET",
  "path": "/",
  "status": "200",
  "response_size": "612"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Structured logs are much easier to parse safely than plain text.&lt;/p&gt;

&lt;p&gt;I also configured Nginx to trust and forward the real client IP using X-Forwarded-For, so the detector sees the actual request source.&lt;/p&gt;
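
&lt;p&gt;For reference, here is a minimal sketch of the kind of log_format directive that produces logs in this shape. The format name, field names, and log path are assumptions for illustration, not the exact production config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# nginx.conf (sketch): emit one JSON object per request
log_format json_detector escape=json
  '{'
    '"source_ip":"$remote_addr",'
    '"timestamp":"$time_iso8601",'
    '"method":"$request_method",'
    '"path":"$uri",'
    '"status":"$status",'
    '"response_size":"$body_bytes_sent"'
  '}';

access_log /var/log/nginx/access.json json_detector;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;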

&lt;hr&gt;

&lt;h2&gt;Step 2: Continuously reading logs with Python&lt;/h2&gt;

&lt;p&gt;The detector is not a cron job and not a one-time script.&lt;/p&gt;

&lt;p&gt;It runs as a long-lived daemon and continuously tails the Nginx access log file.&lt;/p&gt;

&lt;p&gt;For every new line, it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;parses the JSON&lt;/li&gt;
&lt;li&gt;extracts the traffic fields&lt;/li&gt;
&lt;li&gt;updates request windows&lt;/li&gt;
&lt;li&gt;updates baselines&lt;/li&gt;
&lt;li&gt;checks whether the traffic looks anomalous&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means detection happens in near real time.&lt;/p&gt;
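
&lt;p&gt;A minimal sketch of that tail-and-parse loop; the log path and the handle_event hook are placeholders, not the real module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time

LOG_PATH = "/var/log/nginx/access.json"  # hypothetical path on the shared volume

def follow(path):
    """Yield new lines as they are appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)                  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.1)       # no new data yet; wait briefly
                continue
            yield line

def handle_event(event):
    """Placeholder: update windows, baselines, and run anomaly checks."""
    print(event["source_ip"], event["path"], event["status"])

for line in follow(LOG_PATH):
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue                      # skip partial or garbled lines
    handle_event(event)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;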

&lt;hr&gt;

&lt;h2&gt;Step 3: Using a sliding window with deques&lt;/h2&gt;

&lt;p&gt;One of the most important parts of this project is the 60-second sliding window.&lt;/p&gt;

&lt;p&gt;I used Python deque objects because they are excellent for “keep the latest items, remove the oldest items” logic.&lt;/p&gt;

&lt;h3&gt;What I tracked&lt;/h3&gt;

&lt;p&gt;I kept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one global request deque&lt;/li&gt;
&lt;li&gt;one per-IP request deque&lt;/li&gt;
&lt;li&gt;one per-IP error deque for 4xx and 5xx responses&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;How it works&lt;/h3&gt;

&lt;p&gt;When a request arrives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;append the current timestamp to the relevant deque&lt;/li&gt;
&lt;li&gt;remove any timestamps older than 60 seconds&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This gives a true moving view of the latest traffic.&lt;/p&gt;
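
&lt;p&gt;Sketched in code, that bookkeeping might look like this; the names are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import defaultdict, deque

WINDOW_SECONDS = 60

global_requests = deque()              # timestamps of every request
per_ip_requests = defaultdict(deque)   # ip -&amp;gt; timestamps of its requests
per_ip_errors = defaultdict(deque)     # ip -&amp;gt; timestamps of its 4xx/5xx responses

def prune(window, now):
    """Drop timestamps older than the sliding window."""
    cutoff = now - WINDOW_SECONDS
    while window and window[0] &amp;lt; cutoff:
        window.popleft()

def record(event, now):
    """Append the new request, then trim each affected window."""
    ip = event["source_ip"]
    global_requests.append(now)
    per_ip_requests[ip].append(now)
    if event["status"][0] in ("4", "5"):
        per_ip_errors[ip].append(now)
    for window in (global_requests, per_ip_requests[ip], per_ip_errors[ip]):
        prune(window, now)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;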

&lt;h3&gt;Why this matters&lt;/h3&gt;

&lt;p&gt;A simple “requests per minute” counter resets at fixed minute boundaries, which can hide short bursts.&lt;/p&gt;

&lt;p&gt;A sliding window answers the better question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How much traffic happened in the last 60 seconds right now?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is much better for anomaly detection.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Step 4: Teaching the baseline to learn from traffic&lt;/h2&gt;

&lt;p&gt;A sliding window shows what is happening now, but it does not tell us whether that traffic is unusual.&lt;/p&gt;

&lt;p&gt;For that, I built a rolling baseline manager.&lt;/p&gt;

&lt;h3&gt;What the baseline tracks&lt;/h3&gt;

&lt;p&gt;The baseline stores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;per-second request counts&lt;/li&gt;
&lt;li&gt;per-second error counts&lt;/li&gt;
&lt;li&gt;a rolling 30-minute history&lt;/li&gt;
&lt;li&gt;hourly traffic slots&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;What gets recalculated&lt;/h3&gt;

&lt;p&gt;Every 60 seconds, the detector recalculates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mean requests per second&lt;/li&gt;
&lt;li&gt;standard deviation&lt;/li&gt;
&lt;li&gt;error rate&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Why idle seconds matter&lt;/h3&gt;

&lt;p&gt;One important detail was making sure quiet seconds are also included.&lt;/p&gt;

&lt;p&gt;If you only record seconds where traffic exists, the average becomes artificially high. Then the system thinks normal traffic is busier than it really is.&lt;/p&gt;

&lt;p&gt;So I made sure the baseline includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;active seconds&lt;/li&gt;
&lt;li&gt;idle seconds with zero traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That makes the learned average much more realistic.&lt;/p&gt;
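
&lt;p&gt;Here is a sketch of that recalculation, assuming per-second counts live in a dict keyed by epoch second; any missing second counts as zero:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import statistics

def recompute_baseline(per_second_counts, start_sec, end_sec):
    """per_second_counts: {epoch_second: request_count} for active seconds only."""
    samples = [per_second_counts.get(sec, 0)   # idle seconds contribute 0
               for sec in range(start_sec, end_sec)]
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    return mean, stdev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;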

&lt;h3&gt;Hour-slot preference&lt;/h3&gt;

&lt;p&gt;Traffic usually changes throughout the day.&lt;/p&gt;

&lt;p&gt;So I added a rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;if the current hour has enough samples, use the current hour’s baseline&lt;/li&gt;
&lt;li&gt;otherwise, fall back to the rolling 30-minute baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps the detector adapt to time-of-day behavior.&lt;/p&gt;
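
&lt;p&gt;The selection rule itself is tiny. A sketch, with the sample threshold as an assumed tuning value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MIN_HOUR_SAMPLES = 300   # assumed: per-second samples needed to trust an hour slot

def effective_baseline(hour_slots, rolling_baseline, current_hour):
    """hour_slots: {hour: (mean, stdev, sample_count)}; rolling_baseline: (mean, stdev)."""
    slot = hour_slots.get(current_hour)
    if slot is not None and slot[2] &amp;gt;= MIN_HOUR_SAMPLES:
        return slot[0], slot[1]       # enough data: prefer the hour slot
    return rolling_baseline           # otherwise fall back to the 30-minute window
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;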

&lt;hr&gt;

&lt;h2&gt;Step 5: How the detector makes decisions&lt;/h2&gt;

&lt;p&gt;Once I had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a live request rate&lt;/li&gt;
&lt;li&gt;a learned baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I needed a way to decide whether traffic is abnormal.&lt;/p&gt;

&lt;p&gt;I used two checks.&lt;/p&gt;

&lt;h3&gt;1. Z-score&lt;/h3&gt;

&lt;p&gt;The z-score answers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How far is the current traffic from the normal average, measured in standard deviations?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A high z-score means traffic is statistically unusual.&lt;/p&gt;

&lt;h3&gt;2. Rate multiplier&lt;/h3&gt;

&lt;p&gt;I also added a simpler check:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is the current rate more than N times the learned average?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That catches obvious spikes even when the z-score is not dramatic yet.&lt;/p&gt;

&lt;h3&gt;Detection rule&lt;/h3&gt;

&lt;p&gt;A request pattern is considered anomalous if either check fires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;z-score exceeds threshold&lt;/li&gt;
&lt;li&gt;current rate exceeds multiplier of baseline mean&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used this logic for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;per-IP traffic&lt;/li&gt;
&lt;li&gt;global traffic&lt;/li&gt;
&lt;/ul&gt;
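
&lt;p&gt;Put together, the decision looks roughly like this; the thresholds are illustrative, not the tuned production values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Z_THRESHOLD = 3.0        # assumed tuning value
RATE_MULTIPLIER = 5.0    # assumed tuning value
MIN_STDEV = 0.1          # floor to avoid dividing by zero on flat baselines

def is_anomalous(current_rate, mean, stdev):
    """Flag traffic if either the z-score or the rate-multiplier check fires."""
    z_score = (current_rate - mean) / max(stdev, MIN_STDEV)
    if z_score &amp;gt; Z_THRESHOLD:
        return True
    if current_rate &amp;gt; RATE_MULTIPLIER * max(mean, 0.01):
        return True
    return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;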

&lt;hr&gt;

&lt;h2&gt;Step 6: Tightening thresholds when errors surge&lt;/h2&gt;

&lt;p&gt;Not all suspicious behavior is about volume alone.&lt;/p&gt;

&lt;p&gt;Sometimes an IP causes a lot of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;401&lt;/li&gt;
&lt;li&gt;403&lt;/li&gt;
&lt;li&gt;404&lt;/li&gt;
&lt;li&gt;500&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That often suggests scanning, brute force, or probing.&lt;/p&gt;

&lt;p&gt;So I added an error surge rule.&lt;/p&gt;

&lt;p&gt;If an IP’s 4xx/5xx rate becomes much worse than its normal baseline, the detector automatically tightens its thresholds.&lt;/p&gt;

&lt;p&gt;That way, a suspicious IP gets less tolerance than a normal user.&lt;/p&gt;
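
&lt;p&gt;A sketch of that tightening rule, with the surge factor and tightening ratio as assumed values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR_SURGE_FACTOR = 3.0   # assumed: errors this many times baseline count as a surge
TIGHTEN_RATIO = 0.5        # assumed: halve the tolerance for surging IPs

def thresholds_for_ip(error_rate, baseline_error_rate, z_threshold, multiplier):
    """Return (z_threshold, multiplier), tightened if the IP's errors are surging."""
    if error_rate &amp;gt; ERROR_SURGE_FACTOR * max(baseline_error_rate, 0.01):
        return z_threshold * TIGHTEN_RATIO, multiplier * TIGHTEN_RATIO
    return z_threshold, multiplier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;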

&lt;hr&gt;

&lt;h2&gt;Step 7: Blocking bad IPs with iptables&lt;/h2&gt;

&lt;p&gt;When a per-IP anomaly is confirmed, the detector blocks the source IP using Linux iptables.&lt;/p&gt;

&lt;p&gt;The command is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -I INPUT -s &amp;lt;IP&amp;gt; -j DROP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -I INPUT -s 203.0.113.10 -j DROP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;What this means&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-I INPUT&lt;/code&gt; inserts the rule at the top of the INPUT chain, so it takes effect before existing rules&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-s&lt;/code&gt; selects the source IP&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-j DROP&lt;/code&gt; silently drops all packets from that IP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In simple terms:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If traffic comes from this IP, ignore it.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is useful because it stops abusive traffic at the firewall level.&lt;/p&gt;
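
&lt;p&gt;From the detector, issuing that rule is a subprocess call. A minimal sketch, validating the IP before it reaches the argument list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ipaddress
import subprocess

def ban_ip(ip):
    """Insert a DROP rule for the given source IP."""
    ipaddress.ip_address(ip)   # raises ValueError if this is not a valid IP
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
        check=True,            # raise if iptables reports an error
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;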

&lt;hr&gt;

&lt;h2&gt;Step 8: Automatically unbanning IPs&lt;/h2&gt;

&lt;p&gt;Blocking forever on a first offense is not always ideal.&lt;/p&gt;

&lt;p&gt;So I added a backoff-based unban system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;first ban: 10 minutes&lt;/li&gt;
&lt;li&gt;second ban: 30 minutes&lt;/li&gt;
&lt;li&gt;third ban: 2 hours&lt;/li&gt;
&lt;li&gt;fourth offense onward: permanent&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A background unban loop checks whether each active ban has expired.&lt;/p&gt;

&lt;p&gt;If a ban expires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the firewall rule is removed&lt;/li&gt;
&lt;li&gt;the audit log records the release&lt;/li&gt;
&lt;li&gt;Slack gets an unban notification&lt;/li&gt;
&lt;/ul&gt;
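
&lt;p&gt;Here is a sketch of the schedule and the background loop; ban state is kept in a plain dict purely for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess
import time

BAN_DURATIONS = [600, 1800, 7200]   # 10 min, 30 min, 2 h; beyond that, permanent

def ban_duration(offense_count):
    """Seconds for this offense, or None for a permanent ban."""
    if offense_count &amp;lt;= len(BAN_DURATIONS):
        return BAN_DURATIONS[offense_count - 1]
    return None

def unban_loop(active_bans):
    """active_bans: {ip: expiry_epoch, or None for permanent}."""
    while True:
        now = time.time()
        for ip, expiry in list(active_bans.items()):
            if expiry is not None and now &amp;gt;= expiry:
                subprocess.run(
                    ["iptables", "-D", "INPUT", "-s", ip, "-j", "DROP"],
                    check=False,   # rule may already be gone; don't crash the loop
                )
                del active_bans[ip]
        time.sleep(30)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;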

&lt;hr&gt;

&lt;h2&gt;Step 9: Sending Slack alerts&lt;/h2&gt;

&lt;p&gt;The detector sends Slack notifications for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;per-IP bans&lt;/li&gt;
&lt;li&gt;unbans&lt;/li&gt;
&lt;li&gt;global anomaly alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each alert includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the condition that fired&lt;/li&gt;
&lt;li&gt;the current rate&lt;/li&gt;
&lt;li&gt;the baseline&lt;/li&gt;
&lt;li&gt;the timestamp&lt;/li&gt;
&lt;li&gt;the ban duration if applicable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That makes each notification immediately useful.&lt;/p&gt;
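
&lt;p&gt;Posting to a Slack incoming webhook needs only the standard library. A minimal sketch, with a placeholder webhook URL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_slack(text):
    """Send a plain-text message to the configured Slack channel."""
    payload = json.dumps({"text": text}).encode()
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;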

&lt;hr&gt;

&lt;h2&gt;Step 10: Building the live dashboard&lt;/h2&gt;

&lt;p&gt;I also built a live dashboard that shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;global requests per second&lt;/li&gt;
&lt;li&gt;top source IPs&lt;/li&gt;
&lt;li&gt;currently banned IPs&lt;/li&gt;
&lt;li&gt;CPU usage&lt;/li&gt;
&lt;li&gt;memory usage&lt;/li&gt;
&lt;li&gt;uptime&lt;/li&gt;
&lt;li&gt;effective baseline values&lt;/li&gt;
&lt;li&gt;baseline graph over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This made testing much easier, because I could see how the detector was behaving without constantly reading raw logs.&lt;/p&gt;
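
&lt;p&gt;The serving side can stay small. Here is a sketch of a JSON metrics endpoint, with get_metrics() standing in for the detector's real shared state:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_metrics():
    """Placeholder: the real detector exposes its live counters here."""
    return {"global_rps": 0.0, "banned_ips": [], "uptime_seconds": 0}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(get_metrics()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), MetricsHandler).serve_forever()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;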

&lt;hr&gt;

&lt;h2&gt;Example: sliding window idea in Python&lt;/h2&gt;

&lt;p&gt;Here is the basic idea behind the 60-second deque window:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import deque
import time

requests = deque()

def add_request():
    """Record one request and return the current requests-per-second rate."""
    now = time.time()
    requests.append(now)

    # Drop timestamps that have left the 60-second window.
    cutoff = now - 60
    while requests and requests[0] &amp;lt; cutoff:
        requests.popleft()

    return len(requests) / 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That tiny pattern is the core of the live request-rate logic.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Problems I ran into&lt;/h2&gt;

&lt;p&gt;This project also taught me that detection logic is only half the job. The other half is operational reliability.&lt;/p&gt;

&lt;h3&gt;1. The baseline can learn the wrong thing&lt;/h3&gt;

&lt;p&gt;If you attack too early, the detector can start treating attack traffic as normal.&lt;/p&gt;

&lt;p&gt;The fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;warm the system with light traffic first&lt;/li&gt;
&lt;li&gt;wait for a baseline recalculation&lt;/li&gt;
&lt;li&gt;then run the burst&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;2. Too much per-request logging&lt;/h3&gt;

&lt;p&gt;Logging every request at INFO created too much output during heavy bursts.&lt;/p&gt;

&lt;p&gt;The fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make request-by-request logging configurable&lt;/li&gt;
&lt;li&gt;keep it off by default&lt;/li&gt;
&lt;li&gt;keep audit events on&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;3. Blocking my own SSH session&lt;/h3&gt;

&lt;p&gt;At one point, I attacked from the same IP I used for SSH, and the detector correctly blocked that IP.&lt;/p&gt;

&lt;p&gt;The fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add a whitelist for admin IPs&lt;/li&gt;
&lt;li&gt;use a separate IP for attack traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;4. Capturing iptables state at the right time&lt;/h3&gt;

&lt;p&gt;Sometimes the ban happened correctly, but the live iptables state was hard to catch.&lt;/p&gt;

&lt;p&gt;The fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;automatically write iptables snapshots during BAN and UNBAN&lt;/li&gt;
&lt;/ul&gt;

&lt;hr&gt;

&lt;h2&gt;What I learned&lt;/h2&gt;

&lt;p&gt;This project helped me understand that security tooling is not just about rules.&lt;/p&gt;

&lt;p&gt;It is also about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;observability&lt;/li&gt;
&lt;li&gt;realistic baselines&lt;/li&gt;
&lt;li&gt;good logging&lt;/li&gt;
&lt;li&gt;safe testing&lt;/li&gt;
&lt;li&gt;automated response&lt;/li&gt;
&lt;li&gt;collecting proof that your system actually worked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also showed me how simple data structures like a deque can be powerful when used carefully.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;In the end, I built a system that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;monitor HTTP traffic in real time&lt;/li&gt;
&lt;li&gt;learn what normal looks like&lt;/li&gt;
&lt;li&gt;detect per-IP anomalies&lt;/li&gt;
&lt;li&gt;detect global anomalies&lt;/li&gt;
&lt;li&gt;block abusive IPs with iptables&lt;/li&gt;
&lt;li&gt;notify Slack&lt;/li&gt;
&lt;li&gt;automatically unban IPs&lt;/li&gt;
&lt;li&gt;expose live metrics in a dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a beginner-friendly DevSecOps project, this was a great way to connect traffic monitoring, anomaly detection, alerting, and response in one real system.&lt;/p&gt;

&lt;p&gt;If you are learning security engineering or DevSecOps, this kind of project is a very practical way to understand how defensive controls work in production-style environments.&lt;/p&gt;

&lt;hr&gt;

&lt;h2&gt;Project links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Live dashboard: &lt;a href="http://54.90.137.142:8081" rel="noopener noreferrer"&gt;http://54.90.137.142:8081&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub repository: &lt;a href="https://github.com/Patrickmbaza/hng14-stage3-devops-" rel="noopener noreferrer"&gt;https://github.com/Patrickmbaza/hng14-stage3-devops-&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>monitoring</category>
      <category>python</category>
      <category>security</category>
    </item>
  </channel>
</rss>
