Signal Over Noise
# How I Built a Live Cybersecurity Intelligence Dashboard on a Raspberry Pi 5

A few weeks ago I got tired of manually trawling through 15+ security blogs, RSS feeds, and Twitter/X accounts every morning. So I built an automated cybersecurity intelligence dashboard that runs 24/7 on a Raspberry Pi 5 sitting on my desk.

Here's what it does, how I built it, and every mistake I made along the way.

## What it does

signal-noise.tech is a live threat intelligence feed that:

  • Aggregates 18 cybersecurity RSS sources (The Hacker News, BleepingComputer, CISA advisories, Krebs, Unit 42, Cisco Talos, and more)
  • Refreshes every 30 minutes
  • Classifies stories by severity (Critical/Gov/Vendor/Media)
  • Detects 30+ known threat actors in headlines (Lazarus, ALPHV, Volt Typhoon, etc.)
  • Counts CVEs tracked, critical headlines, CISA advisories — live KPIs on the homepage
  • Auto-posts the top stories to @SignalOverNoizX on X

Everything runs on a Pi 5 behind a domestic broadband connection via Cloudflare tunnelling (no port forwarding required).
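For context, a Cloudflare Tunnel only needs the `cloudflared` daemon and a small ingress config — no inbound ports at all. A minimal sketch (the tunnel name and credentials path are placeholders, not my actual config):

```yaml
# /etc/cloudflared/config.yml — placeholder values
tunnel: signal-noise-tunnel
credentials-file: /etc/cloudflared/signal-noise-tunnel.json

ingress:
  - hostname: signal-noise.tech
    service: http://localhost:80   # Apache listening on the Pi
  - service: http_status:404      # required catch-all rule
```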


## The stack

  • Hardware: Raspberry Pi 5, 8GB RAM, running Raspberry Pi OS Bookworm 64-bit
  • Web server: Apache2 with SSL (Let's Encrypt), reverse proxy headers, security hardening
  • Backend: Python 3 scripts, scheduled via cron.d
  • AI: GPT-5-mini for tweet commentary generation
  • Frontend: Vanilla HTML/CSS/JS (no frameworks — keeps it fast on a Pi)
  • Feed data: news.json served as a static file, rebuilt every 30 minutes

## The feed pipeline

The core of the system is update_news.py. Every 30 minutes it:

  1. Fetches all 18 RSS feeds concurrently with a thread pool
  2. Deduplicates by URL
  3. Attempts to pull og:image for each story (for the card thumbnails)
  4. Falls back to Bing image search if no OG image is found
  5. Generates a branded dark-themed placeholder card if all else fails
  6. Scores stories by recency and source tier
  7. Writes news.json to the web root

The fan-out in step 1 uses `concurrent.futures`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor(max_workers=10) as ex:
    futures = {ex.submit(fetch_feed, src): src for src in SOURCES}
    for f in as_completed(futures):
        items.extend(f.result())
```
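The post doesn't show step 6's scoring, but the idea is simple: newer stories from higher-tier sources float to the top. A minimal sketch — the tier names and weights here are my assumptions, not the site's actual values:

```python
from datetime import datetime, timezone

# Hypothetical tier weights — the real values aren't in the post.
TIER_WEIGHT = {"gov": 3.0, "vendor": 2.0, "media": 1.0}

def score(item, now=None):
    """Score a story by recency decay and source tier."""
    now = now or datetime.now(timezone.utc)
    published = datetime.fromisoformat(item["published"].replace("Z", "+00:00"))
    age_hours = max((now - published).total_seconds() / 3600, 0)
    recency = 1 / (1 + age_hours / 6)   # roughly halves every 6 hours
    return recency * TIER_WEIGHT.get(item.get("tier", "media"), 1.0)
```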

The JSON structure is dead simple:

```json
{
  "generated_at": "2026-02-25T14:30:00Z",
  "items": [
    {
      "title": "Critical RCE in Ivanti products under active exploitation",
      "url": "https://...",
      "source": "BleepingComputer",
      "published": "2026-02-25T12:00:00Z",
      "image": "https://..."
    }
  ]
}
```

The frontend is a single JS file that fetches news.json and renders everything client-side. No database, no API calls at page load — just a static JSON file. The Pi barely breaks a sweat.


## The hardest part: getting decent images

Security articles are plagued with terrible default images — the CISA US flag, padlock stock photos, vendor logos. I built a three-level fallback:

  1. Parse og:image from the article HTML
  2. Query Bing Image Search API for the article title
  3. Generate a branded dark-themed card with PIL (Python Imaging Library)

The generated cards look surprisingly good — dark cyber aesthetic, company name, colour-coded by source type. Here's the basic approach:

```python
from PIL import Image, ImageDraw

# company_name, font and accent_colour come from the surrounding pipeline
img = Image.new('RGB', (800, 400), color='#0a0f1c')
draw = ImageDraw.Draw(img)
# dot grid background
for y in range(0, 400, 28):
    for x in range(0, 800, 28):
        draw.ellipse([x-1, y-1, x+1, y+1], fill='#1a2744')
# company name centred
draw.text((400, 200), company_name, font=font, fill=accent_colour, anchor='mm')
```

## The X automation

Four times a day, GPT-5-mini reads the top story and generates commentary in the voice of a sharp, opinionated security analyst. Not "Here's the latest news from BleepingComputer" — actual takes:

```text
System: You are a sharp, opinionated cybersecurity analyst with dry humour.
        Rules: max 220 chars, no hashtags, no sycophancy, never promotional.
        Sound human. Sound confident. Make people want to follow you.

User: Tweet about: Critical RCE in Palo Alto GlobalProtect...
```

The model is a reasoning model, which means it burns internal "thinking tokens" before producing output. Important gotchas I learned the hard way:

  • Use max_completion_tokens, NOT max_tokens (different parameter name)
  • Set it to at least 4000 — at 1000 it often burns everything on reasoning and returns empty output
  • Minimum 60s timeout — reasoning can take 30-45 seconds
  • Temperature parameter is NOT supported (reasoning models ignore it)
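Putting those gotchas together, the request ends up looking roughly like this (OpenAI Python SDK style; the prompt strings are abbreviated placeholders, not my exact prompts):

```python
# Chat-completion parameters with the reasoning-model gotchas applied.
def build_request(story_title):
    return {
        "model": "gpt-5-mini",
        "messages": [
            {"role": "system", "content": "You are a sharp, opinionated cybersecurity analyst..."},
            {"role": "user", "content": f"Tweet about: {story_title}"},
        ],
        # NOT max_tokens — and leave headroom for internal thinking tokens
        "max_completion_tokens": 4000,
        # no "temperature" key: reasoning models don't support it
    }

# resp = client.chat.completions.create(**build_request(title), timeout=60)
```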

## Security hardening the Pi

Exposing a Pi to the internet requires some care. Things I did:

  • ServerTokens Prod + ServerSignature Off — hides Apache version
  • Options -Indexes — no directory listings
  • /scripts/ blocked in Apache config (301 to /) + robots.txt
  • Full security header suite: X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy
  • GoAccess analytics behind basic auth, LAN IP restriction only
  • Credentials in /etc/openclaw/ (root-owned, 600 perms), never in code
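In Apache config terms, most of the list above is a handful of directives. An illustrative fragment — the CSP value here is a placeholder, not the site's actual policy:

```apache
# Illustrative hardening fragment (requires mod_headers)
ServerTokens Prod
ServerSignature Off

<Directory /var/www/html>
    Options -Indexes
</Directory>

# Block the scripts directory with a redirect to /
RedirectMatch 301 ^/scripts/ /

Header always set X-Frame-Options "DENY"
Header always set X-Content-Type-Options "nosniff"
Header always set Referrer-Policy "strict-origin-when-cross-origin"
# Placeholder CSP — the real policy isn't in the post
Header always set Content-Security-Policy "default-src 'self'"
```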

The stats dashboard (GoAccess) only serves to LAN IPs; I can check it on my local network, but it's invisible to the internet. The /stats/ path isn't listed in robots.txt either, so hiding it is partly security through obscurity, but combined with the IP restriction it's fine for this use case.


## The RSS feed

Generating RSS 2.0 from news.json is surprisingly straightforward. The feed is at /feed.xml and updates every 30 minutes alongside the JSON. Already picked up by some RSS readers.

The key is correct RFC-822 date formatting — RSS validators are picky:

```python
from email.utils import format_datetime
pub_date = format_datetime(dt)  # "Wed, 25 Feb 2026 14:30:00 +0000"
```
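The rest is string assembly. A sketch of rendering one `<item>` from a news.json entry — the helper name is mine, not from update_news.py:

```python
from datetime import datetime
from email.utils import format_datetime
from xml.sax.saxutils import escape

def rss_item(story):
    """Render one news.json entry as an RSS 2.0 <item>."""
    dt = datetime.fromisoformat(story["published"].replace("Z", "+00:00"))
    return (
        "<item>"
        f"<title>{escape(story['title'])}</title>"
        f"<link>{escape(story['url'])}</link>"
        f"<pubDate>{format_datetime(dt)}</pubDate>"
        "</item>"
    )
```

Escaping titles matters here too — security headlines are full of `&`, `<` and `>` that would otherwise break the XML.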

## Lessons learned

  1. Static files beat databases for this use case. No query latency, trivially cacheable, survives traffic spikes on a Pi.

  2. Reasoning models need breathing room. Set max_completion_tokens to 4000 even if your output is 50 characters. The model burns tokens thinking before it writes.

  3. Image quality matters more than you'd think. The first version showed every article with a padlock clipart or the CISA flag. It looked terrible. The three-level fallback (OG → Bing → generated card) made a massive difference.

  4. Deduplication by URL, not title. The same CVE advisory gets published by 8 sources simultaneously. Title dedup gives false positives (slightly different wording); URL dedup is clean.

  5. Apache's mod_headers is your friend. Adding security headers takes 10 minutes and dramatically improves security posture.
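Lesson 4 in code form — a minimal URL-keyed dedup. Normalising away query strings and trailing slashes is an extra step I'm assuming here; the post only says dedup is by URL:

```python
from urllib.parse import urlsplit, urlunsplit

def normalise(url):
    """Drop query string, fragment and trailing slash so republished links match."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))

def dedupe(items):
    seen = {}
    for item in items:
        seen.setdefault(normalise(item["url"]), item)  # keep first occurrence
    return list(seen.values())
```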


## What's next

  • Email newsletter (subscribers get a weekly "Week in Cyber" digest)
  • Reddit integration for /r/netsec and /r/cybersecurity monitoring (API access pending)
  • Mastodon cross-posting to infosec.exchange (live at @SignalOverNoizX@infosec.exchange)
  • Subscriber analytics — who's reading what

The site is live at signal-noise.tech and the X account is @SignalOverNoizX. RSS feed at /feed.xml if that's your thing.

Happy to answer questions in the comments — especially around the image pipeline and the GPT-5-mini quirks, which took the most debugging.
