Samuel Komfi

Sentinel Openclaw

OpenClaw Challenge Submission 🦞

This is a submission for the OpenClaw Writing Challenge

What I Built

My home server used to shout at me constantly and I'd tuned it out entirely. Every time NetworkManager blinked or GNOME sneezed, I'd get a noise storm. Then one night nginx quietly died and I didn't notice for six hours.

I built Sentinel, an event-driven Linux log anomaly detector that watches your systemd journal in real time, uses a local LLM to reason about each log line, and fires a Slack alert only when something genuinely looks wrong.

The core idea: instead of rigid grep rules that miss things you didn't think to filter for, Sentinel asks a local Ollama model "is this critical or just noise?" and acts on the answer. No cloud dependencies. No API keys. Everything runs on hardware you already own.

How I Used OpenClaw

The first version of this was a standalone Python script — a while True: loop with some subprocess.Popen and requests.post calls stapled together. It worked, but adding a second output channel meant editing the loop. Testing anything in isolation meant mocking half the file.

Rebuilding it on OpenClaw fixed all of that. Here's the architecture:

journalctl -f
     │
     ▼
Keyword gate          ← pure Python, no LLM invoked
     │
     ▼
Skip-pattern gate     ← drops known-noisy services
     │
     ▼
OpenClaw fires Event  ← "system_log_error"
     │
     ▼
LogAnalyzer Skill     ← local Ollama (llama3)
     │                   prompt: "Anomaly or Noise?"
     ▼
if "Anomaly" → SlackNotifier Skill

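The first box in that diagram, tailing the journal, reduces to a small generator. Here is a minimal sketch under my own assumptions (function names are mine, not Sentinel's exact code); the line handling is split out so it can be exercised without systemd:

```python
import subprocess
from typing import Iterable, Iterator

def iter_lines(stream: Iterable[str]) -> Iterator[str]:
    # Strip trailing newlines; separated from the subprocess plumbing
    # so it can be tested without a running systemd journal
    for line in stream:
        yield line.rstrip("\n")

def follow_journal() -> Iterator[str]:
    """Follow the systemd journal in real time, like `journalctl -f`."""
    proc = subprocess.Popen(
        ["journalctl", "-f", "-n", "0"],  # -n 0: skip backlog, stream new lines only
        stdout=subprocess.PIPE, text=True)
    yield from iter_lines(proc.stdout)
```

Each yielded line then flows through the two gates before an event is ever fired.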
Skills are the heart of the implementation. Each one is a focused, independently testable class:

The LogAnalyzer skill sends a zero-shot classification prompt to Ollama and returns a dict with a classification key. The prompt is intentionally constrained ("respond only with 'Noise' or 'Anomaly: [reason]'"), which keeps parsing trivial and forces the model to commit to a decision rather than hedge.

import ollama  # client library for the local Ollama server

OLLAMA_MODEL = "llama3"  # any locally pulled model works

class LogAnalyzer(Skill):  # Skill and @action come from the OpenClaw framework
    @action
    def analyze_log(self, raw_log_line: str) -> dict:
        # Constrained zero-shot prompt: the model must answer
        # 'Noise' or 'Anomaly: [reason]', nothing else
        prompt = (
            "You are a Linux sysadmin. Analyze this log line. "
            "Is it a critical anomaly or expected noise? "
            "Reply ONLY with 'Noise' or 'Anomaly: [brief reason]'.\n\n"
            f"Log: {raw_log_line}"
        )
        response = ollama.chat(model=OLLAMA_MODEL,
                               messages=[{'role': 'user', 'content': prompt}])
        return {"classification": response['message']['content'].strip()}

The SlackNotifier skill knows nothing about Ollama. The LogAnalyzer skill knows nothing about Slack. The orchestration in main.py connects them in a handful of lines:

def process_log_event(agent: Agent, event: Event):
    log_line = event.payload['log_line']
    result = agent.call_skill(LogAnalyzer, "analyze_log", raw_log_line=log_line)
    if "Anomaly" in result['classification']:
        agent.call_skill(SlackNotifier, "send_alert",
                         analysis_text=result['classification'],
                         log_line=log_line)

One workflow. Two skills. Clean separation throughout.

The two-gate pre-filter before the LLM matters more than it sounds. On a busy Linux desktop, journalctl -f produces hundreds of lines per minute. The keyword + skip-pattern gates bring LLM invocations down to a handful per hour under normal conditions — keeping inference fast and the machine cool.
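Both gates reduce to plain string checks. Here is a minimal sketch (the keyword and skip lists are illustrative, not Sentinel's exact sets):

```python
# Illustrative gate lists; the real sets live in Sentinel's config
ALERT_KEYWORDS = ("error", "critical", "failed", "out of memory", "killed")
SKIP_PATTERNS = ("NetworkManager", "gnome-shell")

def passes_gates(line: str) -> bool:
    lowered = line.lower()
    # Gate 1 (keyword): only lines containing an alert keyword go further
    if not any(k in lowered for k in ALERT_KEYWORDS):
        return False
    # Gate 2 (skip-pattern): drop lines from known-noisy services
    if any(p in line for p in SKIP_PATTERNS):
        return False
    return True
```

Only lines that survive both checks trigger an OpenClaw event and, ultimately, an LLM call.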

Demo

Slack alert for a real anomaly:

🛡️ Sentinel Alert
Analysis: Anomaly: OOM killer invoked — nginx process terminated unexpectedly
Raw log: Apr 25 03:14:27 kernel: Out of memory: Killed process 3821 (nginx) total-vm:512444kB

Terminal output during normal operation:

[Sentinel] Active — watching systemd journal...
[SKIP] NetworkManager line suppressed
[SKIP] gnome-shell line suppressed
[ANALYZE] kernel: Out of memory: Killed process 3821...
[ALERT] Anomaly detected — Slack notified

📎 Full source: github.com/samaras/sentinel-openclaw

What I Learned

Local LLMs are genuinely good at this task. I expected to spend time prompt-engineering my way to reliable classifications. In practice, llama3 with a tightly constrained prompt (Noise or Anomaly: [reason], nothing else) was accurate enough on the first try. The structured output constraint does most of the work; open-ended prompts produce hedged, verbose responses that are hard to act on.
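That constraint is what makes "trivial parsing" possible downstream. A sketch of the parse (a hypothetical helper of mine, not code from the repo):

```python
def parse_classification(reply: str) -> tuple[bool, str]:
    """Parse the constrained model reply: 'Noise' or 'Anomaly: [reason]'."""
    reply = reply.strip()
    if reply.lower().startswith("anomaly"):
        # Everything after the first colon is the model's brief reason
        _, _, reason = reply.partition(":")
        return True, reason.strip()
    return False, ""
```

Two branches, no regex, no JSON repair. That is the payoff of forcing the model into a fixed reply shape.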

The pre-filter matters more than the LLM. My first instinct was to send everything to Ollama and let it decide. That was naive. Hundreds of lines per minute, even at sub-second inference, adds up. The two-gate CPU filter — keyword match then skip-pattern check — is the real performance story. The LLM is the last resort, not the first line of defense.

Skills as a unit of composition beat functions in a script. I've written a lot of automation that became unmaintainable because it grew organically inside one file. Having a named, typed skill class with explicit inputs, outputs, and a single responsibility made the project easy to reason about from day one — and easy to explain to someone else, which is harder than it sounds.

What's next: A memory_skill that logs anomalies to SQLite so I can ask the agent "how many OOM events happened last week?", and a Slack slash command listener so I can interact with Sentinel conversationally from my phone. The OpenClaw agent architecture makes both of these additive rather than surgical changes — which is the whole point.
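The memory skill is easy to prototype. A hedged sketch of what it might look like (class name, schema, and methods are all mine; nothing here exists in the repo yet):

```python
import sqlite3
import time

class AnomalyMemory:
    """Hypothetical sketch of the planned memory skill: log anomalies to
    SQLite so the agent can answer 'how many OOM events last week?'."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS anomalies ("
            "ts REAL, log_line TEXT, analysis TEXT)")

    def record(self, log_line: str, analysis: str) -> None:
        self.db.execute("INSERT INTO anomalies VALUES (?, ?, ?)",
                        (time.time(), log_line, analysis))
        self.db.commit()

    def count_since(self, seconds_ago: float) -> int:
        cutoff = time.time() - seconds_ago
        (n,) = self.db.execute(
            "SELECT COUNT(*) FROM anomalies WHERE ts >= ?", (cutoff,)).fetchone()
        return n
```

Wired in as a third skill, it would hook the same event handler: record on every anomaly, query on demand.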

ClawCon Michigan

I didn't attend ClawCon Michigan. I'm based in Johannesburg, South Africa, so the trip wasn't feasible.
