DEV Community

Aniket Hingane
Building an Autonomous SOC Analyst Swarm with Python

How I Automated Security Operations Using a Mixture-of-Agents Architecture

TL;DR

I built an "Autonomous SOC Swarm" where three specialized AI agents (Network, Identity, Threat Intel) collaborate to analyze security logs in real-time. Using a Coordinator agent to aggregate their votes, the system autonomously blocks threats and flags anomalies. This article covers the design, the Python implementation, and how I simulated a "Mixture-of-Agents" pattern for cybersecurity.

Introduction

In the world of Security Operations Centers (SOC), alert fatigue is real. Analysts burn out trying to triage thousands of events daily. I wondered: Could I build a squad of AI agents that think like a seasoned security team?

In this experiment, I moved beyond a single "chatbot" approach. I designed a swarm where each agent wears a specific hat—one watching the firewall, one checking user behavior, and one consulting threat intelligence. By making them vote, I aimed to reduce false positives and automate the boring stuff.

What's This Article About?

This is a technical walkthrough of building a Mixture-of-Agents (MoA) system for SOC automation. You'll see:

  • ECD (Event-Context-Decision) Architecture.
  • Python implementation of a voting mechanism.
  • A simulated "Live" dashboard in the terminal.

Tech Stack

  • Python 3.12: Core logic.
  • Rich: For that beautiful terminal UI.
  • Mermaid.js: For visualizing the agents' reasoning flow.
  • Pillow: To generate the frame-by-frame forensics animation.

Why Read It?

If you're interested in Multi-Agent Systems or Cybersecurity Automation, this project bridges the gap. It’s not just theory; it’s a running simulation you can clone and extend. Plus, seeing the agents "argue" over a verdict in the logs is pretty cool.

Let's Design

Architecture Overview

The system follows a hub-and-spoke model. The Coordinator sits in the center, receiving inputs from specialized agents.

(Architecture diagram)

The Workflow

  1. Ingest: A log event arrives (e.g., SSH Login).
  2. Analyze: All three agents analyze it in parallel.
  3. Vote: Each agent submits a verdict (SAFE, SUSPICIOUS, MALICIOUS) and confidence score.
  4. Decide: The Coordinator weighs the votes and executes a response.
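Stripped of the agent internals, steps 2–4 boil down to a small function. The following is a sketch of the pattern only, not the repo's actual API (`process_event` and its parameter names are my own):

```python
from typing import Any, Callable, Dict, List

Vote = Dict[str, Any]


def process_event(
    event: Dict[str, Any],
    analyzers: List[Callable[[Dict[str, Any]], Vote]],
    decide: Callable[[List[Vote]], Dict[str, Any]],
) -> Dict[str, Any]:
    # Steps 2-3: every agent inspects the same event and casts a vote.
    votes = [analyze(event) for analyze in analyzers]
    # Step 4: the coordinator turns the votes into one decision.
    return decide(votes)
```

In the real system the `analyze` calls would run concurrently; a plain list comprehension keeps the sketch readable.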

(Workflow diagram)

Agent Communication

Here is how the message flow looks when a suspicious event occurs:

(Sequence diagram)

Let’s Get Cooking

I started by defining the "Agents". I wanted them to be modular so I could swap their "brains" (simple heuristics vs LLMs) easily.

1. The Agents

I created a BaseAgent and subclassed it for specific roles.
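The article doesn't show `BaseAgent` itself, so here is a minimal sketch of what the interface presumably looks like (the constructor signature and everything beyond `analyze` are my assumption):

```python
# Hypothetical sketch of the base class -- the real one lives in src/agents.py.
from typing import Any, Dict


class BaseAgent:
    def __init__(self, name: str):
        self.name = name

    def analyze(self, log: Dict[str, Any]) -> Dict[str, Any]:
        # Subclasses override this with their own heuristics (or an LLM call).
        raise NotImplementedError
```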

# src/agents.py
from typing import Any, Dict


class NetworkAgent(BaseAgent):
    def analyze(self, log: Dict[str, Any]) -> Dict[str, Any]:
        # ... logic to check port scans ...
        if log.get("event_type") == "port_scan":
            return {
                "agent": self.name,
                "verdict": "malicious",
                "confidence": 0.95,
                "reason": f"Port scan detected from {log['source_ip']}"
            }
        return {"agent": self.name, "verdict": "safe", "confidence": 0.9}

The CoordinatorAgent is where the magic happens. It implements the "Mixture-of-Agents" voting logic.

# src/agents.py
from typing import Any, Dict, List


class CoordinatorAgent(BaseAgent):
    def aggregate_votes(self, votes: List[Dict[str, Any]]) -> Dict[str, Any]:
        score = 0

        for vote in votes:
            if vote["verdict"] == "malicious":
                score += 2
            elif vote["verdict"] == "suspicious":
                score += 1

        if score >= 3:
            return {"final_verdict": "CRITICAL", "action": "BLOCK_IP"}
        elif score >= 1:
            return {"final_verdict": "WARNING", "action": "FLAG_FOR_REVIEW"}

        return {"final_verdict": "SAFE", "action": "MONITOR"}

In my opinion, this simple scoring system is often more robust than a single monolithic prompt, as it forces consensus.
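You can sanity-check the thresholds without the rest of the system. Here the scoring logic is reimplemented standalone (same weights as above) and fed one hypothetical round of votes:

```python
# Standalone demo of the voting logic, reimplemented so it runs on its own.
from typing import Any, Dict, List


def aggregate_votes(votes: List[Dict[str, Any]]) -> Dict[str, Any]:
    score = 0
    for vote in votes:
        if vote["verdict"] == "malicious":
            score += 2            # a malicious vote weighs double
        elif vote["verdict"] == "suspicious":
            score += 1
    if score >= 3:
        return {"final_verdict": "CRITICAL", "action": "BLOCK_IP"}
    elif score >= 1:
        return {"final_verdict": "WARNING", "action": "FLAG_FOR_REVIEW"}
    return {"final_verdict": "SAFE", "action": "MONITOR"}


votes = [
    {"agent": "network", "verdict": "malicious", "confidence": 0.95},
    {"agent": "identity", "verdict": "suspicious", "confidence": 0.6},
    {"agent": "intel", "verdict": "safe", "confidence": 0.9},
]
print(aggregate_votes(votes))  # score 2 + 1 = 3 -> CRITICAL / BLOCK_IP
```

Note that one malicious vote alone scores 2, which only reaches WARNING; it takes corroboration from a second agent to trigger a block.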

2. The Orchestration

To bring it to life, I wrapped it in a loop that generates mock data and prints the "thought process" using rich.

# main.py
from rich.live import Live  # rich's Live display drives the dashboard

    with Live(table, refresh_per_second=4) as live:
        for incident in generator.generate_stream(count=15):
            # ... 
            votes = [
                network_agent.analyze(incident),
                identity_agent.analyze(incident),
                intel_agent.analyze(incident)
            ]

            decision = coordinator.aggregate_votes(votes)
            # ... print to table ...

This makes the tool feel like a real CLI product, which I've found acts as a great feedback loop during development.
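The repo's `generate_stream` isn't shown in the article; a minimal stand-in that emits mock incidents might look like this (the field names mirror what the agent snippets above read; everything else is assumed):

```python
import random
from typing import Any, Dict, Iterator


def generate_stream(count: int = 15) -> Iterator[Dict[str, Any]]:
    # Yields mock incidents; the event types mirror what the agents check for.
    event_types = ["port_scan", "ssh_login", "failed_login", "dns_query"]
    for i in range(count):
        yield {
            "id": i,
            "event_type": random.choice(event_types),
            "source_ip": f"10.0.0.{random.randint(1, 254)}",
        }
```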

Let's Setup

You can find the step-by-step setup in the repository.

  1. Clone the repo:
   git clone https://github.com/aniket-work/autonomous-soc-swarm
   cd autonomous-soc-swarm
  2. Install dependencies:
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt

Let's Run

Running the simulation is straightforward:

python main.py

I observed that as the swarm processes events, you can clearly see the "False Positives" being filtered out. For instance, a "Failed Login" might be flagged by the Identity Agent, but if the Network Agent sees no other traffic, the Coordinator might just flag it as a Warning rather than blocking the user entirely.

Here is the result of a run:

(Screenshot of a sample run)

Closing Thoughts

Building this Autonomous SOC Swarm was a great exercise in agent orchestration. By splitting the responsibilities, I created a system that is more explainable and easier to tune than a black-box model.

In the future, I plan to connect this to real integration points like AWS GuardDuty or Splunk.

GitHub Repository: https://github.com/aniket-work/autonomous-soc-swarm

Disclaimer

The views and opinions expressed here are solely my own and do not represent the views, positions, or opinions of my employer or any organization I am affiliated with. The content is based on my personal experience and experimentation and may be incomplete or incorrect. Any errors or misinterpretations are unintentional, and I apologize in advance if any statements are misunderstood or misrepresented.
