A simple, universal swarm intelligence engine for red teaming: simulate real attackers, not just tools.
Security training often falls into two traps: static labs that feel like a checklist, and dumb automation that chains tools without context. RedSwarm sits in the middle: a multi-agent simulator where each agent has a persona, memory, and tactics, and the system produces an attack narrative you can reason about, including MITRE ATT&CK mapping and a visual attack graph.
What problem does it solve?
| Pain | Typical answer | RedSwarm's angle |
|---|---|---|
| Red teaming is slow and expensive | Manual engagements | Many parallel, adaptive attack paths in a controlled model |
| Training feels fake | Scripted scenarios | Persona-driven agents (e.g. APT-style, opportunistic, insider) |
| Blue teams see alerts, not stories | SIEM noise | End-to-end chain: how, why, and what might come next |
| Hard to test "what if we patch X?" | Guesswork | God Mode: inject defenses and watch the swarm adapt |
The point is not to replace a skilled red team. It is to practice judgment, tell a coherent attack story, and stress-test assumptions in a sandbox.
What you actually run
RedSwarm is a FastAPI backend plus a Vue 3 + Vite + Tailwind frontend. The LLM layer is Anthropic Claude by default (OpenAI is also supported). Agent memory and simulation history live in SQLite.
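As a rough sketch of what SQLite-backed agent memory could look like (the table name and columns here are assumptions for illustration, not the project's actual schema):

```python
import sqlite3

def init_memory(conn: sqlite3.Connection) -> None:
    # Hypothetical schema; the real project's tables may differ.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS agent_memory (
            agent_id TEXT,
            step INTEGER,
            observation TEXT
        )
    """)

def remember(conn: sqlite3.Connection, agent_id: str, step: int, observation: str) -> None:
    conn.execute(
        "INSERT INTO agent_memory VALUES (?, ?, ?)",
        (agent_id, step, observation),
    )

def recall(conn: sqlite3.Connection, agent_id: str) -> list:
    # Replay an agent's observations in order, so later steps
    # can build on earlier findings.
    rows = conn.execute(
        "SELECT observation FROM agent_memory WHERE agent_id = ? ORDER BY step",
        (agent_id,),
    )
    return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")
init_memory(conn)
remember(conn, "recon-1", 0, "open port 22 on host-a")
remember(conn, "recon-1", 1, "weak SSH config on host-a")
print(recall(conn, "recon-1"))
```

The appeal of SQLite here is that a whole simulation's history is one portable file you can archive next to the report.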
At a high level:
- You define a scope (lab-style targets; the project is explicit about its ethical constraints).
- You spin up a swarm of agents with different roles and personas.
- You get a dashboard: live-ish status, graph, and reports with TTP tags.
Core ideas worth highlighting
1. Swarm intelligence, not a single chatbot
The README describes four agent flavors (recon, exploit, post-exploit, insider) with memory, personality, and tactics grounded in MITRE ATT&CK. Agents can hand off work (one finds a weakness, another pushes the chain forward) or compete for paths, which is closer to how real operations feel than a single monolithic "hacker GPT".
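The handoff idea can be sketched in a few lines. This is a minimal toy model, not RedSwarm's implementation; the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    host: str
    weakness: str
    technique: str  # a MITRE ATT&CK technique ID, e.g. "T1110"

@dataclass
class Agent:
    name: str
    role: str
    memory: list = field(default_factory=list)

    def hand_off(self, finding: Finding, other: "Agent") -> None:
        # The receiving agent records who produced the finding,
        # so the final narrative can reconstruct the whole chain.
        other.memory.append((self.name, finding))

recon = Agent("recon-1", "recon")
exploit = Agent("exploit-1", "exploit")
recon.hand_off(Finding("host-a", "weak SSH config", "T1110"), exploit)
print(exploit.memory[0][0])  # "recon-1"
```

Keeping provenance in memory is what lets the system tell a story ("recon found X, so exploit tried Y") instead of emitting disconnected events.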
2. God Mode
You can inject constraints (firewall rules, EDR on a host, patch notes, policy changes) and observe how the narrative shifts. That turns the tool into a defense rehearsal instrument, not only an attack toy.
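The mechanic is easy to picture as a toy model: an injected defense prunes some attack paths, and the swarm falls back to whatever remains. The path and defense names below are made up for the sketch:

```python
# Candidate attack paths the swarm is exploring.
paths = [
    {"via": "ssh-bruteforce", "host": "host-a"},
    {"via": "phishing", "host": "host-a"},
]
defenses = set()

def inject_defense(name: str) -> None:
    # God Mode: drop a new constraint into a running simulation.
    defenses.add(name)

def viable_paths() -> list:
    # Illustrative mapping from defenses to the paths they kill.
    blocked = {"firewall-block-22": "ssh-bruteforce"}
    dead = {blocked[d] for d in defenses if d in blocked}
    return [p for p in paths if p["via"] not in dead]

inject_defense("firewall-block-22")
print([p["via"] for p in viable_paths()])  # ['phishing']
```

The interesting output isn't the surviving path itself but the before/after diff: that is the "what if we patch X?" answer the table above promises.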
3. Training and CTF angle
Built-in framing includes scenario-style modes (e.g. themed challenges) and gamification hooks like leaderboards for speed or stealth. That makes it approachable for classes, CTF organizers, and internal lunch-and-learns.
Quick start (abbreviated)
Full steps live in the repo README; the shape is:
- Clone RedSwarm.
- Copy `.env.example` to `.env` and set `ANTHROPIC_API_KEY` (or the OpenAI equivalent).
- Backend: Python 3.11+, `uvicorn` on port `8000`.
- Frontend: Node 18+, `npm run dev` on port `3000`.
- Open the UI and run a simulation; use `/docs` on the API for Swagger.
From the repo root you can also use the `npm run dev` workflow (with `concurrently`) to run backend and frontend together, which is handy for contributors.
API in one breath
Everything is driven through REST: start a simulation, poll status, pull a report with MITRE mapping, and hit the God Mode inject endpoints. The README includes curl examples; the interactive docs at http://localhost:8000/docs are the source of truth while you integrate.
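In Python, a client can be equally brief. The endpoint paths and report shape below are assumptions for the sketch; check `/docs` on your instance for the real routes:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # local RedSwarm backend

def get_report(sim_id: str) -> dict:
    # Hypothetical route; verify against the Swagger docs.
    with urllib.request.urlopen(f"{BASE}/simulations/{sim_id}/report") as resp:
        return json.load(resp)

def extract_techniques(report: dict) -> list:
    # Deduplicate and sort the MITRE ATT&CK technique IDs
    # from an assumed list-of-steps report payload.
    return sorted({step["technique"] for step in report.get("steps", [])})

# Offline example of the pure helper, with a made-up payload:
sample = {"steps": [{"technique": "T1046"},
                    {"technique": "T1110"},
                    {"technique": "T1046"}]}
print(extract_techniques(sample))  # ['T1046', 'T1110']
```

Summarizing a run down to its technique IDs is a quick way to diff two simulations, e.g. before and after a God Mode injection.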
Ethics and license (non-negotiable context)
The maintainers emphasize sandbox-only use: lab ranges, authorized environments, no real-world targeting. Exploit behavior is simulated; this is a training and research system, not a weaponized scanner. The license is AGPL-3.0, which keeps derivatives open and aligns with transparency for security tooling.
Disclaimer: only use this on systems you own or are explicitly authorized to test. Unauthorized access is illegal everywhere that matters.
Why open-source it?
RedSwarm is the kind of project that benefits from public scrutiny: agent logic, guardrails, and API surface are easier to trust when the community can read and patch them. If the idea resonates, the most helpful things are issues (bugs, threats, misleading docs), PRs, and honest feedback on what makes a simulation useful versus theatrical.
Links
- Source & stars: github.com/tal7aouy/RedSwarm
- Issues: github.com/tal7aouy/RedSwarm/issues
- Discussions: github.com/tal7aouy/RedSwarm/discussions
If you try it in your lab, leave a comment with what worked, what felt unrealistic, and what you'd want next. That feedback loop is how tools like this get honest.
