For the past few months I’ve been experimenting with ways to visualize trust and stability in distributed AI systems: the kind of architecture where dozens of agents must cooperate without a central brain.
The result is something I call Swarm-ISM-X.
The Public Demo (v2) is now open-sourced — a clean, safe version that shows how the swarm behaves, not why it behaves that way.
🌀 What you’ll see
- A Tkinter-based GUI that displays 10 agents along a horizontal line.
- Each agent moves, stabilizes, and maintains formation under light “wind” disturbances.
- Each agent has a “passport” indicator (green = valid, red = invalid).
- An “Auto Demo” mode runs scripted sequences for presentations.
- The simulation updates in real time: you can watch the system find balance, lose it, and regain it (a minimal loop sketch follows this list).
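For readers who haven’t animated a Tkinter canvas before, here is a minimal sketch of what a real-time update loop like this generally looks like. This is my own illustration, not code from the repo; the agent count, jitter, and layout values are placeholders.

```python
import tkinter as tk
import random

N = 10  # number of agents

root = tk.Tk()
root.title("Swarm demo sketch")
canvas = tk.Canvas(root, width=800, height=200, bg="white")
canvas.pack()

# Start agents at random x positions along a horizontal line.
xs = [random.uniform(50, 750) for _ in range(N)]
dots = [canvas.create_oval(0, 0, 10, 10, fill="green") for _ in range(N)]

def tick():
    for i in range(N):
        xs[i] += random.uniform(-2, 2)  # placeholder "wind" jitter
        canvas.coords(dots[i], xs[i] - 5, 95, xs[i] + 5, 105)
    root.after(33, tick)  # reschedule: roughly 30 updates per second

tick()
root.mainloop()
```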
🔍 What’s really happening
Under the hood, each agent is governed by a simplified consensus-like controller:
$$
u_i \;=\; -\,k_i \nabla_i S
$$
where $S$ is a scalar constraint potential that penalizes unequal spacing and deviation from the desired total span, and $k_i$ is agent $i$’s gain.
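To make that concrete, here is a minimal sketch of a gradient controller of this form. It is my reconstruction from the equation above, not the demo’s actual code; the quadratic spacing penalty, uniform gain, and noise level are assumptions.

```python
import numpy as np

N, k, dt = 10, 0.5, 0.05
target_gap = 1.0
x = np.sort(np.random.uniform(0.0, 9.0, N))  # agent positions on a line

def grad_S(x):
    """Gradient of S(x) = 0.5 * sum_i (x[i+1] - x[i] - target_gap)^2."""
    e = np.diff(x) - target_gap  # spacing error of each neighbor gap
    g = np.zeros_like(x)
    g[:-1] -= e                  # dS/dx_i picks up -e_i from gap i
    g[1:] += e                   # dS/dx_{i+1} picks up +e_i from gap i
    return g

for step in range(1000):
    u = -k * grad_S(x)                  # u_i = -k_i * grad_i S (uniform gain here)
    wind = 0.05 * np.random.randn(N)    # light disturbance
    x += dt * (u + wind)
    if step % 200 == 0:
        print(f"||S|| ~ {np.linalg.norm(np.diff(x) - target_gap):.3f}")
```

Each gradient step pulls an agent toward equalizing its neighbor gaps, so the spacing error decays toward the noise floor: essentially the “find balance, lose it, regain it” behavior the GUI animates.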
The real ISM-X framework extends this idea with:
- Adaptive gain tuning using resonant feedback (not in the public demo).
- Cryptographic attestation (Ed25519 + HMAC commitments); a generic sketch follows the next paragraph.
- Passport issuance and verification between agents.
- Log-periodic modulation for stability over communication delays.
The public demo keeps only the first-order visible dynamics — enough to show formation control and disturbance recovery — while replacing sensitive parts with lightweight placeholders.
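The attestation layer itself is not published, so to be clear, the following is only a generic illustration of the named primitives (an Ed25519 signature over an HMAC commitment) using the third-party `cryptography` package; the key handling and message format are invented for the example and are not the ISM-X protocol.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent holds an Ed25519 identity key.
identity = Ed25519PrivateKey.generate()

# Commit to a state message with an HMAC, then sign the commitment.
shared_key = os.urandom(32)  # pairwise shared secret (illustrative)
state = b"agent-3:pos=4.02:t=1718"
commitment = hmac.new(shared_key, state, hashlib.sha256).digest()
signature = identity.sign(commitment)

# A peer (holding the signer's public key) recomputes the HMAC
# and checks the signature over it.
peer_view = hmac.new(shared_key, state, hashlib.sha256).digest()
identity.public_key().verify(signature, peer_view)  # raises InvalidSignature on tamper
print("attestation verified")
```

Any tampering with `state` or the signature makes `verify` raise `InvalidSignature`, which is the kind of failed check a red passport indicator would represent.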
🔒 What’s included vs. hidden
| Layer | Included | Hidden |
| --- | --- | --- |
| GUI visualization | ✅ | – |
| Swarm dynamics (simple consensus) | ✅ | – |
| Passport system (stubbed SHA-1) | ✅ | Real attestation (Ed25519/HMAC) |
| Adaptive control & resonance | ❌ | Proprietary |
| Informational geometry layer | ❌ | Research |
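The stubbed passport amounts to something like the following. This is my paraphrase of the idea (a SHA-1 digest standing in for real attestation), not the repo’s exact code; the function names and the `secret` parameter are hypothetical.

```python
import hashlib

def issue_passport(agent_id: str, secret: str = "demo") -> str:
    # Stub: a SHA-1 digest stands in for real Ed25519/HMAC attestation.
    return hashlib.sha1(f"{agent_id}:{secret}".encode()).hexdigest()

def verify_passport(agent_id: str, passport: str, secret: str = "demo") -> bool:
    return passport == issue_passport(agent_id, secret)

p = issue_passport("agent-3")
print(verify_passport("agent-3", p))       # True  -> green indicator
print(verify_passport("agent-3", p[:-1]))  # False -> red indicator
```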
⚙️ Run it yourself
```bash
git clone https://github.com/Freeky7819/swarm-ismx-gui-demo.git
cd swarm-ismx-gui-demo
pip install numpy
python main_gui_public.py
```
Works out of the box on Python 3.10+.
The GUI shows live values of $\|S\|$, $J$, and the per-agent gains $k_i$.
🧩 Why it matters
Visual demos like this help bridge the gap between AI orchestration and trust architectures.
You can see — literally — what happens when an agent’s integrity fails, when noise enters, or when collective damping stabilizes the system.
This isn’t a neural network or RL — it’s a physically grounded, interpretable control system.
Think of it as a way to watch trust itself breathe.
GitHub: [Swarm-ISM-X GUI Demo v2](https://github.com/Freeky7819/swarm-ismx-gui-demo)
Author: Damjan
Reason in resonance.
Feedback is always welcome — especially if you work on:
- multi-agent coordination,
- real-time visualization,
- control theory + cryptographic verification bridges.
Let’s make AI agents not only smarter — but also more honest.