Ambar

I built a simulator that runs AI regulations through 10,000 agents and shows how many comply, how many relocate, and who evades

I got tired of AI policy debates being purely theoretical. Everyone argues about what a regulation should do. Nobody shows what companies will do.

So I built SwarmCast.

You upload a document (a policy draft, a news article, a hypothetical). SwarmCast parses it and runs a population of heterogeneous agents (companies, startups, regulators, investors) through it across 15 jurisdictions. Compliance curves, evasion patterns, jurisdiction flight, lobbying coalitions: all of it emerges from individual decisions, not hand-coded outcomes.
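To make "emerging from individual decisions" concrete, here's a minimal sketch of the idea, not SwarmCast's actual API. The agent fields, the scalar `burden`, and the three-way decision rule are all illustrative assumptions on my part:

```python
import random
from collections import Counter

# Hypothetical sketch: a heterogeneous agent population spread across
# 15 jurisdictions, where each agent picks comply / relocate / evade
# from its own cost tolerance. Names and thresholds are made up.
AGENT_TYPES = ["company", "startup", "regulator", "investor"]
JURISDICTIONS = [f"J{i:02d}" for i in range(15)]

def build_population(n, seed=0):
    rng = random.Random(seed)
    return [
        {
            "type": rng.choice(AGENT_TYPES),
            "jurisdiction": rng.choice(JURISDICTIONS),
            "tolerance": rng.random(),  # burden this agent will absorb
        }
        for _ in range(n)
    ]

def decide(agent, burden):
    # Individual decision rule; aggregate curves emerge from many of these.
    if burden <= agent["tolerance"]:
        return "comply"
    if burden - agent["tolerance"] > 0.4:
        return "evade"
    return "relocate"

population = build_population(10_000)
outcomes = Counter(decide(a, burden=0.6) for a in population)
```

The point is that nothing in the code says "40% will comply"; the aggregate split is whatever falls out of 10,000 independent threshold checks.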

Two things I cared about:

Epistemic honesty. Every output is tagged GROUNDED, DIRECTIONAL, or ASSUMED. If a number traces to calibrated empirical data, it says so. If it's a structural assumption, it says that too. ASSUMED outputs are visually dimmed. Most simulation tools present all their numbers with equal confidence. This one doesn't.
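The tagging idea is easy to sketch. This is my own toy rendering of it (the class, field names, and ANSI dimming are assumptions, not the repo's code):

```python
from dataclasses import dataclass

# Hypothetical sketch: every emitted metric carries an epistemic tag
# so the renderer can dim ASSUMED values instead of presenting all
# numbers with equal confidence.
GROUNDED, DIRECTIONAL, ASSUMED = "GROUNDED", "DIRECTIONAL", "ASSUMED"

@dataclass
class Metric:
    name: str
    value: float
    tag: str     # one of GROUNDED / DIRECTIONAL / ASSUMED
    source: str  # what the number traces back to

def render(m: Metric) -> str:
    text = f"{m.name}: {m.value:.2f} [{m.tag}] ({m.source})"
    # ANSI "dim" escape for assumed outputs.
    return f"\x1b[2m{text}\x1b[0m" if m.tag == ASSUMED else text

m = Metric("compliance_rate", 0.62, ASSUMED, "structural assumption")
```

Forcing every number through a provenance field means an untagged metric is a type error, not an oversight.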

Adversarial injection. Push a belief into a fraction of the population mid-run and measure how far it spreads and how much it bends aggregate behavior. Built for testing whether a governance framework survives coordinated narrative pressure — not just whether it looks good on paper.
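The mechanic, roughly, looks like this. A deliberately simplified sketch (the real engine's contagion model is surely richer; every parameter here is an illustrative assumption):

```python
import random

# Hypothetical sketch of adversarial injection: seed a belief in a
# fraction of agents mid-run, let it spread by random-contact
# contagion, and measure final reach.
def inject_and_spread(n=10_000, seed_frac=0.05, p_convince=0.3,
                      steps=5, seed=42):
    rng = random.Random(seed)
    believes = [False] * n
    for i in rng.sample(range(n), int(n * seed_frac)):
        believes[i] = True          # the injected belief
    for _ in range(steps):
        nxt = list(believes)
        for i in range(n):
            # Each believer tries to convince one random contact.
            if believes[i] and rng.random() < p_convince:
                nxt[rng.randrange(n)] = True
        believes = nxt
    return sum(believes) / n        # fraction of the population reached

reach = inject_and_spread()
```

Comparing `reach` (and the behavior shift of the reached agents) against a no-injection baseline is what tells you whether the framework bends under narrative pressure.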

Under the hood: the vectorized engine runs 10,000 agents in about 3 seconds. An optional LLM swarm mode spins up 23 persona agents in parallel to reason about the scenario and seed behavioral priors. It's slower, but the reasoning trace is readable and useful for presentations.
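Vectorization is where the ~3-second figure comes from: one array operation per decision instead of a Python loop per agent. A sketch with NumPy (again my assumption of the style, not the repo's code, reusing the toy comply/evade/relocate rule):

```python
import numpy as np

# Hypothetical vectorized decision step: the whole population's
# choices resolve in a handful of array ops.
rng = np.random.default_rng(0)
n = 10_000
tolerance = rng.random(n)   # per-agent burden tolerance
burden = 0.6                # scalar regulation burden

comply = burden <= tolerance
evade = (~comply) & (burden - tolerance > 0.4)
relocate = ~(comply | evade)

shares = {
    "comply": comply.mean(),
    "evade": evade.mean(),
    "relocate": relocate.mean(),
}
```

The boolean masks partition the population exactly, so the three shares always sum to 1 and per-agent Python overhead disappears.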

Built it for AI policy. Curious what happens on financial regulation, public health mandates, climate. The engine doesn't know the difference.

GitHub repo: https://github.com/Ambar-13/SwarmCast

Try uploading something unexpected!
