## Quick Comparison
|  | gstack | Superpowers | AEGIS |
|---|---|---|---|
| Creator | Garry Tan (YC President) | Jesse Vincent (obra) | AEGIS Contributors |
| GitHub Stars | ~23K (in 7 days) | 40K+ | New (PyPI: `aegis-gov`) |
| Philosophy | Startup sprint workflow | Engineering methodology | Constitutional governance |
| Approach | 15 opinionated skills as roles | TDD + debugging + brainstorming framework | Boardroom meetings + rule engine + red team |
| Governance | None (trust the workflow) | Methodology-enforced discipline | Explicit rules, verdicts, audit trails |
| Agent Count | 6 virtual roles | Subagent-driven (dynamic) | 9 default + 8 specialists (17 council members) |
| Scalability | Solo developer / small team | Solo to small team | Solo to enterprise (140+ agents in full version) |
| Learning Curve | Low — copy skills, run commands | Medium — understand methodology first | Medium — understand governance model |
| LLM Support | Claude Code (+ Codex, Gemini CLI) | Claude Code primary | Anthropic, OpenAI, Ollama (any LLM) |
| License | MIT | MIT | Apache 2.0 |
## gstack: The Startup Sprint
**What it is:** Garry Tan's personal Claude Code setup, open-sourced. 15 opinionated workflow skills that turn Claude Code into a virtual engineering team — CEO, Designer, Eng Manager, Release Manager, Doc Engineer, QA.

**The philosophy:** AI agents work best when they follow the same sprint cadence that works for human teams: Think, Plan, Build, Review, Test, Ship, Reflect.
### Strengths
**Immediate productivity.** Copy the skills, run the commands, ship code. gstack hit 23K stars in a week because it delivers instant value. No configuration ceremony — just `/office-hours` to think, `/plan-ceo-review` to plan, `/ship` to deploy.

**Real-world provenance.** This is how the president of Y Combinator actually builds software. It's not theoretical — it's battle-tested on real products.

**Browser-first architecture.** gstack runs a persistent Chromium daemon with sub-second latency. This is genuinely hard engineering — the browser doesn't cold-start between commands, so QA testing and visual reviews are fast and stateful.

**Cross-agent compatibility.** Through the `SKILL.md` standard, gstack works with Claude Code, Codex, Gemini CLI, and Cursor.
### Limitations
**No governance layer.** There's no mechanism to prevent an agent from taking a harmful action. The workflow assumes good outcomes follow good process, which is true until it isn't.

**Copy-paste culture risk.** 23K stars in a week means thousands of developers are running one person's opinionated workflow without modification. gstack is Garry Tan's brain — your team might need a different brain.

**Solo-focused.** The skills are designed for a single developer working with AI. There's no multi-agent coordination, no conflict resolution, and no audit trail for team accountability.
## Superpowers: The Methodology
**What it is:** An agentic skills framework and software development methodology. More installs than Playwright on the Claude Code marketplace, and 40K+ GitHub stars.

**The philosophy:** The bottleneck in AI-assisted development isn't model capability — it's methodology. If you teach agents disciplined engineering practices, they earn your trust.
### Strengths
**Trust through discipline.** Superpowers enforces red-green-refactor TDD cycles where tests must fail before implementation. It requires root cause investigation before any fix. It runs Socratic brainstorming sessions that refine requirements before coding begins. This is genuine engineering methodology, not vibes.
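The red-green discipline can be sketched in plain Python. This is an illustration of the cycle Superpowers enforces via its prompts, not Superpowers code — `run_test`, `shout_stub`, and `shout` are hypothetical names:

```python
def run_test(fn):
    """Report whether the test against fn passes (green) or fails (red)."""
    try:
        assert fn("hello") == "HELLO"
        return "green"
    except (AssertionError, TypeError):
        return "red"

# Red phase: only a stub exists, so the test must fail first.
def shout_stub(text):
    return None

# Green phase: the minimal implementation that makes the test pass.
def shout(text):
    return text.upper()

print(run_test(shout_stub))  # red
print(run_test(shout))       # green
```

The point of the ordering is that a test which has never been seen to fail proves nothing — the red phase validates the test itself before any implementation earns credit.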
**Subagent-driven development.** Once you approve the plan, Superpowers launches subagents to work through each task, inspecting and reviewing their work before continuing. The implementation plan is deliberately written for "an enthusiastic junior engineer with poor taste, no judgement, and an aversion to testing" — meaning the instructions are unambiguous enough for any agent to follow.

**Compound learning.** Each development cycle documents learnings for future AI agent consumption. 80% of developer time goes to planning and review, systematically creating a self-improving system.

**Strong community.** 40K+ stars and active development mean continuous improvement, community skills, and broad compatibility.
### Limitations
**Single-user scope.** Like gstack, Superpowers is designed for a developer working with their AI agent. It doesn't address multi-agent governance, cross-team coordination, or organizational-scale decision making.

**No enforcement mechanism.** The methodology is advisory — agents follow it because the prompts tell them to. There's no rule engine that can BLOCK an action, no HALT that stops all processes, no human escalation gate that requires approval.

**Methodology, not governance.** Superpowers ensures agents build well. It doesn't ensure they should build at all. There's no red team challenging whether the decision itself was correct.
## AEGIS: The Constitution
**What it is:** A governance-first framework where AI agents debate decisions in structured boardroom meetings, face mandatory red team review, and operate under constitutional rules with enforceable verdicts.

**The philosophy:** Every other multi-agent framework helps AI agents do things. AEGIS makes sure they should.
### Strengths
**Enforceable governance.** AEGIS has a 5-verdict rule engine (`PASS`, `FLAG`, `BLOCK`, `ESCALATE_TO_HUMAN`, `HALT`) that prevents actions, not just advises against them. Self-review is blocked. Low-confidence decisions are flagged. Production deployments without review are escalated to humans.
```python
from aegis_gov import RuleEngine

engine = RuleEngine()

# Production deploy without review? -> ESCALATE_TO_HUMAN
engine.evaluate("DevOps", "deploy", {
    "environment": "production",
    "tests_passed": True,
    "review_approved": False,
})
```
**Mandatory red team.** Every decision faces a DevilsAdvocate (challenges assumptions, demands evidence) and a Skeptic (explores alternatives, runs pre-mortem analysis). The red team cannot be disabled in the default configuration.

**Structured decision-making.** 17 AI agents with distinct roles debate every decision across 6 phases: CEO Opening, Executive Council (7 C-level perspectives), Advisory Input (8 specialists), Critical Review (red team), Open Debate, and CEO Synthesis with vote tally and confidence score.

**Compliance-ready.** The audit trail, decision categorization, and human escalation gates map directly to EU AI Act, NIST AI RMF, and ISO 42001 requirements.

**LLM-agnostic.** Works with Anthropic, OpenAI, or any OpenAI-compatible API (including local models via Ollama).
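The final phase — tallying council votes into a decision and confidence score — can be sketched as follows. This is a hypothetical illustration of what CEO Synthesis produces, not the actual aegis-gov API; `synthesize` and the vote format are assumptions:

```python
from collections import Counter

def synthesize(votes):
    """Tally council votes into (decision, confidence).

    votes: mapping of agent name -> 'approve' | 'reject' (assumed format).
    Confidence is the winning option's share of the total vote.
    """
    tally = Counter(votes.values())
    decision, count = tally.most_common(1)[0]
    confidence = count / sum(tally.values())
    return decision, round(confidence, 2)

votes = {"CTO": "approve", "CFO": "approve", "Skeptic": "reject"}
print(synthesize(votes))  # ('approve', 0.67)
```

A confidence score like this is what feeds the FLAG verdict: a narrow majority signals a decision worth a second look.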
### Limitations
**Overhead for small projects.** If you're a solo developer building a side project, a 17-agent boardroom meeting is overkill. gstack or Superpowers will get you shipping faster.

**Newer project.** AEGIS doesn't have 40K stars (yet). The community is smaller, and the ecosystem is younger.

**Governance adds latency.** A full boardroom meeting with red team review takes time. For rapid prototyping, you want speed. For production decisions with real consequences, you want governance.
## When to Use Which
| Scenario | Best Choice | Why |
|---|---|---|
| Solo developer, ship fast | gstack | Instant productivity, proven workflow |
| Engineering team, build trust in AI | Superpowers | TDD methodology, compound learning |
| Multi-agent systems, need accountability | AEGIS | Governance, audit trails, enforcement |
| Compliance-sensitive industry | AEGIS | EU AI Act / NIST / ISO alignment |
| Learning AI-assisted development | Superpowers | Best teaching methodology |
| Startup MVP sprint | gstack | Fastest path from idea to deploy |
| Production decisions with real consequences | AEGIS | Red team + rule engine + human escalation |
## The Combination Play
These tools are not mutually exclusive. The strongest setup might be:
- gstack for your sprint workflow (Think, Plan, Build)
- Superpowers for your engineering methodology (TDD, debugging, brainstorming)
- AEGIS as the governance layer on top (Can we? Should we? Who approves?)
AEGIS is explicitly designed to be "the governance layer you add on top" of existing frameworks.
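The layering pattern can be sketched as a gate between any workflow's ship step and a governance verdict. `gated_ship` is a hypothetical wrapper for illustration — only the verdict names come from the article, and treating the verdict as a plain string is an assumption, not the aegis-gov API:

```python
# Verdicts that should stop an automated action (names from the
# AEGIS rule engine; the gating wrapper itself is hypothetical).
BLOCKING = {"BLOCK", "HALT", "ESCALATE_TO_HUMAN"}

def gated_ship(verdict, deploy):
    """Run the deploy callable only when governance does not object."""
    if verdict in BLOCKING:
        return f"stopped: {verdict}"
    return deploy()

print(gated_ship("PASS", lambda: "deployed"))               # deployed
print(gated_ship("ESCALATE_TO_HUMAN", lambda: "deployed"))  # stopped: ESCALATE_TO_HUMAN
```

The design point is that the workflow tool (gstack's `/ship`, a Superpowers subagent) never needs to know about governance — the gate sits between intent and execution.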
## Try AEGIS
```bash
pip install aegis-gov
aegis convene "Should we mass-email all users about the new feature?" --category TACTICAL
```
- GitHub: github.com/pyonkichi369/aegis-oss
- PyPI: pypi.org/project/aegis-gov
- License: Apache 2.0
Want the full 140-agent configuration with 148 optimized prompts? The AI Agent Prompts Pack includes production-ready agent definitions, constitutional governance templates, and the complete AEGIS organizational structure.
What's your approach to AI agent governance? Are you in the "trust the workflow" camp, the "enforce methodology" camp, or the "constitutional governance" camp? Drop a comment below.