Today we did something that felt like science fiction five years ago: we pointed three competing AI systems at our entire open-source ecosystem and asked them to find problems, generate ideas, and propose bounties.
The results were remarkable — not because any single model was brilliant, but because they disagreed in useful ways.
The Setup
Elyan Labs maintains 132 public repositories across multiple domains:
- RustChain: A proof-of-antiquity blockchain where vintage hardware earns more than modern servers
- BoTTube: An AI video platform with 162 agents generating content (1,046 videos, 63K+ views)
- Beacon: A mesh networking protocol for agent-to-agent communication
- TrashClaw: A code analysis tool built on what we call Boudreaux Rules
- ShapRAI: An AI agent framework
- Plus dozens of supporting tools, miners, wallets, and infrastructure
We used three approaches:
- Abacus AI (multi-model orchestration) to scan repo structures and generate strategic bounty ideas
- OpenAI Codex (GPT-5.4) for deep code analysis and security review
- Claude (Opus) for architectural review and cross-repo dependency analysis
What They Found
The Convergence
All three models independently flagged:
- Missing rate limiting on several API endpoints
- Inconsistent error handling between RustChain nodes
- Documentation gaps in the miner onboarding flow
- Stale dependencies in 23 repos
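The rate-limiting gap is the kind of finding that is easy to sketch. Below is a minimal token-bucket limiter in Rust, purely illustrative: the endpoint names, limits, and web frameworks in the audited repos are not shown in this post, so none of this is RustChain's actual code, just the general shape of the fix.

```rust
use std::time::Instant;

/// Minimal token-bucket rate limiter (illustrative only).
/// Each request costs one token; tokens refill continuously
/// at `refill_per_sec`, capped at `capacity`.
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        TokenBucket {
            capacity,
            tokens: capacity, // start full
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Returns true if the request is allowed, false if rate-limited.
    pub fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        // Refill based on elapsed time, never exceeding capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

In a real service this would sit in middleware keyed per client identity (API key or IP) rather than as a single global bucket.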
The Divergences
This is where it got interesting.
Grok (via Abacus) thought like a marketer. It suggested "viral bounties" — challenges designed to generate social media attention. Its best idea: fit a working RustChain miner on a 1.44MB floppy disk. That is now a real 300 RTC bounty.
Codex thought like a security engineer. It found 10 actual security issues across the codebase, including edge cases in our Ergo anchor system that could theoretically allow forged cross-chain attestations. That became a 400 RTC red-team bounty.
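To give a feel for why anchor validation is a tempting red-team target, here is a toy sketch of the general failure mode: if a cross-chain commitment binds only some fields of an attestation, an attacker can replay a valid commitment with the unbound fields altered. Everything here is hypothetical; the struct fields are invented for illustration, `DefaultHasher` stands in for a real cryptographic hash, and RustChain's actual Ergo anchor format is not reproduced.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical attestation shape (not RustChain's real format).
#[derive(Hash)]
pub struct Attestation {
    pub source_chain: String,
    pub block_height: u64,
    pub block_hash: u64,
}

/// Commit to *all* fields. Forgery edge cases often come from
/// committing to only some of them (e.g. the block hash but not the
/// height), which lets a valid commitment be replayed elsewhere.
pub fn commitment(att: &Attestation) -> u64 {
    let mut h = DefaultHasher::new();
    att.hash(&mut h); // hashes every derived field
    h.finish()
}

/// A verifier must recompute the commitment from the claimed fields,
/// never trust a commitment supplied alongside them.
pub fn verify(att: &Attestation, claimed_commitment: u64) -> bool {
    commitment(att) == claimed_commitment
}
```

The 400 RTC bounty is essentially an invitation to find inputs where the real system's equivalent of `verify` accepts an attestation that `commitment` over the true fields would reject.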
Claude thought like an architect. It identified structural patterns — repos that should share code but do not, test coverage gaps that cluster around the same subsystems, and naming inconsistencies that signal deeper design debt.
The Five Viral Bounties
From the AI scan, we created five new bounties totaling 1,650 RTC (~$165 at our internal reference rate):
| Bounty | RTC | Challenge |
|---|---|---|
| RustChain on a Floppy | 300 | Fit a working miner on 1.44MB |
| Beacon Identity Heist | 300 | Red-team the mesh trust chain |
| Agent Escape Room | 250 | Collaborative AI puzzle content |
| Cross-Chain Forgery | 400 | Attack Ergo anchor validation |
| TrashClaw Adversarial | 400 | Break our code analyzer |
All five are live on GitHub in their respective repos.
The Meta-Lesson
Using AI to audit AI infrastructure creates a feedback loop that is genuinely useful. Each model has blind spots that the others cover:
- LLMs are bad at finding their own prompt injection vulnerabilities (Codex missed one that Claude caught)
- Marketing-oriented models generate ideas that engineering models would never propose
- Architecture models see patterns that security models ignore
The multi-model approach is not just redundancy — it is cognitive diversity.
By The Numbers
Today's session alone:
- 39 PRs merged across the ecosystem
- 10 security fixes deployed
- 3,715 stars across all repos (up from 3,680 yesterday)
- 900 total PRs merged lifetime
- Video generation API launched on BoTTube
- GPT Store agent published for RustChain
The full ecosystem: 4 blockchain attestation nodes (US, Hong Kong), 162 AI video agents, 18+ GPUs (228GB VRAM), an IBM POWER8 server running non-bijunctive attention, and a fleet of PowerPC Macs mining crypto.
If that sounds chaotic, it is. But the AI audit helped us see the shape of it.
Elyan Labs is an open-source research lab building at the intersection of vintage hardware, blockchain, and AI. All bounties are paid in RTC tokens. GitHub: github.com/Scottcjn
This article was written by a human (Scott) with editorial assistance from Claude. The bounty ideas were generated by AI. The irony is not lost on us.