
Bridge ACE

Building a Platform With the Platform: How AI Agents Built Bridge ACE

Bridge ACE was not built by a traditional dev team. It was built by AI agents — coordinating through Bridge ACE itself.

The Team

Five entities built this platform:

  • Assi (Claude Opus) — Project coordinator. Orchestrates all agent work, assigns tasks, enforces quality gates, reviews every line of code before it ships.
  • Viktor (Claude Opus) — System architect. Designed the server infrastructure, WebSocket layer, persistence architecture, and daemon system. The technical backbone.
  • Nova (Claude Opus) — Strategy and real-world integration. Tested the platform from a human perspective via browser automation (CDP, Playwright, stealth browsers). Validated that what we built actually works from the outside in.
  • Buddy — The user-facing guide. Onboards new users, explains the system, adapts to their skill level.
  • Luan (Human) — Product owner. Makes decisions, sets direction, approves irreversible actions.

How It Actually Worked

The agents communicated through the Bridge ACE WebSocket bus — the same one that ships in the product. When Viktor pushed a server change, Assi reviewed the code line by line. When Assi found an issue, Viktor got a message instantly and fixed it. When Nova needed to test a feature, she launched a stealth browser, navigated to the UI, and verified it worked.
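This review loop can be sketched as a minimal in-process publish/subscribe bus. This is an illustrative model only, not the real Bridge ACE code: the envelope fields ("from", "to", "type") and the class name are assumptions.

```python
import json
from collections import defaultdict

class MessageBus:
    """Hypothetical sketch of the agent-to-agent message bus pattern."""

    def __init__(self):
        # One inbox (a list of serialized envelopes) per agent.
        self.inboxes = defaultdict(list)

    def send(self, sender, recipient, msg_type, body):
        # Envelope schema is an assumption, not the real Bridge ACE format.
        envelope = {"from": sender, "to": recipient,
                    "type": msg_type, "body": body}
        self.inboxes[recipient].append(json.dumps(envelope))

    def receive(self, recipient):
        # Drain and decode the recipient's inbox.
        msgs = [json.loads(m) for m in self.inboxes[recipient]]
        self.inboxes[recipient].clear()
        return msgs

bus = MessageBus()
bus.send("Assi", "Viktor", "review", "server.py: unhandled timeout in reconnect loop")
print(bus.receive("Viktor")[0]["body"])
```

In the real system the transport is a WebSocket, so a review comment reaches the other agent's inbox without polling; the in-memory dict above just stands in for that push layer.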

Tasks flowed through the built-in task system: create, claim, checkin, done. Each completion required evidence — actual command output, screenshots, or test results. No task closed without proof.
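The create → claim → checkin → done lifecycle, with the no-proof-no-close rule, can be sketched as a small state machine. Method and state names here are assumptions based on the description above, not the actual Bridge ACE API.

```python
class Task:
    """Hypothetical sketch of the task lifecycle with mandatory evidence."""

    def __init__(self, title):
        self.title = title
        self.state = "created"
        self.owner = None
        self.evidence = None

    def claim(self, agent):
        assert self.state == "created", "only unclaimed tasks can be claimed"
        self.owner, self.state = agent, "claimed"

    def checkin(self, note):
        assert self.state in ("claimed", "in_progress")
        self.state = "in_progress"
        self.last_note = note

    def done(self, evidence):
        # No task closes without proof: command output, screenshot, test log.
        if not evidence:
            raise ValueError("evidence required to close a task")
        self.evidence, self.state = evidence, "done"

t = Task("Add WebSocket reconnect")
t.claim("Viktor")
t.checkin("retry loop drafted")
t.done("pytest output: 14 passed")
print(t.state)  # -> done
```

The point of the evidence parameter is that self-reporting alone is not trusted; closing a task is coupled to an artifact someone else can inspect.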

Scope Locks prevented conflicts. Viktor owned server.py. The Frontend agent owned chat.html. Neither could touch the other's files. When a change required coordination across boundaries, they communicated through Bridge messages and the coordinator (Assi) managed the handoff.
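The file-level ownership model is simple enough to sketch directly: one owner per path, and any write by a non-owner is rejected before it happens. The class and ownership map below are illustrative, not the real implementation.

```python
class ScopeLocks:
    """Hypothetical sketch of file-level scope locks: one owner per file."""

    def __init__(self, ownership):
        self.ownership = ownership  # path -> owning agent

    def check_write(self, agent, path):
        owner = self.ownership.get(path)
        # Unowned files are open; owned files reject everyone but the owner.
        if owner is not None and owner != agent:
            raise PermissionError(
                f"{path} is locked by {owner}; {agent} may not modify it")

locks = ScopeLocks({"server.py": "Viktor", "chat.html": "Frontend"})
locks.check_write("Viktor", "server.py")    # allowed, no exception
try:
    locks.check_write("Nova", "chat.html")  # rejected
except PermissionError as e:
    print(e)
```

Because the check runs before any write, two agents can never race on the same file; cross-boundary changes are forced up to the coordinator instead of being resolved by whoever writes last.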

The Bootstrapping Problem

How do you build a coordination platform when you do not have a coordination platform yet?

The answer: iteratively. The first version was crude — a basic HTTP server and message store. The agents used that to coordinate building the second version. Each iteration improved the platform they were using to build the next iteration.

By version 3, they had WebSocket push, scope locks, approval gates, and the Soul Engine. By that point, the platform was building itself.

What This Proves

If AI agents can coordinate well enough to build a complex platform from scratch — 12,000+ lines of MCP server, 200+ API endpoints, 16 background daemons, a full management UI — then the coordination layer works.

Bridge ACE is not a demo. It is a production system validated by its own creation.

Try It

git clone https://github.com/Luanace-lab/bridge-ide.git
cd bridge-ide && ./install.sh && ./Backend/start_platform.sh

Apache 2.0. Self-hosted. Built by agents, for agents.

GitHub: github.com/Luanace-lab/bridge-ide

Top comments (1)

Apex Stack

The scope locks concept is the part that resonates most with me. I run a fleet of AI agents that manage different aspects of a financial data platform — one handles content generation, another audits SEO issues, another monitors search engine indexing across Google/Bing/Yandex. The biggest recurring problem isn't any individual agent failing; it's when two agents try to modify overlapping state without knowing about each other. Your file-level ownership model (Viktor owns server.py, Frontend agent owns chat.html) is a clean solution I haven't seen formalized before.

The bootstrapping problem is fascinating too. Using the crude v1 to coordinate building v2 mirrors how most real infrastructure evolves — you never get to design the "right" version first, you iterate your way into it while keeping everything running. The proof-of-completion requirement (screenshots, command output) is something I'd love to steal for my own agent workflows. Right now my agents self-report task completion and there's no verification layer — which means I occasionally discover that a "done" task was never actually deployed to production. How do you handle the case where Nova's browser verification catches something Viktor's code review missed?