Chris King

I built LLM Council: frontier models debating in an immersive 3D chamber

Most AI apps still feel like chat in a box.

I wanted something different: a system where multiple frontier models could debate each other in real time, interrupt, build on prior points, and do it inside an interface that actually feels consequential.

So I built LLM Council.

What it is

LLM Council is a multi-agent debate app where AI participants engage in rich, multi-turn discussion inside an immersive WebGL-powered 3D environment.

Think:

  • boardroom energy
  • senate floor drama
  • structured debate instead of isolated completions
  • voice, atmosphere, and presence instead of plain text walls

The goal was to make AI interaction feel less like prompting a tool and more like watching intelligence deliberate.

Core features

1. Frontier models for rich debates

The app uses frontier models as council members, giving each participant the ability to contribute nuanced reasoning, rebuttals, and follow-up arguments.

Instead of one-shot answers, you get:

  • evolving positions
  • disagreement
  • synthesis
  • sharper tradeoff analysis
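
To make that concrete, here's a minimal sketch of one debate round: every member is prompted with the full transcript so far, so each reply can rebut and build rather than answer in isolation. `buildPrompt`, `debateRound`, and `callModel` are illustrative names, not the app's actual code:

```typescript
// One debate round: each council member sees the whole transcript and is
// explicitly asked to engage with prior points. `callModel` is a stand-in
// for whatever LLM API backs the council.

interface Turn {
  speaker: string;
  text: string;
}

function buildPrompt(topic: string, transcript: Turn[], speaker: string): string {
  const history = transcript.map((t) => `${t.speaker}: ${t.text}`).join("\n");
  return [
    `Debate topic: ${topic}`,
    `Transcript so far:\n${history || "(none)"}`,
    `You are ${speaker}. Respond to the strongest prior point, then add your own.`,
  ].join("\n\n");
}

async function debateRound(
  topic: string,
  members: string[],
  transcript: Turn[],
  callModel: (speaker: string, prompt: string) => Promise<string>,
): Promise<Turn[]> {
  for (const speaker of members) {
    const reply = await callModel(speaker, buildPrompt(topic, transcript, speaker));
    transcript.push({ speaker, text: reply });
  }
  return transcript;
}
```

Because each prompt carries the running transcript, positions can evolve across rounds instead of resetting every turn.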

2. Multi-turn debates with interruption

This was a big part of the experience.

Real debates are not just turn-taking. They involve:

  • interruption
  • reactions
  • counterpoints
  • momentum shifts

So LLM Council supports multi-turn debate with interrupt mechanics, which makes the exchanges feel much more alive and much less robotic.
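
A minimal sketch of how an interrupt mechanic can work: while one member speaks, the others score each streamed chunk for an urge to interrupt, and crossing a threshold hands them the floor. This is a simplified illustration, not the app's implementation; in practice the scoring would itself be a model call rather than a heuristic:

```typescript
// Interrupt check: walk the speaker's output chunk by chunk and let each
// listener score the text so far. The first listener to cross the threshold
// cuts the speaker off at that chunk.

interface Interruption {
  interrupter: string;
  atChunk: number;
}

function checkInterrupt(
  chunks: string[],
  listeners: string[],
  urge: (listener: string, spokenSoFar: string) => number,
  threshold = 0.8,
): Interruption | null {
  let spoken = "";
  for (let i = 0; i < chunks.length; i++) {
    spoken += chunks[i];
    for (const listener of listeners) {
      if (urge(listener, spoken) >= threshold) {
        return { interrupter: listener, atChunk: i };
      }
    }
  }
  return null; // speaker finished uninterrupted
}
```

Running the check per streamed chunk, rather than per completed turn, is what lets an interruption land mid-sentence and shift the momentum of the exchange.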

3. Immersive 3D experience with WebGL

I didn’t want a flat UI.

The app uses WebGL to create immersive environments, including themes like:

  • boardroom
  • senate floor

That visual framing changes how the product feels. It turns AI output into a staged interaction with context, tension, and presence.
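
One simple way to structure those themes (illustrative, not the app's actual config) is as plain data that the WebGL layer reads when it builds the environment:

```typescript
// Each theme is a bundle of rendering parameters: camera, lighting, and how
// the council members are seated. Field names and values here are examples.

interface SceneTheme {
  name: string;
  cameraFov: number;
  ambientLight: number; // 0..1 intensity
  seatLayout: "long-table" | "semicircle";
}

const THEMES: Record<string, SceneTheme> = {
  boardroom: { name: "boardroom", cameraFov: 50, ambientLight: 0.4, seatLayout: "long-table" },
  senate: { name: "senate", cameraFov: 65, ambientLight: 0.6, seatLayout: "semicircle" },
};

function themeFor(key: string): SceneTheme {
  return THEMES[key] ?? THEMES.boardroom; // fall back to a default stage
}
```

Keeping themes as data means adding a new environment is a config change, not a renderer rewrite.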

4. AWS Polly for voice

To push the experience further, I added AWS Polly TTS.

That gives the council spoken delivery, which makes the debate easier to follow and significantly more engaging in demo format.
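
For flavor, here's a sketch of how a voice layer like this can be wired with the AWS SDK for JavaScript v3 Polly client. Each council member maps to a distinct Polly voice, and the request builder is kept pure so it can be exercised without AWS credentials. The voice IDs are real Polly voices, but the member-to-voice mapping is just an example, not the app's actual assignment:

```typescript
// Give each council member a distinct voice so listeners can track who is
// speaking. The mapping below is illustrative.

const VOICE_FOR_MEMBER: Record<string, string> = {
  GPT: "Matthew",
  Claude: "Joanna",
  Gemini: "Brian",
};

// Build the SynthesizeSpeech request parameters (pure, testable without AWS).
function speechParams(member: string, text: string) {
  return {
    OutputFormat: "mp3" as const,
    VoiceId: VOICE_FOR_MEMBER[member] ?? "Matthew", // default voice for unknown members
    Engine: "neural" as const,
    Text: text,
  };
}

// Actual synthesis would look like this (not run here; needs credentials):
// import { PollyClient, SynthesizeSpeechCommand } from "@aws-sdk/client-polly";
// const polly = new PollyClient({ region: "us-east-1" });
// const audio = await polly.send(new SynthesizeSpeechCommand(speechParams("Claude", turnText)));
```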

5. Built on Backboard.io

The whole app is built on Backboard.io, which made it possible to unify the AI stack behind one API.

That means one platform for:

  • model access
  • orchestration
  • memory

And, importantly, persistent memory.

For an app like this, memory matters a lot. Debate gets better when agents can track context, recall prior turns, and maintain continuity across an interaction.
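
A toy illustration of that continuity: keep a rolling window of recent turns verbatim and compress everything older into a summary. The summarizer here is a stub; this is the kind of bookkeeping a memory layer handles so the app doesn't have to:

```typescript
// Build an agent's context from the debate history: recent turns stay
// verbatim, older turns are collapsed into a summary so context stays
// bounded while continuity survives across the whole interaction.

interface MemTurn {
  speaker: string;
  text: string;
}

function buildContext(
  transcript: MemTurn[],
  windowSize: number,
  summarize: (older: MemTurn[]) => string,
): string {
  const recent = transcript.slice(-windowSize);
  const older = transcript.slice(0, Math.max(0, transcript.length - windowSize));
  const summary = older.length ? `Earlier discussion (summary): ${summarize(older)}\n` : "";
  const recentText = recent.map((t) => `${t.speaker}: ${t.text}`).join("\n");
  return summary + recentText;
}
```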

Why I built it

I think one of the most interesting directions in AI UX is moving from:

  • single-response systems to
  • interactive intelligence systems

Not just “ask model, get answer.”

But:

  • multiple minds
  • structured conflict
  • evolving context
  • cinematic presentation

LLM Council is a step in that direction.

What I learned

A few things became obvious while building this:

  • presentation matters more than people think

    The same intelligence feels dramatically different in an immersive environment.

  • multi-agent UX needs rhythm

    Timing, interruption, and pacing matter as much as the raw model output.

  • memory is a force multiplier

    Better continuity creates better debate.

  • voice changes everything

    Once the agents can speak, the interaction becomes much more legible and compelling.

Stack

  • Backboard.io
  • Frontier LLMs
  • WebGL
  • AWS Polly
  • Multi-agent orchestration
  • Persistent memory

Closing

LLM Council started as an experiment in making AI debate feel more alive.

It ended up becoming a strong example of what happens when you combine:

  • frontier models
  • immersive interfaces
  • voice
  • memory
  • orchestration

If you’re exploring multi-agent systems, AI-native UX, or more cinematic product experiences, I think this design space is going to get very interesting very fast.
