The OpenClaw ecosystem continues to expand the capabilities of autonomous AI
agents, and one of its most sophisticated contributions is the
roundtable‑adaptive skill. Hosted in the
skills/skills/jimmyclanker/roundtable-adaptive/SKILL.md file on GitHub, this
skill transforms a simple prompt into a collaborative, multi‑model reasoning
session that mimics a human roundtable discussion. Below we break down what
the skill does, how it works under the hood, and why it matters for developers
and researchers seeking higher‑quality AI output.
At its core, the roundtable‑adaptive skill is an orchestrator that launches up
to four AI agents—referred to as panelists—to engage in a structured debate.
The orchestrator itself never argues a position; its sole responsibility is to
coordinate the panel, manage workflow design, and ensure that the final
synthesis reflects a consensus derived from cross‑critique and formal scoring.
The skill is configurable, allowing users to specify which models participate,
how many debate rounds occur, and whether additional validation steps are
added.
One of the defining features of this skill is its reliance on a
meta‑panel. Before any debate begins, the orchestrator spawns four premium
meta‑analysts (by default drawn from the panels.json configuration under
meta.models). These agents receive the user’s prompt together with a brief
web‑search grounding block and are tasked with designing the optimal workflow
for the task at hand. The meta‑panel can recommend one of three high‑level
structures: a pure parallel debate, a sequential pipeline, or a hybrid
approach. The orchestrator then synthesizes the four recommendations, opting
for a majority vote; in case of a tie, it prefers the hybrid model because of
its flexibility.
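The majority-vote-with-hybrid-tie-break logic can be sketched as follows; the function name and vote labels are illustrative, not the skill's actual identifiers:

```python
from collections import Counter

def choose_workflow(recommendations: list[str]) -> str:
    """Pick a workflow from the meta-panel's recommendations by majority
    vote. Ties are broken in favour of "hybrid", mirroring the skill's
    stated preference for its flexibility."""
    counts = Counter(recommendations)
    best_count = counts.most_common(1)[0][1]
    leaders = [name for name, n in counts.items() if n == best_count]
    if len(leaders) == 1:
        return leaders[0]
    # Tie between two or more structures: prefer hybrid if it is a leader.
    return "hybrid" if "hybrid" in leaders else leaders[0]
```

With four analysts, a 2–2 split between "parallel" and "hybrid" therefore resolves to "hybrid", while a 3–1 vote simply wins outright.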
The workflow design phase is skipped only when the user explicitly supplies a
custom panel via the --panel flag or activates the --quick mode, which
reduces the process to a single debate round and bypasses the meta‑panel
altogether. This makes the skill adaptable to both exploratory deep dives and
rapid, low-cost queries.
Before any agent sees the prompt, the skill performs a web search
grounding step. A search query is issued to retrieve up to five recent
results, which are then summarized into a CURRENT_CONTEXT block of no more
than 250 words. This block contains key facts, recent developments, relevant
data points, and the date of the search. If the search fails or times out, the
skill continues with a note indicating that no real‑time data is available,
allowing the discussion to proceed purely on model knowledge. The
CURRENT_CONTEXT block is injected into the meta‑panel prompts and into every
Round 1 agent prompt, ensuring that all participants argue from the same
updated baseline.
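A minimal sketch of the grounding-block assembly, assuming a simple word-count truncation; the helper name and exact block formatting are assumptions, only the 250-word cap, the five-result limit, and the fallback note come from the skill description:

```python
from datetime import date

def build_current_context(summaries: list[str], max_words: int = 250) -> str:
    """Assemble a CURRENT_CONTEXT block from up to five search-result
    summaries, truncated to the 250-word budget. If the search failed
    or timed out, emit the fallback note so the debate can proceed on
    model knowledge alone."""
    if not summaries:
        return "CURRENT_CONTEXT: no real-time data available."
    body = " ".join(summaries[:5])
    words = body.split()
    if len(words) > max_words:
        body = " ".join(words[:max_words]) + " …"
    return f"CURRENT_CONTEXT (searched {date.today().isoformat()}):\n{body}"
```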
Once the workflow is established, the orchestrator creates the actual debate
panel. Depending on the chosen mode—--debate, --build, --redteam,
--vote, or the auto‑detected default—the skill selects appropriate agents.
Panelists are launched as persistent thread sessions (mode="session",
thread=true) so they can stay alive in a Discord thread for follow‑up
questions. In contrast, the meta‑panel analysts and the final synthesis agent
are one‑shot (mode="run") entities that complete their task and terminate.
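The two launch modes can be captured in a small helper; the `mode` and `thread` parameter names come from the description above, while the helper itself is hypothetical:

```python
def spawn_spec(role: str) -> dict:
    """Return launch parameters per agent role. Panelists run as
    persistent thread sessions so they stay available for follow-up
    questions; meta-analysts and the synthesis agent are one-shot runs
    that complete their task and terminate."""
    if role == "panelist":
        return {"mode": "session", "thread": True}   # lives on in a Discord thread
    return {"mode": "run", "thread": False}          # completes and terminates
```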
The debate itself proceeds in up to two rounds. In each round, each panelist
receives the prompt, the CURRENT_CONTEXT block, and a summary of the
previous round’s arguments (if applicable). They are instructed to critique
each other's positions, propose improvements, and score contributions based on
criteria such as relevance, novelty, and logical soundness. After the final
round, a synthesis agent aggregates the panelists’ outputs, applies the formal
consensus scoring mechanism, and produces a final answer that is posted to the
configured output channel.
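One plausible way to aggregate the peer scores into a consensus ranking is a simple average over criteria and rounds; the skill's exact formula is not documented here, so this is a sketch of the idea rather than its implementation:

```python
def consensus_scores(round_scores: list[dict[str, dict[str, float]]]) -> dict[str, float]:
    """Average each panelist's per-criterion peer scores (relevance,
    novelty, logical soundness, ...) across all debate rounds into one
    consensus score per panelist."""
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for scores in round_scores:                 # one dict per debate round
        for panelist, criteria in scores.items():
            totals[panelist] = totals.get(panelist, 0.0) + sum(criteria.values())
            counts[panelist] = counts.get(panelist, 0) + len(criteria)
    return {p: totals[p] / counts[p] for p in totals}
```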
Output routing is flexible. Users can define a dedicated output channel in
panels.json under the output object, specifying a Discord channel ID and
optionally enabling thread creation (useThreads: true). If no output channel
is set, the results are posted back to the channel where the roundtable
command was invoked. Additionally, the skill supports auto‑triggering:
administrators can designate a Discord‑only roundtable channel in AGENTS.md,
causing any message in that channel to be automatically treated as a
roundtable topic without needing the explicit command prefix.
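The routing rule reduces to a small fallback: use the configured output channel if one is set, otherwise reply where the command was invoked. A sketch, where `channelId` is an assumed field name (the source only says the config specifies "a Discord channel ID") and `useThreads` comes from the description above:

```python
def resolve_output_channel(config: dict, invoking_channel: str) -> tuple[str, bool]:
    """Pick the destination for roundtable results from the panels.json
    `output` object, falling back to the invoking channel when no
    dedicated output channel is configured."""
    out = config.get("output", {})
    return out.get("channelId", invoking_channel), out.get("useThreads", False)
```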
The skill includes a rich set of trigger patterns to fine‑tune behavior:
- `roundtable [prompt]` – auto-detects mode, runs the full flow.
- `roundtable --debate [prompt]` – forces parallel debate mode.
- `roundtable --build [prompt]` – forces a build/coding-oriented workflow.
- `roundtable --redteam [prompt]` – activates adversarial red-team analysis.
- `roundtable --vote [prompt]` – forces a decision-making/vote workflow.
- `roundtable --quick [prompt]` – skips the meta-panel and uses the default panel for a single round (half cost).
- `roundtable --panel model1,model2,model3 [prompt]` – manual panel override; bypasses the meta-panel.
- `roundtable --validate [prompt]` – adds a third-round validation agent that reviews the synthesis.
- `roundtable --no-search [prompt]` – omits the web-search grounding step for purely theoretical topics.
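For illustration, these flags map naturally onto a standard argument parser; the real skill matches trigger patterns in chat messages, so this `argparse` sketch is purely hypothetical:

```python
import argparse

def parse_roundtable(argv: list[str]) -> argparse.Namespace:
    """Parse the roundtable trigger flags listed above from a token
    list. `--no-search` is exposed as `args.no_search`; `--panel`
    accepts a comma-separated model list."""
    p = argparse.ArgumentParser(prog="roundtable")
    for flag in ("--debate", "--build", "--redteam", "--vote",
                 "--quick", "--validate", "--no-search"):
        p.add_argument(flag, action="store_true")
    p.add_argument("--panel", type=lambda s: s.split(","), default=None)
    p.add_argument("prompt", nargs="+")
    return p.parse_args(argv)
```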
Cost transparency is another hallmark of the skill. The core Claude Opus
panelist is free when using OAuth or an API key. Adding the optional GPT‑5.3
Codex panelist remains free via OAuth. The Grok 4 and Gemini 3.1 Pro models
are accessed through the Blockrun proxy, incurring modest fees—approximately
$0.05 for Gemini and $0.08 for Grok per full run. A full panel therefore costs
roughly $0.13–$0.50 per execution, while a Claude‑only degraded mode remains
entirely free. The --quick flag halves the cost by limiting the process to a
single debate round.
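The arithmetic above can be expressed as a quick estimator, using the approximate fees quoted ($0.08 for Grok, $0.05 for Gemini) and treating the OAuth-backed Claude and GPT slots as free; the function itself is illustrative:

```python
def estimate_cost(use_grok: bool = True, use_gemini: bool = True,
                  quick: bool = False) -> float:
    """Rough per-run cost in USD. Only the Blockrun-proxied models
    incur fees; --quick halves the total by running a single round."""
    cost = (0.08 if use_grok else 0.0) + (0.05 if use_gemini else 0.0)
    return cost / 2 if quick else cost
```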
Setup is straightforward. For a minimal, free‑tier deployment, users need only
configure the Anthropic provider in openclaw.json (either via API key or
OAuth). Optionally, adding the OpenAI provider enables the GPT‑5.3 Codex slot.
To unlock the full panel with Grok and Gemini, one must install the Blockrun
plugin:
```shell
openclaw plugins install @blockrun/clawrouter
openclaw gateway restart
```
After installation, the Blockrun wallet must be funded with USDC on the Base
network (a modest $5‑10 is sufficient). The wallet address is displayed during
the plugin install process. Once funded, the skill can draw on the optional
models as needed.
All roundtable results are persisted to the local filesystem for audit and
reproducibility. The path follows the pattern
`{workspace}/memory/roundtables/YYYY-MM-DD-slug.json`, where `slug` is a
URL-friendly version of the topic and the date reflects
when the roundtable was executed. This logging enables users to revisit past
discussions, trace the evolution of opinions, and reuse prior
CURRENT_CONTEXT blocks to avoid redundant web searches within a session.
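A sketch of how such a path might be derived; the slugification rule (lowercase, non-alphanumeric runs collapsed to hyphens) is an assumption, since the skill only specifies "a URL-friendly version of the topic":

```python
import re
from datetime import date
from pathlib import Path

def roundtable_log_path(workspace: str, topic: str) -> Path:
    """Build the persistence path
    {workspace}/memory/roundtables/YYYY-MM-DD-slug.json
    for a roundtable executed today."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    name = f"{date.today().isoformat()}-{slug}.json"
    return Path(workspace) / "memory" / "roundtables" / name
```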
In practical terms, the roundtable‑adaptive skill shines when tackling
complex, multifaceted problems that benefit from divergent viewpoints.
Examples include:
- Strategic business analysis where market trends, competitor moves, and regulatory risks must be weighed.
- Technical architecture decisions that require trade‑off evaluations between performance, scalability, and security.
- Creative brainstorming sessions where novelty and feasibility need to be balanced.
- Policy or ethical deliberations that demand scrutiny from multiple philosophical standpoints.
By forcing the AI agents to articulate, critique, and refine their positions,
the skill mitigates common pitfalls of single‑model outputs such as
overconfidence, blind spots, or superficial reasoning.
The design also emphasizes session persistence. Because panel agents
remain active in a Discord thread, users can pose follow‑up questions, request
clarifications, or dive deeper into specific arguments without re‑invoking the
entire orchestrator. This turns a one‑off query into an ongoing collaborative
workspace, mirroring the way human experts might continue a discussion over
days or weeks.
Finally, the skill’s reliance on configurable files (panels.json,
openclaw.json, AGENTS.md) makes it highly adaptable to different
environments. Teams can tailor the model roster, adjust cost controls, define
default output channels, and even customize the meta‑panel prompts to align
with organizational standards or domain‑specific terminology.
In summary, the OpenClaw roundtable‑adaptive skill is a powerful orchestration
layer that transforms a simple prompt into a structured, multi‑model debate.
By combining web‑grounded context, a meta‑panel‑driven workflow design,
persistent debate agents, and a formal consensus synthesis, it delivers
higher‑quality, more reliable AI output than a solitary model could achieve.
Whether you are seeking rapid insights with the --quick flag or undertaking
a deep, cost‑aware analysis with the full panel, the skill provides the
flexibility and transparency needed to harness the collective intelligence of
today’s leading AI systems.
The skill can be found at skills/skills/jimmyclanker/roundtable-adaptive/SKILL.md in the OpenClaw skills repository on GitHub.