The first time I ran two coding agents side by side, I spent ten minutes copying text between panes, three minutes hunting for which one was waiting on a [y/N] prompt, and another minute force-killing the one that had quietly hung. The agents were fast and I was the slow part, which is roughly the opposite of how this is supposed to work. renga is what I built to get out of the way.
An AI-native terminal substrate for orchestrating multiple Claude Code and Codex agents in one TUI — mixed-client peer messaging, Claude-specific UX where it matters, single binary.
What renga is
renga is a terminal where the panes know they are AI agents. Splits, tabs, and focus work like any TUI multiplexer, but the substrate underneath treats each pane as a first-class agent endpoint: it detects which panes are running Claude Code, lets Codex panes participate in the same peer network, exposes pane-control tools (spawn_claude_pane, spawn_codex_pane, set_pane_identity, new_tab, …), and keeps peer scope authoritative at the renga-tab level. Claude peers receive interrupt-style channel pushes; Codex peers get pane-local nudges from renga and read the actual message body with check_messages. Each pane also carries an optional role label, used for filtering and display, but message routing itself is by id.
What was actually wrong
The obvious framing is that I needed a better terminal multiplexer, but that isn't quite right. tmux and WezTerm are excellent multiplexers. What they don't give you (and shouldn't, since it isn't their job) is a way for the agents inside the panes to operate the workspace.
When agents can't operate the workspace, the human has to. You become the message broker between Claude Code and Codex. You open the new pane. You read the prompt the worker is stuck on. You type y. You close the session when it's done. None of that is hard, but all of it scales linearly with how many agents you're running, which defeats the point of running them in parallel.
Adding an MCP messaging layer doesn't fix this on its own either. Even with peer messaging, if the only thing agents can do to each other is exchange text, the human is still the one spawning workers, watching for blocked prompts, and cleaning up panes when work finishes. You've moved the bottleneck, not removed it.
So renga is a terminal multiplexer with one specific addition: a built-in MCP server that exposes the workspace itself (panes, lifecycle, prompt state, peer discovery) as tools the agents can call. The point isn't to give agents an organization. It's to give them the substrate they need to run their own.
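To make "the workspace as tools" concrete, here is the rough shape of such a surface as a tiny dispatcher. The tool names are the ones this article lists; the enum, the argument shapes, and the parser are invented for illustration and are not renga's actual API.

```rust
// Hypothetical sketch of a pane-control tool surface. Tool names match the
// article; everything else (ToolCall, parse_tool, the single string arg) is
// an assumption made for illustration.
#[derive(Debug, PartialEq)]
enum ToolCall {
    SpawnClaudePane { cwd: String }, // create a pane running Claude Code
    NewTab,                          // open a fresh tab (a fresh peer scope)
}

// Map an incoming MCP tool name to a workspace operation.
fn parse_tool(name: &str, arg: &str) -> Option<ToolCall> {
    match name {
        "spawn_claude_pane" => Some(ToolCall::SpawnClaudePane { cwd: arg.to_string() }),
        "new_tab" => Some(ToolCall::NewTab),
        _ => None, // unknown tools are rejected, not guessed at
    }
}
```

The point of the sketch is only that every workspace verb an agent needs lands behind one dispatch point, so the agent never has to bridge between a messaging tool and a separate pane-control tool.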
What I tried before going in this direction
I didn't start by building a TUI in Rust. That would have been an absurd opening move. I started with smaller pieces and let them tell me what was missing.
First I patched an existing peer-messaging MCP. I didn't like the unauthenticated localhost transport, so I wrote happy-ryo/claude-peers-mcp on top of it. That gave Claude Code instances a way to talk to each other, which was the fun part. It also made it obvious that messaging alone wasn't enough: the agents could chat, but they couldn't act on the workspace.
So I wrote a WezTerm plugin that let Claude Code spawn and manipulate panes. That worked, and it kept working, but the seams between "messaging lives over here, pane control lives over there, and prompt handling lives nowhere" started to dominate. Every workflow needed a different piece of glue.
The lesson I took from those two experiments was that if you want agents to coordinate, messaging and pane lifecycle and prompt handling all have to sit behind one surface. Splitting them across separate tools means the agent (and the human) has to bridge the gaps every time, which is most of the work.
renga itself started as a fork of Shin-sibainu/ccmux, which gave me a working Rust + ratatui + portable-pty multiplexer to build the MCP surface, peer-messaging design, and IME overlay on top of, rather than starting from an empty Cargo.toml. It has since diverged enough that the two projects share little day-to-day code, but the seed mattered and is credited here.
The MCP surface
There are two clusters of tools.
The peer-messaging side is deliberately small: list_peers, send_message, check_messages. Discovery is scoped to the current renga tab rather than to cwd, git root, or PID heuristics. I tried a few of those alternatives and they all had the same failure mode, where the scope the human sees (the visible workspace) drifts out of sync with the scope the agent discovers. Tab-scoped discovery sidesteps that because the tab is the workspace: if two agents are sitting in it together, they can talk, and if they aren't, they can't.
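Tab-scoped discovery is simple enough to sketch in a few lines. This is not renga's registry, just a minimal model of the rule: a pane discovers exactly the peers that share its tab, nothing else.

```rust
// Minimal model of tab-scoped peer discovery. Peer and Registry are
// invented for illustration; only the scoping rule comes from the article.
struct Peer { id: u32, tab: u32 }

struct Registry { peers: Vec<Peer> }

impl Registry {
    // list_peers: return the ids of every other peer in the caller's tab.
    fn list_peers(&self, caller: u32) -> Vec<u32> {
        let my_tab = match self.peers.iter().find(|p| p.id == caller) {
            Some(p) => p.tab,
            None => return Vec::new(), // unknown caller sees nothing
        };
        self.peers
            .iter()
            .filter(|p| p.tab == my_tab && p.id != caller)
            .map(|p| p.id)
            .collect()
    }
}
```

Because the tab is both the visible workspace and the discovery scope, there is no second source of truth (cwd, git root, PID tree) that can drift away from what the human sees.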
The pane-orchestration side is bigger because there's more to do: list_panes, spawn_pane, spawn_claude_pane, spawn_codex_pane, close_pane, focus_pane, new_tab, inspect_pane, send_keys, set_pane_identity, poll_events. Two of those are worth zooming in on.
inspect_pane reads the visible state of another pane. This sounds boring until you realize it's how one agent figures out why another agent has gone quiet. Workers don't always finish or fail cleanly; they sit on a [y/N], on a "press enter to continue," on Claude Code's tool-permission prompt. Without inspect_pane, peer messaging is a one-way street: you can ask a worker to do something, but you can't tell whether it ever started.
send_keys is the other half. Once you can see that a worker is stuck on a prompt, you need to be able to answer it. Together, those two tools let one agent babysit another end to end.
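The babysit pattern reduces to "read the sibling's screen, recognize a blocked prompt, answer it." A minimal sketch, with invented types and deliberately naive prompt heuristics (real prompt detection would need to be more careful):

```rust
// Sketch of the inspect_pane + send_keys supervision pattern. PaneView and
// the string heuristics are assumptions for illustration, not renga's code.
struct PaneView { last_lines: Vec<String> }

// Look at the last few visible lines for a known blocking prompt.
fn blocked_on_prompt(view: &PaneView) -> bool {
    view.last_lines.iter().rev().take(3).any(|line| {
        line.contains("[y/N]") || line.contains("press enter to continue")
    })
}

// What a supervising agent would pass to send_keys, if anything.
fn answer_for(view: &PaneView) -> Option<&'static str> {
    if blocked_on_prompt(view) { Some("y\r") } else { None }
}
```

One agent polling this loop against a worker pane is exactly the "babysit end to end" behavior described above: see the stall, see the prompt, answer it, keep going.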
Claude Code and Codex don't get messages the same way
Claude Code has a channel mechanism that lets renga push peer messages directly into its MCP session. When a peer sends a message, the receiving Claude sees it as an inbound channel notification and reacts on the spot.
Codex doesn't expose anything equivalent, so renga delivers Codex peer messages by driving the pane. It drafts a short notification into the Codex prompt buffer, waits about a second (CODEX_PEER_NUDGE_SUBMIT_DELAY), then submits it. The actual peer message body lives in the MCP inbox, and Codex has to call check_messages to read it; the pane nudge is just a doorbell.
The one-second delay isn't arbitrary. Without it, Codex's own input handling and renga's key injection race each other and you get half-typed prompts. With it, the result looks, from the user's perspective, like Codex noticed the message on its own.
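The delivery sequence can be sketched as data: draft the doorbell text, wait, submit. CODEX_PEER_NUDGE_SUBMIT_DELAY is the knob named above; the helper below, its millisecond units, and the exact doorbell wording are assumptions for illustration.

```rust
use std::time::Duration;

// Read the nudge delay, defaulting to ~1s. The env var name comes from the
// article; treating its value as milliseconds is an assumption.
fn nudge_submit_delay() -> Duration {
    std::env::var("CODEX_PEER_NUDGE_SUBMIT_DELAY")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .map(Duration::from_millis)
        .unwrap_or(Duration::from_millis(1000))
}

// The two key-injection steps renga performs, with the delay between them.
// The doorbell text is hypothetical; the real message body stays in the
// MCP inbox until Codex calls check_messages.
fn nudge_steps(sender_pane: u32) -> Vec<String> {
    vec![
        // 1. type the doorbell into Codex's prompt buffer...
        format!("peer message from pane {sender_pane}: call check_messages"),
        // 2. ...sleep nudge_submit_delay() so Codex's input handling
        //    settles, then 3. submit.
        "\r".to_string(),
    ]
}
```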
I would not pick this asymmetry if I were designing both clients from scratch. But I'm not, and pretending the two clients are the same would just push the asymmetry into the agents' prompts instead of hiding it in the host. The agents get to see one consistent peer-messaging surface; the host eats the difference.
The bug I didn't see coming
I expected the hard problems in renga to be PTY plumbing, layout math, or the MCP server. Those were fine. The thing that nearly stopped the project was input handling, specifically the interaction between aggressive TUI redraws and OS-level IME composition. If you only type ASCII you may never have hit this; if you compose Japanese, Chinese, or Korean text, you've probably suffered through some version of it. The OS IME candidate window anchors itself to the terminal caret, and when Claude Code redraws several times per second while it's thinking, the caret moves out from under the IME. The candidate popup jumps, vanishes, reopens in the wrong place, and composing a sentence stops being typing and turns into a fight.
The fix in renga is a dedicated composition overlay (src/input/overlay.rs): a centered multi-line box that takes input ownership while you compose. Two details took longer than they should have. The cursor inside the overlay has to be indexed by character, not by byte, because multibyte text plus byte-indexed backspace silently corrupts the buffer. And while the overlay is open, the panes underneath have to stop redrawing on every PTY tick so the IME has a stable caret to anchor to. When you commit (Alt+Enter, the only chord I found that works across every tier-1 terminal), the buffer ships to the target pane in one shot and normal rendering resumes.
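The char-versus-byte point is worth seeing in miniature. This is not the overlay code in src/input/overlay.rs, just a sketch of the rule it follows: backspace removes the character before a char-indexed cursor, which stays correct for multibyte text where byte arithmetic would split a code point.

```rust
// Sketch: backspace against a char-indexed cursor. Indexing the same text
// by byte and slicing at the cursor would panic or corrupt multibyte input.
fn backspace_at_char(buf: &str, cursor_chars: usize) -> (String, usize) {
    if cursor_chars == 0 {
        return (buf.to_string(), 0); // nothing to the left of the cursor
    }
    // Rebuild the buffer, skipping the char just before the cursor.
    let out: String = buf
        .chars()
        .enumerate()
        .filter(|(i, _)| *i != cursor_chars - 1)
        .map(|(_, c)| c)
        .collect();
    (out, cursor_chars - 1)
}
```

With "日本語" and the cursor after the third character, this yields "日本" and cursor 2; a byte-indexed version would have tried to cut inside a three-byte UTF-8 sequence.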
The lesson generalizes past IME. When you build a TUI for users whose input environment differs from your own, the assumptions baked into your render loop will collide with input layers you didn't know existed, and you should budget for it.
What this looks like in practice
Suppose I'm in a Claude Code session and I want a second worker to investigate something in parallel. I tell the lead Claude to spawn a Codex pane and hand it the task. Lead calls spawn_codex_pane, then send_message with the brief. A minute later the worker stops responding; lead calls inspect_pane, sees Codex is sitting on a permission prompt, calls send_keys to approve, and waits for the result to come back through check_messages. When it's done, lead calls close_pane. I never touched the keyboard.
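Flattened into the order of tool calls the lead agent issues, that exchange looks like the list below. The tool names are the ones from this article; reducing the workflow to a plain sequence (no arguments, no branching on the worker's state) is a simplification for illustration.

```rust
// The lead agent's side of the exchange above, as an ordered tool-call
// sequence. Real usage would branch on inspect_pane's result.
fn lead_workflow() -> Vec<&'static str> {
    vec![
        "spawn_codex_pane", // create the worker
        "send_message",     // hand it the brief
        "inspect_pane",     // worker went quiet: look at its screen
        "send_keys",        // answer the permission prompt
        "check_messages",   // collect the result
        "close_pane",       // retire the worker
    ]
}
```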
Without renga, every one of those steps is me. The agents got faster, my role got smaller, and that's the trade I was after.
The deeper shift, though, is that I stopped being the org chart. With side-by-side panes and copy-paste, I am the only thing connecting the agents. With peer messaging alone, I'm the one deciding who exists and what they should do. Once pane orchestration, prompt handling, and peer messaging all sit behind one MCP surface, the agents start making those calls themselves: who to spawn, who to talk to, who to wait on, who to retire. renga doesn't ship a fixed organization of agents. It ships the operating substrate that lets the agents organize themselves, which turns out to be a more interesting product than "a multiplexer with a chat channel."
Try it
```shell
npm install -g @suisya-systems/renga
renga mcp install --client claude
renga mcp install --client codex
renga mcp install --client codex --codex-auto-approve-peer-tools
```
Start renga, launch Codex normally in a pane, and press Alt+P to inject the peer-enabled Claude Code launch command. From there you can ask either agent to split panes, spawn workers, inspect siblings, and coordinate without you in the loop.
