All I wanted was to point a swarm of Claude Code agents at a set of GitHub issues and let them work in parallel. iloom does exactly that: spin up isolated git worktrees, assign an agent to each issue, let them run concurrently, come back to a set of PRs. I ran il start on my Linux dev server and hit three errors in a row.
First, it couldn't find the code CLI command (I use Code Server, which doesn't ship one). Passed --no-code. Second, it detected a non-interactive environment (I was in a Claude Code terminal session). Passed --epic explicitly. Third:
```
Terminal window launching not yet supported on linux.
```
No flag. No fallback. No workaround. The core workflow, the entire reason the tool exists, was hardcoded for macOS. Every invocation mode failed: single issue, epic, one-shot. The worktree creation succeeded each time, then the process died trying to open a Terminal.app window on a headless server, leaving orphaned worktrees behind.
I filed issue #795 and started reading the codebase. By the end of the morning I'd filed four items across two repos and two organizations, rewritten the terminal launching system, and discovered a Linux kernel limit that most developers don't know exists. I never got to test the agent swarm.
## What Is iloom-cli?
iloom-cli is a TypeScript CLI (and VS Code extension) by iloom-ai that orchestrates structured AI-assisted development. You point it at GitHub issues, and it spins up isolated git worktrees with dedicated database branches and port assignments (3000 + issue_number). Each worktree gets its own Claude Code agent working independently. The concept is genuinely clever: parallel AI development without context contamination between tasks.
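The port scheme is simple enough to sketch. This is an illustrative helper based on the formula above, not iloom's actual code:

```typescript
// Deterministic port assignment: each issue's worktree gets its own port,
// so concurrent agents can never collide on a dev server port.
function portForIssue(issueNumber: number): number {
  return 3000 + issueNumber; // e.g. issue #795 → port 3795
}
```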
About 79 stars at the time I found it. For a tool solving a real workflow problem with 4,500+ tests, that's low.
## The Snapshot
| Project | iloom-cli |
|---|---|
| Stars | ~79 at time of writing |
| Maintainer | Small team (iloom-ai), active collaborator acreeger |
| Code health | Heavy test coverage, clean separation of concerns, mature patterns |
| Docs | Detailed CLAUDE.md, command docs, but Linux/WSL support undocumented |
| Contributor UX | Excellent. Fast reviews, structured feedback, maintainer gives credit |
| Worth using | Yes on macOS. On Linux, now yes (with our merged PR) |
## Under the Hood
The codebase is substantial TypeScript with clear architectural intent. Business logic lives in lib/ (WorkspaceManager, GitWorktreeManager, DatabaseManager), CLI commands are separate classes in commands/, and utilities sit in utils/. Strategy pattern shows up in the database provider abstraction (Neon, Supabase, PlanetScale all implement the same interface) and, after our PR, in the terminal backends. Dependency injection runs through the core classes.
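The provider abstraction works roughly like this. A hypothetical illustration of the strategy pattern described above; the real interface and method names in iloom-cli may differ:

```typescript
// All database providers implement one interface, so WorkspaceManager
// never needs to know which vendor backs a given worktree.
interface DatabaseProvider {
  createBranch(name: string): Promise<string>; // returns a connection string
  deleteBranch(name: string): Promise<void>;
}

class NeonProvider implements DatabaseProvider {
  async createBranch(name: string): Promise<string> {
    return `postgres://neon/${name}`; // placeholder, not a real API call
  }
  async deleteBranch(): Promise<void> {}
}

// Callers depend only on the interface; Supabase and PlanetScale
// providers slot in without touching the orchestration layer.
async function provision(p: DatabaseProvider, issue: number): Promise<string> {
  return p.createBranch(`issue-${issue}`);
}
```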
The test suite is the standout: 4,500+ tests across 135 files, with a behavior-focused testing philosophy documented in their CLAUDE.md. Vitest handles the runner with global mock cleanup configured project-wide. Pre-commit hooks run lint, compile, test, and build. This is a codebase that takes quality seriously.
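Project-wide mock cleanup in Vitest is a one-line config choice. A minimal sketch of what that might look like; iloom's actual config may differ:

```typescript
// vitest.config.ts — restoreMocks resets every vi.fn() and spy between
// tests automatically, so no test can leak mock state into another.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    globals: true,
    restoreMocks: true,
  },
});
```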
What they got right: the worktree isolation model is the real product insight. Each GitHub issue gets its own worktree, its own database branch, its own port. Agents can't step on each other. Port conflicts can't happen. The orchestration layer (WorkspaceManager) ties these pieces together cleanly.
What was rough: the terminal launching code. Before our PR, terminal.ts was a ~400-line monolith with macOS hardcoded throughout. Terminal.app support, iTerm2 support via AppleScript, nothing else. On Linux it threw an error and quit. On WSL it threw an error and quit. In a headless SSH session it threw an error and quit. This is a CLI tool designed for autonomous agent workflows, and it required a GUI window on a Mac.
The other surprise was the agent loading system. iloom passes agent definitions to Claude Code as CLI arguments. All 8-9 agent templates, totaling 215KB of JSON, get crammed into a single --agents argument regardless of which agents the workflow actually needs. This works on macOS, where the effective per-argument limit is stack_size / 4 (around 16MB). On Linux, it crashes immediately.
Most developers know about ARG_MAX, the total argument space limit (typically 2MB on Linux). What they don't know is that the kernel also enforces MAX_ARG_STRLEN, a separate per-argument cap of PAGE_SIZE * 32 (128KB on most systems). The 215KB agents blob isn't even close to hitting the total limit. It's dead on arrival because a single argument can't exceed 128KB, full stop. On macOS nobody noticed because the effective limit is two orders of magnitude higher. The E2BIG error shows up after the terminal backends work perfectly, which makes it especially confusing to debug.
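The arithmetic is worth spelling out, using the typical Linux defaults cited above (PAGE_SIZE is 4096 on most x86-64 systems; yours may differ):

```typescript
// The two Linux exec() limits in play, with kernel-default values.
const PAGE_SIZE = 4096;
const MAX_ARG_STRLEN = PAGE_SIZE * 32;   // per-argument cap: 131072 bytes (128 KiB)
const ARG_MAX = 2 * 1024 * 1024;         // typical total argv+envp budget: ~2 MiB

const agentsBlob = 215 * 1024;           // the ~215KB --agents JSON payload

console.log(agentsBlob < ARG_MAX);        // true — fits the total budget
console.log(agentsBlob > MAX_ARG_STRLEN); // true — blows the per-argument cap,
                                          // so execve() fails with E2BIG
```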
## The Contribution
I refactored the terminal launching system from the ~400-line monolith into pluggable backends using a strategy pattern:
| Platform | Backend | Terminal(s) |
|---|---|---|
| macOS | DarwinBackend | Terminal.app, iTerm2 |
| Linux (GUI) | LinuxBackend | gnome-terminal, konsole, xterm |
| Linux (headless) | TmuxBackend | tmux sessions |
| WSL | WSLBackend | Windows Terminal via wt.exe |
A factory in index.ts detects the environment and picks the right backend. On Linux, it tries GUI terminals first (checking DISPLAY / WAYLAND_DISPLAY to avoid SIGABRT when a terminal emulator is installed but no display server is running), then falls back to tmux automatically. Headless Linux (SSH, Docker, CI, Code Server) is arguably the most common deployment target for a tool that orchestrates autonomous agents. That's the environment where tmux becomes essential, and it was completely unsupported. The public API stayed unchanged: openTerminalWindow and openMultipleTerminalWindows work identically regardless of platform.
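The detection logic can be sketched like this. The backend class names match the table above, but the function name and detection details here are assumptions, not the actual iloom-cli code:

```typescript
// Strategy selection: pick a terminal backend from the environment.
interface TerminalBackend {
  openWindow(command: string, cwd: string): Promise<void>;
}

function selectBackend(): string {
  // WSL advertises itself via the WSL_DISTRO_NAME environment variable
  if (process.env.WSL_DISTRO_NAME) return "WSLBackend";
  if (process.platform === "darwin") return "DarwinBackend";
  if (process.platform === "linux") {
    // Only try GUI terminals when a display server is actually running;
    // otherwise gnome-terminal and friends can die with SIGABRT.
    const hasDisplay = !!(process.env.DISPLAY || process.env.WAYLAND_DISPLAY);
    return hasDisplay ? "LinuxBackend" : "TmuxBackend";
  }
  return "TmuxBackend"; // headless fallback: SSH, Docker, CI, Code Server
}
```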
The PR went through two thorough review rounds from acreeger. The first caught three deduplication issues (duplicated command-building logic across backends, duplicate iTerm2 detection, overlapping platform detection functions). All fair. The second round caught two real bugs: tmux sessions created by openSingle() lacked the iloom- prefix, making them invisible to findIloomSession(), and the WSL backend didn't append ; exec bash, so terminal tabs closed the instant the command finished. Good catches both times.
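Both fixes are small once spotted. A hedged sketch of the two corrections; the helper names here are illustrative, not the actual iloom-cli API:

```typescript
// Fix 1: sessions need the "iloom-" prefix, or findIloomSession()'s
// name filter never sees them.
function tmuxSessionName(issueNumber: number): string {
  return `iloom-${issueNumber}`;
}

// Fix 2: without "; exec bash", wt.exe closes the tab the instant the
// command finishes; exec-ing an interactive shell keeps it open.
function wslTabCommand(command: string): string {
  return `${command}; exec bash`;
}
```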
After I addressed everything, the maintainer approved and merged PR #796 the same day. Final terminal.ts: ~105 lines, down from ~400. Four platform backends, 36 new tests, zero regressions against the existing 4,500.
I also filed issue #797 for the E2BIG problem (the 215KB agents JSON exceeding Linux's per-argument kernel limit) and a feature request against Claude Code for an --agents-file flag. The maintainer is exploring rendering agents to worktree subfolders as a fix.
## The Verdict
iloom is for developers who are already using Claude Code and want to parallelize it across multiple issues without agents interfering with each other. The worktree isolation model is sound, the codebase is well-tested, and the maintainer runs one of the best review processes I've seen on a project this size: structured feedback graded by severity, fast turnaround, credit given openly.
The project's trajectory is upward. It's young (September 2025), actively developed, and now works on Linux. The platform gap was a growing pain, not a design flaw. The underlying architecture handled the expansion cleanly, which says something about the foundations.
There's a broader lesson here. I was running Code Server over the network to a remote Linux box, through a Claude Code terminal session. That's three layers of edge case that nobody on a macOS laptop will encounter. But every one of those layers represents a real deployment scenario: SSH sessions, Docker containers, Chromebooks with Linux, CI pipelines. My setup wasn't the edge case. The MacBook is. The most valuable QA you can do for any project is to use it on a platform the developers don't use.
What would push iloom further: lazy agent loading (don't pass 215KB when the workflow needs 40KB), documentation for non-macOS platforms, and broader awareness. If you're running Claude Code agents on anything other than a MacBook, this tool now works where you actually need it.
## Go Look At This
Star iloom-cli and try it on your next multi-issue sprint. If you're on Linux or WSL, that works now. Here's the PR that made it happen, and here's the issue that started it.
This is Review Bomb #4, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.