rokoss21
IOSM CLI: AI Engineering Runtime. Not Another Chat Wrapper.

Chat agents hit a ceiling.

You feel it around week three of using any of them — Claude Code, Gemini CLI, Cursor, Aider. The first sessions are impressive. Then you hit a real task: a cross-cutting refactor, a parallel migration, a codebase you've touched across a hundred sessions. And the tool starts to break at the seams. Re-explain the context. Manually merge competing edits. Hope the rollback works.

Engineering requires structure. Chat doesn't have it.

IOSM CLI is the runtime for that structure.


"AI without a methodology is just faster improvisation."

This is the sentence that drives every design decision in iosm-cli. Not a tagline — an architectural constraint. A coding agent that can't measure its own outcomes, can't coordinate parallel work, and can't remember decisions across sessions is not an engineering tool. It's a search engine that writes code.

AI adoption is no longer the advantage. Systematic AI engineering is.


🛠️ Why This Exists

I built agent systems for production codebases for years. The pattern was always the same: impressive demos, fragile execution at scale. When the task grew beyond a single context window — spanning multiple modules, multiple agents, multiple sessions — chat-based tools collapsed into manual coordination overhead.

The missing piece wasn't a better model. It was a missing runtime layer: something that enforces methodology, tracks outcomes, coordinates agents, and survives session boundaries. That's what IOSM CLI is.


👥 Who Is This For

Three types of engineers use IOSM CLI, and they come for different reasons.

The solo developer who wants a real coding agent

You've tried the other tools. You're tired of re-explaining your project every session. You want something that already knows your architectural decisions, your banned dependencies, your team conventions — and actually executes tasks autonomously.

With iosm-cli, you run iosm, type your task, and the agent works. It reads your files, runs your tests, handles rollbacks. Persistent memory means session 10 knows everything session 1 learned.

Time to first productive result: under 5 minutes.

The senior engineer running complex refactors

180K-line monolith. Extract the payment service, migrate auth to OAuth2, keep CI green throughout. One sequential agent on a 15-hour task is not a plan.

With /orchestrate, you spin up parallel agents with dependency ordering, file lock guarantees, and git worktree isolation. You get a coordinated team in one command, with a consolidated result you can actually review.

The kind of work you previously couldn't safely delegate to AI.

The team lead operationalizing AI coding

You need engineering workflows that are auditable, reproducible, safe for shared codebases. Every cycle should leave traces: what changed, why, what the metrics were before and after.

With IOSM cycles, every run captures baseline metrics, hypothesis cards, and outcome deltas in .iosm/cycles/. The next engineer resumes from the same artifact state.

AI coding as a team engineering system, not a solo productivity hack.


⚡ Barrier to Entry: Minimal

The tool is layered. You start at the bottom and unlock depth only when you need it.

Day 1 — Three commands to a working agent

```shell
npm install -g iosm-cli
cd your-project
iosm
```

Inside the session:

```
/login     → guided API key setup (30 seconds)
/model     → pick provider + model
your task  → start immediately
```

No YAML. No config files. No methodology training. The default full profile gives you a capable coding agent with full filesystem access, shell tooling, and semantic search — ready out of the box.

Week 1 — Unlock depth when you need it

```
Shift+Tab           → switch to iosm profile
/init               → bootstrap IOSM workspace
/iosm 0.95          → run your first structured cycle
```

Entirely optional. Stay in the full profile forever if that works for you. The IOSM layer appears when you need measurable, auditable improvement cycles — not before.

No provider lock-in

```shell
export ANTHROPIC_API_KEY="..."      # Claude models
export OPENAI_API_KEY="..."         # OpenAI GPT models
export GEMINI_API_KEY="..."         # Google Gemini models
export OPENROUTER_API_KEY="..."     # 100+ models via OpenRouter
```

Node.js >=20.6.0 is the only hard requirement. Everything else is optional.


🆚 Honest Positioning vs Other Tools

This isn't a "we win every cell" table. It's a map so you can pick the right tool.

| | Claude Code | Gemini CLI | Cursor | OpenCode | IOSM CLI |
|---|---|---|---|---|---|
| Provider | Claude-native | Gemini-native | Any (IDE) | Any (75+ providers) | Any |
| Mode | Terminal | CLI | IDE | Terminal | Terminal runtime |
| IOSM methodology | ❌ | ❌ | ❌ | ❌ | ✅ |
| Parallel agent orchestration | Partial ¹ | ❌ | ❌ | ❌ | ✅ |
| Structured checkpoint / rollback | Partial ² | ❌ | Partial ³ | ❌ | ✅ |
| Persistent cross-session memory | Via CLAUDE.md | Via Automations | ❌ | ❌ | ✅ |
| Semantic / vector code search | Agentic ⁴ | ❌ | ✅ IDE-native | ❌ | ✅ terminal-native |
| MCP support | ✅ | ✅ | ✅ | ✅ | ✅ |
| SDK / JSON-RPC mode | ✅ SDK | ❌ | ❌ | ❌ | ✅ |
| Free tier | ❌ paid only ⁵ | ✅ 1000 req/day | ✅ Hobby (limited) | ✅ open-source | ✅ open-source |

Notes (to keep this honest):

¹ Claude Code supports parallel subagents via /batch and skills, but without dependency DAGs, file locks, or worktree isolation.

² Claude Code introduced checkpoints for exploration in 2025, but without structured rollback to named states.

³ Cursor has a "Restore Checkpoint" UI button within a session, but not an explicit CLI-level /checkpoint + /rollback workflow.

⁴ Claude Code does deep agentic codebase search (reads and understands files in-context with a 200K token window) — not vector embeddings, but highly capable for many use cases.

⁵ Claude Code requires a paid Pro ($20/mo) or Max ($100+/mo) plan. The free claude.ai web interface handles general coding questions but is not the Claude Code CLI agent.

The pattern: Claude Code and Gemini CLI are go-to choices for their respective native models. Cursor excels at IDE-integrated flows. OpenCode is the best fully open-source lightweight option. IOSM CLI is the only terminal runtime that combines structured methodology, coordinated parallel execution with dependency ordering, and a full platform layer — across any provider.

Different tools for different jobs. IOSM CLI is not a "better chat" — it is a different category: an engineering runtime.


🏗️ Three Architectural Layers

The runtime is layered. Each layer adds capabilities. You get value at any level.


Layer 1 — Runtime: Agents, Orchestration, Worktrees

The base is a full coding agent with direct filesystem and shell access. Real file reads, real diffs, real test runs — no hallucinated paths.

For complex work, /orchestrate turns one agent into a coordinated team:

```
/orchestrate --parallel --agents 4 \
  --profiles iosm_analyst,explore,iosm_verifier,full \
  --depends 3>1,4>2 \
  --locks schema,config \
  --worktree \
  Refactor auth module, verify invariants, document changes
```
  • Dependency DAG (--depends 3>1,4>2): agent 3 waits for 1, agent 4 waits for 2
  • File locks (--locks schema,config): zero parallel write collisions
  • Git worktrees (--worktree): main branch untouched until merge

This is continuous dispatch — tasks launch the moment their dependencies are satisfied, not when an arbitrary wave completes. 3–5× reduction in wall-clock time for parallelizable work.
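The scheduling idea is easy to sketch. Below is an illustrative model (not iosm-cli source) in which equal-duration tasks make the launch order visible; in the real runtime each agent launches independently the instant its dependencies finish, rather than in lockstep steps:

```typescript
// Illustrative sketch of dependency-ordered dispatch (not iosm-cli
// source). Tasks become eligible the moment every dependency is done.
type Task = { id: number; deps: number[] };

function dispatchOrder(tasks: Task[]): number[][] {
  const done = new Set<number>();
  const pending = [...tasks];
  const steps: number[][] = [];
  while (pending.length > 0) {
    // Everything whose dependencies are satisfied launches now.
    const ready = pending.filter(t => t.deps.every(d => done.has(d)));
    if (ready.length === 0) throw new Error("dependency cycle");
    steps.push(ready.map(t => t.id));
    ready.forEach(t => done.add(t.id));
    for (const t of ready) pending.splice(pending.indexOf(t), 1);
  }
  return steps;
}

// --depends 3>1,4>2 from the example above:
const order = dispatchOrder([
  { id: 1, deps: [] },
  { id: 2, deps: [] },
  { id: 3, deps: [1] },
  { id: 4, deps: [2] },
]);
console.log(order); // [[1, 2], [3, 4]]
```

Agents 1 and 2 start immediately; 3 and 4 follow as soon as their single dependency completes, without waiting for the other branch.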


Layer 2 — Methodology: IOSM Cycles, Metrics, Artifacts

IOSM is Improve → Optimize → Shrink → Modularize — a four-phase iterative loop that turns vague "make this better" requests into measurable engineering decisions.

```
Shift+Tab              # switch to iosm profile
/init                  # bootstrap workspace
/iosm 0.95 --max-iterations 5
```

/init generates:

  • iosm.yaml — thresholds, weights, governance policies
  • IOSM.md — operator + agent playbook
  • .iosm/cycles/ — artifact workspace for cycle history

Every cycle run captures: baseline metrics → hypothesis cards → improve/verify/optimize iterations → outcome deltas → artifact write.

```
iosm> Baseline captured
iosm> Planned cycle from team artifacts: simplify auth module
iosm> Running improve -> verify -> optimize loop
iosm> Result: simplicity +18%, modularity +11%, performance +6%
iosm> Artifacts written to .iosm/cycles/2026-03-10-001/
```

These numbers are real and reproducible. Not vibes. Not impressions. Measurable deltas with a full decision log.
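For illustration, a delta like "simplicity +18%" is just the percentage change between the baseline and post-cycle metric scores. The metric names and the 0-to-1 scoring below are assumptions for the sketch, not iosm-cli internals:

```typescript
// Hypothetical sketch: deriving "+18%"-style outcome deltas from
// before/after metric scores. Names and scoring scale are illustrative.
type Metrics = Record<string, number>;

function outcomeDeltas(baseline: Metrics, result: Metrics): Record<string, string> {
  const deltas: Record<string, string> = {};
  for (const key of Object.keys(baseline)) {
    // Percentage change relative to the baseline score.
    const pct = Math.round(((result[key] - baseline[key]) / baseline[key]) * 100);
    deltas[key] = `${pct >= 0 ? "+" : ""}${pct}%`;
  }
  return deltas;
}

console.log(outcomeDeltas(
  { simplicity: 0.62, modularity: 0.71 },
  { simplicity: 0.73, modularity: 0.79 },
)); // { simplicity: "+18%", modularity: "+11%" }
```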

Also in this layer:

  • /memory — persistent project facts across sessions. Active decisions, anti-patterns, architectural constraints. The agent loads them at startup.
  • /contract — hard engineering constraints the agent enforces. "No new dependencies without approval." "Test coverage must stay above 80%."
  • /semantic — intent-based code search. Query by meaning, not tokens. "Find all places handling token expiry" — across renamed variables and different module boundaries.
  • /singular — before implementing anything complex, run feasibility analysis across 3 variants. Choose before you build.
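To make the contract idea concrete, here is a hypothetical sketch of the kind of gate such rules imply. The function name, rule shapes, and report format are all illustrative, not iosm-cli's actual contract format:

```typescript
// Illustrative only: enforcing the two example contract rules above.
// All names and shapes here are hypothetical.
type ContractReport = { rule: string; ok: boolean };

function checkContracts(state: {
  coverage: number;          // test coverage, 0..1
  newDependencies: string[]; // dependencies added without approval
}): ContractReport[] {
  return [
    { rule: "coverage >= 80%", ok: state.coverage >= 0.8 },
    { rule: "no unapproved dependencies", ok: state.newDependencies.length === 0 },
  ];
}

const report = checkContracts({ coverage: 0.83, newDependencies: ["left-pad"] });
// A violated rule would block the agent from landing the change.
console.log(report.filter(r => !r.ok).map(r => r.rule)); // ["no unapproved dependencies"]
```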

Layer 3 — Platform: SDK, JSON-RPC, MCP

iosm-cli is a foundation you build on, not a closed product.

SDK — embed the runtime in your own tooling:

```typescript
import { createAgent } from 'iosm-cli';

const agent = await createAgent({
  model: 'sonnet',
  profile: 'iosm',
  tools: ['read', 'write', 'bash']
});

await agent.run('Analyze auth module security posture');
```

JSON-RPC — wire into CI pipelines and custom dashboards:

```shell
iosm --json-rpc --port 3042
```
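The JSON-RPC 2.0 envelope itself is standardized, so a client request looks roughly like this. The method name `agent.run` and its params are hypothetical placeholders (check the docs for the real method surface), and the transport depends on how the server is wired up:

```typescript
// Sketch of a JSON-RPC 2.0 request envelope. The envelope fields are
// fixed by the spec; the method name and params are placeholders.
function jsonRpcRequest(id: number, method: string, params: object): string {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params });
}

const body = jsonRpcRequest(1, "agent.run", { task: "Audit src/ for dead code" });
console.log(body);
```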

Print mode — pipe to other tools:

```shell
iosm -p "Audit src/ for dead code" --output-format json | jq '.findings'
```

MCP — connect any external tool ecosystem:

```
/mcp    # interactive MCP server manager
```

🔄 A Full Production Workflow

Not a toy demo. A real scenario: refactor an authentication module safely, with verification and measurable outcomes, in under 3 hours.

```
$ iosm
IOSM CLI v0.1.3 [full]

you> /orchestrate --parallel --agents 4 \
     --profiles iosm_analyst,explore,iosm_verifier,full \
     --depends 3>1,4>2 --locks schema,config --worktree \
     Refactor auth module, verify security invariants, document changes

iosm> Team run started: #77
iosm> agent[1] architecture map complete
iosm> agent[2] implementation patch set prepared
iosm> agent[3] verification suite and rollback checks ready
iosm> agent[4] integration validation passed
iosm> Consolidated patch plan generated

→ Shift+Tab (switch to iosm profile)
→ /init
→ /iosm 0.95 --max-iterations 5

iosm> Baseline captured
iosm> Planned cycle from team artifacts: simplify auth module
iosm> Running improve -> verify -> optimize loop
iosm> Result: simplicity +18%, modularity +11%, performance +6%
iosm> Artifacts written to .iosm/cycles/2026-03-10-001/
```

Outcome: completed in ~2.5 hours. Measurable deltas. Full audit trail. Safe to present to the team and repeat next week.


📦 Install

```shell
npm install -g iosm-cli
cd your-project
iosm

# In session:
/doctor    # verify model + auth + tools are healthy
```

For maximum performance on large codebases:

```shell
# macOS
brew install ripgrep fd ast-grep comby jq yq semgrep

# Ubuntu/Debian
sudo apt-get install -y ripgrep fd-find jq yq sed
```

🌐 Open Spec, Open Runtime

The methodology is a separate, versioned specification: github.com/rokoss21/IOSM — formal definitions, schemas, artifact templates, quality gate validators.

The spec is the contract; the CLI is one implementation. Nothing stops you from running IOSM cycles in your CI, your own orchestrator, or your custom tooling. The spec is the invariant.


One Last Thing

Most teams have already adopted some AI coding tool. Most have hit the ceiling: autocomplete works, quick boilerplate works, but anything that requires real coordination across sessions, files, or agents breaks down.

The next gap in engineering teams won't be "do you use AI?" Everyone will. The gap will be how systematically.

Stop prompting. Start executing.


GitHub: github.com/rokoss21/iosm-cli

npm: npm install -g iosm-cli

Docs: github.com/rokoss21/iosm-cli/docs

IOSM Spec: github.com/rokoss21/IOSM
