TL;DR:
miii-cli is an open-source terminal AI coding assistant powered by local models. No API keys. No cloud. No subscription. One command to install.
AI coding tools are getting expensive.
Claude Code, OpenCode, Kilo — genuinely useful, but at a real cost: $20/month base, API usage on top, and every keystroke, every file, every snippet of a codebase going through someone else's servers.
miii-cli was built to fix that.
Same terminal-native workflow. Same agentic file editing and shell execution. Runs entirely on local hardware via Ollama. Free forever.
## What miii actually does
miii is a terminal AI assistant that:
- Reads, writes, edits, and runs — the model calls tools autonomously, chaining up to 6 hops deep without manual intervention
- Injects files via `@filename` — type `@` anywhere to fuzzy-search and pull any file into context instantly
- Remembers sessions — conversations persist across launches, stored at `~/.config/miii/sessions/`
- Supports custom skills — create custom `/commands` in Markdown or TypeScript
- Works with any OpenAI-compatible API — Ollama, LM Studio, vLLM, Groq, Together, or any self-hosted server
```bash
npm install -g miii-cli
miii
```
That's the entire install. Ollama running, a model pulled, and you're in.
## The pain point it solves
Here's the honest comparison:
| Feature | miii | Claude Code | OpenCode | Kilo | OpenAI Codex |
|---|---|---|---|---|---|
| Local / offline | ✅ Ollama | ❌ | partial | partial | ❌ |
| Air-gapped | ✅ | ❌ | ❌ | ❌ | ❌ |
| $0 / month | ✅ | ❌ | ❌ | ❌ | ❌ |
| Switch model live | ✅ | ❌ | ❌ | partial | ❌ |
| File checkpoints | ✅ | ❌ | ❌ | ❌ | ❌ |
| MCP client | ✅ | ✅ | ✅ | ✅ | ❌ |
| Skills / npm | ✅ | plugins | ❌ | ❌ | ❌ |
Every other tool in this space requires a cloud account, an API key, or a monthly bill. miii doesn't ask for any of it.
## Who it's built for
**Privacy-first teams.** Healthcare, fintech, defense — code never leaves the machine. Nothing sent to Anthropic, OpenAI, or anyone. Not even metadata.

**Cost-sensitive developers.** For solo devs or small teams, $20/month + API costs is real money. Ollama is free. miii is free. The only cost is electricity.

**Model explorers.** Compare Llama 3.3, Qwen2.5-Coder, and Mistral on the same codebase — switch mid-session with `/models`. No tool-switching, no context loss.

**Air-gapped orgs.** For environments that literally cannot use cloud AI due to compliance requirements, miii with Ollama is the only full-featured coding CLI that works with zero internet.
## How the file context system works

The `@` system makes injecting project context frictionless:
```
❯ review the auth logic in @src/auth/middleware.ts
❯ refactor @src/utils/parser.ts to handle edge cases
❯ does @src/models/user.ts match the schema in @db/migrations/001.sql
```
Type `@` and a fuzzy picker opens over project files. Select what's needed and it gets injected into context. `node_modules`, `dist`, `.git`, lock files, and binaries are automatically excluded.

The model gets full read access to the selected files and can call `read_file`, `edit_file`, `run_command`, and more — autonomously, or with explicit approval gates enabled.
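To make the exclusion rule concrete, here is a minimal sketch of the kind of ignore filter such a picker could apply. This is illustrative only — the function name `isInjectable`, the directory set, and the file patterns are assumptions, not miii's actual implementation.

```typescript
// Hypothetical ignore filter in the spirit of miii's @ picker.
// Directory names and patterns here are assumptions for illustration.
const EXCLUDED_DIRS = new Set(["node_modules", "dist", ".git"]);
const EXCLUDED_FILES = /\.lock$|-lock\.(json|ya?ml)$|\.(png|jpe?g|gif|woff2?|zip)$/;

function isInjectable(relativePath: string): boolean {
  const parts = relativePath.split("/");
  // Reject anything nested inside an excluded directory at any depth.
  if (parts.some((part) => EXCLUDED_DIRS.has(part))) return false;
  // Reject lock files and common binary extensions by filename.
  return !EXCLUDED_FILES.test(parts[parts.length - 1]);
}
```

The key design point is that the filter runs on relative paths before fuzzy matching, so the picker never even indexes generated or binary content.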
## Skills: custom `/commands`
The skills system is the extensibility layer that sets miii apart.
Drop a Markdown file in `~/.config/miii/skills/`:
```markdown
---
name: review
description: review current changes for bugs and improvements
---
Review the code I'm about to share. Look for bugs, edge cases, and improvements.
Be direct and specific. No generic advice.
```
`/review` is now a first-class command in every session. TypeScript skill files with an `execute` function support programmatic behavior — running scripts, fetching context, piping data in.
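For flavor, here is a rough sketch of what a TypeScript skill might look like. The exact interface miii expects (field names, the `execute` signature) is an assumption here — check the project docs for the real shape.

```typescript
// Hypothetical skill file: ~/.config/miii/skills/lsctx.ts
// The export shape is assumed for illustration, not miii's documented API.
import { readdirSync } from "node:fs";

const skill = {
  name: "lsctx",
  description: "inject a listing of the current directory into context",
  // execute would run when the user types /lsctx; its return value
  // is the text handed to the model as context.
  async execute(): Promise<string> {
    const skip = new Set(["node_modules", ".git", "dist"]);
    const entries = readdirSync(process.cwd()).filter((e) => !skip.has(e));
    return `Project root contains:\n${entries.map((e) => `- ${e}`).join("\n")}`;
  },
};

export default skill;
```

Because `execute` is ordinary async code, a skill like this can just as easily shell out to git, hit a local service, or read project config before handing text to the model.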
Community-built skill packs are installable via npm. There's already an AI security skills package covering threat modeling and vulnerability analysis.
## Sessions are first-class
Every conversation is saved and resumed automatically.
```bash
miii                          # resumes "default" session
miii --session feature-auth   # resumes or creates "feature-auth"
miii -s work -m llama3.2      # short flags, specific model
```
Switch between projects, come back days later, pick up exactly where things left off. The context is there. The history is there.
## Security in 0.1.5
Security is taken seriously — this tool runs shell commands.
- Path traversal protection — all file operations are restricted to the current working directory via `guardPath()`
- Command timeout — `run_command` enforces a 30-second execution timeout
- Config allowlisting — config loading whitelists allowed keys; session data is validated before use
- Sanitized session names — alphanumeric characters and hyphens only
- Permission gates — writes, shell commands, and tool calls require explicit approval before running

The model can only touch what's inside `cwd`. It can't escape.
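A cwd-confinement check like `guardPath()` typically resolves the requested path first, then verifies the result still lives under the root. The sketch below is a reconstruction of that idea, not miii's actual code:

```typescript
import { resolve, sep } from "node:path";

// Illustrative confinement check in the spirit of miii's guardPath().
// Assumed behavior: resolve relative to root, then reject any result
// that lands outside root — which defeats "../" traversal tricks.
function guardPath(requested: string, root: string = process.cwd()): string {
  const absolute = resolve(root, requested);
  if (absolute !== root && !absolute.startsWith(root + sep)) {
    throw new Error(`Path escapes working directory: ${requested}`);
  }
  return absolute;
}
```

Resolving before comparing is the important part: a naive string check on the raw input would let `src/../../etc/passwd` slip through.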
## The broader miii ecosystem
miii-cli is the terminal core. The project ships two more tools alongside it:
- miii web app — a browser-based chat UI that connects to local Ollama. No account. No telemetry. No cloud relay. Claude-like UX, locally powered.
- mii-ai-security — a skill package for security-focused workflows: threat modeling, code review, vulnerability analysis.

All three work together. All three are free. Full docs at miii.in.
## Get started in 3 steps
1. Install Ollama and pull a model

   ```bash
   # Install from ollama.com, then:
   ollama pull llama3.2
   ```

2. Install miii-cli

   ```bash
   npm install -g miii-cli
   ```

3. Run it

   ```bash
   miii
   ```

A model picker opens. Select a model. Start coding.
## What's next
- MCP server support is live — connect any MCP-compatible tool, database, or API
- File checkpoints (state saved before every edit, one command to revert) are shipping
- Community skill packs are growing

Contributions are welcome — check CONTRIBUTING.md for issues, PRs, and skill pack guidelines.
Links:
- GitHub: github.com/maruakshay/miii-cli
- Website: miii.in
- npm: npmjs.com/package/miii-cli