
No cloud. No API keys. No data leaving your machine.
Claude Code is great. But every keystroke, every file, every snippet of your codebase hits Anthropic's servers.
For a lot of developers — those working with client codebases, sensitive projects, or under strict company data policies — that's a deal-breaker.
So I built miii-cli. A terminal-native AI coding assistant powered by local models via Ollama (or any OpenAI-compatible API). Same agentic workflow as Claude Code. Zero cloud.
What it does
miii isn't just a chatbot in your terminal. It's a full agentic loop:
- Reads and writes files — edits, creates, overwrites, deletes
- Runs shell commands — tests its own output, verifies changes
- Chains up to 6 tool calls deep — reads, edits, runs, verifies autonomously
- Reads full project context — type @filename to instantly inject any file
- Persists session memory — conversations survive across terminal launches
- Supports custom slash commands — extend it with your own Markdown or TypeScript skill files

It plans the task, executes it, checks the result, and iterates. You don't babysit it.
Why I built this
I couldn't find a local CLI AI tool that actually worked well.
The ones that existed were either too clunky to set up, required cloud APIs, or had terminal output that was genuinely painful to read — weird formatting, broken renders, text that ran together.
I wanted something that felt as clean as Claude Code but ran entirely on local models.
So I built miii.
Install
npm install -g miii-cli
Requirements: Node.js 18+ and Ollama (or any OpenAI-compatible API like LM Studio, vLLM, Groq, Together)
Quick start
# Make sure Ollama is running
ollama serve
# Start miii
miii
On launch, miii opens a model picker. Select your model. Start coding.
miii # default session
miii --model codellama # specific model
miii --session myproject # named session
miii -s work -m llama3.2 # short flags
File context with @
One of my favourite features. Type @ anywhere in your message to fuzzy-search and inject project files into context instantly:
❯ review the auth logic in @src/auth/middleware.ts
❯ refactor @src/utils/parser.ts to handle edge cases
Auto-excluded: node_modules, dist, .git, lock files, binaries, images.
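The exclusion step can be sketched as a simple path filter. This is an assumption about how such a filter might look — the function name `isIncluded` and the exact patterns are illustrative, mirroring the list above rather than miii's real implementation:

```typescript
// Illustrative @ completion filter (hypothetical, not miii's code).
// Directories are excluded by name anywhere in the path; files are
// excluded by pattern (lock files, binaries, images).
const EXCLUDED_DIRS = ["node_modules", "dist", ".git"];
const EXCLUDED_FILES = [
  /lock\.(json|yaml)$/, // package-lock.json, yarn.lock-style files
  /\.(png|jpe?g|gif|ico|exe|bin)$/, // images and binaries
];

function isIncluded(relPath: string): boolean {
  const parts = relPath.split("/");
  if (parts.some((p) => EXCLUDED_DIRS.includes(p))) return false;
  return !EXCLUDED_FILES.some((re) => re.test(relPath));
}
```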
Built-in tools (what the model can call on its own)
| Tool | What it does |
|---|---|
| read_file | Read any file |
| list_files | List directory contents |
| edit_file | Create or overwrite a file |
| create_folder | Create a directory |
| move_file | Move or rename |
| delete_file | Delete a file |
| run_command | Run a shell command in cwd |
The model chains these automatically — no prompting needed.
Sessions
Every conversation is saved and resumed automatically.
miii # resumes "default" session
miii --session feature-auth # resumes or creates "feature-auth"
Sessions stored at ~/.config/miii/sessions/.
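File-backed session persistence can be as simple as one JSON file per session. A minimal sketch, assuming each session is stored as a JSON array of messages (the function names and message shape here are hypothetical — miii's real format may differ):

```typescript
// Hypothetical session persistence: one JSON array of messages per
// session file under ~/.config/miii/sessions/<name>.json.
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

type Message = { role: "user" | "assistant"; content: string };

const SESSIONS_DIR = path.join(os.homedir(), ".config", "miii", "sessions");

function loadSession(name: string, dir: string = SESSIONS_DIR): Message[] {
  const file = path.join(dir, `${name}.json`);
  if (!fs.existsSync(file)) return []; // new session starts empty
  const data = JSON.parse(fs.readFileSync(file, "utf8"));
  return Array.isArray(data) ? data : []; // reject non-array session data
}

function saveSession(name: string, messages: Message[], dir: string = SESSIONS_DIR): void {
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, `${name}.json`), JSON.stringify(messages, null, 2));
}
```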
Skills — custom slash commands
Create a Markdown file in ~/.config/miii/skills/:
---
name: review
description: review current changes for bugs and improvements
---
Review the code I'm about to share. Look for bugs, edge cases, and improvements.
Be direct and specific. No markdown.
Then use it:
/review
Skills can also be TypeScript files with an execute function for programmatic behaviour.
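The post doesn't show the TypeScript skill shape, so here's a plausible sketch. It assumes a skill module exports `name`, `description`, and an `execute` function — treat the exact interface as an assumption and check miii's docs for the real contract:

```typescript
// Hypothetical TypeScript skill file, e.g. ~/.config/miii/skills/wordcount.ts.
// Assumed interface: exported metadata plus an execute(input) function.
export const name = "wordcount";
export const description = "count words in the given text";

export function execute(input: string): string {
  // Split on runs of whitespace; filter(Boolean) drops the empty string
  // that splitting "" would otherwise produce.
  const words = input.trim().split(/\s+/).filter(Boolean);
  return `${words.length} words`;
}
```

Invoked as `/wordcount` in the same way as a Markdown skill, but with arbitrary logic behind it.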
Configuration
Works with Ollama by default. Switch to any OpenAI-compatible provider:
Ollama (default):
{
"model": "llama3.2",
"provider": "ollama",
"baseUrl": "http://localhost:11434"
}
OpenAI-compatible (LM Studio, Groq, vLLM, Together, etc.):
{
"model": "gpt-4o",
"provider": "openai",
"baseUrl": "https://api.openai.com/v1"
}
Config loads from .miii.json in your current directory, or ~/.config/miii/config.json.
Security
miii 0.1.5 addresses the following out of the box:
- Path traversal — all file operations restricted to cwd via guardPath()
- @filename references validated against cwd before reading
- run_command enforces a 30-second execution timeout
- Config loading whitelists allowed keys; session data validated as array
- File paths in context XML attributes properly escaped
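The post names guardPath() but doesn't show it; the core idea is easy to sketch. This implementation is illustrative, not miii's actual code — it resolves the requested path and rejects anything that escapes the working directory:

```typescript
// Illustrative cwd-confinement check in the spirit of guardPath()
// (not miii's actual implementation). Resolving first means tricks
// like "a/../../etc/passwd" are normalized before the prefix check.
import * as path from "node:path";

function guardPath(requested: string, cwd: string = process.cwd()): string {
  const resolved = path.resolve(cwd, requested);
  const root = path.resolve(cwd) + path.sep;
  if (resolved !== path.resolve(cwd) && !resolved.startsWith(root)) {
    throw new Error(`path escapes working directory: ${requested}`);
  }
  return resolved;
}
```

Note the `+ path.sep` in the prefix check: without it, a cwd of `/work` would wrongly accept `/work-other/file`.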
What's next
This is early days. I'm working on:
- Better model compatibility testing (Qwen2.5-Coder, DeepSeek-Coder)
- Improved context window management for large codebases
- More built-in skills out of the box
Links
- 📦 npm: https://www.npmjs.com/package/miii-cli
- ⭐ GitHub: https://github.com/maruakshay/miii-cli

If you try it, drop a star. If you break it, open an issue. If you want to contribute, PRs are open.
Built with TypeScript. MIT licensed. No VC money. No cloud dependency. Just a local tool that does the job.
Tags: localai opensource ai terminal devtools