
maruakshay
I Built a Claude Code-Level Coding Assistant That Runs Entirely on Your Machine


No cloud. No API keys. No data leaving your machine.


Claude Code is great. But every keystroke, every file, every snippet of your codebase hits Anthropic's servers.

For a lot of developers — those working with client codebases, sensitive projects, or under strict company data policies — that's a deal-breaker.

So I built miii-cli. A terminal-native AI coding assistant powered by local models via Ollama (or any OpenAI-compatible API). Same agentic workflow as Claude Code. Zero cloud.


What it does

miii isn't just a chatbot in your terminal. It's a full agentic loop:

  • Reads and writes files — edits, creates, overwrites, deletes
  • Runs shell commands — tests its own output, verifies changes
  • Chains up to 6 tool calls deep — reads, edits, runs, verifies autonomously
  • Reads full project context — type @filename to instantly inject any file
  • Persists session memory — conversations survive across terminal launches
  • Supports custom slash commands — extend it with your own Markdown or TypeScript skill files

It plans the task, executes it, checks the result, and iterates. You don't babysit it.
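Conceptually, that loop is simple: ask the model, run whatever tool it requests, feed the result back, and stop when it answers in plain text or hits the depth limit. A minimal sketch with a stubbed model in place of Ollama (all names here are illustrative, not miii's actual internals):

```typescript
// Hypothetical sketch of an agentic tool loop with a chained-call limit.
// The real miii loop talks to a local model; here the model is a stub.
type ToolCall = { name: string; args: Record<string, string> };
type ModelReply = { toolCall?: ToolCall; text?: string };

function runAgentLoop(
  model: (history: string[]) => ModelReply,
  tools: Record<string, (args: Record<string, string>) => string>,
  maxDepth = 6, // mirrors the "up to 6 tool calls deep" behaviour
): string {
  const history: string[] = [];
  for (let i = 0; i < maxDepth; i++) {
    const reply = model(history);
    // Plain text means the model is done iterating.
    if (!reply.toolCall) return reply.text ?? "";
    // Execute the requested tool and feed the result back.
    const result = tools[reply.toolCall.name](reply.toolCall.args);
    history.push(result);
  }
  return "max tool depth reached";
}
```

The depth cap is what keeps an over-eager model from looping forever on its own output.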

Why I built this

I couldn't find a local CLI AI tool that actually worked well.

The ones that existed were either too clunky to set up, required cloud APIs, or had terminal output that was genuinely painful to read — weird formatting, broken renders, text that ran together.

I wanted something that felt as clean as Claude Code but ran entirely on local models.

So I built miii.


Install

npm install -g miii-cli

Requirements: Node.js 18+ and Ollama (or any OpenAI-compatible API such as LM Studio, vLLM, Groq, or Together)


Quick start

# Make sure Ollama is running
ollama serve

# Start miii
miii

On launch, miii opens a model picker. Select your model. Start coding.

miii                          # default session
miii --model codellama        # specific model
miii --session myproject      # named session
miii -s work -m llama3.2      # short flags

File context with @

One of my favourite features. Type @ anywhere in your message to fuzzy-search and inject project files into context instantly:

❯ review the auth logic in @src/auth/middleware.ts
❯ refactor @src/utils/parser.ts to handle edge cases

Auto-excluded: node_modules, dist, .git, lock files, binaries, images.


Built-in tools (what the model can call on its own)

Tool             What it does
read_file        Read any file
list_files       List directory contents
edit_file        Create or overwrite a file
create_folder    Create a directory
move_file        Move or rename
delete_file      Delete a file
run_command      Run a shell command in the cwd

The model chains these automatically — no prompting needed.
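For context on how a model "calls" these: OpenAI-compatible backends receive tools as function-calling schemas alongside the chat messages. A hedged sketch of how a tool like run_command might be described (the field names follow the OpenAI tools API; how miii actually wires this up is an assumption):

```typescript
// Illustrative sketch: a function-calling schema for run_command,
// in the shape an OpenAI-compatible endpoint expects. Not miii's
// actual definition.
const runCommandTool = {
  type: "function" as const,
  function: {
    name: "run_command",
    description: "Run a shell command in the current working directory",
    parameters: {
      type: "object",
      properties: {
        command: {
          type: "string",
          description: "The shell command to execute",
        },
      },
      required: ["command"],
    },
  },
};
```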


Sessions

Every conversation is saved and resumed automatically.

miii                          # resumes "default" session
miii --session feature-auth   # resumes or creates "feature-auth"

Sessions are stored in ~/.config/miii/sessions/.


Skills — custom slash commands

Create a Markdown file in ~/.config/miii/skills/:

---
name: review
description: review current changes for bugs and improvements
---

Review the code I'm about to share. Look for bugs, edge cases, and improvements.
Be direct and specific. No markdown.

Then use it:

/review

Skills can also be TypeScript files with an execute function for programmatic behaviour.
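The post only pins down that a TypeScript skill exports an execute function, so here is a hypothetical sketch of what one could look like (the SkillContext shape and export names are assumptions, not miii's documented interface):

```typescript
// Hypothetical sketch of a TypeScript skill file for ~/.config/miii/skills/.
// Only "an execute function" is documented; everything else is assumed.
interface SkillContext {
  args: string; // text typed after the slash command
}

export const name = "branch";
export const description = "suggest a git branch name for the current task";

export function execute(ctx: SkillContext): string {
  // Turn "/branch Fix Login Timeout" into "fix-login-timeout"
  return ctx.args.trim().toLowerCase().replace(/\s+/g, "-");
}
```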


Configuration

Works with Ollama by default. Switch to any OpenAI-compatible provider:

Ollama (default):

{
  "model": "llama3.2",
  "provider": "ollama",
  "baseUrl": "http://localhost:11434"
}

OpenAI-compatible (LM Studio, Groq, vLLM, Together, etc.):

{
  "model": "gpt-4o",
  "provider": "openai",
  "baseUrl": "https://api.openai.com/v1"
}

Config loads from .miii.json in your current directory, or ~/.config/miii/config.json.


Security

miii 0.1.5 addresses the following out of the box:

  • Path traversal — all file operations restricted to cwd via guardPath()
  • @filename references validated against cwd before reading
  • run_command enforces a 30-second execution timeout
  • Config loading whitelists allowed keys; session data is validated as an array
  • File paths in context XML attributes are properly escaped
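For illustration, a cwd-confinement guard along the lines of the guardPath() mentioned above could look like this (a sketch under my own assumptions, not miii's actual implementation):

```typescript
import * as path from "node:path";

// Sketch of a path-traversal guard: resolve the requested path against
// a root directory and reject anything that escapes it.
function guardPath(requested: string, root: string = process.cwd()): string {
  const resolved = path.resolve(root, requested);
  // A relative path starting with ".." means the target escapes the root.
  const rel = path.relative(root, resolved);
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`Path escapes working directory: ${requested}`);
  }
  return resolved;
}
```

Resolving before comparing is the important part: it normalizes tricks like `a/../../etc/passwd` before the containment check runs.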

What's next

This is early days. I'm working on:

  • Better model compatibility testing (Qwen2.5-Coder, DeepSeek-Coder)
  • Improved context window management for large codebases
  • More built-in skills out of the box

Links


Built with TypeScript. MIT licensed. No VC money. No cloud dependency. Just a local tool that does the job.


Tags: localai opensource ai terminal devtools
