AIChat: One CLI Tool to Rule All Your LLMs

Written by Arshdeep Singh

If you work with multiple LLMs regularly, you know the friction: different web interfaces, different API clients, different context windows, different ways to attach files. Switching between Claude, GPT-4, Gemini, and a local Ollama model means juggling multiple tools and losing the flow of your terminal-based workflow.

AIChat solves this with one elegantly built CLI — 29,000+ GitHub stars, written in Rust, and supporting 20+ LLM providers through a unified interface that actually respects how developers work.

GitHub: sigoden/aichat


What Is AIChat?

AIChat is an all-in-one command-line LLM tool that puts every major AI provider — and your local models — behind a single, consistent interface. It's not just a wrapper; it's a fully-featured AI workbench built for the terminal.

The 29k+ star count on GitHub isn't hype. It's the result of years of steady development, a genuinely useful feature set, and a community of developers who've found it indispensable.


20+ Providers, One Config

AIChat supports:

  • OpenAI (GPT-4, GPT-4o, o1, o3)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus)
  • Google (Gemini 1.5 Pro, Gemini Flash)
  • Ollama (run any local model — Llama, Mistral, Phi, Qwen)
  • Groq (blazing fast inference)
  • Azure OpenAI
  • AWS Bedrock
  • Mistral
  • DeepSeek
  • Perplexity
  • OpenRouter
  • And more

One config file. One tool. Switching providers is a flag: aichat -m claude:claude-3-5-sonnet "Explain this code".


Core Features

Shell Assistant 🐚

This might be AIChat's most immediately useful feature for developers. Natural language → shell commands, directly in your terminal.

aichat -e "find all files modified in the last 24 hours larger than 10MB"

AIChat generates the find command, shows it to you, and asks whether you want to execute it, revise it, or have it explained before anything runs. For anyone who spends time Googling obscure shell commands, this alone is worth the install.
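For that prompt, the generated command usually looks something like this (a sketch; the exact output varies by model):

```shell
# Regular files changed in the last 24 hours that are larger than 10 MiB
find . -type f -mtime -1 -size +10M
```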

Chat-REPL 💬

A full interactive chat mode with persistent conversation history, multi-line input, and syntax-highlighted responses. Think of it as a terminal-native ChatGPT that works with any model.

aichat  # opens the REPL

Inside the REPL, you can switch models mid-conversation, attach files, run shell commands, and navigate history — all without leaving the terminal.
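That switching happens through dot-commands. A few common ones (names as listed by the REPL's own `.help`; treat this as a sketch, not a complete list):

```
.model ollama:llama3.1    # switch model mid-conversation
.file src/main.rs         # attach a file to the next message
.session review           # start a named, persistent session
.exit                     # leave the REPL
```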

RAG (Retrieval-Augmented Generation) 📚

AIChat has a built-in RAG pipeline. You create a named RAG that indexes local files (PDFs, code, docs) and query it with your LLM of choice; on first use it prompts you for the document paths to index. No external vector database setup required: it handles chunking, embedding, and retrieval internally.

aichat --rag project-docs "What does the authentication module do?"

AI Agents + Function Calling 🤖

AIChat supports function calling across providers that expose it. You define external tools (the companion llm-functions project from the same author provides ready-made scaffolding) and the model can call them: file operations, web requests, custom scripts. Build lightweight agents that actually do things.

Built-in HTTP Server 🌐

AIChat ships with an OpenAI-compatible HTTP server. This means you can point any tool or library that expects an OpenAI endpoint at AIChat — and route requests to whatever model you actually want, including local Ollama models.

aichat --serve 127.0.0.1:8080

Point the client's base URL at the local server, and code that calls openai.chat.completions.create(...) now routes to DeepSeek, or any other model you've configured. No other code changes.
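As a sketch, here is the shape of the OpenAI-compatible request you would send to the local endpoint (assuming the server's default address of 127.0.0.1:8000; the model name is a placeholder to be swapped for one from your own config):

```shell
# Build an OpenAI-compatible request body.
# "ollama:llama3.1" is a placeholder model name.
cat > payload.json <<'EOF'
{
  "model": "ollama:llama3.1",
  "messages": [{"role": "user", "content": "Hello"}]
}
EOF

# Send it to AIChat's endpoint (only works while `aichat --serve` is running):
# curl -s http://127.0.0.1:8000/v1/chat/completions \
#      -H "Content-Type: application/json" -d @payload.json

# Sanity-check that the payload is valid JSON
python3 -m json.tool payload.json > /dev/null && echo "payload ok"
```

Any client library that lets you override the OpenAI base URL can point at the same endpoint in exactly the same way.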

LLM Arena ⚔️

Side-by-side model comparison: send the same prompt to multiple models and compare the responses. Essential for prompt engineering and model selection. In AIChat, the arena is part of the web UI served by the built-in server, rather than a standalone flag:

aichat --serve
# then open http://127.0.0.1:8000/arena in your browser

Installation

macOS (Homebrew):

brew install aichat

Rust/Cargo:

cargo install aichat

Linux (pre-built binary):
Download from GitHub Releases — available for x86_64 and ARM.

Windows: Pre-built binary available via GitHub Releases.


Configuration

On first run, AIChat creates a config file at ~/.config/aichat/config.yaml. The structure is clean and well-documented:

model: claude:claude-3-5-sonnet
clients:
  - type: claude
    api_key: sk-ant-...
  - type: openai
    api_key: sk-...
  - type: ollama
    api_base: http://localhost:11434

Set up once, use forever.


Real-World Use Cases

For developers:

  • Quick code review in the terminal: cat main.py | aichat "review this for bugs"
  • Generate boilerplate: aichat "write a Dockerfile for a Node.js app"
  • Debug errors: cat error.log | aichat "what's causing this?"

For sysadmins:

  • Shell assistant for complex command generation
  • Log analysis with RAG over log files
  • Generate Terraform/Ansible configs from natural language

For researchers:

  • Multi-model comparison for evaluation
  • RAG over paper PDFs
  • Automated pipelines via the HTTP server

Why 29,000 Stars?

AIChat has earned its star count by doing one thing exceptionally well: respecting developer workflow.

It doesn't try to be a web app, a hosted service, or a product with a dashboard. It's a tool — fast, composable, local-first, and configurable. It fits into existing workflows rather than replacing them.

In a world full of AI tools that want to be platforms, AIChat is content to be a very good hammer.


Final Thoughts

If you use LLMs and live in the terminal, AIChat belongs in your toolkit. The breadth of provider support, the shell assistant, the built-in HTTP server, and the LLM arena make it genuinely more useful than any single-provider CLI.

And because it's open-source Rust with active development, it keeps getting better.

brew install aichat
aichat "what can you do?"

Find out for yourself.

👉 GitHub: sigoden/aichat

