Run Claude Code with Ollama (Local, Cloud, or Any Model)
This guide shows how to run Claude Code through Ollama, so you can use local models, cloud models, or any other Ollama-supported model directly from your terminal.
Prerequisites
Make sure the following tools are installed:
- Ollama
- Claude Code
Install Ollama
If Ollama is not installed, you can install it using the commands below.
You can also follow this guide:
https://dev.to/sushan/how-to-connect-a-local-ai-model-to-vs-code-1g8d
Windows (PowerShell)
irm https://ollama.com/install.ps1 | iex
macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh
Verify installation:
ollama --version
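Optionally, you can pull a model ahead of time so the first launch doesn't wait on a download. A minimal sketch — the model name llama3 is just an example; substitute any model you intend to use:

```shell
# Download a model ahead of time (optional)
ollama pull llama3

# List the models available in your local Ollama environment
ollama list
```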
Install Claude Code
Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex
macOS / Linux
curl -fsSL https://claude.ai/install.sh | bash
Verify installation:
claude --version
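Before continuing, it can help to confirm that both CLIs are actually reachable. A small sketch using standard POSIX shell:

```shell
# Check that the ollama and claude binaries are on PATH
for tool in ollama claude; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: not found - install it before continuing" >&2
    fi
done
```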
Running Claude Code with Ollama
Once both tools are installed, you can start Claude Code through Ollama.
The commands work the same on Windows, macOS, and Linux.
Option 1: Launch and Select a Model
Run the command:
ollama launch claude
This will open a model selection menu where you can choose a model using the arrow keys.
Option 2: Launch with a Specific Model
You can also specify the model directly.
Example:
ollama launch claude --model kimi-k2.5:cloud
Other examples:
ollama launch claude --model llama3
ollama launch claude --model qwen2.5
Replace the model name with any model available in your Ollama environment.
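If you launch the same way often, the step above can be wrapped in a small helper. This is a hypothetical sketch, not part of either tool; the function name and the llama3 default are assumptions:

```shell
#!/bin/sh
# launch_claude (hypothetical helper): start Claude Code via Ollama,
# using the first argument as the model and falling back to a default.
launch_claude() {
    model="${1:-llama3}"   # default model is an assumption; change to taste

    # Fail early with a clear message if ollama is not on PATH
    if ! command -v ollama >/dev/null 2>&1; then
        echo "error: ollama is not installed or not on PATH" >&2
        return 1
    fi

    ollama launch claude --model "$model"
}

# Usage:
#   launch_claude              # launches with the default model
#   launch_claude qwen2.5     # launches with qwen2.5
```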
Grant Folder Access
When Claude Code starts, it will ask for permission to access the current folder.
Select Yes to allow Claude Code to read and modify files in that directory.
Done
Claude Code will now start and connect to the selected model.
You can start interacting with your codebase immediately.
Official Documentation
For more details, see the official Ollama and Claude Code documentation.