Originally published on Remote OpenClaw.
Marketplace
Free skills and AI personas for OpenClaw — browse the marketplace.
Join the Community
Join 1k+ OpenClaw operators sharing deployment guides, security configs, and workflow automations.
Where Is the Model Configuration File?
OpenClaw's model configuration lives in your .env file. The location depends on your installation method:
- Docker deployment: ~/openclaw/.env (or wherever your docker-compose.yml is located)
- npm global install: ~/.openclaw/.env
- Hostinger Docker Manager: edit through the Docker Manager UI under Environment Variables
The two key variables for model selection are:
# Which provider to use
OPENCLAW_MODEL_PROVIDER=openai
# Which specific model from that provider
OPENCLAW_MODEL_NAME=gpt-5.4
After changing these values, you must restart OpenClaw for the changes to take effect:
docker compose down && docker compose up -d
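Before restarting, it's worth confirming both variables actually made it into the file. A minimal sanity check (using a sample .env written to a temp file here, so the snippet is self-contained — point ENV_FILE at your real path):

```shell
# Sanity-check that both model variables are set before restarting.
# Sample .env in a temp file for illustration; use your real path instead.
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" <<'EOF'
OPENCLAW_MODEL_PROVIDER=anthropic
OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514
EOF

missing=""
for var in OPENCLAW_MODEL_PROVIDER OPENCLAW_MODEL_NAME; do
  grep -q "^${var}=" "$ENV_FILE" || missing="$missing $var"
done
if [ -z "$missing" ]; then
  echo "ok: model variables present"
else
  echo "missing:$missing"
fi
rm -f "$ENV_FILE"
```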
What AI Models Does OpenClaw Support?
As of OpenClaw 3.23 (March 2026), the following model providers and models are supported:
| Provider | Models | API Key Variable |
|---|---|---|
| OpenAI | gpt-5.4, gpt-4o, gpt-4o-mini, o1, o1-mini | OPENCLAW_OPENAI_API_KEY |
| Anthropic | claude-sonnet-4-20250514, claude-3.5-sonnet, claude-3-haiku | OPENCLAW_ANTHROPIC_API_KEY |
| Google | gemini-2.5-pro, gemini-2.0-flash, gemini-1.5-pro | OPENCLAW_GOOGLE_API_KEY |
| DashScope | qwen-max, qwen-plus, qwen-turbo, qwen-long | OPENCLAW_DASHSCOPE_API_KEY |
| Azure OpenAI | Any deployed OpenAI model | OPENCLAW_AZURE_API_KEY |
| Anthropic Vertex | Claude via Google Cloud | OPENCLAW_VERTEX_API_KEY |
| Ollama | Any model available locally | None (local) |
The default model in OpenClaw 3.22+ is gpt-5.4 via OpenAI. If you don't set these variables, that's what you'll get.
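Conceptually, the fallback works like shell parameter defaults — an unset variable resolves to the documented default. This sketch only illustrates the behavior; OpenClaw's actual resolution happens internally:

```shell
# Illustration of the default fallback: unset variables resolve to openai/gpt-5.4.
unset OPENCLAW_MODEL_PROVIDER OPENCLAW_MODEL_NAME
PROVIDER="${OPENCLAW_MODEL_PROVIDER:-openai}"
MODEL="${OPENCLAW_MODEL_NAME:-gpt-5.4}"
echo "$PROVIDER/$MODEL"
```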
How Do You Switch to Claude?
Claude is the recommended model for complex reasoning, nuanced conversations, and tasks that require careful instruction following. To switch to Claude:
# In your .env file
OPENCLAW_MODEL_PROVIDER=anthropic
OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514
OPENCLAW_ANTHROPIC_API_KEY=sk-ant-your-api-key-here
Get your Anthropic API key from console.anthropic.com.
If you want to run Claude through Google Cloud (for compliance or billing reasons), use the Vertex AI provider instead:
OPENCLAW_MODEL_PROVIDER=vertex
OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514
OPENCLAW_VERTEX_PROJECT_ID=your-gcp-project
OPENCLAW_VERTEX_LOCATION=us-east5
OPENCLAW_VERTEX_API_KEY=your-vertex-api-key
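If you'd rather script the switch than edit the file by hand, a sed one-liner can flip the two values in place. This sketch uses a temp file with sample contents so it's safe to run as-is; note that GNU sed's `-i` differs on macOS/BSD (`sed -i ''`):

```shell
# Sketch: flip an existing .env from GPT to Claude in place.
# Temp file with sample contents for illustration; use your real .env path.
ENV_FILE="$(mktemp)"
printf 'OPENCLAW_MODEL_PROVIDER=openai\nOPENCLAW_MODEL_NAME=gpt-5.4\n' > "$ENV_FILE"
sed -i \
  -e 's|^OPENCLAW_MODEL_PROVIDER=.*|OPENCLAW_MODEL_PROVIDER=anthropic|' \
  -e 's|^OPENCLAW_MODEL_NAME=.*|OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514|' \
  "$ENV_FILE"
cat "$ENV_FILE"
```

Remember to add your OPENCLAW_ANTHROPIC_API_KEY and restart afterwards.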
How Do You Switch to GPT?
GPT-5.4 is the default in OpenClaw 3.22+. If you're using a different model and want to switch back, or want to use a specific GPT variant:
# GPT-5.4 (default, most capable)
OPENCLAW_MODEL_PROVIDER=openai
OPENCLAW_MODEL_NAME=gpt-5.4
OPENCLAW_OPENAI_API_KEY=sk-your-api-key-here
# GPT-4o (good balance of speed and capability)
OPENCLAW_MODEL_PROVIDER=openai
OPENCLAW_MODEL_NAME=gpt-4o
# GPT-4o-mini (cheapest, fastest, good for simple tasks)
OPENCLAW_MODEL_PROVIDER=openai
OPENCLAW_MODEL_NAME=gpt-4o-mini
Get your OpenAI API key from platform.openai.com.
For Azure-hosted OpenAI models (enterprise compliance):
OPENCLAW_MODEL_PROVIDER=azure
OPENCLAW_MODEL_NAME=your-deployment-name
OPENCLAW_AZURE_API_KEY=your-azure-key
OPENCLAW_AZURE_ENDPOINT=https://your-resource.openai.azure.com/
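A common Azure misconfiguration is pasting a full deployment URL instead of the resource base URL. A quick shape check (the endpoint value below is a placeholder, not a real resource):

```shell
# Sketch: check the Azure endpoint is the resource base URL,
# i.e. https://<resource>.openai.azure.com/ (placeholder value below).
ENDPOINT="https://your-resource.openai.azure.com/"
case "$ENDPOINT" in
  https://*.openai.azure.com/) RESULT="endpoint format ok" ;;
  *)                           RESULT="unexpected endpoint format" ;;
esac
echo "$RESULT"
```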
How Do You Switch to Gemini?
Google's Gemini models are strong at multimodal tasks and offer competitive pricing:
# Gemini 2.5 Pro (most capable)
OPENCLAW_MODEL_PROVIDER=google
OPENCLAW_MODEL_NAME=gemini-2.5-pro
OPENCLAW_GOOGLE_API_KEY=your-google-api-key
# Gemini 2.0 Flash (fast and cheap)
OPENCLAW_MODEL_PROVIDER=google
OPENCLAW_MODEL_NAME=gemini-2.0-flash
Get your Google AI API key from aistudio.google.com.
Gemini works well for tasks involving image analysis, document processing, and multilingual content. For pure text reasoning, Claude and GPT generally perform better.
How Do You Switch to a Local Model With Ollama?
Running a local model with Ollama means zero API costs. Here's how to set it up:
Step 1: Install Ollama
# On Linux/Mac
curl -fsSL https://ollama.ai/install.sh | sh
# On the same machine as OpenClaw, or a machine accessible on your network
Step 2: Pull a model
# Recommended models for OpenClaw:
ollama pull llama3.1:70b # Best quality (needs 40GB+ RAM)
ollama pull llama3.1:8b # Good balance (needs 8GB+ RAM)
ollama pull mistral # Fast and capable (needs 8GB+ RAM)
ollama pull qwen2:7b # Good for multilingual (needs 8GB+ RAM)
Step 3: Configure OpenClaw
# In your .env file
OPENCLAW_MODEL_PROVIDER=ollama
OPENCLAW_MODEL_NAME=llama3.1:8b
OPENCLAW_OLLAMA_URL=http://localhost:11434
# If Ollama is on a different machine:
OPENCLAW_OLLAMA_URL=http://192.168.1.100:11434
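Before restarting OpenClaw, you can verify the Ollama server is reachable. Ollama exposes GET /api/tags, which lists locally pulled models; this sketch just derives the check URL from the base URL (run the printed curl yourself):

```shell
# Sketch: build a health-check URL from the Ollama base URL.
# Ollama's GET /api/tags lists locally pulled models.
OPENCLAW_OLLAMA_URL="http://192.168.1.100:11434"
TAGS_URL="${OPENCLAW_OLLAMA_URL%/}/api/tags"
echo "check reachability with: curl -s $TAGS_URL"
```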
Step 4: Restart OpenClaw
docker compose down && docker compose up -d
Local models have some limitations compared to cloud models:
- Slower response times (depends on hardware — GPU strongly recommended)
- Smaller context windows (typically 4K-8K tokens vs 100K+ for cloud models)
- Lower reasoning capability (especially the smaller models)
- No built-in tool calling in some models (OpenClaw works around this with prompt engineering)
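The smaller context window is the limitation you'll hit first. A rough way to reason about it is the common ~4-characters-per-token heuristic (an approximation, not a real tokenizer):

```shell
# Rough sketch: trim input to fit an 8K-token local context window,
# using the ~4 chars/token heuristic (approximate; real tokenizers differ).
MAX_TOKENS=8192
MAX_CHARS=$((MAX_TOKENS * 4))
PROMPT="$(printf 'x%.0s' $(seq 1 40000))"   # 40,000 chars: over budget
TRIMMED="$(printf '%s' "$PROMPT" | head -c "$MAX_CHARS")"
echo "kept ${#TRIMMED} of ${#PROMPT} chars"
```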
For a detailed guide on the best local models, see our Best Ollama Models for OpenClaw guide.
How Does Multi-Model Routing Work?
Multi-model routing lets you use different models for different tasks. Instead of picking one model for everything, you assign models based on what each is best at. This optimizes both cost and quality.
Configure routing in your .env file:
# Primary model (used for conversations and complex tasks)
OPENCLAW_MODEL_PROVIDER=anthropic
OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514
OPENCLAW_ANTHROPIC_API_KEY=sk-ant-your-key
# Secondary model (used for simple tasks, cheaper)
OPENCLAW_SECONDARY_PROVIDER=openai
OPENCLAW_SECONDARY_MODEL=gpt-4o-mini
OPENCLAW_OPENAI_API_KEY=sk-your-key
# Local model (used for classification and simple lookups)
OPENCLAW_TERTIARY_PROVIDER=ollama
OPENCLAW_TERTIARY_MODEL=llama3.1:8b
OPENCLAW_OLLAMA_URL=http://localhost:11434
Then define routing rules through the web UI (Settings → Model Routing) or in your configuration:
{
"routing": {
"conversation": "primary",
"web_search": "secondary",
"classification": "tertiary",
"content_generation": "primary",
"data_extraction": "secondary"
}
}
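Conceptually, the routing table is just a task-type → tier lookup. The same mapping as a shell function (illustrative only — this is not OpenClaw's internal mechanism):

```shell
# Illustration of the routing table above as a task -> tier lookup.
route_for() {
  case "$1" in
    conversation|content_generation) echo primary ;;
    web_search|data_extraction)      echo secondary ;;
    classification)                  echo tertiary ;;
    *)                               echo primary ;;   # unknown tasks fall back to primary
  esac
}
echo "classification -> $(route_for classification)"
```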
Common routing strategies:
- Cost optimization: Use Claude for customer-facing conversations (quality matters), GPT-4o-mini for background tasks (speed and cost matter), and Ollama for simple classification (free).
- Performance optimization: Use whichever model is fastest for real-time conversations, and the most capable model for complex reasoning tasks that can tolerate latency.
- Privacy optimization: Use Ollama for tasks involving sensitive data (no data leaves your server), and cloud models for general-purpose tasks.
What Is the Best Model for OpenClaw?
There is no single best model — it depends on your use case, budget, and priorities:
Best overall quality: Claude Sonnet 4 or GPT-5.4. Both deliver excellent reasoning, instruction following, and conversation quality. Claude tends to be better at nuanced judgment calls; GPT-5.4 tends to be faster.
Best value: GPT-4o-mini. Dramatically cheaper than the frontier models while still being capable enough for most agent tasks. If cost is a concern, start here.
Best for free/local: Llama 3.1 8B via Ollama. The best quality-to-resource ratio for a local model. Needs at least 8GB of RAM (16GB recommended for comfortable operation alongside OpenClaw).
Best for multilingual: Qwen-max via DashScope. Particularly strong for Chinese, Japanese, Korean, and other Asian languages. Also competitive for European languages.
Best for multimodal: Gemini 2.5 Pro. If your agent needs to process images, analyze documents, or handle video, Gemini's multimodal capabilities are the strongest.
For most operators, we recommend starting with Claude or GPT-5.4 as the primary model, adding GPT-4o-mini as a secondary model for simple tasks, and optionally adding Ollama for free local processing. This three-tier setup gives you high quality, cost efficiency, and privacy when needed.
The marketplace has pre-built personas that come with optimized model routing already configured. Browse the Marketplace to find the setup that's right for your business.