Chung Duy

Use OpenCode with a local LLM, not bad at all

Local LLM Coding Setup: LMStudio + OpenCode

A guide to setting up a local AI coding assistant using LMStudio and OpenCode — a solid alternative to Claude Code when you run out of daily usage.

1. Install LMStudio

Download and install from https://lmstudio.ai/
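
If you prefer the command line on macOS, LMStudio is also available through Homebrew. A minimal sketch, assuming the cask is named lm-studio (verify with brew search if the command fails):

# macOS / Homebrew alternative to the website download
# (cask name "lm-studio" is an assumption; check with: brew search lmstudio)
brew install --cask lm-studio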

2. Select a Model

Choose an appropriate model depending on your hardware. In this case, I chose Qwen3-Coder-Next-MLX-6bit because:

  • It fits within my available RAM
  • It's optimized for macOS with Apple Silicon (M4 chip)
  • It can leverage the M4 GPU

You may need to wait a bit for the model to fully download.
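
LMStudio also bundles an lms command-line tool. Assuming it is on your PATH after installing the app, you can list your downloaded models and their exact identifiers, which comes in handy again in Step 5:

# List downloaded models and the identifiers LMStudio knows them by
lms ls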

3. Load and Configure the Model

Load the model you selected in Step 2 (e.g., Qwen3-Coder-Next-MLX-6bit) and configure the following:

Setting          Value
Temperature      1.0
Context Length   80000 (tokens)

⚠️ Do not leave the context length at the default 16000 — it's too small for a coding agent that pulls whole files into the prompt.
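
One detail worth spelling out: OpenCode talks to LMStudio through its OpenAI-compatible local server, so start the server in LMStudio (it listens on port 1234 by default). A quick sanity check that it is up and serving your model:

# Should return a JSON list that includes your loaded model
curl http://127.0.0.1:1234/v1/models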

4. Install OpenCode

Install from https://opencode.ai/
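
The site documents a few install methods; two common ones are sketched below (double-check on opencode.ai in case they have changed):

# Install script
curl -fsSL https://opencode.ai/install | bash

# Or via npm
npm install -g opencode-ai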

5. Configure OpenCode to Use LMStudio

Open the config file at ~/.config/opencode/opencode.jsonc (create it if it doesn't exist) and paste the following:

{
  "$schema": "https://opencode.ai/config.json",
  "theme": "tokyonight",
  "disabled_providers": [],
  "provider": {
    "localllm": {
      "name": "Local LLM",
      "npm": "@ai-sdk/openai-compatible",
      "models": {
        "qwen3-coder-next-mlx": {
          "name": "Qwen3-Coder-Next"
        }
      },
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      }
    }
  }
}

⚠️ Make sure the key "qwen3-coder-next-mlx" matches the model identifier in LMStudio exactly; otherwise you'll get a "can not load model..." error.
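
Before launching OpenCode, you can confirm the key is right by sending a minimal chat completion straight to the LMStudio endpoint; if the "model" value doesn't match, this request fails with the same load error:

# Smoke test: the "model" field must match the key in opencode.jsonc
curl http://127.0.0.1:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder-next-mlx", "messages": [{"role": "user", "content": "Hello"}]}'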

6. Run OpenCode

Open a new terminal, navigate to your project directory, and run:

opencode

For example, try a prompt like "help me understand the codebase". While OpenCode is running, you can watch the LMStudio server log to confirm the requests are really hitting the local model.
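
If you prefer the terminal to the LMStudio UI for watching those logs, the lms CLI can stream them. A sketch assuming the current subcommand name (check lms --help if it errors):

# Stream LMStudio server logs while OpenCode works
lms log stream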

7. Results

I tested it on a demo project, and the results are not bad at all compared to Sonnet 4.5. More testing on larger projects is needed, but the output quality already makes it a worthwhile alternative:

  • 🔄 Use as a fallback when you run out of daily Claude Code usage
  • 💡 Explore other use cases where a local LLM fits your workflow
  • 💰 Zero API cost — everything runs locally
