
A. S. Md. Ferdousul Haque

Opencode for Agentic Development with Local LLMs

Agentic development is rapidly transforming the way developers design, build, and ship software. Tools like Opencode let developers pair powerful local LLMs with intelligent agents to automate coding tasks, refactor large codebases, and accelerate development—all while keeping data private and within your own machine.

If you want to get started with Opencode using local LLMs (such as Llama, Mistral, Qwen, DeepSeek, or Gemma), here's a simple, practical guide. But before diving in, a quick word on why.

Why OpenCode?

  • Agentic workflows – AI agents that can modify your codebase intelligently.
  • Local-first development – Integrate your own LLM running on GPU or CPU.
  • Extensibility – Bring your own models, tools, and workflows.
  • Security & Privacy – No proprietary code leaves your machine.

Pre-requisites:

  • Ollama
  • Ghostty
  • Opencode

Install and Run a Local LLM

  • Go to Ollama and follow the installation steps for your OS.
  • Pick an LLM that supports agentic (tool-calling) workflows, e.g. qwen3:8b, llama3.1:8b, and so on. Check the model list on the Ollama site.
  • Run the following to pull the Qwen model to your local environment:
ollama pull qwen3:8b
ollama list
ollama run qwen3:8b
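Before wiring the model into opencode, it helps to confirm the Ollama server is actually reachable on its default port (11434). A small sketch, assuming `curl` is available:

```shell
# Check whether the Ollama server is listening on its default port (11434).
# /api/tags returns the list of locally pulled models.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  status="up"
else
  status="not reachable"
fi
echo "Ollama server is $status on localhost:11434"
```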
  • Now apply the following tweak to increase the context window so agents can work properly.
ollama run <model_name>
>>> /set parameter num_ctx 32768
>>> /save <model_name>
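The same context-window tweak can also be baked into a new model tag non-interactively with a Modelfile, so you don't need to repeat `/set` after every pull. A sketch (the tag `qwen3-8b-32k` is an arbitrary name): save the two lines below as `Modelfile`, then run `ollama create qwen3-8b-32k -f Modelfile`.

```
FROM qwen3:8b
PARAMETER num_ctx 32768
```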

Install Ghostty

Opencode works in several terminal emulators. I prefer Ghostty, which is simple while still offering a good UI/UX. Follow the instructions on the Ghostty site to install it.

Install OpenCode

Follow the installation instructions on the OpenCode site.

Configuration

Once the pre-requisite steps are done, it's time to configure. First, we need to add the LLM to the OpenCode config file opencode.json, located at ~/.config/opencode/opencode.json.
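If the config file doesn't exist yet, you can create it first (a sketch, assuming the default config location under your home directory):

```shell
# Create the opencode config directory and an empty config file if missing.
mkdir -p "$HOME/.config/opencode"
touch "$HOME/.config/opencode/opencode.json"
```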

For more opencode providers, check here

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3:8b": {
          "name": "qwen3:8b"
        }
      }
    }
  }
}
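You can register more than one local model under the same provider. Each key under models must match a tag that ollama list shows; the llama3.1:8b entry below is just an illustrative example of what the models section would look like:

```json
"models": {
  "qwen3:8b": { "name": "qwen3:8b" },
  "llama3.1:8b": { "name": "llama3.1:8b" }
}
```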

Once this is done, opencode is ready for action.

Execution

Now let's build something. Follow the steps:

  • Open the Ghostty terminal
  • Create a directory for the application
  • Run opencode inside the directory
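The steps above can be sketched as follows (the directory name is arbitrary; the opencode line is commented out here because it launches an interactive TUI):

```shell
# Create a fresh project directory and move into it.
mkdir -p tic-tac-toe-demo
cd tic-tac-toe-demo
# Launch the opencode TUI from inside the project directory:
# opencode
```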

OpenCode in Ghostty

  • Select the model from /models

Select Local Model

  • Press TAB to switch to BUILD agent mode
  • Enter a prompt to generate code for new features or to fix a bug in the application
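For example, a first prompt might look like this (just an illustration; any feature description works):

```
Build a browser-based tic-tac-toe game in a single index.html file,
with a two-player mode and a reset button.
```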

Final Product

I just built a tic-tac-toe game using the local LLM, although my CPUs are burning now 🔥 Play here

Tic Tac Toe Game

Benefits of Using Local LLMs

  • Zero data leakage across the internet
  • Better cost efficiency—no API billing
  • Unlimited customization of models
  • Offline development
  • Faster iteration with GPU acceleration

Setting up Opencode with a local LLM unlocks a powerful, private, and fully autonomous coding partner. Whether you're building services, refactoring monoliths, or improving developer productivity, agentic development gives you a major edge.

If you're working with large codebases or exploring AI-powered software engineering, this setup is one of the best ways to get started.
