<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: A. S. Md. Ferdousul Haque</title>
    <description>The latest articles on DEV Community by A. S. Md. Ferdousul Haque (@ferdousulhaque).</description>
    <link>https://dev.to/ferdousulhaque</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F730961%2F55226b6e-2854-4529-b29f-6236e32ca25d.png</url>
      <title>DEV Community: A. S. Md. Ferdousul Haque</title>
      <link>https://dev.to/ferdousulhaque</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ferdousulhaque"/>
    <language>en</language>
    <item>
      <title>Opencode for Agentic Development with Local LLMs</title>
      <dc:creator>A. S. Md. Ferdousul Haque</dc:creator>
      <pubDate>Sun, 23 Nov 2025 12:48:22 +0000</pubDate>
      <link>https://dev.to/ferdousulhaque/opencode-for-agentic-development-with-local-llms-2h4k</link>
      <guid>https://dev.to/ferdousulhaque/opencode-for-agentic-development-with-local-llms-2h4k</guid>
      <description>&lt;p&gt;Agentic development is rapidly transforming the way developers design, build, and ship software. Tools like Opencode let developers pair powerful local LLMs with intelligent agents to automate coding tasks, refactor large codebases, and accelerate development—all while keeping data private and within your own machine.&lt;/p&gt;

&lt;p&gt;If you want to get started with Opencode using local LLMs (like Llama, Mistral, Qwen, DeepSeek, or Gemma), here’s a simple, practical guide. But first, let’s look at why OpenCode is worth using.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why OpenCode?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agentic workflows&lt;/strong&gt; – AI agents that can modify your codebase intelligently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local-first development&lt;/strong&gt; – Integrate your own LLM running on GPU or CPU.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt; – Bring your own models, tools, and workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Privacy&lt;/strong&gt; – No proprietary code leaves your machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ollama&lt;/li&gt;
&lt;li&gt;Ghostty&lt;/li&gt;
&lt;li&gt;Opencode&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Install and Run a Local LLM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; and follow the steps to install on your OS&lt;/li&gt;
&lt;li&gt;Pick a model that supports agentic (tool-calling) workflows, e.g. qwen3:8b or llama3.1:8b; check the Ollama site for options.&lt;/li&gt;
&lt;li&gt;Run the following to pull and start the Qwen model in your local environment:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama pull qwen3:8b
ollama list
ollama run qwen3:8b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now apply the following tweak to increase the context window so agents have enough room to work:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run &amp;lt;model_name&amp;gt;
&amp;gt;&amp;gt;&amp;gt; /set parameter num_ctx 32768
&amp;gt;&amp;gt;&amp;gt; /save &amp;lt;model_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
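
&lt;p&gt;Alternatively, the larger context window can be baked into its own model tag via a Modelfile, so it persists without re-running &lt;code&gt;/set&lt;/code&gt; (a sketch using standard Ollama Modelfile syntax; the &lt;code&gt;qwen3-32k&lt;/code&gt; tag name is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Modelfile
FROM qwen3:8b
PARAMETER num_ctx 32768
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then build and run it with &lt;code&gt;ollama create qwen3-32k -f Modelfile&lt;/code&gt; followed by &lt;code&gt;ollama run qwen3-32k&lt;/code&gt;.&lt;/p&gt;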



&lt;h3&gt;
  
  
  Install Ghostty
&lt;/h3&gt;

&lt;p&gt;OpenCode runs in many terminal emulators; I prefer Ghostty, which is very simple while offering a good UI/UX. Follow the instructions at &lt;a href="https://ghostty.org/docs/install/binary" rel="noopener noreferrer"&gt;Ghostty&lt;/a&gt; to install it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install OpenCode
&lt;/h3&gt;

&lt;p&gt;Follow instructions on &lt;a href="https://opencode.ai/docs/" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; to install.&lt;/p&gt;
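
&lt;p&gt;For reference, the install script from the OpenCode docs looks like the following (a sketch — verify against the docs linked above, since install commands can change):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://opencode.ai/install | bash
opencode --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;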

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;Once the prerequisites are in place, the execution part begins. First, we need to add the LLM to the OpenCode config file &lt;code&gt;opencode.json&lt;/code&gt;, located at &lt;code&gt;~/.config/opencode/opencode.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For more OpenCode providers, check &lt;a href="https://opencode.ai/docs/providers/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3:8b": {
          "name": "qwen3:8b"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
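
&lt;p&gt;Before launching OpenCode, you can sanity-check that Ollama’s OpenAI-compatible endpoint (the &lt;code&gt;baseURL&lt;/code&gt; above) is reachable — a quick check, assuming Ollama’s default port 11434:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:11434/v1/models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If Ollama is running, this returns a JSON list that should include the model named in &lt;code&gt;opencode.json&lt;/code&gt;.&lt;/p&gt;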



&lt;p&gt;Once this is done, OpenCode is ready for action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution
&lt;/h2&gt;

&lt;p&gt;Now let's build something. Follow the steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the ghostty terminal&lt;/li&gt;
&lt;li&gt;Create a directory for the application&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;opencode&lt;/code&gt; inside the directory&lt;/li&gt;
&lt;/ul&gt;
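
&lt;p&gt;In the terminal, the steps above look like this (using a hypothetical &lt;code&gt;tic-tac-toe&lt;/code&gt; project directory as an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir tic-tac-toe
cd tic-tac-toe
opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;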

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfaumxulwbqwc02mq84t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfaumxulwbqwc02mq84t.png" alt="OpenCode in Ghostty" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the model from &lt;code&gt;/models&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhuqf1f806b7xqrtixzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhuqf1f806b7xqrtixzu.png" alt="Select Local Model" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Press &lt;code&gt;TAB&lt;/code&gt; to switch to &lt;code&gt;BUILD AGENT&lt;/code&gt; mode&lt;/li&gt;
&lt;li&gt;Enter a prompt to generate code for a new feature or to fix a bug in the application&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Product
&lt;/h2&gt;

&lt;p&gt;I just built a tic-tac-toe game using the local LLM, although my CPUs are burning now 🔥 &lt;a href="https://ferdousulhaque.github.io/tic-tac-toe/" rel="noopener noreferrer"&gt;Play it here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m8s5ynfnvt4l8ghpg72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m8s5ynfnvt4l8ghpg72.png" alt="Tic Tac Toe Game" width="800" height="837"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using Local LLMs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Zero data leakage across the internet&lt;/li&gt;
&lt;li&gt;Better cost efficiency—no API billing&lt;/li&gt;
&lt;li&gt;Unlimited customization of models&lt;/li&gt;
&lt;li&gt;Offline development&lt;/li&gt;
&lt;li&gt;Faster iteration with GPU acceleration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting up Opencode with a local LLM unlocks a powerful, private, and fully autonomous coding partner. Whether you're building services, refactoring monoliths, or improving developer productivity, agentic development gives you a major edge.&lt;/p&gt;

&lt;p&gt;If you're working with large codebases or exploring AI-powered software engineering, this setup is one of the best ways to get started.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>privacy</category>
    </item>
  </channel>
</rss>
