<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AI monkey</title>
    <description>The latest articles on DEV Community by AI monkey (@aimonkey).</description>
    <link>https://dev.to/aimonkey</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3890497%2Fb60cb000-2d7b-4bce-8ef3-0b9599ed6ef7.png</url>
      <title>DEV Community: AI monkey</title>
      <link>https://dev.to/aimonkey</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aimonkey"/>
    <language>en</language>
    <item>
      <title>How to Install Ollama on Linux and Windows: Complete Setup Guide</title>
      <dc:creator>AI monkey</dc:creator>
      <pubDate>Tue, 21 Apr 2026 09:29:48 +0000</pubDate>
      <link>https://dev.to/aimonkey/how-to-install-ollama-on-linux-and-windows-complete-setup-guide-4c7c</link>
      <guid>https://dev.to/aimonkey/how-to-install-ollama-on-linux-and-windows-complete-setup-guide-4c7c</guid>
      <description>&lt;p&gt;Running large language models locally has never been easier thanks to Ollama. It allows you to download, run, and manage LLMs on your own machine with minimal configuration. In this guide, you’ll learn how to install Ollama on both Linux and Windows, configure it properly, and run your first model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 What is Ollama?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ollama is a lightweight runtime that lets you run open-source LLMs locally (like Llama, Mistral, Gemma, and others). It handles model downloading, optimization, and inference through a simple CLI and API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key benefits:&lt;/strong&gt;&lt;br&gt;
Runs models locally (privacy-first)&lt;br&gt;
Simple CLI interface&lt;br&gt;
Supports multiple LLMs&lt;br&gt;
Works on Linux, Windows, and macOS&lt;br&gt;
Built-in model management&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🐧 How to Install Ollama on Linux&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. System Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before installing, make sure you have:&lt;/p&gt;

&lt;p&gt;Linux distro (Ubuntu recommended)&lt;br&gt;
64-bit system&lt;br&gt;
At least 8GB RAM (16GB+ recommended for larger models)&lt;br&gt;
GPU optional (NVIDIA improves performance)&lt;/p&gt;
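
&lt;p&gt;A quick way to check these prerequisites with standard Linux tools (nothing Ollama-specific):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;free -h      # total and available RAM
uname -m     # should print x86_64 or aarch64 for a 64-bit system
nvidia-smi   # only present if NVIDIA drivers are installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;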

&lt;p&gt;&lt;strong&gt;2. Install via official script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script will:&lt;/p&gt;

&lt;p&gt;Download Ollama&lt;br&gt;
Install binaries&lt;br&gt;
Set up system service (if supported)&lt;/p&gt;
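
&lt;p&gt;On systemd-based distros such as Ubuntu, the script registers Ollama as a background service; you can confirm it with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl status ollama          # check the service the installer set up
sudo systemctl restart ollama    # restart it if something misbehaves
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;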

&lt;p&gt;&lt;strong&gt;3. Verify installation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If installed correctly, you should see version output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Run your first model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will:&lt;/p&gt;

&lt;p&gt;Download the model (first run only)&lt;br&gt;
Start interactive chat session&lt;/p&gt;
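
&lt;p&gt;Inside the chat session, Ollama also accepts a few slash commands (typed at the &gt;&gt;&gt; prompt):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/?     # list the available session commands
/bye   # exit the interactive session
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;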

&lt;p&gt;&lt;strong&gt;5. Useful Linux commands&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama list        &lt;span class="c"&gt;# show installed models&lt;/span&gt;
ollama pull mistral &lt;span class="c"&gt;# download model&lt;/span&gt;
ollama &lt;span class="nb"&gt;rm &lt;/span&gt;llama3    &lt;span class="c"&gt;# remove model&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
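
&lt;p&gt;Two more management commands worth knowing (available in recent Ollama versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama ps           # show models currently loaded in memory
ollama show llama3  # print a model's parameters and template
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;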


&lt;p&gt;&lt;strong&gt;🪟 How to Install Ollama on Windows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Windows installation is slightly different: you can use either the native app or WSL2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1: Native Windows Installation (Recommended)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Download installer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to:&lt;br&gt;
👉 &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;https://ollama.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the Windows installer (.exe).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Install&lt;/strong&gt;&lt;br&gt;
Run the installer&lt;br&gt;
Follow setup wizard&lt;br&gt;
Ollama will install system services automatically&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Verify installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Run a model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Option 2: Install via WSL2 (Advanced users)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want Linux-like performance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Enable WSL2&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl &lt;span class="nt"&gt;--install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Install Ubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From Microsoft Store, install Ubuntu.&lt;/p&gt;
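
&lt;p&gt;Alternatively, the same PowerShell command can install the distribution directly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl --install -d Ubuntu   # installs WSL2 with Ubuntu in one step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;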

&lt;p&gt;&lt;strong&gt;3. Install Ollama inside WSL&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Run model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run mistral
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;⚡ Running Models with Ollama&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once installed, you can run different models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3
ollama run mistral
ollama run gemma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also pass prompts directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3 &lt;span class="s2"&gt;"Explain quantum computing in simple terms"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
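
&lt;p&gt;You can also pipe text in from a file, which is convenient for summarization. A minimal sketch (notes.txt stands in for any file of your own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cat notes.txt | ollama run llama3 "Summarize the following text:"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;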



&lt;p&gt;&lt;strong&gt;🔌 Using Ollama API (Local AI Server)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ollama automatically runs a local API server on port 11434.&lt;/p&gt;

&lt;p&gt;Example request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;curl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;http://localhost:&lt;/span&gt;&lt;span class="mi"&gt;11434&lt;/span&gt;&lt;span class="err"&gt;/api/generate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-d&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"llama3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Write a blog about AI"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
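
&lt;p&gt;By default, /api/generate streams the answer back as a series of JSON lines. If you prefer a single JSON object (easier to script against), the API accepts a stream flag; a minimal variant of the same request:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Write a blog about AI",
  "stream": false
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;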



&lt;p&gt;&lt;strong&gt;This makes Ollama usable for:&lt;/strong&gt;&lt;br&gt;
apps&lt;br&gt;
bots&lt;br&gt;
automation&lt;br&gt;
coding assistants&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 Performance Tips&lt;/strong&gt;&lt;br&gt;
Use smaller models (3B–8B) for CPU-only machines&lt;br&gt;
Enable GPU acceleration on NVIDIA systems&lt;br&gt;
Close heavy apps to free RAM&lt;br&gt;
Use quantized models for better speed&lt;/p&gt;
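
&lt;p&gt;For example, on a CPU-only machine you might pull one of the smaller library models instead of a full-size one (tags change over time, so check the Ollama model library for current names):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull gemma:2b   # a ~2B-parameter model, much lighter than llama3
ollama run gemma:2b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;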

&lt;p&gt;&lt;strong&gt;🧩 Common Issues &amp;amp; Fixes&lt;/strong&gt;&lt;br&gt;
❌ Command not found: restart your terminal and check your PATH variable&lt;br&gt;
❌ Slow performance: switch to a smaller model and ensure GPU drivers are installed&lt;br&gt;
❌ Model download stuck: remove and re-pull the model with ollama rm &amp;lt;model&amp;gt; followed by ollama pull &amp;lt;model&amp;gt;&lt;/p&gt;
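
&lt;p&gt;For the "command not found" case, these standard shell commands confirm whether the binary is on your PATH (Linux/WSL; in Windows PowerShell, Get-Command ollama does the same job):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;which ollama   # prints the install path if the binary is visible
echo $PATH     # inspect PATH if the binary is not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;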

&lt;p&gt;&lt;strong&gt;🔥 Final Thoughts&lt;/strong&gt;&lt;br&gt;
Ollama is one of the easiest ways to run local AI models on Linux and Windows in 2026. Whether you're a developer, researcher, or AI enthusiast, it provides a simple yet powerful way to bring LLMs directly to your machine without relying on cloud APIs.&lt;/p&gt;

&lt;p&gt;If you're building AI tools or experimenting with local inference, Ollama is the fastest way to get started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source: &lt;a href="https://inferencerig.com/setup/how-to-install-ollama-on-linux-and-windows-complete-setup-guide/" rel="noopener noreferrer"&gt;https://inferencerig.com/setup/how-to-install-ollama-on-linux-and-windows-complete-setup-guide/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
