What is Ollama?
Ollama is a tool that lets you run large language models (LLMs) like Llama 2, Mistral, or DeepSeek directly on your computer. Think of it as a "local ChatGPT" that doesn't require an internet connection or cloud services. You can chat with AI, test ideas, or build apps without sharing data with third parties.
Why Run Ollama in WSL (Windows Subsystem for Linux)?
Most AI/ML tools (including Ollama) are optimized for Linux. Here’s why WSL is ideal for Ollama on Windows:
- **Native Linux Compatibility**: Ollama works best with Linux libraries and dependencies. WSL lets you run Linux commands and tools seamlessly on Windows, avoiding compatibility headaches.
- **Better Performance**: WSL 2 (the latest version) integrates tightly with Windows while offering near-native Linux speeds. This is crucial for running resource-heavy LLMs smoothly.
- **GPU Support**: Modern LLMs run faster on GPUs. WSL 2 supports GPU passthrough (e.g., NVIDIA CUDA), letting Ollama leverage your computer's graphics card for faster responses.
- **Easy Setup**: Installing Ollama in WSL is as simple as running a Linux command. No complex workarounds for Windows-specific issues.
- **Keep Your Windows Environment Clean**: Isolate Ollama's dependencies (such as Python packages) within WSL. No clutter in your main Windows system!
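If you want to confirm that GPU passthrough actually works before relying on it, a quick check from inside WSL is sketched below (it assumes the NVIDIA driver is installed on the Windows host, which is what exposes `nvidia-smi` inside WSL 2):

```shell
# Check whether WSL can see an NVIDIA GPU. If the Windows host driver is
# installed, WSL 2 exposes the nvidia-smi tool inside the Linux environment.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "No NVIDIA GPU detected; Ollama will fall back to the CPU"
fi
```

Ollama detects the GPU on its own at startup; this check just tells you ahead of time which path it will take.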
Installing Ollama (The Engine)
💡Ollama is the foundational component for running AI models.
Windows Users (WSL)
We'll leverage WSL to create a Linux environment within Windows.
- Open a "Terminal" or "Command Prompt" and run it as administrator. This is important for certain commands to function correctly.
- Type `wsl --install` and press Enter. This command will install the latest version of WSL with the Ubuntu distribution of Linux.
- After WSL is installed, you will need to restart your computer.
- When you log back in, a new terminal window will open and continue the installation. You will be prompted to create a Linux username and password. These are different from your Windows username and password.
- Your terminal is now running inside the Linux system.
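To double-check that the terminal really is inside Linux, you can print the kernel name and release; on WSL 2 the release string usually contains "microsoft":

```shell
# Print the kernel name and release. On WSL 2 this typically looks like
# "Linux 5.15.x-microsoft-standard-WSL2".
uname -sr
```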
Upgrade your WSL environment to the newest versions
- Type the following commands (one at a time, pressing Enter after each):
- `sudo apt update` (This command refreshes the package lists, also known as the "package cache", from the software repositories your system is configured to use. Think of it as updating your grocery store's price list, not buying the items themselves.) You will be asked for your password to run `sudo` (root role).
- `sudo apt upgrade -y` (This command upgrades all upgradable packages on your system to their newest versions, based on the information obtained by `apt update`. The `-y` flag automatically confirms the updates.)
Install Ollama from the WSL terminal
- Run `curl -fsSL https://ollama.com/install.sh | sh` to install Ollama.
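Once the script finishes, a quick sanity check confirms the binary landed on your PATH (the exact version printed will vary):

```shell
# Confirm the ollama binary is installed and report its version;
# print a hint if the install script needs to be re-run.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH; re-run the install script"
fi
```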
After Installation (All Users):
- The Ollama API service will be running in the background, listening on port 11434. This is the port that allows other programs to access it. You can check by opening your browser and navigating to `http://localhost:11434`.
- You can download different AI models using the command line. For example, to get the Deepseek-R1 model, you would type `ollama pull deepseek-r1:8b`. The downloaded model will be stored locally on your computer.
- To run the Deepseek-R1 model from the command line, type `ollama run deepseek-r1:8b`.
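Beyond the interactive `ollama run` prompt, other programs can talk to the service over HTTP on port 11434. As a sketch (assuming the service is running and `deepseek-r1:8b` has already been pulled), a single non-streaming generation request looks like:

```shell
# Send one prompt to the local Ollama REST API and request a complete
# (non-streaming) JSON response instead of a token stream.
payload='{"model": "deepseek-r1:8b", "prompt": "Why is the sky blue?", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$payload"
```

Setting `"stream": false` returns one JSON object with the full answer, which is easier to handle from scripts than the default streamed chunks.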
Mac and Linux Users
- Download the correct version of Ollama for your operating system from ollama.com and follow the installation instructions provided on the website.