
Surya Sekhar Datta

Originally published at Medium

How to run Ollama on Windows using WSL

Ever wanted to ask ChatGPT or Gemini something, but stopped, worrying about your private data? What if you could run your own LLM locally instead? That is exactly what Ollama lets you do. Follow along to learn how to run Ollama on Windows using the Windows Subsystem for Linux (WSL).

For steps on macOS, please refer to https://medium.com/@suryasekhar/how-to-run-ollama-on-macos-040d731ca3d3


1. Prerequisites

First, you need to have WSL installed on your system. To do that, execute:

wsl --install

This will prompt you to set a username and password for your new Linux system. After you are done with that, there are a few things you need to install.

If you already have Ubuntu installed in WSL, connect to it using:

wsl -d Ubuntu
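
Either way, you can confirm which distributions are installed and which WSL version they are running on with this command (run it from PowerShell or Command Prompt on the Windows side):

wsl --list --verbose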

Here’s everything you need to do now:

Add Docker's official GPG key:

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add the repository to Apt sources:

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install Docker:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Run these commands one by one. They set up the required command line tools and, most importantly, Docker, which will be useful later on.
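
Depending on how your WSL distro is configured, the Docker daemon may not start on its own. If a docker command complains that it cannot connect to the daemon, start the service manually and run a quick sanity check (hello-world is just a tiny test image):

sudo service docker start
sudo docker run hello-world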

After this is done, let us go ahead and install Ollama.

2. Installing Ollama

Now that you have installed WSL and logged in, you need to install Ollama. Head over to the Ollama website, copy the install command for Linux, and execute it in WSL:


curl -fsSL https://ollama.com/install.sh | sh

After it is successfully installed, head over to http://localhost:11434 in your browser to verify that Ollama is running. If successful, it should show something like this:

Ollama is running!
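
If you prefer staying in the terminal, you can hit the same endpoint with curl and you should get the same message back:

curl http://localhost:11434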

3. Pull Models

Ollama has its own CLI, and you can verify that it was installed successfully by running ollama -v. Ollama supports a lot of LLMs, and you can use any of them. The complete list is at https://ollama.com/library.
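
For example, you can check the installed version and list the models you have pulled so far (the list will be empty on a fresh install):

ollama -v
ollama list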

Let’s try ‘llama3’, the latest LLM from Meta at the time of writing. To install the model, we need to run the command:

ollama pull llama3


After that is done, you can also pull ‘llava’, a multimodal model that understands images as well as text.

ollama pull llava
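
Once llava is downloaded, you can ask it about a local image by including the file path in the prompt. The path below is just a placeholder; point it at any image inside your WSL filesystem:

ollama run llava "What is in this image? ./photo.jpg"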

4. Start models and chat

After all the installations are done, it is time to start our local LLM and have a chat! To run ‘llama3’, we have to execute the command:

ollama run llama3

Here you can ask anything. For example, this:

Screenshot: chatting with llama3 in the terminal
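
You can also skip the interactive session entirely and pass a one-off prompt straight from the command line; Ollama will print the answer and exit:

ollama run llama3 "Explain what WSL is in one sentence."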

Keep an eye on your machine’s performance. Since the model runs entirely on your own local resources, make sure not to stress it too much!
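
When you are done chatting, type the following at the prompt (or press Ctrl + d) to leave the session and free up those resources:

/bye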

5. Installing and using Open WebUI for an easy GUI

We have the functionality, but chatting with an LLM from the command line is a bit clunky, no?

Let’s fix that. For this, we need to run the docker image for open-webui. Execute the following command:

sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
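
If the page in the next step does not load, check that the container actually started and peek at its logs:

sudo docker ps --filter name=open-webui
sudo docker logs open-webui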

After it is done, head over to localhost:8080, which is the default port for Open WebUI. You will be greeted by a login screen. Click on “Sign Up” and enter your full name, email, and a new password. Don’t worry, these are not going anywhere; they are only stored locally. After all this is done, here is the screen that you are greeted with. Just like ChatGPT, isn’t it? But better.

Screenshot: the Open WebUI interface

Select your model from up top, and get started!

That’s all you need to know to run your own LLM locally. It’s as simple as that. Have fun!
