Snehal Rajeev Moon

Step-by-Step Guide: Running LLM Models with Ollama

Hello Artisan,

In today's blog post, we will learn about Ollama, its key features, and how to install and use it on different operating systems.

What is Ollama?

  • Ollama is an open-source tool that allows you to run Large Language Models (LLMs) on your local machine. It offers a vast collection of models and keeps your data private and secure, which makes it a popular choice among AI developers, researchers, and business owners who prioritize data confidentiality.
  • Ollama gives you full ownership of your data and avoids the potential risks of sending it to external servers.
  • Ollama works offline, which reduces latency and removes the dependency on external servers, making it faster and more reliable.

Features of Ollama:

1. AI model management: Ollama gives you full control over the models on your system, letting you download, run, and remove them easily. It also keeps track of the version of each model installed on your machine.

2. Command Line Interface (CLI): You use the CLI to pull, run, and manage LLM models locally. For users who prefer a more visual experience, Ollama also works with third-party graphical user interface (GUI) tools like Open WebUI.

3. Multi-platform support: Ollama is cross-platform, supporting Windows, Linux, and macOS, so it is easy to integrate into your existing workflows, no matter which operating system you use.

How to use Ollama on multiple platforms

In this section, we will see how to download, install, and run Ollama locally on each platform.

  • To download Ollama, visit its official website at https://ollama.com and download the installer for your preferred operating system.
  • The installation process on macOS is similar to Windows; on Linux, you install Ollama by running a single command.

I will walk you through the installation process for Windows, which you can follow similarly for macOS.

  • Click the download button for your preferred OS to download an executable file. Then, open the file to start the installation process.

  • To install it on Linux, open a terminal and run the following command:
curl -fsSL https://ollama.com/install.sh | sh

That's it! You have successfully installed Ollama. Its icon will appear in your system tray, showing that it is running.
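
To confirm that the installation worked, you can open a terminal (Command Prompt or PowerShell on Windows) and check the installed version:

ollama --version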

  • Now we will see how to download and use the different models provided by Ollama with the help of the Command Line Interface (CLI).

  • Open your terminal and follow these steps. You can browse the full list of LLM models that Ollama provides in its model library on the official website.

  1. ollama: lists all the available commands.
  2. ollama -v or ollama --version: displays the installed version of Ollama.
  3. ollama list: lists all the models installed on your system.
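
For example, a quick check of your local setup might look like this (the output will depend on what you have installed):

# show all available Ollama commands
ollama
# list the models currently installed on this machine
ollama list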

Now let's see how to install a model using Ollama.

An LLM model can be installed in two ways:

  1. ollama pull model_name - downloads the model to your system without running it.
  2. ollama run model_name - if the model is not already downloaded, it first pulls the model and then runs it.
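
For example, you could pull a model first and then confirm it is available locally before running it. The snippet below uses gemma2, the model we install in the next step:

# download the model without starting a chat session
ollama pull gemma2
# verify that the model now appears in your local list
ollama list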

We will install the gemma2 model on our system.
gemma2: Google Gemma 2 is a high-performing and efficient model available in three sizes: 2B, 9B, and 27B.

ollama run gemma2
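
Running ollama run gemma2 pulls the default variant. Since Gemma 2 also comes in 2B, 9B, and 27B sizes, you can select a specific one by appending a size tag. The tags below are what I would expect based on Ollama's usual naming, so double-check the model's page in the library before using them:

# run a specific Gemma 2 size instead of the default
ollama run gemma2:2b
ollama run gemma2:9b
ollama run gemma2:27b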

Running the model opens a prompt where you can type a message, as shown below:

>>> Send a message (/? for help)

  • Type your prompt here, and the model will return a response.
  • To exit the session, type /bye
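
An illustrative session might look like this (the model's actual answer will stream back in your terminal):

>>> Why is the sky blue?
(the model's response appears here)
>>> /bye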

You can now run any model provided by Ollama in the same way. Explore the available models and use the ones that fit your needs.
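
As mentioned under the model-management feature above, you can also remove a model you no longer need to free up disk space, for example:

# delete a downloaded model from your machine
ollama rm gemma2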

Conclusion:
We have explored Ollama, an open-source tool that lets you run LLM models locally instead of relying on cloud servers. Ollama keeps your data secure and private, and we have learned how to install it, download models, and run them on your local machine. It offers a simple, straightforward way to run LLMs directly on your system.

Happy Reading!
Happy Coding!

🦄 ❤️
