
Blacknight318

Posted on • Edited on • Originally published at saltyoldgeek.com

Setup Llama 3 using Ollama and Open-WebUI

Tested Hardware

Below is a list of hardware I've tested this setup on. In all cases things went reasonably well. The Lenovo is a little slow despite the RAM, and I'm looking at possibly adding an eGPU in the future. Both Apple-silicon Macs run great, though the 8GB of RAM on the Air means your MacBook may stutter or stall; in hindsight, if I'd done more research I would've gone for the 16GB version.

  • Lenovo M700 tiny
    • Intel(R) Core(TM) i7-6700
    • 32GB RAM
    • 500GB NVME Drive
    • Ubuntu 24.04 LTS (Noble Numbat)
  • MacBook Pro
    • M1 Processor
    • 16GB RAM
    • 500GB SSD
    • MacOS Sonoma 14.4.1
  • MacBook Air
    • M3 Processor
    • 8GB RAM
    • 256GB SSD
    • MacOS Sonoma 14.4.1

Setup

Now that we've looked at the hardware let's get started setting things up.

Ollama

  • Installation

    • For macOS, download and run the installer; that's it.
    • For Linux or WSL, run the following command (a quick version check follows below):
    curl -fsSL https://ollama.com/install.sh | sh
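
    To confirm the install took, you can run a quick version check on either platform (the exact output will vary with your Ollama release):
    ollama --version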
    
  • Now we'll want to pull down the Llama 3 model, which we can do with the following command:

  ollama pull llama3
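
  If you want to double-check the download, listing your local models and firing off a one-line test prompt is a quick sanity check (the prompt text here is just an example):

  ollama list
  ollama run llama3 "Say hello in one sentence."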
  • Running Ollama

    • If you're on macOS you should see a llama icon in the menu bar indicating it's running.
    • If you click the icon and it says restart to update, click that and you should be set.
    • For Linux, you'll want to run the following to restart the Ollama service:
    sudo systemctl restart ollama
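
    • Either way, you can confirm the Ollama API is up on its default port (11434) with a quick curl; it should answer with "Ollama is running".
    curl http://127.0.0.1:11434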
    

Open-WebUI

  • Prerequisites

    • Docker
    • For macOS, download and run the Docker Desktop app.
    • For Linux, I'd recommend using the convenience script with the commands below (see the verification step that follows):
      curl -fsSL https://get.docker.com -o get-docker.sh
      sudo sh get-docker.sh
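
    • Whichever route you take, it's worth verifying Docker works before moving on. The usermod line is optional; it lets you run docker without sudo once you log out and back in.
      docker --version
      sudo docker run --rm hello-world
      sudo usermod -aG docker $USER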
    
  • Install (for both Mac and Linux)

    • Here's the command to get a basic Open-WebUI up and running (note: this will take a few minutes):
    docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
    • You should now be able to go to http://server-ip:8080/ and set up an account; after that you're ready to start talking with your local LLM.
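
    • If the host-networking flag gives you trouble on macOS (Docker Desktop only gained host networking support recently), a port-mapped variant along these lines should work instead; this is my own sketch rather than part of the original command, and the UI would then be at http://localhost:3000/ rather than port 8080.
    docker run -d -p 3000:8080 -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://host.docker.internal:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main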

Troubleshooting

If you're concerned that something isn't running, here are a few things to check.

  • On macOS
    • Check that Ollama is running in the menu bar.
  • On Ubuntu and macOS

    • Run the following ps command to check that Ollama is running:
    ps -fe | grep ollama
    
    • Check that the Open-WebUI container is running with this command
    docker ps
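
    • If either of those checks comes up empty, the logs are the next place I'd look (the service and container names below match the setup above):
    journalctl -u ollama -n 50 --no-pager
    docker logs --tail 50 open-webui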
    

TLDR

Any M series MacBook or Mac Mini should be up to the task and near ChatGPT performance (provided you have 16GB of RAM or more). The Lenovo, on the other hand, is still decent, if a little slow; if I get the chance to test an eGPU I'll update this post. All in all this was a fun project, and I hope you found this article useful. If so, consider donating to the blog by clicking the Buy Me a Coffee button below. Until the next one, fair winds and following seas.
