Micheal Angelo
Keep Your AI Conversations Local: Open WebUI + Ollama Setup

Want Your AI to Stay Private? Run a Fully Local LLM with Open WebUI + Ollama

As LLMs become part of daily workflows, one question comes up more often:

Where does the data go?

Most cloud-based AI tools send prompts and responses to remote servers for processing.

For many use cases, that’s perfectly fine.

But for some kinds of content:

  • Sensitive code
  • Personal notes
  • Internal documentation
  • Experimental ideas

you may prefer not to send the data outside your machine.

This is where local LLM setups become useful.


🧠 What This Setup Provides

This setup creates a fully local ChatGPT-like experience:

  • Runs entirely on your machine
  • No external API calls
  • No data leaving your system
  • Modern chat interface
  • Model switching support

βš™οΈ Architecture Overview

```
Browser (Open WebUI)
        ↓
Docker Container (Open WebUI)
        ↓
Ollama API (localhost:11434)
        ↓
Local LLM Model (e.g., mistral)
```

Everything runs locally.


🧩 Components

1. Ollama

Runs LLM models locally and exposes an HTTP API on port 11434.
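You can talk to that API directly, without any UI. A minimal sketch, assuming the `mistral` model is pulled and `ollama serve` is running (`request_body` is just a hypothetical helper here; `/api/generate` is Ollama's completion endpoint):

```shell
# Build the JSON body Ollama's /api/generate endpoint expects.
request_body() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}
request_body mistral "Why is the sky blue?"
# With the server running, send it like this:
# curl -s http://127.0.0.1:11434/api/generate -d "$(request_body mistral 'Why is the sky blue?')"
```

This is the same API Open WebUI uses behind the scenes, which is why the whole stack stays on localhost.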

2. Open WebUI

Provides a ChatGPT-like interface with:

  • Chat history
  • Model selection
  • Clean UI

🔗 https://openwebui.com/


3. Docker

Runs Open WebUI in an isolated container.


πŸš€ Installation & Setup

1. Install Ollama

```
curl -fsSL https://ollama.com/install.sh | sh
```

2. Start Ollama

```
ollama serve
```

If you see:

```
address already in use
```

it simply means Ollama is already running, often as a background service set up by the installer.
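If you want to confirm what actually owns the port, a quick check (a sketch assuming a Linux system with `ss` from iproute2; the `port_owner` helper is just for illustration):

```shell
# Report what is listening on Ollama's default port (11434).
port_owner() {
  ss -ltnp 2>/dev/null | grep ":${1:-11434} " \
    || echo "nothing listening on ${1:-11434}"
}
port_owner 11434
```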


3. Pull a Model

```
ollama pull mistral
```

Check available models:

```
ollama list
```
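If you want just the model names for scripting, a small sketch (assuming `ollama list` prints a header row with NAME as its first column; `model_names` is a hypothetical helper):

```shell
# Strip the header row and keep only the first column (the model name).
model_names() {
  awk 'NR > 1 { print $1 }'
}
# Usage (requires ollama): ollama list | model_names
printf 'NAME\tID\tSIZE\nmistral:latest\tabc123\t4.1 GB\n' | model_names
# → mistral:latest
```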

4. Run Open WebUI

```
sudo docker run -d \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```

🌐 Access the Interface

Open your browser:

```
http://localhost:8080
```

You now have a local ChatGPT-style interface.


πŸ”— Important Fix (Docker Networking)

If Open WebUI cannot detect Ollama, make sure the container was started with:

```
--network=host
```

This allows the container to directly access:

```
http://127.0.0.1:11434
```

Without host networking, `127.0.0.1` inside the container refers to the container itself, not your machine, so the container cannot reach the Ollama API.
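If you would rather keep the container on Docker's default network, a port-mapped sketch along the lines of what the Open WebUI docs describe (the `host.docker.internal:host-gateway` alias lets the container reach the host; the UI then lives on port 3000 instead of 8080):

```shell
sudo docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```

Either approach works; `--network=host` is simpler, while port mapping keeps the container more isolated.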


▶️ Daily Usage

Start WebUI

```
sudo docker start open-webui
```

Stop WebUI

```
sudo docker stop open-webui
```

Check Models

```
ollama list
```

Run Model in Terminal

```
ollama run mistral
```

πŸ” Troubleshooting

Port already in use (11434)

Ollama is already running β€” no action required.


Model not visible in UI

```
sudo docker restart open-webui
```

Connection issue

Check:

```
curl http://127.0.0.1:11434
```
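A slightly more structured version of the same check, as a sketch (the `probe` helper is hypothetical; it distinguishes "API reachable" from "connection refused", which is the usual cause of a blank model list):

```shell
# Probe a URL and report whether anything answers within 2 seconds.
probe() {
  if curl -fsS --max-time 2 "$1" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
probe http://127.0.0.1:11434/api/tags
# If unreachable, the container's own logs are the next place to look:
# sudo docker logs --tail 50 open-webui
```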

🔒 Why This Matters

This setup ensures:

  • Prompts stay local
  • Files remain on your machine
  • No external logging or tracking
  • Full control over your environment

It is especially useful for:

  • Developers working with sensitive code
  • Offline workflows
  • Learning and experimentation
  • Privacy-conscious users

⚠️ Trade-offs

Local models are not identical to large cloud models.

Expect:

  • Somewhat lower reasoning capability
  • Slower responses, especially with CPU-only inference
  • Smaller context windows (depending on the model)

But for many use cases, they are more than sufficient.
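The context window, at least, is often adjustable per model. A sketch using an Ollama Modelfile (`num_ctx` is a standard Ollama parameter; the value 8192 and the name `mistral-8k` are just examples):

```
# Modelfile
FROM mistral
PARAMETER num_ctx 8192
```

Build it with `ollama create mistral-8k -f Modelfile`, and the new variant appears in the UI's model list. Note that a larger context window also means more memory use.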


⚑ Final Result

You now have:

  • A local LLM (e.g., Mistral)
  • A ChatGPT-like interface
  • A fully private AI environment
  • No dependency on external APIs

🧾 Quick Cheat Sheet

```
# Start WebUI
sudo docker start open-webui

# Open UI
http://localhost:8080

# Check models
ollama list

# Run model
ollama run mistral

# Stop WebUI
sudo docker stop open-webui
```

🏁 Final Thought

Cloud AI is powerful and convenient.

Local AI is controlled and private.

Both have their place.

This setup simply gives you the option.
