DEV Community

Jigin Vp

Getting Started with Open WebUI: A Self-Hosted AI Interface

Open WebUI is an MIT-licensed project whose goal is to provide “the best AI user interface” for self-hosted large language models. At its core it’s a web app (Svelte + TypeScript + Python backend) that talks to:

  • Ollama (local LLM runner)
  • OpenAI-compatible APIs (e.g. LMStudio, Mistral, GroqCloud via custom endpoints)
  • Custom pipelines (RAG, tool-use via the pipelines framework)

The result? You get chat, voice/video calls, document uploads, memory/contexts, and even a built-in “model builder” to craft your own agents—all in one place.
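Because any OpenAI-compatible endpoint works, wiring a backend in mostly means speaking the standard chat-completions request shape. Here's a minimal sketch of that payload (the base URL and model name are illustrative placeholders, not values Open WebUI ships with):

```python
import json

# Chat-completions request body as used by OpenAI-compatible APIs.
# The base URL and model name below are illustrative placeholders.
BASE_URL = "http://localhost:11434/v1"  # e.g. Ollama's OpenAI-compatible endpoint
payload = {
    "model": "llama3",                  # whatever model your runner serves
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# Open WebUI POSTs a body like this to {BASE_URL}/chat/completions on
# your behalf; here we just serialize it to show the shape.
body = json.dumps(payload)
```

Any backend that accepts this shape (LMStudio, Mistral, GroqCloud, Ollama's `/v1` endpoint) can be plugged in as a custom endpoint.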

Quickstart with Docker

The easiest way to stand up Open WebUI is with Docker. Here’s a GPU-enabled example:

docker run -d \
  -p 3000:8080 \
  --gpus=all \
  --add-host=host.docker.internal:host-gateway \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart=always \
  ghcr.io/open-webui/open-webui:ollama

If you don’t have a GPU, omit the --gpus=all flag:

docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart=always \
  ghcr.io/open-webui/open-webui:ollama

Once it’s running, point your browser at:
http://localhost:3000/auth

Sign up for an account (the first account you create automatically becomes the admin), and you’re off to the races!


Key Features at a Glance

  • Effortless Setup: Docker, Docker Compose, Helm/Kustomize—pick your poison.
  • Multi-Runner Support: Ollama + any OpenAI-compatible URL.
  • Granular Permissions: Create user groups, roles, and fine-grained ACLs.
  • Responsive Design: Desktop, tablet, mobile—all handled.
  • Voice & Video Calls: Hands-free chat with built-in WebRTC support.
  • Model Builder: Create, customize, and deploy new Ollama models via the UI.
  • Plugin Framework: Extend with custom pipelines, filters, and memory modules.
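Since Docker Compose is one of the supported setup paths, the CPU-only docker run from the quickstart translates roughly to a compose file like this (the service and volume names here are my own choices, not the project's canonical compose file):

```yaml
# docker-compose.yml — rough equivalent of the quickstart `docker run`
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ollama:/root/.ollama
      - open-webui:/app/backend/data
    restart: always

volumes:
  ollama:
  open-webui:
```

Then a single docker compose up -d brings the stack up with the same ports and persistent volumes.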

First Steps After Installation

Connect Your Runner
Go to Settings → Model Runners and point it at your Ollama endpoint (host.docker.internal:11434) or any OpenAI-compatible endpoint.
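Before wiring the runner into the UI, it helps to confirm Ollama is actually reachable. A small sketch of that check, assuming a default Ollama install on port 11434 (the URLs below are assumptions about that default, not Open WebUI settings):

```python
import urllib.request

# From inside the Open WebUI container, Ollama is reached through the
# host-gateway alias added with --add-host; from your host machine the
# same API is usually plain localhost. Both assume a default Ollama
# install listening on port 11434.
OLLAMA_FROM_CONTAINER = "http://host.docker.internal:11434"
OLLAMA_FROM_HOST = "http://localhost:11434"

# Ollama's /api/tags endpoint lists installed models — a cheap
# reachability check that needs no model loaded.
req = urllib.request.Request(OLLAMA_FROM_HOST + "/api/tags")

# Uncomment to actually probe a running Ollama:
# with urllib.request.urlopen(req, timeout=5) as resp:
#     print(resp.status)  # 200 when Ollama is up
```

If the probe fails from inside the container but works on the host, the --add-host=host.docker.internal:host-gateway flag from the quickstart is the usual culprit.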

Create a Workspace
Workspaces let you isolate data, users, and models per project or team.

Chat & Explore
Hit New Chat, pick a model, and start experimenting with prompts, file uploads, or voice calls.

Customization & Extensions

Open WebUI’s power is in its extensibility:

  • Custom Pipelines: Write Python plugins to add status emitters, word filters, or long-term memory.
  • Theming: Override the default Svelte styles with your own CSS or brand colors.
  • Chrome Extension: Browse your workspace from any tab.
  • Desktop App: Use the Electron-based client for a native-feel experience.
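To make the pipelines idea concrete, here is a minimal word-filter pipeline sketch. The class and method shape follow the examples in the open-webui/pipelines repository, but treat the exact hook signatures as assumptions and check the repo before deploying:

```python
# Minimal word-filter pipeline sketch. The Pipeline class shape below
# follows examples from the open-webui/pipelines repository; the exact
# hook signatures are assumptions — verify against the repo.
from typing import List


class Pipeline:
    def __init__(self):
        self.name = "Word Filter (sketch)"
        self.blocked = {"foo", "bar"}  # hypothetical blocklist

    async def on_startup(self):
        # Called once when the pipelines server loads this plugin.
        pass

    async def on_shutdown(self):
        pass

    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> str:
        # Redact blocked words before the message reaches the model.
        words = [("*" * len(w)) if w.lower() in self.blocked else w
                 for w in user_message.split()]
        return " ".join(words)
```

Here pipe() simply returns the redacted prompt; a real filter would forward the cleaned message on to the underlying model instead.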

Check out the docs for guides on each of these—there’s even a community-maintained list of “extensions you must try.”

Real-World Use Cases

  • Internal AI Assistant: Host a Slack-integrated knowledge bot on your own servers—no third party needed.
  • Research & Development: Experiment with new LLMs in a controlled, offline environment.
  • Customer Support: Ship a branded, self-hosted help-desk chatbot with RAG over your own docs.
  • Education: Give students hands-on experience with AI without exposing them to external APIs.

Caveats & Tips

  • Resource Needs: Large models can be memory-hungry. Make sure your host has enough RAM/VRAM.
  • Security: If exposing to the internet, sit behind an authenticated reverse-proxy (e.g., Nginx with OAuth).
  • Backups: Data lives in the open-webui volume—back it up regularly to avoid data loss.
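For the security tip above, a reverse-proxy sketch in Nginx might look like this (domain and certificate paths are placeholders; the WebSocket upgrade headers matter because the UI streams responses):

```nginx
# Hypothetical Nginx reverse-proxy sketch for exposing Open WebUI.
server {
    listen 443 ssl;
    server_name webui.example.com;   # placeholder domain

    ssl_certificate     /etc/ssl/certs/webui.pem;     # your cert paths
    ssl_certificate_key /etc/ssl/private/webui.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        # WebSocket upgrade so streamed chat responses keep working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Layer OAuth or basic auth on top of this (e.g. via oauth2-proxy or Nginx's auth modules) before exposing the port publicly.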

Wrapping Up

Open WebUI puts the power and flexibility of modern LLMs right in your hands—completely self-hosted, fully open source. Whether you’re running internal assistants, experimenting with new models, or building your own AI-driven products, it’s a fantastic starting point.

Give it a spin, join the Discord community, and unlock the full potential of offline AI!
Happy hacking! 🚀
