
Alex Spinov

Open WebUI Has a Free API — Self-Hosted ChatGPT Interface for Any LLM

Open WebUI: ChatGPT Interface for Your Own Models

Open WebUI (formerly Ollama WebUI) is a self-hosted web interface for LLMs. Connect it to Ollama, OpenAI, or any OpenAI-compatible API. Multi-user, RAG, web search, image generation — all in one interface.

Key Features

  • ChatGPT-like interface for any LLM
  • Multi-user with role-based access
  • RAG (upload documents, chat with them)
  • Web search integration
  • Image generation (DALL-E, Stable Diffusion)
  • Voice input/output
  • Model management
  • Prompt library

The Free API

Open WebUI exposes an OpenAI-compatible API:

# Chat completion (same as OpenAI format)
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'

# List available models
curl http://localhost:3000/api/models \
  -H "Authorization: Bearer YOUR_API_KEY"

# Upload document for RAG
curl -X POST http://localhost:3000/api/v1/files/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@document.pdf"

# Get chat history
curl http://localhost:3000/api/v1/chats/ \
  -H "Authorization: Bearer YOUR_API_KEY"
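Because the endpoint mirrors OpenAI's chat-completions format, any HTTP client works. Here is a minimal Python sketch using only the standard library, assuming Open WebUI runs on `localhost:3000`, your API key comes from Settings → Account, and `llama3` is a model your backend actually serves:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # adjust to your Open WebUI host

def build_chat_request(api_key, model, prompt):
    """Build an OpenAI-style chat completion request for Open WebUI."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def chat(api_key, model, prompt):
    """Send the request and extract the reply (OpenAI response schema)."""
    req = build_chat_request(api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("YOUR_API_KEY", "llama3", "Hello"))
```

The same compatibility means you can usually point an existing OpenAI SDK client at this base URL instead of writing raw requests.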

Docker Setup

# With Ollama on same machine
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# With OpenAI API
docker run -d -p 3000:8080 \
  -e OPENAI_API_KEY=sk-... \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
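After starting the container, the models endpoint from the API section doubles as a readiness probe: it answers only once the backend is up. A small Python sketch (host and key are placeholders):

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_openwebui(base_url, api_key, timeout=60):
    """Poll /api/models until Open WebUI answers, or raise on timeout."""
    deadline = time.monotonic() + timeout
    req = urllib.request.Request(
        f"{base_url}/api/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return json.loads(resp.read())  # available models
        except (urllib.error.URLError, OSError):
            time.sleep(2)  # container still starting; retry
    raise TimeoutError(f"Open WebUI not reachable at {base_url}")

if __name__ == "__main__":
    print(wait_for_openwebui("http://localhost:3000", "YOUR_API_KEY"))
```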

RAG Pipeline

# 1. Upload documents
curl -X POST http://localhost:3000/api/v1/files/ \
  -H "Authorization: Bearer $KEY" \
  -F "file=@docs.pdf"

# 2. Create knowledge base
curl -X POST http://localhost:3000/api/v1/knowledge/ \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "Company Docs"}'

# 3. Chat with context from your documents
# Use the web UI to select knowledge base in chat
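The first two steps can be scripted. This sketch uses only the endpoints shown above; the multipart body is assembled by hand to stay dependency-free, and the field names mirror the curl calls:

```python
import json
import urllib.request
import uuid

BASE_URL = "http://localhost:3000"

def build_upload_request(api_key, filename, content):
    """Multipart upload for POST /api/v1/files/ (mirrors curl -F 'file=@...')."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/files/",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )

def build_knowledge_request(api_key, name):
    """JSON body for POST /api/v1/knowledge/ (step 2 above)."""
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/knowledge/",
        data=json.dumps({"name": name}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    with open("docs.pdf", "rb") as f:
        upload = build_upload_request("YOUR_API_KEY", "docs.pdf", f.read())
    urllib.request.urlopen(upload)
    urllib.request.urlopen(build_knowledge_request("YOUR_API_KEY", "Company Docs"))
    # Step 3: select the knowledge base in the web UI when chatting.
```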

Real-World Use Case

A law firm needed AI assistance but could not send client data to OpenAI, so they deployed Open WebUI + Ollama on a local server. Lawyers upload case documents and chat with them via RAG; no data ever leaves the building, and the compliance team approved the setup.

Quick Start

docker run -d -p 3000:8080 -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main
# Open http://localhost:3000

Need automated AI data pipelines? Check out my tools on Apify or email spinov001@gmail.com for custom AI solutions.
