Open WebUI's quickstart is great for local dev. One command, it's running. But putting it on a real server for a team
requires a lot more — SSL, auth lockdown, WebSocket proxy, backups.
Here's the full production config I use.
## The stack
- Ollama — LLM runner
- Open WebUI — chat UI
- nginx — reverse proxy + SSL
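
To make the wiring concrete, here's a minimal compose sketch of that stack. Service names, the published port, and the volume layout are my assumptions; the official Open WebUI compose file differs in details. Binding Open WebUI to `127.0.0.1` keeps it reachable only through nginx:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:8080:8080"       # only nginx talks to this
    volumes:
      - openwebui:/app/backend/data # chats, users, uploads
volumes:
  ollama:
  openwebui:
```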
## The nginx config that trips everyone up
Open WebUI uses WebSockets for streaming. Without this, responses just hang:
```nginx
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 300s;
```
Also set `client_max_body_size 50M` — users will upload documents and images.
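
For context, here's roughly how those directives fit into a full server block. The domain, upstream port, and certificate paths are placeholders for your own setup:

```nginx
server {
    listen 443 ssl;
    server_name chat.example.com;               # placeholder

    ssl_certificate     /etc/letsencrypt/live/chat.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chat.example.com/privkey.pem;

    client_max_body_size 50M;

    location / {
        proxy_pass http://127.0.0.1:8080;       # Open WebUI container
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # WebSocket upgrade
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 300s;                # long streaming responses
    }
}
```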
## Lock down auth
By default Open WebUI allows anyone to sign up. For a team setup:
```env
WEBUI_AUTH=true
ENABLE_SIGNUP=false
```
Now only the admin can create accounts. Add SMTP config to send email invites.
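
While you're in the env file, it's worth pinning the session-signing key so logins survive container recreation. A small sketch — `WEBUI_SECRET_KEY` is a real Open WebUI variable, but the `.env` path is an assumption from my layout:

```shell
# Generate a strong signing key for Open WebUI's login tokens and
# persist it, so recreating the container doesn't log everyone out.
SECRET="$(openssl rand -hex 32)"   # 64 hex characters
echo "WEBUI_SECRET_KEY=${SECRET}" >> .env
```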
## GPU passthrough (NVIDIA)
In `docker-compose.yml`, uncomment:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```
Then install `nvidia-container-toolkit` and restart. Without this, Ollama runs on CPU — still works but much slower.
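
Before blaming Ollama for slow inference, a quick sanity check helps: if the host can't see the GPU, the container certainly won't. A hedged sketch:

```shell
# Check whether the host has a working NVIDIA driver at all.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_STATUS="gpu"
  # Next step (needs Docker + nvidia-container-toolkit installed):
  # docker run --rm --gpus all ubuntu nvidia-smi
else
  GPU_STATUS="cpu"   # Ollama will silently fall back to CPU
fi
echo "$GPU_STATUS"
```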
## Automated backups
```bash
docker run --rm \
  --volumes-from "$(docker compose ps -q openwebui)" \
  -v "$(pwd)/backups:/backup" \
  alpine \
  tar czf "/backup/openwebui_$(date +%Y%m%d).tar.gz" /app/backend/data
```
Add to cron: `0 2 * * * /opt/openwebui/scripts/backup.sh`
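
One gotcha with a daily cron job: the backups directory grows forever. A retention step I'd add to the end of `backup.sh` — the directory path and the 14-day window are my assumptions, adjust to taste:

```shell
# Prune backup archives older than 14 days so the backup
# directory doesn't grow without bound.
BACKUP_DIR="${BACKUP_DIR:-./backups}"
mkdir -p "$BACKUP_DIR"
find "$BACKUP_DIR" -name 'openwebui_*.tar.gz' -mtime +14 -delete
```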
## Recommended free VPS
Oracle Cloud Always Free — 4 ARM vCPUs / 24 GB RAM. Enough to run llama3.2 (3B) for a small team.
## Full kit
I packaged everything above into a ZIP — docker-compose, nginx config, backup + model scripts, .env.example.
Gumroad: https://peachjed.gumroad.com/l/cazucc