If you want to run local LLMs with a clean, containerized setup without Docker, this guide shows how to deploy DeepSeek R1 + Ollama + Open WebUI using Podman Compose.
This setup is ideal for:
- Local AI experimentation
- Developers who prefer rootless containers
- Teams standardizing on Podman instead of Docker
Why use Podman?
Podman is a good fit here because it is security-focused and closely aligned with Kubernetes: it runs containers rootless by default, does not rely on a central daemon, and therefore reduces the attack surface, which suits enterprise and CI/CD environments. It is also Docker-compatible at the command level, officially supported by Red Hat, and well suited to Linux servers and production workloads where security and compliance are a priority.
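As a quick illustration of that command-level compatibility, the usual Docker verbs map one-to-one (a minimal sketch; the alias line is optional):
# Podman accepts the same verbs as Docker; many teams simply alias it.
alias docker=podman

# Runs rootless and daemonless by default.
podman run --rm docker.io/library/alpine:latest echo "hello from a rootless container"

# Familiar inspection commands work unchanged.
podman ps -a
podman images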
Architecture Overview
The stack runs inside a single Podman Pod, meaning all containers share the same network namespace.
- Ollama → model runtime & API
- Open WebUI → chat-style web interface
- Model Loader → one-shot container to pull the model
- Volumes → persistent storage for models and app data
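For readers who want to see roughly what Podman Compose builds under the hood, here is a manual equivalent using plain podman commands. This is a sketch only (the pod name deepseek-pod is arbitrary); the compose file below remains the recommended path.
# Create a pod that publishes both ports; all containers inside share its network namespace.
podman pod create --name deepseek-pod -p 11434:11434 -p 3000:8080

# Model runtime & API
podman run -d --pod deepseek-pod --name ollama-deepseek \
  -v ollama-data:/root/.ollama:Z ollama/ollama:latest

# Chat-style web interface; inside the pod, Ollama is reachable on localhost
podman run -d --pod deepseek-pod --name open-webui \
  -e OLLAMA_BASE_URL=http://localhost:11434 \
  -v open-webui-data:/app/backend/data:Z ghcr.io/open-webui/open-webui:latest

# One-shot model pull
podman run --rm --pod deepseek-pod --entrypoint ollama ollama/ollama:latest pull deepseek-r1:1.5b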
Requirements
- Podman
- Podman Compose (podman compose or podman-compose)
- Free ports:
  - 11434 (Ollama API)
  - 3000 (Open WebUI)
On SELinux-enabled systems, volumes are mounted using :Z to avoid permission issues.
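A quick way to check that the prerequisites are in place (the ss command assumes a Linux host with iproute2; use lsof or netstat otherwise):
podman --version
podman compose version        # or: podman-compose --version

# Verify that nothing is already listening on the required ports
ss -ltn | grep -E ':(11434|3000)\b' || echo "ports 11434 and 3000 are free"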
podman-compose.yml
Here is the core compose file powering the stack:
services:
  # Model runtime & API
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-deepseek
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    volumes:
      - ollama-data:/root/.ollama:Z
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "ollama", "list"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Chat-style web interface
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    depends_on:
      - ollama
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY:-secret-key-change-me-in-production}
      - ENABLE_SIGNUP=${ENABLE_SIGNUP:-true}
    volumes:
      - open-webui-data:/app/backend/data:Z
    restart: unless-stopped

  # One-shot container that pulls the model
  model-loader:
    image: ollama/ollama:latest
    container_name: model-loader
    depends_on:
      - ollama
    environment:
      - OLLAMA_HOST=http://ollama:11434
    entrypoint: ["ollama"]
    command: ["pull", "deepseek-r1:1.5b"]
    restart: on-failure

volumes:
  ollama-data:
    driver: local
  open-webui-data:
    driver: local
Configuration
Create a .env file:
WEBUI_SECRET_KEY=change-me-to-a-long-random-secret
ENABLE_SIGNUP=true
Generate a secure secret:
openssl rand -hex 32
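To wire the generated value directly into the .env file in one step (a convenience sketch; adjust the path if your .env lives elsewhere):
# Writes a fresh secret and keeps signups enabled for the first run
cat > .env <<EOF
WEBUI_SECRET_KEY=$(openssl rand -hex 32)
ENABLE_SIGNUP=true
EOF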
Run the Stack
podman compose -f podman-compose.yml up -d
Check status:
podman ps
podman compose -f podman-compose.yml ps
Access:
- Open WebUI → http://localhost:3000
- Ollama API → http://localhost:11434
On first access, Open WebUI will prompt you to create the initial admin user (expected behavior).
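If you prefer to verify from the terminal before opening the browser, a simple status-code check works (a 200 or a redirect means the UI is up; this is just a sanity check, not part of the official setup):
curl -s -o /dev/null -w "Open WebUI HTTP status: %{http_code}\n" http://localhost:3000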
Validation
Verify Ollama:
curl http://localhost:11434
Expected output:
Ollama is running
List models:
curl http://localhost:11434/api/tags
If the model is missing, pull it manually:
podman exec -it ollama-deepseek ollama pull deepseek-r1:1.5b
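Once the model shows up, a quick end-to-end test through Ollama's /api/generate endpoint confirms inference actually works (the prompt is arbitrary; the model name matches the one pulled above):
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'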
Stop & Teardown
Stop containers:
podman compose stop
Remove containers (keep data):
podman compose down
Full cleanup (⚠️ deletes models and users):
podman compose down -v
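To double-check what persistent data exists before or after a cleanup, listing the named volumes is enough (depending on your compose provider, volume names may carry a project prefix):
podman volume ls
podman volume inspect ollama-data   # only if the volume still exists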
Security Notes
- Always change WEBUI_SECRET_KEY
- After creating the first admin user, disable public signups:
ENABLE_SIGNUP=false
Then recreate:
podman compose up -d --force-recreate
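To confirm the setting was applied to the running container (a quick check, assuming the container name from the compose file):
podman inspect open-webui --format '{{.Config.Env}}' | tr ' ' '\n' | grep ENABLE_SIGNUP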
Final Thoughts
Running LLMs locally doesn’t have to be messy.
With Podman + Ollama, you get a setup that feels:
- clean
- reproducible
- and production-adjacent
If you’re already using Podman for your workloads, this stack fits right in.
