Afu Tse (Chainiz)

Running DeepSeek R1 + Ollama + Open WebUI with Podman Compose

If you want to run local LLMs with a clean, containerized setup without Docker, this guide shows how to deploy DeepSeek R1 + Ollama + Open WebUI using Podman Compose.

This setup is ideal for:

  • Local AI experimentation
  • Developers who prefer rootless containers
  • Teams standardizing on Podman instead of Docker

Why use Podman?

Podman is a strong fit for this stack for a few reasons:

  • It runs containers rootless by default and does not rely on a daemon, which reduces the attack surface.
  • Its security model aligns well with Kubernetes and with enterprise and CI/CD environments.
  • Its CLI is Docker-compatible, so existing commands and habits carry over.
  • It is officially supported by Red Hat and well suited to Linux servers and production, especially when security and compliance are a priority.
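
To confirm that your setup is actually rootless, you can query Podman directly (this uses a standard podman info Go template; it prints true in a rootless setup):

podman info --format '{{.Host.Security.Rootless}}'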


Architecture Overview

The stack runs inside a single Podman Pod, meaning all containers share the same network namespace.

  • Ollama → model runtime & API
  • Open WebUI → chat-style web interface
  • Model Loader → one-shot container to pull the model
  • Volumes → persistent storage for models and app data

[Architecture diagram]
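
Once the stack is up, you can confirm that the containers were placed in a single pod (pod and container names may vary depending on your project directory name):

podman pod ps
podman ps --pod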


Requirements

  • Podman
  • Podman Compose (podman compose or podman-compose)
  • Free ports:
    • 11434 (Ollama API)
    • 3000 (Open WebUI)

On SELinux-enabled systems, the compose file mounts volumes with the :Z flag so Podman relabels them, avoiding permission issues.
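
A quick pre-flight check (version output will vary; the ss command assumes iproute2 is installed, which is standard on most Linux distributions):

podman --version
podman-compose --version
ss -ltn | grep -E ':(11434|3000)'   # no output means the ports are free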


podman-compose.yml

Here is the core compose file powering the stack:

services:
  # Model runtime and API server
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-deepseek
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    volumes:
      - ollama-data:/root/.ollama:Z
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "ollama", "list"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Chat-style web interface; reaches Ollama by service name over the shared network
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    depends_on:
      - ollama
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY:-secret-key-change-me-in-production}
      - ENABLE_SIGNUP=${ENABLE_SIGNUP:-true}
    volumes:
      - open-webui-data:/app/backend/data:Z
    restart: unless-stopped

  # One-shot container: pulls the model and exits; retried on failure until Ollama is reachable
  model-loader:
    image: ollama/ollama:latest
    container_name: model-loader
    depends_on:
      - ollama
    environment:
      - OLLAMA_HOST=http://ollama:11434
    entrypoint: ["ollama"]
    command: ["pull", "deepseek-r1:1.5b"]
    restart: on-failure

volumes:
  ollama-data:
    driver: local
  open-webui-data:
    driver: local


Configuration

Create a .env file:

WEBUI_SECRET_KEY=change-me-to-a-long-random-secret
ENABLE_SIGNUP=true

Generate a secure secret:

openssl rand -hex 32
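
Or generate the secret and write the .env in one step (assumes a POSIX shell; note this overwrites any existing .env):

printf 'WEBUI_SECRET_KEY=%s\nENABLE_SIGNUP=true\n' "$(openssl rand -hex 32)" > .env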

Run the Stack

podman compose -f podman-compose.yml up -d

Check status:

podman ps
podman compose -f podman-compose.yml ps
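
To watch the startup, including the model download performed by the model-loader container, follow the logs:

podman logs -f model-loader
podman compose -f podman-compose.yml logs -f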

Access:

Open WebUI is available at http://localhost:3000 and the Ollama API at http://localhost:11434. On first access, Open WebUI will prompt you to create the initial admin user; this is expected behavior.


Validation

Verify Ollama:

curl http://localhost:11434

Expected output:

Ollama is running

List models:

curl http://localhost:11434/api/tags
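
To print just the model names from that response (assumes jq is installed):

curl -s http://localhost:11434/api/tags | jq '.models[].name'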

If the model is missing, pull it manually:

podman exec -it ollama-deepseek ollama pull deepseek-r1:1.5b
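
Once the model is listed, you can smoke-test generation directly against the Ollama API (the first request may be slow while the model loads into memory):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'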

Stop & Teardown

Stop containers:

podman compose -f podman-compose.yml stop

Remove containers (keep data):

podman compose -f podman-compose.yml down
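
Named volumes survive a plain down. To confirm the model and app data are still there (podman-compose prefixes volume names with the project name, typically the directory name):

podman volume ls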

Full cleanup (⚠️ deletes models and users):

podman compose -f podman-compose.yml down -v

Security Notes

  • Always change WEBUI_SECRET_KEY from the default placeholder.
  • After creating the first admin user, disable public signups in .env:

ENABLE_SIGNUP=false

Then recreate:

podman compose -f podman-compose.yml up -d --force-recreate
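
To verify the new value took effect, inspect the recreated container's environment (a standard podman inspect Go template):

podman inspect open-webui --format '{{range .Config.Env}}{{println .}}{{end}}' | grep ENABLE_SIGNUP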

Final Thoughts

Running LLMs locally doesn’t have to be messy.
With Podman + Ollama, you get a setup that feels:

  • clean
  • reproducible
  • and production-adjacent

If you’re already using Podman for your workloads, this stack fits right in.
