Easily deploy and manage your own AI-powered, ChatGPT-like website using Nanocl, Ollama, and Open WebUI.
Overview
This guide will show you how to self-host an AI model using Nanocl, a lightweight container orchestration platform. By combining Nanocl with Ollama (for running large language models locally) and Open WebUI (for a user-friendly web interface), you can quickly set up your own private ChatGPT-like service.
📺 Watch the YouTube video tutorial
https://www.youtube.com/watch?v=xh5wB8J56N8
Stack Components
- Nanocl: Simple, efficient container orchestration for easy deployment and scaling.
- Ollama: Run large language models locally via a powerful API.
- Open WebUI: Modern web interface to interact with your AI model.
Prerequisites
Before you begin, ensure you have the following installed:
1. Docker
Install Docker by following the official guide for your Linux distribution.
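For example, on most Linux distributions you can use Docker's official convenience script (fine for a test machine; for production, prefer your distribution's package manager as described in the official guide):

```sh
# Download and run Docker's official convenience script
curl -fsSL https://get.docker.com | sh
```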
2. Nanocl
Install the Nanocl CLI with:
```sh
curl -fsSL https://download.next-hat.com/scripts/get-nanocl.sh | sh
```
Set up Nanocl's group and internal services:
```sh
sudo groupadd nanocl
sudo usermod -aG nanocl $USER
newgrp nanocl
nanocl install
```
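Once the installation finishes, a quick sanity check (if these subcommands differ in your Nanocl version, check `nanocl --help`):

```sh
# Confirm the CLI and daemon are reachable
nanocl version
# List cargoes (Nanocl's unit of deployment); the internal services should appear
nanocl cargo ls
```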
For more details, see the Nanocl documentation.
3. (Optional) Nvidia Container Toolkit
If you want GPU acceleration, follow the Nvidia container toolkit installation guide.
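In short, after installing the toolkit you register the Nvidia runtime with Docker and restart the daemon (these are the standard commands from Nvidia's guide):

```sh
# Register the Nvidia runtime with Docker, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```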
Step 1: Deploy Ollama with Nanocl
Create a file named ollama.Statefile.yml:
```yaml
ApiVersion: v0.17

Cargoes:
- Name: ollama
  Container:
    Image: docker.io/ollama/ollama:latest
    Hostname: ollama.local
    HostConfig:
      Binds:
      - ollama:/root/.ollama # Persist Ollama data
      Runtime: nvidia # Enable GPU support (optional)
      DeviceRequests:
      - Driver: nvidia
        Count: -1
        Capabilities: [[gpu]]
```
Deploy Ollama:
```sh
nanocl apply -s ollama.Statefile.yml
```
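Once applied, you can confirm the cargo started correctly by following its logs (the same `nanocl cargo logs` command is used again in Step 2):

```sh
# Follow Ollama's startup logs; Ctrl-C to stop
nanocl cargo logs ollama -f
```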
Step 2: Deploy Open WebUI with Nanocl
Create a file named openwebui.Statefile.yml:
```yaml
ApiVersion: v0.17

Cargoes:
- Name: open-webui
  Container:
    Image: ghcr.io/open-webui/open-webui:main
    Hostname: open-webui.local
    Env:
    - OLLAMA_BASE_URL=http://ollama.local:11434 # Connect to Ollama
    HostConfig:
      Binds:
      - open-webui:/app/backend/data # Persist WebUI data

Resources:
- Name: open-webui.local
  Kind: ncproxy.io/rule
  Data:
    Rules:
    - Domain: open-webui.local
      Network: All
      Locations:
      - Path: /
        Version: 1.1
        Headers:
        - Upgrade $http_upgrade
        - Connection "Upgrade"
        Target:
          Key: open-webui.global.c
          Port: 8080
```
Deploy Open WebUI:
```sh
nanocl apply -s openwebui.Statefile.yml
```
It will take a bit of time for Open WebUI to start up as it downloads necessary components.
You can see the download progress in the logs:
```sh
nanocl cargo logs open-webui -f
```
Step 3: Access Open WebUI
Add the following line to your /etc/hosts file to map the domain:
```
127.0.0.1 open-webui.local
```
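One way to append it from the terminal (or simply edit the file with your preferred editor):

```sh
# Append the domain mapping to /etc/hosts (requires sudo)
echo "127.0.0.1 open-webui.local" | sudo tee -a /etc/hosts
```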
Now, open your browser and go to http://open-webui.local. You should see the Open WebUI welcome screen.
1. Create Your Admin Account
Click Get Started to begin the setup process.
Fill in your details to create your admin account, then click Create Admin Account.
2. Download a Model
After logging in, click your avatar in the top right corner and select Admin Panel.
Navigate to Settings → Models. In the top right, click the download icon to open the model selection dialog.
For this example, select the gemma2:2b model and click Download.
Wait for the download to complete. The model will appear in your list of available models.
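Alternatively, you can pull the model from the command line by exec-ing into the Ollama container. This sketch assumes the container name follows the `<cargo>.global.c` convention seen in the proxy rule's Target Key above; verify the actual name with `docker ps`:

```sh
# Pull the model with the Ollama CLI inside the container
# (container name assumed from Nanocl's <name>.global.c convention)
docker exec -it ollama.global.c ollama pull gemma2:2b
```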
3. Start Chatting
Once the model is ready, create a new chat and say "Hi" to your AI model!
And that's it! You now have your own self-hosted AI model running with Nanocl, Ollama, and Open WebUI.