Kubernetes is powerful, but managing it can be overwhelming.
kubewall makes it simple with a single-binary, browser-based dashboard for real-time cluster monitoring and management.
Why use Ollama + kubewall?
Quick Answer: Security and Ease of Configuration
Ollama and kubewall run 100% locally on your system, which prevents data from leaking out and safeguards your cluster Secrets, ConfigMaps, and other configuration.
You are not tied to any third-party servers: it is your server, your infra, your system, your AI model.
Prerequisites
Install
- Download and install Ollama, which lets you run AI models locally. Installers for macOS, Linux, and Windows are available at https://ollama.com/download
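Once Ollama is installed, it helps to confirm the local server is running before pulling any models. A minimal sanity check, assuming the default install that serves a local API on port 11434:

```bash
# print the installed Ollama version
ollama --version

# the local API should respond with the list of downloaded models (empty at first)
curl http://127.0.0.1:11434/api/tags
```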
We will add the qwen3 model to our system so that Ollama can access it.
- We are using qwen3 in this case since it supports thinking, coding, and tool calling.
- Open your terminal and run this command:
ollama run qwen3
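The first run downloads the model, which can take a while. Once it finishes, here is a quick sketch to confirm the model is available locally and responds (the prompt text is just an illustration):

```bash
# qwen3 should now appear in the list of local models
ollama list

# send a one-off prompt without opening the interactive session
ollama run qwen3 "In one sentence, what does a Kubernetes Deployment do?"
```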
Install kubewall based on your system type. See the kubewall README for more details.
- For Mac
brew install --cask kubewall/tap/kubewall
- For Linux
sudo snap install kubewall
- For Windows
winget install --id=kubewall.kubewall -e
- If you would like to download a binary directly, you can find the list of binaries on the releases page.
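kubewall connects to clusters through kubeconfig files (depending on your setup you may need to import yours in the dashboard), so before launching it is worth confirming that your cluster is reachable from the machine. A quick check using standard kubectl commands, nothing kubewall-specific assumed:

```bash
# list the contexts your local kubeconfig knows about
kubectl config get-contexts

# confirm the current context is reachable
kubectl cluster-info
```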
Investigate your cluster using Ollama + kubewall
- Launch kubewall: open your terminal and start kubewall:
kubewall
- Choose your cluster: from the dashboard, select the Kubernetes cluster you want to connect to.
- Open AI settings: click the AI Settings panel in the upper-right corner.
- Select AI provider: from the dropdown, choose Ollama as your provider.
- Set URL and API key: enter ollama as the key and http://127.0.0.1:11434/v1 as the URL (for most setups, this default works; a quick way to verify the endpoint is shown after these steps).
- Pick the model: select qwen3.
- Start chatting: ask your local AI model questions about your cluster, troubleshoot issues, or request optimization tips.
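If the chat panel does not respond, you can test the same OpenAI-compatible endpoint that kubewall is configured to call. A minimal sketch using curl, assuming the default Ollama address and the placeholder key ollama (Ollama ignores the key, but some clients require one to be set):

```bash
# list the models exposed on the OpenAI-compatible API; qwen3 should be present
curl http://127.0.0.1:11434/v1/models

# send a test chat completion using the same URL, key, and model configured in kubewall
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{"model": "qwen3", "messages": [{"role": "user", "content": "Say hello"}]}'
```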