Running AI models locally used to be complex. With Ollama, it's a few terminal commands.
In this guide, I'll walk through installing Ollama, downloading a business-capable AI model, and connecting it to LivChart for AI-powered dashboard generation — all running locally, no cloud dependency.
## Step 1: Install Ollama

macOS:

```bash
brew install ollama
```

Linux:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: download the installer from ollama.com.

After installation, start the Ollama server:

```bash
ollama serve
```
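Before wiring anything else up, it's worth confirming the server is actually reachable. A minimal sketch using only the standard library and Ollama's `/api/tags` endpoint (which lists downloaded models; the URL assumes the default port):

```python
import json
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            models = json.loads(resp.read()).get("models", [])
            print(f"Ollama is up, {len(models)} model(s) installed")
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the server isn't running
        return False

if __name__ == "__main__":
    print("up" if ollama_is_up() else "not reachable, run `ollama serve`")
```

If this reports the server as unreachable, start `ollama serve` in another terminal and run it again.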
## Step 2: Download a Model

For business analytics, I recommend starting with Qwen2.5 7B: it handles multilingual prompts well (including Turkish) and performs reliably for chart generation.

```bash
ollama pull qwen2.5:7b
```

This downloads approximately 4.7 GB. Other good options:
| Model | Size | Best For | RAM Required |
|---|---|---|---|
| Qwen2.5 7B | 4.7 GB | Multilingual analytics | 8 GB |
| Llama 3.1 8B | 4.9 GB | High-accuracy charts | 16 GB |
| Gemma 4 E2B | 1.6 GB | Fast interactive use | 8 GB |
| Mistral 7B | 4.1 GB | Lightweight deployment | 8 GB |
## Step 3: Test Your Model

```bash
ollama run qwen2.5:7b
```

Try a business prompt:

```
Show me a bar chart of monthly revenue by region for Q1 2026
```

If the model responds with structured output, it's working.
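The same smoke test can be scripted against Ollama's HTTP API. This sketch uses the documented `/api/generate` endpoint with `stream: false` so the whole reply comes back as one JSON object; the model name and prompt match the example above, and the final call naturally needs a running server:

```python
import json
import urllib.request

OLLAMA_GENERATE = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False makes Ollama return one complete JSON object
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, timeout: int = 120) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_GENERATE,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

# Requires `ollama serve` to be running:
# print(ask("qwen2.5:7b",
#           "Show me a bar chart of monthly revenue by region for Q1 2026"))
```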
## Step 4: Connect to LivChart

- Open LivChart and go to Settings → AI Configuration
- Set the AI provider to Ollama
- Set the endpoint: `http://localhost:11434` (the default Ollama port)
- Select your model: `qwen2.5:7b`
- Click Test Connection
If the connection succeeds, you're ready to use AI-powered chart generation.
## Step 5: Create Your First AI Dashboard
- Import your data (Excel, CSV, or SQL database)
- Open the AI Chart Wizard
- Describe what you want: "Show me quarterly revenue comparison by product category"
- The AI generates the chart
- Refine if needed, then add to your dashboard
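LivChart's internal chart schema isn't documented in this guide, but the kind of structured output you want the model to produce looks roughly like this (the field names below are illustrative, not LivChart's actual format):

```json
{
  "chartType": "bar",
  "title": "Quarterly Revenue by Product Category",
  "xAxis": "quarter",
  "yAxis": "revenue",
  "groupBy": "product_category"
}
```

If a model reliably emits this kind of well-formed spec in testing, it will generally behave well in the wizard too.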
## Common Problems and Solutions

- **"Connection refused"** — Make sure Ollama is running (`ollama serve`) and check that port 11434 is not blocked by a firewall.
- **"Model not found"** — Run `ollama list` to see downloaded models. If the list is empty, pull a model first.
- **"Slow responses"** — Try a smaller model (Gemma 4 E2B) or enable GPU acceleration. On Apple Silicon, Ollama uses Metal by default; `ollama ps` shows whether a loaded model is running on GPU or CPU.
- **"Incorrect charts"** — Rephrase your prompt with more specific terms, and name the chart type explicitly: "Create a line chart showing..."
## Hardware Recommendations
| Setup | RAM | GPU | Models |
|---|---|---|---|
| Entry-level | 16 GB | Not required | Gemma 4 E2B, Mistral 7B |
| Mid-range | 32 GB | Recommended | Qwen2.5 7B, Llama 3.1 8B |
| Enterprise | 64 GB+ | Multi-GPU | Large models, concurrent users |
## Why Local AI for Dashboards?
- Privacy: Business data never leaves your machine
- Speed: No network latency, models respond locally
- Compliance: KVKK and GDPR requirements are simpler when data stays internal
- Cost: No per-request API charges after initial hardware investment
- Offline: Works without internet connection
The setup takes under 5 minutes. The benefits last much longer.
Try it yourself — download LivChart and connect Ollama for local AI-powered analytics.