## Why Ollama?
Run LLMs locally. No API keys, no costs, no data leaving your machine.
One command installs it (Linux; macOS and Windows installers are on ollama.com), and `ollama run` pulls a model on first use:

```bash
curl -fsSL https://ollama.com/install.sh | sh

ollama run llama3.2    # small general-purpose model
ollama run mistral     # strong 7B all-rounder
ollama run codellama   # tuned for code
```
## OpenAI-Compatible API

Ollama serves an HTTP API on `localhost:11434`. Its native chat endpoint is `/api/chat`, and it also exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so existing OpenAI clients only need a new base URL.

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Explain Docker"}],
  "stream": false
}'
```
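As a sketch of what that compatibility buys you, the official `openai` npm package can target the local server by overriding `baseURL`; the API key is required by the SDK but ignored by Ollama:

```typescript
import OpenAI from "openai"

// Point the stock OpenAI client at the local Ollama server.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // placeholder; Ollama does not check it
})

const completion = await client.chat.completions.create({
  model: "llama3.2",
  messages: [{ role: "user", content: "Explain Docker" }],
})
console.log(completion.choices[0].message.content)
```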
## TypeScript

The official `ollama` package (`npm install ollama`) wraps the same API:

```typescript
import { Ollama } from "ollama"

const ollama = new Ollama() // defaults to http://localhost:11434
const res = await ollama.chat({
  model: "llama3.2",
  messages: [{ role: "user", content: "Hello" }],
})
console.log(res.message.content)
```
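For chat UIs you usually want tokens as they arrive; passing `stream: true` makes `chat` return an async iterable of partial responses (a minimal sketch with the same package):

```typescript
import ollama from "ollama" // the package also exports a default client

const stream = await ollama.chat({
  model: "llama3.2",
  messages: [{ role: "user", content: "Write a haiku about Docker" }],
  stream: true,
})
// Each chunk carries the next slice of the reply.
for await (const part of stream) {
  process.stdout.write(part.message.content)
}
```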
## Custom Models

A `Modelfile` bakes a system prompt and parameters into a named model:

```
FROM llama3.2
SYSTEM "You are a senior developer."
PARAMETER temperature 0.7
```
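Build and run it with the CLI (the name `senior-dev` is arbitrary):

```bash
ollama create senior-dev -f Modelfile
ollama run senior-dev
```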
## Which Model?

| Model | RAM needed | Quality |
|---|---|---|
| Phi-3 | 4 GB | Good |
| Llama 3.2 | 8 GB | Great |
| Llama 3.1 70B | 48 GB | Excellent |