# Using Ollama Locally for Crypto Market Analysis: No API Costs
Every cloud AI API costs money per request. If you're running a crypto analysis agent that checks prices every hour, those costs add up fast. Ollama solves this — it runs large language models on your local machine, with zero per-request charges.
This guide shows you how to point a crypto analysis agent at a local Ollama instance instead of a paid API.
## Why Local LLMs for Crypto?
- **No API costs** — run 10,000 analyses for free
- **No data leakage** — your portfolio details never leave your machine
- **No rate limits** — analyze as fast as your hardware allows
- **Offline capable** — works without internet (after the model download)
The tradeoff: local models are slightly less capable than GPT-4. For trend summaries and pattern descriptions, they're more than sufficient.
## Step 1: Install Ollama
Download from ollama.ai — available for Mac, Windows (preview), and Linux.
```bash
# Mac
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Start the server
ollama serve
```
Pull a model. For crypto analysis, `llama3.2` or `mistral` work well:
```bash
ollama pull llama3.2

# or smaller/faster:
ollama pull llama3.2:1b
```
## Step 2: Verify the API

Ollama exposes a REST API on `localhost:11434`:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "BTC is up 3% today. What might this indicate?",
  "stream": false
}'
```
You should see a JSON response with the model's analysis.
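Before wiring this into a pipeline, it can help to check programmatically that the server is up. A minimal sketch (the `ollama_available` helper is my own, not part of Ollama): the `/api/tags` endpoint lists installed models, which makes it a cheap liveness probe.

```python
import requests

def ollama_available(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        # /api/tags lists pulled models; any 200 means the server is alive
        r = requests.get(f"{base_url}/api/tags", timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False
```

Call this once at startup and fail fast with a clear message instead of letting every analysis request time out.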
## Step 3: Build a Price-to-Insight Pipeline
```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2"

def get_crypto_price(symbol: str) -> dict:
    """Fetch 24h ticker data from the Binance public API."""
    url = f"https://api.binance.com/api/v3/ticker/24hr?symbol={symbol}"
    r = requests.get(url, timeout=10)
    r.raise_for_status()
    data = r.json()
    return {
        "symbol": symbol,
        "price": float(data["lastPrice"]),
        "change_24h": float(data["priceChangePercent"]),
        "volume": float(data["volume"]),
        "high": float(data["highPrice"]),
        "low": float(data["lowPrice"]),
    }

def analyze_with_ollama(market_data: dict) -> str:
    """Send market data to the local Ollama server for analysis."""
    symbol = market_data["symbol"]
    price = market_data["price"]
    change = market_data["change_24h"]
    volume = market_data["volume"]
    prompt = f"""
You are a brief, no-nonsense crypto market analyst.

Market data for {symbol}:
- Current price: ${price:,.2f}
- 24h change: {change:+.2f}%
- 24h volume: {volume:,.0f}
- 24h high: ${market_data['high']:,.2f}
- 24h low: ${market_data['low']:,.2f}

In 2-3 sentences, describe what this data suggests. Focus on trend direction and any notable patterns.
Do NOT give financial advice or recommendations to buy/sell.
"""
    response = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0.3},  # lower temperature = more consistent analysis
    }, timeout=120)
    response.raise_for_status()
    return response.json().get("response", "Analysis unavailable.").strip()

# Run it
symbols = ["BTCUSDT", "ETHUSDT", "SOLUSDT"]
for symbol in symbols:
    data = get_crypto_price(symbol)
    analysis = analyze_with_ollama(data)
    print(f"\n=== {symbol} @ ${data['price']:,.2f} ({data['change_24h']:+.2f}%) ===")
    print(analysis)
```
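With `"stream": false` you wait for the whole completion before seeing anything. Ollama can also stream: with `"stream": true`, `/api/generate` returns one JSON object per line, each carrying a `response` fragment and a final `done` flag. A hedged sketch of assembling those chunks (the helper names here are mine):

```python
import json
import requests

def join_stream_chunks(lines) -> str:
    """Assemble the `response` fragments from Ollama's streaming JSONL output."""
    out = []
    for line in lines:
        if not line:  # skip keep-alive blank lines
            continue
        piece = json.loads(line)
        out.append(piece.get("response", ""))
        if piece.get("done"):  # final object signals completion
            break
    return "".join(out)

def stream_analysis(prompt: str, model: str = "llama3.2") -> str:
    """Stream a completion from a local Ollama server, returning the full text."""
    with requests.post("http://localhost:11434/api/generate",
                       json={"model": model, "prompt": prompt, "stream": True},
                       stream=True, timeout=120) as resp:
        resp.raise_for_status()
        return join_stream_chunks(resp.iter_lines())
```

Streaming is mostly a UX improvement for interactive use; for an hourly batch job, `"stream": false` is simpler.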
## Step 4: Add Technical Indicator Context
Plain price data is limited. Add basic indicators to give the model more signal:
```python
def simple_rsi(prices: list, period: int = 14) -> float:
    """Calculate RSI from a list of closing prices (oldest first)."""
    if len(prices) < period + 1:
        return 50.0  # neutral if not enough data
    prices = prices[-(period + 1):]  # use the most recent period+1 closes
    gains, losses = [], []
    for i in range(1, period + 1):
        diff = prices[i] - prices[i - 1]
        if diff > 0:
            gains.append(diff)
            losses.append(0)
        else:
            gains.append(0)
            losses.append(abs(diff))
    avg_gain = sum(gains) / period
    avg_loss = sum(losses) / period
    if avg_loss == 0:
        return 100.0  # all gains, no losses
    rs = avg_gain / avg_loss
    return 100 - (100 / (1 + rs))

def get_klines(symbol: str, interval: str = "1h", limit: int = 20) -> list:
    """Get candlestick closing prices from Binance (oldest first)."""
    url = f"https://api.binance.com/api/v3/klines?symbol={symbol}&interval={interval}&limit={limit}"
    data = requests.get(url, timeout=10).json()
    return [float(candle[4]) for candle in data]  # index 4 = closing price

# Enhanced analysis with RSI
closes = get_klines("BTCUSDT")
rsi = simple_rsi(closes)

# Pass RSI to the Ollama prompt
prompt = f"BTC current RSI: {rsi:.1f}. Price: $29,500. 24h change: -2.1%. Briefly interpret this technical picture."
```
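One way to fold the RSI into the Step 3 pipeline is to precompute the overbought/oversold label in Python instead of asking the model to do threshold arithmetic. A sketch, assuming the conventional 70/30 RSI thresholds (the `build_ta_prompt` helper is mine, not from the pipeline above):

```python
def build_ta_prompt(symbol: str, price: float, change_24h: float, rsi: float) -> str:
    """Compose an analysis prompt that labels the RSI zone up front."""
    # Conventional interpretation: RSI > 70 overbought, < 30 oversold
    zone = "overbought" if rsi > 70 else "oversold" if rsi < 30 else "neutral"
    return (
        f"You are a brief, no-nonsense crypto market analyst.\n"
        f"{symbol}: price ${price:,.2f}, 24h change {change_24h:+.2f}%, "
        f"RSI(14) {rsi:.1f} ({zone}).\n"
        "In 2-3 sentences, interpret this technical picture. "
        "Do NOT give financial advice."
    )
```

Small local models follow prompts more reliably when numeric comparisons are already resolved into words, so doing the classification in code tends to give steadier output.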
## Step 5: Schedule Hourly Reports
```python
import time
from datetime import datetime

def run_analysis_cycle():
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    print(f"\n{'='*50}")
    print(f"Analysis cycle: {timestamp}")
    print(f"{'='*50}")
    for symbol in ["BTCUSDT", "ETHUSDT"]:
        try:
            data = get_crypto_price(symbol)
            analysis = analyze_with_ollama(data)
            print(f"\n{symbol}: {analysis}")
        except Exception as e:
            print(f"Error analyzing {symbol}: {e}")

# Run every hour
while True:
    run_analysis_cycle()
    time.sleep(3600)
```
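Printed output disappears when the terminal closes. If you want hourly reports to survive restarts, one option is to append each result to a JSONL file (the `log_analysis` helper below is an illustration I'm adding, not part of the pipeline above):

```python
import json
from datetime import datetime, timezone

def log_analysis(path: str, symbol: str, price: float, analysis: str) -> dict:
    """Append one analysis record as a line of JSON; return the record."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
        "symbol": symbol,
        "price": price,
        "analysis": analysis,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Call it right after `analyze_with_ollama` inside the loop; each cycle then adds one line per symbol, and the file stays trivially greppable.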
## Model Comparison for Crypto Analysis
| Model | Size | Speed | Quality |
|---|---|---|---|
| `llama3.2:1b` | 1.3GB | Very fast | Good for quick summaries |
| `llama3.2` | 2.0GB | Fast | Better reasoning |
| `mistral` | 4.1GB | Fast | Strong at structured output |
| `llama3.1:8b` | 4.7GB | Medium | Best balance |
For hourly crypto summaries, `llama3.2:1b` is plenty. For complex multi-asset analysis, use the full `llama3.2`.
## Integrating With OpenClaw

OpenClaw has a built-in `local_llm.py` module that handles Ollama connections with automatic fallbacks:
```python
from local_llm import LocalLLM

llm = LocalLLM(model="llama3.2")
analysis = llm.ask("Summarize today's crypto market in one paragraph")
```
It handles connection retries, model loading waits, and graceful fallbacks automatically.
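I haven't inspected `local_llm.py`, but the retry behavior described can be sketched generically: wrap the network call in a loop that backs off while the model loads. Everything in this snippet is a hypothetical illustration, not OpenClaw's actual code:

```python
import time

def with_retries(fn, retries: int = 3, backoff: float = 1.0):
    """Call fn(), retrying on connection errors with linear backoff.

    Hypothetical sketch of the retry logic a wrapper like LocalLLM
    might use; the real module's implementation may differ.
    """
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(backoff * (attempt + 1))  # wait longer each attempt
```

A wrapper would then pass a lambda that issues the HTTP request, so the first slow cold-start of a model (which can take several seconds while weights load) doesn't abort the whole analysis cycle.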
## The Full Setup
Want the complete crypto analysis agent — Ollama + OpenClaw + Binance Testnet + Telegram alerts — pre-configured and ready to run?
Full guide at dragonwhisper36.gumroad.com. No subscription, one download.