I've been working with crypto APIs for a while, and honestly? Most of them are frustrating to use. Rate limits that aren't documented. Authentication that requires 15 steps. SDKs that haven't been updated since 2021.
So I built my own. Here's what I learned.
## The Problem
I wanted to build a simple portfolio tracker. Should be easy, right?
```python
# What I wanted
prices = get_prices(["bitcoin", "ethereum"])
portfolio_value = sum(holding.amount * prices[holding.coin] for holding in holdings)
```
What I got instead:
- 3 different APIs with different auth methods
- Rate limits that varied by endpoint (and weren't documented)
- Response formats that changed between versions
- SDKs that threw cryptic errors
## The Solution: Build It Myself
I decided to build a simple REST API that does exactly what I needed. No more, no less.
Tech stack:
- FastAPI (because it's fast and the auto-docs are great)
- SQLite (simple, no separate database server needed)
- CoinGecko for price data (free tier is generous)
## The Architecture
Here's the basic structure:
```
app/
├── api/
│   ├── portfolios.py    # Portfolio CRUD
│   ├── prices.py        # Price endpoints
│   └── alerts.py        # Price alerts
├── services/
│   ├── crypto_data.py   # CoinGecko integration
│   └── calculations.py  # P/L calculations
├── models.py            # SQLAlchemy models
└── main.py              # FastAPI app
```
Nothing fancy. That's intentional.
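For reference, here's a minimal sketch of how `main.py` could mount the three routers - the router variable names and prefixes are my assumption, not the actual code:

```python
# main.py - a sketch of wiring the routers together.
# Assumes each module in app/api exposes an APIRouter named `router`.
from fastapi import FastAPI

from app.api import alerts, portfolios, prices

app = FastAPI(title="CoinPulse API")

app.include_router(portfolios.router, prefix="/portfolios", tags=["portfolios"])
app.include_router(prices.router, prefix="/prices", tags=["prices"])
app.include_router(alerts.router, prefix="/alerts", tags=["alerts"])
```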
## Key Technical Decisions

### 1. API Key Authentication (Keep It Simple)
I see a lot of APIs use OAuth or JWT with refresh tokens. For a simple API? Overkill.
```python
from fastapi import HTTPException, Request

def get_api_key(request: Request) -> str:
    api_key = request.headers.get("X-API-Key")
    if not api_key:
        raise HTTPException(401, "Missing API key")
    return api_key
```
One header. That's it. No token exchange, no expiration dance.
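From there, protecting an endpoint is one `Depends` away. A minimal sketch of mapping the key to a user - the `require_user` helper, `get_db` dependency, and `User.api_key` column are my assumptions:

```python
# A sketch of using get_api_key as a FastAPI dependency.
# Assumes a get_db session dependency and a User model with an api_key column.
from fastapi import Depends, HTTPException

def require_user(
    api_key: str = Depends(get_api_key),
    db: Session = Depends(get_db),
) -> User:
    user = db.query(User).filter(User.api_key == api_key).first()
    if not user:
        raise HTTPException(401, "Invalid API key")
    return user

@app.get("/portfolios")
def list_portfolios(user: User = Depends(require_user)):
    return user.portfolios  # assumes a portfolios relationship on User
```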
### 2. Rate Limiting Per Tier
This was trickier than expected. The naive approach:
```python
# Don't do this - race condition!
user.request_count += 1
if user.request_count > limit:
    raise HTTPException(429, "Rate limit exceeded")
```
The problem? Two requests can read the same count at the same time, both increment it, and both slip past the limit.
Better approach - use the database as the source of truth:
```python
from sqlalchemy import update

# Check and increment atomically in a single UPDATE
result = db.execute(
    update(User)
    .where(User.id == user_id)
    .where(User.request_count < rate_limit)
    .values(request_count=User.request_count + 1)
)
if result.rowcount == 0:
    raise HTTPException(429, "Rate limit exceeded")
```
### 3. Caching Price Data
CoinGecko has rate limits, and calling its API on every incoming request would blow through them fast. So I cache prices:
```python
from datetime import datetime, timedelta

CACHE_DURATION = 60  # seconds

def get_price(coin_id: str, db: Session) -> float:
    # Check cache first
    cached = db.query(CryptoPrice).filter(
        CryptoPrice.coin_id == coin_id,
        CryptoPrice.updated_at > datetime.utcnow() - timedelta(seconds=CACHE_DURATION),
    ).first()
    if cached:
        return cached.price_usd

    # Fetch fresh data
    price = fetch_from_coingecko(coin_id)
    # ... save to database
    return price
```
Simple, but effective. Most requests hit the cache.
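The elided save step can be a plain upsert on the cache row. A sketch, assuming the same `CryptoPrice` fields the query above uses:

```python
# A sketch of the save step: update the cached row if it exists,
# otherwise insert a new one. Field names follow the query above.
def save_price(coin_id: str, price: float, db: Session) -> None:
    row = db.query(CryptoPrice).filter(CryptoPrice.coin_id == coin_id).first()
    if row is None:
        row = CryptoPrice(coin_id=coin_id)
        db.add(row)
    row.price_usd = price
    row.updated_at = datetime.utcnow()
    db.commit()
```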
### 4. P/L Calculations
Portfolio profit/loss sounds simple until you think about cost basis. I went with weighted average:
```python
from typing import List

def calculate_pnl(holdings: List[Holding], current_prices: dict) -> dict:
    total_cost = sum(h.amount * h.purchase_price for h in holdings)
    total_value = sum(h.amount * current_prices[h.coin_id] for h in holdings)
    pnl = total_value - total_cost
    return {
        "total_cost": total_cost,
        "current_value": total_value,
        "profit_loss": pnl,
        # Guard against an empty portfolio (total_cost == 0)
        "profit_loss_percent": (pnl / total_cost) * 100 if total_cost else 0.0,
    }
```
Not FIFO, not LIFO, just weighted average. Good enough for most use cases.
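To make the math concrete, a quick hypothetical run (numbers invented for illustration):

```python
# Hypothetical: 0.5 BTC bought at $45,000, now trading at $60,000
holdings = [Holding(coin_id="bitcoin", amount=0.5, purchase_price=45000)]
pnl = calculate_pnl(holdings, {"bitcoin": 60000})
# total_cost = 22,500.0; current_value = 30,000.0
# profit_loss = 7,500.0; profit_loss_percent = 33.3
```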
## The Gotchas

### CoinGecko API Quirks
CoinGecko uses different IDs than you might expect:
| You might search for | Actual CoinGecko ID |
|---|---|
| BTC | bitcoin |
| ETH | ethereum |
| SOL | solana |
I added a `/prices/supported` endpoint that returns all valid coin IDs. Saves a lot of confusion.
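That endpoint can be a thin proxy. A minimal sketch, assuming CoinGecko's public `/coins/list` endpoint and the `httpx` client (in practice you'd cache this response too - it's large):

```python
# A sketch of /prices/supported. CoinGecko's /coins/list returns
# {id, symbol, name} for every coin it tracks.
import httpx

@app.get("/prices/supported")
def supported_coins() -> list[dict]:
    resp = httpx.get("https://api.coingecko.com/api/v3/coins/list", timeout=10)
    resp.raise_for_status()
    return resp.json()
```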
### SQLite Concurrency
SQLite doesn't handle concurrent writes well. For a small API with <100 req/sec, it's fine. But I designed the schema to make a PostgreSQL migration easy:
- No SQLite-specific features
- All timestamps are UTC
- Foreign keys properly defined
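To make that concrete, here's roughly what a portable model can look like - a sketch with guessed column names, not the actual schema:

```python
# A sketch of a Postgres-friendly model: no SQLite-specific types,
# UTC timestamps, explicit foreign keys. Assumes a declarative Base.
from datetime import datetime

from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer, String

class Holding(Base):
    __tablename__ = "holdings"

    id = Column(Integer, primary_key=True)
    portfolio_id = Column(Integer, ForeignKey("portfolios.id"), nullable=False)
    coin_id = Column(String, nullable=False)
    amount = Column(Float, nullable=False)
    purchase_price = Column(Float, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)  # always UTC
```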
### Response Time Optimization
First version: 800ms average response time. Yikes.
The fix? Batch queries instead of N+1:
```python
# Before: N+1 queries
for holding in holdings:
    price = get_price(holding.coin_id)  # DB query each time

# After: 1 query
coin_ids = [h.coin_id for h in holdings]
prices = get_prices_batch(coin_ids)  # Single query
```
Now: ~120ms average.
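`get_prices_batch` isn't shown above, so here's a sketch of how it could generalize the cached lookup to a single `IN` query:

```python
# A sketch of get_prices_batch: one query for every cached coin,
# then the single-coin helper for whatever is missing or stale.
def get_prices_batch(coin_ids: list[str], db: Session) -> dict[str, float]:
    cutoff = datetime.utcnow() - timedelta(seconds=CACHE_DURATION)
    cached = db.query(CryptoPrice).filter(
        CryptoPrice.coin_id.in_(coin_ids),
        CryptoPrice.updated_at > cutoff,
    ).all()
    prices = {row.coin_id: row.price_usd for row in cached}

    for coin_id in coin_ids:
        if coin_id not in prices:
            prices[coin_id] = fetch_from_coingecko(coin_id)
            # ... save to the cache, as in get_price
    return prices
```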
## What I'd Do Differently

- **Start with PostgreSQL** - SQLite was fine for development, but I'll need to migrate eventually
- **Add request ID logging from day 1** - Debugging production issues without request IDs is painful
- **Build the SDK first** - I built the API, then realized a Python SDK would make it much more usable
## Try It Out
I've open-sourced the Python SDK:
```bash
pip install coinpulse
```

```python
from coinpulse import CoinPulse

client = CoinPulse(api_key="your-key")

# Get current prices
prices = client.get_prices(["bitcoin", "ethereum"])
print(f"BTC: ${prices[0].price_usd:,.2f}")

# Track a portfolio
portfolio = client.create_portfolio("My Holdings")
client.add_holding(portfolio.id, "bitcoin", amount=0.5, purchase_price=45000)

# Check P/L
holdings = client.get_holdings(portfolio.id)
for h in holdings:
    print(f"{h.coin_id}: {h.profit_loss_percent:+.1f}%")
```
GitHub: github.com/soutone/coinpulse-python
The free tier gives you 25 requests/hour - enough to build and test without committing.
## Questions?
Happy to answer questions about the implementation. What other crypto API pain points have you run into?
Building something with crypto data? I'd love to hear what you're working on.