DEV Community

Anybody237

How I Built a 25-Signal Bitcoin Cycle Bottom Detector with FastAPI, Supabase, and Next.js

I got tired of checking 10 different tabs to see whether Bitcoin was in a cycle bottom or not. So I built bitcoinbottom.app — a free tool that aggregates 25 on-chain and macro signals into a single daily probability score.

Here's the full technical breakdown.

What It Does

The Bitcoin Bottom Score combines signals like:

  • MVRV Z-Score (CoinMetrics) — measures market cap vs. realized cap
  • Puell Multiple — miner revenue stress
  • Hash Ribbon (Mempool.space) — miner capitulation via hash rate MAs
  • Fear & Greed Index (Alternative.me) — composite sentiment
  • ETF Net Flows (Farside Investors) — institutional demand
  • ...and 20 more

Each signal is scored −1 to +1 and weighted by historical predictive accuracy. The composite is passed through a sigmoid to produce a 0–100% probability score.

Stack

  • Backend: FastAPI on Railway (Python)
  • Data: Supabase PostgreSQL for historical signal storage
  • Frontend: Next.js 14 App Router on Vercel (TypeScript)
  • Scheduling: APScheduler running pipelines at 00:05 and 12:05 UTC
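
APScheduler drives the cron trigger in production; the next-run logic that the 00:05/12:05 UTC schedule encodes can be sketched in pure stdlib (a simplified illustration, not the app's actual scheduler code):

```python
from datetime import datetime, time, timedelta, timezone

RUN_TIMES = (time(0, 5), time(12, 5))  # 00:05 and 12:05 UTC

def next_run(now: datetime) -> datetime:
    """Earliest scheduled run strictly after `now` (UTC)."""
    candidates = [
        datetime.combine(now.date() + timedelta(days=d), t, tzinfo=timezone.utc)
        for d in (0, 1)          # today and tomorrow cover every case
        for t in RUN_TIMES
    ]
    return min(c for c in candidates if c > now)

now = datetime(2024, 6, 1, 13, 0, tzinfo=timezone.utc)
print(next_run(now))  # 2024-06-02 00:05:00+00:00
```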

The Scoring Model

```python
import math

def compute_recommendation(signals: list[dict]) -> dict:
    # Only count signals whose data source responded this cycle
    weighted_sum = sum(s["score"] * s["weight"] for s in signals if s.get("available"))
    total_weight = sum(s["weight"] for s in signals if s.get("available"))
    composite = weighted_sum / total_weight if total_weight else 0

    # Sigmoid calibration: map composite (-1..+1) to a 0-100% probability
    p_bottom = 1 / (1 + math.exp(-composite * 3)) * 100
    return {"composite_score": composite, "p_bottom": round(p_bottom, 2)}
```

The calibration factor (3.0) was tuned so that readings above ~65% correspond to confirmed historical cycle bottom conditions.
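
To make the calibration concrete, here is what the sigmoid produces for a few composite values (the 3.0 factor and the ~65% threshold are from the model above; the sample inputs are mine):

```python
import math

def p_bottom(composite: float, k: float = 3.0) -> float:
    """Sigmoid calibration from the scoring model above."""
    return round(1 / (1 + math.exp(-composite * k)) * 100, 2)

print(p_bottom(0.0))   # 50.0  -> perfectly neutral composite
print(p_bottom(0.21))  # ~65   -> the "confirmed bottom" threshold region
print(p_bottom(1.0))   # ~95   -> every signal at maximum bottom reading
```

A composite of just +0.21 already crosses the ~65% line, which is why the steepness factor matters: it controls how much signal agreement is needed before the score reads as a confirmed bottom.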

Data Pipeline Architecture

Every 12 hours, the pipeline:

  1. Fetches all data sources in parallel with asyncio.gather()
  2. Normalizes each signal to a −1 to +1 score
  3. Computes the weighted composite
  4. Writes 25 rows to Supabase (indicator_history table) via upsert
  5. Posts a daily tweet with a Pillow-generated signal card

```python
import json

async def run_all_pipelines() -> None:
    raw = await _fetch_shared()  # all sources fetched in parallel
    ml_p = predict(raw)          # ML overlay probability

    signals = build_consensus_signals(raw)
    cache.set("consensus_signals", json.dumps(signals))  # serialize before caching

    await _write_today_to_supabase(signals, raw.get("btc_price_usd"))
```
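
Step 1's parallel fetch can be sketched like this (`fetch_source` is a hypothetical stand-in for the real per-source HTTP coroutines; the key detail is `return_exceptions=True`, which keeps one failing provider from sinking the whole run):

```python
import asyncio

async def fetch_source(name: str) -> tuple[str, float]:
    # Placeholder for a real HTTP fetch (CoinMetrics, Mempool.space, etc.)
    await asyncio.sleep(0)
    return name, 42.0

async def fetch_all(sources: list[str]) -> dict[str, float]:
    # Exceptions come back as values instead of propagating,
    # so partial data still produces a (partial) score
    results = await asyncio.gather(
        *(fetch_source(s) for s in sources), return_exceptions=True
    )
    return {r[0]: r[1] for r in results if not isinstance(r, Exception)}

data = asyncio.run(fetch_all(["mvrv_z", "puell", "hash_ribbon"]))
```

Dropped sources simply show up as unavailable signals, which the scoring model already handles by excluding them from the weighted sum.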

SSR + CDN Caching with Next.js

The homepage fetches signal data server-side with a 5-minute revalidation window:

```tsx
export const revalidate = 300; // 5-min CDN cache

export default async function HomePage() {
  const data = await fetchSignals();
  return <ConsensusAIClient initialSignals={data?.signals ?? []} ... />;
}
```

This means Google sees fully rendered HTML with real signal data — critical for SEO on a finance tool.

Signal Normalization

Each signal has a custom normalization function. For example, MVRV Z-Score:

```python
def score_mvrv_z(value: float) -> float:
    # Below -0.5: strongest bottom signal (capped at +1.0)
    # -0.5 to 0:  strong bottom signal (+0.6 to +1.0)
    # 0 to 2:     neutral to mildly bearish (0 to -0.5)
    # 2 to 5:     overheating (-0.5 to -0.8)
    # Above 5:    cycle top signal (flat -0.8)
    if value < -0.5: return 1.0
    if value < 0:    return 0.6 + (abs(value) / 0.5) * 0.4
    if value < 2:    return -value / 2 * 0.5
    if value < 5:    return -0.5 - ((value - 2) / 3) * 0.3
    return -0.8
```

What I Learned

1. On-chain data has a 24-hour lag. Glassnode, CoinMetrics, and most blockchain data providers publish confirmed data with a 1-day delay. This means the score reflects yesterday's blockchain state — important to communicate to users.

2. Railway's 30-second HTTP timeout is a real constraint. The full pipeline takes 45–90 seconds. I solved this by running refreshes as FastAPI BackgroundTasks and returning immediately:

```python
from fastapi import BackgroundTasks

@app.post("/api/consensus/refresh")
async def consensus_refresh(background_tasks: BackgroundTasks):
    # Kick off the 45-90s pipeline without blocking the HTTP response
    background_tasks.add_task(run_all_pipelines)
    return {"status": "refreshing", "message": "Check back in ~60s"}
```

3. Supabase upsert is perfect for time-series signal data. Using on_conflict="date,indicator_id" means the pipeline is idempotent — running it twice on the same day just updates the row, no duplicates.
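
The same `ON CONFLICT` semantics can be demonstrated with stdlib `sqlite3` (the production table lives in Supabase Postgres; column names here mirror the `on_conflict="date,indicator_id"` key from the text, and the rest of the schema is a placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE indicator_history (
        date TEXT, indicator_id TEXT, score REAL,
        PRIMARY KEY (date, indicator_id)
    )
""")

def upsert(date: str, indicator_id: str, score: float) -> None:
    # Re-running on the same day updates the existing row, never duplicates it
    conn.execute(
        """INSERT INTO indicator_history (date, indicator_id, score)
           VALUES (?, ?, ?)
           ON CONFLICT (date, indicator_id) DO UPDATE SET score = excluded.score""",
        (date, indicator_id, score),
    )

upsert("2024-06-01", "mvrv_z", 0.6)
upsert("2024-06-01", "mvrv_z", 0.7)  # second run: same row, new score
count, score = conn.execute(
    "SELECT COUNT(*), MAX(score) FROM indicator_history"
).fetchone()
```

Idempotency is what makes the twice-daily cron safe: a retried or overlapping run converges to the same table state.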

4. Pillow for automated tweet cards. Rather than describing the score in text, I generate a 1200×675 PNG signal card with Pillow and upload it via Twitter's v1 media upload API before posting the tweet with the v2 API.
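
A minimal sketch of the card generation (the 1200×675 dimensions are from the text; colors, layout, and copy here are placeholders, and the real card presumably uses custom fonts and more styling):

```python
from PIL import Image, ImageDraw

def make_signal_card(p_bottom: float) -> Image.Image:
    # 1200x675 is the 16:9 size Twitter renders for attached images
    img = Image.new("RGB", (1200, 675), color=(13, 17, 23))
    draw = ImageDraw.Draw(img)
    draw.text((60, 60), "Bitcoin Bottom Score", fill=(255, 255, 255))
    draw.text((60, 140), f"{p_bottom:.1f}%", fill=(247, 147, 26))
    return img

card = make_signal_card(71.1)
card.save("signal_card.png")  # PNG bytes then go to the v1.1 media endpoint
```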

The Result

The live tool is at bitcoinbottom.app — free, no signup required, updated twice daily.

If you're building something similar or have questions about the on-chain signal normalization, happy to discuss in the comments.


The full signal methodology is documented at bitcoinbottom.app/methodology.
