Building a Real-Time Dota 2 Draft Prediction System with Machine Learning

I built an AI system that watches live Dota 2 pro matches and predicts which team will win based purely on the draft. Here's how it works under the hood.

The Problem
Dota 2 has 127 heroes. A Captain's Mode draft produces roughly 10^15 possible combinations. Analysts spend years building intuition about which drafts work — I wanted to see if a model could learn those patterns from data.

Architecture

Live Match → Draft Detection → Feature Engineering → XGBoost + DraftNet → Prediction + SHAP Explanation

The system runs 24/7 on Railway (Python/FastAPI). When a professional draft completes, it detects the picks within seconds, runs them through two models in parallel, and publishes the prediction to a Telegram channel and website.
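
The FastAPI layer itself is thin. A minimal sketch of what a prediction endpoint could look like (the route, schema, and placeholder probability are illustrative, not the production service):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Draft(BaseModel):
    radiant: list[int]   # 5 hero IDs
    dire: list[int]      # 5 hero IDs

@app.post("/predict")
def predict(draft: Draft):
    # The real service builds feature vectors here and runs XGBoost + DraftNet;
    # a fixed probability keeps this sketch self-contained and runnable.
    return {"radiant_win_prob": 0.5}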

The Models

XGBoost
The workhorse: gradient boosted trees trained on 28,000+ pro matches, with:

Hero one-hots (240 features) — which heroes are on which team
Player hero pool depth — how many games each player has on their hero
Team form — rolling win rate over last 20 matches
Lane matchup ratings — predicted lane outcomes based on hero positions
Replay-parsed stats — gold/XP differentials, tower damage, fight participation from parsed replays
Calibrated with isotonic regression so "70% confidence" actually means ~70% win rate.
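
For reference, that isotonic step is a few lines with scikit-learn. Here's a runnable sketch on synthetic held-out scores (the real pipeline fits on actual validation matches):

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic stand-ins for held-out raw model scores and 0/1 match outcomes
rng = np.random.default_rng(0)
raw_scores = rng.uniform(0, 1, 2000)
y_val = (rng.uniform(0, 1, 2000) < raw_scores ** 1.3).astype(int)   # deliberately miscalibrated

iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(raw_scores, y_val)

print(iso.predict([0.70])[0])   # calibrated probability for a raw 0.70 score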

DraftNet v4 (91 expert features)
A custom PyTorch neural network that captures what an analyst would notice about a draft:

Hero synergies — 7,500+ pair interactions (Magnus+Ember = +8% win rate)
Counter matchups — how well each hero handles the opposing draft
BKB-pierce control — drafts with 3+ piercing disables win significantly more
Damage balance — 80%+ single damage type = easy to itemize against
Lane projections — predicted laning advantage before the game starts
Timing windows — when each draft hits its power spike
Team composition — push/teamfight/pickoff/split strategy classification
The neural network uses self-attention + cross-team attention layers so each hero "sees" both its teammates and opponents.
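
A condensed PyTorch sketch of that structure (the layer sizes, embedding table, and prediction head are illustrative; the real DraftNet v4 also consumes the 91 expert features listed above):

import torch
import torch.nn as nn

class DraftAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.hero_emb = nn.Embedding(150, dim)   # hero ID -> vector (table size illustrative)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, radiant_ids, dire_ids):    # (batch, 5) tensors of hero IDs
        r, d = self.hero_emb(radiant_ids), self.hero_emb(dire_ids)
        r, _ = self.self_attn(r, r, r)           # each hero attends to its teammates
        r, _ = self.cross_attn(r, d, d)          # ...and to the opposing draft
        return torch.sigmoid(self.head(r.mean(dim=1)))   # Radiant win probability

# Toy usage: two random five-hero drafts
p = DraftAttention()(torch.randint(0, 150, (1, 5)), torch.randint(0, 150, (1, 5)))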

Simplified expert feature example
def _compute_bkb_pierce(self, team_heroes, enemy_heroes):
    # How many of our heroes have disables that pierce BKB (spell immunity)
    pierce_count = sum(1 for h in team_heroes if h in BKB_PIERCE_HEROES)
    # How much the enemy draft relies on BKB to function
    enemy_bkb_dependence = sum(HERO_BKB_NEED[h] for h in enemy_heroes)
    return pierce_count * enemy_bkb_dependence / 5.0

Feature Engineering: What Actually Matters

After training, I ran SHAP analysis to see which features the model values most. Some surprises:

Hero combos beat hero tier lists. The top 200 hero pairs contribute more to predictions than individual hero strength. Picking "the best hero" matters less than picking the best hero for your draft.

Lane matchups dominate early draft. At the 4-pick stage, lane advantage is 2x more predictive than team synergy. Synergy only takes over once all 10 heroes are locked.

Late-game scaling is overrated. Drafts built around "survive until 40 minutes" lose more often than timing-focused drafts. Pro teams don't let you farm peacefully.

BKB-pierce is the most undervalued concept. Stacking Roar, Grip, Chrono, Duel consistently outperforms what the heroes' individual stats suggest.
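
The SHAP pass itself is short. A runnable sketch on synthetic data (the real analysis runs on the trained XGBoost model and its 240+ feature columns):

import numpy as np
import shap
import xgboost as xgb

# Synthetic stand-ins for the real feature matrix and labels
X = np.random.rand(500, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean |SHAP| per feature gives the global importance ranking
shap.summary_plot(shap_values, X, plot_type="bar")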

The Prediction Pipeline

  1. Draft detected (15-30s delay)
  2. Hero IDs mapped to feature vectors
  3. Player/team enrichment data fetched (hero pool, form, H2H)
  4. XGBoost prediction + DraftNet prediction
  5. Confidence calibration (temperature scaling T=1.8; see the sketch after this list)
  6. Dampening stack: market gap, standin penalty, league tier adjustment
  7. Value bet detection: model confidence vs bookmaker odds
  8. SHAP explanation generated
  9. Published to Telegram + website
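
Step 5's temperature scaling is essentially one line on the logit: divide by T and map back to a probability. With T=1.8 (this is the standard formula, not necessarily the service's exact code):

import math

def temperature_scale(prob, T=1.8):
    # Soften a raw probability by dividing its logit by T
    logit = math.log(prob / (1 - prob))
    return 1 / (1 + math.exp(-logit / T))

print(temperature_scale(0.80))   # ~0.68: a raw 80% gets pulled toward 50%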

The dampening stack is crucial. Raw model confidence is often overconfident on low-data matches. Six modifiers adjust the prediction (a simplified version is sketched after the list):

  • Calibration — isotonic regression maps raw scores to true probabilities
  • H2H — head-to-head history between the two teams
  • Variance — how stable the prediction is across model variants
  • Standin detection — confidence penalty when substitute players are detected
  • Market gap — if our prediction disagrees with bookmakers by >20%, compress the edge
  • League tier — unknown/amateur leagues get dampened toward 50%
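
A simplified version of how those modifiers could compose (the damping factors are illustrative; the >20% market-gap rule and the pull toward 50% are from the list above):

def dampen(prob, market_prob=None, has_standin=False, league_tier=1):
    # Standin detected: pull confidence toward 50%
    if has_standin:
        prob = 0.5 + (prob - 0.5) * 0.8          # penalty factor is illustrative
    # Unknown/amateur league: dampen toward 50%
    if league_tier >= 3:
        prob = 0.5 + (prob - 0.5) * 0.6          # factor is illustrative
    # Market gap: if we disagree with bookmakers by >20%, compress the edge
    if market_prob is not None and abs(prob - market_prob) > 0.20:
        prob = market_prob + (prob - market_prob) * 0.5
    return prob

print(dampen(0.90, market_prob=0.60, has_standin=True))   # ~0.71 after standin penalty and market compression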

Results

Metric                        Value
Test accuracy (held-out)      77%
Tier-1 production accuracy    ~67%
Brier score                   0.21
Training matches              28,000+
DraftNet parameters           304K
Prediction latency            <2 seconds

The production gap is mostly explained by missing enrichment data — 73% of live predictions don't have replay-parsed features available.

Draft Simulator

I also built a Captain's Mode simulator where you draft against the AI and watch the win probability update in real-time. It's at draft.britbets.xyz — useful for testing draft theories before ranked games.

The AI opponent uses the same DraftNet model to evaluate its picks. It's not perfect (it loves Magnus a suspicious amount) but it catches composition mistakes that humans miss.

What I'd Do Differently

Start with more data. 28K matches sounds like a lot, but after filtering for quality (tier-1/2 only, no standins, replay available), it's closer to 8K clean samples.

Calibration matters more than accuracy. A model that says "65%" and is right 65% of the time is more useful than one that says "80%" and is right 75%.

Don't trust your model on data it hasn't seen. Unknown teams, new patches, standin players — the model defaults to ~50% confidence and that's honest.

Stack
ML: Python, XGBoost, PyTorch, SHAP, scikit-learn
API: FastAPI on Railway
Website: Next.js 16 on Vercel, Supabase
Draft Simulator: React + Vite on Vercel
Bot: python-telegram-bot

All predictions are logged before the match starts and published transparently — including misses. Stats are public at britbets.xyz/track-record.
