Overfitting is the silent killer of algorithmic trading strategies. Your backtest shows incredible results, but the moment you go live, everything falls apart.
After running 10,000+ backtested trades on my crypto futures bot, here's what I learned about building strategies that actually survive contact with live markets.
What is Overfitting in Trading?
Overfitting happens when your strategy memorizes historical patterns instead of learning generalizable rules. The result: a strategy that perfectly predicts the past but fails miserably in the future.
Common symptoms:
- Backtest shows 80%+ win rate, live drops to 45%
- Strategy works only on specific date ranges
- Adding more indicators keeps "improving" backtest results
- Performance degrades immediately on unseen data
The 5 Rules I Follow to Avoid Overfitting
1. Minimize Parameters
Every parameter is a degree of freedom your optimizer can exploit. My best-performing strategy uses only 5 core parameters:
- RSI period and threshold
- MACD fast/slow/signal periods
- ATR multiplier for stops
- Volume filter threshold
- Multi-timeframe confirmation window
The temptation is to add more filters. Resist it. Each filter that improves your backtest by 2% probably reduces live performance by 5%.
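As a sketch, the five core parameters above could live in one small config object so the surface area stays visible (field names and defaults are illustrative, not Freqtrade's API):

```python
from dataclasses import dataclass

@dataclass
class StrategyParams:
    """The five core knobs; everything else stays hard-coded."""
    rsi_period: int = 14               # RSI lookback
    rsi_threshold: float = 30.0        # RSI entry threshold
    macd_periods: tuple = (12, 26, 9)  # fast / slow / signal
    atr_stop_multiplier: float = 2.0   # stop distance in ATR units
    volume_min_ratio: float = 1.5      # volume vs. its rolling average
```

Keeping the parameters in a single dataclass makes it obvious when a "quick extra filter" is about to become parameter number six.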
2. Walk-Forward Analysis (Not Just Train/Test Split)
A simple 70/30 train/test split isn't enough. Here's why:
Bad: Train on 2021-2023, test on 2024
Better: Rolling 6-month train, 2-month test, advance 1 month
Walk-forward analysis forces your strategy to prove itself across multiple market regimes. If it only works in 3 out of 8 windows, it's overfit.
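The rolling scheme above can be sketched as a window generator (stdlib-only; day-of-month is clamped to the 1st for simplicity):

```python
from datetime import date

def add_months(d, n):
    """Return date d advanced by n calendar months, clamped to the 1st."""
    y, m = divmod(d.month - 1 + n, 12)
    return date(d.year + y, m + 1, 1)

def walk_forward_windows(start, end, train=6, test=2, step=1):
    """Yield (train_start, train_end, test_end) rolling walk-forward windows."""
    windows = []
    cur = start
    while add_months(cur, train + test) <= end:
        windows.append((cur, add_months(cur, train), add_months(cur, train + test)))
        cur = add_months(cur, step)
    return windows
```

Each tuple is one regime the strategy must survive: fit on `[train_start, train_end)`, evaluate on `[train_end, test_end)`, then advance one month and repeat.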
3. Test Across Multiple Market Regimes
I validate across:
- Bull market (BTC Nov 2020 - Apr 2021)
- Bear market (BTC Nov 2021 - Nov 2022)
- Sideways/chop (BTC Jun 2023 - Oct 2023)
- High volatility (COVID crash, FTX collapse)
- Low volatility (summer 2023 doldrums)
A strategy that works in only one regime isn't robust — it's overfit to that regime.
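A minimal harness for this check, assuming you can run a backtest over an arbitrary date slice (the regime dates mirror the list above and are approximate; `run_backtest` is a stand-in for your own function):

```python
# Approximate regime windows from the list above.
REGIMES = {
    "bull": ("2020-11-01", "2021-04-30"),
    "bear": ("2021-11-01", "2022-11-30"),
    "chop": ("2023-06-01", "2023-10-31"),
}

def validate_across_regimes(run_backtest, min_profit_factor=1.3):
    """run_backtest(start, end) -> profit factor for that date slice.

    Returns per-regime profit factors plus the regimes that failed the bar.
    """
    results = {name: run_backtest(start, end)
               for name, (start, end) in REGIMES.items()}
    failed = [name for name, pf in results.items() if pf < min_profit_factor]
    return results, failed
```

If `failed` is non-empty, the strategy only works in some regimes and should be treated as overfit rather than deployed.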
4. Use Realistic Assumptions
Most backtests are too optimistic about:
- Slippage: I add 0.05-0.1% per trade
- Fees: Full maker/taker fees (Bybit: 0.02%/0.055% for futures)
- Latency: Assume 100-500ms execution delay
- Liquidity: Check actual order book depth for your size
If your strategy's edge disappears with realistic costs, the edge was never real.
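As a quick sanity check, round-trip costs can be deducted from each trade's gross return like this (the slippage figure is an assumed mid-range value; the fee matches Bybit's taker rate from above — tune both to your venue):

```python
def net_return(gross_return, slippage=0.0008, taker_fee=0.00055):
    """Deduct round-trip slippage and fees from one trade's gross return.

    slippage and taker_fee are per-side fractions; a round trip
    (entry + exit) pays each cost twice.
    """
    round_trip_cost = 2 * (slippage + taker_fee)
    return gross_return - round_trip_cost
```

A 1% gross winner nets roughly 0.73% here: 0.27% of the edge goes to costs, which is exactly the kind of haircut an optimistic backtest ignores.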
5. The "Degrade Gracefully" Test
Perturb each parameter by ±10-20%. If your performance craters from a small parameter change, you're sitting on a knife edge of overfitting.
```python
# Pseudo-code for the parameter sensitivity test: perturb each parameter
# by ±10-20% and require the strategy to stay profitable. `strategy`,
# `with_param`, and `backtest` are stand-ins for your own objects.
for name, value in strategy.parameters.items():
    for delta in (-0.2, -0.1, 0.0, 0.1, 0.2):
        candidate = strategy.with_param(name, value * (1 + delta))
        result = backtest(candidate)
        assert result.profit_factor > 1.5  # should still be profitable
```
Robust strategies show smooth degradation, not cliff edges.
My Results After Applying These Rules
After systematically eliminating overfitting:
| Metric | Before (Overfit) | After (Robust) |
|---|---|---|
| Backtest Win Rate | 78% | 67.9% |
| Live Win Rate (est.) | ~45% | ~65% |
| Profit Factor | 3.8 | 2.12 |
| Max Drawdown | 0.8% | 1.42% |
| Parameter Count | 12 | 5 |
The "before" numbers look better on paper, but they're a fantasy. The "after" numbers are what actually works in production.
Key Takeaways
- Worse backtests often mean better live results — if your backtest looks too good, it probably is
- Simplicity beats complexity — fewer parameters = less room to overfit
- Test across regimes — bull, bear, sideways, high vol, low vol
- Use realistic costs — slippage, fees, and latency destroy paper edges
- Parameter sensitivity analysis — if small changes break your strategy, it's fragile
The goal isn't to find the perfect strategy. It's to find one that's good enough across all conditions.
Building a crypto trading bot with Freqtrade. Sharing what I learn along the way.