Stefano viana

How I Built a Profitable FreqAI Plugin Using Institutional ML Techniques

The Problem with Default FreqAI

Standard FreqAI trains a regressor on raw future returns. You predict "price goes up 0.5% in the next 4 hours" and threshold it into buy/sell signals.
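For concreteness, the default pipeline boils down to something like this simplified sketch (function names and the 0.5% threshold are illustrative, not FreqAI internals):

```python
# Naive fixed-horizon labeling: the label is the raw forward return,
# and any prediction above a threshold becomes a trade.
def fixed_horizon_label(closes, horizon=4):
    """Forward return over `horizon` bars for each bar (None near the end)."""
    labels = []
    for i in range(len(closes)):
        if i + horizon < len(closes):
            labels.append(closes[i + horizon] / closes[i] - 1.0)
        else:
            labels.append(None)
    return labels

def to_signal(predicted_return, threshold=0.005):
    """Every prediction above the threshold fires -- no quality filter."""
    return 1 if predicted_return > threshold else 0
```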

This approach has three fatal flaws:

1. Labels don't match real trades. A fixed-horizon return of +0.3% might have hit -2% stop-loss before recovering. You're training on labels that would have liquidated you.

2. Features are never pruned. You throw 200 indicators at LightGBM and hope for the best. Half of them are noise that causes overfitting.

3. No signal quality filter. Every prediction above threshold becomes a trade, even when the model is uncertain.


Fix #1: Triple Barrier Labeling (Regime-Aware)

Instead of "where is the price in 4 hours?", Triple Barrier asks: "which barrier gets hit first — profit target, stop loss, or time expiry?"

This directly aligns labels with actual trade outcomes.

I added a regime twist: the barrier sizes adapt based on the market environment (EMA24 vs EMA96):

  • Bull market: profit barrier = 1.2σ, stop = 2.0σ (favor longs, tolerate pullbacks)
  • Bear market: profit = 2.0σ, stop = 1.2σ (favor shorts, tolerate bounces)
  • Sideways: symmetric (1.5σ / 1.5σ)

This fixed the structural SHORT bias that naive Triple Barrier produces in bull markets.
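The labeling rule can be sketched in a few lines. This is a minimal illustration, not the plugin's actual implementation: `sigma` stands in for recent return volatility, the barrier multipliers are the ones listed above, and the regime would come from the EMA24/EMA96 comparison.

```python
def triple_barrier_label(closes, entry_idx, sigma, regime, max_hold=24):
    """Label one entry: +1 if the profit barrier is hit first, -1 if the
    stop is hit first, 0 if the time barrier expires. Barrier widths
    adapt to the market regime."""
    barriers = {            # (profit multiple, stop multiple) in units of sigma
        "bull":     (1.2, 2.0),   # favor longs, tolerate pullbacks
        "bear":     (2.0, 1.2),   # favor shorts, tolerate bounces
        "sideways": (1.5, 1.5),   # symmetric
    }
    pt_mult, sl_mult = barriers[regime]
    entry = closes[entry_idx]
    upper = entry * (1 + pt_mult * sigma)
    lower = entry * (1 - sl_mult * sigma)
    for price in closes[entry_idx + 1 : entry_idx + 1 + max_hold]:
        if price >= upper:
            return 1     # profit target hit first
        if price <= lower:
            return -1    # stop hit first
    return 0             # time barrier: neither hit within max_hold bars
```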


Fix #2: SHAP Feature Selection

After the first training pass, I compute SHAP values for every feature and keep only those contributing more than 1% of total importance.

Result: 200+ features → 30 that actually predict.

The SHAP selection refreshes every N trainings, so it adapts to regime drift. Features that mattered in the 2022 bear market might not matter in the 2024 bull — and that's fine.
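The pruning rule itself is simple once you have the SHAP matrix. In practice the matrix would come from something like `shap.TreeExplainer(model).shap_values(X)` after the first training pass; the sketch below just shows the 1%-of-total-importance cutoff on a precomputed array:

```python
import numpy as np

def select_features(shap_values, feature_names, min_share=0.01):
    """Keep features whose mean |SHAP| exceeds `min_share` of the total.
    `shap_values` is an (n_samples, n_features) array of per-sample
    SHAP attributions for each feature."""
    importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
    share = importance / importance.sum()           # normalize to fractions
    return [name for name, s in zip(feature_names, share) if s > min_share]
```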


Fix #3: Meta-Labeling

A second LightGBM model learns when the primary model is likely right. It's trained on a binary target: did the primary model's prediction match the actual outcome?

Trades only fire when meta-confidence exceeds a threshold. This dramatically improves precision at the cost of lower recall — exactly what live trading demands.
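A rough sketch of the two pieces: building the binary meta-target from realized triple-barrier outcomes, and gating trades on the meta-model's confidence. Fitting the second LightGBM classifier on that target is elided; function names and the 0.6 threshold are illustrative.

```python
import numpy as np

def build_meta_labels(primary_preds, realized_labels):
    """Binary target for the meta-model: 1 where the primary model's
    directional call matched the realized outcome, else 0."""
    return (np.sign(primary_preds) == np.sign(realized_labels)).astype(int)

def gate_trades(primary_preds, meta_confidence, threshold=0.6):
    """Pass a primary signal through only when the meta-model's estimated
    probability of 'primary is right' clears the threshold."""
    return np.where(meta_confidence >= threshold, np.sign(primary_preds), 0.0)
```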


Fix #4: Purged Walk-Forward CV

Standard k-fold CV for time series is broken. Data from the future leaks into training folds through temporal proximity.

Purged walk-forward CV inserts a gap (purge window) between training and test sets, plus an embargo period. It kills the most insidious lookahead bias.
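A simplified split generator shows the idea (window sizes in bars; the plugin's actual fold logic may differ, and a production version would also purge samples whose triple-barrier labels overlap the boundary):

```python
def purged_walk_forward(n, train_size, test_size, purge=24, embargo=24):
    """Yield (train_indices, test_indices) pairs walking forward in time,
    with a purge gap between each training window and its test window,
    and an embargo before the next training window begins."""
    start = 0
    while start + train_size + purge + test_size <= n:
        train_end = start + train_size
        test_start = train_end + purge   # gap: labels here could leak
        test_end = test_start + test_size
        yield list(range(start, train_end)), list(range(test_start, test_end))
        start = test_end + embargo       # step past the embargo period
```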

My model went from "75% accuracy in CV, 52% live" to "63% in CV, 63% live." Honest numbers that hold up.


Fix #5: Sequential Memory Features

LightGBM sees each bar independently — it has no concept of "what happened in the last 24 hours." I added rolling statistics that give it temporal context:

  • Log-return mean/std/skew over 4/12/24/48 bars
  • Return z-score (how extreme is this move vs recent history?)
  • Volatility-of-volatility (regime change signal)
  • Close and volume lags at 1/3/6/12 bars

This captures ~70% of what an LSTM would learn, at 10% of the complexity.
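The feature list above maps to a short pandas transform. This is a sketch under the assumption of a DataFrame with `close` and `volume` columns; column names are illustrative, window sizes are the ones from the list:

```python
import numpy as np
import pandas as pd

def add_memory_features(df):
    """Rolling statistics that give a tree model temporal context."""
    out = df.copy()
    logret = np.log(out["close"]).diff()
    for w in (4, 12, 24, 48):
        out[f"ret_mean_{w}"] = logret.rolling(w).mean()
        out[f"ret_std_{w}"] = logret.rolling(w).std()
        out[f"ret_skew_{w}"] = logret.rolling(w).skew()
    # How extreme is the current move vs recent history?
    roll = logret.rolling(24)
    out["ret_zscore"] = (logret - roll.mean()) / roll.std()
    # Volatility-of-volatility: a regime-change signal
    out["vol_of_vol"] = logret.rolling(24).std().rolling(24).std()
    for lag in (1, 3, 6, 12):
        out[f"close_lag_{lag}"] = out["close"].shift(lag)
        out[f"volume_lag_{lag}"] = out["volume"].shift(lag)
    return out
```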


Fix #6: Dynamic Position Sizing

Not all signals are equal. A 78% confidence prediction should get more capital than a 55% one.

I implemented a custom_stake_amount() callback that scales the stake linearly with the winning-class probability:

  • P=0.55 → 0.5x default stake
  • P=0.80 → 1.5x default stake

This concentrates risk on high-confidence signals.
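The scaling itself is a clamped linear interpolation between the two anchor points above. Inside a Freqtrade strategy this multiplier would be applied to the proposed stake in the `custom_stake_amount()` callback; the standalone helper below is a sketch with illustrative defaults:

```python
def stake_multiplier(win_prob, p_lo=0.55, p_hi=0.80, m_lo=0.5, m_hi=1.5):
    """Scale the default stake linearly with winning-class probability:
    P=0.55 -> 0.5x, P=0.80 -> 1.5x, clamped outside that range."""
    if win_prob <= p_lo:
        return m_lo
    if win_prob >= p_hi:
        return m_hi
    return m_lo + (m_hi - m_lo) * (win_prob - p_lo) / (p_hi - p_lo)
```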


Honest Backtest Results

Tested on BTC/ETH/SOL/BNB/XRP, Binance Futures, 1h bars, 90-day rolling training windows:

| Market Regime | Period | Market Change | Bot Profit | Win Rate | Max DD | Sharpe |
|---------------|--------|---------------|------------|----------|--------|--------|
| Bull | Q1 2024 | +67.66% | +6.93% | 62.5% | 2.37% | 5.98 |
| Bear (LUNA) | May-Aug 2022 | -33.60% | +0.41% | 58.0% | 7.74% | 0.21 |

These are not cherry-picked. The bear market result is modest, but surviving a -34% crash with positive PnL is the whole point.

What these numbers do NOT guarantee: future performance. Do your own backtesting.


How to Use It

pip install deepalpha-freqai

In your Freqtrade config:

{
  "timeframe": "1h",
  "freqai": {
    "enabled": true,
    "model_type": "DeepAlphaModel",
    "train_period_days": 90,
    "backtest_period_days": 21
  }
}

Copy the example strategy from the repo and run:

freqtrade backtesting --strategy DeepAlphaStrategy \
  --freqaimodel DeepAlphaModel \
  --timerange 20240125-20240325

The plugin handles labels, feature engineering, meta-filtering, and position sizing for you.


What I Learned

  1. Labeling matters more than the model. Switching from fixed-horizon to Triple Barrier was the single biggest improvement.

  2. Fewer features = better features. SHAP pruning from 200 to 30 features improved out-of-sample accuracy by 3%.

  3. Honest CV is non-negotiable. If your backtest accuracy is 20+ points above your live accuracy, your CV is lying.

  4. 1h timeframe > 5m. Intraday noise on 5m consistently triggered stops before signals played out. 1h gives Triple Barrier room to breathe.

  5. The edge is in risk management, not prediction. A 63% model with proper sizing and meta-filtering beats a 70% model that trades everything.


The package is MIT licensed and free forever. If you find bugs or want to contribute, the Discord is open.

Disclaimer: past performance is not future performance. This is not financial advice. Only trade with money you can afford to lose.


If you found this useful, a star on the repo or a pip install helps more than you think.
