Over the past month, I've been running an experiment to develop and test various overnight algo trading strategies, with the goal of seeing which approach performs best on a daily basis. In this article, I'll share my findings from 30 days of backtesting, comparing different technical indicators (RSI vs MACD vs Confluence) and the lessons I learned about auditing backtests for bias.
Backtesting Results
Here are the results from each strategy:
- RSI-based strategy: +15.6% return on investment (ROI)
- MACD-based strategy: -2.1% ROI
- Confluence-based strategy: +10.3% ROI
These results give us a general idea of which strategies might be more viable in practice.
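For reference, ROI figures like these come from compounding per-trade returns into a total return on starting capital. A minimal sketch of that calculation (the trade returns below are made-up placeholders, not my actual fills):

```python
def roi_from_trade_returns(trade_returns):
    """Compound per-trade fractional returns into a total ROI."""
    equity = 1.0
    for r in trade_returns:
        equity *= 1.0 + r
    return equity - 1.0

# Hypothetical trades: fractional returns per round trip
trades = [0.04, -0.02, 0.06, 0.03]
print(f"ROI: {roi_from_trade_returns(trades):.1%}")
```

Compounding matters here: summing the returns instead of multiplying would slightly overstate the result whenever there are losing trades in the mix.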
Comparing Technical Indicators
One of the key takeaways from this experiment was how different technical indicators perform under various market conditions. For instance:
```python
import pandas as pd
from ta.volatility import BollingerBands

# Sample DataFrame with close prices -- enough rows to fill the rolling window
df = pd.DataFrame({
    'price': [100, 102, 101, 105, 103, 107, 106, 110, 108, 112]
})

# Calculate Bollinger Bands over the close prices
# (the window must not exceed the number of rows, or every value is NaN)
bb = BollingerBands(close=df['price'], window=5, window_dev=2)
print(bb.bollinger_hband())  # upper band: rolling mean + 2 standard deviations
```
This code snippet illustrates how we can use technical analysis libraries like ta to compare different indicators.
Backtesting Bias Audit
When backtesting any trading strategy, it's essential to perform a bias audit to identify potential issues. In this experiment, I discovered four sources of bias in my setup:
- Signal-at-close timing: my strategies generated signals from closing prices and assumed fills at that same close, which isn't achievable with real-time execution.
- Broken Monte Carlo shuffling (bars, not trades): the framework shuffled individual price bars instead of completed trades, so the resampled equity paths didn't represent realistic trading sequences.
- Survivorship bias: I only counted the performance of surviving trades, quietly dropping the ones that performed poorly.
- OOS parameter fitting: strategy parameters were tuned using the out-of-sample data, contaminating the very data meant to validate them.
To mitigate these biases, I implemented the following adjustments:
- Trigger signals on live intrabar updates instead of assuming fills at the close
- Shuffle completed trades, not individual bars, in the Monte Carlo runs
- Include every trade in the backtesting results, not just the survivors
- Keep parameter tuning strictly in-sample, using walk-forward optimization to avoid overfitting
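The trade-level Monte Carlo adjustment can be sketched as follows: resample the order of completed trade returns (never individual price bars) and look at the distribution of path statistics. The trade returns and function names below are hypothetical illustrations, not my actual framework:

```python
import random

def max_drawdown(returns):
    """Largest peak-to-trough equity decline for one ordering of trades."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return mdd

def monte_carlo_drawdowns(trade_returns, n_runs=1000, seed=42):
    """Shuffle the order of completed trades n_runs times, collect drawdowns."""
    rng = random.Random(seed)
    drawdowns = []
    for _ in range(n_runs):
        shuffled = list(trade_returns)
        rng.shuffle(shuffled)
        drawdowns.append(max_drawdown(shuffled))
    return drawdowns

# Hypothetical per-trade returns (fractions, not percent)
trades = [0.03, -0.01, 0.05, -0.02, 0.04, -0.015, 0.02, -0.03]
draws = sorted(monte_carlo_drawdowns(trades))
print(f"median max drawdown: {draws[len(draws) // 2]:.2%}")
print(f"95th percentile:     {draws[int(len(draws) * 0.95)]:.2%}")
```

Note that the final equity is identical for every shuffle (multiplication commutes), so what this resampling actually reveals is the distribution of path-dependent statistics like maximum drawdown; shuffling bars instead would also destroy the price series' autocorrelation, which is why it produces meaningless results.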
Conclusion
This experiment provided valuable insights into the effectiveness of different technical indicators and the importance of addressing bias when backtesting strategies. I'm excited to see how these findings can inform my future trading endeavors.