DEV Community

Ray

I ran a Python paper trading bot for 6 weeks — here is what the data showed

As part of my ongoing efforts to improve TradeSight (a Python-based paper trading bot), I recently completed a 6-week experiment. During that period, the bot continuously executed five strategies against a virtual portfolio, surfacing valuable insights into its strengths and weaknesses.

In this article, we'll take a closer look at some key findings from my experiment. We'll cover win rates by strategy, max drawdown events, which strategies survived vs failed, and surprising patterns in the data that caught my attention.

A Quick Background on TradeSight

For those unfamiliar with TradeSight, it's an open-source Python library for backtesting and paper trading strategies (find it on GitHub). The project offers a simple way to define trading strategies and execute them against historical or live market data.

Experimental Setup

During the experiment, I utilized five distinct trading strategies:

  1. Trend Following
  2. Mean Reversion
  3. Range Breakout
  4. Momentum Trading
  5. Statistical Arbitrage

Each strategy ran against its own virtual portfolio of $10,000, with no leverage (1x exposure) applied throughout.
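To make the setup concrete, here is a minimal sketch of how I think about the configuration. Note that `PortfolioConfig` and `STRATEGIES` are illustrative names I'm using here, not TradeSight's actual API:

```python
# Hypothetical sketch of the experimental setup -- TradeSight's real API may differ.
from dataclasses import dataclass

@dataclass
class PortfolioConfig:
    starting_cash: float = 10_000.0   # virtual portfolio size used in the experiment
    leverage: float = 1.0             # 1x exposure, no margin

STRATEGIES = [
    "Trend Following",
    "Mean Reversion",
    "Range Breakout",
    "Momentum Trading",
    "Statistical Arbitrage",
]

config = PortfolioConfig()
print(f"Running {len(STRATEGIES)} strategies on ${config.starting_cash:,.0f}")
```

Each strategy gets an isolated portfolio so that a blow-up in one cannot distort the results of another.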

Key Findings

Win Rates by Strategy

import pandas as pd

# Load the data (simplified for this example)
data = {
    'strategy': ['Trend Following', 'Mean Reversion', 'Range Breakout',
                 'Momentum Trading', 'Statistical Arbitrage'],
    'win_rate': [0.62, 0.48, 0.55, 0.59, 0.42]
}

df = pd.DataFrame(data)

print(df)

The results showed that Trend Following emerged as the strongest strategy, with a win rate of approximately 62%. In contrast, Statistical Arbitrage performed poorly, with only a 42% win rate.
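For anyone replicating this, the win rate itself is simple to derive from a trade log. This is a generic sketch (the trade P&L values below are made up for illustration):

```python
# Illustrative only: compute a win rate from a list of per-trade P&L values.
def win_rate(pnls):
    """Fraction of trades that closed with positive P&L."""
    if not pnls:
        return 0.0
    wins = sum(1 for p in pnls if p > 0)
    return wins / len(pnls)

trades = [120.5, -80.0, 45.2, -10.0, 200.0]  # made-up P&L values
print(f"win rate: {win_rate(trades):.0%}")   # 3 of 5 winners, so 60%
```

One caveat worth keeping in mind: win rate alone says nothing about payoff size, which is why drawdowns get their own section below.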

Max Drawdown Events

import matplotlib.pyplot as plt

# Max drawdown per strategy, in the same order as the win-rate table above
strategies = ['Trend Following', 'Mean Reversion', 'Range Breakout',
              'Momentum Trading', 'Statistical Arbitrage']
max_drawdowns = [0.15, 0.25, 0.30, 0.10, 0.40]

plt.bar(strategies, [d * 100 for d in max_drawdowns])
plt.ylabel('Max Drawdown (%)')
plt.title('Max Drawdown Events')
plt.xticks(rotation=30, ha='right')
plt.tight_layout()
plt.show()

Notably, even Trend Following, the top performer by win rate, experienced a max drawdown of approximately 15%. That's low relative to the other strategies, but a reminder that no strategy escaped meaningful losses.
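Max drawdown here means the largest peak-to-trough decline of the equity curve. A minimal way to compute it (the equity values below are made up for illustration):

```python
# Illustrative: max drawdown as the largest peak-to-trough decline,
# expressed as a fraction of the running peak.
def max_drawdown(equity):
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

curve = [10_000, 10_500, 9_800, 10_200, 8_900, 9_500]  # made-up equity values
print(f"max drawdown: {max_drawdown(curve):.1%}")      # roughly 15%
```

Tracking the running peak rather than the starting balance matters: a strategy can end the period profitable overall and still have suffered a deep interim drawdown.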

Which Strategies Survived vs Failed?

Only two strategies, Trend Following and Range Breakout, managed to sustain their performance throughout the experiment; Mean Reversion, Momentum Trading, and Statistical Arbitrage did not.
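One possible (hypothetical) way to formalize "survived": halt a strategy once its drawdown breaches a hard stop. The 35% threshold below is an illustrative choice, not the rule TradeSight itself applies:

```python
# Hypothetical survival rule: a strategy "fails" once its drawdown
# from the running peak reaches the stop threshold.
def survives(equity, stop_drawdown=0.35):
    peak = equity[0]
    for value in equity:
        peak = max(peak, value)
        if (peak - value) / peak >= stop_drawdown:
            return False
    return True

print(survives([10_000, 9_000, 8_000]))    # 20% drawdown: survives
print(survives([10_000, 11_000, 7_000]))   # ~36% drawdown: halted
```

An explicit kill criterion like this keeps the survived-vs-failed call objective instead of a judgment made after the fact.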

Surprising Patterns in the Data

One unexpected finding was the apparent correlation between Statistical Arbitrage's poor performance and an anomalous market event, which led to a significant loss for this strategy. Further investigation revealed that the anomaly stemmed from an unaccounted-for market factor, which could be mitigated with more advanced data preprocessing techniques.
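As a sketch of the kind of preprocessing that could catch such anomalies, here is a robust outlier flag based on median absolute deviation (MAD), which, unlike a plain z-score, is not inflated by the outlier itself. The return series below is made up for illustration:

```python
# Sketch: flag returns far from the median, scaled by the median
# absolute deviation (robust to the outlier it is trying to detect).
import statistics

def flag_outliers(returns, threshold=3.0):
    med = statistics.median(returns)
    mad = statistics.median(abs(r - med) for r in returns)
    return [i for i, r in enumerate(returns)
            if abs(r - med) > threshold * mad]

returns = [0.01, -0.02, 0.015, -0.01, 0.30, 0.005]  # one made-up spike
print(flag_outliers(returns))                        # flags the 0.30 spike
```

Flagged bars could then be excluded from signal generation, or at least trigger a pause, before a strategy like Statistical Arbitrage trades into them.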

Conclusion

This retrospective on TradeSight highlights the importance of thorough experimentation and analysis in trading bot development. It underscores the value of identifying both strengths and weaknesses within a system and using real-world data to inform strategic decisions.

By leveraging insights from this experiment, I hope to improve TradeSight's overall performance and help fellow Python developers achieve more reliable results with their own trading bots.

If you're interested in learning more about TradeSight or replicating these findings for your own projects, be sure to check out the TradeSight Quick Start Guide on Gumroad. You can also explore the project's GitHub repository and contribute to its growth.


