I Ran 10 Trading Strategies for 30 Days — Here's What Actually Worked
I built a paper trading framework, ran 10 different algorithmic strategies through it for a full month, and the results were... humbling.
Some strategies I expected to crush it. Most didn't. One I almost skipped ended up being the standout.
Here's the unfiltered breakdown.
The Setup
I built TradeSight (https://github.com/rmbell09-lang/tradesight) specifically for this kind of systematic testing. It's a Python paper trading framework that connects to Alpaca's paper API, runs strategies against live market data without real money, and tracks every metric I care about: win rate, max drawdown, Sharpe ratio, average trade duration.
Test period: 30 days. Universe: S&P 500 components. Capital: $100k paper.
The Strategies
I ran 10 in total, but I'll focus on the ones with interesting results.
The Winners
Bollinger Band Mean Reversion
- Win rate: 61%
- Max drawdown: 8.2%
- Sharpe: 1.34
This was the surprise. Mean reversion on the 2-hour chart, buying oversold bounces at the lower band with a tight stop. The key was filtering to stocks with high relative volume — it kept me out of the low-liquidity traps that kill this strategy.
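As a rough sketch of that entry filter — the column names, 20-bar window, and 1.5x relative-volume threshold are my assumptions for illustration, not TradeSight's actual code:

```python
import pandas as pd

def bollinger_entry(df: pd.DataFrame, window: int = 20, num_std: float = 2.0,
                    min_rel_volume: float = 1.5) -> bool:
    """True if the latest bar closes below the lower Bollinger band
    on above-average relative volume (df has 'close' and 'volume' columns)."""
    mid = df["close"].rolling(window).mean()
    std = df["close"].rolling(window).std()
    lower = mid - num_std * std
    # relative volume: latest bar's volume vs. its rolling average
    rel_volume = df["volume"].iloc[-1] / df["volume"].rolling(window).mean().iloc[-1]
    return bool(df["close"].iloc[-1] < lower.iloc[-1] and rel_volume >= min_rel_volume)
```

The relative-volume condition is the piece doing the liquidity filtering described above; dropping it turns this back into the naive band-touch strategy.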
RSI + MACD Confluence
- Win rate: 58%
- Max drawdown: 11.3%
- Sharpe: 1.12
Classic setup but it works. I only take trades when RSI is below 35 and MACD is crossing bullish. The confluence filter cut trade frequency by 60% but also cut losers proportionally more. Quality over quantity.
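A minimal sketch of what that confluence check can look like, using the textbook RSI and MACD formulas. The RSI < 35 threshold mirrors the rule above; the MACD spans (12/26/9) and the bar data layout are standard defaults, not necessarily what my configs use:

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Simple-moving-average RSI."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)

def macd_bullish_cross(close: pd.Series) -> bool:
    """True when the MACD line crosses above its signal line on the latest bar."""
    fast = close.ewm(span=12, adjust=False).mean()
    slow = close.ewm(span=26, adjust=False).mean()
    macd = fast - slow
    signal = macd.ewm(span=9, adjust=False).mean()
    # below the signal on the prior bar, above it now
    return bool(macd.iloc[-2] < signal.iloc[-2] and macd.iloc[-1] > signal.iloc[-1])

def confluence_entry(close: pd.Series) -> bool:
    return bool(rsi(close).iloc[-1] < 35 and macd_bullish_cross(close))
```

Requiring both conditions simultaneously is what cuts trade frequency so sharply: each filter alone fires often, their intersection rarely.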
The Middle of the Pack
Simple momentum (20-day breakout): 52% win rate, Sharpe 0.71. Works in trending markets, gets destroyed in chop. March was choppy. Neutral verdict.
VWAP reversion: 54% win rate, but the average winner was smaller than the average loser. Positive expectancy only in high-volatility environments.
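For reference, intraday VWAP is just cumulative typical-price dollars divided by cumulative volume. A minimal sketch (column names are my assumption):

```python
import pandas as pd

def vwap(df: pd.DataFrame) -> pd.Series:
    """Running intraday VWAP for a single session.
    Expects 'high', 'low', 'close', 'volume' columns."""
    typical = (df["high"] + df["low"] + df["close"]) / 3
    return (typical * df["volume"]).cumsum() / df["volume"].cumsum()
```

A reversion strategy on top of this shorts stretches far above the line and buys stretches far below it, betting on a return to the session's volume-weighted average.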
The Failures
RSI divergence: 44% win rate, -14.7% max drawdown. I thought spotting divergences would be a real edge. It wasn't. Divergences resolve in the wrong direction more often than you'd expect in a momentum-heavy tape.
Earnings momentum: I had high hopes. Buy stocks gapping up post-earnings, hold for continuation. Reality: 41% win rate, brutal drawdown when gaps filled. The few big winners didn't overcome the frequent reversals.
Pure momentum (highest RS stocks, long-only): 48% win rate but the distribution was brutal — lots of small losses, occasional big winners. Psychologically hard to stick with even in a paper account.
The Metrics That Actually Matter
After 30 days, I stopped caring about win rate as a primary metric. Here's what I track now:
Expectancy = (Win Rate × Avg Win) - (Loss Rate × Avg Loss)
A 45% win rate strategy can outperform a 60% win rate strategy if the winners are 3x the size of losers. The confluence RSI+MACD strategy had a 2.4:1 reward-risk ratio. The divergence strategy had 0.8:1. That explains everything.
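The formula is trivial to compute, and running illustrative numbers (these are made up to show the effect, not my exact trade stats) makes the win-rate trap concrete:

```python
def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    """Expected profit per trade; avg_loss is given as a positive number."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# A 45% win rate with 3:1 winners beats a 60% win rate with 0.8:1 winners:
low_wr = expectancy(0.45, 300, 100)   # per-trade expectancy: $80
high_wr = expectancy(0.60, 80, 100)   # per-trade expectancy: $8
```

Ten times the expectancy per trade, despite the "worse" win rate.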
Max Drawdown as % of initial capital: I set a kill switch at 15%. Two strategies hit it and got shut down automatically. This feature alone made TradeSight worth building — letting losing strategies run is how you blow up accounts.
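The behavior is simple enough to sketch in a few lines — this is the shape of the idea, not TradeSight's actual implementation:

```python
class DrawdownKillSwitch:
    """Permanently disable a strategy once equity falls a set fraction
    below initial capital. Once tripped, it stays tripped."""

    def __init__(self, initial_capital: float, max_drawdown: float = 0.15):
        self.initial = initial_capital
        self.max_drawdown = max_drawdown
        self.tripped = False

    def check(self, equity: float) -> bool:
        """Return True if the strategy should be shut down."""
        drawdown = (self.initial - equity) / self.initial
        if drawdown >= self.max_drawdown:
            self.tripped = True
        return self.tripped
```

The one-way latch matters: a strategy that recovers after hitting the limit doesn't get turned back on, because the point is to stop trusting it, not to time its comeback.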
Sharpe ratio across the test period: anything above 1.0 I consider worth continuing to live paper test. Below 0.5? Shelved.
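For completeness, a bare-bones annualized Sharpe from daily returns, assuming a zero risk-free rate (a simplification — my actual numbers may account for it differently):

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio, risk-free rate assumed to be zero."""
    r = np.asarray(daily_returns, dtype=float)
    return float(np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1))
```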
What I'm Testing Next
The Bollinger mean reversion strategy is going into extended testing. I want to see if it holds up across different market regimes — specifically a trending environment vs. the sideways chop of the past month.
I'm also building a sector rotation overlay. The hypothesis: most of these strategies perform better when you're only playing the 2-3 sectors with the strongest relative strength. More filtering, fewer but higher-quality setups.
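The core of that overlay is just a ranking step — score each sector by trailing return and keep the top few. A sketch of the idea (the function name and input shape are mine, and the overlay itself is still unbuilt):

```python
def top_sectors(returns_by_sector: dict[str, float], keep: int = 3) -> list[str]:
    """Return the `keep` sectors with the strongest trailing return."""
    ranked = sorted(returns_by_sector, key=returns_by_sector.get, reverse=True)
    return ranked[:keep]
```

Every strategy would then skip any symbol whose sector isn't in that list, which is exactly the "fewer but higher-quality setups" filter described above.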
Try It Yourself
TradeSight is open source and the setup is straightforward:

```shell
git clone https://github.com/rmbell09-lang/tradesight
cd tradesight
pip install -r requirements.txt
cp .env.example .env
# Add your Alpaca paper trading API keys
python main.py --strategy bollinger_reversion
```
The paper trading account is free through Alpaca. You can run this on a $5 VPS and leave it running indefinitely with no financial risk.
The strategies I tested are in /strategies/. They're modular — you can copy one, modify the entry/exit logic, and run it as a new strategy without touching the core engine.
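To give a feel for that modularity, here is a hypothetical strategy module — the base class, hook names, and bar format in TradeSight's actual /strategies/ directory may well differ, so treat this purely as the shape of the idea:

```python
import statistics

class MyReversion:
    """Hypothetical strategy module: buy deep dips below a rolling mean,
    sell once price reverts back above it."""

    name = "my_reversion"
    window = 20

    def should_enter(self, closes: list[float]) -> bool:
        # enter when the last close is more than 2 std devs below the rolling mean
        recent = closes[-self.window:]
        mean, std = statistics.mean(recent), statistics.stdev(recent)
        return closes[-1] < mean - 2 * std

    def should_exit(self, closes: list[float]) -> bool:
        # exit when price reverts back above the rolling mean
        return closes[-1] > statistics.mean(closes[-self.window:])
```

The appeal of this layout is that entry/exit logic is all you write; order routing, position tracking, and metrics stay in the engine.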
30 days of data isn't a definitive sample, but it's enough to kill the bad ideas before they cost real money. That's the whole point.
Try TradeSight on GitHub: https://github.com/rmbell09-lang/tradesight
If you've run systematic strategy tests, I'd love to hear what you've found. Comment below.